Air Mozilla: The Joy of Coding - Episode 142

The Joy of Coding - Episode 142: mconley livehacks on real Firefox bugs while thinking aloud.

Air Mozilla: Weekly SUMO Community Meeting, 20 Jun 2018

Weekly SUMO Community Meeting: This is the SUMO weekly call

Botond Ballo: Trip Report: C++ Standards Meeting in Rapperswil, June 2018

Summary / TL;DR

Project | What’s in it? | Status
------- | ------------- | ------
C++17 | See list | Published!
C++20 | See below | On track
Library Fundamentals TS v2 | Source code information capture and various utilities | Published! Parts of it merged into C++17
Concepts TS | Constrained templates | Merged into C++20 with some modifications
Parallelism TS v2 | Task blocks, library vector types and algorithms, and more | Approved for publication!
Transactional Memory TS | Transaction support | Published! Not headed towards C++20
Concurrency TS v1 | future.then(), latches and barriers, atomic smart pointers | Published! Parts of it merged into C++20, more on the way
Executors | Abstraction for where/how code runs in a concurrent context | Final design being hashed out. Ship vehicle not decided yet.
Concurrency TS v2 | See below | Under development. Depends on Executors.
Networking TS | Sockets library based on Boost.ASIO | Published!
Ranges TS | Range-based algorithms and views | Published! Headed towards C++20
Coroutines TS | Resumable functions, based on Microsoft’s await design | Published! C++20 merge uncertain
Modules v1 | A component system to supersede the textual header file inclusion model | Published as a TS
Modules v2 | Improvements to Modules v1, including a better transition path | Under active development
Numerics TS | Various numerical facilities | Under active development
Graphics TS | 2D drawing API | No consensus to move forward
Reflection TS | Static code reflection mechanisms | Sent out for PDTS ballot
Contracts | Preconditions, postconditions, and assertions | Merged into C++20

A few links in this blog post may not resolve until the committee’s post-meeting mailing is published (expected within a few days of June 25, 2018). If you encounter such a link, please check back in a few days.

Introduction

A couple of weeks ago I attended a meeting of the ISO C++ Standards Committee (also known as WG21) in Rapperswil, Switzerland. This was the second committee meeting in 2018; you can find my reports on preceding meetings here (March 2018, Jacksonville) and here (November 2017, Albuquerque), and earlier ones linked from those. These reports, particularly the Jacksonville one, provide useful context for this post.

At this meeting, the committee was focused full-steam on C++20, including advancing several significant features — such as Ranges, Modules, Coroutines, and Executors — for possible inclusion in C++20, with a secondary focus on in-flight Technical Specifications such as the Parallelism TS v2, and the Reflection TS.

C++20

C++20 continues to be under active development. A number of new changes have been voted into its Working Draft at this meeting, which I list here. For a list of changes voted in at previous meetings, see my Jacksonville report.

Technical Specifications

In addition to the C++ International Standard (IS), the committee publishes Technical Specifications (TS) which can be thought of as experimental “feature branches”, where provisional specifications for new language or library features are published and the C++ community is invited to try them out and provide feedback before final standardization.

At this meeting, the committee voted to publish the second version of the Parallelism TS, and to send out the Reflection TS for its PDTS (“Proposed Draft TS”) ballot. Several other TSes remain under development.

Parallelism TS v2

The Parallelism TS v2 was sent out for its PDTS ballot at the last meeting. As described in previous reports, this is a process where a draft specification is circulated to national standards bodies, who have an opportunity to provide feedback on it. The committee can then make revisions based on the feedback, prior to final publication.

The results of the PDTS ballot had arrived just in time for the beginning of this meeting, and the relevant subgroups (primarily the Concurrency Study Group) worked diligently during the meeting to go through the comments and address them. This led to the adoption of several changes into the TS working draft:

The working draft, as modified by these changes, was then approved for publication!

Reflection TS

The Reflection TS, based on the reflexpr static reflection proposal, picked up one new feature, static reflection of functions, and was subsequently sent out for its PDTS ballot! I’m quite excited to see efficient progress on this (in my opinion) very important feature.

Meanwhile, the committee has also been planning ahead for the next generation of reflection and metaprogramming facilities for C++, which will be based on value-based constexpr programming rather than template metaprogramming, allowing users to reap expressiveness and compile-time performance gains. In the list of proposals reviewed by the Evolution Working Group (EWG) below, you’ll see quite a few of them are extensions related to constexpr; that’s largely motivated by this direction.

Concurrency TS v2

The Concurrency TS v2 (no working draft yet), whose notable contents include revamped versions of async() and future::then(), among other things, continues to be blocked on Executors. Efforts at this meeting focused on moving Executors forward.

Library Fundamentals TS v3

The Library Fundamentals TS v3 is now “open for business” (has an initial working draft based on the portions of v2 that have not been merged into the IS yet), but no new proposals have been merged to it yet. I expect that to start happening in the coming meetings, as proposals targeting it progress through the Library groups.

Future Technical Specifications

There are (or were, in the case of the Graphics TS) some planned future Technical Specifications that don’t have an official project or working draft at this point:

Graphics

At the last meeting, the Graphics TS, set to contain 2D graphics primitives with an interface inspired by cairo, ran into some controversy. A number of people started to become convinced that, since this was something that professional graphics programmers / game developers were unlikely to use, the large amount of time that a detailed wording review would require was not a good use of committee time.

As a result of these concerns, an evening session was held at this meeting to decide the future of the proposal. A paper arguing we should stay the course was presented, as was an alternative proposal for a much lighter-weight “diet” graphics library. After extensive discussion, however, neither the current proposal nor the alternative had consensus to move forward.

As a result – while nothing is ever set in stone and the committee can always change its mind – the Graphics TS is abandoned for the time being.

(That said, I’ve heard rumours that the folks working on the proposal and its reference implementation plan to continue working on it all the same, just not with standardization as the end goal. Rather, they might continue iterating on the library with the goal of distributing it as a third-party library/package of some sort (possibly tying into the committee’s exploration of improving C++’s package management ecosystem).)

Executors

SG 1 (the Concurrency Study Group) achieved design consensus on a unified executors proposal (see the proposal and accompanying design paper) at the last meeting.

At this meeting, another executors proposal was brought forward, and SG 1 has been trying to reconcile it with / absorb it into the unified proposal.

As executors are blocking a number of dependent items, including the Concurrency TS v2 and merging the Networking TS, SG 1 hopes to move them forward as soon as possible. Some members remain hopeful that it can be merged into C++20 directly, but going with the backup plan of publishing it as a TS is also a possibility (which is why I’m listing it here).

Merging Technical Specifications into C++20

Turning now to Technical Specifications that have already been published, but not yet merged into the IS, the C++ community is eager to see some of these merge into C++20, thereby officially standardizing the features they contain.

Ranges TS

The Ranges TS, which modernizes and Conceptifies significant parts of the standard library (the parts related to algorithms and iterators), has been making really good progress towards merging into C++20.

The first part of the TS, containing foundational Concepts that a large spectrum of future library proposals may want to make use of, has just been merged into the C++20 working draft at this meeting. The second part, the range-based algorithms and utilities themselves, is well on its way: the Library Evolution Working Group has finished ironing out how the range-based facilities will integrate with the existing facilities in the standard library, and forwarded the revised merge proposal for wording review.

Coroutines TS

The Coroutines TS was proposed for merger into C++20 at the last meeting, but ran into pushback from adopters who tried it out and had several concerns with it (which were subsequently responded to, with additional follow-up regarding optimization possibilities).

Said adopters were invited to bring forward a proposal for an alternative / modified design that addressed their concerns, no later than this meeting, and so they did; their proposal is called Core Coroutines.

Core Coroutines was reviewed by the Evolution Working Group (I summarize the technical discussion below), which encouraged further iteration on this design, but also felt that such iteration should not hold up the proposal to merge the Coroutines TS into C++20. (What’s the point in iterating on one design if another is being merged into the IS draft, you ask? I believe the thinking was that further exploration of the Core Coroutines design could inspire some modifications to the Coroutines TS that could be merged at a later meeting, still before C++20’s publication.)

As a result, the merge of the Coroutines TS came to a plenary vote at the end of the week. However, it did not garner consensus; a significant minority of the committee at large felt that the Core Coroutines design deserved more exploration before enshrining the TS design into the standard. (At least, I assume that was the rationale of those voting against. Regrettably, due to procedural changes, there is very little discussion before plenary votes these days to shed light on why people have the positions they do.)

The window for merging a TS into C++20 remains open for approximately one more meeting. I expect the proponents of the Coroutines TS will try the merge again at the next meeting, while the authors of Core Coroutines will refine their design further. Hopefully, the additional time and refinement will allow us to make a better-informed final decision.

Networking TS

The Networking TS is in a situation where the technical content of the TS itself is in a fairly good shape and ripe for merging into the IS, but its dependency on Executors makes a merger in the C++20 timeframe uncertain.

Ideas have been floated around of coming up with a subset of Executors that would be sufficient for the Networking TS to be based on, and that could get agreement in time for C++20. Multiple proposals on this front are expected at the next meeting.

Modules

Modules is one of the most-anticipated new features in C++. While the Modules TS was published fairly recently, and thus merging it into C++20 is a rather ambitious timeline (especially since there are design changes relative to the TS that we know we want to make), there is a fairly widespread desire to get it into C++20 nonetheless.

I described in my last report that there was a potential path forward to accomplishing this, which involved merging a subset of a revised Modules design into C++20, with the rest of the revised design to follow (likely in the form of a Modules TS v2, and a subsequent merge into C++23).

The challenge with this plan is that we haven’t fully worked out the revised design yet, never mind agreed on a subset of it that’s safe for merging into C++20. (By safe I mean forwards-compatible with the complete design, since we don’t want breaking changes to a feature we put into the IS.)

There was extensive discussion of Modules in the Evolution Working Group, which I summarize below. The procedural outcome was that there was no consensus to move forward with the “subset” plan, but we are moving forward with the revised design at full speed, and some remain hopeful that the entire revised design (or perhaps a larger subset) can still be merged into C++20.

What’s happening with Concepts?

The Concepts TS was merged into the C++20 working draft previously, but excluding certain controversial parts (notably, abbreviated function templates (AFTs)).

As AFTs remain quite popular, the committee has been trying to find an alternative design for them that could get consensus for C++20. Several proposals were heard by EWG at the last meeting, and some refined ones at this meeting. I summarize their discussion below, but in brief, while there is general support for two possible approaches, there still isn’t final agreement on one direction.

The Role of Technical Specifications

We are now about 6 years into the committee’s procedural experiment of using Technical Specifications as a vehicle for gathering feedback based on implementation and use experience prior to standardization of significant features. Opinions differ on how successful this experiment has been so far, with some lauding the TS process as leading to higher-quality, better-baked features, while others feel the process has in some cases just added unnecessary delays.

The committee has recently formed a Direction Group, a small group composed of five senior committee members with extensive experience, which advises the Working Group chairs and the Convenor on matters related to priority and direction. One of the topics the Direction Group has been tasked with giving feedback on is the TS process, and there was an evening session at this meeting to relay and discuss this advice.

The Direction Group’s main piece of advice was that while the TS process is still appropriate for sufficiently large features, it’s not to be embarked on lightly; in each case, a specific set of topics / questions on which the committee would like feedback should be articulated, and success criteria for a TS “graduating” and being merged into the IS should be clearly specified at the outset.

Evolution Working Group

I’ll now write in a bit more detail about the technical discussions that took place in the Evolution Working Group, the subgroup that I sat in for the duration of the week.

Unless otherwise indicated, proposals discussed here are targeting C++20. I’ve categorized them into the usual “accepted”, “further work encouraged”, and “rejected” categories:

Accepted proposals:

  • Standard library compatibility promises. EWG looked at this at the last meeting, and asked that it be revised to only list the types of changes the standard library reserves the right to make; a second list, of code patterns that should be avoided if you want a guarantee of future library updates not breaking your code, was to be removed as it follows from the first list. The revised version was approved and will be published as a Standing Document (pending a plenary vote).
  • A couple of minor tweaks to the contracts proposal:
    • In response to implementer feedback, the always checking level was removed, and the source location reported for precondition violations was made implementation-defined (previously, it had to be a source location in the function’s caller).
    • Virtual functions currently require that overrides repeat the base function’s pre- and postconditions. We can run into trouble in cases where the base function’s pre- or postcondition, interpreted in the context of the derived class, has a different meaning (e.g. because the derived class shadows a base member’s name, or due to covariant return types). Such cases were made undefined behaviour, with the understanding that this is a placeholder for a more principled solution to come at a future meeting.
  • try/catch blocks in constexpr functions. Throwing an exception is still not allowed during constant evaluation, but the try/catch construct itself can be present as long as only the non-throwing codepaths are exercised at compile time (see the sketch after this list).
  • More constexpr containers. EWG previously approved basic support for using dynamic allocation during constant evaluation, with the intention of allowing containers like std::vector to be used in a constexpr context (which is now happening). This is an extension to that, which allows storage that was dynamically allocated at compile time to survive to runtime, in the form of a static (or automatic) storage duration variable.
  • Allowing virtual destructors to be “trivial”. This lifts an unnecessary restriction that prevented some commonly used types like std::error_code from being used at compile time.
  • Immediate functions. These are a stronger form of constexpr functions, spelt constexpr!, which not only can run at compile time, but have to. This is motivated by several use cases, one of them being value-based reflection, where you need to be able to write functions that manipulate information that only exists at compile-time (like handles to compiler data structures used to implement reflection primitives).
  • std::is_constant_evaluated(). This allows you to check whether a constexpr function is being invoked at compile time or at runtime. Again there are numerous use cases for this, but a notable one is related to allowing std::string to be used in a constexpr context. Most implementations of std::string use a “small string optimization” (SSO) where sufficiently small strings are stored inline in the string object rather than in a dynamically allocated block. Unfortunately, SSO cannot be used in a constexpr context because it requires using reinterpret_cast (and in any case, the motivation for SSO is runtime performance), so we need a way to make the SSO conditional on the string being created at runtime.
  • Signed integers are two’s complement. This standardizes existing practice that has been the case for all modern C++ implementations for quite a while.
  • Nested inline namespaces. In C++17, you can shorten namespace foo { namespace bar { namespace baz { to namespace foo::bar::baz {, but there is no way to shorten namespace foo { inline namespace bar { namespace baz {. This proposal allows writing namespace foo::inline bar::baz. The single-name version, namespace inline foo { is also valid, and equivalent to inline namespace foo {.
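To make a few of these concrete, here is a small sketch combining some of the newly accepted features. The names are invented for illustration, and at the time of the meeting this would only compile on a compiler tracking the C++20 working draft:

    #include <type_traits>

    // Nested inline namespaces: "inline" can now appear in the nested form.
    namespace lib::inline v2 {

        // try/catch may appear in a constexpr function; the throwing path
        // simply must not be exercised during constant evaluation.
        constexpr int parse_digit(char c) {
            try {
                if (c < '0' || c > '9') throw "not a digit";
                return c - '0';
            } catch (...) {
                return -1;  // only ever reached at runtime
            }
        }

        // std::is_constant_evaluated() lets one function pick a
        // compile-time-friendly implementation (e.g. avoiding an SSO-style
        // trick that needs reinterpret_cast) when evaluated as a constant.
        constexpr int digit_or_zero(char c) {
            if (std::is_constant_evaluated())
                return (c >= '0' && c <= '9') ? c - '0' : 0;
            int d = parse_digit(c);
            return d < 0 ? 0 : d;
        }
    }

    static_assert(lib::v2::digit_or_zero('7') == 7);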

There were also a few that, after being accepted by EWG, were reviewed by CWG and merged into the C++20 working draft the same week, and thus I already mentioned them in the C++20 section above:


Proposals for which further work is encouraged:

  • Generalizing alias declarations. The idea here is to generalize C++’s alias declarations (using a = b;) so that you can alias not only types, but also other entities like namespaces or functions. EWG was generally favourable to the idea, but felt that aliases for different kinds of entities should use different syntaxes. (Among other considerations, using the same syntax would mean having to reinstate the recently-removed requirement to use typename in front of a dependent type in an alias declaration.) The author will explore alternative syntaxes for non-type aliases and return with a revised proposal.
  • Allow initializing aggregates from a parenthesized list of values. This idea was discussed at the last meeting and EWG was in favour, but people got distracted by the quasi-related topic of aggregates with deleted constructors. There was a suggestion that perhaps the two problems could be addressed by the same proposal, but in fact the issue of deleted constructors inspired independent proposals, and this proposal returned more or less unchanged. EWG liked the idea and initially approved it, but during Core Working Group review it came to light that there are a number of subtle differences in behaviour between constructor initialization and aggregate initialization (e.g. evaluation order of arguments, lifetime extension, narrowing conversions) that need to be addressed. The suggested guidance was to have the behaviour with parentheses match the behaviour of constructor calls, by having the compiler (notionally) synthesize a constructor to call when this notation is used. The proposal will return with these details fleshed out.
  • Extensions to class template argument deduction. This paper proposed seven different extensions to this popular C++17 feature. EWG didn’t make individual decisions on them yet. Rather, the general guidance was to motivate the extensions a bit better, choose a subset of the more important ones to pursue for C++20, perhaps gather some implementation experience, and come back with a revised proposal.
  • Deducing this. The type of the implicit object parameter (the “this” parameter) of a member function can vary in the same ways as the types of other parameters: lvalue vs. rvalue, const vs. non-const. C++ provides ways to overload member functions to capture this variation (trailing const, ref-qualifiers), but sometimes it would be more convenient to just template over the type of the this parameter. This proposal aims to allow that, with a syntax like this:

    template <typename Self>
    R foo(this Self&& self, /* other parameters */);

    EWG agreed with the motivation, but expressed a preference for keeping information related to the implicit object parameter at the end of the function declaration (where the trailing const and ref-qualifiers are now), leading to a syntax more like this:

    template <typename Self>
    R foo(/* other parameters */) Self&& self

    (the exact syntax remains to be nailed down as the end of a function declaration is a syntactically busy area, and parsing issues have to be worked out).
    EWG also opined that in such a function, you should only be able to access the object via the declared object parameter (self in the above example), and not also using this (as that would lead to confusion in cases where e.g. this has the base type while self has a derived type).
  • constexpr function parameters. The most ambitious constexpr-related proposal brought forward at this meeting, this aimed to allow function parameters to be marked as constexpr, and accordingly act as constant expressions inside the function body (e.g. it would be valid to use the value of one as a non-type template parameter or array bound). It was quickly pointed out that, while the proposal is implementable, it doesn’t fit into the language’s current model of constant evaluation; rather, functions with constexpr parameters would have to be implemented as templates, with a different instantiation for every combination of parameter values. Since this amounts to being a syntactic shorthand for non-type template parameters, EWG suggested that the proposal be reformulated in those terms.
  • Binding returned/initialized objects to the lifetime of parameters. This proposal aims to improve C++’s lifetime safety (and perhaps take one step towards being more like Rust, though that’s a long road) by allowing programmers to mark function parameters with an annotation that tells the compiler that the lifetime of the function’s return value should be “bound” to the lifetime of the parameter (that is, the return value should not outlive the parameter).
    There are several options for the associated semantics if the compiler detects that the lifetime of a return value would, in fact, exceed the lifetime of a parameter:

    • issue a warning
    • issue an error
    • extend the lifetime of the returned object



    In the first case, the annotation could take the form of an attribute (e.g. [[lifetimebound]]). In the second or third case, it would have to be something else, like a context-sensitive keyword (since attributes aren’t supposed to have semantic effects). The proposal authors suggested initially going with the first option in the C++20 timeframe, while leaving the door open for the second or third option later on.
    EWG agreed that mitigating lifetime hazards is an important area of focus, and something we’d like to deliver on in the C++20 timeframe. There was some concern about the proposed annotation being too noisy / viral. People asked whether the annotations could be deduced (not if the function is compiled separately, unless we rely on link-time processing), or if we could just lifetime-extend by default (not without causing undue memory pressure and risking resource exhaustion and deadlocks by not releasing expensive resources or locks in time). The authors will investigate the problem space further, including exploring ways to avoid the attribute being viral, and comparing their approach to Rust’s, and report back.

  • Nameless parameters and unutterable specializations. In some corner cases, the current language rules do not give you a way to express a partial or explicit specialization of a constrained template (because a specialization requires repeating the constraint with the specialized parameter values substituted in, which does not always result in valid syntax). This proposal invents some syntax to allow expressing such specializations. EWG felt the proposed syntax was scary, and suggested coming back with better motivating examples before pursuing the idea further.
  • How to catch an exception_ptr without even trying. This aims to allow getting at the exception inside an exception_ptr without having to throw it (which is expensive). As a side effect, it would also allow handling exception_ptrs in code compiled with -fno-exceptions. EWG felt the idea had merit, even though performance shouldn’t be the guiding principle (since the slowness of throw is technically a quality-of-implementation issue, although implementations seem to have agreed to not optimize it).
  • Allowing class template specializations in associated namespaces. This allows specializing e.g. std::hash for your own type, in your type’s namespace, instead of having to close that namespace, open namespace std, and then reopen your namespace. EWG liked the idea, but the issue of which names — names in your namespace, names in std, or both — would be visible without qualification inside the specialization, was contentious.
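To make the last item concrete: today, specializing a standard customization point like std::hash for one of your own types requires leaving your namespace. A rough before/after sketch follows; the type and member names are invented, and the proposed syntax shown in the comment is illustrative only:

    #include <cstddef>
    #include <functional>

    namespace app {
        struct UserId { int value; };
    }

    // Status quo: close namespace app, reopen namespace std, specialize,
    // then reopen app to continue.
    namespace std {
        template <>
        struct hash<app::UserId> {
            size_t operator()(const app::UserId& id) const noexcept {
                return hash<int>{}(id.value);
            }
        };
    }

    // Under the proposal, the specialization could live next to the type it
    // is written for, roughly along these lines (exact spelling not final):
    //
    // namespace app {
    //     template <>
    //     struct std::hash<UserId> {
    //         size_t operator()(const UserId& id) const noexcept;
    //     };
    // }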

Rejected proposals:

  • Define basic_string_view(nullptr). This paper argued that since it’s common to represent empty strings as a const char* with value nullptr, the constructor of string_view which takes a const char* argument should allow a nullptr value and interpret it as an empty string. Another paper convincingly argued that conflating “a zero-sized string” with “not-a-string” does more harm than good, and this proposal was accordingly rejected.
  • Explicit concept expressions. This paper pointed out that if constrained-type-specifiers (the language machinery underlying abbreviated function templates) are added to C++ without some extra per-parameter syntax, certain constructs can become ambiguous (see the paper for an example). The ambiguity involves “concept expressions”, that is, the use of a concept (applied to some arguments) as a boolean expression, such as CopyConstructible<T>, outside of a requires-clause. The authors proposed removing the ambiguity by requiring the keyword requires to introduce a concept expression, as in requires CopyConstructible<T>. EWG felt this was too much syntactic clutter, given that concept expressions are expected to be used in places like static_assert and if constexpr, and given that the ambiguity is, at this point, hypothetical (pending what happens to AFTs) and there would be options to resolve it if necessary.

Concepts

EWG had another evening session on Concepts at this meeting, to try to resolve the matter of abbreviated function templates (AFTs).

Recall that the main issue here is that, given an AFT written using the Concepts TS syntax, like void sort(Sortable& s);, it’s not clear that this is a template (you need to know that Sortable is a concept, not a type).

The four different proposals in play at the last meeting have been whittled down to two:

  • An updated version of Herb’s in-place syntax proposal, with which the above AFT would be written void sort(Sortable{}& s); or void sort(Sortable{S}& s); (with S in the second form naming the concrete type deduced for this parameter). The proposal also aims to change the constrained-parameter syntax (with which the same function could be written template <Sortable S> void sort(S& s);) to require braces for type parameters, so that you’d instead write template <Sortable{S}> void sort(S& s);. (The motivation for this latter change is to make it so that ConceptName C consistently makes C a value, whether it be a function parameter or a non-type template parameter, while ConceptName{C} consistently makes C a type.)
  • Bjarne’s minimal solution to the concepts syntax problems, which adds a single leading template keyword to announce that an AFT is a template: template void sort(Sortable& s);. (This is visually ambiguous with one of the explicit specialization syntaxes, but the compiler can disambiguate based on name lookup, and programmers can use the other explicit specialization syntax to avoid visual confusion.) This proposal leaves the constrained-parameter syntax alone.
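For concreteness, here is the same single-parameter function spelled in the Concepts TS syntax and under each of the two remaining proposals, as described above (none of this is settled, and none of it compiles on today’s compilers):

    // Concepts TS abbreviated function template: nothing marks it as a template.
    void sort(Sortable& s);

    // Herb's in-place syntax: braces flag the concept use at the parameter itself.
    void sort(Sortable{}& s);
    void sort(Sortable{S}& s);       // S names the type deduced for this parameter

    // Bjarne's minimal syntax: a single leading 'template' marks the declaration.
    template void sort(Sortable& s);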

Both proposals allow a reader to tell at a glance that an AFT is a template and not a regular function. At the same time, each proposal has downsides as well. Bjarne’s approach annotates the whole function rather than individual parameters, so in a function with multiple parameters you still don’t know at a glance which parameters are concepts (and so e.g. in the case of a Foo&& parameter, you don’t know if it’s an rvalue reference or a forwarding reference). Herb’s proposal messes with the well-loved constrained-parameter syntax.

After an extensive discussion, it turned out that both proposals had enough support to pass, with each retaining a vocal minority of opponents. Neither proposal was progressed at this time, in the hope that some further analysis or convergence can lead to a stronger consensus at the next meeting, but it’s quite clear that folks want something to be done in this space for C++20, and so I’m fairly optimistic we’ll end up getting one of these solutions (or a compromise / variation).

In addition to the evening session on AFTs, EWG looked at a proposal to alter the way name lookup works inside constrained templates. The original motivation for this was to resolve the AFT impasse by making name lookup inside AFTs work more like name lookup inside non-template functions. However, it became apparent that (1) that alone will not resolve the AFT issue, since name lookup is just one of several differences between template and non-template code; but (2) the suggested modification to name lookup rules may be desirable (not just in AFTs but in all constrained templates) anyways. The main idea behind the new rules is that when performing name lookup for a function call that has a constrained type as an argument, only functions that appear in the concept definition should be found; the motivation is to avoid surprising extra results that might creep in through ADL. EWG was supportive of making a change along these lines for C++20, but some of the details still need to be worked out; among them, whether constraints should be propagated through auto variables and into nested templates for the purpose of applying this rule.

Coroutines

As mentioned above, EWG reviewed a modified Coroutines design called Core Coroutines, that was inspired by various concerns that some early adopters of the Coroutines TS had with its design.

Core Coroutines makes a number of changes to the Coroutines TS design:

  • The most significant change, in my opinion, is that it exposes the “coroutine frame” (the piece of memory that stores the compiler’s transformed representation of the coroutine function, where e.g. stack variables that persist across a suspension point are stored) as a first-class object, thereby allowing the user to control where this memory is stored (and, importantly, whether or not it is dynamically allocated).
  • Syntax changes:
    • To how you define a coroutine. Among other motivations, the changes emphasize that parameters to the coroutine act more like lambda captures than regular function parameters (e.g. for reference parameters, you need to be careful that the referred-to objects persist even after a suspension/resumption).
    • To how you call a coroutine. The new syntax is an operator (the initial proposal being [<-]), to reflect that coroutines can be used for a variety of purposes, not just asynchrony (which is what co_await suggests).
  • A more compact API for defining your own coroutine types, with fewer library customization points (basically, instead of specializing numerous library traits that are invoked by compiler-generated code, you overload operator [<-] for your type, with more of the logic going into the definition of that function).
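For reference, here is roughly what a coroutine looks like under the Coroutines TS design that these changes are reacting to; the task and connection types are hypothetical user-defined types, and their promise/traits machinery is exactly what Core Coroutines aims to slim down:

    // Coroutines TS style: locals that live across the co_await are stored in
    // a compiler-generated coroutine frame, which is (notionally) dynamically
    // allocated unless the implementation can elide the allocation. Core
    // Coroutines would instead expose that frame as a first-class object.
    task<int> fetch_and_add_one(connection& conn) {
        int value = co_await conn.read_int();   // suspension point
        co_return value + 1;                    // resumes the awaiting caller
    }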

EWG recognized the benefits of these modifications, although there were a variety of opinions as to how compelling they are. At the same time, there were also a few concerns with Core Coroutines:

  • While having the coroutine frame exposed as a first-class object means you are guaranteed no dynamic memory allocations unless you place it on the heap yourself, it still has a compiler-generated type (much like a lambda closure), so passing it across a translation unit boundary requires type erasure (and therefore a dynamic allocation). With the Coroutines TS, the type erasure was more under the compiler’s control, and it was argued that this allows eliding the allocation in more cases.
  • There were concerns about being able to take the sizeof of the coroutine object, as that requires the size being known by the compiler’s front-end, while with the Coroutines TS it’s sufficient for the size to be computed during the optimization phase.
  • While making the customization API smaller, this formulation relies on more new core-language features. In addition to introducing a new overloadable operator, the feature requires tail calls (which could also be useful for the language in general), and lazy function parameters, which have been proposed separately. (The latter is not a hard requirement, but the syntax would be more verbose without them.)

As mentioned, the procedural outcome of the discussion was to encourage further work on Core Coroutines, while not blocking the merger of the Coroutines TS into C++20 on such work.

While in the end there was no consensus to merge the Coroutines TS into C++20 at this meeting, there remains fairly strong demand for having coroutines in some form in C++20, and I am therefore hopeful that some sort of joint proposal that combines elements of Core Coroutines into the Coroutines TS will surface at the next meeting.

Modules

As of the last meeting, there were two alternative Modules designs before the committee: the recently-published Modules TS, and the alternative proposal from the Clang Modules implementers called Another Take On Modules (“Atom”).

Since the last meeting, the authors of the two proposals have been collaborating to produce a merged proposal that combines elements from both proposals.

The merged proposal accomplishes Atom’s goal of providing a better mechanism for existing codebases to transition to Modules via modularized legacy headers (called legacy header imports in the merged proposal) – basically, existing headers that are not modules, but are treated as-if they were modules by the compiler. It retains the Modules TS mechanism of global module fragments, with some important restrictions, such as only allowing #includes and other preprocessor directives in the global module fragment.
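As a rough sketch of what a module interface unit looks like under the merged proposal (the module and header names are invented, and the details of legacy header imports were still being worked out at the time):

    module;                     // global module fragment: only #includes and
    #include "legacy_config.h"  // other preprocessor directives allowed here

    export module mylib.core;   // this file is the interface of module mylib.core

    import mylib.detail;        // import another module
    import "old_widgets.h";     // legacy header import: an existing header that
                                // is treated as-if it were a module

    export int answer();        // exported declaration, visible to importers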

Other aspects of Atom that are part of the merged proposal include module partitions (a way of breaking up the interface of a module into multiple files), and some changes to export and template instantiation semantics.

EWG reviewed the merged proposal favourably, with a strong consensus for putting these changes into a second iteration of the Modules TS. Design guidance was provided on a few aspects, including tweaks to export behaviour for namespaces, and making export be “inherited”, such that e.g. if the declaration of a structure is exported, then its definition is too by default. (A follow-up proposal is expected for a syntax to explicitly make a structure definition not exported without having to move it into another module partition.) A proposal to make the lexing rules for the names of legacy header units be different from the existing rules for #includes failed to gain consensus.

One notable remaining point of contention about the merged proposal is that module is a hard keyword in it, thereby breaking existing code that uses that word as an identifier. There remains widespread concern about this in multiple user communities, including the graphics community where the name “module” is used in existing published specifications (such as Vulkan). These concerns would be addressed if module were made a context-sensitive keyword instead. There was a proposal to do so at the last meeting, which failed to gain consensus (I suspect because the author focused on various disambiguation edge cases, which scared some EWG members). I expect a fresh proposal will prompt EWG to reconsider this choice at the next meeting.

As mentioned above, there was also a suggestion to take a subset of the merged proposal and put it directly into C++20. The subset included neither legacy header imports nor global module fragments (in any useful form), thereby not providing any meaningful transition mechanism for existing codebases, but it was hoped that it would still be well-received and useful for new codebases. However, there was no consensus to proceed with this subset, because it would have meant having a new set of semantics different from anything that’s implemented today, and that was deemed to be risky.

It’s important to underscore that not proceeding with the “subset” approach does not necessarily mean the committee has given up on having any form of Modules in C++20 (although the chances of that have probably decreased). There remains some hope that the development of the merged proposal might proceed sufficiently quickly that the entire proposal — or at least a larger subset that includes a transition mechanism like legacy header imports — can make it into C++20.

Finally, EWG briefly heard from the authors of a proposal for modular macros, who basically said they are withdrawing their proposal because they are satisfied with Atom’s facility for selectively exporting macros via #export directives, which is being treated as a future extension to the merged proposal.

Papers not discussed

With the continued focus on large proposals that might target C++20 like Modules and Coroutines, EWG has a growing backlog of smaller proposals that haven’t been discussed, in some cases stretching back to two meetings ago (see the committee mailings for a list). A notable item on the backlog is a proposal by Herb Sutter to bridge the two worlds of C++ users — those who use exceptions and those who do not — by extending the exception model in a way that (hopefully) makes it palatable to everyone.

Other Working Groups

Library Groups

Having sat in EWG all week, I can’t report on technical discussions of library proposals, but I’ll mention where some proposals are in the processing queue.

I’ve already listed the library proposals that passed wording review and were voted into the C++20 working draft above.

The following are among the proposals that have passed design review and are undergoing (or awaiting) wording review:

The following proposals are still undergoing design review, and are being treated with priority:

The following proposals are also undergoing design review:

As usual, there is a fairly long queue of library proposals that haven’t started design review yet. See the committee’s website for a full list of proposals.

(These lists are incomplete; see the post-meeting mailing when it’s published for complete lists.)

Study Groups

SG 1 (Concurrency)

I’ve already talked about some of the Concurrency Study Group’s work above, related to the Parallelism TS v2, and Executors.

The group has also reviewed some proposals targeting C++20. These are at various stages of the review pipeline:

Proposals before the Library Evolution Working Group include latches and barriers, C atomics in C++, and a joining thread.

Proposals before the Library Working Group include improvements to atomic_flag, efficient concurrent waiting, and fixing atomic initialization.

Proposals before the Core Working Group include revising the C++ memory model. A proposal to weaken release sequences has been put on hold.

SG 7 (Compile-Time Programming)

It was a relatively quiet week for SG 7, with the Reflection TS having undergone and passed wording review, and extensions to constexpr that will unlock the next generation of reflection facilities being handled in EWG. The only major proposal currently on SG 7’s plate is metaclasses, and that did not have an update at this meeting.

That said, SG 7 did meet briefly to discuss two other papers:

  • PFA: A Generic, Extendable and Efficient Solution for Polymorphic Programming. This aims to make value-based polymorphism easier, using an approach similar to type erasure; a parallel was drawn to the Dyno library. SG 7 observed that this could be accomplished with a pure library approach on top of existing reflection facilities and/or metaclasses (and if it can’t, that would signal holes in the reflection facilities that we’d want to fill).
  • Adding support for type-based metaprogramming to the standard library. This aims to standardize template metaprogramming facilities based on Boost.Mp11, a modernized version of Boost.MPL. SG 7 was reluctant to proceed with this, given that it has previously issued guidance for moving in the direction of constexpr value-based metaprogramming rather than template metaprogramming. At the same time, SG 7 recognized the desire for having metaprogramming facilities in the standard, and urged proponents of the constexpr approach to bring forward a library proposal built on that approach soon.

SG 12 (Undefined and Unspecified Behaviour)

SG 12 met to discuss several topics this week:

  • Reviewed a proposal to allow implicit creation of objects for low-level object manipulation (basically the way malloc() is used), which aims to standardize existing practice that the current standard wording makes undefined behaviour.
  • Reviewed a proposed policy around preserving undefined behaviour, which argues that in some cases, defining behaviour that was previously undefined can be a breaking change in some sense. SG 12 felt that imposing a requirement to preserve undefined behaviour wouldn’t be realistic, but that proposal authors should be encouraged to identify cases where proposals “break” undefined behaviour so that the tradeoffs can be considered.
  • Held a joint meeting with WG 23 (Programming Language Vulnerabilities) to collaborate further on a document describing C++ vulnerabilities. This meeting’s discussion focused on buffer boundary conditions and type conversions between pointers.

SG 15 (Tooling)

The Tooling Study Group (SG 15) held its second meeting during an evening session this week.

The meeting was heavily focused on dependency / package management in C++, an area that has been getting an increased amount of attention of late in the C++ community.

SG 15 heard a presentation on package consumption vs. development, whose author showcased the Build2 build / package management system and its abilities. Much of the rest of the evening was spent discussing what requirements various segments of the user community have for such a system.

The relationship between SG 15 and the committee is somewhat unusual; actually standardizing a package management system is beyond the committee’s purview, so the SG serves more as a place for innovators in this area to come together and hash out what will hopefully become a de facto standard, rather than advancing any proposals to change the standards text itself.

It was observed that the heavy focus on package management has been crowding out other areas of focus for SG 15, such as tooling related to static analysis and refactoring; it was suggested that perhaps those topics should be split out into another Study Group. As someone whose primary interest in tooling lies in these latter areas, I would welcome such a move.

Next Meetings

The next full meeting of the Committee will be in San Diego, California, the week of November 5th, 2018.

However, in an effort to work through some of the committee’s accumulated backlog, as well as to try to make a push for getting some features into C++20, three smaller, more targeted meetings have been scheduled before then:

  • A meeting of the Library Working Group in Batavia, Illinois, the week of August 20th, 2018, to work through its backlog of wording review for library proposals.
  • A meeting of the Evolution Working Group in Seattle, Washington, from September 20-21, 2018, to iterate on the merged Modules proposal.
  • A meeting of the Concurrency Study Group (with Library Evolution Working Group attendance also encouraged) in Seattle, Washington, from September 22-23, 2018, to iterate on Executors.

(The last two meetings are timed and located so that CppCon attendees don’t have to make an extra trip for them.)

Conclusion

I think this was an exciting meeting, and am pretty happy with the progress made. Highlights included:

  • The entire Ranges TS being on track to be merged into C++20.
  • C++20 gaining standard facilities for contract programming.
  • Important progress on Modules, with a merged proposal that was very well-received.
  • A pivot towards package management, including as a way to make graphical programming in C++ more accessible.

Stay tuned for future reports from me!

Other Trip Reports

Some other trip reports about this meeting include Bryce Lelbach’s, Timur Doumler’s, and Guy Davidson’s. I encourage you to check them out as well!

Gregory Szorc: Deterministic Firefox Builds

As of Firefox 60, the build environment for official Firefox Linux builds switched from CentOS to Debian.

As part of the transition, we overhauled how the build environment for Firefox is constructed. We now populate the environment from deterministic package snapshots and are much more stringent about dependencies and operations being deterministic and reproducible. The end result is that the build environment for Firefox is deterministic enough to enable Firefox itself to be built deterministically.

Changing the underlying operating system environment used for builds was a risky change. Differences in the resulting build could result in new bugs or some users not being able to run the official builds. We figured a good way to mitigate that risk was to make the old and new builds as bit-identical as possible. After all, if the environments produce the same bits, then nothing has effectively changed and there should be no new risk for end-users.

Employing the diffoscope tool, we identified areas where Firefox builds weren't deterministic in the same environment and where there was variance across build environments. We iterated on differences and changed systems so variance would no longer occur. By the end of the process, we had bit-identical Firefox builds across environments.

So, as of Firefox 60, Firefox builds on Linux are deterministic in our official build environment!

That being said, the builds we ship to users are using PGO. And an end-to-end build involving PGO is intrinsically not deterministic because it relies on timing data that varies from one run to the next. And we don't yet have continuous automated end-to-end testing that determinism holds. But the underlying infrastructure to support deterministic and reproducible Firefox builds is there and is not going away. I think that's a milestone worth celebrating.

This milestone required the effort of many people, often working indirectly toward it. Debian's reproducible builds effort gave us an operating system that provided deterministic and reproducible guarantees. Switching Firefox CI to Taskcluster enabled us to switch to Debian relatively easily. Many were involved with non-determinism fixes in Firefox over the years. But Mike Hommey drove the transition of the build environment to Debian and he deserves recognition for his individual contribution. Thanks to all these efforts - and especially Mike Hommey's - we can now say Firefox builds deterministically!

The fx-reproducible-build bug tracks ongoing efforts to further improve the reproducibility story of Firefox. (~300 bugs in its dependency tree have already been resolved!)

Niko Matsakis: Proposal for a staged RFC process

I consider Rust’s RFC process one of our great accomplishments, but it’s no secret that it has a few flaws. At its best, the RFC offers an opportunity for collaborative design that is really exciting to be a part of. At its worst, it can devolve into bickering without any real motion towards consensus. If you’ve not done so already, I strongly recommend reading aturon’s excellent blog posts on this topic.

The RFC process has also evolved somewhat organically over time. What began as “just open a pull request on GitHub” has moved into a process with a number of formal and informal stages (described below). I think it’s a good time for us to take a step back and see if we can refine those stages into something that works better for everyone.

This blog post describes a proposal that arose over some discussions at the Mozilla All Hands. This proposal represents an alternate take on the RFC process, drawing on some ideas from the TC39 process, but adapting them to Rust’s needs. I’m pretty excited about it.

Important: This blog post is meant to advertise a proposal about the RFC process, not a final decision. I’d love to get feedback on this proposal and I expect further iteration on the details. In any case, until the Rust 2018 Edition work is complete, we don’t really have the bandwidth to make a change like this. (And, indeed, most of my personal attention remains on NLL at the moment.)

TL;DR

The TL;DR of the proposal is as follows:

  • Explicit RFC stages. Each proposal moves through a series of explicit stages.
  • Each RFC gets its own repository. These are automatically created by a bot. This permits us to use GitHub issues and pull requests to split up conversation. It also permits a RFC to have multiple documents (e.g., a FAQ).
  • The repository tracks the proposal from the early days until stabilization. Right now, discussions about a particular proposal are scattered across internals, RFC pull requests, and the Rust issue tracker. Under this new proposal, a single repository would serve as the home for the proposal. In the case of more complex proposals, such as impl Trait, the repository could even serve as the home for multiple layered RFCs.
  • Prioritization is now an explicit part of the process. The new process includes an explicit step to move from the “spitballing” stage (roughly “Pre-RFC” today) to the “designing” stage (roughly “RFC” today). This step requires both a team champion, who agrees to work on moving the proposal through implementation and towards stabilization, and general agreement from the team. The aim here is two-fold. First, the teams get a chance to provide early feedback and introduce key constraints (e.g., “this may interact with feature X”). Second, it provides room for a discussion about prioritization: there are often RFCs which are good ideas, but which are not a good idea right now, and the current process doesn’t give us a way to specify that.
  • There is more room for feedback on the final, implemented design. In the new process, once implementation is complete, there is another phase where we (a) write an explainer describing how the feature works and (b) issue a general call for evaluation. We’ve done this before – such as cramertj’s call for feedback on impl Trait, aturon’s call to benchmark incremental compilation, or alexcrichton’s push to stabilize some subset of procedural macros – but each of those was an informal effort, rather than an explicit part of the RFC process.

The current process

Before diving into the new process, I want to give my view of the current process by which an idea becomes a stable feature. This goes beyond just the RFC itself. In fact, there are a number of stages, though some of them are informal or sometimes skipped:

  • Pre-RFC (informal): Discussions take place – often on internals – about the shape of the problem to be solved and possible proposals.
  • RFC: A specific proposal is written and debated. It may be changed during this debate as a result of points that are raised.
    • Steady state: At some point, the discussion reaches a “steady state”. This implies a kind of consensus – not necessarily a consensus about what to do, but a consensus on the pros and cons of the feature and the various alternatives.
      • Note that reaching a steady state does not imply that no new comments are being posted. It just implies that the content of those comments is not new.
    • Move to merge: Once the steady state is reached, the relevant team(s) can move to merge the RFC. This begins with a bunch of checkboxes, where each team member indicates that they agree that the RFC should be merged; in some cases, blocking concerns are raised (and resolved) during this process.
    • FCP: Finally, once the team has assented to the merge, the RFC enters the Final Comment Period (FCP). This means that we wait for 10 days to give time for any final arguments to arise.
  • Implementation: At this point, a tracking issue on the Rust repo is created. This will be the new home for discussion about the feature. We can also start writing code, which lands under a feature gate.
    • Refinement: Sometimes, after implementing the feature, we find that the original design was inconsistent, in which case we might opt to alter the spec. Such alterations are discussed on the tracking issue – for significant changes, we will typically open a dedicated issue and do an FCP process, just like with the original RFC. A similar procedure happens for resolving unresolved questions.
  • Stabilization: The final step is to move to stabilize. This is always an FCP decision, though the precise protocol varies. What I consider Best Practice is to create a dedicated issue for the stabilization: this issue should describe what is being stabilized, with an emphasis on (a) what has changed since the RFC, (b) tests that show the behavior in practice, and (c) what remains to be stabilized. (An example of such an issue is #48453, which proposed to stabilize the ? in main feature.)

Proposal for a new process

The heart of the new proposal is that each proposal should go through a series of explicit stages, depicted graphically here (you can also view this directly on Google Drawings, where the oh-so-important emojis work better):

You’ll notice that the stages are divided into two groups. The stages on the left represent phases where significant work is being done: they are given “active” names that end in “ing”, like spitballing, designing, etc. The bullet points below describe the work that is to be done. As will be described shortly, this work is done on a dedicated repository, by the community at large, in conjunction with at least one team champion.

The stages on the right represent decision points, where the relevant team(s) must decide whether to advance the RFC to the next stage. The bullet points below represent the questions that the team must answer. If the answer is Yes, then the RFC can proceed to the next stage – note that sometimes the RFC can proceed, but unresolved questions are added as well, to be addressed at a later stage.

Repository per RFC

Today, the “home” for an RFC changes over the course of the process. It may start in an internals thread, then move to the RFC repo, then to a tracking issue, etc. Under the new process, we would instead create a dedicated repository for each RFC. Once created, the repository would serve as the “main home” for the new proposal from start to finish.

The repositories will live in the rust-rfcs organization. There will be a convenient webpage for creating them; it will create a repo that has an appropriate template and which is owned by the appropriate Rust team, with the creator also having full permissions. These repositories would naturally be subject to Rust’s Code of Conduct and other guidelines.

Note that you do not have to seek approval from the team to create an RFC repository. Just like opening a PR, creating a repository is something that anyone can do. The expectation is that the team will be tracking new repositories that are created (as well as those seeing a lot of discussion) and that members of the team will get involved when the time is right.

The goal here is to create the repository early – even before the RFC text is drafted, and perhaps before there exists a specific proposal. This allows joint authorship of RFCs and iteration in the repository.

In addition to creating a “single home” for each proposal, having a dedicated repository allows a number of new patterns to emerge:

  • One can create a FAQ.md that answers common questions and summarizes points that have already reached consensus.
  • One can create an explainer.md that documents the feature and explains how it works – in fact, creating such docs is mandatory during the “implementing” phase of the process.
  • We can put more than one RFC into a single repository. Often, there are complex features with inter-related (but distinct) aspects, and this allows those different parts to move through the stabilization process at a different pace.

The main RFC repository

The main RFC repository (named rust-rfcs/rfcs or something like that) would no longer contain content on its own, except possibly the final draft of each RFC text. Instead, it would primarily serve as an index into the other repositories, organized by stage (similar to the TC39 proposals repository).

The purpose of this repository is to make it easy to see “what’s coming” when it comes to Rust. I also hope it can serve as a kind of “jumping off point” for people contributing to Rust, whether that be through design input, implementation work, or other things.

Team champions and the mechanics of moving an RFC between stages

One crucial role in the new process is that of the team champion. The team champion is someone from the Rust team who is working to drive this RFC to completion. Procedurally speaking, the team champion has two main jobs. First, they will give periodic updates to the Rust team at large of the latest developments, which will hopefully identify conflicts or concerns early on.

The second job is that team champions decide when to try and move the RFC between stages. The idea is that it is time to move between stages when two conditions are met:

  • The discussion on the repository has reached a “steady state”, meaning that there do not seem to be new arguments or counterarguments emerging. This sometimes also implies a general consensus on the design, but not always: it does however imply general agreement on the contours of the design space and the trade-offs involved.
  • There are good answers to the questions listed for that stage.

The actual mechanics of moving an RFC between stages are as follows. First, although not strictly required, the team champion should open an issue on the RFC repository proposing that it is time to move between stages. This issue should contain a draft of the report that will be given to the team at large, which should include a summary of the key points (pro and con) around the design. Think of it like a summary comment today. This issue can go through an FCP period in the same way as today (though without the need for checkmarks) to give people a chance to review the summary.

At that point, the team champion will open a PR on the main repository (rust-rfcs/rfcs). This PR itself will not have a lot of content: it will mostly edit the index, moving the PR to a new stage, and – where appropriate – linking to a specific revision of the text in the RFC repository (this revision then serves as “the draft” that was accepted, though of course further edits can and will occur). It should also link to the issue where the champion proposed moving to the next stage, so that the team can review the comments found there.

The PRs that move an RFC between stages are primarily intended for the Rust team to discuss – they are not meant to be the source of significant discussion, which ought to be taking place on the repository. If one looks at the current RFC process, these PRs might consist of roughly the set of comments that typically occur once FCP is proposed. The teams should ensure that a decision (yay or nay) is reached in a timely fashion.

Finding the best way for teams to govern themselves to ensure prompt feedback remains a work in progress. The TC39 process is all based around regular meetings, but we are hoping to achieve something more asynchronous, in part so that we can be more friendly to people from all time zones, and to ease language barriers. But there is still a need to ensure that progress is made. I expect that weekly meetings will continue to play a role here, if only to nag people.

Making implicit stages explicit

There are two new points in the process that I want to highlight. Both of these represent an attempt to take “implicit” decision points that we used to have and make them more explicit and observable.

The Proposal point and the change from Spitballing to Designing

The very first transition in the RFC process is going from the Spitballing phase to the Designing phase – this is done by presenting a Proposal. One crucial point is that there doesn’t have to be a primary design in order to present a proposal. It is ok to say “here are two or three designs that all seem to have advantages, and further design is needed to find the best approach” (often, that approach will be some form of synthesis of those designs anyway).

The main questions to be answered at the Proposal point have to do with motivation and prioritization. There are a few questions to answer:

  • Is this a problem we want to solve?
    • And, specifically, is this a problem we want to solve now?
  • Do we think we have some realistic ideas for solving it?
    • Are there major things that we ought to dig into?
  • Are there cross-cutting concerns and interactions with other features?
    • It may be that two features are individually quite good but – taken together – blow the language complexity budget. We should always try to judge how a new feature might affect the language (or libraries) as a whole.
    • We may want to extend the process in other ways to make identification of such “cross-cutting” or “global” concerns more first class.

The expectation is that all major proposals need to be connected to the roadmap. This should help to keep us focused on the work we are supposed to be doing. (I think it is possible for RFCs to advance that are not connected to the roadmap, but they need to be simple extensions that could effectively work at any time.)

There is another way that having an explicit Proposal step addresses problems around prioritization. Creating a Proposal requires a Team Champion, which implies that there is enough team bandwidth to see the project through to the end (presuming that people don’t become champions for more than a few projects at a time). If we find that there aren’t enough champions to go around (and there aren’t), then this is a sign we need to grow the teams (something we’ve been trying hard to do).

The Proposal point also offers a chance for other team members to point out constraints that may have been overlooked. These constraints don’t necessarily have to derail the proposal; they may just add new points to be addressed during the Designing phase.

The Candidate point and the Evaluating phase

Another new addition to the process here is the Evaluation phase. The idea here is that, once implementation is complete, we should do two things:

  • Write up an explainer that describes how the feature works in terms suitable for end users. This is a kind of “preliminary” documentation for the feature. It should explain how to enable the feature, what it’s good for, and give some examples of how to use it.
    • For libraries, the explainer may not be needed, as the API docs serve the same purpose.
    • We should in particular cover points where the design has changed significantly since the “Draft” phase.
  • Propose the RFC for Candidate status. If accepted, we will also issue a general call for evaluation. This serves as a kind of “pre-stabilization” notice. It means that people should go take the new feature for a spin, kick the tires, etc. This will hopefully uncover bugs, but also surprising failure modes, ergonomic hazards, or other pitfalls with the design. If any significant problems are found, we can correct them, update the explainer, and repeat until we are satisfied (or until we decide the idea isn’t going to work out).

As I noted earlier, we’ve done this before, but always informally.

Once the evaluation phase seems to have reached a conclusion, we would move to stabilize the feature. The explainer docs would then become the preliminary documentation and be added to a kind of addendum in the Rust book. The docs team would be expected to integrate that material into the book in smoother form sometime after stabilization.

Conclusion

As I wrote before, this is only a preliminary proposal, and I fully expect us to make changes to it. Timing wise, I don’t think it makes sense to pursue this change immediately anyway: we’ve too much going on with the edition. But I’m pretty excited about revamping our RFC processes both by making stages explicit and adding explicit repositories.

I have hopes that we will find ways to use explicit repositories to drive discussions towards consensus faster. It seems that having the ability, for example, to maintain “auxiliary” documents, such as lists of constraints and rationale, can help to ensure that people’s concerns are both heard and met.

In general, I would also like to start trying to foster a culture of “joint ownership” of in-progress RFCs. Maintaining a good RFC repository is going to be a fair amount of work, which is a great opportunity for people at large to pitch in. This can then serve as a kind of “mentoring on-ramp”, getting people more involved in the lang team. Similarly, I think that having a list of RFCs that are in the “implementation” phase might be a way to help engage people who’d like to hack on the compiler.

Credit where credit is due

This proposal is heavily shaped by the TC39 process. This particular version was largely drafted in a big group discussion with wycats, aturon, ag_dubs, steveklabnik, nrc, jntrnr, erickt, and oli-obk, though earlier proposals also involved a few others.

Updates

(I made various simplifications shortly after publishing, aiming to keep the length of this blog post under control and remove what seemed to be somewhat duplicated content.)

Air MozillaGuidance for H1 Merit and Bonus Award Cycle

Guidance for H1 Merit and Bonus Award Cycle In part 2 of this two-part video, managers can use this Playbook to assess their employees' performance and make recommendations about bonus and merit.

Air MozillaManager Playbook_Performance Assessment

Manager Playbook_Performance Assessment In part 1 of this two-part video, managers can use this Playbook to help assess their employees' performance and make bonus and merit recommendations.

Bryce Van DykSetting up Arcanist for Mozilla development on Windows

Mozilla is rolling out Phabricator as part of our tooling. However, at the time of writing I was unable to find a straightforward setup to get the Phabricator tooling playing nice on Windows with MozillaBuild.

Right now there are a couple of separate threads around how to interact with Phabricator on Windows:

However, I have stuff waiting for me on Phabricator that I'd like to interact with now, so let's get a workaround in place! I started with the Arcanist Windows steps, but have adapted them to a MozillaBuild specific environment.

PHP

Arcanist requires PHP. Grab a build from here. The docs for Arcanist indicate the type of build doesn't really matter, but I opted for a thread safe one because that seems like a nice thing to have.

I installed PHP outside of my MozillaBuild directory, but you can put it anywhere. For the sake of example, my install is in C:\Patches\Php\php-7.2.6-Win32-VC15-x64.

We need to enable the curl extension: in the PHP install dir copy php.ini-development to php.ini and uncomment (by removing the ;) the extension=curl line.

Finally, enable PHP to find its extensions by uncommenting the extension_dir = "ext" line. The Arcanist instructions suggest setting a fully qualified path, but I found a relative path worked fine.
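
Putting the two edits together, the relevant lines in php.ini should end up looking something like this (both lines ship commented out in php.ini-development; the directory in the comment is just my install path):

; C:\Patches\Php\php-7.2.6-Win32-VC15-x64\php.ini
extension_dir = "ext"
extension=curl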

Arcanist

Create somewhere to store Arcanist and libphutil. Note, these need to be located in the same directory for arc to work.

$ mkdir somewhere/
$ cd somewhere/
somewhere/ $ git clone https://github.com/phacility/libphutil.git
somewhere/ $ git clone https://github.com/phacility/arcanist.git

For me this is C:\Patches\phacility\.

Wire everything into MozillaBuild

Since I want arc to be usable in MozillaBuild until this workaround is no longer required, we're going to modify the startup settings. We can do this by changing ...mozilla-build/msys/etc/profile.d and adding to the PATH already being constructed. In my case I've added the paths mentioned earlier, but with MSYS syntax: /c/Patches/Php/php-7.2.6-Win32-VC15-x64:/c/Patches/phacility/arcanist/bin:.
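
As a sketch, the sort of line I appended (the exact profile.d script name varies between MozillaBuild versions, and the paths are the ones from my install; adjust them to match wherever you put PHP and Arcanist):

# appended to the PATH construction in .../mozilla-build/msys/etc/profile.d
export PATH="/c/Patches/Php/php-7.2.6-Win32-VC15-x64:/c/Patches/phacility/arcanist/bin:$PATH"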

Now arc should run inside newly started MozillaBuild shells.

Credentials

We still need credentials in order to use arc with mozilla-central. For this to work we need a Phabricator account, see here for that. After that's done, in order to get your credentials run arc install-certificate, navigate to the page as instructed and copy your API key back to the command line.
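
Roughly, the exchange looks like this (run it from your mozilla-central clone; the prompt wording may differ slightly between Arcanist versions):

$ cd mozilla-central
$ arc install-certificate
# arc prints a Phabricator URL: open it, log in, and generate an API token
# then paste the token back at the prompt to finish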

Other problems

There was an issue with the evolve Mercurial extension that would cause "Unknown Mercurial log field 'instability'!" errors. This should now be fixed in Arcanist. See this bug for more info.

Finally, I had some issues with arc diff based on my Mercurial config. Updating my extensions and running a ./mach bootstrap seemed to help.

Ready to go

Hopefully everything is ready to go at this point. I found working through the Mozilla docs for how to use arc after setup helpful. If you have any comments, please let me know either via email or on IRC.

Emma IrwinCall for Feedback! Draft of Goal-Metrics for Diversity & Inclusion in Open Source (CHAOSS)

open source stars — opensource.com CC BY-SA 2.0

 

In the last few months, Mozilla has invested in collaboration with other open source project leaders and academics who care about improving diversity & inclusion in Open Source through the CHAOSS D&I working group. Contributors so far include:

Alexander Serebrenik (Eindhoven University of Technology), Akshita Gupta (Outreachy), Amy Marrich (OpenStack), Anita Sarma (Oregon State University), Bhagashree Uday (Fedora), Daniel Izquierdo (Bitergia), Emma Irwin (Mozilla), Georg Link (University of Nebraska at Omaha), Gina Helfrich (NumFOCUS), Nicole Huesman (Intel) and Sean Goggins (University of Missouri).

Our goals are, first, to establish a set of peer-validated goal-metrics for understanding diversity & inclusion in FOSS; second, to identify technology and research methodologies for understanding the success of our interventions in ways that keep the ethics of privacy and consent at the center; and finally, to document this work in ways that communities can reproduce the reports for themselves.

For Mozilla this follows the recommendations coming out of our D&I research to create Metrics that Matter, and to work across Open Source with other projects trying to solve the same problems. I am very excited to share our first draft of goal-metrics for your feedback.

D&I Working Group — Initial set of Goal-Metrics

Demographics

Communication

Contribution

Events

Governance

Leadership

Project Places

Recognition

Ethics

Please note that we know these are incomplete; there are likely existing resources that can improve, or even disprove, some of these — and that is the point of this blog post! Please review and provide feedback via a GitHub issue or pull request, by reaching out to someone in the working group, or by joining our working group call (next one July 20th, 9am PST); you can find the video link here.

You can find one or more of us at the following events as well:


Mozilla VR BlogIntroducing A-Terrain - a cartography component for A-Frame

Introducing A-Terrain - a cartography component for A-Frame

Have you ever wanted to make a small web app to share your favorite places with your friends? For example your favorite photographs attached to a hike, or just a view of your favorite peak, or your favorite places downtown, or a suggested itinerary for friends visiting?

Right now it is difficult to easily incorporate third party map data into your own projects. Creating 3d games or VR experiences with real world maps requires access to proprietary software or closed data ecosystems. To do it from scratch requires pulling data from multiple sources, such as image servers and elevation servers, and it requires substantial math expertise. As well, you may often want to stylize the rendering to suit your own specific use cases. You may have a Tron-like video game aesthetic for your project, and yet the building geometry you're forced to work with doesn't allow you to change colors. While there are many map providers, such as Apple and Google Maps, and there are many viewers, most of these tools are specialized around showing high fidelity maps that are as true to reality as possible. What's missing is a middle ground, where you can take map data and easily put it in your own projects, creating your own mash-ups.

We see A-Terrain as a starting point or demo for how the web can be different. With this component you can build whatever 3D experience you want and use real world data.

We’ve arranged for Cesium ion (see http://cesium.com) to make the data set available for free for people to try out. Currently the dataset includes world elevation, satellite images and 3d buildings for San Francisco.

For example here is a stylized view of San Francisco as viewed from ground level on the Embarcadero:


You can try this example yourself in your browser here (use AWSD or arrow keys to move around):

https://anselm.github.io/aterrain/examples/helloworld/tile.html .

This component can also be used as a quick and dirty globe renderer (although if you're really interested in that specific use case then Cesium itself may be more suitable):


I have added some rudimentary navigation controls using hash arguments on the URL. For example here is a piece of Mt Whitney:

https://anselm.github.io/aterrain/examples/place/index.html#lat=36.57850&lon=-118.29226&elev=1000


The real strength of a tool like this is composability — to be able to mix different components together. For example here is A-Terrain and Mozilla Hubs being used for a collaborative hiking trip planning scenario to the Grand Canyon:


Here is the URL for the above. This will take you to a random room ID - share that room ID with your friends to join the same room:




As another example of lightweight composability I placed a tiny duck on the earth's surface above Oregon. This is just a few lines of scripting:


This example can be visited here:

https://anselm.github.io/aterrain/examples/helloworld/duck.html

To accomplish all this we leverage A-Frame — a browser based framework that lets users build 3d environments easily. The A-Frame philosophy is to take complicated behaviors and wrap them up in HTML tags. If you can write ordinary HTML you can build 3d environments.

A-Frame is part of a Mozilla initiative to foster the open web — to raise the bar on what people can create on the web. Using A-Frame anybody can make 3d, virtual or augmented reality experiences on the web. These experiences can be shared instantly with anybody else in the world — running in the browser, on mobile phones, tablets and high-end head-mounted displays such as the Oculus Rift and the HTC Vive. You don’t need to buy a 3d authoring tool, you don’t need to ask somebody else for permission to publish your app, you don’t publish your apps through an app store, you don’t need a special viewer to view the experience — it just runs — just like any ordinary web page.

I want to mention just a few of the folks who’ve helped bring this to this point — this includes Lars Bergstrom at Mozilla, Patrick Cozzi at Cesium, Shehzan especially (who was tireless in answering my dumb questions about coordinate re-projections), Blair MacIntyre (who had the initial idea) and Joshua Marinacci (who has been suggesting improvements and acting as a sounding board as well as testing this work).

The source code for this project is here:

https://github.com/anselm/aterrain

We’re all especially interested in seeing what kinds of experiences people build, and what directions this goes in. I'm especially interested in seeing AR use cases that combine this component with Augmented Reality frameworks such as recent Mozilla initiatives here: https://www.roadtovr.com/mozilla-launches-ios-app-experiment-webar/. Please keep us posted on your work!

Dave TownsendTaming Phabricator

So Mozilla is going all-in on Phabricator and Differential as a code review tool. I have mixed feelings on this, not least because its support for patch series is more manual than I’d like. But since this is the choice Mozilla has made I might as well start to get used to it. One of the first things you see when you log into Phabricator is a default view full of information.

A screenshot of Phabricator's default view

It’s a little overwhelming for my tastes. The Recent Activity section in particular is more than I need; it seems to list anything anyone has done with Phabricator recently. Sorry Ted, but I don’t care about that review comment you posted. Likewise the Active Reviews section seems very full when it is barely listing any reviews.

But here’s the good news. Phabricator lets you create your own dashboards to use as your default view. It’s a bit tricky to figure out so here is a quick crash course.

Click on Dashboards on the left menu. Click on Create Dashboard in the top right, make your choices then hit Continue. I recommend starting with an empty Dashboard so you can just add what you want to it. Everything on the next screen can be modified later but you probably want to make your dashboard only visible to you. Once created click “Install Dashboard” at the top right and it will be added to the menu on the left and be the default screen when you load Phabricator.

Now you have to add searches to your dashboard. Go to Differential’s advanced search. Fill out the form to search for what you want. A quick example. Set “Reviewers” to “Current Viewer”, “Statuses” to “Needs Review”, then click Search. You should see any revisions waiting on you to review them. Tinker with the search settings and search all you like. Once you’re happy click “Use Results” and “Add to Dashboard”. Give your search a name and select your dashboard. Now your dashboard will display your search whenever loaded. Add as many searches as you like!

Here is my very simple dashboard that lists anything I have to review, revisions I am currently working on and an archive of closed work:

A Phabricator dashboard

Like it? I made it public and you can see it and install it to use yourself if you like!

David HumphreyBuilding Large Code on Travis CI

This week I was doing an experiment to see if I could automate a build step in a project I'm working on, which requires binary resources to be included in a web app.

I'm building a custom Linux kernel and bundling it with a root filesystem in order to embed it in the browser. To do this, I'm using a dockerized Buildroot build environment (I'll write about the details of this in a follow-up post). On my various computers, this takes anywhere from 15-25 minutes. Since my buildroot/kernel configs won't change very often, I wondered if I could move this to Travis and automate it away from our workflow?

Travis has no problem using docker, and as long as you can fit your build into the allotted 50 minute build timeout window, it should work. Let's do this!

First attempt

In the simplest case, doing a build like this would be as simple as:

sudo: required
services:
  - docker
...
before_script:
  - docker build -t buildroot .
  - docker run --rm -v $PWD/build:/build buildroot
...
deploy:
  # Deploy built binaries in /build along with other assets

This happily builds my docker buildroot image, and then starts the build within the container, logging everything as it goes. But once the log gets to 10,000 lines in length, Travis won't produce more output. You can still download the Raw Log as a file, so I wait a bit and then periodically download a snapshot of the log in order to check on the build's progress.

At a certain point the build is terminated: once the log file grows to 4M, Travis assumes the excessive size is noise (for example, a command running in an infinite loop) and terminates the build with an error.

Second attempt

It's clear that I need to reduce the output of my build. This time I redirect build output to a log file, and then tell Travis to dump the tail-end of the log file in the case of a failed build. The after_failure and after_success build stage hooks are perfect for this:

before_script:
  - docker build -t buildroot . > build.log 2>&1
  - docker run --rm -v $PWD/build:/build buildroot >> build.log 2>&1

after_failure:
  # dump the last 2000 lines of our build, and hope the error is in that!
  - tail --lines=2000 build.log

after_success:
  # Log that the build worked, because we all need some good news
  - echo "Buildroot build succeeded, binary in ./build"

I'm pretty proud of this until it fails after 10 minutes of building, with an error saying Travis assumes the lack of log messages (which are all going to my build.log file) means my build has stalled and should be terminated. Turns out you must produce console output every 10 minutes to keep Travis builds alive.

Third attempt

Not only is this a common problem, Travis has a built-in solution in the form of travis_wait. Essentially, you can prefix your build command with travis_wait and it will tolerate there being no output for 20 minutes. Need more than 20? You can optionally pass it the number of minutes to wait before timing out. Let's try 30 minutes:

before_script:
  - docker build -t buildroot . > build.log 2>&1
  - travis_wait 30 docker run --rm -v $PWD/build:/build buildroot >> build.log 2>&1

This builds perfectly...for 10 minutes. Then it dies with a timeout due to there being no console output. Some more research reveals that travis_wait doesn't play nicely with processes that fork or exec.

Fourth attempt

Lots of people suggest variations on the same theme: run a loop in the background that spins and periodically prints something to stdout, while your build runs in the foreground:

before_script:
  - docker build -t buildroot . > build.log 2>&1
  - while sleep 5m; do echo "=====[ $SECONDS seconds, buildroot still building... ]====="; done &
  - time docker run --rm -v $PWD/build:/build buildroot >> build.log 2>&1
  # Killing background sleep loop
  - kill %1

Here we log something at 5 minute intervals, while the build progresses in the background. When it's done, we kill the while loop. This works perfectly...until it hits the 50 minute barrier and gets killed by Travis:

$ docker build -t buildroot . > build.log 2>&1
before_script
$ while sleep 5m; do echo "=====[ $SECONDS seconds, buildroot still building... ]====="; done &
$ time docker run --rm -v $PWD/build:/build buildroot >> build.log 2>&1
=====[ 495 seconds, buildroot still building... ]=====
=====[ 795 seconds, buildroot still building... ]=====
=====[ 1095 seconds, buildroot still building... ]=====
=====[ 1395 seconds, buildroot still building... ]=====
=====[ 1695 seconds, buildroot still building... ]=====
=====[ 1995 seconds, buildroot still building... ]=====
=====[ 2295 seconds, buildroot still building... ]=====
=====[ 2595 seconds, buildroot still building... ]=====
=====[ 2895 seconds, buildroot still building... ]=====
The job exceeded the maximum time limit for jobs, and has been terminated.

The build took over 48 minutes on the Travis builder, and combined with the time I'd already spent cloning, installing, etc. there isn't enough time to do what I'd hoped.

Part of me wonders whether I could hack something together that uses successive builds and Travis caches and moves the build artifacts out of docker, such that I can do incremental builds and leverage ccache and the like. I'm sure someone has done it, and it's in a .travis.yml file in GitHub somewhere already. I leave this as an experiment for the reader.
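
If you want to experiment with that yourself, a minimal sketch of the caching half might look like the following; the cached paths are assumptions for this particular buildroot setup, and you'd still need to teach the docker container to read and write the ccache directory:

cache:
  directories:
    - $HOME/.ccache
    - build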

I've got nothing but love for Travis and the incredible free service they offer open source projects. Every time I concoct some new use case, I find that they've added it or supported it all along. The Travis docs are incredible, and well worth your time if you want to push the service in interesting directions.

In this case I've hit a wall and will go another way. But I learned a bunch and in case it will help someone else, I leave it here for your CI needs.

Mozilla GFXWebRender newsletter #20

Newsletter number twenty is here, delayed again by a combination of days off and the bi-annual Mozilla AllHands which took place last week in San Francisco.
A big highlight on the WebRender side is that the work on porting all primitives to the brush system is approaching completion. Individually, porting each primitive doesn’t sound like much, but with all of the pieces coming together:

  • Most complex primitives can be segmented, moving a lot of pixels to the opaque pass and using intermediate targets to render the tricky parts.
  • The majority of the alpha pass is now using the brush image shader which greatly improves batching. We generate about 2 to 3 times less draw calls on average now than a month ago.

This translates into noticeable performance improvements on a lot of very complex pages. The most outstanding remaining performance issues are now caused by the CPU fallback, which we are working on moving off of the critical path, so things are looking very promising, especially with the mountain of other performance improvements we are currently holding off on to concentrate on correctness.

Speaking of fixing correctness issues, as usual we can see from the lists below that there is also a lot of progress in this area.

Notable WebRender changes

  • Kvark fixed an issue with invalid local rect indices.
  • Hugh Gallagher merged the ResourceUpdates and Transaction types.
  • Lee implemented rounding off sub-pixel offsets coming from scroll origins.
  • Kvark fixed a bug in the tracking of image dirty rects.
  • Glenn ported inset/outset border styles to brush shaders.
  • Kvark fixed an issue with document placement and scissoring.
  • Kvark added a way to track display items when debugging.
  • Glenn ported double, groove and ridge borders to the brush shader system.
  • Patrick fixed a crash with pathfinder.
  • Martin improved the API for adding reference frames.
  • Kats fixed a crash with blob images.
  • Gankro fixed a cache collision bug.
  • Glenn ported dots and dashes to the brush shader infrastructure.
  • Glenn fixed the invalidation of drop shadows when the local rect is animating.
  • Glenn removed empty batches to avoid empty draw calls and batch breaks.
  • Kats moved building the hit-test tree to the scene building phase.
  • Martin improved the clipping documentation.
  • Glenn removed the transform variant of the text shader.
  • Lee fixed support for arbitrarily large font sizes.
  • Glenn fixed box shadows when the inner mask has invalid size.
  • Glenn fixed a crash with zero-sized borders.
  • Nical added a debug indicator showing when rendering happens.

Notable Gecko changes

  • Sotaro enabled DirectComposition to present the window.
  • Sotaro fixed an issue with device resets on Windows.
  • Kats enabled a lot of tests and benchmarks on the CI (spread over many bugzilla entries).
  • Kats improved a synchronization mechanism between the content process and WebRender.
  • Sotaro implemented a shader cache to improve startup times.
  • Kats improved hit-testing correctness with respect to touch action.
  • Kats fixed a bug related to position-sticky.
  • Jeff fixed the invalidation of blob images with changing transforms.
  • Lee fixed an issue with very large text.
  • Sotaro improved the way we reset the EGL state.
  • Kats fixed some async scene building bugs.
  • Kats avoided a crash with iframes referring to missing pipelines.
  • Kats prevented blob images from allocating huge surfaces.
  • Kats fixed a shutdown issue.
  • Jeff fixed some issues with fractional transforms and blob images.
  • Bob Owen improved the way memory is allocated when recording blob images.
  • Kats fixed a race condition with async image pipelines and display list flattening.
  • Kats fixed an APZ issue that affected youtube.
  • Kats fixed an issue causing delayed canvas updates under certain conditions.
  • Kats fixed some hit testing issues.
  • Sotaro fixed a startup issue when WebRender initialization fails.
  • Markus removed a needless blob image fallback caused by invisible outlines which was causing performance issues.
  • Kats fixed a crash.

Enabling WebRender in Firefox Nightly

In about:config, just set “gfx.webrender.all” to true and restart the browser. No need to toggle any other pref.
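
Equivalently, if you manage your profile with a user.js file, the same pref can be set there (this flips the same switch as about:config):

// user.js in your Nightly profile directory
user_pref("gfx.webrender.all", true);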

Reporting bugs

The best place to report bugs related to WebRender in Gecko is the Graphics :: WebRender component in bugzilla.
Note that it is possible to log in with a github account.

This Week In RustThis Week in Rust 239

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's crate is SIMDNoise, a crate to use modern CPU vector instructions to generate various types of noise really fast. Thanks to gregwtmtno for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

66 pull requests were merged in the last week

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs
Tracking Issues & PRs

New RFCs

Upcoming Events

Online
Africa
Europe
North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

In Rust it’s the compiler that complains, with C++ it’s the colleagues

– Michal 'Vorner' Vaner on gitter

(selected by llogiq per one unanimous vote)

Please submit your quotes for next week!

This Week in Rust is edited by: nasa42 and llogiq.

Nicholas NethercoteSan Francisco Oxidation meeting notes

At last week’s Mozilla All Hands meeting in San Francisco we had an Oxidation meeting about the use of Rust in Firefox. It was low-key, being mostly about status and progress. The notes are here for those who are interested.

Firefox UXWritten by Amy Lee & Eric Pang

Written by Amy Lee & Eric Pang

Firefox has a motion team?! Yes we do!

Motion may sometimes feel like an afterthought or worse yet “polish”. For the release of Firefox Quantum (one of our most significant releases to date), we wanted to ensure that motion was not a second class citizen and that it would play an important role in how users perceived performance in the browser.

We (Amy & Eric) make up the UX side of the “motion team” for Firefox. We say this in air quotes because the motion team was essentially formed based on our shared belief that motion design is important in Firefox. With a major release planned, we thought this would be the perfect opportunity to have a team working on motion.

Step 1: Make a Sticker

We made a sticker and started calling ourselves the motion team.

We created a sticker to formalize the “motion team”.

Step 2: Audit Existing Motions

The next plan of action was to audit the existing motions in Firefox across mobile and desktop. We documented them by taking screen recordings and highlighted the areas that needed the most attention.

An audit of current animations was performed and documented for Firefox on Android, iOS, and desktop to explore areas of opportunity for motion.

From this exercise it was clear that consistency and perceived performance were high on our list of improvements.

The next step was to gather inspiration for a mood board. From there, we formed a story that would become the foundation of our motion design language.

During this process, we asked ourselves:

How can we make the browser feel smoother, faster and more responsive?

Inspiration was collected and documented in a mood board to help guide the motion story.

Step 3: Defining a Motion Story

With Photon (Firefox’s new design language) stemming from Quantum we knew there was going to be an emphasis on speed in our story. Before starting work on any new motions, we created a motion curve to reflect this. The aim was to have a curve that would be perceived as fast yet still felt smooth and natural when applied to tabs and menu items. Motion should also be informative (i.e. showing where your bookmarked item is saved, or when your tab is done loading) and, lastly, have personality. We defined our story based on these considerations.

Motion curve that was applied to menu and tab animations.

The motion story was presented to the rest of the UX team during a work week held in Toronto (the UX team is distributed across several countries so work weeks are planned for in-person collaboration).

This was our mission statement:

The motion design language in Firefox Quantum is defined by three principles: Quick, Informative and Whimsical. Following these guiding principles, we aim to achieve a cohesive, consistent, and enjoyable experience within the family of Firefox browsers.

Next we presented some preliminary concepts to support these principles:

Quick

Animations should be fast and nimble and never keep the user waiting longer than they need to. The aim is to prioritize user perceived performance over technical benchmarks.

Panel and new tab animation.

Informative

Motion should help ease the user through the experience. It should aid the flow of actions, giving clear guidance for user orientation: spatial or temporal.

Left: Download icon animation indicates download progress. Right: Star icon animation shows the action of saving a bookmark and the location of the bookmark after it’s saved (the library).

Whimsical

Even though most people would not associate whimsy with a browser, we wanted to incorporate some playful elements as part of Firefox’s personality (and maybe ourselves).

Icon animations with some whimsy.

After getting feedback and buy-in from the rest of the UX team on the motion story, we were able to start working with them in applying motion to their designs.

Step 4: Design Motions

The Photon project was divided across various functional teams all focusing on different design aspects of Firefox. With motion overlapping many of these teams, we started opening communication channels with each team that would directly impact our work. We worked especially close with the visual/interaction team since it didn’t make sense to start motion design on components that were not yet close to complete. We had regular check-ins to set a rough ordering/priority of when we would schedule motion work of specific components.

Once we had near final visuals/interactions, it was time to get into After Effects and start animating!


Step 5: Implementation

Implementing animations, especially detailed ones such as bookmarking and downloading, was an interesting challenge for the development team (Jared Wein, Sam Foster, and Jim Porter).

Rather than have a developer try to reproduce our motion designs through code, which can become tedious, we wanted to be able to export the motion directly. This ensured that the nuances of the motion were not lost during implementation.

To have the animations performant in the browser, the file sizes also needed to be small. This was done by using SVG assets and CSS animations.

We explored existing tools but did not find anything suitable that would be compatible with the browser. We created our own process and designed the animations in After Effects and used the Bodymovin extension to export them as JSON files.

One developer in particular, Markus Stange, made this method possible by writing a tool to convert JSON files into SVG sprite sheets. Sam further refined the tool and it became an essential asset in translating timeline animations from After Effects into CSS animations.
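
As a rough illustration of the technique (not the actual Firefox code), a sprite sheet like that can be driven from CSS with a stepped animation. Assume a hypothetical horizontal sheet of 20 frames, each 16px wide:

.reload-icon {
  width: 16px;
  height: 16px;
  background-image: url("reload-sprite.svg"); /* hypothetical asset name */
  background-size: 320px 16px;                /* 20 frames x 16px */
}
.reload-icon.animating {
  /* 19 steps across 20 frames, so the fill mode leaves us on the last frame */
  animation: reload-frames 300ms steps(19) forwards;
}
@keyframes reload-frames {
  from { background-position-x: 0; }
  to   { background-position-x: -304px; }
}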

Page reload icon animation using an SVG sprite sheet.

After rounds of asset production, refinements, reviews, and testing, the big day came with the launch of Firefox Quantum!

A large part of the Firefox Quantum team was located in Toronto, Ontario.

We were a bit anxious awaiting feedback since this was the most significant update since Firefox 1.0 first launched in 2004.

Thankfully the release was met with positive reviews. We were happy that even some of our motions got a kudos.

So that wraps up the story of how the “motion team” for Firefox came to be. But that’s not all we do here at Mozilla. These days you’ll also find Eric busy working away at UX for Web Payments, Privacy and Search, while Amy casts her Dark theme magic and spins out the next UX iteration of Activity Stream and Onboarding.

If you haven’t given Firefox Quantum a try, take it for a spin and let us know what you think.



Air MozillaCreative Media Awards Webinar

Creative Media Awards Webinar This is an informational webinar for a global public audience interested in learning more about Mozilla's Creative Media Awards track. This event is being streamed...

About:CommunityFirefox 61 new contributors

With the upcoming release of Firefox 61, we are pleased to welcome the 59 developers who contributed their first code change to Firefox in this release, 53 of whom were brand new volunteers! Please join us in thanking each of these diligent and enthusiastic individuals, and take a look at their contributions:

QMOFirefox 61 Beta 14 Testday Results

Hello Mozillians!

As you may already know, last Friday – June 15th – we held a new Testday event, for Firefox 61 Beta 14.

Thank you all for helping us make Mozilla a better place!

From the India team: Aishwarya Narasimhan, Mohamed Bawas, Surentharan, amirthavenkat, Monisha Ravi

Results:

– several test cases executed for Fluent Migration of Preferences, Accessibility Inspector: Developer Tools and Web Compatibility.

Thanks for another successful testday

Tarek ZiadéIOActivityMonitor in Gecko

This is a first blog post of a series on Gecko, since I am doing a lot of C++ work in Firefox these days. My current focus is on adding tools in Firefox to try to detect what's going on when something goes rogue in the browser and starts to drain your battery life.

We have many ideas on how to do this at the developer/user level, but in order to do it properly, we need to have accurate ways to measure what's going on when the browser runs.

One thing is I/O activity.

For instance, a WebExtension worker that performs a lot of disk writes is something we want to find out about, and we had nothing to track all I/O activity in Firefox without running the profiler.

When Firefox OS was developed, a small feature was added in the Gecko network lib, called NetworkActivityMonitor.

That class was hooked as an NSPR layer to send notifications whenever something was sent or received on a socket, and was used to blink the small icon phones usually have to signal that something is being transferred.

After the Firefox OS project was discontinued in Gecko, that class was left in the Gecko tree but not used anymore, even though the option was still there.

Since I needed a way to track all I/O activity (sockets and files), I have refactored that class into a generalised version that can be used to get notified every time data is sent or received in any file or socket.

The way it works is pretty simple: when a file or a socket is created, a new NSPR layer is added so every read or write is recorded and eventually dumped into an XPCOM array that is notified via a timer.

This design makes it possible to track, alongside sockets, any disk file that is accessed by Firefox. For SQLite databases, since there's no way to get all FD handles (these are kept internal to the SQLite lib), the IOActivityMonitor class provides manual methods to notify when a read or a write happens. And our custom SQLite wrapper in Firefox allowed me to add those calls just like I would in NSPR.

It’s landed in Nightly:

And you can see how to use it in its Mochitest.

Cameron KaiserTenFourFox FPR8b1 available

TenFourFox Feature Parity Release 8 beta 1 is now available (downloads, release notes, hashes). There is much less in this release than I wanted because of a family member in the hospital and several technical roadblocks. Of note, I've officially abandoned CSS grid again after an extensive testing period due to the fact that we would need substantial work to get a functional implementation, and a partially functional implementation is worse than none at all (in the latter case, we simply gracefully degrade into block-level <div>s). I also was not able to finish the HTML <input> date picker implementation, though I've managed to still get a fair amount completed of it, and I'll keep working on that for FPR9. The good news is, once the date picker is done, the time picker will use nearly exactly the same internal plumbing and can just be patterned off it in the same way. Unlike Firefox's implementation, as I've previously mentioned our version uses native OS X controls instead of XUL, which also makes it faster. That said, it is a ghastly hack on the Cocoa widget side and required some tricky programming on 10.4 which will be the subject of a later blog post.

That's not to say this is strictly a security patch release (though most of the patches for the final Firefox 52, 52.9, are in this beta). The big feature I did want to get in FPR8 did land and seems to work properly, which is same-site cookie support. Same-site cookie support helps to reduce cross-site request forgeries by advising the browser the cookie in question should only be sent if a request originates from the site that set it. If the host that triggered the request is different than the one appearing in the address bar, the request won't include any of the cookies that are tagged as same-site. For example, say you're logged into your bank, and somehow you end up opening another tab with a malicious site that knows how to manipulate your bank's transfer money function by automatically submitting a hidden POST form. Since you're logged into your bank, unless your bank has CSRF mitigations (and it had better!), the malicious site could impersonate you since the browser will faithfully send your login cookie along with the form. The credential never leaked, so the browser technically didn't malfunction, but the malicious site was still able to impersonate you and steal your money. With same-site cookies, there is a list of declared "safe" operations; POST forms and certain other functions are not on that list and are considered "unsafe." Since the unsafe action didn't originate from the site that set the cookie, the cookie isn't transmitted to your bank, authentication fails and the attack is foiled. If the mode is set to "strict" (as opposed to "lax"), even a "safe" action like clicking a link from an outside site won't send the cookie.
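
For the curious, a site opts into this behavior purely through the Set-Cookie header, using the SameSite attribute (the cookie names and values below are made up):

Set-Cookie: sessionid=38afes7a8; Secure; HttpOnly; SameSite=Lax
Set-Cookie: csrftoken=0f8b2c91; Secure; HttpOnly; SameSite=Strict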

Same-site cookie support was implemented for Firefox 60; our implementation is based on it and should support all the same features. When you start FPR8b1, your cookies database will be transparently upgraded to the new database schema. If you are currently logged into a site that supports same-site cookies, or you are using a foxbox that preserves cookie data, you will need to log out and log back in to ensure your login cookie is upgraded (I just deleted all my cookies and started fresh, which is good to give the web trackers a heart attack anyway). Github and Bugzilla already have support, and I expect to see banks and other high-security sites follow suit. To see if a cookie on a site is same-site, make sure the Storage Inspector is enabled in Developer tools, then go to the Storage tab in the Developer tools on the site of interest and look at the Cookies database. The same-site mode (unset, lax or strict) should be shown as the final column.

FPR8 goes live on June 25th.

Dustin J. MitchellActions as Hooks

You may already be familiar with in-tree actions: they allow you to do things like retrigger, backfill, and cancel Firefox-related tasks. They implement any “action” on a push that occurs after the initial hg push operation.

This article goes into a bit of detail about how this works, and a major change we’re making to that implementation.

History

Until very recently, actions worked like this: First, the decision task (the task that runs in response to a push and decides what builds, tests, etc. to run) creates an artifact called actions.json. This artifact contains the list of supported actions and some templates for tasks to implement those actions. When you click an action button (in Treeherder or the Taskcluster tools, or any UI implementing the actions spec), code running in the browser renders that template and uses it to create a task, using your Taskcluster credentials.

I talk a lot about functionality being in-tree. Actions are yet another example. Actions are defined in-tree, using some pretty straightforward Python code. That means any engineer who wants to change or add an action can do so – no need to ask permission, no need to rely on another engineer’s attention (aside from review, of course).

There’s Always a Catch: Security

Since the beginning, Taskcluster has operated on a fairly simple model: if you can accomplish something by pushing to a repository, then you can accomplish the same directly. At Mozilla, the core source-code security model is the SCM level: try-like repositories are at level 1, project (twig) repositories at level 2, and release-train repositories (autoland, central, beta, etc.) are at level 3. Similarly, LDAP users may have permission to push to level 1, 2, or 3 repositories. The current configuration of Taskcluster assigns the same scopes to users at a particular level as it does to repositories.

If you have such permission, check out your scopes in the Taskcluster credentials tool (after signing in). You’ll see a lot of scopes there.

The Release Engineering team has made release promotion an action. This is not something that every user who can push to a level-3 repository – hundreds of people – should be able to do! Since it involves signing releases, this means that every user who can push to a level-3 repository has scopes involved in signing a Firefox release. It’s not quite as bad as it seems: there are lots of additional safeguards in place, not least of which is the “Chain of Trust” that cryptographically verifies the origin of artifacts before signing.

All the same, this is something we (and the Firefox operations security team) would like to fix.

In the new model, users will not have the same scopes as the repositories they can push to. Instead, they will have scopes to trigger specific actions on task-graphs at specific levels. Some of those scopes will be available to everyone at that level, while others will be available only to more limited groups. For example, release promotion would be available to the Release Management team.

Hooks

This makes actions a kind of privilege escalation: something a particular user can cause to occur, but could not do themselves. The Taskcluster-Hooks service provides just this sort of functionality: a hook creates a task using scopes assigned by a role, without requiring the user calling triggerHook to have those scopes. The user must merely have the appropriate hooks:trigger-hook:.. scope.

So, we have added a “hook” kind to the action spec. The difference from the original “task” kind is that actions.json specifies a hook to execute, along with well-defined inputs to that hook. The user invoking the action must have the hooks:trigger-hook:.. scope for the indicated hook. We have also included some protection against clickjacking, preventing someone with permission to execute a hook from being tricked into executing one maliciously.

Generic Hooks

There are three things we may wish to vary for an action:

  • who can invoke the action;
  • the scopes with which the action executes; and
  • the allowable inputs to the action.

Most of these are configured within the hooks service (using automation, of course). If every action is configured uniquely within the hooks service, then the self-service nature of actions would be lost: any new action would require assistance from someone with permission to modify hooks.

As a compromise, we noted that most actions should be available to everyone who can push to the corresponding repo, have fairly limited scopes, and need not limit their inputs. We call these “generic” actions, and creating a new such action is self-serve. All other actions require some kind of external configuration: allocating the scope to trigger the task, assigning additional scopes to the hook, or declaring an input schema for the hook.

Hook Configuration

The hook definition for an action hook is quite complex: it involves a complex task definition template as well as a large schema for the input to triggerHook. For decision tasks, cron tasks, and “old” actions, this is defined in .taskcluster.yml, and we wanted to continue that with hook-based actions. But this creates a potential issue: if a push changes .taskcluster.yml, that push will not automatically update the hooks – such an update requires elevated privileges and must be done by someone who can sanity-check the operation. To solve this, ci-admin creates hooks based on the .taskcluster.yml it finds in any Firefox repository, naming each after a hash of the file’s content. Thus, once a change is introduced, it can “ride the trains”, using the same hash in each repository.

Implementation and Implications

As of this writing, two common actions are operating as hooks: retrigger and backfill. Both are “generic” actions, so the next step is to start to implement some actions that are not generic. Ideally, nobody notices anything here: it is merely an implementation change.

Once all actions have been converted to hooks, we will begin removing scopes from users. This will have a more significant impact: lots of activities such as manually creating tasks (including edit-and-create) will no longer be allowed. We will try to balance the security issues against user convenience here. Some common activities may be implemented as actions (such as creating loaners). Others may be allowed as exceptions (for example, creating test tasks). But some existing workflows may need to change to accommodate this improvement.

We hope to finish the conversion process in July 2018, with that time largely taken up by a slow rollout to accommodate unforeseen implications. When the project is finished, Firefox releases and other sensitive operations will be better-protected, with minimal impact to developers’ existing workflows.

Byron Jonesin-tree annotations of third-party code (moz.yaml)

I've recently landed changes on mozilla-central to provide initial support for in-tree annotations of third-party code. (Bug 1454868, D1208, r5df5e745ce6e).

Why

  • Provide consistency and discoverability to third-party code, its origin (repository, version, SHA, etc), and Mozilla-local modifications
  • Simplify the process for auditing vendored versions and licenses
  • Establish a structure which allows automation to drive vendoring

How

  • Using the example moz.yaml from the top of moz_yaml.py, create a moz.yaml in the top level of third-party code
  • Verify the manifest with mach vendor manifest --verify path/to/moz.yaml

Next

  • We will be creating moz.yaml files in-tree (help here is appreciated!)
  • Add tests to ensure moz.yaml files remain valid
  • At some point we'll add automation-driven vendoring to simplify and standardise the process of updating vendored code

moz.yaml Template

From python/mozbuild/mozbuild/moz_yaml.py#l50

---
# Third-Party Library Template
# All fields are mandatory unless otherwise noted

# Version of this schema
schema: 1

bugzilla:
  # Bugzilla product and component for this directory and subdirectories
  product: product name
  component: component name

# Document the source of externally hosted code
origin:

  # Short name of the package/library
  name: name of the package

  description: short (one line) description

  # Full URL for the package's homepage/etc
  # Usually different from repository url
  url: package's homepage url

  # Human-readable identifier for this version/release
  # Generally "version NNN", "tag SSS", "bookmark SSS"
  release: identifier

  # The package's license, where possible using the mnemonic from
  # https://spdx.org/licenses/
  # Multiple licenses can be specified (as a YAML list)
  # A "LICENSE" file must exist containing the full license text
  license: MPL-2.0

# Configuration for the automated vendoring system.
# Files are always vendored into a directory structure that matches the source
# repository, into the same directory as the moz.yaml file
# optional
vendoring:

  # Repository URL to vendor from
  # eg. https://github.com/kinetiknz/nestegg.git
  # Any repository host can be specified here, however initially we'll only
  # support automated vendoring from selected sources.
  url: source url (generally repository clone url)

  # Revision to pull in
  # Must be a long or short commit SHA (long preferred)
  revision: sha

  # List of patch files to apply after vendoring. Applied in the order
  # specified, and alphabetically if globbing is used. Patches must apply
  # cleanly before changes are pushed
  # All patch files are implicitly added to the keep file list.
  # optional
  patches:
    - file
    - path/to/file
    - path/*.patch

  # List of files that are not deleted while vendoring
  # Implicitly contains "moz.yaml", any files referenced as patches
  # optional
  keep:
    - file
    - path/to/file
    - another/path
    - *.mozilla

  # Files/paths that will not be vendored from source repository
  # Implicitly contains ".git", and ".gitignore"
  # optional
  exclude:
    - file
    - path/to/file
    - another/path
    - docs
    - src/*.test

  # Files/paths that will always be vendored, even if they would
  # otherwise be excluded by "exclude".
  # optional
  include:
    - file
    - path/to/file
    - another/path
    - docs/LICENSE.*

  # If neither "exclude" or "include" are set, all files will be vendored
  # Files/paths in "include" will always be vendored, even if excluded
  # eg. excluding "docs/" then including "docs/LICENSE" will vendor just the
  #     LICENSE file from the docs directory

  # All three file/path parameters ("keep", "exclude", and "include") support
  # filenames, directory names, and globs/wildcards.

  # In-tree scripts to be executed after vendoring but before pushing.
  # optional
  run_after:
    - script
    - another script
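
As a loose illustration of what verifying such a manifest involves, here is a short Python sketch (this is not the actual mach vendor implementation; it assumes PyYAML is available and checks only an abbreviated set of the required fields above):

import yaml  # PyYAML, assumed to be available

REQUIRED_FIELDS = {
    'schema': set(),
    'bugzilla': {'product', 'component'},
    'origin': {'name', 'description', 'url', 'release', 'license'},
}

def verify_manifest(path):
    with open(path) as f:
        manifest = yaml.safe_load(f)
    for section, fields in REQUIRED_FIELDS.items():
        if section not in manifest:
            raise ValueError('missing required section: {}'.format(section))
        if fields:
            missing = fields - set(manifest[section])
            if missing:
                raise ValueError('{} is missing: {}'.format(section, ', '.join(sorted(missing))))
    return manifest

verify_manifest('path/to/moz.yaml')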

Daniel PocockThe questions you really want FSFE to answer

As the last man standing as a fellowship representative in FSFE, I propose to give a report at the community meeting at RMLL.

I'm keen to get feedback from the wider community as well, including former fellows, volunteers and anybody else who has come into contact with FSFE.

It is important for me to understand the topics you want me to cover as so many things have happened in free software and in FSFE in recent times.

last man standing

Some of the things people already asked me about:

  • the status of the fellowship and the membership status of fellows
  • use of non-free software and cloud services in FSFE, deviating from the philosophy that people associate with the FSF / FSFE family
  • measuring both the impact and cost of campaigns, to see if we get value for money (a high level view of expenditure is here)

What are the issues you would like me to address? Please feel free to email me privately or publicly. If I don't have answers immediately I would seek to get them for you as I prepare my report. Without your support and feedback, I don't have a mandate to pursue these issues on your behalf so if you have any concerns, please reply.

Your fellowship representative

Niko MatsakisMIR-based borrow check (NLL) status update

I’ve been getting a lot of questions about the status of “Non-lexical lifetimes” (NLL) – or, as I prefer to call it these days, the MIR-based borrow checker – so I wanted to post a status update.

The single most important fact is that the MIR-based borrow check is feature complete and available on nightly. What this means is that the behavior of #![feature(nll)] is roughly what we intend to ship for “version 1”, except that (a) the performance needs work and (b) we are still improving the diagnostics. (More on those points later.)

The MIR-based borrow check as currently implemented represents a huge step forward from the existing borrow checker, for two reasons. First, it eliminates a ton of borrow check errors, resulting in a much smoother compilation experience. Second, it has a lot fewer bugs. More on this point later too.

You may be wondering how this all relates to the “alias-based borrow check” that I outlined in my previous post, which we have since dubbed Polonius. We have implemented that analysis and solved the performance hurdles that it used to have, but it will still take some effort to get it fully ready to ship. The plan is to defer that work and ultimately ship Polonius as a second step: it will basically be a “MIR-based borrow check 2.0”, offering even fewer errors.

Would you like to help?

If you’d like to be involved, we’d love to have you! The NLL working group hangs out on the #wg-nll stream in Zulip. We have weekly meetings on Tuesdays (3:30pm Eastern time) where we discuss the priorities for the week and try to dole out tasks. If that time doesn’t work for you, you can of course pop in any time and communicate asynchronously. You can also always go look for work to do amongst the list of GitHub issues – probably the diagnostics issues are the best place to start.

Transition period

As I mentioned earlier, the MIR-based borrow checker fixes a lot of bugs – this is largely a side effect of making the check operate over the MIR. This is great! However, as a result, we can’t just “flip the switch” and enable the MIR-based borrow checker by default, since that would break existing crates (I don’t really know how many yet). The plan therefore is to have a transition period.

During the transition period, we will issue warnings if your program used to compile with the old borrow checker but doesn’t with the new checker (because we fixed a bug in the borrow check). The way we do this is to run both the old and the new borrow checker. If the new checker would report an error, we first check if the old check would also report an error. If so, we can issue the error as normal. If not, we issue only a warning, since that represents a case that used to compile but no longer does.
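In other words, the migration logic boils down to something like this toy model (a sketch of the decision described above, not code from rustc):

def diagnostic_for(old_checker_errors, new_checker_errors):
    # Toy model of the transition-period behavior described above.
    if not new_checker_errors:
        return None        # the new checker accepts the code: no diagnostic
    if old_checker_errors:
        return 'error'     # both checkers reject it: report a normal error
    return 'warning'       # only the new checker rejects it: warn during migration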

The good news is that while the MIR-based checker fixes a lot of bugs, it also accepts a lot more code. This lessens the overall impact. That is, there is a lot of code which ought to have gotten errors from the old borrow check (but never did), but most of that code won’t get any errors at all under the new check. No harm, no foul. =)

Performance

One of the main things we are working on is the performance of the MIR-based checker, since enabling the MIR-based borrow checker currently implies significant overhead during compilation. Take a look at this chart, which plots rustc build times for the clap crate:

clap-rs performance

The black line (“clean”) represents the “from scratch” build time with rustc today. The orange line (“nll”) represents “from scratch” build times when NLL is enabled. (The other lines represent incremental build times in various combinations.) You can see we’ve come a long way, but there is still plenty of work to do.

The biggest problem at this point is that we effectively have to “re-run” the type check a second time on the MIR, in order to compute all the lifetimes. This means we are doing two type-checks, and that is expensive. However, this second type check can be significantly simpler than the original: most of the “heavy lifting” has been done. Moreover, there are lots of opportunities to cache work between them so that it only has to be done once. So I’m confident we’ll make big strides here. (For example, I’ve got a PR up right now that adds some simple memoization for a 20% win, and I’m working on follow-ups that add much more aggressive memoization.)

(There is an interesting corollary to this: after the transition period, the first type check will have no need to consider lifetimes at all, which I think means we should be able to make it run quite a bit faster as well, which should mean a shorter “time till first error” and also help things like computing autocompletion information for the RLS.)

Diagnostics

It’s not enough to point out problems in the code; we also have to explain the error in an understandable way. We’ve put a lot of effort into our existing borrow checker’s error messages. In some cases, the MIR-based borrow checker actually does better here. It has access to more information, which means it can be more specific than the older checker. As an example[1], consider this error that the old borrow checker gives:

error[E0597]: `json` does not live long enough
  --> src\main.rs:38:17
   |
38 |         let v = json["data"]["search"]["edges"].as_array();
   |                 ^^^^ borrowed value does not live long enough
...
52 |     }
   |     - `json` dropped here while still borrowed
...
90 | }
   | - borrowed value needs to live until here

The error isn’t bad, but you’ll note that while it says “borrowed value needs to live until here” it doesn’t tell you why the borrowed value needs to live that long – only that it does. Compare that to the new error you get from the same code:

error[E0597]: `json` does not live long enough
  --> src\main.rs:39:17
   |
39 |         let v = json["data"]["search"]["edges"].as_array();
   |                 ^^^^ borrowed value does not live long enough
...
53 |     }
   |     - borrowed value only lives until here
...
70 |             ", last_cursor))
   |                ----------- borrow later used here

The new error doesn’t tell you “how long” the borrow must last, it points to a concrete use. That’s great.

Other times, though, the errors from the new checker are not as good. This is particularly true when it comes to suggestions and tips for how to fix things. We’ve gone through all of our internal diagnostic tests and drawn up a list of about 37 issues, documenting each point where the checker’s message is not as good as the old one, and we’re working now on drilling through this list.

Polonius

In my previous blog post, I described a new version of the borrow check, which we have since dubbed Polonius. That analysis further improves on the MIR-based borrow check that is in Nightly now. The most significant improvement that Polonius brings has to do with “conditional returns”. Consider this example:

fn foo<T>(vec: &mut Vec<T>) -> &T {
  let r = &vec[0];
  if some_condition(r) {
    return r;
  }
  
  // Question: can we mutate `vec` here? On Nightly,
  // you get an error, because a reference that is returned (like `r`)
  // is considered to be in scope until the end of the function,
  // even if that return only happens conditionally. Polonius can
  // accept this code.
  vec.push(...);
}

In this example, vec is borrowed to produce r, and r is then returned – but only sometimes. In the MIR borrowck on nightly, this will give an error – when r is returned, the borrow is forced to last until the end of foo, no matter what path we take. The Polonius analysis is more precise, and understands that, outside of the if, vec is no longer referenced by any live references.

We originally intended for NLL to accept examples like this: in the RFC, this was called Problem Case #3. However, we had to remove that support because it was simply killing compilation times, and there were also cases where it wasn’t as precise as we wanted. Of course, some of you may recall that in my previous post about Polonius I wrote:

…the performance has a long way to go ([Polonius] is currently slower than existing analysis).

I’m happy to report that this problem is basically solved. Despite the increased precision, the Polonius analysis is now easily as fast as the existing Nightly analysis, thanks to some smarter encoding of the rules as well as the move to use datafrog. We’ve not done detailed comparisons, but I consider this problem essentially solved.

If you’d like, you can try Polonius today using the -Zpolonius switch to Nightly. However, keep in mind that this would be a ‘pre-alpha’ state: there are still some known bugs that we have not prioritized fixing and so forth.

Conclusion

The key take-aways here:

  • NLL is in a “feature complete” state on Nightly.
  • We are doing a focused push on diagnostics and performance, primarily.
  • Even once it ships, we can expect further improvements in the future, as we bring in the Polonius analysis.
  1. Hat tip to steveklabnik for providing this example!

Nick CameronWhat do you think are the most interesting/exciting projects using Rust?

Last week I tweeted "What do you think are the most interesting/exciting projects using Rust? (No self-promotion :-) )". The response was awesome! Jonathan Turner suggested I write up the responses as a blog post, and here we are.

I'm just going to list the suggestions; crediting is difficult because multiple people often suggested the same projects. Follow the Twitter thread if you're interested:

The Mozilla BlogWITHIN creates distribution platform using WebVR

Virtual Reality (VR) content has arrived on the web, with help from the WebVR API. It’s a huge inflection point for a medium that has struggled for decades to reach a wide audience. Now, anyone with access to an internet-enabled computer or smartphone can enjoy VR experiences, no headset required. A good place to start? WITHIN’s freshly launched VR website.

From gamers to filmmakers, VR is the bleeding edge of self-expression for the next generation. It gives content creators the opportunity to tell stories in new ways, using audience participation, parallel narratives, and social interaction in ever-changing virtual spaces. With its immersive, 360-degree audio and visuals, VR has outsized power to activate our emotions and to put us in the center of the action.

WITHIN is at the forefront of this shift toward interactive filmmaking and storytelling. The company was one of the first to launch a VR distribution platform that showcases best-in-class VR content with high production values.

“Film is this incredible medium. It allows us to feel empathy for people that are very different from us, in worlds completely foreign to our own,” said Chris Milk, co-founder of WITHIN, in a Ted Talk. “I started thinking, is there a way I could use modern and developing technologies to tell stories in different ways, and tell different kinds of stories that maybe I couldn’t tell using the traditional tools of filmmaking that we’ve been using for 100 years?”

Simple to use

WITHIN’s approach is to bring curated and original VR experiences directly to viewers for free, rather than trying to gain visibility for their content through existing channels. Until now, VR content was mostly presented to headset users via the manufacturer’s store websites. So if you shelled out hundreds of dollars for an Oculus Rift or HTC Vive, you would see a library of content when you fired up your rig.

With its new site, WITHIN is making VR content accessible to everyone, whether they’re watching on a laptop, mobile phone, or headset. The company produces immersive VR experiences with high-profile partners like the band OK Go and Tyler Hurd. It also distributes top-tier VR experiences, like family-friendly animation and nature shows, with 360-degree visuals and stereoscopic sound.

“We aim to make it as easy as possible for fans to discover and share truly great VR experiences,” said Jon Rittenberg, Content Launch Manager at WITHIN.

WebVR JavaScript API

The key to reaching a vast potential audience of newcomers is to make a platform that is simple to use and easy to explore. Most importantly, it should work without exposing visitors to technical hurdles. That’s a challenge for two reasons.

First, the web is famously democratic. Companies like WITHIN have no control over who comes to their site, what device they’re on, what operating system that device runs, or how much bandwidth they have. Second, the web is still immature as a VR platform, with a growing but limited number of tools.

To build a platform that ‘just works’, the engineers at WITHIN turned to the WebVR API. Mozilla engineers built the foundation for the WebVR API with a goal to give companies like WITHIN a simpler way to support a range of viewing options without having to rewrite their code for each platform. WebVR provides support for exposing VR devices and headsets to web apps, enabling developers to translate position and movement information from the display into movement around a 3D scene.

Adapting content to devices

Using the WebVR specification, the company built its WITHIN WebVR site so it could adapt to dozens of factors and give new viewers a consistently great experience. In an amazing proof-of-concept for VR on the web, the company was able to optimize each streaming experience to a wide range of platforms and displays, like Vive, Rift, PSVR, iOS, Android, GearVR, and Oculus Go.

“The API really helped us out. It gave us all the pieces we needed,” said Jono Brandel, Lead Designer at WITHIN. “Without the WebVR API, we could not have done any of this stuff in the browser. We wouldn’t have access to VR headsets.”

Gorgeous content

The WITHIN WebVR site does a fantastic job of adapting its VR content to a range of devices. The site can identify a visitor’s device and push content suited for that device, making it easy on the end user. The majority of visitors to WITHIN’s VR site arrive on a Cardboard device that works with their smartphone. That delivers a basic experience: 3D stereoscopic visuals with some gyroscopic controls.

WITHIN uses WebVR to connect to any viewer or device

Headset users get the same content with higher resolution visuals and binaural audio, which brings life-like sound effects to VR experiences. The VR content can also adapt to different head and hand tracking inputs, and supports common navigational tools in popular headsets. Folks visiting via a browser can view VR content just as they would play a 3D, interactive game or watch a 360-degree video online.

To get this level of adaptive support took quite a bit of work behind the scenes. “We have a room filled with a ton of devices: smartphones, computers, and operating systems. We’ve got everything,” Brandel said. “It’s really cool that one code base supports all of these platforms.”

A more capable web platform

The web is a great platform for creating and experiencing VR. It’s easy to share content broadly, across continents and cultures. And it’s simple to get started building 3D experiences using free tools like A-Frame, invented by Mozilla engineers with help from a talented and dedicated open source community.

“We’re excited to see such big platforms making a bet on WebVR,” said Lars Bergstrom, Director of Mixed Reality at Mozilla. “As new devices reach more people, we expect the WebVR specification will continue to grow and evolve.”

Mozilla and WITHIN are also collaborating to make the open web platform even better for VR distribution. The two companies are working together on a series of experiments to make WebVR versions of popular players as capable as native applications, using tech standards WebGL and WebAssembly.

The goal is to make it simpler for content creators to push their stories and games to the web, without having to do a lot of coding work. The two companies are exploring how to use Unity’s popular gaming platform to streamline the publication to the web, while still delivering performance, stability, and scale for immersive experiences.

“The Unity ecosystem is already mature – and it’s where the designers and developers are focused,” said Christopher Van Wiemeersch, Senior UX Web Engineer at Mozilla. “We’re hoping that WebAssembly and the Unity-WebVR Exporter can help us use the web as a distribution platform and not only as a content-creation platform. You’re using JavaScript under the hood, but you don’t have to learn it yourself.”

Earlier this year, Mozilla released Unity WebVR Assets, a tool that aims to reduce complexity for content authors by letting them export content from the Unity platform and have the experiences work on the web. You can check it out in the Unity Asset Store.

If you’re a filmmaker interested in getting your VR experience on the WITHIN platform, you can submit your project here for consideration.


The post WITHIN creates distribution platform using WebVR appeared first on The Mozilla Blog.

Daniel Stenbergcurl survey 2018 analysis

This year, 670 individuals spent some of their valuable time on our survey and filled in answers that help us guide what to do next. What's good, what's bad, what to remove and where to emphasize efforts more.

It's taken me a good while to write up this analysis but hopefully the results here can be used all through the year as a reminder what people actually think and how they use curl and libcurl.

A new question this year was which continent the respondent lives in, which ended up showing an unexpectedly strong European focus.

What didn't trigger any surprises though was the question of what protocols users are using, which basically identically mirrored previous years' surveys. HTTP and HTTPS are the king duo by far.

Read the full 34 page analysis PDF.

Some other interesting take-aways:

  • One person claims to use curl to handle 19 protocols! (out of 23)
  • One person claims to use curl on 11 different platforms!
  • Over 5% of the users argue for a rewrite in rust.
  • Windows is now the second most common platform to use curl on.

The Mozilla BlogLevel Up with New Productivity Features in Firefox for iOS

Today, we’re announcing new features in Firefox for iOS to make your life easier. Whether you’re a multi-tasker or someone who doesn’t want to waste time, we’re rolling out new features to up your productivity game.

Taking your file downloads on the go

For those times when you need to leave the office but want to read that file during your commute or at a later time, we now provide support for downloading files to your mobile devices. First download files to your device, then access and share them from the main menu, where you’ll find a folder with all your downloads.

Easy to download


A folder with your downloads

One-Stop Place for Saving your Links

Have you ever come across an article that catches your eye but you don’t have time to read it right away? We’ve added a one-stop menu to give you choices on what to do with the link. You can open it directly in Firefox, add it to your bookmarks or reading list to peruse at your convenience, or send it to another device that’s linked to your Firefox account.

A one-stop menu with choices

Are you Synced?

You’ll get the answer quickly now that we’ve made it easier to see whether your mobile device is connected to all your devices where you use Firefox. Click on the menu button listed prominently at the top of the application menu. From there, you’ll see if you’re synced, and if you aren’t you’ll have the option to do so. If you don’t have a Firefox Account, you can sign up directly here.

From the top of the app menu you’ll see if you’re synced or not

Firefox continues to bring convenience to iOS users. To get the latest version of Firefox for iOS, visit the App Store.

The post Level Up with New Productivity Features in Firefox for iOS appeared first on The Mozilla Blog.

This Week In RustThis Week in Rust 238

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's crate is clap-verbosity-flag, a small crate to easily add a verbosity setting to Rust command line applications. Thanks to Yoshuawuyts for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

110 pull requests were merged in the last week

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs
Tracking Issues & PRs

New RFCs

Upcoming Events

Online
Europe
North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

explicit lifetimes no longer scare me the way they used to.

– Nicholas Nethercote on his blog

(selected by llogiq per one unanimous vote)

Please submit your quotes for next week!

This Week in Rust is edited by: nasa42 and llogiq.

The Mozilla BlogOur past work with Facebook

Over the last three months, Mozilla has been a vocal critic of Facebook’s practices with respect to its lack of user transparency. Throughout this time we’ve engaged with Facebook directly about this and have continued to comment publicly as the story about Facebook’s data practices evolves.

Mozilla Corporation recently received two termination notices from Facebook about work that we did with them in the past. These appear to be part of Facebook’s broader effort to clean up its third-party developer ecosystem. This is good – we suspect that we weren’t the only ones receiving these notices. Still, the notices, and recent reporting of Facebook data sharing with device makers, prompted us to take a closer look at our past relationships with the company and we think it is important to talk about what we found.

At a high level we found that Mozilla Corporation had two agreements with Facebook initiated in 2012 and 2013 respectively. No information from Facebook was transferred to Mozilla Corporation in either situation but there were permissions granted to Mozilla Corporation in the agreements with respect to user data. In fact, in one case, our engineers noticed the overly broad access and requested that Facebook limit it. Here are some additional details:

In 2012, Mozilla Corporation had an agreement with Facebook that was intended to make it easier for individuals using Facebook through the Firefox browser to interface with the Facebook application. The relationship was part of our work on the Social API, an effort to integrate social experiences more seamlessly into the browser. As part of that agreement, Facebook was able to display web pages, including users’ data appearing on those pages, in specialized locations in the Firefox browser. This means that data was sent directly to the browser client, and none of the users’ Facebook information was shared with Mozilla Corporation. You can find more publicly available information about this integration here.

The 2013 agreement related to our now-defunct mobile operating system, Firefox OS. When users began using a Firefox OS device, they were given the explicit option of importing their Facebook contacts onto that device. Again, none of the users’ Facebook information was shared with Mozilla. When users disconnected their Facebook account, they were given the option of removing their Facebook data from the device. You can see in our public bug tracker that our team actually asked Facebook to remove some data access permissions because “we shouldn’t request permissions we don’t need.”

While these agreements have remained in effect, the work on these projects had already ended. We finished deprecating the Social API in 2017. Mozilla stopped development of Firefox OS in 2015, although any Firefox OS devices still in use today may retain access until Facebook shuts down that access in accordance with their termination notice.

We are bringing this to your attention because we want to be clear that our products technically had access to Facebook’s APIs and because we want to explain what was done with that access. We encourage other companies to review their relationships with Facebook and to be transparent about what those relationships involved. That level of transparency is what is needed today to build a healthier, more trustworthy Internet that puts users first.

The post Our past work with Facebook appeared first on The Mozilla Blog.

Gervase MarkhamMozilla All Hands

Today, from across the world, Mozillians are gathering in San Francisco for our six-monthly All Hands. For obvious reasons, I won’t be able to be there, so I want to wish all the best to everyone, and I am confident that more awesome ideas for rocking the free web will emerge from their deliberations. Each year brings different grey clouds to the sky, and requires us to adjust our strategy and tactics to deal with new threats. Some we win and a few we lose, but it’s clear that the world and the web are much better places with Mozilla in them fighting for what is right.

Robert O'CallahanBay Area Visit

I'm on my way to San Francisco for a guest visit to the Mozilla All Hands ... thanks, Mozilla!

After that, I'll be taking a break for a few days and going hiking with some friends. Then I'll be spending a week or so visiting more people to talk about rr and Pernosco. Looking forward to all of it!

Cameron KaiserLet's kill kittens with native messaging (or, introducing OverbiteNX: if WebExtensions can't do it, we will)

WebExtensions (there is no XUL) took over with a thud seven months ago, which was felt as a great disturbance in the Force by most of us who wrote Firefox add-ons that, you know, actually did stuff. Many promises were made for APIs to allow us to do the stuff we did before. Some of these promises were kept and these APIs have actually been implemented, and credit where credit is due. But there are many that have not (that metabug is not exhaustive). More to the point, there are many for which people have offered to write code and are motivated to write code, but we have no parameters for what would be acceptable, possibly because any spec would end up stuck in a "boil the ocean" problem, possibly because it's low priority, or possibly because someone gave other someones the impression such an API would be acceptable and hasn't actually told them it isn't. The best way to get contribution is to allow people to scratch their own itches, but the urgency to overcome the (largely unintentional) institutional roadblocks has faded now that there is somewhat less outrage, and we are still left with a disordered collection of APIs that extends Firefox relatively little and a very slow road to do otherwise.

Or perhaps we don't have to actually rely on what's in Firefox to scratch our itch, at least in many cases. In a potentially strategically unwise decision, WebExtensions allows native code execution in the form of "native messaging" -- that is, you can write a native component, tell Firefox about it and who can talk to it, and then have that native component do what Firefox don't. At that point, the problem then becomes more one of packaging. If the functionality you require isn't primarily limited by the browser UI, then this might be a way around the La Brea triage tarpit.
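For the curious, the native messaging plumbing itself is simple: the browser and the host program exchange JSON messages over stdin/stdout, each prefixed with a 4-byte native-endian length. Here is a minimal Python sketch of a host (illustrative only; Onyx itself is the single-file C program described later, and a real Gopher host would open a TCP socket rather than echo the request back):

import json
import struct
import sys

def read_message():
    # Each message from the browser is preceded by a 4-byte length, native byte order.
    raw_length = sys.stdin.buffer.read(4)
    if len(raw_length) < 4:
        return None
    length = struct.unpack('@I', raw_length)[0]
    return json.loads(sys.stdin.buffer.read(length).decode('utf-8'))

def send_message(obj):
    encoded = json.dumps(obj).encode('utf-8')
    sys.stdout.buffer.write(struct.pack('@I', len(encoded)))
    sys.stdout.buffer.write(encoded)
    sys.stdout.buffer.flush()

while True:
    request = read_message()
    if request is None:
        break
    # A real host like Onyx would connect to the requested Gopher server here
    # and stream the response back; this sketch just echoes the request.
    send_message({'echo': request})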

Does this sound suspiciously familiar to anyone like some other historical browser-manipulated blobs of native code? Hang on, it's coming back to me. I remember something like this. I remember now. I remember them!

Plugins!

If you've been under a rock until Firefox 52, let me remind you that plugins were globs of native code that gave the browser wonderful additional capabilities such as playing different types of video, Flash and Shockwave games and DRM management, as well as other incredibly useful features such as potential exploitation, instability and sometimes outright crashes. Here in TenFourFox, for a variety of reasons I officially removed support for plugins in version 6 and completely removed the code with TenFourFox 19. Plugins served a historic purpose which are now better met by any number of appropriate browser and HTML5 APIs, and their disadvantages now in general outweigh their advantages.

Mozilla agrees with this, sort of. Starting with 52 many, though not all, plugins won't run. The remaining lucky few are Flash, Widevine and OpenH264. If you type about:plugins into Firefox, you'll still see them unless you're like me and you're running it on a POWER9 Talos II or some other hyper-free OS. There's a little bit of hypocrisy here in the name of utilitarianism, but I think it's pretty clear there would be a pretty high bar to adding a new plugin to this whitelist. Pithily, Mozilla has concluded regardless of any residual utility that plugins kill kittens.

It wasn't just plugins, either. Mozilla really doesn't like native code at all when it can be avoided; for a number of years in the Mozilla developer community I remember a lot of interest in making JavaScript so fast it could be used for all the media decoding and manipulation that native plugins then did. It's certainly gotten a lot faster since then, but now I guess the interest is having WASM do that instead (I'll wait). A certain subset of old-school extensions used to have binary components too, though largely to do funky system-level things that regular XPCOM in JavaScript and/or js-ctypes couldn't handle. This practice was strongly discouraged by Mozilla but many antiviruses implemented browser scanners in that fashion. This may have been a useful means to an end but binary components were a constant source of bugs when Mozilla altered something that the add-on authors didn't expect to change, and thus they kill kittens as well.

Nevertheless, we're back in a situation where we actually need certain APIs to be implemented to make certain types of functionality -- functionality which was perfectly acceptable in the XUL addons era, I might add -- actually still possible. These APIs are not a priority to Mozilla right now, but a certain subset of them can be provided by a (formerly discouraged) native component.

It's time to kill some kittens.

The particular kitten I need to kill is TCP sockets. I need to be able to talk over TCP to Gopher servers directly on a selection of port numbers. This was not easy to implement with XPCOM, but you could implement your own nsIChannel as a first-class citizen that looked to the browser like any other channel in pure JavaScript back in the day, and I did. That was OverbiteFF. I'm not the only one who asked for this feature, and I was willing to write it, but I'm not going to spend a lot of time writing code Mozilla won't accept and that means Mozilla needs to come up with an acceptable spec. This has gradually drowned under the ocean they're trying to boil by coming up with everyone's use cases whether they have anything in common or not and then somehow addressing the security concerns of all these disparate connection models and eventually forging the one ring to bind them all sometime around the natural heatdeath of the observable universe. I'm tired of waiting.

So here is OverbiteNX. OverbiteNX comes in two parts: the actual WebExtensions addon ("OverbiteNX" proper), and Onyx, a native component that the browser add-on asks to connect to a Gopher server and then receives data from it. Onyx is supported on Microsoft Windows, Linux and macOS (I've included binaries for Win32 and macOS 10.12+), and works probably anywhere else you can compile and run Firefox and Onyx as long as it supports Berkeley sockets. Onyx has no dependencies, is written in cross-platform C with a couple Windows-specific bits, and is currently presented in a single source file in an uncomplicated manner to not only serve the purpose of the extension but also to serve as an educational example of how to kill your own particular kitten. All of the pieces needed to hook it up (the Windows NSI and example JSON) are included and you can see how it connects with the front-end and the life cycle of a request in the source code. I discuss the architecture in more detail and how you can play with it. (If you are a current user of OverbiteWX, please remove it first.)

To be sure, I'm not the first to have this idea, and it's hardly an optimal solution. Besides the fact I have to coax someone to install a binary on their system to use the add-on, if I change the communication protocol (which is highly likely given that I will need to add future support for multithreading and possibly session IDs when multiple tabs are in use) then I need to get them to upgrade both the add-on and the native component at the same time. Users on weird platforms, and as a user of a weird platform I certainly empathize, have to do more work to get it to run on their system by compiling and installing it manually. The dat extension I linked to at the beginning of this paragraph gets around the binary code limitation by running the "native" component as JavaScript under Node.js, but that requires you to run Node, which is an extra step that seems unrealistic for many end-users and actually adds more work to people on weird platforms.

On the other hand, this is the only way right now it's going to work at all. I don't think Mozilla is going to move on these pain points unless they get into the situation where half the add-ons on AMO depend on external components they don't manage. So let the kitten killing begin. If you've got an idea and the current APIs don't let you implement it, see if a native component will scratch your itch and increase the pressure on Mountain View. After all, if Mozilla doesn't add useful WebExtensions APIs, there's no alternative but feline mass murder.

Provocative hyperbole aside, bug reports and pull requests appreciated for OverbiteNX. At some point in the near future the add-on piece will be submitted to AMO, which I expect to pass the automatic scanner since it doesn't do any currently proscribed operations, and then it's just a matter of keeping the native component in sync for future releases. Watch the Github project for more. And enjoy Gopherspace. Just don't tell the EU about it.

(Note for the humourless: The cat pictured is my cat. She is purring on her ottoman as this was written. No cat was harmed during the making of this blog post, though she does get an occasional talking-to. The picture was taken when she was penned up in the laundry while I was out and she was not threatened with any projectile weapon.)

K Lars LohnThings Gateway - Nest Thermostat & the Pellet Stove

Back in January of 2014, I wrote a blog post called Hacking a Pellet Stove to Work with Nest.  It was a narrative about trying to use the advanced features of the Nest learning thermostat to control a pellet stove in the volatile temperature environment of a yurt.

The solution that I concocted was entirely electromechanical.  I used 24VAC relays to interface between the Nest thermostat's three level furnace control and the pellet stove.  Complicating the solution was the pellet stove not having anything but a SPDT switch to control the heating output level.  The hack was rewiring the stove so that the relays took over both the thermostat on/off control and the fan/speed adjustment provided by the switch.

While the solution worked, it didn't work well enough.  The Nest thermostat, when perceiving that the yurt needed heat, would turn the pellet stove on at low.  If, after twenty minutes or so, the yurt hadn't sufficiently heated, the Nest thermostat would advance the pellet stove to medium.  It repeated that cycle to eventually advance to high.  That meant that when the yurt was cold, it would take more than forty minutes to provide "room temperature".  Once it did achieve room temperature, the thermostat would just turn the pellet stove off.

The yurt is a volatile temperature environment.  It is, after all, just a fancy tent sensitive to outdoor temperature and wind.  When the Nest turned off the pellet stove, the temperature would go into free fall.  In cold winter temperatures, the yurt could easily shed twenty degrees in less than a half hour. Relighting the stove and methodically advancing through the levels over forty minutes did not make for a comfortable living temperature.

Rather than slowly advancing through the levels, I needed the stove to start on high and then back down through the levels on shutdown.  I call this a "lingering shutdown".  My original solution was based on a logic table that did not have to deal with past state; the Nest thermostat handled the progression through states.  To implement the lingering shutdown, I'd instead need a finite state machine that took into account previous state and have an external timing trigger.  That led to a ridiculous logic table and I abandoned that path.

I eventually dropped the exclusively electromechanical aspect of this solution and wrote software running on a Raspberry Pi to do the interface.  Relays were still used, but only for electrical isolation of the components.  This software was problematic, too, but in a different way.  The UX wasn't very good: to change the timing of the lingering shutdown levels required opening an ssh session and manually setting configuration values (yes Socorro people, complex configuration bit my ass here, too).  It was just never critical enough to justify the effort of writing a UI the whole family could use.

That changed when the Things Gateway, with the Things Framework, was released.  I could now create a program that could mediate between the Nest thermostat and the pellet stove and, without additional effort, have a Web accessible user interface.  Further, it enables a rule system that can adjust the behavior of the stove based on any device the Things Gateway can use: temperature sensors, wind sensors, etc.  In fact, it is likely that I can just write the Nest thermostat entirely out of the system.  One less cloud dependent service is a win.

The previous iteration of my software had already figured out how to get real world information in and out of the Raspberry Pi using the GPIO pins.  I configured one GPIO pin as input and three GPIO pins as output.

For input, I wanted to mirror the call signal from the Nest thermostat within the Raspberry Pi.  I did this with a 24VAC relay.  The thermostat thinks it's turning a furnace on and off, but really it's just triggering the relay.  The output side of the relay is turning GPIO pin 18 on and off.  This makes the moment by moment status of the thermostat readily available within the software.

GPIO pin 17 is in charge of actually turning the pellet stove on and off.  This is done with another relay.  When the software wants to turn the pellet stove on, it triggers GPIO17 which energizes the relay, which then shorts the red/white thermostat wires connected to the pellet stove.

Setting the level of the pellet stove is more complicated.  Before my modifications, it was controlled manually via a SPDT center-off switch on the back of the stove.  It has three conductor connections: A, B, and C.  It has three switch positions: AB, BC, and all off.  To simulate this switch in the computer, I need three states, implemented as two binary bits: 00 is all off, 01 is AB, and 10 is BC.  The fourth state offered by two binary bits is unused.  Moving into the physical world, this means using two GPIO pins to activate two more relays.  The relays handle switching the 24VAC power from the original SPDT switch.
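In code, the wiring described above might look like the following sketch (assuming the standard RPi.GPIO library; the post names GPIO 18 and GPIO 17, but the two level-control pin numbers here are hypothetical):

import RPi.GPIO as GPIO

THERMOSTAT_PIN = 18   # input: mirrors the Nest's call-for-heat relay
STOVE_ON_PIN = 17     # output: relay that shorts the stove's thermostat wires
LEVEL_BIT_A = 22      # output: hypothetical pin driving the "AB" relay
LEVEL_BIT_B = 23      # output: hypothetical pin driving the "BC" relay

GPIO.setmode(GPIO.BCM)
GPIO.setup(THERMOSTAT_PIN, GPIO.IN)
GPIO.setup([STOVE_ON_PIN, LEVEL_BIT_A, LEVEL_BIT_B], GPIO.OUT, initial=GPIO.LOW)

def set_switch_position(position):
    # Reproduce the SPDT center-off switch: 00 is all off, 01 is AB, 10 is BC.
    GPIO.output(LEVEL_BIT_A, position == 'AB')
    GPIO.output(LEVEL_BIT_B, position == 'BC')

def call_for_heat():
    # True while the Nest thermostat is asking for heat.
    return bool(GPIO.input(THERMOSTAT_PIN))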


I wanted the Pellet Stove to appear as a thing within the Things Gateway.  That means deriving a new class, PelletStove, from the WoTThing class.  Within that, I define a number of properties:
  • the on/off state of the thermostat
  • the state of the stove itself (off, high, medium, low)
  • the run level of the stove software (off, heating, lingering medium, lingering low)
  • the number of minutes in a lingering shutdown for medium level
  • the number of minutes in a lingering shutdown for low level
The first one is the only property that is sensing an external status: the on/off state of the thermostat.  I define a pywot property for it within the definition of the PelletStove class:
    thermostat_state = WoTThing.wot_property(
        name='thermostat_state',
        description='the on/off state of the thermostat',
        initial_value=False,
        value_source_fn=get_thermostat_state,
    )
(see this code in situ in the pellet_stove.py file in the pywot demo directory)

The code defines the value_source_fn as get_thermostat_state, an asynchronous method that will poll the status of the thermostat (via GPIO pin 18) and control the action of the rest of the system.  When the thermostat goes from the off to on condition, this method will turn the pellet stove on high.  When the thermostat goes from on to off, the method will trigger another asynchronous task that will slowly back down the pellet stove through the lingering shutdown procedure.
    async def get_thermostat_state(self):
        previous_thermostat_state = self.thermostat_state
        self.thermostat_state = self._controller.get_thermostat_state()
        self.logging_count += 1
        if self.logging_count % 300 == 0:
            logging.debug('still monitoring thermostat')
            self.logging_count = 0
        if previous_thermostat_state != self.thermostat_state:
            if self.thermostat_state:
                logging.info('start heating')
                await self.set_stove_mode_to_heating()
            else:
                logging.info('start lingering shutdown')
                self.lingering_shutdown_task = asyncio.get_event_loop().create_task(
                    self.set_stove_mode_to_lingering()
                )
(see this code in situ in the pellet_stove.py file in the pywot demo directory)

The next property just reflects the current status of the stove: on/off and low/medium/high:
    stove_state = WoTThing.wot_property(
        name='stove_state',
        description='the stove intensity level',
        initial_value='off',  # off, low, medium, high
    )
(see this code in situ in the pellet_stove.py file in the pywot demo directory)
Notice that it doesn't have a value_source_fn.  That's because it is the state of the pellet stove as controlled by the software, not an external state to be detected.

The third property is the operating state of the software.  Similar to stove_state, this property reflects the state of the stove as well as what's going on in the software.  Right now, that is just indicated by heating and lingering shutdown levels.  In the future, it will also show that manual overrides are in effect.
    stove_automation_mode = WoTThing.wot_property(
        name='stove_automation_mode',
        description='the current operating mode of the stove',
        initial_value='off',  # off, heating, lingering_in_medium, lingering_in_low, overridden
    )
(see this code in situ in the pellet_stove.py file in the pywot demo directory)

During the process of lingering shutdown, the stove will progress through running at medium for a set number of minutes, to running at low for a set number of minutes, until finally turning off.  The length of time that it takes for the medium and low states is something that can be set through the user interface.  Because they can be set, that also means that Things Gateway Rules can also set them.  In the future, I imagine connecting these with my Virtual Weather Station.  This will facilitate adjusting the length of the lingering shutdown based on outdoor temperatures and wind speed.
    medium_linger_minutes = WoTThing.wot_property(
        name='medium_linger_minutes',
        description='how long should the medium level last during lingering shutdown',
        initial_value=5.0,
        value_forwarder=set_medium_linger
    )
    low_linger_minutes = WoTThing.wot_property(
        name='low_linger_minutes',
        description='how long should the low level last during lingering shutdown',
        initial_value=5.0,
        value_forwarder=set_low_linger
    )
(see this code in situ in the pellet_stove.py file in the pywot demo directory)

Once this software was complete and tested, I installed it on a dedicated Raspberry Pi, set the jumpers for controlling the relay board, wired the 24VAC thermostat relay and then wired the pellet stove components.  It started working immediately. It doesn't need the Things Gateway to run in its basic mode.

My Things Gateway doesn't run in the yurt, it lives in my office in the old original farm house.  Fortunately, ten years ago, I trenched in gigabit Ethernet between all my buildings.  So, while standing in the yurt, I opened the Things Gateway in Firefox running on my Android tablet.  I added the Pellet Stove thing, and it all worked correctly.

There appears to be a minor bug here.  Only the "low_linger_minutes" and "medium_linger_minutes" are settable by the user.  However, the Things Gateway is allowing all the other fields to be settable, too - even though setting them doesn't actually do anything.  I've not yet figured out if this is a bug in my code or in the Things-URL adapter.

I monitored the logging of my pellet stove controller closely and saw this one morning:
        
2018-06-06 06:25:29,938 pellet_stove.py:100 INFO start heating
2018-06-06 06:53:39,008 pellet_stove.py:103 INFO start lingering shutdown
2018-06-06 06:53:39,014 pellet_stove.py:162 DEBUG stove set to medium for 120 seconds
2018-06-06 06:55:39,020 pellet_stove.py:171 DEBUG stove set to low for 300 seconds
2018-06-06 07:00:00,706 pellet_stove.py:100 INFO start heating
2018-06-06 07:00:00,707 pellet_stove.py:147 DEBUG canceling lingering shutdown
to turn stove back to high
2018-06-06 07:14:14,256 pellet_stove.py:162 DEBUG stove set to medium for 120 seconds
2018-06-06 07:16:14,262 pellet_stove.py:171 DEBUG stove set to low for 300 seconds
2018-06-06 07:21:14,268 pellet_stove.py:178 INFO stove turned off


The stove started heating at 6:25am.  It got the yurt up to temperature by 6:53am, so it started the lingering shutdown.  At 7:00am, the Nest thermostat decided to raise the temperature of the yurt from the nighttime level to the daytime level.  The software was running the stove at low at that moment.  It canceled the lingering shutdown and restored the stove to running on high. This is perfect behavior.

I'm almost disappointed that it is summer.  In the next few days, I'll be shutting down the stove for the season and probably won't start it up again until late September.  I look forward to next Fall when I get to really put this software through its paces.

Mozilla VR BlogThis week in Mixed Reality: Issue 9

This week in Mixed Reality: Issue 9

This week the team is continuously working on new features, making improvements and fixing bugs.

Next week, the team will be in San Francisco for an all-Mozilla company meeting.

Browsers

We just added a new browser design and assets to Firefox Reality and we now have the ability to display a 360 image as the background in VR while doing 2D browsing!

  • Added CubeMap support to the VR engine
  • Added Skybox support to Firefox Reality
  • Putting new UI and platform work for immersive mode
  • The design team delivered the final designs for v1, including the browser window UI, the button tray, and the settings panel

Social

We can now import and display basic 2D web content in a shared room and continue to make improvements.

  • Working on allowing users to pull in content from around the web. This is going to roll out and expand over time, but in the meantime here is some emergent gameplay footage https://imgur.com/DtEBOS6 for the Hubs by Mozilla product.
  • Pushing some improvements to the controls: 2D mobile users can pinch to move around the space, mouse users can engage pointer-lock with right click, 6DOF users can pull ducks to them from far away using a thumbpad or joystick
  • Improvements to networking code particularly on slow connections

Join our public WebVR Slack #social channel to participate in on the discussion!

Content ecosystem

We are improving support for the Oculus Go in the Unity WebVR exporter tool.

Found a critical bug? File it in our public GitHub repo or let us know on the public WebVR Slack #unity channel and as always, join us in our discussion!

Stay tuned for new features and improvements across our three areas!

Hacks.Mozilla.Org@media, MathML, and Django 1.11: MDN Changelog for May 2018

Editor’s note: A changelog is “a log or record of all notable changes made to a project. [It] usually includes records of changes such as bug fixes, new features, etc.” Publishing a changelog is kind of a tradition in open source, and a long-time practice on the web. We thought readers of Hacks and folks who use and contribute to MDN Web Docs would be interested in learning more about the work of the MDN engineering team, and the impact they have in a given month. We’ll also introduce code contribution opportunities, interesting projects, and new ways to participate.

Done in May

Here’s what happened in May to the code, data, and tools that support MDN Web Docs:

We’ll continue this work in June.

Migrated CSS @media and MathML compat data

The browser compatibility migration continued, jumping from 72% to 80% complete. Daniel D. Beck completed the CSS @media rule features, by converting data (PR 2087 and a half-dozen others) and by reviewing PRs like 1977 from Nathan Cook. Mark Boas finished up MathML, submitting 26 pull requests.

The first few entries of the CSS @media browser compatibility table

The @media table has 32 features. It’s a big one.

There are over 8500 features in the BCD dataset, and half of the remaining work is already submitted as pull requests. We’re making steady progress toward completing the migration effort.

Prepared for Django 1.11

MDN runs on Django 1.8, the second long-term support (LTS) release. In May, we updated the code so that MDN’s test suite and core processes work on Django 1.8, 1.9, 1.10, and 1.11. We didn’t make it by the end of May, but by the time this report is published, we’ll be on Django 1.11 in production.

Many Mozilla web projects use Django, and most stick to the LTS releases. In 2015, the MDN team upgraded from Django 1.4 LTS to 1.7 in PR 3073, which weighed in at 223 commits, 12,000 files, 1.2 million lines of code, and at least six months of effort. Django 1.8 followed six months later in PR 3525, a lighter change of 30 commits, 2,600 files, and 250,000 lines of code. After this effort, the team was happy to stick with 1.8 for a while.

Most of the upgrade pain was specific to Kuma. The massive file counts were due to importing libraries as git submodules, a common practice at the time for security, reliability, and customization. It was also harder to update libraries, which meant updates were often delayed until necessary. The 1.7 upgrade included an update from Python 2.6 to 2.7, and it is challenging to support dual-Python installs on a single web server.

The MDN team worked within some of these constraints, and tackled others. They re-implemented the development VM setup scripts in Ansible, but kept Puppet for production. They updated submodules for the 1.7 effort, and switched to hashed requirements files for 1.8. A lot of codebase and process improvements were gained during the update effort. Other improvements, such as Dockerized environments, were added later to avoid issues in the next big update.

Django 1.4 wasn’t originally planned as an LTS release, but instead was scheduled for retirement after the 1.6 release. The end of support was repeatedly pushed further into the future, finally extending to six months after 1.8 was published.

On the other hand, the Django team knew that 1.8 was an LTS release when they shipped it, and would get security updates for three years. Many of the Django team’s decisions in 2015 made MDN’s update easier. Django 1.11 retained Python 2.7 support, avoiding the pain of a simultaneous Python upgrade. Django now has a predictable release process, which is great for website and library maintainers.

The Django supported versions table shows which releases are supported, and for how long. There's a new release every 8 months; regular releases are supported for 16 months, and LTS releases for 3 years.

Django’s release roadmap makes scheduling update efforts easier.

Django also maintains a guide on updating Django, and the suggested process worked well for us.

The first step was to upgrade third-party libraries, which we’ve been working on for the past two years. Our goal was to get a library update into most production pushes, and we updated dozens of libraries while shipping unrelated features. Some libraries, such as django-pipeline, didn’t support Django 1.11, so we had to update them ourselves. In other cases, like django-tidings, we also had to take over project maintenance for a while.

The next step was to turn on Django’s deprecation warnings, to find the known update issues. This highlighted a few more libraries to update, and also the big changes needed in our code. One problem area was our language-prefixed URLs. MDN uses Mozilla-standard language codes, such as /en-US/, where the region is in capital letters. Django ships a locale-prefixed URL framework, but uses lowercase language codes, such as /en-us/. Mozilla’s custom framework broke in 1.9, and we tried a few approaches before copying Django’s framework and making adjustments for Mozilla-style language codes (PR 4790).
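If you are attempting a similar upgrade, a minimal way to surface those deprecation warnings (a sketch, not our exact configuration) is to stop Python from silencing them while the test suite runs:

  # Run the suite with warnings enabled:
  #   python -Wd manage.py test
  # or turn them on programmatically in the test settings module:
  import warnings

  warnings.simplefilter("default", DeprecationWarning)
  warnings.simplefilter("default", PendingDeprecationWarning)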

Next we ran our automated tests, and started fixing the many failures. Our goal was to support multiple Django versions in a single codebase, and we added optional builds to TravisCI to run tests on 1.9, 1.10, and 1.11 (PR 4806). We split the fixes into over 50 pull requests, and tested small batches of changes by deploying to production. In some cases, the same code works across all four versions. In other cases, we switched between code paths based on the Django version. Updates for Django 1.9 were about 90% of the upgrade effort.
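As an illustration of the version-gated approach (not a specific Kuma change), an import that moved in Django 1.10 can be handled like this:

  import django

  if django.VERSION >= (1, 10):
      # django.urls only exists from 1.10 onward.
      from django.urls import reverse
  else:
      from django.core.urlresolvers import reverse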

This incremental approach is safer than a massive PR, and avoids issues with keeping a long-running branch up to date. It does make it harder to estimate the scope of a change. Looking at the commits that mention the update bugs, we changed 2500 to 3000 lines, which represents 10% of the project code.

Started tracking work in ZenHub

For the past two years, the MDN team had been using Taiga to plan and track our work in 3-week sprints. The team was unhappy with performance issues, and had been experimenting with GitHub’s project management features. Janet Swisher and Eric Shepherd led an effort to explore alternatives. In May, we started using ZenHub, which provides a sprint planning layer on top of GitHub issues and milestones.

The ZenHub board collects task cards into columns that move to the right as they approach completion.

See how the ‘sausage’ gets made with our ZenHub board.

If you have the ZenHub plugin installed, you can view the sprint board in the new mdn/sprints repository, which collects tasks across our 10 active repositories. It adds additional data to GitHub issues and pull requests, linking them into project planning. If you don’t have the plugin, you can view the sprint board on the ZenHub website. If you don’t want to sign in to ZenHub with your GitHub account, you can view the milestones in the individual projects, like the Sprint 4 milestone in the Interactive Examples project.

Continued HTML Interactive Examples

We’re still working through the technical challenges of shipping HTML interactive examples. One of the blocking issues was restricting style changes to the demo output on the right, but not the example code on the left. This is a use case for Shadow DOM, which can restrict styles to a subset of elements. Schalk Neethling shipped a solution based on Shadow DOM in PR 873 and PR 927, and it works natively in Chrome. Not all current browsers support Shadow DOM, so Schalk added a shim in PR 894. There are some other issues that we’ll continue to fix before including tabbed HTML examples on MDN.
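As a rough sketch of the technique (not Schalk's actual implementation), the demo output can be rendered into a shadow root so its styles cannot leak into the editor pane; the element ID here is hypothetical:

  // Scope the demo's styles to its own shadow tree.
  const output = document.querySelector("#output");
  const shadow = output.attachShadow({ mode: "open" });
  shadow.innerHTML = `
    <style>
      /* These rules apply only inside the shadow tree. */
      blockquote { border-left: 4px solid #888; padding-left: 1em; }
    </style>
    <blockquote>Styled demo output</blockquote>
  `;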

Meanwhile, contributors from the global MDN community have written some interesting demos for <track> (Leonard Lee, PR 940), <map> (Adilson Sandoval, PR 931), and <blockquote> (Florian Scholz, PR 906). We’re excited to get these and the other examples on MDN.

On the left, HTML defines a <map> of polygon targets, and on the right the resulting image with link targets is displayed.

A demo of the polygon targets for <map>.

Shipped Tweaks and Fixes

There were 397 PRs merged in May:

60 of these were from first-time contributors:

Other significant PRs:

Planned for June

We plan to ship Django 1.11 to production in early June, even before this post is published. We’ll spend some additional time in June fixing any issues that MDN’s millions of visitors discover, and upgrading the code to take advantage of some of the new features.

We plan to keep working on the compatibility data migration, HTML interactive examples, and other in-progress projects. June is the end of the second quarter, and is a deadline for many quarterly goals.

We are gathering in San Francisco for Mozilla’s All-Hands. It will be a chance for the remote team to be together in the same room, to celebrate the year’s accomplishments to date, and to make plans for the rest of the year.

Will Kahn-GreeneStandup report: June 8th, 2018

What is Standup?

Standup is a system for capturing standup-style posts from individuals, making it easier to see what's going on for teams and projects. It has an associated IRC bot, standups, for posting messages from IRC.

Project report

Over the last six months, we've done:

  • monthly library updates
  • revamped static assets management infrastructure
  • service maintenance
  • fixed the textarea to be resizeable (Thanks, Arai!)

The monthly library updates have helped with reducing technical debt. That takes a few hours each month to work through.

Paul redid how Standup does static assets. We no longer use django-pipeline, but instead use gulp. It works muuuuuuch better and makes it possible to upgrade to Django 2.0 soon. That was a ton of work over the course of a few days for both of us.
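For the curious, the gulp setup amounts to tasks along these lines (a sketch with hypothetical paths, not Standup's actual gulpfile):

  const gulp = require("gulp");
  const concat = require("gulp-concat");
  const cleanCSS = require("gulp-clean-css");

  // Bundle and minify the stylesheets that django-pipeline used to handle.
  gulp.task("styles", () =>
    gulp
      .src("standup/status/static/css/*.css")
      .pipe(concat("bundle.css"))
      .pipe(cleanCSS())
      .pipe(gulp.dest("static/build/"))
  );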

We've been keeping the Standup service running. That includes stage and production websites as well as stage and production IRC bots. That also includes helping users who are stuck--usually with accounts management. That's been a handful of hours.

Arai fixed the textareas so they're resizeable. That helps a ton! I'd love to get more help with UI/UX fixing.

Some GitHub stats:

GitHub
======

  mozilla/standup: 15 prs

    Committers:
             pyup-bot :     6  (  +588,   -541,   20 files)
               willkg :     5  (  +383,   -169,   27 files)
                 pmac :     2  ( +4179,   -223,   58 files)
               arai-a :     1  (    +2,     -1,    1 files)
                  g-k :     1  (    +3,     -3,    1 files)

                Total :        ( +5155,   -937,   89 files)

    Most changed files:
      requirements.txt (11)
      requirements-dev.txt (7)
      standup/settings.py (5)
      docker-compose.yml (4)
      standup/status/jinja2/base.html (3)
      standup/status/models.py (3)
      standup/status/tests/test_views.py (3)
      standup/status/urls.py (3)
      standup/status/views.py (3)
      standup/urls.py (3)

    Age stats:
          Youngest PR : 0.0d: 466: Add site-wide messaging
       Average PR age : 2.3d
        Median PR age : 0.0d
            Oldest PR : 10.0d: 459: Scheduled monthly dependency update for May


  All repositories:

    Total merged PRs: 15


Contributors
============

  arai-a
  g-k
  pmac
  pyup-bot
  willkg

That's it for the last six months!

Switching to swag-driven development

Do you use Standup?

Did you use Standup, but the glacial pace of fixing issues was too much so you switched to something else?

Do you want to use Standup?

We think there's still some value in having Standup around and there are still people using it. There's still some technical debt to fix that makes working on it harder than it should be. We've been working through that glacially.

As a project, we have the following problems:

  1. The bulk of the work is being done by Paul and Will.
  2. We don't have time to work on Standup.
  3. There isn't anyone else contributing.

Why aren't users contributing? Probably a lot of reasons. Maybe everyone has their own reason! Have I spent a lot of time to look into this? No, because I don't have a lot of time to work on Standup.

Instead, we're just going to make some changes and see whether that helps. So we're doing the following:

  1. Will promises to send out Standup project reports every 6 months before the All Hands, and in doing so raise some awareness of what's going on and thank the people who contributed.
  2. We're fixing the Standup site to be clearer on who's doing work and how things get fixed so it's more likely your ideas come to fruition rather than get stale.
  3. We're switching Standup to swag-driven development!

What's that you say? What's swag-driven development?

I mulled over the idea in my post on swag-driven development.

It's a couple of things, but mainly an explicit statement that people work on Standup in our spare time at the cost of not spending that time on other things. While we don't feel entitled to feeling appreciated, it would be nice to feel appreciated sometimes. Not feeling appreciated makes me wonder whether I should spend the time elsewhere. (And maybe that's the case--I have no idea.) Maybe other people would be more interested in spending their spare time on Standup if they knew there were swag incentives?

So what does this mean?

It means that we're encouraging swag donations!

  • If your team has stickers at the All Hands and you use Standup, find Paul and Will and other Standup contributors and give them one!
  • If there are features/bugs you want fixed and they've been sitting in the queue forever, maybe bribing is an option.

For the next quarter

Paul and I were going to try to get together at the All Hands and discuss what's next.

We don't really have an agenda. I know I look at the issue tracker and go, "ugh" and that's about where my energy level is these days.

Possible things to tackle in the next 6 months off the top of my head:

If you're interested in meeting up with us, toss me an email at willkg at mozilla dot com.

Daniel Stenbergquic wg interim Kista

The IETF QUIC working group had its fifth interim meeting the other day, this time in Kista, Sweden hosted by Ericsson. For me as a Stockholm resident, this was ridiculously convenient. Not entirely coincidentally, this was also the first quic interim I attended in person.

We were 30 something persons gathered in a room without windows, with another dozen or so participants joining from remote. This being a meeting in a series, most people already know each other from before so the atmosphere was relaxed and friendly. Lots of the participants have also been involved in other protocol developments and standards before. Many familiar faces.

Schedule

As QUIC is supposed to be done "soon", the emphasis is now very much on closing issues, postponing some stuff to "QUICv2" and making sure to get decisions on outstanding question marks.

Kazuho did a quick run-through with some info from the interop days prior to the meeting.

After MT's initial explanation of where we're at for the upcoming draft-13, Ian took us on a deep dive into the Stream 0 Design Team report. This is a pretty radical change to the wire format of the quic protocol, and to how the TLS is being handled.

The existing draft-12 approach...

Is suggested to instead become...

What's perhaps the most interesting takeaway here is that the new format doesn't use TLS records anymore - but simplifies a lot of other things. Not using TLS records but still doing TLS means that a QUIC implementation needs to get data from the TLS layer using APIs that existing TLS libraries don't typically provide. PicoTLS, Minq, BoringSSL and NSS already have or will soon provide the necessary APIs. Slightly behind, OpenSSL should offer it in a nightly build soon, but the impression is that it is still a bit away from an actual OpenSSL release.

EKR continued the theme. He talked about the quic handshake flow and among other things explained how 0-RTT and early data works. Taken from that context, I consider this slide (shown below) fairly funny because it makes it look far from simple to me. But it shows communication in different layers, and how the acks go, etc.

HTTP

Mike then presented the state of HTTP over quic. The frames are no longer that similar to the HTTP/2 versions. Work is being done to ensure that the HTTP layer doesn't need to refer to or "grab" stream IDs from the transport layer.

There was a rather lengthy discussion around how to handle "placeholder streams" like the ones Firefox uses over HTTP/2 to create "anchors" on which to make dependencies but are never actually used over the wire. The nature of the quic transport makes those impractical and we talked about what alternatives there are that could still offer similar functionality.

The subject of priorities and dependencies and if the relative complexity of the h2 model should be replaced by something simpler came up (again) but was ultimately pushed aside.

QPACK

Alan presented the state of QPACK, the HTTP header compression algorithm for hq (HTTP over QUIC). It is not wire compatible with HPACK anymore and there have been some recent improvements and clarifications done.

Alan also did a great step-by-step walk-through of how QPACK works with adding headers to the dynamic table and how it works with its indices etc. It was very clarifying, I thought.

The discussion about the static table for the compression basically ended with us agreeing that we should just agree on a fairly small fixed table without a way to negotiate the table. Mark said he'd try to get some updated header data from some server deployments to get another data set than just the one from WPT (which is from a single browser).

Interop-testing of QPACK implementations can be done by encoding, shuffling and decoding a HAR file and comparing the results with the source data. Just do it - and talk to Alan!

And the first day was over. A fully packed day.

ECN

Magnus started off with some heavy stuff, talking about Explicit Congestion Notification in QUIC, how it is intended to work and some remaining issues.

He also got into the subject of ACK frequency and how the current model isn't ideal in every situation, causing it to work like the image below (from Magnus' slide set):

Interestingly, it turned out that several of the implementers already basically had implemented Magnus' proposal of changing the max delay to min(RTT/4, 25 ms) independently of each other!

mvfst deployment

Subodh took us on a journey with some great insights from Facebook's internal deployment of mvfst, their QUIC implementation. Getting some real-life feedback is useful, and with over 100 billion requests/day, it seems they did give this a good run.

Since their usage and stack for this is a bit use case specific I'm not sure how relevant or universal their performance numbers are. They showed roughly the same CPU and memory use, with a 70% RPS rate compared to h2 over TLS 1.2.

He also entertained us with some "fun issues" from bugs and debugging sessions they've done and learned from. Awesome.

The story highlights the need for more tooling around QUIC to help developers and deployers.

Load balancers

Martin talked about load balancers and servers, and how they could or should communicate to work correctly with routing and connection IDs.

The room didn't seem overly thrilled about this work and mostly offered other ways to achieve the same results.

Implicit Open

During the last session of the day, and of the entire meeting, mt went through a few things that still needed discussion or closure: stateless reset and the rather big bike-shed issue, implicit open. The latter is the question of whether opening a stream with ID N + 1 implicitly also opens the stream with ID N. I believe we ended with a slight preference for the implicit approach, and this will be taken to the list for a consensus call.

Frame type extensibility

How should the QUIC protocol allow extensibility? The oldest still-open issue in the project can be solved or satisfied in numerous different ways, and the discussion went back and forth for a while, debating the merits and downsides of various approaches, until the group more or less agreed on a fairly simple and straightforward approach where extensions will announce support for a feature, which then may or may not involve one or more new frame types (to be kept in a registry).

We proceeded to discuss other issues until "closing time", which was set to be 16:00 today. This was just two days of pushing forward, but it still felt quite intense, and my personal impression is that a lot of good progress was made here that took the protocol a good step forward.

The facilities were lovely and Ericsson was a great host for us. The Thursday afternoon cakes were great! Thank you!

Coming up

There's an IETF meeting in Montreal in July and there's a planned next QUIC interim probably in New York in September.

Mozilla Open Design BlogParis, Munich, & Dresden: Help Us Give the Web a Voice!

Text available in: English | Français | Deutsch

 
In July, our Voice Assistant Team will be in France and Germany to explore trust and technology adoption. We’re particularly interested in how people use voice assistants and how people listen to content like Pocket and podcasts. We would like to learn more about how you use technology and how a voice assistant or voice user interface (VUI) could improve your Internet and open web experiences. We will be conducting a series of in-home interviews and participatory design sessions. No prior voice assistant experience needed!

We would love to meet folks in person in:

Paris: July 3 – 6, 2018
Munich: July 9 – 13, 2018
Dresden: July 16 – 20, 2018

If you are interested in participating in our in-home interviews (2 hours) or participatory design sessions (1.5 hours), please let us know! We’d love to meet you in person!

If you are interested in meeting us in Paris, please fill out this form.
If you are interested in meeting us in Germany, please fill out this form.

All information will be held under Mozilla’s Privacy Policy.


 

Paris, Munich & Dresde : aidez-nous à donner une voix au Web !

En juillet, notre équipe Assistants vocaux sera en France et en Allemagne pour explorer la confiance et l’adoption de ces technologies. Nous sommes particulièrement intéressés par la façon dont les gens utilisent les assistants vocaux et par comment ils écoutent du contenu comme Pocket ou des podcasts. Nous aimerions en savoir plus sur la façon dont vous utilisez cette technologie et sur la façon dont un assistant vocal ou une interface utilisateur vocale (VUI) pourrait améliorer vos expériences sur le Web. Nous mènerons une série d’entrevues à domicile et de séances de conception participatives. Aucune utilisation d’assistant vocal n’est requise au préalable !

Nous aimerions rencontrer des gens en personne à :

Paris : du 3 au 6 juillet 2018,
Munich : du 9 au 13 juillet 2018,
Dresde : du 16 au 20 juillet 2018.

Si vous êtes intéressé(e)s à participer à nos interviews à domicile (2 heures) ou à des sessions de conception participative (1,5 heure), faites le nous savoir ! Nous aimerions vous rencontrer !

Si vous souhaitez nous rencontrer à Paris, veuillez remplir ce formulaire.
Si vous souhaitez nous rencontrer en Allemagne, veuillez remplir ce formulaire.

Toutes les informations seront conservées dans la politique de confidentialité de Mozilla.


 

Paris, München, & Dresden: Bitte helfen Sie uns, dem Internet eine Stimme zu geben!

Im Juli wird unser Voice Assistant Team in Frankreich und Deutschland sein, um Technologie-Akzeptanz und -Vertrauen zu erkunden. Uns interessiert besonders, wie Menschen Sprachassistenten benutzen und wie Menschen Inhalte wie Pocket und Podcasts hören. Wir würden gerne mehr darüber erfahren, wie Sie Technologie benutzen und wie ein Sprachassistent oder eine Sprachbenutzeroberfläche (VUIs) Ihr Internet- und freies Weberlebnis verbessern könnte. Wir werden eine Reihe von In-Home-Interviews und partizipativen Design-Sessions durchführen. Keine vorherige Erfahrung des Sprachassistenten erforderlich!

Wir würden uns freuen Sie persönlich zu treffen in:

Paris: 3. – 6. Juli 2018;
München: 9. – 13. Juli 2018;
Dresden: 16. – 20. Juli 2018.

Wenn Sie daran interessiert sind, an unseren In-Home-Interviews (2 Stunden) oder partizipativen Design-Sessions (1,5 Stunden) teilzunehmen, lassen Sie es uns bitte wissen! Wir würden uns freuen, Sie persönlich zu treffen!

Wenn Sie Interesse haben, uns in Paris zu treffen, füllen Sie bitte dieses Formular aus.
Wenn Sie Interesse haben, uns in Deutschland zu treffen, füllen Sie bitte dieses Formular aus.

Alle Informationen werden unter der Datenschutzerklärung von Mozilla gespeichert.

The post Paris, Munich, & Dresden: Help Us Give the Web a Voice! appeared first on Mozilla Open Design.

Michael ComellaFixing Content Scripts on GitHub.com

When writing a WebExtension using content scripts on GitHub.com, you’ll quickly find they don’t work as expected: the content scripts won’t run when clicking links, e.g. when clicking into an issue from the issues list.

Content scripts ordinarily reload for each new page visited but, on GitHub, they don’t. This is because links on GitHub mutate the DOM and use the history.pushState API instead of loading pages the standard way, which would create an entirely new DOM per page.

I wrote a content script to fix this, which you can easily drop into your own WebExtensions. The script works by adding a MutationObserver that will check when the DOM has been updated, thus indicating that the new page has been loaded from a user’s perspective, and notify the WebExtension about this event.
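The core of the approach looks roughly like this (a simplified sketch rather than the script itself; the onPageChange callback stands in for whatever your content script does per page):

  // Treat substantial DOM mutations as "the user navigated to a new page".
  function onPageChange() {
    // Re-run the per-page logic of your content script here.
  }

  const observer = new MutationObserver((mutations) => {
    if (mutations.some((m) => m.type === "childList" && m.addedNodes.length > 0)) {
      onPageChange();
    }
  });
  observer.observe(document.body, { childList: true, subtree: true });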

If you want to try it out, the source is on GitHub. You can also check out a sample.

Alternatives

The seemingly “correct” approach would be to create a history.pushState observer using the webNavigation.onHistoryStateUpdated listener. However, this listener does not work as expected: it’s called twice – once before the DOM has been mutated and once after – and I haven’t found a good way to distinguish them other than to look at changes in the DOM, which is already the approach my solution takes.

Dave TownsendSearchfox in VS Code

I spend most of my development time flipping back and forth between VS Code and Searchfox. VS Code is a great editor but it has nowhere near the speed needed to do searches over the entire tree, at least on my machine. Searchfox on the other hand is pretty fast. But there’s something missing. I usually want to search Searchfox for something I found in the code. Then I want to get the file I found in Searchfox open in my editor.

Luckily VS Code has a decent extension system that allows you to add new features, so I spent some time yesterday evening building an extension to integrate some of Searchfox’s functionality into VS Code. With the extension installed you can search Searchfox for something from the code editor or pop open an input box to write your own query. The results show up right in VS Code.

A screenshot of Searchfox displayed in VS Code

Click on a result in Searchfox and it will open the file in an editor in VS Code, right at the line you wanted to see.

It’s pretty early code so the usual disclaimers apply, expect some bugs and don’t be too surprised if it changes quite a bit in the near-term. You can check out the fairly simple code (rendering the Searchfox page is the hardest part of it) on Github.
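To give a flavour of what's involved (a sketch under assumed names, not the actual extension code), registering a command that searches Searchfox for the current selection and shows the results in a webview looks roughly like this:

  import * as vscode from "vscode";

  export function activate(context: vscode.ExtensionContext) {
    const cmd = vscode.commands.registerCommand("searchfox.searchSelection", () => {
      const editor = vscode.window.activeTextEditor;
      if (!editor) {
        return;
      }
      const query = editor.document.getText(editor.selection);
      const panel = vscode.window.createWebviewPanel(
        "searchfox",
        `Searchfox: ${query}`,
        vscode.ViewColumn.Two,
        { enableScripts: true }
      );
      // Rendering the Searchfox page properly is the hard part; an iframe of
      // the public site is used here purely as a placeholder.
      panel.webview.html = `<iframe style="width:100%;height:100vh;border:0"
        src="https://searchfox.org/mozilla-central/search?q=${encodeURIComponent(query)}"></iframe>`;
    });
    context.subscriptions.push(cmd);
  }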

If you want to give it a try, install the extension from the VS Code Marketplace or find it by searching for “Searchfox” in VS Code itself. Feel free to file issues for bugs or improvements that would be useful or of course submit pull requests of your own! I’d love to hear if you find it useful.

Zibi BranieckiPseudolocalization in Firefox

One of the core projects we did over 2017 was a major overhaul of the Localization and Internationalization layers in Gecko, and all throughout the first half of 2018 we were introducing Fluent into Firefox.

All of that work was “behind the scenes” and laid the foundation to enable us to bring higher level improvements in the future.

Today, I’m happy to announce that the first of those high-level features has just landed in Firefox Nightly!

Pseudolocalization?

Pseudolocalization is a technology allowing for testing the localizability of software UI. It allows developers to check how the UI they are working on will look when translated, without having to wait for translations to become available.

It shortens the Test-Driven Development cycle and lowers the burden of creating localizable UI.

Here’s a demo of how it works:

How to turn it on?

At the moment, we don’t have any UI for this feature. You need to create a new preference called intl.l10n.pseudo and set its value to accented for a left-to-right, ~30% longer strategy, or bidi for a right-to-left strategy. (more documentation).

If you test the bidi strategy you also will likely want to switch another preference – intl.uidirection – to 1. This is because right now the directionality of text and layout are not connected. We will improve that in the future.
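If you prefer to flip these from the Browser Console rather than editing them by hand in about:config, something like the following should do it (a sketch, assuming a chrome-privileged console where Services is available):

  // Enable the accented (or "bidi") pseudolocalization strategy.
  Services.prefs.setStringPref("intl.l10n.pseudo", "accented");
  // Only needed for the bidi strategy, to flip the layout direction too.
  Services.prefs.setIntPref("intl.uidirection", 1);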

We’ll be looking into ways to expose this functionality in the UI, and if you have any ideas or suggestions for what you’d like to see, let’s talk!

Nitty-gritty details

Although the feature may seem simple to add, and the actual patch that adds it was less than 100 lines long, it took many years of prototyping and years of development to build the foundation layers to allow for it.

Many of the design principles of Project Fluent combined with the vision shaped by the L10n Drivers Team at Mozilla allowed for dynamic runtime locale switching and declarative UI localization bindings.

Thanks to all of that work, we don’t have to require special builds or increase the bundle size for this feature to work. It comes practically for free and we can extend and fine-tune pseudolocalization strategies on the fly.

Kudos

If that feature looks cool, in the esoteric way localization and internationalization can, please, make sure to high-five the people who put a lot of work to get this done: Staś Małolepszy, Axel Hecht, Francesco Lodolo, Jeff Beatty and Dave Townsend.

More features are coming! Stay tuned.

Air MozillaReps Weekly Meeting, 07 Jun 2018

Reps Weekly Meeting This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

Daniel GlazmanBrowser detection inside a WebExtension

Just for the record, if you really need to know about the browser container of your WebExtension, do NOT rely on StackOverflow answers... Most of them are based, directly or not, on the User Agent string. So spoofable, so unreliable. Some will recommend to rely on a given API, implemented by Firefox and not Edge, or Chrome and not the others. In general valid for a limited time only... You can't even rely on chrome, browser or msBrowser since there are polyfills for that to make WebExtensions cross-browser.

So the best and cleanest way is probably to rely on chrome.extension.getURL("/"). It can start with "moz", "chrome" or "ms-browser". Unlikely to change in the near future. Simple to code, works in both content and background.
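A minimal sketch of that check, inspecting the scheme of the extension's own base URL instead of sniffing the User Agent string:

  const base = chrome.extension.getURL("/"); // e.g. "moz-extension://<uuid>/"
  let family = "unknown";
  if (base.startsWith("moz-extension:")) {
    family = "firefox";
  } else if (base.startsWith("chrome-extension:")) {
    family = "chromium";
  } else if (base.startsWith("ms-browser-extension:")) {
    family = "edge";
  }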

My pleasure :-)

Mozilla Open Innovation TeamMore Common Voices

Today we are excited to announce that Common Voice, Mozilla’s initiative to crowdsource a large dataset of human voices for use in speech technology, is going multilingual! Thanks to the tremendous efforts from Mozilla’s communities and our deeply engaged language partners you can now donate your voice in German, French and Welsh, and we are working to launch 40+ more as we speak. But this is just the beginning. We want Common Voice to be a tool for any community to make speech technology available in their own language.

Since we launched Common Voice last July, we have collected hundreds of thousands of voice samples in English through our website and iOS app. Last November, we published the first version of the Common Voice dataset. This data has been downloaded thousands of times, and we have seen the data being used in commercial voice products as well as open-source software like Kaldi and our very own speech recognition engine, project Deep Speech.

Up until now, Common Voice has only been available for voice contributions in English. But the goal of Common Voice has always been to support many languages so that we may fulfill our vision of making speech technology more open, accessible, and inclusive for everyone. That is why our main effort these last few months has been around growing and empowering individual language communities to launch Common Voice in their parts of the world, in their local languages and dialects.

In addition to localizing the website, these communities are populating Common Voice with copyright-free sentences for people to read that have the required characteristics for a high quality dataset. They are also helping promote the site in their countries, building a community of contributors, with the goal of growing the total number of hours of data available in each language.

In addition to English, we are now collecting voice samples in French, German and Welsh. And there are already more than 40 other languages on the way — not only big languages like Spanish, Chinese or Russian, but also smaller ones like Frisian, Norwegian or Chuvash. For us, these smaller languages are important because they are often under-served by existing commercial speech recognition services. And so by making this data available, we can empower entrepreneurs and communities to address this gap on their own.

Going multilingual marks a big step for Common Voice and we hope that it’s also a big step for speech technology in general. Democratizing voice technology will not only lower the barrier for global innovation, but also the barrier for access to information. Especially so for people who traditionally have had less of this access — for example, vision impaired, people who never learned to read, children, the elderly and many others.

We are thrilled to see the growing support we are getting to build the world’s largest public, multi-language voice dataset. You can help us grow it right now by donating your voice. You can also use the iOS app. If you would like to help bring Common Voice and speech technology to your language, visit our language page. And if you are part of an organization and have an idea for participating in this project, please get in touch (dchinniah@mozilla.com).

Our Forum gives more details on how to help, as well as being a great place to ask questions and meet the communities.

Special Thanks

We would like to thank our Speech Advisory Group, people who have been expert advisors and contributors to the Common Voice project:

  • Francis Tyers — Assistant Professor of Computational Linguistics at Higher School of Economics in Moscow
  • Gilles Adda — Speech scientist
  • Thomas Griffiths — Digital Services Officer, Office of the Legislative Assembly, Australia
  • Joshua Meyer — PhD candidate in Speech Recognition
  • Delyth Prys — Language technologies at Bangor University research center
  • Dewi Bryn Jones — Language technologies at Bangor University research center
  • Wael Farhan — MS in Machine Learning from UCSD, currently doing research for Arabic NLP at Mawdoo3.com
  • Eren Gölge — Machine learning scientist currently working on TTS for Mozilla
  • Alaa Saade — Senior Machine Learning Scientist @ Snips (Paris)
  • Laurent Besacier — Professor at Université Grenoble Alpes, NLP, speech processing, low resource languages
  • David van Leeuwen — Speech Technologist
  • Benjamin Milde — PhD candidate in NLP/speech processing
  • Shay Palachy — M.Sc. in Computer Science, Lead Data Scientist in a startup

***

Common Voice complements Mozilla’s work in the field of speech recognition, which runs under the project name Deep Speech, an open-source speech recognition engine model that approaches human accuracy, which was released in November 2017. Together with the growing Common Voice dataset we believe this technology can and will enable a wave of innovative products and services, and that it should be available to everyone.


More Common Voices was originally published in Mozilla Open Innovation on Medium, where people are continuing the conversation by highlighting and responding to this story.

The Mozilla BlogParlez-vous Deutsch? Rhagor o Leisiau i Common Voice

We’re very proud to be announcing the next phase of the Common Voice project. It’s now available for contributors in three new languages, German, French and Welsh, with 40+ other languages on their way! But this is just the beginning. We want Common Voice to be a tool for any community to make speech technology available in their own language.

Speech interfaces are the next frontier for the Internet. Project Common Voice is our initiative to build a global corpus of open voice data to be used to train machine-learning algorithms to power the voice interfaces of the future. We believe these interfaces shouldn’t be controlled by a few companies as gatekeepers to voice-enabled services, and we want users to be understood consistently, in their own languages and accents.

As anyone who has studied the economics of the Internet knows, services chase money. And so it’s quite natural that developers and publishers seek to develop for the audience that will best reward their efforts. What we see as a consequence is an Internet that is heavily skewed towards English, in a world where English is only spoken by 20% of the global population, and only 5% natively. This is increasingly going to be an accessibility issue, as Wired noted last year, “Voice Is the Next Big Platform, Unless You Have an Accent”.

Inevitably, English is becoming a global language, spoken more and more widely, and this is a trend that was underway before the emergence of the Internet. However, the skew of Internet content to English is certainly accelerating this. And while global communications may be becoming easier, there is also a cultural wealth that we should preserve. Native languages provide a deeper shared cultural context, down to the level of influencing our thought patterns. This is a part of our humanity we surely wish to retain and support with technology. In doing so, we’re upholding a proud Mozilla tradition of enabling local ownership by a global community: Firefox is currently offered in 90 languages (and counting), powered by volunteers near you.

Common Voice contribution sprints in Berlin (credit: Michael Kohler), Mexico City (credit: Luis A. Sánchez), Jakarta (credit: Irayani Queencyputri) and Taipei (credit: Irvin Chen), from the top left to the bottom right

With Common Voice it’s the same volunteer passion that drives the project further and we’re grateful for all contributors who already said, “We want to help bring speech recognition technology to our part of the world – what can we do?”. It is the underlying stories that also make this project so rewarding for me personally:

In Indonesia 20 community members came to our community space in Jakarta for a meet-up to write up sentences for the text corpus that will become the basis for voice recordings. They went into overdrive and submitted around 4,000 sentences within two days.

In Kenya a group of volunteers interested in Mozilla projects found out about Common Voice and started both localizing the website and submitting sentences in Swahili, Jibana and Kikuyu, all highly underrepresented languages, which we’re extremely happy to support. This is in addition to working with language experts in these communities like Laurent Besacier, the initiator of ALFFA, an interdisciplinary project bundling resources and expertise in speech analysis and speech technologies for African languages.

If we look at the country where I’m from, there has been one particular contributor to the Common Voice github project since the very early days. He originally contributed to the English effort, but he is German and wanted to see Common Voice come to Germany. He set himself on a strict schedule, wrote a few sentences every day for the next 6 months (while commuting to school or work), and collected 11,000 (!) sentences, ranging from poetry to day-to-day conversations.

Speaking of which: Another German contributor joined the Global Sprint in our Berlin office, utterly frustrated about a lengthy but fruitless discussion at the post office (Sounds familiar, Germany?). He may not have gotten his package, but I’d like to believe he had his personal cathartic moment when he submitted his whole experience in written form. Now Germans everywhere will help him voice his frustrations.

These are only a few of many wonderful examples from around the world – Taiwan, Slovenia, Macedonia, Hungary, Brazil, Serbia, Thailand, Spain, Nepal, and many more. They show that anyone can help grow the Common Voice project. Any individual or organization that has an interest in its native language, or an interest in open voice interfaces, will find it worth their while. You can contribute your voice at https://voice.mozilla.org/en/languages, or if you have a larger corpus of transcribed speech data, we’d love to hear from you.

***

Common Voice complements Mozilla’s work in the field of speech recognition, which runs under the project name Deep Speech, an open-source speech recognition engine model that approaches human accuracy, which was released in November 2017. Together with the growing Common Voice dataset we believe this technology can and will enable a wave of innovative products and services, and that it should be available to everyone.

The post Parlez-vous Deutsch? Rhagor o Leisiau i Common Voice appeared first on The Mozilla Blog.

Mozilla B-TeamHappy BMO Push Day!

David LawrenceHappy BMO Push Day!

release tag

the following changes have been pushed to bugzilla.mozilla.org:

  • [1430905] Remove legacy phabbugz code that is no longer needed
  • [1466159] crash graph is wrong
  • [1466122] Change “Reviews Requested of You” to show results are from Phabricator and not from BMO
  • [1465889] form.dev-engagement-event field should be red instead of black

discuss these changes on mozilla.tools.bmo.

Air MozillaBugzilla Project Meeting, 06 Jun 2018

Bugzilla Project Meeting The Bugzilla Project Developers meeting.

Armen ZambranoAreWeFastYet UI refresh

For a long time Mozilla’s JS team and others have been using https://arewefastyet.com to track the JS engine performance against various benchmarks.

Screenshot of landing page

In the last little while, there’s been work moving those benchmarks to another continuous integration system and we have the metrics in Mozilla’s Perfherder. This rewrite will focus on using the new generated data.

If you’re curious on the details about the UI refresh please visit this document. Feel free to add feedback. Stay tuned for an update next month.

Air MozillaThe Joy of Coding - Episode 141

The Joy of Coding - Episode 141 mconley livehacks on real Firefox bugs while thinking aloud.

Air MozillaWeekly SUMO Community Meeting, 06 Jun 2018

Weekly SUMO Community Meeting This is the SUMO weekly call

Mark CôtéPhabricator and Lando Launched

The Engineering Workflow team at Mozilla is happy to announce that Phabricator and Lando are now ready for use with mozilla-central! This represents about a year of work integrating Phabricator with our systems and building out Lando.

There are more details in my post to the dev.platform list.

The Firefox FrontierA Socially Responsible Way to Internet

Choices matter. That might sound flippant or obvious, but we’re not just talking about big, life-changing decisions. Little choices — daily choices — add up in a big way, online and off. This is … Read more

The post A Socially Responsible Way to Internet appeared first on The Firefox Frontier.

Gervase MarkhamA Case for the Total Abolition of Software Patents

A little while back, I wrote a piece outlining the case for the total abolition (or non-introduction) of software patents, as seen through the lens of “promoting innovation”. Few of the arguments are new, but the “Narrow Road to Patent Goodness” presentation of the information is quite novel as far as I know, and may form a good basis for anyone trying to explain all the possible problems with software (or other) patents.

You can find it on my website.

William LachanceMission Control 1.0

Just a quick announcement that the first “production-ready” version of Mission Control just went live yesterday, at this easy-to-remember URL:

https://missioncontrol.telemetry.mozilla.org

For those not yet familiar with the project, Mission Control aims to track release stability and quality across Firefox releases. It is similar in spirit to arewestableyet and other crash dashboards, with the following new and exciting properties:

  • Uses the full set of crash counts gathered via telemetry, rather than the arbitrary sample that users decide to submit to crash-stats
  • Results are available within minutes of ingestion by telemetry (although be warned initial results for a release always look bad)
  • The denominator in our crash rate is usage hours, rather than the probably-incorrect calculation of active-daily-installs used by arewestableyet (not a knock on the people who wrote that tool, there was nothing better available at the time)
  • We have a detailed breakdown of the results by platform (rather than letting Windows results dominate the overall rates due to its high volume of usage)

In general, my hope is that this tool will provide a more scientific and accurate idea of release stability and quality over time. There’s lots more to do, but I think this is a promising start. Much gratitude to kairo, calixte, chutten and others who helped build my understanding of this area.

The dashboard itself is an easier thing to show than to talk about, so I recorded a quick demonstration of some of the dashboard’s capabilities and published it on air mozilla:

link

Daniel PocockPublic Money Public Code: a good policy for FSFE and other non-profits?

FSFE has been running the Public Money Public Code (PMPC) campaign for some time now, requesting that software produced with public money be licensed for public use under a free software license. You can request a free box of stickers and posters here (donation optional).

Many non-profits and charitable organizations receive public money directly from public grants and indirectly from the tax deductions given to their supporters. If the PMPC argument is valid for other forms of government expenditure, should it also apply to the expenditures of these organizations too?

Where do we start?

A good place to start could be FSFE itself. Donations to FSFE are tax deductible in Germany, the Netherlands and Switzerland. Therefore, the organization is partially supported by public money.

Personally, I feel that for an organization like FSFE to be true to its principles and its affiliation with the FSF, it should be run without any non-free software or cloud services.

However, in my role as one of FSFE's fellowship representatives, I proposed a compromise: rather than my preferred option, an immediate and outright ban on non-free software in FSFE, I simply asked the organization to keep a register of dependencies on non-free software and services, by way of a motion at the 2017 general assembly:

The GA recognizes the wide range of opinions in the discussion about non-free software and services. As a first step to resolve this, FSFE will maintain a public inventory on the wiki listing the non-free software and services in use, including details of which people/teams are using them, the extent to which FSFE depends on them, a list of any perceived obstacles within FSFE for replacing/abolishing each of them, and for each of them a link to a community-maintained page or discussion with more details and alternatives. FSFE also asks the community for ideas about how to be more pro-active in spotting any other non-free software or services creeping into our organization in future, such as a bounty program or browser plugins that volunteers and staff can use to monitor their own exposure.

Unfortunately, it failed to receive enough votes (minutes: item 24, votes: 0 for, 21 against, 2 abstentions).

In a blog post on the topic of using proprietary software to promote freedom, FSFE's Executive Director Jonas Öberg used the metaphor of taking a journey. Isn't a journey more likely to succeed if you know your starting point? Wouldn't it be even better having a map that shows which roads are a dead end?

In any IT project, it is vital to understand your starting point before changes can be made. A register like this would also serve as a good model for other organizations hoping to secure their own freedoms.

For a community organization like FSFE, there is significant goodwill from volunteers and other free software communities. A register of exposure to proprietary software would allow FSFE to crowdsource solutions from the community.

Back in 2018

I'll be proposing the same motion again for the 2018 general assembly meeting in October.

If you can see something wrong with the text of the motion, please help me improve it so it may be more likely to be accepted.

Offering a reward for best practice

I've observed several discussions recently where people have questioned the impact of FSFE's campaigns. How can we measure whether the campaigns are having an impact?

One idea may be to offer an annual award for other non-profit organizations, outside the IT domain, who demonstrate exemplary use of free software in their own organization. An award could also be offered for some of the individuals who have championed free software solutions in the non-profit sector.

An award program like this would help to showcase best practice and provide proof that organizations can run successfully using free software. Seeing compelling examples of success makes it easier for other organizations to believe freedom is not just a pipe dream.

Therefore, I hope to propose an additional motion at the FSFE general assembly this year, calling for an award program to commence in 2019 as a new phase of the PMPC campaign.

Please share your feedback

Any feedback on this topic is welcome through the FSFE discussion list. You don't have to be a member to share your thoughts.

The Mozilla BlogFacebook Must Do Better

The recent New York Times report alleging expansive data sharing between Facebook and device makers shows that Facebook has a lot of work to do to come clean with its users and to provide transparency into who has their data. We raised these transparency issues with Facebook in March and those concerns drove our decision to pause our advertising on the platform. Despite congressional testimony and major PR campaigns to the contrary, Facebook apparently has yet to fundamentally address these issues.

In its response, Facebook has argued that device partnerships are somehow special and that the company has strong contracts in place to prevent abuse. While those contracts are important, they don’t remove the need to be transparent with users and to give them control. Suggesting otherwise, as Facebook has done here, indicates the company still has a lot to learn.

The post Facebook Must Do Better appeared first on The Mozilla Blog.

Henrik SkupinMy 15th Bugzilla account anniversary

Exactly 15 years ago, at “2003-06-05 09:51:47 PDT”, my journey in Bugzilla started. At the time when I created my account I would never have imagined where all these endless hours of community work would end up. And even now I cannot predict what it will look like in another 15 years…

Here are some stats from my activities on Bugzilla:

Bugs filed 4690
Comments made 63947
Assigned to 1787
Commented on 18579
QA-Contact 2767
Patches submitted 2629
Patches reviewed 3652

 

Air Mozilla[Town Hall] MoFo OKR

[Town Hall] MoFo OKR MoFo Town Hall with representatives from each OKR to level-set on our progress and learnings related to our H1 Key Results in advance of Toronto...

Firefox Test PilotIntroducing Firefox Color and Side View

We’re excited to launch two new Test Pilot experiments that add power and style to Firefox.


Side View enables you to multitask with Firefox like never before by letting you keep two websites open side by side in the same window.


Firefox Color makes it easy to customize the look and feel of your Firefox browser. With just a few clicks you can create beautiful Firefox themes all your own.

Both experiments are available today from Firefox Test Pilot. Try them out, and don’t forget to give us feedback. You’re helping to shape the future of Firefox!


Introducing Firefox Color and Side View was originally published in Firefox Test Pilot on Medium, where people are continuing the conversation by highlighting and responding to this story.

The Firefox FrontierIt’s A New Firefox Multi-tasking Extension: Side View

Introducing Side View! Side View is a Firefox extension that allows you to view two different browser tabs simultaneously in the same tab, within the same browser window. Firefox Extensions … Read more

The post It’s A New Firefox Multi-tasking Extension: Side View appeared first on The Firefox Frontier.

The Firefox FrontierGet All the Color, New Firefox Extension Announced

Remember when you were a kid and wanted to paint your room your favorite color? Or the first time you dyed your hair a different color and couldn’t wait to … Read more

The post Get All the Color, New Firefox Extension Announced appeared first on The Firefox Frontier.

The Mozilla BlogLatest Firefox Test Pilot Experiments: Custom Color and Side View

Before we bring new features to Firefox, we give them a test run to make sure they’re right for our users. To help determine which features we add and how exactly they should work, we created the Test Pilot program.

Since the launch of Test Pilot, we have experimented with 16 different features, and three have graduated to live in Firefox full time: Activity Stream, Containers and Screenshots. Recently, Screenshots surpassed 100 million screenshots taken since it launched, thanks to active Firefox users who opt to take part in Test Pilot experiments.

This week, the Test Pilot team is continuing to evolve Firefox features with two new extensions that will offer users a more customizable and productive browsing experience.

Introducing Firefox Color

One of the greatest things about the Test Pilot program is being the first to try new things. Check out this video from one of our favorite extensions we’re hoping you’ll try out:

 

The new Firefox Color extension allows you to customize several different elements of your browser, including background texture, text, icons, the toolbar and highlights.

Color changes update automatically to the browser as you go, so you can mix and match until you find the perfect combo to suit you. Whether you like to update with the seasons, or rep your favorite sports team during playoffs, the new color extension makes it simple to customize your Firefox experience to match anything from your mood or your wardrobe.

You can also save and share your creation with others, or choose from a variety of pre-installed themes.

Introducing Side View

Side View is a great tool for comparison shopping, allowing you to easily scope the best deals on flights or home appliances, and eliminating the need to switch between two separate web pages. It also allows you to easily compare news stories or informational material–perfect for writing a research paper or conducting side-by-side revisions of documents all in one window.

The Side View extension allows you to view two different browser tabs in the same tab, within the same browser window.

You can also opt for Side View to log your recent comparisons, so you can easily access frequently viewed pages.

 

Join our Test Pilot Program

The Test Pilot program is open to all Firefox users. Your feedback helps us test and evaluate a variety of potential Firefox features. We have more than 100k daily participants, and if you’re interested in helping us build the future of Firefox, visit testpilot.firefox.com to learn more. We’ve made it easy for anyone around the world to join with experiments available in 48 languages.

If you’re familiar with Test Pilot then you know all our projects are experimental, so we’ve made it easy to give us feedback or disable features at any time from testpilot.firefox.com.

Check out the new Color and Side View extensions. We’d love to hear your feedback!

The post Latest Firefox Test Pilot Experiments: Custom Color and Side View appeared first on The Mozilla Blog.

Kim MoirManagement books in review

I became a manager of a fantastic team in February.  My standard response to a new role is to read many books and talk to a lot of people who have experience in this area so I have the background to be successful.  Nothing can prepare you like doing the actual day to day work of being a manager, but these are some concepts I found helpful from my favourite books on this topic:


  1. Accelerate: Building and Scaling High Performing Technology Organizations by Nicole Forsgren, PhD, Jez Humble and Gene Kim.  The main topic of this book is not leadership, but many of the chapters deal with how to lead effective teams.
    • Lean management practices entail:
      • Limiting work in process (WIP) and using these limits to drive process improvement and increase throughput
      • Creating and maintaining visual displays showing key productivity metrics and the current status of work that are available to both engineers and managers
      • Using data from application performance and infrastructure monitoring to make business decisions on a daily basis
    • Definition of burnout:


  • Complex and painful deployment strategies that must be performed outside of business hours lead to stress and a lack of control, which can increase burnout
  • Tips to improve culture and support your teams
    • Build trust with counterparts on other teams
    • Encourage people to move between departments
    • Actively seek, encourage and reward work that facilitates collaboration
    • Create a training budget and advocate for using it
    • Ensure your team has the resources and space to engage in informal learning and explore new ideas
    • Make it safe to fail.  Many projects fail.  Learning from them and holding blameless post-mortems helps people understand that failure is okay and makes them feel safe taking risks
    • Share information with others
      • Lightning talks, lunch and learns, and demo days allow teams to share their work and celebrate what they have accomplished
    • Allow teams to choose their tools
    • Make monitoring a priority

 

  2. Radical Focus by Christina Wodtke is a book about OKRs, written in a fable format similar to the one The Phoenix Project uses for DevOps.  It’s the story of a struggling startup and how it had to focus on priorities.
  • If you were hired as a new CEO, what would you do?  Is this a different direction than the current strategy the company is pursuing?
  • How to specify OKRs with measurable results
    • “You don’t need people to work more, you need people to work on the right things”  Look at the work you are doing every week and see how it will impact reaching OKRs.
  • OKR fundamentals
    • The sentence that describes the Objective should be
      • Able to inspire a sense of meaning and purpose. Use the language of your team to write the objective
      • Bound by time – able to be completed in a set amount of time, like a month or a quarter
      • Accountable by the team independently.  Completing it shouldn’t be blocked by the work of another team.
  • The Key results answer the question – “How do we know we met our results?”
    • Measurable – You can measure opposing forces like growth and performance, or revenue and quality.  For instance, a growth metric doesn’t matter if your customers don’t continue to use the product after trying it once

 

  3. Be the Boss Everyone Wants to Work For by William Gentry.


I read this book as part of the management book club we have at Mozilla where we discuss a chapter of a book on management every two weeks.

  • The overarching theme of this book is to “flip your mindset”.  As an individual contributor, your role was to do your individual work to the best of your abilities.
  • As a manager, it’s not about you, it’s about your team.  Your role is to make your team as effective as it can be, and to help individual team members learn new skills and develop their careers, all while meeting the company’s business objectives.
  • An effective team has
    • Direction
      • Team members should agree on the same goals and that this work is worth pursuing
    • Alignment
      • Everyone on the team knows their role and what others are doing.  Everyone knows what “meets expectations” means and what “excellent” performance looks like.  If people feel isolated in their work, don’t know what is happening, or have different opinions about what constitutes excellent performance, the team is lacking alignment.
    • Commitment
      • Each person is committed to the team and dedicated to the work.

 

  4. Turn the Ship Around! by David Marquet is an account of leading the crew of an under-performing nuclear submarine on their path to becoming one of the most improved ships in the fleet.


  • The main theme of this book is to push decision making down to lower and lower levels of the organization.
    • The mechanisms to strengthen technical competence are:
      • Take deliberate action
      • We learn all the time and from everywhere
      • Don’t just brief people on what to do, certify that it has been done
        • The person in charge of the team asks questions.  At the end of the certification, a decision is made whether the team is ready to perform the operation.
      • Continually and consistently repeat the message
      • Specify goals, not methods
    • I liked the chapter on clarity a lot.
      • Achieve excellence, don’t just avoid errors
      • Build trust and take care of your people
      • Use guiding principles for decision criteria
      • Immediate recognition to reinforce desired behaviours
      • Begin with the end in mind
        • For instance, if you wanted to reduce the number of operational errors, quantify the end metric you want to reach
      • Encourage asking questions instead of blind obedience

 

  5. The First 90 Days by Michael D. Watkins. It describes the skills you need to learn to transition to a new job in management, whether as a line manager of a team or as a new CTO.


  • The purpose of the book is to describe how to successfully accelerate through leadership and career transitions
  • Traps during career transitions
    • Using skills from your previous job that don’t apply to your new one
    • Changing things before gaining trust of your team or understanding the scope of work
    • Neglecting relationships with peers
  • Keys to a successful transition
    • Figure out what you need to learn, learn it quickly, and prioritize
    • Match your strategy to the situation
    • Secure early wins
      • expands your credibility and creates momentum
    • Negotiate success
      • You have a new job with new responsibilities, how is success defined, how will you be evaluated in your new role?
  • Being hired as an external leader is more difficult than being promoted from within.
    • Lack of familiarity with internal networks, corporate culture, lack of credibility
    • To overcome this, take a business orientation to understand the business, develop the right relationships, and align expectations.  Understand the latitude you have to make changes.
  • Businesses are in different states and this impacts the strategy you employ (STARS strategy)
    • Startup
    • Turnaround
    • Accelerated Growth
    • Realignment
    • Sustaining Success


  • It also includes a checklist on managing yourself as a leader


 

  6. The Progress Principle by Teresa Amabile and Steven Kramer.


The central thesis of this book is that “the best way to motivate people, day in and day out, is by facilitating progress—even small wins.”

  • “People are more creative and productive when they are deeply engaged in the work, when they feel happy, and when they think highly of their projects, coworkers, managers and organizations.”
    • This translates into performance benefits for the company.
    • Ongoing progress makes people more motivated
    • Days with setbacks lead to apathy and a lack of motivation
  • As an aside, I always think of this when onboarding new employees or interns.  If you can get them to land a small change in the code base in the first few days they will feel like they are making progress.  If you hand them a giant project that hasn’t been broken down into smaller projects, it can become overwhelming and lead to a lack of progress and frustration.
  • How to nourish a team


 

  7. I have written previously about Camille Fournier’s The Manager’s Path. I really enjoy the structure of the book, as it describes the path from individual contributor to CTO. It also addresses specific issues related to engineering management. I have recommended this book to many people and refer to it often.


 

  8. I also wrote about How F*cked Up Is Your Management? by Johnathan Nightingale and Melissa Nightingale in this blog post. Delightful yet raw stories of management in the trenches of various tech companies.

What books on management have you found insightful?

Hacks.Mozilla.OrgOverscripted! Digging into JavaScript execution at scale

This research was conducted in partnership with the UCOSP (Undergraduate Capstone Open Source Projects) initiative. UCOSP facilitates open source software development by connecting Canadian undergraduate students with industry mentors to practice distributed development and data projects.

The team consisted of the following Mozilla staff: Martin Lopatka, David Zeber, Sarah Bird, Luke Crouch, Jason Thomas

2017 student interns — crawler implementation and data collection: Ruizhi You, Louis Belleville, Calvin Luo, Zejun (Thomas) Yu

2018 student interns — exploratory data analysis projects: Vivian Jin, Tyler Rubenuik, Kyle Kung, Alex McCallum


As champions of a healthy Internet, we at Mozilla have been increasingly concerned about the current advertisement-centric web content ecosystem. Web-based ad technologies continue to evolve increasingly sophisticated programmatic models for targeting individuals based on their demographic characteristics and interests. The financial underpinnings of the current system incentivise optimizing on engagement above all else. This, in turn, has evolved an insatiable appetite for data among advertisers aggressively iterating on models to drive human clicks.

Most of the content, products, and services we use online, whether provided by media organisations or by technology companies, are funded in whole or in part by advertising and various forms of marketing.

–Timothy Libert and Rasmus Kleis Nielsen [link]

We’ve talked about the potentially adverse effects on the Web’s morphology and how content silos can impede a diversity of viewpoints. Now, the Mozilla Systems Research Group is raising a call to action. Help us search for patterns that describe, expose, and illuminate the complex interactions between people and pages!

Inspired by the Web Census recently published by Steven Englehardt and Arvind Narayanan of Princeton University, we adapted the OpenWPM crawler framework to perform a comparable crawl gathering a rich set of information about the JavaScript execution on various websites. This enables us to delve into further analysis of web tracking, as well as a general exploration of client-page interactions and a survey of different APIs employed on the modern Web.

In short, we set out to explore the unseen or otherwise not obvious series of JavaScript execution events that are triggered once a user visits a webpage, and all the first- and third-party events that are set in motion when people retrieve content. To help enable more exploration and analysis, we are releasing our full set of data about JavaScript executions as open data.

The following sections will introduce the data set, how it was collected and the decisions made along the way. We’ll share examples of insights we’ve discovered and we’ll provide information on how to participate in the associated “Overscripted Web: A Mozilla Data Analysis Challenge”, which we’ve launched today with Mozilla’s Open Innovation Team.

The Dataset

In October 2017, several Mozilla staff and a group of Canadian undergraduate students forked the OpenWPM crawler repository to begin tinkering, in order to collect a plethora of information about the unseen interactions between modern websites and the Firefox web browser.

Preparing the seed list

The master list of pages we crawled in preparing the dataset was itself generated from a preliminary shallow crawl we performed in November 2017. We ran a depth-1 crawl, seeded by Alexa’s top 10,000 site list, using 4 different machines at 4 different IP addresses (all in residential non-Amazon IP addresses served by Canadian internet service providers). The crawl was implemented using the Requests Python library and collected no information except for an indication of successful page loads.

Of the 2,150,251 pages represented in the union of the 4 parallel shallow crawls, we opted to use the intersection of the four lists in order to prune out dynamically generated (e.g. personalized) outbound links that varied between them. This meant a reduction to 981,545 URLs, which formed the seed list for our main OpenWPM crawl.
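
To make this concrete, here is a minimal sketch of a depth-1 precrawl with Requests followed by intersection pruning. It is illustrative only: the helper names are invented, and the real crawl ran on 4 machines, handled failures, and recorded successful page loads in more detail than shown here.

# Illustrative sketch of the depth-1 precrawl and intersection pruning
# described above. Names (HrefCollector, fetch_links, shallow_crawl) are
# invented for this example and are not the actual crawler code.
import requests
from urllib.parse import urljoin
from html.parser import HTMLParser

class HrefCollector(HTMLParser):
    """Collect href attributes from anchor tags in an HTML document."""
    def __init__(self):
        super().__init__()
        self.hrefs = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.hrefs.append(value)

def fetch_links(url):
    """Return the set of absolute http(s) links found on one seed page."""
    try:
        resp = requests.get(url, timeout=30)
    except requests.RequestException:
        return set()
    collector = HrefCollector()
    collector.feed(resp.text)
    absolute = (urljoin(url, h) for h in collector.hrefs)
    return {link for link in absolute if link.startswith("http")}

def shallow_crawl(seed_urls):
    """One replicate of the depth-1 crawl: the outbound links of every seed page."""
    links = set()
    for url in seed_urls:
        links |= fetch_links(url)
    return links

# Four replicates (one per machine/IP in the real setup); taking the
# intersection prunes dynamically generated links that differ between them.
# replicates = [shallow_crawl(alexa_top_10k) for _ in range(4)]
# seed_list = set.intersection(*replicates)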

The Main Collection

The following workflow describes (at a high level) the collection of page information contained in this dataset.

  1. Alexa top 10k (10,000 high traffic pages as of November 1st, 2017)
  2. A precrawl using the Python Requests library visits each of those pages

    1. The Requests library requests that page
    2. That page sends a response
    3. All href tags in the response are captured to a depth of 1 (away from the Alexa page)

      1. For each of those href tags, all valid pages (URLs starting with “http”) are added to the link set.
      2. The union of the link sets (2,150,251 URLs) was examined using the Requests library in parallel, which gives us the intersection list of 981,545 URLs.
      3. The set of 981,545 URLs was passed to the deeper crawl for JavaScript analysis in parallelized form.
  3. Each of these pages was sent to our adapted version of OpenWPM to have the execution of JavaScript recorded for 10 seconds.
  4. The window.location was hashed as the unique identifier of the location where the JavaScript was executed (to ensure unique source attribution); a sketch of this idea follows the list.

    1. When OpenWPM hits content that is inside an iFrame, the location of the content is reported.
    2. Since we use the window.location to determine the location element of the content, each time an iFrame is encountered, that location can be split into the parent location of the page and the iFrame location.
    3. Data collection and aggregation performed through a websocket associates all the activity linked to a location hash for compilation of the crawl dataset.

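As a rough illustration of the location-hashing idea in step 4, the sketch below reduces a window.location string to a stable identifier that recorded activity can be grouped by. The choice of SHA-256 here is an assumption made for the example; it is not necessarily the hash function the adapted crawler uses.

# Illustrative only: reduce a window.location string to a stable identifier
# so that recorded JavaScript activity can be grouped by where it executed.
# SHA-256 is an assumed choice; the crawler's actual hash function may differ.
import hashlib

def location_hash(window_location: str) -> str:
    return hashlib.sha256(window_location.encode("utf-8")).hexdigest()

# For content inside an iFrame, the reported location is the iFrame's own
# window.location, so parent page and iFrame get distinct identifiers.
parent = location_hash("https://example.com/article")
frame = location_hash("https://ads.example.net/frame.html")
print(parent != frame)  # distinct execution contexts, distinct identifiers
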
Interestingly, for the Alexa top 10,000 sites, our depth-1 crawl yielded properties hosted on 41,166 TLDs across the union of our 4 replicates, whereas only 34,809 unique TLDs remain among the 981,545 pages belonging to their intersection.

A modified version of OpenWPM was used to record JavaScript calls potentially used for tracking browsing data from these pages. The collected JavaScript execution trace was written into an S3 bucket for later aggregation and analysis. Several additional parameters were defined based on cursory ad hoc analyses.

For example, the minimum dwell time per page required to capture the majority of JavaScript activity was set as 10 seconds per page. This was based on a random sampling of the seed list URLs and showed a large variation in time until no new JavaScript was being executed (from no JavaScript, to what appeared to be an infinite loop of self-referential JavaScript calls). This dwell time was chosen to balance between capturing the majority of JavaScript activity on a majority of the pages and minimizing the time required to complete the full crawl.

Several of the probes instrumented in the Data Leak repo were ported over to our hybrid crawler, including instrumentation to monitor JavaScript execution occurring inside an iFrame element (potentially hosted on a third-party domain). This would prove to provide much insight into the relationships between pages in the crawl data.

Exploratory work

In January 2018, we got to work analyzing the dataset we had created. After substantial data cleaning to work through the messiness of real world variation, we were left with a gigantic  Parquet dataset (around 70GB) containing an immense diversity of potential insights. Three example analyses are summarized below. The most important finding is that we have only just scratched the surface of the insights this data may hold.

Examining session replay activity

Session replay is a service that lets websites track users’ interactions with the page—from how they navigate the site, to their searches, to the input they provide. Think of it as a “video replay” of a user’s entire session on a webpage. Since some session replay providers may record personal information such as personal addresses, credit card information and passwords, this can present a significant risk to both privacy and security.

We explored the incidence of session replay usage, and a few associated features, across the pages in our crawl dataset. To identify potential session replay, we obtained the Princeton WebTAP project list, containing 14 Alexa top-10,000 session replay providers, and checked for calls to script URLs belonging to the list.

Out of 6,064,923 distinct script references among page loads in our dataset, we found 95,570 (1.6%) were to session replay providers. This translated to 4,857 distinct domain names (netloc) making such calls, out of a total of 87,325, or 5.6%. Note that even if scripts belonging to session replay providers are being accessed, this does not necessarily mean that session replay functionality is being used on the site.
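
The check itself is conceptually simple. The sketch below shows one way it could be done with pandas, assuming a DataFrame of script calls with a script_url column and a set of provider hosts derived from the WebTAP list; the column names, the placeholder host, and the pandas setup are assumptions for illustration, not our actual analysis code.

# Sketch of the session-replay check described above. Assumes a pandas
# DataFrame `calls` with a `script_url` column; the host set stands in for
# the Princeton WebTAP provider list.
from urllib.parse import urlparse
import pandas as pd

replay_hosts = {"example-replay-provider.com"}  # hypothetical stand-in for the WebTAP list

def netloc(url: str) -> str:
    """Extract the lower-cased host (netloc) from a script URL."""
    return urlparse(url).netloc.lower()

def flag_session_replay(calls: pd.DataFrame) -> pd.DataFrame:
    calls = calls.copy()
    calls["script_netloc"] = calls["script_url"].map(netloc)
    calls["is_replay"] = calls["script_netloc"].isin(replay_hosts)
    return calls

# Share of distinct script references pointing at replay providers:
# flagged = flag_session_replay(calls)
# scripts = flagged.drop_duplicates("script_url")
# print(scripts["is_replay"].mean())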

Given the set of pages making calls to session replay providers, we also looked into the consistency of SSL usage across these calls. Interestingly, the majority of such calls were made over HTTPS (75.7%), and 49.9% of the pages making these calls were accessed over HTTPS. Additionally, we found no pages accessed over HTTPS making calls to session replay scripts over HTTP, which was surprising but encouraging.

Finally, we examined the distribution of TLDs across sites making calls to session replay providers, and compared this to TLDs over the full dataset. We found that, along with .com, .ru accounted for a surprising proportion of sites accessing such scripts (around 33%), whereas .ru domain names made up only 3% of all pages crawled. This implies that 65.6% of .ru sites in our dataset were making calls to potential session replay provider scripts. However, this may be explained by the fact that Yandex is one of the primary session replay providers, and it offers a range of other analytics services of interest to Russian-language websites.

Eval and dynamically created function calls

JavaScript allows a function call to be dynamically created from a string with the eval() function or by creating a new Function() object. For example, this code will print hello twice:

eval("console.log('hello')")
var my_func = new Function("console.log('hello')")
my_func()

While dynamic function creation has its uses, it also opens up users to injection attacks, such as cross-site scripting, and can potentially be used to hide malicious code.

In order to understand how dynamic function creation is being used on the Web, we analyzed its prevalence, location, and distribution in our dataset. The analysis was initially performed on 10,000 randomly selected pages and validated against the entire dataset. In terms of prevalence, we found that 3.72% of overall function calls were created dynamically, and these originated from across 8.76% of the websites crawled in our dataset.

These results suggest that, while dynamic function creation is not used heavily, it is still common enough on the Web to be a potential concern. Looking at call frequency per page showed that, while some Web pages create all their function calls dynamically, the majority tend to have only 1 or 2 dynamically generated calls (which is generally 1-5% of all calls made by a page).
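
The per-page figures above come from a simple aggregation. The sketch below shows the shape of such a computation, assuming a DataFrame with a page identifier column (location) and a hypothetical boolean is_dynamic flag for calls originating from eval() or new Function(); how that flag is derived from the raw trace is not shown here.

# Sketch of the per-page question above: what fraction of a page's calls
# were dynamically created? Assumes a pandas DataFrame `calls` with a page
# identifier column (`location`) and a hypothetical `is_dynamic` boolean.
import pandas as pd

def dynamic_share_per_page(calls: pd.DataFrame) -> pd.Series:
    """Fraction of dynamically created calls, computed per page."""
    return calls.groupby("location")["is_dynamic"].mean()

# share = dynamic_share_per_page(calls)
# print(share.describe())    # distribution of the fraction across pages
# print((share > 0).mean())  # fraction of pages with any dynamic calls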

We also examined the extent of this practice among the scripts that are being called. We discovered that they belong to a relatively small subset of script hosts (at an average ratio of about 33 calls per URL), indicating that the same JavaScript files are being used by multiple webpages. Furthermore, around 40% of these are known trackers (identified using the disconnectme entity list), although only 33% are hosted on a different domain from the webpage that uses them. This suggests that web developers may not even know that they are using dynamically generated functions.

Cryptojacking

Cryptojacking refers to the unauthorized use of a user’s computer or mobile device to mine cryptocurrency. More and more websites are using browser-based cryptojacking scripts as cryptocurrencies rise in popularity. It is an easy way to generate revenue and a viable alternative to bloating a website with ads. An excellent contextualization of crypto-mining via client-side JavaScript execution can be found in the unabridged cryptojacking analysis prepared by Vivian Jin.

We investigated the prevalence of cryptojacking among the websites represented in our dataset. A list of potential cryptojacking hosts (212 sites total) was obtained from the adblock-nocoin-list GitHub repo. For each script call initiated on a page visit event, we checked whether the script host belonged to the list. Among 6,069,243 distinct script references on page loads in our dataset, only 945 (0.015%) were identified as cryptojacking hosts. Over half of these belonged to CoinHive, the original script developer. Only one use of AuthedMine was found. Viewed in terms of domains reached in the crawl, we found calls to cryptojacking scripts being made from 49 out of 29,483 distinct domains (0.16%).
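
The detection approach mirrors the session replay check: the sketch below flags script references whose host appears in a set of mining hosts and then counts the distinct page domains making such calls. The placeholder host, the column names, and the assumption that location holds a page URL (in the real dataset it may be a hash, as described in the collection workflow) are all illustrative.

# Sketch of the cryptojacking check described above. The host set stands in
# for the adblock-nocoin-list hosts; column names are assumptions.
from urllib.parse import urlparse
import pandas as pd

miner_hosts = {"coinhive.example"}  # hypothetical stand-in for the nocoin list

def page_domain(location: str) -> str:
    # Assumes `location` is a page URL; in the real dataset it may be a hash.
    return urlparse(location).netloc.lower()

def cryptojacking_domains(calls: pd.DataFrame) -> pd.Series:
    """Distinct page domains whose script calls reach known mining hosts."""
    is_miner = calls["script_url"].map(lambda u: urlparse(u).netloc.lower() in miner_hosts)
    hits = calls[is_miner]
    return hits["location"].map(page_domain).drop_duplicates()

# print(len(cryptojacking_domains(calls)), "distinct domains reached miner hosts")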

However, it is important to note that cryptojacking code can be executed in other ways than by including the host script in a script tag. It can be disguised, stealthily executed in an iframe, or directly used in a function of a first-party script. Users may also face redirect loops that eventually lead to a page with a mining script. The low detection rate could also be due to the popularity of the sites covered by the crawl, which might  dissuade site owners from implementing obvious cryptojacking scripts. It is likely that the actual rate of cryptojacking is higher.

The majority of the domains we found using cryptojacking are streaming sites. This is unsurprising, as users have streaming sites open for longer while they watch video content, and mining scripts can be executed longer. A Chinese variety site called 52pk.com accounted for 207 out of the overall 945 cryptojacking script calls we found in our analysis, by far the largest domain we observed for cryptojacking calls.

Another interesting fact: although our cryptojacking host list contained 212 candidates, we found only 11 of them to be active in our dataset, or about 5%.

Limitations and future directions

While this is a rich dataset allowing for a number of interesting analyses, it is limited in visibility mainly to behaviours that occur via JS API calls.

Another feature we investigated using our dataset is the presence of evercookies. An evercookie is a tracking tool used by websites to ensure that user data, such as a user ID, remains permanently stored on a computer. Evercookies persist in the browser by leveraging a series of tricks, including Web API calls to a variety of available storage mechanisms. An initial attempt was made to search for evercookies in this data by looking for consistent values being passed to suspect Web API calls.

Acar et al., “The Web Never Forgets: Persistent Tracking Mechanisms in the Wild”, (2014) developed techniques for looking at evercookies at scale. First, they proposed a mechanism to detect identifiers. They applied this mechanism to HTTP cookies but noted that it could also be applied to other storage mechanisms, although some modification would be required. For example, they look at cookie expiration, which would not be applicable in the case of localStorage. For this dataset we could try replicating their methodology for set calls to window.document.cookie and window.localStorage.
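
As a starting point, such a replication could restrict the trace to set operations on those two symbols and look for values that are written repeatedly for the same location, which is the kind of consistency an identifier would show. The sketch below assumes column names (symbol, operation, value, location) that may differ from the actual dataset schema.

# Sketch of the replication idea above: find values repeatedly written to
# document.cookie or localStorage for the same location. Column names are
# assumptions; Acar et al. additionally filter candidates on entropy and
# per-user uniqueness, which this simplification omits.
import pandas as pd

STORAGE_SYMBOLS = {"window.document.cookie", "window.localStorage"}

def candidate_identifiers(calls: pd.DataFrame) -> pd.DataFrame:
    writes = calls[calls["symbol"].isin(STORAGE_SYMBOLS)
                   & (calls["operation"] == "set")]
    counts = (writes.groupby(["location", "value"])
                    .size()
                    .reset_index(name="times_set"))
    # Values set more than once for the same location are candidate IDs.
    return counts[counts["times_set"] > 1]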

They also looked at Flash cookies respawning HTTP cookies and HTTP respawning Flash cookies. Our dataset contains no information on the presence of Flash cookies, so additional crawls would be required to obtain this information. In addition, they used multiple crawls to study Flash respawning, so we would have to replicate that procedure.

In addition to our lack of information on Flash cookies, we have no information about HTTP cookies, the first mechanism by which cookies are set. Knowing which HTTP cookies are initially set can serve as an important complement and validation for investigating other storage techniques then used for respawning and evercookies.

Beyond HTTP and Flash, Samy Kamkar’s evercookie library documents over a dozen mechanisms for storing an ID to be used as an evercookie. Many of these are not detectable by our current dataset, e.g. HTTP cookies, HSTS pinning, Flash cookies, Silverlight storage, ETags, Web cache, Internet Explorer userData storage, etc. An evaluation of the prevalence of each technique would be a useful contribution to the literature. We also see the value of an ongoing repeated crawl to identify changes in prevalence and to account for new techniques as they are discovered.

However, it is possible to continue analyzing the current dataset for some of the techniques described by Samy. For example, window.name caching is listed as a technique. We can look at this property in our dataset, perhaps by applying the same ID technique outlined by Acar et al., or perhaps by looking at sequences of calls.

Conclusions

Throughout our preliminary exploration of this data it became quickly apparent that the amount of superficial JavaScript execution on a Web page only tells part of the story. We have observed several examples of scripts running parallel to the content-serving functionality of webpages; these appear to fulfill a diversity of other functions. The analyses performed so far have led to some exciting discoveries, but so much more information remains hidden in the immense dataset available.

We are calling on any interested individuals to be part of the exploration. You’re invited to participate in the Overscripted Web: A Mozilla Data Analysis Challenge and help us better understand some of the hidden workings of the modern Web!

Note: In the interest of being responsive to all interested contest participants and curious readers in one centralized location, we’ve closed comments on this post. We encourage you to bring relevant questions and discussion to the contest repo at: https://github.com/mozilla/Overscripted-Data-Analysis-Challenge

Acknowledgements

Extra special thanks to Steven Englehardt for his contributions to the OpenWPM tool and advice throughout this project. We also thank Havi Hoffman for valuable editorial contributions to earlier versions of this post. Finally, thanks to Karen Reid of University of Toronto for coordinating the UCOSP program.

Mozilla Open Innovation TeamOverscripted Web: a Mozilla Data Analysis Challenge

Help us explore the unseen JavaScript and what this means for the Web

Photo by Markus Spiske on Unsplash

What happens while you are browsing the Web? Mozilla wants to invite data and computer scientists, students and interested communities to join the “Overscripted Web: a Data Analysis Challenge”, and help explore JavaScript running in browsers and what this means for users. We gathered a rich dataset and we are looking for exciting new observations, patterns and research findings that help to better understand the Web. We want to bring the winners to speak at MozFest, our annual festival for the open Internet held in London.

The Dataset

Two cohorts of Canadian Undergraduate interns worked on data collection and subsequent analysis. The Mozilla Systems Research Group is now open sourcing a dataset of publicly available information that was collected by a Web crawl in November 2017. This dataset is currently being used to help inform product teams at Mozilla. The primary analysis from the students focused on:

  • Session replay analysis: when websites replay your behavior on the site
  • Eval and dynamically created function calls
  • Cryptojacking: websites using users’ computers to mine cryptocurrencies are mainly video streaming sites

Take a look at Mozilla’s Hacks blog for a longer description of the analysis.

The Data Analysis Challenge

We see great potential in this dataset and believe that our analysis has only scratched the surface of the insights it can offer. We want to empower the community to use this data to better understand what is happening on the Web today, which is why Mozilla’s Systems Research Group and Open Innovation team partnered together to launch this challenge.

We have looked at how other organizations enable and speed up scientific discoveries through collaboratively analyzing large datasets. We’d love to follow this exploratory path: We want to encourage the crowd to think outside the proverbial box, get creative, get under the surface. We hope participants get excited to dig into the JavaScript executions data and come up with new observations, patterns, research findings.

To guide thinking, we’re dividing the Challenge into three categories:

  1. Tracking and Privacy
  2. Web Technologies and the Shape of the Modern Web
  3. Equality, Neutrality, and Law

You will find all of the necessary information to join on the Challenge website. Submissions close on August 31st and the winners will be announced on September 14th. We will bring the winners of the best three analyses (one per category) to MozFest, the world’s leading festival for the open internet movement, taking place in London from October 26th to 28th, 2018. We will cover their airfare, hotel, admission/registration, and if necessary visa fees in accordance with the official rules. We may also invite the winners to give 15-minute presentations of their findings.

We are looking forward to the diverse and innovative approaches from the data science community, and we want to specifically encourage young data scientists and students to take a stab at this dataset. It could be the basis for your final university project, and analyzing it can grow your data science skills and build your resumé (and GitHub profile!). The Web gets more complex by the minute; keeping it safe and open can only happen if we work together. Join us!


Overscripted Web: a Mozilla Data Analysis Challenge was originally published in Mozilla Open Innovation on Medium, where people are continuing the conversation by highlighting and responding to this story.

This Week In RustThis Week in Rust 237

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's crate, as decreed by llogiq, is im, a library for immutable data structures in Rust.

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

149 pull requests were merged in the last week

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs

No RFCs are currently in final comment period.

Tracking Issues & PRs

New RFCs

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

When picking up a lentil (Result) a pigeon (?) must consider two options. If the lentil is a good one (Ok), the pigeon simply puts it into the pot (evaluates to the wrapped value). However, if the lentil happens to be a bad one (Err), the pigeon eats it, digests it (from) and finally “returns” it. Also the silhouette of a pigeon kind of resembles a questionmark.

anatol1234 on internals

Thanks to Christopher Durham for the suggestion!

Submit your quotes for next week!

This Week in Rust is edited by: nasa42 and llogiq.

Cameron KaiserJust call it macOS Death Valley and get it over with

What is Apple trying to say with macOS Mojave? That it's dusty, dry, substantially devoid of life in many areas, and prone to severe temperature extremes? Oh, it has dark mode. Ohhh, okay. That totally makes up for everything and all the bugs and all the recurrent lack of technical investment. It's like the anti-shiny.

In other news, besides the 32-bit apocalypse, they just deprecated OpenGL and OpenCL in order to make way for all the Metal apps that people have just been lining up to write. Not that this is any surprise, mind you, given how long Apple's implementation of OpenGL has rotted on the vine. It's a good thing they're talking about allowing iOS apps to run, because there may not be any legacy Mac apps compatible when macOS 10.15 "Zzyzx" rolls around.

Yes, looking forward to that Linux ARM laptop when the MacBook Air "Sierra Forever" wears out. I remember when I was excited about Apple's new stuff. Now it's just wondering what they're going to break next.

Nicholas NethercoteHow to speed up the Rust compiler some more in 2018

I recently wrote about some work I’ve done to speed up the Rust compiler. Since then I’ve done some more.

rustc-perf improvements

Since my last post, rustc-perf — the benchmark suite, harness and visualizer — has seen some improvements. First, some new benchmarks were added: cargo, ripgrep, sentry-cli, and webrender. Also, the parser benchmark has been removed because it was a toy program and thus not a good benchmark.

Second, I added support for several new profilers: Callgrind, Massif, rustc’s own -Ztime-passes, and the use of ad hoc eprintln! statements added to rustc. (This latter case is more useful than it might sound; in combination with post-processing it can be very helpful, as we will see below.)

Finally, the graphs shown on the website now have better y-axis scaling, which makes many of them easier to read. Also, there is a new dashboard view that shows performance across rustc releases.

Failures and incompletes

After my last post, multiple people said they would be interested to hear about optimization attempts of mine that failed. So here is an incomplete selection. I suggest that rustc experts read through these, because there is a chance they will be able to identify alternative approaches that I have overlooked.

nearest_common_ancestors 1: I managed to speed up this hot function in a couple of ways, but a third attempt failed. The representation of the scope tree is not done via a typical tree data structure; instead there is a HashMap of child/parent pairs. This means that moving from a child to its parent node requires a HashMap lookup, which is expensive. I spent some time designing and implementing an alternative data structure that stored nodes in a vector and the child-to-parent links were represented as indices to other elements in the vector. This meant that child-to-parent moves only required stepping through the vector. It worked, but the speed-up turned out to be very small, and the new code was significantly more complicated, so I abandoned it.

nearest_common_ancestors 2: A different part of the same function involves storing seen nodes in a vector. Searching this unsorted vector is O(n), so I tried instead keeping it in sorted order and using binary search, which gives O(log n) search. However, this change meant that node insertion changed from amortized O(1) to O(n) — instead of a simple push onto the end of the vector, insertion could be at any point, which required shifting all subsequent elements along. Overall this change made things slightly worse.

PredicateObligation SmallVec: There is a type Vec<PredicateObligation> that is instantiated frequently, and the vectors often have few elements. I tried using a SmallVec instead, which avoids the heap allocations when the number of elements is below a threshold. (A trick I’ve used multiple times.) But this made things significantly slower! It turns out that these Vecs are copied around quite a bit, and a SmallVec is larger than a Vec because the elements are inline. Furthermore PredicateObligation is a large type, over 100 bytes. So what happened was that memcpy calls were inserted to copy these SmallVecs around. The slowdown from the extra function calls and memory traffic easily outweighed the speedup from avoiding the Vec heap allocations.

SipHasher128: Incremental compilation does a lot of hashing of data structures in order to determine what has changed from previous compilation runs. As a result, the hash function used for this is extremely hot. I tried various things to speed up the hash function, including LEB128-encoding of usize inputs (a trick that worked in the past) but I failed to speed it up.

LEB128 encoding: Speaking of LEB128 encoding, it is used a lot when writing metadata to file. I tried optimizing the LEB128 functions by special-casing the common case where the value is less than 128 and so can be encoded in a single byte. It worked, but gave a negligible improvement, so I decided it wasn’t worth the extra complication.
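
(For readers unfamiliar with LEB128, here is a small illustration, in Python rather than the actual rustc code, of unsigned LEB128 encoding with the single-byte special case described above.)

# Illustration only (not rustc's code): unsigned LEB128 encoding with the
# fast path for values below 128, which fit in a single byte.
def uleb128_encode(value: int) -> bytes:
    if value < 0x80:            # the common case: one byte, no loop needed
        return bytes([value])
    out = bytearray()
    while True:
        byte = value & 0x7F     # low 7 bits of the remaining value
        value >>= 7
        if value:
            out.append(byte | 0x80)   # set the high bit: more bytes follow
        else:
            out.append(byte)
            return bytes(out)

assert uleb128_encode(3) == b"\x03"
assert uleb128_encode(624485) == b"\xe5\x8e\x26"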

Token shrinking: A previous PR shrunk the Token type from 32 to 24 bytes, giving a small win. I tried also replacing the Option<ast::Name> in Literal with just ast::Name and using an empty name to represent “no name”. That  change reduced it to 16 bytes, but produced a negligible speed-up and made the code uglier, so I abandoned it.

#50549: rustc’s string interner was structured in such a way that each interned string was duplicated twice. This PR changed it to use a single Rc‘d allocation, speeding up numerous benchmark runs, the best by 4%. But just after I posted the PR, @Zoxc posted #50607, which allocated the strings out of an arena, as an alternative. This gave better speed-ups and was landed instead of my PR.

#50491: This PR introduced additional uses of LazyBTreeMap (a type I had previously introduced to reduce allocations) speeding up runs of multiple benchmarks, the best by 3%. But at around the same time, @porglezomp completed a PR that changed BTreeMap to have the same lazy-allocation behaviour as LazyBTreeMap, rendering my PR moot. I subsequently removed the LazyBTreeMap type because it was no longer necessary.

#51281: This PR, by @Mark-Simulacrum, removed an unnecessary heap allocation from the RcSlice type. I had looked at this code because DHAT’s output showed it was hot in some cases, but I erroneously concluded that the extra heap allocation was unavoidable, and moved on! I should have asked about it on IRC.

Wins

#50339: rustc’s pretty-printer has a buffer that can contain up to 55 entries, and each entry is 48 bytes on 64-bit platforms. (The 55 somehow comes from the algorithm being used; I won’t pretend to understand how or why.) Cachegrind’s output showed that the pretty printer is invoked frequently (when writing metadata?) and that the zero-initialization of this buffer was expensive. I inserted some eprintln! statements and found that in the vast majority of cases only the first element of the buffer was ever touched. So this PR changed the buffer to default to length 1 and extend when necessary, speeding up runs for numerous benchmarks, the best by 3%.

#50365: I had previously optimized the nearest_common_ancestor function. Github user @kirillkh kindly suggested a tweak to the code from that PR which reduced the number of comparisons required. This PR implemented that tweak, speeding up runs of a couple of benchmarks by another 1–2%.

#50391: When compiling certain annotations, rustc needs to convert strings from unescaped form to escaped form. It was using the escape_unicode function to do this, which unnecessarily converts every char to \u{1234} form, bloating the resulting strings greatly. This PR changed the code to use the escape_default function — which only escapes chars that genuinely need escaping — speeding up runs of most benchmarks, the best by 13%. It also slightly reduced the on-disk size of produced rlibs, in the best case by 15%.

#50525: Cachegrind showed that even after the previous PR, the above string code was still hot, because string interning was happening on the resulting string, which was unnecessary in the common case where escaping didn’t change the string. This PR added a scan to determine if escaping is necessary, thus avoiding the re-interning in the common case, speeding up a few benchmark runs, the best by 3%.

#50407: Cachegrind’s output showed that the trivial methods for the simple BytePos and CharPos types in the parser are (a) extremely hot and (b) not being inlined. This PR annotated them so they are inlined, speeding up most benchmarks, the best by 5%.

#50564: This PR did the same thing for the methods of the Span type, speeding up incremental runs of a few benchmarks, the best by 3%.

#50931: This PR did the same thing for the try_get function, speeding up runs of many benchmarks, the best by 1%.

#50418: DHAT’s output showed that there were many heap allocations of the cmt type, which is refcounted. Some code inspection and ad hoc instrumentation with eprintln! showed that many of these allocated cmt instances were very short-lived. However, some of them ended up in longer-lived chains, in which the refcounting was necessary. This PR changed the code so that cmt instances were mostly created on the stack by default, and then promoted to the heap only when necessary, speeding up runs of three benchmarks by 1–2%. This PR was a reasonably large change that took some time, largely because it took me five(!) attempts (the final git branch was initially called cmt5) to find the right dividing line between where to use stack allocation and where to use refcounted heap allocation.

#50565: DHAT’s output showed that the dep_graph structure, which is a IndexVec<DepNodeIndex,Vec<DepNodeIndex>>, caused many allocations, and some eprintln! instrumentation showed that the inner Vec‘s were mostly only a few elements. This PR changed the Vec<DepNodeIndex> to SmallVec<[DepNodeIndex;8]>, which avoids heap allocations when the number of elements is less than 8, speeding up incremental runs of many benchmarks, the best by 2%.

#50566: Cachegrind’s output shows that the hottest part of rustc’s lexer is the bump function, which is responsible for advancing the lexer onto the next input character. This PR streamlined it slightly, speeding up most runs of a couple of benchmarks by 1–3%.

#50818: Both Cachegrind and DHAT’s output showed that the opt_normalize_projection_type function was hot and did a lot of heap allocations. Some eprintln! instrumentation showed that there was a hot path involving this function that could be explicitly extracted that would avoid unnecessary HashMap lookups and the creation of short-lived Vecs. This PR did just that, speeding up most runs of serde and futures by 2–4%.

#50855: DHAT’s output showed that the macro parser performed a lot of heap allocations, particularly on the html5ever benchmark. This PR implemented ways to avoid three of them: (a) by storing a slice of a Vec in a struct instead of a clone of the Vec; (b) by introducing a “ref or box” type that allowed stack allocation of the MatcherPos type in the common case, but heap allocation when necessary; and (c) by using Cow to avoid cloning a PathBuf that is rarely modified. These changes sped up runs of html5ever by up to 10%, and crates.io by up to 3%. I was particularly pleased with these changes because they all involved non-trivial changes to memory management that required the introduction of additional explicit lifetimes. I’m starting to get the hang of that stuff… explicit lifetimes no longer scare me the way they used to. It certainly helps that rustc’s error messages do an excellent job of explaining where explicit lifetimes need to be added.

#50932: DHAT’s output showed that a lot of HashSet instances were being created in order to de-duplicate the contents of a commonly used vector type. Some eprintln! instrumentation showed that most of these vectors only had 1 or 2 elements, in which case the de-duplication can be done trivially without involving a HashSet. (Note that the order of elements within this vector is important, so de-duplication via sorting wasn’t an option.) This PR added special handling of these trivial cases, speeding up runs of a few benchmarks, the best by 2%.

#50981: The compiler does a liveness analysis that involves vectors of indices that represent variables and program points. In rare cases these vectors can be enormous; compilation of the inflate benchmark involved one that has almost 6 million 24-byte elements, resulting in 143MB of data. This PR changed the type used for the indices from usize to u32, which is still more than large enough, speeding up “clean incremental” builds of inflate by about 10% on 64-bit platforms, as well as reducing their peak memory usage by 71MB.

What’s next?

These improvements, along with those recently done by others, have significantly sped up the compiler over the past month or so: many workloads are 10–30% faster, and some even more than that. I have also seen some anecdotal reports from users about the improvements over recent versions, and I would be interested to hear more data points, especially those involving rustc nightly.

The profiles produced by Cachegrind, Callgrind, and DHAT are now all looking very “flat”, i.e. with very little in the way of hot functions that stick out as obvious optimization candidates. (The main exceptions are the SipHasher128 functions I mentioned above, which I haven’t been able to improve.) As a result, it has become difficult for me to make further improvements via “bottom-up” profiling and optimization, i.e. optimizing hot leaf and near-leaf functions in the call graph.

Therefore, future improvement will likely come from “top-down” profiling and optimization, i.e. observations such as “rustc spends 20% of its time in phase X, how can that be improved”? The obvious place to start is the part of compilation taken up by LLVM. In many debug and opt builds LLVM accounts for up to 70–80% of instructions executed. It doesn’t necessarily account for that much time, because the LLVM parts of execution are parallelized more than the rustc parts, but it still is the obvious place to focus next. I have looked at a small amount of generated MIR and LLVM IR, but there’s a lot to take in. Making progress will likely require a much broader understanding of things than many of the optimizations described above, most of which require only a small amount of local knowledge about a particular part of the compiler’s code.

If anybody reading this is interested in trying to help speed up rustc, I’m happy to answer questions and provide assistance as much as I can. The #rustc IRC channel is also a good place to ask for help.

The Rust Programming Language BlogAnnouncing Rust 1.26.2

The Rust team is happy to announce a new version of Rust, 1.26.2. Rust is a systems programming language focused on safety, speed, and concurrency.

If you have a previous version of Rust installed via rustup, getting Rust 1.26.2 is as easy as:

$ rustup update stable

If you don’t have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.26.2 on GitHub.

What’s in 1.26.2 stable

This patch release fixes a bug in the borrow checker verification of match expressions. This bug was introduced in 1.26.0 with the stabilization of match ergonomics. Specifically, it permitted code which took two mutable borrows of the bar path at the same time.

let mut foo = Some("foo".to_string());
let bar = &mut foo;
match bar {
    Some(baz) => {
        bar.take(); // Should not be permitted, as baz has a unique reference to the bar pointer.
    },
    None => unreachable!(),
}

1.26.2 will reject the above code with this error message:

error[E0499]: cannot borrow `*bar` as mutable more than once at a time
 --> src/main.rs:6:9
  |
5 |     Some(baz) => {
  |          --- first mutable borrow occurs here
6 |         bar.take(); // Should not be permitted, as baz has a ...
  |         ^^^ second mutable borrow occurs here
...
9 | }
  | - first borrow ends here

error: aborting due to previous error

The Core team decided to issue a point release to minimize the window of time in which this bug in the Rust compiler was present in stable compilers.

Air MozillaMozilla Weekly Project Meeting, 04 Jun 2018

Mozilla Weekly Project Meeting The Monday Project Meeting

Armen ZambranoSome webdev knowledge gained

Earlier this year I had to split a Koa/SPA app into two separate apps. As part of that I switched from webpack to Neutrino.

Through this work I learned a lot about full stack development (frontend, backend and deployments for both). I could write a blog post per item, however, listing it all in here is better than never getting to write a post for any of them.

Note, I’m pointing to commits that I believe have enough information to understand what I learned.

Npm packages to the rescue:

Node/backend notes:

Neutrino has been a great ally to me and here’s some knowledge on how to use it:

Heroku was my tool for deployment and here you have some specific notes:

The Mozilla BlogMozilla Announces $225,000 for Art and Advocacy Exploring Artificial Intelligence

Mozilla’s latest awards will support people and projects that examine the effects of AI on society

 

At Mozilla, one way we support a healthy internet is by fueling the people and projects on the front lines — from grants for community technologists in Detroit, to fellowships for online privacy activists in Rio.

Today, we are opening applications for a new round of Mozilla awards. We’re awarding $225,000 to technologists and media makers who help the public understand how threats to a healthy internet affect their everyday lives.

APPLY»

Specifically, we’re seeking projects that explore artificial intelligence and machine learning. In a world where biased algorithms, skewed data sets, and broken recommendation engines can radicalize YouTube users, promote racism, and spread fake news, it’s more important than ever to support artwork and advocacy work that educates and engages internet users.

These awards are part of the NetGain Partnership, a collaboration between Mozilla, Ford Foundation, Knight Foundation, MacArthur Foundation, and the Open Society Foundation. The goal of this philanthropic collaboration is to advance the public interest in the digital age.

What we’re seeking

We’re seeking projects that are accessible to broad audiences and native to the internet, from videos and games to browser extensions and data visualizations.

We will consider projects that are at either the conceptual or prototype phases. All projects must be freely available online and suitable for a non-expert audience. Projects must also respect users’ privacy.

In the past, Mozilla has supported creative media projects like:

Data Selfie, an open-source browser add-on that simulates how Facebook interprets users’ data.

Chupadados, a mix of art and journalism that examines how women and non-binary individuals are tracked and surveilled online.

Paperstorm, a web-based advocacy game that lets users drop digital leaflets on cities — and, in the process, tweet messages to lawmakers.

Codemoji, a tool for encoding messages with emoji and teaching the basics of encryption.

*Privacy Not Included, a holiday shopping guide that assesses the latest gadgets’ privacy and security features.

Details and how to apply

Mozilla is awarding a total of $225,000, with individual awards ranging up to $50,000. Final award amounts are at the discretion of award reviewers and Mozilla staff, but it is currently anticipated that the following awards will be made:

Two $50,000 total prize packages ($47,500 award + $2,500 MozFest travel stipend)

Five $25,000 total prize packages ($22,500 award + $2,500 MozFest travel stipend)

Awardees are selected based on quantitative scoring of their applications by a review committee and a qualitative discussion at a review committee meeting. Committee members include Mozilla staff, current and alumni Mozilla Fellows, and — as available — outside experts. Selection criteria are designed to evaluate the merits of the proposed approach. Diversity in applicant background, past work, and medium are also considered.

Mozilla will accept applications through August 1, 2018, and notify winners by September 15, 2018. Winners will be publicly announced on or around MozFest, which is held October 26-28, 2018.

APPLY. Applicants should attend the informational webinar on Monday, June 18 (4pm EDT, 3pm CDT, 1pm PDT). Sign up to be notified here.


Brett Gaylor is Commissioning Editor at Mozilla

The post Mozilla Announces $225,000 for Art and Advocacy Exploring Artificial Intelligence appeared first on The Mozilla Blog.