Cameron Kaiser: TenFourFox FPR16b1 available

TenFourFox Feature Parity Release 16 beta 1 is now available (downloads, hashes, release notes). In addition, the official FAQ has been updated, along with the tech notes.

FPR16 got delayed because I tried really hard to make some progress on our two biggest JavaScript deficiencies, the infamous issues 521 (async and await) and 533 (this is undefined). Unfortunately, not only did I make little progress on either, but the speculative fix I tried for issue 533 turned out to be the patch that unsettled the optimized build and had to be backed out. There is some partial work on issue 521, though, including a fully working parser patch. The problem is plumbing this into the browser runtime, which is ripe for all kinds of regressions and is not currently implemented (instead, for compatibility, async functions get turned into a bytecode of null throw null return, essentially making any call to an async function throw an exception because it wouldn't have worked in the first place).

This wouldn't seem very useful except that what the whole shebang effectively does is convert a compile-time error into a runtime warning, so that other functions which previously might not have been able to load because of the error can now be parsed and hopefully run. With luck this should improve the functionality of sites using these functions even if everything still doesn't fully work, hopefully as a down payment on a future implementation. A full implementation may not be technically possible, but it's a start.

Which reminds me, and since this blog is syndicated on Planet Mozilla: hey, front end devs, if you don't have to minify your source, how about you don't? Issue 533, in fact, exists entirely because uglify took some fast and loose shortcuts that barf on older parsers, and it is nearly impossible to unwind errors that occur in minified code (this is now changing as sites slowly update, so perhaps this will be self-limiting in the end, but in the meantime it's as annoying as Andy Samberg on crack). This is particularly acute given that the only thing fixing it in the regression range is a 2.5 megabyte patch that I'm only a small amount of the way through reading. On the flip side, I was able to find and fix several parser edge cases because Bugzilla itself was triggering them and the source file involved was not minified. That means I could actually read it and get error reports that made sense! Help support us lonely independent browser makers by keeping our lives a little less complicated. Thank you for your consideration!

Meanwhile, I have the parser changes on by default to see if they induce any regressions. Sites may or may not work any differently, but they should not work worse. If you find a site that seems to be behaving adversely in the beta, please toggle javascript.options.asyncfuncs to false and restart the browser, which will turn the warning back into an error. If even that doesn't fix it, make sure nothing on the site changed (by, say, checking it in FPR15) before reporting it in the comments.

This version also "repairs" Firefox Sync support by connecting the browser back up to the right endpoints. You are reminded, however, that like add-on support Firefox Sync is only supported at a "best effort" level because I have no control over the backend server. I'll make reasonable attempts to keep it working, but things can break at any time, and it is possible that it will stay broken for good (and be removed from the UI) if data structures or the protocol change in a way I can't control for. There's a new FAQ entry for this I suggest you read.

Finally, there are performance improvements for HTML5 and URL parsing from later versions of Firefox as well as a minor update to same-site cookie support, plus a fix for a stupid bug with SVG backgrounds that I caused and Olga found, updates to basic adblock with new bad hosts, updates to the font blacklist with new bad fonts, and the usual security and stability updates from the ESRs.

I realize the delay means there won't be a great deal of time to test this, so let me know deficiencies as quickly as possible so they can be addressed before it goes live on or about September 2 Pacific time.

Joel Maher: Digging into regressions

Whenever a patch lands on autoland, it triggers many builds and tests to make sure there are no regressions.  Unfortunately, we often find a regression, and 99% of the time the changes are backed out so they can be fixed.  This work is done by the Sheriff team at Mozilla: they monitor the trees and, when something is wrong, work to fix it (sometimes with a quick fix, usually with a backout).  A quick fact: there were 1228 regressions in H1 (January-June) 2019.

My goal in writing is not to recommend change, but instead to start conversations and figure out what data we should be collecting in order to have data-driven discussions.  Only then would I expect recommendations for changes to come forth.

What got me started in looking at regressions was trying to answer a question: “How many regressions did X catch?”  This alone is a tough question; instead, I think the question should be “If we were not running X, how many regressions would our end users see?”  This is a much different question and has several distinct parts:

  • Unique Regressions: Only look at regressions found that only X found, not found on both X and Y
  • Product Fixes: did the regression result in changing code that we ship to users? (i.e. not editing the test)
  • Final Fix: many times a patch [set] lands and is backed out multiple times; in this case, do we look at each time it was backed out, or only the change from initial landing to final landing?

These can be more difficult to answer.  For example, with Product Fixes: maybe by editing the test case we are preventing a future regression because the test is now more accurate.

In addition, we need to understand how accurate the data we are using is.  While the sheriffs do a great job, they are human, and humans make judgement calls.  Once a job is marked as “fixed_by_commit”, we cannot go back in and edit it, so a typo or bad data will result in incorrect records.  To add to it, oftentimes multiple patches are backed out at the same time, so is it correct to say that changes from both bug A and bug B should be considered?

This year I have looked at this data many times to answer questions like these.

This data is important to harvest because if we were to turn off a set of jobs or run them as tier-2 we would end up missing regressions.  But if all we miss is editing manifests to disable failing tests, then we are getting no value from the test jobs, so it is important to look at what the regression outcome was.

In fact, every time I did this I would run an active-data-recipe (the fbc recipe in my repo) and end up with a large pile of data I needed to sort through and manually check.  I spent some time every day for a few weeks looking at regressions and have now looked at 700 (bugs/changesets).  In manually checking regressions, I found that the end results fell into these buckets:

  • test: 196 (28.00%)
  • product: 272 (38.86%)
  • manifest: 134 (19.14%)
  • unknown: 48 (6.86%)
  • backout: 27 (3.86%)
  • infra: 23 (3.29%)

Keep in mind that many of the changes which end up in mozilla-central are not only fixes for product bugs, but also fixes for infrastructure bugs, test edits, etc.

After looking at many of these bugs, I found that ~80% of the time things are straightforward (single patch [set] landed, backed out once, relanded with clear comments).  Data I would like to have easily available via a query:

  • Files that are changed between backout and relanding (even if it is a new patch).
  • A reason recorded in Phabricator when we reland, with a few required pre-canned fields

Ideally this set of data would exist for not only backouts, but for anything that is landed to fix a regression (linting, build, manifest, typo).

Mozilla Open Policy & Advocacy Blog: Mozilla Mornings on the future of EU content regulation

On 10 September, Mozilla will host the next installment of our EU Mozilla Mornings series – regular breakfast meetings where we bring together policy experts, policymakers and practitioners for insight and discussion on the latest EU digital policy developments.

The next installment will focus on the future of EU content regulation. We’re bringing together a high-level panel to discuss how the European Commission should approach the mooted Digital Services Act, and to lay out a vision for a sustainable and rights-protective content regulation framework in Europe.

Featuring

Alan Davidson
Vice President of Global Policy, Trust & Security, Mozilla
Liz Carolan
Executive Director, Digital Action
Guillermo Beltrà
Policy Director, Access Now

Moderated by Brian Maguire, EURACTIV

Logistical information

10 September 2019
08:30-10:30
L42 Business Centre, rue de la Loi 42, Brussels 1040

Register your attendance here

The post Mozilla Mornings on the future of EU content regulation appeared first on Open Policy & Advocacy.

Dustin J. Mitchell: Outreachy Round 20

Outreachy is a program that provides paid internships working on FOSS (Free and Open Source Software) to applicants from around the world. Internships are three months long and involve deep, technical work on a mentor-selected project, guided by mentors and other developers working on the FOSS application. At Mozilla, projects include work on Firefox itself, development of associated services and sites like Taskcluster and Treeherder, and analysis of Firefox telemetry data from a data-science perspective.

The program has an explicit focus on diversity: “Anyone who faces under-representation, systemic bias, or discrimination in the technology industry of their country is invited to apply.” It’s a small but very effective step in achieving better representation in this field. One of the interesting side-effects is that the program sees a number of career-changing participants. These people bring a wealth of interesting and valuable perspectives, but face challenges in a field where many have been programming since they were young.

Round 20 will involve full-time, remote work from mid-December 2019 to mid-March 2020. Initial applications for this round are now open.

New this year, applicants will need to make an “initial application” to determine eligibility before September 24. During this time, applicants can only see the titles of potential internship projects – not the details. On October 1, all applicants who have been deemed eligible will be able to see the full project descriptions. At that time, they’ll start communicating with project mentors and making small open-source contributions, and eventually prepare applications to one or more projects.

So, here’s the call to action:

  • If you, or people you know, might benefit from this program, encourage them to apply or to talk to one of the Mozilla coordinators (Kelsey Witthauer and myself) at outreachy-coordinators@mozilla.com.
  • If you would like to mentor for the program, there’s still time! Get in touch with us and we’ll figure it out.

Armen Zambrano: Frontend security — thoughts on Snyk

I can’t remember why, but a few months ago I started looking into keeping my various React projects secure. Here’s some of what I discovered (more to come); I hope some of it will be valuable to you.

A while ago I discovered Snyk and hooked my various projects up with it. Snyk sends me a weekly security summary with a breakdown of the various security issues across all of my projects.

[Screenshot: This is part of Snyk’s weekly report I receive in my inbox]

Snyk also gives me context about the particular security issues found:

[Screenshot: This is extremely useful if you want to understand the security issue]

It also analyzes my dependencies on a per-PR level:

[Screenshot: Safety? Check! This is a GitHub PR check, like Travis]

Other features that I’ve tried from Snyk:

  1. It sends you an email when there’s a vulnerable package (no need to wait for the weekly report)
  2. It opens PRs upgrading vulnerable packages when possible
  3. It patches your code while there’s no published package with a fix
[Screenshot: This is a summary for your project — it shows that a PR can be opened]

I have tried the features above and decided not to use them, for the following reasons (listed in the same order as above):

  1. As a developer I already get enough interruptions in a week. I don’t need to be notified of every single security issue in my dependency tree. My projects don’t deal with anything sensitive, so I’m OK with waiting until the beginning of the week to deal with them.
  2. The PR opened by Snyk does not work well with Yarn since it does not update the yarn.lock file, thus requiring me to fetch the PR, run yarn install and push it back (this wastes my time).
  3. The feature to patch your code (Runtime protection, or snyk protect) adds a very high set-up cost (1–2 minutes) every time you need to run yarn install. This is because it analyzes all your dependencies and patches your code in-situ. This gets in the way of my development workflow.

Overall I’m very satisfied with Snyk and I highly recommend using it.

In the following posts I’m thinking of writing about:

  • How Renovate can help reduce the burden of keeping your projects up-to-date (reducing security work later on)
  • Differences between GitHub’s security tab (DependaBot) and Snyk
  • npm audit, yarn audit & snyk test

NOTE: This post is not sponsored by Snyk. I love what they do, I root for them and I hope they soon fix the issues I mention above.

Support.Mozilla.Org: Introducing Bryce and Brady

Hello SUMO Community,

I’m thrilled to share this update with you today. Bryce and Brady joined us last week and will be able to help out on support for some of the new efforts Mozilla is working on towards creating a connected and integrated Firefox experience.

They are going to be involved with new products, but they also won’t forget to put extra effort into providing support on the forums, as well as serving as an escalation point for hard-to-solve issues.

Here is a short introduction to Brady and Bryce:

Hi! My name is Brady, and I am one of the new members of the SUMO team. I am originally from Boise, Idaho and am currently going to school for a Computer Science degree at Boise State. In my free time, I’m normally playing video games, writing, drawing, or enjoying the Sawtooths. I will be providing support for Mozilla products and for the SUMO team.

Hello!  My name is Bryce, I was born and raised in San Diego and I reside in Boise, Idaho.  Growing up I spent a good portion of my life trying to be the best sponger (boogie boarder) and longboarder in North County San Diego.  While out in the ocean I had all sorts of run-ins with sea creatures, but nothing too scary. I am also an In-N-Out fan, as you may find me sporting their merchandise with boardshorts and the such.  I am truly excited to be part of this amazing group of fun loving folks and I am looking forward to getting to know everyone.

Please welcome them warmly!

Hacks.Mozilla.Org: WebAssembly Interface Types: Interoperate with All the Things!

People are excited about running WebAssembly outside the browser.

That excitement isn’t just about WebAssembly running in its own standalone runtime. People are also excited about running WebAssembly from languages like Python, Ruby, and Rust.

Why would you want to do that? A few reasons:

  • Make “native” modules less complicated
    Runtimes like Node or Python’s CPython often allow you to write modules in low-level languages like C++, too. That’s because these low-level languages are often much faster. So you can use native modules in Node, or extension modules in Python. But these modules are often hard to use because they need to be compiled on the user’s device. With a WebAssembly “native” module, you can get most of the speed without the complication.
  • Make it easier to sandbox native code
    On the other hand, low-level languages like Rust wouldn’t use WebAssembly for speed. But they could use it for security. As we talked about in the WASI announcement, WebAssembly gives you lightweight sandboxing by default. So a language like Rust could use WebAssembly to sandbox native code modules.
  • Share native code across platforms
    Developers can save time and reduce maintenance costs if they can share the same codebase across different platforms (e.g. between the web and a desktop app). This is true for both scripting and low-level languages. And WebAssembly gives you a way to do that without making things slower on these platforms.

Scripting languages like Python and Ruby saying 'We like WebAssembly's speed', low-level languages like Rust and C++ saying, 'and we like the security it could give us' and all of them saying 'and we all want to make developers more effective'

So WebAssembly could really help other languages with important problems.

But with today’s WebAssembly, you wouldn’t want to use it in this way. You can run WebAssembly in all of these places, but that’s not enough.

Right now, WebAssembly only talks in numbers. This means the two languages can call each other’s functions.

But if a function takes or returns anything besides numbers, things get complicated. You can either:

  • Ship one module that has a really hard-to-use API that only speaks in numbers… making life hard for the module’s user.
  • Add glue code for every single environment you want this module to run in… making life hard for the module’s developer.

But this doesn’t have to be the case.

It should be possible to ship a single WebAssembly module and have it run anywhere… without making life hard for either the module’s user or developer.

user saying 'what even is this API?' vs developer saying 'ugh, so much glue code to worry about' vs both saying 'wait, it just works?'

So the same WebAssembly module could use rich APIs, using complex types, to talk to:

  • Modules running in their own native runtime (e.g. Python modules running in a Python runtime)
  • Other WebAssembly modules written in different source languages (e.g. a Rust module and a Go module running together in the browser)
  • The host system itself (e.g. a WASI module providing the system interface to an operating system or the browser’s APIs)

A wasm file with arrows pointing to and from: logos for different runtimes (Ruby, php, and Python), other wasm files compiled from Rust and Go, and host systems like the OS or browser

And with a new, early-stage proposal, we’re seeing how we can make this Just Work™, as you can see in this demo.

So let’s take a look at how this will work. But first, let’s look at where we are today and the problems that we’re trying to solve.

WebAssembly talking to JS

WebAssembly isn’t limited to the web. But up to now, most of WebAssembly’s development has focused on the Web.

That’s because you can make better designs when you focus on solving concrete use cases. The language was definitely going to have to run on the Web, so that was a good use case to start with.

This gave the MVP a nicely contained scope. WebAssembly only needed to be able to talk to one language—JavaScript.

And this was relatively easy to do. In the browser, WebAssembly and JS both run in the same engine, so that engine can help them efficiently talk to each other.

A js file asking the engine to call a WebAssembly function

The engine asking the WebAssembly file to run the function

But there is one problem when JS and WebAssembly try to talk to each other… they use different types.

Currently, WebAssembly can only talk in numbers. JavaScript has numbers, but also quite a few more types.

And even the numbers aren’t the same. WebAssembly has 4 different kinds of numbers: int32, int64, float32, and float64. JavaScript currently only has Number (though it will soon have another number type, BigInt).
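As a rough illustration (my sketch, not an example from the proposal), a plain function exported to WebAssembly from a language like Rust can only use those number types in its signature:

// A plain exported function: the only types that can cross this
// boundary are WebAssembly's number types (i32/i64/f32/f64).
#[no_mangle]
pub extern "C" fn add(a: i32, b: i32) -> i32 {
    a + b
}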

The difference isn’t just in the names for these types. The values are also stored differently in memory.

First off, in JavaScript any value, no matter the type, is put in something called a box (and I explained boxing more in another article).

WebAssembly, in contrast, has static types for its numbers. Because of this, it doesn’t need (or understand) JS boxes.

This difference makes it hard to communicate with each other.

JS asking wasm to add 5 and 7, and Wasm responding with 9.2368828e+18

But if you want to convert a value from one number type to the other, there are pretty straightforward rules.

Because it’s so simple, it’s easy to write down. And you can find this written down in WebAssembly’s JS API spec.

A large book that has mappings between the wasm number types and the JS number types

This mapping is hardcoded in the engines.

It’s kind of like the engine has a reference book. Whenever the engine has to pass parameters or return values between JS and WebAssembly, it pulls this reference book off the shelf to see how to convert these values.

JS asking the engine to call wasm's add function with 5 and 7, and the engine looking up how to do conversions in the book

Having such a limited set of types (just numbers) made this mapping pretty easy. That was great for an MVP. It limited how many tough design decisions needed to be made.

But it made things more complicated for the developers using WebAssembly. To pass strings between JS and WebAssembly, you had to find a way to turn the strings into an array of numbers, and then turn an array of numbers back into a string. I explained this in a previous post.

JS putting numbers into WebAssembly's memory

This isn’t difficult, but it is tedious. So tools were built to abstract this away.

For example, tools like Rust’s wasm-bindgen and Emscripten’s Embind automatically wrap the WebAssembly module with JS glue code that does this translation from strings to numbers.

JS file complaining about having to pass a string to Wasm, and the JS glue code offering to do all the work

And these tools can do these kinds of transformations for other high-level types, too, such as complex objects with properties.

This works, but there are some pretty obvious use cases where it doesn’t work very well.

For example, sometimes you just want to pass a string through WebAssembly. You want a JavaScript function to pass a string to a WebAssembly function, and then have WebAssembly pass it to another JavaScript function.

Here’s what needs to happen for that to work:

  1. the first JavaScript function passes the string to the JS glue code

  2. the JS glue code turns that string object into numbers and then puts those numbers into linear memory

  3. then passes a number (a pointer to the start of the string) to WebAssembly

  4. the WebAssembly function passes that number over to the JS glue code on the other side

  5. the second JavaScript function pulls all of those numbers out of linear memory and then decodes them back into a string object

  6. which it gives to the second JS function

JS file passing string 'Hello' to JS glue code
JS glue code turning string into numbers and putting that in linear memory
JS glue code telling engine to pass 2 to wasm
Wasm telling engine to pass 2 to JS glue code
JS glue code taking bytes from linear memory and turning them back into a string
JS glue code passing string to JS file

So the JS glue code on one side is just reversing the work it did on the other side. That’s a lot of work to recreate what’s basically the same object.

If the string could just pass straight through WebAssembly without any transformations, that would be way easier.

WebAssembly wouldn’t be able to do anything with this string—it doesn’t understand that type. We wouldn’t be solving that problem.

But it could just pass the string object back and forth between the two JS functions, since they do understand the type.

So this is one of the reasons for the WebAssembly reference types proposal. That proposal adds a new basic WebAssembly type called anyref.

With an anyref, JavaScript just gives WebAssembly a reference object (basically a pointer that doesn’t disclose the memory address). This reference points to the object on the JS heap. Then WebAssembly can pass it to other JS functions, which know exactly how to use it.

JS passing a string to Wasm and the engine turning it into a pointer
Wasm passing the string to a different JS file, and the engine just passes the pointer on
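For a feel of what this pass-through pattern looks like from Rust today, here is a minimal sketch using wasm-bindgen's JsValue, which models an opaque reference to a JS value (wasm-bindgen currently implements this with its own bookkeeping rather than native anyref, so treat it as an illustration of the idea rather than the proposal itself):

use wasm_bindgen::prelude::*;

// The module holds the reference and hands it back without ever
// looking inside it -- no copy into linear memory happens here.
#[wasm_bindgen]
pub fn pass_through(value: JsValue) -> JsValue {
    value
}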

So that solves one of the most annoying interoperability problems with JavaScript. But that’s not the only interoperability problem to solve in the browser.

There’s another, much larger, set of types in the browser. WebAssembly needs to be able to interoperate with these types if we’re going to have good performance.

WebAssembly talking directly to the browser

JS is only one part of the browser. The browser also has a lot of other functions, called Web APIs, that you can use.

Behind the scenes, these Web API functions are usually written in C++ or Rust. And they have their own way of storing objects in memory.

Web APIs’ parameters and return values can be lots of different types. It would be hard to manually create mappings for each of these types. So to simplify things, there’s a standard way to talk about the structure of these types—Web IDL.

When you’re using these functions, you’re usually using them from JavaScript. This means you are passing in values that use JS types. How does a JS type get converted to a Web IDL type?

Just as there is a mapping from WebAssembly types to JavaScript types, there is a mapping from JavaScript types to Web IDL types.

So it’s like the engine has another reference book, showing how to get from JS to Web IDL. And this mapping is also hardcoded in the engine.

A book that has mappings between the JS types and Web IDL types

For many types, this mapping between JavaScript and Web IDL is pretty straightforward. For example, types like DOMString and JS’s String are compatible and can be mapped directly to each other.

Now, what happens when you’re trying to call a Web API from WebAssembly? Here’s where we get to the problem.

Currently, there is no mapping between WebAssembly types and Web IDL types. This means that, even for simple types like numbers, your call has to go through JavaScript.

Here’s how this works:

  1. WebAssembly passes the value to JS.
  2. In the process, the engine converts this value into a JavaScript type, and puts it in the JS heap in memory
  3. Then, that JS value is passed to the Web API function. In the process, the engine converts the JS value into a Web IDL type, and puts it in a different part of memory, the renderer’s heap.

Wasm passing number to JS
Engine converting the int32 to a Number and putting it in the JS heap
Engine converting the Number to a double, and putting that in the renderer heap

This takes more work than it needs to, and also uses up more memory.

There’s an obvious solution to this—create a mapping from WebAssembly directly to Web IDL. But that’s not as straightforward as it might seem.

For simple Web IDL types like boolean and unsigned long (which is a number), there are clear mappings from WebAssembly to Web IDL.

But for the most part, Web API parameters are more complex types. For example, an API might take a dictionary, which is basically an object with properties, or a sequence, which is like an array.

To have a straightforward mapping between WebAssembly types and Web IDL types, we’d need to add some higher-level types. And we are doing that—with the GC proposal. With that, WebAssembly modules will be able to create GC objects—things like structs and arrays—that could be mapped to complicated Web IDL types.

But if the only way to interoperate with Web APIs is through GC objects, that makes life harder for languages like C++ and Rust that wouldn’t use GC objects otherwise. Whenever the code interacts with a Web API, it would have to create a new GC object and copy values from its linear memory into that object.

That’s only slightly better than what we have today with JS glue code.

We don’t want JS glue code to have to build up GC objects—that’s a waste of time and space. And we don’t want the WebAssembly module to do that either, for the same reasons.

We want it to be just as easy for languages that use linear memory (like Rust and C++) to call Web APIs as it is for languages that use the engine’s built-in GC. So we need a way to create a mapping between objects in linear memory and Web IDL types, too.

There’s a problem here, though. Each of these languages represents things in linear memory in different ways. And we can’t just pick one language’s representation. That would make all the other languages less efficient.

someone standing between the names of linear memory languages like C, C++, and Rust, pointing to Rust and saying 'I pick... that one!'. A red arrow points to the person saying 'bad idea'

But even though the exact layout in memory for these things is often different, there are some abstract concepts that they usually share in common.

For example, for strings the language often has a pointer to the start of the string in memory, and the length of the string. And even if the string has a more complicated internal representation, it usually needs to convert strings into this format when calling external APIs anyways.

This means we can reduce this string down to a type that WebAssembly understands… two i32s.

The string Hello in linear memory, with an offset of 2 and length of 5. Red arrows point to offset and length and say 'types that WebAssembly understands!'
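As a rough Rust sketch (mine, not the proposal's), this is how a string naturally reduces to that pointer/length pair on a 32-bit wasm target:

// A string seen as the two numbers WebAssembly understands: a pointer
// to the first byte and a length, both of which fit in an i32 on a
// 32-bit wasm target.
fn as_ptr_len(s: &str) -> (i32, i32) {
    (s.as_ptr() as i32, s.len() as i32)
}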

We could hardcode a mapping like this in the engine. So the engine would have yet another reference book, this time for WebAssembly to Web IDL mappings.

But there’s a problem here. WebAssembly is a type-checked language. To keep things secure, the engine has to check that the calling code passes in types that match what the callee asks for.

This is because there are ways for attackers to exploit type mismatches and make the engine do things it’s not supposed to do.

If you’re calling something that takes a string, but you try to pass the function an integer, the engine will yell at you. And it should yell at you.

So we need a way for the module to explicitly tell the engine something like: “I know Document.createElement() takes a string. But when I call it, I’m going to pass you two integers. Use these to create a DOMString from data in my linear memory. Use the first integer as the starting address of the string and the second as the length.”

This is what the Web IDL proposal does. It gives a WebAssembly module a way to map between the types that it uses and Web IDL’s types.

These mappings aren’t hardcoded in the engine. Instead, a module comes with its own little booklet of mappings.

Wasm file handing a booklet to the engine and saying `Here's a little guidebook. It will tell you how to translate my types to interface types`

So this gives the engine a way to say “For this function, do the type checking as if these two integers are a string.”

The fact that this booklet comes with the module is useful for another reason, though.

Sometimes a module that would usually store its strings in linear memory will want to use an anyref or a GC type in a particular case… for example, if the module is just passing an object that it got from a JS function, like a DOM node, to a Web API.

So modules need to be able to choose on a function-by-function (or even argument-by-argument) basis how different types should be handled. And since the mapping is provided by the module, it can be custom-tailored for that module.

Wasm telling engine 'Read carefully... For some function that take DOMStrings, I'll give you two numbers. For others, I'll just give you the DOMString that JS gave to me.'

How do you generate this booklet?

The compiler takes care of this information for you. It adds a custom section to the WebAssembly module. So for many language toolchains, the programmer doesn’t have to do much work.

For example, let’s look at how the Rust toolchain handles this for one of the simplest cases: passing a string into the alert function.

#[wasm_bindgen]
extern "C" {
    fn alert(s: &str);
}

The programmer just has to tell the compiler to include this function in the booklet using the annotation #[wasm_bindgen]. By default, the compiler will treat this as a linear memory string and add the right mapping for us. If we needed it to be handled differently (for example, as an anyref) we’d have to tell the compiler using a second annotation.
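For context, a typical way to use that import looks something like the sketch below (modeled on wasm-bindgen's hello-world examples; it assumes the extern block above and the wasm_bindgen prelude are in scope):

#[wasm_bindgen]
pub fn greet(name: &str) {
    // The generated bindings take care of describing how `name`'s bytes
    // live in linear memory so the host can rebuild the string.
    alert(&format!("Hello, {}!", name));
}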

So with that, we can cut out the JS in the middle. That makes passing values between WebAssembly and Web APIs faster. Plus, it means we don’t need to ship down as much JS.

And we didn’t have to make any compromises on what kinds of languages we support. It’s possible to have all different kinds of languages that compile to WebAssembly. And these languages can all map their types to Web IDL types—whether the language uses linear memory, or GC objects, or both.

Once we stepped back and looked at this solution, we realized it solved a much bigger problem.

WebAssembly talking to All The Things

Here’s where we get back to the promise in the intro.

Is there a feasible way for WebAssembly to talk to all of these different things, using all these different type systems?

A wasm file with arrows pointing to and from: logos for different runtimes (Ruby, php, and Python), other wasm files compiled from Rust and Go, and host systems like the OS or browser

Let’s look at the options.

You could try to create mappings that are hardcoded in the engine, like WebAssembly to JS and JS to Web IDL are.

But to do that, for each language you’d have to create a specific mapping. And the engine would have to explicitly support each of these mappings, and update them as the language on either side changes. This creates a real mess.

This is kind of how early compilers were designed. There was a pipeline for each source language to each machine code language. I talked about this more in my first posts on WebAssembly.

We don’t want something this complicated. We want it to be possible for all these different languages and platforms to talk to each other. But we need it to be scalable, too.

So we need a different way to do this… more like modern day compiler architectures. These have a split between front-end and back-end. The front-end goes from the source language to an abstract intermediate representation (IR). The back-end goes from that IR to the target machine code.

This is where the insight from Web IDL comes in. When you squint at it, Web IDL kind of looks like an IR.

Now, Web IDL is pretty specific to the Web. And there are lots of use cases for WebAssembly outside the web. So Web IDL itself isn’t a great IR to use.

But what if you just use Web IDL as inspiration and create a new set of abstract types?

This is how we got to the WebAssembly interface types proposal.

Diagram showing WebAssembly interface types in the middle. On the left is a wasm module, which could be compiled from Rust, Go, C, etc. Arrows point from these options to the types in the middle. On the right are host languages like JS, Python, and Ruby; host platforms like .NET, Node, and operating systems, and more wasm modules. Arrows point from these options to the types in the middle.

These types aren’t concrete types. They aren’t like the int32 or float64 types in WebAssembly today. There are no operations on them in WebAssembly.

For example, there won’t be any string concatenation operations added to WebAssembly. Instead, all operations are performed on the concrete types on either end.

There’s one key point that makes this possible: with interface types, the two sides aren’t trying to share a representation. Instead, the default is to copy values between one side and the other.

A wasm module saying 'since this is a string in linear memory, I know how to manipulate it' and browser saying 'since this is a DOMString, I know how to manipulate it'

There is one case that would seem like an exception to this rule: the new reference values (like anyref) that I mentioned before. In this case, what is copied between the two sides is the pointer to the object. So both pointers point to the same thing. In theory, this could mean they need to share a representation.

In cases where the reference is just passing through the WebAssembly module (like the anyref example I gave above), the two sides still don’t need to share a representation. The module isn’t expected to understand that type anyway… just pass it along to other functions.

But there are times where the two sides will want to share a representation. For example, the GC proposal adds a way to create type definitions so that the two sides can share representations. In these cases, the choice of how much of the representation to share is up to the developers designing the APIs.

This makes it a lot easier for a single module to talk to many different languages.

In some cases, like the browser, the mapping from the interface types to the host’s concrete types will be baked into the engine.

So one set of mappings is baked in at compile time and the other is handed to the engine at load time.

Engine holding Wasm's mapping booklet and its own mapping reference book for Wasm Interface Types to Web IDL, saying 'So this maps to a string? Ok, I can take it from here to the DOMString that the function is asking for using my hardcoded bindings'

But in other cases, like when two WebAssembly modules are talking to each other, they both send down their own little booklet. They each map their functions’ types to the abstract types.

Engine reaching for mapping booklets from two wasm files, saying 'Ok, let's see how these map to each other'

This isn’t the only thing you need to enable modules written in different source languages to talk to each other (and we’ll write more about this in the future) but it is a big step in that direction.

So now that you understand why, let’s look at how.

What do these interface types actually look like?

Before we look at the details, I should say again: this proposal is still under development. So the final proposal may look very different.

Two construction workers with a sign that says 'Use caution'

Also, this is all handled by the compiler. So even when the proposal is finalized, you’ll only need to know what annotations your toolchain expects you to put in your code (like in the wasm-bindgen example above). You won’t really need to know how this all works under the covers.

But the details of the proposal are pretty neat, so let’s dig into the current thinking.

The problem to solve

The problem we need to solve is translating values between different types when a module is talking to another module (or directly to a host, like the browser).

There are four places where we may need to do a translation:

For exported functions

  • accepting parameters from the caller
  • returning values to the caller

For imported functions

  • passing parameters to the function
  • accepting return values from the function

And you can think about each of these as going in one of two directions:

  • Lifting, for values leaving the module. These go from a concrete type to an interface type.
  • Lowering, for values coming into the module. These go from an interface type to a concrete type.

Telling the engine how to transform between concrete types and interface types

So we need a way to tell the engine which transformations to apply to a function’s parameters and return values. How do we do this?

By defining an interface adapter.

For example, let’s say we have a Rust module compiled to WebAssembly. It exports a greeting_ function that can be called without any parameters and returns a greeting.

Here’s what it would look like (in WebAssembly text format) today.

a Wasm module that exports a function that returns two numbers. See proposal linked above for details.

So right now, this function returns two integers.

But we want it to return the string interface type. So we add something called an interface adapter.

If an engine understands interface types, then when it sees this interface adapter, it will wrap the original module with this interface.

an interface adapter that returns a string. See proposal linked above for details.

It won’t export the greeting_ function anymore… just the greeting function that wraps the original. This new greeting function returns a string, not two numbers.

This provides backwards compatibility because engines that don’t understand interface types will just export the original greeting_ function (the one that returns two integers).

How does the interface adapter tell the engine to turn the two integers into a string?

It uses a sequence of adapter instructions.

Two adapter instructions inside of the adapter function. See proposal linked above for details.

The adapter instructions above are two from a small set of new instructions that the proposal specifies.

Here’s what the instructions above do:

  1. Use the call-export adapter instruction to call the original greeting_ function. This is the one that the original module exported, which returned two numbers. These numbers get put on the stack.
  2. Use the memory-to-string adapter instruction to convert the numbers into the sequence of bytes that make up the string. We have to specify “mem” here because a WebAssembly module could one day have multiple memories. This tells the engine which memory to look in. Then the engine takes the two integers from the top of the stack (which are the pointer and the length) and uses those to figure out which bytes to use.

This might look like a full-fledged programming language. But there is no control flow here—you don’t have loops or branches. So it’s still declarative even though we’re giving the engine instructions.

What would it look like if our function also took a string as a parameter (for example, the name of the person to greet)?

Very similar. We just change the interface of the adapter function to add the parameter. Then we add two new adapter instructions.

Here’s what these new instructions do:

  1. Use the arg.get instruction to take a reference to the string object and put it on the stack.
  2. Use the string-to-memory instruction to take the bytes from that object and put them in linear memory. Once again, we have to tell it which memory to put the bytes into. We also have to tell it how to allocate the bytes. We do this by giving it an allocator function (which would be an export provided by the original module).

One nice thing about using instructions like this: we can extend them in the future… just as we can extend the instructions in WebAssembly core. We think the instructions we’re defining are a good set, but we aren’t committing to these being the only instructions for all time.

If you’re interested in understanding more about how this all works, the explainer goes into much more detail.

Sending these instructions to the engine

Now how do we send this to the engine?

These annotations get added to the binary file in a custom section.

A file split in two. The top part is labeled 'known sections, e.g. code, data'. The bottom part is labeled 'custom sections, e.g. interface adapter'

If an engine knows about interface types, it can use the custom section. If not, the engine can just ignore it, and you can use a polyfill which will read the custom section and create glue code.

How is this different than CORBA, Protocol Buffers, etc?

There are other standards that seem like they solve the same problem—for example CORBA, Protocol Buffers, and Cap’n Proto.

How are those different? They are solving a much harder problem.

They are all designed so that you can interact with a system that you don’t share memory with—either because it’s running in a different process or because it’s on a totally different machine across the network.

This means that you have to be able to send the thing in the middle—the “intermediate representation” of the objects—across that boundary.

So these standards need to define a serialization format that can efficiently go across the boundary. That’s a big part of what they are standardizing.

Two computers with wasm files on them and multiple lines flowing into a single line connecting them. The single line represents serialization and is labelled 'IR'

Even though this looks like a similar problem, it’s actually almost the exact inverse.

With interface types, this “IR” never needs to leave the engine. It’s not even visible to the modules themselves.

The modules only see what the engine spits out for them at the end of the process—what’s been copied to their linear memory or given to them as a reference. So we don’t have to tell the engine what layout to give these types—that doesn’t need to be specified.

What is specified is the way that you talk to the engine. It’s the declarative language for this booklet that you’re sending to the engine.

Two wasm files with arrows pointing to the word 'IR' with no line between, because there is no serialization happening.

This has a nice side effect: because this is all declarative, the engine can see when a translation is unnecessary—like when the two modules on either side are using the same type—and skip the translation work altogether.

The engine looking at the booklets for a Rust module and a Go module and saying 'Ooh, you’re both using linear memory for this string... I’ll just do a quick copy between your memories, then'

How can you play with this today?

As I mentioned above, this is an early stage proposal. That means things will be changing rapidly, and you don’t want to depend on this in production.

But if you want to start playing with it, we’ve implemented this across the toolchain, from production to consumption:

  • the Rust toolchain
  • wasm-bindgen
  • the Wasmtime WebAssembly runtime

And since we maintain all these tools, and since we’re working on the standard itself, we can keep up with the standard as it develops.

Even though all these parts will continue changing, we’re making sure to synchronize our changes to them. So as long as you use up-to-date versions of all of these, things shouldn’t break too much.

Construction worker saying 'Just be careful and stay on the path'

So here are the many ways you can play with this today. For the most up-to-date version, check out this repo of demos.

Thank you

  • Thank you to the team who brought all of the pieces together across all of these languages and runtimes: Alex Crichton, Yury Delendik, Nick Fitzgerald, Dan Gohman, and Till Schneidereit
  • Thank you to the proposal co-champions and their colleagues for their work on the proposal: Luke Wagner, Francis McCabe, Jacob Gravelle, Alex Crichton, and Nick Fitzgerald
  • Thank you to my fantastic collaborators, Luke Wagner and Till Schneidereit, for their invaluable input and feedback on this article

The post WebAssembly Interface Types: Interoperate with All the Things! appeared first on Mozilla Hacks - the Web developer blog.

The Mozilla Blog: Mozilla takes action to protect users in Kazakhstan

Today, Mozilla and Google took action to protect the online security and privacy of individuals in Kazakhstan. Together the companies deployed technical solutions within Firefox and Chrome to block the Kazakhstan government’s ability to intercept internet traffic within the country.

The response comes after credible reports that internet service providers in Kazakhstan have required people in the country to download and install a government-issued certificate on all devices and in every browser in order to access the internet. This certificate is not trusted by either of the companies, and once installed, it allowed the government to decrypt and read anything a user typed or posted, including intercepting their account information and passwords. This targeted people visiting popular sites such as Facebook, Twitter and Google, among others.

“People around the world trust Firefox to protect them as they navigate the internet, especially when it comes to keeping them safe from attacks like this that undermine their security. We don’t take actions like this lightly, but protecting our users and the integrity of the web is the reason Firefox exists.” — Marshall Erwin, Senior Director of Trust and Security, Mozilla

“We will never tolerate any attempt, by any organization—government or otherwise—to compromise Chrome users’ data. We have implemented protections from this specific issue, and will always take action to secure our users around the world.” — Parisa Tabriz, Senior Engineering Director, Chrome

This is not the first attempt by the Kazakhstan government to intercept the internet traffic of everyone in the country. In 2015, the Kazakhstan government attempted to have a root certificate included in Mozilla’s trusted root store program. After it was discovered that they were intending to use the certificate to intercept user data, Mozilla denied the request. Shortly after, the government forced citizens to manually install its certificate but that attempt failed after organizations took legal action.

Each company will deploy a technical solution unique to its browser. For additional information on those solutions please see the below links.

Mozilla
Google

Russian: Если вы хотите ознакомиться с этим текстом на русском языке, нажмите здесь.

Kazakh: Бұл постыны қазақ тілінде мына жерден оқыңыз.

The post Mozilla takes action to protect users in Kazakhstan appeared first on The Mozilla Blog.

Mozilla Security Blog: Protecting our Users in Kazakhstan

Russian translation: Если вы хотите ознакомиться с этим текстом на русском языке, нажмите здесь.

Kazakh translation: Бұл постыны қазақ тілінде мына жерден оқыңыз.

In July, a Firefox user informed Mozilla of a security issue impacting Firefox users in Kazakhstan: They stated that Internet Service Providers (ISPs) in Kazakhstan had begun telling their customers that they must install a government-issued root certificate on their devices. What the ISPs didn’t tell their customers was that the certificate was being used to intercept network communications. Other users and researchers confirmed these claims, and listed 3 dozen popular social media and communications sites that were affected.

The security and privacy of HTTPS encrypted communications in Firefox and other browsers relies on trusted Certificate Authorities (CAs) to issue website certificates only to someone that controls the domain name or website. For example, you and I can’t obtain a trusted certificate for www.facebook.com because Mozilla has strict policies for all CAs trusted by Firefox which only allow an authorized person to get a certificate for that domain. However, when a user in Kazakhstan installs the root certificate provided by their ISP, they are choosing to trust a CA that doesn’t have to follow any rules and can issue a certificate for any website to anyone. This enables the interception and decryption of network communications between Firefox and the website, sometimes referred to as a Monster-in-the-Middle (MITM) attack.

We believe this act undermines the security of our users and the web, and it directly contradicts Principle 4 of the Mozilla Manifesto that states, “Individuals’ security and privacy on the internet are fundamental and must not be treated as optional.”

To protect our users, Firefox, together with Chrome, will block the use of the Kazakhstan root CA certificate. This means that it will not be trusted by Firefox even if the user has installed it. We believe this is the appropriate response because users in Kazakhstan are not being given a meaningful choice over whether to install the certificate and because this attack undermines the integrity of a critical network security mechanism.  When attempting to access a website that responds with this certificate, Firefox users will see an error message stating that the certificate should not be trusted.

We encourage users in Kazakhstan affected by this change to research the use of virtual private network (VPN) software, or the Tor Browser, to access the Web. We also strongly encourage anyone who followed the steps to install the Kazakhstan government root certificate to remove it from your devices and to immediately change your passwords, using a strong, unique password for each of your online accounts.

The post Protecting our Users in Kazakhstan appeared first on Mozilla Security Blog.

Cameron Kaiser: FPR16 delays

FPR16 was supposed to reach you in beta sometime tomorrow but I found a reproducible crash in the optimized build, probably due to one of my vain attempts to fix JavaScript bugs. I'm still investigating exactly which change(s) were responsible. We should still make the deadline (September 3) to be concurrent with the 60.9/68.1 ESRs, but there will not be much of a beta testing period and I don't anticipate it being available until probably at least Friday or Saturday. More later.

While you're waiting, read about today's big OpenPOWER announcement. Isn't it about time for a modern PowerPC under your desk?

This Week In Rust: This Week in Rust 300

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's crate is async-std, a library with async variants of the standard library's IO etc.
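A minimal taste of the API, for the curious (illustrative only; check the crate's documentation for the current interface):

use async_std::task;

fn main() {
    // async-std ships its own executor, so an async block can be driven
    // to completion without any additional runtime setup.
    task::block_on(async {
        println!("hello from an async task");
    });
}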

Thanks to mmmmib for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

268 pull requests were merged in the last week

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs

No RFCs are currently in final comment period.

Tracking Issues & PRs

New RFCs

No new RFCs were proposed this week.

Upcoming Events

Africa
Asia Pacific
Europe
North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

C++ being memory safe is like saying riding a motorcycle is crash safe.

It totally is, if you happen to have the knowledge and experience to realize this is only true if you remember to put on body-armor, a helmet, a full set of leathers including gloves and reinforced boots, and then remember to operate the motorcycle correctly afterwards. In C/C++ though, that armor is completely 100% optional.

cyrusm on /r/rust

Thanks to Dmitry Kashitsyn for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nasa42, llogiq, and Flavsditz.

Discuss on r/rust.

Mozilla GFX: moz://gfx newsletter #47

Hi there! Time for another mozilla graphics newsletter. In the comments section of the previous newsletter, Michael asked about the relation between WebRender and WebGL; I’ll try to give a short answer here.

Both WebRender and WebGL need access to the GPU to do their work. At the moment both of them use the OpenGL API, either directly or through ANGLE, which emulates OpenGL on top of D3D11. They, however, each work with their own OpenGL context. Frames produced with WebGL are sent to WebRender as texture handles. WebRender, at the API level, has a single entry point for images, video frames, canvases, in short for every grid of pixels in some flavor of RGB format, be they CPU-side buffers or already in GPU memory, as is normally the case for WebGL. In order to share textures between separate OpenGL contexts we rely on platform-specific APIs such as EGLImage and DXGI.

Beyond that there isn’t any fancy interaction between WebGL and WebRender. The latter sees the former as an image producer just like 2D canvases, video decoders and plain static images.

What’s new in gfx

Wayland and hidpi improvements on Linux

  • Martin Stransky made a proof of concept implementation of DMABuf textures in Gecko’s IPC mechanism. This dmabuf EGL texture backend on Wayland is similar to what we have on Android/Mac. Dmabuf buffers can be shared with the main/compositor process, can be bound as a render target or texture, and can live in GPU memory. The same dmabuf buffer can also be used as a hardware overlay when it’s attached to a wl_surface/wl_subsurface as a wl_buffer.
  • Jan Horak fixed a bug that prevented tabs from rendering after restoring a minimized window.
  • Jan Horak fixed the window parenting hierarchy with Wayland.
  • Jan Horak fixed a bug with hidpi that was causing select popups to render incorrectly after scrolling.

WebGL multiview rendering

WebGL’s multiview rendering extension has been approved by the working group, and its implementation by Jeff Gilbert will be shipping in Firefox 70.
This extension allows more efficient rendering into multiple viewports, which is most commonly used by VR/AR for rendering both eyes at the same time.

Better high dynamic range support

Jean Yves landed the first part of his HDR work (a set of 14 patches). While we can’t yet output HDR content to an HDR screen, this work greatly improved the correctness of the conversion from various HDR formats to low dynamic range sRGB.

You can follow progress on the color space meta bug.

What’s new in WebRender

WebRender is a GPU-based 2D rendering engine for the web, written in Rust, currently powering Firefox’s rendering engine as well as the research web browser Servo.

If you are curious about the state of WebRender on a particular platform, up to date information is available at http://arewewebrenderyet.com

Speaking of which, darkspirit enabled WebRender in Firefox Nightly for Linux users on Nvidia hardware with the Nouveau drivers.

More filters in WebRender

When we run into a primitive that isn’t supported by WebRender, we make it go through a software fallback implementation, which can be slow for some things. SVG filters are a good example of primitives that perform much better if implemented on the GPU in WebRender.
Connor Brewster has been working on implementing a number of SVG filters in WebRender:

See the SVG filters in WebRender meta bug.

Texture swizzling and allocation

WebRender previously only worked with BGRA for color textures. Unfortunately this format is optimal on some platforms but sub-optimal (or even unsupported) on others. So a conversion sometimes has to happen, and this conversion, if done by the driver, can be very costly.

Kvark reworked the texture caching logic to support using and swizzling between different formats (for example RGBA and BGRA).
A document that landed with the implementation provides more details about the approach and context.
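As a rough illustration of what such a swizzle amounts to, here is a minimal CPU-side sketch in Rust (my illustration, not WebRender’s actual implementation, which handles this through the texture cache):

fn swizzle_bgra_rgba_in_place(pixels: &mut [u8]) {
    // Each pixel is four bytes; swapping the first and third byte converts
    // BGRA to RGBA (and back, since the swap is its own inverse).
    for px in pixels.chunks_exact_mut(4) {
        px.swap(0, 2);
    }
}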

Kvark also improved the texture cache allocation behavior.

Kvark also landed various refactorings (1), (2), (3), (4).

Android improvements

Jamie fixed emoji rendering with WebRender on Android and continues investigating driver issues on Adreno 3xx devices.

Displaylist serialization

Dan replaced bincode in our DL IPC code with a new bespoke and private serialization library (peek-poke), ending the terrible reign of the secret serde deserialize_in_place hacks and our fork of serde_derive.

Picture caching improvements

Glenn landed several improvements and fixes to the picture caching infrastructure:

  • Bug 1566712 – Fix quality issues with picture caching when transform has fractional offsets.
  • Bug 1572197 – Fix world clip region for preserve-3d items with picture caching.
  • Bug 1566901 – Make picture caching more robust to float issues.
  • Bug 1567472 – Fix bug in preserve-3d batching code in WebRender.

Font rendering improvements

Lee landed quite a few font related patches:

  • Bug 1569950 – Only partially clear WR glyph caches if it is not necessary to fully clear.
  • Bug 1569174 – Disable embedded bitmaps if ClearType rendering mode is forced.
  • Bug 1568841 – Force GDI parameters for GDI render mode.
  • Bug 1568858 – Always stretch box shadows except for Cairo.
  • Bug 1568841 – Don’t use enhanced contrast on GDI fonts.
  • Bug 1553818 – Use GDI ClearType contrast for GDI font gamma.
  • Bug 1565158 – Allow forcing DWrite symmetric rendering mode.
  • Bug 1563133 – Limit the GlyphBuffer capacity.
  • Bug 1560520 – Limit the size of WebRender’s glyph cache.
  • Bug 1566449 – Don’t reverse glyphs in GlyphBuffer for RTL.
  • Bug 1566528 – Clamp the amount of synthetic bold extra strikes.
  • Bug 1553228 – Don’t free the result of FcPatternGetString.

Various fixes and improvements

  • Gankra fixed an issue with addon popups and document splitting.
  • Sotaro prevented some unnecessary composites on out-of-viewport external image updates.
  • Nical fixed an integer overflow causing the browser to freeze.
  • Nical improved the overlay profiler by showing more relevant numbers when the timings are noisy.
  • Nical fixed corrupted rendering of progressively loaded images.
  • Nical added a fast path when none of the primitives of an image batch need anti-aliasing or repetition.

Wladimir PalantKaspersky in the Middle - what could possibly go wrong?

Roughly a decade ago I read an article that asked antivirus vendors to stop intercepting encrypted HTTPS connections, a practice that actively hurts security and privacy. As you can certainly imagine, antivirus vendors agreed with the sensible argument and today no reasonable antivirus product would even consider intercepting HTTPS traffic. Just kidding… Of course they kept going, and so two years ago a study was published detailing the security issues introduced by interception of HTTPS connections. Google and Mozilla once again urged antivirus vendors to stop. Surely this time it worked?

Of course not. So when I decided to look into Kaspersky Internet Security in December last year, I found it breaking up HTTPS connections so that it would get between the server and your browser in order to “protect” you. Expecting some deeply technical details about HTTPS protocol misimplementations now? Don’t worry, I don’t know enough myself to inspect Kaspersky software on this level. The vulnerabilities I found were far more mundane.

Kaspersky Internet Security getting between browser and server

I reported eight vulnerabilities to Kaspersky Lab between 2018-12-13 and 2018-12-21. This article will only describe three vulnerabilities, which were fixed in April this year. This includes two vulnerabilities that weren’t deemed a security risk by Kaspersky; it’s up to you to decide whether you agree with this assessment. The remaining five vulnerabilities were only fixed in July, and I agreed to wait until November with the disclosure to give users enough time to upgrade.

Edit (2019-08-22): In order to disable this functionality you have to go into Settings, select “Additional” on the left side, then click “Network.” There you will see a section called “Encryption connection scanning” where you need to choose “Do not scan encrypted connections.”


The underappreciated certificate warning pages

There is an important edge case with HTTPS connections: what if a connection is established but the other side uses an invalid certificate? Current browsers will generally show you a certificate warning page in this scenario. In Firefox it looks like this:

Certificate warning page in Firefox

This page has seen a surprising amount of changes over the years. The browser vendors recognized that asking users to make a decision isn’t a good idea here. Most of the time, getting out is the best course of action, and ignoring the warning is only a viable option for very technical users. So the text here is very clear, low on technical details, and the recommended solution is highlighted. The option to ignore the warning, on the other hand, is well-hidden to prevent people from using it without understanding the implications. While the page looks different in other browsers, the main design considerations are the same.

But with Kaspersky Internet Security in the middle, the browser is no longer talking to the server, Kaspersky is. The way HTTPS is designed, it means that Kaspersky is responsible for validating the server’s certificate and producing a certificate warning page. And that’s what the certificate warning page looks like then:

Certificate warning page when Kaspersky is installed

There is a considerable amount of technical detail here, supposedly to allow users to make an informed decision, but usually confusing them instead. Oh, and why does it list the URL as “www.example.org”? That’s not what I typed into the address bar, it’s actually what this site claims to be (the name has been extracted from the site’s invalid certificate). That’s a tiny security issue in itself, but it wasn’t worth reporting as it only affects sites accessed by IP address, which should never be the case with HTTPS.

The bigger issue: what is the user supposed to do here? There is “leave this website” in the text, but experience shows that people usually won’t read when hitting a roadblock like this. And the highlighted action here is “I understand the risks and wish to continue” which is what most users can be expected to hit.

Using clickjacking to override certificate warnings

Let’s say that we hijacked some user’s web traffic, e.g. by tricking them into connecting to our malicious WiFi hotspot. Now we want to do something evil with that, such as collecting their Google searches or hijacking their Google account. Unfortunately, HTTPS won’t let us do it. If we place ourselves between the user and the Google server, we have to use our own certificate for the connection to the user. With our certificate being invalid, this will trigger a certificate warning however.

So the goal is to make the user click “I understand the risks and wish to continue” on the certificate warning page. We could just ask nicely, and given how this page is built we’ll probably succeed in a fair share of cases. Or we could use a trick called clickjacking – let the user click it without realizing what they are clicking.

There is only one complication. When the link is clicked there will be an additional confirmation pop-up:

Warning displayed by Kaspersky when overriding a certificate

But don’t despair just yet! That warning is merely generic text, it would apply to any remotely insecure action. We would only need to convince the user that the warning is expected and they will happily click “Continue.” For example, we could give them the following page when they first connect to the network, similar to those captive portals:

Fake Kaspersky warning page

It looks like a legitimate Kaspersky warning page, but it isn’t: the text here was written by me. The only “real” thing here is the “I understand the risks and wish to continue” link, which actually belongs to an embedded frame. That frame contains Kaspersky’s certificate warning for www.google.com and has been positioned in such a way that only the link is visible. When the user clicks it, they will get the generic warning from above and without doubt confirm ignoring the invalid certificate. We won, now we can do our evil thing without triggering any warnings!

How do browser vendors deal with this kind of attack? They require at least two clicks to happen on different spots of the certificate warning page in order to add an exception for an invalid certificate, which makes clickjacking attacks impractical. Kaspersky on the other hand felt very confident about their warning prompt, so they opted for adding more information to it. This message will now show you the name of the site you are adding the exception for. Let’s just hope that accessing a site by IP address is the only scenario where attackers can manipulate that name…

Something you probably don’t know about HSTS

There is a slightly less obvious detail to the attack described above: it shouldn’t have worked at all. See, if you reroute www.google.com traffic to a malicious server and navigate to the site then, neither Firefox nor Chrome will give you the option to override the certificate warning. Getting out will be the only option available, meaning no way whatsoever to exploit the certificate warning page. What is this magic? Did browsers implement some special behavior only for Google?

Firefox certificate warning for www.google.com

They didn’t. What you see here is a side-effect of the HTTP Strict-Transport-Security (HSTS) mechanism, which Google and many other websites happen to use. When you visit Google it will send the HTTP header Strict-Transport-Security: max-age=31536000 with the response. This tells the browser: “This is an HTTPS-only website, don’t ever try to create an unencrypted connection to it. Keep that in mind for the next year.”
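For illustration, this is roughly the kind of parsing a client does to extract that lifetime from the header (a minimal sketch in Rust, not any real browser’s code):

fn parse_hsts_max_age(header_value: &str) -> Option<u64> {
    // Look for the max-age directive among the semicolon-separated directives.
    header_value
        .split(';')
        .map(str::trim)
        .find_map(|directive| directive.strip_prefix("max-age="))
        .and_then(|seconds| seconds.parse().ok())
}

fn main() {
    // The value Google sends, as quoted above: one year in seconds.
    assert_eq!(parse_hsts_max_age("max-age=31536000"), Some(31_536_000));
}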

So when the browser later encounters a certificate error on a site using HSTS, it knows: the website owner promised to keep HTTPS functional. There is no way that an invalid certificate is ok here, so allowing users to override the certificate would be wrong.

Unless you have Kaspersky in the middle of course, because Kaspersky completely ignores HSTS and still allows users to override the certificate. When I reported this issue, the vendor response was that this isn’t a security risk because the warning displayed is sufficient. Somehow they decided to add support for HSTS nevertheless, so that current versions will no longer allow overriding certificates here.

There is no doubt that there are more scenarios where Kaspersky software weakens the security precautions made by browsers. For example, if a certificate is revoked (usually because it has been compromised), browsers will normally recognize that thanks to OCSP stapling and prevent the connection. But I noticed recently that Kaspersky Internet Security doesn’t support OCSP stapling, so if this application is active it will happily allow you to connect to a likely malicious server.

Using injected content for Universal XSS

Kaspersky Internet Security isn’t merely listening in on connections to HTTPS sites, it is also actively modifying those. In some cases it will generate a response of its own, such as the certificate warning page we saw above. In others it will modify the response sent by the server.

For example, if you didn’t install the Kaspersky browser extension, it will fall back to injecting a script into server responses which is then responsible for “protecting” you. This protection does things like showing a green checkmark next to Google search results that are considered safe. As Heise Online wrote merely a few days ago, this also used to leak a unique user ID which allowed tracking users regardless of any protective measures on their side. Oops…

There is a bit more to this feature called URL Advisor. When you put the mouse cursor above the checkmark icon, a message appears stating that you have a safe site there. That message is a frame displaying url_advisor_balloon.html. Where does this file load from? If you have the Kaspersky browser extension, it will be part of that browser extension. If you don’t, it will load from ff.kis.v2.scr.kaspersky-labs.com in Firefox and gc.kis.v2.scr.kaspersky-labs.com in Chrome – Kaspersky software will intercept requests to these servers and answer them locally. I noticed however that things were different in Microsoft Edge: here this file would load directly from www.google.com (or any other website if you changed the host name).

URL Advisor frame showing up when the checkmark icon is hovered

Certainly, when injecting their own web page into every domain on the web, Kaspersky developers thought about making it very secure? Let’s have a look at the code running there:

var policeLink = document.createElement("a");
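// The link target comes straight from data the parent document sent via postMessage, with no validation, so javascript: URLs are accepted: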
policeLink.href = IsDefined(UrlAdvisorLinkPoliceDecision) ? UrlAdvisorLinkPoliceDecision : locales["UrlAdvisorLinkPoliceDecision"];
policeLink.target = "_blank";
div.appendChild(policeLink);

This creates a link inside the frame dynamically. Where does the link target come from? It’s part of the data received from the parent document, with no validation performed. In particular, javascript: links will be happily accepted. So a malicious website needs to figure out the location of url_advisor_balloon.html and embed it in a frame using the host name of the website they want to attack. Then they send a message to it:

frame.contentWindow.postMessage(JSON.stringify({
  command: "init",
  data: {
    verdict: {
      url: "",
      categories: [
        21
      ]
    },
    locales: {
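      // This string ends up verbatim as the href of the link created above: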
      UrlAdvisorLinkPoliceDecision: "javascript:alert('Hi, this JavaScript code is running on ' + document.domain)",
      CAT_21: "click here"
    }
  }
}), "*");

What you get is a link labeled “click here” which will run arbitrary JavaScript code in the context of the attacked domain when clicked. And once again, the attackers could ask the user nicely to click it. Or they could use clickjacking, so whenever the user clicks anywhere on their site, the click goes to this link inside an invisible frame.

Injected JavaScript code running in context of the Google domain

And here you have it: a malicious website taking over your Google or social media accounts, all because Kaspersky considered it a good idea to have their content injected into secure traffic of other people’s domains. But at least this particular issue was limited to Microsoft Edge.

Timeline

  • 2018-12-13: Sent report via Kaspersky bug bounty program: Lack of HSTS support facilitating MiTM attacks.
  • 2018-12-17: Sent reports via Kaspersky bug bounty program: Certificate warning pages susceptible to clickjacking and Universal XSS in Microsoft Edge.
  • 2018-12-20: Response from Kaspersky: HSTS and clickjacking reports are not considered security issues.
  • 2018-12-20: Requested disclosure of the HSTS and clickjacking reports.
  • 2018-12-24: Disclosure denied due to similarity with one of my other reports.
  • 2019-04-29: Kaspersky notifies me about the three issues here being fixed (KIS 2019 Patch E, actually released three weeks earlier).
  • 2019-04-29: Requested disclosure of these three issues, no response.
  • 2019-07-29: With five remaining issues reported by me fixed (KIS 2019 Patch F and KIS 2020), requested disclosure on all reports.
  • 2019-08-04: Disclosure denied on HSTS report because “You’ve requested too many tickets for disclosure at the same time.”
  • 2019-08-05: Disclosure denied on five not yet disclosed reports, asking for time until November for users to update.
  • 2019-08-06: Notified Kaspersky about my intention to publish an article about the three issues here on 2019-08-19, no response.
  • 2019-08-12: Reminded Kaspersky that I will publish an article on these three issues on 2019-08-19.
  • 2019-08-12: Kaspersky requesting an extension of the timeline until 2019-08-22, citing that they need more time to prepare.
  • 2019-08-16: Security advisory published by Kaspersky without notifying me.

Cameron KaiserChrome murders FTP like Jeffrey Epstein

What is it with these people? Why can't things that are working be allowed to still go on working? (Blah blah insecure blah blah unused blah blah maintenance blah blah web everything.)

This leaves an interesting situation where Google has, in its very own search index, HTML pages served over FTP that its own browser won't be able to view:

At the top of the search results, even!

Obviously those FTP HTML pages load just fine in mainline Firefox, at least as of this writing, and of course TenFourFox. (UPDATE: This won't work in Firefox either after Fx70, though FTP in general will still be accessible. Note that it references Chrome's announcements; as usual, these kinds of distributed firing squads tend to be self-reinforcing.)

Is it a little ridiculous to serve pages that way? Okay, I'll buy that. But it works fine and wasn't bothering anyone, and they must have some relevance to be accessible because Google even indexed them.

Why is everything old suddenly so bad?

Tantek ÇelikIndieWebCamps Timeline 2011-2019: Amsterdam to Utrecht

At the beginning of IndieWeb Summit 2019, I gave a brief talk on State of the IndieWeb and mentioned that:

We've scheduled lots of IndieWebCamps this year and are on track to schedule a record number of different cities as well.

I had conceived of a graphical representation of the growth of IndieWebCamps over the past nine years, both in number and across the world, but with everything else involved in setting up and running the Summit, I ran out of time. However, the idea persisted, and finally this past week, with a little help from Aaron Parecki re-implementing Dopplr’s algorithm for turning city names into colors, I was able to put together something pretty close to what I’d envisioned:

Timeline chart of IndieWebCamps per city from 2011 through 2019: Istanbul, Amsterdam, Utrecht, Nürnberg, Düsseldorf, Berlin, Edinburgh, Oxford, Brighton, New Haven, Baltimore, Cambridge, New York, Austin, Bellingham, Los Angeles, San Francisco, and Portland.

I don’t know of any tools to take this kind of locations-versus-years data and graph it as such. So I built an HTML table with a cell for each IndieWebCamp, as well as cells for the colspans of empty space. Each colored cell is hyperlinked to the IndieWebCamp for that city for that year.

2011-2018 and over half of 2019 are IndieWebCamps (and Summits) that have already happened. 2019 includes bars for four upcoming IndieWebCamps, which are fully scheduled and open for sign-ups.

The table markup is copy-pasted from the IndieWebCamp wiki template where I built it, and you can see the template working live in the context of the IndieWebCamp Cities page. I’m sure the markup could be improved, suggestions welcome!
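The city-to-color trick mentioned above is easy to sketch. Assuming the commonly described Dopplr recipe (MD5-hash the city name and use the first six hex digits as an RGB color) and the md5 crate, a minimal Rust version might look like this:

fn city_color(city: &str) -> String {
    // Hash the city name and keep the first six hex digits as "#rrggbb".
    let hex = format!("{:x}", md5::compute(city));
    format!("#{}", &hex[..6])
}

The same city name always yields the same color, which is what makes each city’s bars recognizable across the years.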

Julien VehentThe cost of micro-services complexity

It has long been recognized by the security industry that complex systems are impossible to secure, and that pushing for simplicity helps increase trust by reducing assumptions and increasing our ability to audit. This is often captured under the acronym KISS, for "keep it stupid simple", a design principle popularized by the US Navy back in the 60s. For a long time, we thought the enemy was application monoliths that burden our infrastructure with years of unpatched vulnerabilities.


So we split them up. We took them apart. We created micro-services where each function, each logical component, is its own individual service, designed, developed, operated and monitored in complete isolation from the rest of the infrastructure. And we composed them ad vitam æternam. Want to send an email? Call the REST API of micro-service X. Want to run a batch job? Invoke lambda function Y. Want to update a database entry? Post it to A which sends an event to B consumed by C stored in D transformed by E and inserted by F. We all love micro-services architecture. It’s like watching dominoes fall down. When it works, it’s visceral. It’s when it doesn’t that things get interesting. After nearly a decade of operating them, let me share some downsides and caveats encountered in large-scale production environments.


High operational cost

The first problem is operational cost. Even in a devops cloud automated world, each micro-service, serverless or not, needs setup, maintenance and deployment. We never fully got to the holy grail of completely automated everything, so humans are still involved with these things. Perhaps someone sold you on the idea that devs could do the ops work in their free time, but let’s face it, that’s a lie, and you need dedicated teams of specialists to run the stuff the right way. And those folks don’t come cheap.

The more services you have, the harder it is to keep up with them. First you’ll start noticing delays in getting new services deployed. A week. Two weeks. A month. What do you mean you need three months’ notice to get a new service set up?

Then, it’s the deployments that start to take time. And as a result, services that don’t absolutely need to be deployed, well, aren’t. Soon they’ll become outdated, vulnerable, running on the old version of everything, and deploying a new version means a week’s worth of work to get it back to the current standard.


QA uncertainty

A second problem is quality assurance. Deploying anything in a micro-services world means verifying everything still works. Got a chain of 10 services? Each one probably has its own dev team, QA specialists, ops people that need to get involved, or at least notified, with every deployment of any service in the chain. I know it’s not supposed to be this way. We’re supposed to have automated QA, integration tests, and synthetic end-to-end monitoring that can confirm that a butterfly flapping its wings in us-west-2 triggers a KPI update on the leadership dashboard. But in the real world, nothing’s ever perfect and things tend to break in mysterious ways all the time. So you warn everybody when you deploy anything, and require each intermediate service to rerun their own QA until the pain of getting 20 people involved with a one-line change really makes you wish you had a monolith.

The alternative is that you don’t get those people involved, because, well, they’re busy, and everything is fine until a minor change goes out, all testing passes, until two days later in a different part of the world someone’s product is badly broken. It takes another 8 hours for them to track it back to your change, another 2 to roll it back, and 4 to test everything by hand. The post-mortem of that incident has 37 invitees, including 4 senior directors. Bonus points if you were on vacation when that happened.

Huge attack surface

And finally, there’s security. We sure love auditing micro-services, with their tiny codebases that are always neat and clean. We love reviewing their infrastructure too, with those dynamic security groups and clean dataflows and dedicated databases and IAM controlled permissions. There are a lot of security benefits to micro-services, so we’ve been heavily advocating for them for several years now.

And then, one day, someone gets fed up with having to manage API keys for three dozen services in flat YAML files and suggests using OAuth for service-to-service authentication. Or perhaps Jean-Kevin drank the mTLS Kool-Aid at the FoolNix conference and made a PKI prototype on the flight back (side note: do you know how hard it is to securely run a PKI over 5 or 10 years? It’s hard). Or perhaps compliance mandates that every server, no matter how small, must run a security agent.

Even when you keep everything simple, this vast network of tiny services quickly becomes a nightmare to reason about. It’s just too big, and it’s everywhere. Your cross-IAM role assumptions keep you up at night. 73% of services are behind on updates and no one dares touch them. One day, you ask if anyone has a diagram of all the network flows and Jean-Kevin sends you a dot graph he generated using some hacky Python. Your browser crashes trying to open it; the damn thing is 158MB of SVG.

Most vulnerabilities happen in the seams of things. API credentials will leak. Firewalls will open. Access controls will get mismanaged. The more of them you have, the harder it is to keep everything locked down.


Everything in moderation

I’m not anti micro-services. I do believe they are great, and that you should use them, but, like a good bottle of Lagavulin, in moderation. It’s probably OK to let your monolith do more than one thing, and it’s certainly OK to extract the one functionality that several applications need into a micro-service. We did this with autograph, because it was obvious that handling cryptographic operations should be done by a dedicated micro-service, but we don’t do it for everything. My advice is to wait until at least three services want a given thing before turning it into a micro-service. And if the dependency chain becomes too large, consider going back to a well-managed monolith, because in many cases, it actually is the simpler approach.

Hacks.Mozilla.OrgUsing WebThings Gateway notifications as a warning system for your home

Ever wonder if that leaky pipe you fixed is holding up? With a trip to the hardware store and a Mozilla WebThings Gateway you can set up a cheap leak sensor to keep an eye on the situation, whether you’re home or away. Although you can look up detector status easily on the web-based dashboard, it would be better to not need to pay attention unless a leak actually occurs. In the WebThings Gateway 0.9 release, a number of different notification mechanisms can be set up, including emails, apps, and text messages.

Leak Sensor Demo

         

In this post I’ll show you how to set up gateway notifications to warn you of changes in your home that you care about. You can set each notification to one of three levels of severity–low, normal, and high–so that you can identify which are informational changes and which alerts should be addressed immediately (fire! intruder! leak!). First, we’ll choose a device to worry about. Next, we’ll decide how we want our gateway to contact us. Finally, we’ll set up a rule to tell the gateway when it should contact us.

Choosing a device

First, make sure the device you want to monitor is connected to your gateway. If you haven’t added the device yet, visit the Gateway User Guide for information about getting started.

Now it’s time to figure out which things’ properties will lead to interesting notifications. For each thing you want to investigate, click on its splat icon to get a full view of all its properties.

View of all gateway things, with the leak sensor’s splat icon highlighted; detailed leak sensor view.

You may also want to log properties of various analog devices over time to see what values are “normal”. For example, you can monitor the refrigerator temperature for a couple of days to help determine what qualifies as an abnormal temperature. In this graph, you can see the difference between baseline power draw (around 20 watts) and charging (up to 90 watts).

Graph of laptop charger plug power over the last day with clear differentiation between off, standby, and charging states

Charger Power Consumption Graph

In my case, I’ve selected a leak sensor so I won’t need to log data in advance. It’s pretty clear that I want to be notified when the leak property of my sensor becomes true (i.e., when a leak is detected). If instead you want to monitor a smart plug, you can look at voltage, power, or on/off state. Note that the notification rules you create will let you combine multiple inputs using “and” or “or” logic. For example, you might want to be alerted if indoor motion is detected “and” all of the family smartphone “presence” states are “inactive” (i.e., no one in your family is home, so what caused motion?). Whatever your choice, keep the logical states of your various sensors in mind while you set up your notifier.

Setting up your notifier

The 0.9 WebThings Gateway release added support for notifiers as a specific form of add-on. Thanks to the efforts of the community and a bit of our own work, your gateway can already send you notifications over email, SMS, Telegram, or specialized push notification apps with new add-ons released every week. You can find several notification add-on options by clicking “+” on the Settings > Add-ons page.

Screenshots: the main menu with Settings highlighted; the Settings page with the Add-ons section highlighted; the initial list of installed add-ons without the email add-on; the list of installable add-ons with the email sender highlighted; and the list of installed add-ons with the link to the email add-on’s README highlighted.

The easiest-to-use notifiers are email and SMS since there are fewer moving parts, but feel free to choose whichever approach you prefer. Follow the configuration instructions in your chosen notifier’s README file. You can get to the README for your notifier by clicking on the author’s name in the add-on list then scrolling down.

You’ll find a complete guide to the email notifier here: https://github.com/mozilla-iot/email-sender-adapter#email-sender-adapter.

Creating a rule

Finally, let’s teach our gateway how and when it should yell for attention. We can set this up in a simple drag-and-drop rule. First, drag your device to the left as a trigger and select the “Leak” property.

Screenshots: dragging and dropping the leak block into the rule; where to click to open the leak block’s property dropdown; and configuring the leak block through the dropdown.

Next, drag your notification channel to the right as an effect and configure its title, body, and level as desired.

Screenshots: dragging the email block into the rule; the rule after the email block is dropped into it; and configuring the email part of the rule through a dropdown.

Your rule is now set up and ready to go!

Fully configured if leak then send email rule

The finished rule!

You can now manually test it out. For a leak sensor you can just spill a little water on it to make sure you get a text, email, or other notification warning you about a possible scary flood. This is also a perfect time to start experimenting. Can you set up a second, louder notification for when you’re asleep? What about only notifying when you’re at home so you can deal with the leak immediately?

Advanced rule logic where if the leak sensor is active and "phone at home" is true then it sends an email

A more advanced rule

Notifications are just one small piece of the WebThings Gateway ecosystem. We’re trying to build a future where the convenience of a connected life doesn’t require giving up your security and privacy. If you have ideas about how the WebThings Gateway can better orchestrate your home, please comment on Discourse or contribute on GitHub. If your preferred notification channel is missing and you can code, we love community add-ons! Check out the source code of the email add-on for inspiration. Coming up next, we’ll be talking about how you can have a natural spoken dialogue with the WebThings Gateway without sending your voice data to the cloud.

The post Using WebThings Gateway notifications as a warning system for your home appeared first on Mozilla Hacks - the Web developer blog.

The Rust Programming Language BlogAnnouncing Rust 1.37.0

The Rust team is happy to announce a new version of Rust, 1.37.0. Rust is a programming language that is empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, getting Rust 1.37.0 is as easy as:

$ rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.37.0 on GitHub.

What's in 1.37.0 stable

The highlights of Rust 1.37.0 include referring to enum variants through type aliases, built-in cargo vendor, unnamed const items, profile-guided optimization, a default-run key in Cargo, and #[repr(align(N))] on enums. Read on for a few highlights, or see the detailed release notes for additional information.

Referring to enum variants through type aliases

With Rust 1.37.0, you can now refer to enum variants through type aliases. For example:

type ByteOption = Option<u8>;

fn increment_or_zero(x: ByteOption) -> u8 {
    match x {
        ByteOption::Some(y) => y + 1,
        ByteOption::None => 0,
    }
}

In implementations, Self acts like a type alias. So in Rust 1.37.0, you can also refer to enum variants with Self::Variant:

impl Coin {
    fn value_in_cents(&self) -> u8 {
        match self {
            Self::Penny => 1,
            Self::Nickel => 5,
            Self::Dime => 10,
            Self::Quarter => 25,
        }
    }
}

To be more exact, Rust now allows you to refer to enum variants through "type-relative resolution", <MyType<..>>::Variant. More details are available in the stabilization report.

Built-in Cargo support for vendored dependencies

After being available as a separate crate for years, the cargo vendor command is now integrated directly into Cargo. The command fetches all your project's dependencies, unpacking them into the vendor/ directory, and shows the configuration snippet required to use the vendored code during builds.
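The printed snippet, which goes into the project’s .cargo/config, looks roughly like this (the command’s actual output is authoritative):

[source.crates-io]
replace-with = "vendored-sources"

[source.vendored-sources]
directory = "vendor"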

There are multiple cases where cargo vendor is already used in production: the Rust compiler rustc uses it to ship all its dependencies in release tarballs, and projects with monorepos use it to commit the dependencies' code in source control.

Using unnamed const items for macros

You can now create unnamed const items. Instead of giving your constant an explicit name, simply name it _ instead. For example, in the rustc compiler we find:

/// Type size assertion where the first parameter
/// is a type and the second is the expected size.
#[macro_export]
macro_rules! static_assert_size {
    ($ty:ty, $size:expr) => {
        const _: [(); $size] = [(); ::std::mem::size_of::<$ty>()];
        //    ^ Note the underscore here.
    }
}

static_assert_size!(Option<Box<String>>, 8); // 1.
static_assert_size!(usize, 8); // 2.

Notice the second static_assert_size!(..): thanks to the use of unnamed constants, you can define new items without naming conflicts. Previously you would have needed to write static_assert_size!(MY_DUMMY_IDENTIFIER, usize, 8);. Instead, with Rust 1.37.0, it now becomes easier to create ergonomic and reusable declarative and procedural macros for static analysis purposes.

Profile-guided optimization

The rustc compiler now comes with support for Profile-Guided Optimization (PGO) via the -C profile-generate and -C profile-use flags.

Profile-Guided Optimization allows the compiler to optimize code based on feedback from real workloads. It works by compiling the program to optimize in two steps:

  1. First, the program is built with instrumentation inserted by the compiler. This is done by passing the -C profile-generate flag to rustc. The instrumented program then needs to be run on sample data and will write the profiling data to a file.
  2. Then, the program is built again, this time feeding the collected profiling data back into rustc by using the -C profile-use flag. This build will make use of the collected data to allow the compiler to make better decisions about code placement, inlining, and other optimizations.

For more in-depth information on Profile-Guided Optimization, please refer to the corresponding chapter in the rustc book.
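In command form, the two steps look roughly like this, with /tmp/pgo-data and main.rs as placeholders and llvm-profdata used to post-process the raw profiles (the rustc book chapter linked above has the authoritative workflow):

$ rustc -O -Cprofile-generate=/tmp/pgo-data main.rs
$ ./main
$ llvm-profdata merge -o /tmp/pgo-data/merged.profdata /tmp/pgo-data
$ rustc -O -Cprofile-use=/tmp/pgo-data/merged.profdata main.rs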

Choosing a default binary in Cargo projects

cargo run is great for quickly testing CLI applications. When multiple binaries are present in the same package, you have to explicitly declare the name of the binary you want to run with the --bin flag. This makes cargo run not as ergonomic as we'd like, especially when a binary is called more often than the others.

Rust 1.37.0 addresses the issue by adding default-run, a new key in Cargo.toml. When the key is declared in the [package] section, cargo run will default to the chosen binary if the --bin flag is not passed.
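For example, with a manifest along these lines (package and binary names are placeholders), cargo run would pick the binary named server without needing --bin server:

[package]
name = "my-project"
version = "0.1.0"
edition = "2018"
default-run = "server"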

#[repr(align(N))] on enums

The #[repr(align(N))] attribute can be used to raise the alignment of a type definition. Previously, the attribute was only allowed on structs and unions. With Rust 1.37.0, the attribute can now also be used on enum definitions. For example, the following type Align16 would, as expected, report 16 as the alignment whereas the natural alignment without #[repr(align(16))] would be 4:

#[repr(align(16))]
enum Align16 {
    Foo { foo: u32 },
    Bar { bar: u32 },
}

The semantics of using #[repr(align(N))] on an enum are the same as defining a wrapper struct AlignN<T> with that alignment and then using AlignN<MyEnum>:

#[repr(align(N))]
struct AlignN<T>(T);

Library changes

In Rust 1.37.0 there have been a number of standard library stabilizations:

Other changes

There are other changes in the Rust 1.37 release: check out what changed in Rust, Cargo, and Clippy.

Contributors to 1.37.0

Many people came together to create Rust 1.37.0. We couldn't have done it without all of you. Thanks!

New sponsors of Rust infrastructure

We'd like to thank two new sponsors of Rust's infrastructure who provided the resources needed to make Rust 1.37.0 happen: Amazon Web Services (AWS) and Microsoft Azure.

  • AWS has provided hosting for release artifacts (compilers, libraries, tools, and source code), serving those artifacts to users through CloudFront, preventing regressions with Crater on EC2, and managing other Rust-related infrastructure hosted on AWS.
  • Microsoft Azure has sponsored builders for Rust’s CI infrastructure, notably the extremely resource intensive rust-lang/rust repository.

Mozilla Localization (L10N)L10n Report: August Edition

Please note some of the information provided in this report may be subject to change as we are sometimes sharing information about projects that are still in early stages and are not final yet. 

Welcome!

New localizers:

  • Mohsin of Assamese (as) is committed to rebuilding the community and has been contributing to several projects.
  • Emil of Syriac (syc) joined us through the Common Voice project.
  • Ratko and Isidora of Serbian (sr) have been prolific contributors to a wide range of products and projects since joining the community.
  • Haile of Amharic (am) joined us through the Common Voice project, and is busy localizing and recruiting more contributors so he can rebuild the community.
  • Ahsun Mahmud of Bengali (bn) focuses his interest on Firefox.

Are you a locale leader and want us to include new members in our upcoming reports? Contact us!

New community/locales added

  • Maltese (mt)
  • Romansh Vallader (rm-vallery)
  • Syriac (syc)

New content and projects

What’s new or coming up in Firefox desktop

We’re quickly approaching the deadline for Firefox 69. The last day to ship your changes in this version is August 20, less than a week away.

A lot of content targeting Firefox 70 has already landed and is available in Pontoon for translation, with more to come in the following days. Here are a few of the areas you should focus your testing on.

about:logins

This is the new password manager for Firefox. If you don’t plan to store the passwords in your browser, you should at least create a new profile to test the feature and its interactions (adding logins, editing, removing, etc.).

Enhanced Tracking Protection (ETP) and Protection Panels

This is going to be the main focus for Firefox 70:

  • New protection panel displayed when clicking the shield icon in the address bar.
  • Updated preferences.
  • New about:protections page. The content of this page will be exposed for localization in the coming days.

With ETP there will be several new terms to define for your language, like “Cross-Site Tracking Cookies” or “Social Media Trackers”. Make sure they’re translated consistently across the products and websites.

The deadline to ship localization for Firefox 70 will be October 8.

What’s new or coming up in mobile

It’s summer vacation time in mobile land, which means most projects are following the usual course of things.

Just like for Desktop, we’re quickly approaching the deadline for Firefox Android v69. The last day to ship your changes in this version is August 20.

Another thing to note is that we’ve exposed strings for Firefox iOS v19 (deadline TBD soon).

Other projects are following the usual continuous localization workflow. Stay tuned for the next report as there will be novelties then for sure!

What’s new or coming up in web projects

Firefox Accounts

A lot of strings landed earlier this month. If you need to prioritize what to localize first, look for string IDs containing `delete_account` or `sync-engines`. Expect more strings to land in the coming weeks.

Mozilla.org

The following files were added or updated since the last report.

  • New: firefox/adblocker.lang and firefox/whatsnew_69.lang (due on August 26)
  • Update: firefox/new/trailhead.lang

The navigation.lang file has been made available for localization for some time. This is a shared file, and its content is on production whether the file is fully localized or not. If it is not fully translated yet, make sure to give this file higher priority so it is completed soon.

What’s new or coming up in Foundation projects

More content from foundation.mozilla.org will be exposed to localization in de, es, fr, pl, pt-BR over the next few weeks! Content is exposed in different stages, because the website is built using different technologies, which makes it challenging for localization. The main pages will be available in the Engagement project, and a new tag can help you find them. Other template strings will be exposed in a new project later.

donate.mozilla.org is getting an update too! The website is being rebuilt from the ground up with a new system that will make it easier to maintain. The UI won’t change too much, so the copy will mostly remain the same. However, it won’t be possible to migrate the current translations to the new system; instead, we will heavily rely on Pontoon’s translation memory.
Once the new website is ready, the current project in Pontoon will be set to “read only” mode during a transition period and a new project will be enabled.

Please make sure to review any pending suggestions over the next few weeks, so that they get properly added to the translation memory and are ready to be reused in the new project.

What’s new or coming up in SuMo

Newly published articles:

What’s new or coming up in Pontoon

The Translate.Next work moves on. We hope to have it wrapped up by the end of this quarter (i.e., end of September). Help us test by turning on Translate.Next from the Pontoon translation editor.

Newly published localizer facing documentation

Events

  • Want to showcase an event coming up that your community is participating in? Reach out to any l10n-driver and we’ll include that (see links to emails at the bottom of this report)

Friends of the Lion

Know someone in your l10n community who’s been doing a great job and should appear here? Contact one of the l10n-drivers, and we’ll make sure they get a shout-out (see list at the bottom)!

Useful Links

Questions? Want to get involved?

Did you enjoy reading this report? Let us know how we can improve by reaching out to any one of the l10n-drivers listed above.

Mozilla VR BlogWebXR category in JS13KGames!


Today starts the 8th edition of the annual js13kGames competition and we are sponsoring its WebXR category with a bunch of prizes including Oculus Quest headsets!

Like many other game development contests, the main goal of the js13kGames competition is to make a game based on a given theme within a specific amount of time. This year’s theme is "BACK" and the time you have to work on your game is a whole month, from today to September 13th.
There is, of course, another important rule you must follow: the zip containing your game should not weigh more than 13kb. (Please follow this link for the complete set of rules). Don’t let the size restriction discourage you. Previous competitors have done amazing things in 13kb.

This year, as in the previous editions, Mozilla is sponsoring the competition, with special emphasis on the WebXR category, where, among other prizes, the best three games will get an Oculus Quest headset!


Frameworks allowed

Last year you were allowed to use A-Frame and Babylon.js in your game. This year we have been working with the organization to include three.js on that list!
Because these frameworks weigh far more than 13kb, the requirements for this category have been softened. The size of the framework builds won’t count as part of the final 13kb limit. The allowed links for each framework to include in your game are the following:


If you feel you can present a WebXR game without using any third-party framework and still keep the 13kb limit for the whole game, you are free to do so and I’m sure the judges will value that fact.

You may use any kind of input system: gamepad, gazer, 3DoF or 6DoF controllers, and we will still be able to test your game on different VR devices. Please indicate in the description what the device/input requirements are for your game.
If you have a standalone headset, please make sure you try your game on Firefox Reality because we plan to feature the best games of the competition on the Firefox Reality homepage.

Resources

Here are some useful links if you need some help or want to share your progress!

Enjoy and good luck!

This Week In RustThis Week in Rust 299

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's crate is topgrade, a command-line program to upgrade all the things.

Thanks to Dror Levin for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

270 pull requests were merged in the last week

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs

No RFCs are currently in final comment period.

Tracking Issues & PRs

New RFCs

No new RFCs were proposed this week.

Upcoming Events

Asia Pacific
Europe
North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

For me, acquiring a taste for rustfmt-style seems worthwhile to 'eliminate broad classes of debate', even if I didn't like some of the style when I first looked. I've resisted the temptation to even read about how to customise.

Years ago, I was that person writing style guides etc. I now prefer this problem to be automated-away; freeing up time for malloc-memcpy-golf (most popular sport in the Rust community).

@dholroyd on rust-users

Thanks to troiganto for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nasa42, llogiq, and Flavsditz.

Discuss on r/rust.

Mozilla VR BlogCustom elements for the immersive web


We are happy to introduce the first set of custom elements for the immersive web we have been working on: <img-360> and <video-360>

On the Mixed Reality team, we keep working on improving the content creator experience: building new frameworks, tools, APIs, performance tuning and so on.
Most of these projects are based on the assumption that users have a basic knowledge of 3D graphics and want to go deep on fully customizing their WebXR experience (e.g., using A-Frame or three.js).
But there are still a lot of use cases where content creators just want very simple interactions and don’t have the knowledge or time to create and maintain a custom application built on top of a WebXR framework.

With this project we aim to address the problems these content creators have by providing custom elements with simple, yet polished features. One could be just a simple 360 image or video viewer; another could be a tour allowing the user to jump from one image to another.


Custom elements provide a standard way to create new HTML elements that offer simple functionality matching the expectations of content creators without knowledge of 3D, WebXR or even JavaScript.

How does this work?

Just include the JavaScript bundle on your page and you can start using both elements in your HTML: <img-360> and <video-360>. You just need to provide them with a 360 image or video and the custom elements will do the rest, including detecting WebVR support. A simple example adds a 360 image and a video to a page, with all of the interaction controls generated automatically.

You can try a demo here and find detailed information on how to use them on GitHub.

Next steps

Today we are releasing just these two elements but we have many others in mind and would love your feedback. What new elements would you find useful? Please join us on GitHub to discuss them.
We are also excited to see other companies working hard on providing quality custom elements for the 3D and XR web, such as Google with their <model-viewer> component, and we hope others will follow.

Mozilla Reps CommunityReps OKRs for second half of 2019

Here is the list of the OKRs (Objectives and Key Results) that the Reps Council has set for the second half of 2019.

Objective 1: By the end of 2019, Reps are feeling informed and are more confident to contribute to Mozilla initiatives

  • KR1: More activities related to MDM campaigns are reported on the Reps portal (30% more reporting)
  • KR2: 10% of inactive Reps are getting reactivated via the campaigns
  • KR3: 3 communities that haven’t participated before in campaigns are now joining campaigns regularly
  • KR4: Reps report feeling more involved in the program (success increase of 20%)
  • KR5: More than 80% of the reps are reporting that they know what MDM is about
  • KR6: More than 70% reps are voting in autumn elections
  • KR7: More than 50% of reps are sharing feedback on surveys about the program

 

Objective 2: By the end of 2019, Reps have skills that allow them to be local leaders

  • KR1: Due to the skills that the Reps have obtained, they now contribute to a 20% increase in campaign contributions
  • KR2: 80% of mentors are reporting that they are ready to lead their mentees due to the new mentor training they got (⅘ satisfaction rate)
  • KR3: 90% of the newly onboarded Reps are reporting that they are ready to become local leaders in their community due to their onboarding training

 

Objective 3: By the end of 2019, MDMs recognize Reps as local community builders / helpers

 

  • KR1: 10% more bugs reported for budget / swag (filing on behalf of the community)
  • KR2: [on hold] when the MDM portal is ready, 80% of the leaders of the communities join Reps

Let us know what you think by leaving feedback in the comments.

Wladimir PalantRecognizing basic security flaws in local password managers

If you want to use a password manager (as you probably should), there are literally hundreds of them to choose from. And there are lots of reviews, weighing in features, usability and all other relevant factors to help you make an informed decision. Actually, almost all of them, with one factor suspiciously absent: security. How do you know whether you can trust the application with data as sensitive as your passwords?

Unfortunately, it’s really hard to see security or lack thereof. In fact, even tech publications struggle with this. They will talk about two-factor authentication support, even when discussing a local password manager where it is of very limited use. Or worse yet, they will fire up a debugger to check whether they can see any passwords in memory, completely disregarding the fact that somebody with debug rights can also install a simple key logger (meaning: game over for any password manager).

Judging the security of a password manager is a very complex task, something that only experts in the field are capable of. The trouble: these experts usually work for competing products, and badmouthing the competition would make a bad impression. Luckily, this still leaves me. Actually, I’m not quite an expert, I merely know more than most. And I also work on a competing product, a password manager called PfP: Pain-free Passwords, which I develop as a hobby. But today we’ll just ignore this.

So I want to go with you through some basic flaws which you might encounter in a local password manager. That’s a password manager where all data is stored on your computer rather than being uploaded to some server, a rather convenient feature if you want to take a quick look. Some technical understanding is required, but hopefully you will be able to apply the tricks shown here, particularly if you plan to write about a password manager.

About Password Depot screen

Our guinea pig is a password manager called Password Depot, produced by the German company AceBit GmbH. What’s so special about Password Depot? Absolutely nothing, except for the fact that one of their users asked me for a favor. So I spent 30 minutes looking into it and noticed that they’ve done pretty much everything wrong that they could.

Note: The flaws discussed here have been reported to the company in February this year. The company assured that they take these very seriously but, to my knowledge, didn’t manage to address any of them so far.


Understanding data encryption

First let’s have a look at the data. Luckily for us, with a local password manager it shouldn’t be hard to find. Password Depot stores its data in self-contained database files with the file extension .pswd or .pswe, the latter being merely a ZIP-compressed version of the former. The XML format is being used here, meaning that the contents are easily readable:

XML-formatted Password Depot database

The good news: the <encrypted> flag here clearly indicates that the data is encrypted, as it should be. The bad news: this flag shouldn’t be necessary, as “safely encrypted” should be the only supported mode for a password manager. As long as some form of unencrypted database format is supported, there is a chance that an unwitting user will use it without knowing. Even a downgrade attack might be possible, an attacker replacing the password database by an unencrypted one while it’s still empty, thus making sure that any passwords added to the database later won’t be protected. I’m merely theorizing here, I don’t know whether Password Depot would ever write unencrypted data.

The actual data is more interesting. It’s a base64-encoded blob; when decoded it appears to be unstructured binary data. The size of the data is always a multiple of 16 bytes however. This matches the claim on the website that AES 256 is used for encryption, the AES block size being 16 bytes (128 bits).
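
That part is easy to check yourself. A minimal Node.js sketch (the variable holding the blob is hypothetical, just paste in the base64 text copied from the database):

// Decode the base64 blob and confirm the length is a multiple of the
// 16-byte AES block size.
const encryptedBlob = '...base64 text copied from the database...';
const data = Buffer.from(encryptedBlob, 'base64');
console.log(data.length, data.length % 16 === 0);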

AES is considered secure, so all is good? Not quite, as there are various block cipher modes which could be used and not all of them are equally good. Which one is it here? I got a hint by saving the database as an outdated “mobile password database” file with the .pswx file extension:

Excerpt from Password Depot database in older format

Unlike with the newer format, here various fields are encrypted separately. What sticks out are two pairs of identical values. That’s something that should never happen; identical ciphertexts are always an indicator that something went terribly wrong. In addition, the shorter pair contains merely 16 bytes of data. This means that only a single AES block is stored here (the minimal possible amount of data), no initialization vector or such. And there is only one block cipher mode which won’t use initialization vectors, namely ECB. Every article on ECB says: “Old and busted, do not use!” We’ll later see proof that ECB is used by the newer file format as well.
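
If you want to see for yourself why identical ciphertexts are such a red flag, here is a small Node.js sketch (not Password Depot’s code, just plain AES-256-ECB from the standard crypto module): encrypting the same 16-byte block twice produces the same ciphertext block twice.

// ECB encrypts each 16-byte block independently, so identical plaintext
// blocks yield identical ciphertext blocks, the tell-tale sign above.
const crypto = require('crypto');

const key = crypto.randomBytes(32);                        // any AES-256 key
const cipher = crypto.createCipheriv('aes-256-ecb', key, null);
cipher.setAutoPadding(false);                              // we supply full blocks

const block = Buffer.from('16 byte secret!!');             // exactly one AES block
const ciphertext = Buffer.concat([cipher.update(Buffer.concat([block, block])), cipher.final()]);

console.log(ciphertext.subarray(0, 16).equals(ciphertext.subarray(16, 32))); // true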

Note: If initialization vectors were used, there would be another important thing to consider. Initialization vectors should never be reused, depending on the block cipher mode the results would be more or less disastrous. So something to check out would be: if I undo some changes by restoring a database from backup and make changes to it again, will the application choose the same initialization vector? This could be the case if the application went with a simple incremental counter for initialization vectors, indicating a broken encryption scheme.

Data authentication

It’s common consensus today that data shouldn’t merely be encrypted, it should be authenticated as well. That means that the application should be able to recognize encrypted data which has been tampered with and reject it. Lack of data authentication will make the application try to process manipulated data, and might for example allow conclusions about the plaintext from its reaction. Given that there are multiple ideas of how to achieve authentication, it’s not surprising that developers often mess up here. That’s why modern block cipher modes such as GCM integrate this part into the regular encryption flow.

Note that even without data authentication you might see an application reject manipulated data. That’s because the last block is usually padded before encryption. After decryption the padding will be verified; if it is invalid the data is rejected. Padding doesn’t offer real protection however, in particular it won’t flag manipulation of any block but the last one.

So how can we see whether Password Depot uses authenticated encryption? By changing a byte in the middle of the ciphertext of course! Since with ECB every 16 byte block is encrypted separately, changing a block in the middle won’t affect the last block where the padding is. When I try that with Password Depot, the file opens just fine and all the data is seemingly unaffected:

No signs of data corruption when opening a manipulated database

In addition to proving that no data authentication is implemented, that’s also a clear confirmation that ECB is being used. With ECB only one block is affected by the change, and it was probably some unimportant field – that’s why you cannot see any data corruption here. In fact, even changing the last byte doesn’t make the application reject the data, meaning that there are no padding checks either.
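
For comparison, here is a minimal Node.js sketch of what authenticated encryption gives you (again not Password Depot’s code, just AES-256-GCM from the standard crypto module): flip a single ciphertext byte and decryption fails instead of silently returning garbage.

// With an authenticated mode the tampering is detected when the
// authentication tag is checked during decryption.
const crypto = require('crypto');

const key = crypto.randomBytes(32);
const iv = crypto.randomBytes(12);

const cipher = crypto.createCipheriv('aes-256-gcm', key, iv);
const ciphertext = Buffer.concat([cipher.update('some secret passwords'), cipher.final()]);
const tag = cipher.getAuthTag();

ciphertext[5] ^= 0xff;                                     // tamper with one byte

const decipher = crypto.createDecipheriv('aes-256-gcm', key, iv);
decipher.setAuthTag(tag);
try {
  Buffer.concat([decipher.update(ciphertext), decipher.final()]);
} catch (e) {
  console.log('Tampering detected:', e.message);           // auth tag check fails
}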

What about the encryption key?

As with so many products, the website of Password Depot stresses the fact that a 256 bit encryption key is used. That sounds pretty secure but leaves out one detail: where does this encryption key come from? While the application can accept an external encryption key file, it will normally take nothing but your master password to decrypt the database. So it can be assumed that the encryption key is usually derived from your master password. And your master password is most definitely not 256 bit strong.

Now a weaker master password isn’t a big deal as long as the application came up with reasonable bruteforce protection. This way anybody trying to guess your password will be slowed down, and this kind of attack would take too much time. Password Depot developers indeed thought of something:

Password Depot enforces a delay after entering the wrong password

Wait, no… This is not reasonable bruteforce protection. It would make sense with a web service or some other system that the attackers don’t control. Here however, they could replace Password Depot by a build where this delay has been patched out. Or they could remove Password Depot from the equation completely and just let their password guessing tools run directly against the database file, which would be far more efficient anyway.

The proper way of doing this is using an intentionally slow algorithm to derive the encryption key from the password. The baseline for such algorithms is PBKDF2, with scrypt and Argon2 having the additional advantage of being memory-hard. Did Password Depot use any of these algorithms? I consider that highly unlikely, even though I don’t have any hard proof. See, Password Depot has a know-how article on bruteforce attacks on their website. Under “protection” this article mentions complex passwords as the solution. And then:

Another way to make brute-force attacks more difficult is to lengthen the time between two login attempts (after entering a password incorrectly).

So the bullshit protection outlined above is apparently considered “state of the art,” with the developers completely unaware of better approaches. This is additionally confirmed by the statement that attackers should be able to generate 2 billion keys per second, not something that would be possible with a good key derivation algorithm.

There is still one key derivation aspect here which we can see directly: key derivation should always depend on an individual salt, ideally a random value. This helps slow down attackers who manage to get their hands on many different password databases, as the work performed bruteforcing one database won’t be reusable for the others. So, if Password Depot uses a salt to derive the encryption key, where is it stored? It cannot be stored anywhere outside the database, because the database can be moved to another computer and will still work. And if you look at the database above, there aren’t a whole lot of fields which could be used as a salt.

In fact, there is exactly one such field: <fingerprint>. It appears to be a random value which is unique for each database. Could it be the salt used here? Easy to test: let’s change it! Changing the value in the <fingerprint> field, my database still opens just fine. So: no salt. Bad database, bad…
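
For the record, here is roughly what a sane key derivation could look like, as a Node.js sketch (scrypt with a random per-database salt; the parameters are illustrative, not a recommendation tuned for any particular product):

// Derive a 256-bit key from the master password using a slow, memory-hard
// KDF and a random salt that is stored alongside the encrypted database.
const crypto = require('crypto');

const masterPassword = 'correct horse battery staple';     // hypothetical input
const salt = crypto.randomBytes(16);                       // store next to the data

const key = crypto.scryptSync(masterPassword, salt, 32, { N: 16384, r: 8, p: 1 });
console.log('256-bit key:', key.toString('hex'));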

Browser integration

If you’ve been reading my blog, you already know that browser integration is a common weak point of password managers. Most of the issues are rather obscure and hard to recognize however. Not so in this case. If you look at the Password Depot options, you will see a panel called “Browser.” This one contains an option called “WebSockets port.”

Browser integration options listing WebSockets port

So when the Password Depot browser extension needs to talk to the Password Depot application, it will connect to this port and use the WebSockets protocol. If you check the TCP ports of the machine, you will indeed see Password Depot listening on port 25109. You can use the netstat command line tool for that, or the more convenient CurrPorts utility.

Password Depot listening on TCP port 25109

Note how this lists 0.0.0.0 as the address rather than the expected 127.0.0.1. This means that connections aren’t merely allowed from applications running on the same machine (such as your browser) but from anywhere on the internet. This is a completely unnecessary risk, but that’s really shadowed by the much bigger issue here.

Here is something you need to know about WebSockets first. Traditionally, when a website needed to access some resource, browsers would enforce the same-origin policy. So access would only be allowed for resources belonging to the same website. Later, browsers had to relax the same-origin policy and implement additional mechanisms in order to allow different websites to interact safely. Features conceived after that, such as WebSockets, weren’t bound by the same-origin policy at all and had more flexible access controls from the start.

The consequence: any website can access any WebSockets server, including local servers running on your machine. It is up to the server to validate the origin of the request and to allow or to deny it. If it doesn’t perform this validation, the browser won’t restrict anything on its own. That’s how Zoom and Logitech ended up with applications that could be manipulated by any website to name only some examples.

So let’s say your server is supposed to communicate with a particular browser extension and wants to check the request origin. You will soon notice that there is no proper way of doing this. Not only are browser extension origins browser-dependent, at least in Firefox they are even random and change on every install! That’s why many solutions resort to somehow authenticating the browser extension towards the application with some kind of shared secret. Yet arriving at that shared secret in a way that a website cannot replicate isn’t trivial. That’s why I generally recommend staying away from WebSockets in browser extensions and using native messaging instead, a mechanism meant specifically for browser extensions and with all the security checks already built in.
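
For what it’s worth, even a WebSockets-based approach could at least bind to 127.0.0.1 and look at the Origin header. Here is a rough sketch, assuming the ws npm package; as explained above, an origin check remains shaky for browser extensions, which is why native messaging is still the better answer.

// A local WebSocket server that only listens on the loopback interface and
// rejects connections whose Origin is not a browser extension. Note that in
// Firefox extension origins are random per install, so this check alone
// cannot authenticate a specific extension; it merely keeps websites out.
const WebSocket = require('ws');

const wss = new WebSocket.Server({ host: '127.0.0.1', port: 25109 });

wss.on('connection', (socket, request) => {
  const origin = request.headers.origin || '';
  if (!origin.startsWith('moz-extension://') && !origin.startsWith('chrome-extension://')) {
    socket.close();
    return;
  }
  socket.on('message', (data) => {
    // handle messages from the extension here
  });
});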

But Password Depot, like so many others, chose to go with WebSockets. So how does their extension authenticate itself upon connecting to the server? Here is a shortened code excerpt:

var websocketMgr = {
  _ws:null,
  _connected:false,
  _msgToSend:null,
  initialize:function(msg){
    if (!this._ws){
      this._ws = new WebSocket(WS_HOST + ':' + options.socketPortNumber);
    }
    this._msgToSend = msg;
    this._ws.onopen = ()=>this.onOpen();
  },
  onOpen:function() {
    this._connected = true;
    if (this._msgToSend) {
      this.send(this._msgToSend);
    }
  },
  send:function(message){
    message.clientVersion = "V12";
    if (this._connected && (this._ws.readyState == this._ws.OPEN)){
      this._ws.send(JSON.stringify(message));
    }
    else {
      this.initialize(message);
    }
  }
};

You cannot see any authentication here? Me neither. But maybe there is some authentication info in the actual message? With several layers of indirection in the extension, the message format isn’t really obvious. So to verify the findings there is no way around connecting to the server ourselves and sending a message of our own. Here is what I’ve got:

let ws = new WebSocket("ws://127.0.0.1:25109");
ws.onopen = () =>
{
  ws.send(JSON.stringify({clientVersion: "V12", cmd: "checkState"}));
};
ws.onmessage = event =>
{
  console.log(JSON.parse(event.data));
}

When this code is executed on any HTTP website (not HTTPS because an unencrypted WebSockets connection would be disallowed) you get the following response in the console:

Object { cmd: "checkState", state: "ready", clientAlive: "1", dbName: "test.pswd", dialogTimeout: "10000", clientVersion: "12.0.3" }

Yes, we are in! And judging by the code, with somewhat more effort we could request the stored passwords for any website. All we have to do for this is to ask nicely.

To add insult to injury, from the extension code it’s obvious that Password Depot can communicate via native messaging, with the insecure WebSockets-based implementation only kept for backwards compatibility. It’s impossible to disable this functionality in the application however, only changing the port number is supported. This is still true six months and four minor releases after I reported this issue.

More oddities

If you look at the data stored by Password Depot in the %APPDATA% directory, you will notice a file named pwdepot.appdata. It contains seemingly random binary data and has a size that is a multiple of 16 bytes. Could it be encrypted? And if it is, what could possibly be the encryption key?

The encryption key cannot be based on the master password set by the user, because the password is bound to a database file, yet this file is shared across all of the current user’s databases. The key could be stored somewhere, e.g. in the Windows registry or the application itself. But that would mean that the encryption here is merely obfuscation, relying on the attacker being unable to find the key.

As far as I know, the only way this could make sense is by using Windows Data Protection API. It can encrypt data using a user-specific secret and thus protect it against other users when the user is logged off. So I would expect either CryptProtectData or the newer NCryptProtectSecret function to be used here. But looking through the imported functions of the application files in the Password Depot directory, there is no dependency on NCrypt.dll and only unrelated functions imported from Crypt32.dll.

Functions imported by PasswordDepot.exe

So here we have a guess again, though one that I managed to confirm when debugging a related application: the encryption key is hardcoded in the Password Depot application in a more or less obfuscated way. Security through obscurity at its best.

Summary

Today you’ve hopefully seen that “encrypted” doesn’t automatically mean “secure.” Even if it is “military grade encryption” (common marketing speak for AES), the block cipher mode matters as well, and using ECB is a huge red warning flag. Also, any modern application should authenticate its encrypted data, so that manipulated data results in an error rather than an attempt to make sense of it somehow. Finally, an important question to ask is how the application arrives at an encryption key.

In addition to that, browser integration is something where most vendors make mistakes. In particular, a browser extension using WebSockets to communicate with the respective application is very hard to secure, and most vendors fail even when they try. There shouldn’t be open ports expecting connections from browser extensions; native messaging is the far more robust mechanism.

IRL (podcast)The 5G Privilege

‘5G’ is a new buzzword floating around every corner of the internet. But what exactly is this hyped-up cellular network, often referred to as the next technological evolution in mobile internet communications? Will it really be 100 times faster than what we have now? What will it make possible that has never been possible before? Who will reap the benefits? And, who will get left behind?

Mike Thelander at Signals Research Group imagines the wild ways 5G might change our lives in the near future. Rhiannon Williams hits the street and takes a new 5G network out for a test drive. Amy France lives in a very rural part of Kansas — she dreams of the day that true, fast internet could come to her farm (but isn’t holding her breath). Larry Irving explains why technology has never been provided equally to everyone, and why he fears 5G will leave too many people out. Shireen Santosham, though, is doing what she can to leverage 5G deployment in order to bridge the digital divide in her city of San Jose.

IRL is an original podcast from Firefox. For more on the series go to irlpodcast.org

Read more about Rhiannon Williams' 5G tests throughout London.

And, find out more about San Jose's smart city vision that hopes to bridge the digital divide.

Cameron KaiserAnd now for something completely different: Making HTML 4.0 great again, and relevant Mac sightings at Vintage Computer Festival West 2019

UPDATE: Additional pictures are up at Talospace.

Vintage Computer Festival West 2019 has come and gone, and I'll be posting many of the pictures on Talospace hopefully tonight or tomorrow. However, since this blog's audience is both Mozilla-related (as syndicated on Planet Mozilla) and PowerPC-related, I've chosen to talk a little bit about old browsers for old machines (since, if you use TenFourFox, you're using a relatively recent browser on an old machine), as that was part of my exhibit this year, along with some of the Apple-related exhibits that were present.

This exhibit I christened "RISCy Business," a collection of various classic RISC-based portables and laptops. The machines I had running for festival attendees were a Tadpole-RDI UltraBook IIi (UltraSPARC IIi) running Solaris 10, an IBM ThinkPad 860 (166MHz PowerPC 603e, essentially a PowerBook 1400 in a better chassis) running AIX 4.1, an SAIC Galaxy 1100 (HP PA-7100LC) running NeXTSTEP 3.3, and an RDI PrecisionBook C160L (HP PA-7300LC) running HP/UX 11.00. I also brought my Sun Ultra-3 (Tadpole Viper with a 1.2GHz UltraSPARC IIIi), though because of its prodigious heat issues I didn't run it at the show. None of these machines retailed for less than ten grand, if they were sold commercially at all (the Galaxy wasn't).

Here they are, for posterity:

The UltraBook played a Solaris port of Quake II (software-rendered) and Firefox 2, the ThinkPad ran AIX's Ultimedia Video Monitor application (using the machine's built-in video capture hardware and an off-the-shelf composite NTSC camera) and Netscape Navigator 4.7, the Galaxy ran the standard NeXTSTEP suite along with some essential apps like OmniWeb 2.7b3 and Doom, and the PrecisionBook ran the HP/UX ports of the Frodo Commodore 64 emulator and Microsoft Internet Explorer 5.0 SP1. (Yes, IE for Unix used to be a thing.)

Now, of course, period-correct computers demand a period-correct website viewable on the browsers of the day, which is the site being displayed on screen and served to the machines from a "back office" Raspberry Pi 3. However, devising a late 1990s site means a certain, shall we say, specific aesthetic and careful analysis of vital browser capabilities for maximum impact. In these enlightened times no one seems to remember any of this stuff and what HTML 4.01 features worked where, so here is a handy table for your next old workstation browser demonstration (using a <table>, of course):

Browser                              frames  animated GIF  <marquee>  <blink>
Mozilla Suite 1.7                    yes     yes           yes        yes
Firefox 2                            yes     yes           yes        yes
Netscape Navigator 4.7               yes     yes           yes        yes
Internet Explorer for UNIX 5.0 SP1   yes     yes           yes        no
Firefox 52                           yes     yes           yes        no
OmniWeb 2.7b3                        yes     yes           no         no

Basically I ended up looting oocities and my old files for every obnoxious animated GIF and background I could find. This yielded a website that was surely authentic for the era these machines inhabited, and demonstrated exceptionally good taste.

By popular request, the website the machines are displaying is now live on Floodgap (after a couple minor editorial changes). I think the exhibit was pretty well received:

Probably the star of the show and more or less on topic for this blog was the huge group of Apple I machines (many, if not most, still in working order). They were under Plexiglas, and given that there was seven-figures'-worth of fruity artifacts all in one place, a security guard impassively watched the gawkers.

The Apple I owners' club is there to remind you that you, of course, don't own an Apple I.

A working Xerox 8010, better known as the Xerox Star and one of the innovators of the modern GUI paradigm (plus things like, you know, Ethernet), was on display along with an emulator. Steve Jobs saw one at PARC and we all know how that ended.

One of the systems there, part of the multi-platform Quake deathmatch network exhibit, was a Sun Ultra workstation running an honest-to-goodness installation of the Macintosh Application Environment emulation layer. Just for yuks, it was simultaneously running Windows on its SunPCI x86 side-card as well:

The Quake exhibitors also had a Daystar Millenium in a lovely jet-black case, essentially a Daystar Genesis MP+. These were some of the few multiprocessor Power Macs (and clones at that) before Apple's own dual G4 systems emerged. This system ran four 200MHz PowerPC 604e CPUs, though of course only application software designed for multiprocessing could take advantage of them.

A pair of Pippins were present at the exhibit next to the Quake guys', Apple's infamous attempt to turn the Power Mac into a home console platform and fresh off being cracked:

A carpal Apple Newtons (an eMate and several Message Pads) also stowed up so you card find art if the headwatering recognition was as dab as they said it wan.

There were also a couple Apple II systems hanging around (part of a larger exhibit on 6502-based home computers, hence the Atari 130XE next to it).

I'll be putting up the rest of the photos on Talospace, including a couple other notable historical artifacts and the IBM 604e systems the Quake exhibit had brought along, but as always it was a great time and my exhibit was not judged to be a fire hazard. You should go next year.

The moral of this story is the next time you need to make a 1990s web page that you can actually view on a 1990s browser, not that phony CSS and JavaScript crap facsimile they made up for Captain Marvel, now you know what will actually show a blinking scrolling marquee in a frame when you ask for one. Maybe I should stick an <isindex>-powered guestbook in there too.

(For some additional pictures, see our entry at Talospace.)

Mozilla VR BlogA Summer with Particles and Emojis


This summer I was very lucky to join Hubs by Mozilla as a technical artist intern. Over the 12 weeks that I was at Mozilla, I worked on two different projects.
My first project was about particle systems, something I have always had great interest in. I was developing the particle system feature for Spoke, the 3D editor with which you can easily create a 3D scene and publish it to Hubs.

Particle systems are a technique that has been used in a wide range of game physics, motion graphics and computer graphics related fields. They are usually composed of a large number of small sprites or other objects to simulate some chaotic system or natural phenomena. Particles can make a huge impact on the visual result of an application and in virtual and augmented reality, it can deepen the immersive feeling greatly.

Particle systems can be incredibly complex, so for this version of the particle system we wanted to avoid the heavy behaviour controls found in particle systems from native game engines and keep only the basic attributes that are needed. The Spoke particle system can be separated into two parts, the particles and the emitter. Each particle has a texture/sprite, lifetime, age, size, color, and velocity as its basic attributes. The emitter is simpler, as it only has properties for its width and height and information about the particle count (how many particles it can emit per life cycle).

By changing the particle count and the emitter size, users can easily customize a particle system for different uses, like to create falling snow in a wintry scene or add a small water splash to a fountain.
Changing the emitter size

Changing the number of particles from 100 to 200

You can also change the opacities and the colors of the particles. The actual color and opacity values are interpolated between start, middle and end colors/opacities.

And for the main visuals, we can change the sprites to the image we want by using a URL to an image, or choosing from your local assets.

What does a particle’s life cycle look like? Let’s take a look at this chart:
Every particle is born with a random negative initial age, which can be adjusted through the Age Randomness property. After it's born, its age keeps growing as time goes by. When its age is bigger than the total lifetime (formed by Lifetime and Lifetime Randomness), the particle will die immediately, be re-assigned a negative initial age, and then start over again. The Lifetime here is not the actual lifetime that every particle will live; in order to not have all particles disappear at the same time, we have the Lifetime Randomness attribute to vary the actual lifetime of each particle. The higher the Lifetime Randomness, the larger the differentiation will be among the actual lifetimes of the whole particle system. There is another attribute called Age Randomness, which is similar to Lifetime Randomness. The difference is that Age Randomness is used to vary the negative initial ages to have a variation on the birth of the particles, while Lifetime Randomness is there to have variation at the end of their lives.
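
In pseudo-JavaScript the life cycle logic boils down to something like this (an illustrative toy, not the actual Spoke implementation):

// Each particle starts with a random negative age; once its age exceeds its
// (randomized) lifetime it is recycled and starts over.
function spawnParticle(lifetime, lifetimeRandomness, ageRandomness) {
  return {
    age: -Math.random() * ageRandomness,
    lifetime: lifetime + Math.random() * lifetimeRandomness
  };
}

function updateParticle(particle, dt, opts) {
  particle.age += dt;
  if (particle.age > particle.lifetime) {
    Object.assign(particle, spawnParticle(opts.lifetime, opts.lifetimeRandomness, opts.ageRandomness));
  }
}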

Every particle also has velocity properties along the x, y and z axes. By adjusting the velocity in three dimensions, users can have better control over the particles' behaviour, for example to simulate simple phenomena like gravity or wind.
With angular velocity, you can also control the rotation of the particle system to get a more natural and dynamic result.

The velocity, color and size properties all have the option to use different interpolation functions between their start, middle and end stages.

The particle system is officially out on Spoke, so go try it out and let us know what you think!

Avatar Display Emojis

My other project was about the avatar emoji display screen in Hubs. I did the design of the emoji images, the UI/UX design, and the actual implementation of this feature. It was actually a straightforward project: I needed to figure out the style of the emoji display on the chest screen, do some graphic design at the interface level, make decisions on the interaction flow, and implement it in Hubs.

Evolution of the display emoji design.

We ultimately decided to have the smooth edge emoji with some bloom effect.
Final version of the display emoji design

Icon design for the menu user interface

Interaction design using Hubs display styles

Demo:

When you enter pause mode on Hubs, the emoji box will show up, replacing the chat box, and you can change your avatar’s screen to one of the emojis offered.

I want to say thank you to Hubs for having me this summer. I learned a lot from all the talented people in Hubs, especially Robert, Jim, Brian and Greg who helped me a lot to overcome the difficulties I came across. The encouragement and support from the team is the best thing I got this summer. Miss you guys already!

Chris H-CMy StarCon 2019 Talk: Collecting Data Responsibly and at Scale


Back in January I was privileged to speak at StarCon 2019 at the University of Waterloo about responsible data collection. It was a bitterly-cold weekend with beautiful sun dogs ringing the morning sun. I spent it inside talking about good ways to collect data and how Mozilla serves as a concrete example. It’s 15 minutes short and aimed at a general audience. I hope you like it.

I encourage you to also sample some of the other talks. Two I remember fondly are Aaron Levin’s “Conjure ye File System, transmorgifier” about video games that look like file systems and Cory Dominguez’s lovely analysis of Moby Dick editions in “or, the whale“. Since I missed a whole day, I now get to look forward to fondly discovering new ones from the full list.

:chutten

Mike HoyeTen More Simple Rules


The Public Library of Science‘s Ten Simple Rules series can be fun reading; they’re introductory papers intended to provide novices or non-domain-experts with a set of quick, evidence-based guidelines for dealing with common problems in and around various fields, and it’s become a pretty popular, accessible format as far as scientific publication goes.

Topic-wise, they’re all over the place: protecting research integrity, creating a data-management plan and taking advantage of Github are right there next to developing good reading habits, organizing an unconference or drawing a scientific comic, and lots of them are kind of great.

I recently had the good fortune to be co-author on one of them that’s right in my wheelhouse and has recently been accepted for publication: Ten Simple Rules for Helping Newcomers Become Contributors to Open Projects. They are, as promised, simple:

  1. Be welcoming.
  2. Help potential contributors evaluate if the project is a good fit.
  3. Make governance explicit.
  4. Keep knowledge up to date and findable.
  5. Have and enforce a code of conduct.
  6. Develop forms of legitimate peripheral participation.
  7. Make it easy for newcomers to get started.
  8. Use opportunities for in-person interaction – with care.
  9. Acknowledge all contributions, and
  10. Follow up on both success and failure.

You should read the whole thing, of course; what we’re proposing are evidence-based practices, and the details matter, but the citations are all there. It’s been a privilege to have been a small part of it, and to have done the work that’s put me in the position to contribute.

Support.Mozilla.OrgCommunity Management Update

Hello SUMO community,

I have a couple announcements for today. I’d like you all to welcome our two new community managers.

First off, Kiki has officially joined the SUMO team as a community manager. Kiki has been filling in with Konstantina and Ruben on our social support activities. We had an opportunity to bring her onto the SUMO team full time, starting last week. She will be transitioning out of her responsibilities on the Community Development Team and will continue her work on the social program as well as managing SUMO days going forward.

In addition, we have hired a new SUMO community manager to join the team. Please welcome Giulia Guizzardi to the SUMO team.

You can find her on the forums as gguizzardi. Below is a short introduction:

Hey everyone, my name is Giulia Guizzardi, and I will be working as a Support Community Manager for Mozilla. 

I am currently based in Berlin, but I was born and raised in the north-east of Italy. I studied Digital Communication in Italy and Finland, and worked for half a year in Poland.

My greatest passion is music, I love participating in festivals and concerts along with collecting records and listening to new releases all day long. Other than that, I am often online, playing video games (Firewatch at the moment) or scrolling Youtube/Reddit.

I am really excited for this opportunity and happy to work alongside the community!

Now that we have two new community managers we will work with Konstantina and Ruben to transition their work to Kiki and Giulia. We’re also kicking off work to create a community strategy which we will be seeking feedback for soon. In the meantime, please help me welcome Kiki and Giulia to the team.

Henrik SkupinExample in how to investigate CPU spikes in Firefox

Note: This article is based on Firefox builds as available for download at least until August 7th, 2019. In case you want to go through those steps on your own, I cannot guarantee that it will lead to the same effects if newer builds are used.

So a couple of months ago, when I was looking for new, interesting and challenging sport events I could participate in to reach my own limits, I was made aware of the Mega Hike event. It sounded like fun, and it was also good to see that one particular event has been organized annually in my own city since 2018. As such I signed up together with a friend, and we had an amazing day. But hey… that's not what I actually want to talk about in this post!

The thing I was actually more interested in while reading content on this web site was the high CPU load of Firefox while the page was open in my browser. Once the tab got closed the CPU load dropped back to normal numbers, and it went up again once I reopened the tab. Given that I didn't have that much time to further investigate this behavior, I simply logged bug 1530071 to make people aware of the problem. Sadly the bug got lost in my incoming queue of daily bug mail, and I missed responding, which meant no further progress was made.

Yesterday I stumbled over the website again, and by chance was made aware of the problem again. Nothing seemed to have changed, and Firefox Nightly (70.0a1) was still using around 70% of CPU even with the tab's content not visible, i.e. moved to a background tab. Given that this is a serious performance and power related issue, I thought that an investigation might be pretty helpful for developers.

In the following sections I want to lay out the steps I did to nail down this problem.

Energy consumption of Firefox processes

While at first glance the Activity Monitor of macOS is helpful to get an impression of the memory usage and CPU load of Firefox, it's a bit hard to see how much each and every open tab is actually using.

Activity monitor showing cpu load of Firefox processes

You could try to match the listed process ids with a specific tab in the browser by hovering over the appropriate tab title, but the displayed tooltip only contains the process id in Firefox Nightly builds, not in beta or final releases. Further, multiple tabs will currently share the same process, and as such the value displayed in the Activity Monitor is shared across them.

To further drill down the CPU load to a specific tab, Firefox has the about:performance page, which can be opened by typing that address into the location bar. It's basically an internal task manager to inspect the energy impact and memory consumption of each tab.

Task Manager of Firefox

Even more helpful is the option to expand the view for sub frames, which are usually used to embed external content. In the case of the Megamarsch page there are three of those, and one actually stands out, consuming nearly all the energy used by the tab. As such there is a good chance that this particular iframe from YouTube, which embeds a video, is the problem.

To verify that, the integrated Firefox Developer Tools can be used. Especially the Page Inspector will help us, as it allows searching for specific nodes, CSS classes, and so on, and then interacting with them. To open it, check the Tools > Web Developer sub menu inside the main menu.

Given that the URI of the iframe is known, let's search for it in the Inspector:

Page Inspector

When running the search the iframe will not be the first result found, so continue until the expected iframe is highlighted in the Inspector pane. Now that we have found the embedded content, let's delete the node by opening the context menu and clicking Delete Node. If it was the problem, the CPU load should be back to normal.

Sadly, and as you will notice when doing it yourself, that's not the case, which also means something else on that page is causing it. The easiest way to figure out which node really causes the spike is to simply delete more nodes on that page. Start at a higher level and delete the header, footer, or any sidebars first. While doing that, always keep an eye on the Activity Monitor and check whether the CPU load has dropped. Once that is the case, undo the last step so that the causing node gets inserted again. Then remove all sibling nodes, so only the causing node remains. Now drill down even further until no more child nodes remain.
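
If clicking Delete Node for many elements gets tedious, a rough bisection can also be done from the Web Console (just reload the page to undo everything). This is only a sketch, not an exact recipe:

// Hide half of the top-level nodes and watch the CPU load; then recurse
// into whichever half still causes the spike.
const children = [...document.body.children];
children.slice(0, Math.ceil(children.length / 2))
        .forEach(el => { el.style.display = 'none'; });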

As a piece of advice, don't forget to change the update frequency of the Activity Monitor so that values are updated each second, and revert it after you are done.

In our case the following node, which is related to the cart icon, remains:

Page Inspector with affected node

So some kind of loading indicator seems to trigger Firefox to repeatedly repaint a specific area of the screen. To verify that, remove the extra CSS class definitions. Once the icon-web-loading-spinner class has been removed everything is fine. Note that when hovering over the node while the class is still set, a spinning rectangle, which is a placeholder for the real element, can even be seen.

Checking the remaining stylesheets which get included, the one which remains (after removing all others without a notable effect) is from assets.jimstatic.com. And for the particular CSS class it holds the following animation:

@keyframes spinit {
  0% { -webkit-transform: rotate(0deg); transform: rotate(0deg); }
  to { -webkit-transform: rotate(360deg); transform: rotate(360deg); }
}

More interesting is that this specific class defines opacity: 0, which basically means that the node shouldn’t be visible at all, and no re-painting should happen until the node has been made visible.
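
You can double-check both observations from the Web Console; with the class name seen above, something like the following should print an opacity of "0" together with the spinit animation name:

// The spinner is fully transparent yet still animated, which is what keeps
// triggering work even though nothing visible changes.
const spinner = document.querySelector('.icon-web-loading-spinner');
const style = getComputedStyle(spinner);
console.log(style.opacity, style.animationName);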

With this kind of information found, I updated the before-mentioned bug with all the newly found details and handed it over to the developers. Everyone who wants to follow the progress of fixing it can subscribe to the CC list and will be automatically notified by Bugzilla about updates.

If you found this post useful please let me know, and I will write more of them in the future.

Eric ShepherdThe Tall-Tale Clock: The myth of task estimates

Picture of an old clock

One of my most dreaded tasks is that of estimating how long tasks will take to complete while doing sprint planning. I have never been good at this, and it has always felt like time stolen away from the pool of hours available to do what I can’t help thinking of as “real work.”

While I’m quite a bit better at the time estimating process than I was a decade ago—and perhaps infinitely better at it than I was 20 years ago—I still find that I, like a lot of the creative and technical professionals I know, dread the process of poring over bug and task lists, project planning documents, and the like in order to estimate how long things will take to do.

This is a particularly frustrating process when dealing with tasks that may be nested, have multiple—often not easily detected ahead of time—dependencies, and may involve working with technologies that aren’t actually as ready for prime time as expected. Add to that the fact that your days are filled with distractions, interruptions, and other tasks you need to deal with, and predicting how long a given project will take can start to feel like a guessing game.

The problem isn’t just one of coming up with the estimates. There’s a more fundamental problem of how to measure time. Do you estimate projects in terms of the number of work hours you’ll invest in them? The number of days or weeks you’ll spend on each task? Or some other method of measuring duration?

Hypothetical ideal days

On the MDN team, we have begun over the past year to use a time unit we call the hypothetical ideal day or simply ideal day. This is a theoretical time unit in which you are able to work, uninterrupted, on a project for an entire 8-hour work day. A given task may take any appropriate number of ideal days to complete, depending on its size and complexity. Some tasks may take less than a single ideal day, or may otherwise require a fractional number of ideal days (like 0.5 ideal days, or 1.25 ideal days).

There are a couple of additional guidelines we try to follow: we generally round to a quarter of a day, and we almost always keep our user stories’ estimates at five ideal days or less, with two or three being preferable. The larger a task is, the more likely it is that it’s really a group of related tasks.

There obviously isn’t actually any such thing as an ideal, uninterrupted day (hence the words “hypothetical” and “theoretical” a couple of paragraphs ago). Even on one’s best day, you have to stop to eat, to stretch, and to do any number of other things that you have to do during a day of work. But that’s the point of the ideal day unit: by building right into the unit the understanding that you’re not explicitly accounting for these interruptions in the time value, you can reinforce the idea that schedules are fragile, and that every time a colleague or your manager (or anyone else) causes you to be distracted from your planned tasks, the schedule will slip.

Ideal days in sprint planning

The goal, then, during sprint planning is to do your best to leave room for those distractions when mapping ideal days to the actual calendar. Our sprints on the MDN team are 12 business days long. When selecting tasks to attempt to accomplish during a sprint, we start by having each team member count up how many of those 12 days they will be available for work. This involves subtracting from that 12-day sprint any PTO days, company or local holidays, substantial meetings, and so forth.

When calculating my available days, I like to subtract a rough number of partial days to account for any appointments that I know I’ll have. We then typically subtract about 20% (or a day or two per sprint, although the actual amount varies from person to person based on how often they tend to get distracted and how quickly they rebound), to allow for distractions and sidetracking, and to cover typical administrative needs. The result is a rough estimate of the number of ideal days we’re available to work during the sprint.

With that in hand, each member of the team can select a group of tasks that can probably be completed during the number of ideal days we estimate they’ll have available during the sprint. But we know going in that these estimates are in terms of ideal days, not actual business days, and that if anything unanticipated happens, the mapping of ideal days to actual days we did won’t match up anymore, causing the work to take longer than anticipated. This understanding is fundamental to how the system works; by going into each sprint knowing that our mapping of ideal days to actual days is subject to external influences beyond our control, we avoid many of the anxieties that come from having rigid or rigid-feeling schedules.

For your consideration

For example, let’s consider a standard 12-business-day MDN sprint which spans my birthday as well as Martin Luther King, Jr. Day, which is a US Federal holiday. During those 12 days, I also have two doctor appointments scheduled which will have me out of the office for roughly half a day total, and I have about a day’s worth of meetings on my schedule as of sprint planning time. Doing the math, then, we find that I have 8.5 days available to work.
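
Spelled out as a quick calculation (treating the birthday as a day off, which is an assumption on my part):

// 12 business days minus holidays, appointments and meetings.
const sprintDays = 12;
const daysOff = 2;          // MLK Jr. Day + birthday
const appointments = 0.5;   // two doctor appointments, roughly half a day
const meetings = 1;         // meetings already on the calendar
console.log(sprintDays - daysOff - appointments - meetings); // 8.5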

Knowing this, I then review the various task lists and find a total of around 8 to 8.5 days worth of work to do. Perhaps a little less if I think the odds are good that more time will be occupied with other things than the calendar suggests. For example, if my daughter is sick, there’s a decent chance I will be too in a few days, so I might take on just a little less work for the sprint.

As the sprint begins, then, I have an estimated 8 ideal days worth of work to do during the 12-day sprint. Because of the “ideal day” system, everyone on the team knows that if there are any additional interruptions—even short ones—the odds of completing everything on the list are reduced. As such, this system not only helps make it easier to estimate how long tasks will take, but also helps to reinforce with colleagues that we need to stay focused as much as possible, in order to finish everything on time.

If I don’t finish everything on the sprint plan by the end of the sprint, we will discuss it briefly during our end-of-sprint review to see if there’s any adjustment we need to make in future planning sessions, but it’s done with the understanding that life happens, and that sometimes delays just can’t be anticipated or avoided.

On the other hand, if I happen to finish before the sprint is over, I have time to get extra work done, so I go back to the task lists, or to my list of things I want to get done that are not on the priority list right now, and work on those things through the end of the sprint. That way, I’m able to continue to be productive regardless of how accurate my time estimates are.

I can work with this

In general, I really like this way of estimating task schedules. It does a much better job of allowing for the way I work than any other system I’ve been asked to work within. It’s not perfect, and the overhead is a little higher than I’d like, but by and large it does a pretty good job. That’s not to say we won’t try another, possibly better, way of handling the planning process in the future.

But for now, my work days are as ideal as can be.

Bryce Van DykBuilding Geckoview/Firefox for Android under Windows Subsystems for Linux (wsl)

These are notes on my recent attempts to get Android builds of Firefox working under WSL 1. After tinkering with this I ultimately decided to do my Android builds in a full blown VM running Linux, but figure these notes may serve useful to myself or others.

This was done on Windows 10 using a Debian 9 WSL machine. The steps below assume an already cloned copy of mozilla-unified or mozilla-central.

Create a .mozconfig, ensuring that LF line endings are used; CRLF seems to break parsing of the config under WSL:

# Build GeckoView/Firefox for Android:
ac_add_options --enable-application=mobile/android

# Targeting the following architecture.
# For regular phones, no --target is needed.
# For x86 emulators (and x86 devices, which are uncommon):
ac_add_options --target=i686
# For newer phones.
# ac_add_options --target=aarch64

# Write build artifacts to:
mk_add_options MOZ_OBJDIR=@TOPSRCDIR@/../mozilla-builds/objdir-droid-i686-opt

Bootstrap via ./mach bootstrap. After the bootstrap I found I still needed to install yasm in my package manager.

Now you should be ready to build with ./mach build. However, note that the object directory being built into needs to live on the WSL drive, i.e. mk_add_options MOZ_OBJDIR= should point to somewhere like ~/objdir and not /mnt/c/objdir.

This is because the build system expects files to be handled in a case sensitive manner and will create files like String.h and string.h in the same directory. Windows doesn't do this outside of WSL by default, and that causes issues with the build. I've got a larger discussion of the nuts and bolts of this, as well as a hacky workaround, below if you're interested in the details.

At this stage you should have an Android build. It can be packaged via ./mach package and then moved to the Windows mount – or if you have an Android emulator running under Windows you can simply use ./mach install. The latter required me to run ~/.mozbuild/android-sdk-linux/platform-tools/adb kill-server and then ~/.mozbuild/android-sdk-linux/platform-tools/adb start-server after enabling debugging on my emulated phone to get my WSL adb to connect.

For other commands, your mileage may vary. For example ./mach crashtest <crashtest> fails, seemingly due to being unable to call su as expected under WSL.


Case sensitivity of files under Windows

When attempting to build Firefox for Android into an objdir on my Windows C drive I ended up getting a number of errors due to files including String.h. This was a little confusing, as I recognize string.h, but the uppercase-S version not so much.

The cause is that the build system contains a list of headers and that there are several cases of headers with the same name only differing by uppercase initial letter, including the above string ones. In fact, there are 3 cases in that file: String.h, Strings.h, and Memory.h, and in my builds they can be safely removed to allow the build to progress.

I initially thought this happened because the NTFS file system doesn't support case sensitive file names, whilst whatever file system was being used by WSL did. However, the reality is that NTFS does support case sensitivity, and Windows itself is the one imposing case insensitivity.

Indeed, Windows is now exposing functionality to set case sensitivity on directories. Under WSL all directories are created as case sensitive by default, but fsutil can be used to set the flag on directories outside WSL.

In fact, using fsutil to flag dirs as case sensitive allows for working around the issue with building to an objdir outside of WSL. For example, I was able to run fsutil.exe file setCaseSensitiveInfo ./dist/system_wrappers in the root of my objdir and then perform my build from WSL to outside WSL without issue. This isn't particularly ergonomic for normal use though, because Firefox's build system will destroy and recreate that dir, which drops the flag. So I'd either need to manually restore it each time or modify the build system.

The case sensitivity handling of files on Windows is interesting in a software archeology sense, and I plan to write more on it, but want to avoid this post (further) going off on a tangent around Windows architecture.

Daniel Stenbergmore tiny curl

Without much fanfare or fireworks we put together and shipped a fresh new version of tiny-curl. We call it version 0.10 and it is based on the 7.65.3 curl tree.

tiny-curl is a patch set to build curl as tiny as possible while still being able to perform HTTPS GET requests and maintaining the libcurl API. Additionally, tiny-curl is ported to FreeRTOS.

Changes in 0.10

  • The largest and primary change is that this version is based on curl 7.65.3, which brings more features and in particular more bug fixes compared to tiny-curl 0.9.
  • Parts of the patches used for tiny-curl 0.9 were subsequently upstreamed and merged into curl proper, making the tiny-curl 0.10 patch much smaller.

Download

As before, tiny-curl is an effort that is on a separate track from the main curl. Download tiny-curl from wolfssl.com!

Will Kahn-GreeneSocorro Engineering: July 2019 happenings and putting it on hold

Summary

Socorro Engineering team covers several projects:

This blog post summarizes our activities in July.

Highlights of July

  • Socorro: Added modules_in_stack field to super search allowing people to search the set of module/debugid for functions that are in the stack of the crashing thread.

    This lets us reprocess crash reports that have modules for which symbols were just uploaded.

  • Socorro: Added PHC related fields, dom_fission_enabled, and bug_1541161 to super search.

  • Socorro: Fixed some things further streamlining the local dev environment.

  • Socorro: Reformatted Python code with Black.

  • Socorro: Extracted supersearch and fetch-data commands as a separate Python library: https://github.com/willkg/crashstats-tools

  • Tecken: Upgraded to Python 3.7 and adjusted storage bucket code to work better for multiple storage providers.

  • Tecken: Added GCS emulator for local development environment.

  • PollBot: Updated to use Buildhub2.

Hiatus and project changes

In April, we picked up Tecken, Buildhub, Buildhub2, and PollBot in addition to working on Socorro. Since then, we've:

  • audited Tecken, Buildhub, Buildhub2, and PollBot
  • updated all projects, updated dependencies, and performed other necessary maintenance
  • documented deploy procedures and basic runbooks
  • deprecated Buildhub in favor of Buildhub2 and updated projects to use Buildhub2

Buildhub is decommissioned now and is being dismantled.

We're passing Buildhub2 and PollBot off to another team. They'll take ownership of those projects going forward.

Socorro and Tecken are switching to maintenance mode as of last week. All Socorro/Tecken related projects are on hold. We'll continue to maintain the two sites doing "keep the lights on" type things:

  • granting access to memory dumps
  • adding new products
  • adding fields to super search
  • making changes to signature generation and updating siggen library
  • responding to outages
  • fixing security issues

All other non-urgent work will be pushed off.

As of August 1st, we've switched to working on Mozilla Location Services. We'll be auditing that project, getting it back into a healthy state, and bringing it in line with current standards and practices.

Given that, this is the last Socorro Engineering status post for a while.


Tantek ÇelikReflecting On IndieWeb Summit: A Start

Table of Firefox stickers, pronoun pins, IndieWebCamp & microformats stickers.

Over a month ago we organized the ninth annual IndieWeb Summit in Portland, Oregon, June 29-30. As frequently happens to organizers, the combination of follow-ups, subsequent holiday, and other events did not allow for much time to blog afterwards. On the other hand, it did allow for at least some reflection and appreciation.

Day 1 Badges, Pins, Shirts, And Breakfast!

Lillian at the table of IndieWebCamp t-shirts.

Saturday morning June 29th went relatively smoothly. We had everything set up in time. I finished preparing my “state of” outline. Everyone signed in when they arrived, got a badge, chose their color of lanyard (more on that later), pronoun pin(s), and an array of decorative stickers to customize their badge.

Breakfast buffet containers of scrambled eggs, potatoes, vegan scramble, etc.

For the first time we had an anonymous donor who chipped in enough in addition to the minimal $10 registration fee for us to afford IndieWebCamp t-shirts in a couple of shapes and a variety of sizes. We had a warm breakfast (vegetarian and vegan) ready to go for participants.

Captions, Codes of Conduct, Safety, And Photo Policy!

Another first for any IndieWebCamp, we arranged a captioner who live-captioned the first two hours of Summit keynotes, introductions, and demos.

After welcoming everyone and introducing co-organizers Tiara and Aaron, I showed & briefly summarized our codes of conduct for the Summit:

In particular I emphasized the recent addition from XOXO 2018’s Code of Conduct regarding safety vs. comfort, which is worth its own blog post.

Tiara, photo policy lanyards of different colors, and policy summary. Another Summit first, also inspired by XOXO (and other conferences like Open Source Bridge): color-coded lanyards for our photo policy. That was a natural lead-in for the heads-up about session live-streaming and where to sit accordingly (based on personal preference). Lastly, pronoun pins, and a huge thanks to Aaron Parecki for arranging the logistics of all those materials!

I told people about the online tools that would help their Summit experience (chat, the wiki, Etherpad), summarized the day 1 schedule, and thanked the sponsors.

Video, Outline, And Always Aspiring

Here’s the 8 minute video of the Welcome. I think it went ok, especially with so many firsts for this Summit! In the future I’d like to: reduce it to no more than 5 minutes (one or two rounds of practice & edit should help), and consider what else could or should be included (while staying under 5 minutes). That being said, I feel pretty good about our continuous improvement with organizing and welcoming to IndieWebCamps. As we’ve learned from other inclusive conferences, I encourage all conference organizers to explicitly cover similar aspects (excerpted from the online outline I spoke from):

  • Code(s) of conduct (with multiple organizers and contacts)
  • Photo policy (with clear indicators to self-select)
  • Pronoun pins (or stickers)

Consider these a minimum baseline, a place to build from, more than goals. Ideally we should aspire to provide a safe and inclusive experience for an increasingly diverse community. Two more ways conference organizers can do so are by recognizing what the conference has done better this year, and by choosing keynote speakers to provide diverse perspectives. More on that with State of the IndieWeb, and the IndieWeb Summit 2019 invited keynote speakers.

Photos 1, 2, & 4 by Aaron Parecki

This Week In RustThis Week in Rust 298

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's crate is broot, a program to show the gist of a directory tree.

Thanks to Willi Kappler for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

249 pull requests were merged in the last week

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs

No new RFCs were proposed this week.

Tracking Issues & PRs

New RFCs

Upcoming Events

Asia Pacific
Europe
North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

If you want to block threads, get your own threads.

kornel on rust-users

Thanks to Tom Phinney for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nasa42, llogiq, and Flavsditz.

Discuss on r/rust.

Eitan IsaacsonRevamping Firefox’s Reader Mode this Summer

This is cross-posted from a Medium article by Akshitha Shetty, a Summer of Code student I have been mentoring. It’s been a pleasure and I wish her luck in her next endeavor!

For me, getting all set to read a book would mean spending hours hopping between stores to find the right lighting and mood to get started. But with Firefox’s Reader Mode it’s now much more convenient to get reading on the go. And this summer, I have been fortunate to shift roles from a user to a developer for the Reader Mode. As I write this blog, I have completed two months as a Google Summer of Code student developer with Mozilla. It has been a really enriching experience and thus I would like to share some glimpses of the project and my journey so far.

Motivation behind choosing this organization and project

I began as an open-source contributor to Mozilla early this year. What really impressed me was how open and welcoming Mozillians were. Open-source contribution can be really intimidating at first. But in my case, the kind of documentation and direction that Mozilla provided helped me steer in the right direction really swiftly. Above all, it’s the underlying principle of the organization — “people first” that truly resonated with me. On going through the project idea list, the “Firefox Reader Mode Revamp” was of great interest to me. It was one of the projects where I would be directly enhancing the user-experience for Firefox users and also learning a lot more about user-experience and accessibility in the process.

Redesign of the Reader Mode in the making

The new design of the Reader Mode has the following features:

  1. The vertical toolbar is being replaced by a horizontal toolbar so that it is in sync with the other toolbars present in Firefox.
  2. The toolbar is now being designed so that it complies with the Photon Design System (the latest design guidelines proposed by the organization).
  3. The accessibility of the Reader Mode is being improved by making it keyboard friendly.
Mock-up for Reader Mode Redesign

Thanks to Abraham Wallin for designing the new UI for the Reader mode.

Get Set Code

Once the design was ready, I began with the coding of the UI. I thoroughly enjoyed the process and learnt a lot from the challenges I faced along the way. One of the challenges I faced during this phase was to make the toolbar adjust its width as per the content width of the main page. This required me to refactor certain portions of the existing code base as well as make sure the newly coded toolbar follows the same approach.

To Sum it all up

All in all, it has been a really exciting process. I would like to thank my mentor, Eitan Isaacson, for putting in the time and effort to mentor this project. I would also like to thank Gijs Kruitbosch and Yura Zenevich for reviewing my code at various points in time.

I hope this gets you excited to see the Reader Mode in its all-new look! Stay tuned for my next blog post, where I will be showing the revamped Reader Mode in action.

Daniel StenbergFirst HTTP/3 with curl

In the afternoon of August 5 2019, I successfully made curl request a document over HTTP/3, retrieve it and then exit cleanly again.

(It got a 404 response code, two HTTP headers and 10 bytes of content so the actual response was certainly less thrilling to me than the fact that it actually delivered that response over HTTP version 3 over QUIC.)

The components necessary for this to work, if you want to play along at home, are reasonably up-to-date git clones of curl itself and the HTTP/3 library called quiche (and of course quiche’s dependencies too, like boringssl), then apply pull-request 4193 (build everything accordingly) and run a command line like:

curl --http3-direct https://quic.tech:8443

The host name used here (“quic.tech”) is a server run by friends at Cloudflare and it is there for testing and interop purposes and at the time of this test it ran QUIC draft-22 and HTTP/3.

The command line option --http3-direct tells curl to attempt HTTP/3 immediately, which includes using QUIC instead of TCP to the host name and port number – by default you should of course expect an HTTPS:// URL to use TCP + TLS.

The official way to bootstrap into HTTP/3 from HTTP/1 or HTTP/2 is via the server announcing its ability to speak HTTP/3 by returning an Alt-Svc: header saying so. curl supports this method as well; it just needs to be explicitly enabled at build time since that too is still an experimental feature.

To use alt-svc instead, you do it like this:

curl --alt-svc altcache https://quic.tech:8443

The alt-svc method won’t “take” on the first shot though, since it needs to first connect over HTTP/2 (or HTTP/1) to get the alt-svc header and store that information in the “altcache” file. If you then invoke it again and use the same alt-svc cache, curl will know to use HTTP/3!

Early days

Be aware that I just made this tiny GET request work. The code is not cleaned up, there are gaps in functionality, we’re missing error checks, we don’t have tests and chances are the internals will change quite a lot going forward as we polish this.

You’re of course still more than welcome to join in, play with it, report bugs or submit pull requests! If you help out, we can make curl’s HTTP/3 support better and get there sooner than otherwise.

QUIC and TLS backends

curl currently supports two different QUIC/HTTP3 backends, ngtcp2 and quiche. Only the latter currently works this well, though. I hope we can get up to speed with the ngtcp2 one too soon.

quiche uses and requires boringssl to be used while ngtcp2 is TLS library independent and will allow us to support QUIC and HTTP/3 with more TLS libraries going forward. Unfortunately it also makes it more complicated to use…

The official OpenSSL doesn’t offer APIs for QUIC. QUIC uses TLS 1.3, but in a way it was never used before over TCP, so basically all TLS libraries have had to add APIs and make some adjustments to work for QUIC. The ngtcp2 team offers a patched version of OpenSSL that provides such an API so that OpenSSL can be used.

Draft what?

Neither the QUIC nor the HTTP/3 protocols are entirely done and ready yet. We’re using the protocols as they are defined in the 22nd version of the protocol documents. They will probably change a little more before they get carved in stone and become the final RFCs they are on their way to becoming.

The libcurl API so far

The command line options mentioned above of course have their corresponding options for libcurl-using apps as well.

Set the right bit with CURLOPT_H3 to connect directly with QUIC, and control how to do alt-svc with libcurl using CURLOPT_ALTSVC and CURLOPT_ALTSVC_CTRL.

All of these are still marked EXPERIMENTAL, so they might change somewhat before they become stabilized.

Update

Starting on August 8, the option is just --http3 and you ask libcurl to use HTTP/3 directly with CURLOPT_HTTP_VERSION.
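For the libcurl side, here is a minimal sketch of what a direct HTTP/3 transfer could look like; it assumes a libcurl built with the experimental HTTP/3 support described above, and uses the CURLOPT_HTTP_VERSION route mentioned in the update.

#include <stdio.h>
#include <curl/curl.h>

/* Minimal sketch: requires a libcurl built with the experimental
 * HTTP/3 support (for example the quiche backend) as described above. */
int main(void)
{
  CURLcode res;
  CURL *curl;

  curl_global_init(CURL_GLOBAL_DEFAULT);
  curl = curl_easy_init();
  if(!curl)
    return 1;

  /* the same test server as in the command lines above */
  curl_easy_setopt(curl, CURLOPT_URL, "https://quic.tech:8443");

  /* ask libcurl to attempt HTTP/3 (and thus QUIC) directly */
  curl_easy_setopt(curl, CURLOPT_HTTP_VERSION, (long)CURL_HTTP_VERSION_3);

  res = curl_easy_perform(curl);
  if(res != CURLE_OK)
    fprintf(stderr, "transfer failed: %s\n", curl_easy_strerror(res));

  curl_easy_cleanup(curl);
  curl_global_cleanup();
  return (int)res;
}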

Mozilla Security BlogWeb Authentication in Firefox for Android

Firefox for Android (Fennec) now supports the Web Authentication API as of version 68. WebAuthn blends public-key cryptography into web application logins, and is our best technical response to credential phishing. Applications leveraging WebAuthn gain new  second factor and “passwordless” biometric authentication capabilities. Now, Firefox for Android matches our support for Passwordless Logins using Windows Hello. As a result, even while mobile you can still obtain the highest level of anti-phishing account security.

Firefox for Android uses your device’s native capabilities: On certain devices, you can use built-in biometrics scanners for authentication. You can also use security keys that support Bluetooth, NFC, or can be plugged into the phone’s USB port.

The attached video shows the usage of Web Authentication with a built-in fingerprint scanner: The demo website enrolls a new security key in the account using the fingerprint, and then subsequently logs in using that fingerprint (and without requiring a password).
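For developers, the page-side of this flow is the standard WebAuthn call on navigator.credentials. The following is a heavily simplified sketch of an enrollment (registration) request; the relying-party name, user info and challenge are placeholder values that a real site would receive from its server.

// Simplified sketch of WebAuthn enrollment in page script.
// The rp/user/challenge values are placeholders; a real site gets the
// challenge and user info from its server.
const publicKey = {
  challenge: crypto.getRandomValues(new Uint8Array(32)), // server-provided in practice
  rp: { name: "Example Site" },
  user: {
    id: new TextEncoder().encode("example-user-id"),
    name: "user@example.com",
    displayName: "Example User"
  },
  pubKeyCredParams: [{ type: "public-key", alg: -7 }] // ES256
};

navigator.credentials.create({ publicKey }).then(credential => {
  // Send credential.rawId and credential.response to the server for verification.
  console.log("New credential created:", credential.id);
});

Logging in later uses the matching navigator.credentials.get() call with an allowCredentials list pointing at the previously registered credential.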

Adoption of Web Authentication by major websites is underway: Google, Microsoft, and Dropbox all support WebAuthn via their respective Account Security Settings’ “2-Step Verification” menu.

A few notes

For technical reasons, Firefox for Android does not support the older, backwards-compatible FIDO U2F Javascript API, which we enabled on Desktop earlier in 2019. For details as to why, see bug 1550625.

Currently Firefox Preview for Android does not support Web Authentication. As Preview matures, Web Authentication will be joining its feature set.

 

The post Web Authentication in Firefox for Android appeared first on Mozilla Security Blog.

Mozilla Addons BlogExtensions in Firefox 69

In our last post, for Firefox 68, we introduced a great number of new features. In contrast, Firefox 69 only has a few new additions. Still, we are proud to present this round of changes to extensions in Firefox.

Better Topsites

The topSites API has received a few additions to better allow developers to retrieve the top sites as Firefox knows them. There are no changes to the defaults, but we’ve added a few options for better querying. The browser.topSites.get() function has two additional options that can be specified to control what sites are returned:

  • includePinned can be set to true to include sites that the user has pinned on the Firefox new tab.
  • includeSearchShortcuts can be set to true to include search shortcuts

Passing both options allows you to mimic the behavior you will see on the new tab page, where both pinned results and search shortcuts are available.
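For example, an extension that declares the topSites permission could request that new-tab-like list along these lines (a small sketch; the logging is just for illustration):

// Assumes the "topSites" permission is declared in manifest.json.
async function getNewTabLikeSites() {
  const sites = await browser.topSites.get({
    includePinned: true,          // also return sites the user pinned on the new tab
    includeSearchShortcuts: true  // also return search shortcut tiles
  });
  for (const site of sites) {
    console.log(site.title, site.url);
  }
}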

User Scripts

This is technically an addition to Firefox 68, but since we didn’t mention it in the last blog post it gets an honorable mention here. In March, we announced that user scripts were coming, and now they are here. Starting with Firefox 68,  you will be able to use the userScripts API without needing to set any preferences in about:config.

The great advantage of the userScripts API is that it can run scripts with reduced privileges. Your extension can provide a mechanism to run user-provided scripts with a custom API, avoiding the need to use eval in content scripts. This makes it easier to adhere to the security and privacy standards of our add-on policies. Please see the original post on this feature for an example of how to use the API while we update the documentation; a rough sketch also follows below.
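Until the documentation catches up, the general shape of a userScripts.register() call looks roughly like this. Treat it as a sketch based on the original announcement rather than a definitive reference: the match pattern and metadata below are hypothetical, and the extension also needs a user_scripts key in its manifest.

// Rough sketch only; see the original userScripts announcement for the
// authoritative example. Requires a "user_scripts" entry in manifest.json.
async function registerUserScript(code) {
  return browser.userScripts.register({
    js: [{ code }],                          // the user-provided script
    matches: ["*://*.example.com/*"],        // hypothetical match pattern
    scriptMetadata: { name: "demo-script" }, // made available to the API script
    runAt: "document_idle"
  });
}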

Miscellaneous

  • The downloads API now correctly supports byExtensionId and byExtensionName for extension initiated downloads.
  • Clearing site permissions no longer re-prompts the user to accept storage permissions after a restart.
  • Using alert() in a background page will no longer block the extension from running.
  • The proxy.onRequest API now also supports FTP and WebSocket requests.

A round of applause goes to our volunteer contributors Martin Matous, Michael Krasnov, Myeongjun Go, Joe Jalbert, as well as everyone else who has made these additions in Firefox 69 possible.

The post Extensions in Firefox 69 appeared first on Mozilla Add-ons Blog.

Cameron KaiserVintage Computer Festival West 2019 opens in one hour

The machines are getting up and running. If you're a nerd, or you aspire to be one, and you're in the Bay Area for the next day or two come by the Vintage Computer Festival West at the Computer History Museum in Mountain View, CA (across from the Google Panopticon and that weird sail structure they're building). Not a great deal of Mac stuff this year, but there is some Power and PowerPC, including a Daystar Millennium (in a nice black case) accompanied by a couple bits of POWER hardware, including my very favourite 43P, and of course my exhibit, which in addition to a NeXTSTEP SAIC Galaxy 1100 and a couple SPARCs features a PowerPC ThinkPad 860 with its multimedia software operational. Plus come by and see a full exhibit of Apple Newtons, a couple Pippins (finally cracked!), lots of homebrew systems and even a fully functional Xerox Star! There's also lots of cool gear to buy in the consignment area if you don't have enough crap in the house. We're here today and tomorrow. See you then!

Mozilla VR BlogLessons from Hacking Glitch


When we first started building MrEd we imagined it would be done as a traditional web service. A potential user goes to a website, creates an account, then can build experiences on the site and save them to the server. We’ve all written software like this before and had a good idea of the requirements. However, as we started actually building MrEd we realized there were additional challenges.

First, MrEd is targeted at students, many of them young. My experience with teaching kids during previous summers let me know that they often don’t have email addresses, and even if they do there are privacy and legal issues around tracking what the students do. Also, we knew that this was an experiment which would end one day, but we didn’t want the students to lose access to this tool they had just learned.

After pondering these problems we thought Glitch might be an answer. It supports anonymous use out of the box and allows easy remixing. It also has a nice CDN built in; great for hosting models and 360 images. If it were possible to host the editor as well as the documents, then Glitch would be the perfect platform for a self-contained tool that lives on after the experiment is done.

The downside of Glitch is that many of its advanced features are undocumented. After much research we figured out how to modify Glitch to solve many problems, so now we’d like to share our solutions with you.

Making a Glitch from a Git Repo

Glitch’s editor is great for editing a small project, but not for building large software. We knew from the start that we’d need to edit on our local machines and store the code in a GitHub repo. The question was how to get that code into Glitch initially. It turns out Glitch supports creating a new project from an existing git repo. This was a fantastic advantage.


We could now create a build of the editor and set up the project just how we like, keep it versioned in Git, then make a new Glitch whenever we needed to. We built a new repo called mred-base-glitch specifically for this purpose and documented the steps to use it in the readme.

Integrating React

MrEd is built in React, so the next challenge was how to get a React app into Glitch. During development we ran the app locally using a hot-reloading dev server. For final production, however, we needed static files that could be hosted anywhere. Since our app was made with create-react-app we can build a static version with npm run build. The problem is that it requires you to set the hostname property in your package.json to calculate the final URL references. This wouldn’t work for us because someone’s Glitch could be renamed to anything. The solution was to set the hostname to ., so that all URLs are relative.

Next we wanted the editor to be hidden. In Glitch the user has a file list on the left side of the editor. While it’s fine to have assets and scripts be visible, we wanted the generated React code to be hidden. It turns out Glitch will hide any directory if it begins with a dot (.). So in our base repo we put the code into public/.mred.

Finally we had the challenge of how to update the editor in an existing glitch without overwriting assets and documents the user had created.

Rather than putting everything into one git repo we made two. The first repo, mred, contains just the code to build the editor in React. The second repo, mred-base-glitch, contains the default documents and behaviors. This second repo integrates the first one as a git submodule. The compiled version of the editor also lives in the mred repo in the build directory. This way both the source and compiled versions of the editor can be versioned in git.

Whenever you want to update the editor in an existing glitch you can go to the Glitch console and run git submodule init and git submodule update to pull in just the editor changes. Then you can update the glitch UI with refresh. While this was a manual step, the students were able to do it easily with instruction.

Loading documents

The editor is a static React app hosted in the user’s Glitch, but it needs to save documents created in the editor at some location. Glitch doesn’t provide an API for programmatically loading and saving documents, but any Glitch can have a NodeJS server in it so we built a simple document server with express. The doc server scans the documents and scripts directories to produce a JSON API that the editor consumes.

For the launch page we wanted the user to see a list of their current projects before opening the editor. For this part the doc server has a route at / which returns a webpage containing the list as links. For URLs that need to be absolute the server uses a magic variable provided by Glitch to determine the hostname: process.env.PROJECT_DOMAIN.
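As a rough illustration of the approach (not the actual MrEd code), a tiny express doc server along these lines could expose the document list as JSON and build absolute links from PROJECT_DOMAIN. The route names, directory layout, and editor URL here are assumptions made for the sketch.

// Illustrative sketch only, not the real MrEd doc server.
const express = require("express");
const fs = require("fs");
const app = express();

// JSON listing of the user's documents for the editor to consume
app.get("/docs", (req, res) => {
  res.json(fs.readdirSync("documents"));
});

// Launch page: list current projects as absolute links,
// using the hostname Glitch provides in the environment.
app.get("/", (req, res) => {
  const host = `https://${process.env.PROJECT_DOMAIN}.glitch.me`;
  const links = fs.readdirSync("documents")
    .map(name => `<li><a href="${host}/.mred/?doc=${name}">${name}</a></li>`)
    .join("");
  res.send(`<ul>${links}</ul>`);
});

app.listen(process.env.PORT || 3000);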

The assets were a bit trickier than scripts and docs. The editor needs a list of available assets, but we can’t just scan the assets directory because assets aren’t actually stored in your Glitch. Instead they live on Glitch’s CDN using long generated URLs. However, the Glitch does have a hidden file called .glitch-assets which lists all of the assets as a JSON doc, including the mime types.

We discovered that a few of the files students wanted to use, like GLBs and WAVs, aren’t recognized by Glitch. You can still upload these files to the CDN but the .glitch-assets file won’t list the correct mime-type, so our little doc server also calculated new mime types for these files.

Having a tiny document server in the Glitch gave us a lot of flexibility to fix bugs and implement missing features. It was definitely a design win.

User Authentication

Another challenge with using Glitch is user authentication. Glitch has a concept of users and will not let a user edit someone else’s glitch without permission, but this user system is not exposed as an API. Our code had no way to know if the person interacting with the editor is the owner of that glitch or not. There are rumors of such a feature in the future, but for now we made do with a password file.

It turns out glitches can have a special file called .env for storing passwords and other secure environment variables. This file can be read by code running in the glitch, but it is not copied when remixing, so if someone remixes your glitch they won’t find out your password. To use this we require students to set a password as soon as they remix the base glitch. Then the doc server will use the password for authenticating communication with the editor.

Future Features

We managed to really modify Glitch to support our needs and it worked quite well. That said, there are a few features we’d like them to add in the future.

Documentation. Almost everything we did above came after lots of research in the support forums, and help from a few Glitch staffers. There is very little official documentation of how to do anything beyond basic project development. It would be nice if there was an official docs site beyond the FAQs.

A real authentication API. Using the .env file was a nice hack, but it would be nice if the editor itself could respond properly to the user. If the user isn’t logged in it could show a play only view of the experience. If the user is logged in but isn’t the owner of the glitch then it could show a remix button.

A way to populate assets programmatically. Everything you see in a glitch when you clone from GitHub comes from the underlying git repo except for the assets. To create a glitch with a pre-set list of assets (say for doing specific exercises in a class) requires manually uploading the files through the visual interface. There is no way (at least that we could find) to store the assets in the git repo or upload them programmatically.

Overall Glitch worked very well. We got an entire visual editor, assets, and document storage into a single conceptual chunk -- a glitch -- that can be shared and remixed by anyone. We couldn’t have done what we needed on such a short timeline without Glitch. We thank you Glitch Team!

Firefox NightlyThese Weeks in Firefox: Issue 62

Highlights

 

 

Friends of the Firefox team

Resolved bugs (excluding employees)

Fixed more than one bug

  • Armando Ferreira
  • Arun Kumar Mohan
  • Florens Verschelde :fvsch
  • J
  • jaril
  • Krishnal Ciccolella
New contributors (🌟 = first patch)

Project Updates

Add-ons / Web Extensions

Browser Architecture

Developer Tools

  • Debugger
    • Landed in Fx70: Variables and Scopes now remain expanded while stepping (see bug 1405402).
    • In progress: working on a new DOM mutations breakpoint panel that will be shared in the debugger and the inspector.
  • Console
    • The new Console Editor is progressing nicely: it now has a close button (see bug 1567370), can be resized as you would expect (see bug 1554877), and will soon have history navigation buttons (see bug 1558198).
    • The entire console layout now uses CSS grid and subgrids (see bug 1565962) and it’s awesome.
      • The Web Console showing a split pane between an editor and the output. A simple Hello World block of JavaScript is being executed in the console.

        Using modern web technologies to build our own tools feels great!

  • Layout tools
  • Remote debugging
    • As of Fx 70 about:debugging is now the truly official way to connect to remote targets. The old “Connect” page as well as WebIDE have been removed. See this bug and this bug.

Fission

Lint

New Tab Page

  • Working towards launching remote layouts (aka DiscoveryStream) in Firefox 69.

Password Manager

Performance

Picture-in-Picture

Privacy/Security

  • Carolina and Danielle have been hard at work adding a new certificate viewer at about:certificate. Once bug 1567561 lands you can flip security.aboutcertificate.enabled and inspect certificates in a tab.
    • Please note that it’s very much work-in-progress right now, don’t expect things to work.
    • We’ll do another announcement when things are ready for testing.
  • Paul is churning through tricky evil traps bugs, such as Bug 1522120 – Exit fullscreen when a permission prompt is shown to the user
  • Paul also improved our indicators for geolocation usage to include an in-use indicator and show when geolocation was last accessed by the site
    • The Permissions section of the Identity Panel in Firefox showing that the Geolocation API was granted access to your location 5 seconds ago.

      This user gave the site permission to know their location, and it was accessed 5 seconds ago. If the user changes their mind, they can easily revoke that access.

  • We successfully completed the no-eval-in-system-principal project!
    • Thank you very much to everyone who helped out

Search and Navigation

Search
Quantum Bar

User Journey

  • Progress on What’s New Panel (see meta bug) targeted for 70
  • Improvements to the First Run onboarding experience, new targeting/triggers for the What’s new Page, and new Firefox Monitor snippet planned for 69 as the result of a recent work week
  • Some new potential CFRs (Contextual Feature Recommendations) planned for sync, Firefox Send, Send Tab to Device, and Lockwise
    • Some mockups showing how there are plans to promote Firefox Sync, Send, Send Tab to Device, and Lockwise via the Contextual Feature Recommender.

Mozilla VR BlogHubs July Update


We’ve introduced new features that make it easier to moderate and share your Hubs experience. July was a busy month for the team, and we’re excited to share some updates! As the community around Hubs has grown, we’ve had the chance to see different ways that groups meet in Hubs and are excited to explore new ways that groups can choose what types of experience they want to have. Different communities have different needs for how they’re meeting in Hubs, and we think that these features are a step towards helping people get co-present together in virtual spaces in the way that works best for them.

Room-Level Permissions
It is now possible for room owners to specify which features are granted to other users in the room. This allows the owner of the room to decide if people can add media to the room, draw with the pen, pin objects, and create cameras. If you’re using Hubs for a meeting or event where there will be a larger number of attendees, this can help keep the room organized and free from distractions.


Promoting Moderators
For groups that hold larger events in Hubs, there is now the ability to promote other users in a Hubs room to also have the capabilities of the room owner. If you’ve been creating rooms using the Hubs Discord bot, you may already be familiar with rooms that have multiple owners. This feature can be especially valuable for groups that have a core set of administrators who are available in the room to help moderate and keep events running smoothly. Room owners can promote other users to moderators by opening up the user list and selecting the user from a list, then clicking ‘Promote’ on the action list. You should only promote trusted users to moderator, since they’ll have the same permissions as you do as the room owner. Users must be signed in to be promoted.

Camera Mode
Room owners can now hide the Hubs user interface by enabling camera mode, which was designed for groups that want to have a member in the room record or livestream their gathering. When in camera mode, the room owner will broadcast the view from their avatar and replace the Lobby camera, and non-essential UI elements will be hidden. The full UI can be hidden by clicking the ‘Hide All’ button, which allows for a clear, unobstructed view of what’s going on in the room.

Video Recording
The camera tool in Hubs can now be used to record videos as well as photos. When a camera is created in the room, you can toggle different recording options using the UI on the camera itself. Like photos, videos that are taken with the in-room camera will be added to the room after they have finished capturing. Audio for videos will be recorded from the position of the avatar of the user who is recording. While recording video on a camera, users will have an indicator on their display name above their head to show that they are capturing video. The camera itself also contains a light to indicate when it is recording.


Tweet from Hubs
For users who want to share their photos, videos, and rooms through Twitter, you can now tweet from directly inside of Hubs when media is captured in a room. When you hover over a photo or video that was taken by the in-room camera, you will see a blue ‘Tweet’ button appear. The first time you share an image or video through Twitter, you will be prompted to authenticate to your Twitter account. You can review the Hubs Privacy Policy and third-party notices here, and revoke access to Hubs from your Twitter account by going to https://twitter.com/settings/applications.

Embed Hubs Rooms
You can now embed a Hubs room directly into another web page in an iFrame. When you click the 'Share' button in a Hubs room, you can copy the embed code and paste it into the HTML on another site. Keep in mind that this means anyone who visits that page will be able to join!


Discord Bot Notifications
If you have the Hubs Discord bot in your server and bridged to a channel, you can now set a reminder to notify you of a future event or meeting. Just type in the command !hubs notify set mm/dd/yyyy and your time zone, and the Hubs Bot will post a reminder when the time comes around.

Microphone Level Indicator
Have you ever found yourself wondering if other people in the room could hear you, or forgotten that you were muted? The microphone icon in the HUD now shows mic activity level, regardless of whether or not you have your mic muted. This is a handy little way to make sure that your microphone is picking up your audio, and a nice reminder that you’re talking while muted.

In the coming months, we will be continuing work on new features aimed at enabling communities to get together easily and effectively. We’ll also be exploring improvements to the avatar customization flow and new features for Spoke to improve the tools available to creators to build their own spaces for their Hubs rooms. To participate in the conversation about new features and join our weekly community meetups, join us on Discord using the invitation link here.

Mozilla Open Policy & Advocacy BlogMozilla calls for transparency in compelled access case

Sometime last year, Facebook challenged a law enforcement request for access to encrypted communications through Facebook Messenger, and a federal judge denied the government’s demand. At least, that is what has been reported by the press. Troublingly, the details of this case are still not available to the public, as the opinion was issued “under seal.” We are trying to change that.

Mozilla, with Atlassian, has filed a friend of the court brief in a Ninth Circuit appeal arguing for unsealing portions of the opinion that don’t reveal sensitive or proprietary information or, alternatively, for releasing a summary of the court’s legal analysis. Our common law legal system is built on precedent, which depends on the public availability of court opinions for potential litigants and defendants to understand the direction of the law. This opinion would have been only the third since 2003 offering substantive precedent on compelled access—thus especially relevant input on an especially serious issue.

This case may have important implications for the current debate about whether and under what circumstances law enforcement can access encrypted data and encrypted communications. The opinion, if disclosed, could help all kinds of tech companies push back on overreaching law enforcement demands. We are deeply committed to building secure products and establishing transparency and control for our users, and this information is vital to enabling those ends. As thoughtful, mission-driven engineers and product designers, it’s critical for us as well as end users to understand the legal landscape around what the government can and cannot require.

The post Mozilla calls for transparency in compelled access case appeared first on Open Policy & Advocacy.

Daniel StenbergThe slowest curl vendors of all time

In the curl project we make an effort to ship security fixes as soon as possible after we’ve learned about a problem. We also “prenotify” (inform them about a problem before it gets known to the public) vendors of open source OSes ahead of the release to alert them about what is about to happen and to make it possible for them to be ready and prepared when we publish the security advisory of the particular problems we’ve found.

These distributors ship curl to their customers and users. They build curl from the sources they host and they apply (our and their own) security patches to the code over time to fix vulnerabilities. Usually they start out with the clean and unmodified version we released and then over time the curl version they maintain and ship gets old (by my standards) and the number of patches they apply grow, sometimes to several hundred.

The distros@openwall mailing list allows no more than 14 days of embargo, so they can never be told any further in advance than that.

We always ship at least one official patch for each security advisory. That patch is usually made for the previous version of curl and it will of course sometimes take a little work to backport to much older curl versions.

Red Hat

The other day I was reading LWN when I saw their regular notices about security updates from various vendors and couldn’t help checking out a mentioned curl security fix from Red Hat for Red Hat Enterprise Linux 7. It was dated July 29, 2019 and fixed CVE-2018-14618, which we announced on September 5th 2018. 327 days ago.

Not quite reaching Apple’s level, Red Hat positions themselves as number three in this toplist with this release.

An interesting detail here is that the curl version Red Hat fixed here was 7.29.0, which is the exact same version our winner also patched…

(Update after first publication: after talks with people who know things I’ve gotten some further details. Red Hat did ship a fix for this problem already in 2018. This 2019 one was a subsequent update for complicated reasons, which may or may not make this entry disqualified for my top-list.)

Apple

At times when I’ve thought it has been necessary, I’ve separately informed the product security team at Apple about a pending release with fixes that might affect their users, and almost every time I’ve done that they’ve responded to me and asked that I give them (much) longer time between alert and release in the future. (Requests I’ve ignored so far because it doesn’t match how we work nor how the open vendors want us to behave). Back in 2010, I noticed how one of the security fixes took 391 days for Apple to fix. I haven’t checked, but I hope they’re better at this these days.

With the 391 days, Apple takes place number two.

Oracle

Oracle Linux published the curl errata named ELSA-2019-1880 on July 30 2019 and it apparently fixes nine different curl vulnerabilities. All nine were the result of the Cure53 security audit and we announced them on November 2 2016.

These problems had at that time been public knowledge for exactly 1000 days! The race is over and Oracle got this win by a pretty amazing margin.

In this case, they still ship curl 7.29.0 (released on February 6, 2013) when the latest curl version we ship is version 7.65.3. When I write this, we know about 47 security problems in curl 7.29.0. 14 of those problems were fixed after those nine problems that were reportedly fixed on July 30. It might mean, but doesn’t have to, that their shipped version still is vulnerable to some of those…

Top-3

Summing up, here’s the top-3 list of all times:

  1. Oracle: 1000 days
  2. Apple: 391 days
  3. Red Hat: 327 days

Ending notes

I’m bundling and considering all problems as equals here, which probably isn’t entirely fair. Different vulnerabilities will have different degrees of severity and thus will be more or less important to fix in a short period of time.

Still, these were security releases done by these companies so someone there at least considered them to be security related, worth fixing and worth releasing.

This list is entirely unscientific, I might have missed some offenders. There might also be some that haven’t patched these or even older problems and then they are even harder to spot. If you know of a case suitable for this top-list, let me know!

Daniel Stenberg2000 contributors

Today when I ran the script that counts the total number of contributors that have helped out in the curl project (called contrithanks.sh) the number showing up in my terminal was

2000

At 7804 days since the birthday, it means one new contributor roughly every 4 days. For over 21 years. Kind of impressive when you think of it.

A “contributor” here means everyone that has reported bugs, helped out with fixing bugs, written documentation or authored commits (and whose name we recorded at the time it happened, which is something we really make an effort not to miss). Out of the 2000 current contributors, 708 are recorded in git as authors.

Plotted out on a graph, with the numbers from the RELEASE-NOTES over time we can see an almost linear growth. (The graph starts at 2005 because that’s when we started to log the number in that file.)

Number of contributors over time.

We crossed the 1000 mark on April 12 2013. 1400 on May 30th 2016 and 1800 on October 30 2018.

It took us almost six years to go from 1000 to 2000; roughly one new contributor every second day.

Two years ago, in curl 7.55.0, we were at exactly 1571 contributors, so we’ve received help from over two hundred new persons per year recently. (Barring the miscalculations that occur when we occasionally batch-correct names or go through records to collect previously missed names, etc.)

Thank you!

The curl project would not be what it is without all the help we get from all these awesome people. I love you!

docs/THANKS

That’s the file in the git repo that contains the names of all the contributors, but if you check it right now you will see that it isn’t exactly 2000 names yet. That is because we tend to update it in batches around release time. So by the time the next release comes around, we will gather all the new contributors that aren’t already mentioned in that file and add them, and by then I’m sure we will be able to boast more than 2000 contributors. I hope you are one of the names in that list!

The Firefox FrontierThe latest Facebook Container for Firefox

Last year we helped you keep Facebook contained to Facebook, making it possible for you to stay connected to family and friends on the social network, while also keeping your … Read more

The post The latest Facebook Container for Firefox appeared first on The Firefox Frontier.

Will Kahn-Greenecrashstats-tools v1.0.1 released! cli for Crash Stats.

What is it?

crashstats-tools is a set of command-line tools for working with Crash Stats (https://crash-stats.mozilla.org/).

crashstats-tools comes with two commands:

  • supersearch: for performing Crash Stats Super Search queries
  • fetch-data: for fetching raw crash, dumps, and processed crash data for specified crash ids

v1.0.1 released!

I extracted two commands we have in the Socorro local dev environment as a separate Python project. This allows anyone to use those two commands without having to set up a Socorro local dev environment.

The audience for this is pretty limited, but I think it'll help significantly for testing analysis tools.

Say I'm working on an analysis tool that looks at crash report minidump files and does some additional analysis on them. I could use the supersearch command to get a list of crash ids to download data for, and the fetch-data command to download the requisite data.

$ export CRASHSTATS_API_TOKEN=foo
$ mkdir crashdata
$ supersearch --product=Firefox --num=10 | \
    fetch-data --raw --dumps --no-processed crashdata

Then I can run my tools on the dumps in crashdata/upload_file_minidump/.

Where to go for more

See the project on GitHub which includes a README which contains everything about the project including examples of usage, the issue tracker, and the source code:

https://github.com/willkg/crashstats-tools

Let me know whether this helps you!

Dustin J. MitchellCODEOWNERS syntax

The GitHub docs page for CODEOWNERS is not very helpful in terms of how the file is interpreted. I’ve done a little experimentation to figure out how it works, and here are the results.

Rules

For each modified file in a PR, GitHub examines the codeowners file and selects the last matching entry. It then combines the set of mentions for all files in the PR and assigns them as reviewers.

An entry can specify no reviewers by containing only a pattern and no mentions.

Test

Consider this CODEOWNERS:

*            @org/reviewers
*.js         @org/js-reviewers
*.go         @org/go-reviewers
security/**  @org/sec-reviewers
generated/**

Then a change to:

  • README.md would get review from @org/reviewers
  • src/foo.js would get review from @org/js-reviewers
  • bar.go would get review from @org/go-reviewers
  • security/crypto.go would get review from @org/sec-reviewers (but not @org/go-reviewers!)
  • generated/reference.go would get review from nobody

And thus a PR with, for example:

M src/foo.js
M security/crypto.go
M generated/reference.go

would get reviewed by @org/js-reviewers and @org/sec-reviewers.

If I wanted per-language reviews even under security/, then I’d use

security/**       @org/sec-reviewers
security/**/*.js  @org/sec-reviewers @org/js-reviewers
security/**/*.go  @org/sec-reviewers @org/go-reviewers

Hacks.Mozilla.OrgNew CSS Features in Firefox 68

Firefox 68 landed earlier this month with a bunch of CSS additions and changes. In this blog post we will take a look at some of the things you can expect to find, that might have been missed in earlier announcements.

CSS Scroll Snapping

The headline CSS feature this time round is CSS Scroll Snapping. I won’t spend much time on it here as you can read the blog post for more details. The update in Firefox 68 brings the Firefox implementation in line with Scroll Snap as implemented in Chrome and Safari. In addition, it removes the old properties which were part of the earlier Scroll Snap Points Specification.
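As a quick reminder of what that standardized syntax looks like, here is a minimal sketch: scroll-snap-type goes on the scroll container and scroll-snap-align on its children.

/* Scroll container snaps vertically, and snapping is required */
.gallery {
  overflow-y: scroll;
  scroll-snap-type: y mandatory;
}

/* Each item snaps its start edge to the container */
.gallery > section {
  scroll-snap-align: start;
}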

The ::marker pseudo-element

The ::marker pseudo-element lets you select the marker box of a list item. This will typically contain the list bullet, or a number. If you have ever used an image as a list bullet, or wrapped the text of a list item in a span in order to have different bullet and text colors, this pseudo-element is for you!

With the marker pseudo-element, you can target the bullet itself. The following code will turn the bullet on unordered lists to hot pink, and make the number on an ordered list item larger and blue.

ul ::marker {
  color: hotpink;
}

ol ::marker {
  color: blue;
  font-size: 200%;
}
An ordered and unordered list with styled bullets

With ::marker we can style our list markers

See the CodePen.

There are only a few CSS properties that may be used on ::marker. These include all font properties. Therefore you can change the font-size or family to be something different to the text. You can also color the bullets as shown above, and insert generated content.

Using ::marker on non-list items

A marker can only be shown on list items, however you can turn any element into a list-item by using display: list-item. In the example below I use ::marker, along with generated content and a CSS counter. This code outputs the step number before each h2 heading in my page, preceded by the word “step”. You can see the full example on CodePen.

h2 {
  display: list-item;
  counter-increment: h2-counter;
}

h2::marker {
  content: "Step: " counter(h2-counter) ". ";
}

If you take a look at the bug for the implementation of ::marker you will discover that it is 16 years old! You might wonder why a browser has 16 year old implementation bugs and feature requests sitting around. To find out more read through the issue, where you can discover that it wasn’t clear originally if the ::marker pseudo-element would make it into the spec.

There were some Mozilla-specific pseudo-elements that achieved the result developers were looking for with something like ::marker. The ::-moz-list-bullet and ::-moz-list-number pseudo-elements allowed for the styling of bullets and numbers respectively, using a -moz- vendor prefix.

The ::marker pseudo-element is standardized in CSS Lists Level 3, and CSS Pseudo-elements Level 4, and currently implemented as of Firefox 68, and Safari. Chrome has yet to implement ::marker. However, in most cases you should be able to use ::marker as an enhancement for those browsers which support it. You can allow the markers to fall back to the same color and size as the rest of the list text where it is not available.
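In practice that progressive enhancement can be as simple as the following sketch; browsers without ::marker support simply ignore the rule and render their default marker.

/* Base list styling that every browser understands */
ul {
  color: #333;
}

/* Enhancement: browsers without ::marker support ignore this rule */
ul ::marker {
  color: hotpink;
  font-size: 120%;
}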

CSS Fixes

It makes web developers sad when we run into a feature which is supported but works differently in different browsers. These interoperability issues are often caused by the sheer age of the web platform. In fact, some things were never fully specified in terms of how they should work. Many changes to our CSS specifications are made due to these interoperability issues. Developers depend on the browsers to update their implementations to match the clarified spec.

Most browser releases contain fixes for these issues, making the web platform incrementally better as there are fewer issues for you to run into when working with CSS. The latest Firefox release is no different – we’ve got fixes for the ch unit, and list numbering shipping.

Developer Tools

In addition to changes to the implementation of CSS in Firefox, Firefox 68 brings you some great new additions to Developer Tools to help you work with CSS.

In the Rules Panel, look for the new print styles button. This button allows you to toggle to the print styles for your document, making it easier to test a print stylesheet that you are working on.

The Print Styles button in the UI highlighted

The print styles icon is top right of the Rules Panel.

 

Staying with the Rules Panel, Firefox 68 shows an icon next to any invalid or unsupported CSS. If you have ever spent a lot of time puzzling over why something isn’t working, only to realise you made a typo in the property name, this will really help!

A property named flagged invalid in the console

In this example I have spelled padding as “pudding”. There is (sadly) no pudding property so it is highlighted as an error.

 

The console now shows more information about CSS errors and warnings. This includes a nodelist of places the property is used. You will need to click CSS in the filter bar to turn this on.

The console highlighting a CSS error

My pudding error is highlighted in the Console and I can see I used it on the body element.

 

So that’s my short roundup of the features you can start to use in Firefox 68. Take a look at the Firefox 68 release notes to get a full overview of all the changes and additions that Firefox 68 brings you.

The post New CSS Features in Firefox 68 appeared first on Mozilla Hacks - the Web developer blog.

Mozilla Future Releases BlogDNS-over-HTTPS (DoH) Update – Detecting Managed Networks and User Choice

At Mozilla, we are continuing to experiment with DNS-over-HTTPS (DoH), a new network protocol that encrypts Domain Name System (DNS) requests and responses. This post outlines a new study we will be conducting to gauge how many Firefox users in the United States are using parental controls or enterprise DNS configurations.

With previous studies, we have tried to understand the performance impacts of DoH, and the results have been very promising. We found that DoH queries are typically the same speed or slightly slower than DNS queries, and in some cases can be significantly faster. Furthermore, we found that web pages that are hosted by Akamai–a content distribution network, or “CDN”–have similar performance when DoH is enabled. As such, DoH has the potential to improve user privacy on the internet without impeding user experience.

Now that we’re satisfied with the performance of DoH, we are shifting our attention to how we will interact with existing DNS configurations that users have chosen.  For example, network operators often want to filter out various kinds of content. Parents and schools in particular may use “parental controls”, which block access to websites that are considered unsuitable for children. These controls may also block access to malware and phishing websites. DNS is commonly used to implement this kind of content filtering.

Similarly, some enterprises set up their own DNS resolvers that behave in special ways. For example, these resolvers may return a different IP address for a domain name depending on whether the user that initiated the request is on a corporate network or a public network. This behavior is known as “split-horizon”, and it is often used to host a production and a development version of a website. Enabling DoH in this scenario could unintentionally prevent access to internal enterprise websites when using Firefox.

We want to understand how often users of Firefox are subject to these network configurations. To do that, we are performing a study within Firefox for United States-based users to collect metrics that will help answer this question. These metrics are based on common approaches to implementing filters and enterprise DNS resolvers.

Detecting DNS-based parental controls

This study will generate DNS lookups from participants’ browsers to detect DNS-based parental controls. First, we will resolve test domains operated by popular parental control providers to determine if parental controls are enabled on a network. For example, OpenDNS operates exampleadultsite.com. It is not actually an adult website, but it is present on the blocklists for several parental control providers. These providers often block access to such websites by returning an incorrect IP address for DNS lookups.

As part of this test, we will resolve exampleadultsite.com. According to OpenDNS, this domain name should only resolve to the address 146.112.255.155. Thus, if a different address is returned, we will infer that DNS-based parental controls have been configured. The browser will not connect to, or download any content from the website.
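If you are curious whether your own network applies this kind of DNS-based filtering, you can run the same check by hand with a tool such as dig, using the test domain and the documented OpenDNS address mentioned above:

# On an unfiltered network, OpenDNS documents that this should return 146.112.255.155:
$ dig +short exampleadultsite.com A

# Any other answer suggests a DNS-based filter is rewriting responses on this network.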

We will also attempt to detect when a network has forced “safe search” versions of Google and YouTube for its users. The way that safe search works is that the network administrator configures their resolver to redirect DNS requests for a search provider to a “safe” version of the website. For example, a network administrator may force all users that look up www.google.com to instead look up forcesafesearch.google.com. When the browser connects to the IP address for forcesafesearch.google.com, the search provider knows that safe search is enabled and returns filtered search results.

We will resolve the unrestricted domain names provided by Google and YouTube from the addon, and then resolve the safe search domain names. Importantly, the safe search domain names for Google and YouTube are hosted on fixed IP addresses. Thus, if the IP address for an unrestricted and safe search domain name match, we will infer that parental controls are enabled. The tables below show the domain names we will resolve to detect safe search.

  • YouTube: www.youtube.com, m.youtube.com, youtubeapi.googleapis.com, youtube.googleapis.com, www.youtube-nocookie.com
  • Google: www.google.com, google.com

Table 1: The unrestricted domain names provided by YouTube and Google

  • YouTube: restrict.youtube.com, restrictmoderate.youtube.com
  • Google: forcesafesearch.google.com

Table 2: The safe search domain names provided by YouTube and Google

Detecting split-horizon DNS resolvers

We also want to understand how many Firefox users are behind networks that use split-horizon DNS resolvers, which are commonly configured by enterprises. We will perform two checks locally in the browser on DNS answers for websites that users visit during the study. First, we will check if the domain name does not contain a TLD that can be resolved by publicly-available DNS resolvers (such as .com). Second, if the domain name does contain such a TLD, we will check if the domain name resolves to a private IP address.

If either of these checks return true, we will infer that the user’s DNS resolver has configured split-horizon behavior. This is because the public DNS can only resolve domain names with particular TLDs, and it must resolve domain names to addresses that can be accessed over the public internet.
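Conceptually, the two checks amount to something like the following sketch. This is illustrative only, not the study code, and the public-suffix set and private address ranges here are simplified.

// Illustrative sketch of the two split-horizon checks, not the study code.
const PRIVATE_V4 = [/^10\./, /^127\./, /^192\.168\./, /^172\.(1[6-9]|2[0-9]|3[01])\./];

function looksSplitHorizon(hostname, resolvedAddress, publicSuffixes) {
  // Check 1: the TLD is not one that public resolvers can resolve.
  const tld = hostname.split(".").pop();
  if (!publicSuffixes.has(tld)) {
    return true;
  }
  // Check 2: the name resolves to a private (non-public) address.
  return PRIVATE_V4.some(range => range.test(resolvedAddress));
}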

To be clear, we will not collect any DNS requests or responses. All checks will occur locally. We will count how many unique domain names appear to be resolved by a split-horizon resolver and then send only these counts to us.

Study participation

Users that do not wish to participate in this study can opt out by typing “about:studies” in the navigation bar, looking for an active study titled “Detection Logic for DNS-over-HTTPS”, and disabling it. (Not all users will receive this study, so don’t be alarmed if you can’t find it.) Users may also opt out of participating in any future studies from this page.

As always, we are committed to maintaining a transparent relationship with our users. We believe that DoH significantly improves the privacy of our users. As we move toward a rollout of DoH to all United States-based Firefox users, we intend to provide explicit mechanisms allowing users and local DNS administrators to opt-out.

 

The post DNS-over-HTTPS (DoH) Update – Detecting Managed Networks and User Choice appeared first on Future Releases.

This Week In RustThis Week in Rust 297

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's crate is async-trait, a procedural macro to allow async fns in trait methods. Thanks to Ehsan M. Kermani for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

324 pull requests were merged in the last week

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs

No new RFCs were proposed this week.

Tracking Issues & PRs

New RFCs

Upcoming Events

Africa
Asia Pacific
Europe
North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

Rust clearly popularized the ownership model, with similar implementations being considered in D, Swift and other languages. This is great news for both performance and memory safety in general.

Also let's not forget that Rust is not the endgame. Someone may at one point find or invent a language that will offer an even better position in the safety-performance-ergonomics space. We should be careful not to get too attached to Rust, lest we stand in progress' way.

llogiq on reddit

Thanks to Vikrant for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nasa42, llogiq, and Flavsditz.

Discuss on r/rust.

Eitan Isaacson: HTML Text Snippet Extension

I often need to quickly test a snippet of HTML, mostly to see how it interacts with our accessibility APIs.

Instead of creating some throwaway HTML file each time, I find it easier to paste in the HTML in devtools, or even make a data URI.

Last week I spent an hour creating an extension that allows you to just paste some HTML into the address bar and have it rendered immediately.

You just need to prefix it with the html keyword, and you’re good to go, like this: html <h1>Hello, World!</h1>.

You can download it from github.

There might be other extensions or ways of doing this, but it was a quick little project.

IRL (podcast): The Tech Worker Resistance

There's a movement building within tech. Workers are demanding higher standards from their companies — and because of their unique skills and talent, they have the leverage to get attention. Walkouts and sit-ins. Picket protests and petitions. Shareholder resolutions, and open letters. These are the new tools of tech workers, increasingly emboldened to speak out. And, as they do that, they expose the underbellies of their companies' ethics and values or perceived lack of them.

In this episode of IRL, host Manoush Zomorodi meets with Rebecca Stack-Martinez, an Uber driver fed up with being treated like an extension of the app; Jack Poulson, who left Google over ethical concerns with a secret search engine being built for China; and Rebecca Sheppard, who works at Amazon and pushes for innovation on climate change from within. EFF Executive Director Cindy Cohn explains why this movement is happening now, and why it matters for all of us.

IRL is an original podcast from Firefox. For more on the series go to irlpodcast.org

Rebecca Stack-Martinez is a committee member for Gig Workers Rising.

Here is Jack Poulson's resignation letter to Google. For more, read Google employees' open letter against Project Dragonfly.

Check out Amazon employees' open letter to Jeff Bezos and Board of Directors asking for a better plan to address climate change.

Cindy Cohn is the Executive Director of the Electronic Frontier Foundation. EFF is a nonprofit that defends civil liberties in the digital world. They champion user privacy, free expression, and innovation through impact litigation, policy analysis, grassroots activism, and technology development.

Mozilla VR Blog: MrEd, an Experiment in Mixed Reality Editing

We are excited to tell you about our experimental Mixed Reality editor, an experiment we did in the spring to explore online editing in MR stories. What’s that? You haven’t heard of MrEd? Well please allow us to explain.

For the past several months Blair, Anselm and I have been working on a visual editor for WebXR called the Mixed Reality Editor, or MrEd. We started with this simple premise: non-programmers should be able to create interactive stories and experiences in Mixed Reality without having to embrace the complexity of game engines and other general purpose tools. We are not the first people to tackle this challenge; from visual programming tools to simplified authoring environments, researchers and hobbyists have grappled with this problem for decades.

Looking beyond Mixed Reality, there have been notable successes in other media. In the late 1980s Apple created a ground breaking tool for the Macintosh called Hypercard. It let people visually build applications at a time when programming the Mac required Pascal or assembly. It did this by using the concrete metaphor of a stack of cards. Anything could be turned into a button that would jump the user to another card. Within this simple framework people were able to create eBooks, simple games, art, and other interactive applications. Hypercard’s reliance on declaring possibly large numbers of “visual moments” (cards) and using simple “programming” to move between them is one of the inspirations for MrEd.

We also took inspiration from Twine, a web-based tool for building interactive hypertext novels. In Twine, each moment in the story (seen on the screen) is defined as a passage in the editor as a mix of HTML content and very simple programming expressions executed when a passage is displayed, or when the reader follows a link. Like Hypercard, the author directly builds what the user sees, annotating it with small bits of code to manage the state of the story.

No matter what the medium — text, pictures, film, or MR — people want to tell stories. Mixed Reality needs tools to let people easily tell stories by focusing on the story, not by writing a simulation. It needs content focused tools for authors, not programmers. This is what MrEd tries to be.

Scenes Linked Together

At first glance, MrEd looks a lot like other 3D editors, such as Unity3D or Amazon Sumerian. There is a scene graph on the left, letting authors create scenes, add anchors and attach content elements under them. Select an item in the graph or in the 3D windows, and a property pane appears on the right. Scripts can be attached to objects. And so on. You can position your objects in absolute space (good for VR) or relative to other objects using anchors. An anchor lets you do something like look for this poster in the real world, then position this text next to it, or look for this GPS location and put this model on it. Anchors aren’t limited to basic placement; they can also express more semantically meaningful concepts like find the floor and put this on it (we’ll dig into this in another article).

Dig into the scene graph on the left, and differences appear. Instead of editing a single world or game level, MrEd uses the metaphor of a series of scenes (inspired by Twine’s passages and Hypercard’s cards). All scenes in the project are listed, with each scene defining what you see at any given point: shapes, 3D models, images, 2D text and sounds. You can add interactivity by attaching behaviors to objects for things like ‘click to navigate’ and ‘spin around’. The story advances by moving from scene to scene; code to keep track of story state is typically executed on these scene transitions, like Hypercard and Twine. Where most 3D editors force users to build simulations for their experiences, MrEd lets authors create stories that feel more like “3D flip-books”. Within a scene, the individual elements can be animated, move around, and react to the user (via scripts), but the story progresses by moving from scene to scene. While it is possible to create complex individual scenes that begin to feel like a Unity scene, simple stories can be told through sequences of simple scenes.

We built MrEd on Glitch.com, a free web-based code editing and hosting service. With a little hacking we were able to put an entire IDE and document server into a glitch. This means anyone can share and remix their creations with others.

One key feature of MrEd is that it is built on top of a CRDT data structure to enable editing the same project on multiple devices simultaneously. This feature is critical for Mixed Reality tools because you are often switching between devices during development; the networked CRDT underpinnings also mean that logging messages from any device appear in any open editor console viewing that project, simplifying distributed development. We will tell you more details about the CRDT and Glitch in future posts.

We ran a two week class with a group of younger students in Atlanta using MrEd. The students were very interested in telling stories about their school, situating content in space around the buildings, and often using memes and ideas that were popular for them. We collected feedback on features, bugs and improvements and learned a lot from how the students wanted to use our tool.

Lessons Learned

As I said, this was an experiment, and no experiment is complete without reporting on what we learned. So what did we learn? A lot! And we are going to share it with you over the next couple of blog posts.

First, we learned that the idea of building a 3D story from a sequence of simple scenes worked for novice MR authors: direct manipulation with concrete metaphors, navigation between scenes as a way of telling stories, and the ability to easily import images and media from other places. The students were able to figure it out. Even more complex AR concepts like image targets and geospatial anchors were understandable when turned into concrete objects.

MrEd’s behaviors scripts are each a separate Javascript file and MrEd generates the property sheet from the definition of the behavior in the file, much like Unity’s behaviors. Compartmentalizing them in separate files means they are easy to update and share, and (like Unity) simple scripts are a great way to add interactivity without requiring complex coding. We leveraged Javascript’s runtime code parsing and execution to support scripts with simple code snippets as parameters (e.g., when the user finds a clue by getting close to it, a proximity behavior can set a global state flag to true, without requiring a new script to be written), while still giving authors the option to drop down to Javascript when necessary.

Second, we learned a lot about building such a tool. We really pushed Glitch to the limit, including using undocumented APIs, to create an IDE and doc server that is entirely remixable. We also built a custom CRDT to enable shared editing. Being able to jump back and forth between a full 2D browser with a keyboard and the WebXR Viewer running on an ARKit-enabled iPhone is really powerful. The CRDT implementation makes this type of real-time shared editing possible.

Why we are done

MrEd was an experiment in whether XR metaphors can map cleanly to a Hypercard-like visual tool. We are very happy to report that the answer is yes. Now that our experiment is over we are releasing it as open source, and have designed it to run in perpetuity on Glitch. While we plan to do some code updates for bug fixes and supporting the final WebXR 1.0 spec, we have no current plans to add new features.

Building a community around a new platform is difficult and takes a long time. We realized that our charter isn’t to create platforms and communities. Our charter is to help more people make Mixed Reality experiences on the web. It would be far better for us to help existing platforms add WebXR than for us to build a new community around a new tool.

Of course the source is open and freely usable on Github. And of course anyone can continue to use it on Glitch, or host their own copy. Open projects never truly end, but our work on it is complete. We will continue to do updates as the WebXR spec approaches 1.0, but there won’t be any major changes.

Next Steps

We are going to polish up the UI and fix some remaining bugs. MrEd will remain fully usable on Glitch, and hackable on GitHub. We also want to pull some of the more useful chunks into separate components, such as the editing framework and the CRDT implementation. And most importantly, we are going to document everything we learned over the next few weeks in a series of blogs.

If you are interested in integrating WebXR into your own rapid prototyping / educational programming platform, then please let us know. We are very happy to help you.

You can try MrEd live by remixing the Glitch and setting a password in the .env file. You can get the source from the main MrEd github repo, and the source for the glitch from the base glitch repo.

Botond Ballo: Trip Report: C++ Standards Meeting in Cologne, July 2019

Summary / TL;DR (new developments since last meeting in bold)

Project | What’s in it? | Status
C++20 | See below | On track
Library Fundamentals TS v3 | See below | Under development
Concepts | Constrained templates | In C++20
Parallelism TS v2 | Task blocks, library vector types and algorithms, and more | Published!
Executors | Abstraction for where/how code runs in a concurrent context | Targeting C++23
Concurrency TS v2 | See below | Under active development
Networking TS | Sockets library based on Boost.ASIO | Published! Not in C++20.
Ranges | Range-based algorithms and views | In C++20
Coroutines | Resumable functions (generators, tasks, etc.) | In C++20
Modules | A component system to supersede the textual header file inclusion model | In C++20
Numerics TS | Various numerical facilities | Under active development
C++ Ecosystem TR | Guidance for build systems and other tools for dealing with Modules | Under active development
Contracts | Preconditions, postconditions, and assertions | Pulled from C++20, now targeting C++23
Pattern matching | A match-like facility for C++ | Under active development, targeting C++23
Reflection TS | Static code reflection mechanisms | Publication imminent
Reflection v2 | A value-based constexpr formulation of the Reflection TS facilities | Under active development, targeting C++23
Metaclasses | Next-generation reflection facilities | Early development

A few links in this blog post may not resolve until the committee’s post-meeting mailing is published (expected within a few days of August 5, 2019). If you encounter such a link, please check back in a few days.

Introduction

Last week I attended a meeting of the ISO C++ Standards Committee (also known as WG21) in Cologne, Germany. This was the second committee meeting in 2019; you can find my reports on preceding meetings here (February 2019, Kona) and here (November 2018, San Diego), and previous ones linked from those. These reports, particularly the Kona one, provide useful context for this post.

This week the committee reached a very important milestone in the C++20 publication schedule: we approved the C++20 Committee Draft (CD), a feature-complete draft of the C++20 standard which includes wording for all of the new features we plan to ship in C++20.

The next step procedurally is to send out the C++20 CD to national standards bodies for a formal ISO ballot, where they have the opportunity to comment on it. The ballot period is a few months, and the results will be in by the next meeting, which will be in November in Belfast, Northern Ireland. We will then spend that meeting and the next one addressing the comments, and then publishing a revised draft standard. Importantly, as this is a feature-complete draft, new features cannot be added in response to comments; only bugfixes to existing features can be made, and in rare cases where a serious problem is discovered, a feature can be removed.

Attendance at this meeting once again broke previous records, with over 200 people present for the first time ever. It was observed that one of the likely reasons for the continued upward trend in attendance is the proliferation of domain-specific study groups such as SG 14 (Games and Low-Latency Programming) and SG 19 (Machine Learning), which are attracting new experts from those fields.

Note that the committee now tracks its proposals in GitHub. If you’re interested in the status of a proposal, you can find its issue on GitHub by searching for its title or paper number, and see its status — such as which subgroups it has been reviewed by and what the outcome of those reviews was — there.

C++20

Here are the new changes voted into C++20 Working Draft at this meeting. For a list of changes voted in at previous meetings, see my Kona report. (As a quick refresher, major features voted in at previous meetings include modules, coroutines, default comparisons (<=>), concepts, and ranges.)

Technical Specifications

In addition to the C++ International Standard (IS), the committee publishes Technical Specifications (TS) which can be thought of as experimental “feature branches”, where provisional specifications for new language or library features are published and the C++ community is invited to try them out and provide feedback before final standardization.

At this meeting, the focus was on the C++20 CD, and not so much on TSes. In particular, there was no discussion of merging TSes into the C++ IS, because the deadline for doing so for C++20 was the last meeting (where Modules and Coroutines were merged, joining the ranks of Concepts which was merged a few meetings prior), and it’s too early to be discussing mergers into C++23.

Nonetheless, the committee does have a few TSes in progress, and I’ll mention their status:

Reflection TS

The Reflection TS was approved for publication at the last meeting. The publication process for this TS is a little more involved than usual: due to the dependency on the Concepts TS, the Reflection TS needs to be rebased on top of C++14 (the Concepts TS’ base document) for publication. As a result, the official publication has not happened yet, but it’s imminent.

As mentioned before, the facilities in the Reflection TS are not planned to be merged into the IS in their current form. Rather, a formulation based on constexpr values (rather than types) is being worked on. This is a work in progress, but recent developments have been encouraging (see the SG7 (Reflection) section) and I’m hopeful about them making C++23.

Library Fundamentals TS v3

This third iteration (v3) of the Library Fundamentals TS continues to be open for new features. It hasn’t received much attention in recent meetings, as the focus has been on libraries targeted at C++20, but I expect it will continue to pick up material in the coming meetings.

Concurrency TS v2

A concrete plan for Concurrency TS v2 is starting to take shape.

The following features are planned to definitely be included:

The following additional features might tag along if they’re ready in time:

I don’t think there’s a timeline for publication yet; it’s more “when the features in the first list are ready”.

Networking TS

As mentioned before, the Networking TS did not make C++20. As it’s now targeting C++23, we’ll likely see some proposal for design changes between now and its merger into C++23.

One such potential proposal is one that would see the Networking TS support TLS out of the box. JF Bastien from Apple has been trying to get browser implementers on board with such a proposal, which might materialize for the upcoming Belfast meeting.

Evolution Working Group

As usual, I spent most of the week in EWG. Here I will list the papers that EWG reviewed, categorized by topic, and also indicate whether each proposal was approved, had further work on it encouraged, or rejected. Approved proposals are targeting C++20 unless otherwise mentioned; “further work” proposals are not.

Concepts

  • (Approved) Rename concepts to standard_case for C++20, while we still can. Concepts had been part of the C++ literature long before the C++20 language feature that allows them to be expressed in code; for example, they are discussed in Stepanov’s Elements of Programming, and existing versions of the IS document describe the notional concepts that form the requirements for various algorithms. In this literature, concepts are conventionally named in PascalCase. As a result, the actual language-feature concepts added to the standard library in C++20 were named in PascalCase as well. However, it was observed that essentially every other name in the standard library uses snake_case, and remaining consistent with that might be more important than respecting naming conventions from non-code literature. This was contentious, for various reasons: (1) it was late in the cycle to make this change; (2) a pure mechanical rename resulted in some conflicts with existing names, necessitating additional changes that went beyond case; and (3) some people liked the visual distinction that PascalCase conferred onto concept names. Nonetheless, EWG approved the change. (A small example of the result appears after this list.)
  • (Approved) On the non-uniform semantics of return-type-requirements. This proposal axes concept requirements of the form expression -> Type, because their semantics are not consistent with trailing return types which share the same syntax.
  • (Approved) Using unconstrained template template parameters with constrained templates. This paper allows unconstrained template template parameters to match constrained templates; without this change, it would have been impossible to write a template template parameter that matches any template regardless of constraints, which is an important use case.
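
To make the first two items concrete, here is a small non-normative example showing the renamed snake_case standard concepts and a compound requirement written with the surviving type-constraint form:

    #include <concepts>

    // Standard concepts are snake_case in C++20: std::integral, std::copyable, ...
    template <std::integral T>
    T twice(T x) {
        return x + x;
    }

    // After the cleanup, a compound requirement takes a concept (type-constraint)
    // after '->', not a bare type:
    template <typename T>
    concept addable = requires(T a, T b) {
        { a + b } -> std::convertible_to<T>;
    };

    static_assert(addable<int>);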

Contracts

Contracts were easily the most contentious and most heavily discussed topic of the week. In the weeks leading up to the meeting, there were probably 500+ emails on the committee mailing lists about them.

The crux of the problem is that contracts can have a range of associated behaviours / semantics: whether they are checked, what happens if they are checked and fail, whether the compiler can assume them to be true in various scenarios, etc. The different behaviours lend themselves to different use cases, different programming models, different domains, and different stages of the software lifecycle. Given the diversity of all of the above represented at the committee, people are having a really hard time agreeing on what set of possible behaviours the standard should allow for, what the defaults should be, and what mechanisms should be available to control the behaviour in case you want something other than the defaults.

A prominent source of disagreement is around the possibility for contracts to introduce undefined behaviour (UB) if we allow compilers to assume their truth, particularly in cases where they are not checked, or where control flow is allowed to continue past a contract failure.

Contracts were voted into the C++20 working draft in June 2018; the design that was voted in was referred to as the “status quo design” during this week’s discussions (since being in the working draft made it the status quo). In a nutshell, in the status quo design, the programmer could annotate contracts as having one of three levels — default, audit, or axiom — and the contract levels were mapped to behaviours using two global switches (controlled via an implementation-defined mechanism, such as a compiler flag): a “build level” and a “continuation mode”.
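
For readers who have not followed the feature, the status quo syntax looked roughly like this. I am reconstructing it from memory of the P0542-era design, and since Contracts were subsequently pulled, no shipping compiler accepts it:

    // Status quo design: attribute-like contracts with levels. An 'audit'
    // check is more expensive and only enabled at a higher build level.
    int parse_positive(const char* s)
      [[expects: s != nullptr]]          // default-level precondition
      [[expects audit: s[0] != '\0']]    // audit-level precondition
      [[ensures r: r >= 0]]              // postcondition on the return value
    {
      [[assert: s != nullptr]];          // assertion in the body
      // ...
      return 0;
    }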

The status quo design clearly had consensus at the time it was voted in, but since then that consensus had begun to increasingly break down, leading to a slew of Contracts-related proposals submitted for the previous meeting and this one.

I’ll summarize the discussions that took place this week, but as mentioned above, the final outcome was that Contracts was removed from C++20 and is now targeting C++23.

EWG discussed Contracts on two occasions during the week, Monday and Wednesday. On Monday, we started with a scoping discussion, where we went through the list of proposals, and decided which of them we were even willing to discuss. Note that, as per the committee’s schedule for C++20, the deadline for making design changes to a C++20 feature had passed, and EWG was only supposed to consider bugfixes to the existing design, though as always that’s a blurry line.

Anyways, the following proposals were rejected with minimal discussion on the basis of being a design change:

That left the following proposals to be considered. I list them here in the order of discussion. Please note that the “approvals” from this discussion were effectively overturned by the subsequent removal of Contracts from C++20.

  • (Rejected) What to do about contracts? This proposed two alternative minimal changes to the status quo design, with the primary aim of addressing the UB concerns, but neither had consensus. (Another paper was essentially a repeat of one of the alternatives and was not polled separately.)
  • (Rejected) Axioms should be assumable. This had a different aim (allowing the compiler to assume contracts in more cases, not less) and also did not have consensus.
  • (Approved) Minimizing contracts. This was a two-part proposal. The first part removed the three existing contract levels (default, audit, and axiom), as well as the build level and continuation mode, and made the way the behaviour of a contract checking statement is determined completely implementation-defined. The second part essentially layered on top the “Contracts that work” proposal, which introduces literal semantics: rather than annotating contracts with “levels” which are somehow mapped onto behaviours, contracts are annotated with their desired behaviour directly; if the programmer wants different behaviours in different build modes, they can arrange for that themselves, using e.g. macros that expand to different semantics in different build modes. EWG approved both parts, which was somewhat surprising because “Contracts that work” was previously voted as not even being in scope for discussion. I think the sentiment was that, while this is a design change, it has more consensus than the status quo, and so it’s worth trying to sneak it in even though we’re past the design change deadline. Notably, while this proposal did pass, it was far from unanimous, and the dissenting minority was very vocal about their opposition, which ultimately led to the topic being revisited and Contracts being axed from C++20 on Wednesday.
  • (Approved) The “default” contract build-level and continuation-mode should be implementation-defined. This was also approved, which is also somewhat surprising given that it was mooted by the previous proposal. Hey, we’re not always a completely rational bunch!

To sum up what happened on Monday: EWG made a design change to Contracts, and that design change had consensus among the people in the room at the time. Unfortunately, subsequent discussions with people not in the room, including heads of delegations from national standards bodies, made it clear that the design change was very unlikely to have the consensus of the committee at large in plenary session, largely for timing reasons (i.e. it being too late in the schedule to make such a nontrivial design change).

As people were unhappy with the status quo, but there wasn’t consensus for a design change either, that left removing contracts from C++20 and continuing to work on it in the C++23 cycle. A proposal to do so was drafted and discussed in EWG on Wednesday, with a larger group of people in attendance this time, and ultimately garnered consensus.

To help organize further work on Contracts in the C++23 timeframe, a new Study Group, SG 21 (Contracts) was formed, which would incubate and refine an updated proposal before it comes back to EWG. It’s too early to say what the shape of that proposal might be.

I personally like literal semantics, though I agree it probably wouldn’t have been prudent to make a significant change like that for C++20. I would welcome a future proposal from SG 21 that includes literal semantics.

Modules

A notable procedural development in the area of Modules is that the Modules Study Group (SG 2) was resurrected at the last meeting, and met during this meeting to look at all Modules-related proposals and make recommendations about them. EWG then looked at the ones SG 2 recommended for approval for C++20:

  • (Approved) Mitigating minor Modules maladies. EWG affirmed SG2’s recommendation to accept the first and third parts (concerning typedef names and default arguments, respectively) for C++20.
  • (Approved) Relaxing redefinition restrictions for re-exportation robustness. This proposal makes “include translation” (the automatic translation of some #include directives into module imports) optional, because it is problematic for some use cases, and solves the problems that motivated mandatory include translation in another way. (Aside: Richard Smith, the esteemed author of this paper and the previous one, clearly has too much time on his hands if he can come up with alliterations like this for paper titles. We should give him some more work to do. Perhaps we could ask him to be the editor of the C++ IS document? Oh, we already did that… Something else then. Finish implementing Concepts in clang perhaps? <wink>)
  • (Approved) Standard library header units for C++20. This allows users to consume C++ standard library headers (but not headers inherited from C like <cmath>) using import rather than #include (without imposing any requirements (yet) that their contents actually be modularized). It also reserves module names whose first component is std, or std followed by a number, for use by the standard library. (A short example follows this list.)
  • (Approved) Recognizing header unit imports requires full preprocessing. This tweaks the context sensitivity rules for the import keyword in such a way that tools can quickly scan source files and gather their module dependencies without having to do too much processing (and in particular without having to do a full run of the preprocessor).
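
As a short example of what the header units paper enables (the exact set of importable headers is up to the standard library, and build system support is still evolving):

    // C++20 header units: consume standard library headers via import.
    import <vector>;    // instead of #include <vector>
    import <numeric>;

    int sum_small() {
        std::vector<int> v{1, 2, 3};
        return std::accumulate(v.begin(), v.end(), 0);
    }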

There were also some Modules-related proposals that SG2 looked at and decided not to advance for C++20, but instead continue iterating for C++23:

  • (Further work) The inline keyword is not in line with the design of modules. This proposal will be revised before EWG looks at it.
  • (Further work) ABI isolation for member functions. EWG did look at this, towards the end of the week when it ran out of C++20 material. The idea here is that people like to define class methods inline for brevity (to avoid repeating the function header in an out-of-line definition), but the effects this has on linkage are sometimes undesirable. In module interfaces in particular, the recently adopted rule changes concerning internal linkage mean that users can run into hard-to-understand errors as a result of giving methods internal linkage. The proposal therefore aims to dissociate whether a method is defined inline or out of line, from semantic effects on linkage (which could still be achieved by using the inline keyword explicitly). Reactions were somewhat mixed, with some concerns about impacts on compile-time and runtime performance. Some felt that if we do this at all, we should do it in C++20, so our guidance to authors of modular code can be consistent from the get-go; while it seems to be too late to make this change in C++20 itself, the idea of a possible future C++20 Defect Report was raised.

Finally, EWG favourably reviewed the Tooling Study Group’s plans for a C++ Ecosystem Technical Report. One suggestion made was to give the TR a more narrowly scoped name to reflect its focus on Modules-related tooling (lest people be misled into expecting that it addresses every “C++ Ecosystem” concern).

Coroutines

EWG considered several proposed improvements to coroutines. All of them were rejected for C++20 due to being too big of a change at this late stage.

Coroutines will undoubtedly see improvements in the C++23 timeframe, including possibly having some of the above topics revisited, but of course we’ll now be limited to making changes that are backwards-compatible with the current design.

constexpr

  • (Approved) Enabling constexpr intrinsics by permitting unevaluated inline assembly in constexpr functions. With std::is_constant_evaluated(), you can already give an operation different implementations for runtime and compile-time evaluation. This proposal just allows the runtime implementations of such functions to use inline assembly.
  • (Approved) A tweak to constinit: EWG was asked to clarify the intended rules for non-initializing declarations. The Core Working Group’s recommendation — that a non-initializing declaration of a variable be permitted to contain constinit, and if it does, the initializing declaration must be constinit as well — was accepted. (A brief example of both constexpr-related changes follows this list.)
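
A brief illustration of both changes (the inline assembly shown is a GCC/Clang-style no-op, used purely as a placeholder):

    #include <type_traits>

    // Unevaluated inline assembly is now permitted in a constexpr function,
    // as long as it is only reached during runtime evaluation.
    constexpr int add(int a, int b) {
        if (std::is_constant_evaluated()) {
            return a + b;        // compile-time path: no asm
        }
        asm("" : "+r"(a));       // runtime-only path may use inline assembly
        return a + b;
    }

    static_assert(add(2, 3) == 5);

    // constinit on a non-initializing declaration is allowed, but the
    // initializing declaration must then be constinit as well.
    extern constinit int global_counter;
    constinit int global_counter = 0;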

Comparisons

  • (Approved) Spaceship needs a tune-up. This fixes some relatively minor fallout from recent spaceship-related bugfixes. (For context, a small example of defaulted operator<=> appears after this list.)
  • (Rejected) The spaceship needs to be grounded: pull spaceship from C++20. Concerns about the fact that we keep finding edge cases where we need to tweak spaceship’s behaviour, and that the rules have become rather complicated as a result of successive bug fixes, prompted this proposal to remove spaceship from C++20. EWG disagreed, feeling that the value this feature delivers for common use cases outweighs the downside of having complex rules to deal with uncommon edge cases.
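
For context, the feature under discussion is defaulted three-way comparison, which generates the full set of comparison operators from a single declaration:

    #include <compare>

    struct Point {
        int x;
        int y;
        // One defaulted operator<=> (plus the implicitly defaulted operator==)
        // provides ==, !=, <, <=, > and >=.
        auto operator<=>(const Point&) const = default;
    };

    static_assert(Point{1, 2} < Point{1, 3});
    static_assert(Point{1, 2} == Point{1, 2});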

Lightweight Exceptions

In one of the meeting’s more exciting developments, Herb Sutter’s lightweight exceptions proposal (affectionately dubbed “Herbceptions” in casual conversation) was finally discussed in EWG. I view this proposal as being particularly important, because it aims to heal the current fracture of the C++ user community into those who use exceptions and those who do not.

The proposal has four largely independent parts:

  • The first and arguably most interesting part (section 4.1 in the paper) provides a lightweight exception handling mechanism that avoids the overhead that today’s dynamic exceptions have, namely that of dynamic memory allocation and runtime type information (RTTI). The new mechanism is opt-in on a per-function basis, and designed to allow a codebase to transition incrementally from the old style of exceptions to the new one. (A rough sketch of the proposed syntax appears after this list.)
  • The next two parts have to do with using exceptions in fewer scenarios:
    • The second part (section 4.2) is about transitioning the standard library to handle logic errors not via exceptions like std::logic_error, but rather via a contract violation.
    • The third part (section 4.3) is about handling allocation failure via termination rather than an exception. Earlier versions of the proposal were more aggressive on this front, and aimed to make functions that today only throw exceptions related to allocation failure noexcept. However, that’s unlikely to fly, as there are good use cases for recovering from allocation failure, so more recent versions leave the choice of behaviour up to the allocator, and aim to make such functions conditionally noexcept.
  • The fourth part (section 4.5), made more realistic by the previous two, aims to make the remaining uses of exceptions more visible by allowing expressions that propagate exceptions to be annotated with the try keyword (there being prior art for this sort of thing in Swift and Rust). Of course, unlike Rust, use of the annotation would have to be optional for backwards compatibility, though one can envision enforcing its use locally in a codebase (or part of a codebase) via static analysis.
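
To give a flavour of parts one and four, here is roughly what the syntax sketched in the paper looks like. None of this is valid C++ today, the helper names are made up, and the details are likely to change across revisions:

    // Hypothetical syntax from the proposal, not standard C++.
    // 'throws' opts this function into lightweight, value-based exceptions:
    // no dynamic allocation or RTTI on the error path.
    int parse_positive(const char* s) throws {
        int value = to_int(s);              // to_int: assumed parsing helper
        if (value <= 0) {
            throw parse_error{};            // thrown by value, statically sized
        }
        return value;
    }

    void caller() {
        // Part four: the 'try' annotation makes propagation visible at the
        // call site, in the spirit of Swift's try and Rust's ? operator.
        int n = try parse_positive("42");
    }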

As can be expected from such an ambitious proposal, this prompted a lot of discussion in EWG. A brief summary of the outcome for each part:

  1. There was a lot of discussion both about how performant we can make the proposed lightweight exceptions, and about the ergonomics of the two mechanisms coexisting in the same program. (For the latter, a particular point of contention was that functions that opt into the new exceptions require a modified calling convention, which necessitates encoding the exception mode into the function type (for e.g. correct calling via function pointers), which fractures the type system). EWG cautiously encouraged further exploration, with the understanding that further experiments and especially implementation experience are needed to be able to provide more informed directional guidance.
  2. Will be discussed jointly by Evolution and Library Evolution in the future.
  3. EWG was somewhat skeptical about this one. In particular, the feeling in the room was that, while Library Evolution may allow writing allocators that don’t throw and library APIs may be revised to take advantage of this and make some functions conditionally noexcept, there was no consensus to move in the direction of making the default allocator non-throwing.
  4. EWG was not a fan of this one. The feeling was that the annotations would have limited utility unless they’re required, and we can’t realistically ever make them required.

I expect the proposal will return in revised form (and this will likely repeat for several iterations). The road towards achieving consensus on a significant change like this is a long one!

I’ll mention one interesting comment that was made during the proposal’s presentation: it was observed that since we need to revise the calling convention as part of this proposal anyways, perhaps we could take the opportunity to make other improvements to it as well, such as allowing small objects to be passed in registers, the lack of which is a pretty unfortunate performance problem today (certainly one we’ve run into at Mozilla multiple times). That seems intriguing.

Other new features

  • (Approved*) Changes to expansion statements. EWG previously approved a “for ...” construct which could be used to iterate at compile time over tuple-like objects and parameter packs. Prior to this meeting, it was discovered that the parameter pack formulation has an ambiguity problem. We couldn’t find a fix in time, so the support for parameter packs was dropped, leaving only tuple-like objects. However, “for ...” no longer seemed like an appropriate syntax if parameter packs are not supported, so the syntax was changed to “template for”. Unfortunately, while EWG approved “template for”, the Core Working Group ran out of time to review its wording, so (*) the feature didn’t make C++20. It will likely be revisited for C++23, possibly including ways to resolve the parameter pack issue. (An illustrative sketch of the syntax appears after this list.)
  • (Further work) Pattern matching. EWG looked at a revised version of this proposal which features a refined pattern syntax among other improvements. The review was generally favourable, and the proposal, which is targeting C++23, is getting close to the stage where standard wording can be written and implementation experience gathered.
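
For the curious, an expansion statement over a tuple-like object would have looked roughly like this (illustrative only; the feature missed C++20 and the syntax may still change for C++23):

    #include <iostream>
    #include <tuple>

    void print_all(const std::tuple<int, double, const char*>& t) {
        // Hypothetical 'template for': the body is instantiated once per
        // element, with 'elem' bound to each tuple element in turn.
        template for (const auto& elem : t) {
            std::cout << elem << '\n';
        }
    }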

Bug / Consistency Fixes

(Disclaimer: don’t read too much into the categorization here. One person’s bug fix is another’s feature.)

For C++20:

For C++23:

  • (Approved) Size feedback in operator new. This allows operator new to communicate to its caller how many bytes it actually allocated, which can sometimes be larger than the requested amount.
  • (Approved) A type trait to detect scoped enumerations. This adds a type trait to tell apart enum classes from plain enums, which is not necessarily possible to do in pure library code.
  • (Approved in part) Literal suffixes for size_t and ptrdiff_t. The suffixes uz for size_t and z for ssize_t were approved. The suffixes t for ptrdiff_t and ut for a corresponding unsigned type had no consensus. (A small example of the approved suffix, together with the scoped enumeration trait, follows this list.)
  • (Further work) Callsite based inlining hints: [[always_inline]] and [[never_inline]]. EWG was generally supportive, but requested the author provide additional motivation, and also clarify if they are orders to the compiler (usable in cases where inlining or not actually has a semantic effect), or just strong optimization hints.
  • (Further work) Defaultable default constructors and destructors for all unions. The motivation here is to allow having unions which are trivial but have nontrivial members. EWG felt this was a valid usecase, but the formulation in the paper erased important safeguards, and requested a different formulation.
  • (Further work) Name lookup should “find the first thing of that name”. EWG liked the proposed simplification, but requested that research be done to quantify the scope of potential breakage, as well as archaeology to better understand the motivation for the current rule (which no one in the room could recall.)
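
Two of the approved items in code form, as small illustrative examples (both eventually shipped in C++23):

    #include <cstddef>
    #include <type_traits>

    enum class Colour { red, green };
    enum Legacy { a, b };

    // The new trait distinguishes scoped from unscoped enumerations.
    static_assert(std::is_scoped_enum_v<Colour>);
    static_assert(!std::is_scoped_enum_v<Legacy>);

    // The uz suffix yields a std::size_t literal, avoiding signed/unsigned
    // mismatches in index loops.
    int sum(const int* data, std::size_t n) {
        int total = 0;
        for (auto i = 0uz; i < n; ++i) {
            total += data[i];
        }
        return total;
    }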

Proposals Not Discussed

As usual, there were papers EWG did not get to discussing at this meeting; see the committee website for a complete list. At the next meeting, after addressing any national body comments on the C++20 CD which are Evolutionary in nature, EWG expects to spend the majority of the meeting reviewing C++23-track proposals.

Evolution Working Group Incubator

Evolution Incubator, which acts as a filter for new proposals incoming to EWG, met for two days, and reviewed numerous proposals, approving the following ones to advance to EWG at the next meeting:

Other Working Groups

Library Groups

Having sat in the Evolution group, I haven’t been able to follow the Library groups in any amount of detail, but I’ll call out some of the library proposals that have gained design approval at this meeting:

Note that the above is all C++23 material; I listed library proposals which made C++20 at this meeting above.

There are also efforts in place to consolidate general design guidance that the Library Evolution group would like to apply to all proposals into a policy paper.

While still at the Incubator stage, I’d like to call attention to web_view, a proposal for embedding a view powered by a web browser engine into a C++ application, for the purpose of allowing C++ applications to leverage the wealth of web technologies for purposes such as graphical output, interaction, and so on. As mentioned in previous reports, I gathered feedback about this proposal from Mozilla engineers, and conveyed this feedback (which was a general discouragement for adding this type of facility to C++) both at previous meetings and this one. However, this was very much a minority view, and as a whole the groups which looked at this proposal (which included SG13 (I/O) and Library Evolution Incubator) largely viewed it favourably, as a promising way of allowing C++ applications to do things like graphical output without having to standardize a graphics API ourselves, as previously attempted.

Study Groups

SG 1 (Concurrency)

SG 1 had a busy week, approving numerous proposals that made it into C++20 (listed above), as well as reviewing material targeted for the Concurrency TS v2 (whose outline I gave above).

Another notable topic for SG 1 was Executors, where a consensus design was reviewed and approved. Error handling remains a contentious issue; out of two different proposed mechanics, the first one seems to have the greater consensus.

Progress was also made on memory model issues, aided by the presence of several memory model experts who are not regular attendees. It seems the group may have an approach for resolving the “out of thin air” (OOTA) problem (see relevant papers); according to SG 1 chair Olivier Giroux, this is the most optimistic the group has been about the OOTA problem in ~20 years!

SG 7 (Compile-Time Programming)

The Compile-Time Programming Study Group (SG 7) met for half a day to discuss two main topics.

First on the agenda was introspection. As mentioned in previous reports, the committee put out a Reflection TS containing compile-time introspection facilities, but has since agreed that in the C++ IS, we’d like facilities with comparable expressive power but a different formulation (constexpr value-based metaprogramming rather than template metaprogramming). Up until recently, the nature of the new formulation was in dispute, with some favouring a monotype approach and others a richer type hierarchy. I’m pleased to report that at this meeting, a compromise approach was presented and favourably reviewed. With this newfound consensus, SG 7 is optimistic about being able to get these facilities into C++23. The compromise proposal does require a new language feature, parameter constraints, which will be presented to EWG at the next meeting.

(SG 7 also looked at a paper asking to revisit some of the previous design choices made regarding parameter names and access control in reflection. The group reaffirmed its previous decisions in these areas.)

The second main topic was reification, which can be thought of as the “next generation” of compile-time programming facilities, where you can not only introspect code at compile time, but perform processing on its representation and generate (“reify”) new code. A popular proposal in this area is Herb Sutter’s metaclasses, which allow you to “decorate” classes with metaprograms that transform the class definition in interesting ways. Metaclasses is intended to be built on a suite of underlying facilities such as code injection; there is now a concrete proposal for what those facilities could look like, and how metaclasses could be built on top of them. SG 7 looked at an overview of this proposal, although there wasn’t time for an in-depth design review at this stage.

SG 15 (Tooling)

The Tooling Study Group (SG 15) met for a full day, focusing on issues related to tooling around modules, and in particular proposals targeting the C++ Modules Ecosystem Technical Report mentioned above.

I couldn’t be in the room for this session as it ran concurrently with Reflection and then Herbceptions in EWG, but my understanding is that the main outcomes were:

  • The Ecosystem TR should contain guidelines for module naming conventions. There was no consensus to include conventions for other things such as project structure, file names, or namespace names.
  • The Ecosystem TR should recommend that implementations provide a way to implicitly build modules (that is, to be able to build them even in the absence of separate metadata specifying what modules are to be built and how), without requiring a particular project layout or file naming scheme. It was observed that implementing this in a performant way will likely require fast dependency scanning tools to extract module dependencies from source files. Such tools are actively being worked on (see e.g. clang-scan-deps), and the committee has made efforts to make them tractable (see e.g. the tweak to the context-sensitivity rules for import which EWG approved this week).

A proposal for a file format for describing dependencies of source files was also reviewed, and will continue to be iterated on.

One observation that was made during informal discussion was that SG 15’s recent focus on modules-related tooling has meant less time available for other topics such as package management. It remains to be seen if this is a temporary state of affairs, or if we could use two different study groups working in parallel.

Other Study Groups

Other Study Groups that met at this meeting include:

  • SG 2 (Modules), covered in the Modules section above.
  • SG 6 (Numerics) reviewed a dozen or so proposals, related to topics such as fixed-point numbers, type interactions, limits and overflow, rational numbers, and extended floating-point types. There was also a joint session with SG 14 (Games & Low Latency) and SG 19 (Machine Learning) to discuss linear algebra libraries and multi-dimensional data structures.
  • SG 12 (Undefined and Unspecified Behaviour). Topics discussed include pointer provenance, the C++ memory object model, and various other low-level topics. There was also the usual joint session with WG23 – Software Vulnerabilities; there is now a document describing the two groups’ relationship.
  • SG 13 (I/O), which reviewed proposals related to audio (proposal, feedback paper), web_view, 2D graphics (which continues to be iterated on in the hopes of a revised version gaining consensus), as well as few proposals related to callbacks which are relevant to the design of I/O facilities.
  • SG 14 (Games & Low Latency), whose focus at this meeting was on linear algebra proposals considered in joint session with SG 19.
  • SG 16 (Unicode). Topics discussed include guidelines for where we want to impose requirements regarding character encodings, and filenames and the complexities they involve. The group also provided consults for relevant parts of other groups’ papers.
  • SG 19 (Machine Learning). In addition to linear algebra, the group considered proposals for adding statistical mathematical functions to C++ (simple stuff like mean, median, and standard deviation — somewhat surprising we don’t have them already!), as well as graph data structures.
  • SG 20 (Education), whose focus was on iterating on a document setting out proposed educational guidelines.

In addition, as mentioned, a new Contracts Study Group (SG 21) was formed at this meeting; I expect it will have its inaugural meeting in Belfast.

Most Study Groups hold regular teleconferences in between meetings, which is a great low-barrier-to-entry way to get involved. Check out their mailing lists here or here for telecon scheduling information.

Next Meeting

The next meeting of the Committee will be in Belfast, Northern Ireland, the week of November 4th, 2019.

Conclusion

My highlights for this meeting included:

  • Keeping the C++ release train schedule on track by approving the C++20 Committee Draft
  • Forming a Contracts Study Group to craft a high-quality, consensus-bearing Contracts design in C++23
  • Approving constexpr dynamic allocation, including constexpr vector and string for C++20
  • The standard library gaining a modern text formatting facility for C++20
  • Broaching the topic of bringing the -fno-exceptions segment of C++ users back into the fold
  • Breaking record attendance levels as we continue to gain representation of different parts of the community on the committee

Due to the sheer number of proposals, there is a lot I didn’t cover in this post; if you’re curious about a specific proposal that I didn’t mention, please feel free to ask about it in the comments.

Other Trip Reports

Other trip reports about this meeting include Herb Sutter’s, the collaborative Reddit trip report, Timur Doumler’s and Guy Davidson’s — I encourage you to check them out as well!

Mozilla VR Blog: Firefox Reality for Oculus Quest

We are excited to announce that Firefox Reality is now available for the Oculus Quest!

Following our releases for other 6DoF headsets including the HTC Vive Focus Plus and Lenovo Mirage, we are delighted to bring the Firefox Reality VR web browsing experience to Oculus' newest headset.

Whether you’re watching immersive video or meeting up with friends in Mozilla Hubs, Firefox Reality takes advantage of the Oculus Quest’s boost in performance and capabilities to deliver the best VR web browsing experience. Try the new featured content on the FxR home page or build your own to see what you can do in the next generation of standalone virtual reality headsets.

Enhanced Tracking Protection Blocks Sites from Tracking You
To protect our users from the pervasive tracking and collection of personal data by ad networks and tech companies, Firefox Reality has Enhanced Tracking Protection enabled by default. We strongly believe privacy shouldn’t be relegated to optional settings. As an added bonus, these protections work in the background and actually increase the speed of the browser.

Firefox Reality is available in 10 different languages, including Japanese, Korean, Simplified Chinese and Traditional Chinese, with more on the way. You can also use your voice to search the web instead of typing, making it faster and easier to get where you want to go.

Stay tuned in the coming months as we roll out support for the nearly VR-ready WebXR specification, multi-window browsing, bookmarks sync, additional language support and other exciting new features.

Like all Firefox browser products, Firefox Reality is available for free in the Oculus Quest store.

For more information: https://mixedreality.mozilla.org/firefox-reality/

Mozilla Open Innovation Team: Mozilla Voice Challenge: Defining The Voice Technology Space

We are excited to announce the launch of the “Mozilla Voice Challenge,” a crowdsourcing competition sponsored by Mozilla and posted on the HeroX platform. The goal of the competition is to better define the voice technology space by creating a “stack” of open source technologies to support the development of new voice-enabled products.

https://www.herox.com/voice

The Power of the Voice

Voice-enabled products are in rapid ascent in both consumer and enterprise markets. The expectations are that in the near future voice interaction will become a key interface for people’s internet-connected lives.

Unfortunately, the current voice product market is heavily dominated by a few giant tech companies. This is unhealthy as it stifles competition and prevents entry of smaller companies with new and innovative products. Mozilla wants to change that. We want to help open up the ecosystem. So far there have been two major components in Mozilla’s open source voice tech efforts outside the Firefox browser:

(1) To solve for the lack of available training data for machine-learning algorithms that can power new voice-enabled applications, we launched the Common Voice project. The current release already represents the largest public domain transcribed voice dataset, with more than 2,400 hours of voice data and 28 languages represented.

(2) In addition to the data collection, Mozilla’s Machine Learning Group has applied sophisticated machine learning techniques and a variety of innovations to build an open-source speech-to-text engine that approaches human accuracy, as well as a text-to-speech engine. Together with the growing Common Voice dataset Mozilla believes this technology can and will enable a wave of innovative products and services, and that it should be available to everyone.

And this is exactly where the new Mozilla Voice Challenge fits in: its objective is to better define the voice technology space by creating a “stack” of open source technologies to support the development of new voice-enabled products.

Stacking the Odds

For the purpose of this competition, we define voice-enabled technologies as technologies that use voice as an interface, allowing people to interact with various connected devices through verbal means — both when speaking and listening.

We envision that some elements of this stack would be the following technologies:

  • Speech-to-text (STT)
  • Text-to-speech (TTS)
  • Natural Language Processing (NLP)
  • Voice-signal processing
  • Keyword spotting
  • Keyword alignment
  • Intent parsing
  • Language parsing: stemming, entity recognition, dialog management, and summation.

We want to improve this list by adding more relevant technologies and also identify any “gaps” in the stack where quality open source projects are not available (see the Challenge description for more details). We’ll then place the updated list in a public repository for open access — and to achieve this, all proposed technologies in the stack need to be open source licensed.

How to Participate

The competition was posted to the HeroX platform. The competition will run until August 20, 2019 and the submitted proposals will be evaluated by the members of Mozilla’s Voice team. Up to $6,000 in prizes will be awarded to the best proposals.

The challenge is open to everyone (except for Mozilla employees and their families), and we especially encourage members of Mozilla’s Common Voice community to take part in it.


Mozilla Voice Challenge: Defining The Voice Technology Space was originally published in Mozilla Open Innovation on Medium, where people are continuing the conversation by highlighting and responding to this story.

The Firefox FrontierEight ways to reduce your digital carbon footprint

Whether it’s from doing things like burning fossil fuels through driving, cranking up the furnace or grilling a steak, we are all responsible for releasing carbon dioxide into the atmosphere, … Read more

The post Eight ways to reduce your digital carbon footprint appeared first on The Firefox Frontier.

Hacks.Mozilla.OrgWebThings Gateway for Wireless Routers

Wireless Routers

In April we announced that the Mozilla IoT team had been working on evolving WebThings Gateway into a full software distribution for consumer wireless routers.

Today, with the 0.9 release, we’re happy to announce the availability of the first experimental builds for our first target router hardware, the Turris Omnia.

Turris Omnia wireless router. Source: turris.cz

These builds are based on the open source OpenWrt operating system. They feature a new first-time setup experience which enables you to configure the gateway as a router and Wi-Fi access point itself, rather than connecting to an existing Wi-Fi network.

Router first time setup

So far, these experimental builds only offer extremely basic router configuration and are not ready to replace your existing wireless router. This is just our first step along the path to creating a full software distribution for wireless routers.

Router network settings

We’re planning to add support for other wireless routers and router developer boards in the near future. We want to ensure that the user community can access a range of affordable developer hardware.

Raspberry Pi 4

As well as these new OpenWrt builds for routers, we will continue to support the existing Raspbian-based builds for the Raspberry Pi. In fact, the 0.9 release is also the first version of WebThings Gateway to support the new Raspberry Pi 4. You can now find a handy download link on the Raspberry Pi website.

Raspberry Pi 4 Model B. Source: raspberrypi.org

Notifier Add-ons

Another feature landing in the 0.9 release is a new type of add-on called notifier add-ons.

Notifier Add-ons

In previous versions of the gateway, the only way you could be notified of events was via browser push notifications. Unfortunately, this is not supported by all browsers, nor is it always the most convenient notification mechanism for users.

A workaround was available by creating add-ons with basic “send notification” actions to implement different types of notifications. However, these required the user to add “things” to their gateway which didn’t represent actual devices, and the actions had to be hard-coded in the add-on’s configuration.

To remedy this, we have introduced notifier add-ons. Essentially, a notifier creates a set of “outlets”, each of which can be used as an output for a rule. For example, you can now set up a rule to send you an SMS or an email when motion is detected in your home. Notifiers can be configured with a title, a message and a priority level. This allows users to be reached where and how they want, with a message and priority that makes sense to them.
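
To make the outlet idea concrete, here is a purely hypothetical sketch in Python: the class and method names are invented for illustration and are not the WebThings gateway add-on API, which defines its own notifier and outlet interfaces.

# Hypothetical illustration of the notifier/outlet concept described above.
# The names here are made up; this is not the WebThings gateway add-on API.
import smtplib
from email.message import EmailMessage

class EmailOutlet:
    """An 'outlet' that a rule can use as its output."""

    def __init__(self, smtp_host, from_addr, to_addr):
        self.smtp_host = smtp_host
        self.from_addr = from_addr
        self.to_addr = to_addr

    def notify(self, title, message, priority):
        # A rule supplies the title, message and priority level.
        msg = EmailMessage()
        msg["Subject"] = "[%s] %s" % (priority, title)
        msg["From"] = self.from_addr
        msg["To"] = self.to_addr
        msg.set_content(message)
        with smtplib.SMTP(self.smtp_host) as smtp:
            smtp.send_message(msg)

# A rule like "when motion is detected, email me" would end by calling:
# outlet.notify("Motion detected", "Motion in the hallway", "high")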

Rule with email notification

API Changes

For developers, the 0.9 release of the WebThings Gateway and the 0.12 release of the WebThings Framework libraries also bring some small changes to Thing Descriptions, which bring us more in line with the latest W3C drafts.

One small difference to be aware of is that “name” is now called “title”. There are also some experimental new base, security and securityDefinitions properties of the Thing Descriptions exposed by the gateway, which are still under active discussion at the W3C.

Give it a try!

We invite you to download the new WebThings Gateway 0.9 and continue to build your own web things with the latest WebThings Framework libraries. If you already have WebThings Gateway installed on a Raspberry Pi, it should update itself automatically.

As always, we welcome your feedback on Discourse. Please submit issues and pull requests on GitHub.

The post WebThings Gateway for Wireless Routers appeared first on Mozilla Hacks - the Web developer blog.

Mozilla Addons BlogUpcoming deprecations in Firefox 70

Several planned code deprecations for Firefox 70, currently available on the Nightly pre-release channel, may impact extension and theme developers. Firefox 70 will be released on October 22, 2019.

Aliased theme properties to be removed

In Firefox 65, we started deprecating the aliased theme properties accentcolor, textcolor, and headerURL. These properties will be removed in Firefox 70.

Themes listed on addons.mozilla.org (AMO) will be automatically updated to use supported properties. Most themes were updated back in April, but new themes have been created using the deprecated properties. If your theme is not listed on AMO, or if you are the developer of a dynamic theme, please update your theme’s manifest.json to use the supported properties.

  • For accentcolor, please use frame
  • For headerURL, please use theme_frame
  • For textcolor, please use tab_background_text

JavaScript deprecations

In Firefox 70, the non-standard, Firefox-specific Array generic methods introduced with JavaScript 1.6 will be considered deprecated and scheduled for removal in the near future. For more information about which generics will be removed and suggested alternatives, please see the Firefox Site Compatibility blog.

The Site Compatibility working group also intends to remove the non-standard prototype toSource and uneval by the end of 2019.

The post Upcoming deprecations in Firefox 70 appeared first on Mozilla Add-ons Blog.

The Mozilla BlogEmpowering voters to combat election manipulation

For the last year, Mozilla has been looking for ways to empower voters in light of the shifts in election dynamics caused by the internet and online advertising. This work included our participation in the EU’s Code of Practice on Disinformation to push for change in the industry, which led to the launch of the Firefox EU Elections toolkit that gave people information on the voting process, on how tracking and opaque online advertising influence their voting behavior, and on how they can easily protect themselves.

We also had hoped to lend our technical expertise to create an analysis dashboard that would help researchers and journalists monitor the elections. The dashboard would gather data on the political ads running on various platforms and provide a concise “behind the scenes” look at how these ads were shared and targeted.

But to achieve this we needed the platforms to follow through on their own commitment to make the data available through their Ad Archive APIs.

Here’s what happened.

Platforms didn’t supply sufficient data

On March 29, Facebook began releasing its political ad data through a publicly available API. We quickly concluded the API was inadequate.

  • Targeting information was not available.
  • Bulk data access was not offered.
  • Data wasn’t tagged properly.
  • Identical searches would produce wildly differing results.

The state of the API made it nearly impossible to extract the data needed to populate the dashboard we were hoping to create to make this information more accessible.

And although Google didn’t provide the targeting criteria advertisers use on the platform, it did provide access to the data in a format that allowed for real research and analysis.

That was not the case for Facebook.

So then what?

It took the entire month of April to figure out ways to work within, or rather around, the API to collect any information about the political ads running on the Facebook platform.

After several weeks, hundreds of hours, and thousands of keystrokes, the Mozilla team created the EU Ad Transparency Reports. The reports contained aggregated statistics on spending and impressions about political ads on Facebook, Instagram, Google, and YouTube.

While this was not the dynamic tool we had envisioned at the beginning of this journey, we hoped it would help.

But despite our best efforts to help Facebook debug their system, the API broke again from May 18 through May 26, making it impossible to use the API and generate any reports in the last days leading up to the elections.

All of this was documented through dozens of bug reports provided to Facebook, identifying ways the API needed to be fixed.

A Roadmap for Facebook

Ultimately our contribution to this effort ended up looking very different than what we had first set out to do. Instead of a tool, we have detailed documentation of every time the API failed and every roadblock encountered and a series of tips and tricks to help others use the API.

This documentation provides Facebook a clear roadmap to make the necessary improvements for a functioning and useful API before the next election takes place. The EU elections have passed, but the need for political messaging transparency has not.

In fact, important elections are expected to take place almost every month until the end of the year and Facebook has recently rolled this tool out globally.

We need Facebook to be better. We need an API that actually helps – not hinders – researchers and journalists uncover who is buying ads, the way these ads are being targeted and to whom they’re being served. It’s this important work that informs the public and policymakers about the nature and consequences of misinformation.

This is too important to get wrong. That is why we plan to continue our work on this matter and continue to work with those pushing to shine a light on how online advertising impacts elections.

The post Empowering voters to combat election manipulation  appeared first on The Mozilla Blog.

Nicholas NethercoteThe Rust compiler is still getting faster

A key theme of the Rust 2019 roadmap is maturity. This covers a variety of topics, but a crucial one is compile times. For example, the roadmap itself has the following as the first main theme for the compiler team.

Improving “core strength” by lowering raw compilation times and also generating better code (which in turn can help with compilation times)

The roadmap explainer post has a “polish” section that has the following as the first example.

Compile times and IDE support

I previously wrote about one period of improvement in Rust compiler speed. How are things going in 2019?

Speed improvements in 2019

The following image shows changes in time taken to compile the standard benchmarks used on the Rust performance tracker. It compares the compiler from 2019-01-01 with the compiler from 2019-07-24 (the most recent data at the time of writing).

Table showing Rust compiler speedups between 2019-01-01 and 2019-07-24

These are the wall-time results for 29 benchmarks. There are three different build kinds measured for each one: a debug build, an optimized build, and a check build (which detects errors but doesn’t generate code). For each build kind there is a mix of incremental and non-incremental runs done. The numbers for the individual runs aren’t shown here but you can see them if you view the results directly on the site and click around. The “avg” column shows the average change for those runs. The “min” and “max” columns show the minimum and maximum changes among those same runs.

The table has 261 numbers. The thing to take away is that 258 of them are negative, representing a decrease in compile time. Most of the “avg” values are in the range -20% to -40%. The “min” values (representing the best time reduction for each build kind) range from -12.4% to -51.3%. Even the “max” values (representing the worst time reduction for each build kind) are mostly better than -10%. These are pleasing results.

Speed improvements since late 2017

What happens if we look further back? The image below compares the compiler from 2017-11-12 (the earliest date for which I could get data from the site) against the compiler from 2019-07-24, a period of just over 20 months.

Table showing Rust compiler speedups between 2017-11-12 and 2019-07-24

These are the wall-time results for only 18 benchmarks, because the benchmark suite was smaller in late 2017. Check builds were also not measured then. You can view the results directly on the site.

My initial thought from looking at the “avg” results was “the compiler is twice as fast” but closer inspection shows that’s not quite true; the average “avg” result is 42%. (I know that averaging averages is statistically dubious, I did it just to get a rough feel.) Overall, the results are significantly better than those for 2019: the “avg” values range from -19.9% to -61.3%, and the “min” values are mostly better than -60%.

(And don’t forget that time reduction percentages can be misleading when they get large. A 50% time reduction means the compiler is twice as fast; a 75% time reduction means the compiler is four times as fast; a 90% time reduction means the compiler is ten times as fast.)
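
In formula form, with r as the fractional time reduction, the relationship behind those examples is:

\[ \text{speedup} = \frac{1}{1 - r} \]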

All this is good news. The Rust compiler has long had a reputation for being slow. I still wouldn’t describe it as fast, but it is clearly a lot faster than it used to be. Many thanks to all those who made this happen, and I would be happy to hear from anyone who wants to help continue the trend!

Thanks to theZcuber for a Reddit post that was the starting point for this article.

This Week In RustThis Week in Rust 296

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's crate is abscissa, a security-oriented Rust application framework. Thanks to Tony Arcieri for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

230 pull requests were merged in the last week

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs
Tracking Issues & PRs

New RFCs

Upcoming Events

Africa
Asia Pacific
Europe
North America
South America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

Roses are red, Rust-lang is fine, cannot borrow `i` as mutable more than once at a time

Joseph Lyons on twitter

Thanks to Jelte Fennema for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nasa42, llogiq, and Flavsditz.

Discuss on r/rust.

Daniel Stenbergcurl goez parallel

The first curl release ever saw the light of day on March 20, 1998 and already then, curl could transfer any number of URLs given on the command line. It would iterate over the entire list and transfer them one by one.

Not even 22 years later, we introduce the ability for the curl command line tool to do parallel transfers! Instead of doing all the provided URLs one by one and only start the next one once the previous has been completed, curl can now be told to do all of them, or at least many of them, at the same time!

This has the potential to drastically decrease the amount of time it takes to complete an operation that involves multiple URLs.

--parallel / -Z

Doing transfers concurrently instead of serially of course changes behavior and thus this is not something that will be done by default. You as the user need to explicitly ask for this to be done, and you do this with the new --parallel option, which also has a short-hand single-letter version: -Z (that’s the upper case letter Z).

Limited parallelism

To avoid totally overloading the servers when many URLs are provided, or having curl run out of sockets it can keep open at the same time, it limits the parallelism. By default curl will only try up to 50 transfers concurrently, so if more transfers than that are given to curl, the rest will wait to get started until one of the first transfers is completed. The new --parallel-max command line option can be used to change the concurrency limit.

Progress meter

The progress meter is different in this mode: the one that shows up for parallel transfers is a single output covering all transfers.

Transfer results

When doing many simultaneous transfers, how do you figure out how they all did individually, like from your script? That’s still to be figured out and implemented.

No same file splitting

This functionality makes curl do URLs in parallel. It will still not download the same URL using multiple parallel transfers the way some other tools do. That might be something to implement and offer in a future fine tuning of this feature.

libcurl already does this fine

This is a new command line feature that uses the fact that libcurl can already do this just fine. Thanks to libcurl being a powerful transfer library that curl uses, enabling this feature was “only” a matter of making sure libcurl was used in a different way than before. This parallel change is entirely in the command line tool code.
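
For a rough feel of what the multi interface does, here is a small sketch using the pycurl Python bindings; the curl tool itself drives libcurl's C API directly, and the URLs and file names below are placeholders.

# Sketch of driving several transfers at once through libcurl's multi
# interface via pycurl. The URLs and output file names are placeholders.
import pycurl

urls = ["https://example.com/a", "https://example.com/b"]

multi = pycurl.CurlMulti()
handles = []
for i, url in enumerate(urls):
    h = pycurl.Curl()
    h.setopt(pycurl.URL, url)
    h.fp = open("download-%d" % i, "wb")
    h.setopt(pycurl.WRITEDATA, h.fp)
    multi.add_handle(h)
    handles.append(h)

# Keep calling perform() until every transfer has completed.
num_active = len(handles)
while num_active:
    ret, num_active = multi.perform()
    if ret != pycurl.E_CALL_MULTI_PERFORM:
        multi.select(1.0)

for h in handles:
    multi.remove_handle(h)
    h.fp.close()
    h.close()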

Ship

This change has landed in curl’s git repository already (since b8894085000) and is scheduled to ship in curl 7.66.0 on September 11, 2019.

I hope and expect us to keep improving parallel transfers further and we welcome all the help we can get!

QMOFirefox Nightly 70 Testday Results

Hello Mozillians!

As you may already know, last Friday – July 19th – we held a new Testday event for Firefox Nightly 70.

Thank you all for helping us make Mozilla a better place: gaby2300, maria plachkova and Fernando noelonassis.

Results:

– several test cases executed for Fission. 

– 1 bug verified: 1564267

Thanks for another successful testday 🙂

We hope to see you all in our next events, all the details will be posted on QMO!

Cameron KaiserClean out your fonts, people

Someone forwarded me a MacRumors post pointing out that a couple of the (useless) telemetry options in TenFourFox had managed to escape my notice and should be disabled. This is true and I'll be flagging them off in FPR16. However, another source of slowdowns popped up recently and while I think it's been pointed out it bears repeating.

On startup, and to a lesser extent when browsing, TenFourFox (and Firefox) enumerates the fonts you have installed on your Power Mac so that sites requesting them can use locally available fonts and not download them unnecessarily. The reason for periodically rechecking is that people can, and do, move fonts around and it would be bad if TenFourFox had stale font information particularly for commonly requested ones. To speed this up, I actually added a TenFourFox-specific font directory cache so that subsequent enumerations are quicker. However, the heuristic for determining when fonts should be rescanned is imperfect and when in doubt I always err towards a fresh scan. That means a certain amount of work is unavoidable under normal circumstances.

Thus, the number of fonts you have currently installed directly affects TenFourFox's performance, and TenFourFox is definitely not the only application that needs to know what fonts are installed. If you have a large (as in several hundred) number of font files and particularly if you are not using an SSD, you should strongly consider thinning them out or using some sort of font management system. Even simply disabling the fonts in Font Book will help, because under the hood this will move the font to a disabled location, and TenFourFox and other applications will then not have to track it further.

How many is too many? On my quad G5, I have about 800 font files on my Samsung SSD. This takes about 3-4 seconds to initially populate the cache and then less than a second on subsequent enumerations. However, on a uniprocessor system and especially on systems without an SSD, I would strongly advise getting that number down below one hundred. Leave the fonts in /System/Library/Fonts alone, but on my vanilla Tiger Sawtooth G4 server, /Library/Fonts has just 87 files. Use Font Book to enable fonts later if you need them for stuff you're working on, or, if you know those fonts aren't ever being used, consider just deleting them entirely.
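
If you want a quick count before deciding what to prune, a few lines of Python will do it; the paths are the standard font folders mentioned above, and the script is just an illustration, not part of TenFourFox.

# Count font files in the usual font folders so you can see how close you
# are to the ~100-file guideline suggested above. Illustrative only.
import os

FONT_DIRS = [
    "/System/Library/Fonts",              # leave these alone
    "/Library/Fonts",
    os.path.expanduser("~/Library/Fonts"),
]

for d in FONT_DIRS:
    try:
        count = len([f for f in os.listdir(d) if not f.startswith(".")])
    except OSError:
        continue
    print("%4d font files in %s" % (count, d))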

Due to a work crunch I will not be doing much work on FPR16 until August. However, I will be at the Vintage Computer Festival West again August 3 and 4 at the Computer History Museum in Mountain View. I've met a few readers of this blog in past years, and hopefully getting to play with various PowerPC (non-Power Mac), SPARC and PA-RISC laptops and portable workstations will tempt the rest of you. Come by, say hi, and play around a bit with the other great exhibits that aren't as cool as mine.

Patrick ClokeCelery without a Results Backend

The Celery send_task method allows you to invoke a task by name without importing it. [1] There is an undocumented [2] caveat to using send_task: it doesn’t have access to the configuration of the task (from when the task was created using the @task decorator).

Much of this configuration …
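
As a hedged illustration of the send_task call described above: the app name, broker URL, task name and arguments here are hypothetical, and only the send_task call itself comes from the post.

# Invoke a task by name with send_task, without importing the task function.
from celery import Celery

app = Celery("myapp", broker="amqp://localhost")

# No `from myapp.tasks import add` needed, so none of the configuration from
# the @task decorator is available to this call.
result = app.send_task("myapp.tasks.add", args=[2, 3])
print(result.id)  # without a result backend, treat this as fire-and-forget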

Joel MaherRecent fixes to reduce backlog on Android phones

Last week it seemed that all our limited resource machines were perpetually backlogged. I wrote yesterday to provide insight into what we run and some of our limitations. This post will be discussing the Android phones backlog last week specifically.

The Android phones are hosted at Bitbar and we split them into pools (battery testing, unit testing, perf testing) with perf testing being the majority of the devices.

There were 6 fixes made which resulted in significant wins:

  1. Recovered offline devices at Bitbar
  2. Restarting host machines to fix intermittent connection issues at Bitbar
  3. Update Taskcluster generic-worker startup script to consume superseded jobs
  4. Rewrite the scheduling script as multi-threaded and utilize bitbar APIs more efficiently
  5. Turned off duplicate jobs that were on by accident last month
  6. Removed old taskcluster-worker devices

On top of this there are 3 future wins that could be done to help future proof this:

  1. upgrade android phones from 8.0 -> 9.0 for more stability
  2. Enable power testing on generic usb hubs rather than special hubs which require dedicated devices.
  3. merge all separate pools together to maximize device utilization

With the fixes in place, we are able to keep up with normal load and expect that future spikes in jobs will be shorter lived, instead of lasting an entire week.

Recovered offline devices at Bitbar
Every day 2-5 devices are offline for some period of time. The Bitbar team finds some on their own and resets the devices, and sometimes we notice them and ask for the devices to be reset. In many cases the devices are hung or have trouble on a reboot (motivation for upgrading to 9.0). I will add to this that the week prior things started getting sideways and it was a holiday week for many, so fewer people were watching things and more devices ended up in various states.

In total we have 40 pixel2 devices in the perf pool (and 37 Motorola G5 devices as well) and 60 pixel2 devices when including the unittest and battery pools. We found that 19 devices were not accepting jobs and needed attention Monday July 8th. For planning purposes it is assumed that 10% of the devices will be offline, in this case we had 1/3 of our devices offline and we were doing merge day with a lot of big pushes running all the jobs.

Restarting host machines to fix intermittent connection issues at Bitbar
At Bitbar we have a host machine with 4 or more docker containers running and each docker container runs Linux with the Taskcluster generic-worker and the tools to run test jobs. Each docker container is also mapped directly to a phone. The host machines are rarely rebooted and maintained, and we noticed a few instances where the docker containers had trouble connecting to the network.  A fix for this was to update the kernel and schedule periodic reboots.

Update Taskcluster generic-worker startup script
When a job is superseded, we shut down the Taskcluster generic-worker and the docker container and clean up. Previously the worker would terminate the job and docker container and then wait for another job to show up (often a 5-20 minute cycle). With the changes made, the Taskcluster generic-worker now just restarts (not the docker container) and quickly picks up the next job.

Rewrite the scheduling script as multi-threaded
This was a big area of improvement. As our jobs increased in volume and had a wider range of runtimes, our tool for scheduling was iterating through the queue and devices and calling the APIs at Bitbar to spin up a worker and hand off a task. This is something that takes a few seconds per job or device, and with 100 devices it could take 10+ minutes to come around and schedule a new job on a device. With changes made last week (Bug 1563377) we now have jobs starting quickly (<10 seconds), which greatly increases our device utilization.
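
A minimal sketch of the idea, assuming a hypothetical start_job() wrapper around the Bitbar API (this is not the real scheduling script): hand each free device its next queued job from a thread pool instead of looping over devices one at a time.

# Sketch only: the start_job() call and the job/device objects stand in for
# the real Bitbar API, where each call takes a few seconds.
from concurrent.futures import ThreadPoolExecutor

def start_job(device, job):
    # In the real script this would be a Bitbar API call; doing these slow
    # calls concurrently is where the win comes from.
    pass

def schedule(free_devices, queued_jobs):
    pairs = list(zip(free_devices, queued_jobs))
    if not pairs:
        return
    with ThreadPoolExecutor(max_workers=len(pairs)) as pool:
        for device, job in pairs:
            pool.submit(start_job, device, job)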

Turn off duplicate opt jobs and only run PGO jobs
In reviewing what was run by default per push and on try, a big oversight was discovered. When we turned PGO on for Android, all the perf jobs were scheduled both for opt and PGO, when they should have been only scheduled for PGO. This was an easy fix and cut a large portion of the load down (Bug 1565644)

Removed old taskcluster-worker devices
Earlier this year we switched to Taskcluster generic-worker and in the transition had to split devices between the old taskcluster-worker and the new generic-worker (think of downstream branches). Now everything runs on generic-worker, but we still had 4 devices configured with taskcluster-worker sitting idle.

Given all of these changes, we will still have backlogs that on a bad day could take 12+ hours to schedule try tasks, but we feel confident that with the current load, most of the time jobs will be started in a reasonable time window and, worst case, we will catch up every day.

A caveat to the last statement: we are enabling webrender reftests on Android and this will increase the load by a couple devices/day. Any additional tests that we schedule or large series of try pushes will cause us to hit the tipping point. I suspect buying more devices will resolve many complaints about lag and backlogs. My recommendation would be to wait 2 more weeks to see if the changes made have a measurable effect on our backlog. While we wait, it would be good to have agreement on what an acceptable backlog is, so that when we regularly cross that threshold we can quickly determine the number of devices needed to fix the problem.

The Mozilla BlogQ&A: Igniting imaginations and putting VR in the hands of students with Kai Frazier


When you were in school, you may have gone on a trip to the museum, but you probably never stood next to an erupting volcano, watching molten lava pouring down its sides. As Virtual Reality (VR) grows, learning by going into the educational experience could be the way children will learn — using VR headsets the way we use computers.

This kind of technology holds huge potential in shaping young minds, but like with most technology, not all public schools get the same access. For those who come from underserved communities, the high costs to technology could widen an already existing gap in learning, and future incomes.

Kai Frazier, Founder of the education startup Curated x Kai, has seen this first-hand. As a history teacher who was once a homeless teen, she’s seen how access to technology can change lives. Her experiences on both ends of the spectrum are one of the reasons why she founded Curated x Kai.

As a teacher trying to make a difference, she began to experiment with VR and was inspired by the responses from her students. Now, she is hard at work building a company that stands for opening up opportunities for all children.

We recently sat down with Frazier to talk about the challenges of launching a startup with no engineering experience, and the life-changing impact of VR in the classroom.


How did the idea to use VR to help address inequality come up?
I was teaching close to Washington D.C., and my school couldn’t afford a field trip to the Martin Luther King Memorial or any of the free museums. Even though it was a short distance from their school, they couldn’t go. Being a teacher, I know how those things correlate to classroom performance.

 

As a teacher, it must have been difficult for you to see kids unable to tour museums and monuments, especially when most are free in D.C. Was the situation similar when it came to accessing technology?
When I was teaching, it was really rough because I didn’t have much. We had so few laptops that we had a laptop cart to share between classrooms. You’d sign up for a laptop for your class, and get a laptop cart once every other week. It got so bad that we went to a ‘bring your own device’ policy. So, if they had a cellphone, they could just use a cellphone because that’s a tiny computer.

And computers are considered essential in today’s schools. We know that most kids like to play with computers, but what about VR? How do educational VR experiences impact kids, especially those from challenging backgrounds or kids that aren’t the strongest students?
The kids that are hardest to reach had such strong responses to Virtual Reality. It awakened something in them that the teachers didn’t know was there. I have teachers in Georgia who work with emotionally-troubled children, and they say they’ve never seen them behave so well. A lot of my students were doing bad in science, and now they’re excited to watch science VR experiences.

Let’s back up a bit and talk about how it all started. How did you come up with the idea for Curated x Kai?
On Christmas day, I went to the MLK Memorial with my camera. I took a few 360-videos, and my students loved it. From there, I would take a camera with me and film in other places. I would think, ‘What could happen if I could show these kids everything from history museums, to the world, to colleges to jobs?’

And is that when you decided to launch?
While chaperoning kids on a college tour to Silicon Valley, I was exposed to Bay Area insights that showed me I wasn’t thinking big enough. That was October 2017, so by November 2017, I decided to launch my company.

When I launched, I didn’t think I was going to have a VR company. I have no technical background. My degrees are in history and secondary education. I sold my house. I sold my car. I sold everything I owned. I bootstrapped everything, and moved across the country here (to the San Francisco Bay Area).

When you got to San Francisco, you tried to raise venture capital but you had a hard time. Can you tell us more about that?
No matter what stage you’re in, VCs (Venture Capitalists) want to know you’re not going to be a large risk. There aren’t many examples of people with my background creating an ed tech VR company, so it feels extremely risky to most. The stat is that there’s a 0.2 percent chance for a black woman to get VC funding. I’m sure that if I looked differently, it would have helped us to scale a lot faster.

 

You’re working out of the Kapor Center in Oakland right now. What has that experience been like for you?
The Kapor Center is different because they are taking a chance on people who are doing good for the community. It’s been an amazing experience being incubated at their facility.

You’re also collaborating with the Emerging Technologies team here at Mozilla. Can you tell us more about that?
I’m used to a lot of false promises or lures of diversity initiatives. I was doing one program that catered to diverse content creators. When I got the contract, it said they were going to take my IP, my copyright, my patent…they would own it all. Mozilla was the first time I had a tech company treat me like a human being. They didn’t really care too much about my background or what credentials I didn’t have.

And of course many early engineers didn’t have traditional credentials. They were just people interested in building something new. As Curated x Kai developed, you chose Hubs, which is a Mozilla project, as a platform to host the virtual tours and museum galleries. Why Hubs when there are other, more high-resolution options on the market?
Tools like Hubs allow me to get students interested in VR, so they want to keep going. It doesn’t matter how high-res the tool is. Hubs enables me to create a pipeline so people can learn about VR in a very low-cost, low-stakes arena. It seems like many of the more expensive VR experiences haven’t been tested in schools. They often don’t work, because the WiFi can’t handle it.

That’s an interesting point because for students to get the full benefits of VR, it has to work in schools first. Besides getting kids excited to learn, if you had one hope for educational VR, what would it be?
I hope all VR experiences, educational or other, will incorporate diverse voices and perspectives. There is a serious lack of inclusive design, and it shows, from the content to the hardware. The lack of inclusive design creates an unwelcoming experience and many are robbed from the excitement of VR. Without that excitement, audiences lack the motivation to come back to the experience and it hinders growth, and adaptation in new communities — including schools.

The post Q&A: Igniting imaginations and putting VR in the hands of students with Kai Frazier appeared first on The Mozilla Blog.

Daniel Stenbergcurl 7.65.2 fixes even more

Six weeks after our previous bug-fix release, we ship a second release in a row with nothing but bug-fixes. We call it 7.65.2. We decided to go through this full release cycle with a focus on fixing bugs (and not merging any new features) since even after 7.65.1 shipped as a bug-fix only release we still seemed to get reports indicating problems we wanted fixed once and for all.

Download curl from curl.haxx.se as always!

Also, I personally had a vacation already planned to happen during this period (and I did) so it worked out pretty good to take this cycle as a slightly calmer one.

Of the numbers below, we can especially celebrate that we’ve now received code commits by more than 700 persons!

Numbers

the 183rd release
0 changes
42 days (total: 7,789)

76 bug fixes (total: 5,259)
113 commits (total: 24,500)
0 new public libcurl function (total: 80)
0 new curl_easy_setopt() option (total: 267)

0 new curl command line option (total: 221)
46 contributors, 25 new (total: 1,990)
30 authors, 19 new (total: 706)
1 security fix (total: 90)
200 USD paid in Bug Bounties

Security

Since the previous release we’ve shipped a security fix. It was special in the way that it wasn’t actually a bug in the curl source code, but in the build procedure for how we made curl builds for Windows. For this report, we paid out a 200 USD bug bounty!

Bug-fixes of interest

As usual I’ve carved out a list with some of the bugs since the previous release that I find interesting and that could warrant a little extra highlighting. Check the full changelog on the curl site.

bindlocal: detect and avoid IP version mismatches in bind

It turned out you could ask curl to connect to an IPv4 site and if you then asked it to bind to an interface in the local end, it could actually bind to an IPv6 address (or vice versa) and then cause havoc and fail. Now we make sure to stick to the same IP version for both!

configure: more --disable switches to toggle off individual features

As part of the recent tiny-curl effort, more parts of curl can be disabled in the build and now all of them can be controlled by options to the configure script. We also now have a test that verifies that all the disabled-defines are indeed possible to set with configure!

(A future version could certainly get a better UI/way to configure which parts to enable/disable!)

http2: call done_sending on end of upload

Turned out a very small upload over HTTP/2 could sometimes end up not getting the “upload done” flag set and it would then just linger around or eventually cause a time-out…

libcurl: Restrict redirect schemes to HTTP(S), and FTP(S)

As a stronger safety-precaution, we’ve now made the default set of protocols that are accepted to redirect to much smaller than before. The set of protocols are still settable by applications using the CURLOPT_REDIR_PROTOCOLS option.

multi: enable multiplexing by default (again)

Embarrassingly enough this default was accidentally switched off in 7.65.0 but now we’re back to enabling multiplexing by default for multi interface uses.

multi: fix the transfer hashes in the socket hash entries

The handling of multiple transfers on the same socket was flawed and previous attempts to fix them were incorrect or simply partial. Now we have an improved system and in fact we now store a separate connection hash table for each internal separate socket object.

openssl: fix pubkey/signature algorithm detection in certinfo

The CURLINFO_CERTINFO option broke with OpenSSL 1.1.0+, but now we have finally caught up with the necessary API changes and it should now work again just as well independent of which version you build curl to use!

runtests: keep logfiles around by default

Previously, when you run curl’s test suite, it automatically deleted the log files on success and you had to use runtests.pl -k to prevent it from doing this. Starting now, it will erase the log files on start and not on exit so they will now always be kept on exit no matter how the tests run. Just a convenience thing.

runtests: report single test time + total duration

The output from runtests.pl when it runs each test, one by one, will now include timing information about each individual test. How long each test took and how long time it has spent on the tests so far. This will help us detect if specific tests suddenly takes a very long time and helps us see how they perform in the remote CI build farms etc.

Next?

I truly think we’ve now caught up with the worst problems and can now allow features to get merged again. We have some fun ones in the pipe that I can’t wait to put in the hands of users out there…

Nicholas NethercoteHow to speed up the Rust compiler in 2019

I have written previously about my efforts to speed up the Rust compiler in 2016 (part 1, part 2) and 2018 (part 1, part 2, NLL edition). It’s time for an update on the first half of 2019.

Faster globals

libsyntax has three tables in a global data structure, called Globals, storing information about spans (code locations), symbols, and hygiene data (which relates to macro expansion). Accessing these tables is moderately expensive, so I found various ways to improve things.

#59693: Every element in the AST has a span, which describes its position in the original source code. Each span consists of an offset, a length, and a third value that is related to macro expansion. The three fields are 12 bytes in total, which is a lot to attach to every AST element, and much of the time the three fields can fit in much less space. So the compiler used a 4 byte compressed form with a fallback to a hash table stored in Globals for spans that didn’t fit in 4 bytes. This PR changed that to 8 bytes. This increased memory usage and traffic slightly, but reduced the fallback rate from roughly 10-20% to less than 1%, speeding up many workloads, the best by an amazing 14%.
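
As a toy illustration of the general pattern (an inline encoding with a side-table fallback), here is a Python sketch; the field widths and the list-based table are made up, and this is not the compiler's actual span encoding.

# Pack small (offset, length) pairs directly into an integer; anything that
# doesn't fit goes into a fallback table and is referenced by a tagged index.
INLINE_OFFSET_BITS = 16
INLINE_LEN_BITS = 15
TAG_FALLBACK = 1 << 31  # top bit set means "index into the fallback table"

class SpanEncoder:
    def __init__(self):
        self.fallback = []

    def encode(self, offset, length):
        if offset < (1 << INLINE_OFFSET_BITS) and length < (1 << INLINE_LEN_BITS):
            return (offset << INLINE_LEN_BITS) | length  # tag bit stays clear
        self.fallback.append((offset, length))
        return TAG_FALLBACK | (len(self.fallback) - 1)

    def decode(self, packed):
        if packed & TAG_FALLBACK:
            return self.fallback[packed & ~TAG_FALLBACK]
        return (packed >> INLINE_LEN_BITS, packed & ((1 << INLINE_LEN_BITS) - 1))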

#61253: There are numerous operations that accessed the hygiene data, and often these were called in pairs or trios, thus repeating the hygiene data lookup. This PR introduced compound operations that avoid the repeated lookups. This won 10% on packed-simd, up to 3% on numerous other workloads.

#61484: Similar to #61253, this won up to 2% on many benchmarks.

#60630: The compiler has an interned string type, called symbol. It used this inconsistently. As a result, lots of comparisons were made between symbols and ordinary strings, which required a lookup of the string in the symbols table and then a char-by-char comparison. A symbol-to-symbol comparison is much cheaper, requiring just an integer comparison. This PR removed the symbol-to-string comparison operations, forcing more widespread use of the symbol type. (Fortunately, most of the introduced symbol uses involved statically-known, pre-interned strings, so there weren’t additional interning costs.) This won up to 1% on various benchmarks, and made the use of symbols more consistent.

#60815: Similar to #60630, this also won up to 1% on various benchmarks.

#60467, #60910, #61035, #60973: These PRs avoided some more unnecessary symbol interning, for sub-1% wins.

Miscellaneous

The following improvements didn’t have any common theme.

#57719: This PR inlined a very hot function, for a 4% win on one workload.

#58210: This PR changed a hot assertion to run only in debug builds, for a 20%(!) win on one workload.

#58207: I mentioned string interning earlier. The Rust compiler also uses interning for a variety of other types where duplicate values are common, including a type called LazyConst. However, the intern_lazy_const function was buggy and didn’t actually do any interning — it just allocated a new LazyConst without first checking if it had been seen before! This PR fixed that problem, reducing peak memory usage and page faults by 59% on one benchmark.
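
To make "interning" concrete, here is a language-agnostic sketch written in Python for brevity; it is not the compiler's code and the names are invented. The bug described above was equivalent to skipping the table lookup and always allocating.

# Toy interner: return the existing object when the value has been seen
# before; allocate (and remember) it only on a miss.
class Interner:
    def __init__(self):
        self._table = {}

    def intern(self, value):
        existing = self._table.get(value)
        if existing is not None:
            return existing
        self._table[value] = value
        return value

interner = Interner()
a = interner.intern(("LazyConst", 42))
b = interner.intern(("LazyConst", 42))
assert a is b  # duplicates share one allocation once interning actually works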

#59507: The pretty-printer was calling write! for every space of indentation, and on some workloads the indentation level can exceed 100. This PR reduced it to a single write! call in the vast majority of cases, for up to a 7% win on a few benchmarks.

#59626: This PR changed the preallocated size of one data structure to better match what was needed in practice, reducing peak memory usage by 20 MiB on some workloads.

#61612: This PR optimized a hot path within the parser, whereby constant tokens were uselessly subjected to repeated “is it a keyword?” tests, for up to a 7% win on programs with large constants.

Profiling improvements

The following changes involved improvements to our profiling tools.

#59899: I modified the output of -Zprint-type-sizes so that enum variants are listed from largest to smallest. This makes it much easier to see outsized variants, especially for enums with many variants.

#62110: I improved the output of the -Ztime-passes flag by removing some uninteresting entries that bloated the output and adding a measurement for the total compilation time.

Also, I improved the profiling support within the rustc-perf benchmark suite. First, I added support for profiling with OProfile. I admit I haven’t used it enough yet to gain any wins. It seg faults about half the time when I run it, which isn’t encouraging.

Second, I added support for profiling with the new version of DHAT. This blog post is about 2019, but it’s worth mentioning some improvements I made with the new DHAT’s help in Q4 of 2018, since I didn’t write a blog post about that period: #55167, #55346, #55383, #55384, #55501, #55525, #55556, #55574, #55604, #55777, #55558, #55745, #55778, #55905, #55906, #56268, #56090, #56269, #56336, #56369, #56737, and (ena crate) #14.

Finally, I wrote up brief descriptions for all the benchmarks in rustc-perf.

Pipelined compilation

The improvements above (and all the improvements I’ve done before that) can be described as micro-optimizations, where I used profiling data to optimize a small piece of code.

But it’s also worth thinking about larger, systemic improvements to Rust compiler speed. In this vein, I worked in Q2 with Alex Crichton on pipelined compilation, a feature that increases the amount of parallelism available when building a multi-crate Rust project by overlapping the compilation of dependent crates. In diagram form, a compilation without pipelining looks like this:

         metadata            metadata
[-libA----|--------][-libB----|--------][-binary-----------]
0s        5s       10s       15s       20s                30s

With pipelined compilation, it looks like this:

[-libA----|--------]
          [-libB----|--------]
                             [-binary-----------]
0s        5s       10s       15s                25s

I did the work on the Rust compiler side, and Alex did the work on the Cargo side.

For more details on how it works, how to use it, and lots of measurements, see this thread. The effects are highly dependent on a project’s crate structure and the compiling machine’s configuration. We have seen speed-ups as high as 1.84x, while some projects see no speed-up at all. At worst, it should make things only negligibly slower, because it’s not causing any additional work, just changing the order in which certain things happen.

Pipelined compilation is currently a Nightly-only feature. There is a tracking issue for stabilizing the feature here.

Future work

I have a list of things I want to investigate in Q3.

  • For pipelined compilation, I want to try pushing metadata creation even earlier in the compiler front-end, which may increase the speed-ups some more.
  • The compiler uses memcpy a lot; not directly, but the generated code uses it for value moves and possibly other reasons. In “check” builds that don’t do any code generation, typically 2-8% of all instructions executed occur within memcpy. I want to understand why this is and see if it can be improved. One possibility is moves of excessively large types within the compiler; another possibility is poor code generation. The former would be easier to fix. The latter would be harder to fix, but would benefit many Rust programs.
  • Incremental compilation sometimes isn’t very effective. On some workloads, if you make a tiny change and recompile incrementally it takes about the same time as a full non-incremental compilation. Perhaps a small change to the incremental implementation could result in some big wins.
  • I want to see if there are other hot paths within the parser that could be improved, like in #61612.

I also have various pieces of Firefox work that I need to do in Q3, so I might not get to all of these. If you are interested in working on these ideas, or anything else relating to Rust compiler speed, please get in touch.

Joel Maherbacklogs, lag, and waiting

Many times each week I see a ping on IRC or Slack asking “why are my jobs not starting on my try push?”  I want to talk about why we have backlogs and some things to consider in regards to fixing the problem.

It is a frustrating experience when you have code that you are working on or ready to land and some test jobs have been waiting for hours to run. I personally experienced this the last 2 weeks while trying to uplift some test-only changes to esr68, and I would get results the next day. In fact many of us on our team joke that we work weekends and less during the week in order to get try results in a reasonable time.

It would be a good time to cover briefly what we run and where we run it, to understand some of the variables.

In general we run on 4 primary platforms:

  • Linux: Ubuntu 16.04
  • OSX: 10.14.5
  • Windows: 7 (32 bit) + 10 (v1803) + 10 (aarch64)
  • Android: Emulator v7.0, hardware 7.0/8.0

In addition to the platforms, we often run tests in a variety of configs:

  • PGO / Opt / Debug
  • Asan / ccov (code coverage)
  • Runtime prefs: qr (webrender) / spi (socket process) / fission (upcoming)

In some cases a single test can run >90 times for a given change when iterated through all the different platforms and configurations. Every week we are adding many new tests to the system and it seems that every month we are changing configurations somehow.

In total for January 1st to June 30th (first half of this year) Mozilla ran >25M test jobs. In order to do that, we need a lot of machines, here is what we have:

  • linux
    • unittests are in AWS – basically unlimited
    • perf tests in data center with 200 machines – 1M jobs this year
  • Windows
    • unittests are in AWS (some require instances with a dedicated GPU and that is a limited pool)
    • perf tests in data center with 600 machines – 1.5M jobs this year
    • Windows 10 aarch64 – 35 laptops (at Bitbar) that run all unittests and perftests, a new platform in 2019 and 20K jobs this year
    • Windows 10 perf reference (low end) laptop – 16 laptops (at Bitbar) that run select perf tests, 30K jobs this year
  • OSX
    • unittests and perf tests run in data center with 450 mac minis – 380K jobs this year
  • Android
    • Emulators (packet.net fixed pool of 50 hosts w/4 instances/host) 493K jobs this year – run most unittests on here
      • will have much larger pool in the near future
    • real devices – we have 100 real devices (at Bitbar): 40 Motorola G5’s and 60 Google Pixel2’s running all perf tests and some unittests – 288K jobs this year

You will notice that OSX, some Windows laptops, and Android phones are a limited resource and we need to be careful about what we run on them and ensure our machines and devices are running at full capacity.

These limited resource machines are where we see jobs scheduled and not starting for a long time. We call this backlog; it could also be referred to as lag. While it would be great to point to a public graph showing our backlog, we don’t have great resources that are uniform between all machine types. Here is a view of what we have internally for the Android devices (the bitbar_queue graph):

What typically happens when a developer pushes their code to a try server to run all the tests, many jobs finish in a reasonable amount of time, but jobs scheduled on resource constrained hardware (such as android phones) typically have a larger lag which then results in frustration.

How do we manage the load:

  1. reduce the number of jobs
  2. ensure tooling and infrastructure is efficient and fully operational

I would like to talk about how to reduce the number of jobs. This is really important when dealing with limited resources, but we shouldn’t ignore this on all platforms. The things to tweak are:

  1. what tests are run and on what branches
  2. what frequency we run the tests at
  3. what gets scheduled on try server pushes

I find that for 1, we want to run everything everywhere if possible; since this isn’t possible, one of our tricks is to run things on mozilla-central (the branch we ship nightlies off of) and not on our integration branches. A side effect here is that a regression isn’t seen for a longer period of time and finding a root cause can be more difficult. One recent fix came when PGO was enabled for Android: we were running both regular tests and PGO tests at the same time for all revisions. We only ship PGO and only need to test PGO, so the jobs were cut in half with a simple fix.

Looking at 2, frequency is the other lever. Many tests are for information or comparison only, not for tracking per commit. Running most tests once/day or even once/week will still give a signal, while our most diverse and effective tests run more frequently.

The last option, 3, is where all developers have a chance to spoil the fun for everyone else. One thing is different for try pushes: they are scheduled on the same test machines as our release and integration branches, except they are put in a separate queue to run which is priority 2. Basically, if any new jobs get scheduled on an integration branch, the next available devices will pick those up and your try push will have to wait until all integration jobs for that device are finished. This keeps our trees open more frequently (if we have 50 commits with no tests run, we could be backing out changes from 12 hours ago, which may have already been released or may have bitrotted, making the backout harder). One other aspect of this is that there are >10K jobs one could possibly run while scheduling a try push, and knowing what to run is hard. Many developers know what to run and some over-schedule, either out of difficulty in job selection or out of being overly cautious.

Keeping all of this in mind, I often see many pushes to our try server scheduling what looks to be way too many jobs on hardware. Once someone does this, everybody else who wants to get their 3 jobs run has to wait in line behind the queue of jobs (many times 1000+) which often only get run overnight for North America.

I would encourage developers pushing to try to really question if they need all jobs, or just a sample of the possible jobs. With tools like |./mach try fuzzy|, |./mach try chooser|, or |./mach try empty| it is easier to schedule what you need instead of blanket commands that run everything. I also encourage everyone to cancel old try pushes if a second try push has been performed to fix errors from the first try push; that alone saves a lot of unnecessary jobs from running.