Patrick Cloke: Celery without a Results Backend

The Celery send_task method allows you to invoke a task by name without importing it. [1] There is an undocumented [2] caveat to using send_task: it doesn’t have access to the configuration of the task (from when the task was created using the @task decorator).

Much of this configuration …
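
As a quick illustration of the send_task pattern (a minimal sketch, assuming a Redis broker and a task registered elsewhere under the hypothetical name "tasks.add"):

    # Minimal sketch: invoking a task by name with send_task. The broker URL
    # and the task name "tasks.add" are assumptions for illustration.
    from celery import Celery

    app = Celery("example", broker="redis://localhost:6379/0")

    # No import of the task function is needed; only its registered name.
    # Note: options set via the @task decorator (e.g. ignore_result) are NOT
    # applied here, since send_task never sees the task's definition.
    result = app.send_task("tasks.add", args=(2, 3))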

Joel Maher: Recent fixes to reduce backlog on Android phones

Last week it seemed that all our limited resource machines were perpetually backlogged. I wrote yesterday to provide insight into what we run and some of our limitations. This post will be discussing the Android phones backlog last week specifically.

The Android phones are hosted at Bitbar and we split them into pools (battery testing, unit testing, perf testing) with perf testing being the majority of the devices.

There were 6 fixes made which resulted in significant wins:

  1. Recovered offline devices at Bitbar
  2. Restarted host machines to fix intermittent connection issues at Bitbar
  3. Updated the Taskcluster generic-worker startup script to consume superseded jobs
  4. Rewrote the scheduling script as multi-threaded and utilized Bitbar APIs more efficiently
  5. Turned off duplicate jobs that were on by accident last month
  6. Removed old taskcluster-worker devices

On top of this, there are 3 future improvements that could help future-proof this:

  1. Upgrade Android phones from 8.0 to 9.0 for more stability
  2. Enable power testing on generic USB hubs rather than special hubs which require dedicated devices
  3. Merge all separate pools together to maximize device utilization

With the fixes in place, we are able to keep up with normal load and expect that future spikes in jobs will be shorter lived, instead of lasting an entire week.

Recovered offline devices at Bitbar
Every day 2-5 devices are offline for some period of time. The Bitbar team finds some on their own and resets the devices; sometimes we notice them and ask for the devices
to be reset. In many cases the devices are hung or have trouble on a reboot (motivation for upgrading to 9.0). I will add that the week prior, things started getting sideways, and it was a holiday week for many, so fewer people were watching things and more devices ended up in various states.

In total we have 40 Pixel 2 devices in the perf pool (and 37 Motorola G5 devices as well), and 60 Pixel 2 devices when including the unittest and battery pools. We found that 19 devices were not accepting jobs and needed attention on Monday, July 8th. For planning purposes it is assumed that 10% of the devices will be offline; in this case we had 1/3 of our devices offline, and we were doing merge day with a lot of big pushes running all the jobs.

Restarted host machines to fix intermittent connection issues at Bitbar
At Bitbar we have host machines, each with 4 or more docker containers running; each docker container runs Linux with the Taskcluster generic-worker and the tools to run test jobs. Each docker container is also mapped directly to a phone. The host machines are rarely rebooted or maintained, and we noticed a few instances where the docker containers had trouble connecting to the network. A fix for this was to update the kernel and schedule periodic reboots.

Updated Taskcluster generic-worker startup script
When a job is superseded, we shut down the Taskcluster generic-worker and the docker image and clean up. Previously the worker would terminate the job and the docker container and then wait for another job to show up (often a 5-20 minute cycle). With the changes made, the Taskcluster generic-worker simply restarts (not the docker container) and quickly picks up the next job.

Rewrote the scheduling script as multi-threaded
This was a big area of improvement. As our jobs increased in volume and spanned a wider range of runtimes, our scheduling tool was iterating through the queue and devices and calling the Bitbar APIs to spin up a worker and hand off a task. This takes a few seconds per job or device, and with 100 devices it could take 10+ minutes to come around and schedule a new job on a device. With changes made last week (Bug 1563377), jobs now start in under 10 seconds, which greatly increases our device utilization.
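
For illustration, here is a minimal sketch of the multi-threaded dispatch idea; the helper functions (get_pending_jobs, get_idle_devices, start_job_on_device) are hypothetical stand-ins for the real Bitbar API wrappers, and the actual script differs:

    # Sketch only: dispatch job/device pairs concurrently so that a slow
    # per-job API call no longer blocks the whole scheduling loop.
    from concurrent.futures import ThreadPoolExecutor

    def schedule_all():
        jobs = get_pending_jobs()       # hypothetical queue query
        devices = get_idle_devices()    # hypothetical Bitbar device query
        with ThreadPoolExecutor(max_workers=32) as pool:
            for job, device in zip(jobs, devices):
                # Each API call takes a few seconds; running them in
                # parallel keeps 100 devices from waiting 10+ minutes.
                pool.submit(start_job_on_device, job, device)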

Turned off duplicate opt jobs to run only PGO jobs
In reviewing what was run by default per push and on try, a big oversight was discovered. When we turned PGO on for Android, all the perf jobs were scheduled for both opt and PGO, when they should have been scheduled only for PGO. This was an easy fix and cut a large portion of the load (Bug 1565644).

Removed old taskcluster-worker devices
Earlier this year we switched to Taskcluster generic-worker and in the transition had to split devices between the old taskcluster-worker and the new generic-worker (think of downstream branches). Now everything runs on generic-worker, but we had 4 devices still configured with taskcluster-worker, sitting idle.

Given all of these changes, we will still have backlogs that on a bad day could take 12+ hours to schedule try tasks, but we feel confident that with the current load, most of the time jobs will start in a reasonable time window and, in the worst case, we will catch up every day.

A caveat to the last statement: we are enabling WebRender reftests on Android, and this will increase the load by a couple of devices' worth per day. Any additional tests that we schedule, or a large series of try pushes, will push us past the tipping point. I suspect buying more devices would resolve many complaints about lag and backlogs. My recommendation is to wait 2 more weeks to see whether the changes made have a measurable effect on our backlog. While we wait, it would be good to agree on what an acceptable backlog is, so that when we regularly cross that threshold we can quickly determine the number of devices needed to fix the problem.

The Mozilla Blog: Q&A: Igniting imaginations and putting VR in the hands of students with Kai Frazier



When you were in school, you may have gone on a trip to the museum, but you probably never stood next to an erupting volcano, watching molten lava pouring down its sides. As Virtual Reality (VR) grows, learning by going into the educational experience could be the way children will learn — using VR headsets the way we use computers.

This kind of technology holds huge potential in shaping young minds, but as with most technology, not all public schools get the same access. For those who come from underserved communities, the high cost of technology could widen an already existing gap in learning, and in future incomes.

Kai Frazier, Founder of the education startup Curated x Kai, has seen this first-hand. As a history teacher who was once a homeless teen, she's seen how access to technology can change lives. Her experiences on both ends of the spectrum are among the reasons why she founded Curated x Kai.

As a teacher trying to make a difference, she began to experiment with VR and was inspired by the responses from her students. Now, she is hard at work building a company that stands for opening up opportunities for all children.

We recently sat down with Frazier to talk about the challenges of launching a startup with no engineering experience, and the life-changing impact of VR in the classroom.


How did the idea to use VR to help address inequality come up?
I was teaching close to Washington D.C., and my school couldn’t afford a field trip to the Martin Luther King Memorial or any of the free museums. Even though it was a short distance from their school, they couldn’t go. Being a teacher, I know how those things correlate to classroom performance.


As a teacher, it must have been difficult for you to see kids unable to tour museums and monuments, especially when most are free in D.C. Was the situation similar when it came to accessing technology?
When I was teaching, it was really rough because I didn't have much. We had so few laptops that we shared a laptop cart between classrooms. You'd sign up for a laptop for your class, and get a laptop cart once every other week. It got so bad that we went to a 'bring your own device' policy. So, if they had a cellphone, they could just use a cellphone because that's a tiny computer.

And computers are considered essential in today’s schools. We know that most kids like to play with computers, but what about VR? How do educational VR experiences impact kids, especially those from challenging backgrounds or kids that aren’t the strongest students?
The kids that are hardest to reach had such strong responses to Virtual Reality. It awakened something in them that the teachers didn’t know was there. I have teachers in Georgia who work with emotionally-troubled children, and they say they’ve never seen them behave so well. A lot of my students were doing bad in science, and now they’re excited to watch science VR experiences.

Let's back up a bit and talk about how it all started. How did you come up with the idea for Curated x Kai?
On Christmas day, I went to the MLK Memorial with my camera. I took a few 360-videos, and my students loved it. From there, I would take a camera with me and film in other places. I would think, ‘What could happen if I could show these kids everything from history museums, to the world, to colleges to jobs?’

And is that when you decided to launch?
While chaperoning kids on a college tour to Silicon Valley, I was exposed to Bay Area insights that showed me I wasn’t thinking big enough. That was October 2017, so by November 2017, I decided to launch my company.

When I launched, I didn’t think I was going to have a VR company. I have no technical background. My degrees are in history and secondary education. I sold my house. I sold my car. I sold everything I owned. I bootstrapped everything, and moved across the country here (to the San Francisco Bay Area).

When you got to San Francisco, you tried to raise venture capital but you had a hard time. Can you tell us more about that?
No matter what stage you're in, VCs (Venture Capitalists) want to know you're not going to be a large risk. There aren't many examples of people with my background creating an ed tech VR company, so it feels extremely risky to most. The stat is that there's a 0.2 percent chance for a black woman to get VC funding. I'm sure that if I looked differently, it would have helped us to scale a lot faster.


You’re working out of the Kapor Center in Oakland right now. What has that experience been like for you?
The Kapor Center is different because they are taking a chance on people who are doing good for the community. It’s been an amazing experience being incubated at their facility.

You’re also collaborating with the Emerging Technologies team here at Mozilla. Can you tell us more about that?
I’m used to a lot of false promises or lures of diversity initiatives. I was doing one program that catered to diverse content creators. When I got the contract, it said they were going to take my IP, my copyright, my patent…they would own it all. Mozilla was the first time I had a tech company treat me like a human being. They didn’t really care too much about my background or what credentials I didn’t have.

And of course many early engineers didn't have traditional credentials. They were just people interested in building something new. As Curated x Kai developed, you chose Hubs, which is a Mozilla project, as a platform to host the virtual tours and museum galleries. Why Hubs, when there are other, more high-resolution options on the market?
Tools like Hubs allow me to get students interested in VR, so they want to keep going. It doesn’t matter how high-res the tool is. Hubs enables me to create a pipeline so people can learn about VR in a very low-cost, low-stakes arena. It seems like many of the more expensive VR experiences haven’t been tested in schools. They often don’t work, because the WiFi can’t handle it.

That’s an interesting point because for students to get the full benefits of VR, it has to work in schools first. Besides getting kids excited to learn, if you had one hope for educational VR, what would it be?
I hope all VR experiences, educational or other, will incorporate diverse voices and perspectives. There is a serious lack of inclusive design, and it shows, from the content to the hardware. The lack of inclusive design creates an unwelcoming experience, and many are robbed of the excitement of VR. Without that excitement, audiences lack the motivation to come back to the experience, and that hinders growth and adaptation in new communities — including schools.


Daniel Stenberg: curl 7.65.2 fixes even more

Six weeks after our previous bug-fix release, we ship a second release in a row with nothing but bug-fixes. We call it 7.65.2. We decided to go through this full release cycle with a focus on fixing bugs (and not merge any new features) since even after 7.65.1 shipped as a bug-fix only release we still seemed to get reports indicating problems we wanted fixed once and for all.

Download curl from curl.haxx.se as always!

Also, I personally had a vacation already planned to happen during this period (and I did), so it worked out pretty well to take this cycle as a slightly calmer one.

Of the numbers below, we can especially celebrate that we’ve now received code commits by more than 700 persons!

Numbers

the 183rd release
0 changes
42 days (total: 7,789)

76 bug fixes (total: 5,259)
113 commits (total: 24,500)
0 new public libcurl function (total: 80)
0 new curl_easy_setopt() option (total: 267)

0 new curl command line option (total: 221)
46 contributors, 25 new (total: 1,990)
30 authors, 19 new (total: 706)
1 security fix (total: 90)
200 USD paid in Bug Bounties

Security

Since the previous release we’ve shipped a security fix. It was special in the way that it wasn’t actually a bug in the curl source code, but in the build procedure for how we made curl builds for Windows. For this report, we paid out a 200 USD bug bounty!

Bug-fixes of interest

As usual I’ve carved out a list with some of the bugs since the previous release that I find interesting and that could warrant a little extra highlighting. Check the full changelog on the curl site.

bindlocal: detect and avoid IP version mismatches in bind

It turned out you could ask curl to connect to an IPv4 site, and if you then asked it to bind to an interface on the local end, it could actually bind to an IPv6 address (or vice versa) and then cause havoc and fail. Now we make sure to stick to the same IP version for both!

configure: more --disable switches to toggle off individual features

As part of the recent tiny-curl effort, more parts of curl can be disabled in the build and now all of them can be controlled by options to the configure script. We also now have a test that verifies that all the disabled-defines are indeed possible to set with configure!

(A future version could certainly get a better UI/way to configure which parts to enable/disable!)

http2: call done_sending on end of upload

It turned out a very small upload over HTTP/2 could sometimes end up not getting the "upload done" flag set, and it would then just linger around or eventually cause a time-out…

libcurl: Restrict redirect schemes to HTTP(S) and FTP(S)

As a stronger safety precaution, we've now made the default set of protocols that are accepted in redirects much smaller than before. The set of protocols is still settable by applications using the CURLOPT_REDIR_PROTOCOLS option.
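
As an illustration from the application side, here is a short pycurl sketch of widening the redirect protocol set back out; it assumes a pycurl recent enough to expose REDIR_PROTOCOLS and the PROTO_* constants, which mirror libcurl's CURLPROTO_* bits:

    # Sketch: explicitly allow redirects to HTTP(S) and FTP(S).
    import pycurl

    c = pycurl.Curl()
    c.setopt(pycurl.URL, "https://example.com/redirecting-resource")
    c.setopt(pycurl.FOLLOWLOCATION, True)
    c.setopt(pycurl.REDIR_PROTOCOLS,
             pycurl.PROTO_HTTP | pycurl.PROTO_HTTPS |
             pycurl.PROTO_FTP | pycurl.PROTO_FTPS)
    c.perform()
    c.close()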

multi: enable multiplexing by default (again)

Embarrassingly enough this default was accidentally switched off in 7.65.0 but now we’re back to enabling multiplexing by default for multi interface uses.

multi: fix the transfer hashes in the socket hash entries

The handling of multiple transfers on the same socket was flawed, and previous attempts to fix it were incorrect or simply partial. Now we have an improved system, and in fact we now store a separate connection hash table for each internal socket object.

openssl: fix pubkey/signature algorithm detection in certinfo

The CURLINFO_CERTINFO option broke with OpenSSL 1.1.0+, but now we have finally caught up with the necessary API changes and it should work again just as well, independent of which OpenSSL version you build curl to use!

runtests: keep logfiles around by default

Previously, when you ran curl's test suite, it automatically deleted the log files on success, and you had to use runtests.pl -k to prevent it from doing so. Starting now, it erases the log files on start rather than on exit, so they will always be kept when the run ends, no matter how the tests went. Just a convenience thing.

runtests: report single test time + total duration

The output from runtests.pl, as it runs each test one by one, will now include timing information about each individual test: how long each test took, and how much time has been spent on the tests so far. This will help us detect when specific tests suddenly take a very long time, and helps us see how they perform in the remote CI build farms etc.

Next?

I truly think we’ve now caught up with the worst problems and can now allow features to get merged again. We have some fun ones in the pipe that I can’t wait to put in the hands of users out there…

Nicholas Nethercote: How to speed up the Rust compiler in 2019

I have written previously about my efforts to speed up the Rust compiler in 2016 (part 1, part 2) and 2018 (part 1, part 2, NLL edition). It’s time for an update on the first half of 2019.

Faster globals

libsyntax has three tables in a global data structure, called Globals, storing information about spans (code locations), symbols, and hygiene data (which relates to macro expansion). Accessing these tables is moderately expensive, so I found various ways to improve things.

#59693: Every element in the AST has a span, which describes its position in the original source code. Each span consists of an offset, a length, and a third value that is related to macro expansion. The three fields are 12 bytes in total, which is a lot to attach to every AST element, and much of the time the three fields can fit in much less space. So the compiler used a 4 byte compressed form with a fallback to a hash table stored in Globals for spans that didn’t fit in 4 bytes. This PR changed that to 8 bytes. This increased memory usage and traffic slightly, but reduced the fallback rate from roughly 10-20% to less than 1%, speeding up many workloads, the best by an amazing 14%.
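
As a toy illustration of the inline-or-fallback idea (in Python, with made-up field widths and tag bits; rustc's actual span encoding differs):

    # Common case: pack all fields into one small word. Rare case: fall back
    # to an index into a side table (the hash table in Globals).
    SIDE_TABLE = []

    def encode_span(offset, length, ctxt):
        if offset < (1 << 16) and length < (1 << 7) and ctxt == 0:
            return (offset << 8) | (length << 1)       # low bit 0: inline
        SIDE_TABLE.append((offset, length, ctxt))
        return ((len(SIDE_TABLE) - 1) << 1) | 1        # low bit 1: fallback

    def decode_span(word):
        if word & 1:
            return SIDE_TABLE[word >> 1]
        return (word >> 8, (word >> 1) & 0x7F, 0)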

#61253: There are numerous operations that accessed the hygiene data, and often these were called in pairs or trios, thus repeating the hygiene data lookup. This PR introduced compound operations that avoid the repeated lookups. This won 10% on packed-simd, up to 3% on numerous other workloads.

#61484: Similar to #61253, this won up to 2% on many benchmarks.

#60630: The compiler has an interned string type, called symbol. It used this inconsistently. As a result, lots of comparisons were made between symbols and ordinary strings, which required a lookup of the string in the symbols table and then a char-by-char comparison. A symbol-to-symbol comparison is much cheaper, requiring just an integer comparison. This PR removed the symbol-to-string comparison operations, forcing more widespread use of the symbol type. (Fortunately, most of the introduced symbol uses involved statically-known, pre-interned strings, so there weren’t additional interning costs.) This won up to 1% on various benchmarks, and made the use of symbols more consistent.

#60815: Similar to #60630, this also won up to 1% on various benchmarks.

#60467, #60910, #61035, #60973: These PRs avoided some more unnecessary symbol interning, for sub-1% wins.

Miscellaneous

The following improvements didn’t have any common theme.

#57719: This PR inlined a very hot function, for a 4% win on one workload.

#58210: This PR changed a hot assertion to run only in debug builds, for a 20%(!) win on one workload.

#58207: I mentioned string interning earlier. The Rust compiler also uses interning for a variety of other types where duplicate values are common, including a type called LazyConst. However, the intern_lazy_const function was buggy and didn’t actually do any interning — it just allocated a new LazyConst without first checking if it had been seen before! This PR fixed that problem, reducing peak memory usage and page faults by 59% on one benchmark.
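
For illustration, here is a minimal sketch of what interning is supposed to do (Python, not the actual rustc code); the bug amounted to skipping the lookup step below and always allocating:

    # Identical values share one slot, so equality checks become cheap
    # integer comparisons and duplicates are never stored twice.
    class Interner:
        def __init__(self):
            self._index = {}    # value -> id
            self._values = []   # id -> value

        def intern(self, value):
            existing = self._index.get(value)
            if existing is not None:
                return existing             # the lookup the bug skipped
            new_id = len(self._values)
            self._values.append(value)
            self._index[value] = new_id
            return new_id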

#59507: The pretty-printer was calling write! for every space of indentation, and on some workloads the indentation level can exceed 100. This PR reduced it to a single write! call in the vast majority of cases, for up to a 7% win on a few benchmarks.
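
The fix is the classic batch-your-writes pattern; a toy illustration (Python, not the actual rustc code):

    # One call per space has O(level) fixed call overhead; batching the
    # indentation into a single string needs only one call.
    def indent_per_space(out, level):
        for _ in range(level):
            out.write(" ")

    def indent_batched(out, level):
        out.write(" " * level)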

#59626: This PR changed the preallocated size of one data structure to better match what was needed in practice, reducing peak memory usage by 20 MiB on some workloads.

#61612: This PR optimized a hot path within the parser, whereby constant tokens were uselessly subjected to repeated “is it a keyword?” tests, for up to a 7% win on programs with large constants.

Profiling improvements

The following changes involved improvements to our profiling tools.

#59899: I modified the output of -Zprint-type-sizes so that enum variants are listed from largest to smallest. This makes it much easier to see outsized variants, especially for enums with many variants.

#62110: I improved the output of the -Ztime-passes flag by removing some uninteresting entries that bloated the output and adding a measurement for the total compilation time.

Also, I improved the profiling support within the rustc-perf benchmark suite. First, I added support for profiling with OProfile. I admit I haven’t used it enough yet to gain any wins. It seg faults about half the time when I run it, which isn’t encouraging.

Second, I added support for profiling with the new version of DHAT. This blog post is about 2019, but it’s worth mentioning some improvements I made with the new DHAT’s help in Q4 of 2018, since I didn’t write a blog post about that period: #55167, #55346, #55383, #55384, #55501, #55525, #55556, #55574, #55604, #55777, #55558, #55745, #55778, #55905, #55906, #56268, #56090, #56269, #56336, #56369, #56737, and (ena crate) #14.

Finally, I wrote up brief descriptions for all the benchmarks in rustc-perf.

Pipelined compilation

The improvements above (and all the improvements I’ve done before that) can be described as micro-optimizations, where I used profiling data to optimize a small piece of code.

But it’s also worth thinking about larger, systemic improvements to Rust compiler speed. In this vein, I worked in Q2 with Alex Crichton on pipelined compilation, a feature that increases the amount of parallelism available when building a multi-crate Rust project by overlapping the compilation of dependent crates. In diagram form, a compilation without pipelining looks like this:

         metadata            metadata
[-libA----|--------][-libB----|--------][-binary-----------]
0s        5s       10s       15s       20s                30s

With pipelined compilation, it looks like this:

[-libA----|--------]
          [-libB----|--------]
                             [-binary-----------]
0s        5s       10s       15s                25s

I did the work on the Rust compiler side, and Alex did the work on the Cargo side.

For more details on how it works, how to use it, and lots of measurements, see this thread. The effects are highly dependent on a project’s crate structure and the compiling machine’s configuration. We have seen speed-ups as high as 1.84x, while some projects see no speed-up at all. At worst, it should make things only negligibly slower, because it’s not causing any additional work, just changing the order in which certain things happen.

Pipelined compilation is currently a Nightly-only feature. There is a tracking issue for stabilizing the feature here.

Future work

I have a list of things I want to investigate in Q3.

  • For pipelined compilation, I want to try pushing metadata creation even earlier in the compiler front-end, which may increase the speed-ups some more.
  • The compiler uses memcpy a lot; not directly, but the generated code uses it for value moves and possibly other reasons. In “check” builds that don’t do any code generation, typically 2-8% of all instructions executed occur within memcpy. I want to understand why this is and see if it can be improved. One possibility is moves of excessively large types within the compiler; another possibility is poor code generation. The former would be easier to fix. The latter would be harder to fix, but would benefit many Rust programs.
  • Incremental compilation sometimes isn’t very effective. On some workloads, if you make a tiny change and recompile incrementally it takes about the same time as a full non-incremental compilation. Perhaps a small change to the incremental implementation could result in some big wins.
  • I want to see if there are other hot paths within the parser that could be improved, like in #61612.

I also have various pieces of Firefox work that I need to do in Q3, so I might not get to all of these. If you are interested in working on these ideas, or anything else relating to Rust compiler speed, please get in touch.

Joel Maher: backlogs, lag, and waiting

Many times each week I see a ping on IRC or Slack asking "why are my jobs not starting on my try push?" I want to talk about why we have backlogs and some things to consider with regard to fixing the problem.

It is a frustrating experience when you have code that you are working on or that is ready to land, and some test jobs have been waiting for hours to run. I personally experienced this over the last 2 weeks while trying to uplift some test-only changes to esr68, where I would get results the next day. In fact, many of us on our team joke that we work weekends and less during the week in order to get try results in a reasonable time.

It would be a good time to cover briefly what we run and where we run it, to understand some of the variables.

In general we run on 4 primary platforms:

  • Linux: Ubuntu 16.04
  • OSX: 10.14.5
  • Windows: 7 (32 bit) + 10 (v1803) + 10 (aarch64)
  • Android: Emulator v7.0, hardware 7.0/8.0

In addition to the platforms, we often run tests in a variety of configs:

  • PGO / Opt / Debug
  • Asan / ccov (code coverage)
  • Runtime prefs: qr (webrender) / spi (socket process) / fission (upcoming)

In some cases a single test can run >90 times for a given change when iterated through all the different platforms and configurations. Every week we are adding many new tests to the system and it seems that every month we are changing configurations somehow.

In total, from January 1st to June 30th (the first half of this year), Mozilla ran >25M test jobs. In order to do that we need a lot of machines; here is what we have:

  • linux
    • unittests are in AWS – basically unlimited
    • perf tests in data center with 200 machines – 1M jobs this year
  • Windows
    • unittests are in AWS – some require instances with a dedicated GPU, and that is a limited pool
    • perf tests in data center with 600 machines – 1.5M jobs this year
    • Windows 10 aarch64 – 35 laptops (at Bitbar) that run all unittests and perftests, a new platform in 2019 and 20K jobs this year
    • Windows 10 perf reference (low end) laptop – 16 laptops (at Bitbar) that run select perf tests, 30K jobs this year
  • OSX
    • unittests and perf tests run in data center with 450 mac minis – 380K jobs this year
  • Android
    • Emulators (packet.net fixed pool of 50 hosts w/ 4 instances/host) – 493K jobs this year; we run most unittests here
      • will have much larger pool in the near future
    • real devices – we have 100 real devices (at Bitbar): 40 Motorola G5s and 60 Google Pixel 2s, running all perf tests and some unittests – 288K jobs this year

You will notice that OSX machines, some Windows laptops, and Android phones are a limited resource, and we need to be careful about what we run on them and ensure our machines and devices are running at full capacity.

These limited resource machines are where we see jobs scheduled and not starting for a long time. We call this backlog; it could also be referred to as lag. While it would be great to point to a public graph showing our backlog, we don't have great resources that are uniform between all machine types. Here is a view of what we have internally for the Android devices: [bitbar_queue graph]

When a developer pushes their code to the try server to run all the tests, many jobs typically finish in a reasonable amount of time, but jobs scheduled on resource-constrained hardware (such as Android phones) have a larger lag, which then results in frustration.

How do we manage the load:

  1. reduce the number of jobs
  2. ensure tooling and infrastructure is efficient and fully operational

I would like to talk about how to reduce the number of jobs. This is really important when dealing with limited resources, but we shouldn't ignore it on the other platforms either. The things to tweak are:

  1. what tests are run and on what branches
  2. what frequency we run the tests at
  3. what gets scheduled on try server pushes

I find that for 1, we want to run everything everywhere if possible; since that isn't possible, one of our tricks is to run things on mozilla-central (the branch we ship nightlies off of) and not on our integration branches. A side effect is that a regression isn't seen for a longer period of time and finding a root cause can be more difficult. One recent fix: when PGO was enabled for Android, we were running both regular tests and PGO tests at the same time for all revisions – we only ship PGO and only need to test PGO, so the jobs were cut in half with a simple fix.

Looking at 2, frequency is something else. Many tests are for information or comparison only, not for tracking per commit. Running most tests once a day or even once a week will still give a signal, while our most diverse and effective tests run more frequently.

The last option, 3, is where all developers have a chance to spoil the fun for everyone else. One thing is different for try pushes: they are scheduled on the same test machines as our release and integration branches, except they are put in a separate queue that runs at priority 2. Basically, if any new jobs get scheduled on an integration branch, the next available devices will pick those up, and your try push will have to wait until all integration jobs for that device are finished. This keeps our trees open more frequently (if we had 50 commits with no tests run, we could be backing out changes from 12 hours ago, which maybe were already released or have bitrotted by the time we perform the backout). Another aspect of this is that there are >10K jobs one could possibly run when scheduling a try push, and knowing what to run is hard. Many developers know what to run, and some over-schedule, either out of difficulty in job selection or out of being overly cautious.

Keeping all of this in mind, I often see pushes to our try server scheduling what looks to be way too many jobs on hardware. Once someone does this, everybody else who wants to get their 3 jobs run has to wait in line behind the queue of jobs (many times 1000+), which often only get run during the night for North America.

I would encourage developers pushing to try to really question whether they need all jobs, or just a sample of the possible jobs. With tools like ./mach try fuzzy, ./mach try chooser, or ./mach try empty it is easier to schedule what you need instead of blanket commands that run everything. I also encourage everyone to cancel old try pushes if a second try push has been performed to fix errors from the first one – that alone saves a lot of unnecessary jobs from running.


Hacks.Mozilla.Org: MDN's First Annual Web Developer & Designer Survey

Today we are launching the first edition of the MDN Developer & Designer Needs Survey. Web developers and designers, we need to hear from you! This is your opportunity to tell us about your needs and frustrations with the web. In fact, your participation will influence how browser vendors like Mozilla, Google, Microsoft, and Samsung prioritize feature development.

Goals of the MDN Developer Needs Survey

With the insights we gather from your input, we aim to:

  • Understand and prioritize the needs of web developers and designers around the world. If you create sites or services using HTML, CSS and/or JavaScript, we want to hear from you.
  • Understand how browser vendors should prioritize feature development to address your needs and challenges.
  • Set a baseline to evaluate how your needs change year over year.

The survey results will be published on MDN Web Docs (MDN) as an annual report. In addition, we’ll include a prioritized list of needs, with an analysis of priorities by geographic region.

It’s easy to take part. The MDN survey has been designed in collaboration with the major browser vendors mentioned above, and the W3C. We expect it will take approximately 20 minutes to complete. This is your chance to make yourself heard. Results and learnings will be shared publicly on MDN later this year.

Please participate

You’re invited to take the survey here. Your participation can help move the web forward, and make it a better experience for everyone. Thank you for your time and attention!

Take the MDN survey!


Rabimba: ARCore and ARKit: What is under the hood: Anchors and World Mapping (Part 1)

Reading Time: 7 min
Some of you know I have recently been experimenting a bit more with WebXR than WebVR, and when we talk about mobile Mixed Reality, ARKit and ARCore play a pivotal role in mapping and understanding the environment inside our applications.

I am planning to write a series of blog posts on how you can start developing WebXR applications now and play with them, starting with the basics and then going on to use the different features. But before that, I wanted to pen down this series on how "world mapping" actually works in ARCore and ARKit, so that we have a better understanding of the Mixed Reality capabilities of the devices we will be working with.

Mapping: feature detection and anchors

Creating apps that work seamlessly with ARCore/ARKit requires a little bit of knowledge about the algorithms that work behind the scenes, and that involves knowing about anchors.

What are anchors?

Anchors are your virtual markers in the real world. As a developer, you anchor any virtual object to the real world, and that 3D model will stay glued to that physical location. Anchors also get updated over time depending on the new information the system learns. For example, if you anchor a Pikachu 2 ft away from you and then walk towards it, and the system realises the actual distance is 2.1 ft, it will compensate for that. In real life we have a static coordinate system where every object has its own x, y, z coordinates. Anchors in devices like HoloLens override the rotation and position of the transform component.

How do anchors stick?

If we follow the documentation for Google ARCore, we see that an anchor is attached to something called "trackables", which are feature points and planes in an image. Planes are essentially clustered feature points. You can have a more in-depth look at what Google says an ARCore anchor does by reading their really nice Fundamentals. But for our purposes, we first need to understand what exactly these Feature Points are.

Feature Points: through the eyes of computer vision

Feature points are distinctive markers on an image that an algorithm can use to track specific things in that image. Normally any distinctive pattern, such as a T-junction or a corner, is a good candidate. Alone, they are not useful enough to distinguish from one another or to reliably mark a place on an image, so the neighbouring pixels of the image are also analyzed and saved as a descriptor.
Now a good anchor should have reliable feature points attached to it. The algorithm must be able to find the same physical space under different viewpoints. It should be able to accommodate changes in:
  • Camera Perspective
  • Rotation
  • Scale
  • Lighting
  • Motion blur and noise
Reliable Feature Points
This is an open research problem with multiple solutions on the board, each with its own set of problems. One of the most popular algorithms, described in this paper by David G. Lowe in IJCV, is called Scale Invariant Feature Transform (SIFT). A follow-up work which claims even better speed, Speeded Up Robust Features (SURF) by Bay et al., was published at ECCV in 2006. Though both of them are patented at this point.

Microsoft HoloLens doesn't really need to do all this heavy lifting, since it can rely on an extensive amount of sensor data; in particular it gets aid from the depth data produced by its infrared sensors. However, ARCore and ARKit don't enjoy those privileges and have to work with 2D images. Though we cannot say for sure which algorithm is actually used by ARKit or ARCore, we can try to replicate the process with a patent-free algorithm to understand how it actually works.

D.I.Y Feature Detection and keypoint descriptors

To understand the process we will use an algorithm by Leutenegger et al. called BRISK. To detect features we must follow a multi-step process. A typical algorithm would adhere to the following two steps:
  1. Keypoint Detection: Detecting keypoints can be as simple as just detecting corners, which essentially means evaluating the contrast between neighbouring pixels. A common way to do that on a large-scale image is to blur the image to smooth out pixel contrast variation and then do edge detection on it. The rationale for this is that you would normally want to detect a tree and a house as a whole to achieve reliable tracking, instead of every single twig or window. SIFT and SURF adhere to this approach. However, for real-time scenarios blurring adds a compute penalty which we don't want. In their paper "Machine Learning for High-Speed Corner Detection", Rosten and Drummond proposed a method called FAST, which analyzes the circular surroundings of each pixel p. If the neighbouring pixels' brightness is lower/higher than the centre pixel's by a certain threshold, and a certain number of connected pixels fall into this category, then the algorithm has found a corner.
    Image credits: Rosten E., Drummond T. (2006) Machine Learning for High-Speed Corner Detection.. Computer Vision – ECCV 2006. 
    Now back to BRISK: for it, out of the 16-pixel circle, 9 consecutive pixels must be brighter or darker than the central one. BRISK also uses down-sized images, allowing it to achieve better invariance to scale.
  2. Keypoint Descriptor: The primary property of the detected keypoints should be that they are unique. The algorithm should be able to find the feature in a different image with a different viewpoint and lighting. BRISK compares the brightness between pixels surrounding the centre keypoint and concatenates the results into a 512-bit string.
    Image credits: S. Leutenegger, M. Chli and R. Y. Siegwart, “BRISK: Binary Robust invariant scalable keypoints”, 2011 International Conference on Computer Vision
    As we can see from the sample figure, the blue dots create the concentric circles and the red circles indicate the individually sampled areas. Based on these, the results of the brightness comparisons are determined. BRISK also ensures rotational invariance; this is calculated from the largest gradients between two samples that lie far apart.

Test out the code

To test out the algorithms we will use the reference implementation available in OpenCV. We can install it with Python bindings via pip (e.g. pip install opencv-python).

As test images, I used two pictures I had previously captured during my talks at All Things Open and a TechSpeakers meetup. One is an evening shot of the Louvre, which should have enough corners as well as overlapping edges, and the other is a picture of my friend in a beer garden, taken in portrait mode, to see how the algorithm fares against the blur already present in the image.

Visualizing the Feature Points
We use the small code snippet below to run BRISK on the two images above and visualize the feature points.
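
The original snippet embed did not survive here, so below is a reconstruction in Python/OpenCV that follows the four steps described next; file names are placeholders and the original gist may differ in details:

    import cv2

    # 1. Load the jpeg and convert it to grayscale.
    img = cv2.imread("louvre.jpg", cv2.IMREAD_GRAYSCALE)

    # 2. Initialize BRISK with the paper-suggested 4 octaves and a raised
    #    threshold of 70, for fewer but more reliable keypoints.
    brisk = cv2.BRISK_create(thresh=70, octaves=4)

    # 3. Get the keypoints and their descriptors in one call.
    keypoints, descriptors = brisk.detectAndCompute(img, None)

    # 4. Draw keypoints with size and orientation shown as circle diameter
    #    and angle.
    out = cv2.drawKeypoints(
        img, keypoints, None,
        flags=cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
    cv2.imwrite("louvre_keypoints.jpg", out)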


What the code does:
  1. Loads the jpeg into a variable and converts it to grayscale
  2. Initializes BRISK and runs it. We use the paper-suggested 4 octaves and an increased threshold of 70. This ensures we get a low number of highly reliable keypoints (though as we will see below, we still got a lot)
  3. Uses detectAndCompute() to get two arrays from the algorithm, holding both the keypoints and their descriptors
  4. Draws the keypoints at their detected positions, indicating keypoint size and orientation through circle diameter and angle; the DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS flag does that


As you can see, for the Louvre most of the keypoints are at the castle edges and other visible edges, and almost none are on the floor. With the portrait-mode picture of my friend the result is more diverse, but it also shows some false positives, like the reflection on the glass. Considering how BRISK works, this is normal.

Conclusion

In this first part of understanding what powers ARCore/ARKit, we looked at how basic feature detection works behind the scenes and tried our hand at replicating it. These spatial anchors are vital for virtual objects to "glue" to the real world. This demo and hands-on should help you understand why it is a good idea to design your apps so that users place objects in areas where the device has a chance to create anchors.
Placing a lot of objects on a single non-textured smooth plane may produce an inconsistent experience, since it is just too hard to detect keypoints on a plain surface (now you know why). As a result, the objects may sometimes drift away if tracking is not good enough.
If your app design encourages placing objects near the corners of a floor or table, the app has a much better chance of working reliably. Also, Google's guide on anchor placement is an excellent read.

Their short recommendation is:
  • Release spatial anchors if you don’t need them. Each anchor costs CPU cycles which you can save
  • Keep the object close to the anchor. 
In the next post, we will see how we can use this knowledge to do basic SLAM.

Update: The next post lives here: https://blog.rabimba.com/2018/10/arcore-and-arkit-SLAM.html

This Week In Rust: This Week in Rust 295

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's crate is overloadable, a crate that provides you with the capability to overload your functions in a similar style to C# or C++, including support for meta attributes, type parameters and constraints, and visibility modifiers. Thanks to Stevensonmt for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

235 pull requests were merged in the last week

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs

No RFCs are currently in final comment period.

Tracking Issues & PRs

New RFCs

Upcoming Events

Africa
Asia Pacific
Europe
North America
South America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

Rust is 5 languages stacked on top of each other, except that instead of ending up like 5 children under a trenchcoat, they end up like the power rangers.

reuvenpo on /r/rust

Thanks to Jelte Fennema for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nasa42, llogiq, and Flavsditz.

Discuss on r/rust.

The Firefox Frontier: 100,985,047 have been invited to the Evite data breach "party"

Did you get an invitation to the latest data breach? Over the weekend it was disclosed that Evite, the online invitation platform that has sent more than a few birthday …


Kartikaya Gupta: The Gecko Hacker's Guide to Taskcluster

Don't panic.

I spent a good chunk of this year fiddling with taskcluster configurations in order to get various bits of continuous integration stood up for WebRender. Taskcluster configuration is very flexible and powerful, but can also be daunting at first. This guide is intended to give you a mental model of how it works, and how to add new jobs and modify existing ones. I'll try and cover things in detail where I believe the detail would be helpful, but in the interest of brevity I'll skip over things that should be mostly obvious by inspection or experimentation if you actually start digging around in the configurations. I also try and walk through examples and provide links to code as much as possible.

Based on my experience, there are two main kinds of changes most Gecko hackers might want to do:
(1) modify existing Gecko test configurations, and
(2) add new jobs that run in automation.
I'm going to explain (2) in some detail, because with that covered, explaining (1) is a lot easier.

Overview of fundamentals

The taskcluster configuration lives in-tree in the taskcluster/ folder. The most interesting subfolders there are ci/, which contains job definitions in .yml files, and taskgraph/, which contains python scripts that implement transforms. A quick summary of the process: the job definitions are taken from the .yml files and run through a series of "transforms", finally producing a task definition. The task definitions may have dependencies on other task definitions; together this set is called the "task graph". That is then submitted to Taskcluster for execution. Conceptually you can think of the stuff in the .yml files as a higher-level definition, which gets "compiled" down to the final taskgraph (the "machine code") that Taskcluster actually understands and executes.

It's helpful to walk through a quick example of a transform. Consider the webrender-linux-release job definition. At the top of the kind.yml file, a number of transforms are listed, and each of those gets applied to the job definition in turn. The first one is use_toolchains, the code for which you can find in use_toolchains.py. This transform takes the toolchains attributes of the job definition (in this example, these attributes) and figures out the corresponding toolchain jobs and artifacts (artifacts are files that are outputs of a task). The toolchain jobs are defined in taskcluster/ci/toolchains/, so we can see that the linux64-rust toolchain is here and the wrench-deps toolchain is here. The transform code discovers this, then adds dependencies for those tasks to webrender-linux-release, and populates the MOZ_TOOLCHAINS env var with links to the artifacts. Taskcluster ensures that tasks only get run after their dependencies are done, and MOZ_TOOLCHAINS is used by ./mach artifact toolchain to actually download and unpack those toolchains when the webrender-linux-release task runs.

Similar to this, there are lots of other transforms that live in taskcluster/taskgraph/transforms/ that do other transformations on the job definitions. Some of the job definitions in the .yml files look very different in terms of what attributes are defined, and go through many layers of transformation before they produce the final task definition. In a sense, each folder in taskcluster/ci is a domain-specific language, with the transforms eventually compiling them down to the same machine code.
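
To give a feel for the shape of a transform, here is a minimal sketch using the in-tree TransformSequence helper; the body is illustrative only, not an actual in-tree transform:

    # Sketch of a taskgraph transform: a generator that rewrites each job.
    from taskgraph.transforms.base import TransformSequence

    transforms = TransformSequence()

    @transforms.add
    def add_toolchain_env(config, jobs):
        for job in jobs:
            # Illustrative: lower a high-level attribute into an env var.
            toolchains = job.pop("toolchains", [])
            if toolchains:
                env = job.setdefault("worker", {}).setdefault("env", {})
                env["MOZ_TOOLCHAINS"] = " ".join(toolchains)
            yield job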

One of the more important transforms is the job transform, which determines which wrapper script will be used to run your job's commands. This is usually specified in your job definition using the run.using attribute. Examples include mach or toolchain-script. These both get transformed (see transforms for mach and toolchain-script) into commands that get passed to run-task. Or you can just use run-task directly in your job, and specify the command you want to have run. In the end most jobs will boil down to run-task commands; the run-task wrapper script just does some basic abstraction over the host machine/VM and then runs the command.

Host environment and Docker

Taskcluster can manage many different underlying host systems - from Amazon Linux VMs, to dedicated physical macOS machines, and everything in between. I'm not going to go into the details of provisioning, but there's the notion of a "worker" which is useful to know about. This is the binary that runs on the host system, polls taskcluster to find new tasks scheduled to be run on that kind of host system, and executes them in whatever sandboxing is available on that host system. For example, a docker-worker instance will start a specified docker image and execute the task commands in that. A generic-worker instance will instead create a dedicated work folder and run stuff in there, cleaning up afterwards.

If you're adding a new job (e.g. some sort of code analysis or whatever) most likely you should run on Linux using docker. This means you need to specify a docker image, which you can also do using the taskcluster configuration. Again I will use the webrender-linux-release job as an example. It specifies the webrender docker image to use, which is defined with an in-tree Dockerfile. The docker image is itself built using taskcluster with the job definition here (this job definition is an example of one that looks very different from other job definitions, because the job description is literally two lines and transforms do most of the work in compiling this into a full task definition).

Circling back to generic-worker: this is the default for jobs that need to run on Windows and macOS, because Docker doesn't run on either of those platforms as a target. An example is the webrender-macos-debug job, which specifies using: run-task on a t-osx-1010 worker type, which will cause it to go through this transform and eventually run using the run-task wrapper under generic-worker on the macOS instance. For the most part you probably won't need to care about this, but it's something to be aware of if you're running jobs targeted at Windows or macOS.

Caching

As we've seen from previous sections, jobs can use docker images and toolchain artifacts that are produced by other tasks in the taskgraph. Of course, we don't want to rebuild these docker images and toolchains on every single push, as that would be quite expensive. Taskcluster provides a mechanism for caching and reusing artifacts from previous pushes. I'm not going to go into too much detail on this, but you can look at the cached_tasks transform if you're interested. I will, however, point to the %include comments in e.g. this Dockerfile and note that these include statements are special because they mark the docker image as dependent on the given file. So if the included file changes, the docker image will be rebuilt. For toolchain tasks you can specify additional dependencies on inputs using the resources attribute; this also triggers toolchain rebuilds if the dependent inputs change.

The other thing to keep in mind when adding a new job is that you want to avoid too much network traffic or redundant work. So if your job involves downloading or building stuff that usually doesn't change from one push to the next, you probably want to split up your job so that the mostly-static part is done by a toolchain or other job, and the result of that is cached and reused by your main job. This will reduce overall load and also improve the runtime of your per-push job.

Even if you don't need caching across pushes, you might want to refactor two jobs so that their shared work is extracted into a dependency, and the two dependent jobs then just do their unique postprocessing bits. This can be done by manually specifying the dependencies and pulling in artifacts from those dependencies using the fetches attribute. See here for an example. In this scenario taskcluster will again ensure the jobs run in the right order, and you can use artifacts from the dependencies, but no caching across pushes takes place.

Adding new jobs

So hopefully with the above sections you have a general idea of how taskcluster configuration works in mozilla-central CI. To add a new job, you probably want to find an existing job kind that fits what you want, and then add your job to that folder, possibly by copy/pasting an existing job with appropriate modifications. Or if you have a new type of job you want to run that's significantly different from existing ones, you can add a new kind (a new subfolder in taskcluster/ci and documented in kinds.rst). Either way, you'll want to ensure the transforms being used are appropriate and allow you to reuse the features (e.g. toolchain dependencies) that you need and that already exist.

Gecko testing

The Gecko test jobs are defined in the taskcluster/ci/test/ folder. The entry point, as always, is the kind.yml file in that folder, which lists the transforms that get applied. The tests transform is one of the largest and most complex transforms. It does a variety of things (e.g. generating fission-enabled and fission-disabled tasks for jobs), but thankfully you probably won't need to fiddle with it too much, unless you find your test suite is behaving unexpectedly. Instead, you can mostly do copy-pasting in the other .yml files to enable test suites on particular platforms or adjust options. The test-platforms.yml file allows you to define "test platforms", which show up as new rows on TreeHerder and run sets of tests on a particular build. The sets of tests are defined in test-sets.yml, which in turn references the individual test jobs defined in the various other .yml files in that folder. Enabling a test suite on a platform is generally as easy as adding the test to the test set that you care about, and maybe tweaking some of the per-platform test attributes (e.g. number of chunks, or what trees to run on) to suit your new platform. I found the .yml files to be mostly self-explanatory, so I won't walk through any examples here.

What I will briefly mention is how the tests are actually run. This is not strictly part of taskcluster but is good to know anyway. The test tasks generally run using mozharness, which is a set of scripts in testing/mozharness/scripts and configuration files in testing/mozharness/configs. Mozharness is responsible for setting up the firefox environment and then delegates to the actual test harness. For example, running reftests on desktop linux would run testing/mozharness/scripts/desktop_unittest.py with the testing/mozharness/configs/unittests/linux_unittest.py config (both indicated in the job description here). The config file, among other things, tells the mozharness script where the actual test harness entrypoint is (in this case, runreftest.py) and the mozharness script will invoke that test harness after doing some setup. There are many layers here (run-task, mozharness, test harness, etc.), with the number of layers varying across platforms (mozharness scripts for Android device/emulator are totally different from desktop), and I don't have as good a grasp on all this as I would like, but hopefully this is sufficient to point you in the right direction if you need to fiddle with tests at this level.

Debugging

As always, when modifying configs you might run into unexpected problems and need to debug. There are a few tools that are useful here. One is the ./mach taskgraph command, which can run different steps of the "decision" task and generate taskgraphs. When trying to debug task generation issues, my go-to technique is to download a parameters.yml file from an existing decision task on try or m-c (you can find it in the artifacts list for the decision task on TreeHerder), and then run ./mach taskgraph target-graph -p parameters.yml. This runs the taskgraph code and emits a list of tasks that would be scheduled given the taskcluster configuration in your local tree and the parameters provided. Likewise, ./mach taskcluster-build-image and ./mach taskcluster-load-image are useful for building and testing docker images for use with jobs. You can use these to e.g. run a docker image with your local Docker installation, and see what you might need to install on it to make it ready to run your job.
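
Putting that together, a typical local debugging session might look like this (my-new-task is a placeholder for whatever task label you are adding; debian7-amd64-build is one real image name):

$ ./mach taskgraph target-graph -p parameters.yml > tasks.txt
$ grep my-new-task tasks.txt
$ ./mach taskcluster-build-image debian7-amd64-build
$ ./mach taskcluster-load-image debian7-amd64-build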

Another useful debugging tool, as you start doing try pushes to test your new tasks, is the task/group inspector at tools.taskcluster.net. This is easily accessible from TreeHerder by clicking on a job and using the "Inspect task" link in the details pane (bottom left). TreeHerder provides some information, but the Taskcluster tools website provides a much richer view, including the final task description that was submitted to Taskcluster, its dependencies, environment variables, and so on. While TreeHerder is useful for looking at overall push health, the Taskcluster web UI is better for debugging specific task problems, especially as you're in the process of standing up a new task.

In general the taskcluster scripts are pretty good at identifying errors and printing useful error messages. Schemas are enforced, so it's rare to run into silent failures because of typos and such.

Conclusion

There's a lot of power and flexibility afforded by Taskcluster, but with that comes a steep learning curve. Thankfully, once you understand the basic shape of how Taskcluster works, most of the configuration tends to be fairly intuitive, and reading existing .yml files is a good way to understand the different features available. grep/searchfox in the taskcluster/ folder will help you out a lot. If you run into problems, there are always people willing to help - as of this writing, :tomprince is the best first point of contact for most of this, and he can redirect you if needed.

Firefox NightlyThese Weeks in Firefox: Issue 61

Highlights

  • OOP iframes somewhat work now on Nightly!
    • For the super-brave, you can test it yourself by creating a boolean preference fission.autostart and setting it to true, and then restarting Firefox
    • Please file bugs that you find against this metabug
    • Expect general instability for the next little while
  • Event Breakpoints enabled on Nightly, allowing you to break on a specific event type (e.g. a mouse click).
    • The Debugger showing the Event Listener Breakpoints pane, allowing developers to set breakpoints on individual events.

      Not sure where your click events are being handled? An Event Listener Breakpoint will help you track that down by letting you break on any click event, and stepping through each listener!

  • Blocked resources are displayed in the Network panel (rendered in red and using a new status label) (platform bug, DevTools meta)
    • Blocked resources in the Network Monitor are displayed and marked in red.

      The blocked resources are highlighted in red.

  • AVG blocked our access to logins.json and broke our password database for some of our users. We released an add-on, a heartbeat message, and a SUMO article to help users recover from the data loss. Thanks to everyone who helped with this effort!
  • The flow for publishing Gecko Profiler performance profiles is streamlined! In addition, the profiler now accepts gzipped profiles from Firefox, letting us send larger profile data from the add-on (previously a problem because of the IPC data limit)
    • An animated GIF demonstrating the new publishing flow for the Firefox Profiler tool

      Sharing profiles with Firefox developers recorded during slow times helps us investigate and fix performance problems.

Friends of the Firefox team

Resolved bugs (excluding employees)

Fixed more than one bug

  • Bryan Kok [:transfusion]
  • Chujun Lu
  • Florens Verschelde :fvsch
  • Heng Yeow (:tanhengyeow)
  • janelledement
  • jaril
  • Karan Sapolia
  • Kestrel
  • Masatoshi Kimura [:emk]
  • Michael Krasnov
  • Miriam
  • premk
  • Thomas
  • Tim Nguyen :ntim

New contributors (🌟 = first patch)

Project Updates

Developer Tools

Console
  • Console responsive toolbar redesign
    • The Web Console shown in both a wide and slim mode.
  • New, better warning groups, as part of our “unclutter the Console panel” effort. Compare the amount of content in the following screenshots!
    • Showing grouped warnings in the Web Console.
  • “Export Console content to file” option available through the Console panel context menu.
    • The context menu for the Web Console showing the item for exporting content to a file.
Debugger
  • Displaying extension name in the Sources panel (bug)
    • The Sources list in the Debugger showing a list of sources from a WebExtension
  • Original Rust variables are shown, thanks to Yury Delendik’s work on compiling DWARF metadata into sourcemaps and our existing map-scopes functionality.
    • The Debugger showing the original Rust source from a WASM module thanks to sourcemaps.
Inspector
  • The node infobar, displayed at the top of the highlighted area, will now display whether the highlighted element is a flex or grid container or item (bug).
    • The Inspector Picker now showing additional context information for some layout, including whether or not the selected item is a flex container or a flex item.
  • Bug 1464440 – [meta] subgrid support in grid inspector
    • Showing the Grid Inspector on a layout with a subgrid layout.

      Subgrid support is coming!

Documentation
  • DevTools documentation for 68 updated.

Fission

Lint

New Tab Page

Password Manager

Performance

Performance tools

  • Added a context menu to the network panel.
    • Showing the context menu on top of the Network panel of the Firefox Profiler tool.

      This should help make it easier for Firefox developers to navigate performance profiles and focus in on the important things.

  • Sub-category information in the sidebar
    • Showing the submenu in the Firefox Profiler, breaking down the categories that the selected samples fall into.

      This submenu helps Firefox developers know where your precious time is being spent when examining performance profiles.

  • JavaScript-only stacks now include more information about what is happening on the C++ side.
    • The Call Tree Viewer in the Firefox Profiler tool, showing JavaScript stacks interleaved with frames that indicate what was happening in the DOM native layer.

      Special frames have been added in the JavaScript-only mode of the Call Tree Viewer which help make it clearer what the JavaScript was doing with the DOM.

  • More markers have backtraces in the tooltip now.
    • A tooltip showing the stacktrace for a Notify Observer marker in the Firefox Profiler

      Firefox Developers get more hints this way about why certain things happened that might be taking up time.

Picture-in-Picture

Search and Navigation

Search
Quantum Bar
  • On track to be released with Firefox 68; fixed a few regressions in recent weeks
  • Working on many experiments: at least 2 of them targeting 69, 3 targeting 70, more incoming
  • Started working on a partial address bar UI refresh for Firefox 70 (internally known as the Mega Bar)
  • Started working on legacy address bar code removal
Places

User Journey

  • Created a new “Messaging System” Bugzilla component, currently covering the What’s New panel, Contextual Feature Recommendations, Onboarding / First Run, and Snippets
  • Adding Feature Callouts to toolbar and menu items as well as What’s New to highlight Firefox features and updates
  • Turned on Trailhead “Join Firefox” about:welcome messaging with “Privacy” cards for all locales in 68+

Hacks.Mozilla.OrgAdd-Ons Outage Post-Mortem Result

Editor’s Note: July 12, 1:52pm pt – Updated Balrog update frequency and added some more background.

As I mentioned in my previous post, we’ve been conducting a post-mortem on the add-ons outage. Sorry this took so long to get out; we’d hoped to have this out within a week, but obviously that didn’t happen. There was just a lot more digging to do than we expected. In any case, we’re now ready to share the results. This post provides a high level overview of our findings, with more detail available in Sheila Mooney’s incident report and Matt Miller & Peter Saint-Andre’s technical report.

Root Cause Analysis

The first question that everyone asks is “how did you let this happen?” At a high level, the story seems simple: we let the certificate expire. This seems like a simple failure of planning, but upon further investigation it turns out to be more complicated: the team responsible for the system which generated the signatures knew that the certificate was expiring but thought (incorrectly) that Firefox ignored the expiration dates. Part of the reason for this misunderstanding was that in a previous incident we had disabled end-entity certificate checking, and this led to confusion about the status of intermediate certificate checking. Moreover, the Firefox QA plan didn’t incorporate testing for certificate expiration (or generalized testing of how the browser will behave at future dates) and therefore the problem wasn’t detected. This seems to have been a fundamental oversight in our test plan.

The lesson here is twofold: (1) we need better communication and documentation of these parts of the system and (2) this information needs to get fed back into our engineering and QA work to make sure we’re not missing things. The technical report provides more details.

Code Delivery

As I mentioned previously, once we had a fix, we decided to deliver it via the Studies system (this is one part of a system we internally call “Normandy”). The Studies system isn’t an obvious choice for this kind of deployment because it was intended for deploying experiments, not code fixes. Moreover, because Studies permission is coupled to Telemetry, this meant that some users needed to enable Telemetry in order to get the fix, leading to Mozilla temporarily over-collecting data that we didn’t actually want, which we then had to clean up.

This leads to the natural question: “isn’t there some other way you could have deployed the fix?” to which the answer is “sort of.” Our other main mechanisms for deploying new code to users are dot releases and a system called “Balrog”. Unfortunately, both of these are slower than Normandy: Balrog checks for updates every 12 hours (though there turns out to have been some confusion about whether this number was 12 or 24), whereas Normandy checks every 6. Because we had a lot of users who were affected, getting them fixed was a very high priority, which made Studies the best technical choice.

The lesson here is that we need a mechanism that allows fast updates that isn’t coupled to Telemetry and Studies. The property we want is the ability to quickly deploy updates to any user who has automatic updates enabled. This is something our engineers are already working on.

Incomplete Fixes

Over the weeks following the incident, we released a large number of fixes, including eight versions of the system add-on and six dot releases. In some cases this was necessary because older deployment targets needed a separate fix. In other cases it was a result of defects in an earlier fix, which we then had to patch up in subsequent work. Of course, defects in software cannot be completely eliminated, but the technical report found that at least in some cases a high level of urgency combined with a lack of available QA resources (or at least coordination issues around QA) led to testing that was less thorough than we would have liked.

The lesson here is that during incidents of this kind we need to make sure that we not only recruit management, engineering, and operations personnel (which we did) but also ensure that we have QA available to test the inevitable fixes.

Where to Learn More

If you want to learn more about our findings, I would invite you to read the more detailed reports we produced. And as always, if those don’t answer your questions, feel free to email me at ekr-blog@mozilla.com.

The post Add-Ons Outage Post-Mortem Result appeared first on Mozilla Hacks - the Web developer blog.

QMOFirefox Nightly 70 Testday, July 19th

Hello Mozillians,

We are happy to let you know that Friday, July 19th, we are organizing Firefox Nightly 70 Testday. We’ll be focusing our testing on Fission.

Check out the detailed instructions via this etherpad.

No previous testing experience is required, so feel free to join us on the #qa IRC channel, where our moderators will offer you guidance and answer your questions.

Join us and help us make Firefox better!

See you on Friday!

Mike HommeyReproducing the Linux builds of Firefox 68

Starting with Firefox 68, the Linux builds shipped by Mozilla should be reproducible (it is not currently automatically validated that it definitely is, but 68.0 is). These builds are optimized with Profile Guided Optimization, and the profile data was not kept and published until recently, which is why they weren’t reproducible until now.

The following instructions require running Docker on a Linux host (this may or may not work on a non-Linux host; I don’t know what e.g. Docker for Mac does, or whether the docker support in the mach command works with it). I’ll try to make them generic enough that they may apply to any subsequent release of Firefox.

  • Clone either the mozilla-unified or mozilla-release repository. You can use Mercurial or Git (with git-cinnabar), it doesn’t matter.
  • Checkout the FIREFOX_68_0_RELEASE tag and find out what its Mercurial changeset id is (it is 353628fec415324ca6aa333ab6c47d447ecc128e).
  • Open the Taskcluster index tool in a browser tab.
  • In the input field type or copy/paste gecko.v2.mozilla-release.shippable.revision.353628fec415324ca6aa333ab6c47d447ecc128e.firefox.linux64-opt and press the Enter key. (replace 353628fec415324ca6aa333ab6c47d447ecc128e with the right revision if you’re trying for another release)
  • This will fill the “Indexed Task” pane, where you will find a TaskId. Follow the link there; it will bring you to the corresponding Task Run Logs
  • Switch to the Task Details
  • Scroll down to the “Dependencies” list, and check the task name that begins with “build-docker-image”. For the Firefox 68 build task, it is build-docker-image-debian7-amd64-build.
  • Take that name, remove the “build-docker-image-” prefix, and run the following command, from inside the repository, to download the corresponding docker image:
    $ ./mach taskcluster-load-image debian7-amd64-build

    Obviously, replace debian7-amd64-build with whatever you found in the task dependencies. The image can also be built from the source tree, but this is out of scope for this post.

  • The command output will give you a docker run -ti ... command to try. Run it. It will open a shell in the docker image.
  • From the docker shell, run the following commands:
    $ echo no-api-key > /builds/mozilla-desktop-geoloc-api.key
    $ echo no-api-key > /builds/sb-gapi.data
    $ echo no-api-key > /builds/gls-gapi.data
    

    Or replace no-api-key with the actual keys if you have them.

  • Back to the Task Details, check the env part of the “Payload”. You’ll need to export all these variables with the corresponding values. e.g.
    $ export EXTRA_MOZHARNESS_CONFIG='{"update_channel": "release", "mozconfig_variant": "release"}'
    $ export GECKO_BASE_REPOSITORY='https://hg.mozilla.org/mozilla-unified'
    $ export GECKO_HEAD_REPOSITORY='https://hg.mozilla.org/releases/mozilla-release'
    ...
  • Set the missing TASKCLUSTER_ROOT_URL environment variable:
    $ export TASKCLUSTER_ROOT_URL='https://taskcluster.net'
  • Change the value of MOZHARNESS_ACTIONS to:
    $ export MOZHARNESS_ACTIONS='build'

    The original value contains get-secrets, which would try to download from http://taskcluster/ and fail with a DNS error, and check-test, which runs make check, which is not necessary to get a working Firefox.

  • Take the command part of the “Payload”, and run that in the docker shell:
    $ /builds/worker/bin/run-task --gecko-checkout /builds/worker/workspace/build/src -- /builds/worker/workspace/build/src/taskcluster/scripts/builder/build-linux.sh
  • Once the build is finished, in another terminal, check what the container id of your running docker container is, and extract the build artifact from there:
    $ docker ps
    CONTAINER ID        IMAGE                                                                                  COMMAND             CREATED             STATUS              PORTS               NAMES
    d234383ba9c7        debian7-amd64-build:be96d1b734e1a152a861ce786861fca6e70bcb996bf67347f5af4f146db157ec   "bash"              2 hours ago         Up 2 hours                              nifty_hermann
    $ docker cp d234383ba9c7:/builds/worker/artifacts/target.tar.bz2 .

    (replace d234383ba9c7 with your container id)

  • Now you can exit the docker shell. That will remove the container.

After all the above, you can finally compare your target.tar.bz2 to the Linux64 Firefox 68 release. You will find a few inevitable differences:

  • The .chk files will be different, because they are self-signatures for FIPS mode that are generated with one-time throw-away keys.
  • The Firefox 68 release contains .sig files that your build won’t contain. They are signature files, which aren’t reproducible outside Mozilla automation for obvious reasons.
  • Consequently, the precomplete file contains instructions for the .sig files in the Firefox 68 release that won’t be in your build.
  • The omni.ja files are different. If you extract them (they are uncompressed zip files with a few tweaks to the format), you’ll see the only difference is in modules/AppConstants.jsm, for the three API keys you created a file for earlier.

Everything else is identical bit for bit.

All the above is a rather long list of manual steps. Ideally, most of it would be automated. We’re not there yet. We only recently got to the point where the profile data is available to make it possible at all. In other words, this is a starting point. It’s valuable to know that it does work, that it currently requires manual steps, and what those steps are.

It is also worth noting that while the above downloads and uses pre-built compilers and other tools, it is also possible to rebuild those, although they likely won’t be bit-for-bit identical. But differences in those shouldn’t incur differences in Firefox. Replacing the pre-built ones with ones you’d build yourself unfortunately currently requires some more manual work.

As for Windows and Mac builds, long story short, they are not reproducible as of this writing. Mac builds are not optimized with PGO, but Windows builds are, and their profile data won’t be available until Firefox 69. Both platforms require SDKs that Mozilla can’t redistribute per their license (but which are otherwise available for download from Microsoft or Apple, respectively), which makes the setup more complex. And in all likelihood, for both platforms, the toolchains are not deterministic yet (that’s at least true for Mac). Also, binary signatures would need to be stripped off the executables and libraries before any comparison.

Mozilla Security BlogGrizzly Browser Fuzzing Framework

At Mozilla, we rely heavily on automation to increase our ability to fuzz Firefox and the components from which it is built. Our fuzzing team is constantly developing tools to help integrate new and existing capabilities into our workflow with a heavy emphasis on scaling. Today we would like to share Grizzly – a browser fuzzing framework that has enabled us to quickly and effectively deploy fuzzers at scale.

Grizzly was designed to allow fuzzer developers to focus solely on writing fuzzers and not worry about the overhead of creating tools and scripts to run them. It was created as a platform for our team to run internal and external fuzzers in a common way using shared tools. It is cross-platform and supports running multiple instances in parallel.

Grizzly is responsible for:

  • managing the browser (via Target)
    • launching
    • terminating
    • monitoring logs
    • monitoring resource usage of the browser
    • handling crashes, OOMs, hangs, etc.
  • managing the fuzzer/test case generator tool (via Adapter)
    • setup and teardown of tool
    • providing input for the tool (if necessary)
    • creating test cases
  • serving test cases
  • reporting results
    • basic crash deduplication is performed by default
    • FuzzManager support is available (with advanced crash deduplication)

Grizzly can be extended by implementing the “Target” or “Adapter” interface. Targets are used to add support for specific browsers. This is where the quirks and complexities of each browser are handled. See puppet_target.py for an example which uses FFPuppet to add support for Firefox. Adapters are used to add support for fuzzers. A basic functional example can be found here. See here for a slightly more advanced example that can be modified to support existing fuzzers.

Grizzly is primarily intended to support blackbox fuzzers. For a feedback driven fuzzing interface please see the libfuzzer fuzzing interface. Grizzly also has a test case reduction mode that can be used on crashes it finds.

For more information please check out the README.md in the repository and the wiki. Feel free to ask questions on IRC in #fuzzing.

The post Grizzly Browser Fuzzing Framework appeared first on Mozilla Security Blog.

Dave TownsendPlease watch your character encodings

I started writing this as a newsgroup post for one of Mozilla’s mailing lists, but it turned out to be too long and since this part was mainly aimed at folks who either didn’t know about or wanted a quick refresher on character encodings I decided to blog it instead. Please let me know if there are errors in here, I am by no means an expert on this stuff either and I do get caught out sometimes!

Text is tricky. Unicode supports the notion of 1,114,112 distinct characters, far more than a single byte (or even two bytes) of memory can hold. So to store a character we have to use a way of encoding its value into bytes in memory. A straightforward encoding would just use three bytes per character. But, roughly speaking, the larger the character value, the less often it is used, and memory is precious, so variable-length encodings are often used. These will use fewer bytes in memory for characters earlier in the range at the cost of using a little more memory for the rarer characters. Common encodings include UTF-8 (one byte for ASCII characters, up to four bytes for other characters) and UTF-16 (two bytes for most characters, four bytes for less used ones).
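
You can see these sizes for yourself in JavaScript with TextEncoder, which encodes a string as UTF-8:

new TextEncoder().encode("a").length;  // 1 - ASCII characters take one byte
new TextEncoder().encode("ñ").length;  // 2
new TextEncoder().encode("€").length;  // 3
new TextEncoder().encode("🐮").length; // 4 - most emoji take four bytes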

What does this mean?

It may not be possible to know the number of characters in a string purely by looking at the number of bytes of memory used.

When a string is encoded with a variable-length encoding, the number of bytes used by a character will vary. If the string is held in a byte buffer, just dividing its length by some number will not always return the number of characters in the string. Confusingly, many string implementations expose a length property that only tells you the number of code units, not the number of characters in the string. I bet most JavaScript developers don’t know that JavaScript suffers from this:

let test = "\u{1F42E}"; // This is the Unicode cow 🐮 (https://emojipedia.org/cow-face/)
test.length; // This returns 2!
test.charAt(0); // This returns "\ud83d"
test.charAt(1); // This returns "\udc2e"
test.substring(0, 1); // This returns "\ud83d"

Fun!

More modern versions of JavaScript do give better options, though they are probably slower than the length property (because they must decode the characters to count them):

Array.from(test).length; // This returns 1
test.codePointAt(0).toString(16); // This returns "1f42e"

When you encode a character into memory and pass it to some other code, that code needs to know the encoding so it can decode it correctly. Using the wrong encoder/decoder will lead to incorrect data.

Using the wrong decoder to convert a buffer of memory into characters will often fail. Take the character “ñ”. In UTF-8 this is encoded as C3 B1. Decoding that as UTF-16 will result in “쎱”. In UTF-16, however, “ñ” is encoded as 00 F1. Trying to decode that as UTF-8 will fail, as that is an invalid UTF-8 sequence.
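
You can reproduce this in JavaScript with TextEncoder and TextDecoder (the fatal option makes the decoder throw instead of silently substituting replacement characters):

let bytes = new TextEncoder().encode("ñ"); // Uint8Array [0xc3, 0xb1], i.e. UTF-8
new TextDecoder("utf-16be").decode(bytes); // "쎱" - wrong decoder, wrong character
new TextDecoder("utf-8", { fatal: true }).decode(new Uint8Array([0x00, 0xf1]));
// TypeError - 0xf1 starts a multi-byte sequence that never completes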

Many languages thankfully use string types that have fixed encodings; in Rust, for example, the str primitive is UTF-8 encoded. In these languages, as long as you stick to the normal string types, everything should just work. It isn’t uncommon though to do manipulations based on the byte representation of the characters, %-encoding a string for a URL for example, so knowing the character encoding is still important.
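
For example, %-encoding operates on the UTF-8 bytes of each character, which is easy to check:

encodeURIComponent("ñ");      // "%C3%B1" - the two UTF-8 bytes, percent-encoded
decodeURIComponent("%C3%B1"); // "ñ"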

Some languages though have string types where the encoding may not be clear. In Gecko C++ code for example a very common string type in use is the nsCString. It is really just a set of bytes and has no defined encoding and no way of specifying one at the code level. The only way to know for sure what the string is encoded as is to track back to where it was created. If you’re unlucky it gets created in multiple places using different encodings!

Funny story. This blog post contains a couple of the larger Unicode characters. While working on the post I kept coming back to find that the character had been lost somewhere along the way and replaced with a “?”. It seems likely that there is a bug in WordPress that isn’t correctly handling character encodings. I’m not sure yet whether those characters will survive publishing this post!

These problems disproportionately affect non-English speakers.

Pretty much all of the characters that English speakers use (mostly the Latin alphabet) live in the ASCII character set, which covers just 128 characters (some of these are control characters). The ASCII characters are very popular, and though I can’t find references right now, it is likely that the majority of strings used in digital communication are made up of only ASCII characters, particularly when you consider strings that humans don’t generally see. HTTP request and response headers generally only use ASCII characters, for example.

Because of this popularity when the Unicode character set was first defined, it mapped the 128 ASCII characters to the first 128 Unicode characters. Also UTF-8 will encode those 128 characters as a single byte, any other characters get encoded as two bytes or more.

The upshot is that if you only ever work with ASCII characters, encoding or decoding as UTF-8 or ASCII yields identical results. Each character will only ever take up one byte in memory, so the length of a string will just be the number of bytes used. An English-speaking developer, and indeed many other developers, may only ever develop and test with ASCII characters and so potentially become blind to the problems above, never noticing that they aren’t handling non-ASCII characters correctly.

At Mozilla where we try hard to make Firefox work in all locales we still routinely come across bugs where non-ASCII characters haven’t been handled correctly. Quite often issues stem from a user having non-ASCII characters in their username or filesystem causing breakage if we end up decoding the path incorrectly.

This issue may start getting rarer. With the rise in emoji popularity, developers are starting to see and test with more and more characters that encode as more than one byte. Even in UTF-16, many emoji encode to four bytes.

Summary

If you don’t care about non-ASCII characters then you can ignore all this. But if you care about supporting the 80% of the world that uses non-ASCII characters, then take care when you are doing something with strings. Make sure you are checking a string’s length correctly when needed. If you are working with data structures that don’t have an explicit character encoding, then make sure you know what encoding your data is in before doing anything with it other than passing it around.

Hacks.Mozilla.OrgTesting Picture-in-Picture for videos in Firefox 69 Beta and Developer Edition

Editor’s Note: We updated this post on July 11, 2019 to mention that the Picture-in-Picture feature is currently only enabled in Firefox 69 Beta and Developer Edition on Windows. We apologize for getting your hopes up if you’re on macOS or Linux, and we hope to have this feature enabled on those platforms once it reaches our quality standards.

Have you ever needed to scan a recipe while also watching a cooking video? Or perhaps you wanted to watch a recording of a lecture while also looking at the course slides. Or maybe you wanted to watch somebody stream themselves playing video games while you work.

We’ve recently shipped a version of Firefox for Windows on our Beta and Developer Edition release channels with an experimental feature that aims to make this easier for you to do!

Picture-in-Picture allows you to pop a video out from where it’s being played into a special kind of window that’s always on top. Then you can move that window around or resize it however you need!

There are two ways to pop out a video into a Picture-in-Picture window:

Via the context menu

If you open the context menu on a <video> element, you’ll sometimes see the media context menu that looks like this:

Showing the default context menu when opened on a video element, with the Picture-in-Picture menu item highlighted.

There’s a Picture-in-Picture menu item in that context menu that you can use to toggle the feature.

Many sites, however, make it difficult to access the context menu for <video> elements. YouTube, for example, overrides the default context menu with their own.

You can get to the default native context menu by either holding Shift while right-clicking, or double right-clicking. We feel, however, that this is not the most obvious gesture for accessing the feature, so that leads us to the other toggling mechanism – the Picture-in-Picture video toggle.

Via the new Picture-in-Picture video toggle

The Picture-in-Picture toggle appears when you hover over videos with the mouse cursor. It is a small blue rectangle that slides out when you hover over it. Clicking on the blue rectangle will open the underlying video in the Picture-in-Picture player window.

Showing the Picture-in-Picture toggle overlaying a video element on YouTube.

Note that the toggle doesn’t appear on every video you hover over. We only show it for videos that include an audio track and are of sufficient size and play length.

The advantage of the toggle is that we think we can make this work for most sites out of the box, without making the site authors do anything special!

Using the Picture-in-Picture player window

The Picture-in-Picture window also gives you the ability to quickly play or pause the video. Hovering the video with your mouse will expose that control, as well as controls for closing the window and for closing the window while returning you to the tab that the video came from.

Asking for your feedback

We’re still working on hammering out keyboard accessibility, as well as some issues on how the video is displayed at extreme window sizes. We wanted to give Firefox Beta and Developer Edition users on Windows the chance to try the feature out and let us know how it feels. We’ll use the information that we gather to determine whether or not we’ve got the UI right for most users, or need to go back to the drawing board. We’re also hoping to bring this same Picture-in-Picture support to macOS and Linux in the near future.

We’re particularly interested in feedback on the video toggle — there’s a fine balance between discoverability and obtrusiveness, and we want to get a clearer sense of where the blue toggle falls for users on sites out in the wild.

So grab yourself an up-to-date copy of Firefox 69 Beta or Developer Edition for Windows, and give Picture-in-Picture a shot! If you’ve got constructive feedback to share, here’s a form you can use to submit it.

Happy testing!

The post Testing Picture-in-Picture for videos in Firefox 69 Beta and Developer Edition appeared first on Mozilla Hacks - the Web developer blog.

Niko MatsakisAiC: Unbounded queues and lang design

I have been thinking about how language feature development works in Rust [1]. I wanted to write a post about what I see as one of the key problems: too much concurrency in our design process, without any kind of “back-pressure” to help keep the number of “open efforts” under control. This setup does enable us to get a lot of things done sometimes, but I believe it also leads to a number of problems.

Although I don’t make any proposals in this post, I am basically advocating for changes to our process that can help us to stay focused on a few active things at a time. Basically, incorporating a notion of capacity such that, if we want to start something new, we either have to finish up with something or else find a way to grow our capacity.

The feature pipeline

Consider how a typical language feature gets introduced today:

  • Initial design in the form of an RFC. This is done by the lang team.
  • Initial implementation is done. This work is overseen by the compiler team, but often it is done by a volunteer contributor who is not themselves affiliated.
  • Documentation work is done, again often by a contributor, overseen by the docs team.
  • Experimentation in nightly takes place, often leading to changes in the design. (These changes have their own FCP periods.)
  • Finally, at some point, we stabilize the feature. This involves a stabilization report that summarizes what has changed, known bugs, what tests exist, and other details. This decision is made by the lang team.

At any given time, therefore, we have a number of features at each point in the pipeline – some are being designed, some are waiting for an implementor to show up, etc.

Today we have unbounded queues

One of the challenges is that the “links” between these pipeline stages are effectively unbounded queues. It’s not uncommon that we get an RFC for a piece of design that “seems good”. The RFC gets accepted. But nobody is really driving that work – as a result, it simply languishes. To me, the poster child for this is RFC 66 – a modest change to our rules around the lifetime of temporary values. I still think the RFC is a good idea (although its wording is very imprecise and it needs to be rewritten to be made precise). But it’s been sitting around unimplemented since June of 2014. At this point, is the original decision approving the RFC even still valid? (I sort of think no, but we don’t have a formal rule about that.)

How can an RFC sit around for 5 years?

Why did this happen? I think the reason is pretty clear: the idea was good, but it didn’t align with any particular priority. We didn’t have resources lined up behind it. It needed somebody from the lang team (probably me) to rewrite its text to be actionable and precise [2]. It needed somebody from the compiler team (maybe me again) to either write a PR or mentor somebody through it. And all those people were busy doing other things. So why did we accept the RFC in the first place? Well, why wouldn’t we? Nothing in the process states that we should consider available resources when making an RFC decision.

Unbounded queues lead to confusion for users

So why does it matter when things sit around? I think it has a number of negative effects. The most obvious is that it sends really confusing signals to people trying to follow along with Rust’s development. It’s really hard to tell what the current priorities are; it’s hard to tell when a given feature might actually appear. Some of this we can help resolve just by better labeling and documentation.

Unbounded queues make it harder for teams

But there are other, more subtle effects. Overall, it makes it much harder for the team itself to stay organized and focused and that in turn can create a lot of stress. Stress in turn magnifies all other problems.

How does it make it harder to stay organized? Under the current setup, people can add new entries into any of these queues at basically any time. This can come in many forms, such as new RFCs (new design work and discussion), proposed changes to an existing design (new design or implementation work), etc.

Just having a large number of existing issues means that, in a very practical sense, it becomes challenging to follow GitHub notifications or stay on top of all the things going on. I’ve lost count of the number of attempts I’ve made at this personally.

Finally, the fact that design work stretches over such long periods (frequently years!) makes it harder to form stable communities of people that can dig deeply into an issue, develop a rapport, and reach a consensus.

Leaving room for serendipity?

Still, there’s a reason that we setup the system the way we did. This setup can really be a great fit for an open source project. After all, in an open source project, it can be really hard for us to figure out how many resources we actually have. It’s certainly more than the number of folks on the teams. It happens pretty regularly that people appear out of the blue with an amazing PR implementing some feature or other – and we had no idea they were working on it!

In the 2018 RustConf keynote, we talked about the contrast between OSS by serendipity and OSS on purpose. We were highlighting exactly this tension: on the one hand, Rust is a product, and like any product it needs direction. But at the same time, we want to enable people to contribute as much as we can.

Reviewing as the limited resource

Still, while the existing setup helps ensure that there are many opportunities for people to get involved, it also means that people who come with a new idea, PR, or whatever may wind up waiting a long time to get a response. Often the people who are supposed to answer are just busy doing other things. Sometimes, there is a (often unspoken) understanding that a given issue is just not high enough priority to worry about.

In an OSS project, therefore, I think that the right way to measure capacity is in terms of reviewer bandwidth. Here I mean “reviewer” in a pretty general way. It might be someone who reviews a PR, but it might also be a lang team member who is helping to drive a particular design forward.

Leaving room for new ideas?

One other thing I’ve noticed that’s worth highlighting is that, sometimes, hard ideas just need time to bake. Trying to rush something through the design process can be a bad idea.

Consider specialization: On the one hand, this feature was first proposed in July of 2015. We had a lot of really important debate at the time about the importance of parametricity and so forth. We have an initial implementation. But there was one key issue that never got satisfactorily resolved, a technical soundness concern around lifetimes and traits. As such, the issue has sat around – it would get periodically discussed but we never came to a satisfactory conclusion. Then, in Feb of 2018, I had an idea which aturon then extended in April. It seems like these ideas have basically solved the problem, but we’ve been busy in the meantime and haven’t had time to follow up.

This is a tricky case: maybe if we had tried to push specialization all the way to stabilization, we would have had these same ideas. But maybe we wouldn’t have. Overall, I think that deciding to wait has worked out reasonably well for us, but probably not optimally. I think in an ideal world we would have found some useful subset of specialization that we could stabilize, while deferring the tricky questions.

Tabling as an explicit action

Thinking about specialization leads to an observation: one of the things we’re going to have to figure out is how to draw good boundaries so that we can push out a useful subset of a feature (an “MVP”, if you will) and then leave the rest for later. Unlike today, though, I think this should be an explicit process, where we take the time to document the problems we still see and our current understanding of the space, and then explicitly “table” the remainder of the work for another time.

People need help to set limits

One of the things I think we should put into our system is some kind of hard cap on the number of things you can do at any given time. I’d like this cap to be pretty small, like one or two. This will be frustrating. It will be tempting to say “sure I’m working on X, but I can make a little time for Y too”. It will also slow us down a bit.

But I think that’s ok. We can afford to do a few less things. Or, if it seems like we can’t, that’s probably a sign that we need to grow that capacity: find more people we trust to do the reviews and lead the process. If we can’t do that, then we have to adjust our ambitions.

In other words, in the absence of a cap, it is very easy to “stretch” to achieve our goals. That’s what we’ve done often in the past. But you can only stretch so far and for so long.

Conclusion

As I wrote in the beginning, I’m not making any proposals in this post, just sharing my current thoughts. I’d like to hear if you think I’m onto something here, or heading in the wrong direction. Here is a link to the Adventures in Consensus thread on internals.

One thing that has been pointed out to me is that these ideas resemble a number of management philosophies, most notably kanban. I don’t have much experience with that personally but it makes sense to me that others would have tried to tackle similar issues.

Footnotes

  1. I’m coming at this from the perspective of the lang team, but I think a lot of this applies more generally. 

  2. For that matter, it would be helpful if there were a spec of the current behavior for it to build off of. 

Mozilla Addons BlogChanges in Firefox 68

Firefox 68 is coming out today, and we wanted to highlight a few of the changes coming to add-ons. We’ve updated addons.mozilla.org (AMO) and the Add-ons Manager (about:addons) in Firefox to help people find high-quality, secure extensions more easily. We’re also making it easier to manage installed add-ons and report potentially harmful extensions and themes directly from the Add-ons Manager.

Recommended Extensions

In April, we previewed the Recommended Extensions program as one of the ways we plan to make add-ons safer. This program will make it easier for users to discover extensions that have been reviewed for security, functionality, and user experience.

In Firefox 68, you may begin to notice the first small batch of these recommendations in the Add-ons Manager. Recommendations will include star ratings and the number of users that currently have the extension installed. All extensions recommended in the Add-ons Manager are vetted through the Recommended Extensions program.

As the first iteration of a new design, you can expect some clean-up in upcoming releases as we refine it and incorporate feedback.

recommended extensions card in about:addons

On AMO, starting July 15, Recommended extensions will receive special badging to indicate their inclusion in the program. Additionally, the AMO homepage will be updated to only display Recommended content, and AMO search results will place more emphasis on Recommended extensions.

Note: We previously stated that modifications to AMO would occur on July 11. This has been changed to July 15.

AMO recommended extension badge

As the Recommended Extensions program continues to evolve, more extensions will be added to the curated list.

Add-ons management and abuse reporting

In alignment with design changes in Firefox, we’ve refreshed the Add-ons Manager to deliver a cleaner user experience. As a result, an ellipsis (3-dot) icon has been introduced to keep options organized and easy to find. You can find all the available controls, including the option to report an extension or theme to Mozilla, in one place.

new about:addons look

The new reporting feature allows users to provide us with a better understanding of the issue they’re experiencing. This new process can be used to report any installed extension, whether it was installed from AMO or somewhere else.

report option in about:addons
select issue type when reporting extension

Users can also report an extension or theme when they uninstall an add-on. More information about the new abuse reporting process is available here.

Permissions

It’s easy to forget about the permissions that were previously granted to an extension. While most extensions are created by trustworthy third-party developers, we recommend periodically checking what you have installed, what permissions you’ve granted, and making sure you only keep the ones you really want.

Starting in Firefox 68, you can view the permissions of installed extensions directly in the Add-ons Manager, making it easier to perform these periodic checks. Here’s a summary of all extension permissions, so you can review them for yourself when deciding which extensions to keep installed.

permissions panel in about:addons

In upcoming releases, we will be adjusting and refining changes to the Add-ons Manager to continue aligning the design with the rest of Firefox and incorporating feedback we receive. We’re also developing a Recommended Extensions Community Board for contributors to assist with extension recommendations—we’ll have more information soon.

The post Changes in Firefox 68 appeared first on Mozilla Add-ons Blog.

Hacks.Mozilla.OrgFirefox 68: BigInts, Contrast Checks, and the QuantumBar

Firefox 68 is available today, featuring support for big integers, whole-page contrast checks, and a completely new implementation of a core Firefox feature: the URL bar.

These are just the highlights. For complete information, see:

BigInts for JavaScript

Firefox 68 now supports JavaScript’s new BigInt numeric type.

Screenshot of the DevTools Console showing 2**64 with normal numbers and with BigInts. The screenshot shows how floating point numbers lose precision as they grow larger.

Since its introduction, JavaScript has only had a single numeric type: Number. By definition, Numbers in JavaScript are floating point numbers, which means they can represent both integers (like 22 or 451) and decimal fractions (like 6.28 or 0.30000000000000004). However, this flexibility comes at a cost: 64-bit floats cannot reliably represent integers larger than 2 ** 53.

» 2 ** 53
9007199254740992

» (2 ** 53) + 1
9007199254740992  // <- Shouldn't that end in 3?

» (2 ** 53) + 2
9007199254740994

This limitation makes it difficult to work with very large numbers. For example, it’s why Twitter’s JSON API returns Tweet IDs as strings instead of literal numbers.

BigInt makes it possible to represent arbitrarily large integers.

» 2n ** 53n  // <-- the "n" means BigInt
9007199254740992n

» (2n ** 53n) + 1n
9007199254740993n  // <- It ends in 3!

» (2n ** 53n) + 2n
9007199254740994n

JavaScript does not automatically convert between BigInts and Numbers, so you can’t mix and match them in the same expression, nor can you serialize them to JSON.

» 1n + 2
TypeError: can't convert BigInt to number

» JSON.stringify(2n)
TypeError: BigInt value can't be serialized in JSON

You can, however, losslessly convert BigInt values to and from strings:

» BigInt("994633657141813248")
994633657141813248n

» String(994633657141813248n)
"994633657141813248"  // <-- The "n" goes away

The same is not true for Numbers — they can lose precision when being parsed from a string:

» Number("994633657141813248")
994633657141813200  // <-- Off by 48!
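
If you do need to round-trip BigInt values through JSON, as in the Twitter ID example above, one common workaround (a sketch, not built-in behavior) is to convert them to strings with a replacer function:

const replacer = (key, value) =>
  typeof value === "bigint" ? value.toString() : value;

JSON.stringify({ id: 994633657141813248n }, replacer);
// '{"id":"994633657141813248"}'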

MDN has much more information on BigInt.

Accessibility Checks in DevTools

Each release of Firefox brings improved DevTools, but Firefox 68 marks the debut of a brand new capability: checking for basic accessibility issues.

Screenshot of the Accessibility panel in the Firefox DevTools, showing the results of a text contrast check. The page being checked is a Wired article from 2016, "How the Web Became Unreadable." Ironically, its header fails the contrast check.

With Firefox 68, the Accessibility panel can now report any color contrast issues with text on a page. More checks are planned for the future.

We’ve also:

  • Included a button in the Inspector that enables “print media emulation,” making it easy to see what elements of a page would be visible when printed. (Try it on Wikipedia!)
  • Improved CSS warnings in the console to show more information and include a link to related nodes.
    Screenshot of the Firefox DevTools showing a CSS warning in the Console of the form "unknown property 'foo', declaration dropped." The warning shows a list of nodes matching the selector with the erroneous rule.
  • Added support for adjusting letter spacing in the Font Editor.
  • Implemented RegEx-based filtering in the DevTools Console: just enclose your query in slashes, like /(foo|bar)/.
  • Made it possible to block specific requests by right-clicking on them in the Network panel.

Firefox 68 also includes refinements to the smarter debugging features we wrote about a few weeks ago.

Web Compatibility

Keeping the Web open is hard work. Sometimes browsers disagree on how to interpret web standards. Other times, browsers implement and ship their own ideas without going through the standards process. Even worse, some developers intentionally block certain browsers from their sites, regardless of whether or not those browsers would have worked.

Screenshot of a blank webpage telling the visitor that the site is "currently not supporting your browser."

At Mozilla, we call these “Web Compatibility” problems, or “webcompat” for short.

Each release of Firefox contains fixes for webcompat issues. For example, Firefox 68 implements:

In the latter case, even with a standard line-clamp property in the works, we have to support the -webkit- version to ensure that existing sites work in Firefox.

Unfortunately, not all webcompat issues are as simple as implementing non-standard APIs from other browsers. Some problems can only be fixed by modifying how Firefox works on a specific site, or even telling Firefox to pretend to be something else in order to evade browser sniffing.

Screenshot of a webpage that blocks Firefox users, but which works perfectly with a webcompat intervention.

We deliver these targeted fixes as part of the webcompat system add-on that’s bundled with Firefox. This makes it easier to update our webcompat interventions as sites change, without needing to bake those fixes directly into Firefox itself. And as of Firefox 68, you can view (and disable) these interventions by visiting about:compat and toggling the relevant switches.

Our first preference is always to help developers ensure their sites work on all modern browsers, but we can only address the problems that we’re aware of. If you run into a web compatibility issue, please report it at webcompat.com.

CSS: Scroll Snapping and Marker Styling

Firefox 68 supports the latest syntax for CSS scroll snapping, which provides a standardized way to control the behavior of scrolling inside containers. You can find out more in Rachel Andrew’s article, CSS Scroll Snap Updated in Firefox 68.

As shown in the video above, scroll snapping allows you to start scrolling a container so that, when a certain threshold is reached, letting go will neatly finish scrolling to the next available snap point. It is easier to understand this if you try it yourself, so download Firefox 68 and try it out on some of the examples in the MDN Scroll Snapping docs.

And if you are wondering where this leaves the now old-and-deprecated Scroll Snap Points spec, read Browser compatibility and Scroll Snap.

Today’s release of Firefox also adds support for the ::marker pseudo-element. This makes it possible to style the bullets or counters that appear to the side of list items and summary elements.

Last but not least, CSS transforms now work on SVG elements like mask, marker, pattern and clipPath, which are indirectly rendered.

We have an entire article in the works diving into these and other CSS changes in Firefox 68; look for it later this month.

Browser: WebRender and QuantumBar Updates

Two months ago, Firefox 67 became the first Firefox release with WebRender enabled by default, though limited to users with NVIDIA GPUs on Windows 10. Firefox 68 expands that audience to include people with AMD GPUs on Windows 10, with more platforms on the way.

We’ve also been hard at work in other areas of Firefox’s foundation. The URL bar (affectionately known as the “AwesomeBar”) has been completely reimplemented using web technologies: HTML, CSS, and JavaScript. This new “QuantumBar” should be indistinguishable from the previous AwesomeBar, but its architecture makes it easier to maintain and extend in the future. We move one step closer to the eventual elimination of our legacy XUL/XBL toolkit with this overhaul.

DOM APIs

Firefox 68 brings several changes to existing DOM APIs, notably:

  • Access to cameras, microphones, and other media devices is no longer allowed in insecure contexts like plain HTTP.
  • You can now pass the noreferrer option to window.open() to avoid leaking referrer information upon opening a link in a new window.
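
A minimal sketch of that noreferrer option (the URL is just a placeholder):

window.open("https://example.com", "_blank", "noreferrer");
// The opened page sees an empty document.referrer, and noreferrer also implies noopener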

We’ve also added a few new APIs, including support for the Visual Viewport API on Android, which returns the viewport taking into consideration things like on-screen keyboards or pinch-zooming. These may result in a smaller visible area than the overall layout viewport.

It is also now possible to use the .decode() method on HTMLImageElement to download and decode an image before adding it to the DOM. For example, this API simplifies replacing low-resolution placeholders with higher resolution images: it provides a way to know that a new image can be immediately displayed upon insertion into the page.
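
A minimal sketch of that pattern (the element ID and image URL here are hypothetical):

const placeholder = document.getElementById("placeholder"); // low-res <img> already in the page
const fullImage = new Image();
fullImage.src = "photo-highres.jpg";
fullImage.decode()
  .then(() => placeholder.replaceWith(fullImage)) // swap in only once it can paint immediately
  .catch((err) => console.error("Image failed to decode", err));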

More Inside

These highlights just scratch the surface. In addition to these changes in Firefox, the last month has seen us release Lockwise, a password manager that lets you take your saved credentials with you on mobile. We’ve also released a brand new Firefox Preview on Android, and more.

From all of us at your favorite nominee for Internet Villain of the Year, thank you for choosing Firefox.

The post Firefox 68: BigInts, Contrast Checks, and the QuantumBar appeared first on Mozilla Hacks - the Web developer blog.

The Mozilla BlogLatest Firefox Release Available today for iOS and Desktop

Since our last Firefox release, we’ve been working on features to make the Firefox Quantum browser work better for you. We added Enhanced Tracking Protection by default, which blocks known “third-party tracking cookies” from following your every move. With this latest Firefox release we’ve added new features so you can browse the web the way you want — unfettered and free. We’ve also made improvements for IT managers who want more flexibility when using Firefox in the workplace.

Key highlights for today’s update includes:

  • Blackout shades come to Firefox Reader View: One of the most popular ways that people use our Reader View is by changing the contrast from light to dark. Initially, this only covered the text area. Now, when a user switches the contrast to dark, all sections of the site, including the sidebars and toolbars, will be completely in dark mode.

All sections of the site will be completely in dark mode

  • Firefox Recommended Extensions and more:  We curated a list of recommended extensions that have been thoroughly reviewed for security, usability and usefulness. You can find the list on the “Get Add-ons” page in the Firefox Add-ons Manager (about:addons). Plus we are making it even easier to report any bad extensions you come across. You can do this directly through your Firefox Add-ons Manager to ensure others’ safety. This new feature is part of the larger effort to make the add-ons ecosystem safer.
  • Popular User Requested Features added to Firefox for iOS: Users are at the center of everything we do, and most of the features we’ve added to Firefox for iOS in the past have been requests straight from our community. Today, we’re adding the two features that have been requested the most:
          • Bookmark editing – We added a new ‘Recently Bookmarked’ section to bookmarks and also added support for editing all bookmarks. Now users can reorder, rename, or update the URL for bookmarks.
          • Set sites to always open with Desktop Version – We know that some sites aren’t optimized for mobile, and for users the desktop version of a site is the better and preferred experience. Today, users can set sites to always open in desktop mode, and we’re also introducing a badge to help you identify when a site is being displayed in desktop mode.

More customization for IT Pros

Since the launch of Firefox Quantum for Enterprise last year, we’ve received feedback from IT (information technology) professionals who wanted to make Firefox Quantum more flexible and easier to use so they can meet their workplace needs. Today we’re adding a number of new enterprise policies for IT leads who want to customize Firefox for their employees.

The new policies in today’s Firefox Quantum for Enterprise help IT managers configure Firefox to best meet their organization’s needs. This includes adding a support menu so enterprise users can easily contact their internal support teams, and configuring or removing the new tab page so companies can bring their intranet or other sites front and center for employees. To protect employees’ privacy on shared machines, IT managers can also turn off search suggestions. You can look here to see a complete list of policies supported by Firefox.

We are always looking for ways to improve Firefox Quantum for Enterprise, our Extended Support Release. For organizations that need access to specific preferences, we’ve already started a list in our GitHub repository, and we will continue to add to it based on the requests we receive. Feel free to submit requests for the specific preferences you need at the GitHub link.

To read the complete list of new items or see what we’ve changed in today’s release, you can check out our release notes.

We hope you download and try out the latest version of Firefox Quantum for desktop, iOS and Enterprise.

The post Latest Firefox Release Available today for iOS and Desktop appeared first on The Mozilla Blog.

This Week In RustThis Week in Rust 294

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's CotW is not a crate but the Rustexp site, a Rust regular expression editor & tester. Thanks to carols10cents for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available; visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

237 pull requests were merged in the last week

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs

No RFCs are currently in final comment period.

Tracking Issues & PRs

New RFCs

Upcoming Events

Asia Pacific
Europe
North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

Are we trying to steal the JVM’s “compile once run everywhere” concept?

No, we just borrow it mutably.

minno & llogiq on /r/rust

Thanks to Will Page for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nasa42, llogiq, and Flavsditz.

Discuss on r/rust.

The Servo BlogMedia stack Mid-Year review

We recently closed out the first half of 2019, so it is time to look back and write a quick summary of what the media team has achieved during this six-month period.

Looking at some stats, we merged 87 pull requests, opened 56 issues, closed 42 issues, and welcomed 13 amazing new contributors to the media stack.

A/V playback

These are some of the selected A/V playback related accomplishments from H1:

Media cache and improved seeking

We significantly improved the seeking experience of audio and video files by implementing preloading and buffering support and a media cache.

Basic media controls

After a few months of work we got partial support for the Shadow DOM API, which gave us the opportunity to implement our first basic set of media controls.

media controls

The UI is not perfect, among other things, because we still have no way to render a progress or volume bar properly, as that depends on the <input type="range"> layout, which so far is rendered as a simple text box instead of the usual slider with a thumb.

GStreamer backend for MagicLeap

Another great achievement by Xavier Claessens from Collabora has been the GStreamer backend for Magic Leap. The work is not completely done yet, but as you can see in the animation below, he already managed to paint a full-screen video on the Magic Leap device.

magic leap video

Hardware accelerated decoding

One of the most wanted features, which we have been working on for almost a year and which has recently landed, is hardware-accelerated decoding.

Thanks to the excellent and constant work from the Igalian Víctor Jáquez, Servo recently gained support for hardware-accelerated media playback, which means lower CPU usage, better battery life and better thermal behaviour, among other goodies.

We only have support on Linux and Android (EGL and Wayland) so far. Support for other platforms is on the roadmap.

Wladimir PalantVarious RememBear security issues

Whenever I write about security issues in some password manager, people will ask what I’m thinking about their tool of choice. And occasionally I’ll take a closer look at the tool, which is what I did with the RememBear password manager in April. Technically, it is very similar to its competitor 1Password, to the point that the developers are being accused of plagiarism. Security-wise the tool doesn’t appear to be as advanced however, and I quickly found six issues (severity varies) which have all been fixed since. I also couldn’t fail noticing a bogus security mechanism, something that I already wrote about.

Stealing remembear.com login tokens

Password managers will often give special powers to “their” website. This is generally an issue, because compromising this website (e.g. via an all too common XSS vulnerability) will give attackers access to this functionality. In case of RememBear, things turned out to be easier however. The following function was responsible for recognizing privileged websites:

// Decides whether the current page counts as a privileged RememBear site.
isRememBearWebsite() {
    let remembearSites = this.getRememBearWebsites();
    let url = window.getOriginUrl();
    // Checks whether the page URL merely starts with an allowed origin.
    let foundSite = remembearSites.firstOrDefault(allowed => url.indexOf(allowed) === 0, undefined);
    if (foundSite) {
        return true;
    }
    return false;
}

We’ll get back to window.getOriginUrl() later; it doesn’t actually produce the expected result. The important detail here: the resulting URL is compared against some whitelisted origins by checking whether it starts with an origin like https://remembear.com. No, I didn’t forget the slash at the end here, there really is none. So this code will accept https://remembear.com.malicious.info/ as a trusted website!
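
To make the problem concrete, here is the check reduced to its essence:

const allowed = "https://remembear.com";

// Intended match:
console.log("https://remembear.com/account".indexOf(allowed) === 0);         // true

// Unintended match, because the trailing slash is missing:
console.log("https://remembear.com.malicious.info/".indexOf(allowed) === 0); // true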

Luckily, the consequences aren’t as severe as with similar LastPass issues for example. This would only give the attacker’s website access to the RememBear login token. That token will automatically log you into the user’s RememBear account, which however cannot be used to access password data. It will “merely” allow the attacker to manage the user’s subscription, with the most drastic available action being deleting the account along with all password data.

Messing with AutoFill functionality

AutoFill functionality of password managers is another typical area where security issues are found. RememBear requires a user action to activate AutoFill, which is an important preventive measure. Also, the AutoFill user interface is displayed by the native RememBear application, so websites won’t have any way of messing with it. I found multiple other aspects of this functionality to be exploitable however.

Most importantly, RememBear would not verify that it filled in credentials on the right website (a recent regression according to the developers). Given that considerable time can pass between the user clicking the bear icon to display AutoFill user interface and the user actually selecting a password to be filled in, one cannot really expect that the browser tab is still displaying the same website. RememBear will happily continue filling in the password however, not recognizing that it doesn’t belong to the current website.

Worse yet, RememBear will try to fill out passwords in all frames of a tab. So if https://malicious.com embeds a frame from https://mybank.com and the user triggers AutoFill on the latter, https://malicious.com will potentially receive the password as well (e.g. via a hidden form). Or even less obvious: if you go to https://shop.com and that site has third-party frames e.g. for advertising, these frames will be able to intercept any of your filled in passwords.

Public Suffix List implementation issues

One point on my list of common AutoFill issues is: Domain name is not “the last two parts of a host name.” At first glance, RememBear appears to do this correctly by using Mozilla’s Public Suffix List. So it knows in particular that the relevant part of foo.bar.example.co.uk is example.co.uk and not co.uk. On closer inspection, there are considerable issues in the C#-based implementation however.

For example, there is some rather bogus logic in the CheckPublicTLDs() function and I’m not even sure what this code is trying to accomplish. You will only get into this function for multi-part public suffixes where one of the parts has more than 3 characters – meaning pilot.aero for example. The code will correctly recognize example.pilot.aero as being the relevant part of the foo.bar.example.pilot.aero host name, but it will come to the same conclusion for pilot.aeroexample.pilot.aero as well. Since domains are being registered under the pilot.aero namespace, the two host names here actually belong to unrelated domains, so the bug here allows one of them to steal credentials for the other.
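
The boundary mistake is easier to see as a bare suffix check (a simplified sketch, not the actual C# code):

// Matching the relevant domain as a bare suffix, without requiring a
// dot boundary, conflates two unrelated registrations under pilot.aero:
console.log("foo.bar.example.pilot.aero".endsWith("example.pilot.aero"));   // true, correct
console.log("pilot.aeroexample.pilot.aero".endsWith("example.pilot.aero")); // true, wrong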

The other issue is that the syntax of the Public Suffix List is processed incorrectly. This results for example in the algorithm assuming that example.asia.np and malicious.asia.np belong to the same domain, so that credentials will be shared between the two. With asia.np being the public suffix here, these host names are in fact unrelated.

Issues saving passwords

When you enter a password on some site, RememBear will offer you to save it – fairly common functionality. However, this will fail spectacularly under some circumstances, and that’s partially due to the already mentioned window.getOriginUrl() function which is implemented as follows:

// window.getOriginUrl(): returns the origin of the parent frame when
// there is one, otherwise the document's own URL.
if (window.location.ancestorOrigins != undefined
    && window.location.ancestorOrigins.length > 0) {
    return window.location.ancestorOrigins[0];
}
else {
    return window.location.href;
}

Don’t know what window.location.ancestorOrigins does? I didn’t know either; it’s a barely documented Chrome/Safari feature which undermines referrer policy protection. It contains the list of origins for parent frames, so this function will return the origin of the parent frame if there is any – the URL of the current document is completely ignored.

While AutoFill doesn’t use window.getOriginUrl(), saving passwords does. So if in Chrome https://evil.com embeds a frame from https://mybank.com and the user logs into the latter, RememBear will offer to save the password. But instead of saving that password for https://mybank.com it will store it for https://evil.com. And https://evil.com will be able to retrieve the password later if the user triggers AutoFill functionality on their site. But at least there will be some warning flags for the user along the way…

There was one more issue: the function hostFromString(), used to extract the host name from a URL when saving passwords, relied on a custom URL parser. It wouldn’t know how to deal with “unusual” URL schemes, so for data:text/html,foo/example.com:// or about:blank#://example.com it would return example.com as the host name. Luckily for RememBear, its content scripts wouldn’t run on any of these URLs, at least in Chrome. In their old (and already phased out) Safari extension this likely was an issue and would have allowed websites to save passwords under an arbitrary website name.
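
For comparison, a standards-compliant parser doesn’t fall into this trap; a quick sketch with the browser’s URL constructor:

// The URL constructor knows that these schemes carry no host name:
console.log(new URL("data:text/html,foo/example.com://").host); // "" (empty)
console.log(new URL("about:blank#://example.com").host);        // "" (empty)
console.log(new URL("https://example.com/login").host);         // "example.com"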

Timeline

  • 2019-04-09: After discovering the first security vulnerability I am attempting to find a security contact. There is none, so I ask on Twitter. I get a response on the same day, suggesting to invite me to a private bug bounty program. This route fails (I’ve been invited to that program previously and rejected the invitation), so we settle on using the support contact as fallback.
  • 2019-04-10: Reported issue: “RememBear extensions leak remembear.com token.”
  • 2019-04-10: RememBear fixes “RememBear extensions leak remembear.com token” issue and updates their Firefox and Chrome extensions.
  • 2019-04-11: Reported issue “No protection against logins being filled in on wrong websites.”
  • 2019-04-12: Reported issues: “Unrelated websites can share logins”, “Wrong interpretation of Mozilla’s Public Suffix list”, “Login saved for wrong site (frames in Chrome)”, “Websites can save logins for arbitrary site (Safari).”
  • 2019-04-23: RememBear fixes parts of the “No protection against logins being filled in on wrong websites” issue in the Chrome extension.
  • 2019-04-24: RememBear confirms that “Websites can save logins for arbitrary site (Safari)” issue doesn’t affect any current products but they intend to remove hostFromString() function regardless.
  • 2019-05-27: RememBear reports having fixed all outstanding issues in the Windows application and Chrome extension. macOS application is supposed to follow a week later.
  • 2019-06-12: RememBear updates Firefox extension as well.
  • 2019-07-08: Coordinated disclosure.

Niko MatsakisAsync-await status report #2

I wanted to give an update on the status of the “async-await foundations” working group. This post aims to cover three things:

  • the “async await MVP” that we are currently targeting;
  • how that fits into the bigger picture;
  • and how you can help, if you’re so inclined.

Current target: async-await MVP

We are currently working on stabilizing what we call the async-await MVP – as in, “minimal viable product”. As the name suggests, the work we’re doing now is basically the minimum that is needed to “unlock” async-await. After this work is done, it will be easier to build async I/O based applications in Rust, though a number of rough edges remain.

The MVP consists of the following pieces:

  • the future trait;
  • basic async-await syntax;
  • the async book.

The future trait

The first of these bullets, the future trait, was stabilized in the 1.36.0 release. This is important because the Future trait is the core building block for the whole Async I/O ecosystem. Having a stable future trait means that we can begin the process of consolidating the ecosystem around it.

Basic async-await syntax

Now that the future trait is stable, the next step is to stabilize the basic “async-await” syntax. We are presently shooting to stabilize this in 1.38. We’ve finished the largest work items, but there are still a number of things left to get done before then – if you’re interested in helping out, see the “how you can help” section at the end of this post!

The current support we are aiming to stabilize permits async fn, but only outside of traits and trait implementations. This means that you can write free functions like this one [1]:

// When invoked, returns a future that (once awaited) will yield back a result:
async fn process(data: TcpStream) -> Result<(), Box<dyn Error>> {
    let mut buf = vec![0u8; 1024];
    
    // Await data from the stream:
    let len = data.read(&mut buf).await?;
    ...
}

or inherent methods:

impl MyType {
    // Same as above, but defined as a method on `MyType`:
    async fn process(data: TcpStream) -> Result<(), Box<dyn Error>> { .. }
}

You can also write async blocks, which generate a future “in place” without defining a separate function. These are particularly useful to pass as arguments to helpers like runtime::spawn:

let data: TcpStream;
runtime::spawn(async move {
    let mut buf = vec![0u8; 1024];
    let len = data.read(&mut buf).await?;
    ...
})

Eventually, we plan to permit async fn in other places, but there are some complications to be resolved first, as will be discussed shortly.

The async book

One of the goals of this stabilization is that, once async-await syntax becomes available, there should be really strong documentation to help people get started. To that end, we’re rejuvenating the “async Rust” book. This book covers the nuts and bolts of Async I/O in Rust, ranging from simple examples with async fn all the way down to the details of how the future trait works, writing your own executors, and so forth. Take a look!

(Eventually, I expect some of this material may make its way into more standard books like The Rust Programming Language, but in the meantime we’re evolving it separately.)

Future work: the bigger picture

The current stabilization push, as I mentioned above, is aimed at getting an MVP stabilized – just enough to enable people to run off and start to build things. So you’re probably wondering, what are some of the things that come next? Here is an (incomplete) list of possible future work:

  • A core set of async traits and combinators. Basically a 1.0 version of the futures-rs repository, offering key interfaces like AsyncRead.
  • Better stream support. The futures-rs repository contains a Stream trait, but there remains some work to make streams better supported. This may include some form of for-await syntax (although that is not a given).
  • Generators and async generators. The same core compiler transform that enables async await should enable us to support Python- or JS-like generators as a way to write iterators. Those same generators can then be made asynchronous to produce streams of data.
  • Async fn in traits and trait impls. Writing generic crates and interfaces that work with async fn is possible in the MVP, but not as clean or elegant as it could be. Supporting async fn in traits is an obvious extension to make that nicer, though we have to figure out all of the interactions with the rest of the trait system.
  • Async closures. We would like to support the obvious async || syntax that would generate a closure. This may require tinkering with the Fn trait hierarchy.

How you can get involved

There’s been a lot of great work on the async fn implementation since my first post – we’ve closed over 40 blocker issues! I want to give a special shout out to the folks who worked on those issues [2]:

  • davidtwco reworked the desugaring so that the drop order for parameters in an async fn and fn is analogous, and then heroically fixed a number of minor bugs that were filed as fallout from this change.
  • tmandry dramatically reduced the size of futures at runtime.
  • gilescope improved a number of error messages and helped to reduce errors.
  • matthewjasper reworked some details of the compiler transform to solve a large number of ICEs.
  • doctorn fixed an ICE when await was used in inappropriate places.
  • centril has been helping to enumerate tests and generally helping with triage.
  • cramertj implemented the await syntax, wrote a bunch of tests, and, of course, did all of the initial implementation work.
  • and hey, I extended the region inferencer to support multiple lifetime parameters. I guess I get some credit too. =)

If you’d like to help push async fn over the finish line, take a look at our list of blocking issues. Anything that is not assigned is fair game! Just find an issue you like that is not assigned and use @rustbot claim to claim it. You can find out more about how our working group works on the async-await working group page. In particular, that page includes a link to the calendar event for our weekly meeting, which takes place in the #wg-async-foundations channel on the rust-lang Zulip – the next meeting is tomorrow (Tuesday)! But feel free to drop in any time with questions.

Footnotes

  1. Sadly, it seems like Rouge hasn’t been updated yet to highlight the async or await keywords. Or maybe I just don’t understand how to upgrade it. =)

  2. I culled this list by browsing the closed issues and who they were assigned to. I’m sorry if I forgot someone or minimized your role! Let me know and I’ll edit the post. <3 

Cameron KaiserTenFourFox FPR15 available

TenFourFox Feature Parity Release 15 final is now available for testing (downloads, hashes, release notes). There are no changes from the beta other than outstanding security fixes. Assuming all goes well, it will go live Monday evening Pacific as usual.

Also, we now have Korean and Turkish language packs available for testing. If you want to give these a spin, download them here; the plan is to have them go live at the same time as FPR15. Thanks again to new contributor Tae-Woong Se and, of course, to Chris Trusch as always for organizing localizations and doing the grunt work of turning them into installers.

Not much work will occur on the browser for the next week or so due to family commitments and a couple out-of-town trips, but I'll be looking at a few new things for FPR16, including some minor potential performance improvements and a font subsystem upgrade. There's still the issue of our outstanding JavaScript deficiencies as well, of course. More about that later.

About:CommunityFirefox 68 new contributors

With the release of Firefox 68, we are pleased to welcome the 55 developers who contributed their first code change to Firefox in this release, 49 of whom were brand new volunteers! Please join us in thanking each of these diligent and enthusiastic individuals, and take a look at their contributions:

The Mozilla BlogMozilla’s Latest Research Grants: Prioritizing Research for the Internet

We are very happy to announce the results of our Mozilla Research Grants for the first half of 2019. This was an extremely competitive process, and we selected proposals which address twelve strategic priorities for the internet and for Mozilla. This includes researching better support for integrating Tor in the browser, improving scientific notebooks, using speech on mobile phones in India, and alternatives to advertising for funding the internet. The Mozilla Research Grants program is part of our commitment to being a world-class example of using inclusive innovation to impact culture, and reflects Mozilla’s commitment to open innovation.

We will open a new round of grants in Fall of 2019. See our Research Grant webpage for more details and to sign up to be notified when applications open.

The lead researchers, their institutions, and project titles:

  • Valentin Churavy (MIT): Bringing Julia to the Browser
  • Jessica Outlaw (Concordia University of Portland): Studying the Unique Social and Spatial affordances of Hubs by Mozilla for Remote Participation in Live Events
  • Neha Kumar (Georgia Tech): Missing Data: Health on the Internet for Internet Health
  • Piotr Sapiezynski, Alan Mislove, & Aleksandra Korolova (Northeastern University & University of Southern California): Understanding the impact of ad preference controls
  • Sumandro Chattapadhyay (The Centre for Internet and Society (CIS), India): Making Voices Heard: Privacy, Inclusivity, and Accessibility of Voice Interfaces in India
  • Weihang Wang (State University of New York): Designing Access Control Interfaces for Wasmtime
  • Bernease Herman (University of Washington): Toward generalizable methods for measuring bias in crowdsourced speech datasets and validation processes
  • David Karger (MIT): Tipsy: A Decentralized Open Standard for a Microdonation-Supported Web
  • Linhai Song (Pennsylvania State University): Benchmarking Generic Functions in Rust
  • Leigh Clark (University College Dublin): Creating a trustworthy model for always-listening voice interfaces
  • Steven Wu (University of Minnesota): DP-Fathom: Private, Accurate, and Communication-Efficient
  • Nikita Borisov (University of Illinois, Urbana-Champaign): Performance and Anonymity of HTTP/2 and HTTP/3 in Tor

Many thanks to everyone who submitted a proposal and helped us publicize these grants.

The post Mozilla’s Latest Research Grants: Prioritizing Research for the Internet appeared first on The Mozilla Blog.

Mozilla GFXmoz://gfx newsletter #46

Hi there! As previously announced, WebRender has made it to the stable channel and a couple of million users are now using it without having opted into it manually. With this important milestone behind us, now is a good time to widen the scope of the newsletter and give credit to other projects being worked on by members of the graphics team.

The WebRender newsletter therefore becomes the gfx newsletter. This is still far from an exhaustive list of the work done by the team, just a few highlights in WebRender and graphics in general. I am hoping to keep the pace around a post per month; we’ll see where things go from there.

What’s new in gfx

Async zoom for desktop Firefox

Botond has been working on desktop zooming:

  • The work is currently focused on the ability to use pinch gestures to zoom (scaling only, no reflow, like on mobile) on desktop platforms.
  • The initial focus is on touchscreens, with support for touchpads to follow.
  • We hope to have this ready for some early adopters to try out in the coming weeks.

WebGL power usage

Jeff Gilbert has been working on power preference options for WebGL.
WebGL has three power preference options available during canvas context creation:

  • default
  • low-power
  • high-performance

The vast majority of web content implicitly requests “default”. Since we don’t want to regress web content performance, we usually treat “default” like “high-performance”. On macOS with multiple GPUs (MacBook Pro), this means activating the power-hungry dedicated GPU for almost all WebGL contexts. While this keeps our performance high, it also means every last one-off or transient WebGL context from ads, trackers, and fingerprinters will keep the high-power GPU running until they are garbage-collected.
In bug 1562812, Jeff added a ramp-up/ramp-down behavior for “default”: For the first couple seconds after WebGL context creation, things stay on the low-power GPU. After that grace period, if the context continues to render frames for presentation, we migrate to the high-power GPU. Then, if the context stops producing frames for a couple seconds, we release our lock of the high-power GPU, to try to migrate back to the low-power GPU.
What this means is that active WebGL content should fairly quickly end up ramped-up onto the high-power GPU, but inactive and orphaned WebGL contexts won’t keep the browser locked on to the high-power GPU anymore, which means better battery life for users on these machines as they wander the web.
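
For reference, this is where the hint comes from on the content side; pages that know they are low-intensity can opt out of the ramp-up entirely (a minimal sketch):

const canvas = document.createElement("canvas");

// Explicitly request the integrated GPU; "default" (or no hint) gets
// the ramp-up/ramp-down behavior described above.
const gl = canvas.getContext("webgl", { powerPreference: "low-power" });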

DisplayList building optimization

Miko, Matt, Timothy and Dan have worked on improving display list build times:

  • The two main areas of improvement have been avoiding unnecessary work during display list merging, and improving the memory access patterns during display list building.
  • The improved display list merging algorithm utilizes the invalidation assumptions of the frame tree, and avoids preprocessing sub display lists that cannot have changed. (bug 1544948)
  • Some commonly used display items have drastically shrunk in size, which has reduced memory usage and allocations. For example, the size of the transform display item went down from 1024 bytes to 512 bytes. (bug 1502049, bug 1526941)
  • The display item size improvements have also tangentially helped with caching and prefetching performance. For example, the base display item state booleans were collapsed into a bit field and moved to the first cache line. (bug 1526972, bug 1540785)
  • Retained display lists were enabled for Android devices and for the parent process. (bugs 1413567 and 1413546)
  • Telemetry probes show that since the Orlando All Hands, the mean display list build time has gone down by 40%, from ~1.8ms to ~1.1ms. The 95th percentile has gone down by 30%, from ~6.2ms to ~4.4ms.
  • While these numbers might seem low, they are still a considerable proportion of the target 16ms frame budget. There is more promising follow-up work scheduled in bugs 1539597 and 1554503.

What’s new in WebRender

WebRender is a GPU-based 2D rendering engine for the web, written in Rust, currently powering Firefox‘s rendering engine as well as the research web browser Servo.

Software backend investigations

Glenn, Jeff Muizelaar and Jeff Gilbert are investigating running WebRender on top of SwiftShader or llvmpipe when the available GPU feature set is too small or too buggy. The hope is that these emulation layers will help us quickly migrate users that can’t have hardware acceleration to WebRender with a minimal amount of specific code (most likely some simple specialized blitting routines in a few hot spots where the emulation layer is unlikely to provide optimal speed).
It’s too early to tell at this point whether this experiment will pan out. We can see some (expected) regressions compared to the non-WebRender software backend, but with some amount of optimization it’s probable that performance will get close enough to provide an acceptable user experience.
This investigation is important since it will determine how much code we have to maintain specifically for non-accelerated users, a proportion of users that will decrease over time but will probably never quite get to zero.

Pathfinder 3 investigations

Nical spent some time in the last quarter familiarizing himself with Pathfinder’s internals and investigating its viability for rendering SVG paths in WebRender/Firefox. He wrote a blog post explaining in detail how Pathfinder fills paths on the GPU.
The outcome of this investigation is that we are optimistic about Pathfinder’s approach and think it’s a good fit for integration in Firefox. In particular the approach is flexible and robust, and will let us improve or replace parts in the longer run.
Nical is also experimenting with a different tiling algorithm, aiming at reducing the CPU overhead.

The next steps are to prototype an integration of pathfinder 3 in WebRender, start using it for simple primitives such as CSS clip paths and gradually use it with more primitives.

Picture caching improvements

Glenn continues his work on picture caching. At the moment WebRender only triggers the optimization for a single scroll root. Glenn has landed a lot of infrastructure work to be able to benefit from picture caching at a more granular level (for example one per scroll-root), and pave the way for rendering picture cached slices directly into OS compositor surfaces.

The next steps are, roughly:

  • Add a couple follow up optimizations such as exact dirty rectangles for smaller updates / multi-resolution tiles, detect tiles that are solid colors.
  • Enable caching on content + UI explicitly (as an intermediate step, to give caching on the UI).
  • Implement the right support for multiple cached slices that have transparency and subpixel text anti-aliasing.
  • Enable fine grained caching on multiple slices.
  • Expose those multiple cached slices to OS compositor for power savings
    and better scrolling performance where supported.

WebRender on Android

Thanks to the continuous efforts of Jamie and Kats, WebRender is starting to be pretty solid on Android. GeckoView is required to enable WebRender, so it isn’t possible to enable it now in Firefox for Android, but we plan to make the option available in the Firefox Preview browser (which is powered by GeckoView) once it has nightly builds.

Mozilla Reps CommunityRep of the Month – June 2019

Please join us in congratulating Pranshu Khanna, Rep of the Month for June 2019!

Pranshu is from Surat, Gujarat, India. His journey started with a Connected Devices workshop in 2016, since then he’s been a super active contributor and a proud Mozillian. He joined the Reps Program in March 2019 and has been instrumental ever since.

In addition to that, he’s been one of the most active Reps in his region since he joined the program. He has worked to get his local community, Mozilla Gujarat, to meet very regularly and contribute to Common Voice, BugHunter, Localization, SUMO, A-Frame, Add-ons, and Open Source Contribution. He’s an active contributor and a maintainer of the OpenDesign Repository and frequently contributes to the Mozilla India & Mozilla Gujarat Social Channels.

Congratulations and keep rocking the open web!

To congratulate Pranshu, please head over to Discourse!

Mozilla Localization (L10N)L10n report: July edition

Please note some of the information provided in this report may be subject to change as we are sometimes sharing information about projects that are still in early stages and are not final yet. 

Welcome!

New localizers

Are you a locale leader and want us to include new members in our upcoming reports? Contact us!

New community/locales added

  • Manx was added to Pontoon and they’re starting to work on localizing Firefox.

New content and projects

What’s new or coming up in Firefox desktop

As usual, let’s start with some important dates:

  • Firefox 68 will be released on July 9. At that point, Firefox 69 will be in beta, while Nightly will have Firefox 70.
  • The deadline to ship localization updates in Firefox 69 is August 20.

Firefox 69 has been a lightweight release in terms of new strings, with most of the changes coming from Developer Tools. Expect a lot more content in Firefox 70, with particular focus on the Password Manager – integrating most of the Lockwise add-on features directly into Firefox – and work around Tracking Protection and Security, with brand new about pages.

What’s new or coming up in mobile

Since our last report, we’ve shipped the first release of Firefox Preview (Fenix) in 11 languages (including en-US). The next upcoming step will be to open up the project to more locales. If you are interested, make sure to follow the dev.l10n mailing list closely this week. And congratulations to the teams that helped make this a successful localized first release!

On another note, the Firefox for iOS deadline for l10n work on v18 just passed this week, and we’re adding two new locales: Finnish and Gujarati. Congratulations to the teams!

What’s new or coming up in web projects

Keep brand and product names in English

The current policy, per the branding and legal teams, is that trademarked names should be left unchanged. Unless indicated in the comments, brand names should be left in English; they cannot be transliterated or declined. If you have any doubt, ask through the available communication channels.

Product names such as Lockwise, Monitor, Send, and Screenshot, to name a few, used alone or with Firefox, should be left unchanged. Common Voice is also a product name and should remain unchanged. Pocket is a trademark and must remain as is too. Locale managers, please inform the rest of your communities of this policy, especially new contributors. Search in Pontoon or Transvision for these names and revert them to English.

Mozilla.org

The recently added new pages contain marketing language to promote all the Firefox products. Localize them and start spreading the word on social media and other platforms. The deadline is indicative.

  • New: firefox/best-browser.lang, firefox/campaign-trailhead.lang, firefox/all-unified.lang;
  • Update: firefox/accounts-2019.lang;
  • Obsolete: firefox/accounts.lang will be removed soon. Stop working on this page if it is incomplete.

What’s new or coming up in SuMo

Newly published localizer-facing documentation

The Firefox marketing guides are living documents and will be updated as needed.

  • The guide in German was written by a Mozilla staff copywriter. It sets the tone for marketing content localization, including the mozilla.org site. With this guide, the hope is that the site has a more unified tone from page to page, regardless of who contributed to the localization: community or staff.
  • The guide in English was derived from the guide written in German and is meant to be a template for any communities who want to create a marketing content localization guide. This is in addition to the style guide a community has already created.
  • The guide in French will be authored by a Mozilla staff copywriter. We will update you in a future report.

Events

  • Want to showcase an event coming up that your community is participating in? Reach out to any l10n-driver and we’ll include that (see links to emails at the bottom of this report)

Useful Links

Questions? Want to get involved?

Did you enjoy reading this report? Let us know how we can improve by reaching out to any one of the l10n-drivers listed above.

The Rust Programming Language BlogAnnouncing Rust 1.36.0

The Rust team is happy to announce a new version of Rust, 1.36.0. Rust is a programming language that is empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, getting Rust 1.36.0 is as easy as:

$ rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.36.0 on GitHub.

What's in 1.36.0 stable

This release brings many changes, including the stabilization of the Future trait, the alloc crate, the MaybeUninit<T> type, NLL for Rust 2015, a new HashMap<K, V> implementation, and --offline support in Cargo. Read on for a few highlights, or see the detailed release notes for additional information.

The Future is here!

In Rust 1.36.0 the long awaited Future trait has been stabilized!

With this stabilization, we hope to give important crates, libraries, and the ecosystem time to prepare for async / .await, which we'll tell you more about in the future.

The alloc crate is stable

Before 1.36.0, the standard library consisted of the crates std, core, and proc_macro. The core crate provided core functionality such as Iterator and Copy and could be used in #![no_std] environments since it did not impose any requirements. Meanwhile, the std crate provided types like Box<T> and OS functionality but required a global allocator and other OS capabilities in return.

Starting with Rust 1.36.0, the parts of std that depend on a global allocator, e.g. Vec<T>, are now available in the alloc crate. The std crate then re-exports these parts. While #![no_std] binaries using alloc still require nightly Rust, #![no_std] library crates can use the alloc crate in stable Rust. Meanwhile, normal binaries, without #![no_std], can depend on such library crates. We hope this will facilitate the development of a #![no_std] compatible ecosystem of libraries prior to stabilizing support for #![no_std] binaries using alloc.

If you are the maintainer of a library that only relies on some allocation primitives to function, consider making your library #![no_std] compatible by using the following at the top of your lib.rs file:

#![no_std]

extern crate alloc;

use alloc::vec::Vec;

MaybeUninit<T> instead of mem::uninitialized

In previous releases of Rust, the mem::uninitialized function has allowed you to bypass Rust's initialization checks by pretending that you've initialized a value at type T without doing anything. One of the main uses of this function has been to lazily allocate arrays.

However, mem::uninitialized is an incredibly dangerous operation that essentially cannot be used correctly as the Rust compiler assumes that values are properly initialized. For example, calling mem::uninitialized::<bool>() causes instantaneous undefined behavior as, from Rust's point of view, the uninitialized bits are neither 0 (for false) nor 1 (for true) - the only two allowed bit patterns for bool.

To remedy this situation, in Rust 1.36.0, the type MaybeUninit<T> has been stabilized. The Rust compiler will understand that it should not assume that a MaybeUninit<T> is a properly initialized T. Therefore, you can do gradual initialization more safely and eventually use .assume_init() once you are certain that maybe_t: MaybeUninit<T> contains an initialized T.

As MaybeUninit<T> is the safer alternative, starting with Rust 1.39, the function mem::uninitialized will be deprecated.

To find out more about uninitialized memory, mem::uninitialized, and MaybeUninit<T>, read Alexis Beingessner's blog post. The standard library also contains extensive documentation about MaybeUninit<T>.

NLL for Rust 2015

In the announcement for Rust 1.31.0, we told you about NLL (Non-Lexical Lifetimes), an improvement to the language that makes the borrow checker smarter and more user friendly. For example, you may now write:

fn main() {
    let mut x = 5;
    let y = &x;
    let z = &mut x; // This was not allowed before 1.31.0.
}

In 1.31.0 NLL was stabilized only for Rust 2018, with a promise that we would backport it to Rust 2015 as well. With Rust 1.36.0, we are happy to announce that we have done so! NLL is now available for Rust 2015.

With NLL on both editions, we are closer to removing the old borrow checker. However, the old borrow checker unfortunately accepted some unsound code it should not have. As a result, NLL is currently in a "migration mode" wherein we will emit warnings instead of errors if the NLL borrow checker rejects code the old AST borrow checker would accept. Please see this list of public crates that are affected.

To find out more about NLL, MIR, the story around fixing soundness holes, and what you can do about the warnings if you have them, read Felix Klock's blog post.

A new HashMap<K, V> implementation

In Rust 1.36.0, the HashMap<K, V> implementation has been replaced with the one in the hashbrown crate which is based on the SwissTable design. While the interface is the same, the HashMap<K, V> implementation is now faster on average and has lower memory overhead. Note that unlike the hashbrown crate, the implementation in std still defaults to the SipHash 1-3 hashing algorithm.

--offline support in Cargo

During most builds, Cargo doesn't interact with the network. Sometimes, however, Cargo has to. Such is the case when a dependency is added and the latest compatible version needs to be downloaded. At times, network access is not an option though, for example on an airplane or in isolated build environments.

In Rust 1.36, a new Cargo flag has been stabilized: --offline. The flag alters Cargo's dependency resolution algorithm to only use locally cached dependencies. When the required crates are not available offline, and a network access would be required, Cargo will exit with an error. To prepopulate the local cache in preparation for going offline, use the cargo fetch command, which downloads all the required dependencies for a project.

To find out more about --offline and cargo fetch, read Nick Cameron's blog post.

For information on other changes to Cargo, see the detailed release notes.

Library changes

The dbg! macro now supports multiple arguments.

Additionally, a number of APIs have been made const:

New APIs have become stable, including:

Other library changes are available in the detailed release notes.

Other changes

Detailed 1.36.0 release notes are available for Rust, Cargo, and Clippy.

Contributors to 1.36.0

Many people came together to create Rust 1.36.0. We couldn't have done it without all of you. Thanks!

Will Kahn-GreeneCrash pings (Telemetry) and crash reports (Socorro/Crash Stats)

I keep getting asked questions that stem from confusion about crash pings and crash reports, the details of where they come from, differences between the two data sets, what each is currently good for, and possible future directions for work on both. I figured I'd write it all down.

This is a brain dump and sort of a blog post and possibly not a good version of either. I desperately wish it were more formal and mind-blowing, like something written by Chutten or Alessio.

It's likely that this is 90% true today but as time goes on, things will change and it may be horribly wrong depending on how far in the future you're reading this. As I find out things are wrong, I'll keep notes. Any errors are my own.

Summary

We (Mozilla) have two different data sets for crashes: crash pings in Telemetry and crash reports in Socorro/Crash Stats. When Firefox crashes, the crash reporter collects information about the crash and this results in crash ping and crash report data. From there, the two different data things travel two different paths and end up in two different systems.

This blog post covers these two different journeys, their destinations, and the resulting properties of both data sets.

This blog post specifically talks about Firefox and not other products which have different crash reporting stories.

CRASH!

Firefox crashes. It happens.

The crash reporter kicks in. It uses the Breakpad library to collect data about the crashed process and package it up into a minidump. The minidump has information about the registers, what's in memory, the stack of the crashing thread, stacks of other threads, what modules are in memory, and so on.

Additionally, the crash reporter collects a set of annotations for the crash. Annotations like ProductName, Version, ReleaseChannel, BuildID and others help us group crashes for the same product and build.

The crash reporter assembles the portions of the crash report that don't have personally identifiable information (PII) in them into a crash ping. It uses minidump-analyzer to unwind the stack. The crash ping with this stack is sent via the crash ping sender to Telemetry.

If Telemetry is disabled, then the crash ping will not get sent to Telemetry.

The crash reporter will show a crash report dialog informing the user that Firefox crashed. The crash report dialog includes a comments field for additional data about the crash and an email field. The user can choose to send the crash report or not.

If the user chooses to send the crash report, then the crash report is sent via HTTP POST to the collector for the crash ingestion pipeline. The entire crash ingestion pipeline is called Socorro. The website part is called Crash Stats.

If the user chooses not to send the crash report, then the crash report never goes to Socorro.

If Firefox is unable to send the crash report, it keeps it on disk. It might ask the user to try to send it again later. The user can access about:crashes and send it explicitly.

Relevant backstory

What is symbolication?

Before we get too much further, let's talk about symbolication.

minidump-whatever will walk the stack starting with the top-most frame. It uses frame information to find the caller frame and works backwards to produce a list of frames. It also includes a list of modules that are in memory.

For example, part of the crash ping might look like this:

  "modules": [
    ...
    {
      "debug_file": "xul.pdb",
      "base_addr": "0x7fecca50000",
      "version": "69.0.0.7091",
      "debug_id": "4E1555BE725E9E5C4C4C44205044422E1",
      "filename": "xul.dll",
      "end_addr": "0x7fed32a9000",
      "code_id": "5CF2591C6859000"
    },
    ...
  ],
  "threads": [
    {
      "frames": [
        {
          "trust": "context",
          "module_index": 8,
          "ip": "0x7feccfc3337"
        },
        {
          "trust": "cfi",
          "module_index": 8,
          "ip": "0x7feccfb0c8f"
        },
        {
          "trust": "cfi",
          "module_index": 8,
          "ip": "0x7feccfae0af"
        },
        {
          "trust": "cfi",
          "module_index": 8,
          "ip": "0x7feccfae1be"
        },
...

The "ip" is an instruction pointer.

The "module_index" refers to another list of modules that were all in memory at the time.

The "trust" refers to how the stack unwinding figured out how to unwind that frame. Sometimes it doesn't have enough information and it does an educated guess.

Symbolication takes the module name, the module debug id, and the offset and looks it up with the symbols it knows about. So for the first frame, it'd do this:

  1. module index 8 is xul.dll
  2. get the symbols for xul.pdb debug id 4E1555BE725E9E5C4C4C44205044422E1 which is at https://symbols.mozilla.org/xul.pdb/4E1555BE725E9E5C4C4C44205044422E1/xul.sym
  3. figure out that 0x7feccfc3337 (ip) - 0x7fecca50000 (base addr for xul.pdb module) is 0x573337 (see the snippet after this list)
  4. look up 0x573337 in the SYM file and I think that's nsTimerImpl::InitCommon(mozilla::BaseTimeDuration<mozilla::TimeDurationValueCalculator> const &,unsigned int,nsTimerImpl::Callback &&)

Symbolication does that for every frame and then we end up with a helpful symbolicated stack.

Tecken has a symbolication API which takes the module and stack information in a minimal form and symbolicates using symbols it manages.

It takes a form like this:

{
  "memoryMap": [
    [
      "xul.pdb",
      "44E4EC8C2F41492B9369D6B9A059577C2"
    ],
    [
      "wntdll.pdb",
      "D74F79EB1F8D4A45ABCD2F476CCABACC2"
    ]
  ],
  "stacks": [
    [
      [0, 11723767],
      [1, 65802]
    ]
  ]
}

This has two data structures. The first is a list of (module name, module debug id) tuples. The second is a list of stacks, where each stack is a list of (module index, memory offset) tuples.
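
Calling the API might look like this; note that the exact endpoint path here is my assumption, so check the Tecken documentation for the current one:

// Hypothetical request to Tecken's symbolication API; the endpoint
// URL is an assumption, everything else mirrors the payload above.
fetch("https://symbols.mozilla.org/symbolicate/v5", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    memoryMap: [["xul.pdb", "44E4EC8C2F41492B9369D6B9A059577C2"]],
    stacks: [[[0, 11723767]]],
  }),
})
  .then((response) => response.json())
  .then((result) => console.log(result)); // symbolicated frames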

What is Socorro-style signature generation?

Socorro has a signature generator that goes through the stack, normalizes the frames so that frames look the same across platforms, and then uses that to generate a "signature" for the crash that suggests a common cause for all the crash reports with that signature.

It's a fragile and finicky process. It works well for some things and poorly for others. There are other ways to generate signatures. This is the one that Socorro currently uses. We're constantly honing it.

I export Socorro's signature generation system as a Python library called siggen.

For examples of stacks -> signatures, look at crash reports on Crash Stats.

What is it and where does it go?

Crash pings in Telemetry

Crash pings are only sent if Telemetry is enabled in Firefox.

The crash ping contains the stack for the crash, but little else about the crashed process. No register data, no memory data, nothing found on the heap.

The stack is unwound by minidump-analyzer in the client on the user's machine. Because of that, driver information can be used by unwinding so for some kinds of crashes, we may get a better stack than crash reports in Socorro.

Stacks in crash pings are not symbolicated.

There's an error aggregates data set generated from the crash pings which is used by Mission Control.

Crash reports in Socorro

Socorro does not get crash reports if the user chooses not to send a crash report.

Socorro collector discards crash reports for unsupported products.

Socorro collector throttles incoming crash reports for Firefox release channel--it accepts 10% of those for processing and rejects the other 90%.

The Socorro processor runs minidump-stackwalk on the minidump which unwinds the stack. Then it symbolicates the stack using symbols uploaded during the build process to symbols.mozilla.org.

If we don't have symbols for modules, minidump-stackwalk will guess at the unwinding. This can work poorly for crashes that involve drivers and system libraries we don't have symbols for.

Crash pings vs. Crash reports

Because of the above, there are big differences in collection of crash data between the two systems and what you can do with it.

Representative of the real world

Because crash ping data doesn't require explicit consent by users on a crash-by-crash basis and crash pings are sent using the Telemetry infrastructure which is pretty resilient to network issues and other problems, crash ping data in Telemetry is likely more representative of crashes happening for our users.

Crash report data in Socorro is limited to what users explicitly send us. Further, there are cases where Firefox isn't able to run the crash reporter dialog to ask the user.

For example, on Friday, June 28th, 2019 for Firefox release channel:

  • Telemetry got 1,706,041 crash pings
  • Socorro processed 42,939 crash reports, so figure it got around 420,000 crash reports

Stack quality

A crash report can have a different stack in the crash ping than in the crash report.

Crash ping data in Telemetry is unwound in the client. On Windows, minidump-analyzer can access CFI unwinding data, so the stacks can be better especially in cases where the stack contains system libraries and drivers.

We haven't implemented this yet on non-Windows platforms.

Crash report data in Socorro is unwound by the Socorro processor and is heavily dependent on what symbols we have available. It doesn't do a good job with unwinding through drivers and we often don't have symbols for Linux system libraries.

Gabriele says sometimes stacks are unwound better for crashes on MacOS and Linux platforms than what the crash ping contains.

Symbolication and signatures

Crash ping data is not symbolicated and we're not generating Socorro-style signatures, so it's hard to bucket them and see change in crash rates for specific crashes.

There's an fx-crash-sig Python library which has code to symbolicate crash ping stacks and generate a Socorro-style signature from that stack. This is helpful for one-off analysis but this is not a long-term solution.

Crash report data in Socorro is symbolicated and has Socorro-style signatures.

The consequence of this is that in Telemetry, we can look at crash rates for builds, but can't look at crash rates for specific kinds of crashes as bucketed by signatures.

The Signature Report and Top Crashers Report in Crash Stats can't be implemented in Telemetry (yet).

Tooling

Telemetry has better tooling for analyzing crash ping data.

Crash ping data drives Mission Control.

Socorro's tooling is limited to the Supersearch web UI and API, which are ok at some things and not great at others. I've heard some people really like the Supersearch web UI.

There are parts of the crash report which are not searchable. For example, it's not possible to search for crash reports where a certain module is in the stack. Socorro has a signature report and a topcrashers page which help, but they're not flexible for answering questions outside of what we've explicitly coded them for.

Socorro sends a copy of processed crash reports to Telemetry and this is in the "socorro_crash" dataset.

PII and data access

Telemetry crash ping data does not contain PII. It is not publicly available, except in aggregate via Mission Control.

Socorro crash report data contains PII. A subset of the crash data is available to view and search by anyone. The PII data is restricted to users explicitly granted access to it. PII data includes user email addresses, user-provided comments, CPU register data, what else was in memory, and other things.

Data expiration

Telemetry crash ping data isn't expired, but I think that's changing at some point.

Socorro crash report data is kept for 6 months.

Data latency

Socorro data is near real-time. Crash reports get collected and processed and are available in searches and reports within a few minutes.

Crash ping data gets to Telemetry almost immediately.

Main ping data has some latency between when it's generated and when it is collected. This affects normalization numbers if you were looking at crash rates from crash ping data.

Derived data sets may have some latency depending on how they're generated.

Conclusions and future plans

Socorro

Socorro is still good for deep dives into specific crash reports since it contains the full minidump and sometimes a user email address and user comments.

Socorro has Socorro-style signatures which make it possible to aggregate crash reports into signature buckets. Signatures are kind of fickle and we adjust how they're generated over time as compilers, symbols extraction, and other things change. We can build Signature Reports and Top Crasher reports and those are ok, but they use total counts and not rates.

I want to tackle switching from Socorro's minidump-stackwalk to minidump-analyzer so we're using the same stack walker in both places. I don't know when that will happen.

Socorro is going to GCP which means there will be different tools available for data analysis. Further, we may switch to BigQuery or some other data store that lets us search the stack. That'd be a big win.

Telemetry

Telemetry crash ping data is more representative of the real world, but stacks are not symbolicated and there's no signature generation, so you can't look at aggregates by cause.

Symbolication and signature generation of crash pings will get figured out at some point.

Work continues on Mission Control 2.0.

Telemetry is going to GCP which means there will be different tools available for data analysis.

Together

At the All Hands, I had a few conversations about fixing tooling for both crash reports and crash pings so the resulting data sets were more similar and you could move from one to the other. For example, if you notice a troubling trend in the crash ping data, can you then go to Crash Stats and find crash reports to deep dive into?

I also had conversations around use cases. Which data set is better for answering certain questions?

We think having a guide that covers which data set is good for what kinds of analysis, tools to use, and how to move between the data sets would be really helpful.

Thanks!

Many thanks to everyone who helped with this: Will Lachance, W Chris Beard, Gabriele Svelto, Nathan Froyd, and Chutten.

Also, many thanks to Chutten and Alessio who write fantastic blog posts about Telemetry things. Those are gold.

Updates

  • 2019-07-04: Crash ping data is not publicly available. Blog post updated accordingly. Thanks, Chutten!

Mozilla Reps Community8 Years of Reps Program, Celebrating Community Successes!

The idea for the Reps program was conceived in 2010 by William Quiviger and Pierros Papadeas, and the program officially launched and began welcoming volunteers onboard as Mozilla Reps in 2011. The Mozilla Reps program aims to empower and support volunteer Mozillians who want to be official representatives of Mozilla in their region/locale/country. The program provides a framework and a specific set of tools to help Mozillians organize and/or attend events, recruit and mentor new contributors, document and share activities, and better support their local communities. The Reps program was created to help communities around the world. Community is the backbone of the Mozilla project, and as the project grows in scope and scale, the community needs to be strengthened and empowered accordingly. This is the central aim of the Mozilla Reps program: to empower and to help push responsibility to the edges, in order to help the Mozilla contributor base grow. Nowadays, Reps are taking on a stronger role by becoming Community Coordinators.

You can learn more about the program here.

Success Stories

The Mozilla Reps program has proven successful at identifying talent in local communities to contribute to Mozilla projects. Reps also help run local and international events and help campaigns take place. Some of the campaigns that happened in the past were collaborations with other Mozilla teams, and the following are some of those activities and campaigns. To date, we have a total of 7,623 events, 261 active Reps, and 51,635 activities reported on the Reps portal.

Historically, Reps have supported major Mozilla products throughout the program's existence, from the Firefox OS launch to the latest campaigns: events promoting new Firefox releases, such as Firefox 4.0, Firefox Quantum, and every major Firefox release since; events that care about users and community, such as Privacy Month, Aadhar, The Dark Funnel, Techspeakers, the Web Compatibility Sprint, and Maker Party; events around new product releases, such as Firefox Rocket/Lite and Screenshot Go for the South East Asia and India markets; localization events, such as the Add-on Localization campaign; and events around Mozilla products such as Rust, Webmaker, and Firefox OS.

Do You Have More Ideas?

With so many past success stories of engaging local communities and supporting many different campaigns, Mozilla Reps is still looking forward to many more activities and campaigns in the future. So, if you have ideas for campaigns or community engagement, or want to collaborate with the Mozilla Reps program to get in touch with local communities, let's do it together!

Will Kahn-GreeneSocorro Engineering: June 2019 happenings

Summary

The Socorro Engineering team covers several projects, including Socorro, Tecken, and Buildhub.

This blog post summarizes our activities in June.

Highlights of June

  • Socorro: Fixed the collector to support a single JSON-encoded field in the HTTP POST payload for crash reports. This is a big deal because we'll get less junk data in crash reports going forward.
  • Socorro: Reworked how Crash Stats manages featured versions: if the product defines a product_details/PRODUCTNAME.json file, it'll pull from that. Otherwise it calculates featured versions based on the crash reports it's received.
  • Buildhub: Deprecated Buildhub in favor of Buildhub2. The current plan is to decommission Buildhub in July.
  • Across projects: Updated tons of dependencies that had security vulnerabilities. It was like a hamster wheel of updates, PRs, and deploys.
  • Tecken: Worked on GCS emulator for local dev environment.
  • All hands discussions:
    • GCP migration plan for Tecken and figure out what needs to be done.
    • Possible GCP migration schedule for Tecken and Socorro.
    • Migrating applications using Buildhub to Buildhub2 and decommissioning Buildhub in July.
    • What would happen if we switched from Elasticsearch to BigQuery?
    • Switching from Socorro's minidump-stackwalk to minidump-analyzer.
    • Re-implementing the Socorro Top Crashers and Signature reports using Telemetry tools and data.
    • Writing a symbolicator and Socorro-style signature generator in Rust that can be used for crash reports in Socorro and crash pings in Telemetry.
    • The crash ping vs. crash report situation (blog post coming soon).

The Mozilla BlogMozilla joins brief for protection of LGBTQ employees from discrimination

Last year, we joined the call in support of transgender equality as part of our longstanding commitment to diversity, inclusion and fostering a supportive work environment. Today, we are proud to join over 200 companies, big and small, as friends of the court, in a brief brought to the Supreme Court of the United States.

The brief says, in part:

“Amici support the principle that no one should be passed over for a job, paid less, fired, or subjected to harassment or any other form of discrimination based on their sexual orientation or gender identity. Amici’s commitment to equality is violated when any employee is treated unequally because of their sexual orientation or gender identity. When workplaces are free from discrimination against LGBT employees, everyone can do their best work, with substantial benefits for both employers and employees.”

The post Mozilla joins brief for protection of LGBTQ employees from discrimination appeared first on The Mozilla Blog.

Dave HuntState of Performance Test Engineering (H1/2019)

Late in 2018 I stepped out of the familiar position of automation engineer, and into the unknown as an engineering manager. A new team was formed for me to manage, focusing on performance test engineering. Now here we are, just over six months in, and I’m excited to share some updates!

Team

The team consists of 10 engineers based in California, Toronto, Montreal, and Romania.

The following team members joined us in H1/2019:

  • Alexandru Ionescu 🇷🇴
  • Marian Raicof 🇷🇴
  • Alexandru Irimovici 🇷🇴
  • Arnold Iakab 🇷🇴
  • Ken Rubin 🇺🇸

We also welcomed Greg Mierzwinski, who was converted from a contractor to full time employee.

Despite some early signs of interest, there were no community contributions in this period.

Tests

We currently support 4 frameworks:

  • Are We Slim Yet (AWSY) - memory consumption by the browser
  • build_metrics - build times, installer size, and other compiler-specific insights
  • Raptor - page load and browser benchmarks
  • Talos - various time-based performance KPIs on the browser

At the time of writing, there are 263 test suites (query) for the above frameworks.

The following are some highlights of tests introduced in H1/2019:

  • Raptor power tests for Android:
    • Idle foreground
    • Idle background
  • Raptor page load tests for Android:
    • Expanded from 4 sites to 26
  • Raptor cold page load tests for Android and desktop
  • Raptor page load tests for Reference Browser and Firefox Preview

Hardware

Our tests are running on the following hardware in continuous integration:

  • 35 Moto G5 devices (with two of these dedicated to power testing) 📱
  • 38 Pixel 2 devices (with two of these dedicated to power testing) 📱
  • 16 Windows UX laptops (2017 reference hardware) 💻
  • 35 Windows ARM64 laptops (Lenovo Yoga C630) 💻

Sheriffs

We have 2 performance sheriffs 🤠 dedicating up to 50% of their time to this role. Outside of this, they assist with improving our tools and test harnesses. We also have 2 sheriffs in training to meet the needs of the additional tests we’re adding, and to provide support when needed.

Q1/2019

During the first quarter of 2019 our performance tools generated 863 alert summaries. This is an average of 9.7 alert summaries every day. Of the four test frameworks, build_metrics caused the most alert summaries, accounting for 35.1% of the total. The raptor framework was the second biggest contributor, with 29.9%.

Alert Summaries by Framework (Q1/2019)

Of the alert summaries generated, 106 (12.3%) were improvements. 13 alerts associated with regressions were fixed, and 9 were backed out. The remaining alerts were either reassigned, marked as downstream, marked as invalid, are still under investigation, or were accepted.

Here are some highlights for some of our sheriffed frameworks:

Are We Slim Yet (AWSY)

  • On February 15th we detected up to 20.66% improvement in AWSY. This was caused by Paul Bone’s patch on bug 1433007, which allowed the nursery to use less than a single chunk.
  • The largest fixed regression detected from AWSY was noticed on March 20th, and had an impact of up to 9.53% regression to the heap unclassified. This was attributed to bug 1429796 and was fixed a week later by Myk Melez.
  • There are no open regressions for AWSY for this period!

Raptor

  • The improvement alert with the highest magnitude for Raptor was created on March 25th, which saw up to a massive 41.04% improvement to page load time on Android. The gains were attributed to bug 1537964, which raised the importance of services launched from the main process.
  • The largest regression alert that was ultimately fixed for Raptor was generated early on in the quarter on January 3rd. This showed up to 120.65% regression to the assorted-dom benchmark, caused by bug 1467124. A fix was quickly identified by the author, Jan de Mooij, and pushed within 24 hours of the notification from our sheriff.
  • Our biggest regression that remains unresolved was created on January 23rd, and includes a 20.38% page load regression for Reddit on desktop, which appears to have been caused by bug 1485216. Due to excessive noise, we have since disabled the Time to First Interactive (TTFI) measurement, which is the metric that caused this regression. This will make confirming any fix here more difficult.

Talos

  • For Talos, the alert showing the largest improvement was created on January 9th, which showed up to 38.14% improvement to tsvg_static, tsvgr_opacity, and tsvgx. It was attributed to bug 1518773, which was a WebRender update.
  • The largest fixed regression alert for Talos was spotted on January 23rd, and includes a 26.73% hit to tscrollx and tp5o_scroll. It was caused by bug 1522028, and was fixed by Glenn Watson via bug 1523210 within a couple of days of being notified by our sheriffs.
  • The worst open regression alert for Talos was created on February 5th, and contains a 28.66% slow down to the about-preferences test. It looks like this has stalled with a patch to enable ASAP mode for the test, which caused failures and was subsequently backed out. According to Mike Conley in this comment, “ASAP mode causes us to paint ASAP after the DOM has been dirtied, without waiting for vsync”.

Q2/2019

In the second quarter, our performance tools generated 831 alert summaries. This is an average of 9.23 alert summaries every day. Of the four test frameworks, raptor caused the most alert summaries, accounting for 42.2% of the total. The build_metrics framework was the second biggest contributor, with 32.7%. There have been additional tests landed for Raptor, new page recordings, and a focus on improving page load performance, which are all likely to be reasons for the increase in Raptor alerts in this period.

Alert Summaries by Framework (Q2/2019)

Of the alert summaries generated, 98 (11.79%) were improvements. 19 alerts associated with regressions were fixed, and 6 were backed out. The remaining alerts were either reassigned, marked as downstream, marked as invalid, are still under investigation, or were accepted.

In Q1 we introduced a first_triaged timestamp in Perfherder (bug 1521025), which allows us to track how much time passes between the alert generation and the initial triage from a performance sheriff. Using this, we can see that our perf sheriffs triaged 56.1% of alerts within a day, and an additional 24.7% within three days.
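
As a sketch of how numbers like these can be computed once the timestamps exist (the CSV export and its column names are assumptions for illustration, not the actual Perfherder schema):

import pandas as pd

# alerts.csv is a hypothetical export with one row per alert summary and
# ISO-8601 "created" and "first_triaged" timestamp columns.
alerts = pd.read_csv("alerts.csv", parse_dates=["created", "first_triaged"])
days = (alerts["first_triaged"] - alerts["created"]).dt.total_seconds() / 86400
print(f"triaged within 1 day: {(days <= 1).mean():.1%}")
print(f"triaged within 3 days: {(days <= 3).mean():.1%}")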

Days to Triage (Q2/2019)

In Q2 we introduced a bug_updated timestamp in Perfherder (bug 1536040) to more closely track the time between the alert and the correct bug being identified, which is where we handover to the regression author and test owners. The data doesn’t cover all of Q2, but so far we see that 63.3% of alerts complete this period within 3 days, and 87.4% complete within 1 week.

Days to Bug (Q2/2019)

Here are some highlights for some of our sheriffed frameworks:

Are We Slim Yet (AWSY)

  • The largest improvement noticed for AWSY was on June 9th, where we saw up to a 5.98% decrease in heap unclassified and base content js. This was attributed to a patch from Emilio Cobos Álvarez in bug .
  • On May 30th, a 9.54% regression was noticed across several tests, caused by bug 1493420. Fortunately, this was promptly followed by a fix from Emilio Cobos Álvarez.
  • The open regression with the highest impact for AWSY was detected on June 12th. Whilst there were a large number of improvements in this alert, the 12.67% regression to images was also significant. The regression was caused by bug 1558763, which changed the value of a preference within Marionette. It looks like this may have been fixed, but our performance sheriffs have yet to verify this.

Raptor

  • The most significant improvement detected by Raptor was on May 22nd. Up to 20.42% boost to page load of several sites was seen, which was thanks to Andrew Creskey’s investigation and Michal Novotny’s patch in bug 1553166 to make disk cache sizing less restrictive on mobile.
  • On April 30th, a regression alert of 52.62% was reported against the Raptor page load test for bing.com on desktop. It turned out that bug 1514655 caused the unexpected regression, and Emilio Cobos Álvarez was able to post a fix within a day of the regression report!
  • Due to many page recordings being recreated, there are a lot of open alerts that require closer examination.

Talos

  • On April 13th we detected an improvement of up to 99.24% to the tp5n nonmain_startup_fileio test. This was thanks to Doug Thayer’s work on optimising prefetching of Windows DLLs in bug 1538279.
  • Talos detected a 3.78% regression to tsvgr_opacity on March 31st, which was caused by bug 1539026. After an initial fix was backed out, Lee Salzman was able to fix the regression.
  • The open regression alert with the highest magnitude was opened on April 29th, and reports a 38.25% decrease in the average frame interval while animating a simple WebGL scene. It was discovered by the glterrain test, and attributed to bug 1547775. It does look like this test returned to the pre-regression values, but we have yet to confirm if this was expected or if the regression remains.

Survey

I’d really like to find out how our team is doing, and hear if you have any feedback or suggestions for any way we can improve. Please take a couple of minutes to complete the very simple satisfaction survey I have created. Thanks!

Mozilla Open Policy & Advocacy BlogBuilding on the UK white paper: How to better protect internet openness and individuals’ rights in the fight against online harms

In April 2019 the UK government unveiled plans for sweeping new laws aimed at tackling illegal and harmful content and activity online, described by the government as ‘the toughest internet laws in the world’. While the UK government’s proposal contains some interesting avenues of exploration for the next generation of European content regulation laws, it also includes several critical weaknesses and grey areas. We’ve just filed comments with the government that spell out the key areas of concern and provide recommendations on how to address them.

The UK government’s white paper responds to legitimate public policy concerns around how technology companies deal with illegal and harmful content online. We understand that in many respects the current European regulatory paradigm is not fit for purpose, and we support an exploration of what codified content ‘responsibility’ might look like in the UK and at EU-level, while ensuring strong and clear protections for individuals’ free expression and due process rights.

As we have noted previously, we believe that the white paper’s proposed regulatory architecture could have some potential. However, the UK government’s vision to put this model into practice contains serious flaws. Here are some of the changes we believe the UK government must make to its proposal to avoid the practical implementation pitfalls:

  • Clarity on definitions: The government must provide far more detail on what is meant by the terms ‘reasonableness’ and ‘proportionality’, if these are to serve as meaningful safeguards for companies and citizens. Moreover, the government must clearly define the relevant ‘online harms’ that are to be made subject to the duty of care, to ensure that companies can effectively target their trust and safety efforts.
  • A rights-protective governance model:  The regulator tasked with overseeing the duty of care must be truly co-regulatory in nature, with companies and civil society groups central to the process by which the Codes of Practice are developed. Moreover, the regulator’s mission must include a mandate to protect fundamental rights and internet openness, and it must not have power to issue content takedown orders.
  • A targeted scope: The duty of care should be limited to online services that store and publicly disseminate user-uploaded content. There should be clear exemptions for electronic communications services, internet service providers, and cloud services, whose operational and technical architecture are ill-suited and problematic for a duty of care approach.
  • Focus on practices over outcomes:  The regulator’s role should be to operationalise the duty of care with respect to companies’ practices – the steps they are taking to reduce ‘online harms’ on their service. The regulator should not have a role in assessing the legality or harm of individual pieces of content. Even the best content moderation systems can sometimes fail to identify illegal or harmful content, and so focusing exclusively on outcomes-based metrics to assess the duty of care is inappropriate.

We look forward to engaging more with the UK government as it continues its consultation on the Online Harms white paper, and hopefully the recommendations in this filing can help address some of the white paper’s critical shortcomings. As policymakers from Brussels to Delhi contemplate the next generation of online content regulations, the UK government has the opportunity to set a positive standard for the world.

The post Building on the UK white paper: How to better protect internet openness and individuals’ rights in the fight against online harms appeared first on Open Policy & Advocacy.

This Week In RustThis Week in Rust 293

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's crate is aljabar, an extremely generic linear algebra library. Thanks to Vikrant for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

gfx-rs introduces the contributor-friendly label for issues that are appropriately inviting to new members.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

196 pull requests were merged in the last week

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs
Tracking Issues & PRs

New RFCs

No new RFCs were proposed this week.

Upcoming Events

Asia Pacific
Europe
North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

Python and Go pick up your trash for you. C lets you litter everywhere, but throws a fit when it steps on your banana peel. Rust slaps you and demands that you clean up after yourself.

Nicholas Hahn on his blog

Thanks to UtherII for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nasa42, llogiq, and Flavsditz.

Discuss on r/rust.

Mike HommeyGit now faster than Mercurial to clone Mozilla Mercurial repos

How is that for clickbait?

With the now released git-cinnabar 0.5.2, the cinnabarclone feature is enabled by default, which means it doesn’t need to be enabled manually anymore.

Cinnabarclone is to git-cinnabar what clonebundles is to Mercurial (to some extent). Clonebundles allow Mercurial to download a pre-generated bundle of a repository, which reduces work on the server side. Similarly, Cinnabarclone allows git-cinnabar to download a pre-generated bundle of the git form of a Mercurial repository.

Thanks to Connor Sheehan, who deployed the necessary extension and configuration on the server side, cinnabarclone is now enabled for mozilla-central and mozilla-unified, making git-cinnabar clone faster than ever for these repositories. In fact, under some conditions (mostly depending on network bandwidth), cloning with git-cinnabar is now faster than cloning with Mercurial:

$ time git clone hg::https://hg.mozilla.org/mozilla-unified mozilla-unified_git
Cloning into 'mozilla-unified_git'...
Fetching cinnabar metadata from https://index.taskcluster.net/v1/task/github.glandium.git-cinnabar.bundle.mozilla-unified/artifacts/public/bundle.git
Receiving objects: 100% (12153616/12153616), 2.67 GiB | 41.41 MiB/s, done.
Resolving deltas: 100% (8393939/8393939), done.
Reading 172 changesets
Reading and importing 170 manifests
Reading and importing 758 revisions of 570 files
Importing 172 changesets
It is recommended that you set "remote.origin.prune" or "fetch.prune" to "true".
git config remote.origin.prune true
or
git config fetch.prune true

Run the following command to update tags:
git fetch --tags hg::tags: tag "*"
Checking out files: 100% (279688/279688), done.

real    4m57.837s
user    9m57.373s
sys     0m41.106s

$ time hg clone https://hg.mozilla.org/mozilla-unified
destination directory: mozilla-unified
applying clone bundle from https://hg.cdn.mozilla.net/mozilla-unified/5ebb4441aa24eb6cbe8dad58d232004a3ea11b28.zstd-max.hg
adding changesets
adding manifests
adding file changes
added 537259 changesets with 3275908 changes to 523698 files (+13 heads)
finished applying clone bundle
searching for changes
adding changesets
adding manifests
adding file changes
added 172 changesets with 758 changes to 570 files (-1 heads)
new changesets 8b3c35badb46:468e240bf668
537259 local changesets published
updating to branch default
(warning: large working directory being used without fsmonitor enabled; enable fsmonitor to improve performance; see "hg help -e fsmonitor")
279688 files updated, 0 files merged, 0 files removed, 0 files unresolved

real    21m9.662s
user    21m30.851s
sys     1m31.153s

To be fair, the Mozilla Mercurial repos also have faster “streaming” clonebundles, which are currently only prioritized automatically when the client is on AWS, because they are much larger and could take longer to download. But you can opt in with the --stream command line argument:

$ time hg clone --stream https://hg.mozilla.org/mozilla-unified mozilla-unified_hg
destination directory: mozilla-unified_hg
applying clone bundle from https://hg.cdn.mozilla.net/mozilla-unified/5ebb4441aa24eb6cbe8dad58d232004a3ea11b28.packed1.hg
525514 files to transfer, 2.95 GB of data
transferred 2.95 GB in 51.5 seconds (58.7 MB/sec)
finished applying clone bundle
searching for changes
adding changesets
adding manifests
adding file changes
added 172 changesets with 758 changes to 570 files (-1 heads)
new changesets 8b3c35badb46:468e240bf668
updating to branch default
(warning: large working directory being used without fsmonitor enabled; enable fsmonitor to improve performance; see "hg help -e fsmonitor")
279688 files updated, 0 files merged, 0 files removed, 0 files unresolved

real    1m49.388s
user    2m52.943s
sys     0m43.779s

If you’re using Mercurial and can download 3GB in less than 20 minutes (in other words, if you can download faster than 2.5MB/s), you’re probably better off with the streaming clone.

Bonus fact: the Git clone is smaller than the Mercurial clone

The Mercurial streaming clone bundle contains data in a form close to what Mercurial puts on disk in the .hg directory, meaning the size of .hg is close to that of the clone bundle. The Cinnabarclone bundle contains a git pack, meaning the size of .git is close to that of the bundle, plus some more for the pack index file that unbundling creates.

The amazing fact is that, to my own surprise, the git pack, containing the repository contents along with all git-cinnabar needs to recreate Mercurial changesets, manifests and files from the contents, takes less space than the Mercurial streaming clone bundle.

And that translates into local repository size:

$ du -h -s --apparent-size mozilla-unified_hg/.hg
3.3G    mozilla-unified_hg/.hg
$ du -h -s --apparent-size mozilla-unified_git/.git
3.1G    mozilla-unified_git/.git

And because Mercurial creates so many files (essentially, two per file that ever was in the repository), there is a larger difference in block size used on disk:

$ du -h -s mozilla-unified_hg/.hg
4.7G    mozilla-unified_hg/.hg
$ du -h -s mozilla-unified_git/.git
3.1G    mozilla-unified_git/.git

It’s even more mind-blowing when you consider that Mercurial happily creates delta chains of several thousand revisions, when the git pack’s longest delta chain is 250 (set arbitrarily at pack creation, by which I mean I didn’t pick a larger value because it didn’t make a significant difference). For the casual readers, Git and Mercurial try to store object revisions as a diff/delta from a previous object revision because that takes less space. You get a delta chain when that previous object revision is itself stored as a diff/delta from another object revision, itself stored as a diff/delta … etc.

My guess is that the difference is mainly caused by the use of line-based deltas in Mercurial, but some Mercurial developer should probably take a deeper look. The fact that Mercurial cannot delta across file renames is another candidate.

Mozilla Security BlogFixing Antivirus Errors

After the release of Firefox 65 in December, we detected a significant increase in a certain type of TLS error that is often triggered by the interaction of antivirus software with the browser. Today, we are announcing the results of our work to eliminate most of these issues, and explaining how we have done so without compromising security.

On Windows, about 60% of Firefox users run antivirus software and most of them have HTTPS scanning features enabled by default. Moreover, CloudFlare publishes statistics showing that a significant portion of TLS browser traffic is intercepted. In order to inspect the contents of encrypted HTTPS connections to websites, the antivirus software intercepts the data before it reaches the browser. TLS is designed to prevent this through the use of certificates issued by trusted Certificate Authorities (CAs). Because of this, Firefox will display an error when TLS connections are intercepted unless the antivirus software anticipates this problem.

Firefox is different from a number of other browsers in that we maintain our own list of trusted CAs, called a root store. In the past we’ve explained how this improves Firefox security. Other browsers often choose to rely on the root store provided by the operating system (OS) (e.g. Windows). This means that antivirus software has to properly reconfigure Firefox in addition to the OS, and if that fails for some reason, Firefox won’t be able to connect to any websites over HTTPS, even when other browsers on the same computer can.

The interception of TLS connections has historically been referred to as a “man-in-the-middle”, or MITM. We’ve developed a mechanism to detect when a Firefox error is caused by a MITM. We also have a mechanism in place that often fixes the problems. The “enterprise roots” preference, when enabled, causes Firefox to import any root CAs that have been added to the OS by the user, an administrator, or a program that has been installed on the computer. This option is available on Windows and MacOS.

We considered adding a “Fix it” button to MITM error pages (see example below) that would allow users to easily enable the “enterprise roots” preference when the error is displayed. However, we realized that enabling the preference is something we actually want affected users to do, not an “override” that lets a user bypass an error at their own risk, so it shouldn’t be presented like one.

Example of a MitM Error Page in Firefox

Beginning with Firefox 68, whenever a MITM error is detected, Firefox will automatically turn on the “enterprise roots” preference and retry the connection. If it fixes the problem, then the “enterprise roots” preference will remain enabled (unless the user manually sets the “security.enterprise_roots.enabled” preference to false). We’ve tested this change to ensure that it doesn’t create new problems. We are also recommending as a best practice that antivirus vendors enable this preference (by modifying prefs.js) instead of adding their root CA to the Firefox root store. We believe that these actions combined will greatly reduce the issues encountered by Firefox users.
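
For reference, the preference flip itself is a single line. In a profile's prefs.js it looks like the line below; note that prefs.js is normally written by Firefox itself, so editing it directly is the vendor-style intervention described above rather than something end users need to do.

user_pref("security.enterprise_roots.enabled", true);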

In addition, in Firefox ESR 68, the “enterprise roots” preference will be enabled by default. Because extended support releases are often used in enterprise settings where there is a need for Firefox to recognize the organization’s own internal CA, this change will streamline the process of deploying Firefox for administrators.

Finally, we’ve added an indicator that allows the user to determine when a website is relying on an imported root CA certificate. This notification is on the site information panel accessed by clicking the lock icon in the URL bar.

It might cause some concern for Firefox to automatically trust CAs that haven’t been audited and gone through the rigorous Mozilla process. However, any user or program that has the ability to add a CA to the OS almost certainly also has the ability to add that same CA directly to the Firefox root store. Also, because we only import CAs that are not included with the OS, Mozilla maintains our ability to set and enforce the highest standards in the industry on publicly-trusted CAs that Firefox supports by default. In short, the changes we’re making meet the goal of making Firefox easier to use without sacrificing security.

The post Fixing Antivirus Errors appeared first on Mozilla Security Blog.

Mike HommeyAnnouncing git-cinnabar 0.5.2

Git-cinnabar is a git remote helper to interact with Mercurial repositories. It allows you to clone, pull, and push from/to Mercurial remote repositories, using git.

Get it on github.

These release notes are also available on the git-cinnabar wiki.

What’s new since 0.5.1?

  • Updated git to 2.22.0 for the helper.
  • cinnabarclone support is now enabled by default. See details in README.md and mercurial/cinnabarclone.py.
  • cinnabarclone now supports grafted repositories.
  • git cinnabar fsck now does incremental checks against last known good state.
  • Avoid git cinnabar sometimes thinking the helper is not up-to-date when it is.
  • Removing bookmarks on a Mercurial server is now working properly.

Cameron KaiserAnd now for something completely different: NetBSD on the last G4 Mac mini (and making the kernel power failure proof)

(First, as a public service message, if you're running Linux on a G5 you may wish to update the kernel.)

I'm a big fan of NetBSD. I've run it since 2000 on a Mac IIci (of course it's still running it) and I ran it for several years on a Power Mac 7300 with a G3 card which was the second incarnation of the Floodgap gopher server. Today I also still run it on a MIPS-based Cobalt RaQ 2 and an HP Jornada 690. I think NetBSD is a better match for smaller or underpowered systems than current-day Linux, and is fairly easy to harden and keep secure even though none of these systems are exposed to the outside world.

Recently I had a need to set up a bridge system that would be fast enough to connect two networks and I happened to have two of the "secret" last-of-the-line 1.5GHz G4 Mac minis sitting on the shelf doing nothing. Yes, they're probably outclassed by later Raspberry Pi models, but I don't have to buy anything and I like putting old hardware to good use. So here it is, doing serious business, with the total outlay being the cost of one weekend afternoon:

NetBSD/macppc is a fairly mature port, but that doesn't mean it doesn't have bugs. And oddly there do seem to still be some in the install process, at least of the 8.1 release I used, on this last and mightiest of the PowerPC miniatures. Still, once it got set up it's been working great since, so here's a few pointers on getting the 1.5 mini (and current Power Macs generally) running as little NetBSD servers. As most of my readers are Power Mac users and likely to have multiple Power Macs that can aid this process, I will orient this guide to them with some sidebar notes for people trying to pull the mini up all by itself. This machine is configured with 1GB of RAM, the standard 2.5" PATA spinning disk and optical drive, USB 2.0, FireWire 400, Bluetooth and WiFi, using the onboard Radeon 9200 GPU as the console.

The working configuration, hit upon by Sevan Janiyan, is to have an HFS+ partition with the bootloader (called ofwboot.xcf) and the kernel, and then a separate partition for NetBSD's root volume. For some reason the mini goes berserk when trying to boot from a kernel on a NetBSD native partition, but works fine from an HFS+ one. Unfortunately, since the NetBSD installer cannot actually initialize an HFS+ volume, you'll need to do some of the partitioning work in Mac OS X, copy the files there, and then finish the rest. There's a couple ways of skinning that cat, but for many of you this means you'll need not only the NetBSD installer CD, but also a bootable copy of Mac OS X either on disc (i.e., an installer) or a bootable external drive, and some means of copying files.

And that brings us to our first pro tip: the G4 Mac minis had absolutely crap optical drives that would fail if you looked at them crossways. This drive was no exception; it would read commercially pressed CDs but only certain media types of burned CDs and wouldn't read any DVD at all. That means it wouldn't boot from my usual OS X Tiger 10.4.6 install DVD, and the last generation of G4 minis require 10.4, so none of my previous OS X CDs would work.

As it happens, the minimum requirement for the G4/1.5 minis is not actually 10.4.2, yet another Apple lie; it's actually 10.4.0 (though note that some devices like Bluetooth may not work properly). This is fortunate because 10.4.0 was available in CD format and I was able to boot Disk Utility off that Tiger CD instead. Your other option is to bring up the mini in Target Disk Mode (connect over FireWire; hold T down as you turn the mini on until you see a yellow logo on a blue background) from another Power Mac and do the formatting there. In fact, we'll be using Target Disk Mode in a minute, but here I just booted from the CD instead.

In Disk Utility (whether you're doing this on the machine from the Tiger installer or on another machine over FireWire), wipe the mini's current partition scheme and create two new partitions. The first is your HFS+ volume for booting. This particular machine will only run NetBSD, so I made it 512MB to have enough room for multiple kernels and for other files I might need, but if you want a dual-boot system you can make this larger. The second partition will be for NetBSD; I allocated everything else and created it as a UFS ("UNIX File System") partition, though we will divvy it up later. The formatting scheme should look more or less like these screenshots. Shut down the mini when you're done.

Now we boot the NetBSD installer. Bring up the machine in OpenFirmware mode -- all New World Macs use OpenFirmware 3 -- by holding down Command-Option-O-F while powering it on (I advise doing this from a directly-attached USB keyboard). This will bring up the OpenFirmware interface. When you are commanded to do so, release the keys and you will drop to the famous ok prompt. If you're lucky and the evil spirits in your optical drive have been placated by an offering of peanut M&Ms and a young maiden at midnight, you can simply insert the NetBSD install disc and type

boot cd:,\ofwboot.xcf netbsd.macppc

Note the backslash, not a forward slash! If this works properly, then the screen will go black (you don't go back) and enter the Installer proper.

If you get weird errors or OpenFirmware complains the disc is not readable, the optical drive is probably whacked. My drive wouldn't read burned Fujifilm CD-R media (that everything else did), but would read burned Maxell media. If you can't even control for that, you may be able to connect a FireWire CD/DVD reader and boot from it instead. The command would be "something like"

boot fw/node/sbp-2/disk:,\ofwboot.xcf netbsd.macppc

If this didn't work, you may need to snoop around the OpenFirmware device tree to figure out where the device is actually attached, though this should basically work for the G4 mini's single port. Alternatively, you could also try a USB CD-ROM drive, or dding the install image to a USB drive on another computer and booting the mini from that, but the boot string will vary based on which port you connect it to (use dev usb0 and ls to show everything under that port, then dev usb1, etc.). Make sure it is directly connected to the mini. Once you find a device that shows a disk, then "something like" this will work (let's say it was found under usb1):

boot usb1/disk:,\ofwboot.xcf netbsd.macppc

If even that won't work, there are some other ways like netbooting, but this is rather more complicated and I won't talk about it here. Or you could actually fix the drive, I guess ...

When the Installer starts up, choose the option to drop to the shell when it is offered. We will now finish the partitioning from the NetBSD side; we do not use the Installer's built-in partition tool as it will run out of memory. At the shell prompt, type

pdisk /dev/wd0c

When it asks you for a command, type a capital letter P and press RETURN. This will print out the current partition map, which if your mini is similar to mine, should show 4 partitions: the Apple partition map itself, followed by the HFS+ partition, and then by a tiny Apple_Boot partition that is made whenever a UFS volume appears to be the boot volume. (Silly Mac OS X.) You can remove it if you want, but this seemed like more trouble than it was worth for a measly 8.5 megabytes. After that is the space for NetBSD. On my G4 mini, this was partition number 4. Delete this partition by typing a lower-case d, press RETURN, and type 4. Be sure of this number! I will use it in the examples below.

First we will formally create the swap. This is done with the capital letter C command (as shown in the screenshot). Indicate the first block is 4p (i.e., starting at partition 4), for 4194304 blocks (2GB), type Apple_UNIX_SVR2 (don't forget the underscores!), and slice b.

Next is the actual NetBSD root: capital letter C, then saying the first block was 5p (i.e., starting at partition 5, the unallocated section), soaking up the rest of the blocks (however many you see listed under Apple_Free), type Apple_UNIX_SVR2 (don't forget the underscores!), and slice a.

If you did all this right, your screen should look more or less like this:

Verify the partition map one more time with the capital letter P command, then write it out with lower-case w, answering y(es), and then quit with lower-case q. At the shell prompt, return to the installer by typing sysinst and when asked, indicate you will "Use existing partition sizes." The installer will then install the appropriate packages and you can do the initial setup for your clock, the root password, etc. When this is all done, reboot your mini with the left mouse button held down; it will eject the CD (and fail to find a boot volume if you do not have an OS X installation). Shut down the mini.

Before the mini will boot NetBSD, we must copy the kernel and the bootloader to the HFS+ partition. This is where Target Disk Mode comes in handy, because you can just copy directly. Here is my iBook G4 copying a custom kernel (more in a moment):

On the iBook G4, I put in the NetBSD install CD and copied off ofwboot.xcf and netbsd-GENERIC.gz, or you can download them from here and here. They should be copied to the root of the mini's HFS+ volume for the command below to work. For good measure I also uncompressed the gzipped kernel as a failsafe and put a copy of the installation kernel there too, though this isn't necessary. Once the files are copied, eject the mini's drive on the FireWire machine, unplug the FireWire and power the mini off.

If you don't have another Mac around that can talk to the mini over FireWire, you can do this from NetBSD itself, but it's a bit more involved.

Either way, re-enter OpenFirmware with Cmd-Opt-O-F while powering it back up. It's time to boot your new NetBSD machine.

You can see from the screenshot here that the HFS+ volume is considered partition 2, as we left it in pdisk. That means your boot string is

boot hd:,\ofwboot.xcf hd:2/netbsd-GENERIC.gz

Yes, the path to ofwboot still has a backslash, but the argument to ofwboot actually needs a forward slash. NetBSD will start immediately.

There are several minor and one rather obnoxious bug with NetBSD's current support. You will notice a few strange messages on startup as part of the huge mass of text:

oea_startup: failed to allocate DEAD ZONE: error=12
pmu0: power-mgt not configured
pmu0: pmu-pwm-fans not configured
WARNING: 3 errors while detecting hardware; check system log.
bwi0: firmware_open failed on v3/ucode5.fw

I don't know what the first three are, but they appear to be harmless, and appear in many otherwise working dmesg archives (see also this report). The summary WARNING thus can also be politely ignored.

However, the last message is rather obnoxious. Per Sevan the built-in Broadcom WiFi in the Mac mini (detected as bwi0) doesn't work right in NetBSD with more than 512MB of memory, which I refuse to downgrade to, and NetBSD doesn't come with the firmware anyway. Even if you copy it off some other system that does, you won't be able to bring the interface up in the configuration here (you'll just see weird errors about wrong firmware version, etc.).

Since this machine is a bridge and sometimes needs to connect to a test WiFi, I went with a USB WiFi dongle instead (I also use a USB dongle when bridging Ethernet to Ethernet, but pretty much any Ethernet-USB dongle will work too). The one I had on the shelf that I'd bought for something else and then forgot about was a Belkin Wireless G. They sell a number of chipsets under this name, but the model F5D7050 I have here is based on a Ralink RT2501USB chipset that NetBSD sees as rum0, and works fine with wpa_supplicant.
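
For the curious, bringing a rum(4) dongle up on a WPA network in NetBSD comes down to a couple of config files. Here's a minimal sketch with a placeholder SSID and passphrase; double-check the rc.conf knobs against your NetBSD release. In /etc/rc.conf:

dhcpcd=YES
wpa_supplicant=YES
wpa_supplicant_flags="-i rum0 -c /etc/wpa_supplicant.conf"

And in /etc/wpa_supplicant.conf:

network={
        ssid="examplenet"
        psk="example passphrase"
}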

Last but not least was making it "failsafe," with a solid power supply and making it autostarting. Although the G4 mini came with an 85W power supply, I stole the 110W from my 2007 Intel mini and used that so it wouldn't run anywhere near the PSU's capacity and hopefully lengthen its lifetime. As it turns out, this may not have been a problem anyway; most of the time this system is using just 21W on the Kill-A-Watt, maybe 40ish when it's booting.

To autostart NetBSD, ordinarily you would go into OpenFirmware and set boot-device to the bootloader and boot-file to the kernel, as the picture below shows.

However, you'll end up with a black screen or at minimum no console at all on an OpenFirmware 3 system if that's all you do. The magic sauce is to emit some text to the screen before loading the bootloader. Thus, the OpenFirmware settings are (each setenv command is one line):

setenv auto-boot? true
setenv boot-device hd:,\ofwboot.xcf
setenv boot-file hd:2/netbsd-GENERIC.gz (note that I used a different kernel in the screenshot: more in a second)
setenv boot-command ." hi there" cr " screen" output boot
reset-all

The boot-command spacing is especially critical. There is a space after the ." and the quote mark before screen" and after cr is also separated by spaces. The reset-all just tells OpenFirmware to write those settings to NVRAM. If you zap the mini's PRAM with Command-Option-P-R later, you may need to re-enter these.

In this configuration your mini will now start NetBSD automatically when it's turned on (just hold down Command-Option-O-F when starting it up to abort to OpenFirmware). However, this won't bring the machine up automatically after a power failure. While FreeBSD allows starting up after a power failure, this code apparently never made it over to NetBSD. Happily, supporting it merely requires a relatively simple kernel hack. Based on the FreeBSD pmu(4) driver, I created a patch that will automatically reboot any PMU-based NetBSD Power Mac after a power failure.

You should be comfortable with compiling your own kernels in NetBSD; not only is it just good to do for auditing purposes, but you can slim the kernel down substantially or enable other less common features. It's especially easy for NetBSD because all the tools to build it come with a standard installation. All you need to do is download the source and run the build process.

To use this patch, download the source to your home directory on the NetBSD box (you want syssrc.tgz) and download the patch and have it in your home directory as pmu.diff. If you don't have a working curl on your install yet (pkg_add curl, pkg_add mozilla-rootcerts, mozilla-rootcerts install), you may want to download it somewhere else and use scp, sftp or ftp to retrieve it. Then, adjusting as necessary for username and path,

cd /
tar zxf ~/syssrc.tgz
cd /usr/src/sys/arch/macppc/dev
patch -p0 < ~/pmu.diff

Then follow the instructions to make the kernel. I have a pre-built one of 8.1-GENERIC (I call it POWERON) on the gopher server, but you should really roll your own so that you get security fixes, since I may only maintain that kernel intermittently. That build is the one I'm using on the machine currently and on the screenshot above. With this custom kernel installed, when the power is abruptly cut while the machine is powered up it will automatically reboot when power is reapplied, just as the analogous option does in Mac OS X. Copy it to the HFS+ partition and remember to change boot-file to point to it once you've confirmed it works.
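
If you haven't built a NetBSD kernel before, the classic procedure looks something like this, using POWERON as the config name to match the kernel above (see the NetBSD kernel documentation for the full story):

cd /usr/src/sys/arch/macppc/conf
cp GENERIC POWERON        # start from the stock config, then edit to taste
config POWERON
cd ../compile/POWERON
make depend && make       # the finished kernel lands here as "netbsd"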

Overall, I think the G4 mini makes a fine little server. I wouldn't use it as a client except in Mac OS X itself, and I am forced to admit that even that is becoming less practical these days. But as a little machine to do important back-office tasks and do so reliably, I think NetBSD on the mini is a good choice. Once all the kinks with the installation got ironed out, so far it's been solid and performant especially considering this machine is about 13 years old (though I'm happy with its performance even on thirty-year-old machines). Rather than buying something new, if your needs are small it's probable you've got some old machine around that could do those tasks instead of oozing toxins from its circuit board into a waste dump in Rwanda. And since I had two on the shelf, it has an instant spare. I'll probably be usefully running it for as long as I've run my other NetBSD systems, and that's the highest compliment I think I can pay it.

Firefox UXiPad Sketching with GoodNotes 5

I’m always making notes and sketching on my iPad and people often ask me what app I’m using. So I thought I’d make a video of how I use GoodNotes 5 in my design practice.

Links:
This video on YouTube
GoodNotes
Templates: blank, storyboard, crazy 8s, iPhone X

Transcript:
Hi, I’m Michael Verdi, and I’m a product designer for Firefox. Today, I wanna talk about how I use sketching on my iPad in my design practice. So, first, sketching on an iPad, what I really like about it is that these apps on the iPad allow me to collect all my stuff in one place. So, I’ve got photos and screenshots, I’ve got handwritten notes, typed up notes, whatever, it’s all together. And I can organize it, and I can search through it and find what I need.

There are really two basic kinds of apps that you can use for this. There’s drawing and sketching apps, and then there’s note-taking apps. Personally, I prefer the note-taking apps because they usually have better search tools and organization. The thing that I like that I’m gonna talk about today is GoodNotes 5.

I’ve got all kinds of stuff in here. I’ve got handwritten notes with photographs, I’ve got some typewritten notes, screenshots, other things that I’ve saved, photographs, again. Yeah, I really like using this. I can do storyboards in here, right, or I can draw things, copy and paste them so that I can iterate quickly, make multiple variations over and over again. I can stick in screenshots and then draw on top of them or annotate them. Right, so, let me do a quick demo of using this to draw.

So, one of the things that I’ll do is actually, maybe I’ve drawn some stuff before, and I’ll save that drawing as an image in my photo library. And then I’ll come stick it in here, and I’ll draw on top of it. So, I work on search, so here’s Firefox with no search box. So, I’m gonna draw one. Let’s use some straight lines to draw one. I’m gonna draw a big search box, but I’m doing it here in the middle because I’m gonna place it a little better in a second. And we have the selection tool, and I’m gonna make the, the selection is not selecting images, right? So, I can come over here and just grab my box, and then I can move my box around on top. Okay, so, I still have this gray line. I can’t erase that because it’s an image. So, I’m gonna come over here, and I’m gonna get some white, and I’m gonna just draw over it. Right, okay. Let’s go back and get my gray color. I can zoom in when I need to, and I’m gonna copy this, and I’m gonna paste it a bunch of times. Then I can annotate this. Right, so, there we go.

Another thing that I really like about GoodNotes is the ability to search through stuff that you’ve done. So, I’m gonna search here, and I’m gonna search for spaces. So, this was a thing that we mocked up with a storyboard. This is it right here. And it recognized my, it read my handwriting, which is really cool. So, I can find this thing in a notebook of a jillion pages. But there’s also another way to find things. So, you have this view here, this is called the outline view. These are sorta like named bookmarks. There’s also a thumbnail view, right? Here’s all the pages in this notebook. But if I go to the outlines, so, here, I did some notes about a critique format, and I can jump right to them. But let’s say this new drawing, well, where did I do it? This new drawing, I wanna be able to get to this all the time, right? So, I can come up here, and I can say Add This Page to the Outline, and now I can give it a name. And I don’t know what I’m gonna call this, so I’m just callin’ it sample for right now. And so, now it is in the outline. Oh, I guess I had already done this as a demo. But there it is. And that’s how I can get to it now. That’s super, super cool.

Okay, and then one last thing I wanna show you about this is templates. So, I actually made this template to better fit my setup here. I’ve got no status bar at the top. And then these are just PDFs. And you can import your own. And I can change the template for this page, and I’ve made a storyboard template. And I can apply that here. And now I’ve got a thing so I can draw a storyboard. Or maybe I don’t wanna do a storyboard, but what else do I have? Oh, I wanna do a crazy eights exercise. So, now I’ve got one ready for a crazy eight exercise. I love these templates, they’re super handy. I’ll include those in the post with this, a link to some of these things.

So, that’s sketching on the iPad with GoodNotes 5. Thanks for watching.


Hacks.Mozilla.OrgGeckoView in 2019

Last September we wrote about using GeckoView to bring Firefox’s rendering engine to Android as a reusable library. By decoupling the Gecko engine from the Firefox application, we’ve created a newer, faster, and more maintainable way to create Android applications. This approach leverages Gecko’s excellent performance, privacy, and support for cutting-edge web standards.

With today’s release of our GeckoView-powered Firefox Preview, we’d like to share an update on what we’ve accomplished and where GeckoView is going in 2019.

Introducing Firefox Preview

We’re excited to announce today’s initial release of Firefox Preview (GitHub), an entire browser built from the ground up with GeckoView and Mozilla Android Components at its core. Though still an early preview, this is our first end-user product built completely with these new technologies.

Two screenshots of Firefox Preview showing the home screen and a page loaded with the main menu open

Firefox Preview is our platform for building, testing, and delivering unique features. We’ll use it to explore new concepts for how mobile browsers should look and feel. We encourage you to give it a try!

Other Projects using GeckoView

But that is not all — Mozilla is using GeckoView in many other products as well:

Firefox Focus

Logo of Firefox Focus

To date, Firefox Focus has been our most prominent consumer of GeckoView. Focus’s simplicity lends itself to experimentation. Currently we’re using Focus to split test between GeckoView and Android’s built-in WebView. This helps us ensure that GeckoView’s performance and stability meet or exceed expectations set by Android’s platform libraries.

While Focus is great at what it does, it is not a general purpose browser. By design, Focus does not keep track of history or bookmarks, nor does it support APIs like WebRTC. Yet we need a place to test those features to ensure that GeckoView is sufficiently robust and capable of building fully-featured browsers. That’s where Reference Browser comes in.

Reference Browser

Logo of Reference Browser

Like Firefox Preview, Reference Browser is a full browser built with GeckoView and Mozilla Android Components, but crucially, it is not an end-user product. Its intended audience is browser developers. Indeed, Reference Browser is a proving ground where we validate that GeckoView and the Components fit together and work as expected. We gain the ability to develop our core libraries without the constraints of an in-market product.

Firefox Reality

Logo of Firefox Reality

GeckoView also powers Firefox Reality, a browser designed for standalone virtual reality headsets. In addition to leveraging Gecko’s excellent support for immersive web technologies, Firefox Reality demonstrates GeckoView’s versatility. The same library that’s at the heart of “traditional” browsers like Focus and Firefox Preview can also power experiences in an entirely different medium.

Firefox for Android

Logo of Firefox

Lastly, while Firefox for Android (“Fennec”) does not use GeckoView for normal browsing, it does use it to support Progressive Web Apps and Custom Tabs. Moreover, because GeckoView and Fennec are both based on Gecko, they jointly benefit from improvements to that common infrastructure.

GeckoView is the foundation of Mozilla’s next generation of mobile products. To better support that future, we’ve halted new feature development on Focus while we concentrate on refining GeckoView and preparing for the launch of Firefox Preview. If you’re interested in supporting Focus in the future, please help by filling out this survey.

Internals

Aside from product development, the past six months have seen many improvements to GeckoView’s internals, especially around compiler-level optimizations and support for additional CPU architectures. Highlights include:

  • Profile-Guided Optimization (PGO) on Android is now enabled, which allows the compiler to generate more efficient code by considering data gathered by actually running and observing GeckoView.
  • The IonMonkey JavaScript JIT compiler is now enabled for 64-bit ARM builds of GeckoView.
  • We’re now producing builds of GeckoView for the x86_64 architecture.

In addition to meeting Google’s upcoming requirements for listing in the Play Store, supporting 64-bit architectures further improves GeckoView’s stability (fewer out-of-memory crashes) and security.

For upcoming releases, we’re working on support for web push and “add to home screen,” among other things.

Get involved

GeckoView isn’t just for Mozilla; we want it to be useful to you.

Thanks to Emily Toop for new work on the GeckoView Documentation website. It’s easier than ever to get started, either as an app developer using GeckoView or as a GeckoView contributor. If you spot something that could be better documented, pull requests are always welcome.

Logos of all GeckoView-powered projects

We’d also love to help you directly if you need assistance with anything GeckoView-related.

Please do reach out with any questions or comments.

The post GeckoView in 2019 appeared first on Mozilla Hacks - the Web developer blog.

Mozilla Future Releases BlogReinventing Firefox for Android: a Preview

At Firefox, we’re passionate about providing solutions for people who care about safety, privacy and independence. For several months, we’ve been working on a new strategy for our Android products to serve you even better. Today we’re very happy to announce a pilot of our new browser for Android devices, available to early adopters for testing as of now. We’ll have a feature-rich, polished version of this flagship application available this fall.

Firefox Preview — our new mobile pilot app for Android

Always-on, always private: a new and improved mobile Firefox

Unlike Big Tech, which only recently started to put more emphasis on privacy, we launched Firefox Focus about two and a half years ago: a mobile browser for iOS and Android that lets you discover the web without being followed around by trackers. While continuously improving Firefox Focus over time, we realized that users demanded a full-fledged mobile browsing experience, but one more private and secure than any existing app. So we decided to make Firefox more like Focus, but with all the ease and amenities of a full-featured mobile browser. The result is an early version of what we currently call Firefox Preview.

Bringing Firefox Quantum performance to mobile, with GeckoView

With Firefox Preview, we’re combining the best of what our lightweight Focus application and our current mobile browsers have to offer to create a best-in-class mobile experience. The new application is powered by Firefox’s own mobile browser engine — GeckoView — the same high-performance, feature-enabling engine that fuels our Focus app.

You might remember how we revamped the engine behind the Firefox desktop browser in 2017, enabling us to significantly improve the desktop user experience. As a result, today’s Firefox Quantum is much faster, more efficient, equipped with a modern user interface and clearly the next-gen Firefox. Quite similarly, implementing GeckoView paves the way for a complete makeover of the mobile Firefox experience. While all other major Android browsers today are based on Blink, and are therefore reflective of Google’s decisions about mobile, Firefox’s GeckoView engine ensures independence for us and our users. Building Firefox for Android on GeckoView also gives us greater flexibility in the types of privacy and security features we can offer our mobile users. With GeckoView we have the ability to develop faster, more secure and more user-friendly browsers that deliver unprecedented performance.

To speak more specifically about features, here are some new functions Firefox Preview will offer, partially enabled by GeckoView:

        • Faster than ever: Firefox Preview is up to 2x faster than previous versions of Firefox for Android.
        • Fast by design: with a minimalist start screen and bottom navigation bar, Preview helps you get more done on the go.
        • Stay organized: Make sense of the web with Collections, a new feature that helps you save, organize, and share collections of sites. Quickly save and return to tasks like your morning routine, shopping lists, travel planning and more.
        • Tracking Protection on by default: Everyone deserves freedom from invasive advertising trackers and other bad actors, so Firefox Preview blocks trackers by default. The result is faster browsing and fewer annoyances.

 

 

With Firefox Preview you’re browsing the mobile web faster, more efficiently and more privately

 

For more information about how we’re planning to use GeckoView in our product portfolio, check out this blog post on Mozilla Hacks.

Be among the first to test

Before we release products to the world, we run many different experiments and tests that we learn from and that help us make our products better for real consumption. For example, our Firefox Quantum desktop browser has a beta release, a separate channel aimed at developers or early tech adopters to test upcoming features before they’re released to all consumers.

Likewise, what we’re releasing today is an early version of our experimental browser for Android users, based on GeckoView. Firefox Preview is a separate mobile application primarily aimed at developers and early adopters who want to help us improve Firefox on Android. The user experience of this early version will differ significantly from that of the final product, planned for release later this year. We’re counting on our passionate users to try it now and provide the kind of feedback (via email or on GitHub) that will enable us to release the best mobile Firefox possible and continuously improve GeckoView.

How our new mobile strategy affects existing products

For the rest of 2019, we’re going to direct our efforts into optimizing the entire Firefox experience on all Android devices. In order to have a strong foundation for the next generation of mobile Firefox browsers and to put all our efforts and resources into GeckoView, work on Firefox Focus is currently on hold. Don’t worry though, you can still keep using our privacy browser, Focus, as well as our current Firefox for Android.

Stay tuned for more!

We hope this update from the Firefox Mobile Team sparks excitement for the new mobile strategy we’re rolling out in 2019. We plan to take mobile browsing to a whole new level. No matter where, when or on which device, we at Firefox believe that you always deserve the best possible user experience. And we’ll do our best to bring it to your screens.

Try the preview of our new Firefox for Android, let us know what you think about this GeckoView-based mobile app and stay tuned!

The post Reinventing Firefox for Android: a Preview appeared first on Future Releases.

Chris PearceFirefox's Gecko Media Plugin & EME Architecture

For rendering audio and video Firefox typically uses either the operating system's audio/video codecs or bundled software codec libraries, but for DRM video playback (like Netflix, Amazon Prime Video, and the like) and WebRTC video calls using baseline H.264 video, Firefox relies on Gecko Media Plugins, or GMPs for short.

This blog post describes the architecture of the Gecko Media Plugin system in Firefox, and the major class/objects involved, as it looked in June 2019.

For DRM video Firefox relies upon Google's Widevine Content Decryption Module, a dynamic shared library downloaded at runtime. Although this plugin doesn't conform to the GMP ABI, we provide an adapter to allow it to be run through the GMP system. We use the same Widevine CDM plugin that Chrome uses.

For decode and encode of H.264 streams for WebRTC, Firefox uses OpenH264, which is provided by Cisco. This plugin implements the GMP ABI.

These two plugins are downloaded at runtime from Google's and Cisco's servers, and installed in the user's Firefox profile directory.

We also ship a ClearKey CDM, which is the baseline decryption scheme required by the Encrypted Media Extensions specification. This mimics the interface which the Widevine CDM implements, and is used in our EME regression tests. It's bundled with the rest of Firefox, and lives in the Firefox install directory.

The objects involved in running GMPs are spread over three processes; the main (AKA parent) process, the sandboxed content process where we run JavaScript and load web pages, and the sandboxed GMP process, which only runs GMPs.




The main facade to the GMP system is the GeckoMediaPluginService. Clients use the GeckoMediaPluginService to instantiate IPDL actors connecting their client to the GMP process, and to configure the service. In general, most operations which involve IPC to the GMPs/CDMs should happen on the GMP thread, as the GMP related protocols are processed on that thread.

mozIGeckoMediaPluginService can be used on the main thread by JavaScript, but the main-thread accessible methods proxy their work to the GMP thread.

How GMPs are downloaded and installed

The Firefox front end code which manages GMPs is the GMPProvider. This is a JavaScript object, running in the front end code in the main process. On startup, if any existing GMPs are already downloaded and installed, this calls mozIGeckoMediaPluginService.addPluginDir() with the path to the GMP's location on disk. Gecko's C++ code then knows about the GMP. The GeckoMediaPluginService then parses the metadata file in that GMP's directory, and creates and stores a GMPParent for that plugin. At this stage the GMPParent is like a template, which stores the metadata describing how to start a plugin of this type. When we come to instantiate a plugin, we'll clone the template GMPParent into a new instance, and load a child process to run the plugin using the cloned GMPParent.

Shortly after the browser starts up (usually within 60 seconds), the GMPProvider will decide whether it should check for new GMP updates. The GMPProvider will check for updates if either it has not checked in the past 24 hours, or if the browser has been updated since last time it checked. If the GMPProvider decides to check for updates, it will poll Mozilla's Addons Update Server. This will return an update.xml file which lists the current GMPs for that particular Firefox version/platform, and the URLs from which to download those plugins. The plugins are hosted by third parties (Cisco and Google), not on Mozilla's servers. Mozilla only hosts the manifest describing where to download them from.

If the GMPs in the update.xml file are different to what is installed, Firefox will update its GMPs to match the update.xml file from AUS. Firefox will download and verify the new GMP, uninstall the old GMP, install the new GMP, and then add the new GMP's path to the mozIGeckoMediaPluginService. The objects that do this are the GMPDownloader and the GMPInstallManager, which are JavaScript modules in the front end code as well.

Note Firefox will take action to ensure its installed GMPs matches whatever is specified in the update.xml file. So if a version of a GMP which is older than what is installed is specified in the update.xml file, Firefox will uninstall the newer version, and download and install the older version. This is to allow a GMP update to be rolled back if a problem is detected with the newer GMP version.

If the AUS server can't be contacted, and no GMPs are installed, Firefox has the URLs of GMPs baked in, and will use those URLs to download the GMPs.

On startup, the GMPProvider also calls mozIGeckoMediaPluginService.addPluginDir() for the ClearKey CDM, passing in its path in the Firefox install directory.

How EME plugins are started in Firefox

The lifecycle for Widevine and ClearKey CDM begins in the content process with content JavaScript calling Navigator.requestMediaKeySystemAccess(). Script passes in a set of MediaKeySystemConfig, and these are passed forward to the MediaKeySystemAccessManager. The MediaKeySystemAccessManager figures out a supported configuration, and if it finds one, returns a MediaKeySystemAccess from which content JavaScript can instantiate a MediaKeys object. 

Once script calls MediaKeySystemAccess.createMediaKeys(), we begin the process of instantiating the plugin. We create a MediaKeys object and a ChromiumCDMProxy object, and call Init() on the proxy. The initialization is asynchronous, so we return a promise to content JavaScript and on success we'll resolve the promise with the MediaKeys instance which can talk to the CDM in the GMP process.
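
For orientation, here is roughly what that content-side flow looks like in standard EME JavaScript (a minimal sketch; the key system string, codec string, and video element are illustrative, not specific to Firefox):

const video = document.querySelector('video');

// Illustrative configuration; real players negotiate codecs and robustness levels.
const config = [{
  initDataTypes: ['cenc'],
  videoCapabilities: [{ contentType: 'video/mp4; codecs="avc1.42E01E"' }]
}];

navigator.requestMediaKeySystemAccess('com.widevine.alpha', config)
  .then(access => access.createMediaKeys())          // kicks off CDM instantiation in the GMP process
  .then(mediaKeys => video.setMediaKeys(mediaKeys)); // associates the CDM with the media element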

To create a new CDM, ChromiumCDMProxy::Init() calls GeckoMediaPluginService::GetCDM(). This runs in the content process, but since the content process is sandboxed, we can't create a new child process to run the CDM there and then. As we're in the content process, the GeckoMediaPluginService instance we're talking to is a GeckoMediaPluginServiceChild. This calls over to the parent process to retrieve a GMPContentParent bridge. GMPContentParent acts like the GMPParent in the content process. GeckoMediaPluginServiceChild::GetContentParent() retrieves the bridge, and sends a LaunchGMPForNodeId() message to instantiate the plugin in the parent process.

In the non multi-process Firefox case, we still call GeckoMediaPluginService::GetContentParent(), but we end up running GeckoMediaPluginServiceParent::GetContentParent(), which can just instantiate the plugin directly.

When the parent process receives a LaunchGMPForNodeId() message, the GMPServiceParent runs through its list of GMPParents to see if there's one matching the parameters passed over. We check to see if there's an instance from the same NodeId, and if so use that. The NodeId is a hash of the origin requesting the plugin, combined with the top level browsing origin, plus salt. This ensures GMPs from different origins always end up running in different processes, and GMPs running in the same origin run in the same process.
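
Conceptually, the NodeId derivation is something like the following (a Node.js sketch of the idea only; the delimiter, hash choice, and function name are assumptions, not Gecko's actual code):

const { createHash } = require('crypto');

// Same (origin, top-level origin, salt) always yields the same id;
// different origins yield different ids, and hence different processes.
function computeNodeId(origin, topLevelOrigin, salt) {
  return createHash('sha256')
    .update(`${origin}|${topLevelOrigin}|${salt}`)
    .digest('hex');
}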

If we don't find an active GMPParent running the requested NodeId, we'll make a copy of a GMPParent matching the parameters, and call LoadProcess() on the new instance. This creates a GMPProcessParent object, which in turn uses GeckoChildProcessHost to run a command line to start the child GMP process. The command line passed to the newly spawned child process causes the GMPProcessChild to run, which creates and initializes the GMPChild, setting up the IPC connection between GMP and Main processes.

The GMPChild delegates most of the business of loading the GMP to the GMPLoader. The GMPLoader opens the plugin library from disk, and starts the Sandbox using the SandboxStarter, which has a different implementation for every platform. Once the sandbox is started, the GMPLoader uses a GMPAdapter parameter to adapt whatever binary interface the plugin exports (the Widevine C API for example) to match the GMP API. We use the adapter to call into the plugin to instantiate an instance of the CDM. For OpenH264 we simply use a PassThroughAdapter, since the plugin implements the GMP API.

If all that succeeded, we'll send a message reporting success to the parent process, which in turn reports success to the content process, which resolves the JavaScript promise returned by MediaKeySystemAccess.createMediaKeys() with the MediaKeys object, which is now set up to talk to a CDM instance.

Once content JavaScript has a MediaKeys object, it can set it on an HTMLMediaElement using HTMLMediaElement.setMediaKeys().

The MediaKeys object encapsulates the ChromiumCDMProxy, which proxies commands sent to the CDM into calls to ChromiumCDMParent on the GMP thread.

How EME playback works

There are two main cases that we care about here; encrypted content being encountered before a MediaKeys is set on the HTMLMediaElement, or after. Note that the CDM is only usable to the media pipeline once it's been associated with a media element by script calling HTMLMediaElement.setMediaKeys().

If we detect encrypted media streams in the MediaFormatReader's pipeline, and we don't have a CDMProxy, the pipeline will move into a "waiting for keys" state, and not resume playback until content JS has set a MediaKeys on the HTMLMediaElement. Setting a MediaKeys on the HTMLMediaElement causes the encapsulated ChromiumCDMProxy to bubble down past MediaDecoder, through the layers until it ends up on the MediaFormatReader, and the EMEDecoderModule.

Once we've got a CDMProxy pushed down to the MediaFormatReader level, we can use the PDMFactory to create a decoder which can process encrypted samples. The PDMFactory will use the EMEDecoderModule to create the EME MediaDataDecoders, which process the encrypted samples.

The EME MediaDataDecoders talk directly to the ChromiumCDMParent, which they get from the ChromiumCDMProxy on initialization. The ChromiumCDMParent is the IPDL parent actor for communicating with CDMs.

All calls to the ChromiumCDMParent should be made on the GMP thread. Indeed, one of the primary jobs of the ChromiumCDMProxy is to proxy calls made by the MediaKeys on the main thread to the GMP thread so that commands can be sent to the CDM via off main thread IPC.

Any callbacks from the CDM in the GMP process are made onto the ChromiumCDMChild object, and they're sent via PChromiumCDM IPC over to ChromiumCDMParent in the content process. If they're bound for the main thread (i.e. the MediaKeys or MediaKeySession objects), the ChromiumCDMCallbackProxy ensures they're proxied to the main thread.

Before the EME MediaDataDecoders submit samples to the CDM, they first ensure that the samples have a key with which to decrypt the samples. This is achieved by a SamplesWaitingForKey object. We keep a copy in the content process, in the CDMCaps object, of which keyIds the CDM has reported are usable. The information stored in the CDMCaps about which keys are usable is mirrored in the JavaScript-exposed MediaKeyStatusMap object.

The MediaDataDecoder's decode operation is asynchronous, and the SamplesWaitingForKey object delays decode operations until the CDM has reported that the keys that the sample requires for decryption are usable. Before sending a sample to the CDM, the EME MediaDataDecoders check with the SamplesWaitingForKey, which looks up in the CDMCaps whether the CDM has reported that the sample's keyId is usable. If not, the SamplesWaitingForKey registers with the CDMCaps for a callback once the key becomes usable. This stalls the decode pipeline until content JavaScript has negotiated a license for the media.

Content JavaScript negotiates licenses by receiving messages from the CDM on the MediaKeySession object, and forwarding those messages on to the license server, and forwarding the response from the license server back to the CDM via the MediaKeySession.update() function. These messages are in turn proxied by the ChromiumCDMProxy to the GMP thread, and result in a call to ChromiumCDMParent and thus an IPC message to the GMP process, and a function call into the CDM there. If the license server sends a valid license, the CDM will report the keyId as usable via a key statuses changed callback.
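
In content JavaScript, that negotiation loop looks roughly like this (a sketch; licenseServerUrl and initData are placeholders a real player obtains elsewhere, e.g. from the 'encrypted' event):

const session = mediaKeys.createSession();

session.addEventListener('message', async (event) => {
  // Forward the CDM's license request to the (hypothetical) license server...
  const response = await fetch(licenseServerUrl, { method: 'POST', body: event.message });
  const license = await response.arrayBuffer();
  // ...and hand the license back to the CDM; this call is proxied via the
  // ChromiumCDMProxy to the GMP thread, and then over IPC to the CDM.
  await session.update(license);
});

session.generateRequest('cenc', initData);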

Once the key becomes usable, the SamplesWaitingForKey gets a callback, and the EME MediaDataDecoder will submit the sample for processing by the CDM and the pipeline unblocks. 

EME on Android

EME on Android is similar in terms of the EME DOM binding and integration with the MediaFormatReader and friends, but it uses a MediaDrmCDMProxy instead of a ChromiumCDMProxy. The MediaDrmCDMProxy doesn't talk to the GMP subsystem, and instead uses the Android platform's inbuilt Widevine APIs to process encrypted samples.

How WebRTC uses OpenH264

WebRTC uses OpenH264 for encode and decode of baseline H.264 streams. It doesn't need all the DRM stuff, so it talks to the OpenH264 GMP via the PGMPVideoDecoder and PGMPVideoEncoder protocols.

The child actors GMPVideoDecoderChild and GMPVideoEncoderChild talk to OpenH264, which conforms to the GMP API.

OpenH264 is not used by Firefox for playback of H264 content inside regular <video>, though there is still a GMPVideoDecoder MediaDataDecoder in the tree should this ever be desired.

How GMP shutdown works

Shutdown is confusing, because there are three processes involved. When the destructor of the MediaKeys object in the content process is run (possibly because it's been cycle collected or garbage collected), it calls CDMProxy::Shutdown(), which calls through to ChromiumCDMParent::Shutdown(), which cancels pending decrypt/decode operations, and sends a Destroy message to the ChromiumCDMChild.

In the GMP process, ChromiumCDMChild::RecvDestroy() shuts down and deletes the CDM instance, and sends a __delete__ message back to the ChromiumCDMParent in the content process.

In the content process, ChromiumCDMParent::Recv__delete__() calls GMPContentParent::ChromiumCDMDestroyed(), which calls CloseIfUnused(). The GMPContentParent tracks the living protocol actors for this plugin instance in this content process, and CloseIfUnused() checks if they're all shutdown. If so, we unlink the GMPContentParent from the GeckoMediaPluginServiceChild (which is PGMPContent protocol's manager), and close the GMPContentParent instance. This shuts down the bridge between the content and GMP processes.

This causes the GMPContentChild in the GMP process to be removed from the GMPChild in GMPChild::GMPContentChildActorDestroy(). This sends a GMPContentChildDestroyed message to GMPParent in the main process.

In the main process, GMPParent::RecvPGMPContentChildDestroyed() checks if all actors on its side are destroyed (i.e. if all content processes' bridges to this GMP process are shutdown), and will shutdown the child process if so. Otherwise we'll check again the next time one of the GMPContentParents shuts down. 

Note there are a few places where we use GMPContentParent::CloseBlocker. This stops us from shutting down the child process when there are no active actors, but we still need the process alive. This is useful for keeping the child alive in the time between operations, for example after we've retrieved the GMPContentParent, but before we've created the ChromiumCDM (or some other) protocol actor.

How crash reporting works for EME CDMs

Crash handling for EME CDMs is confusing for the same reason as shutdown: there are three processes involved. It's tricky because the crash is first reported in the parent process, but we need state from the content process in order to identify which tabs need to show the crash reporter notification box.

We receive a GMPParent::ActorDestroy() callback in the main process with aWhy==AbnormalShutdown. We get the crash dump ID, and dispatch a task to run GMPNotifyObservers() on the main thread. This collects some details, including the pluginID, and dispatches an observer service notification "gmp-plugin-crash".  A JavaScript module ContentCrashHandlers.jsm observes this notification, and rebroadcasts it to the content processes.

JavaScript in every content process observes the rebroadcast, and calls mozIGeckoMediaPluginService::RunPluginCrashCallbacks(), passing in the plugin ID. Each content process' GeckoMediaPluginService then goes through its list of GMPCrashHelpers, and finds those which match the pluginID. We then dispatch a PluginCrashed event at the window that the GMPCrashHelper reports as the current window owning the plugin. This is then handled by PluginChild.jsm, which sends a message to cause the crash reporter notification bar to show.

GMP crash reporting for WebRTC

Unfortunately, the code paths for handling WebRTC crashes are slightly different, due to their window being owned by PeerConnection. They don't use GMPCrashHelpers; instead, PeerConnection helps find the target window to dispatch PluginCrashed to.

Hacks.Mozilla.OrgHow accessibility trees inform assistive tech

The web is accessible by default. It was designed with features to make accessibility possible, and these have been part of the platform pretty much from the beginning. In recent times, inspectable accessibility trees have made it easier to see how things work in practice. In this post we’ll look at how “good” client-side code (HTML, CSS and JavaScript) improves the experience of users of assistive technologies, and how we can use accessibility trees to help verify our work on the user experience.

People browse differently

Assistive Technology (AT) is the umbrella term for tools that help people operate a computer in the way that suits them. Braille displays, for instance, let blind users understand what’s on their screen by conveying that information in braille format in real time. VoiceOver, a utility for Mac and iOS, converts text into speech, so that people can listen to an interface. Dragon NaturallySpeaking is a tool that lets people operate an interface by talking into a microphone.

A refreshable Braille display (Photo: Sebastien.delorme)

The idea that people can use the web in the way that works best for them is a fundamental design principle of the platform. When the web was invented to let scientists exchange documents, those scientists already had a wide variety of systems. Now, in 2019, systems vary even more.  We use browsers on everything from watches to phones, tablets to TVs. There is a perennial need for web pages that are resilient and allow for user choice. These values of resilience and flexibility have always been core to our work.

AT draws on these fundamentals. Most assistive technologies need to know what happens on a user’s screen. They all must understand the user interface, so that they can convey it to the user in a way that makes sense. Many years ago, assistive technologies relied on OCR (optical character recognition) techniques to figure out what was on the screen. Later they consumed markup directly from the browser. On modern operating systems the software is more advanced: accessibility APIs that are built into the platform provide guidance.

How front-end code helps

Platform-specific Accessibility APIs are slightly different depending on the platform. Generally, they know about the things that are platform-specific: the Start Menu in Windows, the Dock on the Mac, the Favorites menu in Firefox… even the address bar in Firefox. But when we use the address bar to access a website, the screen displays information that it probably has never displayed before, let alone for AT users. How can Accessibility APIs tell AT about information on websites? Well, this is where the right client-side HTML, CSS and JavaScript can help.

Whether we write plain HTML, JSX or Jinja, when someone accesses our site, the browser ultimately receives markup as the start for any interface. It turns that markup into an internal representation, called the DOM tree. The DOM tree contains objects for everything we had in our markup. In some cases, browsers also create an accessibility tree, based on the DOM tree, as a tool to better understand the needs and experiences of assistive technology users. The accessibility tree informs platform-specific Accessibility APIs, which then inform Assistive Technologies. So ultimately, our client-side code impacts the experience of assistive technology users.


A flow chart: your markup results in a DOM tree, which impacts the accessibility tree, which informs the Platform APIs, which ultimately impact AT users.

HTML

With HTML, we can be specific about what things are in the page. We can define what’s what, or, in technical terms, provide semantics. For example (see the sketch after this list), we can define something as a:

  • checkbox or a radio button
  • table of structured data
  • list, ordered or unordered, or a list of definitions
  • navigation or a footer area
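
As a minimal sketch, plain HTML expresses these semantics directly (the element contents are illustrative):

<nav><a href="/docs">Docs</a></nav>  <!-- exposed as a navigation area -->
<input type="checkbox" id="sub">
<label for="sub">Subscribe</label>   <!-- exposed as a checkbox named "Subscribe" -->
<ol>
  <li>First step</li>
  <li>Second step</li>
</ol>                                <!-- exposed as an ordered list -->
<footer>© 2019</footer>              <!-- exposed as a footer (contentinfo) area -->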

CSS

Stylesheets can also impact the accessibility tree: layout and visibility of elements are sometimes taken into account. Elements that are set to display: none or visibility: hidden are taken out of the accessibility tree completely. Setting display to table/table-cell can also impact semantics, as Adrian Roselli explains in Tables, CSS display properties and ARIA.

If your site dynamically changes generated content in CSS (::before and ::after), this can also appear or disappear in accessibility trees.

And then, there are properties that can make visual layout differ from DOM order, for example order in grid and flex items, and auto-flow: dense in Grid Layout. When visual order is different from DOM order, it is likely also going to be different from accessibility tree order. This may confuse AT users. The Flexbox spec is quite clear: the CSS order property is “for visual, not logical reordering”.
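
For example, a flex order override like this contrived sketch changes only the visual order, so AT users may encounter the items in DOM order instead:

.toolbar { display: flex; }
.toolbar .search { order: -1; /* visually first, but still last in DOM and accessibility tree order */ }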

JavaScript

JavaScript lets us change the state of our components. This is often relevant for accessibility; for instance, we can determine (see the sketch after this list):

  • Is the menu expanded or collapsed?
  • Was the checkbox checked or not?
  • Is the email address field valid or invalid?
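
As a small sketch (assuming a button and menu with these ids exist in the markup), a disclosure widget might update its state like this:

const button = document.querySelector('#menu-button'); // hypothetical markup
const menu = document.querySelector('#menu');

button.addEventListener('click', () => {
  const expanded = button.getAttribute('aria-expanded') === 'true';
  button.setAttribute('aria-expanded', String(!expanded)); // this state change flows into the accessibility tree
  menu.hidden = expanded;
});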

Note that accessibility tree implementations can vary, creating discrepancies between browsers.  For instance, missing values are computed to null in some browsers, '' (empty string) in others. Differing implementations are one of many reasons plans to develop a standard are in the works.

What’s in an accessibility tree?

Accessibility trees contain accessibility-related meta information for most of our HTML elements. The elements involved determine what that means, so we’ll look at some examples.

Generally, there are four things in an accessibility tree object:

  • name: how can we refer to this thing? For instance, a link with the text ‘Read more’ will have ‘Read more’ as its name (more on how names are computed in the Accessible Name and Description Computation spec)
  • description: how do we describe this element, if we want to add anything to the name? The description of a table could explain what kind of info that table offers.
  • role: what kind of thing is it? For example, is it a button, a nav bar or a list of items?
  • state: if any, does it have state? Think checked/unchecked for checkboxes, or collapsed/expanded for the `<summary>` element

Additionally, the accessibility tree often contains information on what can be done with an element: a link can be followed, a text input can be typed into, that kind of thing.

Inspecting the accessibility tree in Firefox

All major browsers provide ways to inspect the accessibility tree, so that we can figure out what an element’s name has computed to, or what role it has, according to the browser. For some context on how this works in Firefox, see Introducing the Accessibility Inspector in the Firefox Developer Tools by Marco Zehe.

Here’s how it works in Firefox:

  • In Settings, under Default Developer Tools, ensure that the checkbox “Accessibility” is checked
  • You should now see the Accessibility tab
  • In the Accessibility tab, you’ll find the accessibility tree with all its objects

Animated demo: go to Settings, check Accessibility, and the Accessibility tab appears

Other browsers

In Chrome, the Accessibility Tree information lives together with the DOM inspector and can be found under the ‘Accessibility’ tab. In Safari, it is in the Node tab in the panel next to the DOM tree, together with DOM properties.

An example

Let’s say we have a form where people can pick their favourite fruit:

<form action="">
  <fieldset>
    <legend>Pick a fruit </legend>
    <label><input type="radio" name="fruit"> Apple</label>
    <label><input type="radio" name="fruit"> Orange</label>
    <label><input type="radio"  name="fruit"> Banana</label>
  </fieldset>
</form>

pick a fruit: radio buttons, all unchecked, apple orange banana

In Firefox, this creates a number of objects, including:

    • An object with a role of grouping, named Pick a fruit
    • Three objects with roles of label, named Apple, Orange and Banana, with action Click, and these states: selectable text, opaque, enabled, sensitive
    • An object with role of radiobutton, named Apple, with action of Select and these states: focusable, checkable, opaque, enabled, sensitive

And so on. When we select ‘Apple’, checked is added to its list of states.

Note that each thing expressed in the markup gets reflected in a useful way. Because we added a legend to the group of radio buttons, it is exposed with a name of ‘Pick a fruit’.  Because we used inputs with a type of radio, they are exposed as such and have relevant states.

As mentioned earlier, we don’t just influence this through markup. CSS and JavaScript can also affect it.

With the following CSS, we would effectively take the name out of the accessibility tree, leaving the fieldset unnamed:

legend { display: none; /* removes item from accessibility tree */ }

This is true in at least some browsers. When I tried it in Firefox, its heuristics still managed to compute the name to ‘Pick a fruit’; in Chrome and Safari it was left out completely. What this means, in terms of real humans: they would have no information as to what to do with Apple, Orange and Banana.

As mentioned earlier, we can also influence the accessibility tree with JavaScript. Here is one example:

const inputApple = document.querySelector('input[type="radio"]'); // first radio button, i.e. Apple
inputApple.checked = true; // alters state of this input, also in accessibility tree

Anything you do to manipulate DOM elements, directly through DOM scripting or with your framework of choice, will update the accessibility tree.

Conclusion

To provide a great experience to users, assistive technologies present our web pages differently from how we may have intended them. Yet, what they present is based directly on the content and semantic structure that we provide. As designers and developers, we can ensure that assistive technologies understand our pages well by writing good HTML, CSS and JavaScript. Inspectable accessibility trees help us verify directly in the browser that our names, roles and states make sense.

The post How accessibility trees inform assistive tech appeared first on Mozilla Hacks - the Web developer blog.

QMOFirefox 68 Beta 10 Testday Results

Hello Mozillians!

As you may already know, on Friday June 14th we held a new Testday event for Firefox 68 Beta 10.

Thank you all for helping us make Mozilla a better place: Fernando and noelonassis!

Results: several test cases were executed for Sync & Firefox Account, and for Browser notifications & prompts.

Thanks for another awesome testday, we appreciate your contribution! 🙂

We hope to see you all in our next events, keep an eye on QMO.
We will make announcements as soon as something shows up!

The Firefox FrontierHey advertisers, track THIS

If it feels like the ads chasing you across the internet know you a little too well, it’s because they do (unless you’re an avid user of ad blockers, in … Read more

The post Hey advertisers, track THIS appeared first on The Firefox Frontier.

This Week In RustThis Week in Rust 292

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's crate is winit, a pure-rust cross-platform window initialization library. Thanks to Osspial for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

172 pull requests were merged in the last week

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs
Tracking Issues & PRs

New RFCs

Upcoming Events

Africa
Asia Pacific
Europe
North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

why doesn't 'static, the largest lifetime, not simply eat all the others

@mountain_ghosts on twitter

@mountain_ghosts 'static is biggest but actually,, weakest of lifetimes, becuase it is subtype of every lifetime

'static is big soft friend

pls love and protect it

@gankro on twitter

Thanks to Christopher Durham for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nasa42, llogiq, and Flavsditz.

Discuss on r/rust.

Emily DunhamMore on Mentorship

More on Mentorship

Last year, I wrote about some of the aspirations which motivated my move from Mozilla Research to the CloudOps team. At the recent Mozilla All Hands in Whistler, I had the “how’s the new team going?” conversation with many old and new friends, and that repetition helped me reify some ideas about what I really meant by “I’d like better mentorship”.

To generalize about how mentors’ careers affect what they can mentor me on, I’ve sketched up a quick figure in order to name some possible situations that people can be in relative to one another:

(Figure: places-to-find-mentors.png)

The first couple cases of mentorship are easy to describe, because I’ve experienced and thought about them for many years already:

Mentorship across industries

Mentors from outside my own industry are valuable for high level perspectives, and for advice on general life and human topics that aren’t specialized to a single field. Additionally, specialists in other industries often represent the consumers of my own industry’s products. Wise and thoughtful people who share little to none of my domain knowledge can provide constructive feedback on why my industry’s work gets particular reactions from the people it affects – just as someone who’s never read a particular book before is likely to catch more spelling errors than its own author, who’s been poring over the same manuscript for many hours a day for several years.

However, for more concrete problems within my particular career (“this program is running slower than expected”, or even “how should I describe that role on my resume?”), observers from outside of it can rarely offer a well tested recommendation of a path forward.

Mentorship across companies within an industry

Similarly, mentors from other companies within my own industry are my go-to source of insight on general trends and technologies. A colleague in a distant corner of my field can tell me about the frustrations they encountered when using a piece of technology that I’m considering, and I can use that advice to make better-informed choices in my daily work.

But advice on a particular company’s peculiarities rarely translates well across organizations. A certain frequency of reorganization might be perfectly ordinary at my company, but a re-org might indicate major problems at another. This type of education, while difficult to get from someone at a different company, is perfectly feasible to pick up from anyone on another team within one’s own organization.

Mentorship across teams within a company

When I switched roles, I had trial-and-errored my way into the observation that there’s a large class of problems with which mentors from different teams within the same company cannot effectively help. I’d tentatively call these “junior engineer problems”, as having overcome their general cases seems to correlate strongly to seniority. In my own experience, code-adjacent skills such as the intuition for which problems should be solvable from the docs versus when and whom to ask for help, how deeply to explore a prospective course of action before committing to it, and when to write off an experiment as “effectively impossible”, are all honed through experience and observing expert peers rather than by just asking with words.

Mentorship across projects or specialties within a team

I had assumed that simply being on the same team as people capable of imparting that highly specialized variant of common sense would suffice to expose me to it. However, my first few projects on my new team have clearly shown, in both the positive and the negative cases, that working on the same project as an expert is far more useful to my own growth than simply chancing to be bureaucracied into the same group.

The negative case was my first pair of projects: The migration of 2 small, simple services from my team’s AWS infrastructure to GCP. Although I was on the same team as experts in this process, the particular projects were essentially mine alone, and it was up to me to determine how far to proceed on each problem by myself before escalating it to interrupt a busy senior engineer. My heuristics for that process weren’t great, and I knew that at the outset, but my bias toward asking for help later than was optimal slowed the process of improving my ability to draw that line – how can one enhance one’s discrimination between “too soon”, “just right”, and “too late” when all the data points one gathers are in the same one of those categories?

Mentorship within a project

Finally, however, I’m in the midst of a project that demonstrates a positive case for the type of mentorship I switched teams to seek. I’m in the case labeled A on the diagram up above – I’m working with a more-experienced teammate on a project which also includes close collaboration with members of another team within our organization. In examining why this is working so much better for me than my prior tasks, I’ve noticed some differences: First, I’m getting constant feedback on my own expectations for my work. This is not a formal or bureaucratic process, but simply a series of tiny interactions – expressions of surprise when I complete a task effectively, or recommendations to move on to a different approach when something seems to take too long. Similarly, code review from someone immersed in the same problem that I’m working on is indescribably more constructive than review from someone who’s less familiar with the nuances of whatever objective my code is trying to achieve.

Another reason that I suspect I’m improving more quickly than before in this particular task is the opportunity to observe my teammate modeling the skills that I’m learning in his interactions with our colleagues from another team (those in position C on that chart). There’s always a particular trick to asking a question in a way that elicits the category of answer one actually wanted, and watching this trick done frequently in circumstances where I’m up to date on all the nuances and details is a great way to learn.

The FOSS loophole

I suspect I may have been slower to notice these differences than I otherwise might have been, because the start of my career included a lot of fantastic, same-project mentorship from individuals on other teams, at other companies, and even in other industries. This is because my earliest work was on free and open source software and infrastructure. In FOSS, anyone who can pay with their time and computer usage buys access to a cross-company, often cross-industry web of professionals and can derive all the benefits of working directly with mentors on a single project. I was particularly fortunate to draw a wage from the OSU Open Source Lab while doing that work, because the opportunity cost of hours spent on FOSS by a student who also needs to spend those hours on work is far from free.

Daniel Stenbergopenssl engine code injection in curl

This flaw is known as CVE-2019-5443.

If you downloaded and installed a curl executable for Windows from the curl project before June 21st 2019, go get an updated one. Now.

On Windows, using OpenSSL

The official curl builds for Windows – that the curl project offers – are built cross-compiled on Linux. They’re made to use OpenSSL by default as the TLS backend, by far the most popular TLS backend among curl users.

The curl project has provided official curl builds for Windows on and off through history, but most recently this has been going on since August 2018.

OpenSSL engines

These builds use OpenSSL. OpenSSL has a feature called “engines”. Described by the project itself like this:

“a component to support alternative cryptography implementations, most commonly for interfacing with external crypto devices (eg. accelerator cards). This component is called ENGINE”

More simply put, an “engine” is a plugin for OpenSSL that can be loaded and run dynamically. The particular engine is activated either built-in or by loading a config file that specifies what to do.
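
To make this concrete, a config file that loads a dynamic engine looks roughly like this (a sketch; the engine name and path are illustrative, not taken from the advisory):

# openssl.cnf (sketch)
openssl_conf = openssl_init

[openssl_init]
engines = engine_section

[engine_section]
myengine = myengine_section

[myengine_section]
engine_id = myengine
dynamic_path = /path/to/engine.so   # the shared library that gets loaded and run
init = 1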

curl and OpenSSL engines

When using curl built with OpenSSL, you can specify an “engine” to use, which in turn allows users to use their dedicated hardware when doing TLS related communications with curl.

By default, the curl tool allows OpenSSL to load a config file and figure out what engines to load at run-time but it also provides a build option to make it possible to build curl/libcurl without the ability to load that config file at run time – which some users want, primarily for security reasons.

The mistakes

The primary mistake in the curl build for Windows that we offered was that the disabling of the config file loading had a typo which actually made it not disable it (because the commit message had it wrong). The feature was therefore still present and would load the config file if present when curl was invoked, contrary to the intention.

The second mistake comes a little more from the OpenSSL side: by default, if you build OpenSSL cross-compiled like we do, the path where it looks for the above-mentioned config file ends up under the c:\usr\local tree. It is in fact complicated, and impossible without a patch, to change this path in the build.

What the mistakes enable

A non-privileged user or program (the attacker) with access to the host could put a config file in the directory where curl would look for a config file (creating the directory first, as it probably didn’t already exist), along with the suitable associated engine code.

Then, when a privileged user subsequently executes curl, it will run with more power and will run the code (the engine) that the attacker had put there. An engine is a piece of compiled code; it can do virtually anything on the machine.

The fix

Already three days ago, on June 21st, a fixed version of the curl executable for Windows was uploaded to the curl web site (“curl 7.65.1_2”). All older versions that had been provided in the past were removed to reduce the risk of someone still using an old lingering download link.

The fix now makes the curl build switch off the loading of the config file, as was already intended. But also, the OpenSSL build that is used for the build is now modified to only load the config file from a privileged path that isn’t world writable (C:/Windows/System32/OpenSSL/).

Widespread mistake

This problem is very widespread among projects on Windows that use OpenSSL. The curl project coordinated this publication with the postgres project and has worked with OpenSSL to make them improve their default paths. We have also found a few other OpenSSL-using projects that have already fixed their builds for this flaw (like stunnel), but I think we have reason to suspect that there are more vulnerable projects out there still not fixed.

If you know of a project that uses OpenSSL and ships binaries for Windows, give them a closer look and make sure they’re not vulnerable to this.

The cat is already out of the bag

When we got this problem reported, we soon realized it had already been publicly discussed and published for other projects even before we got to know about it. Due to this, we took it to publication as quickly as possible to minimize user impact as much as we can.

Only on Windows and only with OpenSSL

This flaw only exists on curl for Windows and only if curl was built to use OpenSSL with this bad path and behavior.

Microsoft ships curl as part of Windows 10, but it does not use OpenSSL and is not vulnerable.

Credits

This flaw was reported to us by Rich Mirch.

The build was fixed by Viktor Szakats.

The image on the blog post comes from pixabay.

Cameron KaiserTenFourFox FPR15b1 available

TenFourFox Feature Parity Release 15 beta 1 is now available (downloads, hashes, release notes).

In honour of New Coke's temporary return to the market (by the way, I say it tastes like Pepsi and my father says it tastes like RC), I failed again with this release to get some sort of async/await support off the ground, and we are still plagued by issue 533. The second should be possible to fix, but I don't know exactly what's wrong. The first is not possible to fix without major changes because it reaches up into the browser event loop, but should be still able to get parsing and thus enable at least partial functionality from the sites that depend on it. That part didn't work either. A smaller hack, though, did make it into this release with test changes. Its semantics aren't quite right, but they're good enough for what requires it and does fix some parts of Github and other sites.

However, there are some other feature improvements, including expanded blocking of cryptominers when basic adblock is enabled (from the same list Mozilla uses for enhanced privacy in mainstream Firefox), and updated internationalization support with upgraded timezones and locales such as the new Japanese Reiwa era (for fun, look at Is it Reiwa yet? in FPR14.1 before you download FPR15b1). The usual maintenance and security fixes are (will be) also included (in final). In the meantime, I'm going to take a different pass at the async/await problem for FPR16. If even that doesn't work, we'll have to see where we're at then for parity purposes, since while the majority of websites still work well in TenFourFox's heavily patched-up engine there are an increasing number of major ones that don't. It's hard to maintain a browser engine on your own. :(

Meanwhile, if you'd like the next generation of PowerPC but couldn't afford a Talos II, maybe you can afford a Blackbird. Here's what I thought of it. (See also the followup.)

Karl DubostQuick notes for Mozilla Whistler All Hands 2019

Whistler 2019 Quick Notes

(taken as it comes, without a specific logic, just thoughts here and there. Emotions. To take with a pinch of salt.)

  • Plane trip without a hitch from Japan.
  • Back in Vancouver after 5 years, from the bus windows, I noticed the new high rise condos and I wonder who can afford them when there are so many of them. People living with credits and loans?
  • All the Vietnamese restaurants just make me want to stop to have a Bun Bo Hue.
  • Bus didn’t get a flat tire
  • Two very chatty persons beside me during the full bus trip never stopped talking. A flow of words very difficult to cope with when you are tired with jet lag.
  • Noisy Welcome reception.
  • Happy to see new people, happy to see old friends.
  • Beautiful view, I just want to hop in shoes and hike the trails.
  • Huge North American hotel room with cold Air con and all lights on is a waste.
  • Cafe latte. Wonderful.
  • Uneasy with the Native American dance. Culture out of context.
  • I like Roxy Wen for her direct talk about things.
  • Stan Leong very positive vibe for Mozilla and Taipei office.
  • Less people who seemed to read a script at the Plenary. This is a good thing.
  • Overall good impression of the Plenary on Tuesday.
  • Does Pocket surface blogs which are edited by ordinary people? What’s happening in there? The promoted content seems to be mainstream editors.
  • Noisy environments do not help to have soft, relaxed discussions.
  • Finding a bug and being in admiration by the explanation of Boris Zbarsky
  • The wonderfully intoxicating smell of cypress in the mornings
  • Early morning and refreshing cold makes me happy.
  • Thanks Brianna for the cafe latte station at the breakfast area.
  • I guess I do not have a very good relationship with marketing. I need to dive into that. Plenary Wednesday.
  • Our perception of privacy is not equally distributed. People have different expectations and habits. People working at Mozilla are privileged compared to the rest of the population.
  • That said, there were comments during the panel by Lindsey Shepard, VP Product Marketing which resonated with me. So maybe, I need to break down my own silos.
  • Performance Workshop. We, the developers and techies, are a bourgeoisie (by/through devices), which makes us blind to the reality of common users’ performance. This ties back to the Plenary this morning about knowing the normal people using services online.
  • Congratulations to people who made possible to have a dot release during the All Hands.
  • Little discussions here and there which help you to unpack a lot of unknown contexts, specifically when you are working remotely. Invaluable.
  • Working. Together.
  • Released a long-overdue version of the code for the webcompat metrics dashboard. Found more bugs. Fixed more bugs. Filed new issues.
  • The demos session made me discover cool projects that I had no idea about. This is useful and cool.
  • Chatting about movies from childhood to now with friends we do not get enough opportunities to see.
  • Laptop… shutting off automatically when the battery reaches 50%, keys 2 and m repeating from time to time, and the shift key not working 20% of the time. This last one is probably the most frustrating. Two years in, and this MacBook Pro is not showing good signs of health.
  • Spotted two bears from the gondola on our way to the top of the mountain.
  • Very good feeling about the webcompat metrics discussions after the talk by Mike Taylor. Closer work in between Web Platform Tests and Web Compat sounds like a very good thing. We need to explore and define the small loosely joined hooks that will make it really cool.
  • Firefox Devtools team, you are a bunch of awesome people.
  • Plenaries, for this Whistler All Hands, felt more sincere, more in touch with people, and with clearer goals for Mozilla (more so than in the 6 years since I started at Mozilla). So that was cool.
  • Loved the cross-cultural/cross-team vibes.
  • Thanks to the people who contribute to the projects and give up one precious week of family time to work on the projects they care about.
  • Whistler is a very expensive place.
  • Slept through the whole ride back from Whistler to Vancouver, avoiding motion sickness.
  • Staying in Vancouver for a couple of days.
  • Then heading back to Japan on Wednesday.

Otsukare!

Robert O'CallahanStack Write Traffic In Firefox Binaries

For people who like this sort of thing...

I became interested in how much CPU memory write traffic corresponds to "stack writes". For x86-64 this roughly corresponds to writes that use RSP or RBP as a base register (including implicitly via PUSH/CALL). I thought I had pretty good intuitions about x86 machine code, but the results surprised me.
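
Concretely, that classification can be sketched with a disassembler. Below is a minimal Python example using Capstone; it is not the tooling used for the numbers that follow (those came from processing an rr trace), and the instruction bytes in the example are illustrative.

from collections import Counter

import capstone

# Count written bytes per base register for explicit memory-writing
# operands. PUSH/CALL write implicitly through RSP, so special-case them.
md = capstone.Cs(capstone.CS_ARCH_X86, capstone.CS_MODE_64)
md.detail = True  # required for operand-level information

written_bytes = Counter()

def tally(code: bytes, addr: int = 0x1000) -> None:
    for insn in md.disasm(code, addr):
        if insn.mnemonic in ("push", "call"):
            written_bytes["rsp"] += 8  # implicit 8-byte write at [RSP]
            continue
        for op in insn.operands:
            if op.type == capstone.CS_OP_MEM and op.access & capstone.CS_AC_WRITE:
                base = insn.reg_name(op.mem.base) if op.mem.base else "other"
                written_bytes[base] += op.size

# mov qword ptr [rbp-8], rcx; then push rbx
tally(bytes.fromhex("48894df8") + b"\x53")
print(written_bytes)  # Counter({'rbp': 8, 'rsp': 8})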

In a Firefox debug build running a (non-media) DOM test (including browser startup/rendering/shutdown), Linux x86-64, non-optimized (in an rr recording, though that shouldn't matter):

Base register      Fraction of written bytes
RAX                 0.40%
RCX                 0.32%
RDX                 0.31%
RBX                 0.01%
RSP                53.48%
RBP                44.12%
RSI                 0.50%
RDI                 0.58%
R8                  0.01%
R9                  0.00%
R10                 0.00%
R11                 0.00%
R12                 0.00%
R13                 0.00%
R14                 0.00%
R15                 0.00%
RIP                 0.00%
RDI (MOVS/STOS)     0.25%
Other               0.00%
RSP/RBP combined   97.59%

Ooof! I expected stack writes to dominate, since non-opt Firefox builds have lots of trivial function calls and local variables live on the stack, but 97.6% is a lot more dominant than I expected.

You would expect optimized builds to be much less stack-dominated because trivial functions have been inlined and local variables should mostly be in registers. So here's a Firefox optimized build:

Base register      Fraction of written bytes
RAX                 1.23%
RCX                 0.78%
RDX                 0.36%
RBX                 2.75%
RSP                75.30%
RBP                 8.34%
RSI                 0.98%
RDI                 4.07%
R8                  0.19%
R9                  0.06%
R10                 0.04%
R11                 0.03%
R12                 0.40%
R13                 0.30%
R14                 1.13%
R15                 0.36%
RIP                 0.14%
RDI (MOVS/STOS)     3.51%
Other               0.03%
RSP/RBP combined   83.64%

Definitely less stack-dominated than for non-opt builds — but still very stack-dominated! And of course this is not counting indirect writes to the stack, e.g. to out-parameters via pointers held in general-purpose registers. (Note that opt builds could use RBP for non-stack purposes, but Firefox builds with -fno-omit-frame-pointer so only in leaf functions, and even then, probably not.)

It would be interesting to compare the absolute number of written bytes between opt and non-opt builds but I don't have traces running the same test immediately at hand. Non-opt builds certainly do a lot more writes.

Hacks.Mozilla.OrgView Source 5 comes to Amsterdam

Mozilla’s View Source Conference is back for a fifth year, this time in Amsterdam, September 30 – October 1, 2019. Tickets are available now.

What’s new for 2019

This year, we’re trying something new. We’ve shifted our focus to take a deeper look at the web platform and how it is evolving. We’ve planned more interactive sessions, and we’ve partnered with a variety of groups to bring you even more opportunities to engage, learn and participate.

Our goal in 2019 is to offer a unique, two-day, single track conference. With this in mind, we’ll provide ways to engage with engineering and thought leaders from Mozilla, Google, Microsoft, and a variety of individuals and organizations that shape the web today and for the future. These experts will share a perspective on how browser makers, standards bodies, and allies work together to create, support, and implement web standards. Together, we’ll explore what that means for the web platform and the developers and designers who rely on it.

We’ll hear from Google’s Paul Irish and Elizabeth Sweeny on performance, Mozilla’s Selena Deckelman on security and Mike Taylor on web compatibility, along with talks from friends and allies like Henri Helvetica, Hui Jing Chen, Ali Spittel, and Tejas Kumar. Jeremy Keith will close out the event with a new talk, and more speakers will be announced in the coming days and weeks.

Beyond the main stage, we are bringing back “conversation corners.” These breakout sessions create opportunities for attendees to learn from and talk with the people across the industry who are contributing to web standards and building browsers and other tools and technologies.

Come for View Source, stay for Fronteers

To provide a full week’s worth of events, we’ve partnered with Fronteers—Amsterdam’s noted single-track community-driven conference on front-end web development that’s taking place Oct 3-4—to offer combination tickets and shared social events. There’s also a Hack on MDN Web Docs event on Oct 2, where we’ll work on web standards documentation together.

Making sure View Source is representative, inclusive, and accessible is a core goal of the conference. To that end, we’ve set aside 20% of the conference tickets for diversity scholarships. In addition, we will provide live captioning, reserved seating, a lounge for attendees from underrepresented groups, a quiet space, and a focus on a friendly and inclusive environment. We not only have a code of conduct but a strong response and communication plan to ensure that all are welcome, safe, and well-treated.

Tickets & updates

Stay tuned for upcoming announcements. We will put out a CFP for lightning talks and a call for volunteers, as well as information on how to apply for a scholarship in the coming weeks. To keep up with the latest news, including newly announced speakers, please follow @viewsourceconf on Twitter.

View Source 2019 Amsterdam tickets are on sale now. Join us in Amsterdam for a week of amazing events. Want to check out last year’s View Source talks? Our 2018 speaker lineup was spectacular, and we’ll rise to this stellar level again this year.

The post View Source 5 comes to Amsterdam appeared first on Mozilla Hacks - the Web developer blog.

Daniel StenbergGoogle to reimplement curl in libcrurl

Not the entire thing, just “a subset”. It’s not stated very clearly exactly what that subset is but the easy interface is mentioned in the Chrome bug about this project.

What?

The Chromium bug states that they will create a library of their own (named libcrurl) that will offer (parts of) the libcurl API and be implemented using Cronet.

Cronet is the networking stack of Chromium put into a library for use on mobile. The same networking stack that is used in the Chrome browser.
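
For reference, this is the easy-interface call pattern that libcrurl would have to mimic. Here is a minimal sketch of it via the pycurl Python binding (the URL is just an example); the comments name the corresponding libcurl C calls.

import io

import pycurl

buf = io.BytesIO()
c = pycurl.Curl()                             # curl_easy_init()
c.setopt(pycurl.URL, "https://example.com/")  # curl_easy_setopt(CURLOPT_URL, ...)
c.setopt(pycurl.WRITEDATA, buf)               # curl_easy_setopt(CURLOPT_WRITEDATA, ...)
c.perform()                                   # curl_easy_perform()
print(c.getinfo(pycurl.RESPONSE_CODE))        # curl_easy_getinfo(CURLINFO_RESPONSE_CODE)
c.close()                                     # curl_easy_cleanup()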

There’s also a mentioned possibility that “if this works”, they might also create a “crurl” tool, which would then be their own version of the curl tool but using their own library. That in itself is a pretty strong indication that their API will not be fully compatible: if it were, they could just use the existing curl tool…

Why?

“Implementing libcurl using Cronet would allow developers to take advantage of the utility of the Chrome Network Stack, without having to learn a new interface and its corresponding workflow. This would ideally increase ease of accessibility of Cronet, and overall improve adoption of Cronet by first-party or third-party applications.”

Logically, I suppose they then also hope that third-party applications can switch to this library (without having to change to another API or adapt much) and gain something, and that new applications can use this library without having to learn a new API. They can stick to the old, established libcurl API.

How?

By throwing a lot of man power at it. As the primary author and developer of the libcurl API and the libcurl code, I assume that Cronet works quite differently from libcurl, so there’s going to be quite a lot of wrestling of data and code flow to make this API work on that code.

The libcurl API is also very versatile and has developed over a period of almost 20 years, so there’s a lot of functionality, a lot of options and a lot of subtle behavior that may or may not be easy or straightforward to mimic.

The initial commit imported the headers and examples from the curl 7.65.1 release.

Will it work?

Getting basic functionality for a small set of use cases should be simple and straightforward. But even if they limit the subset to a limited number of functions and libcurl options, making them work exactly as we have them documented will be hard and time consuming.

I don’t think applications will be able to arbitrarily use either library for a very long time, if ever. libcurl has 80 public functions and curl_easy_setopt alone takes 268 different options!

Given enough time and effort they can certainly make this work to some degree.

Releases?

There’s no word on API/ABI stability or how they intend to ship or version their library. It is all very early still. I suppose we will learn more details as and if this progresses.

Flattered?

I think this move underscores that libcurl has succeeded in becoming an almost de facto standard for network transfers.

[Image: A Google office building in New York.]

There’s this saying about imitation and flattery but getting competition from a giant like Google is a little intimidating. If they just put two paid engineers on their project they already have more dedicated man power than the original libcurl project does…

How will it affect curl?

First off: this doesn’t seem to actually exist for real yet so it is still very early.

Ideally the team working on this from Google’s end finds and fixes issues in our code and API so curl improves. Ideally this move makes more users aware of libcurl and its API and we make it even easier for users and applications in the world to do safe and solid Internet transfers. If the engineers are magically good, they offer a library that can do things better than libcurl can, using the same API so application authors can just pick the library they find work the best. Let the best library win!

Unfortunately, I think introducing half-baked implementations of the API will cause users grief, since it will be hard for users to understand which API it is and how the two differ.

Since I don’t think “libcrurl” will be able to offer a compatible API without considerable effort, I think applications will need to be aware of which of the APIs they work with. Then we have a “split world” to deal with for the foreseeable future, and that will cause problems: documentation problems, and users misunderstanding or just getting things wrong.

Their naming will possibly also be a reason for confusion, since “libcrurl” and “crurl” look so much like typos of the original names.

We are determined to keep libcurl the transfer library for the internet. We support the full API and we offer full backwards compatibility while working the same way on a vast number of different platforms and architectures. Why use a copy when the original is free, proven and battle-tested over many years?

Rights?

Just to put things in perspective: yes they’re perfectly allowed and permitted to do this. Both morally and legally. curl is free and open source and licensed under the MIT license.

Good luck!

I wish the team working on this the best of luck!

Updates after initial post

Discussions: the hacker news discussion, the reddit thread, the lobsters talk.

Rename? It seems the Google library might change its name to libcurl_on_cronet.

Cameron KaiserStand by for FPR14 SPR1 chemspill

Mozilla has shipped a fix for MFSA2019-18 in Firefox 67.0.3 and 60.7.1. This exploit has been detected in the wild, and while my analysis indicates it would require a PowerPC-specific attack to be exploitable in official TenFourFox builds (the Intel versions may be directly exploited, however), it could probably cause drive-by crashes and we should therefore ship an urgent fix as well. The chemspill is currently undergoing confidence tests and I'm shooting to release builds before the weekend. For builders, the only change in FPR14 SPR1 is the patch for bug 1544386, which I will be pushing to the repo just as soon as I have confirmed the fix causes no regressions.

This chemspill also holds up the FPR15 beta which was actually scheduled for today. Unfortunately, the big JavaScript update I've been trying to make for the last couple cycles also ran aground and will not be in FPR15 either. There is a smaller one and some other improvements, so this is not an empty release, but I'll talk more about that in a few days.

Hacks.Mozilla.OrgCSS Scroll Snap Updated in Firefox 68

When Firefox 68 goes to general release next month, it will ship with an updated CSS Scroll Snap specification. This means that Firefox will support the same version of the specification as Chrome and Safari. Scroll snapping will work in the same way across all browsers that implement it.

In this post, I’ll give you a quick rundown of what scroll snapping is. I will also explain why we had a situation where browsers had different versions of the specification for a time.

What is CSS Scroll Snap?

The CSS Scroll Snap specification gives us a way in CSS to snap between different elements in a page or scrolling component, in a very similar fashion to how native apps work on phones and tablets.

Scroll snapping can happen on the x or y axis. This means that you can swipe in both the inline and the block direction depending on your requirements. In the example below I demonstrate a very simple use of scroll snapping. I have a scrolling box, which has a vertical scrollbar because overflow-y is specified and the box is given a height. I have then added the property scroll-snap-type: y mandatory, which gives us mandatory scrolling on the y axis. You can see this example in the CodePen.

.scroller {
  height: 300px;
  overflow-y: scroll;
  scroll-snap-type: y mandatory;
}

View the CodePen example.

Mandatory scrolling means that the browser has to snap to a scroll point, no matter where in the content the user is. The other available keyword is proximity. Proximity causes the browser to only snap to the scroll point when the scroll is near that point. This prevents situations where the user is unable to scroll to a certain point because that point is outside the visible area.

In addition to the scroll-snap-type property on the scroll container, I need to add the scroll-snap-align property to define the point that the scroll will snap to. This property takes a value of start, center, or end, which defines where in the child container the scroll should snap to:

.scroller section {
  scroll-snap-align: start;
}

For many use cases, these key properties will be all that you need to get your scroll snapping to work. However, the specification defines a way to add padding and/or margins to the scroll point. This can help in certain cases where you don’t want the scroll to snap right to the edge of the scrollable area.

For example, below I have used the scroll-padding-top property to leave a gap. This makes space for the fixed element at the top of the container. If I didn’t do this, I would risk content ending up underneath that bar.

h1 {
  position: sticky;
  top: 0;
}

.scroller {
  height: 300px;
  overflow-y: scroll;
  scroll-snap-type: y mandatory;
  scroll-padding-top: 40px;
}

.scroller section {
  scroll-snap-align: start;
}

View this example on CodePen.

On MDN we have pages for the various Scroll Snap properties. A guide to using Scroll Snap offers lots of additional examples. The property pages all show the status of browser support for these properties.

What has changed in Firefox 68?

Firefox 68 implements the version of scroll snap as described above, according to the current version of the specification. This matches the Chrome implementation. If you have implemented scroll snap to work in Chrome, then you don’t need to do anything — your scroll snapping will now work in Firefox.

If you used the old version of the specification as it was implemented in Firefox in version 39, you should update that code to use the new version. In addition to implementing the new spec, Firefox 68 will remove support for properties from the old version of the spec.

If you have used scroll-snap-type-x and scroll-snap-type-y, then you are using the old spec. These properties are removed in Firefox 68. scroll-snap-type is now used to set the x or y direction along with the type of scroll snapping.

Why were there two versions of scroll snap?

CSS specifications are developed in an iterative way, and browsers begin to implement specifications while they are in the process of being developed. This is an important step. The CSS Working Group needs to know that it is possible to implement the specification in browsers. Test implementations let web developers try out a new spec and file issues against it. Often these implementations happen behind a browser flag. In the past we’ve used vendor-prefixed versions to expose them for testing. Sometimes, however, a specification is in a seemingly good state and therefore implemented, but then changes need to be made. Such is the way of developing new browser features.

In some cases, a change like this means that the old and new properties have to be supported forever. The grid-gap property is a good example. The property has been renamed to gap. Due to significant usage in the wild, the grid-gap property is being maintained as an alias. In the case of Scroll Snap, usage of that old experimental spec was very low, and therefore the scroll-snap-type property has been updated in a non-backwards compatible way. This means that the old version will be removed at this point.

Backwards compatibility and scroll snap

Scroll Snap is one of those specifications that can act as a nice enhancement in many cases. If the browser does not support scroll snapping, then regular scrolling will happen instead. We have information on MDN which can help you to implement the old specification as a fallback for old Firefox versions, if your analytics show this is necessary. For most use cases this is unlikely to be required.

It’s great to see another CSS specification get wide browser implementation. If you have used JavaScript to get a scroll snapping effect in the past, it might be a good time to take a look at the CSS version! You are likely to find that it performs far better than a JavaScript solution.

The post CSS Scroll Snap Updated in Firefox 68 appeared first on Mozilla Hacks - the Web developer blog.

This Week In RustThis Week in Rust 291

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's crate is safe, a #[safe] attribute for explaining why unsafe { ... } is OK. Thanks to Michael-F-Bryan for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

205 pull requests were merged in the last week

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs

No RFCs are currently in final comment period.

Tracking Issues & PRs

No issues are currently in final comment period.

New RFCs

No new RFCs were proposed this week.

Upcoming Events

Africa
Asia Pacific
Europe
North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

No quote was selected for QotW.

Please submit quotes and vote for next week!

This Week in Rust is edited by: nasa42, llogiq, and Flavsditz.

Discuss on r/rust.

Robert HelmerVectiv and the Browser Monoculture

So, so tired of the "hot take" that having a single browser engine implementation is good, and there is no value to having multiple implementations of a standard. I have a little story to tell about this.

In the late 90s, I worked for a company called Vectiv. There isn't much info on the web (the name has been used by other companies in the meantime); this old press release is one of the few things I can find.

Vectiv was a web-based service for commercial real estate departments doing site selection. This was pretty revolutionary at the time, as the state-of-the-art for most of these was to buy a bunch of paper maps and put them up on the walls, using push-pins to keep track of current and possible store locations.

The story of Vectiv is interesting on its own, but the relevant bit to this story is that it was written for and tested exclusively in IE 5.5 for Windows, as was the style at the time. The once-dominant Netscape browser had plummeted to negligible market share, and was struggling to rewrite Netscape 6 to be based on the open-source Mozilla Suite.

Around this time, Apple was starting to have a resurgence. Steve Jobs had returned, and the candy-colored iMac was proving to be successful. Apple was planning to launch official stores, and the head of their real estate department was a board member of Vectiv, so we managed to land our first deal - a pilot project with Apple's nascent real estate department.

We picked up a few iMacs around the office for testing, and immediately hit a snag - Steve had ordered that everyone in the company, real estate dept included, had to use the new Mac OS X. The iMacs that the dept used (and that we tested on) were pretty slow, but serviceable. The real snag was that our product didn't really work on IE for Mac. Like, at all. Pages wouldn't load, and the browser would consistently crash on certain pages.

This was before Safari and its WebKit engine. We started debugging and rewriting bits of the product, and simultaneously talking to Microsoft about our problems. They were responsive, and hopeful that the upcoming update would fix some of them. Sadly, there were to be no further updates for IE 5 for Mac.

I was something of a Unix fanboy at the time, and had been using early releases of Mozilla Suite on my Solaris workstation, so I knew that our product basically worked, with some rough edges (mostly minor things like CSS, with a few less trivial problems around divergent web standards).

Long story short, our QA manager and I visited Apple's real estate and test folks, and we settled on using Mozilla 0.6 for the pilot, and the corresponding Netscape 6 when it was released. (I think we ended up using Netscape 7.1, which I recall being a lot more usable, being based on Mozilla 1.4.)

Vectiv had other clients like Dollartree and Quiznos, but getting over that initial pilot hurdle was key to proving that our product worked and had backing from a known brand. Vectiv was VC-backed and, like many startups caught up in the dot-com crash, ran out of runway, although the product was sold and did live on. I did a few consulting gigs setting up local installs for the remaining clients.

Most people reading this probably know the rest of the story - IE stagnated, AOL pulled the plug on Netscape, and Mozilla Suite was reborn as the Firefox browser. With Microsoft moving to Google Chrome's Blink browser engine, Mozilla Firefox's Gecko engine along with Apple Safari's WebKit are the only independent implementations of the various web standards.

(Blink is technically a fork of WebKit, but since IE and Netscape were ultimately forks of NCSA Mosaic, I think it's fair to call it independent at this point.)

To be clear: having multiple browser engines didn't ultimately save Vectiv, but Firefox did open the door for Safari and Chrome, as Firefox's Firebug (the predecessor of today's integrated devtools) enticed web developers enough that they made their sites more standards-compliant just so they could have access to nice devtools.

It's easy for me to write a nice narrative of the past, complete with the moral of the story. The future isn't totally certain, but it's clear that the web will continue to play a large role in the world. Let's not (again) back ourselves into a corner and cede all meaningful control over that future.

Mozilla Security BlogUpdated GPG key for signing Firefox Releases

The GPG key used to sign the Firefox release manifests is expiring soon, and so we’re going to be switching over to a new key shortly.

The new GPG subkey’s fingerprint is 097B 3130 77AE 62A0 2F84 DA4D F1A6 668F BB7D 572E, and it expires 2021-05-29.

The public key can be fetched from KEY files from Firefox 68 beta releases, or from below. This can be used to validate existing releases signed with the current key, or future releases signed with the new key.
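
As a worked example (a sketch only; the SHA512SUMS / SHA512SUMS.asc filenames follow the usual Firefox release archive layout, and the local paths here are illustrative), importing the key and verifying a release checksum manifest could look like this:

import subprocess

# Import the public key block (saved from this post, or from the KEY file
# shipped alongside a release) into the local keyring.
subprocess.run(["gpg", "--import", "KEY"], check=True)

# Releases ship a SHA512SUMS manifest plus a detached signature in
# SHA512SUMS.asc; gpg exits non-zero if verification fails.
subprocess.run(["gpg", "--verify", "SHA512SUMS.asc", "SHA512SUMS"], check=True)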

-----BEGIN PGP PUBLIC KEY BLOCK-----

mQINBFWpQAQBEAC+9wVlwGLy8ILCybLesuB3KkHHK+Yt1F1PJaI30X448ttGzxCz
PQpH6BoA73uzcTReVjfCFGvM4ij6qVV2SNaTxmNBrL1uVeEUsCuGduDUQMQYRGxR
tWq5rCH48LnltKPamPiEBzrgFL3i5bYEUHO7M0lATEknG7Iaz697K/ssHREZfuuc
B4GNxXMgswZ7GTZO3VBDVEw5GwU3sUvww93TwMC29lIPCux445AxZPKr5sOVEsEn
dUB2oDMsSAoS/dZcl8F4otqfR1pXg618cU06omvq5yguWLDRV327BLmezYK0prD3
P+7qwEp8MTVmxlbkrClS5j5pR47FrJGdyupNKqLzK+7hok5kBxhsdMsdTZLd4tVR
jXf04isVO3iFFf/GKuwscOi1+ZYeB3l3sAqgFUWnjbpbHxfslTmo7BgvmjZvAH5Z
asaewF3wA06biCDJdcSkC9GmFPmN5DS5/Dkjwfj8+dZAttuSKfmQQnypUPaJ2sBu
blnJ6INpvYgsEZjV6CFG1EiDJDPu2Zxap8ep0iRMbBBZnpfZTn7SKAcurDJptxin
CRclTcdOdi1iSZ35LZW0R2FKNnGL33u1IhxU9HRLw3XuljXCOZ84RLn6M+PBc1eZ
suv1TA+Mn111yD3uDv/u/edZ/xeJccF6bYcMvUgRRZh0sgZ0ZT4b0Q6YcQARAQAB
tC9Nb3ppbGxhIFNvZnR3YXJlIFJlbGVhc2VzIDxyZWxlYXNlQG1vemlsbGEuY29t
PohGBBARAgAGBQJVrP9LAAoJEHYlQD1/DRWxU2QAoOOFRbkbIU1zKP2i3jy/6VKH
kYEgAJ9N6f9Gmjm1/vtSrvjjlxWzzQQrkIhGBBARAgAGBQJVrTrjAAoJEMNOV0fi
PdZ3BbkAoJUNHEqNv9dioaGMEIpiFtDjEm44AJ9UinMTfAYsL9yb15SdJWe/56VC
coheBBARCAAGBQJWBldjAAoJEAJasBBrF+oerNYA/13MQehk3AfkljGi252/cU6i
1VOFpCuOeT7lK2c5unGcAP0WZjIDJgaHijtrF4MKCZbUnz37Vxm0OcU8qcGkYUwH
i4heBBARCgAGBQJVrSz+AAoJEPCp59zTnkUulAYA/31nYhIpb7sVigone8OvFO19
xtkR9/vy5+iKeYCVlvZtAP9rZ85ymuNYNqX06t+ruDqG2RfdUhJ6aD5IND+KD5ve
7IkBHAQQAQIABgUCVaz9fgAKCRCzxalYUIpD8muMB/sH58bMSzzF9zTXRropldw7
Vbj9VrRD7NyoX4OlDArtvdLqgPm0JUoP2gXINeSuVPpOfC676yVnBEMjIfqEjq09
vcbwayS+Ncx4vQh2BmzDUNLE3SlnRn2bEWr9SQL/pOYUDUgmY5a0UIf/WKtBapsP
E+Zan51ezYSEfxDNfUpA4T2/9iWwJ2ZOy0yIfLdHyvumuyiekJrfrMaF4L9Q0OnJ
wp1PwkvN4IVwhZeYDtIJN4nRcJK5LrwU7B97uef2hqBBll7/qCHl5y4Khb0csFan
Ig+pQLPUJdIiYtzoFtlgykB61pxqtU9rqGKW02JzEUT8DdPUXxmMBy6A8oGeBRH/
iQEcBBABAgAGBQJVrRdcAAoJEGVzgtv/JREKQJgH/3nD/3/SumL7nG2g7Y1HQqWp
hUbn40XWvjZcHq3uBUn1QYXeZ5X56SANLM2t+uirGnNaZXW3cxEl5IyZVLbmcLWE
BlVAcp2Bf3FXFbdJK59f+M+y2+jZT9feTyrw+EtLoiGTxgkLdJyMyI0xGmQhMx5V
1ex1CxhZK2JPjzCVYriBI0wIbmKi90YNMQoSsdMhYmX9bHl6XWS9TCDWsqj25FLY
JL+WeVXpjO0NjRwEE6pc/qldeJYG5Vbf0snGxIerXe+l5D8Yd4PEAnpj58+5pXeo
GYZn3WjX8eTFMAEU+QhLKWQ+j/Y8Kijge7fUxnSNBZ2KEnuDN/4Hv/DrCFLv14CJ
ARwEEAECAAYFAlWtZVoACgkQ5DJ8bD4CmcBzsAf/RMqDdVHggQHc0/YLt1f/vY9Y
7QQ6HwnDrtcNxxErSVcMguD8K6Oxir0TMSh+/YuZAW8K4KSgEURwZqz4na8/eOxj
8bluNmlcAseQDHswqU6CyB95Woy3BocihH7L0eDXZOMzsa33vRQHBMioLxIbpnVt
VbFR1z7tmyfjcOrzP32xo5QoPoczKX26luMBjAvbw1FC0is2INnmUSYM4uH7iFZu
XGPFYxcAqODqy5ys3MoPa4oZ71d0HoiRil1+s0Y+2ByddZ19pE2TXp4ZXNYNUj/2
aRj8b4sTjR4rqhHIx/vfoK+VCNy/skFUZOyPdbbymE0stTRSJ1gr9CZLcBWYF4kB
HAQQAQIABgUCVcFZcAAKCRCJFz+VfFX5XqApB/938p+CJiDRnh2o7eDWnjSyAu7F
WmWGkOQnjI/kraKx1vojsYnKRXD6mjq1QJ8Hsp4taJnLQjcokNTUiST4m/e4ZJEx
PWuJKkwlralWGH6NpqYcgWPajSYb0eYQC4YqS0kfyzolrHdKI8Y4NGEU7yy5zsHw
WkHt/mpNQMrYnXwyWdIrc03X/OXo51dJyshJDRw3InREyBblFJcLvArNHz219wMr
XAicPytw4wfPpVrmDx6GrZcI8q8ECWCjwSXXv7hRpEuFLSy5XPhMc+wYBJjNlUoi
FBAF/7zENd3rMn9SCQLiIFYe0ubmO+bpeGy7TizbxOaCIfgUouyy0BQXNuJBiQEc
BBABAgAGBQJV0hrqAAoJEK18uZ+CSLoPzEIH/1D6sJMNAJtZCRGhJXvv6SYhv4pU
VNyDF9FnUvRsovliojoe4IkuBTWKhPGrxbiD5IO/izr38shqNhhm9JE2/SQZHObY
Pi+lyfDKbJgImTNxmS4F7JHnRLr37VxK1sVvuNkynJnqvCcp1g5xwNIx1rKcka3i
uqJj6toM8XQfgsTHH1rUkWHbUV3QwNzXm+yhFm2s6QzxBooPzmFn8AY7CXD4pvcM
R+M0Zy+e42nngd8lzRnmTBVig4pRq0GCMulFG+XjeVQZFpoIIxo2k1lczbRmGttO
NdGWSjxBUxReoTbSwM3C/50NrobycGQgY0gd6LGtWtU8/uEfklEy2NluxYWJARwE
EAEIAAYFAlWtAUYACgkQVu5xjc4OFUs0OAf+LM0dyyvUFGdXfJDpP2xMknXzsHAX
WFEtH5jein58mv6dD3fTVcCouo1vMQH3WFFSLYZvwtNnHGrSBqFbNKqZ0ATQ5tcY
aWsSZ+MVJJMXJDXFG/Oihg1nNOM33VdfV0RGPKP1I4cEROxms3TUFkHW3cSCgMzs
8I1OxfSoLrm6da8EN+2ct2InqzdQL2yisyTyrdmXoNpwXDxApKYkvVHQ4+9eJI5m
0ZAr0mBjIeJdATcw4/lIVKTrV7UhrChxiffYJcz4SSC1crmr+2Fzw53CyAsAmYal
UHep3Yr05oQ4oJRX9X3VrY/yELHwwxXaxCAdwwHbbXAMhZsPk9Mc20J6BokBHAQQ
AQgABgUCVa0isQAKCRCj1lIXO3Y+j6ZeB/91Q9/qr5oMWgOMsix8kflBLw2f/t+t
RR0SWDw90bG1npJB6nq5Hl+Bz4/A4SWFTFrrrlZi1Enjn1FYBiZuHaSQ/+loYF/2
dbQDbBKShfIk3J0lxqfKPAfKopRsEuxckC8YW1thGxt5eQQ8zkJoqBFTBzwiXOj3
/ncJkX9q9krgUlfTSVmrT9nx0hjyNQQXrghsmBtpR7WCS7G7vNRGCNUorhtviUvL
+ze1F7TTSGspVsVxo2ghmz5WT/cD9MV1gcVjojYmksh5JIl39jCHr9hl8aRId/Of
zsN+TKuBcpAxDkm9BCAps7oY8FlLKDFZTtHa000AkodKHT88nwnvKuqPiQEcBBAB
CAAGBQJVrTkDAAoJEPbQ92HczOykK9YH/0MARo3HlYXeS2bDqM/lwK/rQcPCCyYk
e6wbICjncbCOjgXHqG/lBhClNs7hp/7gqkUaR7H5tmeI4lalP40mSHHnnFvMD3Tc
yhn350igK0bgrjWQDaYxhKlHT3vIXd/C24/vRSAxmqIKbP+IoXOyt2GMTQq8GOm2
dgYRaTkwyHnGWnMaibctX8D4oCYR0/D4YJqPkfqobf8+1ZfP5GaMbSxE/Jwdo0kJ
a4vPjEzFXbygAbncapzdwN6zgel2zh885rz7B7vIpMr/Y7eV85Q68qdyyhLe8cL8
Y18YPzpFf+/PZNbgYxouafvnFwBhPQwg0gUF/+1eM3UE2ua+saSTGduJARwEEAEK
AAYFAlWtCVsACgkQM0LhtmejiGMovwf8CfYJHNbwiwSMUoP4n7FrmElhBtxvlbnC
MZKz08v+lFsfS3wU1LUN69GqirfF0vkQRSlSBp7niCLHQCfSoqHMLgxF0P2xgXLj
aYM/t/rxXDawJmW18G04dqFrtCPZTbwMT2PsPHTiWQdaN0e50lXk9Vo+l6VbwQMg
4zH7icZadeJgQooxFalHYFVXUVeex9t8/YdanFVrHFa3tao6azBTSUkJvZtIu14S
fxigDWIIwsx0xpVfJf3a/xC6HY3Q1a3NeBz3i6DwaK5wYqijZKl0WVdULKyqU98o
F6y0mUv3d2o/p07Cqgeo6xxMkHqu83OLa2a0C7tYPLgL4EFc2FtikYkCHAQQAQIA
BgUCVaz7KAAKCRCWO3gxCjexfKxrD/4npm1rB7+pPlotbqK37Mur7egPbVSAzVNU
/zUKPAuGUeP3C64YN77ETx1kDuS+meAqMDHFc9Bf8HivPbtj6QcK96U5KstbmSh1
Ow9YiQtxJgxGjg/CzREgZAFcjy0MhoklyPsFhv07s6MLOJMSM/krEN5nqjifQ0Wd
mTk02FLoHVWcLdjfgMiPiSjGbU3k7luvjPyRNzk831szE5mfa74rEYh4TBklse+2
uB4DFQ/3oHZ1Sj6OBK6ujmNKQjIP7Cl+jmjr7+QK0OJcRaj/8AckDA5qXTZACh1S
2syCDDMnX0V+dTxGCIoWOK+tt9mLohMzpEeD4NIX4qdpbbCRzeYZMHSomyBIsbA6
B+/ftDE7W1N0/FtJ9adkkCynKULvh2CH5c5hgOOL22M+2spnywRoeJRUWU7hBM5O
UH3JjA4Tu4j/cwp7dD7QzZrzmC9f5LQJ3OelejvVowWPQd3/tky4o1q6wlmFqAcA
gtu97UwgBOSR9sJPGDlt1iC91UYAiBQQAA7ya8uXUS84mCQwTlr8j+YrowvEHK4I
xpPREytT1LzzV/4Am4ndDFtujy83QjL0qaIIim1xIwoEosd4yidhpczw7f3b9dQp
uBIFeQuhM7JsxP4tmE7S6k6GlEmqa3INPVaPGnsUGS7+xSMlcJXLtimPCSQvFma9
YiGV5vtLy4kCHAQQAQIABgUCVaz8uAAKCRASy06X4H5n0dg0D/9QoxIh9LRt1jor
7OHG4xKUjKiXxn/KeQNlJnxI55dlWIvJEJGheFjaDomzKBYuxmm2Ejx+eV5CHDLU
YsLFYwWf8+JGOP75Ueglgr8A0/bdsL63KX6NP2DCg8XR4Z1aeei3WMY7p/qMWpqb
QoAv9c3p49Ss2jSNuthWsRR6vbQ9iwze2oaUaA44WKQyhhbCwBU4SHYjlKCLqIBh
/HXZFhZ4rDfuWgPBKvYU1nnOPF0jJRCco3Vgx3T9F+LZ3zo5UPt1Xapr3hMVS9ia
Jyl1w4z2miApUaZuHPuWKuO4CJ1GF1mS5T6vG8gB3Ts5zdtBF2xQIkCz+SM7vW/2
i/82oq6P8EuLHEhrQPR4oTjXIvXdEJ9kgbjqcj8Xk+8teEOnuwh6iEhay9i/bf0D
3Jd+roFN5dnWPxhOVjzrI3fwlK1/ylsZYqUYBEzt7Wj0MdhjeKssI5YICcqYXXjB
ttMw4B7DZXPFXzz3kHB56jZ/II4YUjpLO85Jo5A9SV+aIqa0mvCt6DvVWy/rhfxf
oUdqNlhX11gkVLaA7xxgn/NqPOf+h5hVO2mwWkmart9YHKMZ3ukCdke65ITL/nsY
Sm2ZhG7OYjaCfu9jPWtkBstOEWyT9q4JTdViR7wN3eMefEG6rb49rxOYvGJu+cTV
kp3SCpl0w1j+tPj4tkj7ENzPMXdnuYkCHAQQAQIABgUCVa0s4gAKCRCKsTKWOgZT
euMyEACKOySKAd/xDcPcHg7Prvdws04Z8DIR0dY2qUlbRVx2jTmIXyry63CqbOJF
bDg9uk5x0+lSotvrWtZ+NKSrg9VM6vyV4cc2P9rhqIBi3wO2elzAmpOaS2KKOjQ+
2fS/xqh91ElJUu09xXQXJ0vMrqgui+zN1YBDiJV0WOmm90Mm2NPiihcWZmBmDorO
qMQabwbjBLi0yUVHgAlkilY3mAB4tmEKDeN+4pYSAAhXAll9U+nyoVMgwMJscZya
zOp4MqMbmFjyr4p5AGzv+OOJtjtCNKT6oW9Y+URLY0YKeOsPk0v5PlbQCVBlLeSB
sNZudKav/Gvo7Mvz5uLTcneBFb+haYIiXO/FQm4uBHkzdNFLgaph81Wzh62AhbtB
lfBOj/lbzN3k/xRwo64QU+2Z9GOhFlhjfROquY70FCQcspwNuqCdZybnkdpF2Qrr
6Pi0qKR/Xb9Vd7PW0/gKQdwwlYTiDemgA21mYeJrYw873/7U/+kLFRvmPAEX4IOI
OEN6XVjxvu78REi6CmXxOoYnH4aRSXDRyi1nsGjB43AtfAMMNCUigDgFP4sUsZAG
1RAoxBhOsO/g9S5wx8H3rKITCXDjQh2SYeBwHFcU03EMcyzEQhbZNighN+aRKGIi
bteRxISiKU+kcWaHolemeo6wGF87QXEpJaQ2OwIoIxQYvDDmQokCHAQQAQgABgUC
Vaz/8QAKCRA/8xuvEEv54t06D/9n1Nyn2QSUN1mXd7pomoaka+I2ogDbQpu9iuFq
bkqfcH3UuG8yTKlPp9lYDBs0IEfG85Js6iVxJIultocrcDmOyDkyEsnYbdel/tn3
X4yqD8eI6ImRoCE+gnQ3LoEIHuODfJoosM/jAHANs4fsla4/u5CZDXaaq7pYXGiT
t7ndsfmLiCa7dAg7bVFfJagsnL/VjlfeWM9nW01rDL9LPxSN4tq7ZKXWZDonFZYJ
4unsK/Cn6Pqco4Wb+FUOWCcWt8in1pgeNHZ9WnAgXG999/3iCbbQTLB6uVwY4Ax5
P7VApnLVXV6QFVf7bN1DxE8kZk+pfLGcuD1LJSF0skE80M17kAt+iV+fam8EYzeG
dG6cY6w+srndaMaq9ddiHIiQkR35SjJAGnrNRj8ooUr/vKOBnFfuwJLA2MOUVPZ8
HWB+WXW8qhihw9CXa38Hdt4o5knMGRIyTWEF0TQDtRGQ6hisVBN3OxJRXBj7/QgC
G/GoYpweGKcsMU43p57TzbnXVVUytJsLFyexOGNzrUIxgDVPEvTUnNvdAihNZPdb
W3YdFkP9pdwOyDpQwebXELUx1kp4ql0laueex4L1v+0a6rDYQeK1gOq5UGY+THRS
gB2xsHl5zeryfgnjlUkUlxKuumz+9FI2fRtSpxmWllJkRF2oFMGRuLPGAWe8nHvf
gkuGVokCHAQQAQgABgUCVa0bowAKCRCVY0f2+/OkFWKREACZ9TOmzvY6mrfWVEdl
dcYPj8cU/1LJhGdbNo5YYMx+A72nchxGXepHA65OEK+f6rFMeZFPwpQPy6Sj3MhT
623H/PECfeG87WcLOyJbfc3i9T5jvxS+ztG6abYI2J/50oMvjUWdWkDX3VvdPc0Z
Z+KC+oHvx9a/9Yki48m4CEKglgVsrRW/b9AXZQCj07bB0GjQQtkqY/m1Z8m4ttzx
fO7OBo/jHNF2An4/4gUDirXNDj0UdB5FYFJaTEUCneIj2x0fk1r4u6na8tINhiZ0
M7IgjnDlBD5jwzvwG+3kYE6TnYp9Mfeg2MPC13tp7jrJatLLutrOzvmSVLGLXbkh
9w+v+vx7qO3TxZUNlFqTmYs+vI2V/9j7KYV7Ttoind6Io7X9ImnYrvd8JOyVcO38
67MplKnrnqHJvFStE+JcHEcw5aRw+WVmoFd/obGc34V3K62T977QQGOkrTYDEdje
KADfjXXZkZMZc0IvzLBOJ1XB45+PKqJYCcJJS8Xr55+NGCDaaUPWDpkNGIqmX2n9
kYROMKG6uWkZIqG0JlZkga3THSJIvLiy6uoOvDC4GoQ9JnTwpGv6r1Hwcg+4DCOr
YKOoPKMMU24vHx2FtRRUgCXtr2cmi2ymHlUrtz8EXS4tblic8lixcbvPUqLEvbJ2
gfWQvjXNd1whYE/wfvI9WBTEIokCHAQQAQgABgUCVa0b3wAKCRC8FzAbSRs/IQhX
EADiKbCnsN/+Plllxn6SQHACEU75ackx+Q02XiD/u+wUptYUGmJi4aaW9f6mgzed
OxYK4S+/dCiFtkcYlL+FjaR0C7G6tMjrDgW+8nQCTPUNQA0gX2B8n06a7Zmdv3Eb
V/PIJJwTNSBp/dqKbvPKnRquOOpH+ayZ3awKOq/LlWBErbW1gB+FabN0lCe0iUIQ
TF9OH3GC4QsMtIrePueBmVrVPcHATV2Vw9UPqX1uX/tlXm5eai06oVT7V0FwUbg0
o1eacblNXvHciHpe33zZIKkGBWwSjDVcU9/SN+U8GfoMYmyCma4iN3KaCklpzBkJ
iQZtNKPAB5KJti8LDUxFi2sJd3sqWaZDGFhO+/PKhBKpqIhAzx1ppd11zLgh0eg6
gQlXN8D8ELISRvQqGGNNZdChEFdzGElg5SMfmeEd37OaX4wceLLV0v7EA0doHMVo
0enFhSwU3YwtwxbiukKc7H/ylG7+jvntjY+z7KktRsY/FkklrbrNhddMBQMMSAQU
Uz1GJ+6NUKmzXjqxFuuh3OAhqNzhJyABZWQcNMph+rogEslkenwoHV9gWRWtS3CM
ybJkKkbsWpYhMZNY6hFtgCwida7NPs8369v+yTTE6TU/NIlXUKYIf2LMqtOpEBTj
aN3jKpUi5DeE3zBeh6iVKUrfCXbt8O0rYQPNWGSW+MZ2t4kCHAQQAQgABgUCVvA4
GwAKCRBE9G4UbQI5XfS9D/9XPK7jg0lmsNZ2sDIyeAw5n6ohSR5F20ocTMAVeXqN
7VkvJdNpIqHJa13EP408DgTy9BsSptym/OQGE6B82BU7FZTEL6eMHnGGDg+5ktx9
+b73xLedzK75ti6ED+QuA4kDYcvW8hASht0zRcmFUzwbtuEopJ1Lk1R3oFLwCAov
lhduC45nANWrTK5U+D1U2obl5PAvx+9mEfgvojlGH/C/WD74W+cQZFH7t4+muRza
mckLyPftnTxjNF/lpYIm7z0QOwvzBYj+PJ09wYueK00RE5+i9Ff8DrjtVSXsziQv
SjJuUlv0kVvM8r3th4zBBNRhA4cinwqxhgqO4G+r2r9Gv0M2nKKOnWmyF+MSIRnh
gONOQZe5a7kQxKVWkLicS2IGUpPeQyTWaqZzYXsD+Dm6DXD57vYTURtUkwO0CDON
zT5XiS1HG1MZrw+V/Jai4HAvpF5WkTJXPc1Lv75BxJj3wOAw4MzEWCCdr/N/dt5/
+ULpEaSQfIg4L4iEj6rvabQyN0KbOxIDx+pPQ81izfj36wIrDqhyCNIdmVH/yARl
tkL4XDEl/pt7Y3t6jqFhy057lektowClWcPeq3DoL0LFYnjNPpYvIjRIAXdhaYiA
u2ViF8WdGzQ5tFeI7u3PQUG5NcPe+WOPOru3wMMrUhLgLHkCdNkjivP79qIPSTkC
GYkCHAQQAQgABgUCVvA48gAKCRC3hu8lqKOJoLRMEACmlyePsyE5CH7JALOWPDjT
f+ERbn+JUTKF+QS0XyWclA/BIK8qmGWfgH38T9nocFnkw17D3GP8msv8ll+T4TzW
9Kz9+GCUJcHzdsWj99npyeqG5tw+VfJctIBjsnX3mf4N0idvNrkAG5olbpR5UdsY
Yz62HstLqxibOg4zWhTyYvO6CjnszZrRJk0TYZON4cXN14WYq2OTrMaElx0My8o1
qVBnK58pIRzv72PmvQqUk5ZjhUyp9gxjqqCJDz0hVK61ZuGP6iKK8KCLTfSxeat0
5LAbz8aC58qlg5DVktevHOjBgnTa8B7BgJ7bQ9PLMa3lF4H1eSiR9+8ecpzEfGHI
LoeIDIYH7z7J/S0mTgV3u5brOMYO+mE9CEfps85tVVoyJrIR8mGEdtE2YmdQpdFz
YIYvRfq9tnXZjVsAAsC20Smw0LnjhYzAt9QJwZ9pFMXUTg6lC5xT+6LNrEY+JR3w
C16q36bcbCNj0cBv1A3x6OI5OQfpexhLPDgoDiI+qozJIdj8MzJ8W6KU1Z3yb3dq
ACk77yv37rGO6uduSHnSti26c/cUIy6XZBbXBdobE9O3tr8hwvTQ1FXBmYnBrdiz
U6tgxEA5czRC9HOkdk6y6ocbjmONpF6MxkpJAvTMk7IqC2/hisbV9x4utla+7tmN
ZU137QGcaK2AGQablVAy4YkCHAQQAQgABgUCVvCMigAKCRCkhaDtUbi3xAU7D/9g
UPZSJ8pbZV9TLaKD57Bc7B78HNV/B438ib4dI33iihMTBHnCB1giPE9X54QoV8AS
xrO/xveS1kkj78jERqUcED6ZHhMLb9SWs6CxUKdMdgovnIlFUc+t05D5mb6STi+z
NihwO0JI+n79qhETy73WLpC7RR0aMx7zYcbqp3NWPptcf1kVGJZGx+QbEHfVye98
T5pkH5Wp+7LSlup6AldQT/oifxdGxLXbECTnwozRvyMpAaphoEHrET1YOmKnmw/J
yi6DLpTb3XvSf5Tntzr7HklCEcL9FvYCoHxiXWawLhuPhSyrFYeYtF1ypmzTgaJW
yuTZ8sN9J+y7Tbchk/I6FpX+3YoTgPCcC7hv1Krs803N/3KuyBEvhzg7NYRikzO3
fxXlBG0RMm+662E7KlERU24izbWhGiYwl34+MaxrIO4oDvF79LEN7y0+SjL4V0B9
689d+HI1ZfS9O1xkOlW6y0QyagOzsTOUF12s2mWydFmipbYnIwsSsu6Nzk3yO4M+
qYABJXJ3tIFQPTd7xqmPNlJ8mFtmzHDhb3Pv6sRNFLLujYM9cJpuNMbAHWdohz1b
jBT9pZQ3zWpll5wotUvGmJd6hTAXdUgmZ7lh7Uq6axClMmiLe1WYntcNpb04PyyE
m2+GU5x123UTiSX2LGKa4t+HNSM8nJL8BJiGk80xVIkCHAQQAQoABgUCVa0OAwAK
CRDDvTXkbdRdpVR+D/4/37e8WqKOHNPteQu42sj0ZOfcqyVMA9TQ578F0s9MwoQu
qfVhXGSWevOctuMv2qTBjBfFjkdPrKR5L4LNAgMsu1epHU0DPcRZUCbh1P7Gpolm
Z8KgnjT5Wpl1AcuOCaP08VMrt/e/JndTHp6btn6HsLVtryNhlL7oaeYbDr6/ovHN
GHVIVSZgGP9f4Y8FiDpyfKav71vYLBMxtzM7lc3eFT1S10XhSW6k+8S5XldYWkLD
riRXDE85C+9QndpOoQaIICp3ye3JVnUxa1qhvsYj9uPt1M6hKiBSoXdplrB+hQc+
nqLNN3jxpGdmGmwrjtjqMhocMIguEqgARJOek3XKOppEhu+IcnJgU4edARJNLsBa
uiVBWY/6mZOFlZq6H48tVyziS2n/oIpi+aCc/fQeGs9zMTtFUohPfYtTcy9PecXM
OYpSu4p4tQ07oucnxfBkRUgTdM5VwX7YwTcRwp9XhHACUEGBhrwMH8Iz+sK2jLF3
FhJGkef1vFs0vqSf4I8DBFkYAKF848YyEcGHeINQloi3v0Kr2PpBxlRh+GPWwi++
QPKXQFzlTiyVtMzoo/lpmAWUJwj0dbAbH/mohtvWtA1WPHC2JRZ52JLThhpDrK3t
//Jdt2WHE91cMx7/2B0PK4O8/j7UVlsOJXpVPsGX5SFCeTB/iS4JtIwWN275zIkC
MwQQAQgAHRYhBFnKni0qMx3iUaokJ18Dx2fCR6TVBQJZDvZCAAoJEF8Dx2fCR6TV
oGkQAIjqaQ7tpdhDJ6ORNtLIt0TsWg0jg2rpoq+9Au36+UYBMuBJ3Py/tAsZ3cqQ
lig7lJiQqOuQZkbg1vcY4Kdad7AGa8Kq3sLn8h2XUlNU90X0KAwdCTA/YXxODlfU
CD2hl4vJEoH/FZtfUsaLNHLmz0brKGrWvChq00j5bPfp90KYKqamGb3a4/LG4DHL
4lmEBtP++YA0YqUQ3laOvKune2YwSGe4nKRarZnFiIn2OnH9w0vKN/x9IMGEtc5M
bQVgGtmT5km3DUuXMDforshue6c7ao4nMOC96ajkWYZhybqHJgLOrEGPVUkOaEe7
s1kx4ye9Ph3w/LXEE8Y8VFiZorkA/8PTtx0M9hrCVkDp0w8YTzFJ9DFutrImuPT6
+mNIk+0NQeuDsv492m/JXGLw/LRl97TmHpKME+vDd5NBLo4OShlDKHwPszYcpSJT
G9+5++csR95al3tWnuGX9V0/dO1s7Mv0f/z07nLB/tL+hEpqqA5aRiGzdx/KOrPZ
uhCTyfA3b2wvOblwf4A/E1yO7uzPTuSWnx1E14iZuaCPyZPXEh3XSYCLEnQ05jy5
0uGXCDVR+xiE/5i/L3IxyhJk6zn5GOW5b8Taq5s/dFS3zWiFS6l0zQ1VQmJH8jdG
LoBFvdVLZoAa1bihLo+nJVPR2RauWnxWoWk1NQoT3l02Lk6DiQI4BBMBAgAiBQJV
qUAEAhsDBgsJCAcDAgYVCAIJCgsEFgIDAQIeAQIXgAAKCRBht7Um2Y8DU1CqD/9G
vr9Xu4uqsjDHRQWSfI0lqxElmFSRjF0awsPXzM7Q1rxV7dCxik4LeiOmpoVTOmqb
oo2/x5d938q7uPdYav2Q+RuNk2CG/LpXku9rgmTE7oszEqQliqKoXajUZ91rw19w
rTwYXLgLQvzM3CUAO+Z0yjjfza2Yc0ZtNN+3sF5VpGsT3Fb14aYZDaNg6yPFvkyx
p0B1lS4rwgL3lkeVQNHeAf0qqF9tBankGj3bgqK/5/YlTM2usb3x46bVBvwX2t4/
NnYM5hEnI57inwamX6SiMJc2e2QmBzAnVrXJETrDL1HOl4GUJ6hC4tL3Yw2d7515
BlSyRNkWhhdRp1/q9t1+ovSe48Ip2X2WF5/VA3ATfQhHKa3p+EkIV98VCMZ14x9K
IIeBwjyJyFBuvOEEIYZHdsAdqf1zYRtD6m6obcBrRiNfoNsYmNY4joDrVupI96ks
IxVpepXaZkQhplZ1mQ4eOdGtToIl1cb/4PibVgFnBgzrR4mQ27h4wzAwWdGweJZ/
tuGoqm3C6TwfIganajiPyKqsVFUkRsr9y12EDcfUCUq6D182t/AJ+qE0JIGO73tX
TdTbqPTgkyf2etnZQQZum3L7w41NvfxZfn+gLrUGDBXwqLjovDJvt8iZTPPyMTze
mOHuzf40Iq+9sf5V9PXZ/5X9+ymE3cTAbAk9MLd9fbkCDQRVqUD0ARAAr/Prvt+m
hVSPjNDPSDrTBVZ/7XLaUZvyIVggKa+snJoStrlJGTKKFgDVaYTOE3hP/+0fDdQh
97rjr4aRjd4hBbaNj0MzZdoSWYw3yT+/nidufmgPus0TIJMVO8I6rl3vgcfW/D3o
vNrLW/LjkTuM9a+p+D1J7woCfMSWiFMmOLPKFT7RBuY8edCVjyA6RP9K9Gj1sURS
eqNaHR9Gr4rW10s+FwUHWxxzbmIWqH0gApQYO6vyND5IMcKOBCWQU6Detuq1pQ6d
Uc+iF+sEz3Rk3C6d4WBBjtkVJSJ0KKan8Q3gJefOCMNhdRQDjZLwbzr4bgoAkLba
BFCjiZxWZ6HAdMfSCV8uZQrtMS7b0DUpY0vdH9Htl3JqOOkK9RorYDQBuPdkTYFI
NsmtWVsFV/LmR891mOF3fBRaoVoMeJVwiZyNlFY+dyWWFzLp+GoTLcQtmuR7OkmO
cBGxWSKPcZfPqhf4dVQud7bDR2RNfJ1Hqa5kj8Z422sseYDwHf/T9OWWYvLwKGZh
lUgpnzO3WCGrd/6EVNeC1mKXt4F7BmADov4Rdcrp1mPXiVt7oIxLaS6eBNf2y1TW
zjYj5ZFuKqIukDEJfqpwsE5asnCw56nae+7luGs8em1J9GEXhWzXG15UVyQJaFwu
B1iL8l7VcEQz4ABVrSTUWLLAKDsyqUbq2gsAEQEAAYkERAQYAQIADwUCValA9AIb
AgUJA8JnAAIpCRBht7Um2Y8DU8FdIAQZAQIABgUCValA9AAKCRAcacTlXpkF2y/F
D/oDrZm143Rv9NV9InnVJ0brpqbB7aulFfhR1LDuJ/GjeqGAQgJCZdHlzT2pfCXX
swUlYzcWEatvGcDkoaB5Ya2qs+6nhBk8pT6XYRrZAtIlKIGrlCqoSBm9HXguGv+E
IaEECr2z/Funx9so0mP+5aJn65M9u3lPmuAonj6DcHoM07WsfsXvQ4ut3fabFmzi
lLGeAdEDKIw8Hn3JBUOxUyFrQlOoL4/3qK1TO+cidz/2bATQQyIG2kNOSgHBslU+
e6/7sWOQ4ufmzm7dEsf197zPXGdXR88LT+d2uU2K4GkCffNUKxZqy9bXxXPwr4JB
jxLDQnDvl50GAWjPZAwXEd8Okwl5+8xp0HuZ217WUqT8ib0oUUfwh2H1vrMPRr/4
6i6O6THpCkV8BWF7axPYIibaeYwC4BkjZwK3tIL5ESf2f0xK4hbE3xhMTeqABQHo
Xd5rQ7SEaUuX7PlQ59fRs0Cz55vH8/o9zMm0PN6qmZFvRBeqjnklZcu+ZdP9+CMX
t81NMuzIK1X7EfpkUoam8YkYkwcCkRvPZrSHLXZFkfnx4jW543dPOfycjnv6hhKy
oXD9CBx0ZcOicsYmw9XMilBGD3b8ZdK6RYX4ywKNU6KUdFJjXB88+Ynv6QxDit1e
mMCHA1glzV9/k36iYLEIqgWBiwJeUUIcUqzgnBFtN13cyS6oEACUGUiPKbw3IkgG
W19ZyS6FBNfgGIGW0Y82Br0KlCyaXnX0R4+4u2h7kfR9NSnhRhsvRnPIkiZATa7D
+Ew1nfpsDTnti0c6g/gVw9TC/rCyXkkLztRHVcWEBdvnFJTSp2LeFaHSGbvvZfoI
GUzyUzoa1P98NmRIY1cxBoizVf8729/zAaD4fAslxoK/JsjjDvDUrRHtaNZmUle6
0Jl/yFFzR3zxb+pJliigoP2rZLt+ipomHJIhoXXWwfkRO9U/egJ8ZUhWEpZvROna
Nc9eVct5EBADxL7gHWjlceIz4ndI1eE9AdEZDdUZwOfjmK2DcXjFBfZC+jhJXjY0
xh3pPKQz90h9DIkM5WDcJPf6ep+MKSd/3hI2/JmmscQ+alwN6x6g8zDySMo3APA9
cUvEFGe0+CepVcNw03jU4faSrHiMXsUuVGbA2kHaYVUfzF5W5GbuHZZlGxoSiq+K
+HNG0RJUDa6bkSDvrcJVNw1iUrowP+LLwnNsy5kGuU4evnwcoN1w7LVbTPaq4RIa
iqvAD33kiA9q//UNKnK4k81z+hRNaWGliyGpgqh+V7MDIqPfT5TMLdH+ZjTeuLrN
S8KBcc2BmUpSwzdUReTqHmgO5peeIcsvO7GNMFWsgucZiAdIVE/zQv+SfP6jhS+r
jCPs0eeu5zl8/V+gXFE2wy3jTJEl9bkCDQRZS9m1ARAAvh1Nh4GgjpTFZy7uQRFz
5PPXdZTBI+Y4hTpF2heoFzZDI6SLyz64Ooglum3ZglQ9ac+ChTSsO36aw4b22kCM
9WDmkcl7wf21fG9o8gJDVjFjDWbwTWREaKjgS6s/Yb8f9gje/BGySojxynTi3zyT
UN94q9dhVjfiQ79UzXZdN9FyyIx2YO5tOo09hTWSZg16oxP47Mj1ATaS6UIrQMcM
nOp0kuc6SufXPSWsUA+g2lW0dmHgPvIHwUfcjWqT2elF01e9KOFe7im29G6zOS2M
Rx8cr6KRg/eNWpHh5aI4quRUhYk4Kw4ohQTbs9ed0YttS4PMK+sq6xHpb28X6Zgr
WnelPY9hfwcR4m7Ot3VQUG8JY9/aTlFCoeTgkhop+MCUI+dJeY8depIa0PTzdEmE
WRvPhTTv+CUdZ6v4z5LD6FhP+/5c6FCbcIb89Rp5fa53oYV5/KZf+0DUVgmpXFU7
J7ZrGgDeU7vIzmwr8kcx0vtsVm1dVwYLACpTaaQPbISQUDM8sEcqKAqD7hWKaxNs
b2M85L6q2/rnHq4g46yJzdR3b8EH+V9u+mUi9DIljDwcpvw7ReRQ9wPdDWLynngl
IeGImbjYfr324yaIl4vNORAkbsoCkS/qc5v6MvKvYNle5fzb9S9kCbNZmD9c5/bH
Pjj9ENeQvzrl2pFh6dc1o5cAEQEAAYkEcgQYAQgAJhYhBBTyZoLQkWzdgeN7bWG3
tSbZjwNTBQJZS9m1AhsCBQkDwmcAAkAJEGG3tSbZjwNTwXQgBBkBCAAdFiEE3OrF
2WE1uRxOpnKru769uyTG81UFAllL2bUACgkQu769uyTG81UFUw//bW5T7w2k8ukG
fpIcm0gB98VgxKenSCmU6N+Ii0DwcNtzW+pmVWl2TbHIXDpvuD69ODWBDMXu6gBk
rVzNEsK3uhzGe0tWA+5I7Vke3iEkbll7VRQlIOrw+n5NMvjeuDqKsMt1gMEEdgRK
ddYApEAi49vV7XnqkB2lLKfAnf6o/KqPm8MuQ+u0xYanupZCldwdpcx5rybj79Es
0iO9Gh/+3qOtR6ubOz3Vn78Lc3y6AP9pmtdOI2QX8foGK4hNmgHSP6uPLh/ERC9N
ir0Lc2hoEhHEkQ8CnEaccp70r03VkEQuMJQJPUyRsGZ/gIm0SAm9JJxWHXJk2/5N
UN83pHAX0LA4zxtWs4fVW5f8v9eIhFFPTZ4au+/cS9D4GFx4mlY34awcpAzrny2t
ntGEejY9HSJv4PuFZCmtyS2q61N9EU8yuBwVM9cp5HntzG+OT4HYugtI6ibehM0S
1Roy4ETwT+Ns41ffhCwdYMp8tzdeksQ35s7rkB9OJHj+q2dkGaV0FQb3FutbSpxb
P4zk/dLqyxuivdUPHGtf4W/qklxzCWBg0VDFA7PwatmEXRxTjx77RelTY0V7K54d
DyVv3Jh2+FzuaQZzzuIhv4gtqHntaqLnYl3h/QNLbOTE3ppvn9RUSR983Bd+M3Qh
bbwZrgG1m+hdUZUmji+wbK0wV0xHNEH+4BAAjbVzdNOs7hMvjY1wVDRFjvICVorN
dNdU3ELy/9BAoiwOs2+zjDXmsX+3YtdzwKvdpQ24O0TvH4Vo3BkvKkJ75EU7LroA
bYQ2423m1MY3eaBslmX7TUJ3XE+k7OZF8AmcftgP4nhC4IQSCtoBc9+ncyGN4da1
BpYO7b19tO0/HST8GHSrEcU9bGGdimS2eNkSgybA8wF6K0K9yvrpTNSZ7OBVlzQf
En8s70Gyzs/d6C/rTA+defnv3AMaciuINSEdFyfYq4wjt5PikvgceMAAkH/z69xT
Ng+6q3FQt/lyK7xX5qPMe2oFyDA1H+Cb/uL7ioo+jXh9gF+0fk8OP2IPzxYhBful
pVtgclmOuaekzaKeIv8NFW7GoA9OghziExePxg95OpL/VyQ7PJiAUj1pFovFk5HS
6ejVZNEGJ/A5zLc1PBIcr/phu0luqhXAhImsZS6858GWQllWULNWw8bX5Blo8Avc
fFVdq9iAK7aHN7g45ZR7Ze6qKHDyFv4XWuE/rj9C2mM/GAstvU0gGmbo6B1mNGMJ
uX3Gd3dG8fqFjE77OB2feJyfZ8UeF1nvG1hxlmuD1A5e6/osO9V7kjhXKzM2zSO1
1zHQ/5PlUisoUBjJ/QIK4v9RBNGtbRKso5X9Fke692lVgrdggDJ3j2QqMuTo71rA
VDLtxerc+GNq0GK5Ag0EXPA56gEQAK3x5otbuNfefm1BD4gJ4Y4EhVvMCdeUf1uN
1OqiWUz6KLCR2UE00QaS7v65D11Oh96bpxtJRe6HmQk0vLm1QgbigZku+kCMt0c/
zOyWBOCVDKahZJu/ayzimrzGgNunoTnV2SjCTIVK4QtkowUeA+Ilt9hfFCZvASIW
Eazj7bWXC0ji1U73OzVdvMkP1e2ibGRxhp494HGkj0metYhQq3vQk9mL9BzE8wba
yL9iK2cJVqZxXDxauL5bswAlbdLI2MlSrH9wg6Jvy91lXG91ZEhmZ2RaQU7q+/gi
i0QEJ+W3pwPQQjLciOfecMN/n2/vCl1ZeJ2PNWMoXQXyNSJfpL9m4TPiW92i2kKA
kaEOZ2LilUJhuZKEpt9+BxIn8FafjAI4co8D5cHvnPM+PXt1xZHwP0Yz+yC6wnLZ
9scNw50AM/Kcpbc6TUIQ0kPHsNMPCbZ89Ey63i61IvUUxKXvMENFptw+vWfD7MRG
OZKObdH3VrKtD1QqU//g/xeSGNzcUVatFDVDv9KWRSatOpKi5Bw6iUYxrYQ02Co1
45jugTnGc+zrhMsDND3Prq7R0dMw4MBLUDoLSXrous9VgahAsOQztZCfvPOcV2cp
Haew85SLSYx3Ksb/raIBG02+4XjF2xQCgIq2LfA6xdG5SGJd6Lg3ip2vwhogQgWB
gQoq1qrPABEBAAGJBHIEGAEKACYWIQQU8maC0JFs3YHje21ht7Um2Y8DUwUCXPA5
6gIbAgUJA8JnAAJACRBht7Um2Y8DU8F0IAQZAQoAHRYhBAl7MTB3rmKgL4TaTfGm
Zo+7fVcuBQJc8DnqAAoJEPGmZo+7fVcuilAQAIq/yZi3hoxzSgDblQlcxGKDaVck
Oo5kFRK8UWby3peB0bTMV3MWjiCqmN7ryQ3K3126G84yxXCTNSv7kOLsVxy+DBx5
TdfmeBamdX8ZxeDv2bm+KKlZR0FGjJn3SiLgPkrp+GWazOZbKIPsoOuN20h0aeMC
6mvO2Rz8kGUjCWa7/jQnINHkN/GnbnoOXOwn8cfwtt2Mkb8XRe+1lTBwd59ruYNe
0hazOwygmcSgvM6Rm8nXeuElxMdVAiYKXJLej34+ToNN/LgB3TD72cXStmn7z8Zd
kaKV+f81cyPR6Vi7laLIIGpquBGSsel5s0pb2PFFxrTZoNLZit/x2Wx47V30Vdk8
4FYjENEb75OpGEErwVRbTQOKREOutXlGwiJe2C1Gof6HCScTfeLybz8ooLn+lSXc
B/l/68wqxPezWYxfbMD42YtTEkJcvPvRaouhKEjxASP5EuT3K6T3tujsqg1K/rE3
Bei0krOnTxUCdU1f9B12ZXpntKehwSz5LpnQvr2wR98yBRoKQfT2Yi3XK7QNp46X
t8zaIPdVw7qMn3gODyiWSEGCtkRF7exdHZ9yIW2yofFhz3M/tjIvu89tRDwGiDER
8hBk8Z2mcbmlMzQ1Jf+aCQ5YjZUkUN7+s4VN1OT7yYpWcV1pR8cDjyoh5LRXTMwt
3g5mStiVbeIq9mmzogAQAJq6KK9Z28q8mAozKLd++JxefiovydXVfqWY4GoOfhtv
Q3Sa2mKpd+8b3nT7b2twIlKPqpIxWPxke7BWS2sShZfqWkltbAR1SH7fkOeN8Y5g
3yBbxvHQJ/KoTHnRb74MrlaP4v4MvuOAn+Xt0wrEt2OKOSD6/jIjnXqfNFuvHfHv
tTyf4fVX1sesN9JwRKevc+1ZLjN7jIOuv1en5grwDtGQhy0fZOjISLGKfy6299rn
p0alRqLuAhFn7Ru1ZjQVfqD9VEfDCEsjNOxoW95Aa7MVtdoaQ6FEBKK0q1tCjZ8P
f2oAyLGQo5y030D+cDd1lTwGUmPvbnz4/fqNAM6KWvZjtGVNJNlLlpd4A9vAgvAo
I1wONXSR56sZKqJXbOSIfS8AlzFWiSf6IyPpU5ozVHLGySAVqtrRW3ci4COeKsRL
85W4iSGCl5uYZcE9weotakUO0R0u/pi07BJwTR5hvn9IDdeyYreqDMQDMF9yHCXg
+oCGJAB/+1vHkSABRsWbCERyB33ExjnDr7z/kWOhfAdiZ1+MpP2IPtDEXcufKayS
gpBAZYEeLAmNoDwtg+milSD3B7TTM11IY1e/Utq70Az/8BOK6wfyIsmPfeh9CHoW
+JAbpbdOUs+rH+gjHE2Pls8BDLp1EoHVPGVavhfSlRbG6oF45s+gdL2PTKzEo+u7
=xFcH
-----END PGP PUBLIC KEY BLOCK-----

The post Updated GPG key for signing Firefox Releases appeared first on Mozilla Security Blog.

Chris AtLeeUpdated GPG key for signing Firefox Releases

The GPG key used to sign the Firefox release manifests is expiring soon, and so we're going to be switching over to a new key shortly.

The new GPG subkey's fingerprint is 097B 3130 77AE 62A0 2F84 DA4D F1A6 668F BB7D 572E, and it expires 2021-05-29.

The public key can be fetched from KEY files from Firefox 68 beta releases, or from below. This can be used to validate existing releases signed with the current key, or future releases signed with the new key.

-----BEGIN PGP PUBLIC KEY BLOCK-----

mQINBFWpQAQBEAC+9wVlwGLy8ILCybLesuB3KkHHK+Yt1F1PJaI30X448ttGzxCz
PQpH6BoA73uzcTReVjfCFGvM4ij6qVV2SNaTxmNBrL1uVeEUsCuGduDUQMQYRGxR
tWq5rCH48LnltKPamPiEBzrgFL3i5bYEUHO7M0lATEknG7Iaz697K/ssHREZfuuc
B4GNxXMgswZ7GTZO3VBDVEw5GwU3sUvww93TwMC29lIPCux445AxZPKr5sOVEsEn
dUB2oDMsSAoS/dZcl8F4otqfR1pXg618cU06omvq5yguWLDRV327BLmezYK0prD3
P+7qwEp8MTVmxlbkrClS5j5pR47FrJGdyupNKqLzK+7hok5kBxhsdMsdTZLd4tVR
jXf04isVO3iFFf/GKuwscOi1+ZYeB3l3sAqgFUWnjbpbHxfslTmo7BgvmjZvAH5Z
asaewF3wA06biCDJdcSkC9GmFPmN5DS5/Dkjwfj8+dZAttuSKfmQQnypUPaJ2sBu
blnJ6INpvYgsEZjV6CFG1EiDJDPu2Zxap8ep0iRMbBBZnpfZTn7SKAcurDJptxin
CRclTcdOdi1iSZ35LZW0R2FKNnGL33u1IhxU9HRLw3XuljXCOZ84RLn6M+PBc1eZ
suv1TA+Mn111yD3uDv/u/edZ/xeJccF6bYcMvUgRRZh0sgZ0ZT4b0Q6YcQARAQAB
tC9Nb3ppbGxhIFNvZnR3YXJlIFJlbGVhc2VzIDxyZWxlYXNlQG1vemlsbGEuY29t
PohGBBARAgAGBQJVrP9LAAoJEHYlQD1/DRWxU2QAoOOFRbkbIU1zKP2i3jy/6VKH
kYEgAJ9N6f9Gmjm1/vtSrvjjlxWzzQQrkIhGBBARAgAGBQJVrTrjAAoJEMNOV0fi
PdZ3BbkAoJUNHEqNv9dioaGMEIpiFtDjEm44AJ9UinMTfAYsL9yb15SdJWe/56VC
coheBBARCAAGBQJWBldjAAoJEAJasBBrF+oerNYA/13MQehk3AfkljGi252/cU6i
1VOFpCuOeT7lK2c5unGcAP0WZjIDJgaHijtrF4MKCZbUnz37Vxm0OcU8qcGkYUwH
i4heBBARCgAGBQJVrSz+AAoJEPCp59zTnkUulAYA/31nYhIpb7sVigone8OvFO19
xtkR9/vy5+iKeYCVlvZtAP9rZ85ymuNYNqX06t+ruDqG2RfdUhJ6aD5IND+KD5ve
7IkBHAQQAQIABgUCVaz9fgAKCRCzxalYUIpD8muMB/sH58bMSzzF9zTXRropldw7
Vbj9VrRD7NyoX4OlDArtvdLqgPm0JUoP2gXINeSuVPpOfC676yVnBEMjIfqEjq09
vcbwayS+Ncx4vQh2BmzDUNLE3SlnRn2bEWr9SQL/pOYUDUgmY5a0UIf/WKtBapsP
E+Zan51ezYSEfxDNfUpA4T2/9iWwJ2ZOy0yIfLdHyvumuyiekJrfrMaF4L9Q0OnJ
wp1PwkvN4IVwhZeYDtIJN4nRcJK5LrwU7B97uef2hqBBll7/qCHl5y4Khb0csFan
Ig+pQLPUJdIiYtzoFtlgykB61pxqtU9rqGKW02JzEUT8DdPUXxmMBy6A8oGeBRH/
iQEcBBABAgAGBQJVrRdcAAoJEGVzgtv/JREKQJgH/3nD/3/SumL7nG2g7Y1HQqWp
hUbn40XWvjZcHq3uBUn1QYXeZ5X56SANLM2t+uirGnNaZXW3cxEl5IyZVLbmcLWE
BlVAcp2Bf3FXFbdJK59f+M+y2+jZT9feTyrw+EtLoiGTxgkLdJyMyI0xGmQhMx5V
1ex1CxhZK2JPjzCVYriBI0wIbmKi90YNMQoSsdMhYmX9bHl6XWS9TCDWsqj25FLY
JL+WeVXpjO0NjRwEE6pc/qldeJYG5Vbf0snGxIerXe+l5D8Yd4PEAnpj58+5pXeo
GYZn3WjX8eTFMAEU+QhLKWQ+j/Y8Kijge7fUxnSNBZ2KEnuDN/4Hv/DrCFLv14CJ
ARwEEAECAAYFAlWtZVoACgkQ5DJ8bD4CmcBzsAf/RMqDdVHggQHc0/YLt1f/vY9Y
7QQ6HwnDrtcNxxErSVcMguD8K6Oxir0TMSh+/YuZAW8K4KSgEURwZqz4na8/eOxj
8bluNmlcAseQDHswqU6CyB95Woy3BocihH7L0eDXZOMzsa33vRQHBMioLxIbpnVt
VbFR1z7tmyfjcOrzP32xo5QoPoczKX26luMBjAvbw1FC0is2INnmUSYM4uH7iFZu
XGPFYxcAqODqy5ys3MoPa4oZ71d0HoiRil1+s0Y+2ByddZ19pE2TXp4ZXNYNUj/2
aRj8b4sTjR4rqhHIx/vfoK+VCNy/skFUZOyPdbbymE0stTRSJ1gr9CZLcBWYF4kB
HAQQAQIABgUCVcFZcAAKCRCJFz+VfFX5XqApB/938p+CJiDRnh2o7eDWnjSyAu7F
WmWGkOQnjI/kraKx1vojsYnKRXD6mjq1QJ8Hsp4taJnLQjcokNTUiST4m/e4ZJEx
PWuJKkwlralWGH6NpqYcgWPajSYb0eYQC4YqS0kfyzolrHdKI8Y4NGEU7yy5zsHw
WkHt/mpNQMrYnXwyWdIrc03X/OXo51dJyshJDRw3InREyBblFJcLvArNHz219wMr
XAicPytw4wfPpVrmDx6GrZcI8q8ECWCjwSXXv7hRpEuFLSy5XPhMc+wYBJjNlUoi
FBAF/7zENd3rMn9SCQLiIFYe0ubmO+bpeGy7TizbxOaCIfgUouyy0BQXNuJBiQEc
BBABAgAGBQJV0hrqAAoJEK18uZ+CSLoPzEIH/1D6sJMNAJtZCRGhJXvv6SYhv4pU
VNyDF9FnUvRsovliojoe4IkuBTWKhPGrxbiD5IO/izr38shqNhhm9JE2/SQZHObY
Pi+lyfDKbJgImTNxmS4F7JHnRLr37VxK1sVvuNkynJnqvCcp1g5xwNIx1rKcka3i
uqJj6toM8XQfgsTHH1rUkWHbUV3QwNzXm+yhFm2s6QzxBooPzmFn8AY7CXD4pvcM
R+M0Zy+e42nngd8lzRnmTBVig4pRq0GCMulFG+XjeVQZFpoIIxo2k1lczbRmGttO
NdGWSjxBUxReoTbSwM3C/50NrobycGQgY0gd6LGtWtU8/uEfklEy2NluxYWJARwE
EAEIAAYFAlWtAUYACgkQVu5xjc4OFUs0OAf+LM0dyyvUFGdXfJDpP2xMknXzsHAX
WFEtH5jein58mv6dD3fTVcCouo1vMQH3WFFSLYZvwtNnHGrSBqFbNKqZ0ATQ5tcY
aWsSZ+MVJJMXJDXFG/Oihg1nNOM33VdfV0RGPKP1I4cEROxms3TUFkHW3cSCgMzs
8I1OxfSoLrm6da8EN+2ct2InqzdQL2yisyTyrdmXoNpwXDxApKYkvVHQ4+9eJI5m
0ZAr0mBjIeJdATcw4/lIVKTrV7UhrChxiffYJcz4SSC1crmr+2Fzw53CyAsAmYal
UHep3Yr05oQ4oJRX9X3VrY/yELHwwxXaxCAdwwHbbXAMhZsPk9Mc20J6BokBHAQQ
AQgABgUCVa0isQAKCRCj1lIXO3Y+j6ZeB/91Q9/qr5oMWgOMsix8kflBLw2f/t+t
RR0SWDw90bG1npJB6nq5Hl+Bz4/A4SWFTFrrrlZi1Enjn1FYBiZuHaSQ/+loYF/2
dbQDbBKShfIk3J0lxqfKPAfKopRsEuxckC8YW1thGxt5eQQ8zkJoqBFTBzwiXOj3
/ncJkX9q9krgUlfTSVmrT9nx0hjyNQQXrghsmBtpR7WCS7G7vNRGCNUorhtviUvL
+ze1F7TTSGspVsVxo2ghmz5WT/cD9MV1gcVjojYmksh5JIl39jCHr9hl8aRId/Of
zsN+TKuBcpAxDkm9BCAps7oY8FlLKDFZTtHa000AkodKHT88nwnvKuqPiQEcBBAB
CAAGBQJVrTkDAAoJEPbQ92HczOykK9YH/0MARo3HlYXeS2bDqM/lwK/rQcPCCyYk
e6wbICjncbCOjgXHqG/lBhClNs7hp/7gqkUaR7H5tmeI4lalP40mSHHnnFvMD3Tc
yhn350igK0bgrjWQDaYxhKlHT3vIXd/C24/vRSAxmqIKbP+IoXOyt2GMTQq8GOm2
dgYRaTkwyHnGWnMaibctX8D4oCYR0/D4YJqPkfqobf8+1ZfP5GaMbSxE/Jwdo0kJ
a4vPjEzFXbygAbncapzdwN6zgel2zh885rz7B7vIpMr/Y7eV85Q68qdyyhLe8cL8
Y18YPzpFf+/PZNbgYxouafvnFwBhPQwg0gUF/+1eM3UE2ua+saSTGduJARwEEAEK
AAYFAlWtCVsACgkQM0LhtmejiGMovwf8CfYJHNbwiwSMUoP4n7FrmElhBtxvlbnC
MZKz08v+lFsfS3wU1LUN69GqirfF0vkQRSlSBp7niCLHQCfSoqHMLgxF0P2xgXLj
aYM/t/rxXDawJmW18G04dqFrtCPZTbwMT2PsPHTiWQdaN0e50lXk9Vo+l6VbwQMg
4zH7icZadeJgQooxFalHYFVXUVeex9t8/YdanFVrHFa3tao6azBTSUkJvZtIu14S
fxigDWIIwsx0xpVfJf3a/xC6HY3Q1a3NeBz3i6DwaK5wYqijZKl0WVdULKyqU98o
F6y0mUv3d2o/p07Cqgeo6xxMkHqu83OLa2a0C7tYPLgL4EFc2FtikYkCHAQQAQIA
BgUCVaz7KAAKCRCWO3gxCjexfKxrD/4npm1rB7+pPlotbqK37Mur7egPbVSAzVNU
/zUKPAuGUeP3C64YN77ETx1kDuS+meAqMDHFc9Bf8HivPbtj6QcK96U5KstbmSh1
Ow9YiQtxJgxGjg/CzREgZAFcjy0MhoklyPsFhv07s6MLOJMSM/krEN5nqjifQ0Wd
mTk02FLoHVWcLdjfgMiPiSjGbU3k7luvjPyRNzk831szE5mfa74rEYh4TBklse+2
uB4DFQ/3oHZ1Sj6OBK6ujmNKQjIP7Cl+jmjr7+QK0OJcRaj/8AckDA5qXTZACh1S
2syCDDMnX0V+dTxGCIoWOK+tt9mLohMzpEeD4NIX4qdpbbCRzeYZMHSomyBIsbA6
B+/ftDE7W1N0/FtJ9adkkCynKULvh2CH5c5hgOOL22M+2spnywRoeJRUWU7hBM5O
UH3JjA4Tu4j/cwp7dD7QzZrzmC9f5LQJ3OelejvVowWPQd3/tky4o1q6wlmFqAcA
gtu97UwgBOSR9sJPGDlt1iC91UYAiBQQAA7ya8uXUS84mCQwTlr8j+YrowvEHK4I
xpPREytT1LzzV/4Am4ndDFtujy83QjL0qaIIim1xIwoEosd4yidhpczw7f3b9dQp
uBIFeQuhM7JsxP4tmE7S6k6GlEmqa3INPVaPGnsUGS7+xSMlcJXLtimPCSQvFma9
YiGV5vtLy4kCHAQQAQIABgUCVaz8uAAKCRASy06X4H5n0dg0D/9QoxIh9LRt1jor
7OHG4xKUjKiXxn/KeQNlJnxI55dlWIvJEJGheFjaDomzKBYuxmm2Ejx+eV5CHDLU
YsLFYwWf8+JGOP75Ueglgr8A0/bdsL63KX6NP2DCg8XR4Z1aeei3WMY7p/qMWpqb
QoAv9c3p49Ss2jSNuthWsRR6vbQ9iwze2oaUaA44WKQyhhbCwBU4SHYjlKCLqIBh
/HXZFhZ4rDfuWgPBKvYU1nnOPF0jJRCco3Vgx3T9F+LZ3zo5UPt1Xapr3hMVS9ia
Jyl1w4z2miApUaZuHPuWKuO4CJ1GF1mS5T6vG8gB3Ts5zdtBF2xQIkCz+SM7vW/2
i/82oq6P8EuLHEhrQPR4oTjXIvXdEJ9kgbjqcj8Xk+8teEOnuwh6iEhay9i/bf0D
3Jd+roFN5dnWPxhOVjzrI3fwlK1/ylsZYqUYBEzt7Wj0MdhjeKssI5YICcqYXXjB
ttMw4B7DZXPFXzz3kHB56jZ/II4YUjpLO85Jo5A9SV+aIqa0mvCt6DvVWy/rhfxf
oUdqNlhX11gkVLaA7xxgn/NqPOf+h5hVO2mwWkmart9YHKMZ3ukCdke65ITL/nsY
Sm2ZhG7OYjaCfu9jPWtkBstOEWyT9q4JTdViR7wN3eMefEG6rb49rxOYvGJu+cTV
kp3SCpl0w1j+tPj4tkj7ENzPMXdnuYkCHAQQAQIABgUCVa0s4gAKCRCKsTKWOgZT
euMyEACKOySKAd/xDcPcHg7Prvdws04Z8DIR0dY2qUlbRVx2jTmIXyry63CqbOJF
bDg9uk5x0+lSotvrWtZ+NKSrg9VM6vyV4cc2P9rhqIBi3wO2elzAmpOaS2KKOjQ+
2fS/xqh91ElJUu09xXQXJ0vMrqgui+zN1YBDiJV0WOmm90Mm2NPiihcWZmBmDorO
qMQabwbjBLi0yUVHgAlkilY3mAB4tmEKDeN+4pYSAAhXAll9U+nyoVMgwMJscZya
zOp4MqMbmFjyr4p5AGzv+OOJtjtCNKT6oW9Y+URLY0YKeOsPk0v5PlbQCVBlLeSB
sNZudKav/Gvo7Mvz5uLTcneBFb+haYIiXO/FQm4uBHkzdNFLgaph81Wzh62AhbtB
lfBOj/lbzN3k/xRwo64QU+2Z9GOhFlhjfROquY70FCQcspwNuqCdZybnkdpF2Qrr
6Pi0qKR/Xb9Vd7PW0/gKQdwwlYTiDemgA21mYeJrYw873/7U/+kLFRvmPAEX4IOI
OEN6XVjxvu78REi6CmXxOoYnH4aRSXDRyi1nsGjB43AtfAMMNCUigDgFP4sUsZAG
1RAoxBhOsO/g9S5wx8H3rKITCXDjQh2SYeBwHFcU03EMcyzEQhbZNighN+aRKGIi
bteRxISiKU+kcWaHolemeo6wGF87QXEpJaQ2OwIoIxQYvDDmQokCHAQQAQgABgUC
Vaz/8QAKCRA/8xuvEEv54t06D/9n1Nyn2QSUN1mXd7pomoaka+I2ogDbQpu9iuFq
bkqfcH3UuG8yTKlPp9lYDBs0IEfG85Js6iVxJIultocrcDmOyDkyEsnYbdel/tn3
X4yqD8eI6ImRoCE+gnQ3LoEIHuODfJoosM/jAHANs4fsla4/u5CZDXaaq7pYXGiT
t7ndsfmLiCa7dAg7bVFfJagsnL/VjlfeWM9nW01rDL9LPxSN4tq7ZKXWZDonFZYJ
4unsK/Cn6Pqco4Wb+FUOWCcWt8in1pgeNHZ9WnAgXG999/3iCbbQTLB6uVwY4Ax5
P7VApnLVXV6QFVf7bN1DxE8kZk+pfLGcuD1LJSF0skE80M17kAt+iV+fam8EYzeG
dG6cY6w+srndaMaq9ddiHIiQkR35SjJAGnrNRj8ooUr/vKOBnFfuwJLA2MOUVPZ8
HWB+WXW8qhihw9CXa38Hdt4o5knMGRIyTWEF0TQDtRGQ6hisVBN3OxJRXBj7/QgC
G/GoYpweGKcsMU43p57TzbnXVVUytJsLFyexOGNzrUIxgDVPEvTUnNvdAihNZPdb
W3YdFkP9pdwOyDpQwebXELUx1kp4ql0laueex4L1v+0a6rDYQeK1gOq5UGY+THRS
gB2xsHl5zeryfgnjlUkUlxKuumz+9FI2fRtSpxmWllJkRF2oFMGRuLPGAWe8nHvf
gkuGVokCHAQQAQgABgUCVa0bowAKCRCVY0f2+/OkFWKREACZ9TOmzvY6mrfWVEdl
dcYPj8cU/1LJhGdbNo5YYMx+A72nchxGXepHA65OEK+f6rFMeZFPwpQPy6Sj3MhT
623H/PECfeG87WcLOyJbfc3i9T5jvxS+ztG6abYI2J/50oMvjUWdWkDX3VvdPc0Z
Z+KC+oHvx9a/9Yki48m4CEKglgVsrRW/b9AXZQCj07bB0GjQQtkqY/m1Z8m4ttzx
fO7OBo/jHNF2An4/4gUDirXNDj0UdB5FYFJaTEUCneIj2x0fk1r4u6na8tINhiZ0
M7IgjnDlBD5jwzvwG+3kYE6TnYp9Mfeg2MPC13tp7jrJatLLutrOzvmSVLGLXbkh
9w+v+vx7qO3TxZUNlFqTmYs+vI2V/9j7KYV7Ttoind6Io7X9ImnYrvd8JOyVcO38
67MplKnrnqHJvFStE+JcHEcw5aRw+WVmoFd/obGc34V3K62T977QQGOkrTYDEdje
KADfjXXZkZMZc0IvzLBOJ1XB45+PKqJYCcJJS8Xr55+NGCDaaUPWDpkNGIqmX2n9
kYROMKG6uWkZIqG0JlZkga3THSJIvLiy6uoOvDC4GoQ9JnTwpGv6r1Hwcg+4DCOr
YKOoPKMMU24vHx2FtRRUgCXtr2cmi2ymHlUrtz8EXS4tblic8lixcbvPUqLEvbJ2
gfWQvjXNd1whYE/wfvI9WBTEIokCHAQQAQgABgUCVa0b3wAKCRC8FzAbSRs/IQhX
EADiKbCnsN/+Plllxn6SQHACEU75ackx+Q02XiD/u+wUptYUGmJi4aaW9f6mgzed
OxYK4S+/dCiFtkcYlL+FjaR0C7G6tMjrDgW+8nQCTPUNQA0gX2B8n06a7Zmdv3Eb
V/PIJJwTNSBp/dqKbvPKnRquOOpH+ayZ3awKOq/LlWBErbW1gB+FabN0lCe0iUIQ
TF9OH3GC4QsMtIrePueBmVrVPcHATV2Vw9UPqX1uX/tlXm5eai06oVT7V0FwUbg0
o1eacblNXvHciHpe33zZIKkGBWwSjDVcU9/SN+U8GfoMYmyCma4iN3KaCklpzBkJ
iQZtNKPAB5KJti8LDUxFi2sJd3sqWaZDGFhO+/PKhBKpqIhAzx1ppd11zLgh0eg6
gQlXN8D8ELISRvQqGGNNZdChEFdzGElg5SMfmeEd37OaX4wceLLV0v7EA0doHMVo
0enFhSwU3YwtwxbiukKc7H/ylG7+jvntjY+z7KktRsY/FkklrbrNhddMBQMMSAQU
Uz1GJ+6NUKmzXjqxFuuh3OAhqNzhJyABZWQcNMph+rogEslkenwoHV9gWRWtS3CM
ybJkKkbsWpYhMZNY6hFtgCwida7NPs8369v+yTTE6TU/NIlXUKYIf2LMqtOpEBTj
aN3jKpUi5DeE3zBeh6iVKUrfCXbt8O0rYQPNWGSW+MZ2t4kCHAQQAQgABgUCVvA4
GwAKCRBE9G4UbQI5XfS9D/9XPK7jg0lmsNZ2sDIyeAw5n6ohSR5F20ocTMAVeXqN
7VkvJdNpIqHJa13EP408DgTy9BsSptym/OQGE6B82BU7FZTEL6eMHnGGDg+5ktx9
+b73xLedzK75ti6ED+QuA4kDYcvW8hASht0zRcmFUzwbtuEopJ1Lk1R3oFLwCAov
lhduC45nANWrTK5U+D1U2obl5PAvx+9mEfgvojlGH/C/WD74W+cQZFH7t4+muRza
mckLyPftnTxjNF/lpYIm7z0QOwvzBYj+PJ09wYueK00RE5+i9Ff8DrjtVSXsziQv
SjJuUlv0kVvM8r3th4zBBNRhA4cinwqxhgqO4G+r2r9Gv0M2nKKOnWmyF+MSIRnh
gONOQZe5a7kQxKVWkLicS2IGUpPeQyTWaqZzYXsD+Dm6DXD57vYTURtUkwO0CDON
zT5XiS1HG1MZrw+V/Jai4HAvpF5WkTJXPc1Lv75BxJj3wOAw4MzEWCCdr/N/dt5/
+ULpEaSQfIg4L4iEj6rvabQyN0KbOxIDx+pPQ81izfj36wIrDqhyCNIdmVH/yARl
tkL4XDEl/pt7Y3t6jqFhy057lektowClWcPeq3DoL0LFYnjNPpYvIjRIAXdhaYiA
u2ViF8WdGzQ5tFeI7u3PQUG5NcPe+WOPOru3wMMrUhLgLHkCdNkjivP79qIPSTkC
GYkCHAQQAQgABgUCVvA48gAKCRC3hu8lqKOJoLRMEACmlyePsyE5CH7JALOWPDjT
f+ERbn+JUTKF+QS0XyWclA/BIK8qmGWfgH38T9nocFnkw17D3GP8msv8ll+T4TzW
9Kz9+GCUJcHzdsWj99npyeqG5tw+VfJctIBjsnX3mf4N0idvNrkAG5olbpR5UdsY
Yz62HstLqxibOg4zWhTyYvO6CjnszZrRJk0TYZON4cXN14WYq2OTrMaElx0My8o1
qVBnK58pIRzv72PmvQqUk5ZjhUyp9gxjqqCJDz0hVK61ZuGP6iKK8KCLTfSxeat0
5LAbz8aC58qlg5DVktevHOjBgnTa8B7BgJ7bQ9PLMa3lF4H1eSiR9+8ecpzEfGHI
LoeIDIYH7z7J/S0mTgV3u5brOMYO+mE9CEfps85tVVoyJrIR8mGEdtE2YmdQpdFz
YIYvRfq9tnXZjVsAAsC20Smw0LnjhYzAt9QJwZ9pFMXUTg6lC5xT+6LNrEY+JR3w
C16q36bcbCNj0cBv1A3x6OI5OQfpexhLPDgoDiI+qozJIdj8MzJ8W6KU1Z3yb3dq
ACk77yv37rGO6uduSHnSti26c/cUIy6XZBbXBdobE9O3tr8hwvTQ1FXBmYnBrdiz
U6tgxEA5czRC9HOkdk6y6ocbjmONpF6MxkpJAvTMk7IqC2/hisbV9x4utla+7tmN
ZU137QGcaK2AGQablVAy4YkCHAQQAQgABgUCVvCMigAKCRCkhaDtUbi3xAU7D/9g
UPZSJ8pbZV9TLaKD57Bc7B78HNV/B438ib4dI33iihMTBHnCB1giPE9X54QoV8AS
xrO/xveS1kkj78jERqUcED6ZHhMLb9SWs6CxUKdMdgovnIlFUc+t05D5mb6STi+z
NihwO0JI+n79qhETy73WLpC7RR0aMx7zYcbqp3NWPptcf1kVGJZGx+QbEHfVye98
T5pkH5Wp+7LSlup6AldQT/oifxdGxLXbECTnwozRvyMpAaphoEHrET1YOmKnmw/J
yi6DLpTb3XvSf5Tntzr7HklCEcL9FvYCoHxiXWawLhuPhSyrFYeYtF1ypmzTgaJW
yuTZ8sN9J+y7Tbchk/I6FpX+3YoTgPCcC7hv1Krs803N/3KuyBEvhzg7NYRikzO3
fxXlBG0RMm+662E7KlERU24izbWhGiYwl34+MaxrIO4oDvF79LEN7y0+SjL4V0B9
689d+HI1ZfS9O1xkOlW6y0QyagOzsTOUF12s2mWydFmipbYnIwsSsu6Nzk3yO4M+
qYABJXJ3tIFQPTd7xqmPNlJ8mFtmzHDhb3Pv6sRNFLLujYM9cJpuNMbAHWdohz1b
jBT9pZQ3zWpll5wotUvGmJd6hTAXdUgmZ7lh7Uq6axClMmiLe1WYntcNpb04PyyE
m2+GU5x123UTiSX2LGKa4t+HNSM8nJL8BJiGk80xVIkCHAQQAQoABgUCVa0OAwAK
CRDDvTXkbdRdpVR+D/4/37e8WqKOHNPteQu42sj0ZOfcqyVMA9TQ578F0s9MwoQu
qfVhXGSWevOctuMv2qTBjBfFjkdPrKR5L4LNAgMsu1epHU0DPcRZUCbh1P7Gpolm
Z8KgnjT5Wpl1AcuOCaP08VMrt/e/JndTHp6btn6HsLVtryNhlL7oaeYbDr6/ovHN
GHVIVSZgGP9f4Y8FiDpyfKav71vYLBMxtzM7lc3eFT1S10XhSW6k+8S5XldYWkLD
riRXDE85C+9QndpOoQaIICp3ye3JVnUxa1qhvsYj9uPt1M6hKiBSoXdplrB+hQc+
nqLNN3jxpGdmGmwrjtjqMhocMIguEqgARJOek3XKOppEhu+IcnJgU4edARJNLsBa
uiVBWY/6mZOFlZq6H48tVyziS2n/oIpi+aCc/fQeGs9zMTtFUohPfYtTcy9PecXM
OYpSu4p4tQ07oucnxfBkRUgTdM5VwX7YwTcRwp9XhHACUEGBhrwMH8Iz+sK2jLF3
FhJGkef1vFs0vqSf4I8DBFkYAKF848YyEcGHeINQloi3v0Kr2PpBxlRh+GPWwi++
QPKXQFzlTiyVtMzoo/lpmAWUJwj0dbAbH/mohtvWtA1WPHC2JRZ52JLThhpDrK3t
//Jdt2WHE91cMx7/2B0PK4O8/j7UVlsOJXpVPsGX5SFCeTB/iS4JtIwWN275zIkC
MwQQAQgAHRYhBFnKni0qMx3iUaokJ18Dx2fCR6TVBQJZDvZCAAoJEF8Dx2fCR6TV
oGkQAIjqaQ7tpdhDJ6ORNtLIt0TsWg0jg2rpoq+9Au36+UYBMuBJ3Py/tAsZ3cqQ
lig7lJiQqOuQZkbg1vcY4Kdad7AGa8Kq3sLn8h2XUlNU90X0KAwdCTA/YXxODlfU
CD2hl4vJEoH/FZtfUsaLNHLmz0brKGrWvChq00j5bPfp90KYKqamGb3a4/LG4DHL
4lmEBtP++YA0YqUQ3laOvKune2YwSGe4nKRarZnFiIn2OnH9w0vKN/x9IMGEtc5M
bQVgGtmT5km3DUuXMDforshue6c7ao4nMOC96ajkWYZhybqHJgLOrEGPVUkOaEe7
s1kx4ye9Ph3w/LXEE8Y8VFiZorkA/8PTtx0M9hrCVkDp0w8YTzFJ9DFutrImuPT6
+mNIk+0NQeuDsv492m/JXGLw/LRl97TmHpKME+vDd5NBLo4OShlDKHwPszYcpSJT
G9+5++csR95al3tWnuGX9V0/dO1s7Mv0f/z07nLB/tL+hEpqqA5aRiGzdx/KOrPZ
uhCTyfA3b2wvOblwf4A/E1yO7uzPTuSWnx1E14iZuaCPyZPXEh3XSYCLEnQ05jy5
0uGXCDVR+xiE/5i/L3IxyhJk6zn5GOW5b8Taq5s/dFS3zWiFS6l0zQ1VQmJH8jdG
LoBFvdVLZoAa1bihLo+nJVPR2RauWnxWoWk1NQoT3l02Lk6DiQI4BBMBAgAiBQJV
qUAEAhsDBgsJCAcDAgYVCAIJCgsEFgIDAQIeAQIXgAAKCRBht7Um2Y8DU1CqD/9G
vr9Xu4uqsjDHRQWSfI0lqxElmFSRjF0awsPXzM7Q1rxV7dCxik4LeiOmpoVTOmqb
oo2/x5d938q7uPdYav2Q+RuNk2CG/LpXku9rgmTE7oszEqQliqKoXajUZ91rw19w
rTwYXLgLQvzM3CUAO+Z0yjjfza2Yc0ZtNN+3sF5VpGsT3Fb14aYZDaNg6yPFvkyx
p0B1lS4rwgL3lkeVQNHeAf0qqF9tBankGj3bgqK/5/YlTM2usb3x46bVBvwX2t4/
NnYM5hEnI57inwamX6SiMJc2e2QmBzAnVrXJETrDL1HOl4GUJ6hC4tL3Yw2d7515
BlSyRNkWhhdRp1/q9t1+ovSe48Ip2X2WF5/VA3ATfQhHKa3p+EkIV98VCMZ14x9K
IIeBwjyJyFBuvOEEIYZHdsAdqf1zYRtD6m6obcBrRiNfoNsYmNY4joDrVupI96ks
IxVpepXaZkQhplZ1mQ4eOdGtToIl1cb/4PibVgFnBgzrR4mQ27h4wzAwWdGweJZ/
tuGoqm3C6TwfIganajiPyKqsVFUkRsr9y12EDcfUCUq6D182t/AJ+qE0JIGO73tX
TdTbqPTgkyf2etnZQQZum3L7w41NvfxZfn+gLrUGDBXwqLjovDJvt8iZTPPyMTze
mOHuzf40Iq+9sf5V9PXZ/5X9+ymE3cTAbAk9MLd9fbkCDQRVqUD0ARAAr/Prvt+m
hVSPjNDPSDrTBVZ/7XLaUZvyIVggKa+snJoStrlJGTKKFgDVaYTOE3hP/+0fDdQh
97rjr4aRjd4hBbaNj0MzZdoSWYw3yT+/nidufmgPus0TIJMVO8I6rl3vgcfW/D3o
vNrLW/LjkTuM9a+p+D1J7woCfMSWiFMmOLPKFT7RBuY8edCVjyA6RP9K9Gj1sURS
eqNaHR9Gr4rW10s+FwUHWxxzbmIWqH0gApQYO6vyND5IMcKOBCWQU6Detuq1pQ6d
Uc+iF+sEz3Rk3C6d4WBBjtkVJSJ0KKan8Q3gJefOCMNhdRQDjZLwbzr4bgoAkLba
BFCjiZxWZ6HAdMfSCV8uZQrtMS7b0DUpY0vdH9Htl3JqOOkK9RorYDQBuPdkTYFI
NsmtWVsFV/LmR891mOF3fBRaoVoMeJVwiZyNlFY+dyWWFzLp+GoTLcQtmuR7OkmO
cBGxWSKPcZfPqhf4dVQud7bDR2RNfJ1Hqa5kj8Z422sseYDwHf/T9OWWYvLwKGZh
lUgpnzO3WCGrd/6EVNeC1mKXt4F7BmADov4Rdcrp1mPXiVt7oIxLaS6eBNf2y1TW
zjYj5ZFuKqIukDEJfqpwsE5asnCw56nae+7luGs8em1J9GEXhWzXG15UVyQJaFwu
B1iL8l7VcEQz4ABVrSTUWLLAKDsyqUbq2gsAEQEAAYkERAQYAQIADwUCValA9AIb
AgUJA8JnAAIpCRBht7Um2Y8DU8FdIAQZAQIABgUCValA9AAKCRAcacTlXpkF2y/F
D/oDrZm143Rv9NV9InnVJ0brpqbB7aulFfhR1LDuJ/GjeqGAQgJCZdHlzT2pfCXX
swUlYzcWEatvGcDkoaB5Ya2qs+6nhBk8pT6XYRrZAtIlKIGrlCqoSBm9HXguGv+E
IaEECr2z/Funx9so0mP+5aJn65M9u3lPmuAonj6DcHoM07WsfsXvQ4ut3fabFmzi
lLGeAdEDKIw8Hn3JBUOxUyFrQlOoL4/3qK1TO+cidz/2bATQQyIG2kNOSgHBslU+
e6/7sWOQ4ufmzm7dEsf197zPXGdXR88LT+d2uU2K4GkCffNUKxZqy9bXxXPwr4JB
jxLDQnDvl50GAWjPZAwXEd8Okwl5+8xp0HuZ217WUqT8ib0oUUfwh2H1vrMPRr/4
6i6O6THpCkV8BWF7axPYIibaeYwC4BkjZwK3tIL5ESf2f0xK4hbE3xhMTeqABQHo
Xd5rQ7SEaUuX7PlQ59fRs0Cz55vH8/o9zMm0PN6qmZFvRBeqjnklZcu+ZdP9+CMX
t81NMuzIK1X7EfpkUoam8YkYkwcCkRvPZrSHLXZFkfnx4jW543dPOfycjnv6hhKy
oXD9CBx0ZcOicsYmw9XMilBGD3b8ZdK6RYX4ywKNU6KUdFJjXB88+Ynv6QxDit1e
mMCHA1glzV9/k36iYLEIqgWBiwJeUUIcUqzgnBFtN13cyS6oEACUGUiPKbw3IkgG
W19ZyS6FBNfgGIGW0Y82Br0KlCyaXnX0R4+4u2h7kfR9NSnhRhsvRnPIkiZATa7D
+Ew1nfpsDTnti0c6g/gVw9TC/rCyXkkLztRHVcWEBdvnFJTSp2LeFaHSGbvvZfoI
GUzyUzoa1P98NmRIY1cxBoizVf8729/zAaD4fAslxoK/JsjjDvDUrRHtaNZmUle6
0Jl/yFFzR3zxb+pJliigoP2rZLt+ipomHJIhoXXWwfkRO9U/egJ8ZUhWEpZvROna
Nc9eVct5EBADxL7gHWjlceIz4ndI1eE9AdEZDdUZwOfjmK2DcXjFBfZC+jhJXjY0
xh3pPKQz90h9DIkM5WDcJPf6ep+MKSd/3hI2/JmmscQ+alwN6x6g8zDySMo3APA9
cUvEFGe0+CepVcNw03jU4faSrHiMXsUuVGbA2kHaYVUfzF5W5GbuHZZlGxoSiq+K
+HNG0RJUDa6bkSDvrcJVNw1iUrowP+LLwnNsy5kGuU4evnwcoN1w7LVbTPaq4RIa
iqvAD33kiA9q//UNKnK4k81z+hRNaWGliyGpgqh+V7MDIqPfT5TMLdH+ZjTeuLrN
S8KBcc2BmUpSwzdUReTqHmgO5peeIcsvO7GNMFWsgucZiAdIVE/zQv+SfP6jhS+r
jCPs0eeu5zl8/V+gXFE2wy3jTJEl9bkCDQRZS9m1ARAAvh1Nh4GgjpTFZy7uQRFz
5PPXdZTBI+Y4hTpF2heoFzZDI6SLyz64Ooglum3ZglQ9ac+ChTSsO36aw4b22kCM
9WDmkcl7wf21fG9o8gJDVjFjDWbwTWREaKjgS6s/Yb8f9gje/BGySojxynTi3zyT
UN94q9dhVjfiQ79UzXZdN9FyyIx2YO5tOo09hTWSZg16oxP47Mj1ATaS6UIrQMcM
nOp0kuc6SufXPSWsUA+g2lW0dmHgPvIHwUfcjWqT2elF01e9KOFe7im29G6zOS2M
Rx8cr6KRg/eNWpHh5aI4quRUhYk4Kw4ohQTbs9ed0YttS4PMK+sq6xHpb28X6Zgr
WnelPY9hfwcR4m7Ot3VQUG8JY9/aTlFCoeTgkhop+MCUI+dJeY8depIa0PTzdEmE
WRvPhTTv+CUdZ6v4z5LD6FhP+/5c6FCbcIb89Rp5fa53oYV5/KZf+0DUVgmpXFU7
J7ZrGgDeU7vIzmwr8kcx0vtsVm1dVwYLACpTaaQPbISQUDM8sEcqKAqD7hWKaxNs
b2M85L6q2/rnHq4g46yJzdR3b8EH+V9u+mUi9DIljDwcpvw7ReRQ9wPdDWLynngl
IeGImbjYfr324yaIl4vNORAkbsoCkS/qc5v6MvKvYNle5fzb9S9kCbNZmD9c5/bH
Pjj9ENeQvzrl2pFh6dc1o5cAEQEAAYkEcgQYAQgAJhYhBBTyZoLQkWzdgeN7bWG3
tSbZjwNTBQJZS9m1AhsCBQkDwmcAAkAJEGG3tSbZjwNTwXQgBBkBCAAdFiEE3OrF
2WE1uRxOpnKru769uyTG81UFAllL2bUACgkQu769uyTG81UFUw//bW5T7w2k8ukG
fpIcm0gB98VgxKenSCmU6N+Ii0DwcNtzW+pmVWl2TbHIXDpvuD69ODWBDMXu6gBk
rVzNEsK3uhzGe0tWA+5I7Vke3iEkbll7VRQlIOrw+n5NMvjeuDqKsMt1gMEEdgRK
ddYApEAi49vV7XnqkB2lLKfAnf6o/KqPm8MuQ+u0xYanupZCldwdpcx5rybj79Es
0iO9Gh/+3qOtR6ubOz3Vn78Lc3y6AP9pmtdOI2QX8foGK4hNmgHSP6uPLh/ERC9N
ir0Lc2hoEhHEkQ8CnEaccp70r03VkEQuMJQJPUyRsGZ/gIm0SAm9JJxWHXJk2/5N
UN83pHAX0LA4zxtWs4fVW5f8v9eIhFFPTZ4au+/cS9D4GFx4mlY34awcpAzrny2t
ntGEejY9HSJv4PuFZCmtyS2q61N9EU8yuBwVM9cp5HntzG+OT4HYugtI6ibehM0S
1Roy4ETwT+Ns41ffhCwdYMp8tzdeksQ35s7rkB9OJHj+q2dkGaV0FQb3FutbSpxb
P4zk/dLqyxuivdUPHGtf4W/qklxzCWBg0VDFA7PwatmEXRxTjx77RelTY0V7K54d
DyVv3Jh2+FzuaQZzzuIhv4gtqHntaqLnYl3h/QNLbOTE3ppvn9RUSR983Bd+M3Qh
bbwZrgG1m+hdUZUmji+wbK0wV0xHNEH+4BAAjbVzdNOs7hMvjY1wVDRFjvICVorN
dNdU3ELy/9BAoiwOs2+zjDXmsX+3YtdzwKvdpQ24O0TvH4Vo3BkvKkJ75EU7LroA
bYQ2423m1MY3eaBslmX7TUJ3XE+k7OZF8AmcftgP4nhC4IQSCtoBc9+ncyGN4da1
BpYO7b19tO0/HST8GHSrEcU9bGGdimS2eNkSgybA8wF6K0K9yvrpTNSZ7OBVlzQf
En8s70Gyzs/d6C/rTA+defnv3AMaciuINSEdFyfYq4wjt5PikvgceMAAkH/z69xT
Ng+6q3FQt/lyK7xX5qPMe2oFyDA1H+Cb/uL7ioo+jXh9gF+0fk8OP2IPzxYhBful
pVtgclmOuaekzaKeIv8NFW7GoA9OghziExePxg95OpL/VyQ7PJiAUj1pFovFk5HS
6ejVZNEGJ/A5zLc1PBIcr/phu0luqhXAhImsZS6858GWQllWULNWw8bX5Blo8Avc
fFVdq9iAK7aHN7g45ZR7Ze6qKHDyFv4XWuE/rj9C2mM/GAstvU0gGmbo6B1mNGMJ
uX3Gd3dG8fqFjE77OB2feJyfZ8UeF1nvG1hxlmuD1A5e6/osO9V7kjhXKzM2zSO1
1zHQ/5PlUisoUBjJ/QIK4v9RBNGtbRKso5X9Fke692lVgrdggDJ3j2QqMuTo71rA
VDLtxerc+GNq0GK5Ag0EXPA56gEQAK3x5otbuNfefm1BD4gJ4Y4EhVvMCdeUf1uN
1OqiWUz6KLCR2UE00QaS7v65D11Oh96bpxtJRe6HmQk0vLm1QgbigZku+kCMt0c/
zOyWBOCVDKahZJu/ayzimrzGgNunoTnV2SjCTIVK4QtkowUeA+Ilt9hfFCZvASIW
Eazj7bWXC0ji1U73OzVdvMkP1e2ibGRxhp494HGkj0metYhQq3vQk9mL9BzE8wba
yL9iK2cJVqZxXDxauL5bswAlbdLI2MlSrH9wg6Jvy91lXG91ZEhmZ2RaQU7q+/gi
i0QEJ+W3pwPQQjLciOfecMN/n2/vCl1ZeJ2PNWMoXQXyNSJfpL9m4TPiW92i2kKA
kaEOZ2LilUJhuZKEpt9+BxIn8FafjAI4co8D5cHvnPM+PXt1xZHwP0Yz+yC6wnLZ
9scNw50AM/Kcpbc6TUIQ0kPHsNMPCbZ89Ey63i61IvUUxKXvMENFptw+vWfD7MRG
OZKObdH3VrKtD1QqU//g/xeSGNzcUVatFDVDv9KWRSatOpKi5Bw6iUYxrYQ02Co1
45jugTnGc+zrhMsDND3Prq7R0dMw4MBLUDoLSXrous9VgahAsOQztZCfvPOcV2cp
Haew85SLSYx3Ksb/raIBG02+4XjF2xQCgIq2LfA6xdG5SGJd6Lg3ip2vwhogQgWB
gQoq1qrPABEBAAGJBHIEGAEKACYWIQQU8maC0JFs3YHje21ht7Um2Y8DUwUCXPA5
6gIbAgUJA8JnAAJACRBht7Um2Y8DU8F0IAQZAQoAHRYhBAl7MTB3rmKgL4TaTfGm
Zo+7fVcuBQJc8DnqAAoJEPGmZo+7fVcuilAQAIq/yZi3hoxzSgDblQlcxGKDaVck
Oo5kFRK8UWby3peB0bTMV3MWjiCqmN7ryQ3K3126G84yxXCTNSv7kOLsVxy+DBx5
TdfmeBamdX8ZxeDv2bm+KKlZR0FGjJn3SiLgPkrp+GWazOZbKIPsoOuN20h0aeMC
6mvO2Rz8kGUjCWa7/jQnINHkN/GnbnoOXOwn8cfwtt2Mkb8XRe+1lTBwd59ruYNe
0hazOwygmcSgvM6Rm8nXeuElxMdVAiYKXJLej34+ToNN/LgB3TD72cXStmn7z8Zd
kaKV+f81cyPR6Vi7laLIIGpquBGSsel5s0pb2PFFxrTZoNLZit/x2Wx47V30Vdk8
4FYjENEb75OpGEErwVRbTQOKREOutXlGwiJe2C1Gof6HCScTfeLybz8ooLn+lSXc
B/l/68wqxPezWYxfbMD42YtTEkJcvPvRaouhKEjxASP5EuT3K6T3tujsqg1K/rE3
Bei0krOnTxUCdU1f9B12ZXpntKehwSz5LpnQvr2wR98yBRoKQfT2Yi3XK7QNp46X
t8zaIPdVw7qMn3gODyiWSEGCtkRF7exdHZ9yIW2yofFhz3M/tjIvu89tRDwGiDER
8hBk8Z2mcbmlMzQ1Jf+aCQ5YjZUkUN7+s4VN1OT7yYpWcV1pR8cDjyoh5LRXTMwt
3g5mStiVbeIq9mmzogAQAJq6KK9Z28q8mAozKLd++JxefiovydXVfqWY4GoOfhtv
Q3Sa2mKpd+8b3nT7b2twIlKPqpIxWPxke7BWS2sShZfqWkltbAR1SH7fkOeN8Y5g
3yBbxvHQJ/KoTHnRb74MrlaP4v4MvuOAn+Xt0wrEt2OKOSD6/jIjnXqfNFuvHfHv
tTyf4fVX1sesN9JwRKevc+1ZLjN7jIOuv1en5grwDtGQhy0fZOjISLGKfy6299rn
p0alRqLuAhFn7Ru1ZjQVfqD9VEfDCEsjNOxoW95Aa7MVtdoaQ6FEBKK0q1tCjZ8P
f2oAyLGQo5y030D+cDd1lTwGUmPvbnz4/fqNAM6KWvZjtGVNJNlLlpd4A9vAgvAo
I1wONXSR56sZKqJXbOSIfS8AlzFWiSf6IyPpU5ozVHLGySAVqtrRW3ci4COeKsRL
85W4iSGCl5uYZcE9weotakUO0R0u/pi07BJwTR5hvn9IDdeyYreqDMQDMF9yHCXg
+oCGJAB/+1vHkSABRsWbCERyB33ExjnDr7z/kWOhfAdiZ1+MpP2IPtDEXcufKayS
gpBAZYEeLAmNoDwtg+milSD3B7TTM11IY1e/Utq70Az/8BOK6wfyIsmPfeh9CHoW
+JAbpbdOUs+rH+gjHE2Pls8BDLp1EoHVPGVavhfSlRbG6oF45s+gdL2PTKzEo+u7
=xFcH
-----END PGP PUBLIC KEY BLOCK-----

Mozilla Addons BlogExtensions in Firefox 68

In Firefox 68, we are introducing a new API and some enhancements to webRequest and private browsing. We’ve also fixed a few bugs to improve compatibility and resolve issues developers were having with Firefox.

Captivating Add-ons

At airports and cafés you may have seen Firefox asking you to log in to the network before you can access the internet. In Firefox 68, you can make use of this information in an extension. The new captive portal API will assist you in making sure your add-on works gracefully when locked behind a captive portal.

For example, you could hold off your requests until network access is available again. If you have been using other techniques for detecting captive portals, we encourage you to switch to this API so your extension uses the same logic as Firefox.

Here is an example of how to use this API:

(async function() {
  // The current portal state, one of `unknown`, `not_captive`, `unlocked_portal`, `locked_portal`.
  let state = await browser.captivePortal.getState();

  // Get the duration since the captive portal state was last checked
  let lastChecked = await browser.captivePortal.getLastChecked();

  console.log(`The captive portal has been ${state} for at least ${lastChecked} milliseconds`);

  browser.captivePortal.onStateChanged.addListener((details) => {
    console.log("Captive portal state is: " + details.state);
  });

  browser.captivePortal.onConnectivityAvailable.addListener((status) => {
    // status can be "captive" in an (unlocked) captive portal, or "clear" if we are in the open
    console.log("Internet connectivity is available: " + status);
  });
})();

Note: if you use this API, be sure to add the captivePortal permission to your manifest.
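
For reference, a minimal manifest carrying that permission might look like this (the name and version are placeholders):

{
  "manifest_version": 2,
  "name": "captive-portal-demo",
  "version": "1.0",
  "permissions": ["captivePortal"]
}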

Private and contained

We’ve made a few additions to the webRequest API to better support private browsing mode. You can now limit your webRequest listeners to only include requests from private browsing mode. If instead you are interested in both types of requests, it is now possible to differentiate them in the webRequest listener.

To improve the integration of containers, we’ve also added the container ID (cookieStoreId) to the webRequest listener.
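
Here is a sketch of how these two additions might be used together; the URL pattern and log message are illustrative only:

browser.webRequest.onBeforeRequest.addListener(
  (details) => {
    // details.incognito and details.cookieStoreId are the new fields.
    console.log(`Private: ${details.incognito}, container: ${details.cookieStoreId}`);
  },
  // The new incognito filter property limits the listener to private browsing requests.
  { urls: ["<all_urls>"], incognito: true }
);

As usual, this requires the webRequest permission, plus host permissions for the URLs you want to observe.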

Proxy new and proxy old

The two additional fields mentioned in the previous section are also available in the details object passed to the proxy.onRequest listener.

At the same time, we’d like to make you aware that we are deprecating the proxy.register, proxy.unregister and proxy.onProxyError APIs. As an alternative, you can use the proxy.onRequest API to determine how requests will be handled, and proxy.onError to handle failures. If your extension is using these APIs you will see a warning in the console. These APIs will ultimately be removed in Firefox 71, to be released on December 10th, 2019.
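
As a rough sketch of the replacement, an extension with the proxy permission could decide how each request is handled and watch for failures like this (the proxy host and port are hypothetical):

browser.proxy.onRequest.addListener(
  (details) => {
    // Route one host through a SOCKS proxy; let everything else connect directly.
    if (new URL(details.url).hostname === "example.com") {
      return { type: "socks", host: "127.0.0.1", port: 9050 };
    }
    return { type: "direct" };
  },
  { urls: ["<all_urls>"] }
);

browser.proxy.onError.addListener((error) => {
  console.error(`Proxy error: ${error.message}`);
});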

Timing is everything

We’ve changed the timing of tabs.duplicate for better compatibility with Chrome. The promise is now resolved immediately, before the duplicated tab has finished loading. If you have been relying on this promise to signal a fully loaded duplicate tab in Firefox, please adjust your code to use the tabs.onUpdated listener, as sketched below.
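
Here is a minimal sketch of that pattern; the helper name is ours, not part of the API:

async function duplicateAndWait(tabId) {
  // The promise now resolves as soon as the duplicate exists, before it loads.
  const newTab = await browser.tabs.duplicate(tabId);
  return new Promise((resolve) => {
    const listener = (updatedTabId, changeInfo, tab) => {
      if (updatedTabId === newTab.id && changeInfo.status === "complete") {
        browser.tabs.onUpdated.removeListener(listener);
        resolve(tab);
      }
    };
    browser.tabs.onUpdated.addListener(listener);
  });
}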

Miscellaneous

    • Since extensions cannot add bookmarks to the root folder, we’ve improved the error message you get when you try.
    • If you’ve been trying to remove indexedDB data via browser.browsingData.remove({}, { indexedDB: true }); and it failed in some cases, we’ve fixed this on our end now.
    • Removing cookies for IPv6 addresses has been fixed.
    • We’ve fixed an issue with setting cookies when an IP address is used in the domain field and the url field is also set.
    • Hard-to-debug performance issues using webRequest.onBeforeRequest with requestBody during large uploads have been solved.
    • An issue with identity.launchWebAuthFlow hanging after authentication has been resolved.
    • storage.onChanged is now fired when values are removed (see the sketch below).
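
As a quick illustration of that last fix, a removed key now shows up in the changes object with its newValue absent (the key name here is hypothetical):

browser.storage.onChanged.addListener((changes, areaName) => {
  for (const [key, change] of Object.entries(changes)) {
    if (change.newValue === undefined) {
      console.log(`${key} was removed from ${areaName} storage`);
    }
  }
});

// Removing a value now triggers the listener above.
browser.storage.local.remove("session");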

Thank You

We’ve had a great amount of support from the community. I’d like to thank everyone who has taken part, but especially our volunteer contributors Jan Henning, Myeongjun Go, Oriol Brufau, Mélanie Chauvel, violet.bugreport and Piro. If you are interested in contributing to the WebExtensions ecosystem, please take a look at our wiki.

The post Extensions in Firefox 68 appeared first on Mozilla Add-ons Blog.

Will Kahn-GreeneSocorro: May 2019 happenings

Summary

Socorro is the crash ingestion pipeline for Mozilla's products like Firefox. When Firefox crashes, the crash reporter collects data about the crash, generates a crash report, and submits that report to Socorro. Socorro saves the crash report, processes it, and provides an interface for aggregating, searching, and looking at crash reports.

This blog post summarizes Socorro activities in May.

Read more… (5 min remaining to read)

Mozilla Open Innovation TeamA Firefox browser extension promoting quality online content?

We are excited to announce the launch of “What’s your idea for a Firefox extension for promoting credible content?”, a competition sponsored by Mozilla and posted to MindSumo, the world’s largest crowdsourcing community of Millennial and Gen Z solvers. The goal of the competition is to create an extension (or other browser technology solution) that helps internet users identify credible voices and credible sources of information online.

The Problems Plaguing the Internet of Today

Imagine an internet where meaningful and credible content is easily found. A place where everyone feels safe, empowered, and accurately informed.

Is this the internet we have today? Unfortunately not. Today’s internet is an unsavory mix of intrusive advertising, misinformation, and toxic social media full of offensive language, harassment, and harmful content. The problems plaguing the web are magnified by an economy that profits from engagement with the most extreme and attention-grabbing material. Instead of fostering civil, informed online discourse, the internet of today is degrading it.

Solutions to these problems exist. Interestingly enough, they are owned by the large tech companies that allowed the problems to grow and fester in the first place. However, the tech giants are reluctant to implement these solutions because doing so may hurt their bottom lines.

Mozilla is a recognized leader in the field of online Privacy & Security. Our commitment to the open and human internet, anchored in the Mozilla Manifesto, is solidified by our unwavering desire to put people before profits. This commitment was on display again recently with Mozilla’s launch of Enhanced Tracking Protection, which blocks third-party tracking cookies by default.

Removing “Bad” Content or Promoting a Quality One?

Ensuring safe internet browsing is only part of Mozilla’s efforts to protect its users; equally important is fighting online harassment. Two general approaches can be considered here. The first is to design online tools that remove or filter “bad” content. However, serious doubts have been expressed about whether this approach is technically feasible at all. It also threatens free expression and the open internet ecosystem.

Instead of asking how we can filter hateful and misleading content, we’re asking: How can browser technology amplify credible and quality content and conversations?

As part of this new paradigm, we’ve launched a crowdsourcing challenge to identify approaches to using Firefox extensions or other browser technology solutions to find and promote credible sources of information online. (For a primer on technological approaches to assessing credibility online, see this report.)

How to Participate

The challenge was posted to the MindSumo platform, the world’s largest crowdsourcing community of Millennial and Gen Z solvers. The competition will run until July 7, and the submitted proposals will be evaluated by members of Mozilla’s Privacy & Security and Product Management teams. Up to $1,600 in prizes will be awarded to the best proposals.

The challenge is open to everyone (except for Mozilla employees and their families), and we especially encourage members of Mozilla’s communities to take part in it.


A Firefox browser extension promoting quality online content? was originally published in Mozilla Open Innovation on Medium, where people are continuing the conversation by highlighting and responding to this story.

Mozilla Open Design BlogFirefox: The Evolution Of A Brand

Consider the fox. It’s known for being quick, clever, and untamed — attributes easily applied to its mythical cousin, the “Firefox” of browser fame. Well, Firefox has another trait not found in earthly foxes: stretchiness. (Just look how it circumnavigates the globe.) That fabled flexibility now enables Firefox to adapt once again to a changing environment.

The “Firefox” you’ve always known as a browser is stretching to cover a family of products and services united by putting you and your privacy first. Firefox is a browser AND an encrypted service to send huge files. It’s an easy way to protect your passwords on every device AND an early warning if your email has been part of a data breach. Safe, private, eye-opening. That’s just the beginning of the new Firefox family.

Now Firefox has a new look to support its evolving product line. Today we’re introducing the Firefox parent brand — an icon representing the entire family of products. When you see it, it’s your invitation to join Firefox and gain access to everything we have to offer. That includes the famous Firefox Browser icon for desktop and mobile, and even that icon is getting an update to be rolled out this fall.

Here’s a peek behind the curtain of how the new brand look was born:

Design beyond identity.

This update is about more than logos. The Firefox design system includes everything we need to make product and web experiences today and long into the future.

  • A new color palette that expands the range of possibilities and makes distinctive gradients possible.

  • A new shape system derived from the geometry of the product logos that makes beautiful background patterns, spot illustrations, motion graphics and pictograms.

  • A modern typeface for product marks with a rounded feel that echoes our icons.

  • An emphasis on accessible color and type standards to make the brand open to everyone. Button colors signal common actions within products and web experiences.

Meaning beyond design.

Privacy is woven into every Firefox brand experience. With each release, our products will continue to add features that protect you and alert you to risks. Unlike Big Tech companies that claim to offer privacy but still use you and your data, with us you know where you stand. Everything Firefox is backed by our Personal Data Promise: Take Less, Keep It Safe, No Secrets.

The brand system is built on four pillars, present in everything we make and do:

Radical. It’s a radical act to be optimistic about the future of the internet. It’s a radical act to serve others before ourselves. We disrupt the status quo because it’s the right thing to do.

Kind. We want what’s best for the internet and for the world. So we lead by example. Build better products. Start conversations. Partner, collaborate, educate, and inform. Our empathy extends to everybody.

Open. Open-minded. Open-hearted. Open source. An open book. We make transparency and a global perspective integral to our brand, speaking many languages and striving to reflect all vantage points.

Opinionated. Our products prove that we are driven by strong convictions. Now we’re giving voice to our point of view. While others can speak only to settings, we ground everything in our ethos.

The end of the beginning

The Firefox brand exploration began more than 18 months ago, and along the way we tapped into many talents. Michael Johnson of Johnson Banks provided early inspiration while working on the Mozilla brand identity. Jon Hicks, the designer behind the original Firefox browser logo, contributed breathtaking design and wise advice. Michael Chu of Ramotion was the driving force behind the new parent brand and system icons.

We worked across internal brand, marketing, and product teams to reach a consistent brand system for our users. Three members of our cross-org team have since moved on to new adventures: Madhava Enros, Yuliya Gorlovetsky, and Vince Joy. Along with Mozilla team members Liza Ruzer, Stephen Horlander, Natalie Linden, and Sean Martell, they formed the core working team.

Finally, we’re grateful to everyone who has commented on this blog with your passionate opinions, critiques, words of encouragement, and unique points of view.

Tell us. We can take it.

As a living brand, Firefox will never be done. It will continue to evolve as we change and as the world changes around us. We’ll have to stretch our brand guidelines even further in the months ahead, so we’re interested in hearing your reaction to what we’ve done so far. Feel free to let us know in the comments below. Thanks for being with us on this journey, and please stay tuned for more.

The post Firefox: The Evolution Of A Brand appeared first on Mozilla Open Design.

Adrian Gaudebertreact-content-marker Released – Marking Content with React

Last year, in a React side-project, I had to replace some content in a string with HTML markup. That is not a trivial thing to do with React, as you can't just put HTML as a string in your content, unless you want to use dangerouslySetInnerHTML — which I don't. So, I hacked together a little code to smartly split my string into an array of sub-strings and DOM elements.

More recently, while working on Translate.Next — the rewrite of Pontoon's translate page to React — I stumbled upon the same problem. After looking around the Web for a tool that would solve it, and coming up empty-handed, I decided to write my own and make it a library.

Introducing react-content-marker v1.0

react-content-marker is a library for React to mark content in a string based on rules. These rules can be simple strings or regular expressions. Let's look at an example.

Say you have a blob of text, and you want to make the numbers in that text more visible, for example by making them bold.

const content = 'The fellowship had 4 Hobbits but only 1 Dwarf.';

Matching numbers can be done with a simple regex: /(\d+)/. If we turn that into a parser:

const parser = {
    rule: /(\d+)/,
    tag: x => <strong>{ x }</strong>,
};

We can now use that parser to create a content marker, and use it to enhance our content:

import React from 'react';
import { render } from 'react-dom';
import createMarker from 'react-content-marker';

const Marker = createMarker([parser]);
render(<Marker>{ content }</Marker>, document.getElementById('root')); // mount point assumed

This will show the sentence with both numbers rendered in bold:

The fellowship had 4 Hobbits but only 1 Dwarf.

Hurray!

Advanced usage

Passing parsers

The first thing to note is that you can pass any number of parsers to the createMarker function, and they will all be called in turn. The order of the parsers is very important though, because content that has already been marked will not be parsed again. Let's look at another example.

Say you have a rule that matches content between brackets: /({.*})/, and a rule that matches content between brackets containing only capital letters: /({[A-Z]+})/. Now let's say you are marking this content: I have {CATCOUNT} cats. Whichever rule you passed first will match the content between brackets, and the second rule will not apply. You thus need to make sure that your rules are ordered so that the most important ones come first. Generally, that means you want to have the more specific rules first.

The reason why this happens is that, behind the scenes, the matched content is turned into a DOM element, and parsers ignore non-string content. With the previous example, the initial string, I have {CATCOUNT} cats, would be turned into ['I have ', <mark>{CATCOUNT}</mark>, ' cats'] after the first parser is called. The second parser then only looks at 'I have ' and ' cats', which do not match.
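
To make that ordering rule concrete, here is a minimal sketch (the parser names are ours):

const specificParser = {
    rule: /({[A-Z]+})/,  // brackets containing only capital letters
    tag: x => <strong>{ x }</strong>,
};

const genericParser = {
    rule: /({.*})/,      // anything between brackets
    tag: x => <mark>{ x }</mark>,
};

// Correct order: specific first. Swapping the two would let the generic rule
// consume {CATCOUNT} before the specific one ever sees it.
const Marker = createMarker([specificParser, genericParser]);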

Using regex

The second important thing to know relates to regex. You might have noticed that I put parentheses in my examples above: they are required for the algorithm to capture content. But that also gives you more flexibility: you can use a regex that matches some content that you do not want to mark. Let's say you want to match only the name of someone who's being greeted, with this rule: /hello (\w+)/i. Applying it to Hello Adrian will only mark the Adrian part of that content.

Sometimes, however, you need to use more complex regex that include several groups of parentheses. When that's the case, by default react-content-marker will mark the content of the last non-null capturing group. In such cases, you can add a matchIndex number to your parser: that index will be used to select the capture group to mark.

Here's a simple example:

const parser = {
    rule: /(hello (world|folks))/i,
    tag: x => <b>{ x }</b>,
};

Applying this rule to Hello World will make only the “World” part bold. If we want, instead, to make the whole match bold, we'll have to use matchIndex:

const parser = {
    rule: /(hello (world|folks))/i,
    matchIndex: 0,
    tag: x => <b>{ x }</b>,
};

Now our entire string will correctly be made bold: Hello World.

Advanced example

If you're interested in looking at an advanced usage example of this library, I recommend you check out how we use it in Pontoon, Mozilla's localization platform. We have a long list of parsers there, and they handle a lot of edge cases.

Installation and stuff

react-content-marker is available on npm, so you can easily install it with your favorite javascript package manager:

npm install -D react-content-marker
# or
yarn add react-content-marker

The code is released under the BSD 3-Clause License, and is available on GitHub. If you hit any problems with it, or have a use case that is not covered, please file an issue. And of course, you are always welcome to contribute a patch!

I hope this is useful to someone out there. It has been for me at least, on Pontoon and on several React-based side-projects. I like how flexible it is, and I believe it does more than any other similar tool I could find around the Web.

QMOFirefox 68 Beta 10 Testday, June 14th

Hello Mozillians,

We are happy to let you know that on Friday, June 14th, we are organizing the Firefox 68 Beta 10 Testday. We’ll be focusing our testing on: Sync & Firefox Account and Browser notifications & prompts.

Check out the detailed instructions via this etherpad.

No previous testing experience is required, so feel free to join us on the #qa IRC channel, where our moderators will offer you guidance and answer your questions.

Join us and help us make Firefox better!

See you on Friday! 🙂

Mozilla Open Policy & Advocacy BlogIt’s time for the US Senate to Save the Net

On the one year anniversary of the Federal Communications Commission’s repeal of net neutrality, Mozilla is joining millions of people across the internet to once again stand up to protect the open internet.

When the FCC gutted net neutrality protections last year, we filed our lawsuit because we believed that repeal was unlawful. We also believed taking on the FCC was the right thing to do for the future of the internet and everyone who uses it.

Until the Senate listens to the American people and protects the open internet, Mozilla v. FCC continues to be net neutrality’s best hope.

But it’s time our Senators do what they were elected to do – represent their constituents, and pass net neutrality legislation that has overwhelming support and protects Americans. With a victory in the courts, or bipartisan legislation, we can ensure that people – and not big cable and telephone companies – get to choose what they see and do online.

 

The post It’s time for the US Senate to Save the Net appeared first on Open Policy & Advocacy.

This Week In RustThis Week in Rust 290

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's crate is uom, a units-of-measurement crate that performs automatic type-safe, zero-cost dimensional analysis. Thanks to ehsanmok for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

242 pull requests were merged in the last week

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs
Tracking Issues & PRs

New RFCs

Upcoming Events

Asia Pacific
Europe
North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

No quote was selected for QotW.

Please submit quotes and vote for next week!

This Week in Rust is edited by: nasa42, llogiq, and Flavsditz.

Discuss on r/rust.

Firefox NightlyThese Weeks in Firefox: Issue 60

Highlights

  • Integrated add-on abuse reporting landed in Firefox 68
    • You need to enable the HTML-based about:addons by setting extensions.htmlaboutaddons.enabled to true in about:config
    • The abuse reporting UI can be enabled by setting extensions.abuseReport.enabled to true in about:config
  • Work on the new login manager UI is progressing well
    • To see it go to about:config and set signon.management.page.enabled to true
    • Then load about:logins (you will need to set the pref before loading the page)
    • Initial password generation code has also landed
      • WIP UI that only works on autocomplete=”new-password” fields
        • This is very early in the project so there is no additional attempt to save the generated password.
        • www.facebook.com is a good test page.
      • Enable both prefs to test the feature:
        • signon.generation.enabled is the user pref to enable/disable the feature from about:preferences (UI not implemented yet).
        • signon.generation.available controls whether the feature is available for users (e.g. if the about:preferences UI should show in the future).
  • The DevTools Inspector color picker widget just got a bit of a re-design. Thanks Maliha Islam [:maliha] for pushing this over the finish line!
    • [Screenshot] The new color-picker in the Inspector Panel!

Friends of the Firefox team

Introductions/Shout-Outs

  • New interns!
    • Mandy Cheang (@mcheang, mandy__)
      • Working on improving start-up performance
    • Abdoulaye Ly (@Abdoulaye O. Ly, abdoulaye)
      • Working on Fission

Resolved bugs (excluding employees)

Fixed more than one bug
  • Florens Verschelde :fvsch
  • Ian Moody [:Kwan]
  • jaril
  • Kestrel
  • Monika Maheshwari [:MonikaMaheshwari]
  • Tim Nguyen :ntim
New contributors (🌟 = first patch)

 

 

Project Updates

Activity Stream

  • The new Pocket Newtab is on track for 68 with performance parity
    • There was a slight regression with our usage of -webkit-line-clamp (thanks heycam for the platform implementation!), which we’re fixing with requestAnimationFrame (thanks to the performance best practices doc)
  • We’ve made some progress on migrating our build process into Firefox / making it easier to develop on Activity Stream features
  • Introducing our new intern: Emily (:emcminn)!

Add-ons / Web Extensions

Applications

Lockwise
  • Work on the Lockwise addon is complete
    • Final release waiting on localizers to translate strings
  • Future Lockwise work will happen in tree as part of the Firefox Password Manager
  • Removing this Lockwise item after this meeting

Developer Tools

Layout Tools
  • Inactive CSS landed! It’s currently only ON by default in nightly (since 68) but will ship to everyone with Firefox 69. Bug 1306054. We will be adding a larger collection of warnings very soon too, to warn users about more tricky CSS cases.
    • [Screenshot] A tooltip in the Inspector shows that a CSS property (e.g. justify-content on a non-flex element) has no effect on the current element, and explains why.

  • Expandable CSS warnings also landed (shipping with 68). This allows jumping directly from a CSS warning displayed in the console to a node in the inspector when the warning occurred inside a CSS rule. Bug 1093953.
    • [Screenshot] CSS warnings in the Web Console can now expand and collapse, linking directly to the DOM nodes they apply to.

  • CSS Grid level 2 (subgrid) is close to shipping in Firefox. We’re getting the tooling for it ready in Firefox 69 so it’s easy to see the relationship between a grid and a subgrid.
  • We’re continuing to prototype WebCompat awareness tools. Our latest prototype is an addon that displays CSS compatibility information about a page from the Firefox toolbar. It now allows jumping from a warning into the Style Editor, and opening other browsers where issues occur. GitHub repo for the addon
    • [GIF] The extension’s doorhanger menu lists the CSS compatibility problems detected on the page. Handy!

  • We’re also focusing on fixing the last few remaining issues that prevent supporting the <meta viewport> tag in RDM, so we can simulate mobile devices better.
Console
  • We now have borders between messages to make them easier to read. Bug 1519904. Thanks Florens Verschelde :fvsch for your keen eye for details.
    • [Screenshot] The Web Console in Firefox 68, with borders between each console log message for better visual separation and readability.

  • It is possible to resend network requests that were logged in the console. Bug 1530138. Thank you Christoph Walcher.
    • [Screenshot] The new “Resend request” item in the Web Console context menu for network requests.

Debugger
  • Column breakpoints are stable now and we’re super happy with them.
  • Event breakpoints are making good progress. The UI is ready and we’ll be landing the feature very soon. Follow along in this bug.
    • [Screenshot] A preview of the event breakpoints panel; once this lands, developers will be able to set breakpoints on events as well as on lines of code.

  • Workers are now displayed in the source tree alongside all the other sources.
  • The new logpoint feature is now even better with dedicated icons in the web console.
    • [Screenshot] The new “Add log” item in the Debugger context menu lets you log an expression when a given line of code is executed.

Remote Debugging
  • The new about:debugging page will ride the trains with Firefox 69. A final QA testing phase will happen in beta 69 in a few weeks. The main implementation phase is over and the final few fixes have happened:
    • Stay on the same page when reloading about:debugging (reconnects to remote runtimes automatically) (bug)
    • Remember last temporary addon install directory (bug)
    • Disable temporary addon installation if xpinstall.enabled is false (bug)
    • Updated error message colors and borders
    • New “Remote Debugging” menu item in the Web Developer menu
    • Update for Fenix/Firefox Preview (name, icon and version) (bug)
      • [Screenshot] The new Firefox Preview logo and header in about:debugging when an instance is connected. Coming soon to a phone near you!

Fission

  • Abdoulaye has an initial version of the <select> dropdown working with Fission
  • Neil has a version of drag and drop working with Fission! \o/
  • mconley has a patch that makes PermitUnload work with Fission, but is blocked on some DOM work
  • mconley is starting efforts to make the context menu work with Fission

Lint

Password Manager

Performance

Performance tools

  • Properly updating the URL state after publishing now.
  • Improved algorithm to find idle threads at load time.
  • Firefox Profiler now supports SimplePerf output format.
  • Added more relevant information to the window title to improve the searchability of tabs. Thanks to our GSoC student Raj!
    • [Screenshot] The profile window title now includes the Firefox version, OS version, and collection date and time, making it easier to tell profiles apart across tabs.

  • Transforms are now usable inside the stack chart via the context menu.
    • [Screenshot] The transformation utilities are now available in the stack chart panel of the Firefox Profiler.

Picture-in-Picture

  • Dave Justice is about to get a patch landed that decorates the tab that a Picture-in-Picture video is coming from
  • Keyboard access and RTL support for Picture-in-Picture is still underway
  • The tentative plan is to let this ship to Firefox 69 Beta / Dev Edition and get feedback from our users and web developers

Policy Engine

  • ExtensionSettings policy finally landed (bug 1522823)
  • Added a number of preferences to the new Preferences policy (bug 1545539)
  • Download related policies (bug 1546973)
  • Activity Stream policies (bug 1548080)
  • Legacy Browser Support
    • EXE built
    • Able to test extension
    • Working on IE BHO
  • Investigating multiple intermittents on policy that have been around a while

Privacy/Security

Search and Navigation

Search
Quantum Bar
  • Bugs and cleanup work
  • Active in Beta, no critical regressions reported so far
  • Nightly XUL/HTML experiment didn’t show any fallout
  • Adding core support for the WebExtension APIs that will be driving our future experiments