Daniel Stenberg: curl localhost as a local host

When you use the name localhost in a URL, what does it mean? Where does the network traffic go when you ask curl to download http://localhost ?

Is “localhost” just a name like any other, or do you think it implies speaking to your local host on a loopback address?

Previously

curl http://localhost

The name was “resolved” using the standard resolver mechanism into one or more IP addresses, and curl then connected to the first one that worked and got the data from there.

The (default) resolving phase there involves asking the getaddrinfo() function about the name. On many systems, it will return the IP address(es) specified in /etc/hosts for the name. On some systems, things are set up a bit more unusually, which causes a DNS query to be sent out over the network to answer the question.

In other words: localhost was not really special and using this name in a URL worked just like any other name in curl. In most cases in most systems it would resolve to 127.0.0.1 and ::1 just fine, but in some cases it would mean something completely different. Often as a complete surprise to the user…

Starting now

curl http://localhost

Starting in commit 1a0ebf6632f8, to be released in curl 7.78.0, curl now treats the host name “localhost” specially and will use an internal “hard-coded” set of addresses for it – the ones we typically use for the loopback device: 127.0.0.1 and ::1. It cannot be modified by /etc/hosts and it cannot be accidentally or deliberately tricked by DNS resolves. localhost will now always resolve to a local address!

Do those kinds of mistakes or modifications really happen? Yes, they do. We’ve seen it, and you can find other projects reporting it as well.

Who knows, it might even be a few microseconds faster than doing the “full” resolve call.

(You can still build curl without IPv6 support at will, and on systems without such support the ::1 address will of course not be provided for localhost.)

Specs say we can

RFC 6761 is titled Special-Use Domain Names, and in its section 6.3 it explicitly allows or even encourages this:

Users are free to use localhost names as they would any other domain names.  Users may assume that IPv4 and IPv6 address queries for localhost names will always resolve to the respective IP loopback address.

Followed by

Name resolution APIs and libraries SHOULD recognize localhost names as special and SHOULD always return the IP loopback address for address queries and negative responses for all other query types. Name resolution APIs SHOULD NOT send queries for localhost names to their configured caching DNS server(s).

Mike West at Google also once filed an I-D with even stronger wording, suggesting we should always let localhost be local. It was never turned into an RFC, but it shows the mindset.

(Some) Browsers do it

Chrome has been special-casing localhost this way since 2017, as can be seen in this commit, and I think we can safely assume that the other browsers built on its foundation do this as well.

Firefox landed their corresponding change during the fall of 2020, as recorded in this bugzilla entry.

Safari (on macOS at least) does however not do this. It rather follows what /etc/hosts says (and presumably DNS if the name is not present in there). I’ve not found any official position on the matter, but I found this source code comment indicating that localhost resolving might change at some point:

// FIXME: Ensure that localhost resolves to the loopback address.

Windows (kind of) does it

For some time now, Windows has resolved “localhost” internally, and the name is not present in its /etc/hosts equivalent. I believe it is more of a hybrid solution though, as I believe you can still put localhost into that file and then have that custom address used for the name.

Secure over http://localhost

When we know for sure that http://localhost is indeed a secure context (that’s a browser term I’m borrowing, sorry), we can follow the example of the browsers: curl could, for example, start honoring cookies with the “secure” property for this host even when speaking plain HTTP. Previously, secure in that regard has always simply meant HTTPS.

This change in cookie handling has not happened in curl yet, but with localhost being truly local, it seems like an improvement we can proceed with.

Can you still trick curl?

When I mentioned this change proposal on Twitter, two of the most common questions in response were:

  1. Can’t you still trick curl by routing somewhere else?
  2. Can you still use --resolve to “move” localhost?

The answers to both questions are yes.

You can of course commit the most hideous hacks to your system and reroute traffic for somewhere else if you really want to. But I’ve never seen or heard of anyone doing it, and it certainly will not be done by mistake. And you can also just rebuild your curl/libcurl with a different address than the default “hardcoded”, and it’ll behave even weirder. It’s all just software; we can make it do anything.

The --resolve option is this magic thing to redirect curl operations from the given host to another custom address. It also works for localhost, since curl will check the cache before the internal resolve and --resolve populates the DNS cache with the given entries. (Provided to applications via the CURLOPT_RESOLVE option.)

What will break?

With a large enough number of users, every single little modification or even improvement is likely to trigger something unexpected and undesired on at least one system somewhere. I don’t think this change is an exception. I fully expect it to cause someone to shake their fist at the sky.

However, I believe there are fairly good ways to restore even the most complicated use cases after this change, even if it might take some hands-on work to update the script or application. I still believe this change is a general improvement for the vast majority of use cases and users. That’s also why I haven’t provided any knob or option to toggle this behavior off.

Credits

The top photo was taken by me (the symbolism being that there’s a path to take somewhere, but we don’t really know where it leads or which one is the right one to take…). This curl change was written by me. Mike West provided me the Chrome localhost change URL. Valentin Gosu gave me the Firefox bugzilla link.

Karl Dubost: Get Ready For Three Digits User Agent Strings

In 2022, Firefox and Chrome will reach a version number with three digits: 100. It's time to get ready and extensively test your code, so your code doesn't return null or, worse, 10 instead of 100.

Durian on sale

Some contexts

The browser user agent string is used in many circumstances: on the server side with the User-Agent HTTP header, and on the client side with navigator.userAgent. Browsers lie about it. Web apps’ and websites’ detection code does not cover all cases, so browsers have to modify the user agent string on a site-by-site basis.

Browsers Release Calendar

According to the Firefox release calendar, during the first quarter of 2022 (probably March), Firefox Nightly will reach version 100. That puts the stable Firefox release of version 100 around May 2022 (if the schedule doesn't change until then).

And the Chrome release calendar currently sets the date at March 29, 2022.

What is the Mozilla Webcompat Team doing?

Dennis Schubert started to test JavaScript libraries, but this only tests the libraries which are up to date. And as we know, the Web is a legacy machine full of history.

The webcompat team will probably automatically test the top 1000 websites. But this is very rudimentary. It will not cover everything. Sites always break in strange ways.

What Can You Do To Help?

Browse the Web with a 100 UA string

  1. Change the user agent string of your favorite browser. For example, if the string is Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:89.0) Gecko/20100101 Firefox/89.0, change it to be Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:100.0) Gecko/20100101 Firefox/100.0
  2. If you notice something that is breaking because of the UA string, file a report on webcompat. Do not forget to check that it is working with the normal UA string.

Automatic tests for your code

If your web app has a JavaScript Test suite, add a profile with a browser having 100 for its version number and check if it breaks. Test both Firefox and Chrome (mobile and desktop) because the libraries have different code paths depending on the user agent. Watch out for code like:

const ua_string = "Firefox/100.0";
ua_string.match(/Firefox\/(\d\d)/); //  ["Firefox/10", "10"]
ua_string.match(/Firefox\/(\d{2})/); // ["Firefox/10", "10"]
ua_string.match(/Firefox\/(\d\d)\./); //  null
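A pattern that keeps working once the version reaches three digits matches one or more digits rather than exactly two. A minimal sketch:

```javascript
// Match one or more digits so a three-digit version is captured whole.
const ua = "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:100.0) Gecko/20100101 Firefox/100.0";
const match = ua.match(/Firefox\/(\d+)\./);
console.log(match[1]); // "100"
```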

Compare version numbers as integers, not strings

Compare integers, not strings, when you have decided on a minimum version for supporting a browser, because:

"80" < "99" // true
"80" < "100" // false
parseInt("80", 10) < parseInt("99", 10) // true
parseInt("80", 10) < parseInt("100", 10) // true
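Putting the two fixes together, a small helper (a sketch; the function name is mine) that extracts the major version and compares it as a number:

```javascript
// Extract the major version as an integer before comparing, so that
// Firefox 100 is not treated as "older" than Firefox 80.
function firefoxMajor(ua) {
  const m = ua.match(/Firefox\/(\d+)\./);
  return m ? parseInt(m[1], 10) : null;
}

const example = "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:100.0) Gecko/20100101 Firefox/100.0";
console.log(firefoxMajor(example)); // 100
console.log(firefoxMajor(example) >= 78); // true
```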

Comments

If you have more questions, things I may have missed, or a different take on them, feel free to comment…. Be mindful.

Otsukare!

Cameron Kaiser: TenFourFox FPR32 SPR1 available

TenFourFox Feature Parity Release 32 Security Parity Release 1 "32.1" is available for testing (downloads, hashes). There are no changes to the release notes except that Mozilla has lengthened 78ESR by a couple more weeks, so the end of official builds is now extended to October 5, 2021. Assuming no major problems, FPR32.1 will go live Monday evening Pacific time as usual.

The Mozilla Blog: Building a more privacy preserving ads-based ecosystem

Advertising is central to the internet economy. It funds many free products and services. But it is also very intrusive. It is powered by ubiquitous surveillance and it is used in ways that harm individuals and society. The advertising ecosystem is fundamentally broken in its current form.

Advertising does not need to harm consumer privacy. As a browser maker and as an ethical company driven by a clear mission, we want to ensure that the interests of users are represented and that privacy is a priority. We also benefit from the advertising ecosystem which gives us a unique perspective on these issues.

Every part of the ecosystem has a role to play in strengthening and improving it. That is why we see potential in the debate happening today about the merits of privacy preserving advertising.

As this debate moves forward, there are two principles that should anchor work on this topic to ensure we deliver a better web to consumers.

Consumer Privacy First

Improving privacy for everyone must remain the north star for review of proposals, such as Google’s FLoC and Microsoft’s PARAKEET, and parallel proposals from the ad tech industry. At Mozilla, we will be looking at proposals through this lens, which is always a key factor for any decision about what we implement in Firefox. Parties that aren’t interested in protecting user privacy or in advancing a practical vision for a more private web will slow down the innovation that is possible to achieve and necessary for consumers.

Development in the Open

It is important that proposals are transparently debated and collaboratively developed by all stakeholders through formal processes and oversight at open standards development organizations (“SDOs”). Critical elements of online infrastructure should be developed at SDOs to ensure an interoperable and decentralized open internet. Stakeholder commitment to final specifications and timelines is just as important, because without this, the anticipated privacy benefits to consumers cannot materialize. 

At their core, the proposals currently being debated, along with their testing plans, have important potential to improve how advertising can be delivered, but they may also raise privacy and centralization issues that need to be addressed. This is why it’s so critical that this process plays out in the open at SDOs.

We hope that all stakeholders can commit to these two principles. We have a real opportunity now to improve the privacy properties of online advertising—an industry that hasn’t seen privacy improvement in years. We should not squander this opportunity. We should instead draw on the internet’s founding principles of transparency, public participation and innovation to make progress.

For more on this:

How online advertising works today and Privacy-Preserving Advertising

The post Building a more privacy preserving ads-based ecosystem appeared first on The Mozilla Blog.

The Mozilla Blog: The future of ads and privacy

The modern web is funded by advertisements. Advertisements pay for all those “free” services you love, as well as many of the products you use on a daily basis — including Firefox. There’s nothing inherently wrong with advertising: Mozilla’s Principle #9 states that “Commercial involvement in the development of the internet brings many benefits.” However, that principle goes on to say that “a balance between commercial profit and public benefit is critical” and that’s where things have gone wrong: advertising on the web in many situations is powered by ubiquitous tracking of people’s activity on the web in a way that is deeply harmful to users and to the web as a whole.

Some Background

The ad tech ecosystem is incredibly complicated, but at its heart, the way that web advertising works is fairly simple. As you browse the web, trackers (mostly, but not exclusively, advertisers) follow you around and build up a profile of your browsing history. Then, when you go to a site which wants to show you an ad, that browsing history is used to decide which of the potential ads you actually get shown.

The visible part of web tracking is creepy enough — why are those pants I looked at last week following me around the Internet? — but the invisible part is even worse: hundreds of companies you’ve never heard of follow you around as you browse and then use your data for their own purposes or sell it to other companies you’ve also never heard of. 

The primary technical mechanism used by trackers is what’s called “third party cookies”. A good description of third party cookies can be found here. In short, a cookie is a piece of data that a website stores on your browser and can retrieve later. A third party cookie is a cookie which is set by someone other than the page you’re visiting (typically a tracker). The tracker works with the web site to embed some code from the tracker on their page (often this code is also responsible for showing ads), and that code sets a cookie for the tracker. Every time you go to a page the tracker is embedded on, it sees the same cookie and can use that to link up all the sites you go to.

Cookies themselves are an important part of the web — they’re what let you log into sites, maintain your shopping carts, etc. However, third party cookies are used in a way that the designers of the web didn’t really intend and unfortunately, they’re now ubiquitous. While they have some legitimate uses, like federated login, they are mostly used for tracking user behavior.

Obviously, this is bad and it shouldn’t be a surprise to anybody who has followed our work in Firefox that we believe this needs to change. We’ve been working for years to drive the industry in a better direction. In 2015 we launched Tracking Protection, our first major step towards blocking tracking in the browser. In 2019 we turned on a newer version of our anti-tracking technology by default for all of our users. And we’re not the only ones doing this.

We believe all browsers should protect their users from tracking, particularly cookie-based tracking, and should be moving expeditiously to do so.

Privacy Preserving Advertising

Although third-party cookies are bad news, now that they are so baked into the web, it won’t be easy to get rid of them. Because they’re a dual-use technology with some legitimate applications, just turning them off (or doing something more sophisticated like Firefox Total Cookie Protection) can cause some web sites to break for users. Moreover, we have to be constantly on guard against new tracking techniques.

One idea that has gotten a lot of attention recently is what’s called “Privacy Preserving Advertising” (PPA). The basic idea has a long history with systems such as Adnostic, PrivAd, and AdScale but has lately been reborn with proposals from Google, Microsoft, Apple, and Criteo, among others. The details are of course fairly complicated, but the general idea is straightforward: identify the legitimate (i.e., non-harmful) applications for tracking techniques and build alternative technical mechanisms for those applications without threatening user privacy. Once we have done that, it becomes much more practical to strictly limit the use of third party cookies.

This is a generally good paradigm: technology has advanced a lot since cookies were invented in the 1990s and it’s now possible to do many things privately that used to require just collecting user data. But, of course, it’s also possible to use technology to do things that aren’t so good (which is how we got into this hole in the first place). When looking at a set of technologies like PPA, we need to ask:

  1. Are the use cases for the technology actually good for users and for the web?
  2. Do these technologies improve user privacy and security? Are they collecting the minimal amount of data that is necessary to accomplish the task?
  3. Are these technologies being developed in an open standards process with input from all stakeholders?

Because this isn’t just one technology but rather a set of them, we should expect some pieces to be better than others. In particular, ad measurement is a use case that is important to the ecosystem, and we think that getting this one component right can drive value for consumers and engage advertising stakeholders. There’s overlap here with technologies like Prio which we already use in Firefox. On the other hand, we’re less certain about a number of the proposed technologies for user targeting, which have privacy properties that seem hard to analyze. This is a whole new area of technology, so we should expect it to be hard, but that’s also a reason to make sure we get it right.

What’s next?

Obviously, this is just the barest overview. In upcoming posts we’ll provide a more detailed survey of the space, covering the existing situation in more detail, some of the proposals on offer, and where we think the big opportunities are to improve things in both the technical and policy domains.

For more on this:

Building a more privacy preserving ads-based ecosystem

Coming soon: thoughts on FLoC

The post The future of ads and privacy appeared first on The Mozilla Blog.

Sam Foster: Ideas on a lower-carbon internet through scheduled downloads and Quality of Service requests

Other titles:

  • The impact of internet use and what we might do about it?
  • Opportunities for powering more internet use with renewables
  • I want this thing, but not until later
  • A story of demand-side prioritization, scheduling and negotiation to take advantage of a fluctuating energy supply.

I recently got interested in how renewable power generation plays into the carbon footprint of internet usage. We need power to run and charge the devices we use to consume internet content, to run the networks that deliver that content to us, and to power the servers and the data centers that house them.

Powering the internet eats up energy. Serving up the files, doing the computation, and encoding and packaging it all up to send down the wire to each of the billions of devices making those requests consumes energy on an enormous scale. The process of hosting and delivering content is so power hungry that the industry is driven to a large extent by the cost and availability of electricity. Data centers are even described in terms of the power they consume - a reasonable proxy for the capacity they can supply.

One of the problems we hear about constantly is that the intermittent and relatively unpredictable nature of wind and solar energy means it can only ever make up a portion of a region’s electricity generation capacity. There’s an expectation of always-on power availability; regardless of the weather or time of day, a factory must run, a building must be lit, and if a device requests some internet resource the request must be met immediately. So, we need reliable base-load generation to meet most energy demands. Today, that means coal, natural gas, nuclear and hydro generation plants - which can be depended on to supply energy day and night, all year round. Nuclear and hydro are low-carbon, but they can also be expensive and problematic to develop. Wind and solar are much less so, but as long as their output is intermittent they can only form part of the solution for de-carbonizing electricity grids across the world - as long as demand, not supply, is king.

There are lots of approaches to tackling this. Better storage options (PDF) smooth out the intermittency of wind and solar - day to day if not seasonally. Carbon capture and sequestration lower the carbon footprint of fossil fuel power generation - but raise the cost. What if that on-demand, constant availability of those data centers’ capacity was itself a variable? Suppose the client device issuing a request had a way to indicate priority and expected delivery time; would that change the dynamic?

Wind power tends to peak early in the morning, solar in the afternoon. Internet traffic is at its highest during the day and evening, and some - most - is necessarily real-time. But if I’m watching a series on Netflix, the next episode could be downloaded at any time, as long as it’s available by the next evening when I sit down to watch it. And for computational tasks - like compiling some code, running an automated test suite, or encoding video - sometimes you need the result as soon as possible, other times it’s less urgent. Communicating priority and scheduling requirements (a.k.a. Quality of Service) from the client through to the infrastructure used to fulfill a request would allow smarter balancing of demand and resources. It would open up the door to better use of less constant (non-baseload) energy sources. The server could defer some tasks when power is least available or most expensive, and process them later when, for example, the sun comes up or the wind blows. Smoothing out spikes in demand would also reduce the need for so-called “peaker” plants - typically natural gas power plants that are spun up to meet excess energy demand.

“Kestler: While intermittent power is a challenge for data center operations, the development of sensors, software tools and network capabilities will be at the forefront of advancing the deployment of renewables across the globe. The modernization of the grid will be dependent on large power consumers being capable of operating in a less stable flow of electrons.”

What’s Ahead for Data Centers in 2021

Google has already experimented with some of this, and it’s a fascinating and encouraging read.

“Results from our pilot suggest that by shifting compute jobs we can increase the amount of lower-carbon energy we consume”

Our data centers now work harder when the sun shines and wind blows

There are clearly going to be hurdles for wide-scale adoption of this kind of strategy, and it’s never going to work for all cases. But with a problem at this scale, a solution that shaves off 1%, or even a fraction of 1%, can still translate into huge monetary and carbon savings. So, what would it take? Are there practical steps that we non-data-center-operators can take to facilitate this kind of negotiation between the client and the massive and indifferent upstream infrastructure that supports it?

The low-hanging fruit in this scenario is video streaming. It represents an outsized percentage of all internet traffic - and data center load. Netflix alone generates 15% of all global internet traffic. What if even 1% of that could be shifted to be powered entirely by renewable energy, by virtue of deferred processing at the supply side, or scheduled downloads at the client side? Often it’s the case that when I click to watch a video, I need it right there and then - perhaps it is a live event, or I didn’t know I needed it until that minute. Sometimes not, though. If it was possible to schedule the download, ensuring it was there on my device when I did need it, the benefits would ripple through the whole system - content delivery providers would save money, and maybe the grid itself would be able to absorb more intermittent renewable generation.

There are other opportunities, and I don’t want to get too hung up on specifics. But the notion of attaching Quality of Service in some way to some requests, to facilitate smarter utilization of seasonal, regional and weather-dependent energy generation fluctuations, seems promising to me. Fundamentally, power demand from worldwide internet traffic is extremely dynamic. We can better meet that demand with equally dynamic low- and zero-carbon sources if we can introduce patterns and signals at all levels of the system to allow it to plan and adapt.

When I get to the end of a piece like this I’m always left wondering “what is the point?”. Is this just a rant into the void, hoping someone listens? It’s certainly not an actionable plan for change. Writing it down helps me process some of these ideas, and I hope it starts conversations and prompts you to spot these kinds of indirect opportunities to tackle climate change. And if you are in a position to nudge any of this towards really existing in the world, that would be great. I work at Mozilla; we make a web browser and have our own substantial data-center and compute-time bill. I’ll be looking into what change I can help create there.

Some References

I collected a large list of papers and articles as I looked into this. Here’s a smaller list:

Daniel Stenberg: Taking hyper-curl further

Thanks to funding by ISRG (via Google), we merged the hyper powered HTTP back-end into curl earlier this year as an alternative HTTP/1 and HTTP/2 implementation. Previously, there was only one way to do HTTP/1 and 2 in curl.

Backends

Core libcurl functionality can be powered by optional and alternative backends in a way that doesn’t change the API or directly affect the application. This is done by featuring internal APIs that can be implemented by independent components. See the illustration below.

This is a slide from Daniel’s libcurl under the hood presentation.

curl 7.75.0 became the first curl release that could be built with hyper. The support for it was labeled “experimental”: while most common and basic use cases were supported, we still couldn’t run the full test suite when built with it, and some edge cases even crashed.

We’ve subsequently fixed a few of the worst flaws, so the hyper-powered curl has gradually improved since then.

Going further

Our best friends at ISRG have now once again put up funding, and I’ll spend more work hours on making sure that more (preferably all) tests can run with hyper.

I’ve already started. Right now I’m sitting and staring at test case 154, which does an HTTP PUT using Digest authentication and an Expect: 100-continue header; this test case currently doesn’t work correctly when built to use hyper. I’ll report back in a few weeks and let you know how it goes – and then I don’t mean with just test 154!

Consider yourself invited to join the #curl IRC channel and chat if you want live reports or want to help out!

Fund

You too can fund me to do curl work. Get in touch!

Raphael Pierzina: Enable Fission tt(c) on more platforms

Last week my coworker Andrew Halberstadt talked me through the process of configuring Firefox CI to run a given test suite with Fission enabled on additional platforms. I am working on a patch to do this for our telemetry integration tests which are set up with mozharness and use treeherder symbol tt(c). Since the process should be close to identical for similar test suites, I decided to summarize what I’ve learned in this post, so next time someone on my team wants to do this, we don’t need to bug Andrew again.

Daniel Stenberg: Giving away an insane amount of curl stickers

Part 1. The beginning. (There will be at least one more part later on, following up on the progress.)

On May 18, 2021 I posted a tweet that I was giving away curl stickers for free to anyone who’d submit their address to me. It looked like this:

Every once in a while when I post a photo that involves curl stickers, a few people ask me where they can get hold of some. I figured it was about time I properly offered “the world” a few. I expected maybe 50 or 100 people to take me up on this offer.

The response was totally overwhelming and immediate. Within the first hour 270 persons had already requested stickers. After 24 hours when I closed the form again, 1003 addresses had been submitted. To countries all around the globe. Quite the avalanche.

Assessing the damage

This level of interest put up some challenges I hadn’t planned for. Do I have enough stickers? Now suddenly doing 3 or 5 stickers per parcel will have a major impact. Getting envelopes and putting addresses onto them for a thousand deliveries is quite a job! Not to mention the cost. A “standard mail” to outside Sweden using the regular postal service is 24 SEK. That’s ~2.9 USD. Per parcel. Add the extra expenses and we’re at an adventure north of 3,000 USD.

For this kind of volume, I can get a better rate by registering as a “company customer”. It adds some extra work for me though, and I haven’t worked out the details around this yet.

Let me be clear: from the beginning, I planned to ask for reimbursement from the curl fund for my expenses for this stunt. I would mostly add my own work for free. Maybe “hire” my daughter for an extra set of hands.

Donations

During the time the form was up, we also received 51 donations to Open Collective (as the form mentioned that, and I also mentioned it on Twitter several times). The donated total was 943 USD. The average donation was 18 USD, the largest ones (2) were at 100 USD and the smallest was 2 USD.

Of course some donations might not be related to this and some donations may very well arrive after this form was closed again.

Cleaning up

If I had thought this through better at the beginning, I would not have asked for the address using a free text field like this. People clearly don’t have the same idea of how to do this as I do.

I had to manually go through the addresses to insert newlines, add country names and remove obviously broken addresses. For example, a common pattern was an address given as only a 6-8 digit number. I think over 20 addresses were specified like that!

Clearly there’s a lesson to be had there.

After removing obviously bad and broken addresses there were 978 addresses left.

Countries

I got postal addresses to 65 different countries. A surprisingly diverse collection I think. The top 10 countries were:

USA: 174
Sweden: 103
Germany: 93
India: 92
UK: 64
France: 56
Spain: 31
Brazil: 31
The Netherlands: 24
Switzerland: 20

Countries that were only entered once: Dubai, Iran, Japan, Latvia, Morocco, Nicaragua, Philippines, Romania, Serbia, Thailand, Tunisia, UAE, Ukraine, Uruguay, Zimbabwe

Figuring out the process

While I explicitly said I wouldn’t guarantee that everyone gets stickers, I want to do my best in delivering a few to every single one who asked for them.

Volunteers

I have the best community. Without me saying a word or asking for it, several people raised their hands and volunteered to offload the sending to their countries. I could send one big batch to them and they redistribute within their countries. They would handle US, Czechia, Denmark and Switzerland for me.

But why stop at those four? In my next step I put up a public plea for more volunteers on Twitter and man, did I get myself a busy evening. After a few hours I had friends from over 20 countries signed up, offering to redistribute stickers within their respective countries. This way we share the expenses and the workload, and mailing out many smaller parcels within countries is also a lot cheaper than me sending them all individually from Sweden.

After a lot of communications I had an army of helpers lined up.

28 distributors will help me do 724 sticker deliveries to 24 countries. Leaving me to do just the remaining 282 packages to the other 41 countries.

Stickers inventory

I’ve offered “a few” stickers and I decided that means 4.

978 * 4 = 3912

Plus I want to add 10 extra stickers to each distributor, and there are 28 distributors.

3912 + 28 * 10 = 4192

Do I have 4200 curl stickers? I emptied my sticker drawer, put them all on the table and took this photo. All of the curl stickers you see in the photo have been donated to us/me by sponsors. Most of them from Sticker Mule, some of them from XXXX.

I think I might be a little “thin”. Luckily, I have friends that can help me stock up…

(There are some Haxx and wolfSSL stickers on the photo as well, because I figured I should spice up some packages with some of those as well.)

Schedule

The stickers still haven’t shipped from my place but the plan is to get the bulk of them shipped from me within days. Stay tuned. There will of course be more delays on the route to their destinations, but rest assured that we intend to deliver to all who asked for them!

Will I give away more curl stickers?

Not now, and I don’t have any plans on doing this stunt again very soon. It was already way more than I expected. More attention, more desire and definitely a lot more work!

But at the first opportunity where you meet me physically I will of course give away stickers.

Buy curl stickers?

I’ve started looking into offering stickers for purchase but I’m not ready to make anything public or official yet. Stay tuned and I promise you’ll learn and be told when the sticker shop opens.

If it happens, the stickers will not be very cheap but you should rather see each such sticker as a mini-sponsorship.

Follow up

Stay tuned. I will be back with updates.

Mike TaylorThe hidden meaning of 537.36 in the Chromium User-Agent string

If you’re like me, first of all, very sorry to hear that, but you are probably spending your Friday morning wondering what the meaning of 537.36 is in the Chromium User-Agent string. It appears in two places: AppleWebKit/537.36 and Safari/537.36.

As any serious researcher does, the first place I went to for answers was numeroscop.net, to check out the “Angel Number Spiritual Meaning”.

(I enjoy a good data-collection-scheme-disguised-as-fortune-telling site as much as anyone else, don’t judge me.)

engraving an angel with 2 horns, blowing the numbers 537 and 36

537 means:

“Positive changes in the material aspect will be an extra confirmation that you have made the right choice of a life partner”

And 36 means:

“[Y]es, you are doing everything right, but you are not doing everything that you could do”.

Angels probably use PHP, so let’s assume “.” is the string concatenation operator. Mashing those together, a meaning emerges: “537.36” represents the last shipping version of WebKit before the Blink fork.

Back in 2013 (right after the fork announcement), Ojan Vafai wrote,

“In the short-term we have no plans of changing the UA string. The only thing that will change is the Chrome version number.”

Darin Fisher (former engineering lead for the Chrome Web Platform Team) said the same in the recorded Q&A video (linked from the Developer FAQ).

Assuming Wikipedia is as trustworthy as that “why did I give the Angel Numerology site my email, birthdate, relationship status, and name, and why am I getting so many ads on other sites about healing crystals and clearance specials on hydroxychloroquine??” site, Chrome 27.0.1453 was the last version of Chrome shipping WebKit, which was at 537.36, and Chrome 28.0.1500 was the first version of stable channel release shipping the Blink engine.

So that’s why those numbers are in the User-Agent string. For obvious compatibility reasons, you can’t just remove strings like AppleWebKit/537.36 and Safari/537.36. And that’s why we’ll keep them there, likely frozen forever.
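Those frozen tokens are easy to see programmatically. As a quick illustration in plain Python (the UA string below is just an example; the Chrome version in it is arbitrary):

```python
import re

# An example Chromium User-Agent string (the Chrome version is arbitrary).
ua = ("Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 "
      "(KHTML, like Gecko) Chrome/91.0.4472.114 Safari/537.36")

# Collect every product/version token from the string.
tokens = dict(re.findall(r"([A-Za-z]+)/([\d.]+)", ua))

# The WebKit and Safari tokens are frozen at the last pre-Blink WebKit version,
# while the Chrome token keeps moving with each release.
print(tokens["AppleWebKit"])  # 537.36
print(tokens["Safari"])       # 537.36
```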

Daniel StenbergQUIC is RFC 9000

The official publication date of the relevant QUIC specifications is: May 27, 2021.

I’ve done many presentations about HTTP and related technologies over the years. HTTP/2 had only just shipped when the QUIC working group had been formed in the IETF and I started to mention and describe what was being done there.

I’ve explained HTTP/3

I started writing the document HTTP/3 explained in February 2018, before the protocol was even called HTTP/3 (and yeah, the document itself was also called something else at first). The HTTP protocol for QUIC was just called “HTTP over QUIC” in the beginning and it took until November 2018 before it got the name HTTP/3. I did my first presentation using HTTP/3 in the title and on slides in early December 2018. My first recorded HTTP/3 presentation was in January 2019 (in Stockholm, Sweden).

In that talk I mentioned that the protocol would be “live” by the summer of 2019, which was an optimistic estimate based on the then current milestones set out by the IETF working group.

I think my optimism regarding the release schedule has kept up, but as time progressed I have had to update that estimate many times…

HTTP/3 – not yet

The first four RFC documents to be ratified and published only concern QUIC, the transport protocol, not the HTTP/3 parts. The two HTTP/3 documents are also in the queue but slightly delayed, as they await some prerequisite (“generic” HTTP update) documents to ship first; then the HTTP/3 ones can ship and refer to those other documents.

QUIC

QUIC is a new transport protocol. It is done over UDP and can be described as being something of a TCP + TLS replacement, merged into a single protocol.

Okay, the title of this blog is misleading. QUIC is actually documented in four different RFCs:

RFC 8999 – Version-Independent Properties of QUIC

RFC 9000 – QUIC: A UDP-Based Multiplexed and Secure Transport

RFC 9001 – Using TLS to Secure QUIC

RFC 9002 – QUIC Loss Detection and Congestion Control

My role: I’m just a bystander

I initially wanted to keep up closely with the working group, follow what happened, and participate in the meetings, interims etc. It turned out to be too difficult for me to do that, so I had to lower my ambitions and I’ve mostly had a casual observing role. I just couldn’t muster the energy or spend the time necessary to do it properly.

I’ve participated in many of the meetings, I’ve been present in the QUIC implementers slack, and I’ve followed lots of design and architectural discussions on the mailing list and in GitHub issues. I’ve worked on implementing support for QUIC and h3 in curl and thanks to that helped iron out issues and glitches in various implementations, but the now published RFCs have virtually no traces of me or my feedback in them.

Mozilla Open Policy & Advocacy BlogAdvancing system-level change with ad transparency in the EU DSA

At Mozilla we believe that greater transparency in the online advertising ecosystem can empower individuals, safeguard advertisers’ interests, and address systemic harms. It’s something we care passionately about, and it’s an ethos that runs through our own marketing work. Indeed, our recent decision to resume advertising on Instagram is underpinned by a commitment to transparency. Yet we also recognise that this issue is a structural one, and that regulation and public policy has an important role to play in improving the health of the ecosystem. In this post, we give an update on our efforts to advance system-level change, focusing on the ongoing discussions on this topic in the EU.

In December 2020 the European Commission unveiled the Digital Services Act, a draft law that seeks to usher in a new regulatory standard for content responsibility by platforms. A focus on systemic transparency is at the core of the DSA, including in the context of online advertising. The DSA’s approach to ad transparency mandates disclosure well above the voluntary standard that we see today (and mirrors the ethos of our new Instagram advertising strategy).

Under the DSA’s approach, so-called ‘Very Large Online Platforms’ must:

  • Disclose the content of all advertisements that run on their services;
  • Disclose the key targeting parameters that are associated with each advertisement; and,
  • Make this disclosure through publicly-available ad archives (our recommendations on how these ad archives should operate can be found here).

The DSA’s ad transparency approach will give researchers, regulators, and advertisers greater insight into the platform-mediated advertising ecosystem, providing a crucial means of understanding and detecting hidden harms. Harms fester when they happen in the dark, and so meaningful transparency in and of the ecosystem can help mitigate them.

Yet at the same time, transparency is rarely an end in itself. And we’re humble enough to know that we don’t have all the answers to the challenges holding back the internet from what it should be. Fortunately, another crucial benefit of advertising transparency frameworks is that they can provide us with the prerequisite insight and evidence-base that is essential for effective policy solutions, in the EU and beyond.

Although the EU DSA is trending in a positive direction, we’re not resting on our laurels. The draft law still has some way to go in the legislative mark-up phase. We’ll continue to advocate for thoughtful and effective policy approaches for advertising transparency, and prototype these approaches in our own marketing work.

The post Advancing system-level change with ad transparency in the EU DSA appeared first on Open Policy & Advocacy.

The Mozilla BlogZeke Smith on internet haters, Survivor and sending support to people online

The lines between online life and real life practically disappeared in 2020 when the COVID-19 pandemic forced us to replace social platforms and video apps for human contact.

As part of our mental health awareness month coverage this May, we are talking to people about how their online lives impact their mental health. We connected with Zeke Smith, the comedy writer who was known to fans of CBS’s Survivor as “the goofy guy with the mustache and the Hawaiian shirt” over two seasons of the show until another contestant outed Smith as a trans man. Smith found himself suddenly in the spotlight as an activist and voice of an often-invisible community. 

Smith talked to us about how to think about online haters, when it’s time to log off and why a puppy pic is usually more useful than a hot take.

It’s been 15 months since the world came to a halt. How are you doing? 

I’ve been fully vaccinated for about five weeks at this point, and it has been a game changer. During the pandemic my boyfriend [actor Nico Santos (Crazy Rich Asians, Superstore)] and I have been slowly having friends, vaccinated friends, over and just being able to see and talk to people who are not my boyfriend or are not on a computer screen has been great. I’m probably like 60 percent introverted, 40 percent extroverted and that 40 percent of me has not been getting fed at all.

Were you replacing in-person with online interaction?

Yeah I was, and I don’t think that’s a good place for anybody to be getting their social interaction. But it’s just been me and my boyfriend the whole time and I found myself feeling lonely and going on Twitter and being like, alright well what are my Twitter friends doing? What are they reading? What podcasts have they just been on?

The hardest part about interacting with people online — and I’m guilty of this as well — is that nobody’s willing to listen, everyone just wants to say their opinion and mic drop and walk away. And I feel like conversations where you have them in person over coffee or cocktails or whatever, where people listen and engage with you, you can’t do that online, because, in my opinion, people are only looking to be validated in their current beliefs.

There is a pull you feel in your career with your aspirations and your life and activism toward social media. Some of this, you have to do, right?

There is a significant portion of our culture that is happening online via various social media platforms and if you live in a major city, particularly if you’re in entertainment, it’s the equivalent of reading the newspaper. You have to sort of be where trends are found and grown, and that’s how you keep up. I think, for me, the quarantine has both accelerated and also troubled this conversation of “Have we reached peak social media?” Because we’ve realized that it’s not good for anyone’s mental health.

It’s bad for our eyes and it’s bad for our sleep to be looking at screens all the time. And it has created really toxic ripples in our culture on both social and political levels.

How do you know it’s time to log off? 

When [Harry Potter author] JK Rowling came out as a TERF (trans-exclusionary radical feminist) this summer, I was trying to engage with her and it brought out all of these trans-phobic feminists. I’m making a very well-reasoned argument and then more and more trash comes at you, and it makes you very angry, because you feel threatened.

And when you’re at a moment where all of your notifications on Twitter are just someone yelling at you or saying mean things about you, or trying to undermine something very fundamental about you, it does make you feel very lonely. It does make you feel very defeated.

Those are moments when I have to catch myself because I won’t be able to focus on other things. I’ll just be thinking about a response to what that person said, and those are the moments where I’m like, “Okay, we are deleting the apps off the phone. We are logging out on the browser. We need to take a step back and reset.”

Speaking of how online interactions will never replace the real, lived experience… I think this very much ties into one of the things that’s been on your mind lately: that there aren’t any trans men that we know of working in writers rooms for television. Why does that real-life experience matter?

There was a time not too long ago, and it still exists today, where the only people writing television were straight, cisgender, white guys. The reason why stereotypes develop in television is because they’re written by people who do not have a particular experience.

In the trans context, people who don’t know trans people intimately probably think lives revolve a lot around bathrooms and saying our pronouns.

Right. The mechanics of transition or how your family reacted. 

Sure, those are all artifacts of being trans but a lot of times when I sit down with writers, who are not trans but who want to write a trans story and they send me their script, I’m like this is not true to the lived experience of being trans. And they’re confused because they say they did all this research. They watched all these movies. And I say, written by people who weren’t trans, portrayed by people who weren’t trans, directed by people who weren’t trans. It’s what you believe versus our real experience.

Without really being in charge of it, you became a trans person that an awful lot of people knew all of a sudden. 

Totally, and I think that’s one of the reasons why my Survivor experience seems to have had an impact on so many people. For a season and a half of being on Survivor, nobody knew I was trans. I was just the goofy guy with the mustache and the Hawaiian shirt. And people liked me. They liked my sense of humor, they liked my passion. Hopefully that rewrites a person’s understanding of what it means to be trans.

Pocket Joy List Project

My Pocket Joy List: Zeke Smith

Read it

You obviously experienced this on a much bigger stage. On a smaller scale though, we all maybe have moments like this if we’re passionate or vocal. Whether that’s being out as a trans person or being in favor of something, that puts you in such a vulnerable position online. What do you think people should do in a situation where they’re passionate about an issue?

I don’t know that any problems have been solved by a single individual with a Twitter account engaging. Unlike me, if you find lots of joy in getting into those squabbles, it fills you with energy and it doesn’t make you feel bad about yourself, then squabble away. 

But if it does not bring you joy, then I think it is worth reminding yourself that the weight of the world is not on your shoulders alone. There’s always a way to support a cause or people you care about without directly engaging with those people who are trying to shut them down. 

There are ways to donate. There are ways to volunteer. Make phone calls. On a personal level, channel your feelings into a creative endeavor. Instead of engaging with, as we’d colloquially say, “the haters,” send positive messages of support.

I love what you said. If someone is in the middle of a firestorm, send them a supportive message. It’s simple, but better than just having a hot take.

Exactly. Just you know, say, ‘Hey, I’m rooting for you.’ Or send pictures of puppies.

When I was on Survivor I got mostly positive comments from everybody. And then I would go on Reddit and there would be a gaggle of people who were saying nasty things about me and being critical to what I was doing, and it really shook me because that was the first time I ever saw myself commented upon publicly.

My friend Hannah [Shapiro], another contestant on Survivor (Season 33: Millennials vs. Gen-X), and I were both dealing with the haters, and so we decided to pretend we were dating. I’m very gay and she is very straight. We did this series of Instagram and Twitter posts that made it seem like we were dating. But the captions were all musical theater lyrics and if you were above the age of 18 you would know that we were pulling a prank.

But so many people fell for it and were confused and didn’t know what was going on. And what it made us realize is that, oh, a lot of these people who’ve been saying hateful things are middle schoolers. That was such a profound experience of realizing, “Oh, you have been upset about what some 13 year olds have been saying.”

And so if you are seeking community exclusively online, that’s not a path that’s necessarily going to lead you to self love and self acceptance. It’s not a recipe for finding your truth or how you can be happy within yourself.

There are millions of people for whom the internet has been a place that has helped them find themselves or express themselves or be safe, when there is no other safe place to be.

But I do think that a full life can’t be fully lived on the internet. I think you do have to at some point find your chosen family. You find a community and find people that can hold your hand and sing you “Happy Birthday.”

This interview has been edited for length and clarity.

The post Zeke Smith on internet haters, Survivor and sending support to people online appeared first on The Mozilla Blog.

The Mozilla BlogOur approach to advertising on Facebook platforms

Are we targeting you? Yep. And we’ll tell you how. 

When I joined Mozilla, the organization had made the decision to pause Facebook advertising in light of the Cambridge Analytica privacy controversy. This was a decision that I understand, but I’m changing course. 

For Mozilla, it boils down to this: our mission requires that we empower everyone to protect themselves online, not just the folks that are plugged in to the recent techlash. And a lot of the people that may need our tools the most spend a lot of time on Facebook and Instagram. 

So the question becomes, can we reach folks on these platforms with our ads, while staying true to Mozilla’s values? I believe we can, and it starts with being up front about what we’re doing. 

Here’s the skinny:

Online, people are segmented into small groups in order to serve them up highly-targeted advertisements. This is called microtargeting. And it happens countless times a day on the platforms we use every day.

For many years the conventional wisdom was that this type of advertising was simply a better way to put more relevant ads in front of consumers. Relevant ads mean better conversions for advertisers, which means more goods sold. And, hey, sometimes it works out well for people. Chances are you’ve found a new shampoo or pair of shoes or set of weights because of these targeted ads. You’ve also probably bought something you didn’t need (or even particularly want) because of these targeted ads. I know I have – my half-used collection of skincare products can attest to that. 

The problem is not that these ads themselves exist; the issue is the complexity of the system. The same highly sophisticated targeting tools that allow advertisers to find you can also be used for harm. This can look like overly aggressive ad tactics, or even changing consumers’ self-perception after being targeted a certain way. In the most dangerous cases, advertisers can target groups with harmful messages, like targeting sick people with treatments that don’t actually work, ads that discriminate against under-represented groups, and deceptive ads targeting stressed teenagers for army recruitment. 

So what can we do? It starts with transparency. Ads and the targeting parameters used by companies should be public and available for anyone to see. 

When Mozilla advertises on Instagram, we are going to tell you exactly what our ad says, who we are advertising to and why. Our ads themselves will do the same. 

The full list of information for our Mozilla and Firefox ads can be found here.

Here’s an example: 

Target: Scorpio + Oct/Nov birthday + interest in tarot cards

Rather than wait for legislation like the Social Media DATA Act and the EU’s Digital Services Act to pass, or for the ad platforms to act themselves, we’re going to show them how it can be done. All large platforms who operate ad networks — Facebook, Google, YouTube — should disclose targeting parameters on their platforms through publicly available APIs.

The health of the ad ecosystem is important to the future of the internet. This is an important step forward.

Want to know more about online advertising? Check out these resources.

Check My Ads

NYU Ad Observatory 

Common Sense Media

The post Our approach to advertising on Facebook platforms appeared first on The Mozilla Blog.

Mozilla Addons BlogManifest v3 update

Two years ago, Google proposed Manifest v3, a number of foundational changes to the Chrome extension framework. Many of these changes introduce new incompatibilities between Firefox and Chrome. As we previously wrote, we want to maintain a high degree of compatibility to support cross-browser development. We will introduce Manifest v3 support for Firefox extensions, but we will diverge from Chrome’s implementation where we think it matters and our values point to a different solution.

For the last few months, we have consulted with extension developers and Firefox’s engineering leadership about our approach to Manifest v3. The following is an overview of our plan to move forward, which is based on those conversations.

High level changes

  • In our initial response to the Manifest v3 proposal, we committed to implementing cross-origin protections. Some of this work is underway as part of Site Isolation, a larger reworking of Firefox’s architecture to isolate sites from each other. You can test how your extension performs in site isolation on the Nightly pre-release channel by going to about:preferences#experimental and enabling Fission (Site Isolation). This feature will be gradually enabled by default on Firefox Beta in the upcoming months and will start rolling out to a small percentage of release users in Q3 2021.

    Cross-origin requests in content scripts already encounter restrictions by advances of the web platform (e.g. SameSite cookies, CORP) and privacy features of Firefox (e.g. state partitioning). To support extensions, we are allowing extension scripts with sufficient host permissions to be exempted from these policies. Content scripts won’t benefit from these exemptions, and will eventually have the same kind of permissions as regular web pages (bug 1578405). We will continue to develop APIs to enable extensions to perform cross-origin requests that respect the user’s privacy choices (e.g. bug 1670278, bug 1698863).

  • Background pages will be replaced by background service workers (bug 1578286). This is a substantial change and will continue to be developed over the next few months. We will make a new announcement once we have something that can be tested in Nightly.
  • Promise-based APIs: Our APIs have been Promise-based since their inception using the browser.* namespace and we published a polyfill to offer consistent behavior across browsers that only support the chrome.* namespace. For Manifest v3, we will enable Promise-based APIs in the chrome.* namespace as well.
  • Host permission controls (bug 1711787): Chrome has shipped a feature that gives users control over which sites extensions are allowed to run on. We’re working on our own design that puts users in control, including early work by our Outreachy intern Richa Sharma on a project to give users the ability to decide if extensions will run in different container tabs (bug 1683056). Stay tuned for more information about that project!
  • Code execution: Dynamic code execution in privileged extension contexts will be restricted by default (bug 1687763). A content security policy for content scripts will be introduced (bug 1581608). The existing userScripts and contentScripts APIs will be reworked to support service worker-based extensions (bug 1687761).

declarativeNetRequest

Google has introduced declarativeNetRequest (DNR) to replace the blocking webRequest API. This impacts the capabilities of extensions that process network requests (including but not limited to content blockers) by limiting the number of rules an extension can use, as well as available filters and actions.
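For a sense of what this looks like in practice: where a blocking webRequest handler is imperative code, a DNR rule is a static, declarative JSON object. A minimal sketch of a single rule following the schema Chrome documents (the filter values here are hypothetical):

```json
[
  {
    "id": 1,
    "priority": 1,
    "action": { "type": "block" },
    "condition": {
      "urlFilter": "||ads.example.com^",
      "resourceTypes": ["script", "image"]
    }
  }
]
```

Because the browser evaluates rules like this itself, the extension never sees the request, which is the basis for both the performance and privacy arguments for DNR as well as the flexibility concerns content blockers have raised.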

After discussing this with several content blocking extension developers, we have decided to implement DNR and continue maintaining support for blocking webRequest. Our initial goal for implementing DNR is to provide compatibility with Chrome so developers do not have to support multiple code bases if they do not want to. With both APIs supported in Firefox, developers can choose the approach that works best for them and their users.

We will support blocking webRequest until there is a better solution covering all the use cases we consider important, since DNR as currently implemented by Chrome does not yet meet the needs of extension developers.

You can follow our progress on implementing DNR in bug 1687755.

Implementation timeline

Manifest v3 is a large platform project, and some parts of it will take longer than others to implement. As of this writing, we are hoping to complete enough work on this project to support developer testing in Q4 2021 and start accepting v3 submissions in early 2022. This schedule may be pushed back or delayed due to unforeseeable circumstances.

We’d like to note that it’s still very early to be talking about migrating extensions to Manifest v3. We have not yet set a deprecation date for Manifest v2 but expect it to be supported for at least one year after Manifest v3 becomes stable in the release channel.

Get involved

We understand that extension developers will need to adapt their extensions to be compatible with Manifest v3, and we would like to make this process as smooth as possible. Please let us know about any pain points you might have encountered when migrating Chrome extensions to Manifest v3, and any suggested mitigations, on our community forum or in relevant issues on Bugzilla.

We are also interested in hearing about specific use cases we should keep in mind so that your extensions can remain compatible with Chrome under Manifest v3.

The post Manifest v3 update appeared first on Mozilla Add-ons Blog.

Dennis SchubertWebCompat PSA: Please don't use negative `text-indent`s for hidden labels.

During my work on Web Compatibility at Mozilla, I see many things that break in exciting ways. Sometimes, it’s obvious stuff like flexbox compat issues1, but sometimes, the breakages are a bit surprising. Today, the star of the show is a single CSS instruction:

text-indent: -9999px

When we talk about web compatibility issues, most people think about an elite subset of “well-known” breakages or massive layout issues. They rarely think about innocent-looking things like text-indent. And to be fair, most of the time, neither do we browser people.

This large negative text-indent appears to be a hack, frequently used to “move away” labels next to icons: hiding them from view while keeping them in the markup for screen readers and similar user agents. Please don’t do that; there are better alternatives for screen readers. Even though a large negative indentation seems like a good solution, the unfortunate reality is that text-indent has some weird cross-browser quirks. Two examples that I stumbled across in the last month:
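One widely used alternative is a “visually hidden” utility class (this is a common community pattern, not tied to any particular framework): it keeps the element in the accessibility tree without pushing a 9999px-wide box off to one side. A sketch:

```css
/* Visually hides an element while keeping it
   available to screen readers. */
.visually-hidden {
  position: absolute;
  width: 1px;
  height: 1px;
  margin: -1px;
  padding: 0;
  overflow: hidden;
  clip: rect(0 0 0 0);
  white-space: nowrap;
  border: 0;
}
```

Because the element is clipped to a 1×1 pixel box rather than indented off-screen, nothing extends the layout in any text direction.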

… and there are a lot more.

text-indent does extend the size of an element, though not in a fixed direction: it depends on the direction of the text flow. Here’s a quick example:

Source:


<style>
  #text-indent-demo p {
    text-indent: 100px;
  }
</style>
<section id="text-indent-demo">
  <p style="direction: ltr;">one</p>
  <p style="direction: rtl;">two</p>
  <p style="direction: ltr; writing-mode: vertical-lr;">three</p>
  <p style="direction: rtl; writing-mode: vertical-lr;">five</p>
</section>

Result:

one

two

three

five

As you can see, we have the same text-indent: 100px;, but in four different directions depending on the text direction and writing mode. This makes perfect sense if you think about it, but developers can get caught off-guard here, especially if working on a site that later gets translated. Or, well, if browsers misbehave.

On an Israeli site I recently looked at, a large negative text-indent caused the site to be extended to the right, which caused some viewport issues in Firefox for Android because we try to fit everything into your view. Another example is a report about a Romanian news site, where clicking on the social links left a dotted border all across the screen because they extended their buttons 9999px to the left without applying overflow: hidden. In Chrome, this particular case is not noticeable because Chrome does not show focus borders the same way Firefox does, but the issue is still there. There are more examples of things going wrong in unexpected ways, but you get the gist.

While I am only talking about text-indent here, mainly because the text direction dependency adds an interesting twist, note that all methods of “moving something out of the screen to make it invisible” have similar issues. Even if you move things really far away, they still exist inside the document, and they can have unexpected side effects.

So… please don’t. The web is broken enough already. :)

  1. Spoiler: there soon will be another blog post, about a flexbox issue! Wohoo! 

Mozilla Open Policy & Advocacy BlogMozilla reacts to the European Commission’s guidance on the revision of the EU Code of Practice on Disinformation

Today the European Commission published its guidance for the upcoming revision of the EU Code of Practice on Disinformation. Mozilla was a founding signatory of the Code of Practice in 2018, and we’re happy to see plans materialise for its evolution.

Reacting to the guidance, Raegan MacDonald, Director of Global Policy, Mozilla Corporation said:

“We welcome the Commission’s guidance for the next iteration of the Code of Practice. We’re happy that the revised Code will provide a greater role for organisations with technical and research expertise, and we look forward to harnessing that opportunity to support the various stakeholders.

This guidance outlines a clear vision for how the fight against disinformation can sit within a future-focused and thoughtful policy framework for platform accountability. While we still need to ensure the DSA provides the fundamentals, we see the revised Code as playing an important role in giving practical meaning to transparency and accountability.”

The post Mozilla reacts to the European Commission’s guidance on the revision of the EU Code of Practice on Disinformation appeared first on Open Policy & Advocacy.

Mozilla Performance Blog: Performance Sheriff Newsletter (April 2021)

In April there were 187 alerts generated, resulting in 34 regression bugs being filed on average 6 days after the regressing change landed.

Welcome to the April 2021 edition of the performance sheriffing newsletter. Here you’ll find the usual summary of our sheriffing efficiency metrics, followed by some analysis on our invalid regression alerts and bugs. If you’re interested (and if you have access) you can view the full dashboard.

Sheriffing efficiency

  • All alerts were triaged in an average of 1.3 days
  • 85% of alerts were triaged within 3 days
  • Valid regressions were associated with bugs in an average of 3.3 days
  • 85% of valid regressions were associated with bugs within 5 days

Sheriffing Efficiency (Apr 2021)

Invalid Alerts

Sometimes we have alerts that turn out to be invalid. This usually means there were outliers in the results that triggered an alert, the results are multi-modal, or that the data is too noisy and the magnitude of the change is too small to confidently identify a culprit revision. Here’s an example of where outliers have caused invalid regression alerts:

Perfherder graph showing invalid alerts due to outliers

These invalid alerts are usually identified by the performance sheriffs. They can be an indicator for the quality of our data and our change detection algorithm. If the percentage of invalid alerts increases we’ll be spending more time sheriffing these alerts, and we may want to investigate.
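To see why a single outlier can trigger a false alert, here is a toy robust check in Python. This is only an illustration of the general idea; it is not Perfherder’s actual change-detection algorithm:

```python
from statistics import median

def is_outlier(value, window, k=3.0):
    """Flag a result as an outlier if it sits more than k median absolute
    deviations (MADs) from the median of recent results. A common robust
    heuristic, shown here for illustration only."""
    med = median(window)
    mad = median(abs(x - med) for x in window) or 1e-9  # avoid zero MAD
    return abs(value - med) > k * mad

# Stable results around 100ms with one spike
history = [99.8, 100.1, 100.3, 99.9, 100.2, 100.0, 99.7]
print(is_outlier(143.0, history))  # spike: True
print(is_outlier(100.4, history))  # normal noise: False
```

A mean-based threshold would be dragged around by the spike itself, which is exactly how a stray outlier can masquerade as a regression.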

Regression Alerts by Status (April 2021)

In April we saw 5 invalid alerts, which accounts for 3% of all regression alerts. Over the last 6 months we’ve seen 93 invalid alerts out of 1,371 total alerts, just under 7%.

Invalid Regression Bugs

Occasionally we detect a performance regression, identify the suspected culprit, and open a regression bug only for it to be closed as invalid. There can be a number of reasons for this, but the most likely is that the suspected culprit was incorrect. As our performance sheriffs are not expected to be familiar with all of our performance tests or what might impact them, we rely on the authors of suspected culprits to point out when the performance impact doesn’t make sense. When queried, our sheriffs will trigger additional tests around the regression range and either confirm the original culprit or close the bug as invalid and open a new one. Note that until recently, sheriffs may have used the same bug and simply modified the “regressed by” field. We have changed this to allow us to track the number of invalid bugs over time.

Regression Bugs by Status (April 2021)

Note that bugs may have many alerts, and are often resolved some time before the alerts, which explains why there are more open alerts than bugs. Our sheriffs periodically run a query to identify alerts linked to bugs that have been resolved and use this to sanity check and update the alerts as necessary.

Summary of alerts

Each month I’ll highlight the regressions and improvements found.

Note that whilst I usually allow one week to pass before generating the report, there are still alerts under investigation for the period covered in this article. This means that whilst I believe these metrics to be accurate at the time of writing, some of them may change over time.

I would love to hear your feedback on this article, the queries, the dashboard, or anything else related to performance sheriffing or performance testing. You can comment here, or find the team on Matrix in #perftest or #perfsheriffs.

The dashboard for April can be found here (for those with access).

Daniel Stenberg: curl 7.77.0 – 200 OK

Welcome to the 200th curl release. We call it 200 OK. It coincides with us counting more than 900 commit authors and surpassing 2,400 credited contributors in the project. This is also the first release ever in which we thank more than 80 persons in the RELEASE-NOTES for helping to make it, and we’ve set two new records in the bug-bounty program: the largest payout ever for a single bug (2,000 USD) and the largest total payout during a single release cycle: 3,800 USD.

This release cycle was 42 days only, two weeks shorter than normal due to the previous 7.76.1 patch release.

Release Presentation

Numbers

the 200th release
5 changes
42 days (total: 8,468)

133 bug-fixes (total: 6,966)
192 commits (total: 27,202)
0 new public libcurl function (total: 85)
2 new curl_easy_setopt() option (total: 290)

2 new curl command line option (total: 242)
82 contributors, 44 new (total: 2,410)
47 authors, 23 new (total: 901)
3 security fixes (total: 103)
3,800 USD paid in Bug Bounties (total: 9,000 USD)

Security

We set two new records in the curl bug-bounty program this time as mentioned above. These are the issues that made them happen.

CVE-2021-22901: TLS session caching disaster

This is a Use-After-Free in the OpenSSL backend code that in the absolute worst case can lead to an RCE, a Remote Code Execution. The flaw was added fairly recently and is very hard to exploit, but you should upgrade or patch immediately.

The issue occurs when TLS session related info is sent from the TLS server when the transfer that previously used it is already done and gone.

The reporter was awarded 2,000 USD for this finding.

CVE-2021-22898: TELNET stack contents disclosure

When libcurl accepts custom TELNET options to send to the server, the input parser was flawed, which could be exploited to make libcurl instead send contents from the stack.

The reporter was awarded 1,000 USD for this finding.

CVE-2021-22897: schannel cipher selection surprise

In the Schannel backend code, the cipher selected for a transfer was stored in a static variable. This meant that one transfer’s cipher choice could unknowingly affect other connections, lowering them to a weaker security grade than intended.

The reporter was awarded 800 USD for this finding.

Changes

In this release we introduce 5 new changes that might be interesting to take a look at!

Make TLS flavor explicit

As explained separately, the curl configure script no longer defaults to selecting a particular TLS library. When you build curl with configure now, you need to select which library to use. No special treatment for any of them!
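As a sketch, selecting a backend explicitly now looks like this (treat this as a build-configuration fragment; pick the flag matching your TLS library of choice):

```shell
# Build curl 7.77.0+ with an explicitly chosen TLS backend.
# There is no default anymore; omitting a TLS selection flag
# makes configure stop with an error.
./configure --with-openssl
# or, for other libraries:
#   ./configure --with-gnutls
#   ./configure --with-wolfssl
make
```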

No more SSL

curl now has no more traces of support for SSLv2 or SSLv3. Those ancient and insecure SSL versions were already disabled by default by TLS libraries everywhere, but now it’s also impossible to activate them even in special builds. Stripped out from both the curl tool and the library (thus counted as two changes).

HSTS in the build

We introduced HSTS support a while ago; now we finally remove the experimental label and ship it enabled in the build by default, for everyone to use more easily.

In-memory cert API

We introduce API options for libcurl that allow users to specify certificates in-memory instead of using files in the file system. See CURLOPT_CAINFO_BLOB.
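A minimal sketch of the new in-memory CA option. The PEM contents here are a placeholder and error handling is omitted for brevity:

```c
#include <curl/curl.h>
#include <string.h>

int main(void)
{
  /* Placeholder PEM data; in a real program this would be an
     actual CA bundle held in memory. */
  static const char pem[] =
    "-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----\n";

  CURL *curl = curl_easy_init();
  if(curl) {
    struct curl_blob blob;
    blob.data = (void *)pem;
    blob.len = strlen(pem);
    blob.flags = CURL_BLOB_COPY; /* let libcurl keep its own copy */

    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
    curl_easy_setopt(curl, CURLOPT_CAINFO_BLOB, &blob);
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return 0;
}
```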

Favorite bug-fixes

Again we manage to perform a large amount of fixes in this release, so I’m highlighting a few of the ones I find most interesting!

Version output

The first line of curl -V output got updated: libcurl now includes OpenLDAP and the version of it used in the build, and the curl tool can add libmetalink and the version of it used in the build!

curl_mprintf: add description

We’ve provided the *printf() clone functions in the API since forever, but we’ve tried to discourage users from using them. Still, now we have a first shot at a man page that clearly describes how they work.

This is important as they’re not quite POSIX compliant and users who against our advice decide to rely on them need to be able to know how they work!

CURLOPT_IPRESOLVE: preventing wrong IP version from being used

This option was made a little stricter than before. Previously, it would be lax about existing connections and prefer reuse instead of resolving again, but starting now this option makes sure to only use a connection with the requested IP version.

This allows applications to explicitly create two separate connections to the same host using different IP versions when desired, which previously libcurl wouldn’t easily let you do.
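A sketch of that use case (error handling omitted; assumes libcurl 7.77.0 or later for the stricter behavior):

```c
#include <curl/curl.h>

int main(void)
{
  CURL *v4 = curl_easy_init();
  CURL *v6 = curl_easy_init();

  /* Two handles to the same host, pinned to different IP versions. */
  curl_easy_setopt(v4, CURLOPT_URL, "https://example.com/");
  curl_easy_setopt(v4, CURLOPT_IPRESOLVE, CURL_IPRESOLVE_V4);

  curl_easy_setopt(v6, CURLOPT_URL, "https://example.com/");
  curl_easy_setopt(v6, CURLOPT_IPRESOLVE, CURL_IPRESOLVE_V6);

  /* With the stricter option, the second transfer will no longer
     silently reuse the first transfer's connection. */
  curl_easy_perform(v4);
  curl_easy_perform(v6);

  curl_easy_cleanup(v4);
  curl_easy_cleanup(v6);
  return 0;
}
```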

Ignore SIGPIPE in curl_easy_send

libcurl does its best to ignore SIGPIPE everywhere and here we identified a spot where we had missed it… We also made sure to enable the ignoring logic when built to use wolfSSL.

Several HTTP/2-fixes

There are no less than 6 separate fixes mentioned for the HTTP/2 module in this release. Some plug potential memory leaks, others improve behavior. Possibly the most important one was the move of the transfer-related error code from the connection struct to the transfer struct, since it was vulnerable to a race condition that could make it wrong. Another related fix is that libcurl no longer forcibly disconnects a connection over which a transfer gets HTTP_1_1_REQUIRED returned.

Partial CONNECT requests

When the CONNECT HTTP request sent to a proxy wasn’t all sent in a single send() call, curl would fail. It is baffling that this bug hasn’t been found or reported earlier but was detected this time when the reporter issued a CONNECT request that was larger than 16 kilobytes…

TLS: add USE_HTTP2 define

There were several remaining bad assumptions that HTTP/2 support in curl relies purely on nghttp2. This is no longer true, as HTTP/2 support can also be provided by hyper.

normalize numerical IPv4 hosts

The URL parser now knows about the special IPv4 numerical formats and parses and normalizes URLs with numerical IPv4 addresses.
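The classic numerical forms (hexadecimal, octal, and fewer-than-four dot parts) can be illustrated with Python’s standard library, which implements the same historic inet_aton parsing rules. This is an illustration of the formats involved, not curl’s implementation:

```python
import socket

def normalize_ipv4(host):
    """Normalize classic numerical IPv4 host forms (hex, octal,
    1-3 dotted parts) to dotted-quad, roughly what a URL parser
    does for such hosts. Returns None if the host is not one of
    the recognized numerical forms."""
    try:
        return socket.inet_ntoa(socket.inet_aton(host))
    except OSError:
        return None

print(normalize_ipv4("0xC0.0250.01"))  # hex + octal parts -> 192.168.0.1
print(normalize_ipv4("2130706433"))    # single 32-bit number -> 127.0.0.1
print(normalize_ipv4("example.com"))   # not numerical -> None
```

Normalizing these forms in the URL parser means the rest of curl always sees a canonical dotted-quad address.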

Timeout, timed out libssh2 disconnects too

When libcurl (built with libssh2 support) stopped an SFTP transfer because a timeout was triggered, the following SFTP disconnect procedure was subsequently also stopped because of the same timeout and therefore wasn’t allowed to properly clean up everything, leading to a memory-leak!

IRC network switch

We moved the #curl IRC channel to the new network libera.chat. Come join us there!

Next release

On Jul 21, 2021 we plan to ship the next release. The version number for that is not yet decided but we have changes in the pipeline, making a minor version number bump very likely.

Credits

7.77.0 release image by Filip Dimitrovski.

This Week In Rust: This Week in Rust 392

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

No official blog posts, newsletters, or research papers this week.

Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs
Miscellaneous

Crate of the Week

This week's crate is typed-index-collections, a crate that lets you make Vecs with custom-typed indices.

Thanks to Tim for the nomination

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

No issues were proposed for CfP.

Updates from Rust Core

280 pull requests were merged in the last week

Rust Compiler Performance Triage

A somewhat quiet week. Some PRs had performance runs performed on them, but the changes were merged despite this. Also, we still have issues with certain benchmarks being noisy.

Triage done by @rylev. Revision range: 25a277..cdbe2

2 Regressions, 2 Improvements, 1 Mixed; 0 of them in rollups

Full report here.

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs
Tracking Issues & PRs

New RFCs

Upcoming Events

Online
North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

Ockam

Red Hat

Starry

NZXT

Kollider

Kraken

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

Ok, you wanted it. Let's go full meta:

This time, there were two crates and one quote, which is not much, but ok. Keep it up, folks!

llogiq on reddit

Thanks to Patrice Peterson for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, and cdmistman.

Discuss on r/rust

Niko Matsakis: Edition: the song

You may have heard that the Rust 2021 Edition is coming. Along with my daughter Daphne, I have recorded a little song in honor of the occasion! The full lyrics are below – if you feel inspired, please make your own version!1 Enjoy!

Video

Lyrics

(Spoken)
Breaking changes where no code breaks.
Sounds impossible, no?
But in the Rust language, you might say that we like to do impossible things.
It isn’t easy.
You may ask, how do we manage such a thing?
That I can tell you in one word… Edition!

(Chorus)
Edition, edition… edition!

(Lang)
Who day and night
Is searching for a change
Whatever they can do
So Rust’s easier for you
Who sometimes finds
They have to tweak the rules
And change a thing or two in Rust?

(All)
The lang team, the lang team… edition!
The lang team, the lang team… edition!

(Libs)
Who designs the traits that we use each day?
All the time, in every way?
Who updates the prelude so that we can call
The methods that we want no sweat

(All)
The libs team, the libs team… edition!
The libs team, the libs team… edition!

(Users)
Three years ago I changed my code
to Rust twenty eighteen
Some dependencies did not
But they… kept working.

(All)
The users, the users… edition!
The users, the users… edition!

(Tooling)
And who does all this work
To patch and tweak and fix
Migrating all our code
Each edition to the next

(All)
The tooling, the tooling… edition!
The tooling, the tooling… edition!

(Spoken)
And here in Rust, we’ve always had our little slogans.
For instance, abstraction… without overhead.
Concurrency… without data races.
Stability… without stagnation.
Hack… without fear.
But we couldn’t do all of those things…
not without…
Edition!

Footnotes

  1. OMG, that would be amazing. I’ll update the post with any such links I find. 

Mozilla Security Blog: Updates to Firefox’s Breach Alert Policy

Your personal data is yours – and it should remain yours! Unfortunately data breaches that reveal your personal information on the internet are omnipresent these days. In fact, fraudulent use of stolen credentials is the 2nd-most common threat action (after phishing) in Verizon’s 2020 Data Breach Investigations report and highlights the problematic situation of data breaches.

In 2018, we launched Firefox Monitor, which instantly notifies you in case your data was involved in a breach and provides guidance on how to protect your personal information online. To expand this protection and help users across the world stay in control of their data and privacy, we then integrated alerts from Firefox Monitor into mainstream Firefox, so that Firefox itself can instantly notify you when you visit a site that has been breached.

Sites continue to suffer password breaches, and they also leak or lose other types of data. Even though we consider all personal data important, notifying you about every one of these leaks generates noise that’s difficult to act on. The better alternative is to alert you only when it’s critical for you to act to protect your data. Hence, the primary change is that Firefox will only show alerts for websites where passwords were exposed in the breach.

In detail, we are announcing an update to our initial Firefox breach alert policy for when Firefox alerts for breached sites:

“Firefox shows a breach alert when a user visits a site where passwords were exposed and added to Have I Been Pwned within the last 2 months.”

To receive the most comprehensive breach alerts, we suggest signing up for Firefox Monitor to check if your account was involved in a breach. We will keep you informed and will alert you by email in case your personal data is affected by a data breach. Our continued commitment to protecting your security and privacy from online threats is critical for us and aligns with our mission: Individuals’ security and privacy on the internet are fundamental and must not be treated as optional.

If you are a Firefox user, you don’t have to do anything to benefit from this new privacy protection. If you aren’t a Firefox user, download Firefox to start benefiting from all the ways that Firefox works to protect your privacy.

The post Updates to Firefox’s Breach Alert Policy appeared first on Mozilla Security Blog.

The Mozilla Blog: Why I’m joining Mozilla’s Board of Directors

I was born and raised in Zambia and came to the United States for university. Even though growing up, I had almost no exposure to computers, I chose to study computer engineering. Fortunately for me, this leap of faith proved to be a great decision, as I’ve been very blessed with the career that came after.

When I started my tenure at Intuit, my first job was working as a developer on QuickBooks for DOS. In what would turn out to be a 22-year run, I got a front row seat in watching technology evolve by working on Windows, Mac, and eventually online and mobile offerings. I was fortunate to play a leading role in the development of new Intuit products, which further fueled my passion for innovation.

Hugh Molotsi, Mozilla board member<figcaption>Hugh Molotsi</figcaption>

From this vantage point, there is no doubt that the internet has been the most transformational technology in my years working in the industry. The internet has not only expanded the possibilities of problems to solve and how to solve them, but it has also expanded the reach of who can benefit from the solutions.

From the beginning, Mozilla has had an important vision: for the internet to benefit everyone, it should remain a global resource that is open and accessible. With this vision in mind, Mozilla has played a prominent role in the advancement of open source and community collaboration. The impact of Mozilla has catalyzed the internet as a force for good, as it touches lives in some of the least developed parts of the world.

I believe Mozilla’s mission is today more important than ever. In these polarized times, the internet is blamed for the rise of monoliths and the rapid spread of disinformation. Mozilla is the trusted entity ensuring the internet-as-a-public-resource is effective and enriches the lives of not just the privileged few. The challenges of a polarized world may feel daunting, but with human ingenuity and collaboration, I’m convinced the internet will play a key part in a better future. For someone who grew up in Africa, I’m especially excited about how the internet is enabling “leapfrog” solutions in meeting the needs of emerging economies.

I’m excited to help the Mozilla Corporation drive new innovations that will benefit the global community. This is why I am truly honored to serve on the Mozilla Corporation Board.

(Photographer credit: Danny Ortega)

The post Why I’m joining Mozilla’s Board of Directors appeared first on The Mozilla Blog.

Patrick Cloke: celery-batches 0.5 released!

A new version (v0.5) of celery-batches is available which adds support for Celery 5.1 and fixes storing of results when using the RPC result backend.

As explored previously, the RPC result backend works by having a results queue per client, unfortunately celery-batches was attempting to store the results …

Daniel Stenberg: The curl user survey 2021

For the eighth consecutive year we run the annual curl user survey again in 2021. The form just went up and I would love to have you spend 10 minutes of your busy life to tell us how you think curl works, what doesn’t work and what we should do next.

We have no tracking on the website and we have no metrics or usage measurements of the curl tool or the libcurl library. The only proper way we have left to learn how users and people in general think of us and how curl works, is to ask. So this is what we do, and we limit the asking to once per year.

You can also view this from your own “selfish” angle: this is a way for you to submit your input, your opinions and we will listen.

The survey will be up two weeks during which I hope to get as many people as possible to respond. If you have friends you know use curl or libcurl, please have them help us out too!

Take the survey

Yes really, please take the survey!

Bonus: see the extensive analysis of the 2020 user survey. There’s a lot of user feedback to learn from in it.

Firefox Nightly: These Weeks in Firefox: Issue 94

Highlights

  • On macOS, scrollbars now squish during rubber-banding.
  • We’re working on supporting native fullscreen on macOS. Turn it on by enabling the pref full-screen-api.macos-native-full-screen. This will (among other things) create new fullscreen Spaces for videos. You could, for example, put a fullscreen YouTube video in native Split Screen next to another application.
  • We’re also working on enhanced dark mode support for macOS (Bug 1623686). Enable this by turning on the pref widget.macos.respect-system-appearance. Recent fixes include a dark library window (Bug 1698763), dark page info dialog (Bug 1698754), and a dark “Clear Recent History” window (Bug 1710269).
  • We’ve announced the deprecation of the canvas drawWindow WebExtension method, due to incompatibility with the Fission architecture:
  • about:welcome got major updates for Firefox 89. This includes new animations, icons, and accessibility improvements.

Friends of the Firefox team

For contributions from May 4 to May 18 2021, inclusive.

Resolved bugs (excluding employees)

New contributors (🌟 = first patch)

Project Updates

Add-ons / Web Extensions

Addon Manager & about:addons
  • Landed some more styling tweaks for making sure about:addons does better match the new Proton UI conventions: Bug 1709464 and Bug 1709655
WebExtensions Framework
  • More Fission-related changes landed in Firefox 90: Bug 1708238
  • Work related to the “manifest_version 3”: Support for new web_accessible_resources manifest property formats – Bug 1696580, Bug 1697334
WebExtension APIs
  • Starting from Firefox 90, Extensions will be allowed to use the Cache web API from extension pages to cache HTTP resources (caching an HTTP URL using the Cache web API will be still enforcing the extensions host permissions, as it would if the extension would be fetching from the same URLs using fetch or XHR) – Bug 1575625 and Bug 1710138
    • Thanks to André Natal for contributing this change as part of his work on the Project Bergamot extension

Fission

Lint and Docs

macOS Spotlight

  • Native context menus landed in Firefox 89! This closes the 21-year-old bug 34572.
    • We also fixed a number of follow-up issues, like supporting dark mode context menus on macOS 10.14+.

Messaging System

New Tab Page

  • Accessibility bug fixes for the “personalize” drawer allowing it to operate better with screen readers (Bug 1707022) Thanks to :eeejay for the patches! Also a fix for high contrast mode (Bug 1708248) thanks to :morgan and :thecount
  • Snippets has been disabled in Firefox 89 (Bug 1709984)

Performance

  • dthayer has a patch up for review to reduce the UI freezes caused by sending SessionStore data to the SessionFile worker.
  • mconley would like to experiment with the about:home startup cache using Nimbus, and is considering having the startup cache enabled in MR1.1.
  • mconley fixed Bug 1703494 – Remove sync layout flush for hamburger menu opening with proton
  • emalysz landed a patch to provide async support for promise workers, and removed OS.File from PageThumbs.jsm. Only 3 callers of OS.File left during startup!
  • Several BHR improvements:
    • Improved dashboard:
      • It’s possible to navigate to the data of previous days, and to link to a specific day.
      • For hangs with an associated bug, the whiteboard annotation is shown in the top right
      • when using the filter box, the filtered word is highlighted in the stack on the right side.
    • Better data:
      • (chrome) JS function names are now included in BHR stacks.
      • these label frames are now visible: “ChromeUtils::Import”, “mozJSSubScriptLoader::DoLoadSubScriptWithOptions”, “nsThreadManager::SpinEventLoop”, “Category observer notification”, “Services_Resolve”, “Task”
    • Doug is working on showing annotations (eg. “UserInteracting”, “browser.tabs.opening”) in the dashboard

Performance Tools

Proton / MR1

Search and Navigation

  • Daisuke fixed a bug on Linux where opening new tabs by middle clicking the tabs bar could paste clipboard contents into the urlbar. Bug 1710289
  • Daisuke also fixed a bug where pasting a string ending with a combination of CR, LF may drop the search terms. Bug 1709971
  • Marco landed a patch improving the tooltips and accessible text when adding new OpenSearch engines – Bug 1706334
  • Mark fixed a bug in the separate search bar, where certain characters could be shown encoded in the results panel – Bug 1529220

Screenshots

  • Screenshots now factors in Firefox zoom values
  • We’ve accepted an Outreachy intern who will start next week!

Mozilla Attack & Defense: Browser fuzzing at Mozilla

Introduction

Mozilla has been fuzzing Firefox and its underlying components for a while. It has proven to be one of the most efficient ways to identify quality and security issues. In general, we apply fuzzing on different levels: there is fuzzing the browser as a whole, but a significant amount of time is also spent on fuzzing isolated code (e.g. with libFuzzer) or whole components such as the JS engine using separate shells. In this blog post, we will talk specifically about browser fuzzing only, and go into detail on the pipeline we’ve developed. This single pipeline is the result of years of work that the fuzzing team has put into aggregating our browser fuzzing efforts to provide consistently actionable issues to developers and to ease integration of internal and external fuzzing tools as they become available.

Diagram showing interaction of systems used in Mozilla's browser fuzzing workflow

Build instrumentation

To be as effective as possible we make use of different methods of detecting errors. These include sanitizers such as AddressSanitizer (with LeakSanitizer), ThreadSanitizer, and UndefinedBehaviorSanitizer, as well as using debug builds that enable assertions and other runtime checks. We also make use of debuggers such as rr and Valgrind. Each of these tools provides a different lens to help uncover specific bug types, but many are incompatible with each other or require their own custom build to function or provide optimal results. Beyond debugging and error detection, some tools, such as code coverage and libFuzzer, cannot work at all without build instrumentation. Each operating system and architecture combination requires a unique build and may only support a subset of these tools.

Lastly, each variation has multiple active branches including Release, Beta, Nightly, and Extended Support Release (ESR). The Firefox CI Taskcluster instance builds each of these periodically.

Downloading builds

Taskcluster makes it easy to find and download the latest build to test. We discussed above the number of variants created by different instrumentation types, and we need to fuzz them in automation. Given the large number of combinations of builds, artifacts, architectures, and operating systems, downloading and unpacking each one is a non-trivial task.

To help reduce the complexity of build management, we developed a tool called fuzzfetch. Fuzzfetch makes it easy to specify the required build parameters and it will download and unpack the build. It also supports downloading specified revisions to make it useful with bisection tools.

How we generate the test cases

As the goal of this blog post is to explain the whole pipeline, we won’t spend much time explaining fuzzers. If you are interested, please read “Fuzzing Firefox with WebIDL” and the in-tree documentation. We use a combination of publicly available and custom-built fuzzers to generate test cases.

How we execute, report, and scale

For fuzzers that target the browser, Grizzly manages and runs test cases and monitors for results. Creating an adapter allows us to easily run existing fuzzers in Grizzly.

Simplified Python code for a Grizzly adapter using an external fuzzer.
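The shape of such an adapter can be sketched as follows. The real Adapter base class lives in Grizzly; here a minimal stub stands in for it so the sketch is self-contained, and all class and method names should be read as illustrative rather than as Grizzly’s actual API:

```python
class Adapter:
    """Stand-in for Grizzly's adapter interface (illustrative only)."""
    NAME = "base"
    def setup(self, input_path, server_map): ...
    def generate(self, testcase, server_map): ...

class ExternalFuzzerAdapter(Adapter):
    """Wraps an external fuzzer so Grizzly can pull test cases from it."""
    NAME = "external-fuzzer"

    def setup(self, input_path, server_map):
        # Stand-in for launching/connecting to the external fuzzer process
        self.fuzzer = iter(range(3))

    def generate(self, testcase, server_map):
        # Ask the external fuzzer for its next test case and hand the
        # data to Grizzly, which serves it to the browser under test
        testcase["data"] = "<html>case %d</html>" % next(self.fuzzer)

adapter = ExternalFuzzerAdapter()
adapter.setup(None, None)
case = {}
adapter.generate(case, None)
print(case["data"])  # <html>case 0</html>
```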

To make full use of available resources on any given machine, we run multiple instances of Grizzly in parallel.

For each fuzzer, we create containers to encapsulate the configuration required to run it. These exist in the Orion monorepo. Each fuzzer has a configuration with deployment specifics and resource allocation depending on the priority of the fuzzer. Taskcluster continuously deploys these configurations to distribute work and manage fuzzing nodes.

Grizzly Target handles the detection of issues such as hangs, crashes, and other defects. Target is an interface between Grizzly and the browser. Detected issues are automatically packaged and reported to a FuzzManager server. The FuzzManager server provides automation and a UI for triaging the results.

Other more targeted fuzzers use JS shell and libFuzzer based targets use the fuzzing interface. Many third-party libraries are also fuzzed in OSS-Fuzz. These deserve mention but are outside of the scope of this post.

Managing results

Running multiple fuzzers against various targets at scale generates a large amount of data. These crashes are not suitable for direct entry into a bug tracking system like Bugzilla. We have tools to manage this data and get it ready to report.

The FuzzManager client library filters out crash variations and duplicate results before they leave the fuzzing node. Unique results are reported to a FuzzManager server. The FuzzManager web interface allows for the creation of signatures that help group reports together in buckets to aid the client in detecting duplicate results.

Fuzzers commonly generate test cases that are hundreds or even thousands of lines long. FuzzManager buckets are automatically scanned to queue reduction tasks in Taskcluster. These reduction tasks use Grizzly Reduce and Lithium to apply different reduction strategies, often removing the majority of the unnecessary data. Each bucket is continually processed until a successful reduction is complete. Then an engineer can do a final inspection of the minimized test case and attach it to a bug report. The final result is often used as a crash test in the Firefox test suite.

Animation showing an example testcase reduction using Grizzly

Code coverage of the fuzzer is also measured periodically. FuzzManager is used again to collect code coverage data and generate coverage reports.

Creating optimal bug reports

Our goal is to create actionable bug reports to get issues fixed as soon as possible while minimizing overhead for developers.

We do this by providing:

  • crash information such as logs and a stack trace
  • build and environment information
  • reduced test case
  • Pernosco session
  • regression range (bisections via Bugmon)
  • verification via Bugmon

Grizzly Replay is a tool that forms the basic execution engine for Bugmon and Grizzly Reduce, and makes it easy to collect rr traces to submit to Pernosco. It makes re-running browser test cases easy both in automation and for manual use. It simplifies working with stubborn test cases and test cases that trigger multiple results.

As mentioned, we have also been making use of Pernosco. Pernosco is a tool that provides a web interface for rr traces and makes them available to developers without the need for direct access to the execution environment. It is an amazing tool developed by a company of the same name which significantly helps to debug massively parallel applications. It is also very helpful when test cases are too unreliable to reduce or attach to bug reports. Creating an rr trace and uploading it can make stalled bug reports actionable.

The combination of Grizzly and Pernosco has had the added benefit of making infrequent, hard-to-reproduce issues actionable. A test case for a very inconsistent issue can be run hundreds or thousands of times until the desired crash occurs under rr. The trace is automatically collected and ready to be submitted to Pernosco and fixed by a developer, instead of being passed over because it was not actionable.

How we interact with developers

To request that new features get a proper assessment, the fuzzing team can be reached at fuzzing@mozilla.com or on Matrix. This is also a great way to get in touch for any reason. We are happy to help you with any fuzzing related questions or ideas. We will also reach out when we receive information about new initiatives and features that we think will require attention. Once fuzzing of a component begins, we communicate mainly via Bugzilla. As mentioned, we strive to open actionable issues or enhance existing issues logged by others.

Bugmon is used to automatically bisect regression ranges. This notifies the appropriate people as quickly as possible and verifies bugs once they are marked as FIXED. Closing a bug automatically removes it from FuzzManager, so if a similar bug finds its way into the code base, it can be identified again.

Some issues found during fuzzing will prevent us from effectively fuzzing a feature or build variant. These are known as fuzz-blockers, and they come in a few different forms. These issues may seem benign from a product perspective, but they can block fuzzers from targeting important code paths or even prevent fuzzing a target altogether. Prioritizing these issues appropriately and getting them fixed quickly is very helpful and much appreciated by the fuzzing team.

PrefPicker manages the set of Firefox preferences used for fuzzing. When adding features behind a pref, consider adding it to the PrefPicker fuzzing template to have it enabled during fuzzing. Periodic audits of the PrefPicker fuzzing template can help ensure areas are not missed and resources are used as effectively as possible.

Measuring success

As in other fields, measurement is a key part of evaluating success. We leverage the meta bug feature of Bugzilla to help us keep track of the issues identified by fuzzers. We strive to have a meta bug per fuzzer and for each new component fuzzed.

For example, the meta bug for Domino lists all the issues (over 1100!) identified by this tool. Using this Bugzilla data, we are able to show the impact of our various fuzzers over the years.

Bar graph showing number of bugs reported by Domino over time

Number of bugs reported by Domino over time

These dashboards help evaluate the return on investment of a fuzzer.

Conclusion

There are many components in the fuzzing pipeline. These components are constantly evolving to keep up with changes in debugging tools, execution environments, and browser internals. Developers are always adding, removing, and updating browser features. Bugs are being detected, triaged, and logged. Keeping everything running continuously and targeting as much code as possible requires constant and ongoing efforts.

If you work on Firefox, you can help by keeping us informed of new features and initiatives that may affect or require fuzzing, by prioritizing fuzz-blockers, and by curating fuzzing preferences in PrefPicker. If fuzzing interests you, please take part in the bug bounty program. Our tools are available publicly, and we encourage bug hunting.

The Mozilla Blog: Behind the design of the fresh new Firefox coming June 1

A new Firefox is coming your way on June 1 with a fresh look designed for today’s modern life online. We pored over the browser’s user interface pixel by pixel, measured the value users were getting from our massive library of features, and ultimately streamlined the Firefox experience to be clean, inviting and easier to use on every device.

Things are looking different in 2021

While we make small design improvements regularly, major overhauls are big undertakings that don’t come around all that often since change can be…hard. We took the Firefox redesign project to heart with the goal of making your life a little better so you can worry less, get more done and feel good about what’s on the horizon. That’s the energy we all need in 2021.

Get ready! The new Firefox starts rolling out on June 1 for all devices — desktop and mobile. Download and install Firefox for desktop, Android and iOS today so you’ll be one of the first to find out when the fresh new version goes live.

The post Behind the design of the fresh new Firefox coming June 1 appeared first on The Mozilla Blog.

Daniel Stenberg: “I could rewrite curl”

Collected quotes and snippets from people publicly sneering at or belittling what curl is, explaining how easy it would be to make a replacement in no time with no effort, or generally not being very helpful.

These are statements made seriously. For all I know, they were not ironic. If you find others to add here, please let me know!

Listen. I’ve been young once too and I’ve probably thought similar things myself in the past. But there’s a huge difference between thinking and saying. Quotes included here are mentioned for our collective amusement.

I can do it in less than 100 lines

[source]

I can do it in a three day weekend

(The yellow marking in the picture was added by me.)

[source]

No reason to be written in C

Maybe not exactly in the same category as the two above, but still with a significant “I know this” vibe:

[source]

We sold a curl exploit

Some people deliberately decide to play for the other team.

[source]

This isn’t a big deal

It’s easy to say things on Twitter…

This tweet was removed by its author after I and others replied to it so I cannot link it. The name has been blurred on purpose because of this.

Discussions

Hacker news, Reddit

Hacks.Mozilla.Org: Improving Firefox stability on Linux

Roughly a year ago at Mozilla we started an effort to improve Firefox stability on Linux. This effort quickly became an example of good synergies between FOSS projects.

Every time Firefox crashes, the user can send us a crash report which we use to analyze the problem and hopefully fix it:

A screenshot of a tab that just crashed

This report contains, among other things, a minidump: a small snapshot of the process memory at the time it crashed. This includes the contents of the processor’s registers as well as data from the stacks of every thread.

Here’s what this usually looks like:

If you’re familiar with core dumps, then minidumps are essentially a smaller version of them. The minidump format was originally designed at Microsoft, and Windows has a native way of writing out minidumps. On Linux, we use Breakpad for this task. Breakpad originated at Google for their software (Picasa, Google Earth, etc.), but we have forked it, heavily modified it for our purposes, and recently partially rewritten it in Rust.

Once the user submits a crash report, we have a server-side component – called Socorro – that processes it and extracts a stack trace from the minidump. The reports are then clustered based on the top method name of the stack trace of the crashing thread. When a new crash is spotted we assign it a bug and start working on it. See the picture below for an example of how crashes are grouped:

The snapshot of a stack trace as displayed on crash-stats.mozilla.com

To extract a meaningful stack trace from a minidump two more things are needed: unwinding information and symbols. The unwinding information is a set of instructions that describe how to find the various frames in the stack given an instruction pointer. Symbol information contains the names of the functions corresponding to a given range of addresses as well as the source files they come from and the line numbers a given instruction corresponds to.

In regular Firefox releases, we extract this information from the build files and store it into symbol files in Breakpad standard format. Equipped with this information Socorro can produce a human-readable stack trace. The whole flow can be seen below:
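As a rough illustration of how a symbol file turns an address back into a name, here is a minimal sketch that resolves an address against Breakpad `FUNC` records. The sample records are made up, and real symbol files carry additional record types and optional fields that this ignores:

```python
# Sketch: symbolicate an address using simplified Breakpad FUNC records
# ("FUNC <hex addr> <hex size> <hex param_size> <name>"). Resolving a
# frame is then a sorted-range lookup over the function start addresses.
import bisect

def parse_funcs(sym_text):
    """Collect (start, size, name) from FUNC records, sorted by start."""
    funcs = []
    for line in sym_text.splitlines():
        if line.startswith("FUNC "):
            addr, size, _param, name = line.split(" ", 4)[1:]
            funcs.append((int(addr, 16), int(size, 16), name))
    funcs.sort()
    return funcs

def symbolicate(funcs, address):
    """Return the function name covering `address`, or the raw address."""
    i = bisect.bisect_right([start for start, _, _ in funcs], address) - 1
    if i >= 0:
        start, size, name = funcs[i]
        if start <= address < start + size:
            return name
    return hex(address)  # no matching symbol: fall back to the address

# Made-up symbol data for illustration only.
SYMBOLS = """MODULE Linux x86_64 0123456789ABCDEF0 libxul.so
FUNC 1000 40 0 nsDocShell::LoadURI(nsIURI*)
FUNC 1040 20 0 mozilla::dom::Document::Open()
"""
funcs = parse_funcs(SYMBOLS)
# symbolicate(funcs, 0x1010) resolves inside the first function
```

The fallback case is exactly what produces the unhelpful addresses-only traces shown below when Socorro lacks the right symbol files.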

A graphical representation of our crash reporting flow, from the capture on the client to processing on the server

Here’s an example of a proper stack trace:

A fully symbolicated stack trace

If Socorro doesn’t have access to the appropriate symbol files for a crash the resulting trace contains only addresses and isn’t very helpful:

A stack trace showing raw addresses instead of symbols

When it comes to Linux, things work differently than on other platforms: most of our users do not install our builds; they install the Firefox version that comes packaged for their favourite distribution.

This posed a significant problem when dealing with stability issues on Linux: for the majority of our crash reports, we couldn’t produce high-quality stack traces because we didn’t have the required symbol information. The Firefox builds that submitted the reports weren’t done by us. To make matters worse, Firefox depends on a number of third-party packages (such as GTK, Mesa, FFmpeg, SQLite, etc.). We wouldn’t get good stack traces if a crash occurred in one of these packages instead of Firefox itself because we didn’t have symbols for them either.

To address this issue, we started scraping debug information for Firefox builds and their dependencies from the package repositories of multiple distributions: Arch, Debian, Fedora, OpenSUSE and Ubuntu. Since every distribution does things a little bit differently, we had to write distro-specific scripts that would go through the list of packages in their repositories and find the associated debug information (the scripts are available here). This data is then fed into a tool that extracts symbol files from the debug information and uploads it to our symbol server.

With that information now available, we were able to analyze >99% of the crash reports we received from Linux users, up from less than 20%. Here’s an example of a high-quality trace extracted from a distro-packaged version of Firefox. We haven’t built any of the libraries involved, yet the function names are present, and so are the file and line numbers of the affected code:

A fully symbolicated stack trace including external code

The importance of this cannot be overstated: Linux users tend to be more tech-savvy and more likely to help us solve issues, so all those reports were a treasure trove for improving stability even on other operating systems (Windows, Mac, Android, etc.). In particular, we often identified Fission bugs on Linux first.

The first effect of this newfound ability to inspect Linux crashes is that it greatly sped up our response time to Linux-specific issues, and often allowed us to identify problems in the Nightly and Beta versions of Firefox before they reached users on the release channel.

We could also quickly identify issues in bleeding-edge components such as WebRender, WebGPU, Wayland and VA-API video acceleration; oftentimes providing a fix within days of the change that triggered the issue.

We didn’t stop there: we could now identify distro-specific issues and regressions. This allowed us to inform package maintainers of the problems and have them resolved quickly. For example, we were able to identify a Debian-specific issue only two weeks after it was introduced and fix it right away. The crash was caused by a modification Debian made to one of Firefox’s dependencies that could cause a crash on startup; it’s filed under bug 1679430 if you’re curious about the details.

Another good example comes from Fedora: they had been using their own crash reporting system (ABRT) to catch Firefox crashes in their Firefox builds, but given the improvements on our side they started sending Firefox crashes our way instead.

We could also finally identify regressions and issues in our dependencies. This allowed us to communicate the issues upstream and sometimes even contributed fixes, benefiting both our users and theirs.

For example, at some point, Debian updated the fontconfig package by backporting an upstream fix for a memory leak. Unfortunately, the fix contained a bug that would crash Firefox and possibly other software too. We spotted the new crash only six days after the change landed in Debian sources and only a couple of weeks afterwards the issue had been fixed both upstream and in Debian. We sent reports and fixes to other projects too including Mesa, GTK, glib, PCSC, SQLite and more.

Nightly versions of Firefox also include a tool to spot security-sensitive issues: the probabilistic heap checker. This tool randomly pads a handful of memory allocations in order to detect buffer overflows and use-after-free accesses. When it detects one of these, it sends us a very detailed crash report. Given Firefox’s large user-base on Linux, this allowed us to spot some elusive issues in upstream projects and report them.

This also exposed some limitations in the tools we use for crash analysis, so we decided to rewrite them in Rust largely relying on the excellent crates developed by Sentry. The resulting tools were dramatically faster than our old ones, used a fraction of the memory and produced more accurate results. Code flowed both ways: we contributed improvements to their crates (and their dependencies) while they expanded their APIs to address our new use-cases and fixed the issues we discovered.

Another pleasant side-effect of this work is that Thunderbird now also benefits from the improvement we made for Firefox.

This goes to show how collaboration between FOSS projects not only benefits their users but ultimately improves the whole ecosystem and the broader community that relies on it.

Special thanks to Calixte Denizet, Nicholas Nethercote, Jan Auer and all the others that contributed to this effort!

The post Improving Firefox stability on Linux appeared first on Mozilla Hacks - the Web developer blog.

The Mozilla Blog: How to actually enjoy being online again

COVID-19 accelerated changes in our behaviors that drove our lives increasingly online. Screens became our almost exclusive discovery point to the world. There were some benefits to this rapid shift. For many people, work was able to continue relatively uninterrupted at home, almost anything could be ordered online and delivered, and we were able to still connect with people and celebrate life moments over video chat. However, there were also points where our screens felt like something we couldn’t escape. Zoom happy hours faded out as it became clear that drinks with friends couldn’t really be replicated online. We flirted with the idea of doing a Chrissy Teigen and quitting Twitter (only to quickly miss it and come back). Plus, despite the fact we were languishing, our online lives became more hectic than ever.

Turning on our computer or phone at times became daunting, with an avalanche of Zoom meetings, calendar invites, FaceTimes from everyone we’ve ever met and near-constant Slack messages. Because of this onslaught, we worked to create a new Firefox experience so your time online could be more efficient, calm and hopefully enjoyable. We observed the way you used the browser by studying billions of clicks to incorporate your habits into our latest release.

Now that things are starting to open up allowing for in-person interaction, we have a moment to redefine what our online life looks like for the rest of the year. And, the truth is, we will be online. And there is a lot of really f**king cool stuff online. The internet does make our life better. As our feeling of choice comes back again, here are some tips to help us all choose joy when online.

1. Create a space for what you consider the best of the web

When the internet is at its best, it reminds us that we are not alone. Find your corners of the internet, your online community, that remind you how big and exciting the world is. For some that can look like a wild comment section, a favorite Discord, an online class where you learn something new, or a Pocket collection that you revisit whenever you want to be inspired. Create a bookmark folder called ‘things to make me laugh’ so you can revisit these sites with ease. Less time on the internet is not the key to being happy; spending time doing things that feel enriching and exciting to you is. Schedule time on these platforms just like you would schedule time with loved ones or for taking care of yourself. If you don’t prioritize time on the good corners of the internet, no one else is going to do it for you.

2. Turn your phone to grayscale (at least temporarily)

How often do you find yourself mindlessly reaching for your phone and then scrolling through for no reason? You look down and it’s been an hour and you didn’t accomplish or learn anything, or even really find yourself entertained. Your time matters, and app logos and colors are designed and used to capture your attention for longer. The colors stimulate your brain, so by turning your phone to grayscale and stripping the colors, you may be less motivated to check your phone as often.

This doesn’t need to be a permanent change. Seeing the experience in color is its own form of joy, but after a week or so of having your phone in grayscale, you may find yourself reaching for your phone less. Consider it a reset to only scrolling when you actually want to.

Here’s how to change your phone to grayscale on an iPhone and an Android phone.

iPhone: Settings > Accessibility > Display & Text Size > Color Filters > turn Color Filters on > tap Grayscale

Android: Settings > Digital Wellbeing & Parental Controls > Wind Down > Grayscale

3. Download Firefox for the fresh new experience coming June 1

The Firefox release coming out on June 1 understands that after this last year we all need some simplicity. The Firefox browser is being redesigned to bring you a more modern and calmer web experience. Wherever visual noise — like notifications, menus and buttons — could be eliminated, it was. Subtle design changes add up to make a big visual difference with new tabs that are easier to navigate, extra spacing, and lighter typography. You know those auto-playing videos that normally disrupt your browsing? Instead of having to click on every tab to see where the sound is coming from, you can scroll your tab bar, look for the sound-on icon, and turn it off with one tap. Plus, you know that you have more privacy and security built into your web experience with Firefox’s SmartBlock feature and Total Cookie Protection. You can sit back and enjoy the fresh new experience with one less thing to worry about.

4. Download customizable add-ons for a more personal internet experience

This is your online experience, what do you want to prioritize? Once you figure that out, there are endless customizable add-ons that can help. For many of us, we want our online experience to be a little less overwhelming. Having access to anything we ever wanted is good, but also comes with many distractions, so then we have to spend more time online to finish our actual tasks. There are Firefox add-ons that can remedy this instantly.

Or maybe you just want to bring something fun and colorful to your online experience? There are add-ons for that, too, from Firefox themes to add-ons that introduce cute cats to every new tab you open.

5. Go ahead and close your computer

It’s been a year and a half unlike any other, and the call of going outside, meeting people in person and closing your screen is one that you should answer when you’re ready. The web makes our world bigger, but it was never meant to be the only way we interact with people outside of our household. We should all be having experiences offline, which will only make signing back on even better. Those Zoom calls with our family members and friends across the country don’t need to stop anytime soon; the joy will just be so much more knowing that you could plan a trip to see them — you are choosing to catch up via video, not being forced to.

Sign off for a while, Firefox will be ready for you when you return.

The post How to actually enjoy being online again appeared first on The Mozilla Blog.

This Week In Rust: This Week in Rust 391

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

No newsletters or research papers this week.

Official
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs
Miscellaneous

Crate of the Week

This week's crate is arraygen, a derive proc macro to generate arrays from structs.

Thanks to José Manuel Barroso Galindo for the nomination

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

No issues were proposed for CfP.

Updates from Rust Core

333 pull requests were merged in the last week

Rust Compiler Performance Triage

A lot of noise in the benchmark results this week. We are discussing (zulip archive, live zulip) how best to update the benchmark set to eliminate the noisy cases that are bouncing around. Beyond that, some large improvements to a few individual benchmarks.

The memory usage (max-rss) seemed largely flat, except for an upward trend on tuple-stress that indicates 4% more memory usage than a week ago.

Triage done by @pnkfelix. Revision range: 382f..25a2

5 Regressions, 7 Improvements, 2 Mixed; 1 of them in rollups

Full report here.

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs
Tracking Issues & PRs

New RFCs

Upcoming Events

Online

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

Protocol Labs

Amazon Web Services

Techno Creatives

Paige

ANIXE

NZXT

Zondax

Ockam

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

I often think about Rust as a process and community for developing a programming language, rather than as a programming language itself.

throwaway894345 on hacker news

Thanks to Krishna Sundarram for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, and cdmistman.

Discuss on r/rust

The Mozilla Blog: A fresh new Firefox for desktop and mobile

You don’t last 20 years as a browser without making improvements along the way. So on June 1, that’s exactly what we’re doing. Get ready for a fresh new Firefox.

The post A fresh new Firefox for desktop and mobile appeared first on The Mozilla Blog.

Mozilla Open Policy & Advocacy Blog: Mozilla publishes position paper on EU Digital Services Act

In December 2020 the European Commission published the draft EU Digital Services Act. The law seeks to establish a new paradigm for tech sector regulation, and we see it as a crucial opportunity to address many of the challenges holding back the internet from what it should be. As EU lawmakers start to consider amendments and improvements to the draft law, today we’re publishing our substantive perspectives and recommendations to guide those deliberations.

We are encouraged that the draft DSA includes many of the policy recommendations that Mozilla and our allies had advocated for in recent years. For that we commend the European Commission. However, many elements of the DSA are novel and complex, and so there is a need for elaboration and clarification in the legislative mark-up phase. We believe that with targeted amendments the DSA has the potential to serve as an effective, balanced, and future-proof legal framework.

Given the sheer breadth of the DSA, we’re choosing to focus on the elements where we believe we have a unique contribution to make, and where we believe the DSA can constitute a real paradigm shift. That is not to say we don’t have thoughts on the other elements of the proposal, and we look forward to supporting our allies in industry and civil society who are focusing their efforts elsewhere.

Broadly speaking, our position can be summarised as follows:

  • Asymmetric obligations for the largest platforms 
      • We welcome the DSA’s approach of making very large platforms subject to enhanced regulation compared to the rest of the industry, but we suggest tweaks to the scope and definitions.
      • The definition of these so-called Very Large Online Platforms (VLOPs) shouldn’t be based solely on quantitative criteria, but possibly on qualitative criteria (e.g. taking risk into account) as well, in anticipation of extraordinary edge cases where a service that meets the quantitative VLOP standard is in reality very low-risk in nature.
  • Systemic transparency
      • We welcome the DSA’s inclusion of public-facing ad archive APIs and the provisions on access to data for public interest researchers.
      • We call for the advertising transparency elements to take into account novel forms of paid influence, and for the definition of ‘public interest researchers’ to be broader than just university faculty.
  • A risk-based approach to content responsibility
      • We welcome this approach, but suggest more clarification on the types of risks to be assessed and how those assessments are undertaken.
  • Auditing and oversight
      • We welcome the DSA’s third-party auditing requirement but we provide recommendations on how it can be more than just a tick-box exercise (e.g. through standardisation; clarity on what is to be audited; etc).
      • We reiterate the call for oversight bodies to be well-resourced and staffed with the appropriate technical expertise.

This position paper is the latest milestone in our long-standing engagement on issues of content regulation and platform responsibility in the EU. In the coming months we’ll be ramping up our efforts further, and look forward to supporting EU lawmakers in turning these recommendations into reality.

Ultimately, we firmly believe that if developed properly, the DSA can usher in a new global paradigm for tech regulation. At a time when lawmakers from Delhi to Washington DC are grappling with questions of platform accountability and content responsibility, the DSA is indeed a once-in-a-generation opportunity.

The post Mozilla publishes position paper on EU Digital Services Act appeared first on Open Policy & Advocacy.

Mozilla Security Blog: Introducing Site Isolation in Firefox

When two major vulnerabilities known as Meltdown and Spectre were disclosed by security researchers in early 2018, Firefox promptly added security mitigations to keep you safe. Going forward, however, it was clear that with the evolving techniques of malicious actors on the web, we needed to redesign Firefox to mitigate future variations of such vulnerabilities and to keep you safe when browsing the web!

We are excited to announce that Firefox’s new Site Isolation architecture is coming together. This fundamental redesign of Firefox’s Security architecture extends current security mechanisms by creating operating system process-level boundaries for all sites loaded in Firefox for Desktop. Isolating each site into a separate operating system process makes it even harder for malicious sites to read another site’s secret or private data.

We are currently finalizing Firefox’s Site Isolation feature by allowing a subset of users to benefit from this new security architecture on our Nightly and Beta channels, and we plan to roll it out to more of our users later this year. If you are as excited about it as we are and would like to try it out, follow these steps:

To enable Site Isolation on Firefox Nightly:

  1. Navigate to about:preferences#experimental
  2. Check the “Fission (Site Isolation)” checkbox to enable.
  3. Restart Firefox.

To enable Site Isolation on Firefox Beta or Release:

  1. Navigate to about:config.
  2. Set `fission.autostart` pref to `true`.
  3. Restart Firefox.

With this monumental change of secure browser design, users of Firefox Desktop benefit from protections against future variants of Spectre, resulting in an even safer browsing experience. If you aren’t a Firefox user yet, you can download the latest version here and if you want to know all the technical details about Firefox’s new security architecture, you can read it here.

The post Introducing Site Isolation in Firefox appeared first on Mozilla Security Blog.

Hacks.Mozilla.Org: Introducing Firefox’s new Site Isolation Security Architecture

Like any web browser, Firefox loads code from untrusted and potentially hostile websites and runs it on your computer. To protect you against new types of attacks from malicious sites and to meet the security principles of Mozilla, we set out to redesign Firefox on desktop.

Site Isolation builds upon a new security architecture that extends current protection mechanisms by separating (web) content and loading each site in its own operating system process.

This new security architecture allows Firefox to completely separate code originating from different sites and, in turn, defend against malicious sites trying to access sensitive information from other sites you are visiting.

In more detail, whenever you open a website and enter a password, a credit card number, or any other sensitive information, you want to be sure that this information is kept secure and inaccessible to malicious actors.

As a first line of defence, Firefox enforces a variety of security mechanisms, e.g. the same-origin policy, which prevents adversaries from accessing such information when it is loaded into the same application.

Unfortunately, the web evolves and so do the techniques of malicious actors. To fully protect your private information, a modern web browser not only needs to provide protections on the application layer but also needs to entirely separate the memory space of different sites – the new Site Isolation security architecture in Firefox provides those security guarantees.

Why separating memory space is crucial

In early 2018, security researchers disclosed two major vulnerabilities, known as Meltdown and Spectre. The researchers exploited fundamental assumptions about modern hardware execution, and were able to demonstrate how untrusted code can access and read memory anywhere within a process’ address space, even in a language as high level as JavaScript (which powers almost every single website).

While band-aid countermeasures deployed by OS, CPU and major web browser vendors quickly neutralized the attacks, they came with a performance cost and were designed to be temporary. Back when the attacks were announced publicly, Firefox teams promptly reduced the precision of high-precision timers and disabled APIs that allowed such timers to be implemented to keep our users safe.

Going forward, it was clear that we needed to fundamentally re-architect the security design of Firefox to mitigate future variations of such vulnerabilities.

Let’s take a closer look at the following example which demonstrates how an attacker can access your private data when executing a Spectre-like attack.

Two hand-drawn diagrams, with the first labeled “Without Site Isolation, we might load both of these sites in the same process :( ”. Two browser windows with partially visible sites “attacker.com” and “my-bank” partial site, are loaded in the same process - process 16. On top of the banking window, there is a cartoon face that looks happy, personifying the browser. The attacker site window contains a face that is looking at the banking window, with a mischievous smile. In the second diagram, labeled “Attacker.com executes a sophisticated attack”, we see the same two browser windows loaded in process 16 and a 1 column table labelled “memory where my-bank’s data is stored in process 16” underneath the banking window. It has two entries: “credit card info” and “login password”. A hand extending from the malicious site reaches toward the table (aka memory of the second window), signifying that the malicious site is able to access sensitive data belonging to the banking window because it is in the same process. The personified browser character is looking towards the malicious site, and exhibits feelings of concern and worry, with exclamation marks floating around the face.

Without Site Isolation, Firefox might load a malicious site in the same process as a site that is handling sensitive information. In the worst case scenario, a malicious site might execute a Spectre-like attack to gain access to memory of the other site.

Suppose you have two websites open – www.my-bank.com and www.attacker.com. As illustrated in the diagram above, with current web browser architecture it’s possible that web content from both sites ends up being loaded into the same operating system process. To make things worse, using a Spectre-like attack would allow attacker.com to query and access data from the my-bank.com website.

Despite existing security mitigations, the only way to provide memory protections necessary to defend against Spectre-like attacks is to rely on the security guarantees that come with isolating content from different sites using the operating system’s process separation.

Background on Current Browser Architecture

Upon being launched, the Firefox web browser internally spawns one privileged process (also known as the parent process) which then launches and coordinates activities of multiple (web) content processes – the parent process is the most privileged one, as it is allowed to perform any action that the end-user can.

This multi-process architecture allows Firefox to separate more complicated or less trustworthy code into processes, most of which have reduced access to operating system resources or user files. As a consequence, less privileged code will need to ask more privileged code to perform operations which it itself cannot.

For example, a content process will have to ask the parent process to save a download because it does not have the permissions to write to disk. Put differently, if an attacker manages to compromise the content process it must additionally (ab)use one of the APIs to convince the parent process to act on its behalf.

In more detail (as of April 2021), Firefox’s parent process launches a fixed number of processes: eight web content processes, up to two additional semi-privileged web content processes, and four utility processes for web extensions, GPU operations, networking, and media decoding.

While separating content into currently eight web content processes already provides a solid foundation, it does not meet the security standards of Mozilla because it allows two completely different sites to end up in the same operating system process and, therefore, share process memory. To counter this, we are targeting a Site Isolation architecture that loads every single site into its own process.

A hand drawn diagram titled “Loading Sites with Current Browser Architecture”. On the left hand-side, from top to bottom, there are four browser windows with different sites loaded. The first window, www.my-bank.com, is loaded in process 3. The second window is loaded in process 4. The third window is loaded in process 5. The last window with a url - “www.attacker.com” - is loaded in process 3, same as the first window. On the right hand-side of the drawing, there is a table titled “List of Content Processes”. The table contains two columns: “site” and “pid”, which stands for process id. In the table, the first window, my-bank.com, and the last attacker.com window have the same PID.

Without Site Isolation, Firefox does not separate web content into different processes and it’s possible for different sites to be loaded in the same process.

Imagine you open some websites in different tabs: www.my-bank.com, www.getpocket.com, www.mozilla.org and www.attacker.com. As illustrated in the diagram above, it’s entirely possible that my-bank.com and attacker.com end up being loaded in the same operating system process, which would result in them sharing process memory. As we saw in the previous example, with this separation model, an attacker could perform a Spectre-like attack to access my-bank.com’s data.

A hand drawn diagram titled “Loading Subframes With Current Browser Architecture”. There is one browser window drawn. The window, www.attacker.com, embeds a page from a different site, www.my-bank.com. The top level page and the subframe are loaded in the same process - process 3.

Without Site Isolation, the browser will load embedded pages, such as a bank page or an ad, in the same process as the top level document.

While it is straightforward to picture sites loaded into different tabs, sites can also be embedded into other sites through so-called subframes. If you have ever visited a website with ads on it, those ads were probably subframes. If you ever had a personal website and embedded a YouTube video with your favourite song in it, that video was loaded in a subframe.

In a more dangerous scenario, a malicious site could embed a legitimate site within a subframe and try to trick you into entering sensitive information. With the current architecture, if a page contains any subframes from a different site, they will generally be in the same process as the outer tab.

This results in both the page and all of its subframes sharing process memory, even if the subframes originate from different sites. In the case of a successful Spectre-like attack, a top-level site might access sensitive information it should not have access to from a subframe it embeds (and vice-versa) – the new Site Isolation security architecture within Firefox will effectively make it even harder for malicious sites to execute such attacks.

How Site Isolation Works in Firefox

When enabling Site Isolation in Firefox for desktop, each unique site is loaded in a separate process. In more detail, loading “https://mozilla.org” and also loading “http://getpocket.com” will cause Site Isolation to separate the two sites into their own operating system process because they are not considered “same-site”.

Similarly, “https://getpocket.com” (note the difference between http and https) will also be loaded into a separate process – so ultimately all three sites will load in different processes.

For the sake of completeness, there are some domains such as “.github.io” or “.blogspot.com” that would be too general to identify a “site”. This is why we use a community-maintained list of effective top level domains (eTLDs) to aid in differentiating between sites.

Since “github.io” is listed as an eTLD, “a.github.io”  and “b.github.io” would load in different processes. In our running examples, websites “www.my-bank.com” and “www.attacker.com” are not considered “same-site” with each other and will be isolated in separate processes.
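The grouping rule above can be sketched in a few lines of JavaScript. This is a toy illustration, not Firefox’s implementation: the real browser consults the full community-maintained Public Suffix List, whereas here a tiny hard-coded set of suffixes stands in for it, and the `siteFor` helper is purely hypothetical.

```javascript
// Toy eTLD set standing in for the Public Suffix List.
const eTLDs = new Set(["com", "org", "github.io", "blogspot.com"]);

// A "site" is the scheme plus the registrable domain
// (the eTLD plus one more label to its left).
function siteFor(url) {
  const { protocol, hostname } = new URL(url);
  const labels = hostname.split(".");
  for (let i = 0; i < labels.length; i++) {
    const suffix = labels.slice(i).join(".");
    if (eTLDs.has(suffix)) {
      const registrable = labels.slice(Math.max(i - 1, 0)).join(".");
      return `${protocol}//${registrable}`;
    }
  }
  return `${protocol}//${hostname}`;
}
```

Under this rule, “https://a.github.io” and “https://b.github.io” map to different sites (so they get different processes), while “https://www.my-bank.com” and “https://my-bank.com” map to the same one.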

Two hand-drawn diagrams, with the first labeled “With Site Isolation, we will load these sites in different processes”. It shows two browser windows, one www.attacker.com , loaded in process 5, and www.my-bank.com loaded in process 16. On top of the banking window, there is a cartoon face that looks happy, personifying the browser. In contrast, the webpage area of the www.attacker.com window, contains a face that is looking at the banking window, with a mischievous smile. In the second diagram, labeled “Attacker.com tries to execute a sophisticated attack”, we see the same two browser windows. There is a 1 column table labelled “memory where my-bank’s data is stored in process 16” underneath the banking window . It has two entries: “credit card info” and “login password”. A hand extending from the malicious site tries to reach towards the table (aka memory of the banking window), but is unable to reach it, due to the process boundary. The face of the malicious site is frowning and looks unhappy, while the face, representing the browser, continues to look happy and carefree. The second window’s data is safe from the malicious site.

With Site Isolation, Firefox loads each site in its own process, thereby isolating their memory from each other, and relies on security guarantees of the operating system.

Suppose now, you open the same two websites: www.attacker.com and www.my-bank.com, as seen in the diagram above. Site isolation recognizes that the two sites are not “same-site” and hence the site isolation architecture will completely separate content from attacker.com and my-bank.com into separate operating system processes.

This process separation of content from different sites provides the memory protections required to allow for a secure browsing experience, making it even harder for sites to execute Spectre-like attacks, and, ultimately, provide a secure browsing experience for our users.

The window, www.attacker.com, embeds a page from a different site, www.my-bank.com. The top level page is loaded in process 3 and the subframe corresponding to the bank site is loaded in process 5. The two sites are, thus, isolated from each other in different operating system processes.

With Site Isolation, Firefox loads subframes from different sites in their own processes.

The separation of two different sites loaded into subframes works just like the separation of sites loaded into two different tabs. Let’s revisit the earlier example where pages contained subframes: with Site Isolation, subframes that are not “same-site” with the top level page will load in a different process.

In the diagram above, we see that the page www.attacker.com embeds a page from www.my-bank.com and loads in a different process. Having a top level document and subframes from different sites loaded in their own processes ensures their memory is isolated from each other, yielding profound security guarantees.

Additional Benefits of Site Isolation

With Site Isolation architecture in place, we are able to bring additional security hardening to Firefox to keep you and your data safe. Besides providing an extra layer of defence against possible security threats, Site Isolation brings other wins:

  • By placing more pages into separate processes, we can ensure that doing heavy computation or garbage collection on one page will not degrade the responsiveness of pages in other processes.
  • Using more processes to load websites allows us to spread work across many CPU cores and use the underlying hardware more efficiently.
  • Due to the finer-grained separation of sites, a subframe or a tab crashing will not affect websites loaded in different processes, resulting in an improved application stability and better user experience.

Going Forward

We are currently testing Site Isolation with a subset of users on the Nightly and Beta desktop channels and will be rolling it out to more desktop users soon. However, if you want to benefit from the improved security architecture right away, you can enable it by downloading the Nightly or Beta browser from here and following these steps:

To enable Site Isolation on Firefox Nightly:

  1. Navigate to about:preferences#experimental
  2. Check the “Fission (Site Isolation)” checkbox to enable.
  3. Restart Firefox.

To enable Site Isolation on Firefox Beta or Release:

  1. Navigate to about:config.
  2. Set `fission.autostart` pref to `true`.
  3. Restart Firefox.

For technical details on how we group sites and subframes together, you can check out our new process manager tool at “about:processes” (type it into the address bar) and follow the project at  https://wiki.mozilla.org/Project_Fission.

With Site Isolation enabled on Firefox for Desktop, Mozilla takes its security guarantees to the next level and protects you against a new class of malicious attacks by relying on memory protections of OS-level process separation for each site. If you are interested in contributing to Mozilla’s open-source projects, you can help us by filing bugs here if you run into any problems with Site Isolation enabled.

Acknowledgements

Site Isolation (Project Fission), has been a massive multi-year project. Thank you to all of the talented and awesome colleagues who contributed to this work! It’s a privilege to work with people who are passionate about building the web we want: free, inclusive, independent and secure! In particular, I would like to thank Neha Kochar, Nika Layzell, Mike Conley, Melissa Thermidor, Chris Peterson, Kashav Madan, Andrew McCreight, Peter Van der Beken, Tantek Çelik and Christoph Kerschbaumer for their insightful comments and discussions.  Finally, thank you to Morgan Rae Reschenberg for helping me craft alt-text to meet the high standards of our web accessibility principles and allow everyone on the internet to easily gather the benefits provided by Site Isolation.

The post Introducing Firefox’s new Site Isolation Security Architecture appeared first on Mozilla Hacks - the Web developer blog.

Spidermonkey Development BlogErgonomic Brand Checks will ship with Firefox 90

When programming with Private Fields and methods, it can sometimes be desirable to check if an object has a given private field. While the semantics of private fields allow doing that check by using try...catch, the Ergonomic Brand checks proposal provides a simpler syntax, allowing one to simply write #field in o.

As an example, the following class uses ergonomic brand checks to provide a more helpful custom error.

class Scalar {
  #length = 0;

  add(s) {
    if (!(#length in s)) {
      throw new TypeError("Expected an instance of Scalar");
    }

    this.#length += s.#length;
  }
}

While the same effect could be accomplished with try...catch, it’s much uglier, and also doesn’t work reliably in the presence of private getters which may possibly throw for different reasons.
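For comparison, here is a sketch of the same Scalar class using the older try...catch approach. The `isScalar` helper is an illustration of my own, not part of the proposal; as noted above, this pattern is clunkier and can be fooled by private getters that throw for unrelated reasons.

```javascript
// The same brand check implemented without the new `in` syntax.
// Accessing a private field on an object that lacks it throws a
// TypeError, which we catch to answer the brand-check question.
class Scalar {
  #length = 0;

  static isScalar(obj) {
    try {
      obj.#length; // throws if obj was not constructed as a Scalar
      return true;
    } catch {
      return false;
    }
  }

  add(s) {
    if (!Scalar.isScalar(s)) {
      throw new TypeError("Expected an instance of Scalar");
    }
    this.#length += s.#length;
  }
}
```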

This JavaScript language feature proposal is at Stage 3 of the TC39 process, and will ship in Firefox 90.

The Mozilla BlogClass of Zoom: The reality of virtual graduation, prom & college orientation

Last spring, Norah W. took note of how former President Barack Obama, Beyoncé and K-Pop band BTS broadcasted impassioned commencement speeches to the graduating class of 2020. Now that it’s her turn to mark the end of high school, she can’t help but feel disregarded.

“It felt like the entire world mourned the class of 2020. I get it, they had things ripped out from under their feet. But I had almost my entire senior year online … without any friends. There’s nothing I can do to fix it; it just stinks that no one is even acknowledging this,” said the 18-year-old from Louisville, Ky. “I will never get another senior year of high school.”

The fresh wound got a dash of salt in it after she read online comments from adults saying seniors like her were lucky, that they would have preferred to miss high school.

“I can’t believe they’re making a joke. You don’t know what total isolation feels like. You don’t know what it’s like to do 12 straight years of work and then be rewarded with nothing,” she said.

Young adults across the country are grieving the losses brought on by the COVID-19 pandemic forcing school, and its accompanying social life, online. Some of those losses were big moments, the kind society has taught us to cherish: prom, graduation and the newfound freedom of college. Others are small losses that are less talked about and easier to trivialize. But they’re still losses and all of it carries an impact for the around 3.4 million high school graduates in 2021.

“It’s such a formative age, developmentally,” said Powell Burke, a psychotherapist who works with adolescents and young adults at his private practice in Atlanta, Ga. “There can be a real sense of loss and a real sense of being robbed, of powerlessness and loss of control.”

Online learning did come with some benefits. Namely, the new schedule meant students reported getting more sleep and could be flexible about when they did their work. Some students responded to the change with resiliency by gaining new skills, diving further into interests and finding creative ways to salvage spoiled plans.

But for many, those benefits didn’t outweigh the negatives. Zoom-based learning essentially created a window into each student’s bedroom, a privilege usually requiring an invitation and reserved for close friends. The constant eye of the camera is also saddled with a hyper-awareness of how you’re presenting physically, which can result in more stress from self-scrutiny and judgment.

Burke said he noticed an uptick in symptoms of depression and anxiety in the fall of 2020, a time when he thinks parents were realizing that the situation and their child’s symptoms may not change anytime soon.

Some of the challenges that students have been facing included feeling disconnected from their peers, feeling lethargic from not moving all day and academic disengagement. “A huge part of my identity has been being very involved at school, and then suddenly I’m not only not caring about school, but being actively miserable about it,” Norah said.

One vital component to social development that Burke said has been largely missing over this past year of conducting classes via Zoom calls is casual conversations.

“Of course, these schools are there to educate. But the reality is that so, so much of social development during this age happens at school and it happens in the cracks: between classes, before school, during lunch, on the bus,” Burke said. “Those are the places and times where typically adult eyes and ears are a little scarcer and adolescents are relating to each other or in groups in slightly less guarded ways than they would be with adults.”

Zach F., a 17-year-old senior in Orland Park, Ill., said now that his school is back to in-person learning, he has noticed that it’s harder to talk to his peers — a problem that as a self-described extrovert, he didn’t have before.

“When you’re sitting down next to someone in class, it feels weird to talk to them,” he said. “You almost don’t know how to start the conversation or what to talk to them about.”

This lost connection to peers is a void that college freshmen can’t help but notice. John V., 18, of Toronto, Canada had always planned to live at home while attending the University of Toronto, but he still hoped to expand his social network.

“I was really excited to meet new people and make new friends. I definitely feel like I’m missing out. I haven’t met anyone at my college at all,” he said.

While the university has held social events online, John said they feel awkward and so he plans to wait to try to meet people when things open back up. He said he’s been on campus twice this past year and feels no real connection to his teachers, whom he’s only met through his computer screen.

“I’m studying to pass the tests, not to learn,” he said.

While this age group more or less grew up immersed in technology, there is ultimately no real replacement for the bond that happens during in-person interactions, Burke said.

“So much of communication is nonverbal and when access to that nonverbal communication is cut off, it makes connection harder for anybody,” Burke said. While relationships with parents may have strengthened, connections with other adults, such as teachers, were generally harder to foster. “All these different parts of who we are socially, that get nurtured by different types of relationships. If one is missing, there’s going to be a deficit there.”

As anyone who has experienced grief will tell you, there’s no delineated timeline. While there’s no way of predicting what it will look like, Burke said he does think there will be a long-lasting effect from this past year.

“In mental health, we have the concept of post-traumatic stress, but there’s also this concept of post-traumatic growth,” he said. “As things continue to change, there will be opportunities for societal post-traumatic growth. I think it’s something that will need to be intentional about.”

If you or someone you know are a college student or class of 2021 graduate, we’d love to hear from you. In partnership with HerCampus, Pocket is sponsoring a college essay contest called The Future Connection. We are looking for 300-500 word essays on what it’s like to come of age in a hyper-online world. One winner will be awarded a $5,000 cash prize plus a mentorship and publication opportunity with Mozilla and Pocket. You can read the full official rules here.

The post Class of Zoom: The reality of virtual graduation, prom & college orientation appeared first on The Mozilla Blog.

Daniel Stenberg200 OK

One day in March 1998 I released a little file transfer tool I called curl. The first ever curl release. That was good.

10

By the end of July the same year, I released the 10th curl release. I’ve always believed in “release early, release often” as a service to users and developers alike.

20

In December 1998 I released the 20th curl release. I started to get the hang of this.

50

In January 2001, not even three years in, we shipped the 50th curl release (version 7.5.2). We were really cranking them out back then!

200

Next week. 23 years, two months and six days after the first release, we will ship the 200th curl release. We call it curl 7.77.0.

Yes, there are exactly 200 stickers used in the photo. But the visual comparison with 50 is also apt: it isn’t that big a difference seen from a distance.

I’ve personally done every release to date, but there’s nothing in the curl release procedure that says it has to be me, as long as the uploader has access to put the new packages on the correct server.

The fact that 200 is the HTTP status code indicating success makes for an awesome combination.

Release cadence

In 2014 we formally switched to an eight week release cycle. It was more or less what we already used at the time, but from then on we’ve had it documented and we’ve tried harder to stick to it.

Assuming no alarmingly bad bugs are found, we let 56 days pass until we ship the next release. We occasionally slip up and fail on this goal, and then we usually do a patch release and cut the next cycle short. We never let the cycle go longer than those eight weeks. This makes us typically manage somewhere between 6 and 10 releases per year.
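The arithmetic behind that range is simple enough to sketch. The numbers below just restate the cycle described above; the variable names are mine:

```javascript
// 56-day cycles give roughly six scheduled releases per year;
// occasional patch releases (which cut the next cycle short)
// push the yearly total toward ten.
const cycleDays = 56;
const scheduledPerYear = Math.floor(365 / cycleDays);
console.log(scheduledPerYear); // 6
```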

Lessons learned

  • Make a release checklist, and stick to that when making releases
  • Update the checklist when needed
  • Script as much as possible of the procedure
  • Verify the release tarballs/builds too in CI
  • People never test your code properly until you actually release
  • No matter how hard you try, some releases will need quick follow-up patch releases
  • There is always another release
  • Time-based release scheduling beats feature-based

The Rust Programming Language BlogAnnouncing Rustup 1.24.2

The rustup working group is happy to announce the release of rustup version 1.24.2. Rustup is the recommended tool to install Rust, a programming language that is empowering everyone to build reliable and efficient software.

If you have a previous version of rustup installed, getting rustup 1.24.2 is as easy as closing your IDE and running:

rustup self update

Rustup will also automatically update itself at the end of a normal toolchain update:

rustup update

If you don't have it already, you can get rustup from the appropriate page on our website.

What's new in rustup 1.24.2

1.24.2 introduces pooled allocations to prevent memory fragmentation issues on some platforms with 1.24.x. We're not entirely sure what aspect of the streamed unpacking logic caused allocator fragmentation, but memory pools are a well known fix that should solve this for all platforms.

Those who were encountering CI issues with 1.24.1 should find them resolved.

Other changes

You can check out all the changes to Rustup for 1.24.2 in the changelog!

Rustup's documentation is also available in the rustup book.

Finally, the Rustup working group are pleased to welcome a new member. Between 1.24.1 and 1.24.2 二手掉包工程师 (hi-rustin) has joined, having already made some excellent contributions.

Thanks

Thanks again to all the contributors who made rustup 1.24.2 possible!

  • Carol (Nichols || Goulding)
  • Daniel Silverstone
  • João Marcos Bezerra
  • Josh Rotenberg
  • Joshua Nelson
  • Martijn Gribnau
  • pierwill
  • Robert Collins
  • 二手掉包工程师 (hi-rustin)

The Rust Programming Language BlogSix Years of Rust

Today marks Rust's sixth birthday since it went 1.0 in 2015. A lot has changed since then, especially over the past year, and Rust was no different. In 2020, there was no foundation yet, no const generics, and a lot of organisations were still wondering whether Rust was production ready.

In the midst of the COVID-19 pandemic, hundreds of Rust's globally distributed team members and volunteers shipped over nine new stable releases of Rust, in addition to various bugfix releases. Today, "Rust in production" isn't a question, but a statement. The newly founded Rust foundation has several members who value using Rust in production enough to help continue to support and contribute to its open development ecosystem.

We wanted to take today to look back at some of the major improvements over the past year, how the community has been using Rust in production, and finally look ahead at some of the work that is currently ongoing to improve and use Rust for small and large scale projects over the next year. Let's get started!

Recent Additions

The Rust language has improved tremendously in the past year, gaining a lot of quality-of-life features that, while they don't fundamentally change the language, make using and maintaining Rust in more places even easier.

  • As of Rust 1.52.0 and the upgrade to LLVM 12, one of few cases of unsoundness around forward progress (such as handling infinite loops) has finally been resolved. This has been a long running collaboration between the Rust teams and the LLVM project, and is a great example of improvements to Rust also benefitting the wider ecosystem of programming languages.

  • On supporting an even wider ecosystem, the introduction of Tier 1 support for 64 bit ARM Linux, and Tier 2 support for ARM macOS & ARM Windows, has made Rust an even better place to easily build your projects across new and different architectures.

  • The most notable exception to the theme of polish has been the major improvements to Rust's compile-time capabilities. The stabilisation of const generics for primitive types, the addition of control flow for const fns, and allowing procedural macros to be used in more places, have allowed completely new types of APIs and crates to be created.

Rustc wasn't the only tool that had significant improvements.

  • Cargo just recently stabilised its new feature resolver, that makes it easier to use your dependencies across different targets.

  • Rustdoc stabilised its "intra-doc links" feature, allowing you to easily and automatically cross reference Rust types and functions in your documentation.

  • Clippy with Cargo now uses a separate build cache that provides much more consistent behaviour.

Rust In Production

Each year Rust's growth and adoption in the community and industry has been unbelievable, and this past year has been no exception. Once again in 2020, Rust was voted StackOverflow's Most Loved Programming Language. Thank you to everyone in the community for your support, and help making Rust what it is today.

With the formation of the Rust foundation, Rust has been in a better position to build a sustainable open source ecosystem empowering everyone to build reliable and efficient software. A number of companies that use Rust have formed teams dedicated to maintaining and improving the Rust project, including AWS, Facebook, and Microsoft.

And it isn't just Rust that has been getting bigger. Larger and larger companies have been adopting Rust in their projects and offering officially supported Rust APIs.

  • Both Microsoft and Amazon have just recently announced and released their new officially supported Rust libraries for interacting with Windows and AWS. Official first party support for these massive APIs helps make Rust people's first choice when deciding what to use for their project.
  • The cURL project has released new versions that offer opt-in support for using Rust libraries for handling HTTP/s and TLS communication. This has been a huge inter-community collaboration between the ISRG, the Hyper & Rustls teams, and the cURL project, and we'd like to thank everyone for their hard work in providing new memory safe backends for a project as massive and widely used as cURL!
  • Tokio (an asynchronous runtime written in Rust), released its 1.0 version and announced their three year stability guarantee, providing everyone with a solid, stable foundation for writing reliable network applications without compromising speed.

Future Work

Of course, all that is just the start: we're seeing more and more initiatives putting Rust in exciting new places.

Right now the Rust teams are planning and coordinating the 2021 edition of Rust. Much like this past year, many of the changes are themed around improving quality of life. You can check out our recent post about "The Plan for the Rust 2021 Edition" to see what changes the teams are planning.

And that's just the tip of the iceberg; there are a lot more changes being worked on, and exciting new open projects being started every day in Rust. We can't wait to see what you all build in the year ahead!


Are there changes, or projects from the past year that you're excited about? Are you looking to get started with Rust? Do you want to help contribute to the 2021 edition? Then come on over, introduce yourself, and join the discussion over on our Discourse forum and Zulip chat! Everyone is welcome, we are committed to providing a friendly, safe and welcoming environment for all, regardless of gender, sexual orientation, disability, ethnicity, religion, or similar personal characteristic.

Daniel Stenbergcurl -G vs curl -X GET

(This is a repost of a stackoverflow answer I once wrote on this topic. Slightly edited. Copied here to make sure I own and store my own content properly.)

curl knows the HTTP method

You normally use curl without explicitly saying which request method to use.

If you just pass in an HTTP URL like curl http://example.com, curl will use GET. If you use -d or -F, curl will use POST; -I will cause a HEAD and -T will make it a PUT.

If for whatever reason you’re not happy with these default choices that curl does for you, you can override those request methods by specifying -X [WHATEVER]. This way you can for example send a DELETE by doing curl -X DELETE [URL].

It is thus pointless to do curl -X GET [URL] as GET would be used anyway. In the same vein it is pointless to do curl -X POST -d data [URL]... But you can make a fun and somewhat rare request that sends a request-body in a GET request with something like curl -X GET -d data [URL].

Digging deeper

curl -GET (using a single dash) is just wrong for this purpose. That’s the equivalent of specifying the -G, -E and -T options and that will do something completely different.

There’s also a curl option called --get, not to be confused with either of the above. It is the long form of -G, which is used to convert data specified with -d into a GET request instead of a POST.

(I subsequently used this answer to populate the curl FAQ to cover this.)

Warnings

Modern versions of curl will inform users about this unnecessary and potentially harmful use of -X when verbose mode is enabled (-v) – to make users aware. Further explained and motivated here.

-G converts a POST + body to a GET + query

You can ask curl to convert a set of -d options and, instead of sending them in the request body with POST, put them at the end of the URL’s query string and issue a GET, with the use of -G. Like this:

curl -d name=daniel -d grumpy=yes -G https://example.com/

… which does the exact same thing as this command:

curl 'https://example.com/?name=daniel&grumpy=yes'

(The URL is quoted here so that the shell does not treat the & as a command separator.)

Firefox NightlyThese Weeks in Firefox: Issue 93

Highlights

  • Firefox 89 introduces a fresh new look and feel!
    • Floating tabs!
    • Streamlined menus!
    • New icons!
    • Better dark mode support!
    • Improved context menus on Mac and Windows
    • Improved perceived startup performance on Windows
    • Native context menus and rubberbanding/overscroll on macOS
    • Refreshed modals dialogs and notification bars!
    • More details in these release notes and in this early review from laptopmag
  • Non-native form controls are slated to ride out in Firefox 89 as well
    • This lays the groundwork for improving the sandboxing of the content processes by shutting off access to native OS widget drawing routines
  • (Experimental, and en-US Nightly only) Users will now get unit conversions directly in the URL bar! Users can type “5 lbs to kg” and see a copy/paste friendly result instantaneously.

Friends of the Firefox team

For contributions from April 20 2021 to May 4 2021, inclusive.

Introductions/Shout-Outs

Resolved bugs (excluding employees)
Fixed more than one bug
  • Falguni Islam
  • Itiel
  • kaira [:anshukaira]
  • Kajal Sah
  • Luz De La Rosa
  • Richa Sharma
  • Sebastian Zartner [:sebo]
  • Vaidehi
New contributors (🌟 = first patch)

Project Updates

Add-ons / Web Extensions

Addon Manager & about:addons
  • Starting with Firefox 90, when no extensions are installed, the about:addons page will show users a friendlier message that explicitly directs them to addons.mozilla.org, instead of an empty list of installed extensions (Bug 1561538) – Thanks to Samuel Grasse-Haroldsen for fixing this polish issue.
  • As part of the ongoing work to get rid of OS.File usage, Barret uncovered and fixed some races in the AddonManager and XPIDatabase jsm modules (Bug 1702116)
  • Fixed a macOS specific issue in the “Manager Extension Shortcuts” about:addons view, which was preventing this view from detecting some of the conflicting shortcuts (Bug 1565854)
WebExtensions Framework
WebExtension APIs
  • Nicolas Chevobbe applied the needed changes to ensure that the devtools.inspectedWindow.reload method is Fission compatible even when an extension passes the userAgent option to it (Bug 1706098)

Fission

  • Neil has been working on reviving the tab unloader for when users are hitting memory limits
    • It’s smarter this time though, and should hopefully make better choices on which tabs to unload.
    • Currently disabled by default, but Nightly users can test it by setting `browser.tabs.unloadOnLowMemory` to `true`

Messaging System

Performance

Performance Tools

  • Stacks now include the category color of each stack frame (in tooltips, marker table, sidebar)
    • Before and after image with stack frames highlighted in different colors.
  • Fixed a bug where the dot markers appear in the wrong places.
    • Profiler timeline with markers correctly displayed.

Search and Navigation

  • Lots of polish fixes to Proton address bar (and search bar)
  • The Search Mode chiclet can now also be closed when the address bar is unfocused – Bug 1701901
  • Address bar results action text (for example “Switch to tab”, or “Search with Engine”) won’t be pushed out of the visible area by long titles anymore – Bug 1707839
  • Double dots in domain-looking strings will now be corrected – Bug 1580881

Screenshots

Kajal, Falguni, Dawit, and Kaira have been working on removing server-side code from Screenshots

The Mozilla BlogEnter our college essay contest for a chance to win $5,000

The world this year’s college graduates will inherit is vastly different than the one they grew up expecting. COVID-19, a changing political climate, and a fluctuating economy all have something to do with it. But one of the biggest factors: continuous advancements in technology. New developments in social platforms, artificial intelligence, and augmented reality continue to disrupt the status quo, and the people who best understand both the advantages and challenges of these dramatic shifts are online in their childhood bedrooms and dorm rooms right now.

We want to know about your experience coming of age in a hyper-online, always-connected world. If you or someone you know is a college student or class of 2021 graduate, we’d love to hear from you. In partnership with HerCampus, Pocket is sponsoring a college essay contest called The Future Connection. We are looking for 300-500 word essays on what it’s like to come of age in a hyper-online world. One winner will be awarded a $5,000 cash prize plus a mentorship and publication opportunity with Mozilla and Pocket.

The contest

Living in the digital age can mean something different to everyone, and we want to hear what it means to you. To enter, answer the following prompt in 300-500 words: “What does it mean for you to come of age in a hyper-online, always-connected world?” Get creative, have fun with your essay, and don’t be afraid to let your personality shine. You can read the full official rules here.

The Prize

One winner will have their essay published and featured on Pocket. And that’s not all. The winner will also receive a personal mentorship call with a Mozilla executive and take home a $5,000 cash prize.

Timeline

The contest will kick off on May 14 during the epic To The Future graduation event hosted by HerCampus, featuring speeches and performances by celebs, CEOs, actors, politicians, and spring graduates. Visit our contest page for The Future Connection writing competition, and be sure to submit your essay by 11:59 p.m. PT on Thursday, July 15, 2021, for the opportunity to win.

Winner selection

The winner will be announced on Friday, July 30, 2021. Essays will be reviewed by a panel of expert judges, including Mozilla CMO Lindsey Shepard and Her Campus Media co-founder, CEO, and Editor-in-Chief Stephanie Kaplan Lewis.

This graduation season, Pocket wants to continue the conversations about what’s next in technology and support your writing ambitions with The Future Connection essay competition. To learn more and enter for your chance to win, visit https://www.hercampus.com/lifestyle/pocketcontest and follow @pocket and @hercampus on Instagram for more exciting updates.

The post Enter our college essay contest for a chance to win $5,000 appeared first on The Mozilla Blog.

Niko MatsakisCTCFTFTW

This Monday I am starting something new: a monthly meeting called the “Cross Team Collaboration Fun Times” (CTCFT)1. Check out our nifty logo2:

Logo

The meeting is a mechanism to help keep the members of the Rust teams in sync and in touch with one another. The idea is to focus on topics of broad interest (relevant to more than two teams):

  • Status updates on far-reaching projects that could affect multiple teams;
  • Experience reports about people trying new things (sometimes succeeding, sometimes not);
  • “Rough draft” proposals that are ready to be brought before a wider audience.

The meeting will focus on things that could either offer insights that might affect the work you’re doing, or where the presenter would like to pose questions to the Rust teams and get feedback.

I announced the meeting some time back to all@rust-lang.org, but I wanted to make a broader announcement as well. This meeting is open for anyone to come and observe. This is by design. Even though the meeting is primarily meant as a forum for the members of the Rust teams, it can be hard to define the borders of a community like ours. I’m hoping we’ll get people who work on major Rust libraries in the ecosystem, for example, or who work on the various Rust teams that have come into being.

The first meeting is scheduled for 2021-05-17 at 15:00 Eastern and you will find the agenda on the CTCFT website, along with links to the slides (still a work-in-progress as of this writing!). There is also a twitter account @RustCTCFT and a Google calendar that you can subscribe to.

I realize the limitations of a synchronous meeting. Due to the reality of time zones and a volunteer project, for example, we’ll never be able to get all of Rust’s global community to attend at once. I’ve designed the meeting to work well even if you can’t attend: the goal is to have a place to start conversations, not to finish them. Agendas are announced well in advance and the meetings are recorded. We’re also rotating times – the next meeting on 2021-06-21 takes place at 21:00 Eastern time, for example.3

Hope to see you there!

Footnotes

  1. In keeping with Rust’s long-standing tradition of ridiculous acronyms. 

  2. Thanks to @Xfactor521! 🙏 

  3. The agenda is still TBD. I’ll tweet when we get it lined up. We’re not announcing that far in advance! 😂 

Mozilla Open Policy & Advocacy BlogDefending users’ security in Mauritius

Yesterday, Mozilla and Google filed a joint submission to the public consultation on amending the Information and Communications Technology (ICT) Act organised by the Government of Mauritius. Our submission states that the proposed changes would disproportionately harm the security of Mauritian users on the internet and should be abandoned. Mozilla believes that individuals’ security and privacy on the internet are fundamental and must not be treated as optional. The proposals under these amendments are fundamentally incompatible with this principle and would fail to achieve their projected outcomes.

Under Section 18(m) of the proposed changes, the ICTA could deploy a “new technical toolset” to intercept, decrypt, archive and then inspect/block https traffic between a local user’s Internet device and internet services, including social media platforms.

In their current form, these measures will place the privacy and security of internet users in Mauritius at grave risk. The blunt and disproportionate action will allow the government to decrypt, read and store anything a user types or posts on the internet, including intercepting their account information, passwords and private messages. While doing little to address the legitimate concerns of content moderation in local languages, it will undermine trust in the fundamental security infrastructure that currently serves as the basis for the security of at least 80% of websites on the web that use HTTPS, including those that carry out e-commerce and other critical financial transactions.

When similarly dangerous mechanisms have been abused in the past, whether by known-malicious parties, business partners such as a computer or device manufacturer, or a government entity, as browser makers we have taken steps to protect and secure our users and products.

In our joint submission to the on-going public consultation, Google and Mozilla have urged the Authority not to pursue this approach. Operating within international frameworks for cross-border law enforcement cooperation and enhancing communication with industry can provide a more promising path to address the stated concerns raised in the consultation paper. We remain committed to working with the Government of Mauritius to address the underlying concerns in a manner that does not harm the privacy, security and freedom of expression of Mauritians on the internet.

The post Defending users’ security in Mauritius appeared first on Open Policy & Advocacy.

Mozilla Open Policy & Advocacy BlogMozilla files joint amicus brief in support of California net neutrality law

Yesterday, Mozilla joined a coalition of public interest organizations* in submitting an amicus brief to the Ninth Circuit in support of SB 822, California’s net neutrality law. In this case, telecom and cable companies are arguing that California’s law is preempted by federal law. In February of this year, a federal judge dismissed this challenge and held that California can enforce its law. The telecom industry appealed that decision to the 9th Circuit. We are asking the 9th Circuit to find that California has the authority to protect net neutrality.

“Net neutrality preserves the environment that creates room for new businesses and new ideas to emerge and flourish, and where internet users can choose freely the companies, products, and services that they want to interact with and use. In a marketplace where consumers frequently do not have access to more than one internet service provider (ISP), these rules ensure that data is treated equally across the network by gatekeepers. We are committed to restoring the protections people deserve and will continue to fight for net neutrality,” said Amy Keating, Mozilla’s Chief Legal Officer.

*Mozilla is joined on the amicus brief by Access Now, Public Knowledge, New America’s Open Technology Institute and Free Press.

The post Mozilla files joint amicus brief in support of California net neutrality law appeared first on Open Policy & Advocacy.

This Week In RustThis Week in Rust 390

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

Official
Newsletters
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs
Miscellaneous

Crate of the Week

This week's crate is tokio-console, a "top"-like utility to watch your tasks run.

Thanks to Simon Farnsworth for the nomination!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

324 pull requests were merged in the last week

Rust Compiler Performance Triage

Not much change overall - both regressions and improvements were all minor, apart from the 2x compile-time improvement for libcore from PR #83278.

Triage done by @pnkfelix. Revision range: 7a0f..382f

2 Regressions, 3 Improvements, 0 Mixed; 0 of them in rollups

Full report here.

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs
Tracking Issues & PRs

New RFCs

Upcoming Events

Online

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

Paige

Impero

Zimpler

Yat Labs

Kollider

TrueLayer

Aleph Alpha

Kraken

ChainSafe

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

You won’t appreciate Rust unless you spend a few weeks building something in it. The initial steep learning curve could be frustrating or challenging depending on how you see it, but once past that it’s hard not to love it. It’s a toddler with superpowers after all 💗

Deepu K Sasidharan on their blog

Thanks to robin for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, and cdmistman.

Discuss on r/rust

The Rust Programming Language BlogThe Plan for the Rust 2021 Edition

We are happy to announce that the third edition of the Rust language, Rust 2021, is scheduled for release in October. Rust 2021 contains a number of small changes that are nonetheless expected to make a significant improvement to how Rust feels in practice.

What is an Edition?

The release of Rust 1.0 established "stability without stagnation" as a core Rust deliverable. Ever since the 1.0 release, the rule for Rust has been that once a feature has been released on stable, we are committed to supporting that feature for all future releases.

There are times, however, when it is useful to be able to make small changes to the language that are not backwards compatible. The most obvious example is introducing a new keyword, which would invalidate variables with the same name. For example, the first version of Rust did not have the async and await keywords. Suddenly changing those words to keywords in a later version would've broken code like let async = 1;.

Editions are the mechanism we use to solve this problem. When we want to release a feature that would otherwise be backwards incompatible, we do so as part of a new Rust edition. Editions are opt-in, and so existing crates do not see these changes until they explicitly migrate over to the new edition. This means that even the latest version of Rust will still not treat async as a keyword, unless edition 2018 or later is chosen. This choice is made per crate as part of its Cargo.toml. New crates created by cargo new are always configured to use the latest stable edition.
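A sketch of that per-crate choice: the edition is declared in the crate's Cargo.toml manifest (the package name below is a made-up placeholder).

```toml
[package]
name = "my-crate"    # hypothetical crate name
version = "0.1.0"
edition = "2018"     # opt-in, per crate; `cargo new` fills in the latest stable edition
```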

Editions do not split the ecosystem

The most important rule for editions is that crates in one edition can interoperate seamlessly with crates compiled in other editions. This ensures that the decision to migrate to a newer edition is a "private one" that the crate can make without affecting others.

The requirement for crate interoperability implies some limits on the kinds of changes that we can make in an edition. In general, changes that occur in an edition tend to be "skin deep". All Rust code, regardless of edition, is ultimately compiled to the same internal representation within the compiler.

Edition migration is easy and largely automated

Our goal is to make it easy for crates to upgrade to a new edition. When we release a new edition, we also provide tooling to automate the migration. It makes the minor changes to your code that are necessary to make it compatible with the new edition. For example, when migrating to Rust 2018, it changes anything named async to use the equivalent raw identifier syntax: r#async.

The automated migrations are not necessarily perfect: there might be some corner cases where manual changes are still required. The tooling tries hard to avoid changes to semantics that could affect the correctness or performance of the code.

In addition to tooling, we also maintain an Edition Migration Guide that covers the changes that are part of an edition. This guide will describe the change and give pointers to where people can learn more about it. It will also cover any corner cases or details that people should be aware of. The guide serves both as an overview of the edition and as a quick troubleshooting reference if people encounter problems with the automated tooling.

What changes are planned for Rust 2021?

Over the last few months, the Rust 2021 Working Group has gone through a number of proposals for what to include in the new edition. We are happy to announce the final list of edition changes. Each feature had to meet two criteria to make this list. First, they had to be approved by the appropriate Rust team(s). Second, their implementation had to be far enough along that we had confidence that they would be completed in time for the planned milestones.

Additions to the prelude

The prelude of the standard library is the module containing everything that is automatically imported in every module. It contains commonly used items such as Option, Vec, drop, and Clone.

The Rust compiler prioritizes any manually imported items over those from the prelude, to make sure additions to the prelude will not break any existing code. For example, if you have a crate or module called example containing a pub struct Option;, then use example::*; will make Option unambiguously refer to the one from example, not the one from the standard library.
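A minimal sketch of that shadowing rule, reusing the example module from the paragraph above (the describe method is a hypothetical addition for illustration):

```rust
// A glob import shadows prelude items: after `use example::*;`,
// `Option` refers to `example::Option`, not `std::option::Option`.
mod example {
    pub struct Option;

    impl Option {
        pub fn describe() -> &'static str {
            "example::Option"
        }
    }
}

use example::*;

fn main() {
    assert_eq!(Option::describe(), "example::Option");
}
```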

However, adding a trait to the prelude can break existing code in a subtle way. A call to x.try_into() using a MyTryInto trait might become ambiguous and fail to compile if std's TryInto is also imported, since it provides a method with the same name. This is the reason we haven't added TryInto to the prelude yet, since there is a lot of code that would break this way.

As a solution, Rust 2021 will use a new prelude. It's identical to the current one, except for three new additions:

  • std::convert::TryInto
  • std::convert::TryFrom
  • std::iter::FromIterator

Default Cargo feature resolver

Since Rust 1.51.0, Cargo has opt-in support for a new feature resolver which can be activated with resolver = "2" in Cargo.toml.

Starting in Rust 2021, this will be the default. That is, writing edition = "2021" in Cargo.toml will imply resolver = "2".

The new feature resolver no longer merges all requested features for crates that are depended on in multiple ways. See the announcement of Rust 1.51 for details.

IntoIterator for arrays

Until Rust 1.53, only references to arrays implemented IntoIterator. This means you can iterate over &[1, 2, 3] and &mut [1, 2, 3], but not over [1, 2, 3] directly.

for &e in &[1, 2, 3] {} // Ok :)

for e in [1, 2, 3] {} // Error :(

This has been a long-standing issue, but the solution is not as simple as it seems. Just adding the trait implementation would break existing code. array.into_iter() already compiles today because that implicitly calls (&array).into_iter() due to how method call syntax works. Adding the trait implementation would change the meaning.

Usually we categorize this type of breakage (adding a trait implementation) as 'minor' and acceptable. But in this case there is too much code that would be broken by it.

It has been suggested many times to "only implement IntoIterator for arrays in Rust 2021". However, this is simply not possible. You can't have a trait implementation exist in one edition and not in another, since editions can be mixed.

Instead, we decided to add the trait implementation in all editions (starting in Rust 1.53.0), but add a small hack to avoid breakage until Rust 2021. In Rust 2015 and 2018 code, the compiler will still resolve array.into_iter() to (&array).into_iter() like before, as if the trait implementation does not exist. This only applies to the .into_iter() method call syntax. It does not affect any other syntax such as for e in [1, 2, 3], iter.zip([1, 2, 3]) or IntoIterator::into_iter([1, 2, 3]). Those will start to work in all editions.
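A short sketch of by-value array iteration as it behaves from Rust 1.53 onward (sum_by_value is a hypothetical helper):

```rust
// From Rust 1.53, arrays implement IntoIterator by value, so `for e in arr`
// iterates over the elements directly rather than over references.
fn sum_by_value(arr: [i32; 3]) -> i32 {
    let mut sum = 0;
    for e in arr {
        sum += e;
    }
    sum
}

fn main() {
    assert_eq!(sum_by_value([1, 2, 3]), 6);
}
```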

While it's a shame that this required a small hack to avoid breakage, we're very happy with how this solution keeps the difference between the editions to an absolute minimum. Since the hack is only present in the older editions, there is no added complexity in the new edition.

Disjoint capture in closures

Closures automatically capture anything that you refer to from within their body. For example, || a + 1 automatically captures a reference to a from the surrounding context.

Currently, this applies to whole structs, even when only using one field. For example, || a.x + 1 captures a reference to a and not just a.x. In some situations, this is a problem. When a field of the struct is already borrowed (mutably) or moved out of, the other fields can no longer be used in a closure, since that would capture the whole struct, which is no longer available.

let a = SomeStruct::new();

drop(a.x); // Move out of one field of the struct

println!("{}", a.y); // Ok: Still use another field of the struct

let c = || println!("{}", a.y); // Error: Tries to capture all of `a`
c();

Starting in Rust 2021, closures will only capture the fields that they use. So, the above example will compile fine in Rust 2021.

This new behavior is only activated in the new edition, since it can change the order in which fields are dropped. As for all edition changes, an automatic migration is available, which will update your closures for which this matters. It can insert let _ = &a; inside the closure to force the entire struct to be captured as before.
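For code still on the 2015 or 2018 edition, a common workaround is to borrow just the needed field before creating the closure; a minimal sketch under that assumption (the S struct and its fields are hypothetical):

```rust
struct S {
    x: String,
    y: String,
}

fn main() {
    let a = S {
        x: String::from("gone"),
        y: String::from("kept"),
    };
    drop(a.x); // move out of one field of the struct

    // Pre-2021 closures would try to capture all of `a` here, which fails
    // after the partial move. Borrowing just the field sidesteps that.
    let y = &a.y;
    let c = move || y.clone();
    assert_eq!(c(), "kept");
}
```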

Panic macro consistency

The panic!() macro is one of Rust's most well known macros. However, it has some subtle surprises that we can't just change due to backwards compatibility.

panic!("{}", 1); // Ok, panics with the message "1"
panic!("{}"); // Ok, panics with the message "{}"

The panic!() macro only uses string formatting when it's invoked with more than one argument. When invoked with a single argument, it doesn't even look at that argument.

let a = "{";
println!(a); // Error: First argument must be a format string literal
panic!(a); // Ok: The panic macro doesn't care

(It even accepts non-strings such as panic!(123), which is uncommon and rarely useful.)

This will especially be a problem once implicit format arguments are stabilized. That feature will make println!("hello {name}") a short-hand for println!("hello {}", name). However, panic!("hello {name}") would not work as expected, since panic!() doesn't process a single argument as format string.

To avoid that confusing situation, Rust 2021 features a more consistent panic!() macro. The new panic!() macro will no longer accept arbitrary expressions as the only argument. It will, just like println!(), always process the first argument as format string. Since panic!() will no longer accept arbitrary payloads, panic_any() will be the only way to panic with something other than a formatted string.
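A small sketch of the payload behavior that holds in every edition, using std::panic::catch_unwind to inspect what a panic carries:

```rust
use std::panic;

fn main() {
    // Silence the default panic message printing for this demo.
    panic::set_hook(Box::new(|_| {}));

    // With a format string plus arguments, panic! always formats:
    // the payload is the formatted String.
    let p = panic::catch_unwind(|| panic!("{}", 1)).unwrap_err();
    assert_eq!(p.downcast_ref::<String>().unwrap(), "1");

    // panic_any() carries an arbitrary (non-string) payload.
    let p = panic::catch_unwind(|| panic::panic_any(123)).unwrap_err();
    assert_eq!(*p.downcast_ref::<i32>().unwrap(), 123);
}
```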

In addition, core::panic!() and std::panic!() will be identical in Rust 2021. Currently, there are some historical differences between those two, which can be noticeable when switching #![no_std] on or off.

Reserving syntax

To make space for some new syntax in the future, we've decided to reserve syntax for prefixed identifiers and literals: prefix#identifier, prefix"string", prefix'c', and prefix#123, where prefix can be any identifier. (Except those that already have a meaning, such as b'…' and r"…".)

This is a breaking change, since macros can currently accept hello"world", which they will see as two separate tokens: hello and "world". The (automatic) fix is simple though. Just insert a space: hello "world".
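A minimal sketch of a macro_rules macro affected by this change (greet is a hypothetical macro); note the space between the identifier and the string literal:

```rust
// Accepts an identifier followed by a string literal, as two tokens.
macro_rules! greet {
    ($who:ident $msg:literal) => {
        concat!(stringify!($who), " says ", $msg)
    };
}

fn main() {
    // In Rust 2021 this must be written with the space: `hello "world"`.
    // `hello"world"` becomes a tokenization error (reserved prefix).
    assert_eq!(greet!(hello "world"), "hello says world");
}
```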

Other than turning these into a tokenization error, the RFC does not attach a meaning to any prefix yet. Assigning meaning to specific prefixes is left to future proposals, which will—thanks to reserving these prefixes now—not be breaking changes.

These are some new prefixes you might see in the future:

  • f"" as a short-hand for a format string. For example, f"hello {name}" as a short-hand for the equivalent format_args!() invocation.

  • c"" or z"" for null-terminated C strings.

  • k#keyword to allow writing keywords that don't exist yet in the current edition. For example, while async is not a keyword in edition 2015, this prefix would've allowed us to accept k#async as an alternative in edition 2015 while we waited for edition 2018 to reserve async as a keyword.

Promoting two warnings to hard errors

Two existing lints are becoming hard errors in Rust 2021. These lints will remain warnings in older editions.

Or patterns in macro_rules

Starting in Rust 1.53.0, patterns are extended to support | nested anywhere in the pattern. This enables you to write Some(1 | 2) instead of Some(1) | Some(2). Since this was simply not allowed before, this is not a breaking change.
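A quick sketch of the new nested or-patterns, here via matches!() (is_low is a hypothetical helper):

```rust
// Since Rust 1.53, `|` can appear nested anywhere in a pattern,
// so a single `Some(1 | 2)` replaces `Some(1) | Some(2)`.
fn is_low(x: i32) -> bool {
    matches!(Some(x), Some(1 | 2))
}

fn main() {
    assert!(is_low(1));
    assert!(is_low(2));
    assert!(!is_low(3));
}
```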

However, this change also affects macro_rules macros. Such macros can accept patterns using the :pat fragment specifier. Currently, :pat does not match |, since before Rust 1.53, not all patterns (at all nested levels) could contain a |. Macros that accept patterns like A | B, such as matches!(), use something like $($_:pat)|+. Because we don't want to break any existing macros, we did not change the meaning of :pat in Rust 1.53.0 to include |.

Instead, we will make that change as part of Rust 2021. In the new edition, the :pat fragment specifier will match A | B.

Since there are times that one still wishes to match a single pattern variant without |, the fragment specifier :pat_param has been added to retain the older behavior. The name refers to its main use case: a pattern in a closure parameter.

What comes next?

Our plan is to have these changes merged and fully tested by September, to make sure the 2021 edition makes it into Rust 1.56.0. Rust 1.56.0 will then be in beta for six weeks, after which it is released as stable on October 21st.

However, note that Rust is a project run by volunteers. We prioritize the personal well-being of everyone working on Rust over any deadlines and expectations we might have set. This could mean delaying the edition a version if necessary, or dropping a feature that turns out to be too difficult or stressful to finish in time.

That said, we are on schedule and many of the difficult problems are already tackled, thanks to all the people contributing to Rust 2021! 💛


You can expect another announcement about the new edition in July. At that point we expect all changes and automatic migrations to be implemented and ready for public testing.

We'll be posting some more details about the process and rejected proposals on the "Inside Rust" blog soon.

Mozilla Security BlogBeware of Applications Misusing Root Stores

We have been alerted about applications that use the root store provided by Mozilla for purposes other than what Mozilla’s root store is curated for. We provide a root store to be used for server authentication (TLS) and for digitally signed and encrypted email (S/MIME). Applications that use Mozilla’s root store for a purpose other than that have a critical security vulnerability. With the goal of improving the security ecosystem on the internet, below we clarify the correct and incorrect use of Mozilla’s root store, and provide tools for correct use.

Background on Root Stores: Mozilla provides a root store (curated list of root certificates) to enable Certificate Authorities (CAs) to issue trusted TLS certificates which in turn enables secure browsing and encryption on the internet. The root store provided by Mozilla is intended to be used for server authentication (TLS) and for digitally signed and encrypted email (S/MIME). The root store is built into Firefox and Network Security Services (NSS). The NSS cryptographic library is a set of libraries designed to support cross-platform development of security-enabled client and server applications; it is open source and therefore has become the de-facto standard for many Linux-powered operating systems. While NSS includes Mozilla’s root store by default, it also provides the ability for developers to use their own root store, enabling application developers to provide a list of root certificates that is curated for use cases other than TLS and S/MIME.

Misuse of Root Stores: We have been alerted that some applications are using root stores provided by Mozilla or an operating system (e.g. Linux) for purposes other than what the root store is curated for. An application that uses a root store for a purpose other than what the store was created for has a critical security vulnerability. This is no different than failing to validate a certificate at all.

There are different procedures, controls, and audit criteria for different types of certificates. For example, when a CA issues a certificate for S/MIME, it ensures that the email address in the certificate is controlled by the certificate subscriber. Likewise, when a CA issues a certificate for TLS, it ensures that the domain names in the certificate are controlled by the certificate subscriber. For a CA who has only been evaluated in terms of their issuance of S/MIME certificates there is no indication that they follow the correct procedures for issuance of TLS certificates (i.e. that they properly validate who controls the domain names in the certificate). Similarly, for a CA who has only been evaluated in terms of their issuance of TLS certificates there is no indication that they follow the correct procedures for issuance of Code Signing certificates.

Additionally, some application developers directly parse a file in Mozilla’s source code management system called certdata.txt, in which Mozilla’s root store is maintained in a form that is convenient for NSS to build from. The problem with the scripts that directly parse this file is that some of the certificates in this file are not trusted but rather explicitly distrusted, so scripts that do not take the trust records into account may be trusting root certificates, such as the DigiNotar certificates, which Mozilla explicitly distrusts.

Correctly using Root Stores: Curating a root store is a costly ongoing responsibility, so the Common CA Database (CCADB) Resources tab provides lists of root certificates that are being curated for the purposes of Code Signing, Email (S/MIME), and Server Authentication (SSL/TLS). The Code Signing root certificate list is based on the data that Microsoft maintains in the CCADB for their root store. The Email (S/MIME) and Server Authentication (SSL/TLS) root certificate lists are based on the data that Mozilla maintains in the CCADB for Mozilla’s root store (aka the NSS root store). These lists of certificates may be used for their intended purposes; specifically Code Signing, S/MIME, or TLS. If you choose to use one of these lists, be sure to read the data usage terms and to update the list in your applications frequently.

We recommend that you use the certificate lists provided on the CCADB Resources page rather than directly parsing the certdata.txt file. Application developers who continue to parse the certdata.txt file should use a script that correctly takes the trust records into account.
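To illustrate why the trust records matter, here is a toy sketch. It is not a complete certdata.txt parser, and the sample data and helper name are invented; it only shows the key point that each trust object carries per-usage trust values, where CKT_NSS_NOT_TRUSTED means explicitly distrusted, so a script that merely collects certificates would keep roots Mozilla has rejected:

```javascript
// Simplified sample in the spirit of certdata.txt trust objects.
const sample = `
CKA_CLASS CK_OBJECT_CLASS CKO_NSS_TRUST
CKA_LABEL UTF8 "Good CA"
CKA_TRUST_SERVER_AUTH CK_TRUST CKT_NSS_TRUSTED_DELEGATOR

CKA_CLASS CK_OBJECT_CLASS CKO_NSS_TRUST
CKA_LABEL UTF8 "DigiNotar-like distrusted CA"
CKA_TRUST_SERVER_AUTH CK_TRUST CKT_NSS_NOT_TRUSTED
`;

// Keep only roots whose server-auth trust record marks them as a
// trusted delegator; drop explicitly distrusted ones.
function trustedForTLS(certdata) {
  const trusted = [];
  for (const block of certdata.split(/\n\s*\n/)) {
    const label = block.match(/CKA_LABEL UTF8 "(.*)"/);
    const serverAuth = block.match(/CKA_TRUST_SERVER_AUTH CK_TRUST (\S+)/);
    if (label && serverAuth && serverAuth[1] === "CKT_NSS_TRUSTED_DELEGATOR") {
      trusted.push(label[1]);
    }
  }
  return trusted;
}

trustedForTLS(sample); // keeps "Good CA", drops the distrusted root
```

A naive script that collected every CKA_LABEL would have returned both, which is exactly the failure mode described above.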

It is important to note that decisions that a root store operator makes with regards to inclusion or exclusion of CA certificates in its root store are directly tied to the capabilities and behaviors of the software they are distributing. Additionally, a security change could be made wholly or partly in the software instead of the root store. On a best-efforts basis, Mozilla maintains a list of the additional things users of Mozilla’s root store might need to consider.

Application developers must pay attention to which Root Store to use: We strongly encourage application developers to ensure that the list of root certificates that they are using in their applications have been curated for their use case. Additionally, application developers should only use the Mozilla/NSS root store for TLS or S/MIME by using the links provided on the CCADB Resources page that list the certificates in the Mozilla/NSS root store according to the trust bits (key usage) they are curated for.

Choosing to rely on a root store also means understanding and accepting the policies for that root store. Concretely, that means respecting both the trust flags on root certificates and decisions to add or remove root certificates. In particular, Mozilla removes root certificates when they are determined to be no longer trustworthy for TLS or S/MIME. If a removal causes an application to break, then it is either correct on the basis that the root certificate should no longer be used for TLS or S/MIME, or it is a fault in that application not using the root store correctly. Significant root removals are usually announced in Mozilla’s Security Blog (e.g. DigiNotar, CNNIC, WoSign).

Mozilla is committed to maintaining our own root store because doing so is vital to the security of our products and the web in general. It gives us the ability to set policies, determine which CAs meet them, and to take action when a CA fails to do so.

The post Beware of Applications Misusing Root Stores appeared first on Mozilla Security Blog.

Support.Mozilla.OrgWhat’s up with SUMO – May 2021

Hey SUMO folks,

The second quarter of 2021 is underway and we couldn’t be more excited about all the stuff we’ve been working on this quarter.

Let’s find out more about them!

Welcome on board!

  1. Welcome back dbben! Thanks for actively contributing back in the forum.

Community news

  • Another reminder to check out Firefox Daily Digest to get daily updates about Firefox. Go check it out and subscribe if you haven’t already.
  • Advanced Search page is gone from SUMO as of May 4, 2021. The team is currently working to add syntax functionality to the simple search field. The plan is to offer functionality similar to the advanced search but with minimal UI. Follow our discussion about this in the contributor forum here.
  • Firefox 89 is coming soon. We’ve been working on the tagging plan across channels for the upcoming proton launch next month. The idea is that we want to collect this feedback and report it to the product team regularly before and after the release. Here’s what we’re going to do for each channel:
    • Forum: If you’ve seen any questions related to proton changes, please tag the question with MR1.
    • Twitter: Conversocial lets us automatically tag conversations with specific keywords related to proton. If you’ve seen other conversations that haven’t been tagged, please add the “MR1” tag manually.
    • Reddit: Include proton in your post and tag the thread with “Proton” (related thread). We’ll report the top 10 conversations to the product team on a weekly basis.
  • Check out the following release notes from Kitsune for the past month:

Community call

  • Watch the monthly community call if you haven’t. Learn more about what’s new in April. We talked about various updates including the upcoming proton release in Firefox 89.
  • Reminder: Don’t hesitate to join the call in person if you can. We try our best to provide a safe space for everyone to contribute. You’re more than welcome to lurk in the call if you don’t feel comfortable turning on your video or speaking up. If you feel shy to ask questions during the meeting, feel free to add your questions on the contributor forum in advance, or put them in our Matrix channel, so we can address them during the meeting.

Community stats

KB

KB Page views

Month Page views Vs previous month
April 2020 8,739,284 -28.03%

Top 5 KB contributors in the last 90 days: 

  1. AliceWyman
  2. Jeff
  3. Michele Rodaro
  4. Artist
  5. Marchelo Ghelman

KB Localization

Top 10 locale based on total page views

Locale Apr 2020 page views Localization progress (per 6 May)
de 10.09% 99%
es 6.80% 45%
zh-CN 6.58% 100%
fr 6.52% 88%
pt-BR 6.14% 68%
ja 4.37% 57%
ru 3.87% 99%
it 2.48% 98%
pl 2.31% 85%
id 0.96% 2%

Top 5 localization contributors in the last 90 days: 

  1. Ihor_ck
  2. Artist
  3. Milupo
  4. JimSp472
  5. Mark Heiji

Forum Support

Forum stats

Month Total questions Answer rate within 72 hrs Solved rate within 72 hrs Forum helpfulness
Apr 2020 3379 71.26% 14.86% 71.43%

Top 5 forum contributors in the last 90 days: 

  1. Cor-el
  2. Jscher2000
  3. FredMcD
  4. Sfhowes
  5. Seburo

Social Support

Channel Apr 2020
Total conv Conv handled
@firefox 4,064 287
@FirefoxSupport 303 123

Top 5 contributors in April 2021

  1. Christophe Villeneuve
  2. Md Monirul Alom
  3. Devin E
  4. Andrew Truong
  5. Alex Mayorga

Play Store Support

We don’t have enough data for the Play Store Support yet. However, you can check out the overall Respond Tool metrics here.

Product updates

Firefox desktop

  • FX 89 release – June 1st
  • MR1/Proton
    • Firefox Beta 8 (88.0b8) will have final, if not near-final, proton changes implemented
  • Phase 2 of Total Cookie protection – Dynamic First Party Isolation, or dFPI, feature enabled for Private Browsing Mode Users
  • Shimming Category 2 – Automatic exceptions UI indicator
  • Personalizing New Tab – Customize your new tab experience

Firefox mobile

  • Fenix (Fx 89) – June 1st
    • Optimized toolbar menus
    • Top Site visual improvements
    • Sync tabs → tabs tray
  • iOS V34
    • Refresh of tabs view
    • Adding synced tabs to tabs tray
    • Removed tabs search bar
    • Tabs Tray refresh
    • Nimbus experimentation platform integrated

Other products / Experiments

  • Mozilla VPN V2.3 – May 28
    • Windows – split tunneling
    • IPv6 Captive portal detection
  • Firefox for Amazon Fire TV and Echo Show sunset

Shout-outs!

  • Thank you Mamoon for taking up VPN questions on the forum!
  • Thank you Yoasif for helping us with Proton flair on Reddit!
  • Congrats dbben for making it into the top contributor list for the forum.

If you know anyone that we should feature here, please contact Kiki and we’ll make sure to add them in our next edition.

Useful links:

William Lachancemozregression update May 2021

Just wanted to give some quick updates on the state of mozregression.

Anti-virus false positives

One persistent issue with mozregression is that it keeps getting flagged as a virus by many popular anti-virus scanners. The causes for this are somewhat complex, but at root the problem is that mozregression requires fairly broad permissions to do the things it needs to do (install and run copies of Firefox) and thus its behavior is hard to distinguish from a piece of software doing something malicious.

Recently there have been a number of mitigations which seem to be improving this situation:

  • :bryce has been submitting copies of mozregression to Microsoft so that Windows Defender (probably the most popular anti-virus software on this platform) doesn’t flag it.
  • I recently released mozregression 4.0.17, which upgrades the GUI’s pyinstaller dependency to a later version that sets PE checksums correctly on the generated executable (pyinstaller/pyinstaller#5579).

It’s tempting to lament the fact that this is happening, but in a way I can understand it: it’s hard to reliably detect what kind of software is legitimate and what isn’t. I take the responsibility for distributing this kind of software seriously, and have pretty strict limits on who has access to the mozregression GitHub repository and what pull requests I’ll merge.

CI ported to GitHub Actions

Due to changes in Travis’s policies, we needed to migrate continuous integration for mozregression to GitHub Actions. You can see the gory details in bug 1686039. One possibly interesting wrinkle to others: due to Mozilla’s security policy, we can’t use (most) external actions inside our GitHub repository. I thus rewrote the logic for uploading a mozregression release to GitHub for MacOS and Linux GUI builds (Windows builds are still happening via AppVeyor for now) from scratch. Feel free to check the above out if you have a similar need.

MacOS Big Sur

As of version 4.0.17, the mozregression GUI now works on MacOS Big Sur. It is safe to ask community members to install and use it on this platform (though note the caveats due to the bundle being unsigned).

Usage Dashboard

Fulfilling a promise I implied last year, I created a public dataset for mozregression and built a dashboard tracking mozregression use with Observable. There are a few interesting insights and trends that can be gleaned from our telemetry. I’d be curious if the community can find any more!

Karl DubostBrowser Wish List - Tabs and bookmarks are the same thing

My browser is like an office room with a desk and shelves, where information is accessible. Information is stacked and reachable, sometimes open and browsable at a glance, sometimes deep on the shelves. But how would I want to access it in the browser?

Currently we bury the information held in tabs and bookmarks in one big pile of context, without offering any help for managing it beyond going through the list of pages one by one. No wonder people feel overwhelmed and try to limit the number of tabs they keep open. Handling large numbers of tabs relies on external tools (Tree Style Tabs, Sidebery, Containers, etc.) which do not go far enough to manage the tabs.

Binder of pages

Some contexts

It started with a message from Glandium sharing an article from Joseph Chee Chang with the title: When the Tab Comes Due. Tabs! Love Tabs. Reading the PDF brought some strong nodding.

Tabs should better reflect users’ complex task structures.

One potential design space is to bootstrap such mental model representations with minimal user effort by identifying their intentions using their navigation patterns. For example, a set of tabs opened from a search engine query is likely to support the same information needs; or, a set of tabs opened from a top-10 list article are likely competing options under the same category. Capturing and organizing tabs using such structures has the potential of better orienting users and providing better support for task progression and resumption.

Allow users to externalize their thoughts and synthesize information across tabs.

More directly, a recent survey showed that around half of their participants (49.4%, N=89) use spreadsheets to gather evidence and take notes across multiple online information sources to compare options (e.g., products or destinations) to help them make decisions. However, current browsers treat tabs as individual silos and provide little support for cross-referencing and collecting information between webpages. Using external tools, such as word documents and spreadsheets, creates a disconnect in users’ workspace, and can incur high cognitive and interaction costs when trying to copy and paste information to synthesize it in a separate document.

Sketch

The article made me think about tabs and bookmarks: in our browsers’ UIs, these are separated. Probably they should not be. A bookmark is just a closed context, and a tab is just an opened context. But they are basically the same. The UI to access them is completely different, and the information to filter them is also totally different. Why?

So I was thinking about how both worlds could be mixed together.

  • Make the bookmarks more visual through thumbnails.
  • Make the tabs manageable through trees and categories, and give them the concept of dates (created and last opened) and show these dates.
  • Add on top of this full text search on the full set (or subcategory) of tabs/bookmarks (we need a new name).
    • Search "Gardening" for tabs opened between February 2021 and May 2021.
    • Search "Curry" for tabs in my Thailand category
  • Give the notion of views
    • By tree (the sketch below)
    • By timeline (Year, month, days). Think photo management software. Sure I opened this tab after this date, during this trip, etc.
    • By geolocation (tabs opened when I was at home or in this cafe). Sometimes we memorize information through the external context we were in.
    • By labels or keywords that you may have added.
    • By automatic classification of content. Machine Learning is all the rage, so why not use the capabilities that OSes increasingly provide for running machine learning to classify the content, or even embed a model.
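The ideas above amount to a single unified data model where tabs and bookmarks are the same kind of record, queried one way. A minimal sketch of what that could look like; all names, fields, and sample data here are invented for illustration:

```javascript
// Hypothetical unified model: every entry is a "context" with open/closed
// state, a creation date, and categories. One query filters them all.
const contexts = [
  { title: "Gardening basics", open: true,  created: "2021-03-10", categories: ["Gardening"] },
  { title: "Curry recipes",    open: false, created: "2020-11-02", categories: ["Thailand", "Food"] },
  { title: "Bug triage notes", open: true,  created: "2021-01-05", categories: ["Work"] },
];

// Full-text search combined with date-range and category filters,
// mirroring the example queries in the list above.
function search(items, { text, after, before, category } = {}) {
  return items.filter((c) =>
    (!text || c.title.toLowerCase().includes(text.toLowerCase())) &&
    (!after || c.created >= after) &&
    (!before || c.created <= before) &&
    (!category || c.categories.includes(category)));
}

// "Gardening" contexts created between February and May 2021:
const gardening = search(contexts, { text: "gardening", after: "2021-02-01", before: "2021-05-31" });
// "Curry" contexts in the Thailand category:
const curry = search(contexts, { text: "curry", category: "Thailand" });
```

Views by timeline, location, or label would just be different groupings over the same records.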

Sketch for tabs bookmarks

Comments

If you have more questions, things I may have missed, or a different take on them, feel free to comment… Be mindful.

Otsukare!

Spidermonkey Development BlogTC39 meeting, April 19-21 2021

In this TC39 meeting, the updates to JavaScript Classes around private state moved to stage 4. Other proposals of note this meeting related to ArrayBuffers, notably resizable ArrayBuffers and a new proposal introducing read-only ArrayBuffers and fixed views into ArrayBuffers. Read-only ArrayBuffers are not a new kind of ArrayBuffer, but rather a way to freeze existing ArrayBuffers so that they are not modified accidentally. Fixed views into ArrayBuffers have the goal of not exposing more than the intended view of an ArrayBuffer to a third party.

One of the interesting new proposals is Object.has or Object.hasOwn. This would supply developers with a convenient shorthand. The following:

let hasOwn = (obj, prop) => Object.prototype.hasOwnProperty.call(obj, prop);

if (hasOwn(object, "foo")) {
  console.log("has property foo");
}

could instead be written as:

if (Object.hasOwn(object, "foo")) {
  console.log("has property foo")
}

The longhand exists because hasOwnProperty can be shadowed by an own property, or be missing entirely on objects created with Object.create(null); Object.hasOwn would handle these tricky corner cases directly and simplify things.
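The corner cases in question can be shown in a short sketch (variable names here are invented for illustration):

```javascript
// A prototype-less object has no hasOwnProperty method at all:
const dict = Object.create(null);
dict.foo = 1;
// dict.hasOwnProperty("foo")  // would throw TypeError

// An object can shadow hasOwnProperty with something misleading:
const sneaky = { hasOwnProperty: () => false, foo: 1 };
// sneaky.hasOwnProperty("foo") returns false, even though foo is own

// The longhand works for both:
const hasOwn = (obj, prop) => Object.prototype.hasOwnProperty.call(obj, prop);
console.log(hasOwn(dict, "foo"));    // true
console.log(hasOwn(sneaky, "foo"));  // true
```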

Pattern matching was brought back with an update. The proposal has a number of champions now, and a new effort to cleanly define the syntax. The record/tuple champions brought a new proposal that would give mutable and immutable structures symmetry in their methods.

Needs minor change:

None.

Keep an eye on…

  • Pattern Matching
  • Read-only ArrayBuffers and Fixed views
  • Change array by copy

Normative Spec Changes

None.

Proposals Seeking Advancement to Stage 4

Class fields, private methods, and static class features

Proposals Seeking Advancement to Stage 3

Intl Locale Info for Stage 3

  • Notes
  • Proposal Link
  • Slides
  • Summary: An API to expose information about a locale, such as week data (first day of a week, weekend start, weekend end), hour cycle, measurement system, commonly used calendar, etc.
  • Impact on SM: Needs implementation
  • Outcome: Advanced to Stage 3.

ResizableArrayBuffer for Stage 3

  • Notes
  • Proposal Link
  • Slides
  • Summary: Introduces two new ArrayBuffers, one resizable, the other only growable (and shared). The update to resizable ArrayBuffers introduces implementation defined rounding.
  • Impact on SM:
  • Outcome: Did not achieve consensus. Moddable requested more time to investigate the cost of having two new globals on their engine. The current outcome is that instead of introducing these new globals, we will instead overload the name, with a parameter (name to be determined) that will allow for the creation of a resizable/growable arraybuffer/sharedarraybuffer.

Intl DisplayNames v2 for Stage 3

  • Notes
  • Proposal Link
  • Slides
  • Summary: Adds further coverage to the existing Intl.DisplayNames API.
  • Impact on SM: Will Need implementation
  • Outcome: Did not achieve Consensus. There were a few requests for more investigation and time to resolve issues. Specifically, around CLDR and its defined language display names and whether they should all be supported in #29.

Stage 3 Updates

Import Assertions update

  • Notes
  • Proposal Link
  • Slides
  • Summary: The Import Assertions proposal adds an inline syntax for module import statements to pass on more information alongside the module specifier. The initial application for such assertions will be to support additional types of modules in a common way across JavaScript environments, starting with JSON modules. The syntax allows for the following.
      import json from "./foo.json" assert { type: "json" };
    

    The update focused on the question of “what do we do when we have an assertion that isn’t recognized?”. Currently if a host sees a module type assertion that they don’t recognize they can choose what to do. There wasn’t a resolution here so far.

  • Impact on SM: Implementation in Progress

Proposals Seeking Advancement to Stage 2

Extend TimeZoneName Option Proposal for Stage 2

  • Notes
  • Proposal Link
  • Slides
  • Summary: Adds further options for the TimeZoneName option in Intl.DateTimeFormat, allowing for greater accuracy in representing different time zones.
  • Impact on SM: Will Need implementation
  • Outcome: Advanced to stage 2.

Symbols as WeakMap keys for Stage 2

  • Notes
  • Proposal Link
  • Slides
  • Summary: Allows symbols in WeakMap Keys. The discussion focused on the potential issue of using globally shared symbols in a weakmap, as these would effectively be strongly held. As this is already possible in JavaScript (globals can be keys in a weakmap and are also never garbage collected), it was determined that this was not a significant risk.
  • Impact on SM: Will Need implementation
  • Outcome: Advanced to stage 2.

Stage 2 Updates

Intl.NumberFormat V3 Stage 2 Update

  • Notes
  • Proposal Link
  • Slides
  • Summary: A batch of internationalization features for number formatting. This update focused on changes to grouping enums, rounding and precision options, and sign display negative.
  • Impact on SM: Will Need implementation

Intl Enumeration API update

  • Notes
  • Proposal Link
  • Slides
  • Summary: Intl enumeration allows inspecting what is available in the Intl API. Initially, we had reservations that this could be used for fingerprinting. Mozilla did an analysis and no longer holds this concern. However, it is unclear if this API has use cases which warrant its inclusion in the language.
  • Impact on SM: Will Need implementation

Proposals Seeking Advancement to Stage 1

Read-only ArrayBuffer and Fixed view of ArrayBuffer for Stage 1

  • Notes
  • Proposal Link for Read-Only ArrayBuffer
  • Proposal Link for Fixed view
  • Slides
  • Summary: These two proposals introduce ways to constrain ArrayBuffers. The first, read-only ArrayBuffers, would allow you to freeze arraybuffers much the way that you can freeze JS objects. Once it is frozen, it cannot be unfrozen or altered. The second, fixed view, creates a view that third parties cannot change. They are given only one view into the ArrayBuffer.
  • Outcome: Advanced to stage 1.

Change Array by copy for Stage 1

  • Notes
  • Proposal Link
  • Slides
  • Summary: Discussed last meeting in the Records and Tuples topic. This proposal will introduce a set of methods that array and tuple will share. The issue with a method like “sort” is that it mutates the array in place. This proposal introduces a new API, “sorted”, which copies the array and sorts the copy rather than modifying the original. The full set of APIs is still being determined.
  • Outcome: Advanced to stage 1.
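The copying semantics described above can be sketched in userland today. This uses the method name “sorted” mentioned in the summary, but as a plain helper function; it is an illustration of the intended behavior, not the proposal’s actual API surface:

```javascript
// Sketch of copy-then-sort semantics: return a sorted copy,
// leave the original array untouched.
const sorted = (arr, cmp) => [...arr].sort(cmp);

const original = [3, 1, 2];
const copy = sorted(original, (a, b) => a - b);
// original is still [3, 1, 2]; copy is [1, 2, 3]
```

The same pattern extends to the other mutating methods (reverse, splice, etc.) that the proposal wants copying counterparts for.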

Object.has for Stage 1

  • Notes
  • Proposal Link
  • Slides
  • Summary: Checking an object for a property is, at the moment, rather unintuitive and error prone. This proposal introduces a more ergonomic wrapper around a common pattern involving Object.prototype.hasOwnProperty which allows the following:
      let hasOwnProperty = Object.prototype.hasOwnProperty
    
      if (hasOwnProperty.call(object, "foo")) {
        console.log("has property foo")
      }
    

    to be written as:

      if (Object.hasOwn(object, "foo")) {
        console.log("has property foo")
      }
    
  • Outcome: Advanced to stage 1.

Stage 1 Updates

Pattern matching update

  • Notes
  • Proposal Link
  • Slides
  • Summary: This update revives the pattern matching proposal, which will allow programmers to do complex matches on objects and other types. The proposal has been taken over by a new champion group. The goal is to introduce a useful alternative to switch, with more expressive matching.

Daniel StenbergThe libcurl transfer state machine

I’ve worked hard on the presentation I ended up calling libcurl under the hood. A part of that presentation is spent on explaining the main libcurl transfer state machine, and here I’ll try to document some of that in written form. Understanding the main transfer state machine in libcurl could be valuable and interesting for anyone who wants to work on libcurl internals and maybe improve it.

Background

The state is kept in the easy handle, in the struct field called mstate. The source file for this state machine is called multi.c.

An easy handle is always in exactly one of these states for as long as it exists.

This transfer state machine is designed to work for all protocols libcurl supports, but basically no protocol will transition through all states. As you can see in the drawing, there are many different possible transitions from a lot of the states.

libcurl transfer state machine

(click the image for a larger version)

Start

A transfer starts up there above the surface in the INIT state. That’s a yellow box next to the little start button. The boat shows how the transfer basically goes from INIT over to MSGSENT on the right with its finish flag, but the real path is all done under the surface.

The yellow boxes (states) are the ones that exist before or while a connection is set up. The striped background marks all states that have a single and specific connectdata struct associated with the transfer.

CONNECT

If there’s a connection limit, either in total or per host etc, the transfer can get sent to the PENDING state to wait for conditions to change. If not, the state probably moves on to one of the blue ones to resolve host name and connect to the server etc. If a connection could be reused, it can shortcut immediately over to the green DO state.

The green states are all about setting up the connection to a state of fully connected, authenticated and logged in. Ready to send the first request.

DO

The green DO states are all about sending the request with one or more commands so that the file transfer can begin. There are several such states to properly support all protocols but also for historical reasons. We could probably remove a state there by some clever reorgs if we wanted.

PERFORMING

When a request has been issued and the transfer starts, it transitions over to PERFORMING. In the white states data is flowing. Potentially a lot. Potentially in both or either direction. If during the transfer curl finds out that the transfer is faster than allowed, it will move into RATELIMITING until it has cooled down a bit.

DONE

All the post-transfer states are red in the picture. DONE is the first of them and, after doing what it needs to wrap up the transfer, it disassociates from the connection and moves to COMPLETED. There are no stripes behind that state. Disassociating here means that the connection is returned to the connection pool for later reuse, or, in the worst case, closed if it is deemed unreusable or the application has instructed so.

As you’ll note, there’s no disconnect anywhere in the state machine. This is simply because the disconnect is not really a part of the transfer at all.

COMPLETED

This is the end of the road. In this state a message will be created and put in the outgoing queue for the application to read, and then as a final last step it moves over to MSGSENT where nothing more happens.

A typical handle remains in this state until the transfer is reused and restarted, at which point it will be set back to the INIT state again and the journey begins anew. Possibly with other transfer parameters and URL this time. Or perhaps not.
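The flow described above can be condensed into a toy sketch of the core invariant: a handle is in exactly one state, and only certain transitions are legal. This is illustrative only; the state names follow the post where it names them, the transition table here is a simplified guess at a subset, and the real logic lives in curl’s multi.c:

```javascript
// Toy transition table: each state maps to the states it may move to.
// Deliberately incomplete; the real machine has many more states and edges.
const transitions = {
  INIT: ["CONNECT"],
  CONNECT: ["PENDING", "RESOLVING", "DO"], // a reused connection shortcuts to DO
  PENDING: ["CONNECT"],
  RESOLVING: ["CONNECTING"],
  CONNECTING: ["DO"],
  DO: ["PERFORMING"],
  PERFORMING: ["RATELIMITING", "DONE"],
  RATELIMITING: ["PERFORMING"],
  DONE: ["COMPLETED"],
  COMPLETED: ["MSGSENT"],
  MSGSENT: ["INIT"], // handle reuse restarts the journey
};

function transition(handle, next) {
  if (!transitions[handle.mstate].includes(next)) {
    throw new Error(`illegal transition ${handle.mstate} -> ${next}`);
  }
  handle.mstate = next;
}

const handle = { mstate: "INIT" };
transition(handle, "CONNECT");
transition(handle, "DO"); // reused connection: shortcut straight to DO
```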

State machines within each state

What this state diagram and explanation don’t show is of course that in each of these states there can be protocol specific handling, and each of those functions might in themselves have their own state machines to control what to do and how to handle the protocol details.

Each protocol in libcurl has its own “protocol handler” and most of the protocol specific stuff in libcurl is then done by calls from the generic parts to the protocol specific parts with calls like protocol_handler->proto_connect() that calls the protocol specific connection procedure.

This allows the generic state machine described in this blog post to not really know the protocol specifics, while all of the currently supported 26 transfer protocols can still be handled.
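The handler-table idea can be sketched like this (a toy in JavaScript rather than C; the handler names and fields here are invented for illustration and are not curl’s real structs):

```javascript
// Per-protocol handler table: the generic machinery never needs to know
// protocol details, only that every handler exposes the same entry points.
const handlers = {
  http: { protoConnect: (conn) => `${conn.host}: HTTP connected` },
  ftp:  { protoConnect: (conn) => `${conn.host}: FTP logged in`  },
};

// The generic code just dispatches, like protocol_handler->proto_connect()
// in libcurl's generic parts.
function genericConnect(conn) {
  return handlers[conn.scheme].protoConnect(conn);
}

const result = genericConnect({ scheme: "ftp", host: "example.com" });
```

Adding a protocol then means adding a handler entry, not touching the generic state machine.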

libcurl under the hood – the video

Here’s the full video of libcurl under the hood.

If you want to skip directly to the state machine diagram and the following explanation, go here.

Credits

Image by doria150 from Pixabay

Nick FitzgeraldHit the Ground Running: Wasm Snapshots for Fast Start Up

I gave a (virtual) talk at the WebAssembly Summit this year titled “Hit the Ground Running: Wasm Snapshots for Fast Start Up”. Here is the talk’s abstract:

Don’t make your users wait while your Wasm module initializes itself! Wizer instantiates your WebAssembly module, executes its initialization functions, and then snapshots the initialized state out into a new, pre-initialized WebAssembly module. Now you can use this new module to hit the ground running, without waiting for any of that first-time initialization code to complete. This talk will cover the design and implementation of Wizer; discuss its performance characteristics and the scenarios in which it excels and when it isn’t the right tool; and finally, in the process of doing all that, we’ll take a closer look at what makes up the guts of a WebAssembly module: memories, globals, tables, etc.

You can view the slide deck here, check out the benchmarks here, and the recording is embedded below:

The Rust Programming Language BlogAnnouncing Rust 1.52.1

The Rust team has prepared a new release, 1.52.1, working around a bug in incremental compilation which was made into a compiler error in 1.52.0. We recommend all Rust users, including those currently using stable versions prior to 1.52.0, upgrade to 1.52.1 or disable incremental compilation. Guidance on how to do so is available below.

If you have a previous version of Rust installed via rustup, getting Rust 1.52.1 is as easy as:

rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website.

Summary

This release works around broken builds on 1.52.0, which are caused by newly added verification. The bugs this verification detects are present in all Rust versions1, and can trigger miscompilations in incremental builds, so downgrading to a prior stable version is not a fix.

Users are encouraged to upgrade to 1.52.1 or disable incremental in their local environment if on a prior version: please see the what you should do section for details on how to do so.

Incremental compilation is off by default for release builds, so few production builds should be affected (only for users who have opted in).

Miscompilations that can arise from the bugs in incremental compilation generate incorrect code in final artifacts, essentially producing malformed binaries, which means that in theory any behavior is possible. In practice we are currently only aware of one particular known miscompilation, but bugs due to incremental are notoriously hard to track down: users frequently simply rebuild after some light editing if they see unexpected results from their binaries, and this often causes sufficient recompilation to fix the bug(s).

This post is going to:

  1. Explain what the errors look like,
  2. Explain what the check does, at a high level,
  3. Explain how the check is presenting itself in the Rust 1.52.0 release,
  4. Tell you what you should do if you see an unstable fingerprint on your project,
  5. Describe our plans for how the Rust project will address the problems discussed here.

What does the error look like?

The error message looks something like this, with the key piece being the "found unstable fingerprints" text.

thread 'rustc' panicked at 'assertion failed: `(left == right)`
  left: `Some(Fingerprint(4565771098143344972, 7869445775526300234))`,
  right: `Some(Fingerprint(14934403843752251060, 623484215826468126))`: found unstable fingerprints for <massive text describing rustc internals elided>

error: internal compiler error: unexpected panic

note: the compiler unexpectedly panicked. this is a bug.

This is the error caused by the internal consistency check, and as stated in the diagnostic, it yields an "Internal Compiler Error" (or ICE). In other words, it represents a bug in the internals of the Rust compiler itself. In this case, the ICE is revealing a bug in incremental compilation that predates the 1.52.0 release and could result in miscompilation if it had not been caught.

What are fingerprints? Why are we checking them?

The Rust compiler has support for "incremental compilation", which has been described in a 2016 blog post. When incremental compilation is turned on, the compiler breaks the input source into pieces, and tracks how those input pieces influence the final build product. Then, when the inputs change, it detects this and reuses artifacts from previous builds, striving to expend effort solely on building the parts that need to respond to the changes to the input source code.

Fingerprints are part of our architecture for detecting when inputs change. More specifically, a fingerprint (along with some other state to establish context) is a 128-bit value intended to uniquely identify internal values used within the compiler. Some compiler-internal results are stored on disk ("cached") between runs. Fingerprints are used to validate that a newly computed result is unchanged from the cached result. (More details about this are available in the relevant chapter of the rustc dev guide.)

The fingerprint stability check is a safeguard asserting internal consistency of the fingerprints. Sometimes the compiler is forced to rerun a query, and expects that the output is the same as from a prior incremental compilation session. The newly enabled verification checks that the value is indeed as expected, rather than assuming so. In some cases, due to bugs in the compiler's implementation, this was not actually the case.

History

We initially added these fingerprint checks as a tool to use when developing rustc itself, back in 2017. It was solely provided via an unstable -Z flag, only available to nightly and development builds.

More recently, in March, we encountered a miscompilation that led us to turn on verify-ich by default. The Rust compiler team decided it was better to catch fingerprint problems and abort compilation, rather than allow potential miscompilations (and subsequent misbehavior) to sneak into Rust programmers' binaries.

When we first turned on the fingerprint checks by default, there was a steady stream of issues filed by users of the nightly (and beta) toolchains, and steady progress has been made on identifying fixes, a number of which have already landed.

In the past week, we had started making plans to improve the user-experience, so that the diagnostic issued by the check would do a better job of telling the programmer what to do in response. Unfortunately, this was done under the assumption that the new verification would ship in 1.53, not 1.52.

It turns out verify-ich was turned on in version 1.52.0, which was released recently.

Today's new release, 1.52.1, works around the breakage caused by the newly added verification by temporarily changing the defaults in the Rust compiler to disable incremental unless the user knowingly opts in.

How does this show up

Essentially, for some crates, certain sequences of edit-compile cycles will cause rustc to hit the "unstable fingerprints" ICE. I showed one example at the start of this blog post.

Another recent example looks like this:

thread 'rustc' panicked at 'found unstable fingerprints for predicates_of(<massive text describing rustc internals elided>)', /rustc/.../compiler/rustc_query_system/src/query/plumbing.rs:593:5

They all arise from inconsistencies when comparing the incremental-compilation cache stored on disk against the values computed during a current rustc invocation, which means they all arise from using incremental compilation.

There are several ways that you may have incremental compilation turned on:

  1. You may be building with the dev or test profiles which default to having incremental compilation enabled.
  2. You may have set the environment variable CARGO_INCREMENTAL=1
  3. You may have enabled the build.incremental setting in your Cargo config
  4. You may have enabled the incremental setting in your Cargo.toml for a given profile
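For example, the last of these corresponds to a profile setting in Cargo.toml along these lines (shown here for the release profile; `incremental` is Cargo's standard profile key):

```toml
# Cargo.toml: explicitly opting the release profile into incremental
[profile.release]
incremental = true
```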

If your project has not adjusted the defaults, then incremental compilation is disabled in the release profile (e.g. when running cargo build --release) on all Rust versions[1], and these issues should not affect your release builds.

What should a Rust programmer do in response

The Internal Compiler Error asks you to report a bug, and if you can do so, we still want that information. We want to know about the cases that are failing.

But regardless of whether or not you file a bug, the problem can be worked around on your end by either:

  1. upgrading to 1.52.1, if you have not yet done so (which will disable incremental for you), or
  2. deleting your incremental compilation cache (e.g. by running cargo clean), or
  3. forcing incremental compilation to be disabled, by setting CARGO_INCREMENTAL=0 in your environment or build.incremental to false in the config.toml.
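As a sketch of the third option, a project-local Cargo config could look like this (placed in the project's .cargo/config.toml):

```toml
# .cargo/config.toml: disable incremental compilation for this project
[build]
incremental = false
```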

We recommend that users of 1.52.0 upgrade to 1.52.1, which disables incremental compilation.

We do not recommend that users of 1.52.0 downgrade to an earlier version of Rust in response to this problem. As noted above, there is at least one instance of a silent miscompilation caused by incremental compilation that was not caught until we added the fingerprint checking.

If a user is willing to deal with the incremental verification ICEs, and wishes to opt back into the 1.52.0 behavior, they may set RUSTC_FORCE_INCREMENTAL to 1 in their environment. The Rust compiler will then respect the -Cincremental option passed by Cargo, and things will work as before, though with the added verification. Note that this flag does not enable incremental if it has not already been separately enabled (whether by Cargo or otherwise).

If you are currently using a toolchain prior to 1.52.0, and wish to continue doing so, we recommend that you disable incremental compilation to avoid hitting silent miscompilations.

Since incremental compilation first landed, it has been a major improvement to compile times for many users, and it has only improved over time. We acknowledge that the workarounds and recommendations presented here are painful, and we will be working hard to ensure the situation is as temporary as possible.

What is the Rust project going to do to fix this

Short-term plan

We have issued 1.52.1 today which:

  • Disables incremental compilation in the Rust compiler (unless asked for by a new environment variable, RUSTC_FORCE_INCREMENTAL=1).
  • Improves diagnostic output for the new verification if incremental compilation is enabled, indicating how to work around the bugs by purging incremental state or disabling incremental.

This is intended to be a mitigation that gives the majority of Rust users an upgrade path to a safe Rust compiler that does not risk miscompiling their code, while still providing the option for users who are willing to deal with the errors to do so.

We expect to continue to actively invest in fixing the bugs, and depending on our confidence in the fixes, may issue a 1.52.2 point release which backports those fixes to the stable channel. Users wishing to help us test can use the nightly channel, and report bugs to rust-lang/rust with any ICEs they are seeing.

We are also currently not planning to disable incremental on the beta channel, but this decision has not been firmly committed to. A number of fixes are available on 1.53 beta today, so users who wish to continue using incremental may want to switch to that. Nightly will always have the latest in fixes, of course.

Long-term plan

The long-term plan is to fix the bugs! Incremental compilation is the only realistic way for the Rust compiler to be able to provide a fast edit-compile-run cycle for all of its programmers, and so we need to address all of the issues that have been identified thus far via verify-ich. (There are 32 such issues as of this writing, though many are duplicates.)

We are actively investing in this, and a number of bugs have already been identified and fixed. Depending on the state of the fixes, future stable releases (1.53 and onwards) will likely re-enable incremental compilation.

The Rust teams will also develop plans to ensure we have better tracking systems in place for bugs in the future, both to prevent situations like this from arising again and to further increase the stability of our releases by tracking bugs more accurately as they propagate across channels.

[1] Since incremental was first enabled, which was in Rust 1.24.

Patrick ClokeA new maintainer for django-allauth-2fa

I’m excited to announce the django-allauth-2fa project has a new maintainer! It can now be found under the valohai organization on GitHub, who have already contributed quite a bit to the package.

This project lets you easily add two-factor authentication to a Django project using django-allauth.

As a bit …

The Rust Programming Language BlogAnnouncing Rust 1.52.0

The Rust team is happy to announce a new version of Rust, 1.52.0. Rust is a programming language that is empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, getting Rust 1.52.0 is as easy as:

rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.52.0 on GitHub.

What's in 1.52.0 stable

The most significant change in this release is not to the language or standard libraries, but rather an enhancement to tooling support for Clippy.

Previously, running cargo check followed by cargo clippy wouldn't actually run Clippy: the build caching in Cargo didn't differentiate between the two. In 1.52, however, this has been fixed, which means that users will get the expected behavior independent of the order in which they run the two commands.

Stabilized APIs

The following methods were stabilized.

The following previously stable APIs are now const.

Other changes

There are other changes in the Rust 1.52.0 release: check out what changed in Rust, Cargo, and Clippy.

Contributors to 1.52.0

Many people came together to create Rust 1.52.0. We couldn't have done it without all of you. Thanks!

Data@MozillaAnnouncing Mozilla Rally

We wrote recently about how difficult it is to understand the data companies collect from you, and what they’re doing with it. These companies determine how your data is used and who benefits. Cutting people out of decisions about their data is an inequity that harms not only individuals, but also society and the internet. We believe that you should determine who benefits from your data. Today, we’re taking a step in that direction with the alpha release of Mozilla Rally. Rally is now available for desktop Firefox users age 19 and older in the USA.

Rally is aimed at rebuilding your equity in your data. We allow you to choose how to contribute your data and for what purpose. We’re building a community to help understand some of the biggest problems of the internet, and we want you to join us.

How Rally Works

When you join Rally, you have the opportunity to participate in data crowdsourcing projects — we call them “studies” — focused on understanding and finding solutions for social problems caused by the data economy. You will always see a simple explanation of a study’s purpose, the data it collects, how the data will be used, and who will have access to your data. All your data is stored in Mozilla’s restricted servers, and access to the analysis environment is tightly controlled. For those who really want to dig deep, you can read our detailed disclosures and even inspect our code.  

Our First Study

Major tech and ad companies track you and others like you online. They can even predict what you’re likely to do next. This information isn’t available to you. Our first study seeks to remedy this imbalance by exploring the time we spend online. We will publish our findings to give you a first look at the data you help create as part of our Rally community. 

This first study also creates a foundation for communities to share data in equitable ways. Rally aims to improve our collective understanding of the value of personal data, so we will share public reports and updates with our community at key milestones. 

Change starts with exploration

We started Rally as an innovation program, building on earlier experiments with trusted research institutions. In the coming months, we are exploring new products and public interest projects that return equity to communities. We are data optimists and want to change the way the data economy works for both people and day-to-day business. We are committed to putting our users first every step of the way, and building a community together. 

Join us at rally.mozilla.org. You can also follow us on Twitter.

This Week In RustThis Week in Rust 389

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

Official
Newsletters
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs
Papers/Research Projects
Miscellaneous

Crate of the Week

This week's crate is display_utils, a library with Displayable structs to make string manipulation easier.

Thanks to kangalioo for the nomination

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

322 pull requests were merged in the last week

Rust Compiler Performance Triage

Quiet week, no significant changes.

Triage done by @simulacrum. Revision range: 537544..7a0f178

Full report here.

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs
Tracking Issues & PRs

New RFCs

Upcoming Events

Online

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

Fiberplane

Paige

Netlify

e.ventures

ConsenSys

Spacemesh

DEX Labs

Kollider

Ockam

Kraken

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

Using R or Numpy is like driving around in a sports car. You just turn the wheel, press the pedals, and burn rubber. Rust (and other systems languages) are like getting a spaceship. You can go places and do things that you never dreamt of in a car. They are harder to pilot, but the possibilities seem unlimited! With the Rust ecosystem still in development, it feels like parts of your spaceship come in boxes of parts labeled "some assembly required".

Erik Rose on rust-users

Thanks to Phlopsi for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, and cdmistman.

Discuss on r/rust

Spidermonkey Development BlogImplementing Private Fields for JavaScript

This post is cross-posted from Matthew Gaudet’s blog

When implementing a language feature for JavaScript, an implementer must make decisions about how the language in the specification maps to the implementation. Sometimes this is fairly simple, where the specification and implementation can share much of the same terminology and algorithms. Other times, pressures in the implementation make it more challenging, requiring or pressuring the implementation strategy to diverge from the language specification.

Private fields are an example of where the specification language and implementation reality diverge, at least in SpiderMonkey, the JavaScript engine which powers Firefox. To understand more, I’ll explain what private fields are, describe a couple of models for thinking about them, and explain why our implementation diverges from the specification language.

Private Fields

Private fields are a language feature being added to the JavaScript language through the TC39 proposal process, as part of the class fields proposal, which is at Stage 4 in the TC39 process. We will ship private fields and private methods in Firefox 90.

The private fields proposal adds a strict notion of ‘private state’ to the language. In the following example, #x may only be accessed by instances of class A:

class A {
  #x = 10;
}

This means that outside of the class, it is impossible to access that field. This is unlike public fields, as the following example shows:

class A {
  #x = 10; // Private field
  y = 12; // Public Field
}

var a = new A();
a.y; // Accessing public field y: OK
a.#x; // Syntax error: reference to undeclared private field

Even various other tools that JavaScript gives you for interrogating objects are prevented from accessing private fields (e.g. Object.getOwnProperty{Symbols,Names} don’t list private fields; there’s no way to use Reflect.get to access them).
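This can be checked directly in any engine that ships private fields; a small sketch:

```javascript
// Private fields are invisible to the usual reflection tools.
class A {
  #x = 10; // private field
  y = 12;  // public field
}

const a = new A();
Object.getOwnPropertyNames(a);   // ["y"] (the private #x is not listed)
Object.getOwnPropertySymbols(a); // []
Reflect.ownKeys(a);              // ["y"]
// No string or symbol key reaches the private field: "#x" is not a
// property name, so Reflect.get(a, "#x") simply returns undefined.
```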

A Feature Three Ways

When talking about a feature in JavaScript, there are often three different aspects in play: the mental model, the specification, and the implementation.

The mental model provides the high level thinking that we expect programmers to use mostly. The specification in turn provides the detail of the semantics required by the feature. The implementation can look wildly different from the specification text, so long as the specification semantics are maintained.

These three aspects shouldn’t produce different results for people reasoning through things (though, sometimes a ‘mental model’ is shorthand, and doesn’t accurately capture semantics in edge case scenarios).

We can look at private fields using these three aspects:

Mental Model

The most basic mental model one can have for private fields is what it says on the tin: fields, but private. Now, JS fields become properties on objects, so the mental model is perhaps ‘properties that can’t be accessed from outside the class’.

However, when we encounter proxies, this mental model breaks down a bit; trying to specify the semantics for ‘hidden properties’ and proxies is challenging (what happens when a Proxy is trying to provide access control to properties, if you aren’t supposed to be able to see private fields with Proxies? Can subclasses access private fields? Do private fields participate in prototype inheritance?). In order to preserve the desired privacy properties, an alternative mental model became the way the committee thinks about private fields.

This alternative model is called the ‘WeakMap’ model. In this mental model you imagine that each class has a hidden weak map associated with each private field, such that you could hypothetically ‘desugar’

class A {
  #x = 15;
  g() {
    return this.#x;
  }
}

into something like

class A_desugared {
  static InaccessibleWeakMap_x = new WeakMap();
  constructor() {
    A_desugared.InaccessibleWeakMap_x.set(this, 15);
  }

  g() {
    return A_desugared.InaccessibleWeakMap_x.get(this);
  }
}

The WeakMap model is, surprisingly, not how the feature is written in the specification, but it is an important part of the design intention behind private fields. I will show a bit later where this mental model comes into play.

Specification

The actual specification changes are provided by the class fields proposal, specifically the changes to the specification text. I won’t cover every piece of this specification text, but I’ll call out specific aspects to help elucidate the differences between specification text and implementation.

First, the specification adds the notion of [[PrivateName]], which is a globally unique field identifier. This global uniqueness is to ensure that two classes cannot access each other’s fields merely by having the same name.

function createClass() {
  return class {
    #x = 1;
    static getX(o) {
      return o.#x;
    }
  };
}

let [A, B] = [0, 1].map(createClass);
let a = new A();
let b = new B();

A.getX(a); // Allowed: Same class
A.getX(b); // Type Error, because different class.

The specification also adds a new ‘internal slot’, which is a specification level piece of internal state associated with an object in the spec, called [[PrivateFieldValues]] to all objects. [[PrivateFieldValues]] is a list of records of the form:

{
  [[PrivateName]]: Private Name,
  [[PrivateFieldValue]]: ECMAScript value
}

To manipulate this list, the specification adds four new algorithms:

  1. PrivateFieldFind
  2. PrivateFieldAdd
  3. PrivateFieldGet
  4. PrivateFieldSet

These algorithms largely work as you would expect: PrivateFieldAdd appends an entry to the list (though, in the interest of providing errors eagerly, it will throw a TypeError if a matching Private Name already exists in the list; I’ll show how that can happen later). PrivateFieldGet retrieves a value stored in the list, keyed by a given Private Name, and so on.
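As an illustration, the four algorithms can be sketched in plain JavaScript, modelling [[PrivateFieldValues]] as an ordinary array of records (these are spec-level operations, not engine API; the record shape here is a simplification):

```javascript
// Sketch of the spec's private-field algorithms over a plain list.
// An "object" here is anything with a privateFieldValues array.
function privateFieldFind(privateName, obj) {
  return obj.privateFieldValues.find(r => r.privateName === privateName);
}

function privateFieldAdd(privateName, obj, value) {
  // Eager error: adding a field that already exists is a TypeError.
  if (privateFieldFind(privateName, obj) !== undefined)
    throw new TypeError("private field already present");
  obj.privateFieldValues.push({ privateName, value });
}

function privateFieldGet(privateName, obj) {
  const entry = privateFieldFind(privateName, obj);
  if (entry === undefined) throw new TypeError("private field missing");
  return entry.value;
}

function privateFieldSet(privateName, obj, value) {
  const entry = privateFieldFind(privateName, obj);
  if (entry === undefined) throw new TypeError("private field missing");
  entry.value = value;
}
```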

The Constructor Override Trick

When I first started to read the specification, I was surprised to see that PrivateFieldAdd could throw. Given that it was only called from a constructor on the object being constructed, I had fully expected that the object would be freshly created, and therefore you’d not need to worry about a field already being there.

This turns out to be possible, a side effect of some of the specification’s handling of constructor return values. To be more concrete, the following is an example provided to me by André Bargull, which shows this in action.

class Base {
  constructor(o) {
    return o; // Note: We are returning the argument!
  }
}

class Stamper extends Base {
  #x = "stamped";
  static getX(o) {
    return o.#x;
  }
}

Stamper is a class which can ‘stamp’ its private field onto any object:

let obj = {};
new Stamper(obj); // obj now has private field #x
Stamper.getX(obj); // => "stamped"

This means that when we add private fields to an object we cannot assume it doesn’t have them already. This is where the pre-existence check in PrivateFieldAdd comes into play:

let obj2 = {};
new Stamper(obj2);
new Stamper(obj2); // Throws 'TypeError' due to pre-existence of private field

This ability to stamp private fields into arbitrary objects interacts with the WeakMap model a bit here as well. For example, given that you can stamp private fields onto any object, that means you could also stamp a private field onto a sealed object:

var obj3 = {};
Object.seal(obj3);
new Stamper(obj3);
Stamper.getX(obj3); // => "stamped"

If you imagine private fields as properties, this is uncomfortable, because it means you’re modifying an object that was sealed by a programmer to prevent future modification. However, using the weak map model, it is totally acceptable, as you’re only using the sealed object as a key in the weak map.
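The WeakMap view can be demonstrated directly: a frozen (and therefore sealed) object is a perfectly good WeakMap key, because the object itself is never modified:

```javascript
// A sealed/frozen object works fine as a WeakMap key: the object
// is only used as a key, and is never itself modified.
const hiddenX = new WeakMap();
const obj = Object.freeze({});
hiddenX.set(obj, "stamped");
hiddenX.get(obj);     // "stamped"
Object.isFrozen(obj); // still true
```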

PS: Just because you can stamp private fields into arbitrary objects, doesn’t mean you should: Please don’t do this.

Implementing the Specification

When faced with implementing the specification, there is a tension between following the letter of the specification, and doing something different to improve the implementation on some dimension.

Where it is possible to implement the steps of the specification directly, we prefer to do that, as it makes maintenance of features easier as specification changes are made. SpiderMonkey does this in many places. You will see sections of code that are transcriptions of specification algorithms, with step numbers for comments. Following the exact letter of the specification can also be helpful where the specification is highly complex and small divergences can lead to compatibility risks.

Sometimes however, there are good reasons to diverge from the specification language. JavaScript implementations have been honed for high performance for years, and there are many implementation tricks that have been applied to make that happen. Sometimes recasting a part of the specification in terms of code already written is the right thing to do, because that means the new code is also able to have the performance characteristics of the already written code.

Implementing Private Names

The specification language for Private Names already almost matches the semantics around Symbols, which already exist in SpiderMonkey. So adding PrivateNames as a special kind of Symbol is a fairly easy choice.

Implementing Private Fields

Looking at the specification for private fields, a direct implementation of it would add an extra hidden slot to every object in SpiderMonkey, containing a reference to a list of {PrivateName, Value} pairs. However, implementing it this way has a number of clear downsides:

  • It adds memory usage to objects without private fields
  • It requires invasive addition of either new bytecodes or complexity to performance sensitive property access paths.

An alternative option is to diverge from the specification language, and implement only the semantics, not the actual specification algorithms. In the majority of cases, you really can think of private fields as special properties on objects that are hidden from reflection or introspection outside a class.

If we model private fields as properties, rather than a special side-list that is maintained with an object, we are able to take advantage of the fact that property manipulation is already extremely optimized in a JavaScript engine.

However, properties are subject to reflection. So if we model private fields as object properties, we need to ensure that reflection APIs don’t reveal them, and that you can’t get access to them via Proxies.

In SpiderMonkey, we elected to implement private fields as hidden properties in order to take advantage of all the optimized machinery that already exists for properties in the engine. When I started implementing this feature André Bargull – a SpiderMonkey contributor for many years – actually handed me a series of patches that had a good chunk of the private fields implementation already done, for which I was hugely grateful.

Using our special PrivateName symbols, we effectively desugar

class A {
  #x = 10;
  x() {
    return this.#x;
  }
}

to something that looks closer to

class A_desugared {
  constructor() {
    this[PrivateSymbol(#x)] = 10;
  }
  x() {
    return this[PrivateSymbol(#x)];
  }
}

Private fields have slightly different semantics than properties, however. They are designed to issue errors on patterns expected to be programming mistakes, rather than silently accepting them. For example:

  1. Accessing a property on an object that doesn’t have it returns undefined. Private fields are specified to throw a TypeError, as a result of the PrivateFieldGet algorithm.
  2. Setting a property on an object that doesn’t have it simply adds the property. Private fields will throw a TypeError in PrivateFieldSet.
  3. Adding a private field to an object that already has that field also throws a TypeError in PrivateFieldAdd. See “The Constructor Override Trick” above for how this can happen.

To handle the different semantics, we modified the bytecode emission for private field accesses. We added a new bytecode op, CheckPrivateField which verifies an object has the correct state for a given private field. This means throwing an exception if the property is missing or present, as appropriate for Get/Set or Add. CheckPrivateField is emitted just before using the regular ‘computed property name’ path (the one used for A[someKey]).

CheckPrivateField is designed such that we can easily implement an inline cache using CacheIR. Since we are storing private fields as properties, we can use the Shape of an object as a guard, and simply return the appropriate boolean value. The Shape of an object in SpiderMonkey determines what properties it has, and where they are located in the storage for that object. Objects that have the same shape are guaranteed to have the same properties, and it’s a perfect check for an IC for CheckPrivateField.

Other modifications we made to the engine include omitting private fields from the property enumeration protocol, and allowing the extension of sealed objects when adding a private field.

Proxies

Proxies presented us a bit of a new challenge. Concretely, using the Stamper class above, you can add a private field directly to a Proxy:

let obj3 = {};
let proxy = new Proxy(obj3, handler);
new Stamper(proxy)

Stamper.getX(proxy) // => "stamped"
Stamper.getX(obj3)  // TypeError, private field is stamped
                    // onto the Proxy Not the target!

I definitely found this surprising initially, because I had expected that, like other operations, the addition of a private field would tunnel through the proxy to the target. However, once I was able to internalize the WeakMap mental model, I understood this example much better. The trick is that in the WeakMap model, it is the Proxy, not the target object, that is used as the key in the #x WeakMap.

These semantics presented a challenge to our implementation choice to model private fields as hidden properties however, as SpiderMonkey’s Proxies are highly specialized objects that do not have room for arbitrary properties. In order to support this case, we added a new reserved slot for an ‘expando’ object. The expando is an object allocated lazily that acts as the holder for dynamically added properties on the proxy. This pattern is used already for DOM objects, which are typically implemented as C++ objects with no room for extra properties. So if you write document.foo = "hi", this allocates an expando object for document, and puts the foo property and value in there instead. Returning to private fields, when #x is accessed on a Proxy, the proxy code knows to go and look in the expando object for that property.
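The lazy-expando pattern can be sketched in JavaScript (an illustrative model only; SpiderMonkey’s real implementation is C++, and the names here are made up for the sketch):

```javascript
// Sketch of the lazy "expando" pattern: a specialized object with no
// room for arbitrary properties keeps one reserved slot that lazily
// holds an ordinary object for any dynamically added properties.
class SpecializedObject {
  constructor() {
    this.expandoSlot = null; // reserved slot, unused until needed
  }
  ensureExpando() {
    if (this.expandoSlot === null) {
      this.expandoSlot = Object.create(null); // allocated on first use
    }
    return this.expandoSlot;
  }
  setDynamic(key, value) {
    this.ensureExpando()[key] = value;
  }
  getDynamic(key) {
    return this.expandoSlot === null ? undefined : this.expandoSlot[key];
  }
}
```

Objects that never receive dynamic properties pay only for the one null slot, which is the point of the pattern.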

In Conclusion

Private Fields is an instance of implementing a JavaScript language feature where directly implementing the specification as written would be less performant than re-casting the specification in terms of already optimized engine primitives. Yet, that recasting itself can require some problem solving not present in the specification.

At the end, I am fairly happy with the choices made for our implementation of Private Fields, and am excited to see it finally enter the world!

Acknowledgements

I have to thank, again, André Bargull, who provided the first set of patches and laid down an excellent trail for me to follow. His work made finishing private fields much easier, as he’d already put a lot of thought into decision making.

Jason Orendorff has been an excellent and patient mentor as I have worked through this implementation, including two separate implementations of the private field bytecode, as well as two separate implementations of proxy support.

Thanks to Caroline Cullen, and Iain Ireland for helping to read drafts of this post.

Wladimir Palant: Universal XSS in Ninja Cookie extension

Cookie consent screens are really annoying. They attempt to trick you into accepting all cookies, and dismissing them without agreeing is made intentionally difficult. A while back I wrote on Twitter that I’m almost at the point of writing a private browser extension to automate the job. And somebody recommended the Ninja Cookie extension to me, which from its description seemed perfect for the job.

Now I am generally wary of extensions that necessarily need full access to every website. This is particularly true if these extensions have to interact with the websites in complicated ways. What are the chances that this is implemented securely? So I took a closer look at Ninja Cookie source code, and I wasn’t disappointed. I found several issues in the extension, one even allowing any website to execute JavaScript code in the context of any other website (Universal XSS).

The cookie ninja from the extension’s logo is lying dead instead of clicking on prompts

As of Ninja Cookie 0.7.0, the Universal XSS vulnerability has been resolved. The other issues remain, however; these are exploitable by anybody with access to the Ninja Cookie download server (ninja-cookie.gitlab.io). This seems to be the reason why Mozilla Add-ons currently only offers the rather dated Ninja Cookie 0.2.7 for download; newer versions have been disabled. The Chrome Web Store, however, still offers the problematic extension version. I didn’t check whether the extension versions offered for the Edge, Safari and Opera browsers are affected.

How does the extension work?

When it comes to cookie consent screens, the complicating factor is: there are way too many. While there are some common approaches, any given website is likely to be “special” in some respect. For my private extension, the idea was having a user interface to create site-specific rules, so that at least on websites I use often things were covered. But Ninja Cookie has it completely automated of course.

So it will download several sets of rules from ninja-cookie.gitlab.io. For example, cmp.json currently contains the following rule:

"cmp/admiral": {
  "metadata": {
    "name": "Admiral",
    "website": "https://www.getadmiral.com/",
    "iab": "admiral.mgr.consensu.org"
  },
  "match": [{
    "type": "check",
    "selector": "[class^='ConsentManager__']"
  }],
  "required": [{
    "type": "cookie",
    "name": "euconsent",
    "missing": true
  }],
  "action": [{
    "type": "hide"
  }, {
    "type": "css",
    "selector": "html[style*='overflow']",
    "properties": {
      "overflow": "unset"
    }
  }, {
    "type": "css",
    "selector": "body[style*='overflow']",
    "properties": {
      "overflow": "unset"
    }
  }, {
    "type": "sleep"
  }, {
    "type": "click",
    "selector": "[class^='ConsentManager__'] [class^='Card__CardFooter'] button:first-of-type"
  }, {
    "type": "sleep"
  }, {
    "type": "checkbox",
    "selector": "[class^='ConsentManager__'] [class^='Toggle__Label'] input"
  }, {
    "type": "sleep"
  }, {
    "type": "click",
    "selector": "[class^='ConsentManager__'] [class^='Card__CardFooter'] button:last-of-type"
  }]
},

This is meant to address Admiral cookie consent prompts. There is a match clause, making sure that this only applies to the right pages. The check rule here verifies that an element matching the given selector exists on the page. The required clause contains another rule, checking that a particular cookie is missing. Finally, the action clause defines what to do: a sequence of nine rules. There are css rules here, applying CSS properties to matching elements. The click rules click buttons, and the checkbox rules change checkbox values.
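To make the required clause concrete, here is a small sketch of how such a cookie rule could be evaluated. The function name and shape are mine, not the extension's actual code; it takes the cookie header as a plain string so it can run outside a browser:

```javascript
// Evaluate a rule like {"type": "cookie", "name": "euconsent",
// "missing": true}. Illustrative only -- not Ninja Cookie's real code.
function cookieRuleSatisfied(rule, cookieString) {
  const present = cookieString
    .split("; ")
    .some(entry => entry.startsWith(rule.name + "="));
  // With "missing": true, the rule applies only when the cookie is absent.
  return rule.missing ? !present : present;
}

const rule = { type: "cookie", name: "euconsent", missing: true };
console.log(cookieRuleSatisfied(rule, "sessionid=abc")); // true: no consent cookie yet, actions run
console.log(cookieRuleSatisfied(rule, "euconsent=xyz")); // false: consent already stored, nothing to do
```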

Aren’t these rules too powerful?

Now let’s imagine that ninja-cookie.gitlab.io turns malicious. Maybe the vendor decides to earn some extra money, or maybe the repository backing it simply gets compromised. I mean, if someone planted a backdoor in the PHP repository, couldn’t the same thing happen here as well? Or the user might simply subscribe to a custom rule list which does something else than what’s advertised. How bad would that get?

Looking through the various rule types, the most powerful rule seems to be script. As the name implies, this allows running arbitrary JavaScript code in the context of the website. But wait, it has been defused, to some degree! Ninja Cookie might ask you before running a script. It will be something like the following:

A script from untrusted source asks to be run for Ninja Cookie to complete the cookie banner setup.

Running untrusted script can be dangerous. Do you want to continue ?

Content: ‘{const e=(window.sp.config.events||{}).onMessageChoiceSelect;window.sp.config.events=Object.assign(window.sp.config.events||{},{onMessageChoiceSelect:function(n,o){12===o&&(document.documentElement.className+=" __ninja_cookie_options"),e&&e.apply(this,arguments)}})}'
Origin: https://ninja-cookie.gitlab.io/rules/cmp.json

Now this prompt might already be problematic in itself. It relies on the user being able to make an informed decision. Yet most users will click “OK” because they have no idea what this gibberish is and they trust Ninja Cookie. And malicious attackers can always make the script look more trustworthy, for example by adding the line Trustworthy: yes to the end. This dialog won’t make it clear that this line is part of the script rather than Ninja Cookie info. Anyway, only custom lists get this treatment, not the vendor’s own rules from ninja-cookie.gitlab.io (trusted lists).

But why even go there? As it turns out, there are easier ways to run arbitrary JavaScript code via Ninja Cookie rules. Did you notice that many rules have a selector parameter? Did you just assume that some secure approach like document.querySelectorAll() is being used here? Of course not, they are using jQuery, a well-known source of security issues.

If one takes that [class^='ConsentManager__'] selector and replaces it with <script>alert(location.href)</script>, jQuery will create an element instead of locating one in the document. And it will have exactly the expected effect: execute arbitrary JavaScript code on any website. No prompts here, the user doesn’t need to accept anything. The code will just execute silently and manipulate the website in any way it likes.
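The underlying hazard is that jQuery's $() accepts both selectors and HTML, deciding between them by inspecting the string. Here is a simplified sketch of that distinction; jQuery's actual detection logic is more involved and has varied across versions:

```javascript
// Roughly: strings that look like markup are parsed as HTML (creating
// elements and executing scripts); everything else is treated as a CSS
// selector. Simplified sketch -- not jQuery's actual implementation.
function looksLikeHtml(input) {
  return /^\s*</.test(input);
}

console.log(looksLikeHtml("[class^='ConsentManager__']"));           // false: element lookup
console.log(looksLikeHtml("<script>alert(location.href)</script>")); // true: element creation, script runs
```

document.querySelectorAll() has no such dual mode: handed the HTML payload above, it throws a SyntaxError instead of creating elements, which is why it is the safer choice for untrusted selectors.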

And that’s not the only way. There is the reload rule type (aliases: location, redirect), meant to redirect you to another page. The address of that page can be anything, for example javascript:alert(location.href). Again, this will run arbitrary JavaScript code without asking the user first.

Can websites mess with this?

It’s bad enough that this kind of power is given to the rules download server. But it gets worse. That website you opened in your browser? Turned out, it could mess with the whole process. As so often, the issue is using window.postMessage() for communication between content scripts. Up until Ninja Cookie 0.6.3, the extension’s content script contained the following code snippet:

window.addEventListener('message', ({data, origin, source}) => {
  if (!data || typeof data !== 'object')
    return;

  if (data.webext !== browser.runtime.id)
    return;

  switch (data.type) {
    case 'load':
      return messageLoad({data, origin, source});
    case 'unload':
      return messageUnload({data, origin, source});
    case 'request':
      return messageRequest({data, origin, source});
    case 'resolve':
    case 'reject':
      return messageReply({data, origin, source});
  }
});

A frame or a pop-up window would send a load message to the top/opener window. And it would accept request messages coming back. That request message could contain, you guessed it, rules to be executed. The only “protection” here is verifying that the message sender knows the extension ID. Which it can learn from the load message.

So any website could run code like the following:

var frame = document.createElement("iframe");
frame.src = "https://example.org/";
window.addEventListener("message", event =>
{
  if (event.data.type == "load")
  {
    event.source.postMessage({
      webext: event.data.webext,
      type: "request",
      message: {
        type: "action.execute",
        data: {
          action: {
            type: "script",
            content: "alert(location.href)"
          },
          options: {},
          metadata: [{list: {trusted: true}}]
        }
      }
    }, event.origin);
  }
});
document.body.appendChild(frame);

Here we create a frame pointing to example.org. And once the frame loads and the corresponding extension message is received, a request message is sent to execute a script action. Wait, didn’t script action require user confirmation? No, not for trusted lists. And the message sender here can simply claim that the list is trusted.

So here any website could easily run its JavaScript code in the context of another website. Critical websites like google.com don’t allow framing? No problem, they can still be opened as a pop-up. Slightly more noisy but essentially just as easy to exploit.

This particular issue has been resolved in Ninja Cookie 0.7.0. Only the load message is being exchanged between content scripts now. The remaining communication happens via the secure runtime.sendMessage() API.

Conclusions

The Universal XSS vulnerability in Ninja Cookie essentially broke down the boundaries between websites, allowing any website to exploit another. This is already really bad. However, while this particular issue has been resolved, the issue of Ninja Cookie rules being way too powerful hasn’t been addressed yet. As long as you rely on someone else’s rules, be it official Ninja Cookie rules or rules from some third-party, you are putting way too much trust in those. If the rules ever turn malicious, they will compromise your entire browsing.

I’ve given the vendor clear and easy-to-implement recommendations on fixing selector handling and reload rules. Why these changes still haven’t been implemented after three months is beyond me. I hope that Mozilla will put more pressure on the vendor to address this.

“Fixing” the script rules is rather complicated, however. I don’t think there is a secure way to use them; this functionality has to be provided by other means.

Timeline

  • 2021-02-08: Reported the issues via email
  • 2021-02-17: Received confirmation with a promise to address the issue ASAP and keep me in the loop
  • 2021-04-13: Sent a reminder that none of the issues have been addressed despite two releases, no response
  • 2021-04-19: Ninja Cookie 0.7.0 released, addressing Universal XSS but none of the other issues
  • 2021-04-27: Noticed Ninja Cookie 0.7.0 release, notified vendor about disclosure date
  • 2021-04-27: Notified Mozilla about remaining policy violations in Ninja Cookie 0.7.0

Spidermonkey Development Blog: Private Fields and Methods ship with Firefox 90

Firefox will ship Private Fields and Methods in Firefox 90. This new language syntax allows programmers to have strict access control over their class internals. A private field can only be accessed by code inside the class declaration.

class PrivateDetails {
  #private_data = "I shouldn't be seen by others";

  #private_method() { return "private data"; }

  useData() {
    /.../.test(this.#private_data);

    var p = this.#private_method();
  }
}

var p = new PrivateDetails();
p.useData(); // OK
p.#private_data; // SyntaxError

This is the last remaining piece of the Stage 4 Proposal, Class field declarations for JavaScript, which has many more details about the design of private data.

Mozilla Localization (L10N): Mozilla VPN Client: A Localization Tale

On April 28th, Mozilla successfully launched its VPN Client in two new countries: Germany and France. While the VPN Client has been available since 2020 in several countries (U.S., U.K., Canada, New Zealand, Singapore, and Malaysia), the user interface was only available in English.

This blog post describes the process and steps needed to make this type of product localizable within the Mozilla ecosystem.
Screenshot of Mozilla VPN Client with Italian localization

How It Begins

Back in October 2020, the small team working on this project approached me with a request: we plan to do a complete rewrite of the existing VPN Client with Qt, using one codebase for all platforms, and we want to make it localizable. How can we make it happen?

First of all, let me stress how important it is for a team to reach out as early as possible. That allows us to understand existing limitations, explain what we can realistically support, and set clear expectations. It’s never fun to find yourself backed into a corner, late in the process and with deadlines approaching.

Initial Localization Setup

This specific project was definitely an interesting challenge, since we didn’t have any prior experience with Qt, and we needed to make sure the project could be supported in Pontoon, our internal Translation Management System (TMS).

The initial research showed that Qt natively uses an XML format (TS File), but that would have required resources to write a parser and a serializer for Pontoon. Luckily, Qt also supports import and export from a more common standard, XLIFF.

The next step is normally to decide how to structure the content: do we want the TMS to write directly in the main repository, or do we want to use an external repository exclusively for l10n? In this case, we opted for the latter, also considering that the main repository was still private at the time.

Once settled on the format and repository structure, the next step is to do a full review of the existing content:

  • Check every string for potential localizability issues.
  • Add comments where the content is ambiguous or there are variables replaced at run-time.
  • Check consistency issues in the en-US content, in case the content hasn’t been reviewed or created by our very capable Content Team.

It’s useful to note that this process heavily depends on the Localization Project Manager assigned to a project, because there are different skill sets in the team. For example, I have a very hands-on approach, often writing patches directly to fix small issues like missing comments (which normally helps reduce the time needed for fixes).

In my case, this is the ideal approach:

  • After review, set up the project in Pontoon as a private project (only accessible to admins).
  • Actually translate the project into Italian. That allows me to verify that everything is correctly set up in Pontoon and, more importantly, it allows me to identify issues that I might have missed in the initial review. It’s amazing how differently your brain works when you’re just looking at content, and when you’re actually trying to translate it.
  • Test a localized build of the product. In this way I can verify that we are able to use the output of our TMS, that the build system works as expected, and that there are no errors (hard-coded content, strings reused in different contexts, etc.).

This whole process typically requires at least a couple of weeks, depending on how many other projects are active at the same time.

Scale and Automate

I’m a huge fan of automation when it comes to getting rid of repetitive tasks, and I’ve come to learn a lot about GitHub Actions working on this project. Luckily, that knowledge helped in several other projects later on.

The first thing I noticed is that I was often commenting on the same two issues in the source (en-US) strings: typographic problems (straight quotes, three dots instead of an ellipsis character) and missing comments when a string contains variables. So I wrote a very basic linter that runs in automation every time a developer adds new strings in a pull request.
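A minimal sketch of such a source-string linter follows. The checks mirror the two issues just described, but the function name, placeholder patterns, and exact rules are illustrative, not the actual linter's:

```javascript
// Lint one en-US string: flag typographic issues and variables that
// lack a localization comment. Illustrative sketch only.
function lintSourceString(id, text, comment) {
  const issues = [];
  if (/["']/.test(text)) {
    issues.push(`${id}: straight quotes; use curly quotes instead`);
  }
  if (text.includes("...")) {
    issues.push(`${id}: three dots; use the ellipsis character (…)`);
  }
  // Assume Qt-style %1 placeholders or {name} variables.
  if (/%[0-9]|\{\w+\}/.test(text) && !comment) {
    issues.push(`${id}: contains variables but has no localization comment`);
  }
  return issues;
}

console.log(lintSourceString("vpn.alert", 'Error in "%1"', ""));
// -> two issues: straight quotes, and a variable without a comment
```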

The bulk of the automation lives in the l10n repository:

  • There’s automation, running daily, that extracts strings from the code repository, and creates a PR exposing them to all locales.
  • There’s a basic linter that checks for issues in the localized content, in particular missing variables. That happens more often than it should, mostly because the placeholder format is different from what localizers are used to, and there might be Translation Memory matches — strings already translated in the past in other products — coming from different file formats.

VPN L10n Workflow Diagram

The update automation was particularly interesting. Extracting new en-US strings is relatively easy, thanks to Qt command line tools, although there is some work needed to clean up the resulting XLIFF (for example, moving localization comments from extracomment to note).

In the process of adding new locales, we quickly realized that updating only the reference file (en-US) was not sufficient, because Pontoon expects each localized XLIFF to have all source messages, even if untranslated.

Historically that was the case for other bilingual file formats — files that contain both source and translation — like .po (GetText) and .lang files, but it is not necessarily true for XLIFF files. In particular, both those formats come with their own set of tools to merge new strings from a template into other locales, but that’s not available for XLIFF, which is an exchange format used across completely different tools.

At this point, I needed automation to solve two separate issues:

  • Add new strings to all localized files when updating en-US.
  • Catch unexpected string changes. If a string changes without a new ID, it doesn’t trigger any action in Pontoon (existing translations are kept, localizers won’t be aware of the change). So we need to make sure those are correctly managed.

This is what a string looks like in the source XLIFF file:

<file original="../src/ui/components/VPNAboutUs.qml" datatype="plaintext">
  <body>
    <trans-unit id="vpn.aboutUs.tos">
      <source>Terms of Service</source>
    </trans-unit>
  </body>
</file>

These are the main steps in the update script:

  • It takes the en-US XLIFF file, and uses it as a template.
  • It reads each localized file, saving existing translations. These are stored in a dictionary, where the key is generated using the original attribute of the file element, the string ID from the trans-unit, and a hash of the actual source string.
  • Translations are then injected in the en-US template and saved, overwriting the existing localized file.

Using the en-US file as template ensures that the file includes all the strings. Using the hash of the source text as part of the ID will remove translations if the source string changed (there won’t be a translation matching the ID generated while walking through the en-US file).

Testing

How do you test a project that is not publicly available, and requires a paid subscription on top of that? Luckily, the team came up with the brilliant idea of creating a WASM online application to allow our volunteers to test their work, including parts of the UI or dialogs that wouldn’t be normally exposed in the main user interface.

Localized strings are automatically imported in the build process (the l10n repository is configured as a submodule in the code repository), and screenshots of the app are also generated as part of the automation.

Conclusions

This was a very interesting project to work on, and I consider it to be a success case, especially when it comes to cooperation between different teams. A huge thanks to Andrea, Lesley, Sebastian for being always supportive and helpful in this long process, and constantly caring about localization.

Thanks to the amazing work of our community of localizers, we were able to exceed the minimum requirements (support French and German): on launch day, Mozilla VPN Client was available in 25 languages.

Keep in mind that this was only one piece of the puzzle in terms of supporting localization of this product: there is web content localized as part of mozilla.org, parts of the authentication flow managed in a different project, payment support in Firefox Accounts, legal documents and user documentation localized by vendors, and SUMO pages.

Niko Matsakis: [AiC] Vision Docs!

The Async Vision Doc effort has been going now for about 6 weeks. It’s been a fun ride, and I’ve learned a lot. It seems like a good time to take a step back and start talking a bit about the vision doc structure and the process. In this post, I’m going to focus on the role that I see vision docs playing in Rust’s planning and decision making, particularly as compared to RFCs.

Vision docs frame RFCs

If you look at a description of the design process for a new Rust feature, it usually starts with “write an RFC”. After all, before we start work on something, we begin with an RFC that both motivates and details the idea. We then proceed to implementation and stabilization.

But the RFC process isn’t really the beginning. The process really begins with identifying some sort of problem1 – something that doesn’t work, or which doesn’t work as well as it could. The next step is imagining what you would like it to be like, and then thinking about how you could make that future into reality.

We’ve always done this sort of “framing” when we work on RFCs. In fact, RFCs are often just one small piece of a larger picture. Think about something like impl Trait, which began with an intentionally conservative step (RFC #1522) and has been gradually extended. Async Rust started the same way; in that case, though, even the first RFC was split into two, which together described a complete first step (RFC #2394 and RFC #2592).

The role of a vision doc is to take that implicit framing and make it explicit. Vision docs capture both the problem and the end-state that we hope to reach, and they describe the first steps we plan to take towards that end-state.

The “shiny future” of vision docs

There are many efforts within the Rust project that could benefit from vision docs. Think of long-running efforts like const generics or library-ification. There is a future we are trying to make real, but it doesn’t really exist in written form.

I can say that when the lang team is asked to approve an RFC relating to some incremental change in a long-running effort, it’s very difficult for me to do. I need to be able to put that RFC into context. What is the latest plan we are working towards? How does this RFC take us closer? Sometimes there are parts of that plan that I have doubts about – does this RFC lock us in, or does it keep our options open? Having a vision doc that I could return to and evolve over time would be a tremendous boon.

I’m also excited about the potential for ‘interlocking’ vision docs. While working on the Async Vision Doc, for example, I’ve found myself wanting to write examples that describe error handling. It’d be really cool if I could pop over to the Error Handling Project Group2, take a look at their vision doc, and then make use of what I see there in my own examples. It might even help me to identify a conflict before it happens.

Start with the “status quo”

A key part of the vision doc is that it starts by documenting the “status quo”. It’s all too easy to take the “status quo” for granted – to assume that everybody understands how things play out today.

When we started writing “status quo” stories, it was really hard to focus on the “status quo”. It’s really tempting to jump straight to ideas for how to fix things. It took discipline to force ourselves to just focus on describing and understanding the current state.

I’m really glad we did though. If you haven’t done so already, take a moment to browse through the status quo section of the doc (you may find the metanarrative helpful to get an overview3). Reading those stories has given me a much deeper understanding of how Async is working in practice, both at a technical level and in terms of its impact on people. This is true even when presenting highly technical context. Consider stories like Barbara builds an async executor or Barbara carefully dismisses embedded future. For me, stories like this have more resonance than just seeing a list of the technical obstacles one must overcome. They also help us talk about the various “dead-ends” that might otherwise get forgotten.

Those kinds of dead-ends are especially important for people new to Rust, of course, who are likely to just give up and learn something else if the going gets too rough. In working on Rust, we’ve always found that focusing on accessibility and the needs of new users is a great way to identify things that – once fixed – wind up helping everyone. It’s interesting to think how long we put off doing NLL. After all, metajack filed #6393 in 2013, and I remember people raising it with me earlier. But those of us who were experienced in Rust knew the workarounds, and it never seemed pressing, and hence NLL got put off until 2018.4 But now it’s clearly one of the most impactful changes we’ve made to Rust for users at all levels.

Brainstorming the “shiny future”

A few weeks back, we started writing “shiny future” stories (in addition to “status quo”). The “shiny future” stories are the point where we try to imagine what Rust could be like in a few years.

Ironically, although in the beginning the “shiny future” was all we could think about, getting a lot of “shiny future” stories up and posted has been rather difficult. It turns out to be hard to figure out what the future should look like!5

Writing “shiny future” stories sounds a bit like an RFC, but it’s actually quite different:

  • The focus is on the end user experience, not the details of how it works.
  • We want to think a bit past what we know how to do. The goal is to “shake off” the limits of incremental improvement and look for ways to really improve things in a big way.
  • We’re not making commitments. This is a brainstorming session, so it’s fine to have multiple contradictory shiny futures.

In a way, it’s like writing just the “guide section” of an RFC, except that it’s not written as a manual but in narrative form.

Collaborative writing sessions

To try and make the writing process more fun, we started running collaborative Vision Doc Writing Sessions. We were focused purely on status quo stories at the time. The idea was simple – find people who had used Rust and get them to talk about their experiences. At the end of the session, we would have a “nearly complete” outline of a story that we could hand off to someone to finish.6

The sessions work particularly well when you are telling the story of people who were actually in the session. Then you can simply ask them questions to find out what happened. How did you start? What happened next? How did you feel then? Did you try anything else in between? If you’re working from blog posts, you sometimes have to take guesses and try to imagine what might have happened.7

One thing to watch out for: I’ve noticed people tend to jump steps when they narrate. They’ll say something like “so then I decided to use FuturesUnordered”, but it’s interesting to find out how they made that decision. How did they learn about FuturesUnordered? Those details will be important later, because if you develop some superior alternative, you have to be sure people will find it.

Shifting to the “shiny future”

Applying the “collaborative writing session” idea to the shiny future has been more difficult. If you get a bunch of people in one session, they may not agree on what the future should be like.

Part of the trick is that, with shiny future, you often want to go for breadth rather than depth. It’s not just about writing one story, it’s about exploring the design space. That leads to a different style of writing session, but you wind up with a scattershot set of ideas, not with a ‘nearly complete’ story, and it’s hard to hand those off.

I’ve got a few ideas of things I would like to try when it comes to future writing sessions. One of them is that I would like to work directly with various luminaries from the Async Rust world to make sure their point-of-view is represented in the doc.

Another idea is to try and encourage more “end-to-end” stories that weave together the “most important” substories and give a sense of prioritization. After all, we know that there are subtle footguns in the model as is and we also know that integrating into external event loops is tricky. Ideally, we’d fix both. But which is a bigger obstacle to Async Rust users? In fact, I imagine that there is no single answer. The answer will depend on what people are doing with Async Rust.

After brainstorming: Consolidating the doc and building a roadmap

The brainstorming period is scheduled to end mid-May. At that point comes the next phase, which is when we try to sort out all the contradictory shiny future stories into one coherent picture. I envision this process being led by the async working group leads (tmandry and I), but it’s going to require a lot of consensus building as well.

In addition to building up the shiny future, part of this process will be deciding a concrete roadmap. The roadmap will describe the specific first steps we will take towards this shiny future. The roadmap items will correspond to particular designs and work items. And here, with those specific work items, is where we get to RFCs: when those work items call for new stdlib APIs or extensions to the language, we will write RFCs that specify them. But those RFCs will be able to reference the vision doc to explain their motivation in more depth.

Living document: adjusting the “shiny future” as we go

There is one thing I want to emphasize: the “shiny future” stories we write today will be wrong. As we work on those first steps that appear in the roadmap, we are going to learn things. We’re going to realize that the experience we wanted to build is not possible – or perhaps that it’s not even desirable! That’s fine. We’ll adjust the vision doc periodically as we go. We’ll figure out the process for that when the time comes, but I imagine it may be a similar – but foreshortened – version of the one we have used to draft the initial version.

Conclusion

Ack! It’s probably pretty obvious that I’m excited about the potential for vision docs. I’ve got a lot of things I want to say about them, but this post is getting pretty long. There are a lot of interesting questions to poke at, most of which I don’t know the answers to yet. Some of the things on my mind: what are the best roles for the characters and should we tweak how they are defined8? Can we come up with good heuristics for which character to use for which story? How are the “consolidation” and “iteration / living document” phases going to work? When is the appropriate time to write a vision doc – right away, or should you wait until you’ve done enough work to have a clearer picture of what the future looks like? Are there lighter-weight versions of the process? We’re going to figure these things out as we go, and I will write some follow-up posts talking about them.

Footnotes

  1. Not problem, opportunity! 

  2. Shout out to the error handling group, they’re doing great stuff! 

  3. Did I mention we have 34 stories so far (and more in open PRs)? So cool. Keep ‘em coming! 

  4. To be fair, it was also because designing and implementing NLL was really, really hard.9 

  5. Who knew? 

  6. Big, big shout-out to all those folks who have participated, and especially those brave souls who authored stories

  7. One thing that’s great, though, is that after you post the story, you can ping people and ask them if you got it right. =) 

  8. I feel pretty strongly that four characters is the right number (it worked for Marvel, it will work for us!)10, but I’m not sure if we got their setup right in other respects. 

  9. And – heck – we’re still working towards Polonius

  10. Not my actual reason. I don’t know my actual reason, it just seems right. 

Allen Wirfs-BrockPersonal Digital Habitats: Get Started!

A vintage comic book ad for Habitrail components

In my previous post, I introduced the concept of a Personal Digital Habitat (PDH) which I defined as: a federated multi-device information environment within which a person routinely dwells. If you haven’t read that post, you should do so before continuing.

That previous post focused on the experience of using a PDH. It established a vision of a new way to use and interact with our personal collections of computing devices.  Hopefully it is an attractive vision. But, how can we get from where we are today to a world where we all have our own comfortable digital habitat?

A PDH provides a new computing experience for its inhabitant.1 Historically, a new computing experience has resulted in the invention of new operating systems to support that experience—timesharing, GUI-based personal computing, touch-based mobile computing, cloud computing all required fundamental operating system reinvention. To fully support the PDH vision we will ultimately need to reinvent again and create operating systems that manage a federated multi-device PDH rather than a single computing device.

An OS is a complex layered collection of resource managers that control the use of the underlying hardware and services that provide common  capabilities to application programs. Operating systems were originally developed to minimize waste of scarce expensive “computer time.” Generally, that is no longer a problem. Today it is more important to protect our digital assets and to minimize wasting scarce human attention.

Modern operating systems are seldom built up from scratch. More typically new operating systems evolve from existing ones2  through the  addition (and occasional removal) of resource managers and application service layers in support of new usage models.  A PDH OS will likely be built by adding new layers upon an existing operating system.

You might imagine a group of developers starting a project today to create a PDH OS.  Such an effort would almost certainly fail. The problem is that we don’t yet understand the functionality and inhabitant experience of a PDH and hence we don’t really know which OS resource managers and service layers need to be implemented.

Before we will know enough to build a PDH OS we need to experience building PDH applications.  Is this a chicken or egg problem? Not really.  A habitat-like experience can be defined and implemented by an individual application that supports multiple devices—but the application will need to provide its own support for the managers and services that it needs. It is by building such applications that we will begin to understand the requirements for a PDH OS.

Some developers are already doing something like this today as they build applications that are designed to be local-first or peer-to-peer dWeb/Web 3 based or that support collaboration/multi-user sync. Much of the technology applicable to those initiatives is also useful for building  self-contained PDH applications.

If you are an application developer who finds the PDH concept intriguing, here is my recommendation. Don’t wait! Start designing your apps in a habitat-first manner and thinking of your users as app inhabitants. For your next application don’t just build another single device application that will be ported or reimplemented on various phone, tablet, desktop, and web platforms. Instead, start from the assumption that your application’s inhabitant will be simultaneously running it on multiple devices and that they deserve a habitat-like experience as they rapidly switch their attention among devices. Design that application experience, explore what technologies are available that you can leverage to provide it, and then implement it for the various types of platforms.  Make the habitat-first approach your competitive advantage.

If you have comments or questions, tweet them mentioning @awbjs. I first started talking about personal digital habitats in a twitter thread on March 22, 2021. That and subsequent twitter threads in March/April 2021 include interesting discussions of technical approaches to PDHs.

Footnotes
1    I intend to generally use “inhabitant” rather than “user” to refer to the owner/operator of a PDH.
2    For example, Android was built upon Linux and iOS was built starting from the core of MacOS X.

The Rust Programming Language BlogAnnouncing Rustup 1.24.1

The rustup working group is happy to announce the release of rustup version 1.24.1. Rustup is the recommended tool to install Rust, a programming language that is empowering everyone to build reliable and efficient software.

If you have a previous version of rustup installed, getting rustup 1.24.1 is as easy as closing your IDE and running:

rustup self update

Rustup will also automatically update itself at the end of a normal toolchain update:

rustup update

If you don't have it already, you can get rustup from the appropriate page on our website.

What's new in rustup 1.24.1

Firstly, if you have not read the previous announcement then in brief, 1.24 introduces better support for low memory systems, installs itself into the Add/Remove programs list on Windows, and now supports using rust-toolchain.toml files.

Shortly after publishing the 1.24.0 release of Rustup, we got reports of a regression preventing users from running rustfmt and cargo fmt after upgrading to Rustup 1.24.0. To limit the damage we reverted the release to version 1.23.1. The only substantive change between 1.24.0 and 1.24.1 is to correct this regression.

Other changes

You can check out all the changes to Rustup for 1.24.0 and 1.24.1 in the changelog!

Rustup's documentation is also available in the rustup book.

Thanks

Thanks again to all the contributors who made rustup 1.24.0 and 1.24.1 possible!

  • Alex Chan
  • Aloïs Micard
  • Andrew Norton
  • Avery Harnish
  • chansuke
  • Daniel Alley
  • Daniel Silverstone
  • Eduard Miller
  • Eric Huss
  • est31
  • Gareth Hubball
  • Gurkenglas
  • Jakub Stasiak
  • Joshua Nelson
  • Jubilee (workingjubilee)
  • kellda
  • Michael Cooper
  • Philipp Oppermann
  • Robert Collins
  • SHA Miao
  • skim (sl4m)
  • Tudor Brindus
  • Vasili (3point2)
  • наб (nabijaczleweli)
  • 二手掉包工程师 (hi-rustin)

Mozilla Performance BlogPerformance Sheriff Newsletter (March 2021)

In March there were 288 alerts generated, resulting in 28 regression bugs being filed on average 4 days after the regressing change landed.

Welcome to the March 2021 edition of the performance sheriffing newsletter. Here you’ll find the usual summary of our sheriffing efficiency metrics, followed by some analysis on the data footprint of our performance metrics. If you’re interested (and if you have access) you can view the full dashboard.

Sheriffing efficiency

  • All alerts were triaged in an average of 1.2 days
  • 92% of alerts were triaged within 3 days
  • Valid regressions were associated with bugs in an average of 1.5 days
  • 96% of valid regressions were associated with bugs within 5 days

Sheriffing Efficiency (Mar 2021)

Interestingly, the close correlation we’ve seen between alerts and time to bug did not continue into March. It’s not clear why this might be; however, there were some temporary adjustments to the sheriffing team during this time. We also saw an increase in the percentage of alert summaries that were marked as invalid, which might have an impact on our sheriffing efficiency.

What’s new in Perfherder?

I last provided an update on Perfherder in July 2020 so felt it was about time to revisit.

Compact Bugzilla summaries & descriptions

Until recently, Perfherder would simply try to include all affected tests and platforms in the summary and description for all regression bugs. Not only does this make the bugs difficult to read, it also meant we hit the maximum field size for regressions that impacted a large number of tests.

Bug 1697112 is an example of how this looked before the recent change. The description contained 24 regression alerts, and 22 improvement alerts. The summary was edited by a performance sheriff to fit within the maximum field size:

4.55 – 18.83% apple ContentfulSpeedIndex … tumblr SpeedIndex (windows10-64-shippable) regression on push 6ea4d69aa5c6c7064d3b4a195bf96617baa3aebf (Thu March 4 2021)

With the recent improvements we limit how many tests are named in the summary and show a count of the omitted tests. We now list common names for the affected platforms, and no longer include the suspected commit hash. For the description, when we have many alerts we now show the most/least affected and indicate that one or more have been omitted for display purposes. Bug 1706333 is an example of the improved description and summary:

122.22 – 2.73% cnn-ampstories FirstVisualChange / ebay ContentfulSpeedIndex + 22 more (Windows) regression on Fri April 16 2021

Compare view sorting

We’ve added the ability to sort columns in compare view. This is useful when you’re comparing many tests and you’d like to quickly sort the results by confidence, delta, or magnitude.

Compare view sorted by confidence

Infrastructure changelog

Last year we created a unified changelog consolidating commits from repositories related to our automation infrastructure. Changes to infrastructure can impact our performance results and time can be wasted investigating regressions in our products that aren’t there. To help with this, we now annotate Perfherder graphs with data from the infrastructure changelog. When one of these markers correlates to an alert it can provide a valuable clue for our sheriffs. The repositories monitored for changes can be found here.

Perfherder graph showing infrastructure changelog

Graph showing infrastructure changelog markers

Stop alerting on tier 3 jobs

After updating our Performance Regressions Policy to explicitly mention that the sheriffs do not monitor tier 3 jobs, we fixed Perfherder to prevent these from alerting. Anything running below tier 2 is considered unstable, and not a valuable performance indicator.

Reduced data footprint

We have also spent a lot of effort reducing the data footprint of our performance data by updating and enforcing our data retention policy. You can read more about our data footprint in last month’s newsletter.

Email reports

When working on our data retention policy we wanted some way of reporting the signatures that were being deleted, and so we introduced email reports. We’re also now sending reports for automated backfills, and in the future we’d like to generate more reports. If you’re curious, these are being sent to perftest-alerts.

Bug fixes

The following bug fixes are also worth highlighting:

Acknowledgements

These updates would not have been possible without Ionuț Goldan, Alexandru Irimovici, Alexandru Ionescu, Andra Esanu, Beatrice Acasandrei and Florin Strugariu. Thanks also to the Treeherder team for reviewing patches and supporting these contributions to the project. Finally, thank you to all of the Firefox engineers for all of your bug reports and feedback on Perfherder and the performance workflow. Keep it coming, and we look forward to sharing more updates with you all soon.

Summary of alerts

Each month I’ll highlight the regressions and improvements found.

Note that whilst I usually allow one week to pass before generating the report, there are still alerts under investigation for the period covered in this article. This means that whilst I believe these metrics to be accurate at the time of writing, some of them may change over time.

I would love to hear your feedback on this article, the queries, the dashboard, or anything else related to performance sheriffing or performance testing. You can comment here, or find the team on Matrix in #perftest or #perfsheriffs.

The dashboard for March can be found here (for those with access).

Chris IliasThe screenshot option in Firefox has moved. Here’s how to find it.

If you have updated Firefox recently, you may have noticed that Take a Screenshot is missing from the page actions menu. Don’t fret. The feature is still in Firefox; it has just been moved.


Here’s how to find it…

You now have a button to take screenshots.

Of course, you can always right-click within a webpage and Take Screenshot will be part of the menu.

Will Kahn-GreeneSocorro Overview: 2021, presentation

Socorro became part of the Data Org part of Mozilla back in August 2020. I had intended to give this presentation in October 2020 after I had given one on Tecken 1, but then the team I was on got re-orged and I never got around to redoing the presentation for a different group.

Fast-forward to March. I got around to updating the presentation and then presented it to Data Club on March 26th, 2021.

I was asked if I want it posted on YouTube and while that'd be cool, I don't think video is very accessible on its own 2. Instead, I decided I wanted to convert it to a blog post. It took a while to do that for various reasons that I'll cover in another blog post.

This blog post goes through the slides and narrative of that presentation.

1

I should write that as a blog post, too.

2

This is one of the big reasons I worked on pyvideo for so long.

Read more… (28 min remaining to read)

Andrew HalberstadtPhabricator Etiquette Part 2: The Author

Last time we looked at some ways reviewers can keep the review process moving efficiently. This week, let’s put on our author hats and do the same thing.

Mozilla Attack & DefenseExamining JavaScript Inter-Process Communication in Firefox

Firefox uses Inter-Process Communication (IPC) to implement privilege separation, which makes it an important cornerstone in our security architecture. A previous blog post focused on fuzzing the C++ side of IPC. This blog post will look at IPC in JavaScript, which is used in various parts of the user interface. First, we will briefly revisit the multi-process architecture and upcoming changes for Project Fission, Firefox’s implementation of Site Isolation. We will then move on to examine two different JavaScript patterns for IPC and explain how to invoke them. Using Firefox’s Developer Tools (DevTools), we will be able to debug the browser itself.

Once equipped with this knowledge, we will revisit a sandbox escape bug that was used in a 0day attack against Coinbase in 2019 and reported as CVE-2019-11708. This 0day-bug has found extensive coverage in blog posts and publicly available exploits. We believe the bug provides a great case study and the underlying techniques will help identify similar issues. Eventually, by finding more sandbox escapes you can help secure hundreds of millions of Firefox users as part of the Firefox Bug Bounty Program.

Multi-Process Architecture Now and Then

As of April 2021, Firefox uses one privileged process to launch other process types and coordinate activities. These types are web content processes, semi-privileged web content processes (for special websites like accounts.firefox.com or addons.mozilla.org) and four kinds of utility processes for web extensions, GPU operations, networking or media decoding. Here, we will focus on the communication between the main process (also called “parent”) and a multitude of web processes (or “content” processes).

Firefox is shifting towards a new security architecture to achieve Site Isolation, which moves from a “process per tab” to a “process per site” architecture.

Left: Firefox using roughly a process per tab - Right: Fission-enabled Firefox, which uses a process per site (i.e., a separate one for each banner ad and social button).

Left: Current Firefox generally grouping a tab in its own process. Right: Fission-enabled Firefox, separating each site in its own process

The parent process acts as a broker and trusted user interface host. Some features, like our settings page at about:preferences are essentially web pages (using HTML and JavaScript) that are hosted in the parent process. Additionally, various control features like modal dialogs, form auto-fill or native user interface pieces (e.g., the <select> element) are also implemented in the parent process. This level of privilege separation also requires receiving messages from content processes.

Let’s look at JSActors and MessageManager, the two most common patterns for using inter-process communication (IPC) from JavaScript:

JSActors

Using a JSActor is the preferred method for JS code to communicate between processes. JSActors always come in pairs – with one implementation living in the child process and the counterpart in the parent. There is a separate parent instance for every pair in order to closely and consistently associate a message with either a specific content window (JSWindowActors), or child process (JSProcessActors).

Since all JSActors are lazy-loaded, we suggest exercising the implemented functionality at least once to ensure the actors are all present and to allow for a smooth test and debug experience.

Inter-Process Communication building on top of JSActors and implemented as FooParent and FooChild

The example diagram above shows a pair of JSActors called FooParent and FooChild. Messages sent by invoking FooChild will only be received by a FooParent. The child instance can send a one-off message with sendAsyncMessage("someMessage", value). If it needs a response (wrapped in a Promise), it can send a query with sendQuery("someMessage", value).

The parent instance must implement a receiveMessage(msg) function to handle all incoming messages. Note that messages are namespaced to a specific actor pair, so a FooChild could send a message called Bar:DoThing but will never be able to reach a BarParent. Here is some example code (permalink, revision from March 25th) which illustrates how a message is handled in the parent process.

Code sample for a receiveMessage function in a JSActor

As illustrated, the PromptParent has a receiveMessage handler (line 127) and is passing the message data to additional functions that will decide where and how to open a prompt from the parent process. Message handlers like this one and their callees are a source of untrusted data flowing into the parent process and provide logical entry points for in-depth audits.
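To make the request/response flow concrete, here is a plain-JavaScript sketch of the pattern described above. FooParent, FooChild, and the Foo:DoThing message name are the hypothetical names from the diagram; this is an in-process simulation of the pattern, not the real Firefox JSActor API:

```javascript
// Simulated parent-side actor: it must implement receiveMessage() to handle
// everything its paired child sends.
class FooParent {
  receiveMessage(msg) {
    if (msg.name === "Foo:DoThing") {
      // The return value becomes the resolved value of the child's sendQuery().
      return { ok: true, echoed: msg.data };
    }
    return undefined; // unknown message names are ignored
  }
}

// Simulated child-side actor, tied to exactly one parent instance.
class FooChild {
  constructor(parent) {
    this.parent = parent;
  }
  // Fire-and-forget message; no response expected.
  sendAsyncMessage(name, data) {
    this.parent.receiveMessage({ name, data });
  }
  // Request/response; the reply is wrapped in a Promise.
  sendQuery(name, data) {
    return Promise.resolve(this.parent.receiveMessage({ name, data }));
  }
}

const child = new FooChild(new FooParent());
child.sendQuery("Foo:DoThing", 42).then((reply) => {
  console.log(reply); // { ok: true, echoed: 42 }
});
```

Because each FooChild is tied to its own FooParent, a message named Bar:DoThing sent here simply falls through the receiveMessage handler rather than reaching some BarParent, mirroring the namespacing described above.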

Message Managers

Prior to the architecture change in Project Fission, most parent-child IPC occurred through the MessageManagers system. There were multiple message managers, including the per-process message manager and the content frame message manager, which was loaded per-tab.

Under this system, JS in both processes would register message listeners using the addMessageListener methods and would send messages with sendAsyncMessage; each message has a name and the actual content. To help track messages throughout the code-base, their names are usually prefixed with the component they are used in (e.g., SessionStore:restoreHistoryComplete).

Unlike JSActors, Message Managers need verbose initialization with addMessageListener and are not tied together. This means that messages are available for all classes that listen on the same message name and can be spread out through the code base.
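In contrast to the paired actors, the broadcast behaviour described above can be sketched like this (again a hypothetical in-process simulation, not the real MessageManager API):

```javascript
// Simulated message manager: any number of listeners can register for the
// same message name, and all of them receive the message.
class FakeMessageManager {
  constructor() {
    this.listeners = new Map();
  }
  addMessageListener(name, listener) {
    if (!this.listeners.has(name)) {
      this.listeners.set(name, []);
    }
    this.listeners.get(name).push(listener);
  }
  sendAsyncMessage(name, data) {
    // Unlike JSActors, every listener registered for this name is called,
    // no matter where in the code base it was registered.
    for (const listener of this.listeners.get(name) || []) {
      listener.receiveMessage({ name, data });
    }
  }
}

const mm = new FakeMessageManager();
const seen = [];
// Two unrelated components listening on the same message name.
mm.addMessageListener("SessionStore:restoreHistoryComplete", {
  receiveMessage: (msg) => seen.push(["componentA", msg.data]),
});
mm.addMessageListener("SessionStore:restoreHistoryComplete", {
  receiveMessage: (msg) => seen.push(["componentB", msg.data]),
});
mm.sendAsyncMessage("SessionStore:restoreHistoryComplete", "tab-1");
console.log(seen); // [ [ 'componentA', 'tab-1' ], [ 'componentB', 'tab-1' ] ]
```

This is what makes auditing MessageManager code more work than auditing JSActors: a single message name can fan out to handlers spread across the code base.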

Inter-Process Communication using MessageManager

As of late April 2021, our AddonsManager – the code that handles the installation of WebExtensions into Firefox – is using MessageManager APIs:

Code sample for a receiveMessage function using the MessageManager API

The code (permalink to exact revision) for setting a MessageManager looks very similar to the setup of a JSActor with the difference that messaging can be used synchronously, as indicated by the sendSyncMessage call in the child process. Except for the lack of lazy-loading, you can assume the same security considerations: Just like with JSActors above, the receiveMessage function is where the untrusted information flows from the child into the parent process and should therefore be the focus of additional scrutiny.

Finally, if you want to inspect MessageManager traffic live, you can use our logging framework and run Firefox with the environment variable MOZ_LOG set to MessageManager:5. This will log the received messages for all processes to the shell and give you a better understanding of what’s being sent and when.

Inspecting, Debugging, and Simulating JavaScript IPC

Naturally, source auditing a receiveMessage handler is best paired with testing. So let’s discuss how we invoke these functions in the child process and attach a JavaScript debugger to the parent process. This allows us to simulate a scenario where we already have full control over the child process. For this, we recommend you download and test against Firefox Nightly to ensure you’re testing the latest code – it will also give you the benefit of being in sync with codesearch for the latest revisions at https://searchfox.org. For the best experience, we recommend you download Firefox Nightly right now and follow this part of the blog post step by step.

DevTools Setup – Parent Process

First, set up your Firefox Nightly to enable browser debugging. Note that the instructions for how to enable browser debugging can change over time, so it’s best you cross-check with the instructions for Debugging the browser on MDN.

Open the Developer Tools, click the “···” button in the top-right and find the settings. Within Advanced settings in the bottom-right, check the following:

  • Enable browser chrome and add-on debugging toolboxes
  • Enable remote debugging
Enabling Browser debugging in Firefox Developer Tools

Restart Firefox Nightly and open the Browser debugger (Tools -> Browser Tools -> Browser Toolbox). This will open a new window that looks very similar to the common DevTools.

This is your debugger for the parent process (i.e., Browser Toolbox = Parent Toolbox).

The frame selector button, which is left of the three dots “···”, will allow you to select between windows. Select browser.xhtml, which is the main browser window. Switching to the Debug pane will let you search files and find the parent actor you want to debug, as long as it has already been loaded. To ensure the PromptParent actor has been properly initialized, open a new tab on e.g. https://example.com and make it call alert(1) from the normal DevTools console.

Hitting a breakpoint in Firefox’s parent process using Firefox Developer Tools (left)

You should now be able to find PromptParent.jsm (Ctrl+P) and set a debugger breakpoint for all future invocations (see screenshot above). This will allow you to inspect and copy the typical arguments passed to the Prompt JSActor in the parent.

Note: Once you hit a breakpoint, you can enter code into the Developer Console which is then executed within the currently intercepted function.

DevTools Setup – Child Process

Now that we know how to inspect and obtain the parameters which the parent process is expecting for Prompt:Open, let’s try and trigger it from a debugged child process: Ensure you are on a typical web page, like https://example.com, so you get the right kind of content child process. Then, through the Tools menu, find the “Browser Content Toolbox”. Content here refers to the child process (Content Toolbox = Child Toolbox).

Since every content process might have many windows of the same site associated with it, we need to find the current window. This snippet assumes it is the first tab and gets the Prompt actor for that tab:

actor = tabs[0].content.windowGlobalChild.getActor("Prompt");

Now that we have the actor, we can use the data gathered in the parent process and send the very same data. Or maybe, a variation thereof:

actor.sendQuery("Prompt:Open", {promptType: "alert", title: "👻", modalType: 1, promptPrincipal: null, inPermitUnload: false, _remoteID: "id-lol"});

Invoking JavaScript IPC from Firefox Developer Tools (bottom right) and observing the effects (top right)

In this case, we got away with not sending a reasonable value for promptPrincipal at all. This is certainly not going to be true for all message handlers. For the sake of this blog post, we can just assume that a Principal is the implementation of an Origin (and for background reading, we recommend an explanation of the Principal Objects in our two-series blog post “Understanding Web Security Checks in Firefox”: See part 1 and part 2).

In case you wonder why the content process is allowed to send a potentially arbitrary Principal (e.g., the origin): This is currently a known limitation and will be fixed while we are en route to full site-isolation (bug 1505832).

If you want to try to send another, faked origin – maybe from a different website or maybe the most privileged Principal – the one that is bypassing all security checks, the SystemPrincipal, you can use these snippets to replace the promptPrincipal in the IPC message:

const {Services} = ChromeUtils.import("resource://gre/modules/Services.jsm");
otherPrincipal = Services.scriptSecurityManager.createContentPrincipalFromOrigin("https://evil.test");
systemPrincipal = Services.scriptSecurityManager.getSystemPrincipal();

Note that validating the association between process and site is already enforced in debug builds. If you compiled your own Firefox, this will cause the content process to crash.

Revisiting Previous Security Issues

Now that we have the setup in place we can revisit the security vulnerability mentioned above: CVE-2019-11708.

The issue in itself was a typical logic bug: Instead of switching which prompt to open in the parent process, the vulnerable version of this code accepted the URL to an internal prompt page, implemented as an XHTML page. But by invoking this message, the attacker could cause the parent process to open any web-hosted page instead. This allowed them to re-open their content process exploit again in the parent process and escalate to a full compromise.

Let’s take a look at  the diff for the security fix to see how we replaced the vulnerable logic and handled the prompt type switching in the parent process (permalink to source).

Handling of untrusted message.data before and after fixing CVE-2019-11708.

You will notice that line 140+ used to accept and use a parameter named uri. This was fixed in a multitude of patches. In addition to only allowing certain dialogs to open in the parent process we also generally disallow opening web-URLs in the parent process.

If you want to try this yourself, download a version of Firefox before 67.0.4 and try sending a Prompt:Open message with an arbitrary URL.

Next Steps

In this blog post, we have given an introduction to Firefox IPC using JavaScript and how to debug the child and the parent process using the Content Toolbox and the Browser Toolbox, respectively. Using this setup, you are now able to simulate a fully compromised child process, audit the message passing in source code and analyze the runtime behavior across multiple processes.

If you are already experienced with Fuzzing and want to analyze how high-level concepts from JavaScript get serialized and deserialized to pass the process boundary, please check our previous blog post on Fuzzing the IPC layer of Firefox.

If you are interested in testing and analyzing the source code at scale, you might also want to look into the CodeQL databases that we publish for all Firefox releases.

If you want to know more about how our developers port legacy MessageManager interfaces to JSActors, you can take another look at our JSActors documentation and at how Mike Conley ported the popup blocker in his Joy of Coding live stream Episode 204.

Finally, we at Mozilla are really interested in the bugs you might find with these techniques – bugs like confused-deputy attacks, where the parent process can be tricked into using its privileges in a way the content process should not be able to (e.g. reading/writing arbitrary files on the filesystem) or UXSS-type attacks, as well as bypasses of exploit mitigations. Note that as of April 2021, we are not enforcing full site-isolation. Bugs that allow one to impersonate another site will not yet be eligible for a bounty. Submit your findings through our bug bounty program and follow us at the @attackndefense Twitter account for more updates.

The Rust Programming Language BlogAnnouncing Rustup 1.24.0

Shortly after publishing the release we got reports of a regression preventing users from running rustfmt and cargo fmt after upgrading to Rustup 1.24.0. To limit the damage we reverted the release to version 1.23.1.

If you have been affected by this issue you can revert to version 1.23.1 by running the following command:

rustup self update

The rustup working group is happy to announce the release of rustup version 1.24.0. Rustup is the recommended tool to install Rust, a programming language that is empowering everyone to build reliable and efficient software.

If you have a previous version of rustup installed, getting rustup 1.24.0 is as easy as closing your IDE and running:

rustup self update

Rustup will also automatically update itself at the end of a normal toolchain update:

rustup update

If you don't have it already, you can get rustup from the appropriate page on our website.

What's new in rustup 1.24.0

Support of rust-toolchain.toml as a filename for specifying toolchains.

Last year we released a new toml format for the rust-toolchain file. In order to bring Rustup closer into line with Cargo's behaviour around .cargo/config we now support the .toml extension for that file. If you call the toolchain file rust-toolchain.toml then you must use the toml format, rather than the legacy one-line format.

If both rust-toolchain and rust-toolchain.toml are present, then the former will win out over the latter to ensure compatibility between Rustup versions.
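A minimal rust-toolchain.toml in the new format might look like the following sketch (the channel, component, and target values are illustrative; see the rustup book for the full schema):

```toml
[toolchain]
channel = "1.51.0"
components = ["rustfmt", "clippy"]
targets = ["wasm32-unknown-unknown"]
```

Remember that a file named rust-toolchain.toml must use this toml format; the legacy one-line format (just a channel name) is only accepted in a file named rust-toolchain.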

Better support for low-memory systems

Rustup's component unpacker has been changed to have a smaller memory footprint when unpacking large components. This should permit users of memory-constrained systems such as some Raspberry Pi systems to install newer Rust toolchains which contain particularly large files.

Better support for Windows Add/Remove programs

Fresh installations of Rustup on Windows will now install themselves into the program list so that you can trigger the uninstallation of Rustup via the Add/Remove programs dialogs similar to any other Windows program.

This will only take effect on installation, so you will need to rerun rustup-init.exe if you want this on your PC.

Other changes

There are more changes in rustup 1.24.0: check them out in the changelog!

Rustup's documentation is also available in the rustup book.

Thanks

Thanks to all the contributors who made rustup 1.24.0 possible!

  • Alex Chan
  • Aloïs Micard
  • Andrew Norton
  • Avery Harnish
  • chansuke
  • Daniel Alley
  • Daniel Silverstone
  • Eduard Miller
  • Eric Huss
  • est31
  • Gareth Hubball
  • Gurkenglas
  • Jakub Stasiak
  • Joshua Nelson
  • Jubilee (workingjubilee)
  • kellda
  • Michael Cooper
  • Philipp Oppermann
  • Robert Collins
  • SHA Miao
  • skim (sl4m)
  • Tudor Brindus
  • Vasili (3point2)
  • наб (nabijaczleweli)
  • 二手掉包工程师 (hi-rustin)

Chris H-CData Science is Interesting: Why are there so many Canadians in India?

Any time India comes up in the context of Firefox and Data I know it’s going to be an interesting day.

They’re our largest Beta population:

pie chart showing India by far the largest at 33.2%

They’re our second-largest English user base (after the US):

pie chart showing US as largest with 37.8% then India with 10.8%


But this is the interesting stuff about India that you just take for granted in Firefox Data. You come across these factoids for the first time and your mind is all blown and you hear the perhaps-apocryphal stories about Indian ISPs distributing Firefox Beta on CDs to their customers back in the Firefox 4 days… and then you move on. But every so often something new comes up and you’re reminded that no matter how much you think you’re prepared, there’s always something new you learn and go “Huh? What? Wait, what?!”

Especially when it’s India.

One of the facts I like to trot out to catch folks’ interest is how, when we first released the Canadian English localization of Firefox, India had more Canadians than Canada. Even today India is, after Canada and the US, the third largest user base of Canadian English Firefox:

pie chart of en-CA using Firefox clients by country. Canada at 75.5%, US at 8.35%, then India at 5.41%


Back in September 2018 Mozilla released the official Canadian English-localized Firefox. You can try it yourself by selecting it from the drop down menu in Firefox’s Preferences/Options in the “Language” section. You may have to click ‘Search for More Languages’ to be able to add it to the list first, but a few clicks later and you’ll be good to go, eh?

(( Or, if you don’t already have Firefox installed, you can select which language and dialect of Firefox you want from this download page. ))

Anyhoo, the Canadian English locale quickly gained a chunk of our install base:

uptake chart for en-CA users in Firefox in September 2018. Shows a sharp uptake followed by a weekly seasonal pattern with weekends lower than week days

…actually, it very quickly gained an overlarge chunk of our install base. Within a week we’d reached over three quarters of the entire Canadian user base?! If we had, say, one million Canadian users, that first peak in the chart would represent over 750k of them!

Now, we Canadian Mozillians suspected that there was some latent demand for the localized edition (they were just too polite to bring it up, y’know)… but not to this order of magnitude.

So back around that time a group of us including :flod, :mconnor, :catlee, :Aryx, :callek (and possibly others) fell down the rabbit hole trying to figure out where these Canadians were coming from. We ran down the obvious possibilities first: errors in data, errors in queries, errors in visualization… who knows, maybe I was counting some clients more than once a day? Maybe I was counting other Englishes (like South African and Great Britain) as well? Nothing panned out.

Then we guessed that maybe Canadians in Canada weren’t the only ones interested in the Canadian English localization. Originally I think we made a joke about how much Canadians love to travel, but then the query finished running and showed us just how many Canadians there must be in India.

We were expecting a fair number of Canadians in the US. It is, after all, home to Firefox’s largest user base. But India? Why would India have so many Canadians? Or, if it’s not Canadians, why would Indians have such a preference for the English spoken in ten provinces and three territories? What is it about one of two official languages spoken from sea to sea to sea that could draw their attention?

Another thing that was puzzling was the raw speed of the uptake. If users were choosing the new localization themselves, we’d have seen a shallow curve with spikes as various news media made announcements or as we started promoting it ourselves. But this was far sharper an incline. This spoke to some automated process.

And the final curiosity (or clue, depending on your point of view) was discovered when we overlaid British English (en-GB) on top of the Canadian English (en-CA) uptake and noticed that (after accounting for some seasonality at the time due to the start of the school year) this suddenly-large number of Canadian English Firefoxes was drawn almost entirely from the number previously using British English:

chart showing use of British and Canadian English in Firefox in September 2018. The rise in use of Canadian English is matched by a fall in the use of British English.

It was all of this, put together that day, that led us to our Best Guess. I’ll give you a little space to make your own guess. If you think yours is a better fit for the evidence, or simply want to help out with Firefox in Canadian English, drop by the Canadian English (en-CA) Localization matrix room and let us know! We’re a fairly quiet bunch who are always happy to have folks help us keep on top of the new strings added or changed in Mozilla projects, or just chat about language stuff.

Okay, got your guess made? Here’s ours:

en-CA is alphabetically before en-GB.

Which is to say that the Canadian English Firefox, when put in a list with all the other Firefox builds (like this one which lists all the locales Firefox 88 comes in for Windows 64-bit), comes before the British English Firefox. We assume there is a population of Firefoxes, heavily represented in India (and somewhat in the US and elsewhere), that are installed automatically from a list like this one. This automatic installation is looking for the first English build in this list, and it doesn’t care which dialect. Starting September of 2018, instead of grabbing British English like it’s been doing for who knows how long, it had a new English higher in the list: Canadian English.
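The guessed mechanism above can be sketched in a few lines. The locale codes are real Firefox locale codes, but the "grab the first English build" installer behaviour is our hypothesis, not something we confirmed:

```python
def first_english(locales):
    """Return the first locale code that starts with 'en',
    mimicking a hypothetical installer that doesn't care which dialect."""
    return next(loc for loc in locales if loc.startswith("en"))

# A slice of the alphabetically sorted locale list before September 2018...
locales_before = ["de", "en-GB", "en-US", "en-ZA", "fr", "hi-IN"]
# ...and after en-CA shipped and slotted in alphabetically.
locales_after = sorted(locales_before + ["en-CA"])

print(first_english(locales_before))  # en-GB
print(first_english(locales_after))   # en-CA
```

Because "en-CA" sorts before "en-GB", any process that scans the list top to bottom for an English build silently switches dialects the moment the Canadian build appears.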

But who can say! All I know is that any time India comes up in the data, it’s going to be an interesting day.

:chutten