Daniel Stenberg: The curl roadmap 2020 video

On March 26th 2020, I did a live webinar where I talked about my roadmap visions of what to work on in curl during 2020.

Below you can see the YouTube recording of the event.

You can also browse the slides separately.

Mozilla Addons Blog: Extensions in Firefox 75

In Firefox 75 we have a good mix of new features and bugfixes. Quite a few volunteer contributors landed patches for this release; please join me in cheering for them!

Thank you everyone for continuing to make Firefox WebExtensions amazing. I’m glad to see some new additions this time around and am eager to discover what the community is up to for Firefox 76. Interested in taking part? Get involved!

The post Extensions in Firefox 75 appeared first on Mozilla Add-ons Blog.

Hacks.Mozilla.Org: Innovating on Web Monetization: Coil and Firefox Reality

Introducing Coil

In the coming weeks, Mozilla will roll out a Web Monetization experiment using Coil to support payments to creators in the Firefox Reality ecosystem. Web Monetization is an alternative approach to payments that doesn’t rely on advertising or stealing your data and attention. We wrote about Web Monetization for game developers back in the autumn, and now we’re excited to invite more of you to participate, first as creators and soon as consumers of all kinds of digital and virtual content.

Innovation: Web Monetization

Problem: Now more than ever, digital content creators need new options for earning money from their work in a fast-changing world. Solution: Mozilla is testing Coil as an alternative to credit card or PayPal payments for authors and independent content creators.


If you’ve developed a 3D experience, a game, a 360 video, or if you’re thinking of building something new, you’re invited to participate in this experiment. I encourage you as well to contact us directly at creator_payments at mozilla dot com to showcase your work in the Firefox Reality content feed.

You’ll find details on how to participate below. I will also share answers and observations, from my own perspective as an implementer and investigator on the Mixed Reality team.

A note about timing

Tangentially, the COVID-19 pandemic is dominating our attention. Just to be clear: This project is not a promise to create revenue for you during a planetary crisis. We support people where they are emotionally in their lives at this time and we do feel that real-world concerns are far more important. Also, we send thanks to everybody at Coil and Mozilla and all of you who are supporting this work when we’re all juggling family, chores, and our own lives.

In closing

We know that many of you are looking for solutions to make money from your creative work online. We are here for you and we want you to create, share, and thrive on the net. Take a look at the details on how to participate below, and please let us know how this works for you!

FAQ: How to participate

How do I participate as a Creator?

Do you have a piece of content—a blog post, an interactive experience, a 360 video, a WebXR game—that you want to share with people in Firefox Reality? Here’s how you can web monetize this content:

The first step is to add a meta tag to the top of your site, which will define a payment pointer (like an email address, but for money). This article walks you through the process in detail:

From js13kGames to MozFest Arcade: A game dev Web Monetization story

Alternatively, this article is also good:
Web Monetization: Quick Start Guide

If you have a WordPress blog, here’s another way to add a payment pointer.
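
For reference, a monetization meta tag of the kind those guides describe looks like the sketch below. This follows the draft Web Monetization spec at the time of writing; the payment pointer shown is a made-up placeholder, so swap in the one issued by your own wallet provider:

```html
<!-- Sketch of a Web Monetization payment pointer (draft spec).
     "$wallet.example.com/alice" is a hypothetical placeholder: use the
     payment pointer issued by your own Interledger-enabled wallet. -->
<meta name="monetization" content="$wallet.example.com/alice">
```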

The second step is simple: please let us know! You can message us at creator_payments@mozilla.com and we’ll make sure that your work is showcased in the Firefox Reality content feed.

What is this Coil thing anyway?

Coil is a for-profit membership service that charges users $5.00 a month and streams micropayments to creators based on member attention. Coil uses the Interledger network to move money, allowing creators to work in any currency they like.

Effectively you get paid for user attention—assuming those users are set up with web monetization. Web Monetization consists of an HTML tag, a JavaScript API, and uses the Interledger protocol for actually moving the money and enabling payments in many different currencies.

And just to be clear, Interledger is not a blockchain and there is no “Interledger token”. Interledger functions more like the Internet, in that it routes packets. Except the packets also represent money instead of just carrying data. You can find out more about how it works on interledger.org.

Why Coil?

Coil is an example of how open standards help foster healthy ecosystems. Coil can be thought of as a user-facing appliance running on top of the emerging Web Monetization and Payment Pointers standards. As Coil succeeds, the possibilities for other payment services to succeed also increase. Once creators set up a payment pointer, they themselves are not tied to Coil. Anybody, or any new service, can send money to that creator—without using Coil itself. By lowering the “activation energy”, this opens the door for new payment services. At the same time, we at Mozilla stay true to our values—fostering open standards, working internationally, and protecting user privacy.

From a user perspective, if I install Coil how do I know it is working?

On a desktop browser, if you have a Coil subscription you can visit Hubs by Mozilla right now, and by clicking on the Coil plugin you can see that it is being monetized.

For testing Coil in Firefox Reality, you can visit this test site to see if you are Coil enabled. (Note: Personally, I’m not a big fan of the depiction of gender and the rags to riches narrative at that URL. But the site works for testing.)

From a creator perspective if I install Coil how do I know it is working?

If you visit this Coil Checker site and enter the URL of your own site, it will report if your Coil implementation is working.

Can I offer different content based on if patrons are paying or not?

As a creator, you can detect whether patrons are Coil-enabled using JavaScript. The recommended practice is called the “100+20” rule: offer special content for visitors who are paying, but do not disable the site or user experience for other guests. Please see the following links:

Web Monetization: Exclusive Content

The 100+20 Rule for Premium Content
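
As a sketch of what that detection can look like, the draft API exposes `document.monetization`, an event target with a `state` property and a `monetizationstart` event once the first payment streams in. The helper below is a hypothetical illustration (not code from the linked guides); it takes the monetization object as a parameter so the logic can be exercised outside a browser:

```javascript
// Sketch of "100+20" gating, assuming the draft document.monetization API:
// an EventTarget with state "stopped" | "pending" | "started" and a
// "monetizationstart" event fired when payments begin streaming.
function setupExclusiveContent(monetization, showExclusive) {
  if (!monetization) {
    return false; // no Web Monetization support: serve the base 100% only
  }
  if (monetization.state === "started") {
    showExclusive(); // payments already streaming: reveal the extra 20%
  } else {
    // reveal the extra content once the first payment arrives
    monetization.addEventListener("monetizationstart", showExclusive);
  }
  return true;
}

// In a real page you would call something like:
//   setupExclusiveContent(document.monetization, revealBonusTrack);
```

The key design point is that the fallback path does nothing destructive: visitors without monetization still get the full base experience.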

Can I share revenue with other creators?

Ben has a great article on Probabilistic Revenue Sharing and Sabine has another one about a Web Monetized Image Gallery, showing how you can change where revenue is flowing based on which content is being examined.

If I enable Coil on my sites how do I get cash out of the system?

If you are in the United States you can get an account at Stronghold. There are other web wallets that support Interledger and can convert payments to local currencies; this varies depending on which country you are in. Note that in the U.S., the Securities and Exchange Commission (SEC) has important Know Your Customer (KYC) requirements, which means you’ll need to provide a driver’s license or passport.

Of course, Web Monetization will work better once it is adopted by a large number of users, built into most browsers, and so on, but the goal of this phase is less to generate a cash-out for you today and more about gathering data and feedback. So please keep in mind that this is an experiment! Also, again, we are especially interested in hearing how this works for you. Please make sure to contact us at creator_payments at mozilla dot com once you set this up.

How will Mozilla encourage users to participate in this experiment?

Stay tuned for a follow-up announcement in the not too distant future. In broad strokes, we will issue free Coil memberships to qualified users in the Firefox Reality ecosystem.

How does this experiment with Coil differ from Mozilla’s partnership with Scroll.com?

Astute observers will notice that Mozilla recently partnered with Scroll, a new ad-free subscription service. So how is this different from Scroll and why do we need both things?

The main difference is that our collaboration with Coil is, for now, focused on testing adoption.

These two membership services are aimed at different use cases. Coil lets anybody be a content creator and get paid out of user attention. Because the payout rate is something like $0.36 per hour per user, Coil becomes useful if you have hundreds of people looking at your site. In contrast, Scroll partners with specific and often larger organizations such as publishers and media outlets with reporters, editors, and higher overheads. Their fees reflect the value of news in terms of quality and reputation.
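
To put rough numbers on that, here is a back-of-the-envelope sketch. Only the ~$0.36/hour rate comes from the paragraph above; the visitor counts are made-up assumptions, and real payouts vary with attention, currency, and wallet fees:

```javascript
// Rough monthly revenue estimate at Coil's ~$0.36/hour streaming rate.
// Work in integer cents to avoid floating-point drift.
// All inputs here are hypothetical illustrations.
function estimateMonthlyCents(visitors, hoursPerVisitor, centsPerHour = 36) {
  return Math.round(visitors * hoursPerVisitor * centsPerHour);
}

// e.g. 200 monetized visitors spending half an hour each per month:
// estimateMonthlyCents(200, 0.5) -> 3600 cents, i.e. about $36/month
```

This is why the text says Coil becomes useful once hundreds of people are looking at your site: a handful of visitors streams only pennies.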

Notably, there are also other services such as Comixology (also described here), Flattr, and Unlock. Each of these caters to different audience needs.

We can even imagine future services that have much higher payment rates, such as charging $30.00 an hour to allow a foreign language teacher in a virtual Hubs room to teach a small class of students and have a sustainable business. There will never be a single silver bullet that covers all consumer needs.

Does “Grant for the Web” relate to this?

Grant for the Web is a separate series of grants that are definitely worth applying for. This initiative plans to distribute 100 million dollars to support creators on the web. This is especially suitable for web projects that require up-front funding.

Payment Challenges and Solutions

Why are we looking at payment solutions?

Last year we interviewed web developers and creatives, asking them about how they monetized content on the web. They reported several challenges. These are two of the largest issues:

  • Payment Frictions: Many developers reported that they simply didn’t have good ways to charge money—especially small amounts of money. Solutions were fragmented and complex. Their patrons didn’t have a way to pay even if they wanted to.
  • Funding Up Front: Beyond payments, there was a specific issue around simply needing more money before starting development. Historically, smaller web developers were often able to incrementally grow their revenue. Yet because the web is becoming more powerful there are now higher up-front costs to build compelling experiences.

Note that there were many other related issues such as discoverability of content, consumer trust in content, defending intellectual property, better tools for building content and so on. But payments is one area that seems to require a known and trusted neutral third-party. And so, we believe that Mozilla is uniquely qualified to help with this.

A look at the digital content ecosystem

If we step back and look beyond the web, digital content ecosystems are exploding. Social apps such as Facebook are a $50 billion a year industry, driven by advertising. Mobile app subscription revenue is $4 billion a year. Mobile native games were forecast to capture over $70 billion in 2019.

Experiences on app stores such as Google Play or the Apple App Store can capture up to 30% of that energy in transaction fees. (Admittedly, they provide other valuable services mentioned above, such as quality, trust, discovery, and recourse).

Although the boundary between native and web is somewhat porous, some developers we spoke to were packaging their web apps as native apps and releasing them into app stores such as itch.io just to get cash out of the system for their labor.

However, the web is unique. The web is open and accessible to all parties, not owned or controlled by any one party. Content can be shared with a URL. Because of this, the web has become the place of the “great conversation” – where we can all talk freely about issues all over the world.

Thanks to the work of many engineers, the web has many of the rich visual capabilities of native apps delivered via open standards. Technologies like WebAssembly and high-performance languages such as Rust make it possible for a game like Candy Crush Saga to become a web experience.

And yet, even though you can share a web experience with somebody halfway around the planet, there’s no way for them to tip you a quarter if they like their experience. If money is a form of communication, then it makes sense to fix money as well—in an open, scalable way that is fair, and that discourages bad actors.

Why not just stick with advertising?

Right now advertising dominates on the web as a revenue model. Ads are important and valuable as a form of social signaling in a noisy landscape. However, this means creators may be beholden to advertisers more so than to their patrons.

There are strong market incentives today to profile, segment, and target users. Targeting can even become a vector for bad actors, such as we saw with Cambridge Analytica. Cross-site tracking in particular is a concern. As a result, browser vendors such as Apple, Microsoft and Mozilla are working to reduce cross-site tracking. This is reducing the effectiveness of advertising as a whole.

But nature abhors a vacuum. We need to do more. From an ecosystem perspective, if we can support alternative revenue options and protect user privacy this feels like a win.

Don’t users expect content on the web to be free?

There is some argument that services like Patreon and Kickstarter exist because people who enjoy online content want to think of themselves as patrons of the arts; not merely consumers. We are doing this experiment in part to test that idea.

Why not invent some other digital currency or wallet solution?

Payments on the web is a complex topic. There are creator needs, user and patron needs, privacy concerns, international boundaries, payment frictions and costs, regulatory issues and so on. Consider even just the issue of people who don’t have credit cards; or artists, story-tellers, and creators around the world who don’t have bank accounts at all—especially women.

We’ve seen ideas over the years ranging from Beenz (now defunct) to Libra, and every imaginable point system and scheme. We see Brave and Puma providing browser-bound solutions. Industry-wide, we have working methods for purchases online for adults with credit cards such as through Stripe or Paypal. But there aren’t widely available solutions for smaller low-friction payments that hit all our criteria.

We will continue to explore a variety of options, but we want options like Web Monetization that are available today. We value options that have low activation energy; are transparent and easy to test; don’t require industry changes; handle small transactions; work over international boundaries; are abuse-resistant; help the unbanked; and protect user privacy.

Doesn’t Web Payments solve this problem?

We’re all familiar with the Error code 404 – page not found warning on the web. But probably not a single one of us has seen an “Error code 402 – payment required” warning. Payments are not something we as consumers use yet or encounter routinely in the wild.

402 Payment Required from girlie mac on flickr

Until we see “402 Payment Required” on the web, it's likely that we have not solved web payments.


Yes, we have W3C Web Payments—and this may become the right answer. There are some options from Paypal for micropayments, as well as a bewildering number of cryptocurrency solutions. Still, we have our work cut out for us.

Long-Term Trajectories: A Vision

Welcome to the future. It is August 1st, 2022. You’re surfing the web and you come across an amazing recipe site or blog post you really like, perhaps a kid-friendly indie game or photos of the northern lights, captured and shared by somebody in a different country. Or perhaps you’ve been watching a campaign to protect forests in Romania and you want to donate. Now imagine being able to easily send them money as a token of your appreciation.

Maybe you’ve become a fan of a new virtual reality interactive journalism site. Rather than subscribing to them specifically you use a service that streams background micropayments while you are on their site. As you and your friends join in, the authors get more and more support over time. The creator is directly supported in their work, and directly responsive to you, rather than having to chase grants. They’re able to vet their sources better and extend their reach.

This is the kind of vision that we want to support. Let’s get there together. If you have any other questions or comments please reach out!

The post Innovating on Web Monetization: Coil and Firefox Reality appeared first on Mozilla Hacks - the Web developer blog.

Mozilla VR Blog: Announcing the Mozilla Mixed Reality Merch Store!

Ever wanted to up your wardrobe game with some stylish Mixed Reality threads, while at the same time supporting Mozilla's work? Dream no more! The Mozilla Mixed Reality team is pleased to announce that you can now wear your support for our efforts on your literal sleeve!

The store (powered by Spreadshirt) is available worldwide and has a variety of items, including clothing tailored for women, men, kids, and babies, and accessories such as bags, caps, mugs, and more. All come with a variety of designs to choose from, including our “low poly” Firefox Reality logo and our adorable new mascot, Foxr.

We hope that you find something that strikes your fancy!

The Mozilla Blog: MOSS launches COVID-19 Solutions Fund

Mozilla is announcing today the creation of a COVID-19 Solutions Fund as part of the Mozilla Open Source Support Program (MOSS). Through this fund, we will provide awards of up to $50,000 each to open source technology projects which are responding to the COVID-19 pandemic in some way.

The MOSS Program, created in 2015, broadens access, increases security, and empowers users by providing catalytic funding to open source technologists. We have already seen inspiring examples of open source technology being used to increase the capacity of the world’s healthcare systems to cope with this crisis. For example, just a few days ago, the University of Florida Center for Safety, Simulation, and Advanced Learning Technologies released an open source ventilator. We believe there are many more life-saving open source technologies in the world.

As part of the COVID-19 Solutions Fund, we will accept applications that are hardware (e.g., an open source ventilator), software (e.g., a platform that connects hospitals with people who have 3D printers who can print parts for that open source ventilator), as well as software that solves for secondary effects of COVID-19 (e.g., a browser plugin that combats COVID related misinformation).

A few key details of the program:

  • We are generally looking to fund reasonably mature projects that can immediately deploy our funding; early-stage ideas are unlikely to receive funding.
  • We generally expect awardees to use all funds within three months of receiving the award.
  • We will accept applications from anywhere in the world to the extent legally permitted.
  • We will accept applications from any type of legal entity, including NGOs, for-profit hospitals, or a team of developers with strong ties to an affected community.
  • Applications will be accepted and reviewed on a rolling basis.
  • The MOSS committee will only consider projects which are released publicly under a license that is either a free software license according to the FSF or an open source license according to the OSI. Projects which are not licensed for use under an open source license are not eligible for MOSS funding.

To apply, please visit: https://mozilla.fluxx.io/apply/MOSS

For more information about the MOSS program, please visit: Mozilla.org/moss.


In addition to the COVID-19 Solutions Fund, the MOSS awards program has three tracks:

  • Track I – Foundational Technology: supports open source projects that Mozilla relies on, either as an embedded part of our products or as part of our everyday work.
  • Track II – Mission Partners: supports open source projects that significantly advance Mozilla’s mission.
  • Track III – The Secure Open Source Fund: supports security audits for widely used open source software projects as well as any work needed to fix the problems that are found.

Tracks I and II and this new COVID-19 Solutions Fund accept applications on a rolling basis. For more information about the MOSS program, please visit: Mozilla.org/moss

The post MOSS launches COVID-19 Solutions Fund appeared first on The Mozilla Blog.

This Week In Rust: This Week in Rust 332

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's crate is async-recursion, a macro to allow recursion in async functions.

Thanks to Zicklag for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

468 pull requests were merged in the last week

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.


No RFCs are currently in final comment period.

Tracking Issues & PRs

New RFCs

Upcoming Events

North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

Meta-Comment: I started this topic as someone completely uninvolved in the rust project. It's very reassuring seeing the nature of the response. Even knowing how fantastic the Rust community is, I was still prepared to be met with at least a small element of condescension given the nature of this issue. I haven't felt any sense of it. It's amazing. Anyone that has impact on the community culture deserves credit: This sort of experience doesn't come from nowhere. It comes from a long history of many people nudging things in the right direction. Thank you.

Ben on Zulip

Thanks to Josh Triplett for the suggestions!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nasa42 and llogiq.

Discuss on r/rust.

The Mozilla Blog: We’re Fixing the Internet. Join Us.

For over two decades, Mozilla has worked to build the internet into a global public resource that is open and accessible to all. As the internet has grown, it has brought wonder and utility to our lives, connecting people in times of joy and crisis like the one being faced today.

But that growth hasn’t come without challenges. In order for the internet and Mozilla to well serve people into the future, we need to keep innovating and making improvements that put the interests of people back at the center of online life.

To help achieve this, Mozilla is launching the Fix-the-Internet Spring MVP Lab and inviting coders, creators and technologists from around the world to join us in developing the distributed Web 3.0.

“The health of the internet and online life is why we exist, and this is a first step toward ensuring that Mozilla and the web are here to benefit society for generations to come,” said Mozilla Co-Founder and Interim CEO Mitchell Baker.

Mozilla’s Fix-the-Internet Spring MVP Lab is a day-one, start-from-scratch program to build and test new products quickly. By energizing a community of creators who bring a hacker’s approach to vibrant experimentation, Mozilla aims to help find sustainable solutions and startup ideas around several key themes designed to fix the internet:

  1. Collaboration & Society: particularly in view of the current global crisis: (i) Foster better collaboration online, (ii) Grassroots collaboration around issues & emergencies, (iii) Local & neighborhood support networks, (iv) Supporting small businesses, (v) Social money-pooling for issues, people & businesses.
  2. Decentralized Web: build a new, decentralized architecture for the internet from infrastructure, communications, media & money, using the blockchain and peer-to-peer technologies.
  3. Messaging & Social Networking: can we build a new way to communicate online that favors privacy, people, and users’ interests? What needs to evolve?
  4. Misinformation & Content: ideas for services that help us get beyond polarization, filter bubbles and fake news.
  5. Surveillance Capitalism: whether it’s big tech or governments, everyone’s collecting your data. How do we put the user back in control of their data?
  6. Artificial Intelligence that works to benefit communities and citizens.

Participants in the Fix-the-Internet Spring MVP Lab will:

  • Work in teams of two-to-four people
  • Have access to mentorship through virtual weekly collaboration with product and engineering professionals
  • Receive a $2500 stipend per participant. If you are applying as a team, each team member will receive $2500.
  • Be eligible for cash prizes: $25K (1st), $10K (2nd), and $5K (3rd)
  • Benefit from Mozilla’s promotion of finished projects to gain users and awareness
  • Retain ownership of their projects and intellectual property

Visit http://www.mozilla.org/builders for additional details and information on how to apply by the April 6, 2020 deadline.

The post We’re Fixing the Internet. Join Us. appeared first on The Mozilla Blog.

Mozilla Addons Blog: Add developer comments to your extension’s listing page on addons.mozilla.org

In November 2017, addons.mozilla.org (AMO) underwent a major refresh. In addition to updating the site’s visual style, we separated the code for frontend and backend features and re-architected the frontend to use the popular combination of React and Redux.

With a small team, finite budget, and other competing priorities, we weren’t able to migrate all features to the new frontend. Some features were added to our project backlog with the hope that one day a staff or community member would have the interest and bandwidth to implement them.

One of these features, a dedicated section for developer comments on extension listing pages, has recently been re-enabled thanks to a contribution by community member Lisa Chan. Extension developers can use this section to inform users about any known issues or other transient announcements.

This section can be found below the “About this extension” area on an extension listing page. Here’s an example from NoScript:

Image of developer comments section on AMO

Extension developers can add comments to this section by signing into the Developer Hub and clicking the “Edit Product Page” link under the name of the extension. On the next page, scroll down to the Technical Details section and click the Edit button to add or change the content of this section.

If you are an extension developer and you had used this section before the 2017 AMO refresh, please take a few minutes to review and update any comments in this field. Any text in that section will be visible on your extension’s listing page.

We’d like to extend a special thanks to Lisa for re-enabling this feature. If you’re interested in contributing code to addons.mozilla.org, please visit our onboarding wiki for information about getting started.


The post Add developer comments to your extension’s listing page on addons.mozilla.org appeared first on Mozilla Add-ons Blog.

Daniel Stenberg: curl ootw: --proxy-basic

Previous command line options of the week.

--proxy-basic has no short option. This option is closely related to the option --proxy-user, which has a separate blog post.

This option has been provided and supported since curl 7.12.0, released in June 2004.


In curl terms, a proxy is an explicit middle man that curl goes through when doing a transfer to or from a server:

curl <=> proxy <=> server

curl supports several different kinds of proxies. This option is for HTTP(S) proxies.

HTTP proxy authentication

Authentication: the process or action of proving or showing something to be true, genuine, or valid.

When it comes to proxies and curl, you typically provide a user name and password to be allowed to use the service. If the client provides the wrong user name or password, the proxy simply denies the client access with a 407 HTTP response code.

curl supports several different HTTP proxy authentication methods, and the proxy can itself reply and inform the client which methods it supports. With this week's option, --proxy-basic, you ask curl to do the authentication using the Basic method. “Basic” is indeed very basic, but that is the actual name of the method, defined in RFC 7617.


The Basic method sends the user and password in the clear in the HTTP headers – they’re just base64 encoded. This is notoriously insecure.
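
To see just how thin the protection is, the sketch below reconstructs the Proxy-Authorization header that the Basic method produces. curl does this internally in C; this hypothetical JavaScript version only illustrates the encoding, using the same example credentials as the curl command further down:

```javascript
// Reconstruct the Proxy-Authorization header that Basic auth sends.
// base64 is an encoding, not encryption: anyone on the path can decode it.
function proxyBasicHeader(user, password) {
  const token = Buffer.from(`${user}:${password}`).toString("base64");
  return `Proxy-Authorization: Basic ${token}`;
}

// proxyBasicHeader("daniel", "password123")
//   -> "Proxy-Authorization: Basic ZGFuaWVsOnBhc3N3b3JkMTIz"
```

Feeding that base64 run back through a decoder yields `daniel:password123` again, which is exactly what a snooper on an unencrypted connection would do.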

If the proxy is an HTTP proxy (as opposed to an HTTPS proxy), users on your network or on the path between you and your HTTP proxy can see your credentials fly by!

If the proxy is an HTTPS proxy, however, the connection to it is protected by TLS and everything is encrypted over the wire, so the credentials sent in HTTP are protected from snoopers.

Also note that if you pass credentials to curl on the command line, they might be readable in the script you run it from. If you enter them interactively at a shell prompt, they might be visible in process listings on the machine – curl tries to hide them, but that isn't supported everywhere.


Use a proxy with your name and password and ask for the Basic method specifically. Basic is also the default unless anything else is asked for.

curl --proxy-user daniel:password123 --proxy-basic --proxy http://myproxy.example https://example.com


With --proxy you specify the proxy to use, and with --proxy-user you provide the credentials.

Also note that you can of course set and use entirely different credentials and HTTP authentication methods with the remote server even while using Basic with the HTTP(S) proxy.

There are also other authentication methods to select from, with --proxy-anyauth being a very practical one to know about.

Karl Dubost: Week notes - 2020 w13 - worklog - everything is broken

  • next Mozilla Toronto All Hands is canceled. That's a good thing.
  • Coronavirus has had no impact on my working life so far. The same as usual.
    1. Mozilla is working well in a distributed team.
    2. Webcompat team is already a very distributed team. None of our team members are in the same location. So this is business as usual for us.
    3. And as long as the economic impact is not affecting Mozilla, we will continue to work as normal. That will be a different story, if/when the economic crisis reaches us. We are for now a very privileged tribe.
    4. In Japan, we do not yet have a lockdown. (This is silly, I think. People are behaving irresponsibly.)
      • Update 2020-03-26: and here we are. The Tokyo, Kanagawa, Chiba and Saitama prefectures have asked people to stay home this coming weekend, because the number of cases is accelerating.
    5. School has already been stopped for 2 weeks, but it is supposed to restart at the beginning of April for the new year (yes, Japan has a different school calendar). But we had already taken our child out for the last 10 days of school.
  • published my work notes, left aside for a while. I need to refocus my energy here to keep track of what I'm doing.
  • One thing has become obvious with regard to Web compatibility these last couple of days: if a website is not working in one of your favorite browsers — while working, or in your daily activities following the impact of the coronavirus on our lives — please report it on webcompat.com. Ksenia and I will try our best to diagnose it.
    • Be as descriptive as possible of all the steps you did to encounter the issue.
    • Try to reproduce it a couple of times.
    • Try to reproduce it in a different browser.
  • Ksenia has started an amazing work on refactoring the webcompat.com multi-steps form
  • FIXED! Latest version of Firefox no longer accepts MacOS text clippings in the google search field. Thanks to Masayuki.
  • FIXED! document.createEvent("KeyEvents") is now forbidden in Firefox. A site was failing because of it; they fixed it. Thanks!
  • I wished today (tuesday) I could type leadfoot commands directly into the browser console to prototype my tests. Maybe it's possible in some ways.
    • intermediate solution proposed by Vlad is intern Recorder. I need to try this.
  • When someone asks for a pull request review on GitHub, you receive an email with review_requested@noreply.github.com as one of the recipients. Easy to catch with a dynamic mailbox. I usually set up my filtering in Mail.app with plenty of dynamic mailboxes. I have a couple of criteria, but one that always improves performance is a "Received date" criterion covering a couple of weeks; I usually set it to around 14 to 21 days.
  • I wish there was an annotation mode in git or github. Update 2020-03-30: Check my week notes of week 2020-14.
    • It would be even fun if you could share these annotations with people.
    • I could imagine that locally you could do something like: git annotate --lines 34-35 -m 'blablababalabla' module/verydope.py
    • Then you could later on see them again, extract again, share with someone. git readnotes hash_ref --from kdubost@mozilla.com
  • When inheriting code from someone else, the first step is to read the code and comment it to try to piece things together.
  • Today, Friday, is diagnosis day.
    • A remarkable report with a real website and a reduced test case. It's really cool to see this.
    • I didn't handle as much as I wanted today. We will increase the load on Monday.
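For the git annotation wish above, git's built-in notes feature comes close, although it attaches free-form notes to whole commits rather than to line ranges (the --lines and sharing syntax in the bullets is the author's hypothetical):

```shell
# Attach a free-form note to the current commit and read it back
git notes add -m 'blablababalabla' HEAD
git notes show HEAD     # prints the note
git log -1 --notes      # notes also show up alongside commits in git log
```

Notes live on a separate ref (refs/notes/commits), so they can be pushed and fetched to share with someone, without rewriting history.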

New form on webcompat.com

We had an issue with the new form design. We switched 100% of our users to it on March 16, 2020, but not all of the bugs received got the label saying they were actually reported with the new form design. Probably only a third got the new form.

So that was the state when I fell asleep on Monday night. Mike pushed the bits a bit more during my night and opened.

My feeling is that if we are out of the experimental phase, we probably need to just not go through the AB code at all, and all of it becomes a lot simpler.

We can keep in place the code for future AB experiments, and open a new issue for removing the old form code and tests once Ksenia has finished refactoring the rest of the code for the new form.

So on this hypothesis, let's create a new PR. I expect tests to break badly. That's an Achilles' heel of our current setup. The AB experiment was an experiment at the beginning. Never let an experiment grow without the proper setup. We need to fix it.

(env) ~/code/webcompat.com % pytest
============================= test session starts ==============================
platform darwin -- Python 3.7.4, pytest-5.3.5, py-1.8.1, pluggy-0.13.1
rootdir: /Users/karl/code/webcompat.com
collected 157 items

tests/unit/test_api_urls.py ...........                                  [  7%]
tests/unit/test_config.py ..                                             [  8%]
tests/unit/test_console_logs.py .....                                    [ 11%]
tests/unit/test_form.py .................                                [ 22%]
tests/unit/test_helpers.py ............................                  [ 40%]
tests/unit/test_http_caching.py ...                                      [ 42%]
tests/unit/test_issues.py ......                                         [ 45%]
tests/unit/test_rendering.py ......                                      [ 49%]
tests/unit/test_tools_changelog.py ..                                    [ 50%]
tests/unit/test_topsites.py ...                                          [ 52%]
tests/unit/test_uploads.py ...                                           [ 54%]
tests/unit/test_urls.py .................................                [ 75%]
tests/unit/test_webhook.py ......................................        [100%]

============================= 157 passed in 21.28s =============================

Unit testing seems to work. There was a simple fix. The big breakage should happen on the functional tests side. Let's create a PR and see how unhappy CircleCI is about it.

So this is breaking. My local functional testing was deeply broken, and after investigating in many directions, I found the solution. I guess this week is the week where everything is broken.

Questions about tools

I have been asked about the tools I'm using for the job at Mozilla:

“gather feedback from a semi-random set of engineers on various tooling that we have and translate it into actionable requests.”

I was not sure which level of detail was needed, or for which tools. This is what came off the top of my head.

  1. mozregression for finding bugs.
      • How do we run mozregression for Firefox Preview (Fenix)?
      • The documentation is dry. A lot more practical examples would be good. I'm happy to review the ones I would be using.
      • It would be super cool to be able to run a mozregression on desktop and/or to have a remote mozregression on Android.
      • It would be very cool to have a preset profile (a kind of about:config with specific parameters) to launch with each iteration of the regression.
      • Icing on the cake: a WebDriver-style set of instructions making it possible to run the mozregression automatically. When running, I sometimes need to repeat the same set of interactions with a real webpage on a website. I want to be able to program these as instructions (aka enter this URL, click here, type this text, etc.), and I could probably have a "run this JS test" step to decide if it's a good or bad version. So the regression could run automatically instead of having to repeat the steps 20 times.
  2. Searchfox.org
      • Quite a cool tool, which can be very practical for finding references sometimes.
      • I don't have specific requests for it.
  3. Firefox devtools
      • I have extensive requests. That's my main tool for working on a day-to-day basis, but all my requests are already tracked by the devtools team. Just to say that this is essential for being able to work in the webcompat team and diagnose bugs.
  4. About contributing to mozilla-central.
      • I honestly gave up. I don't even know what the current way of doing things is. Sometimes I see a bug that I could help tackle or start tackling in C++ (something super simple), but the time it takes to set up the full thing when contributing once every 3 months is dreadful. Things have changed, the tool for review has changed, or something else. The try server seems complicated.
      • As an aside, maybe it has improved a lot since three years ago and I should give it another try with something like this.
      • So take my comments with a pinch of salt.

State of my mozilla central repo

Short story version: I had to reinstall everything.

After writing this above, I wondered about the state of my mozilla central repo and how ready I would be to locally compile firefox. Plus I will need it very soon.

hg --version
Mercurial Distributed SCM (version 4.3-rc)
(see https://mercurial-scm.org for more information)

Copyright (C) 2005-2017 Matt Mackall and others
This is free software; see the source for copying conditions. There is NO

oops. Let's download the new version. Mercurial 5.2.2 for MacOS X 10.14+ it seems.

hg --version
*** failed to import extension firefoxtree from /Users/karl/.mozbuild/version-control-tools/hgext/firefoxtree: 'module' object has no attribute 'command'
*** failed to import extension reviewboard from /Users/karl/.mozbuild/version-control-tools/hgext/reviewboard/client.py: No module named wireproto
*** failed to import extension push-to-try from /Users/karl/.mozbuild/version-control-tools/hgext/push-to-try: 'module' object has no attribute 'command'
Mercurial Distributed SCM (version 5.2.2)
(see https://mercurial-scm.org for more information)

Copyright (C) 2005-2019 Matt Mackall and others
This is free software; see the source for copying conditions. There is NO

Hmmm, not sure that is better. Let's explore the Mozilla Source Tree Documentation.

~/code/mozilla-central % hg pull
*** failed to import extension firefoxtree from /Users/karl/.mozbuild/version-control-tools/hgext/firefoxtree: 'module' object has no attribute 'command'
*** failed to import extension reviewboard from /Users/karl/.mozbuild/version-control-tools/hgext/reviewboard/client.py: No module named wireproto
*** failed to import extension push-to-try from /Users/karl/.mozbuild/version-control-tools/hgext/push-to-try: 'module' object has no attribute 'command'
pulling from https://hg.mozilla.org/mozilla-central/
abort: certificate for hg.mozilla.org has unexpected fingerprint `sha****`
(check hostsecurity configuration)
  1. maybe I should remove the .mozbuild directory.
  2. There's the certificate issue.

OK, I removed this line from the [hostsecurity] section of my .hgrc. Let's try again.

hg.mozilla.org:fingerprints = sha******

The update will take around 40 minutes. Perfect… it's time for lunch. Hmmm, another fail. Here's the error message:

% hg pull
pulling from https://hg.mozilla.org/mozilla-central/
searching for changes
adding changesets
adding manifests
transaction abort!
rollback completed
abort: stream ended unexpectedly (got 9447 bytes, expected 32768)

let's try again

hg pull
pulling from https://hg.mozilla.org/mozilla-central/
searching for changes
adding changesets
adding manifests
adding file changes
added 90459 changesets with 787071 changes to 208862 files
new changesets 7a4290ed6a61:9f3f88599fff
(run 'hg update' to get a working copy)

yeah it worked.

hg update
157775 files updated, 0 files merged, 42118 files removed, 0 files unresolved
updated to "9f3f88599fff: Bug 1624113: Explicitly flip pref block_Worker_with_wrong_mime for test browser_webconsole_non_javascript_mime_worker_error.js. r=baku"
4 other heads for branch "default"


./mach bootstrap

returns an error.

 ./mach bootstrap

Note on Artifact Mode:

Artifact builds download prebuilt C++ components rather than building
them locally. Artifact builds are faster!

Artifact builds are recommended for people working on Firefox or
Firefox for Android frontends, or the GeckoView Java API. They are unsuitable
for those working on C++ code. For more information see:

Please choose the version of Firefox you want to build:
  1. Firefox for Desktop Artifact Mode
  2. Firefox for Desktop
  3. GeckoView/Firefox for Android Artifact Mode
  4. GeckoView/Firefox for Android
Your choice: 2

Looks like you have Homebrew installed. We will install all required packages via Homebrew.

Traceback (most recent call last):
    4: from /usr/local/Homebrew/Library/Homebrew/brew.rb:13:in `<main>'
    3: from /usr/local/Homebrew/Library/Homebrew/brew.rb:13:in `require_relative'
    2: from /usr/local/Homebrew/Library/Homebrew/global.rb:10:in `<top (required)>'
    1: from /System/Library/Frameworks/Ruby.framework/Versions/2.6/usr/lib/ruby/2.6.0/rubygems/core_ext/kernel_require.rb:54:in `require'
/System/Library/Frameworks/Ruby.framework/Versions/2.6/usr/lib/ruby/2.6.0/rubygems/core_ext/kernel_require.rb:54:in `require': cannot load such file -- active_support/core_ext/object/blank (LoadError)
Error running mach:


The error occurred in code that was called by the mach command. This is either
a bug in the called code itself or in the way that mach is calling it.
You can invoke |./mach busted| to check if this issue is already on file. If it
isn't, please use |./mach busted file| to report it. If |./mach busted| is
misbehaving, you can also inspect the dependencies of bug 1543241.

If filing a bug, please include the full output of mach, including this error

The details of the failure are as follows:

subprocess.CalledProcessError: Command '['/usr/local/bin/brew', 'list']' returned non-zero exit status 1.

  File "/Users/karl/code/mozilla-central/python/mozboot/mozboot/mach_commands.py", line 44, in bootstrap
  File "/Users/karl/code/mozilla-central/python/mozboot/mozboot/bootstrap.py", line 442, in bootstrap
  File "/Users/karl/code/mozilla-central/python/mozboot/mozboot/osx.py", line 192, in install_system_packages
    getattr(self, 'ensure_%s_system_packages' % self.package_manager)(not hg_modern)
  File "/Users/karl/code/mozilla-central/python/mozboot/mozboot/osx.py", line 355, in ensure_homebrew_system_packages
  File "/Users/karl/code/mozilla-central/python/mozboot/mozboot/osx.py", line 304, in _ensure_homebrew_packages
  File "/Users/karl/code/mozilla-central/python/mozboot/mozboot/base.py", line 443, in check_output
    return subprocess.check_output(*args, **kwargs)
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/subprocess.py", line 395, in check_output
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/subprocess.py", line 487, in run
    output=stdout, stderr=stderr)

I wonder if it's because I switched from Python 2.7 to Python 3.7 as the default on my machine a while ago. OK, I had an old version of brew. I updated it with:

brew update

And ran again

./mach bootstrap

hmmm it fails again.

In the process of solving it, I erased ~/.mozbuild/version-control-tools/

Then reinstalled

./mach vcs-setup

Ah, understood. My old system was configured to use Reviewboard, but now they use Phabricator. Grmbl.

I wish there was a kind of reset configuration system. Maybe there is.

I removed this line from ~/.hgrc:

reviewboard = /Users/karl/.mozbuild/version-control-tools/hgext/reviewboard/client.py

But, short summary: I needed to re-install everything… because everything was old and outdated.

And now the "funny" thing is that reinstalling from scratch from Japan takes a lot of time… twice it fails because the stream is just failing. :/


hg clone https://hg.mozilla.org/mozilla-central/
destination directory: mozilla-central
applying clone bundle from https://hg.cdn.mozilla.net/mozilla-central/9f3f88599fffa54ddf0c744b98cc02df99f8d0b8.zstd-max.hg
adding changesets
adding manifests
adding file changes
added 520018 changesets with 3500251 changes to 567696 files
finished applying clone bundle
searching for changes
no changes found
520018 local changesets published
updating to branch default
282924 files updated, 0 files merged, 0 files removed, 0 files unresolved

Well, that was not a smooth ride. After reinstalling from scratch, I still had an issue because of the Python 2.7 that brew had installed in another location. Online recommendations encouraged reinstalling it with brew. I really do not like brew, so I did the exact opposite and removed the brew version of Python 2.7. And it worked!

brew uninstall --ignore-dependencies python@2


./mach build

and success…

 3:22.01 0 compiler warnings present.
 3:22.17 Overall system resources - Wall time: 200s; CPU: 0%; Read bytes: 0; Write bytes: 0; Read time: 0; Write time: 0
To view resource usage of the build, run |mach resource-usage|.
 3:22.28 Your build was successful!
To take your build for a test drive, run: |mach run|
For more information on what to do now, see https://developer.mozilla.org/docs/Developer_Guide/So_You_Just_Built_Firefox


François MarierHow to get a direct WebRTC connections between two computers

WebRTC is a standard real-time communication protocol built directly into modern web browsers. It enables the creation of video conferencing services which do not require participants to download additional software. Many services make use of it and it almost always works out of the box.

The reason it just works is that it uses a protocol called ICE to establish a connection regardless of the network environment. In some cases, however, your video/audio connection will need to be relayed (still end-to-end encrypted) to the other person via a third-party TURN server. In addition to adding extra network latency to your call, that relay server might become overloaded at some point and drop or delay packets coming through.

Here's how to tell whether or not your WebRTC calls are being relayed, and how to ensure you get a direct connection to the other host.

Testing basic WebRTC functionality

Before you place a real call, I suggest using the official test page which will test your camera, microphone and network connectivity.

Note that this test page makes use of a Google TURN server which is locked to particular HTTP referrers and so you'll need to disable privacy features that might interfere with this:

  • Brave: Disable Shields entirely for that page (Simple view) or allow all cookies for that page (Advanced view).

  • Firefox: Ensure that network.http.referer.spoofSource is set to false in about:config, which it is by default.

  • uMatrix: The "Spoof Referer header" option needs to be turned off for that site.

Checking the type of peer connection you have

Once you know that WebRTC is working in your browser, it's time to establish a connection and look at the network configuration that the two peers agreed on.

My favorite service at the moment is Whereby (formerly Appear.in), so I'm going to use that to connect from two different computers:

  • canada is a laptop behind a regular home router without any port forwarding.
  • siberia is a desktop computer in a remote location that is also behind a home router, but in this case its internal IP address is set as the DMZ host.


For all Chromium-based browsers, such as Brave, Chrome, Edge, Opera and Vivaldi, the debugging page you'll need to open is called chrome://webrtc-internals.

Look for RTCIceCandidatePair lines and expand them one at a time until you find the one which says:

  • state: succeeded (or state: in-progress)
  • nominated: true
  • writable: true

Then from the name of that pair (N6cxxnrr_OEpeash in the above example) find the two matching RTCIceCandidate lines (one local-candidate and one remote-candidate) and expand them.

In the case of a direct connection, I saw the following on the remote-candidate:

  • ip shows the external IP address of siberia
  • port shows a random number between 1024 and 65535
  • candidateType: srflx

and the following on local-candidate:

  • ip shows the external IP address of canada
  • port shows a random number between 1024 and 65535
  • candidateType: prflx

These candidate types indicate that a STUN server was used to determine the public-facing IP address and port for each computer, but the actual connection between the peers is direct.

On the other hand, for a relayed/proxied connection, I saw the following on the remote-candidate side:

  • ip shows an IP address belonging to the TURN server
  • candidateType: relay

and the same information as before on the local-candidate.


If you are using Firefox, the debugging page you want to look at is about:webrtc.

Expand the top entry under "Session Statistics" and look for the line (should be the first one) which says the following in green:

  • ICE State: succeeded
  • Nominated: true
  • Selected: true

then look in the "Local Candidate" and "Remote Candidate" sections to find the candidate type in brackets.

Firewall ports to open to avoid using a relay

In order to get a direct connection to the other WebRTC peer, one of the two computers (in my case, siberia) needs to open all inbound UDP ports since there doesn't appear to be a way to restrict Chromium or Firefox to a smaller port range for incoming WebRTC connections.

This isn't great and so I decided to tighten that up in two ways by:

  • restricting incoming UDP traffic to the IP range of siberia's ISP, and
  • explicitly denying incoming traffic to the UDP ports I know are open on siberia.

To get the IP range, start with the external IP address of the machine (I'll use the IP address of my blog in this example) and pass it to the whois command:

$ whois | grep CIDR

To get the list of open UDP ports on siberia, I sshed into it and ran nmap:

$ sudo nmap -sU localhost

Starting Nmap 7.60 ( https://nmap.org ) at 2020-03-28 15:55 PDT
Nmap scan report for localhost (
Host is up (0.000015s latency).
Not shown: 994 closed ports
631/udp   open|filtered ipp
5060/udp  open|filtered sip
5353/udp  open          zeroconf

Nmap done: 1 IP address (1 host up) scanned in 190.25 seconds

I ended up with the following in my /etc/network/iptables.up.rules (ports below 1024 are denied by the default rule and don't need to be included here):

# Deny all known-open high UDP ports before enabling WebRTC for canada
-A INPUT -p udp --dport 5060 -j DROP
-A INPUT -p udp --dport 5353 -j DROP
-A INPUT -s -p udp --dport 1024:65535 -j ACCEPT

François MarierInstalling Vidyo on Ubuntu 18.04

Following these instructions as well as the comments in there, I was able to get Vidyo, the proprietary videoconferencing system that Mozilla uses internally, to work on Ubuntu 18.04 (Bionic Beaver). The same instructions should work on recent versions of Debian too.

Installing dependencies

First of all, install all of the package dependencies:

sudo apt install libqt4-designer libqt4-opengl libqt4-svg libqtgui4 libqtwebkit4 sni-qt overlay-scrollbar-gtk2 libcanberra-gtk-module

Then, ensure you have a system tray application running. This should be the case for most desktop environments.

Building a custom Vidyo package

Download version 3.6.3 from the CERN Vidyo Portal but don't expect to be able to install it right away.

You need to first hack the package in order to remove obsolete dependencies.
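The usual recipe for that kind of .deb surgery looks like this — a sketch only: the installer filename and the libqt4-gui dependency below are placeholders, and the exact names to drop are in the instructions linked above:

```shell
# Unpack the package's file tree and its control files
dpkg-deb -x VidyoDesktopInstaller.deb vidyo/
dpkg-deb -e VidyoDesktopInstaller.deb vidyo/DEBIAN
# Remove an obsolete dependency from the Depends: line (placeholder name)
sed -i 's/libqt4-gui, //' vidyo/DEBIAN/control
# Rebuild the modified package
dpkg-deb -b vidyo vidyodesktop-custom.deb
```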

Once that's done, install the resulting package:

sudo dpkg -i vidyodesktop-custom.deb

Packaging fixes and configuration

There are a few more things to fix before it's ready to be used.

First, fix the ownership on the main executable:

sudo chown root:root /usr/bin/VidyoDesktop

Then disable autostart, since you probably don't want to keep the client running all of the time (and listening on the network) given that it hasn't received any updates in a long time and has apparently been abandoned by Vidyo:

sudo rm /etc/xdg/autostart/VidyoDesktop.desktop

Remove any old configs in your home directory that could interfere with this version:

rm -rf ~/.vidyo ~/.config/Vidyo

Finally, launch VidyoDesktop and go into the settings to check "Always use VidyoProxy".

Robert O'CallahanWhat If C++ Abandoned Backward Compatibility?

Some C++ luminaries have submitted an intriguing paper to the C++ standards committee. The paper presents an ambitious vision to evolve C++ in the direction of safety and simplicity. To achieve this, the authors believe it is worthwhile to give up backwards source and binary compatibility, and focus on reducing the cost of migration (e.g. by investing in tool support), while accepting that the cost of each migration will be nonzero. They're also willing to give up the standard linking model and require whole-toolchain upgrades for each new version of C++.

I think this paper reveals a split in the C++ community. I think the proposal makes very good sense for organizations like Google with large legacy C++ codebases that they intend to continue investing in heavily for a long period of time. (I would include Mozilla in that set of organizations.) The long-term gains from improving C++ incompatibly will eventually outweigh the ongoing migration costs, especially because they're already adept at large-scale systematic changes to their code (e.g. thanks to a gargantuan monorepo, massive-scale static and dynamic checking, and risk-mitigating deployment systems). Lots of existing C++ software really needs those safety improvements.

I think it also makes sense for C++ developers whose projects are relatively short-lived, e.g. some games. They don't need to worry about migration costs and will reap the benefits of C++ improvement.

For mature, long-lived projects that are poorly resourced, such as rr, it doesn't feel like a good fit. I don't foresee adding a lot of new code to rr, so we won't reap much benefit from improvements in C++. On the other hand it would hurt to pay an ongoing migration tax. (Of course rr already needs ongoing maintenance due to kernel changes etc, but every extra bit hurts.)

I wonder what compatibility properties toolchains would have if this proposal carries the day. I suspect the intent is the latest version of a compiler implements only the latest version of C++, but it's not clear. An aggressive policy like that would increase the pain for projects like rr (and, I guess, every other C++ project packaged by Linux distros) because we'd be dragged along more relentlessly.

It'll be interesting to see how this goes. I would not be completely surprised if it ends with a fork in the language.

The Firefox FrontierStay safe in your online life, too

During the COVID-19 pandemic, many of us are turning to the internet to connect, learn, work and entertain ourselves from home. We’re setting up new accounts, reading more news, watching … Read more

The post Stay safe in your online life, too appeared first on The Firefox Frontier.

Mozilla Localization (L10N)L10n Report: March Edition

Please note some of the information provided in this report may be subject to change as we are sometimes sharing information about projects that are still in early stages and are not final yet. 


New localizers

Are you a locale leader and want us to include new members in our upcoming reports? Contact us!

L10n Community in Matrix

As you might have read in the past weeks, Mozilla turned off IRC and officially switched to a new system for synchronous communications (Matrix), available at: https://chat.mozilla.org/

We have a channel dedicated to l10n community conversations. You can also join the room, after creating an account in Matrix, by searching for the “l10n-community” room.

You can find detailed information on how to access Matrix via browser and mobile apps in this wiki page: https://wiki.mozilla.org/Matrix

Messages written in Matrix are also mirrored (“bridged”) to the “Mozilla L10n Community” Telegram channel.

New content and projects

What’s new or coming up in Firefox desktop

As explained in the last l10n report, Firefox is now following a fixed 4-week release cycle:

  • Firefox 75 is currently in beta and will be released on April 7. The deadline to update localization has just passed (March 24).
  • Firefox 76, currently in Nightly, will move to Beta when 75 is officially released. The deadline to update localizations for that version will be April 21 (4 weeks after the current deadline).

In terms of upcoming content to localize, in Firefox 76 there’s a new authentication dialog, prompting users to authenticate with the Operating System when performing operations like setting a master password, or interacting with saved passwords in about:logins. Localizing this content is particularly challenging on macOS, since only part of the dialog’s text comes from Firefox (highlighted in red in the image below).

Make sure to read the instructions on the dev-l10n mailing list for some advice on how to localize this dialog.

What’s new or coming up in web projects


A lot of pages were added in the last month. Many are content heavy. Make sure to prioritize the pages based on deadlines and the priority star rating, as well as against other projects.

New and highest priority:
  • firefox/whatsnew_75.lang (due on 27 March, 3pm UTC)
  • firefox/welcome/page6.lang
  • firefox/welcome/page7.lang
Updates and higher priority:
  • mozorg/404.lang
  • mozorg/500.lang
New and lower priority:
  • firefox/compare.lang
  • firefox/compare/chrome.lang
  • firefox/compare/edge.lang
  • firefox/compare/ie.lang
  • firefox/compare/opera.lang
  • firefox/compare/safari.lang
  • firefox/compare/shared.lang
Firefox Accounts:

New content will be ready for localization on a weekly basis, currently released on Fridays.

Web of Things

After the month of March, the team will cease active development. However, they will push translated content to production from time to time.

What’s new or coming up in Foundation projects

The localization of *Privacy Not Included has started! *Privacy Not Included is Mozilla’s attempt, through technical research, to help people shop for products that are safe, secure and private. The project has been enabled on Pontoon and a first batch of strings has been made available. You can test your work on the staging website, updated almost daily. For the locales that have access to the project, you can also opt in to localize the About section. If you’re interested, reach out to Théo. Not all locales can translate the project yet, but the team is exploring technical options to make it happen. The next edition of the guide is scheduled for this fall, and more content will be exposed over time.

MozFest is moving to Amsterdam! After 10 years in London, the Mozilla Festival will move to Amsterdam for its next edition in March 2021. The homepage is now localized, including in Dutch, and support for Frisian will be added soon. The team will make more content available for localization during the time leading to the next festival edition.

What’s new or coming up in SuMo

The SUMO team is going to decommission all old SUMO accounts by the 23rd of March 2020. If you have an account on SUMO, please take action to migrate it to Firefox Accounts.

In order to migrate to Firefox Accounts, it’s best to start by logging in to your old account and follow the prompts from there. Please read the FAQ and ask on this thread if you have any questions.

What’s new or coming up in Pontoon

Introducing comments

We’ve shipped the ability to add comments in Pontoon. One of our top requested features, it enables reviewers to give feedback on proposed suggestions and facilitates general discussions about a specific string. Read more about the feature and how to use it on the blog.

Huge thanks to our Outreachy intern April Bowler who developed the feature, and many Mozilla L10n community members who have been actively involved in the design process.

Pre-translation and post-editing

We’re introducing the ability to pre-translate strings using translation memory and machine translation. Pre-translations are marked on dashboards as needing attention, but they end up in repositories (and products). Note that the feature will go through substantial testing and evaluation before it gets enabled in any of the projects.

Thanks to Vishal for developing the feature and bringing us closer to the post-editing world.

Word count

Thanks to Oleksandra, Pontoon finally has the ability to measure project size in words in addition to strings. The numbers are not exposed anywhere in the UI or API yet. If you’re interested in developing such a feature, please let us know!


Want to showcase an event coming up that your community is participating in? Reach out to any l10n-driver and we’ll include that (see links to emails at the bottom of this report)

Friends of the Lion

Know someone in your l10n community who’s been doing a great job and should appear here? Contact one of the l10n-drivers and we’ll make sure they get a shout-out (see list at the bottom)!

Useful Links

Questions? Want to get involved?


Did you enjoy reading this report? Let us know how we can improve by reaching out to any one of the l10n-drivers listed above.

Mozilla VR BlogWebXR Emulator Extension AR support

WebXR Emulator Extension AR support

In September we released the WebXR Emulator Extension which enables testing WebXR VR applications in your desktop browser. Today we are happy to announce a new feature: AR support.


The WebXR Device API provides the interface for creating immersive (VR and AR) applications on the web across a wide variety of XR devices. The WebXR 1.0 API for VR has already shipped.

AR (Augmented Reality) is becoming popular thanks to new platforms like ARCore and ARKit. You may have seen online shops that let you view their items in your room. The AR market has the potential to be huge.

The Immersive Web Working Group has been working on the WebXR API for AR to bring a more open AR platform to the web. Chrome 81 (which was going to be released March 17th but is now postponed) enables the WebXR API for AR and Hit Test by default. Support in other browsers is coming soon, too.

Once it lands you will be able to play around with AR applications on compatible devices without installing anything. Here are some WebXR AR examples you can try.

If you want to try them on your Android device now, you can use Chrome Android Beta. Install ARCore and Chrome Beta, then visit the examples above.

What the extension enables

You need an AR-compatible device to run WebXR AR applications. Unfortunately you can’t run them on your desktop, even with the API enabled, because your desktop doesn’t have the required hardware.

The WebXR Emulator Extension enables running WebXR AR applications in your desktop browser by emulating AR devices. As the following animation shows, you can test an application as if it were running on an emulated AR device in a virtual room. The extension includes the WebXR API polyfill, so it even works in browsers that do not yet natively support the WebXR API for AR.

WebXR Emulator Extension AR support

How to use it

  1. Install the WebXR Emulator Extension from the extension store for your browser (Firefox, Chrome). (Update: it’s currently disabled on the Chrome Web Store. Chrome users, please use the Firefox version until that is fixed.)
  2. Open the WebXR tab in the devtools panel and select the “AR” device from the device list at the top of the panel
  3. Visit a WebXR application, for example the Three.js WebXR AR examples
  4. You will notice that the application detects that you have an (emulated) AR device. Click a button or other UI element to enter immersive mode
  5. You are now in a virtual room and the application runs on the emulated device. You can move around and control the device as you like.

No change is needed on the WebXR AR application side.
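For reference, this is roughly what the application side looks like. The `"immersive-ar"` session mode, `isSessionSupported()` and `requestSession()` come from the WebXR Device API; the helper names and structure below are a hypothetical sketch, and the emulator answers these calls just as a real AR device would:

```javascript
// Sketch of WebXR AR feature detection and session setup.
// The API calls are from the WebXR Device API; the helpers are illustrative.

// Resolves to true when an immersive AR session can be started.
function supportsImmersiveAR(xr) {
  if (!xr || !xr.isSessionSupported) return Promise.resolve(false);
  return xr.isSessionSupported("immersive-ar");
}

// Called from a user gesture, e.g. an "Enter AR" button click.
// Requests hit testing so the app can place objects on detected surfaces.
function enterAR(xr) {
  return xr.requestSession("immersive-ar", {
    requiredFeatures: ["hit-test"],
  });
}
```

In a real page you would pass `navigator.xr`; when the browser has no native WebXR AR support, the extension’s bundled polyfill provides that object.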


The extension addresses the difficulties of AR content creation. As with VR content creation, there are currently several obstacles to creating AR content.

  1. You first need to get an AR device. You can’t start creating an application until you have one.
  2. Writing code on a desktop while testing and debugging on a device is annoying, and debugging on a device is harder than on a desktop. Desktop browsers provide a remote debugger, but it’s still difficult.
  3. You need a place to test in. You need to rearrange the room if you want to test placing an AR object on the floor, or bring in a desk if you want to test placing an AR object on it.

This extension resolves these problems.

  1. Even if you don’t have an AR device, you can start creating a WebXR AR application.
  2. You can do all coding, testing, and debugging on the desktop, which makes testing and debugging easier and faster.
  3. You don’t need to rearrange your room because you can test in a virtual room. Currently the room is empty, but we plan to enable placing objects.

Although we of course strongly recommend testing on physical devices before a product release, the extension gives you a simpler workflow. You can test from beginning to end on the desktop with the original application flow (open an application page, press a button to enter immersive mode, and play the AR experience) without any change to the application. You can also keep using powerful desktop tools, for example screenshot capture, desktop video capture, and the browser JavaScript debugger.

Virtual room advantage

Using a virtual room has another advantage beyond the benefits mentioned above. One of the difficulties in AR is recognizing objects in the world. For example, the Hit Test feature requires recognizing planes in the world, and the upcoming lighting estimation feature requires detecting its lighting. AR devices generally rely on special cameras, chips, or software to solve this complex problem smoothly, but the extension doesn’t need them because it knows everything about its virtual room. As a result, we can easily add support for new AR features as they become ready.

What’s next for WebXR AR?

  • There are upcoming AR APIs, e.g. the Anchor API and the DOM Overlay API. We will keep adding new APIs as they become ready
  • Currently we emulate only one phone-type AR device. We plan to incorporate more AR devices.
  • Currently the virtual room is empty. We plan to enable placing objects.
  • We will keep improving usability.

We would love your feedback, feature requests, and bug reports. We are happy if you join us at the GitHub project.

And thanks to our Hello WebXR project and its asset author Diego; the very nice room asset is based on it.

The Firefox FrontierHow to switch from Microsoft Edge to Firefox in just a few minutes

You’ve heard that the Firefox browser is fast, private and secure, thanks to its built-in Enhanced Tracking Protection. You’ve also heard it’s made by people who want the web to … Read more

The post How to switch from Microsoft Edge to Firefox in just a few minutes appeared first on The Firefox Frontier.

Hacks.Mozilla.OrgLearn web technology at “sofa school”

Lots of kids around the world are learning from home right now. In this post, I introduce free resources based on web technologies that will help them explore and learn from the safety of their living rooms. VR headsets and high-end graphics cards aren’t necessary. Really, all you need is a web browser!


Create a secret hideout

Hubs by Mozilla lets you share a virtual room with friends right in your browser. You can watch videos, play with 3D objects, or just hang out. Then, once you get the hang of Hubs, you can build almost anything imaginable with Spoke: a clubhouse, adventure island, or magic castle . . . . In Hubs, your little world becomes a place to spend time with friends (and show off your skills).

Try It Out

Example images of many different Hubs room environments created using Spoke


Play with the CSS Coloring Book

When kids (or adults) want to color, there are options besides pulp-paper booklets of princesses and sea creatures, thanks to Lubna, a front-end developer from the UK. Just click the “Edit On Codepen” button and start playing. (The CSS color guide on MDN is a helpful reference). Young and old can learn by experimenting with this fun little toy.

Bring Bright Colors Into A Gray Day

Screenshot of the "Hello Spring" coloring book codepen with Edit button


Learn CSS Grid and Flexbox

The seasons are changing around the world—toward spring and toward fall. It feels like time to get into the garden and meet the frogs hopping from plant to plant. The Grid Garden that is, and Flexbox Froggy, to be precise. Educational software vendor Codepip created these attractive online learning experiences. They’re a great place for the young—and the young at heart—to get started with CSS.

Enter the Garden
Meet Flexbox Froggy

Screenshot of the Grid Garden where you can write CSS to grow pretend carrots


Fly to Jupiter

You don’t need a bus, car, submarine, or rocketship to go on a field trip. Educator Kai has created a variety of free VR experiences for kids and adults over at KaiXR. No headset is needed. Visit the planets of the solar system, Martin Luther King Memorial, the Mayan city of Chichen Itza, and the Taj Mahal in India. See dinosaurs, explore the human body, dive under the sea . . . and much, much more.

Go Exploring

Screenshot showing images of some of the Web XR journeys you can go on with Kai


It’s great to share resources!

Have other suggestions to share, or favorite learning resources? Tell us about them in the comments. They may be included in a future edition of our developer newsletter, where a portion of this content has already appeared.

The post Learn web technology at “sofa school” appeared first on Mozilla Hacks - the Web developer blog.

Daniel StenbergA curl dashboard

When I wrote up my looong blog post for curl’s 22nd anniversary, I vacuumed my home directories for all the leftover scripts and partial hacks I’d used in the past to produce graphs of all sorts of things in the curl project. Being slightly obsessed with graphs, that means I found a whole bunch of them.

I made graphs with libreoffice

I dusted them off and made sure they all created a decent CSV output that I could use. I imported that data into libreoffice’s calc spreadsheet program and created the graphs that way. That was fun and I was happy with the results – and I could also manually annotate them with additional info. I then created a new git repository for the purpose of hosting the statistics scripts and related tools and pushed my scripts to it. Well, at least all the ones that seemed to work and were the most fun.

Having done the hard work once, it felt a little sad to just have that single moment snapshot of the project at the exact time I created the graphs, just before curl’s twenty-second birthday. Surely it would be cooler to have them updated automatically?

How would I update them automatically?

I of course knew of gnuplot from before, as I’ve seen it used elsewhere (and I know it’s used to produce the graphs for the curl gitstats), but I had never used it myself.

How hard can it be?

I have a set of data files and there’s a free tool available for plotting graphs – and one that seems very capable too. I decided to have a go.

Of course I struggled at first when trying to get the basic concepts to make sense, but after a while I could make it show almost what I wanted and after having banged my head against it even more, it started to (partially) make sense! I’m still a gnuplot rookie, but I managed to tame it enough to produce some outputs!

The setup

I have a set of (predominantly) perl scripts that output CSV files. One output file for each script basically.

The statistics scripts dig out data using git from the source code repository and meta-data from the web site repository, and of course process that data in various ways that make sense. I figured a huge benefit of pushing the scripts to a public repository is that they can be reviewed by anyone and the output can be reproduced by anyone – or questioned if I messed up somewhere!

The CSV files are then used as input to gnuplot scripts, and each such gnuplot script outputs its result as an SVG image. I selected SVG to make them highly scalable and yet be fairly small disk-space wise.
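To give an idea of what such a step involves, a gnuplot script of this kind can be tiny. The file names and columns below are hypothetical, made up for illustration rather than taken from the actual curl stats repository:

```gnuplot
# Hypothetical sketch: render a two-column CSV (year, commits) as an SVG.
set datafile separator ","
set terminal svg size 800,480
set output "commits-per-year.svg"
set xlabel "year"
set ylabel "commits"
plot "commits-per-year.csv" using 1:2 with lines title "commits per year"
```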

Different directory names

To spice things up a little, I decided that each new round of generated graph images should be put in a newly created directory with a random string in its name. This is to make sure that we can cache the images on the curl web site for a very long time and still not have a problem when we update the dashboard page on the site.
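As a sketch of how that cache-busting scheme can work (the naming and layout here are made up for illustration, not taken from the actual update script):

```shell
# Hypothetical sketch: put each round of generated images in a fresh
# directory with a random component, so the images can be cached forever.
rand=$(head -c4 /dev/urandom | od -An -tx1 | tr -d ' \n')
dir="graphs-${rand}"
mkdir -p "$dir"
# ...generate the SVGs into "$dir", then update the dashboard page
# to reference the new directory name.
echo "$dir"
```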


On the web site itself, the update script runs once every 24 hours. It first updates its own clones of the source repo and the stats code git repo, then runs over twenty scripts to make CSV files and the corresponding SVGs.

A dashboard

I like to view the final results as a dashboard. With over 20 up-to-date graphs showing the state of development, releases, commits, bug-fixes, authors etc it truly gives the reader an idea of how the project is doing and what the trends look like.

I hope to keep adding to and improving the graphs over time. If you have ideas of what to visualize and add to the collection, by all means let me know!


At the time of me writing this, the dashboard page looks like below. Click the image to go to the live dashboard.

For other projects?

Nothing in this effort makes my scripts particularly unique to curl, so they could all be used for other projects as well – with anywhere from a little to a lot of hands-on work required. My data extraction scripts of course get and use data that we have stored, collected and kept logged in the project, and that data and those logs are highly curl specific.

Cameron KaiserTenFourFox FPR21b1 available

TenFourFox Feature Parity Release 21 beta 1 is now available (downloads, hashes, release notes). I decided against adding the AltiVec GCM accelerator for this release, since it needs some extra TLC to convert from VSX to VMX, and I'd like to test the other major changes independently without introducing a bigger bug exposure surface than necessary. As promised, however, this release does have support for higher-speed 0RTT TLS 1.3 with HTTP/2 (particularly useful on Google properties) and has additional performance adjustments to improve parallelism of TLS connections to HTTP/1.x sites (mostly everybody else). I also updated Reader mode to the most current version used in Firefox 74, incorporating several important fixes; for a slow or complex site that you don't need all the frills for, try turning on Reader mode by clicking the "book" icon in the URL bar. You can do it even while the page is loading (reload after if not all of it comes up). FPR21 will go live with Firefox 68.7/75 on April 7.

The Mozilla BlogTry our latest Test Pilot, Firefox for a Better Web, offering privacy and faster access to great content

Today we are launching a new Test Pilot initiative called Firefox Better Web with Scroll. The Firefox Better Web initiative is about bringing the ease back to browsing the web. We know that publishers are getting the short end of the stick in the current online ad ecosystem and advertising networks make it difficult for excellent journalism to thrive. To give users what they want most, which is great quality content without getting tracked by third parties, we know there needs to be a change. We’ve combined Firefox and Scroll’s growing network of ad-free sites to offer users a fast and private web experience that we believe can be our future.

If we’re going to create a better internet for everyone, we need to figure out how to make it work for publishers. Last year, we launched Enhanced Tracking Protection by default and have blocked more than two trillion third-party trackers to date, but it didn’t directly address the problems that publishers face. That’s where our partner Scroll comes in. By engaging with a better funding model, sites in their growing network no longer have to show you ads to make money. They can focus on quality not clicks. Firefox Better Web with Scroll gives you the fast, private web you want and supports publishers at the same time.

To try the Firefox Better Web online experience, Firefox users simply sign up for a Firefox account and install a web extension. As a Test Pilot, it will only be available in the US. The membership is 50% off for the first six months at $2.50 per month. This goes directly to fund publishers and writers, and in early tests we’ve found that sites make at least 40% more money than they would have made from showing you ads.

Early experimentation demonstrates desire for a “better web”

In February of 2019, we announced that we were exploring alternative revenue models on the web. Before we committed to any particular approach, we wanted to better understand the problem space. The entire investigation followed an arc very similar to the work we did with Firefox Monitor. We let the user be our guide, putting their expressed needs and concerns at the forefront of all of our work. We also tested cheaply and frequently, at each stage increasing the level of investment, but also the clarity of the data.

One of our tests was an initial experiment with Scroll to discern whether there was an appetite and desire for this type of online experience. We wanted to get a better sense from our users on what they cared about the most and figure out the pain points for news sites as well, so we used multiple different value propositions to describe the service that Scroll offered. Here are our findings:

  • Users see ads as distracting and say their online experience is broken (in the tech world, we call it breakage).
  • Users care a great deal about supporting journalism. Many users intentionally choose not to install ad-blockers because of the impact that it would have on publishers.
  • Users want to support Mozilla because we’re a non-profit and put our users first with Firefox. A better web that supports publishers and the makers of Firefox? Sign me up!

How Firefox Better Web works:

Firefox Better Web combines the work we’ve done with third-party tracking protection and Scroll’s network of outstanding publishers. This ensures you get a top-notch experience while still supporting publishers directly and keeping the web healthy. We use a customized Enhanced Tracking Protection setting that blocks third-party trackers, fingerprinters, and cryptominers. This provides additional privacy and a significant performance boost. Scroll then adds a network of top publishers who serve their content ad-free.

Firefox Better Web is available everywhere, but BEST in Firefox! See for yourself with our images:


With Firefox Better Web extension




Your membership is paid directly to the publishers in Scroll’s network based on the content you read. Our hope is that the success of this model will demonstrate to publishers the value of having a more direct, uncluttered connection to their online audience. In turn, the publisher network will continue to grow to include every site you care about so that your money can go directly to pay for the quality journalism you want to read. If you’re a publisher who wants to join this initiative, contact Scroll and see how this funding model can drive more revenue for you.

We invite you to try out Firefox Better Web and experience a private and faster way to read the news and stories you care about.

The post Try our latest Test Pilot, Firefox for a Better Web, offering privacy and faster access to great content appeared first on The Mozilla Blog.

Daniel Stenbergcurl ootw: –retry-max-time

Previous command line options of the week.

--retry-max-time has no short option alternative and it takes a numerical argument stating the time in seconds. See below for a proper explanation for what that time is.


curl supports retrying operations that failed due to “transient errors”, meaning that if the error code curl gets back signals that the error is likely to be temporary and not the fault of curl or the user using curl, it can try again. You enable retrying with --retry [tries], where you tell curl how many times it should retry. If it reaches the maximum number of retries without a successful transfer, it will return an error.

A transient error can mean that the server is temporarily overloaded or similar, so when curl retries it will by default wait a short while before the next attempt. By default it waits one second before the first retry, and then doubles the wait time for every new attempt until the waiting time reaches 10 minutes, which is the maximum waiting time. A user can set a custom delay between retries with the --retry-delay option.

Transient errors

Transient errors mean either: a timeout, an FTP 4xx response code or an HTTP 408 or 5xx response code. All other errors are non-transient and will not be retried with this option.

Retry no longer than this

Retrying can thus go on for an extended period of time, and you may want to limit for how long it will retry if the server really doesn’t work. Enter --retry-max-time.

It sets the maximum number of seconds that are allowed to have elapsed for another retry attempt to be started. If you set the maximum time to 20 seconds, curl will only start new retry attempts within the twenty-second window following the start of the first transfer attempt.

If curl gets a transient error back after 18 seconds, it will be allowed to do another retry. If that operation then takes 4 seconds, there will be no more attempts, but if it takes 1 second, there will be time for yet another retry.

Of course the primary --retry option sets the number of times to retry, which may be reached before the maximum time is. Or not.
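Putting the options together on a command line (the URL is just a placeholder):

```shell
# Retry transient errors up to 5 times, but start no new retry attempt
# once 60 seconds have passed since the first transfer attempt began.
curl --retry 5 --retry-max-time 60 https://example.com/

# Wait a fixed 2 seconds between tries instead of the doubling default,
# and cap the entire operation (retries included) at 120 seconds.
curl --retry 5 --retry-delay 2 --retry-max-time 60 --max-time 120 \
  https://example.com/
```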


Since curl 7.66.0 (September 2019), the server’s Retry-After: HTTP response header will be used to figure out when the subsequent retry should be issued – if present. It is a generic means to allow the server to control how fast clients will come back, so that the retries themselves don’t become a problem that causes more transient errors…


In curl 7.52.0, curl got an additional retry switch, --retry-connrefused, which adds “connection refused” as a valid reason for doing a retry. Without it, a connection refused is not considered a transient error and will cause a regular error exit code.

Related options

--max-time limits the entire time allowed for an operation, including all retry attempts.

This Week In RustThis Week in Rust 331

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's crate is flume, a fast multi-producer single-consumer channel.

Thanks to Vikrant for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

380 pull requests were merged in the last week

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.


No RFCs are currently in final comment period.

Tracking Issues & PRs

New RFCs

Upcoming Events

North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

Rust is funny because in one sense it's hard and clunky. However, it's only ever precisely as hard and clunky as it needs to be. Everywhere something can be made more concise, or readable, or convenient, without sacrificing any control, it has been. Anytime something is hard or inconvenient, it's because the underlying domain really is exactly that hard or inconvenient.

Contrast this with other languages, which are often clunky when they don't need to be and/or "easy" when they shouldn't be.

brundolf on Hacker News

Thanks to pitdicker for the suggestions!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nasa42 and llogiq.

Discuss on r/rust.

Support.Mozilla.OrgIntroducing Leo McArdle

Hi everyone,

We have good news from our team that I believe some of you might already know. Finally, Tasos will no longer be a lone coder on our team, as we now have an additional member in SUMO. Please say hi to Leo McArdle.

I’m sure Leo is not a new name for most of you. He’s been involved in the community for a long time (some of you might know him as a “Discourse guy”) and now he’s taking on a new role as a software engineer on the SUMO team.

Here is a short introduction from Leo:

Hey all, I’m Leo and joining the SUMO team as a Software Engineer. I’m very excited to be working on the SUMO platform, as it was through this community that I first started contributing to Mozilla. This led me to pursue programming first as a hobby, then as a profession, and ultimately end up back here! When I’m not programming, I’m usually watching some kind of motor racing, or attempting to cook something adventurous in the kitchen… and usually failing! See you all around!

Please join us to welcome him!

Daniel Stenberglet’s talk curl 2020 roadmap

tldr: join in and watch/discuss the curl 2020 roadmap live on Thursday March 26, 2020. Sign up here.

The roadmap is basically a list of things that we at wolfSSL want to work on and see happen in curl this year – and some that we want to mention as possibilities. (Yes, the word “webinar” is used, don’t let it scare you!)

If you can’t join live, you will be able to enjoy a recorded version after the fact.

I’ve shown the image below in curl presentations many times to illustrate the curl roadmap ahead:

The point being that we as a project don’t really have a set future but we know that more things will be added and fixed over time.

Daniel, wolfSSL and curl

This is a balancing act, as I wear several different “hats”.

I’m the individual who works for wolfSSL. In this case I’m looking at things we at wolfSSL want to work on for curl – it may not be what other members of the team will work on. (But they are still things we agree are good and a fit for the project.)

We in wolfSSL cannot control or decide what the other curl project members will work on as they are volunteers or employees working for other companies with other short and long term goals for their participation in the curl project.

We also want to communicate a few of the bigger-picture things for curl that we want to see done, so that others can join in and contribute their ideas and opinions about these features, perhaps even add their preferred subjects to the list – or step up and buy commercial curl support from us, getting a direct channel to us and the ability to directly affect what I will work on next.

As a lead developer of curl, I will of course never merge anything into curl that I don’t think benefits or advances the project. Commercial interests don’t change that.


Sign up here. The scheduled time has been picked to allow for participants from both North America and Europe. Unfortunately, this makes it hard for all friends not present on these continents. If you really want to join but can’t due to time zone issues, please contact me and let us see what we can do!


Top image by Free-Photos from Pixabay

Karl DubostWeek notes - 2020 w10, w11, w12 - worklog - Three weeks and the world is mad

So my latest work notes were 3 weeks ago, and what I was afraid of has just come to pass. We are in this for a long time. I’m living in Japan, which seems to have been spared so far, puzzling many people. My non-professional armchair-epidemiologist crystal-ball impression is that Japan will not escape it, seeing the daily behavior of people around me. Japan seems to have been quite protected by long cultural habits and a society of human-less contact (to the extreme point of hikikomori). I don’t think that will hold for long in a globalized world, but I’ll be super happy to be wrong.

So the coronavirus anxiety porn has eaten my week notes, but we managed to maintain a relatively reasonable curve for needsdiagnosis. That’s good news. I would love to modify this curve a bit to highlight the growing influx of unattended Chrome issues. If someone from Google could give them a shot: 55 to address. Probably fewer, given that when you let a bug rest too long it disappears because the website has been redesigned in the meantime.

We restarted the machine-learning bot that classifies invalid issues. It was not playing very well with our new workflow for anonymous reporting, so we had to change the criteria for selecting bugs.

Oh, this also happened this week, and talk about a wonderful shot of pure vitamin D. That’s one of the reasons Mozilla is awesome in difficult circumstances.

I'll try to be better at my week notes in the next couple of weeks.


Nick FitzgeraldWriting Programs! That Write Other Programs!!

I gave a short talk about program synthesis at !!Con West this year titled “Writing Programs! That Write Other Progams!!” Here’s the abstract for the talk:

Why write programs when you can write a program to write programs for you? And that program writes programs that are faster than the programs you’d write by hand. And that program’s programs are actually, you know, correct. Wow!

Yep, it’s time to synthesize. But this ain’t Moog, this is program synthesis. What is that, and how can it upgrade our optimizers into super-optimizers? We’ll find out!!

The talk was a short, friendly introduction to the same stuff I wrote about in Synthesizing Loop-Free Programs with Rust and Z3.

The recording of the talk is embedded below. The presentation slides are available here.

Also make sure to check out all the other talks from !!Con West 2020! !!Con West (and !!Con East) is a really special conference about the joy, surprise, and excitement of programming. It’s the anti-burnout conference: remembering all the fun and playfulness of programming, and embracing absurdist side projects because why not?! I love it, and I highly encourage you to come next year.

Daniel Stenbergcurl: 22 years in 22 pictures and 2222 words

curl turns twenty-two years old today. Let’s celebrate this by looking at its development, growth and change over time from a range of different viewpoints with the help of graphs and visualizations.

This is the more-curl-graphs-than-you-need post of the year. Here are 22 pictures showing off curl in more detail than anyone needs.

I founded the project back in the day and I remain the lead developer – but I’m far from alone in this. Let me take you on a journey and give you a glimpse into the curl factory. All the graphs below are provided in hires versions if you just click on them.

Below, you will learn that we’re constantly going further, adding more and aiming higher. There’s no end in sight and curl is never done. That’s why you know that leaning on curl for Internet transfers means going with a reliable solution.

Number of lines of code

Counting only code in the tool and the library (and public headers) it still has grown 80 times since the initial release, but then again it also can do so much more.

At times people ask how a “simple HTTP tool” can be over 160,000 lines of code. That’s basically three wrong assumptions put next to each other:

  1. curl is not simple. It features many protocols and fairly advanced APIs and super powers, it offers numerous build combinations, and it runs on just about all imaginable operating systems
  2. curl supports 24 transfer protocols and counting, not just HTTP(S)
  3. curl is much more than “just” the tool. The underlying libcurl is an Internet transfer jet engine.

How much more is curl going to grow and can it really continue growing like this even for the next 22 years? I don’t know. I wouldn’t have expected it ten years ago and guessing the future is terribly hard. I think it will at least continue growing, but maybe the growth will slow down at some point?

Number of contributors

Lots of people help out in the project. Everyone who reports bugs, brings code patches, improves the web site or corrects typos is a contributor. We want to thank everyone and give all helpers the credit they deserve. They’re all contributors. Here’s how fast our list of contributors is growing. We’re at over 2,130 names now.

When I wrote a blog post five years ago, we had 1,200 names in the list and the graph shows a small increase in growth over time…

Daniel’s share of total commits

I started the project. I’m still very much involved and I spend a ridiculous amount of time and effort in driving this. We’re now over 770 commit authors and this graph shows how the share of commits I do to the project has developed over time. I’ve done about 57% of all commits in the source code repository right now.

The graph is the accumulated amount. Some individual years I actually did far less than 50% of the commits, which the following graph shows.

Daniel’s share of commits per year

In the early days I was the only one who committed code. Over time a few others were “promoted” to the maintainer role, and since we switched to git in 2010 the tracking of authors is much more accurate.

In 2014 I joined Mozilla and we can see an uptick in my personal participation level again, after having been below 50% for several years straight.

There’s always this argument to be had about whether it is a good or a bad sign for the project that my individual share is this big. Is this just because I don’t let other people in, or because curl is so hard to work on and only I know my way around the secret passages? I think the ever-growing number of commit authors at least shows that it isn’t the latter.

What happens the day I grow bored or get run over by a bus? I don’t think there’s anything to worry about. Everything is free, open, provided and well documented.
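As an aside, the accumulated commit-share graphs boil down to a simple running ratio over the commit history. Here is a toy sketch of that calculation (made-up data, not the actual scripts linked at the end of this post):

```python
# Toy commit log: one author name per commit, oldest first (made-up data).
commits = ["daniel", "daniel", "alice", "daniel", "bob", "daniel"]

# Accumulated share: after each commit, the fraction authored by "daniel" so far.
share = []
mine = 0
for i, author in enumerate(commits, start=1):
    if author == "daniel":
        mine += 1
    share.append(mine / i)

print(f"accumulated share after {len(commits)} commits: {share[-1]:.0%}")
```

With this made-up log the final share is 4 out of 6 commits, i.e. 67%; the real graphs apply the same idea to the full repository history.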

Number of command line options

The command line tool is really like a very elaborate Swiss army knife for Internet transfers and it provides many individual knobs and levers to control the powers. curl has a lot of command line options and they’ve grown in number like this.

Is curl growing too hard to use? Should we redo the “UI”? Having this huge set of features like curl does, providing them all with a coherent and understandable interface is indeed a challenge…

Number of lines in docs/

Documentation is crucial. It’s the foundation on which users can learn about the tool, the library and the entire project. Having plenty and good documentation is a project ambition. Unfortunately, we can’t easily measure the quality.

All the documentation in curl sits in the docs/ directory or sub directories in there. This shows how the amount of docs for curl and libcurl has grown through the years, in number of lines of text. The majority of the docs is in the form of man pages.

Number of supported protocols

This refers to primary transfer protocols: what you basically specify as a scheme in URLs (ie it doesn’t count “helper protocols” like TCP, IP, DNS, TLS etc). Did I tell you curl is much more than an HTTP client?

More protocols coming? Maybe. There are always discussions and ideas… But we want protocols to have a URL syntax and be transfer oriented to map with the curl mindset correctly.

Number of HTTP versions

The support for different HTTP versions has also grown over the years. In the curl project we’re determined to support every HTTP version that is used, even if HTTP/0.9 support was recently disabled by default and you need to use an option to ask for it.

Number of TLS backends

The initial curl release didn’t even support HTTPS, but since 2005 we’ve supported customizable TLS backends and we’ve been adding support for many more since then. Even though we removed support for two libraries recently, we’re now counting thirteen different supported TLS libraries.

Number of HTTP/3 backends

Okay, this graph is mostly in jest but we recently added support for HTTP/3 and we instantly made that into a multi backend offering as well.

An added challenge that this graph doesn’t really show is how the choice of HTTP/3 backend is going to affect the choice of TLS backend and vice versa.

Number of SSH backends

For a long time we only supported a single SSH solution, but that was then and now we have three…

Number of disclosed vulnerabilities

We take security seriously and over time people have given us more attention and have spent more time digging deeper. These days we offer good monetary compensation for anyone who can find security flaws.

Number of known vulnerabilities

An attempt to visualize how many known vulnerabilities previous curl versions contain. Note that most of these problems are still fairly minor and some apply only to very specific use cases or surroundings. As a reference, this graph also includes the number of lines of code in the corresponding versions.

More recent releases have fewer known problems, partly because we have better testing in general but also of course because they’ve been around for a shorter time and thus people have had less time to find problems in them.

Number of function calls in the API

libcurl is an Internet transfer library and the number of provided function calls in the API has grown over time as we’ve learned what users want and need.

Anything built with libcurl 7.16.0 or later can always be upgraded to a later libcurl with no functionality change: the API and ABI are compatible. We put great effort into making sure this remains true.

The largest API additions over the last few years are marked in the graph: when we added the curl_mime_* and the curl_url_* families. We now offer 82 function calls. We’ve added 27 calls over the last 14 years while maintaining the same soname (ABI version).

Number of CI jobs per commit and PR

We’ve had automatic testing in the curl project since the year 2000. But for many years that testing was done by volunteers who ran tests in cronjobs on their local machines a few times per day and sent the logs back to the curl web site, which displayed their status.

The automatic tests are still running and they still provide value, but I think we all agree that getting the feedback up front in pull requests is a more direct way that also better prevents bad code from ever landing.

The first CI builds were added in 2013 but it took a few more years until we really adopted the CI lifestyle. Today we have 72 CI jobs, spread over 5 different CI services (Travis CI, AppVeyor, Cirrus CI, Azure Pipelines and GitHub Actions). These builds run for every commit and all submitted pull requests on GitHub. (We actually have a few more that aren’t easily counted, since they aren’t mentioned in files in the git repo but are controlled directly from GitHub settings.)

Number of test cases

A single test case can test a simple little thing or it can be a really big elaborate setup that tests a large number of functions and combinations. Counting test cases doesn’t in itself say much, but taken together and looking at the change over time we can at least see that we continue to put effort into expanding and increasing our tests. This can also be combined with the previous graph showing the CI builds, as most CI jobs also run all the tests (that they can).

Number of commits per month

A commit can be tiny or it can be big. Counting commits might not say a lot more than that it is a sign of some sort of activity and change in the project. I find it almost strange how little the number of commits per month has changed over time!

Number of authors per month

This shows the number of unique authors per month (in red) together with the number of first-time authors (in blue) and how these amounts have changed over time. In the last few years we see that we are rarely below fifteen authors per month and we almost always have more than five first-time commit authors per month.

I think I’m especially happy with the retained high rate of newcomers, as it is at least some indication that entering the project isn’t overly hard or complicated and that we manage to absorb these contributions. Of course, what we can’t see here is the number of users, or the effort people have put in, that never resulted in a merged commit. How often do we miss out on changes because of the project’s inability to receive or accept them?

72 operating systems

Operating systems on which you can build and run curl right now, or on which we know people have run curl before. Most mortals cannot even list this many OSes off the top of their heads. If you know of any additional OS that curl has run on, please let me know!

20 CPU architectures

CPU architectures on which we know people have run curl. It basically runs on any CPU that is 32 bit or larger. If you know of any additional CPU architecture that curl has run on, please let me know!

32 third party dependencies

Did I mention you can build curl in millions of combinations? That’s partly because of the multitude of different third party dependencies you can tell it to use. curl supports no less than 32 different third party dependencies right now. The picture below is an attempt at some sort of block diagram, where all the green boxes are third party libraries curl can potentially be built to use. Many of them can be used simultaneously, but a bunch are also mutually exclusive so no single build can actually use all 32.

60 libcurl bindings

If you’re looking for more explanations of how libcurl ends up being used in so many places, here are 60 more: languages and environments that sport a “binding” that lets users of those languages use libcurl for Internet transfers.

Missing pictures

“number of downloads” could’ve been fun, but we don’t collect the data and most users don’t download curl from our site anyway so it wouldn’t really say a lot.

“number of users” is impossible to tell and while I’ve come up with estimates every now and then, making a graph out of them would be reading too much into my blind guesses.

“number of graphs in anniversary blog posts” was a contender, but in the end I decided against it, partly since I have too little data.

(Scripts for most graphs)


Every anniversary is an opportunity to reflect on what’s next.

In the curl project we don’t have any grand scheme or roadmap for the coming years. We work much more short-term. We stick to the scope: Internet transfers specified as URLs. The products should be rock solid and secure. They should be highly performant. We should offer the features, knobs and levers our users need to keep doing Internet transfers now and in the future.

curl is never done. The development pace doesn’t slow down and the list of things to work on doesn’t shrink.

Mike HoyeNotice


As far as I can tell, 100% of the google results for “burnout” or “recognizing burnout” boil down to victim-blaming; they’re all about you, and your symptoms, and how to recognize when you’re burning out. Are you frustrated, overwhelmed, irritable, tired? Don’t ask for help, here’s how to self-diagnose! And then presumably do something.

What follows is always the most uselessly vague advice, like “listen to yourself” or “build resiliency” or whatever, which all sounds great and reinforces that the burden of recovery is entirely on the person burning out. And if you ask about the empirical evidence supporting it, this advice is mostly on par with leaving your healing crystals in the sun, getting your chakras greased or having your horoscope fixed by changing your birthday.

Resiliency and self-awareness definitely sound nice enough, and if your crystals are getting enough sun good for them, but just about all of this avoiding-burnout advice amounts to lighting scented candles downwind of a tire fire. If this was advice about a broken leg or anaphylaxis we’d see it for the trash it is, but because it’s about mental health somehow we don’t call it out. Is that a shattered femur? Start by believing in yourself, and believing that change is possible. Bee stings are just part of life; maybe you should take the time to rethink your breathing strategy. This might be a sign that breathing just isn’t right for you.

Even setting that aside: if we could all reliably self-assess and act on the objective facts we discerned thereby, burnout (and any number of other personal miseries) wouldn’t exist. But somehow here we are in not-that-world-at-all. And as far as I can tell approximately none percent of these articles are ever about, say, “how to foster a company culture that doesn’t burn people out”, or “managing people so they don’t burn out”, or “recognizing impending burnout in others, so you can intervene.”

I’ll leave why that might be as an exercise for the reader.

Fortunately, as in so many cases like this, evidence comes to the rescue; you just need to find it. And the best of the few evidence-based burnout-prevention guidelines I can find come from the field of medicine where there’s a very straight, very measurable line between physician burnout and patient care outcomes. Nothing there will surprise you, I suspect; “EHR stress” (Electronic Health Records) has a parallel in our lives with tooling support, and the rest of it – sane scheduling, wellness surveys, agency over meaningful work-life balance and so on – seems universal. And it’s very clear from the research that recognizing the problem in yourself and in your colleagues is only one, late step. Getting support to make changes to the culture and systems in which you find yourself embedded is, for the individual, the next part of the process.

The American Medical Association has a “Five steps to creating a wellness culture” document, likewise rooted in gathered evidence, and it’s worth noting that the key takeaways are that burnout is a structural problem and mitigating it requires structural solutions. “Assess and intervene” is the last part of the process, not the first. “Self-assess and then do whatever” is not on the list at all, because that advice is terrible and the default setting of people burning out is self-isolation and never, ever asking people for the help they need.

We get a lot of things right where I work, and we’re better at taking care of people now than just about any other org I’ve ever heard of, but we still need to foster an “if you see something, say something” approach to each others’ well being. I bet wherever you are, you do too. Particularly now that the whole world has hard-cutover to remote-only and we’re only seeing each other through screens.

Yesterday, I told some colleagues that “if you think somebody we work with is obviously failing at self-care, talk to them”, and I should have been a lot more specific. This isn’t a perfect list by any means, but if you ask someone how they’re doing and they can’t so much as look you in the eye when they answer, see that. If you’re talking about work and they start thumbing their palms or rubbing their wrists or some other reflexive self-soothing twitch, notice. If you ask them about what they’re working on and they take a long breath and longer choosing their words, pay attention. If somebody who isn’t normally irritable or prone to cynical or sardonic humor starts trending that way, if they’re hunched over in meetings looking bedraggled when they normally take care of posture and basic grooming, notice that and say so.

If “mental health” is just “health” – and I guarantee it is – then burnout is an avoidable workplace injury, and I don’t believe in unavoidable mental-health injuries any more than I believe in unavoidable forklift accidents. Keep an eye out for your colleagues. If you think somebody you work with is failing at self-care, talk to them. Maybe talk to a friend, maybe talk to their manager or yours.

But say something. Don’t let it slide.

Firefox NightlyThese Weeks in Firefox: Issue 71


  • Pour one out because irc.mozilla.org is no more! Now raise a cup, because we’re all chatting on Matrix now, come join us!
  • The Network Monitor now shows links to the place where the request was initiated. Clicking on the links navigates the user to the Stack Trace side panel, with the entire stack trace showing.
    • The Network Monitor Developer Tool is showing a list of outgoing network requests in a table. One of the columns is Initiator, and it lists the file and line number where the request was initiated. That column is circled.

      🎶 Where did you come from, where did you go? 🎶

  • The password doorhanger icon now appears (by default) as soon as a password field is edited on a webpage. This allows the password to be saved to Firefox on any site where the user hasn’t chosen to never save! Please file bugs on this new feature.
  • Today’s Firefox 74 release includes Picture-in-Picture toggle adjustments for Instagram, Udemy and Twitch
  • The new search configuration format has now been turned on for Nightly builds. If you see anything unexpected with your default (Firefox-provided) search engines, please let us know by filing a bug.
  • Both Pocket Collections and Pocket stories in en-GB are in beta, and moving along to release now. We have a smoke test experiment going out in beta.

Friends of the Firefox team

Resolved bugs (excluding employees)

Fixed more than one bug

  • aarushivij
  • dw-dev
  • Florens Verschelde :fvsch
  • Itiel
  • KC
  • Kriti Singh
  • Outvi V
  • Sebastian Zartner [:sebo]
  • Thal Marcelin
  • Tim Nguyen :ntim
  • Uday Mewada

New contributors (🌟 = first patch)

Project Updates

Add-ons / Web Extensions

WebExtensions Framework
  • Fixes related to geolocation and browserSettings optional permissions (Bug 1618398, Bug 1618500).
  • privacy is now supported as an optional permission (Bug 1618399).
  • Starting from Firefox 74, the dns permission doesn’t trigger a permission prompt anymore (Bug 1617861). uBlock has recently started to use this API to block trackers that disguise themselves as first party, but the wording for the DNS permission in the prompt was confusing (e.g. See Bug 1617873).
WebExtension APIs
  • Starting from Firefox 75, tabs.saveAsPDF supports two new optional properties: fileName and pageSettings (Bug 1483590). Thanks to dw-dev for contributing this!
  • The browserSettings API now supports the new zoomSiteSpecific and zoomFullPage settings (Bug 1286953), thanks also to dw-dev.
  • Tom Schuster fixed an issue triggered by calling browser.find.highlightResults with result objects that are missing the optional rangeIndex parameter (Bug 1615761).
Addon Manager & about:addons
  • about:addons got a small fix to make sure that the options menu doesn’t reopen when the user clicks on it again while it’s already opened (Bug 1603352).


Sync and Storage

  • Durable Sync, a project to port the Sync storage backend to Rust, is rolling out to more users! We’re going from 50% of new Sync users to 75% on March 11th.

Developer Tools


  • In the Browser Toolbox, you can now select the context in which you want to evaluate a given expression (Bug 1605329)
    • A dropdown at the bottom of the Browser Toolbox console lets the user choose which context to run JavaScript in. parser-worker.js is currently selected.

      Context is important!

Network Monitor

  • It’s possible to use wildcards to block requests. See the async-*.js example in the following screenshot.
    • A table in the Network Monitor developer tool shows a series of requests. Two of the requests are marked as "Blocked by DevTools", and a column shows that this block is due to a regular expression in the Blocking pane.

      Blocked by DevTools!

  • It’s possible to filter WebSockets messages with regular expressions (bug)
    • A table in the Network Monitor Developer Tool shows a series of requests, one of which is a WebSocket connection. The messages being sent over the WebSocket connection are being filtered by a regular expression in the side pane.

      This can be pretty handy if there’s lots of traffic going over the WebSocket.
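The wildcard patterns above follow the familiar glob style, where * matches any run of characters. As a rough illustration of the idea (a sketch only, not the actual DevTools matching code, and the hypothetical request names are made up), Python’s fnmatch behaves the same way:

```python
from fnmatch import fnmatch

# Hypothetical request filenames; the pattern mirrors the async-*.js example.
pattern = "async-*.js"
requests = ["async-main.js", "async-vendor.js", "style.css", "app.js"]

# Keep only the names the glob pattern matches, as a blocking rule would.
blocked = [name for name in requests if fnmatch(name, pattern)]
print(blocked)  # the two async-*.js requests match
```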


  • Bernard finished his first fix, to get the unselected tab hover ported to Fission. This allows video decoding to start when hovering over a tab one might switch to.
  • Porting front end actors/components to JSWindowActors is now ⅔ complete.


  • Started work to move the installer UI out of NSIS and into a web based framework so that developers can make UI changes using plain ol’ (literally, it has to run in the old Internet Explorer rendering engine! 🗿) HTML and CSS.



  • Fenix has support for WebPush! 🌐🖐 You can now get push notifications from sites that support it (e.g. Instagram, Twitter). Please file bugs for sites that don’t work.

New Tab Page

  • Starting to work on moving v2 recommendation code into a web worker.
  • We’ll also be exploring some new New Tab layouts designed for higher engagement, and exploring story density configuration. This gives the user back the ability to configure how much Pocket they see.
    • Both of these ideas are in early design and will likely see dev work in 77.
  • All development is now happening in mozilla-central (GitHub repo is frozen), and docs are updated. See this mailing list thread.


  • `mach vendor node` and other automation work is being scoped out. Anyone potentially interested in helping out with implementation is highly encouraged to ask @dmosedale or in the #fx-desktop-dev room on Matrix!

Password Manager


Performance Tools

  • The brand new capturing workflow has landed and you can use it on Firefox Nightly! You can enable it on profiler.firefox.com if you haven’t already.
    • A new panel for the Profiler Toolbar icon is displayed. It shows a single dropdown allowing users to select domain-specific settings. The current setting is "Web Developer".

      Shiny and new, and easy to use too!

  • There is a context menu for the timeline markers now.
    • A context menu is displayed over the tracks in the Firefox Profiler letting the user perform various actions on the region that was clicked.

      This is a very handy addition to the profiler UI!

  • Now you can drag and drop profiles into profiler.firefox.com even when a profile is already open.


Search and Navigation

  • Dale has fixed an issue with right-clicking and selecting search when using DuckDuckGo Lite.
  • Mark fixed an issue where shutting down in the middle of search engine startup could write an incomplete cache.
Address Bar
  • Notable changes:
    • Unified address bar and search bar clickSelectsAll behavior across all the platforms (prefs have been removed) – Bug 333714
  • Visual redesign (update 1)
    • Release scheduled for Firefox 75
    • Will run a pref-flip study in 74
    • Various minor fixes around design polish, telemetry and code cleanups
    • When Top Sites are disabled in the new tab page, the address bar falls back to the old list – Bug 1617345
    • Top sites are shown when the address bar input is cleared – Bug 1617408
  • Make Address Bar modules more easily reusable by other projects
    • Making the code more self-contained and less dependent on the browser code layout
  • Address Bar results composition improvements
    • Aimed at improving results composition by fixing papercuts and improving frecency
    • Don’t suggest switching to the current tab – Bug 555694


User Journey

The Firefox FrontierExtension Spotlight: Worldwide Radio

Before Oleksandr Popov had the idea to build a browser extension that could broadcast thousands of global radio stations, his initial motivation was as abstract as it was aspirational. “I … Read more

The post Extension Spotlight: Worldwide Radio appeared first on The Firefox Frontier.

Jan-Erik RedigerReview Feedback: a response to the Feedback Ladder

Last week I read Feedback Ladders: How We Encode Code Reviews at Netlify and also shared that with my team at Mozilla. In this post I want to summarize how we organize our reviews and compare that to Netlify's Feedback Ladder.

My team is mainly responsible for all work on Firefox Telemetry and our other projects. (Nearly) everything we do is first tracked in Bugs on Bugzilla. No code change (nor doc change) will land without review. For changes to land in Firefox the developer is responsible for picking the right reviewer, though right now that's mostly shared work between chutten and me. Sometimes we need to involve experts from other components of Firefox.

On Glean we rely on an auto-assign bot to pick a reviewer after opening a pull request. Sometimes the submitter also actively picks one from the team as a reviewer, e.g. if it's a followup to previous work or if some niche expertise is needed.

When reviewing we use a system not too dissimilar to the Feedback ladder. However it is much more informal.

Let's compare the different steps:

⛰ Mountain / Blocking and requires immediate action

In Mozilla speak that would be an "r-" - "Rejected". In my team this rarely (never?) happens on code changes.

On any bigger changes or features we usually start with a design proposal that goes through feedback iterations with stakeholders (the direct team or colleagues from other teams, depending on scope). This would be the point to shut down ideas or turn them around to fit our and our users’ needs. Design proposals vary in depth, but may already include implementation details where required.

🧗‍♀️ Boulder / Blocking

For us this is "Changes requested". Both review tools we use (Phabricator and plain GitHub PRs) have this as explicit review states.

The code change can't land until problems are fixed. Once the developer pushed new changes the pull request will need another round of review.

All problems should be clearly pointed out during the review and comments attached to where the problem is. However, unlike the Feedback ladder, our individual comments don't follow a strict wording, so the developer who submitted the change can't differentiate between them easily.

⚪️ Pebble / Non-blocking but requires future action

This is famously known here as "r+wc" - "review accepted, with comments".

Now for us this is actually two different parts:

First, the reviewer is fine with the overall change, but found some smaller things that definitely need to be changed, such as documentation wording, code comments or naming. However, it is considered the developer's task to ensure these changes get made before the PR is landed and no additional round of review needs to follow. GitHub luckily allows reviewers to submit the exact change required and the developer can apply it with a button click, so there's not always the need to go back to the code editor, commit code, push it, ...

Second, some things require a follow-up, such as some possible code refactor, additional features or tracking the bug fix through the release process for later validation (we deal with data, so for some fixes we need to see real-world data coming in to determine if the fix worked). Reviewers should ask for a bug to be filed and the developer usually posts the filed bug as a comment. In that state the pull request is then ready to get merged.

Again, there's no formal concept or wording (other than "this needs a follow-up bug" and GitHub's "Apply suggestion") we use to make the review comments stick out.

⏳ Sand / Non-blocking but requires future consideration

This is very similar to the "⚪️ Pebble" step, but the follow-up bugs filed are more likely going to be "Investigate ..." or "Proposal for ...".

🌫 Dust / Non-blocking, “take it or leave it”

These are all the other little comments, which will usually still end in an "r+" - "Review done & accepted". We enforce code formatting via tools, so there's rarely a need to discuss that; what remains comes down to naming or slightly different code patterns.

More often than not I label these in my comments as "small nit: ...". More often than not these are still taken in and applied.

So ... ?

In my team we already use an informal but working method to express the different kinds of review feedback. We certainly lack clarity and immediate visibility of the different patterns, and that's a thing we can improve. Not only might that help us right now, it would also help when onboarding new folks later.

I'm not fully sold on the metaphors used in the Feedback Ladder, but I do like using emojis in combination with plaintext to signal things.

Daniel Stenbergcurl up 2020 goes online only

curl up 2020 will not take place in Berlin as previously planned. The corona times are desperate times and we don’t expect things to have improved soon enough to make a physical conference possible at this date.

curl up 2020 will still take place, and on the same dates as planned (May 9-10), but we will change the event to a pure online and video-heavy occasion. This way we can of course also more easily welcome an audience and participants from even further away, who previously would have had a hard time participating.

We have not worked out the details yet. What tools to use, how to schedule, how to participate, how to ask questions or how to say cheers with your local favorite beer. If you have ideas, suggestions or even experiences to share regarding this, please join the curl-meet mailing list and help!

Daniel Stenbergcurl write-out JSON

This is not a command line option of the week post, but I feel a need to tell you a little about our brand new addition!

--write-out [format]

This option takes a format string in which there are a number of different “variables” available that let a user output information from the previous transfer. For example, you can get the HTTP response code from a transfer like this:

curl -w 'code: %{response_code}' https://example.org -o saved

There are currently 34 different such variables listed and described in the man page. The most recently added one is for JSON output and it works like this:

%{json}

It is a single variable that outputs a full JSON object. You would for example invoke it like this when you get data from example.com:

curl --write-out '%{json}' https://example.com -o saved

That command line will spew some 800 bytes to the terminal and it won’t be very human readable. You will rather take care of that output with some kind of script/program, or if you want an eye-pleasing version you can pipe it into jq, and then it can look like this:

{
  "url_effective": "https://example.com/",
  "http_code": 200,
  "response_code": 200,
  "http_connect": 0,
  "time_total": 0.44054,
  "time_namelookup": 0.001067,
  "time_connect": 0.11162,
  "time_appconnect": 0.336415,
  "time_pretransfer": 0.336568,
  "time_starttransfer": 0.440361,
  "size_header": 347,
  "size_request": 77,
  "size_download": 1256,
  "size_upload": 0,
  "speed_download": 0.002854,
  "speed_upload": 0,
  "content_type": "text/html; charset=UTF-8",
  "num_connects": 1,
  "time_redirect": 0,
  "num_redirects": 0,
  "ssl_verify_result": 0,
  "proxy_ssl_verify_result": 0,
  "filename_effective": "saved",
  "remote_ip": "",
  "remote_port": 443,
  "local_ip": "",
  "local_port": 44832,
  "http_version": "2",
  "scheme": "HTTPS",
  "curl_version": "libcurl/7.69.2 GnuTLS/3.6.12 zlib/1.2.11 brotli/1.0.7 c-ares/1.15.0 libidn2/2.3.0 libpsl/0.21.0 (+libidn2/2.3.0) nghttp2/1.40.0 librtmp/2.3"
}

The JSON object

It always outputs the entire object and the object may of course differ over time, as I expect that we might add more fields into it in the future.

The names are the same as the write-out variables, so you can read the --write-out section in the man page to learn more.
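Because the field names are stable, the object is easy to post-process in a script. As a hypothetical sketch (using values shaped like the sample output above, not live curl output), a few lines of Python can split the total time into per-phase durations:

```python
import json

# A write-out object as curl might emit it (values copied from the sample above).
writeout = '''{"response_code": 200, "time_namelookup": 0.001067,
"time_connect": 0.11162, "time_appconnect": 0.336415,
"time_starttransfer": 0.440361, "time_total": 0.44054}'''

data = json.loads(writeout)

# Each time_* value is measured from the start of the transfer, so the
# duration of each phase is the difference between consecutive timestamps.
phases = {
    "dns":  data["time_namelookup"],
    "tcp":  data["time_connect"] - data["time_namelookup"],
    "tls":  data["time_appconnect"] - data["time_connect"],
    "wait": data["time_starttransfer"] - data["time_appconnect"],
}

for name, seconds in phases.items():
    print(f"{name}: {seconds * 1000:.1f} ms")
```

In a real pipeline you would feed `curl --write-out '%{json}' … -o saved` straight into such a script instead of a hard-coded string.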


The feature landed in this commit. This new functionality will debut in the next pending release, likely to be called 7.70.0, scheduled to happen on April 29, 2020.


This is the result of fine coding work by Mathias Gumz.

Top image by StartupStockPhotos from Pixabay

Ludovic HirlimannNew Job

I started a new gig at the beginning of this month. It's a nice little company focused on mapping solutions. They work on open source software, QGIS and PostGIS, and have developed a nice webapp called Lizmap. I am a sysadmin there, managing their SaaS offering.

This Week In RustThis Week in Rust 330

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's crate is beef, an alternative memory-compact Clone on Write (CoW) implementation.

Thanks to Vlad Frolov for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

309 pull requests were merged in the last week

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.


No RFCs are currently in final comment period.

Tracking Issues & PRs

New RFCs

Upcoming Events

North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

I thought up a clever qotw bait one liner to stick in here that prompted me to actually write it then forgot it while writing the post in favor of being genuine... whoops

Christopher Durham confessing to rust-users

Thanks to Jules Kerssemakers for the suggestions!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nasa42 and llogiq.

Discuss on r/rust.

David HumphreyTeaching in the time of Corona

Like so many of my post-secondary colleagues around the world, I've been trying to figure out what it means to conduct the remainder of the Winter 2020 term in a 100% online format.  I don't have an answer yet, but here is some of what I'm currently thinking.


It seems crazy to give background or provide links to what's going on, but for my future self, here is the context of the current situation.  On Thursday March 12, the Ontario government announced the closure of all publicly funded schools (K-12) in the province.  This measure was part of a series of closures that were cascading across the US and Canada.  The effect was that suddenly every teacher, parent, student, and all their family members were now part of the story. What was happening on Twitter, at airports, or "somewhere else" had now landed with a thud on everyone's doorstep.

What we didn't hear on Thursday was any news about post-secondary institutions.  If K-12 had to close, how could we possibly keep colleges and universities open?  Our classes are much larger, and our faculty and student populations much more mobile.  It made no sense, and many of my colleagues were upset.

I went to work on Friday in order to give two previously scheduled tests.  As I was handing out the test papers, an email came to my laptop.  Our college's president was announcing an end to all in-person classes and a move to "online."

The plan is as follows:

  • March 16-20, no classes at all (online or in-person), giving faculty a chance to prepare, and students a chance to make new arrangements.
  • March 23 - April 2, all teaching and other academic interaction is to be done online.
  • April 6 - some classes will resume in-person lab work, while others will continue to be 100% online.  For me, I don't think there's any need to do in-person for the rest of the term, but we'll see.
  • April 13-17 - final exams are cancelled, and alternative final assessments will happen this week, as classes wrap up.


I've been working and teaching online for at least 15 years, much of it as part of Mozilla.  I love it, and as an introvert and writer, it's my preferred way of working. However, I've never done everything online, especially not lecturing.

I love lecturing (just ask my teenagers!), and it's really hard to move it out of an in-person format.  I've given lots of online talks and lectures in the past.  Sometimes it happened because a conference needed to accommodate a larger audience than could safely fit in the room.  Other times I've had a few remote people want to join an event.  Once I gave a lecture to hundreds of CS students in France from Toronto, and I've even given a talk in English from Stanford that was simultaneously translated in real-time into Japanese and broadcast in Japan.

It can be done.  But it's not how I like to work.  A good lecture is dynamic, and includes the audience, their questions and comments, but also their desired pace, level of understanding, etc.  I rarely use notes or slides anymore, preferring to have a more conversational, authentic interaction with my students.  I don't think it's possible to "move this online" the way I do it, or at least, I don't know how (yet).


I was lucky to have a chance to meet with many of my students on Friday, and talk with them about how they were feeling, and what their needs would be.  When we talk about moving a course online, much of the conversation gets focused on technology needs.

In the past few days, I've been amazed to watch my colleagues grapple with the challenge of doing everything online.  Here's some of what I've seen:

  • Some people are using Slack to have meetings and discussions
  • Lots of Microsoft Teams is happening at my institution
  • Many people are experimenting with Zoom to hold office hours and give lectures
  • Others are trying Google Hangouts, Skype for Business, and Webex
  • A few people are using BigBlueButton
  • Lots of people are putting things on YouTube and other video platforms
  • My institution recommends using tools in Blackboard, which I'm not even going to mention (or use).

Imagine being a faculty member who suddenly has to evaluate and learn some or all of these.  I've used them all before, but many of my colleagues haven't.  I spent Friday afternoon showing some of my peers how to setup a few of the tools, and it's a lot to pick up quickly.

Now imagine being a student who suddenly has faculty wanting you to use all of these new tools in parallel, and everyone doing things slightly differently!  That's also a lot, and I have a lot of empathy for what the students are facing.

Understanding the Audience

Thankfully, technology isn't the problem: we have so much of it, and lots of it is "good enough."  The real problem is trying to figure out how to support our students in ways that are actually helpful to the learning process.

I asked my students what they wanted me to do. Here's some of what I heard:

  1. Many expressed concern about trying to attend scheduled times online, since they now have new childcare responsibilities to deal with (since the schools have closed and their kids are home).
  2. Many talked about not wanting to lose in-person sessions.  "The notes aren't enough."  I was asked to create some videos and post those so that they could go through them later and "multiple times."
  3. A lot of people were worried about how to get their questions answered, and how to show me problems they faced in their code.  On any given day, after I finish a lecture, I'm always greeted by a long line of students with laptops open who need help debugging something in their code.  "How can I show you my work and ask questions?"
  4. Others talked about their fears of isolation and anxiety at facing this alone.  I have many students from abroad, some new to Canada, or who are here alone, and classes are an important chance to connect with peers, work on English, and otherwise connect into Canadian society.  Losing that is losing a lot.
  5. Finally, some of my students expressed concern about losing the chance to celebrate successes together.  My open source students have decided that if they manage to ship a 1.0 by the end of the term, I'm going to get them cake (cheesecake, actually).  "What happens with our cake if we don't see each other!?"

How do you pivot "celebrate 1.0 with cake together" to purely online?

One Approach

I've always wanted to do more of my courses online, and it feels like this is an interesting time to experiment.  At the same time, everyone (including myself) is totally overwhelmed with what's happening in society.  My wife told me to be realistic with my expectations for myself and the process, and as always, I know she's right.

I'm going to go slower and smaller than I might if I was building these courses online from day one.  Here's my current thinking:

  • I'm going to use Slack for communication.  So many of the open source projects we interact with use it, and it's good for the students to get a chance to try it.  My open source classes already use it, but my Web Programming students don't, and it will be new for them.  Slack lets me stay closely connected with the students, have real-time conversations, but also allows them to drop-in later and scroll back through anything they missed.
  • I'm not going to do online lectures.
  • Instead, I'm going to try creating some short screencasts to supplement my lecture notes and put them on YouTube.  My Web Programming students expressed that they needed examples of me writing code, and explaining what I was doing.  I'll use these as a way to show them practical examples of what I've written in the notes.  Luckily I wrote extensive online course notes a few terms ago, and this doesn't need to get done in a rush right now.  
  • In place of tests, I'm going to move to practical assessments that get submitted online.  The students already do their assignments this way, but I'll add some lab work, and get them to show me that they understand the weekly material via practical application.
  • Assignments can stay the same as before, which is a blessing.
  • I'm not sure what I'll do for a final assessment.  The other profs teaching my course with me all agreed to revisit this on Friday when we've got more information about what's likely to happen in April.

Electricity, Water, Web

All of the courses I'm teaching right now are really "open web" courses.  It's nice for me because the act of moving my courses online is itself a case study of what it means to apply what we're learning in the classroom.

In the coming weeks, where possible, I'm going to try and use examples that touch on the parts of the web that are most critical right now.  For example, I'd love to use WebSockets and WebRTC if possible, to show the students how the tech they're using in all their classes is also within their grasp as developers (as an aside, I'm looking for some easy ways to have them work with WebRTC in the browser only, and need to figure out some public signaling solutions, in case you know of any).

I've been amazed to watch just how significant the web has been to the plans of countries all around the world in the face of the Coronavirus.  Working from home and teaching and learning online are impossible without the open web platform to support the needs of everybody right now.

In 2020, the web is a utility, and society expects it to work.  Understanding how the web works is critical to the functioning of a modern society, and I'm proud to have dedicated my career to building and teaching all this web technology.  It's amazing to see it being used for so much good, and an honour to teach the next generation how to keep it working.

Daniel Stenbergcurl ootw happy eyeballs timeout

Previous options of the week.

This week’s option has no short option and the long name is indeed quite long:

--happy-eyeballs-timeout-ms <milliseconds>

This option was added in curl 7.59.0 (March 2018) and is very rarely actually needed.

To understand this command line option, I think I should make a quick recap of what “happy eyeballs” is exactly and which timeout in there this command line option is referring to!

Happy Eyeballs

This is the name of a standard way of connecting to a host (a server really in curl’s case) that has both IPv4 and IPv6 addresses.

When curl resolves the host name and gets a list of IP addresses back for it, it will try to connect to the host over both IPv4 and IPv6 in parallel, concurrently. The first of these connects that completes its handshake is considered the winner and the other connection attempt then gets ditched and is forgotten. To complicate matters a little more, a host name can resolve to a list of addresses of both IP versions and if a connect to one of the addresses fails, curl will attempt the next in a way so that IPv4 addresses and IPv6 addresses will be attempted, simultaneously, until one succeeds.

curl races connection attempts against each other. IPv6 vs IPv4.

Of course, if a host name only has addresses in one IP version, curl will only use that specific version.

Happy Eyeballs Timeout

For hosts having both IPv6 and IPv4 addresses, curl will first fire off the IPv6 attempt and then after a timeout, start the first IPv4 attempt. This makes curl prefer a quick IPv6 connect.

The default timeout from the moment the first IPv6 connect is issued until the first IPv4 starts, is 200 milliseconds. (The Happy Eyeballs RFC 6555 claims Firefox and Chrome both use a 300 millisecond delay, but I’m not convinced this is actually true in current versions.)

By altering this timeout, you can shift the likeliness of one or the other connect to “win”.

Example: change the happy eyeballs timeout to the same value said to be used by some browsers (300 milliseconds):

curl --happy-eyeballs-timeout-ms 300 https://example.com/
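
curl implements this in C with non-blocking sockets, but the staggered race itself is easy to model. Below is a simplified, illustrative Python sketch of the idea (not curl's actual code): the function names, the fake connect callables, and the 5-second safety timeout are my own inventions for the example.

```python
import threading
import time
from concurrent.futures import ThreadPoolExecutor

def happy_eyeballs(connect_ipv6, connect_ipv4, delay_ms=200):
    """Race an IPv6 and an IPv4 connect attempt against each other.

    The IPv4 attempt is held back by delay_ms (200 is curl's default),
    so a reasonably quick IPv6 connect always wins. The first callable
    to return without raising OSError is the winner.
    """
    winner = []
    done = threading.Event()

    def attempt(connect, delay):
        time.sleep(delay)
        if done.is_set():            # the other family already won
            return
        try:
            conn = connect()
        except OSError:
            return                   # this attempt failed; let the other run
        if not done.is_set():
            winner.append(conn)
            done.set()

    with ThreadPoolExecutor(max_workers=2) as pool:
        pool.submit(attempt, connect_ipv6, 0)
        pool.submit(attempt, connect_ipv4, delay_ms / 1000.0)
        done.wait(timeout=5)         # overall safety timeout for this sketch
    return winner[0] if winner else None

# Simulated handshakes: IPv6 takes 400 ms, IPv4 takes 50 ms.
slow_v6 = lambda: (time.sleep(0.4), "ipv6")[1]
fast_v4 = lambda: (time.sleep(0.05), "ipv4")[1]

# IPv4 wins here: it starts at 200 ms and finishes at 250 ms,
# before the IPv6 attempt completes at 400 ms.
print(happy_eyeballs(slow_v6, fast_v4))
```

In this model, changing delay_ms corresponds to what --happy-eyeballs-timeout-ms tunes in curl.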

Happy Eyeballs++

There’s a Happy Eyeballs version two, defined in RFC 8305. It takes the concept a step further and suggests that a client such as curl should start the first connection already when the first name resolve answers come in and not wait for all the responses to arrive before it starts the racing.

curl does not do that level of “extreme” Happy Eyeballing, for two simple reasons:

1. there’s no portable name resolving function that gives us data in that manner. curl won’t start the actual connection procedure until the name resolution phase is completed, in its entirety.

2. getaddrinfo() returns addresses in a defined order that is hard to follow if we would side-step that function as described in RFC 8305.

Taken together, my guess is that very few internet clients today actually implement Happy Eyeballs v2, but there’s little to no reason for anyone to not implement the original algorithm.

Curious extra

curl has done Happy Eyeballs connections since 7.34.0 (December 2013) and yet we had this lingering bug in the code that made it misbehave at times, only just now fixed and not shipped in a release yet. This bug makes curl sometimes retry the same failing IPv6 address multiple times while the IPv4 connection is slow.

Related options

--connect-timeout limits how long to spend trying to connect and --max-time limits the entire curl operation to a fixed time.

Daniel StenbergWarning: curl users on Windows using FILE://

The Windows operating system will automatically, and without any way for applications to disable it, try to establish a connection to another host over the network and access it (over SMB or other protocols), if only the correct file path is accessed.

When first realizing this, the curl team tried to filter out such attempts in order to protect applications from inadvertent probes of, for example, internal networks. This resulted in CVE-2019-15601 and the associated security fix.

However, we’ve since been made aware of the fact that the previous fix was far from adequate as there are several other ways to accomplish more or less the same thing: accessing a remote host over the network instead of the local file system.

The conclusion we have come to is that this is a weakness or feature in the Windows operating system itself, that we as an application and library cannot protect users against. It would just be a whack-a-mole race we don’t want to participate in. There are too many ways to do it and there’s no knob we can use to turn off the practice.

We no longer consider this to be a curl security flaw!

If you use curl or libcurl on Windows (any version), disable the use of the FILE protocol in curl or be prepared that accesses to a range of “magic paths” will potentially make your system try to access other hosts on your network. curl cannot protect you against this.

We have updated relevant curl and libcurl documentation to make users on Windows aware of what using FILE:// URLs can trigger (this commit) and posted a warning notice on the curl-library mailing list.

Previous security advisory

This was previously considered a curl security problem, as reported in CVE-2019-15601. We no longer consider that a security flaw and have updated that web page with information matching our new findings. I don’t expect any other CVE database to update since there’s no established mechanism for updating CVEs!


Many thanks to Tim Sedlmeyer who highlighted the extent of this issue for us.

The Rust Programming Language Blogdocs.rs now allows you to choose your build targets

Recently, docs.rs added a feature that allows crates to opt-out of building on all targets. If you don't need to build on all targets, you can enable this feature to reduce your build times.

What does the feature do?

By default, docs.rs builds all crates published to crates.io for every tier one target. However, most crates have the same content on all targets. Of the platform-dependent crates, almost all target a single platform, and do not need to be built on other targets. For example, winapi only has documentation on the x86_64-pc-windows-msvc and i686-pc-windows-msvc targets, and is blank on all others.

This feature allows you to request building only on specific targets. For example, winapi could opt into only building windows targets by putting the following in its Cargo.toml:

[package.metadata.docs.rs]
# This also sets the default target to `x86_64-pc-windows-msvc`
targets = ["x86_64-pc-windows-msvc", "i686-pc-windows-msvc"]

If you only need a single target, it's even simpler:

[package.metadata.docs.rs]
# This sets the default target to `x86_64-unknown-linux-gnu`
# and only builds that target
targets = ["x86_64-unknown-linux-gnu"]

See the docs.rs documentation for more details about how to opt-in.

How does this help my crate?

Instead of building for every tier-one target, you can build for only a single target, reducing your documentation build times by a factor of 6. This can especially help large crates or projects with many crates that take several hours to document.

How does this help docs.rs?

Building all crates from crates.io can take a long time! Building fewer targets will allow us to reduce wait times for every crate. Additionally, this will decrease the growth of our storage costs, improving the sustainability of the project.

Possible future changes

We're considering turning this on by default in the future; i.e. only building for one target unless multiple targets are specifically requested. However, we don't want to break anyone's documentation, so we're making this feature opt-in while we decide the migration strategy.

This change will also make it easier for docs.rs to build for targets that are not tier one, such as embedded targets.

How can I learn more?

You can learn more about the change in the issue proposing it and the PR with the implementation. Details on building non-tier-one targets are also available in the issue requesting the feature.

More information on targets and what it means to be a tier-one target is available in the platform support page.

Julien VehentVideo-conferencing the right way

I work from home. I have been doing so for the last four years, ever since I joined Mozilla. Some people dislike it, but it suits me well: I get the calm, focused, relaxing environment needed to work on complex problems all in the comfort of my home.

Even given the opportunity, I probably wouldn't go back to working in an office. For the kind of work that I do, quiet time is more important than high bandwidth human interaction.

Yet, being able to talk to my colleagues and exchange ideas or solve problems is critical to being productive. That's where the video-conferencing bit comes in. At Mozilla, we use Zoom primarily, sometimes Hangouts and more rarely Skype. We spend hours every week talking to each other via webcams and microphones, so it's important to do it well.

Having a good video setup is probably the most important and yet least regarded aspect of working remotely. When you start at Mozilla, you're given a laptop and a Zoom account. No one teaches you how to use it. Should I have an external webcam or use the one on my laptop? Do I need headphones, earbuds, or a headset with a microphone? What kind of bandwidth does it use? These things are important to good telepresence, yet most of us only learn them after months of remote work.

When your video setup is the main interface between you and the rest of your team, spending a bit of time doing it right is far from wasted. The difference between a good microphone and a shitty little one, or between a quiet room and taking calls from the local coffee shop, influences how much your colleagues will enjoy working with you. I'm a lot more eager to jump on a call with someone I know has good audio and video than with someone who will drag me into 45 minutes of ambient noise and coughing into his microphone.

This is a list of tips and things that you should care about, for yourself, and for your coworkers. They will help you build a decent setup with no to minimal investment.

The place

It may seem obvious, but you shouldn't take calls from a noisy place. Airports, coffee shops, public libraries, etc. are all horribly noisy environments. You may enjoy working from those places, but your interlocutors will suffer from all the noise. Nowadays, I refuse to take calls and cut meetings short when people try to force me into listening to their surroundings. Be respectful of others and take meetings from a quiet space.

The bandwidth

Despite what ISPs are telling you, no one needs 300Mbps of upstream bandwidth. Take a look at the graph below. It measures the egress point of my gateway. The two yellow spikes are video meetings. They don't even reach 1Mbps! In the middle of the second one, there's a short spike at 2Mbps when I set Vidyo to send my stream at 1080p, but shortly reverted because that software is broken and the faces of my coworkers disappeared. Still, you get the point: 2Mbps is the very maximum you'll need for others to see you, and about the same amount is needed to download their streams.

You do want to be careful about ping: latency can increase up to 200ms without issue, but even 5% packet drop is enough to make your whole experience miserable. Ask Tarek what bad connectivity does to your productivity: he works from a remote part of France where bandwidth is scarce and latency is high. I dubbed him the inventor of the Tarek protocol, where you have to repeat each word twice for others to understand what you're saying. I'm joking, but the truth is that it's exhausting for everyone. Bad connectivity is tough on remote workers.

(Tarek thought it'd be worth mentioning that he tried to improve his connectivity by subscribing to a satellite connection, but ran into issues in the routing of his traffic: 700ms latency was actually worse than his broken DSL.)


Perhaps the single most important aspect of video-conferencing is the quality of your microphone and how you use it. When everyone is wearing headphones, voice quality matters a lot. It is the difference between a pleasant 1h conversation, or a frustrating one that leaves you with a headache.

Rule #1: MUTE!

Let me say that again: FREAKING MUTE ALREADY!

Video software is terrible at routing the audio of several people at the same time. This isn't the same as a meeting room, where your brain will gladly separate the voice of someone you're speaking to from the keyboard of the dude next to you. On video, everything is at the same volume, so when you start answering that email while your colleagues are speaking, you're pretty much taking over their entire conversation with keyboard noises. It's terrible, and there's nothing more annoying than having to remind people to mute every five god damn minutes. So, be a good fellow, and mute!

Rule #2: no coughing, eating, breathing, etc... It's easy enough to mute or move your microphone away from your mouth so that your colleagues don't have to hear you breathing like a marathoner who just finished the Olympics. We're going back to rule #1 here.

Now, let's talk about equipment. A lot of people neglect the value of a good microphone, but it really helps in conversations. Don't use your laptop microphone, it's crap. And so is the mic on your earbuds (yes, even the apple ones). Instead, use a headset with a microphone.

If you have a good webcam, it's somewhat ok to use the microphone that comes with it. The Logitech C920 is a popular choice. The downside of those mics is they will pick up a lot of ambient noise and make you sound distant. I don't like them, but it's an acceptable trade-off.

If you want to go all out, try one of those fancy podcast microphones, like the Blue Yeti.

You most definitely don't need that for good mic quality, but they sound really nice. Here's a recording comparing my camera's embedded mic, a Plantronics headset, a Blue Yeti and a Deity S-Mic 2 shotgun microphone.

The webcam

This part is easy because most laptops already come with a 720p webcam that provides decent video quality. I do find the Logitech renders colors and depth better than the webcam embedded in my Lenovo Carbon X1, but the difference isn't huge.

The most important part of your webcam setup should be its location. It's a bit strange to have someone talk to you without looking straight at you, but this is often what happens when people place their webcam to the side of their screen.

I've experimented a bit with this, and my favorite setup is to put the webcam right in the middle of my screen. That way, I'm always staring right at it.

It does consume a little space in the middle of my display, but with a large enough screen - I use an old 720p 35" TV - it doesn't really bother me.

Lighting and background are important parameters too. Don't bring light from behind, or your face will look dark, and don't use a messy background so people can focus on what you're saying. These factors contribute to helping others read your facial expressions, which are an important part of good communication. If you don't believe me, ask Cal Lightman ;).

Spread the word!

In many ways, we're the first generation of remote workers, and people are learning how to do it right. I believe video-conferencing is an important part of that process, and I think everyone should take a bit of time to improve their setup. Ultimately, we're all a lot more productive when communication flows easily, so spread the word, and do tell your coworkers when their setup is getting in the way of good conferencing.

Allen Wirfs-BrockJavaScript: The First 20 Years

JavaScript: The First 20 Years  by Allen Wirfs-Brock and Brendan Eich

Our HOPL paper is done and submitted to the ACM for June 2020 publication in the PACMPL (Proceedings of the ACM on Programming Languages)  and presentation at the HOPL 4 conference whenever it actually occurs. PACMPL is an open access journal so there won’t be a paywall preventing people from reading our paper.  Regardless, starting right now you can access the preprint at https://zenodo.org/record/3707007. But before you run off and start reading this 190 page “paper” I want to talk a bit about HOPL.

The History of Programming Languages Conferences

HOPL is a unique conference and the foremost conference relating to the history of programming languages.  HOPL-IV will be only the 4th HOPL. Previous HOPLs occurred in 1978, 1993, and 2007.  The History of HOPL web page  provides an overview of the conference’s history and which languages were covered at each of the three previous HOPLs.  HOPL papers can be quite long.  As the HOPL-IV call for papers says, “Because of the complex nature of the history of programming languages, there is no upper bound on the length of submitted papers—authors should strive for completeness.” HOPL papers are often authored by the original designers of an important language or individuals who have made significant contributions to the evolution of a language.

As the HOPL-IV call for papers describes, writing a HOPL paper is an arduous multi-year process. Initial submissions were due in September 2018 and reviewed by the program committee.  For papers that made it through that review, the second major review draft was due September 2019.  The final “camera ready” manuscripts were due March 13, 2020.  Along the way, each paper received extensive reviews from members of  the program  committee and each paper was closely monitored by one or more program committee “shepherds” who worked very closely with the authors. One of the challenges for most of the authors was to learn what it meant to write a history paper rather than a traditional technical paper.  Authors were encouraged  to learn to think and write  like a professional historian.

I’ve long been a fan of HOPL and have read most of the papers from the first three HOPLs.  But I’d never actually attended one.  I first heard about HOPL-IV on July 7, 2017 when I received an invitation from Guy Steele and Richard Gabriel to serve on the program committee. I immediately checked whether PC members could submit and because the answer was yes, I accepted. I knew that JavaScript needed to be included in a HOPL and that I probably was best situated to write it. But my direct experience with JS only dates to 2007 so I knew I would need Brendan Eich’s input in order to cover the early history of the language and he agreed to sign-on as coauthor.   My initial outline for the paper is dated July 20, 2017 and was titled “JavaScript: The First 25 Years” (we decided to cut it down to the first 20 years after the first round of reviews). The outline was seven pages long. I hadn’t looked at it since sometime in 2018 but looking at it today, I found it remarkably close to what is in the final paper.  I knew the paper was going to be long.  But I never thought it would end up at 190 pages.  Many thanks to Richard Gabriel for repeatedly saying “don’t worry about the length.”

There is a lot I have to say about gathering primary source materials (like a real historian) but I’m going to save that for another post in a few days. So, if you’re interested in the history of JavaScript, start reading!

Chris H-CDistributed Teams: Not Just Working From Home

Technology companies undertaking curve-flattening exercises of late have resulted in me digging up my old 2017 talk about working as, and working with, remote employees. Though all of the advice in it holds up even these three years later, surprisingly little of it seemed all that relevant to the newly-working-from-home (WFH) multitudes.

Thinking about it, I reasoned that it’s because the talk (slides are here if you want ’em) is actually more about working on a distributed team than working from home. Though it contained the usual WFH gems of “have a commute”, “connect with people”, “overcommunicate”, etc etc (things that others have explained much better than I ever will); it also spent a significant amount of its time talking about things that are only relevant if your team isn’t working in the same place.

Aspects of distributed work that are unique not to my being out of the office but to my being on a distributed team are things like timezones, cultural differences, personal schedules, presentation, watercooler chats, identity… things that you don’t have to think about or spend effort on if you work in the same place (and, not coincidentally, things I’ve written about in the past). If we’re all in Toronto you know not only that 12cm of snow fell since last night but also what that does to the city in the morning. If we’re all in Italy you know not to schedule any work in August. If we see each other all the time then I can use a picture I took of a glacier in Iceland for my avatar instead of using it as a rare opportunity to be able to show you my face.

So as much as I was hoping that all this sudden interest in WFH was going to result in a sea change in how working on a distributed team is viewed and operates, I’m coming to the conclusion that things probably will not change. Maybe we’ll get some better tools… but none that know anything about being on a distributed team (like how “working hours” aren’t always contiguous (looking at you, Google Calendar)).

At least maybe people will stop making the same seven jokes about how WFH means you’re not actually working.


Patrick ClokeMatrix Live Interview

I was interviewed for Matrix Live as part of last week’s This Week in Matrix. I talked a bit about my background and my experiences contributing to Mozilla (as part of Instantbird and Thunderbird projects) as well as what I will be working on for Synapse, the reference implementation …

Mozilla Addons BlogFriend of Add-ons: Zhengping

Please meet our newest Friend of Add-ons, Zhengping! A little more than two years ago, Zhengping decided to switch careers and become a software developer. After teaching himself the basics of web development, he started looking for real-world projects where he could hone his skills. After fixing a few frontend bugs on addons.mozilla.org (AMO), Zhengping began contributing code to the add-ons code manager, a new tool to help keep add-on users safe.

In the following months, he tackled increasingly difficult issues, like using TypeScript with React to create complex UI with precision and efficiency. His contributions helped the add-ons team complete the first iteration of the code manager, and he continued to provide important patches based on feedback from add-on reviewers.

“The comments from staff members in code review helped me deepen my understanding of what is good code,” Zhengping notes. “People on the add-ons team, staff and contributors, are very friendly and willing to help,” he says. “It is a wonderful experience to work with them.”

When he isn’t coding, Zhengping enjoys skiing.

Thank you so much for all of your wonderful contributions to the Firefox add-ons community, Zhengping!

If you are interested in getting involved with the add-ons community, please take a look at our current contribution opportunities.

The post Friend of Add-ons: Zhengping appeared first on Mozilla Add-ons Blog.

Botond BalloTrip Report: C++ Standards Meeting in Prague, February 2020

Summary / TL;DR

Project | What’s in it? | Status
C++20 | See Reddit report | Technically complete
Library Fundamentals TS v3 | Library utilities incubating for standardization | Under development
Concepts | Constrained templates | Shipping as part of C++20
Parallelism TS v2 | Task blocks, library vector types and algorithms, and more | Published!
Executors | Abstraction for where/how code runs in a concurrent context | Targeting C++23
Concurrency TS v2 | Concurrency-related infrastructure (e.g. fibers) and data structures | Under active development
Networking TS | Sockets library based on Boost.ASIO | Published! Not in C++20.
Ranges | Range-based algorithms and views | Shipping as part of C++20
Coroutines | Resumable functions (generators, tasks, etc.) | Shipping as part of C++20
Modules | A component system to supersede the textual header file inclusion model | Shipping as part of C++20
Numbers TS | Various numerical facilities | Under active development
C++ Ecosystem TR | Guidance for build systems and other tools for dealing with Modules | Under active development
Contracts | Preconditions, postconditions, and assertions | Under active development
Pattern matching | A match-like facility for C++ | Under active development
Reflection TS | Static code reflection mechanisms | Publication imminent
Reflection v2 | A value-based constexpr formulation of the Reflection TS facilities, along with more advanced features such as code injection | Under active development

A few links in this blog post may not resolve until the committee’s post-meeting mailing is published (expected any day). If you encounter such a link, please check back in a few days.


A few weeks ago I attended a meeting of the ISO C++ Standards Committee (also known as WG21) in Prague, Czech Republic. This was the first committee meeting in 2020; you can find my reports on 2019’s meetings here (November 2019, Belfast), here (July 2019, Cologne), and here (February 2019, Kona), and previous ones linked from those. These reports, particularly the Belfast one, provide useful context for this post.

This meeting once again broke attendance records, with about 250 people present. It also broke the record for the number of national standards bodies being physically represented at a meeting, with reps from Austria and Israel joining us for the first time.

The Prague meeting wrapped up the C++20 standardization cycle as far as technical work is concerned. The highest-priority work item for all relevant subgroups was to continue addressing any remaining comments on the C++20 Committee Draft, a feature-complete C++20 draft that was circulated for feedback in July 2019 and received several hundred comments from national standards bodies (“NB comments”). Many comments had been addressed already at the previous meeting in Belfast, and the committee dealt with the remaining ones at this meeting.

The next step procedurally is for the committee to put out a revised draft called the Draft International Standard (DIS) which includes the resolutions of any NB comments. This draft, which was approved at the end of the meeting, is a technically complete draft of C++20. It will undergo a further ballot by the national bodies, which is widely expected to pass, and the official standard revision will be published by the end of the year. That will make C++20 the third standard revision to ship on time as per the committee’s 3-year release schedule.

I’m happy to report that once again, no major features were pulled from C++20 as part of the comment resolution process, so C++20 will go ahead and ship with all the major features (including modules, concepts, coroutines, and library goodies like ranges, date handling and text formatting) that were present in the Committee Draft. Thanks to this complement of important and long-anticipated features, C++20 is widely viewed by the community as the language’s most significant release since C++11.

Subgroups which had completed processing of NB comments for the week (which was most study groups and the Evolution groups for most of the week) proceeded to process post-C++20 proposals, of which there are plenty in front of the committee.

As with my blog post about the previous meeting, this one will also focus on proceedings in the Evolution Working Group Incubator (EWG-I) which I co-chaired at this meeting (shout-out to my co-chair Erich Keane who was super helpful and helped keep things running smoothly), as well as drawing attention to a few highlights from the Evolution Working Group and the Reflection Study Group. For a more comprehensive list of what features are in C++20, what NB comment resolutions resulted in notable changes to C++20 at this meeting, and which papers each subgroup looked at, I will refer you to the excellent collaborative Reddit trip report that fellow committee members have prepared.

As a reminder, since the past few meetings the committee has been tracking its proposals in GitHub. For convenience, I will also be linking to proposals’ GitHub issues (rather than the papers directly) from this post. I hope as readers you find this useful, as the issues contain useful information about a proposal’s current status; the actual papers are just one further click away. (And shout-out to @m_ou_se for maintaining wg21.link which makes it really easy for me to do this.)

Evolution Working Group Incubator (EWG-I)

EWG-I is a relatively new subgroup whose purpose is to give feedback on and polish proposals that include core language changes — particularly ones that are not in the purview of any of the domain-specific subgroups, such as SG2 (Modules), SG7 (Reflection), etc. — before they proceed to the Evolution Working Group (EWG) for design review.

EWG-I met for three days at this meeting, and reviewed around 22 proposals (all post-C++20 material).

In this section, I’ll go through the proposals that were reviewed, categorized by the review’s outcome.

Forwarded to EWG

The following proposals were considered ready to progress to EWG in their current state:

  • A type trait to detect narrowing conversions. This is mainly a library proposal, but core language review was requested to make sure the specification doesn’t paint us into a corner in terms of future changes we might make to the definition of narrowing conversion.
  • Guaranteed copy elision for named return objects. This codifies a set of scenarios where all implementations were already eliding a copy, thereby making such code well-formed even for types that are not copyable or movable.
  • Freestanding language: optional ::operator new. This is one piece of a larger effort to make some language and library facilities optional in environments that may not be able to support them (e.g. embedded environments or kernel drivers). The paper was favourably reviewed by both EWG-I, and later in the week, by EWG itself.

Forwarded to EWG with modifications

For the following proposals, EWG-I suggested specific revisions, or adding discussion of certain topics, but felt that an additional round of EWG-I review would not be helpful, and the revised paper should go directly to EWG. The revisions requested were typically minor, sometimes as small as adding a feature test macro:

  • Language support for class layout control. This introduces a mechanism to control the order in which class data members are laid out in memory. This was previously reviewed by the Reflection Study Group which recommended allowing the order to be specified via a library function implemented using reflection facilities. However, as such reflection facilities are still a number of years away, EWG-I felt there was room for a small number of ordering strategies specified in core wording, and forwarded the paper to EWG with one initial strategy, to yield the smallest structure size.
  • Object relocation in terms of move and destroy. This aims to address a long-standing performance problem in the language caused by the fact that a move must leave an object in a valid state, and a moved-from object still needs to be destroyed. There is another proposal in this space but EWG-I felt they are different enough that they should advance independently.
  • Generalized pack declaration and usage. This proposal significantly enhances the language’s ability to work with variadic parameter packs and tuple-like types. It was reviewed previously by EWG-I, and in this update was reworked to address the feedback from that review. The group felt the proposal was thorough and mature and largely ready to progress to EWG, although there was one outstanding issue of ambiguity that remained to be resolved.
  • Types with array-like object representations. This provides a mechanism for enforcing that a structure containing several fields of the same type is laid out in memory exactly the same as a corresponding array type, and the two types can be freely punned.
  • C++ identifier syntax using Unicode standard Annex 31. While identifiers in C++ source code can now contain Unicode characters, we do want to maintain some sanity, and so this proposal restricts the set of characters that can appear in identifiers to certain categories (excluding, for example, “invisible” characters).
  • Member templates for local classes. Since C++14 introduced generic lambdas (which are syntactic sugar for objects of a local class type defined on the fly, with a templated member call operator), the restriction against explicitly-defined local classes having member templates has been an artificial one, and this proposal lifts it.
  • Enable variable template template parameters. Another fairly gratuitous restriction; EWG-I forwarded it, with a suggestion to add additional motivating examples to the paper.

Forwarded to another subgroup

The following proposals were forwarded to a domain-specific subgroup:

  • In-source mechanism to identify importable headers. Headers which are sufficiently modular can be imported into a module as if they were modules themselves (this feature is called header units, and is a mechanism for incrementally transitioning large codebases to modules). Such headers are currently identified using some out-of-band mechanism (such as build system metadata). This proposal aims to allow annotating the headers as such in their source itself. EWG-I liked the idea but felt it was in the purview of the Tooling Study Group (SG15).
  • Stackable, thread local, signal guards. This aims to bring safer and more modern signal handling facilities to C++. EWG-I reviewed the proposal favourably, and sent it onward to the Library Evolution Incubator and the Concurrency Study Group (the latter saw the proposal later in the week and provided additional technical feedback related to concurrency).

Feedback given

For the following proposals, EWG-I gave the author feedback, but did not consider it ready to forward to another subgroup. A revised proposal would come back to EWG-I.

  • move = bitcopies. This is the other paper in the object relocation space, aiming for a more limited solution which can hopefully gain consensus sooner. The paper was reviewed favourably and will return after revisions.
  • Just-in-time compilation. Many attendees indicated this is something they’d find useful in their application domains, and several aspects of the design were discussed. The paper was also seen by the Reflection Study Group earlier in the week.
  • Universal template parameters. This allows parameterizing a template over the kind of its template parameters (or, put another way, having template parameters of “wildcard” kind which can match non-type, type, or template template arguments). EWG-I felt the idea was useful but some of the details need to be refined. The proposed syntax is typename auto Param.
  • A pipeline-rewrite operator. This proposes to automatically rewrite a |> f(b) as f(a, b), thereby allowing a sequence of compositions of operations to be expressed in a more “linear” way in code (e.g. x |> f(y) |> g(z) instead of g(f(x, y), z)). It partly brings to mind previous attempts at a unified function call syntax, but avoids many of the issues with that by using a new syntax rather than trying to make the existing member-call (dot) syntax work this way. Like “spaceship” (<=>), this new operator ought to have a fun name, so it’s dubbed the “pizza” operator (too bad calling it the “slice” operator would be misleading).
  • Partially mutable lambda captures. This proposal seeks to provide finer-grained control over which of a lambda’s captured data members are mutable (currently, they’re all const by default, or you can make them all mutable by adding a trailing mutable to the lambda declarator). EWG-I suggested expanding the paper’s approach to allow either mutable or const on any individual capture (the latter useful if combined with a trailing mutable), as well as to explore other integrations such as a mutable capture-default.

No consensus in current form

The following proposals had no consensus to continue to progress in their current form. However, a future revision may still be seen by EWG-I if additional motivation is provided or new information comes to light. In some cases, such as with Epochs, there was a strong desire to solve the problems the proposal aims to solve, and proposals taking new approaches to tackling these problems would certainly be welcome.

  • Narrowing and widening conversions. This proposal aims to extend the notion of narrowing vs. widening conversions to user-defined conversions, and tweak the overload resolution rules to avoid ambiguity in more cases by preferring widening conversions to narrowing ones (and among widening conversions, prefer the “least widening” one). EWG-I felt that a change as scary as touching the overload resolution rules needed more motivation.
  • Improve rules of standard layout. There wasn’t really encouragement of any specific direction, but there was a recognition that “standard layout” serves multiple purposes some of which (e.g. which types are usable in offsetof) could potentially be split out.
  • Epochs: a backward-compatible language evolution mechanism. As at the last meeting, this proposal — inspired heavily by Rust’s editions — attracted the largest crowds and garnered quite a lot of discussion. Overall, the room felt that technical concerns about the handling of templates and the complexity of having to define how features interact across different epochs made the proposal as-is not viable. However, as mentioned, there was strong interest in solving the underlying problems, so I wouldn’t be terribly surprised to see a different formulation of a feature along these lines come back at some point.
  • Namespace templates. EWG-I felt the motivation was not sufficiently compelling to justify the technical complexity this proposal would entail.
  • Using ? : to reduce the scope of constexpr if. This proposes to allow ? : in type expressions, as in e.g. using X = cond ? Foo : Bar;. EWG-I didn’t really find the motivation compelling enough to encourage further work on the proposal.

Thoughts on the role of EWG-I

I wrote in my previous post about EWG-I being a fairly permissive group that lets a lot of proposals sail through it. I feel like at this meeting the group was a more effective gatekeeper. However, we did have low attendance at times, which impacted the quantity and quality of feedback that some proposals received. If you’re interested in core language evolution and attend meetings, consider sitting in EWG-I while it’s running — it’s a chance to provide input to proposals at an earlier stage than most other groups!

Other Highlights

Here are some highlights of what happened in some of the other subgroups, with a focus on Evolution and Reflection (the rooms I sat in when I wasn’t in EWG-I):

Planning and Organization

As we complete C++20 and look ahead to C++23, the committee has been taking the opportunity to refine its processes, and tackle the next standards cycle with a greater level of planning and organization than ever before. A few papers touched on these topics:

  • To boldly suggest an overall plan for C++23. This is a proposal for what major topics the committee should focus on for C++23. A previous version of this paper contained a similar plan for C++20, but one thing that’s new is that the latest version also contains guidance for subgroups for how to prioritize proposals procedurally to achieve the policy objectives laid out in the paper.
  • C++ IS Schedule. This formalizes the committee’s 3-year release schedule in paper form, including what milestones we aim for at various parts of the cycle (e.g. a deadline to merge TS’es, a deadline to release a Committee Draft, things like that).
  • Direction for ISO C++. Authored by the committee’s Direction Group, this sets out high-level goals and direction for the language, looking forward not just to the next standard release but the language’s longer-term evolution.
  • Process proposal: double-check Evolutionary material via a Tentatively Ready status. This is a procedural tweak where proposals approved by Evolution do not proceed to Core for wording review immediately, but rather after one meeting of delay. The intention is to give committee members with an interest in the proposal’s topic, but who were perhaps unable to attend its discussions in Evolution (or were unaware of the proposal altogether), a chance to raise objections or chime in with other design-level feedback before the proposal graduates to Core; keep in mind that with committee meetings having a growing number of parallel tracks (nine at this meeting), it’s hard to stay on top of everything.

ABI Stability

In one of the week’s most notable (and talked-about) sessions, Evolution and Library Evolution met jointly to discuss a paper about C++’s approach to ABI stability going forward.

The main issue that has precipitated this discussion is the fact that the Library Evolution group has had to reject multiple proposals for improvements to existing library facilities over the past several years, because they would be ABI-breaking, and implementers have been very reluctant to implement ABI-breaking changes (and when they did, like with std::string in C++11, the C++ ecosystem’s experience with the break hasn’t been great). The paper has a list of such rejected improvements, but one example is not being able to change unordered_map to take advantage of more efficient hashing algorithms.

The paper argues that rejections of such library improvements demonstrate that the C++ community faces a tradeoff between ABI stability and performance: if we continue to enforce a requirement that C++ standard library facilities remain ABI-compatible with their older versions, as time goes by the performance of these facilities will lag more and more behind the state of the art.

There was a lengthy discussion of this issue, with some polls at the end, which were far from unanimous and in some cases not very conclusive, but the main sentiments were:

  • We would like to break the ABI “at some point”, but not “now” (not for C++23, which would be our earliest opportunity). It is unclear what the path is to getting to a place where we would be willing to break the ABI.
  • We are more willing to undertake a partial ABI break than a complete one. (In this context, a partial break means some facilities may undergo ABI changes on an as-needed basis, but if you don’t use such facilities at ABI boundaries, you can continue to interlink translation units compiled with the old and new versions. The downside is, if you do use such facilities at ABI boundaries, the consequence is usually runtime misbehaviour. A complete break would mean all attempts to interlink TUs from different versions are prevented at link time.)
  • Being ABI-breaking should not cause a library proposal to be automatically rejected. We should consider ABI-breaking changes on a case-by-case basis. Some implementers mentioned there may be opportunities to apply implementation tricks to avoid or mitigate the effects of some breaks.

There was also a suggestion that we could add new language facilities to make it easier to manage the evolution of library facilities — for example, to make it easier to work with two different versions of a class (possibly with different mangled names under the hood) in the same codebase. We may see some proposals along these lines being brought forward in the future.

Reflection and Metaprogramming

The Reflection Study Group (SG7) met for one day. The most contentious item on the agenda concerned exploration of a potential new metaprogramming model inspired by the Circle programming language, but I’ll first mention some of the other papers that were reviewed:

  • Just-in-time compilation. This is not really a reflection proposal, but it needs a mechanism to create a descriptor for a template instantiation to JIT-compile at runtime, and there is potential to reuse reflection facilities there. SG7 was in favour of reusing reflection APIs for this rather than creating something new.
  • Reflection-based lazy evaluation. This was a discussion paper that demonstrated how reflection facilities could be leveraged in a lazy evaluation feature. While this is not a proposal yet, SG7 did affirm that the group should not rule out the possibility of reflecting on expressions (which is not yet a part of any current reflection proposal), since that can enable interesting use cases like this one.
  • Constraint refinement for special-cased functions. This paper aims to fix some issues with parameter constraints, a feature proposed as part of a recent reflection proposal, but the authors withdrew it because they’ve since found a better approach.
  • Tweaks to the design of source code fragments. This is another enhancement to the reflection proposal mentioned above, related to source code injection. SG7 encouraged further work in this direction.
  • Using ? : to reduce the scope of constexpr if. Like EWG-I, SG7 did not find this proposal to be sufficiently motivating.
  • Function parameter constraints are fragile. This paper was mooted by the withdrawal of the constraint-refinement paper mentioned above (and of parameter constraints more generally) and was not discussed.

Now onto the Circle discussion. Circle is a set of extensions to C++ that allow for arbitrary code to run at compile time by actually invoking (as opposed to interpreting or emulating) that code at compile time, including having the compiler call into arbitrary third-party libraries.

Circle has come up in the context of considering C++’s approach to metaprogramming going forward. For the past few years, the committee has been trying to make metaprogramming be more accessible by making it more like regular programming, hence the shift from template metaprogramming to constexpr-based metaprogramming, and the continuing increase of the set of language constructs allowed in constexpr code (“constexpr all the things”).

However, this road has not been without bumps. A recent paper argues that constexpr programming is still quite a bit further from regular programming than we’d like, due to the variety of restrictions on constexpr code, and the gotchas / limitations of facilities like std::is_constant_evaluated and promotion of dynamically allocated storage to runtime. The paper argues that if C++ were to adopt Circle’s metaprogramming model, then compile-time code could look exactly the same as runtime code, thereby making it more accessible and facilitating more code reuse.

A response paper analyzes the Circle metaprogramming model and argues that it is not a good fit for C++.

SG7 had an extensive discussion of these two papers. The main concerns that were brought up were the security implications of allowing the compiler to execute arbitrary C++ code at compile time, and the fact that running (as opposed to interpreting) C++ code at compile time presents a challenge for cross-compilation scenarios (where e.g. sizeof(int) may be different on the host than the target; existing compile-time programming facilities interpret code as if it were running on the target, using the target’s sizes for types and such).

Ultimately, SG7 voted against allowing arbitrary C++ code to run at compile-time, and thus against a wholesale adoption of Circle’s metaprogramming model. It was observed that there may be some aspects of Circle’s model that would still be useful to adopt into C++, such as its model for handling state and side effects, and its syntactic convenience.

Evolution Highlights

Some proposals which went through EWG-I earlier in the week were also reviewed by EWG — in this case, all favourably:

Here are some other highlights from EWG this week:

  • Pattern matching continues to be under active development and is a priority item for Evolution as per the overall plan. Notable topics of discussion this week included whether pattern matching should be allowed in both statement and expression contexts and, if allowed in expression contexts, whether non-exhaustiveness should be a compile-time error or undefined / other runtime behaviour (and if it’s a compile-time error, in what cases we can expect the compiler to prove exhaustiveness).
  • Deducing *this has progressed to a stage where the design is pretty mature and EWG asked for an implementation prior to approving it.
  • auto(x): decay-copy in the language. EWG liked auto(x), and asked the Library Working Group to give an opinion on how useful it would be inside library implementations. EWG was not convinced of the usefulness of decltype(auto)(x) as a shorthand for forwarding.
  • fiber_context – fibers without scheduler. EWG was in favour of having this proposal target a Technical Specification (TS), but wanted the proposal to contain a list of specific questions that they’d like answered through implementation and use experience with the TS. Several candidate items for that list came up during the discussion.
  • if consteval, a language facility that aims to address some of the gotchas with std::is_constant_evaluated, will not be pursued in its current form for C++23, but other ideas in this space may be.
  • Top-level is constant evaluated is one such other idea, aiming as it does to replace std::is_constant_evaluated with a way to provide two function bodies for a function: a constexpr one and a runtime one. EWG felt this was a less general solution than if constexpr and not worth pursuing.
  • A proposal for a new study group for safety-critical applications. EWG encouraged collaboration among people interested in this topic in the wider community. A decision of actually launching a Study Group has been deferred until we see that there is a critical mass of such interest.
  • =delete’ing variable templates. EWG encouraged further work on a more general solution that could also apply to other things besides variable templates.

Study Group Highlights

While I wasn’t able to attend other Study Groups besides Reflection, a lot of interesting topics were discussed in other Study Groups. Here is a very brief mention of some highlights:

  • In the Concurrency Study Group, Executors have design approval on a long-in-the-works consensus design, and will proceed to wording review at the next meeting, with a view to merging them into the C++23 working draft in the early 2021 timeframe, and thereby unblocking important dependencies like Networking.
  • In the Networking Study Group, a proposal to introduce lower-level I/O abstractions than what are currently in the Networking TS was reviewed. Further exploration was encouraged, without blocking the existing Networking TS facilities.
  • In the Transactional Memory Study Group, a “Transactional Memory Lite” proposal is being reviewed with an aim to produce a TS based on it.
  • The Numerics Study Group is collaborating with the Low Latency and Machine Learning Study Groups on library proposals related to linear algebra and unit systems.
  • The Undefined and Unspecified Behaviour Study Group is continuing its collaboration with MISRA and the Programming Language Vulnerabilities Working Group. A revised proposal for a mechanism to mitigate Spectre v1 in C++ code was also reviewed.
  • The I/O Study Group reviewed feedback papers concerning audio and graphics. One outcome of the graphics discussion is that the group felt the existing graphics proposal should be more explicit about what use cases are in and out of scope. One interesting observation that was made is that learning use cases that just require a simple canvas-like API could be met by using the actual web Canvas API via WebAssembly.
  • The Low Latency Study Group reviewed a research paper about low-cost deterministic exceptions in C++, among many other things.
  • The Machine Learning Study Group reviewed a proposal for language or library support for automatic differentiation.
  • The Unicode Study Group decided that std::regex needs to be deprecated due to severe performance problems that are unfixable due to ABI constraints.
  • The Tooling Study Group reviewed two papers concerning the debuggability of C++20 coroutines, as well as several Modules-related papers. There were also suggestions that the topic of profiling may come up, e.g. in the form of extensions to std::thread motivated by profiling.
  • The Contracts Study Group reviewed a paper summarizing the issues that were controversial during past discussions (which led to Contracts slipping from C++20), and a paper attempting to clarify the distinction between assertions and assumptions.

Next Meeting

The next meeting of the Committee will (probably) be in Varna, Bulgaria, the week of June 1st, 2020.


As always, this was a busy and productive meeting. The headline accomplishment is completing outstanding bugfixes for C++20 and approving the C++20 Draft International Standard, which means C++20 is technically complete and is expected to be officially published by the end of the year. There was also good progress made on post-C++20 material such as pattern matching and reflection, and important discussions about larger directional topics in the community such as ABI stability.

There is a lot I didn’t cover in this post; if you’re curious about something I didn’t mention, please feel free to ask in a comment.

Other Trip Reports

In addition to the collaborative Reddit report which I linked to earlier, here are some other trip reports of the Prague meeting that you could check out:

Spidermonkey Development BlogNewsletter 3 (Firefox 74-75)


🏆 New contributors

🎁 New features

  • Yulia implemented the optional chaining (?.) operator (Firefox 74)
  • André implemented public static class fields (Firefox 75)
  • André implemented the Intl.Locale proposal (Firefox 75)

🐒 SmooshMonkey

The previous newsletter introduced Visage, a new JavaScript frontend we’re working on that’s written in Rust. Visage has since been renamed to SmooshMonkey, a name that’s known and well accepted by the JavaScript community (#SmooshGate). After a dinner and discussions with project members, the rename was made official with a talk at the All-Hands.

The team is making good progress:

  • SmooshMonkey has been integrated in Gecko behind a configure flag and a runtime flag.
  • Passes 100% of SpiderMonkey tests (falling back on SpiderMonkey’s current parser for non-implemented features).
  • Added stats about the project using Github CI.
  • The bytecode emitter has been improved to prevent generating bytecode which might have undefined behaviour.
  • There’s a new parser generator to support context-dependent aspects of the JavaScript grammar instead of exploding the number of states of the equivalent context-free grammar.

❇️ Stencil

Progress on Project Stencil is continuing. Huge thanks to André for helping knock three blocking bugs off in quick succession!

Matthew landed many patches to clean up our compilation management data structures. He also added a SourceExtent structure to store source information and changed the frontend to always defer supported GC allocations.

Kannan is working on removing GC atom allocation from the frontend. The frontend uses atoms in many places and to make the frontend GC-free we need a different strategy for that.

Caroline is working at cleaning up the flags used throughout the frontend, to unify BaseScript::ImmutableFlags, CompileOptions, and FunctionBox flags into one representation.

📚 JSScript/LazyScript unification

The JSScript/LazyScript unification is nearing completion. Ted has landed patches for Firefox 75 to use the same GC TraceKind for LazyScript and JSScript and after that was able to merge the GC arenas. The is-lazy state has been moved from JSFunction to BaseScript.

The next big step is delazifying/relazifying scripts in place so that we never have to keep both a JSScript and a LazyScript for a function.

🛸 WarpBuilder

Ion, our optimizing JIT, currently relies on a global Type Inference (TI) mechanism. Ion and TI have a number of shortcomings so we’re experimenting with a much simpler MIR builder for Ion that’s based on Baseline ICs (CacheIR) instead of TI. If this works out it will let us delete some of our most complicated code, allow us to do more work off-thread, and result in memory usage reductions and performance improvements across the engine.

The past weeks Jan landed patches preparing for this and added a very primitive WarpBuilder implementation that’s able to build MIR off-thread. He’s now adding support for more bytecode instructions.

⏩ Regular expression engine update

Iain is working on upstreaming some changes to v8 to improve case-insensitive match support. He also started posting patches to implement shim definitions to make the code work in SpiderMonkey.

🏎 JIT optimizations for classes and spread calls

André added Ion support for derived class constructors. He then made it possible for Ion to inline class constructors. He also added Ion support for spread-new and spread-super calls.

✏️ Miscellaneous


  • Ryan added support for using Cranelift when reference types are enabled. Reference types allow WebAssembly to pass JavaScript values as function arguments and return values, to store them in local variables, and to load and store them in tables. Cranelift can now emit annotations that inform the JavaScript garbage collector of value locations, so that the GC can trace and relocate object pointers.
  • Andy from Igalia added support for functions that return multiple values to the WebAssembly baseline compiler; when the corresponding implementation lands in the optimizing compiler (Firefox 76), the feature will finally be able to ride the train out of Nightly.
  • Work is ongoing to replace Cranelift’s instruction selection mechanism as well as register allocation. We can now compile basic wasm programs with either Wasmtime or Spidermonkey using Cranelift’s new pipeline.
  • We also bid a fond farewell to the #wasm IRC channel; you can now find the SpiderMonkey WebAssembly team over on chat.mozilla.org’s WebAssembly room.

The Talospace ProjectFirefox 74 on POWER

So far another uneventful release on ppc64le; I'm typing this blog post in Fx74. Most of what's new in this release is under the hood, and there are no OpenPOWER specific changes (I need to sit down with some of my other VMX/VSX patches and prep them for upstream). The working debug and optimized .mozconfigs are unchanged from Firefox 67.

The Rust Programming Language BlogAnnouncing Rust 1.42.0

The Rust team is happy to announce a new version of Rust, 1.42.0. Rust is a programming language that is empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, getting Rust 1.42.0 is as easy as:

rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.42.0 on GitHub.

What's in 1.42.0 stable

The highlights of Rust 1.42.0 include: more useful panic messages when unwrapping, subslice patterns, the deprecation of Error::description, and more. See the detailed release notes to learn about other changes not covered by this post.

Useful line numbers in Option and Result panic messages

In Rust 1.41.1, calling unwrap() on an Option::None value would produce an error message looking something like this:

thread 'main' panicked at 'called `Option::unwrap()` on a `None` value', /.../src/libcore/macros/mod.rs:15:40

Similarly, the line numbers in the panic messages generated by unwrap_err, expect, and expect_err, and the corresponding methods on the Result type, also refer to core internals.

In Rust 1.42.0, all eight of these functions produce panic messages that provide the line number where they were invoked. The new error messages look something like this:

thread 'main' panicked at 'called `Option::unwrap()` on a `None` value', src/main.rs:2:5

This means that the invalid call to unwrap was on line 2 of src/main.rs.

This behavior is made possible by an annotation, #[track_caller]. This annotation is not yet available to use in stable Rust; if you are interested in using it in your own code, you can follow its progress by watching this tracking issue.
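
As a quick illustration of the behavior described above (the file name and variable are just for this example, not from the release notes), catching the panic from an unwrap on None on Rust 1.42 shows the panic fires with the caller's location rather than pointing into libcore:

```rust
use std::panic;

fn main() {
    // Unwrapping a `None` panics; as of Rust 1.42 the panic message
    // names the calling file and line instead of libcore internals.
    let result = panic::catch_unwind(|| {
        let value: Option<i32> = None;
        value.unwrap() // the reported panic location is this line
    });
    assert!(result.is_err());
}
```

Running this prints the new-style panic message to stderr while the program itself continues past the caught panic.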

Subslice patterns

In Rust 1.26, we stabilized "slice patterns," which let you match on slices. They looked like this:

fn foo(words: &[&str]) {
    match words {
        [] => println!("empty slice!"),
        [one] => println!("one element: {:?}", one),
        [one, two] => println!("two elements: {:?} {:?}", one, two),
        _ => println!("I'm not sure how many elements!"),
    }
}

This allowed you to match on slices, but was fairly limited. You had to choose the exact sizes you wished to support, and had to have a catch-all arm for sizes you didn't want to support.

In Rust 1.42, we have expanded support for matching on parts of a slice:

fn foo(words: &[&str]) {
    match words {
        ["Hello", "World", "!", ..] => println!("Hello World!"),
        ["Foo", "Bar", ..] => println!("Baz"),
        rest => println!("{:?}", rest),
    }
}

The .. is called a "rest pattern," because it matches the rest of the slice. The above example uses the rest pattern at the end of a slice, but you can also use it in other ways:

fn foo(words: &[&str]) {
    match words {
        // Ignore everything but the last element, which must be "!".
        [.., "!"] => println!("!!!"),

        // `start` is a slice of everything except the last element, which must be "z".
        [start @ .., "z"] => println!("starts with: {:?}", start),

        // `end` is a slice of everything but the first element, which must be "a".
        ["a", end @ ..] => println!("ends with: {:?}", end),

        rest => println!("{:?}", rest),
    }
}

If you're interested in learning more, we published a post on the Inside Rust blog discussing these changes as well as more improvements to pattern matching that we may bring to stable in the future! You can also read more about slice patterns in Thomas Hartmann's post.


matches!

This release of Rust stabilizes a new macro, matches!. This macro accepts an expression and a pattern, and returns true if the pattern matches the expression. In other words:

// Using a match expression:
match self.partial_cmp(other) {
    Some(Less) => true,
    _ => false,
}

// Using the `matches!` macro:
matches!(self.partial_cmp(other), Some(Less))

You can also use features like | patterns and if guards:

let foo = 'f';
assert!(matches!(foo, 'A'..='Z' | 'a'..='z'));

let bar = Some(4);
assert!(matches!(bar, Some(x) if x > 2));

use proc_macro::TokenStream; now works

In Rust 2018, we removed the need for extern crate. But procedural macros were a bit special, and so when you were writing a procedural macro, you still needed to say extern crate proc_macro;.

In this release, if you are using Cargo, you no longer need this line when working with the 2018 edition; you can use use like any other crate. Given that most projects will already have a line similar to use proc_macro::TokenStream;, this change will mean that you can delete the extern crate proc_macro; line and your code will still work. This change is small, but brings procedural macros closer to regular code.


Stabilized APIs

Other changes

There are other changes in the Rust 1.42.0 release: check out what changed in Rust, Cargo, and Clippy.

Compatibility Notes

We have two notable compatibility notes this release: a deprecation in the standard library, and a demotion of 32-bit Apple targets to Tier 3.

Error::Description is deprecated

Sometimes, mistakes are made. The Error::description method is now considered to be one of those mistakes. The problem is with its type signature:

fn description(&self) -> &str

Because description returns a &str, it is not nearly as useful as we wished it would be. This means that you basically need to return the contents of an Error verbatim; if you wanted to say, use formatting to produce a nicer description, that is impossible: you'd need to return a String. Instead, error types should implement the Display/Debug traits to provide the description of the error.
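
A minimal sketch of that recommendation (the ParseError type and its message are invented for illustration): implement Display with whatever formatting you need, and the blanket ToString impl for Display gives you the descriptive string that description could not:

```rust
use std::error::Error;
use std::fmt;

// A hypothetical error type, used only for illustration.
#[derive(Debug)]
struct ParseError {
    line: usize,
}

impl fmt::Display for ParseError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        // Unlike `description()`, Display can build a formatted message.
        write!(f, "parse error on line {}", self.line)
    }
}

// `Error` requires Debug + Display; no need to implement `description`.
impl Error for ParseError {}

fn main() {
    let err = ParseError { line: 3 };
    // `to_string()` comes for free via the blanket impl for Display.
    assert_eq!(err.to_string(), "parse error on line 3");
}
```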

This API has existed since Rust 1.0. We've been working towards this goal for a long time: back in Rust 1.27, we "soft deprecated" this method. What that meant in practice was that we gave the function a default implementation, so users were no longer forced to implement this method when implementing the Error trait. In this release, we marked it as actually deprecated, and took some steps to de-emphasize the method in Error's documentation. Due to our stability policy, description will never be removed, and so this is as far as we can go.

Downgrading 32-bit Apple targets

Apple is no longer supporting 32-bit targets, and so, neither are we. They have been downgraded to Tier 3 support by the project. For more details on this, check out this post from back in January, which covers everything in detail.

Contributors to 1.42.0

Many people came together to create Rust 1.42.0. We couldn't have done it without all of you. Thanks!

Daniel Stenbergcurl 7.69.1 better patch than sorry

This release comes but 7 days since the previous and is a patch release only, hence called 7.69.1.


the 190th release
0 changes
7 days (total: 8,027)

27 bug fixes (total: 5,938)
48 commits (total: 25,405)
0 new public libcurl function (total: 82)
0 new curl_easy_setopt() option (total: 270)

0 new curl command line option (total: 230)
19 contributors, 6 new (total: 2,133)
7 authors, 1 new (total: 772)
0 security fixes (total: 93)
0 USD paid in Bug Bounties

Unplanned patch release

Quite obviously this release was not shipped aligned with our standard 8-week cycle. The reason is that we had too many semi-serious or at least annoying bugs that were reported early on after the 7.69.0 release last week. They made me think our users will appreciate a quick follow-up that addresses them. See below for more details on some of those flaws.

How can this happen in a project that will soon be 22 years old, that has thousands of tests, dozens of developers and 70+ CI jobs for every single commit?

The short answer is that we don’t have enough tests that cover enough use cases and transfer scenarios, or put another way: curl and libcurl are very capable tools that can deal with a nearly infinite number of different combinations of protocols, transfers and bytes over the wire. It is really hard to cover all cases.

Also, an old wisdom that we learned already many years ago is that our code is always only properly widely used and tested the moment we do a release and not before. Everything can look good in pre-releases among all the involved developers, but only once the entire world gets its hands on the new release it really gets to show what it can or cannot do.

This time, a few of the changes we had landed for 7.69.0 were not good enough. We then go back, fix issues, land updates and we try again. So here comes 7.69.1 – better patch than sorry!


As the numbers above show, we managed to land an amazing number of bug-fixes in this very short time. Here are seven of the more important ones, from my point of view! Not all of them were regressions or even reported in 7.69.0, some of them were just ripe enough to get landed in this release.

unpausing HTTP/2 transfers

When I fixed the pausing and unpausing of HTTP/2 streams for 7.69.0, the fix was inadequate for several of the more advanced use cases and unfortunately we don’t have good enough tests to detect those. At least two browsers built to use libcurl for their HTTP engines reported stalled HTTP/2 transfers due to this.

I reverted the previous change and I’ve landed a different take that seems to be a more appropriate one, based on early reports.

pause: cleanups

After I had modified the curl_easy_pause function for 7.69.0, we also got reports about crashes with uses of this function.

It made me do some additional cleanups to make it more resilient to bad uses from applications, both when called without a correct handle and when it is called to just set the same pause state it is already in.

socks: connection regressions

I was so happy with my overhauled SOCKS connection code in 7.69.0 where it was made entirely non-blocking. But again it turned out that our test cases for this weren’t entirely mimicking the real world so both SOCKS4 and SOCKS5 connections where curl does the name resolving could easily break. The test cases probably worked fine there because they always resolve the host name really quick and locally.

SOCKS4 connections are now also forced to be done over IPv4 only, as that was also something that could trigger a funny error – the protocol doesn’t support IPv6, you need to go to SOCKS5 for that!

Both version 4 and 5 of the SOCKS proxy protocol have options to allow the proxy to resolve the server name or you can have the client (curl) do it. (Somewhat described in the CURLOPT_PROXY man page.) These problems were found for the cases when curl resolves the server name.

libssh: MD5 hex comparison

For application users of the libcurl CURLOPT_SSH_HOST_PUBLIC_KEY_MD5 option, which is used to verify that curl connects to the right server, this change makes sure that the libssh backend does the right thing and acts exactly like the libssh2 backend does and how the documentation says it works…

libssh2: known hosts crash

In a recent change, libcurl will try to set a preferred method for the knownhost matching libssh2 provides when connecting to an SSH server, but the code unfortunately contained an easily triggered NULL pointer dereference that no review caught and obviously no test either!

c-ares: duphandle copies DNS servers too

curl_easy_duphandle() duplicates a libcurl easy handle and is frequently used by applications. It turns out we broke a little piece of the function back in 7.63.0 as a few DNS server options haven’t been duplicated properly since then. Fixed now!

curl_version: thread-safer

The curl_version and curl_version_info functions are now both thread-safe without the use of any global context. One issue less left for having a completely thread-safe future curl_global_init.

Schedule for next release

This was an out-of-schedule release but the plan is to stick to the established release schedule, which will have the effect that the coming release window will be one week shorter than usual and the full cycle will complete in 7 weeks instead of 8.

Release video

Robert O'CallahanDebugging Gdb Using rr: Ptrace Emulation

Someone tried using rr to debug gdb and reported an rr issue because it didn't work. With some effort I was able to fix a couple of bugs and get it working for simple cases. Using improved debuggers to improve debuggers feels good!

The main problem when running gdb under rr is the need to emulate ptrace. We had the same problem when we wanted to debug rr replay under rr. In Linux a process can only have a single ptracer. rr needs to ptrace all the processes it's recording — in this case gdb and the process(es) it's debugging. Gdb needs to ptrace the process(es) it's debugging, but they can't be ptraced by both gdb and rr. rr circumvents the problem by emulating ptrace: gdb doesn't really ptrace its debuggees, as far as the kernel is concerned, but instead rr emulates gdb's ptrace calls. (I think in principle any ptrace user, e.g. gdb or strace, could support nested ptracing in this way, although it's a lot of work so I'm not surprised they don't.)

Most of the ptrace machinery that gdb needs already worked in rr, and we have quite a few ptrace tests to prove it. All I had to do to get gdb working for simple cases was to fix a couple of corner-case bugs. rr has to synthesize SIGCHLD signals sent to the emulated ptracer; these signals weren't interacting properly with sigsuspend. For some reason gdb spawns a ptraced process, then kills it with SIGKILL and waits for it to exit; that wait has to be emulated by rr because in Linux regular "wait" syscalls can only wait for a non-child process if the waiter is ptracing the target process, and under rr gdb is not really the ptracer, so the native wait doesn't work. We already had logic for that, but it wasn't working for process exits triggered by signals, so I had to rework that, which was actually pretty hard (see the rr issue for horrible details).

After I got gdb working I discovered it loads symbols very slowly under rr. Every time gdb demangles a symbol it installs (and later removes) a SIGSEGV handler to catch crashes in the demangler. This is very sad and does not inspire trust in the quality of the demangling code, especially if some of those crashes involve potentially corrupting memory writes. It is slow under rr because rr's default syscall handling path makes cheap syscalls like rt_sigaction a lot more expensive. We have the "syscall buffering" fast path for the most frequent syscalls, but supporting rt_sigaction along that path would be rather complicated, and I don't think it's worth doing at the moment, given you can work around the problem using maint set catch-demangler-crashes off. I suspect that (with KPTI especially) doing 2-3 syscalls per symbol demangle (sigprocmask is also called) hurts gdb performance even without rr, so ideally someone would fix that. Either fix the demangling code (possibly writing it in a safe language like Rust), or batch symbol demangling to avoid installing and removing a signal handler thousands of times, or move it to a child process and talk to it asynchronously over IPC — safer too!

Hacks.Mozilla.OrgSecurity means more with Firefox 74

Today sees the release of Firefox number 74. The most significant new features we’ve got for you this time are security enhancements: Feature Policy, the Cross-Origin-Resource-Policy header, and removal of TLS 1.0/1.1 support. We’ve also got some new CSS text property features, the JS optional chaining operator, and additional 2D canvas text metric features, along with the usual wealth of DevTools enhancements and bug fixes.

As always, read on for the highlights, or find the full list of additions in the following articles:

Note: In the Security enhancements section below, we detail the removal of TLS 1.0/1.1 in Firefox 74; however, we reverted this change for an undetermined amount of time, to better enable access to critical government sites sharing COVID-19 information. We are keeping the information below intact because it is still useful to give you an idea of future intents. (Updated Monday, 30 March.)

Security enhancements

Let’s look at the security enhancements we’ve got in 74.

Feature Policy

We’ve finally enabled Feature Policy by default. You can now use the <iframe> allow attribute and the Feature-Policy HTTP header to set feature permissions for your top level documents and IFrames. Syntax examples follow:

<iframe src="https://example.com" allow="fullscreen"></iframe>
Feature-Policy: microphone 'none'; geolocation 'none'

Cross-Origin-Resource-Policy
We’ve also enabled support for the Cross-Origin-Resource-Policy (CORP) header, which allows web sites and applications to opt in to protection against certain cross-origin requests (such as those coming from <script> and <img> elements). This can help to mitigate speculative side-channel attacks (like Spectre and Meltdown) as well as Cross-Site Script Inclusion attacks.

The available values are same-origin and same-site. same-origin only allows requests that share the same scheme, host, and port to read the relevant resource. This provides an additional level of protection beyond the web’s default same-origin policy. same-site only allows requests that share the same site.

To use CORP, set the header to one of these values, for example:

Cross-Origin-Resource-Policy: same-site

TLS 1.0/1.1 removal

Last but not least, Firefox 74 sees the removal of TLS 1.0/1.1 support, to help raise the overall level of security of the web platform. This is vital for moving the TLS ecosystem forward, and getting rid of a number of vulnerabilities that existed as a result of TLS 1.0/1.1 not being as robust as we’d really like — they’re in need of retirement.

The change was first announced in October 2018 as a shared initiative of Mozilla, Google, Microsoft, and Apple. Now in March 2020 we are all acting on our promises (with the exception of Apple, who will be making the change slightly later on).

The upshot is that you’ll need to make sure your web server supports TLS 1.2 or 1.3 going forward. Read TLS 1.0 and 1.1 Removal Update to find out how to test and update your TLS/SSL configuration. From now on, Firefox will return a Secure Connection Failed error when connecting to servers using the older TLS versions. Upgrade now, if you haven’t already!

secure connection failed error message, due to connected server using TLS 1.0 or 1.1

Note: For a couple of release cycles (and longer for Firefox ESR), the Secure Connection Failed error page will feature an override button allowing you to Enable TLS 1.0 and 1.1 in cases where a server is not yet upgraded, but you won’t be able to rely on it for too long.

To find out more about TLS 1.0/1.1 removal and the background behind it, read It’s the Boot for TLS 1.0 and TLS 1.1.

Other web platform additions

We’ve got a host of other web platform additions for you in 74.

New CSS text features

For a start, the text-underline-position property is enabled by default. This is useful for positioning underlines set on your text in certain contexts to achieve specific typographic effects.

For example, if your text is in a horizontal writing mode, you can use text-underline-position: under; to put the underline below all the descenders, which is useful for ensuring legibility with chemical and mathematical formulas, which make frequent use of subscripts.

.horizontal {
  text-underline-position: under;
}

In text with a vertical writing-mode set, we can use values of left or right to make the underline appear to the left or right of the text as required.

.vertical {
  writing-mode: vertical-rl;
  text-underline-position: left;
}

In addition, the text-underline-offset and text-decoration-thickness properties now accept percentage values, for example:

text-decoration-thickness: 10%;

For these properties, this is a percentage of 1em in the current font’s size.

Optional chaining in JavaScript

We now have the JavaScript optional chaining operator (?.) available. When you are trying to access an object deep in a chain, this allows for implicit testing of the existence of the objects higher up in the chain, avoiding errors and the need to explicitly write testing code.

let nestedProp = obj.first?.second;

New 2D canvas text metrics

The TextMetrics interface (retrieved using the CanvasRenderingContext2D.measureText() method) has been extended to contain four more properties measuring the actual bounding box — actualBoundingBoxLeft, actualBoundingBoxRight, actualBoundingBoxAscent, and actualBoundingBoxDescent.

For example:

const canvas = document.createElement('canvas');
const ctx = canvas.getContext('2d');
const text = ctx.measureText('Hello world');

text.width;                    // 56.08333206176758
text.actualBoundingBoxAscent;  // 8
text.actualBoundingBoxDescent; // 0
text.actualBoundingBoxLeft;    // 0
text.actualBoundingBoxRight;   // 55.733333333333334

DevTools additions

Next up, DevTools additions.

Device-like rendering in Responsive Design Mode

While Firefox for Android is being relaunched with GeckoView to be faster and more private, the DevTools need to stay ahead. Testing on mobile should be as frictionless as possible, both when using Responsive Design Mode on your desktop and on-device with Remote Debugging.

Correctness is important for Responsive Design Mode, so developers can trust the output without a device at hand. Over the past releases, we rolled out major improvements that ensure meta viewport is correctly applied with Touch Simulation. This ties in with improved device presets, which automatically enable touch simulation for mobile devices.

animated gif showing how responsive design mode now represents view meta settings better

Fun fact: The team managed to make this simulation so accurate that it has already helped to identify and fix rendering bugs for Firefox on Android.

DevTools Tip: Open Responsive Design Mode without DevTools via the tools menu or Ctrl + Shift + M on Windows/Cmd + Opt + M on macOS.

We’d love to hear about your experiences when giving your site a spin in RDM or on your Android phone with Firefox Nightly for Developers.

CSS tools that work for you

The Page Inspector’s new in-context warnings for inactive CSS rules have received a lot of positive feedback. They help you solve gnarly CSS issues while teaching you about the intricate interdependencies of CSS rules.

Since its launch, we have continued to tweak and add rules, often based on user feedback. One highlight for 74 is a new detection setting that warns you when properties depend on positioned elements – namely z-index, top, left, bottom, and right.

Firefox Page Inspector now showing inactive position-related properties such as z-index and top

Your feedback will help to further refine and expand the rules. Say hi to the team in the DevTools chat on Mozilla’s Matrix instance or follow our work via @FirefoxDevTools.

Debugging for Nested Workers

Firefox’s JavaScript Debugger team has been focused on optimizing Web Workers over the past few releases to make them easier to inspect and debug. The more developers and frameworks that use workers to move processing off the main thread, the easier it will be for browsers to prioritize running code that is fired as a result of user input actions.

Nested web workers, which allow workers to spawn and control their own worker instances, are now displayed in the Debugger:

Firefox JavaScript debugger now shows nested workers

Improved React DevTools integration

The React Developer Tools add-on is one of many developer add-ons that integrate tightly with Firefox DevTools. Thanks to the WebExtensions API, developers can create and publish add-ons for all browsers from the same codebase.

In collaboration with the React add-on maintainers, we worked to re-enable and improve the context menus in the add-on, including Go to definition. This action lets developers jump from React Components directly to their source files in the Debugger. The same functionality has already been enabled for jumping to elements in the Inspector. We want to build this out further, to make framework workflows seamless with the rest of the tools.

Early-access DevTools features in Developer Edition

Developer Edition is Firefox’s pre-release channel which gets early access to tooling and platform features. Its settings also enable more functionality for developers by default. We like to bring new features quickly to Developer Edition to gather your feedback, including the following highlights.

Instant evaluation for Console expressions

Exploring JavaScript objects, functions, and the DOM feels like magic with instant evaluation. As long as expressions typed into the Web Console are side-effect free, their results will be previewed while you type, allowing you to identify and fix errors more rapidly than before.

Async Stack Traces for Debugger & Console

Modern JavaScript code depends heavily upon stacking async/await on top of other async operations like events, promises, and timeouts. Thanks to better integration with the JavaScript engine, async execution is now captured to give a more complete picture.

Async call stacks in the Debugger let you step through events, timeouts, and promise-based function calls that are executed over time. In the Console, async stacks make it easier to find the root causes of errors.

async call stack shown in the Firefox JavaScript debugger

Sneak-peek Service Worker Debugging

This one has been in Nightly for a while, and we are more than excited to get it into your hands soon. Expect it in Firefox 76, which will become Developer Edition in 4 weeks.

The post Security means more with Firefox 74 appeared first on Mozilla Hacks - the Web developer blog.

Mozilla Addons BlogSupport for extension sideloading has ended

Today marks the release of Firefox 74 and as we announced last fall, developers will no longer be able to install extensions without the user taking an action. This installation method was typically done through application installers, and is commonly referred to as “sideloading.”

If you are the developer of an extension that installs itself via sideloading, please make sure that your users can install the extension from your own website or from addons.mozilla.org (AMO).

We heard several questions about how the end of sideloading support affects users and developers, so we wanted to clarify what to expect from this change:

  1. Starting with Firefox 74, users will need to take explicit action to install the extensions they want, and will be able to remove previously sideloaded extensions when they want to.
  2. Previously installed sideloaded extensions will not be uninstalled for users when they update to Firefox 74. If a user no longer wants an extension that was sideloaded, they must uninstall the extension themselves.
  3. Firefox will prevent new extensions from being sideloaded.
  4. Developers will be able to push updates to extensions that had previously been sideloaded. (If you are the developer of a sideloaded extension and you are now distributing your extension through your website or AMO, please note that you will need to update both the sideloaded .xpi and the distributed .xpi; updating one will not update the other.)

Enterprise administrators and people who distribute their own builds of Firefox (such as some Linux and Selenium distributions) will be able to continue to deploy extensions to users. Enterprise administrators can do this via policies. Additionally, Firefox Extended Support Release (ESR) will continue to support sideloading as an extension installation method.

We will continue to support self-distributed extensions. This means that developers aren’t required to list their extensions on AMO and users can install extensions from sites other than AMO. Developers just won’t be able to install extensions without the user taking an action. Users will also continue being able to manually install extensions.

We hope this helps clear up any confusion from our last post. If you’re a user who has had difficulty uninstalling sideloaded extensions in the past, we hope that you will find it much easier to remove unwanted extensions with this update.

The post Support for extension sideloading has ended appeared first on Mozilla Add-ons Blog.

This Week In RustThis Week in Rust 329

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's crate is plotly, a plotly.js-backed plotting library.

Thanks to Ioannis Giagkiozis for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

302 pull requests were merged in the last week

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.


No RFCs are currently in final comment period.

Tracking Issues & PRs

New RFCs

No new RFCs were proposed this week.

Upcoming Events

Asia Pacific
North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

I have no idea how to debug Rust, because in 2 years of Rust, I haven't had that type of low level bug.

papaf on hacker news

Thanks to zrk for the suggestions!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nasa42 and llogiq.

Discuss on r/rust.

Niko MatsakisAsync Interview #7: Withoutboats

Hello everyone! I’m happy to be posting a transcript of my async interview with withoutboats. This particular interview took place way back on January 14th, but the intervening months have been a bit crazy and I didn’t get around to writing it up till now.


You can watch the video on YouTube. I’ve also embedded a copy here for your convenience:

Next steps for async

Before I go into boats’ interview, I want to talk a bit about the state of async-await in Rust and what I see as the obvious next steps. I may still do a few more async interviews after this – there are tons of interesting folks I never got to speak to! – but I think it’s also past time to try and come to a consensus of the “async roadmap” for the rest of the year (and maybe some of 2021, too). The good news is that I feel like the async interviews highlighted a number of relatively clear next steps. Sometime after this post, I hope to post a blog post laying out a “rough draft” of what such a roadmap might look like.


withoutboats is a member of the Rust lang team. Starting around the beginning of 2018, they started looking into async-await for Rust. Everybody knew that we wanted to have some way to write a function that could suspend (await) as needed. But we were stuck on a rather fundamental problem which boats explained in the blog post “self-referential structs”. This blog post was the first in a series of posts that ultimately documented the design that became the Pin type, which describes a pointer to a value that can never be moved to another location in memory. Pin became the foundation for async functions in Rust. (If you’ve not read the blog post series, it’s highly recommended.) If you’d like to learn more about pin, boats posted a recorded stream on YouTube that explores its design in detail.

Vision for async

All along, boats has been motivated by a relatively clear vision: we should make async Rust “just as nice to use” as Rust with blocking I/O. In short, you should be able to write code much like you always did, turning the functions which perform I/O into async functions and then adding await here or there as needed.

Since 2018, we’ve made great progress towards the goal of “async I/O that is as easy as sync” – most notably by landing and stabilizing the async-await MVP – but we’re not there yet. There remain a number of practical obstacles that make writing code using async I/O more difficult than sync I/O. So the mission for the next few years is to identify those obstacles and dismantle them, one by one.

Next step: async destructors

One of the first obstacles that boats mentioned was extending Rust’s Drop trait to work better for async code. The Drop trait, for those who don’t know Rust, is a special trait in Rust that types can implement in order to declare a destructor (code which should run when a value goes out of scope). boats wrote a blog post that discusses the problem in more detail and proposes a solution. Since that blog post, they’ve refined the proposal in response to some feedback, though the overall shape remains the same. The basic idea is to extend the Drop trait with an optional poll_drop_ready method:

trait Drop {
    fn drop(&mut self);
    fn poll_drop_ready(
        self: Pin<&mut Self>,
        ctx: &mut Context<'_>,
    ) -> Poll<()> {
        Poll::Ready(())
    }
}

When executing an async fn, and a value goes out of scope, we will first invoke poll_drop_ready, and “await” if it returns anything other than Poll::Ready. This gives the value a chance to do async operations that may block, in preparation for the final drop. Once Poll::Ready is returned, the ordinary drop method is invoked.

This async-drop trait came up in early async interviews, and I raised Eliza’s use case with boats. Specifically, she wanted some way to offer values that are live on the stack a callback when a yield occurs and when the function is resumed, so that they can (e.g.) interact with thread-local state correctly in an async context. While distinct from async destructors, the issues are related because destructors are often used to manage thread-local values in a scoped fashion.

Adding async drop requires not only modifying the compiler but also modifying futures combinators to properly handle the new poll_drop_ready method (combinators need to propagate this poll_drop_ready to the sub-futures they contain).

Note that we wouldn’t offer any ‘guarantee’ that poll_drop_ready will run. For example, it would not run if a future is dropped without being resumed, because then there is no “async context” that can handle the awaits. However, like Drop, it would ultimately be something that types can “usually” expect to execute under ordinary circumstances.

Some of the use cases for async-drop include writers that buffer data and wish to ensure that the data is flushed out when the writer is dropped, transactional APIs, or anything that might do I/O when dropped.
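To make the shape of the proposal concrete, here is a compilable sketch using a stand-alone trait. PollDropReady, BufferedWriter and NoopWaker are made-up names for illustration only; nothing here is part of today's standard library or of the actual proposal's API surface:

```rust
use std::pin::Pin;
use std::sync::Arc;
use std::task::{Context, Poll, Wake, Waker};

// Hypothetical stand-in for the proposed `poll_drop_ready` hook on `Drop`;
// none of these names exist in today's standard library.
trait PollDropReady {
    fn poll_drop_ready(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<()>;
}

struct BufferedWriter {
    buffer: Vec<u8>,
    flushed: bool,
}

impl PollDropReady for BufferedWriter {
    fn poll_drop_ready(self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<()> {
        let this = self.get_mut();
        if !this.flushed {
            // A real implementation would start an async flush here and
            // return Poll::Pending until the I/O completes.
            this.buffer.clear();
            this.flushed = true;
        }
        Poll::Ready(())
    }
}

// Minimal waker so we can poll outside a real runtime.
struct NoopWaker;
impl Wake for NoopWaker {
    fn wake(self: Arc<Self>) {}
}

fn main() {
    let waker = Waker::from(Arc::new(NoopWaker));
    let mut cx = Context::from_waker(&waker);
    let mut writer = BufferedWriter { buffer: vec![1, 2, 3], flushed: false };
    // An async runtime would drive this to Ready before running `drop`.
    assert_eq!(Pin::new(&mut writer).poll_drop_ready(&mut cx), Poll::Ready(()));
    assert!(writer.flushed);
    println!("flushed: {}", writer.flushed);
}
```

Under the proposal, the compiler-generated async fn code would do this polling for you whenever such a value goes out of scope, before invoking the ordinary synchronous destructor.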

block_on in the std library

One very small addition that boats proposed is adding block_on to the standard library. Invoking block_on(future) would block the current thread until future has been fully executed (and then return the resulting value). This is actually something that most async I/O code would never want to do – if you want to get the value from a future, after all, you should do future.await. So why is block_on useful?

Well, block_on is basically the most minimal executor. It allows you to take async code and run it in a synchronous context with minimal fuss. It’s really convenient in examples and documentation. I would personally like it to permit writing stand-alone test cases. Those reasons alone are probably good enough justification to add it, but boats has another use in mind as well.
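To show just how minimal this executor is, here is a hand-rolled block_on built on the standard library's Wake trait: a sketch of the idea, not the API that would necessarily land in std.

```rust
use std::future::Future;
use std::sync::Arc;
use std::task::{Context, Poll, Wake, Waker};
use std::thread::{self, Thread};

// A waker that unparks the blocked thread when the future can make progress.
struct ThreadWaker(Thread);

impl Wake for ThreadWaker {
    fn wake(self: Arc<Self>) {
        self.0.unpark();
    }
}

// Block the current thread until `future` completes, returning its output.
fn block_on<F: Future>(future: F) -> F::Output {
    let mut future = Box::pin(future);
    let waker = Waker::from(Arc::new(ThreadWaker(thread::current())));
    let mut cx = Context::from_waker(&waker);
    loop {
        match future.as_mut().poll(&mut cx) {
            Poll::Ready(output) => return output,
            // Sleep until the waker unparks us, then poll again.
            Poll::Pending => thread::park(),
        }
    }
}

fn main() {
    let value = block_on(async { 21 * 2 });
    assert_eq!(value, 42);
    println!("{value}");
}
```

That loop-poll-park shape really is the whole executor, which is why it makes sense as a batteries-included default for examples and tests.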

async fn main

Every Rust program ultimately begins with a main somewhere. Because main is invoked by the surrounding C library to start the program, it also tends to be a place where a certain amount of “boilerplate code” can accumulate in order to “setup” the environment for the rest of the program. This “boilerplate setup” can be particularly annoying when you’re just getting started with Rust, as the main function is often the first one you write, and it winds up working differently than the others. A similar problem affects smaller code examples.

In Rust 2018, we extended main so that it supports Result return values. This meant that you could now write main functions that use the ? operator, without having to add some kind of intermediate wrapper:

fn main() -> Result<(), std::io::Error> {
    let file = std::fs::File::create("output.txt")?;
    Ok(())
}

Unfortunately, async code today suffers from a similar papercut. If you’re writing an async project, most of your code is going to be async in nature: but the main function is always synchronous, which means you need to bridge the two somehow. Sometimes, especially for larger projects, this isn’t that big a deal, as you likely need to do some setup or configuration anyway. But for smaller examples, it’s quite a pain.

So boats would like to allow people to write an “async” main. This would then permit you to directly “await” futures from within the main function:

async fn main() {
    let x = load_data(22).await;
}

async fn load_data(port: usize) -> Data { ... }

Of course, this raises the question: since the program will ultimately run synchronously, how do we bridge from the async fn main to a synchronous main? This is where block_on comes in: at least to start, we can simply declare that the future generated by async fn main will be executed using block_on, which means it will block the main thread until main completes (exactly the behavior we need). For simple programs and examples, this will be exactly what you want.

But most real programs will ultimately want to start some other executor to get more features. In fact, following the lead of the runtime crate, many executors already offer a procedural macro that lets you write an async main. So, for example, tokio and async-std offer attributes called #[tokio::main] and #[async_std::main] respectively, which means that if you have an async fn main program you can pick an executor just by adding the appropriate attribute:

#[tokio::main] // or #[async_std::main], etc
async fn main() {
    // ...
}

I imagine that other executors offer a similar procedural macro – or if they don’t yet, they could add one. =)

(In fact, since async-std’s runtime starts implicitly in a background thread when you start using it, you could use async-std libraries without any additional setup as well.)

Overall, this seems pretty nice to me. Basically, when you write async fn main, you get Rust’s “default executor”, which presently is a very bare-bones executor suitable only for simple examples. To switch to a more full-featured executor, you simply add a #[foo::main] attribute and you’re off to the races!

(Side note #1: This isn’t something that boats and I talked about, but I wonder about adding a more general attribute, like #[async_runtime(foo)] that just desugars to a call like foo::main_wrapper(...), which is expected to do whatever setup is appropriate for the crate foo.)

(Side note #2: This also isn’t something that boats and I talked about, but I imagine that having a “native” concept of async fn main might help for some platforms where there is already a native executor. I’m thinking of things like GStreamer or perhaps iOS with Grand Central Dispatch. In short, I imagine there are environments where the notion of a “main function” isn’t really a great fit anyhow, although it’s possible I have no idea what I’m talking about.)

async-await in an embedded context

One thing we’ve not talked about very much in the interviews so far is using async-await in an embedded context. When we shipped the async-await MVP, we definitely cut a few corners, and one of those had to do with the use of thread-local storage (TLS). Currently, when you use async fn, the desugaring winds up using a private TLS variable to carry the Context about the current async task down through the stack. This isn’t necessary, it was just a quick and convenient hack that sidestepped some questions about how to pass in arguments when resuming a suspended function. For most programs, TLS works just fine, but some embedded environments don’t support it. Therefore, it makes sense to fix this bug and permit async fn to pass around its state without the use of TLS. (In fact, since boats and I talked, jonas-schievink opened PR #69033 which does exactly this, though it’s not yet landed.)

Async fn are implemented using a more general generator mechanism

You might be surprised when I say that we’ve already started fixing the TLS problem. After all, the reason we used TLS in the first place is that there were unresolved questions about how to pass in data when waking up a suspended function – and we haven’t resolved those problems. So why are we able to go ahead and fix this?

The answer is that, while the async fn feature is implemented atop a more general mechanism of suspendable functions1, the full power of that mechanism is not exposed to end-users. So, for example, suspendable functions in the compiler permit yielding arbitrary values, but async functions always yield up (), since they only need to signal that they are blocked waiting on I/O, not transmit values. Similarly, the compiler’s internal mechanism will allow us to pass in a new Context when we wake up from a yield, and we can use that mechanism to pass in the Context argument from the future API. But this is hidden from the end-user, since that Context is never directly exposed or accessed.

In short, the suspended functions supported by the compiler are not a language feature: they are an implementation detail that is (currently) only used for async-await. This is really useful because it means we can change how they work, and it also means that we don’t have to make them support all possible use cases one might want. In this particular case, it means we don’t have to resolve some of the thorny questions about how to pass in data after a yield, because we only need to use them in a very specific way.
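The explicit Context threading is already visible in the public Future trait: every poll receives the Context as an argument, so nothing at the API level requires thread-local storage. A minimal hand-written future makes that visible (YieldOnce and NoopWaker are illustrative names, not anything from std):

```rust
use std::future::Future;
use std::pin::Pin;
use std::sync::Arc;
use std::task::{Context, Poll, Wake, Waker};

// A future that completes on its second poll. The point is that the task
// Context arrives as an explicit `poll` argument; only the async fn
// desugaring ever relied on TLS, not the Future API itself.
struct YieldOnce {
    polled: bool,
}

impl Future for YieldOnce {
    type Output = u32;
    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<u32> {
        let this = self.get_mut();
        if this.polled {
            Poll::Ready(42)
        } else {
            this.polled = true;
            // Ask to be polled again right away.
            cx.waker().wake_by_ref();
            Poll::Pending
        }
    }
}

// Minimal waker so we can poll by hand, outside a real runtime.
struct NoopWaker;
impl Wake for NoopWaker {
    fn wake(self: Arc<Self>) {}
}

fn main() {
    let waker = Waker::from(Arc::new(NoopWaker));
    let mut cx = Context::from_waker(&waker);
    let mut fut = YieldOnce { polled: false };
    assert_eq!(Pin::new(&mut fut).poll(&mut cx), Poll::Pending);
    assert_eq!(Pin::new(&mut fut).poll(&mut cx), Poll::Ready(42));
    println!("done");
}
```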

Supporting generators (iterators) and async generators (streams)

One observation that boats raised is that people who write Async I/O code are interacting with Pin much more directly than was expected. The primary reason for this is that people are having to manually implement the Stream trait, which is basically the async version of an iterator. (We’ve talked about Stream in a number of previous async interviews.) I have also found that, in my conversations with users of async, streams come up very, very often. At the moment, consuming streams is generally fairly easy, but creating them is quite difficult. For that matter, even in synchronous Rust, manually implementing the Iterator traits is kind of annoying (although significantly easier than streams).
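For comparison, here is the kind of manual Iterator boilerplate that a generator function would replace (Counter is a made-up example type; the Stream equivalent additionally needs Pin and Context):

```rust
// Today's manual route: a counting iterator written by hand, the sort of
// state-machine bookkeeping a `gen fn` could generate for you.
struct Counter {
    next: u32,
    limit: u32,
}

impl Iterator for Counter {
    type Item = u32;
    fn next(&mut self) -> Option<u32> {
        if self.next < self.limit {
            self.next += 1;
            Some(self.next - 1)
        } else {
            None
        }
    }
}

fn main() {
    let values: Vec<u32> = Counter { next: 0, limit: 3 }.collect();
    assert_eq!(values, vec![0, 1, 2]);
    println!("{values:?}");
}
```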

So, it would be nice if we had some way to make it easier to write iterators and streams. And, indeed, this design space has been carved out in other languages: the basic mechanism is to add a generator2, which is some sort of function that can yield up a series of values before terminating. Obviously, if you’ve read up to this point, you can see that the “suspendable functions” we used to implement async await can also be used to support some form of generator abstractions, so a lot of the hard implementation work has been done here.

That said, supporting generator functions is something that we’ve been shying away from. And why is that, if a lot of the implementation work is done? The answer is primarily that the design space is huge. I alluded to this earlier in talking about some of the questions around how to pass data in when resuming a suspended function.

Full generality considered too dang difficult

boats however contends that we are making our lives harder than they need to be. In short, if we narrow our focus from “create the perfect, flexible abstraction for suspended functions and coroutines” to “create something that lets you write iterators and streams”, then a lot of the thorny design problems go away. Now, under the covers, we still want to have some kind of unified form of suspended functions that can support async-await and generators, but that is a much simpler task.

In short, we would want to permit writing a gen fn (and async gen fn), which would be some function that is able to yield values and which eventually returns. Since the iterator’s next method doesn’t take any arguments, we wouldn’t need to support passing data in after yields (in the case of streams, we would pass in data, but only the Context values that are not directly exposed to users). Similarly, iterators and streams don’t produce a “final value” when they’re done, so these functions would always just return unit.

Adopting a more narrow focus wouldn’t close the door to exposing our internal mechanism as a first-class language feature at some point, but it would help us to solve urgent problems sooner, and it would also give us more experience to use when looking again at the more general task. It also means that we are adding features that make writing iterators and streams as easy as we can, which is a good thing3. (In case you can’t tell, I was sympathetic to boats’ argument.)


Extending the stdlib with some key traits

boats is in favor of adding the “big three” traits to the standard library (if you’ve been reading these interviews, these traits will be quite familiar to you by now):

  • AsyncRead
  • AsyncWrite
  • Stream
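For reference, here is the Stream trait essentially as it exists in the futures-core crate today, together with a toy implementation (Counter and NoopWaker are illustrative names):

```rust
use std::pin::Pin;
use std::sync::Arc;
use std::task::{Context, Poll, Wake, Waker};

// The Stream trait, essentially as defined by the futures-core crate:
// an asynchronous Iterator whose `next` is a poll method.
trait Stream {
    type Item;
    fn poll_next(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Option<Self::Item>>;
}

// A trivial stream yielding 0..limit without ever returning Pending.
struct Counter {
    next: u32,
    limit: u32,
}

impl Stream for Counter {
    type Item = u32;
    fn poll_next(self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<Option<u32>> {
        let this = self.get_mut();
        if this.next < this.limit {
            this.next += 1;
            Poll::Ready(Some(this.next - 1))
        } else {
            Poll::Ready(None)
        }
    }
}

// Minimal waker so we can poll by hand, outside a real runtime.
struct NoopWaker;
impl Wake for NoopWaker {
    fn wake(self: Arc<Self>) {}
}

fn main() {
    let waker = Waker::from(Arc::new(NoopWaker));
    let mut cx = Context::from_waker(&waker);
    let mut stream = Counter { next: 0, limit: 3 };
    let mut out = Vec::new();
    while let Poll::Ready(Some(v)) = Pin::new(&mut stream).poll_next(&mut cx) {
        out.push(v);
    }
    assert_eq!(out, vec![0, 1, 2]);
    println!("{out:?}");
}
```

The Pin in the receiver is exactly the part people currently have to grapple with when writing this impl by hand, which is what motivates both stabilizing the trait and making streams writable as generator-style functions.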

Stick to the core vision: Async and sync should be analogous

One important point: boats believes (and I agree) that we should try to maintain the principle that the async and synchronous versions of the traits should align as closely as possible. This matches the overarching design vision of minimizing the differences between “async Rust” and “sync Rust”. It also argues in favor of the approach that sfackler described in their interview, where we address the questions of how to handle uninitialized memory in an analogous way for both Read and AsyncRead.

We talked a bit about the finer details of that principle. For example, if we were to extend the Read trait with some kind of read_buf method (which can support an uninitialized output buffer), then this new method would have to have a default, for backwards compatibility reasons:

trait Read {
    fn read(&mut self, ...);
    fn read_buf(&mut self, buf: &mut BufMut<..>) { }
}

This is a bit unfortunate, as ideally you would only implement read_buf. For AsyncRead, since the trait doesn’t exist yet, we could switch the defaults. But boats pointed out that this carries costs too: we would forever have to explain why the two traits are different, for example. (Another option is to have both methods default to one another, so that you can implement either one, which – combined with a lint – might be the best of both worlds.)
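The "both methods default to one another" idea can be sketched with a made-up trait (SimpleRead is not std's Read, and the buffer-type details are elided): an implementor can override either method and get the other for free, which is why a lint would be needed to catch implementors that override neither, since the two defaults recurse into each other.

```rust
// Each default delegates to the other method, so overriding either one
// is enough; overriding neither would recurse forever (hence the lint).
trait SimpleRead {
    fn read(&mut self, buf: &mut [u8]) -> usize {
        self.read_buf(buf)
    }
    fn read_buf(&mut self, buf: &mut [u8]) -> usize {
        self.read(buf)
    }
}

// This reader only implements `read_buf`; `read` falls back to it.
struct Repeat(u8);

impl SimpleRead for Repeat {
    fn read_buf(&mut self, buf: &mut [u8]) -> usize {
        buf.fill(self.0);
        buf.len()
    }
}

fn main() {
    let mut reader = Repeat(7);
    let mut buf = [0u8; 4];
    let n = reader.read(&mut buf);
    assert_eq!((n, buf), (4, [7, 7, 7, 7]));
    println!("{n} bytes: {buf:?}");
}
```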

Generic interface for spawning

Some time back, boats wrote a post proposing global executors. This would basically be a way to add a function to the stdlib to spawn a task, which would then delegate (somehow) to whatever executor you are using. Based on the response to the post, boats now feels this is probably not a good short-term goal.

For one thing, there were a lot of unresolved questions about just what features this global executor should support. But for another, the main goal here is to enable libraries to write “executor independent” code, but it’s not clear how many libraries spawn tasks anyway – that’s usually done more at the application level. Libraries tend to instead return a future and let the application do the spawning (interestingly, one place this doesn’t work is in destructors, since they can’t return futures; supporting async drop, as discussed earlier, would help here.)

So it’d probably be better to revisit this question once we have more experience, particularly once we have the async I/O and stream traits available.

The futures crate

We discussed other possible additions to the standard library. There are a lot of “building blocks” currently in the futures library that are independent from executors and which could do well in the standard library. Some of the things that we talked about:

  • async-aware mutexes, clearly a useful building block
  • channels
    • though std channels are not the most loved, crossbeam’s are generally preferred
    • interestingly, channel types do show up in public APIs from time to time, as a way to receive data, so having them in std could be particularly useful

In general, where things get more complex is whenever you have bits of code that either have to spawn tasks or which do the “core I/O”. These are the points where you need a more full-fledged reactor or runtime. But there are lots of utilities that don’t need that and which could profitably live in the std library.

Where to put async things in the stdlib?

One theme that boats and I did not discuss, but which has come up when I’ve raised this question with others, is where to put async-aware traits in the std hierarchy, particularly when there are sync versions. For example, should we have std::io::Read and std::io::AsyncRead? Or would it be better to have std::io::Read and something like std::async::io::Read (obviously, async is a keyword, so this precise path may not be an option). In other words, should we combine sync/async traits into the same space, but with different names, or should we carve out a space for “async-enabled” traits and use the same names? An interesting question, and I don’t have an opinion yet.

Conclusion and some of my thoughts

I always enjoy talking with boats, and this time was no exception. I think boats raised a number of small, practical ideas that hadn’t come up before. I do think it’s important that, in addition to stabilizing fundamental building blocks like AsyncRead, we also consider improvements to the ergonomic experience with smaller changes like async fn main, and I agree with the guiding principle that boats raised of keeping async and sync code as “analogous” as possible.


There is a thread on the Rust users forum for this series.


  1. In the compiler, we call these “suspendable functions” generators, but I’m avoiding that terminology for a reason. 

  2. This is why I was avoiding using the term “generator” earlier – I want to say “suspendable functions” when referring to the implementation mechanism, and “generator” when referring to the user-exposed feature. 

  3. though not one that a fully general mechanism would necessarily provide 

The Rust Programming Language BlogThe 2020 RustConf CFP is Now Open!

Greetings fellow Rustaceans!

The 2020 RustConf Call for Proposals is now open!

Got something to share about Rust? Want to talk about the experience of learning and using Rust? Want to dive deep into an aspect of the language? Got something different in mind? We want to hear from you! The RustConf 2020 CFP site is now up and accepting proposals.

If you may be interested in speaking but aren't quite ready to submit a proposal yet, we are here to help you. We will be holding speaker office hours regularly throughout the proposal process, after the proposal process, and up to RustConf itself on August 20 and 21, 2020. We are available to brainstorm ideas for proposals, talk through proposals, and provide support throughout the entire speaking journey. We need a variety of perspectives, interests, and experience levels for RustConf to be the best that it can be - if you have questions or want to talk through things please don't hesitate to reach out to us! Watch this blog for more details on speaker office hours - they will be posted very soon.

The RustConf CFP will be open through Monday, April 5th, 2020. We hope to see your proposal soon!

Daniel Stenbergcurl ootw: –quote

Previous command line options of the week.

This option is called -Q in its short form, --quote in its long form. It has existed for as long as curl has existed.


The name for this option originates from the traditional unix command ‘ftp’, as it typically has a command called exactly this: quote. The quote command for the ftp client is a way to send an exact command, as written, to the server. Very similar to what --quote does.


This option originally supported only FTP transfers, but when we added support for FTPS it worked there too automatically.

When we subsequently added SFTP support, it turned out that even such users occasionally need this style of extra commands, so we made curl support it there too. Although for SFTP we had to do it slightly differently, as SFTP as a protocol can’t actually send commands verbatim to the server the way we can with FTP(S). I’ll elaborate a bit more below.

Sending FTP commands

The FTP protocol is a command/response protocol for which curl needs to send a series of commands to the server in order to get the transfer done. Commands that log in, change the working directory, set the correct transfer mode, etc.

Asking curl to access a specific ftp:// URL more or less converts into a command sequence.

The --quote option provides several different ways to insert custom FTP commands into the series of commands curl will issue. If you just specify a command to the option, it will be sent to the server before the transfer takes place – even before it changes working directory.

If you prefix the command with a minus (-), the command will instead be sent after a successful transfer.

If you prefix the command with a plus (+), the command will run immediately before the transfer after curl changed working directory.

As a second (!) prefix you can also opt to insert an asterisk (*), which tells curl that it should continue even if this command causes the server to return an error.

The command itself is a string the user specifies, and it needs to be a correct FTP command, because curl won’t even try to interpret it but will just send it as-is to the server.

FTP examples

For example, remove a file from the server after it has been successfully downloaded:

curl -O ftp://ftp.example/file -Q '-DELE file'

Issue a NOOP command after having logged in:

curl -O ftp://user:password@ftp.example/file -Q 'NOOP'

Rename a file remotely after a successful upload:

curl -T infile ftp://upload.example/dir/ -Q "-RNFR infile" -Q "-RNTO newname"

Sending SFTP commands

Despite sounding similar, SFTP is a very different protocol than FTP(S). With SFTP the access is much more low level than FTP and there’s not really a concept of command and response. Still, we’ve created a set of commands for the --quote option for SFTP that lets the users sort of pretend that it works the same way.

Since there is no sending of the quote commands verbatim in the SFTP case, like curl does for FTP, the commands must instead be supported by curl and get translated into their underlying SFTP binary protocol bits.

In order to support most of the basic use cases people have reportedly used with curl and FTP over the years, curl supports the following commands for SFTP: chgrp, chmod, chown, ln, mkdir, pwd, rename, rm, rmdir and symlink.

The minus and asterisk prefixes as described above work for SFTP too (but not the plus prefix).

Example, delete a file after a successful download over SFTP:

curl -O sftp://example/file -Q '-rm file'

Rename a file on the target server after a successful upload:

curl -T infile sftp://example/dir/ -Q "-rename infile newname"

SSH backends

The SSH support in curl is powered by a third party SSH library. When you build curl, there are three different libraries to select from, and they have a slightly varying degree of support. The libssh2 and libssh backends are pretty much feature complete and have been around for a while, whereas the wolfSSH backend is more bare-bones, with fewer features supported but a much smaller footprint.

Related options

--request changes the actual command used to invoke the transfer when listing directories with FTP.

Wladimir PalantYahoo! and AOL: Where two-factor authentication makes your account less secure

If you are reading this, you probably know already that you are supposed to use two-factor authentication for your most important accounts. This way you make sure that nobody can take over your account merely by guessing or stealing your password, which makes an account takeover far less likely. And what could be more important than your email account that everything else ties into? So you probably know, when Yahoo! greets you like this on login – it’s only for your own safety:

Yahoo! asking for a recovery phone number on login

Yahoo! makes sure that “Remind me later” link is small and doesn’t look like an action, so it would seem that adding a phone number is the only way out here. And why would anybody oppose adding it anyway? But here is the thing: complying reduces the security of your account considerably. This is due to the way Verizon Media (the company which acquired Yahoo! and AOL a while ago) implements account recovery. And: yes, everything I say about Yahoo! also applies to AOL accounts.

Summary of the findings

I’m not the one who discovered the issue. A Yahoo! user wrote me:

I entered my phone number to the Yahoo! login, and it asked me if I wanted to receive a verification key/access key (2fa authentication). So I did that, and typed in the access key… Surprise, I logged in ACCIDENTALLY to the Yahoo! mail of the previous owner of my current phone number!!!

I’m not even the first one to write about this issue. For example, Brian Krebs mentioned this a year ago. Yet here we still are: anybody can take over a Yahoo! or AOL account as long as they control the recovery phone number associated with it.

So if you’ve got a new phone number recently, you could check whether its previous owner has a Yahoo! or AOL account. Nothing will stop you from taking over that account. And not just that: adding a recovery phone number doesn’t necessarily require verification! So when I tested it out, I was offered access to a Yahoo! account which was associated with my phone number even though the account owner almost certainly never proved owning this number. No, I did not log into their account…

How two-factor authentication is supposed to work

The idea behind two-factor authentication is making account takeover more complicated. Instead of logging in with merely a password (something you know), you also have to demonstrate access to a device like your phone (something you have). There are a number of ways malicious actors could learn your password; if you are in the habit of reusing passwords, for example, chances are that your password has been compromised in one of the numerous data breaches. So it’s a good idea to set the bar for account access higher.

The already mentioned article by Brian Krebs explains why phone numbers aren’t considered a good second factor. Not only do phone numbers change hands quite often, criminals have been hijacking them en masse via SIM swapping attacks. Still, despite sending SMS messages to a phone number being considered a weak authentication scheme, it provides some value when used in addition to querying the password.

The Yahoo! and AOL account recovery process

But that’s not how it works with Yahoo! and AOL accounts. I added a recovery phone to my Yahoo! account and enabled two-factor authentication with the same phone number (yes, you have to do it separately). So my account should have been as secure as possible.

And then I tried “recovering” my account. From a different browser. Via a Russian proxy. While still being logged into this account in my regular browser. That should have been enough for Yahoo! to notice something being odd, right?

Yahoo! form for account recovery, only asking for a phone number

Clicking “Forgot username” brought me to a form asking me for a recovery phone number or email address. I entered the phone number and received a verification code via SMS. Entered it into the web page and voilà!

Yahoo! offering me access to my account as well as another one

Now it’s all very straightforward: I click on my account, set a new password and disable two-factor authentication. The session still open in my regular browser is logged out. As far as Yahoo! is concerned, somebody from Russia just took over my account using only the weak SMS-based authentication, not knowing my password or even my name. Yet Yahoo! didn’t notice anything suspicious about this and didn’t feel any need for additional checks. But wait, there is a notification sent to the recovery email address!

Yahoo! notifying me about account takeover

Hey, big thanks Yahoo! for carefully documenting the issue. But you could have shortened the bottom part as “If this wasn’t you then we are terribly sorry but the horse has already left the barn.” If somebody took over my account and changed my password, I’ll most likely not get a chance to review my email addresses and phone numbers any more.

Aren’t phone numbers verified?

Now you are probably wondering: who is that other “X Y” account? Is that my test account? No, it’s not. It’s some poor soul who somehow managed to enter my phone number as their recovery phone. Given that this phone number has many repeating digits, it’s not too surprising that somebody typed it in merely to avoid Yahoo! nagging them. The other detail is surprising however: didn’t they have to verify that they actually own this number?

Now I had to go back to Yahoo!’s nag screen:

Yahoo! asking for a recovery phone number on login

If I enter a phone number into that text field and click the small “Add” link below it, the next step will require entering a verification code that I receive via SMS. However, if I click the much bigger and more obvious “Add email or mobile no.” button, it will bring me to another page where I can enter my phone number. And there the phone number will be added immediately, with the remark “Not verified” and a suggestion to verify it later. Yet the missing verification won’t prevent the phone number from being used in account recovery.

With the second flow being the more obvious one, I suspect that a large portion of Yahoo! and AOL users never verified that they actually own the phone number they set as their recovery phone. They might have made a typo, or they might have simply invented a number. These accounts can be compromised by the rightful owner of that number at any time, and Verizon Media will just let them.

What does Verizon Media think about that?

Do the developers at Verizon Media realize that their trade-off is tilted way too much towards convenience and sacrifices security as a result? One would think so, at the very least after a big name like Brian Krebs wrote about this issue. Then again, having dealt with the bureaucratic monstrosity that is Yahoo! even before they got acquired, I was willing to give them the benefit of the doubt.

Of course, there is no easy way of reaching the right people at Yahoo!. Their own documentation suggests reporting issues via their HackerOne bug bounty program, and I gave it a try. Despite my explicitly stating that the point was making the right team at Verizon aware of the issue, my report was immediately closed as a duplicate by HackerOne staff. The other report (filed last summer) was also closed by HackerOne staff, stating that exploitation potential wasn’t proven. There is no indication that either report ever made it to the people responsible.

So it seems that the only way of getting a reaction from Verizon Media is by asking publicly and having as many people as possible chime in. Google and Microsoft make account recovery complicated for a reason: the weakest factor alone is not enough there. So Verizon Media, why don’t you? Do you care so little about security?

Allen Wirfs-BrockTeaser—JavaScript: The First 20 Years

Our HOPL paper is done—all 190 pages of it. The preprint will be posted this week.  In the meantime, here’s a little teaser.

JavaScript: The First 20 Years
By Allen Wirfs-Brock and Brendan Eich


In 2020, the World Wide Web is ubiquitous with over a billion websites accessible from billions of Web-connected devices. Each of those devices runs a Web browser or similar program which is able to process and display pages from those sites. The majority of those pages embed or load source code written in the JavaScript programming language. In 2020, JavaScript is arguably the world’s most broadly deployed programming language. According to a Stack Overflow [2018] survey, it is used by 71.5% of professional developers, making it the world’s most widely used programming language.

This paper primarily tells the story of the creation, design, and evolution of the JavaScript language over the period of 1995–2015. But the story is not only about the technical details of the language. It is also the story of how people and organizations competed and collaborated to shape the JavaScript language which dominates the Web of 2020.

This is a long and complicated story. To make it more approachable, this paper is divided into four major parts—each of which covers a major phase of JavaScript’s development and evolution. Between each of the parts there is a short interlude that provides context on how software developers were reacting to and using JavaScript.

In 1995, the Web and Web browsers were new technologies bursting onto the world, and Netscape Communications Corporation was leading Web browser development. JavaScript was initially designed and implemented in May 1995 at Netscape by Brendan Eich, one of the authors of this paper. It was intended to be a simple, easy to use, dynamic language that enabled snippets of code to be included in the definitions of Web pages. The code snippets were interpreted by a browser as it rendered the page, enabling the page to dynamically customize its presentation and respond to user interactions.

Part 1, The Origins of JavaScript, is about the creation and early evolution of JavaScript. It examines the motivations and trade-offs that went into the development of the first version of the JavaScript language at Netscape. Because of its name, JavaScript is often confused with the Java programming language. Part 1 explains the process of naming the language, the envisioned relationship between the two languages, and what happened instead. It includes an overview of the original features of the language and the design decisions that motivated them. Part 1 also traces the early evolution of the language through its first few years at Netscape and other companies.

A cornerstone of the Web is that it is based upon non-proprietary open technologies. Anybody should be able to create a Web page that can be hosted by a variety of Web servers from different vendors and accessed by a variety of browsers. A common specification facilitates interoperability among independent implementations. From its earliest days it was understood that JavaScript would need some form of standard specification. Within its first year Web developers were encountering interoperability issues between Netscape’s JavaScript and Microsoft’s reverse-engineered implementation. In 1996, the standardization process for JavaScript was begun under the auspices of the Ecma International standards organization. The first official standard specification for the language was issued in 1997 under the name “ECMAScript”. Two additional revised and enhanced editions, largely based upon Netscape’s evolution of the language, were issued by the end of 1999.

Part 2, Creating a Standard, examines how the JavaScript standardization effort was initiated, how the specifications were created, who contributed to the effort, and how decisions were made.

By the year 2000, JavaScript was widely used on the Web but Netscape was in rapid decline and Eich had moved on to other projects. Who would lead the evolution of JavaScript into the future? In the absence of either a corporate or individual “Benevolent Dictator for Life,” the responsibility for evolving JavaScript fell upon the ECMAScript standards committee. This transfer of design responsibility did not go smoothly. There was a decade-long period of false starts, standardization hiatuses, and misdirected efforts as the ECMAScript committee tried to find its own path forward evolving the language. All the while, actual usage of JavaScript rapidly grew, often using implementation-specific extensions. This created a huge legacy of unmaintained JavaScript-dependent Web pages and revealed new interoperability issues. Web developers began to create complex client-side JavaScript Web applications and were asking for standardized language enhancements to support them.

Part 3, Failed Reformations, examines the unsuccessful attempts to revise the language, the resulting turmoil within the standards committee, and how that turmoil was ultimately resolved.

In 2008 the standards committee restored harmonious operations and was able to create a modestly enhanced edition of the standard that was published in 2009.  With that success, the standards committee was finally ready to successfully undertake the task of compatibly modernizing the language. Over the course of seven years the committee developed major enhancements to the language and its specification. The result, known as ECMAScript 2015, is the foundation for the ongoing evolution of JavaScript. After completion of the 2015 release, the committee again modified its processes to enable faster incremental releases and now regularly completes revisions on a yearly schedule.

Part 4, Modernizing JavaScript, is the story of the people and processes that were used to create both the 2009 and 2015 editions of the ECMAScript standard. It covers the goals for each edition and how they addressed evolving needs of the JavaScript development community. This part examines the significant foundational changes made to the language in each edition and important new features that were added to the language.

Wherever possible, the source materials for this paper are contemporaneous primary documents. Fortunately, these exist in abundance. The authors have ensured that nearly all of the primary documents are freely and easily accessible on the Web from reliable archives using URLs included in the references. The primary document sources were supplemented with interviews and personal communications with some of the people who were directly involved in the story. Both authors were significant participants in many events covered by this paper. Their recollections are treated similarly to those of the third-party informants.

The complete twenty-year story of JavaScript is long and so is this paper. It involves hundreds of distinct events and dozens of individuals and organizations. Appendices A through E are provided to help the reader navigate these details. Appendices A and B provide annotated lists of the people and organizations that appear in the story. Appendix C is a glossary that includes terms which are unique to JavaScript, or used with meanings that may be different from common usage within the computing community in 2020, or whose meaning might change or become unfamiliar for future readers. The first use within this paper of a glossary term is usually italicized and marked with a “g” superscript. Appendix D defines abbreviations that a reader will encounter. Appendix E contains four detailed timelines of events, one for each of the four parts of the paper.

The Firefox FrontierMeet the women who man social media brand accounts

Being online can be an overwhelming experience especially when it comes to misinformation, toxicity and inequality. Many of us have the option to disconnect and retreat from it all when … Read more

The post Meet the women who man social media brand accounts appeared first on The Firefox Frontier.

Cameron KaiserTenFourFox FPR20 available

TenFourFox Feature Parity Release 20 final is now available for testing (downloads, hashes, release notes). This version is the same as the beta except for one more tweak to fix the preferences for those who prefer to suppress Reader mode. Assuming no issues, it will go live Monday evening Pacific as usual.

I have some ideas for FPR21, including further updates to Reader mode, AltiVec acceleration for GCM (improving TLS throughput) and backporting later improvements to 0RTT, but because of a higher than usual workload it is possible development may be stalled and the next release will simply be an SPR. More on that once I get a better idea of the timeframes necessary.

Giorgio MaoneA cross-browser code library for security/privacy extensions. Interested?

The problem

Google's ongoing "Manifest V3" API changes are severely hampering browser extensions in their ability to block unwanted content and to enforce additional security policies, threatening the usefulness, if not the very existence, of many popular privacy and security tools. uBlock's developer made clear that this will cause him to cease supporting Chromium-based browsers. The EFF (which develops extensions such as HTTPS Everywhere and Privacy Badger) has also publicly stigmatized Google's decisions, questioning both their consequences and their motivations.

NoScript is gravely affected too, although its position is not as dire as others': in fact, I finished porting it to Chromium-based browsers at the beginning of 2019, when Manifest V3 had already been announced. Therefore, in the late stages of that project and beyond, I've spent considerable time researching and experimenting with alternate techniques, mostly based on standardized Web Platform APIs and thus unaffected by Manifest V3, allowing me to implement comparable NoScript functionality, albeit at the price of added complexity and/or performance costs. Furthermore, Mozilla developers stated that, even though staying as compatible as possible with the Chrome extensions API is a goal of theirs, they do not plan to follow Google in those choices which are more disruptive for content blockers (such as the deprecation of blocking webRequest).

While this means that the future of NoScript is relatively safe, on Firefox and the Tor Browser at least, the browser extensions APIs and capabilities are going to diverge even more: developing and maintaining a cross-browser extension, especially if privacy and/or security focused, will become a complexity nightmare, and sometimes an impossible puzzle: unsurprisingly, many developers are ready to throw in the towel.

What would I do?

NoScript Commons Library

The collection of alternate content interception/blocking/filtering techniques I've experimented with, and am still researching, in order to overcome the severe limitations imposed by Manifest V3 is, in its current form, best defined as "a bunch of hacks": hardly maintainable, and even less reusable by the many projects which are facing similar hurdles. What I'd like to do is to refine, restructure and organize them into an open source NoScript Commons Library. It will provide an abstraction layer on top of common functionality needed to implement in-browser security and privacy software tools.

The primary client of the library will be obviously NoScript itself, refactored to decouple its core high-level features from their browser-dependent low-level implementation details, becoming easier to isolate and manage. But this library will also be freely available (under the General Public License) in a public code repository which any developer can reuse as it is or improve/fork/customize according to their needs, and hopefully contribute back to.

What do I hope?

Some of the desired outcomes:

  • By refactoring its browser-dependent "hacks" into a Commons Library, NoScript manages to keep its recently achieved cross-browser compatibility while minimizing the cross-browser maintenance burden and the functionality loss coming from Manifest V3, and mitigating the risk of bugs, regressions and security flaws caused by platform-specific behaviors and unmanageable divergent code paths.
  • Other browser extensions in the same privacy/security space as NoScript are offered similar advantages by a toolbox of cross-browser APIs and reusable code, specific to their application domain. This can also motivate their developers (among the most competent people in this field) to scrutinize, review and improve this code, leading to a less buggy, safer and overall healthier privacy and security browser extensions ecosystem.
  • Clearly documenting and benchmarking the unavoidable differences between browser-specific implementations help users make informed choices based on realistic expectations, and pressure browser vendors into providing better support (either natively or through enhanced APIs) for the extensions-provided features which couldn't be optimized for their product. This will clearly outline, in a measurable way, the difference in commitment for a striving ecosystem of in-browser security/privacy solutions between Mozilla and other browser vendors, keeping them accountable.
  • Preserving a range of safe browsing options, beyond Firefox-based clients, increases the diversity in the "safe browsing" ecosystem, making web-based attacks significantly more difficult and costly than they are in a Firefox-based Tor Browser mono-culture.

I want you!

Are you an extensions developer, or otherwise interested in in-browser privacy/security tools? I'd be very grateful to know your thoughts, and especially:

  1. Do you think this idea is useful / worth pursuing?
  2. What kind of features would you like to see supported? For instance, content interception and contextual blocking, filtering, visual objects replacement (placeholders), missing behavior replacement (script "surrogates"), user interaction control (UI security)...
  3. Would you be OK with an API and documentation style similar to what we have for Firefox's WebExtensions?
  4. How likely would you be to use such a library (either for an existing or for a new project), and/or to contribute to it?

Many thanks in advance for your feedback!

Mike HoyeBrace For Impact

I don’t spend a lot of time in here patting myself on the back, but today you can indulge me.

In the last few weeks it was a ghost town, and that felt like a victory. From a few days after we’d switched it on to Monday, I could count the number of human users on any of our major channels on one hand. By the end, apart from one last hurrah the hour before shutdown, there was nobody there but bots talking to other bots. Everyone – the company, the community, everyone – had already voted with their feet.

About three weeks ago, after spending most of a month shaking out some bugs and getting comfortable in our new space we turned on federation, connecting Mozilla to the rest of the Matrix ecosystem. Last Monday we decommissioned IRC.Mozilla.org for good, closing the book on a 22-year-long chapter of Mozilla’s history as we started a new one in our new home on Matrix.

I was given this job early last year but the post that earned it, I’m guessing, was from late 2018:

I’ve mentioned before that I think it’s a mistake to think of federation as a feature of distributed systems, rather than as consequence of computational scarcity. But more importantly, I believe that federated infrastructure – that is, a focus on distributed and resilient services – is a poor substitute for an accountable infrastructure that prioritizes a distributed and healthy community. […] That’s the other part of federated systems we don’t talk about much – how much the burden of safety shifts to the individual.

Some inside baseball here, but if you’re wondering: that’s why I pushed back on the idea of federation from the beginning, for all the invective that earned me. That’s why I refused to include it as a requirement and held the line on that for the entire process. The fact is that on classically federated systems, distributed access and non-accountable administration mean that the burden of personal safety falls entirely on the individual. That’s not a unique artifact of federated systems, of course – Slack doesn’t think you should be permitted to protect yourself either, and they’re happy to wave vaguely in the direction of some hypothetical HR department and pretend that keeps their hands clean, as just one example of many – but it’s structurally true of old-school federated systems of all stripes. And bluntly, I refuse to let us end up in a place where asking somebody to participate in the Mozilla project is no different from asking them to walk home at night alone.

And yet here we are, opting into the Fediverse. It’s not because I’ve changed my mind.

One of the strongest selling points of Matrix is the combination of powerful moderation and safety tooling that hosting organizations can operate, with robust tools for personal self-defense available in parallel. Critically, these aren’t half-assed tools that have been grafted on as an afterthought; they’re first-class features, robust enough that we can not only deploy them with confidence, but can reasonably be held accountable by our colleagues and community for their use. In short, we can now have safe, accountable infrastructure that complements, rather than comes at the cost of, individual user agency.

That’s not the best thing, though, and I’m here to tell you about my favorite Matrix feature that nobody knows about: Federated auto-updating blocklist sharing.

If you decide you trust somebody else’s decisions, at some other organization – their judgment calls about who is and is not welcome there – those decisions can be immediately and automatically reflected in your own. When a site you trust drops the hammer on some bad actor, that ban can be adopted almost immediately by your site and your community as well. You don’t have to have ever seen that person or have whatever got them banned hit you in the eyes. You don’t even need to know they exist. All you need to do is decide you trust that other site’s judgment, and magically someone who is persona non grata on their site is precisely that on yours.

Another way to say that is: among people or communities who trust each other in these decisions, an act of self-defense becomes, seamlessly and invisibly, an act of collective defense. No more everyone needing to fight their own fights alone forever, no more getting isolated and picked off one at a time, weakest first; shields-up means shields-up for everyone. Effective, practical defensive solidarity; it’s the most important new idea I’ve seen in social software in years. Every federated system out there should build out its own version, and it’s very clear to me, at least, that this is going to be the table stakes of a federated future very soon.

So I feel pretty good about where we’ve ended up, and where we’re going.

In the long term, I see that as the future of Mozilla’s responsibility to the Web; not here merely to protect the Web, not merely to defend your freedom to participate in the Web, but to mount a positive defense of people’s opportunities to participate. And on the other side of that coin, to build accountable tools, systems and communities that promise not only freedom from arbitrary harassment, but even freedom from the possibility of that harassment.

I’ve got a graph here that’s pointing up and to the right, and it’s got nothing to do with scraping fractions of pennies out of rageclicks and misery; just people making a choice to go somewhere better, safer and happier. Maybe, just maybe, we can salvage this whole internet thing. Maybe all is not yet lost, and the future is not yet written.

The Mozilla BlogGetting Closer on Dot Org?

Over the past few months, we’ve raised concerns about the Internet Society’s plan to sell the non-profit Public Interest Registry (PIR) to Ethos Capital. Given the important role of dot org in providing a platform for free and open speech for non-profits around the world, we believe this deal deserves close scrutiny.

In our last post on this issue, we urged ICANN to take a closer look at the dot org sale. And we called on Ethos and the Internet Society to move beyond promises of accountability by posting a clear stewardship charter for public comment. As we said in our last post:

One can imagine a charter that provides the council with broad scope, meaningful independence, and practical authority to ensure PIR continues to serve the public benefit. One that guarantees Ethos and PIR will keep their promises regarding price increases, and steer any additional revenue from higher prices back into the dot org ecosystem. One that enshrines quality service and strong rights safeguards for all dot orgs. And one that helps ensure these protections are durable, accounting for the possibility of a future resale.

On February 21, Ethos and ISOC posted two proposals that address many concerns Mozilla and others have raised. The proposals include: 1. a charter for a stewardship council, including sections on free expression and personal data; and 2. an amendment to the contract between PIR and ICANN (aka a Public Interest Commitment), touching on price increases and the durability of the stewardship council. Ethos and ISOC also announced a public engagement process to gather input on these proposals.

These new proposals get a number of things right, but they also leave us with some open questions.

What do they get right? First, the proposed charter gives the stewardship council the veto power over any changes that PIR might want to make to the freedom of expression and personal data rules governing dot org domains. These are two of the most critical issues that we’d urged Ethos and ISOC to get specific about. Second, the proposed Public Interest Commitment provides ICANN with forward-looking oversight over dot org. It also codifies both the existence of the stewardship council and a price cap for dot org domains. We’d suggested a modification to the contract between PIR and ICANN in one of our posts. It was encouraging to see this suggestion taken on board.

Yet questions remain about whether the proposals will truly provide the level of accountability the dot org community deserves.

The biggest question: with PIR having the right to make the initial appointments and to veto future members, will the stewardship council really be independent? The fact that the council alone can nominate future members provides some level of independence, but that independence could be compromised by the fact that PIR will make all the initial nominations and that its board will hold veto authority over appointments. While it makes sense for PIR to have a significant role, the veto power should be cabined in some way. For example, the charter might call for the stewardship council to propose a larger slate of candidates and give the PIR board the option to veto candidates down to the number of positions to fill, should it so choose. And, to address the first challenge, maybe a long-standing and trusted international non-profit could nominate the initial council instead of PIR?

There is also a question about whether the council will have enough power to truly provide oversight of dot org freedom of expression and personal data policies. The charter requires a supermajority — five out of seven members — for council vetoes of changes to PIR’s freedom of expression and data policies. Why not make it a simple majority?

There are a number of online meetings happening over the next week during ICANN 67, including a meeting of the Governmental Advisory Committee. Our hope is that these meetings will provide an opportunity for the ICANN community to raise questions about the dot org sale, including questions like these. We also hope that the public consultation process that Ethos and ISOC are running over the coming week will generate useful ideas on how these questions might be answered.

Mozilla will continue to keep an eye on the sale as this unfolds, with a particular eye to ensuring that, if the deal goes ahead, the stewardship council has the independence and authority needed to protect the dot org community.

The post Getting Closer on Dot Org? appeared first on The Mozilla Blog.

About:CommunityFirefox 74 new contributors

With the release of Firefox 74, we are pleased to welcome the 29 developers who contributed their first code change to Firefox in this release, 27 of whom were brand new volunteers! Please join us in thanking each of these diligent and enthusiastic individuals, and take a look at their contributions:

Karl DubostWeek notes - 2020 w08 - worklog - pytest is working

(late publishing on March 6, 2020)


Last Friday, Monday and Tuesday led to some interesting new issues.


A couple of months ago I had a discussion about unittests and nosetests with friends. On webcompat.com we are using nose (nosetests) to run the tests. My friends encouraged me to switch to pytest. I didn’t yet have the motivation to do it. But last week, I started to look at it. And this week I finally landed the pull request for switching to pytest. I didn’t convert all tests. This will be done little by little when touching the specific tests. The tests run as-is, so there’s not much benefit to changing them at this point.

Hello modernity.


Mozilla Open Policy & Advocacy BlogMozilla Statement on EARN IT Act

On March 5th, Sens. Lindsey Graham (R-SC) and Richard Blumenthal (D-CT) introduced the Eliminating Abusive and Rampant Neglect of Interactive Technologies (EARN IT) Act. The bill would threaten free speech on the internet and make the internet less secure by undermining strong encryption.

While balancing the needs of national and individual security in today’s fragile cybersecurity landscape is challenging, this bill creates problems rather than offering a solution. 

The law enforcement community has made it clear this law is another attempt to weaken the encryption that is the bedrock of digital security. Encryption ensures our information — from our sensitive financial and medical details to emails and text messages — is protected. Without it, the world is a far more dangerous place.

While well-intentioned, the EARN IT Act would cause great harm to the open internet and put everyday Americans at greater risk. We look forward to working with Sens. Graham and Blumenthal and their respective committees to find better ideas to create a safer and more secure internet for everyone.

The post Mozilla Statement on EARN IT Act appeared first on Open Policy & Advocacy.

Mozilla VR BlogJumpy Balls, a little demo Showcasing ecsy-three

If you have a VR headset, go to https://mixedreality.mozilla.org/jumpy-balls/ and try the game!

Jumpy Balls is a simple game where you have to guide balls shot from a cannon towards a target. Use your controllers to drag blocks with different physical properties, making the balls bounce and reach their goal.

Behind Jumpy Balls

Developing a 3D application is a complex task. While 3D engines like three.js provide a solid foundation, there are still many different systems that must work together (e.g. app state, flow, logic, collisions, physics, UI, AI, sound…), and you probably do not want to rebuild all of this from scratch on each project.

Also, when creating a small experiment or simple technical demo (like those at https://threejs.org/examples), disparate parts of an application can be managed in an ad-hoc manner. But as the software grows with more interactions, a richer UI, and more user feedback, the increased complexity demands a better strategy to coordinate modules and state.

We created ECSY to help organize all of this architecture of more complex applications and ECSY-Three to ease the process when working with the three.js engine.
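
To make the pattern concrete, here is a toy sketch of the entity-component-system idea that ECSY formalizes. This is not ECSY's actual API; all the names here (ToyWorld, Position, Velocity, movementSystem) are invented for illustration:

```javascript
// Entities are bags of components, components are plain data,
// and systems are logic that runs over entities with matching components.
class ToyWorld {
  constructor() {
    this.entities = [];
    this.systems = [];
  }
  createEntity() {
    const entity = {
      components: new Map(),
      addComponent(Type, data = {}) {
        this.components.set(Type, { ...data });
        return this; // allow chaining
      },
    };
    this.entities.push(entity);
    return entity;
  }
  registerSystem(system) {
    this.systems.push(system);
    return this;
  }
  execute(delta) {
    for (const system of this.systems) system.execute(this, delta);
  }
}

// Components hold data, nothing else.
class Position {}
class Velocity {}

// A system only touches the entities and data it declares an interest in.
const movementSystem = {
  execute(world, delta) {
    for (const e of world.entities) {
      if (e.components.has(Position) && e.components.has(Velocity)) {
        e.components.get(Position).x += e.components.get(Velocity).x * delta;
      }
    }
  },
};
```

Adding a feature then means adding a component and a system, rather than threading new state through existing classes.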

The technologies we used to develop Jumpy Balls were:

  • three.js as the rendering engine
  • ECSY for the entity-component-system architecture
  • ecsy-three to glue the two together

The ecsy-three Module

We created ecsy-three to facilitate developing applications using ECSY and three.js. It is a set of components, systems, and helpers that will (eventually) include a large number of reusable components for the most commonly used patterns when developing 3D applications.

The ecsy-three module exports an initialize() function that creates the entities that you will commonly need in a three.js application: scene, camera and renderer.

// Create a new world to hold all our entities and systems
// (World comes from ecsy, initialize from ecsy-three)
world = new World();
// Initialize the default set of entities and systems
let data = initialize(world, {vr: true});

And data will have the following structure, letting you access the created entities:

{
  entities: {
    scene,
    camera,
    renderer
    // ...plus the other helper entities created by initialize()
  }
}

It will also initialize internally the systems that will take care of common tasks such as updating the transforms of the objects, listening for changes on the cameras, and of course, rendering the scene.

Note that you can always modify that behaviour if needed and create the entities yourself.

Once you get the initialized entities, you can modify them. For example, changing the camera position:

// Grab the initialized entities
let {scene, renderer, camera} = data.entities;

// Modify the position for the default camera
let transform = camera.getMutableComponent(Transform);
transform.position.z = 40;

You could then create a textured box using the standard three.js API:

// Create a three.js textured box
var texture = new THREE.TextureLoader().load( 'textures/crate.gif' );
var geometry = new THREE.BoxBufferGeometry( 20, 20, 20 );
var material = new THREE.MeshBasicMaterial( { map: texture } );
mesh = new THREE.Mesh( geometry, material );

And to include that box into the ECSY world, we just create a new entity and attach the Object3D component to it:

var rotatingBox = world.createEntity()
   .addComponent(Object3D, { value: mesh })

Once we have the entity created, we can add new components to modify its behaviour, for example a custom component called Rotating that will be used to identify all the objects that should rotate in our scene. A marker component like this carries no data of its own, so it can simply extend ECSY's TagComponent:

class Rotating extends TagComponent {}

rotatingBox.addComponent(Rotating);

And now we can implement a system that queries the entities with a Rotating component and rotates them:

class RotationSystem extends System {
  execute(delta) {
    this.queries.entities.results.forEach(entity => {
      var rotation = entity.getMutableComponent(Transform).rotation;
      rotation.x += 0.5 * delta;
      rotation.y += 0.1 * delta;
    });
  }
}

RotationSystem.queries = {
  entities: {
    components: [Rotating, Transform]
  }
};

For more info please visit the ecsy-three repository.

At this point in the development of ecsy-three, we have not defined yet all the components and systems that will be part of it, as we are following these goals:

  • Keep the components and systems as simple as possible, implementing just the minimum functionality, so they can be used as small building blocks and combined with each other into more complex behaviour, rather than shipping large, highly opinionated components and systems.
  • Add components and systems as we need them, based on the real needs of real examples like Jumpy Balls, rather than engineering them prematurely.
  • Favour verbosity over unnecessary abstraction and syntactic sugar.

This does not mean that in the future ecsy-three will not contain larger, more abstract and complex modules, or syntactic sugar that speeds up programming with ecsy-three; that moment simply has not arrived yet, as we are still at an early stage.

What’s next

We also plan to invest in the ecosystem around Blender, so that the integration between Blender, ECSY components and three.js becomes more comfortable and transparent, avoiding unneeded intermediate steps or “hacks” between creating an asset in Blender and using it in our application.

Feel free to join the discussions at:

Mozilla Addons BlogExtensions in Firefox 74

Welcome to another round of updates from Firefox Add-ons in Firefox 74. Here is what our community has been up to:

  • Keyboard shortcuts using the commands API can now be unset by setting them to an empty string using commands.update. Users can also do so manually via the new shortcut removal control at about:addons. (Thanks, Rob)
  • There were some issues with long browserAction badge texts wrapping incorrectly. The badge text supports three characters; a fourth may fit depending on the letters used, and everything else will be cropped. Please keep this in mind when setting your badge text. (Thank you, Brian)
  • The global theme is no longer reset via themes.reset unless the current global theme was created by the extension. (Kudos to you, Ajitesh)
  • An urlClassification value was added to webRequest to give insight into how the URLs were classified by Firefox. (Hurrah, Shane)
  • The extensions.webextensions.remote preference will only be read once. If you are changing this preference, the browser needs to be restarted for it to apply. This preference is used to disable out-of-process extensions, which is an unsupported configuration. The preference will be removed in a future update (bug 1613141).

We’ll be back for more in a few weeks when Firefox 75 is on the horizon. If you’d like to help make this list longer, please consider contributing to add-ons in Firefox. I’d be excited to feature your changes next time.

The post Extensions in Firefox 74 appeared first on Mozilla Add-ons Blog.

The Firefox FrontierFour tips to refresh your relationship status with social media

Can you even remember a world before selfies or memes? Things have escalated quickly. Social media has taken over our lives and, for better or worse, become an extension of … Read more

The post Four tips to refresh your relationship status with social media appeared first on The Firefox Frontier.

Mike HommeyStanding up the Cross-Compilation of Firefox for Windows on Linux

I’ve spent the past few weeks, and will spend the next few weeks, setting up cross-compiled builds of Firefox for Windows on Linux workers on Mozilla’s CI. Following is a long wall of text, if that’s too much for you, you may want to check the TL;DR near the end. If you’re a Windows user wondering about the Windows Subsystem for Linux, please at least check the end of the post.

What is it?

Traditionally, compiling software happens mostly on the platform it is going to run on. Obviously, this becomes less true when you’re building software that runs on smartphones, because you’re usually not developing on said smartphone. This is where Cross-Compilation comes in.

Cross-Compilation is compiling for a platform that is not the one you’re compiling on.

Cross-Compilation is less frequent for desktop software, because most developers will be testing the software on the machine they are building it with, which means building software for macOS on a Windows PC is not all that interesting to begin with.

Continuous Integration, on the other hand, in the era of “build pipelines”, doesn’t necessarily care that the software is built in the same environment as the one it runs on, or is being tested on.

But… why?

Five years ago or so, we started building Firefox for macOS on Linux. The main drivers, as far as I can remember, were resources and performance, and they were both tied: the only (legal) way to run macOS in a datacenter is to rack… Macs. And it’s not like Apple had been producing rackable, server-grade, machines. Okay, they have, but that didn’t last. So we were using aging Mac minis. Switching to Linux machines led to faster compilation times, and allowed us to recycle the Mac minis to grow the pool running tests.

But, you might say, Windows runs on standard, rackable, server-grade machines. Or on virtually all cloud providers. And that is true. But for the same hardware, it turns out Linux performs better (more on that below), and the cost per hour per machine is also increased by the Windows license.

But then… why only now?

Firefox has a legacy of more than 20 years of development. That shows in its build system. All the things that allow cross-compiling Firefox for Windows on Linux only lined up recently.

The first of them is the compiler. You might interject with “mingw something something”, but the reality is that binary compatibility for accessibility (screen readers, etc.) and plugins (Flash is almost dead, but not quite) required Microsoft Visual C++ until recently. What changed the deal is clang-cl, and Mozilla stopped using MSVC for the builds of Firefox it ships as of Firefox 63, about 20 months ago. Another is the process of creating the symbol files used to process crash reports, which was using one of the tools from breakpad to dump the debug info from PDB files in the right format. Unfortunately, that was using a Windows DLL to do so. What recently changed is that we now have a platform-independent tool to do this that doesn’t require that DLL. And to place credit where credit is due, this was thanks to the people from Sentry providing Rust crates for most of the pieces necessary to do so.

Another is the build system itself, which assumed in many places that building for Windows meant you were on Windows, which doesn’t help cross-compiling for Windows. But worse than that, it also assumed that the compiler was similar. This worked fine when cross-compiling for Android or macOS on Linux, because compiling tools for the build itself (most notably a clang plugin) and compiling Firefox use compatible compilers that take the same kind of arguments. The story is different when one of the compilers is clang, which has command line arguments like GCC, and the other is clang-cl, which has command line arguments like MSVC. This changed recently with work to allow building Android Geckoview on Windows (I’m not entirely sure all the pieces for that are there just yet, but the ones in place surely helped me; I might have inadvertently broken some things, though).

So how does that work?

The above is unfortunately not the whole story, so when I started looking a few weeks ago, the idea was to figure out how far off we were, and what kind of shortcuts we could take to make it happen.

It turns out we weren’t that far off, and for a few things, we could work around by… just running the necessary Windows programs with Wine with some tweaks to the build system (Ironically, that means the tool to create symbol files didn’t matter). For others… more on that further below.

But let’s start looking how you could try this for yourself, now that blockers have been fixed.

First, what do you need?

  • A copy of Microsoft Visual C++. Unfortunately, we still need some of the tools it contains, like the assembler, as well as the platform development files.
  • A copy of the Windows 10 SDK.
  • A copy of the Windows Debug Interface Access (DIA) SDK.
  • A good old VFAT filesystem, large enough to hold a copy of all the above.
  • A WOW64-supporting version of Wine (wine64).
  • A full install of clang, including clang-cl (it usually comes along).
  • A copy of the Windows version of clang-cl (yes, both a Linux clang-cl and a Windows clang-cl are required at the moment, more on this further below).

Next, you need to setup a .mozconfig that sets the right target:

ac_add_options --target=x86_64-pc-mingw32

(Note: the target will change in the future)

You also need to set a few environment variables:

  • WINDOWSSDKDIR, with the full path to the base of the Windows 10 SDK in your VFAT filesystem.
  • DIA_SDK_PATH, with the full path to the base of the Debug Interface Access SDK in your VFAT filesystem.

You also need to ensure all the following are reachable from your $PATH:

  • wine64
  • ml64.exe (somewhere in the copy of MSVC in your VFAT filesystem, under a Hostx64/x64 directory)
  • clang-cl.exe (you also need to ensure it has the executable bit set)

And I think that’s about it. If not, please leave a comment or ping me on Matrix (@glandium:mozilla.org), and I’ll update the instructions above.
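
Putting the steps above together, a setup script might look like the following. Every path here is an illustrative assumption (adjust them to wherever your VFAT filesystem is mounted); only the --target line comes straight from the instructions above:

```shell
# Assumed mount point of the VFAT filesystem holding the Windows bits.
VFAT=/mnt/vfat

# Environment variables the build system reads.
export WINDOWSSDKDIR="$VFAT/windows-sdk"
export DIA_SDK_PATH="$VFAT/dia-sdk"

# wine64, ml64.exe and clang-cl.exe must all be reachable from $PATH.
export PATH="$VFAT/msvc/bin/Hostx64/x64:$VFAT/clang-win:$PATH"

# Minimal .mozconfig setting the cross-compilation target.
cat > mozconfig <<'EOF'
ac_add_options --target=x86_64-pc-mingw32
EOF
```

With that in place, ./mach build picks up the target and environment as described above.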

With an up-to-date mozilla-central, you should now be able to use ./mach build, and get a fresh build of Firefox for 64-bits Windows as a result (Well, not right now as of writing, the final pieces only just landed on autoland, they will be on mozilla-central in a few hours).

What’s up with that VFAT filesystem?

You probably noticed I was fairly insistent about some things being in a VFAT filesystem. The reason is filesystem case-(in)sensitivity. As you probably know, filesystems on Windows are case-insensitive. If you create a file Foo, you can access it as foo, FOO, fOO, etc.

On Linux, filesystems are most usually case-sensitive. So when some C++ file contains #include "windows.h" and your filesystem actually contains Windows.h, things don’t align right. Likewise when the linker wants kernel32.lib and you have kernel32.Lib.

Ext4 recently gained some optional case-insensitivity, but it requires a very recent kernel, and doesn’t work on existing filesystems. VFAT, however, as supported by Linux, has always(?) been case-insensitive. It is the simpler choice.

There’s another option, though, in the form of FUSE filesystems that wrap an existing directory to expose it as case-insensitive. That’s what I tried first, actually. CIOPFS does just that, with the caveat that you need to start from an empty directory, or an all-lowercase directory, because files with any uppercase characters in their name in the original directory don’t appear in the mountpoint at all. Unfortunately, the last version, from almost 9 years ago, doesn’t withstand parallelism: when several processes access files under the mountpoint, one or several of them get failures they wouldn’t otherwise get if they were working alone. So during my first attempts cross-building Firefox I was actually using -j1. Needless to say, the build took a while, but it also made it more obvious when I hit something broken that needed fixing.

Now, on Mozilla CI, we can’t really mount a VFAT filesystem or use FUSE filesystems that easily. Which brings us to the next option: LD_PRELOAD. LD_PRELOAD is an environment variable that can be set to tell the dynamic loader (ld.so) to load a specified library when loading programs. Which in itself doesn’t do much, but the symbols the library exposes will take precedence over similarly named symbols from other libraries. Such as libc.so symbols. This makes it possible to divert e.g. open, opendir, etc. See where this is going? The library can divert the functions programs use to access files and change the paths the programs are trying to use on the fly.

Such libraries do exist, but I had issues with the few I tried. The most promising one was libcasefold, but building its dependencies turned out to be more work than it should have been, and the hooking it does via libsyscall_intercept is more hardcore than what I’m talking about above, and I wasn’t sure we wanted to support something that hotpatches libc.so machine code at runtime rather than divert it.

The result is that we now use our own, written in Rust (because who wants to write bullet-proof path munging code in C?). It can be used instead of a VFAT filesystem in the setup described above.

So what’s up with needing clang-cl.exe?

One of the tools Firefox needs to build is the MIDL compiler. To do its work, the MIDL compiler uses a C preprocessor, and the Firefox build system makes it use clang-cl. Something amazing that I discovered while working on this is that Wine actually supports executing Linux programs from Windows programs. So it looked like it was going to be possible to use the Linux clang-cl for that. Unfortunately, that doesn’t quite work the same way executing a Windows program does from the parent process’s perspective, and the MIDL compiler ended up being unable to read the output from the preprocessor.

Technically speaking, we could have made the MIDL compiler use MSVC’s cl.exe as a preprocessor, since it conveniently is in the same directory as ml64.exe, meaning it is already in $PATH. But that would have been a step backwards, since we specifically moved off cl.exe.

Alternatively, it is also theoretically possible to compile with --disable-accessibility to avoid requiring the MIDL compiler at all, but that currently doesn’t work in practice. And while that would help for local builds, we still want to ship Firefox with accessibility on.

What about those compilation times, then?

Past my first attempts at -j1, I was able to get a Windows build on my Linux machine in slightly less than twice the time for a Linux build, which doesn’t sound great. Several things factor in this:

  • the build system isn’t parallelizing many of the calls to the MIDL compiler, and in practice that means the build sits there doing only that and nothing else (there are some known inefficiencies in the phase where this runs).
  • the build system isn’t parallelizing the calls to the Effect compiler (FXC), and this has the same effect on build times as the MIDL compiler above.
  • the above two wouldn’t actually be that much of a problem if … Wine wasn’t slow. When running full fledged applications or games, it really isn’t, but there is a very noticeable overhead when running a lot of short-lived processes. That accumulates to several minutes over a full Firefox compilation.

That third point may or may not be related to the version of Wine available in Debian stable (what I was compiling on), or how it’s compiled, but some lucky accident made things much faster on my machine.

See, we actually already have some Windows cross-compilation of Firefox on Mozilla CI, using mingw. Those were put in place to avoid breaking Tor Browser, because that’s how they build for Windows, and because not breaking the Tor Browser is important to us. And those builds are already using Wine for the Effect compiler (FXC).

But the Wine they use doesn’t support WOW64. So one of the first things necessary to setup 64-bits Windows cross-builds with clang-cl on Mozilla CI was to get a WOW64-supporting Wine. Following the Wine build instructions was more or less straightforward, but I hit a snag: it wasn’t possible to install the freetype development files for both the 32-bits version and the 64-bits version because the docker images where we build Wine are still based on Debian 9 for reasons, and the freetype development package was not multi-arch ready on Debian 9, while it now is on Debian 10.

Upgrading to Debian 10 is most certainly possible, but that has a ton more implications than what I was trying to achieve warranted. You might ask “why are you building Wine anyway? You could use the Debian package”, to which I’d answer “good question; I actually don’t know. I presume the version in Debian 9 was too old (it is older than the one we build)”.

Anyways, in the moment, while I happened to be reading Wine’s configure script to get things working, I noticed the option --without-x and thought “well, we’re not running Wine for any GUI stuff, how about I try that, that certainly would make things easy”. YOLO, right?

Not only did it work, but testing the resulting Wine on my machine, compilation times were now down to only be 1 minute slower than a Linux build, rather than 4.5 minutes! That was surely good enough to go ahead and try to get something running on CI.

Tell us about those compilation times already!

I haven’t given absolute values so far, mainly because my machine is not representative (I’ll have a blog post about that soon enough; you may have heard about it on Twitter, IRC or Slack, but I won’t give more details here), and because the end goal here is Mozilla automation, for both the actual release of Firefox (still a long way to go there) and the Try server. Those are what matter most to my fellow developers. Also, I actually haven’t built under Windows on my machine for a fair comparison.

So here it comes:

Build times on CI

Let’s unwrap a little:

  • The yellowish and magenta points are native Windows “opt” builds, on two different kinds of AWS instances.
  • The other points are Cross-Compilations with the same “opt” configuration on three different kinds of AWS instances, one of which is the same as one used for Windows, and another one having better I/O than all the others (the cyan circles).
  • We use a tool to share a compilation cache between builds on automation (sccache), which explains the very noisy nature of the build times, because they depend on the amount of source code changes and of the cache misses they induce.
  • The Cross-Compiled builds were turned on around the 27th of February and started about as fast as the native Windows builds were at the beginning of the graph, but they had just seen a regression.
  • The regression was due to a recent change that made the clang plugin change in every build, which led to large numbers of cache misses.
  • After fixing the regression, the build times came back to their previous level on the native jobs.
  • Sccache handled clang-cl arguments in a way that broke cross-compilation, so when we turned on the cross-compiled jobs on automation, they actually had the cache turned off!
  • Let me state this explicitly because that wasn’t expected at all: the cross-compiled jobs WITHOUT a cache were as fast as native jobs WITH a cache!
  • A day later, after fixing sccache, we turned it on for the cross-compiled jobs, and build times dropped.
  • The week-end passed, and with more realistic work loads where actual changes to compiled code happen and invalidate parts of the cache, build times get more noisy but stay well under what they are on native Windows.

But the above only captures build times. On automation, a job does actually more than build. It also needs to get the source code, and install the tools needed to build. The latter is unfortunately not tracked at the moment, but the former is:

Clone times on CI

Now, for some explanation of the above graph:

  • The colors don’t match the previous graph. Sorry about that.
  • The colors vary by AWS instance type, and there is no actual distinction between Windows and Linux, so the instance type that is shared between them has values for both, which explains why it now looks bimodal.
  • It can be seen that the ones with better I/O (in red) are largely faster to get the source code, but also that for the shared instance type, Linux is noticeably faster.

It would be fair to say that independently of Windows vs. Linux, way too much time is spent getting the source code, and there’s other ongoing work to make things better.


Overall, the fast end of native Windows builds on Mozilla CI, including Try server, is currently around 45 minutes. That is the time taken by the entire job, and the minimum time between a developer pushing and Windows tests starting to run.

With Cross-Compilation, the fast end is, as of writing, 13 minutes, and can improve further.

As of writing, no actual Windows build job has switched over to Cross-compilation yet. Only an experimental, tier 2, job has been added. But the main jobs developers rely on on the Try server are going to switch real soon now™ (opt and debug for 32-bits, 64-bits and aarch64). Running all the test suites on Try against them yields successful results (modulo the usual known intermittent failures).

Actually shipping cross-compiled builds will take longer. We first need to understand the extent of the differences with the native builds and be confident that no subtle breakage happens. Also, PGO and LTO haven’t been tested so far. Everything will come in time.

What about Windows Subsystem for Linux (WSL)?

The idea to allow developers on Windows to build Firefox from WSL has floated for a while. The work to stand up Cross-compiled builds on automation has brought us the closest ever to actually being able to do it! If you’re interested in making it pass the finish line, please come talk to me in #build:mozilla.org on Matrix, there shouldn’t be much work left and we can figure it out (essentially, all the places using Wine would need to do something else, and… that’s it(?)). That should yield faster build times than natively with MozillaBuild.

Robert KaiserPicard Filming Sites: Season 1, Part 1

Ever since I was on a tour to Star Trek filming sites in 2016 with Geek Nation Tours and Larry Nemecek, I've become ever more interested in finding out to which actual real-world places TV/film crews have gone "on location" and shot scenes for our favorite on-screen stories. While the background of production of TV and film is of interest to me in general, I focus mostly on everything Star Trek and I love visiting locations they used and try to catch pictures that recreate the base setting of the shots in the production - but just the way the place looks "in the real world" and right now.
This has gone as far as me doing several presentations about the topic - two of which (one in German, one in English language) I will give at this year's FedCon as well, and creating an experimental website at filmingsites.com where I note all locations used in Star Trek productions as soon as I become aware of them.

In the last few years, around the Star Trek Las Vegas Conventions, I did get the chance to have a few days traveling around Los Angeles and vicinity, visit a few locations and take pictures there. And after Discovery being filmed up in the Toronto area (and generally using quite few locations outside the studios), Picard is back producing in Southern California and using plenty of interesting places! And now with the first half of season 1 in the books (or at least ready to watch for us via streaming), here are a few filming sites I found in those episodes:

Image No. 23473
And we actually get started with our first location (picture is a still from the series) in "Remembrance" right after Picard wakes up from the "cold open" dream sequence: Château Picard was filmed at Sunstone Winery's Villa this time (after different places were used in its TNG appearances). The Winery's general manager even said "We encourage all the Trekkies and Trekkers to come visit us." - so I guess I'll need to put it in my travel plans soon. :)

Another one I haven't seen yet but will need to put in my plans to see is One Culver, previously known as Sony Pictures Plaza. That's where the scenes in the Daystrom Institute were shot - interestingly, in walking distance to the location of the former Desilu Culver soundstages (now "The Culver Studios") and its backlot (now a residential area), where the original Star Trek series shot its first episodes and several outdoor scenes of later ones as well. One Culver's big glass front structure and the huge screen on its inside are clearly visible multiple times in Picard's Daystrom Institute scenes, as is the rainbow arch behind it on the Sony Studios parking lot. Not having been there, I could only include a promotional picture from their website here.
Image No. 23476

Now a third filming site that appears in "Remembrance" is actually one I do have my own pictures of: After seeing the first trailer for Picard and getting a hint about where the building depicted in that clip is, I made my way last summer to a place close to Disneyland and took a few pictures of Anaheim Convention Center. Walking up to the main entrance, I found the attached Arena to just look good, so I also got one shot of that one in - and then I see that in this episode, they used it as the Starfleet Archive Museum!
Of course, in the second episode, "Maps and Legends", we then see the main entrance, where Picard goes to meet the C-in-C, so presumably Starfleet headquarters. It looks like the roof scenes with Dahj would actually be on the same building: on satellite pictures, there seems to be an area with those stairs south of the main entrance. I'm still a bit sad though that Starfleet seems to have moved their headquarters and it's not the Tillman administration building any more that was used in previous series (actually, for both headquarters and the Academy - so maybe it comes back in some series as the Academy, with its beautiful Japanese garden).
Image No. 23474 Image No. 23475

Of course, at the end of this episode we get to Raffi's home, and we stay there for a bit and see more of it in "The End is the Beginning". The description in the episode tells us it's located at a place called "Vasquez Rocks" - and this time, that's actually the real filming site! Now, Trekkies know this of course, as a whole lot of Trek has been filmed there - most famously the fight between Kirk and the Gorn captain in "Arena". Vasquez Rocks has surely been one of the most-used Star Trek filming sites over the years, though - at least before Picard - I'd say that it ranked second behind Bronson Canyon. How what's nowadays a Natural Area park becomes a place to live in by 2399 is up to anyone's speculation. ;-)
Image No. 23479 Image No. 23480

I guess in the 3 introductory episodes we had more different filming sites than in either of the two whole seasons of Discovery seen so far, but right in the next episode, Absolute Candor, we got yet another interesting place! A lot of that episode plays on the planet Vashti, with three sets of scenes on their main place with the bar setting: In the "cold open" / flashback, when Picard beams down to the planet again in the show's present, and before he leaves, including the fight scene. Given that there were multiple hints of shooting taking place at Universal Studios Hollywood, and the sets having a somewhat familiar look, more Mexican than totally alien, it did not take long to identify where those scenes were filmed: It's the standing "Mexican Street" / "Old Mexico Place" set on Universal's backlot - which you usually can visit with the Studio Tour as an attraction of their Theme Park. The pictures, of the bar area, and basically from there in the direction of Picard's beam-in point, are from one of those tours I took in 2013.

In the following two episodes, I could not make out any filming sites, so I guess they pretty much filmed those at Santa Clarita Studios where the production of the series is based. I know we will have some location(s) to talk about in the second half of the season though - not sure if there's as many as in the first few episodes, but I hope we'll have a few good ones!

Hacks.Mozilla.OrgFuture-proofing Firefox’s JavaScript Debugger Implementation

Or: The Implementation of the SpiderMonkey Debugger (and its cleanup)

We’ve made major improvements to JavaScript debugging in Firefox DevTools over the past two years. Developer feedback has informed and validated our work on performance, source maps, stepping reliability, pretty printing, and more types of breakpoints. Thank you. If you haven’t tried Firefox for debugging modern JavaScript in a while, now is the time.

Recent Debugger features, Service Workers and Async Stack Traces, in action

Many of the aforementioned efforts focused on the Debugger frontend (written in React and Redux). We were able to make steady progress. The integration with SpiderMonkey, Firefox’s JavaScript engine, was where work went more slowly. To tackle larger features like proper asynchronous call stacks (available now in DevEdition), we needed to do a major cleanup. Here’s how we did that.

Background: A Brief History of the JS Debugger

The JavaScript debugger in Firefox is based on the SpiderMonkey engine’s Debugger API. This API was added in 2011. Since then, it has survived the addition of four JIT compilers, the retirement of two of them, and the addition of a WebAssembly compiler. All that, without needing to make substantial changes to the API’s users. Debugger imposes a performance penalty only temporarily, while the developer is closely observing the debuggee’s execution. As soon as the developer looks away, the program can return to its optimized paths.

A few key decisions (some ours, others imposed by the situation) influenced the Debugger’s implementation:

  • For better or worse, it is a central tenet of Firefox’s architecture that JavaScript code of different privilege levels can share a single heap. Object edges and function calls cross privilege boundaries as needed. SpiderMonkey’s compartments ensure the necessary security checks get performed in this free-wheeling environment. The API must work seamlessly across compartment boundaries.
  • Debugger is an intra-thread debugging API: events in the debuggee are handled on the same thread that triggered them. This keeps the implementation free of threading concerns, but invites other sorts of complications.
  • Debuggers must interact naturally with garbage collection. If an object won’t be missed, it should be possible for the garbage collector to recycle it, whether it’s a Debugger, a debuggee, or otherwise.
  • A Debugger should observe only activity that occurs within the scope of a given set of JavaScript global objects (say, a window or a sandbox). It should have no effect on activity elsewhere in the browser. But it should also be possible for multiple Debuggers to observe the same global, without too much interference.

Garbage Collection

People usually explain garbage collectors by saying that they recycle objects that are “unreachable”, but this is not quite correct. For example, suppose we write:

fetch("https://example.com/") // URL illustrative; the original is not preserved here
  .then(res => {
    res.body.getReader().closed.then(() => console.log("stream closed!"));
  });

Once we’re done executing this statement, none of the objects it constructed are reachable by the rest of the program. Nonetheless, the WHATWG specification forbids the browser from garbage collecting everything and terminating the fetch. If it were to do so, the message would not be logged to the console, and the user would know the garbage collection had occurred.

Garbage collectors obey an interesting principle: an object may be recycled only if it never would be missed. That is, an object’s memory may be recycled only if doing so would have no observable effect on the program’s future execution—beyond, of course, making more memory available for further use.

The Principle in Action

Consider the following code:

// Create a new JavaScript global object, in its own compartment.
var global = newGlobal({ newCompartment: true });

// Create a new Debugger, and use its `onEnterFrame` hook to report function
// calls in `global`.
new Debugger(global).onEnterFrame = (frame) => {
  if (frame.callee) {
    console.log(`called function ${frame.callee.name}`);
  }
};

// Evaluate some code in the new global.
global.eval(`
  function f() { }
  function g() { f(); }
  g();
`);

When run in SpiderMonkey’s JavaScript shell (in which the Debugger constructor and the newGlobal function are immediately available), this prints:

called function g
called function f

Just as in the fetch example, the new Debugger becomes unreachable by the program as soon as we are done setting its onEnterFrame hook. However, since all future function calls within the scope of global will produce console output, it would be incorrect for the garbage collector to remove the Debugger. Its absence would be observable as soon as global made a function call.

A similar line of reasoning applies for many other Debugger facilities. The onNewScript hook reports the introduction of new code into a debuggee global’s scope, whether by calling eval, loading a <script> element, setting an onclick handler, or the like. Or, setting a breakpoint arranges to call its handler function each time control reaches the designated point in the code. In all these cases, debuggee activity calls functions registered with a Debugger, which can do anything the developer likes, and thus have observable effects.

This case, however, is different:

var global = newGlobal({ newCompartment: true });

new Debugger(global);

global.eval(`
  function f() { }
  function g() { f(); }
  g();
`);

Here, the new Debugger is created, but is dropped without any hooks being set. If this Debugger were disposed of, no one would ever be the wiser. It should be eligible to be recycled by the garbage collector. Going further, in the onEnterFrame example above, if global becomes unnecessary, with no timers or event handlers or pending fetches to run code in it ever again, then global, its Debugger, and its handler function must all be eligible for collection.

The principle is that Debugger objects are not anything special to the GC. They’re simply objects that let us observe the execution of a JavaScript program, and otherwise follow the same rules as everyone else. JavaScript developers appreciate knowing that, if they simply avoid unnecessary entanglements, the system will take care of cleaning up memory for them as soon as it’s safe to do so. And this convenience extends to code using the Debugger API.

The Implementation

Looking through the description above, it seems clear that when a Debugger has an onEnterFrame hook, an onNewScript hook, or something else like that, its debuggee globals hold an owning reference to it. As long as those globals are alive, the Debugger must be retained as well. Clearing all those hooks should remove that owning reference. Thus, the liveness of the global no longer guarantees that the Debugger will survive. (References from elsewhere in the system might, of course.)

And that’s pretty much how it’s done. At the C++ level, each JavaScript global has an associated JS::Realm object, which owns a table of DebuggerLink objects, one for each Debugger of which it is a debuggee. Each DebuggerLink object holds an optional strong reference to its Debugger. This is set when the Debugger has interesting hooks, and cleared otherwise. Hence, whenever the Debugger has hooks set, there is a strong path, via the DebuggerLink intermediary, from its debuggee globals to the Debugger. In contrast, when the hooks are clear, there is no such path.
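The hook-driven toggling of that owning edge can be sketched as a toy model in plain JavaScript (not SpiderMonkey code; `ToyDebugger`, `hasAnyHooks`, and `update` are invented for illustration, and only the `DebuggerLink` name comes from the text):

```javascript
// Toy model of the ownership scheme: a DebuggerLink holds a strong
// reference to its Debugger only while that Debugger has hooks set.
class ToyDebugger {
  constructor() {
    this.hooks = {};
  }
  hasAnyHooks() {
    return Object.keys(this.hooks).length > 0;
  }
}

class DebuggerLink {
  constructor(dbg) {
    this.dbg = dbg;        // weak in the real engine; plain here
    this.strongRef = null; // the owning edge, present only while hooks exist
  }
  // Called whenever hooks are set or cleared on the Debugger.
  update() {
    this.strongRef = this.dbg.hasAnyHooks() ? this.dbg : null;
  }
}

const dbg = new ToyDebugger();
const link = new DebuggerLink(dbg);

dbg.hooks.onEnterFrame = () => {};
link.update();
console.log(link.strongRef !== null); // true: debuggee keeps Debugger alive

delete dbg.hooks.onEnterFrame;
link.update();
console.log(link.strongRef !== null); // false: Debugger is now collectable
```

In the real engine the `dbg` back-pointer is traced weakly, so clearing `strongRef` really does make the Debugger eligible for collection.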

A breakpoint set in a script behaves similarly. It acts like an owning reference from that script to the breakpoint’s handler function and the Debugger to which it belongs. As long as the script is live, the handler and Debugger must remain alive, too. Or, if the script is recycled, certainly that breakpoint will never be hit again, so the handler might as well go, too. And if all the Debugger’s breakpoints’ scripts get recycled, then the scripts no longer protect the Debugger from collection.

However, things were not always so straightforward.

What’s Changed

Originally, Debugger objects had an enabled flag, which, when set to false, immediately disabled all the Debugger’s hooks and breakpoints. The intent was to provide a single point of control. In this way, the Firefox Developer Tools server could neutralize a Debugger (say, when the toolbox is closed), ensuring that it would have no further impact on the system. Of course, simply clearing out the Debugger’s set of debuggee globals—a capability we needed for other purposes anyway—has almost exactly the same effect. So this meant the enabled flag was redundant. But, we reasoned, how much trouble could a simple boolean flag really cause?

What we did not anticipate was that the presence of the enabled flag made the straightforward implementation described above seem impractical. Should setting enabled to false really go and clear out all the breakpoints in the debuggee’s scripts? And should setting it back to true go and put them all back in? That seemed ridiculous.

So, rather than treating globals and scripts as if they owned references to their interested Debuggers, we added a new phase to the garbage collection process. Once the collector had found as many objects as possible to retain, we would loop over all the Debuggers in the system. We would ask each one: Are any of your debuggees sure to be retained? Do you have any hooks or breakpoints set? And, are you enabled? If so, we marked the Debugger itself for retention.

Naturally, once we decided to retain a Debugger, we also had to retain any objects it or its handler functions could possibly use. Thus, we would restart the garbage collection process, let it run to exhaustion a second time, and repeat the scan of all Debuggers.

Cleaning up Garbage Collection

In the fall of 2019, Logan Smyth, Jason Laster, and I undertook a series of debugger cleanups. This code, named Debugger::markIteratively, was one of our targets. We deleted the enabled flag, introduced the owning edges described above (among others), and shrunk Debugger::markIteratively down to the point that it could be safely removed. This work was filed as bug 1592158: “Remove Debugger::hasAnyLiveFrames and its vile henchmen”. (In fact, in a sneak attack, Logan removed it as part of a patch for a blocker, bug 1592116.)

The SpiderMonkey team members responsible for the garbage collector also appreciated our cleanup. It removed a hairy special case from the garbage collector. The replacement is code that looks and behaves much more like everything else in SpiderMonkey. The idea that “this points to that; thus if we’re keeping this, we’d better keep that, too” is the standard path for a garbage collector. And so, this work turned Debugger from a headache into (almost) just another kind of object.


Compartments

The Debugger API presented the garbage collector maintainers with other headaches as well, in its interactions with SpiderMonkey compartments and zones.

In Firefox, the JavaScript heap generally includes a mix of objects from different privilege levels and origins. Chrome objects can refer to content objects, and vice versa. Naturally, Firefox must enforce certain rules on how these objects interact. For example, content code might only be permitted to call certain methods on a chrome object. Or, chrome code might want to see only an object’s original, web-standard-specified methods, regardless of how content has toyed with its prototype or reconfigured its properties.

(Note that Firefox’s ongoing ‘Fission’ project will segregate web content from different origins into different processes, so inter-origin edges will become much less common. But even after Fission, there will still be interaction between chrome and content JavaScript code.)

Runtimes, Zones, and Realms

To implement these checks, to support garbage collection, and to support the web as specified, Firefox divides up the JavaScript world as follows:

  • A complete world of JavaScript objects that might interact with each other is called a runtime.
  • A runtime’s objects are divided into zones, which are the units of garbage collection. Every garbage collection processes a certain set of zones. Typically there is one zone per browser tab.
  • Each zone is divided into compartments, which are units of origin or privilege. All the objects in a given compartment have the same origin and privilege level.
  • A compartment is divided into realms, corresponding to JavaScript window objects, or other sorts of global objects like sandboxes or JSMs.

Each script is assigned to a particular realm, depending on how it was loaded. And each object is assigned a realm, depending on the script that creates it.

Scripts and objects may only refer directly to objects in their own compartment. For inter-compartment references, each compartment keeps a collection of specialized proxies, called cross-compartment wrappers. Each of these wrappers represents a specific object in another compartment. The wrappers intercept all property accesses and function calls and apply security checks to decide whether they should proceed, based on the relative privilege levels and origins of the wrapper’s compartment and its referent’s compartment. Rather than passing or returning an object from one compartment to another, SpiderMonkey looks up that object’s wrapper in the destination compartment (creating it if none exists). Then it hands over the wrapper instead of the object.
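The lookup-or-create wrapper table can be illustrated with a toy model in plain JavaScript (a sketch only, not SpiderMonkey's actual C++ machinery; the `Compartment` class and its methods are invented here):

```javascript
// Toy model of cross-compartment wrappers: each compartment keeps a
// table mapping foreign objects to wrappers, created on demand.
class Compartment {
  constructor(name) {
    this.name = name;
    this.wrappers = new Map(); // foreign object -> wrapper
  }
  // Look up (or create) this compartment's wrapper for a foreign object.
  wrap(obj) {
    let w = this.wrappers.get(obj);
    if (!w) {
      w = new Proxy(obj, {
        get(target, prop) {
          // A real wrapper would apply security checks here before
          // forwarding the access to the referent.
          return Reflect.get(target, prop);
        },
      });
      this.wrappers.set(obj, w);
    }
    return w;
  }
}

const content = { answer: 42 };         // object in a "content" compartment
const chrome = new Compartment("chrome");

const w1 = chrome.wrap(content);
const w2 = chrome.wrap(content);
console.log(w1 === w2);      // true: one wrapper per referent per compartment
console.log(w1.answer);      // 42: accesses are forwarded to the referent
console.log(w1 === content); // false: callers never see the raw object
```

Because every foreign reference passes through `wrap`, the table doubles as a registry of outgoing edges, which is exactly the property the garbage collector exploits.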

Wrapping Compartments

An extensive system of assertions, in the garbage collector but also throughout the rest of SpiderMonkey, verifies that no direct inter-compartment edges are ever created. Furthermore, scripts must only directly touch objects in their own compartments.

But since every inter-compartment reference must be intercepted by a wrapper, the compartments’ wrapper tables form a convenient registry of all inter-zone references as well. This is exactly the information that the garbage collector needs to collect one set of zones separately from the rest. If an object has no wrappers representing it in compartments outside its own zone, then the collector knows, without having to examine the entire runtime, that no other zone would miss that object if it were recycled.

Inter-Compartment Debugging

The Debugger API’s Debugger.Object objects throw a wrench into this neat machinery. Since the debugger server is privileged chrome code, and the debuggee is usually content code, these fall into separate compartments. This means that a Debugger.Object’s pointer to its referent is an inter-compartment reference.

But the Debugger.Objects cannot be cross-compartment wrappers. A compartment may have many Debugger objects, each of which has its own flock of Debugger.Objects, so there may be many Debugger.Objects referring to the same debuggee object in a single compartment. (The same is true of Debugger.Script and other API objects. We’ll focus on Debugger.Object here for simplicity.)

Previously, SpiderMonkey coped with this by requiring that each Debugger.Object be paired with a special entry to the compartment’s wrapper table. The table’s lookup key was not simply a foreign object, but a (Debugger, foreign object) pair. This preserved the invariant that the compartments’ wrapper tables had a record of all inter-compartment references.

Unfortunately, these entries required special treatment. An ordinary cross-compartment wrapper can be dropped if its compartment’s objects no longer point there, since an equivalent wrapper can be constructed on demand. But a Debugger.Object must be retained for as long as its Debugger and referent are alive. A user might place a custom property on a Debugger.Object or use it as a key in a weak map. That user might expect to find the property or weak map entry when encountering the corresponding debuggee object again. Also, special care is required to ensure that the wrapper table entries are reliably created and removed in sync with Debugger.Object creation, even if out-of-memory errors or other interruptions arise.
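The identity requirement is easy to see in a toy model (plain JavaScript, invented for illustration; the method name only echoes SpiderMonkey's internal Debugger::wrapDebuggeeValue, not a public API):

```javascript
// Toy model: each Debugger memoizes one Debugger.Object per referent, so
// the same debuggee object always yields the same Debugger.Object, and
// user-attached properties remain findable on later encounters.
class ToyDebugger {
  constructor() {
    this.objectCache = new WeakMap(); // referent -> Debugger.Object stand-in
  }
  wrapDebuggeeValue(referent) {
    let dobj = this.objectCache.get(referent);
    if (!dobj) {
      dobj = { referent };
      this.objectCache.set(referent, dobj);
    }
    return dobj;
  }
}

const debuggeeObj = {};
const dbgA = new ToyDebugger();
const dbgB = new ToyDebugger();

const d1 = dbgA.wrapDebuggeeValue(debuggeeObj);
d1.note = "seen before"; // user-attached property

// The same Debugger hands back the same Debugger.Object, note intact:
console.log(dbgA.wrapDebuggeeValue(debuggeeObj).note);   // "seen before"
// Two Debuggers produce two distinct Debugger.Objects for one referent:
console.log(d1 === dbgB.wrapDebuggeeValue(debuggeeObj)); // false
```

This is why an ordinary drop-and-recreate wrapper will not do: the memoized object must live exactly as long as its Debugger and referent both do.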

Cleaning up Compartments

As part of our Fall 2019 code cleanup, we removed the special wrapper table entries. The garbage collector now finds cross-compartment references by simply consulting the Debugger API’s own tables of Debugger.Objects. This is Debugger-specific code, which we would, of course, prefer to avoid, but the prior arrangement was also Debugger-specific. The present approach is more direct. It looks more like ordinary garbage collector tracing code. This removes the need for careful synchronization between two tables.

Forced Returns and Exceptions

When SpiderMonkey calls a Debugger API hook to report some sort of activity in the debuggee, most hooks can return a resumption value to say how the debuggee should continue execution:

  • undefined means that the debuggee should proceed normally, as if nothing had happened.
  • Returning an object of the form { throw: EXN } means that the debuggee should proceed as if the value EXN were thrown as an exception.
  • Returning an object of the form { return: RETVAL } means that the debuggee should return immediately from whatever function is running now, with RETVAL as the return value.
  • null means that the debuggee should be terminated, as if by the slow script dialog.

In SpiderMonkey’s C++ code, there was an enumerated type named ResumeMode, which had values Continue, Throw, Return, and Terminate, representing each of these possibilities. Each site in SpiderMonkey that needed to report an event to Debugger and then respect a resumption value needed to have a switch statement for each of these cases. For example, the code in the bytecode interpreter for entering a function call looked like this:

switch (DebugAPI::onEnterFrame(cx, activation.entryFrame())) {
  case ResumeMode::Continue:
    break;
  case ResumeMode::Return:
    if (!ForcedReturn(cx, REGS)) {
      goto error;
    }
    goto successful_return_continuation;
  case ResumeMode::Throw:
  case ResumeMode::Terminate:
    goto error;
  default:
    MOZ_CRASH("bad DebugAPI::onEnterFrame resume mode");
}

Discovering Relevant SpiderMonkey Conventions

However, Logan Smyth noticed that, except for ResumeMode::Return, all of these cases were already covered by SpiderMonkey’s convention for ‘fallible operations’. According to this convention, a C++ function that might fail should accept a JSContext* argument, and return a bool value. If the operation succeeds, it should return true; otherwise, it should return false and set the state of the given JSContext to indicate a thrown exception or a termination.

For example, given that JavaScript objects can be proxies or have getter properties, fetching a property from an object is a fallible operation. So SpiderMonkey’s js::GetProperty function has the signature:

bool js::GetProperty(JSContext* cx,
                     HandleValue v, HandlePropertyName name,
                     MutableHandleValue vp);

The value v is the object, and name is the name of the property we wish to fetch from it. On success, GetProperty stores the value in vp and returns true. On failure, it tells cx what went wrong, and returns false. Code that calls this function might look like:

if (!GetProperty(cx, obj, id, &value)) {
  return false; // propagate failure to our caller
}

All sorts of functions in SpiderMonkey follow this convention. They can be as complex as evaluating a script, or as simple as allocating an object. (Some functions return a nullptr instead of a bool, but the principle is the same.)

This convention subsumes three of the four ResumeMode values:

  • ResumeMode::Continue is equivalent to returning true.
  • ResumeMode::Throw is equivalent to returning false and setting an exception on the JSContext.
  • ResumeMode::Terminate is equivalent to returning false but setting no exception on the JSContext.

The only case this doesn’t support is ResumeMode::Return.
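That mapping can be made concrete with a toy model (plain JavaScript standing in for the C++ convention; every name here is invented for illustration):

```javascript
// Toy model of the fallible-operation convention: true means continue
// normally; false means unwind, with cx.pendingException distinguishing
// a thrown exception from an uncatchable termination.
function reportHook(cx, mode, value) {
  switch (mode) {
    case "continue":
      return true;                 // ResumeMode::Continue
    case "throw":
      cx.pendingException = value; // ResumeMode::Throw
      return false;
    case "terminate":
      cx.pendingException = null;  // ResumeMode::Terminate: no exception set
      return false;
    default:
      // ResumeMode::Return has no counterpart in this convention;
      // it needs the separate machinery described below.
      throw new Error("forced return is not expressible here");
  }
}

const cx = { pendingException: undefined };
console.log(reportHook(cx, "continue"));                           // true
console.log(reportHook(cx, "throw", "boom"), cx.pendingException); // false boom
```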

Building on SpiderMonkey Conventions

Next, Logan observed that SpiderMonkey is already responsible for reporting all stack frame pops to the DebugAPI::onLeaveFrame function, so that Debugger can call frame onPop handlers and perform other bookkeeping. So, in principle, to force an immediate return, we could:

  • stash the desired return value somewhere;
  • return false without setting an exception to force termination;
  • wait for the termination to propagate through the current function call, at which point SpiderMonkey will call DebugAPI::onLeaveFrame;
  • recover our stashed return value, and store it in the right place in the stack frame; and finally
  • return true as if nothing had happened, emulating an ordinary return.

With this approach, there would be no need for the ResumeMode enum or special handling at DebugAPI call sites. SpiderMonkey’s ordinary rules for raising and propagating exceptions are already very familiar to any SpiderMonkey developer. Those rules do all the work for us.

As it turns out, the machinery for stashing the return value and recognizing the need for intervention in DebugAPI::onLeaveFrame already existed in SpiderMonkey. Shu-Yu Guo had implemented it years ago to handle a rare case involving slow script timeouts and single-stepping.
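The stash-and-intercept trick can be sketched as a toy model in plain JavaScript (all names invented; the real engine does this in C++ with its own unwinding machinery, with DebugAPI::onLeaveFrame playing the role of the catch site):

```javascript
// Toy model of forced return: stash the desired value, unwind with a
// termination sentinel, then intercept it at the frame boundary and
// resume as if the function had returned normally.
const TERMINATION = Symbol("termination");
const cx = { stashedReturn: undefined };

function debuggeeFunction() {
  // A hook has decided to force an immediate return of 42.
  cx.stashedReturn = 42;
  throw TERMINATION; // "return false without setting an exception"
  // Unreachable: the rest of the function never runs.
}

function callWithFramePopCheck(fn) {
  try {
    return fn();
  } catch (e) {
    if (e === TERMINATION && cx.stashedReturn !== undefined) {
      // The onLeaveFrame stand-in recovers the stashed value and
      // pretends nothing happened, emulating an ordinary return.
      return cx.stashedReturn;
    }
    throw e; // a genuine exception or termination keeps propagating
  }
}

console.log(callWithFramePopCheck(debuggeeFunction)); // 42
```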

With this collection of insights, Logan was able to turn the call sites at which SpiderMonkey reports activity to Debugger into call sites just like those of any other fallible function. The call to DebugAPI::onEnterFrame shown above now reads, simply:

if (!DebugAPI::onEnterFrame(cx, activation.entryFrame())) {
  goto error;
}

Other Cleanups

We carried out a number of other minor cleanups as part of our Fall 2019 effort:

  • We split the file js/src/vm/Debugger.cpp, originally 14k lines long and containing the entire Debugger implementation, into eight separate source files, and moved them to the directory js/src/debugger. Phabricator no longer refuses to colorize the file because of its length.
  • Each Debugger API object type, Debugger.Object, Debugger.Frame, Debugger.Environment, Debugger.Script, and Debugger.Source, is now represented by its own C++ subclass of js::NativeObject. This lets us use the organizational tools C++ provides to structure and scope their implementation code. We can also replace dynamic type checks in the C++ code with types. The compiler can check those at compile time.
  • The code that lets Debugger.Script and Debugger.Source refer to both JavaScript and WebAssembly code was simplified so that Debugger::wrapVariantReferent, rather than requiring five template parameters, requires only one–and one that could be inferred by the C++ compiler, to boot.

I believe this work has resulted in a substantial improvement to the quality of life of engineers who have to deal with Debugger’s implementation. I hope it is able to continue to serve Firefox effectively in the years to come.

The post Future-proofing Firefox’s JavaScript Debugger Implementation appeared first on Mozilla Hacks - the Web developer blog.

The Firefox FrontierSpotty privacy practices of popular period trackers

We don’t think twice when it comes to using technology for convenience. That can include some seriously personal aspects of day-to-day life like menstruation and fertility tracking. For people who … Read more

The post Spotty privacy practices of popular period trackers appeared first on The Firefox Frontier.

The Firefox FrontierMore privacy means more democracy

The 2020 U.S. presidential election season is underway, and no matter your political lean, you want to know the facts. What do all of these politicians believe? How do their … Read more

The post More privacy means more democracy appeared first on The Firefox Frontier.