Ryan Harter: When the Bootstrap Breaks - ODSC 2019

I'm excited to announce that I'll be presenting at the Open Data Science Conference in Boston next week. My colleague Saptarshi and I will be talking about When the Bootstrap Breaks.

I've included the abstract below, but the high-level goal of this talk is to strip some varnish off the bootstrap. Folks often look to the bootstrap as a panacea for weird data, but all tools have their failure cases. We plan on highlighting some problems we ran into when trying to use the bootstrap for Firefox data and how we dealt with the issues, both in theory and in practice.


Resampling methods like the bootstrap are becoming increasingly common in modern data science. For good reason too; the bootstrap is incredibly powerful. Unlike t-statistics, the bootstrap doesn’t depend on a normality assumption nor require any arcane formulas. You’re no longer limited to working with well-understood metrics like means. One can easily build tools that compute confidence intervals for an arbitrary metric. What’s the standard error of a median? Who cares! I used the bootstrap.
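The recipe the abstract alludes to really is only a few lines. Here is a minimal sketch (my illustration, not the talk's code) of a percentile-bootstrap confidence interval for an arbitrary statistic such as the median:

```python
import random
import statistics

def bootstrap_ci(data, stat=statistics.median, n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for an arbitrary statistic."""
    rng = random.Random(seed)
    n = len(data)
    # Resample with replacement and recompute the statistic each time.
    boots = sorted(stat(rng.choices(data, k=n)) for _ in range(n_boot))
    return boots[int(alpha / 2 * n_boot)], boots[int((1 - alpha / 2) * n_boot) - 1]

rng = random.Random(42)
data = [rng.gauss(0, 1) for _ in range(200)]
low, high = bootstrap_ci(data)
```

Nothing here assumes normality or a closed-form standard error, which is exactly the appeal; the talk's point is about when this convenience misleads.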

With all of these benefits the bootstrap begins to look a little magical. That’s dangerous. To understand your tool you need to understand how it fails, how to spot the failure, and what to do when it does. As it turns out, methods like the bootstrap and the t-test struggle with very similar types of data. We’ll explore how these two methods compare on troublesome data sets and discuss when to use one over the other.

In this talk we’ll explore what types of data the bootstrap has trouble with. Then we’ll discuss how to identify these problems in the wild and how to deal with the problematic data. We will explore simulated data and share the code to conduct the simulations yourself. However, this isn’t just a theoretical problem. We’ll also explore real Firefox data and discuss how Firefox’s data science team handles this data when analyzing experiments.
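To give a flavour of the kind of simulation we mean (a sketch of my own, not the talk's actual code): draw many samples from a heavily skewed lognormal distribution and check how often a nominal 95% percentile-bootstrap interval for the mean actually covers the true mean. On heavy-tailed data like this, the realized coverage typically falls short of nominal:

```python
import math
import random
import statistics

def percentile_ci(sample, n_boot=500, alpha=0.05, rng=random):
    """Nominal (1 - alpha) percentile-bootstrap interval for the mean."""
    boots = sorted(statistics.mean(rng.choices(sample, k=len(sample)))
                   for _ in range(n_boot))
    return boots[int(alpha / 2 * n_boot)], boots[int((1 - alpha / 2) * n_boot) - 1]

# Lognormal(mu=0, sigma=2) is heavily right-skewed; its true mean is exp(sigma^2 / 2).
sigma = 2.0
true_mean = math.exp(sigma ** 2 / 2)

rng = random.Random(1)
trials = 200
covered = 0
for _ in range(trials):
    sample = [math.exp(rng.gauss(0, sigma)) for _ in range(30)]
    lo, hi = percentile_ci(sample, rng=rng)
    covered += lo <= true_mean <= hi
coverage = covered / trials  # typically well below the nominal 0.95 here
```

The sample sizes, distribution parameters, and trial counts are illustrative choices; the undercoverage pattern is the point.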

At the end of this session you’ll leave with a firm understanding of the bootstrap. Even better, you’ll understand how to spot potential issues in your data and avoid false confidence in your results.

The Mozilla Blog: It’s Complicated: Mozilla’s 2019 Internet Health Report

Our annual open-source report examines how humanity and the internet intersect. Here’s what we found


Today, Mozilla is publishing the 2019 Internet Health Report — our third annual examination of the internet, its impact on society and how it influences our everyday lives.


The Report paints a mixed picture of what life online looks like today. We’re more connected than ever, with humanity passing the ‘50% of us are now online’ mark earlier this year. And, while almost all of us enjoy the upsides of being connected, we also worry about how the internet and social media are impacting our children, our jobs and our democracies.

When we published last year’s Report, the world was watching the Facebook-Cambridge Analytica scandal unfold — and these worries were starting to grow. Millions of people were realizing that widespread, laissez-faire sharing of our personal data, the massive growth and centralization of the tech industry, and the misuse of online ads and social media were adding up to a big mess.

Over the past year, more and more people started asking: what are we going to do about this mess? How do we push the digital world in a better direction?

As people asked these questions, our ability to see the underlying problems with the system — and to imagine solutions — has evolved tremendously. Recently, we’ve seen governments across Europe step up efforts to monitor and thwart disinformation ahead of the upcoming EU elections. We’ve seen the big tech companies try everything from making ads more transparent to improving content recommendation algorithms to setting up ethics boards (albeit with limited effect and with critics saying ‘you need to do much more!’). And, we’ve seen CEOs and policymakers and activists wrestling with each other over where to go next. We have not ‘fixed’ the problems, but it does feel like we’ve entered a new, sustained era of debate about what a healthy digital society should look like.

The 2019 Internet Health Report examines the story behind these stories, using interviews with experts, data analysis and visualization, and original reporting. It was also built with input from you, the reader: In 2018, we asked readers what issues they wanted to see in the next Report.

In the Report’s three spotlight articles, we unpack three big issues: One examines the need for better machine decision making — that is, asking questions like Who designs the algorithms? and What data do they feed on? and Who is being discriminated against? Another examines ways to rethink the ad economy, so surveillance and addiction are no longer design necessities.  The third spotlight article examines the rise of smart cities, and how local governments can integrate tech in a way that serves the public good, not commercial interests.

Of course, the Report isn’t limited to just three topics. Other highlights include articles on the threat of deepfakes, the potential of user-owned social media platforms, pornography literacy initiatives, investment in undersea cables, and the dangers of sharing DNA results online.

So, what’s our conclusion? How healthy is the internet right now? It’s complicated — the digital environment is a complex ecosystem, just like the planet we live on. There have been a number of positive trends in the past year that show that the internet — and our relationship with it — is getting healthier:

Calls for privacy are becoming mainstream. The last year brought a tectonic shift in public awareness about privacy and security in the digital world, in great part due to the Cambridge Analytica scandal. That awareness is continuing to grow — and also translate into action. European regulators, with help from civil society watchdogs and individual internet users, are enforcing the GDPR: In recent months, Google has been fined €50 million for GDPR violations in France, and tens of thousands of violation complaints have been filed across the continent.

There’s a movement to build more responsible AI. As the flaws with today’s AI become more apparent, technologists and activists are speaking up and building solutions. Initiatives like the Safe Face Pledge seek facial analysis technology that serves the common good. And experts like Joy Buolamwini, founder of the Algorithmic Justice League, are lending their insight to influential bodies like the Federal Trade Commission and the EU’s Global Tech Panel.

Questions about the impact of ‘big tech’ are growing. Over the past year, more and more people focused their attention on the fact that eight companies control much of the internet. As a result, cities are emerging as a counterweight, ensuring municipal technology prioritizes human rights over profit — the Cities for Digital Rights Coalition now has more than two dozen participants. Employees at Google, Amazon, and Microsoft are demanding that their employers don’t use or sell their tech for nefarious purposes. And ideas like platform cooperativism and collaborative ownership are beginning to be discussed as alternatives.

On the flipside, there are many areas where things have gotten worse over the past year — or where there are new developments that worry us:

Internet censorship is flourishing. Governments worldwide continue to restrict internet access in a multitude of ways, ranging from outright censorship to requiring people to pay additional taxes to use social media. In 2018, there were 188 documented internet shutdowns around the world. And a new form of repression is emerging: internet slowdowns. Governments and law enforcement restrict access to the point where a single tweet takes hours to load. These slowdowns diffuse blame, making it easier for oppressive regimes to deny responsibility.

Biometrics are being abused. When large swaths of a population don’t have access to physical IDs, digital ID systems have the potential to make a positive difference. But in practice, digital ID schemes often benefit heavy-handed governments and private actors, not individuals. In India, over 1 billion citizens were put at risk by a vulnerability in Aadhaar, the government’s biometric ID system. And in Kenya, human rights groups took the government to court over its soon-to-be-mandatory National Integrated Identity Management System (NIIMS), which is designed to capture people’s DNA information, the GPS location of their home, and more.

AI is amplifying injustice. Tech giants in the U.S. and China are training and deploying AI at a breakneck pace that doesn’t account for potential harms and externalities. As a result, technology used in law enforcement, banking, job recruitment, and advertising often discriminates against women and people of color due to flawed data, false assumptions, and lack of technical audits. Some companies are creating ‘ethics boards’ to allay concerns — but critics say these boards have little or no impact.

When you look at trends like these — and many others across the Report — the upshot is: the internet has the potential both to uplift and connect us. But it also has the potential to harm and tear us apart. This has become clearer to more and more people in the last few years. It has also become clear that we need to step up and do something if we want the digital world to net out as a positive for humanity rather than a negative.

The good news is that more and more people are dedicating their lives to creating a healthier, more humane digital world. In this year’s Report, you’ll hear from technologists in Ethiopia, digital rights lawyers in Poland, human rights researchers from Iran and China, and dozens of others. We’re indebted to these individuals for the work they do every day. And also to the countless people in the Mozilla community — 200+ staff, fellows, volunteers, like-minded organizations — who helped make this Report possible and who are committed to making the internet a better place for all of us.

This Report is designed to be both a reflection and resource for this kind of work. It is meant to offer technologists and designers inspiration about what they might build; to give policymakers context and ideas for the laws they need to write; and, most of all, to provide citizens and activists with a picture of where others are pushing for a better internet, in the hope that more and more people around the world will push for change themselves. Ultimately, it is by more and more of us doing something in our work and our lives that we will create an internet that is open, human and humane.

I urge you to read the Report, leave comments and share widely.

PS. This year, you can explore all these topics through reading “playlists,” curated by influential people in the internet health space like Esra’a Al Shafei, Luis Diaz Carlos, Joi Ito and others.

The post It’s Complicated: Mozilla’s 2019 Internet Health Report appeared first on The Mozilla Blog.

Mark Surman: Why AI + consumer tech?

In my last post, I shared some early thoughts on how Mozilla is thinking about AI as part of our overall internet health agenda. I noted in that post that we’re leaning towards consumer tech as the focus and backdrop for whatever goals we take on in AI. In our draft issue brief we say:

Mozilla is particularly interested in how automated decision making is being used in consumer products and services. We want to make sure the interests of all users are designed into these products and services. Where they aren’t, we want to call that out.

After talking to nearly 100 AI experts and activists, this consumer tech focus feels right for Mozilla. But it also raises a number of questions: what do we mean by consumer tech? What is in scope for this work? And what is not? Are we missing something important with this focus?

At its simplest, the consumer tech platforms that we are talking about are general purpose internet products and services aimed at a wide audience for personal use. These include things like social networks, search engines, retail e-commerce, home assistants, computers, smartphones, fitness trackers, self-driving cars, etc. — almost all of which are connected to the internet and are fueled by our personal data. The leading players in these areas are companies like Google, Amazon, Facebook, Microsoft and Apple in the US as well as companies like Baidu, Tencent, and Alibaba in China. These companies are also amongst the biggest developers and users of AI, setting trends and shipping technology that shapes the whole of the tech industry. And, as long as we remain in the era of machine learning, these companies have a disproportionate advantage in AI development as they control huge amounts of data and computing power that can be used to train automated systems.

Given the power of the big tech companies in shaping the AI agenda — and the growing pervasiveness of automated decision making in the tech we all use every day — we believe we need to set a higher bar for the development, use and impact of AI in consumer products and services. We need a way to reward companies who reach that bar. And push back and hold to account those who do not.

Of course, AI isn’t bad or good on its own — it is just another tool in the toolbox of computer engineering. Benefits, harms and side effects come from how systems are designed, what data is selected to train them and what business rules they are given. For example, if you search for ‘doctor’ on Google, you mostly see white doctors because that bias is in the training data. Similarly, content algorithms on sites like YouTube often recommend increasingly extreme content because the main business rule they are optimized for is to keep people on the site or app for as long as possible. Humans — and the companies they work in — can avoid or fix problems like these. Helping them do so is important work. It’s worth doing.

Of course, there are important issues related to AI and the health of the internet that go beyond consumer technology. The use of biased facial recognition software by police and immigration authorities. Similarly biased and unfair resume sorting algorithms used by human resource departments as part of hiring processes. The use of AI by the military to automate and add precision to killing from a distance. Ensuring that human rights and dignity are protected as the use of machine decision making grows within government and the back offices of big business is critical. Luckily, there is an amazing crew of organizations stepping up to address these issues such as AI Now in the US and Algorithm Watch in Europe. Mozilla taking a lead in these areas wouldn’t add much. Here, we should play a supporting role.

In contrast, there are few players focused squarely on how AI is showing up in consumer products and services. Yet this is one of the places where the power and the impact of AI is moving most rapidly. Also, importantly, consumer tech is the field on which Mozilla has always played. As we try to shape where AI is headed, it makes sense to do so here. We’re already doing so in small ways with technology, showing a more open way to approach machine learning with projects like Deep Speech and Common Voice. However, we have a chance to do much more by using our community, brand and credibility to push the development and use of AI in consumer tech in the right direction. We might do this as a watchdog. Or by collecting a brain trust of fellows with new ideas about privacy in AI. Or by helping to push for policies that lead to real accountability. There are many options. Whatever we pick, it feels like the consumer tech space is both in need of attention and well suited to the strengths that Mozilla brings to the table.

I would say that we’re 90% decided that consumer tech is the right place to focus Mozilla’s internet health movement building work around AI. That means there is a 9/10 chance that this is where we will go — but there is a chance that we hear something at this stage that changes this thinking in a meaningful way. As we zero in on this decision, I’d be interested to know what others think: If we go in this direction, what are the most important things to be thinking about? Where are the big opportunities? On the flip side, are there important things we’ll be missing if we go down this path? Feel free to comment on this post, tweet or email me if you have thoughts.


The Firefox Frontier: 5 times when video ads autoplay and ruin everything.

The room is dark and silent. Suddenly, a loud noise pierces your ears. You panic as everyone turns in your direction. You just wanted to read an article about cute … Read more


Ian Bicking: “Users want control” is a shoulder shrug

Making the claim “users want control” is the same as saying you don’t know what users want, you don’t know what is good, and you don’t know what their goals are.

I first started thinking about this during the debate over what would become the ACA. The rhetoric was filled with this idea that people want choice in their medical care: people want control.

No! People want good health care. If they don’t trust systems to provide them good health care, if they don’t trust their providers to understand their priorities, then choice is the fallback: it’s how you work the system when the system isn’t working for you. And it sucks! Here you are, in the middle of some health issue, with treatments and symptoms and the rest of your life duties, and now you have to become a researcher on top of it? But the politicians and the pundits could not stop talking about control.

Control is what you need when you want something and it won’t happen on its own. But (usually) it’s not control you want, it’s just a means.

So when we say users want control over X – their privacy, their security, their data, their history – we are first acknowledging that current systems act against users, but we aren’t proposing any real solution. We’re avoiding even talking about the problems.

For instance, we say “users want control over their privacy,” but what people really want is some subset of:

  1. To avoid embarrassment
  2. To avoid persecution
  3. … sometimes for doing illegal and wrong things
  4. To keep from having the creeping sensation they left something sitting out that they didn’t want to
  5. To make some political statement against surveillance
  6. To keep things from the prying eyes of those close to them
  7. To avoid being manipulated by bad-faith messaging

There are no easy answers, and not everyone holds all these desires, but these are concrete ways of thinking about what people want. They don’t all point in the same direction. (And then consider the complex implications of someone else talking about you!)

There are some cases when a person really does want control. If the person wants to determine their own path, if having choice is itself a personal goal, then you need control. That’s a goal about who you are, not just what you get. It’s worth identifying moments when this is important. But if a person does not pay attention to something then that person probably does not identify with the topic and is not seeking control over it. “Privacy advocates” pay attention to privacy, and attain a sense of identity from the very act of being mindful of their own privacy. Everyone else does not.

Let’s think about another example: users want control over their data. What are some things they want?

  1. They don’t want to lose their data
  2. They don’t want their data used to hold them hostage (e.g., to a subscription service)
  3. They don’t want to delete data and have it still reappear
  4. They want to use their data however they want, but more likely they want their data available for use by some other service or tool
  5. They feel it’s unfair if their data is used for commercial purposes without any compensation
  6. They are offended if their data is used to manipulate themselves or others
  7. They don’t want their data used against them in manipulative ways
  8. They want to have shared ownership of data with other people
  9. They want to prevent unauthorized or malicious access to their data

Again these motivations are often against each other. A person wants to be able to copy their data between services, but also delete their data permanently and completely. People don’t want to lose their data, but having personal control over your data is a great way to lose it, or even to lose control over it. The professionalization and centralization of data management by services has mostly improved access control and reliability.

When we simply say users want control, we’re giving up on understanding people’s specific desires. Still, it’s not exactly wrong: it’s reasonable to assume people will use control to achieve their desires. But if, as technologists, we can’t map functionality to desire, it’s a bit of a stretch to imagine everyone else will figure it out on the fly.

This Week In Rust: This Week in Rust 283

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's crate is color-backtrace, a crate to give panic backtraces more information (and some color, too). Thanks to Willi Kappler for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available; visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

221 pull requests were merged in the last week

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

Tracking Issues & PRs

New RFCs

Upcoming Events

Asia Pacific
North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

No quote was selected for QotW.

Please submit your quotes for next week!

This Week in Rust is edited by: nasa42, llogiq, and Flavsditz.

Discuss on r/rust.

David Humphrey: Teaching Open Source: Sept 2018 - April 2019

Today I submitted my grades and completed another year of teaching.  I've spent the past few weeks marking student projects non-stop, which has included reading a lot of pull requests in my various open source courses.

As a way to keep myself sane while I marked, I wrote some code to do analysis of all the pull requests my students worked on during the fall and winter terms.  I've been teaching open source courses at Seneca since 2005, and I've always wanted to do this.  Now that I've got all of my students contributing to projects on GitHub, it's become much easier to collect this info via the amazing GitHub API.
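The bucketing behind the numbers below doesn't need much code. As a rough sketch (my reconstruction, not David's actual script): given pull-request records shaped like those from the GitHub REST API's pulls endpoints, which expose `state` and `merged_at` fields, the tally comes down to:

```python
from collections import Counter

def classify(pr):
    """Bucket a pull-request record into merged / open / closed-unmerged."""
    if pr.get("merged_at"):       # merged PRs carry a merge timestamp
        return "merged"
    if pr.get("state") == "open":
        return "open"
    return "closed"               # closed without being merged

def tally(prs):
    return Counter(classify(pr) for pr in prs)

# Hand-written records standing in for real API responses.
sample = [
    {"state": "closed", "merged_at": "2019-04-01T00:00:00Z"},
    {"state": "open", "merged_at": None},
    {"state": "closed", "merged_at": None},
]
counts = tally(sample)
```

A real run would page through each student's PRs via the API and feed all the records into `tally`; the field names above are the standard ones, but treat the sketch as illustrative.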

I never teach these classes exactly the same way twice.  Some years I've had everyone work on different parts of a larger project, for example, implementing features in Firefox, or working on specific web tooling.  This year I took a different approach, and let each student be self-directed, giving them more freedom to choose whatever open source projects they wanted.  Having done so, I wanted to better understand the results, and what lessons I could draw for subsequent years.

GitHub Analysis

To begin, here are some of the numbers:

104 students participated in my open source courses from Sept 2018 to April 2019, some taking both the first and second courses I teach.

Together, these students made 1,014 Pull Requests to 308 Repositories.  The average number of PRs per student was 9.66 (mode=12, median=10, max=22).  Here's a breakdown of what happened with these PRs:

Outcome               Total   Percent
Merged                  606       60%
Still Open              258       25%
Closed (Unmerged)       128       13%
Deleted (By Student)     22        2%

I was glad to see so many get merged, and so few get closed without full resolution.  There are lots of projects that are slow to respond to PRs, or never respond.  But the majority of the "Still Open" PRs are those that were completed in the last few weeks.

Next, I was really interested to see which languages the students would choose.  All of the students are in their final semesters of a programming diploma or degree, and have learned half-a-dozen programming languages by now.  What did they choose?  Here's the top of the list:

Language     PRs   Percent
JavaScript   508       51%
Python        85        9%
C++           79        8%
Java          68        7%
TypeScript    39      3.9%
C#            34      3.4%
Swift         28      2.8%
Other        152     14.9%

In some ways, no surprises here. This list mirrors other similar lists I've seen around the web.  I suspect the sway toward JS also reflects my own influence in the courses, since I teach using a lot of case studies of JS projects.

The "Other" category is interesting to me.  Many students purposely chose to work in languages they didn't know in order to try something new (I love and encourage this, by the way).  Among these languages I saw PRs in all of:

Rust (16), Go (13), Kotlin (11), PHP (5), Ruby (3), Clojure (3), Lua (2), as well as Scala, F#, Dart, PowerShell, Assembly, GDScript, FreeMarker, and Vim script.

Over the weeks and months, my goal for the students is that they would progress, working on larger and more significant projects and pull requests.  I don't define progress or significant in absolute terms, since each student is at a different place when they arrive in the course, and progress can mean something quite different depending on your starting point.  That said, I was interested to see the kinds of "larger" and more "complex" projects the students chose.  Some examples of more recognizable projects and repos I saw:

They worked on editors (Notepad++, Neovim), blogging platforms (Ghost, WordPress), compilers (emscripten), blockchain, game engines, email apps, online books, linting tools, mapping tools, terminals, web front-end toolkits, and just about everything you can think of, including lots of things I wouldn't have thought of (or recommended!).

They also did lots and lots of very small contributions: typos, dead code removal, translation and localization, "good first issue," "help wanted," "hacktoberfest."  I saw everything.

Stories from Student Blogs

Along the way I had them write blog posts, and reflect on what they were learning, what was working and what wasn't, and how they felt.  Like all students, many do the bare minimum to meet this requirement; but some understand the power and reach of a good blog post.  I read some great ones this term.  Here are just a few stories of the many I enjoyed watching unfold.


Julia pushed herself to work on a lot of different projects, from VSCode to Mozilla's Voice-Web to nodejs.  Of her contribution to node, she writes:

I think one of my proudest contributions to date was for Node.js. This is something I never would have imagined contributing to even just a year ago.

We talk a lot, and openly, about imposter syndrome.  Open source gives students a chance to prove to themselves, and the world, that they are indeed capable of working at a high level.  Open source is hard, and when you're able to do it, and your work gets merged, it's very affirming on a personal level.  I love to see students realize they do in fact have something to contribute, that maybe they do belong.

Having gained this confidence working on node, Julia went on to really find her stride working within Microsoft's Fast-DNA project, fixing half-a-dozen issues during the winter term:

I’ve gotten to work with a team that seems dedicated to a strong development process and code quality, which in turn helps me build good habits when writing code.

Open source takes students out of the confines and limitations of a traditional academic project, and lets them work with professionals in industry, learning how they work, and how to build world-class software.


Alexander was really keen to learn more about Python, ML, and data science.  In the fall he discovered the data analysis library Pandas, and slowly began learning how to contribute to the project.  At first he focused on bugs related to documentation and linting, which led to him learning how their extensive unit tests worked.  I think he was a bit surprised to discover just how much his experience with the unit tests would help him move forward to fixing bugs:

In the beginning, I had almost no idea what any of the functions did and I would get lost navigating through the directories when searching for something. Solving linting errors was a great start for me and was also challenging enough due to my lack of knowledge in open source and the Pandas project specifically. Now I could identify where the issue originates from easily and also write tests to ensure that the requested functionality works as expected. Solving the actual issue is still challenging because finding a solution to the actual problem requires the most time and research. However, now I am able to solve real code problems in Pandas, which I would not be able to do when I started. I'm proud of my progress...

Open source development tends to favour many small, incremental improvements vs. big changes, and this maps well to the best way for students to build confidence and learn: a bit at a time, small steps on the road of discovery.

One of the many Pandas APIs that Alexander worked on was the dropna() function. He fixed a number of bugs related to its implementation. Why bother fixing dropna? With the recent black hole imaging announcement, I noticed that source code for the project was put on GitHub. Within that code I thought it was interesting to discover Pandas and dropna() being used, and further, that it had been commented out due to a bug. Was this fixed by one of Alexander's recent PRs? Hard to say, but regardless, lots of future scientists and researchers will benefit from his work to fix these bugs. Software maintenance is rewarding work.
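For readers who haven't met it, dropna() removes missing values from a pandas DataFrame. A tiny example (mine, not from the pull requests in question):

```python
import pandas as pd

# A DataFrame with missing values scattered across two columns.
df = pd.DataFrame({"flux": [1.2, None, 3.4],
                   "sigma": [0.1, 0.2, None]})

complete = df.dropna()                 # drop rows with any missing value
has_flux = df.dropna(subset=["flux"])  # drop only rows missing "flux"
```

The `subset` parameter is the kind of behavior where subtle bugs hide: dropping too much or too little silently changes downstream results, which is why fixes here matter.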

Over and over again during the year, I heard students discuss how surprised they were to find bugs in big software projects.  If you've been working on software for a long time, you know that all software has bugs.  But when you're new, it feels like only you make mistakes.

In the course I emphasize the importance of software maintenance, the value of fixing or removing existing code vs. always adding new features.  Alexander spent all his time maintaining Pandas and functions like dropna(), and I think it's an ideal way for students to get involved in the software stack.


Volodymyr was interested to gain more experience developing for companies and projects he could put on his resume after he graduates.  Through the fall and winter he contributed to lots of big projects: Firefox, Firefox Focus, Brave for iOS, VSCode, and more.  Eventually he found his favourite, Airbnb's Lona project.

With repeated success and a trail of merged PRs, Volodymyr described being able to slowly overcome his feelings of self doubt: "I wasn’t sure if I was good enough to work on these bugs."

A real turning point for him came with a tweet from Dan Abramov, announcing a project to localize the React documentation:

I develop a lot using React, and I love this library a lot. I wanted to contribute to the React community, and it was a great opportunity to do it, so I applied for it. Shortly after it, the repository for Ukrainian translation was created, and I was assigned to maintain it 🤠

Over the next three months, Volodymyr took on the task of maintaining a very active and high-profile localization project (96% complete as I write this), and in so doing learned all kinds of things about what it's like to be on the other side of a pull request, this time having to do reviews, handle difficult merges, learn how to keep a community engaged, etc.  Seeing the work ship has been very rewarding.

Open source gives students a chance to show up, to take on responsibility, and become part of the larger community.  Having the opportunity to move from being a user to a contributor to a leader is unique and special.

My Own Personal Learning

Finally, I wanted to pause for a moment to consider some of the things I learned with this iteration of the courses.  In no particular order, here are some of the thoughts I've been having over the last week:

Mentoring 100+ students across 300+ projects and 28 programming languages is daunting for me.  Students come to me every day, all day, and ask for help with broken dev environments, problems with reviewers, issues with git, etc.  I miss having a single project where everyone works together, because it allows me to be more focused and helpful.  At the same time, the diversity of what the students did speaks to the value of embracing the chaos of open source in all its many incarnations.

Related to my previous point, I've felt a real lack of stable community around a lot of the projects the students worked in.  I don't mean there aren't people working on them.  Rather, it's not always easy to find ways to plug them in.  Mailing lists are no longer hot, and IRC has mostly disappeared.  GitHub Issues usually aren't the right place to discuss things that aren't laser focused on a given bug, but students need places to go and talk about tools, underlying concepts in the code, and the like.  "What about Slack?"  Some projects have it, some don't.  Those that do don't always give invitations easily.  It's a bit of a mess, and I think it's a thing that's really missing.

Open source work on a Windows machine is still unnecessarily hard.  Most of my students use Windows machines.  I think this is partly due to cost, but also many of them simply like it as an operating system.  However, trying to get them involved in open source projects on a Windows machine is usually painful.  I can't believe how much time we waste getting basic things set up, installed, and building.  Please support Windows developers in your open source projects.

When we start the course, I often ask the students which languages they like, want to learn, and feel most comfortable using.  Over and over again I'm told "C/C++".  However, looking at the stats above, C-like languages only accounted for ~15% of all pull requests.  There's a disconnect between what students tell me they want to do, and what they eventually do.  I don't fully understand this, but my suspicion is that real-world C/C++ code is much more complicated than their previous academic work.

Every project thinks they know how to do open source the right way, and yet, they all do it differently.  It's somewhat hilarious for me to watch, from my perch atop 300+ repos.  If you only contribute to a handful of projects within a small ecosystem, you can start to assume that how "we" work is how "everyone" works.  It's not.  The processes for claiming a bug, making a PR, managing commits, etc. are different in just about every project.  Lots of them expect exactly the opposite behaviour!  It's confusing for students.  It's confusing for me.  It's confusing for everyone.

It's still too hard to match new developers with bugs in open source projects.  One of my students told me, "It was easier to find a husband than a good open source project to work in!"  There are hundreds of thousands of issues on GitHub that need a developer.  You'd think that 100+ students should have no problem finding good work to do.  And yet, I still find it's overly difficult.  It's a hard problem to solve on all sides: I've been in every position, and none of them are easy.  I think students waste a lot of time looking for the "right" project and "perfect" bug, and could likely get going on lots of things that don't initially look "perfect."  Until you have experience and confidence to dive into the unknown, you tend to want to work on things you feel you can do easily.  I need to continue to help students build more of this confidence earlier.  It happens, but it's not quick.

Students don't understand the difference between apps and the technologies out of which they are made.  Tools, libraries, frameworks, test automation--there is a world of opportunity for contribution just below the surface of visible computing.  Because these areas are unknown and mysterious to students, they don't tend to gravitate to them.  I need to find ways to change this.  Whenever I hear "I want to work on Android apps..." I despair a little.

Teaching open source in 2019 has really been a proxy for teaching git and GitHub.  While I did have some students work outside GitHub, it was really rare.  As such, students need a deep understanding of git and its various workflows, so this is what I've focused on in large part.  Within days of joining a project, students are expected to be able to branch, deal with remotes, rebase, squash commits, fix commit messages, and all sorts of other intermediate to advanced things with git.  I have to move fast to get them ready in time.

Despite all the horrible examples you'll see on Twitter, the open source community has, in large part, been really welcoming and kind to the majority of my students.  I'm continually amazed how much time maintainers will take with reviews, answering questions, and helping new people get started.  It's not uncommon for one of my students to start working on a project, and all of a sudden be talking to its creator, who is patiently walking them through some setup problem. Open source isn't always a loving place (I could tell you some awful stories, too).  But the good outweighs the bad, and I'm still happy to take students there.


I'm ready for a break, but I've also had a lot of fun, and been inspired by many of my best students.  I'm hoping I'll be able to teach these courses again in the fall.  Until then, I'll continue to reflect on what worked and what didn't, and try to improve things next time.

In the meantime, I'll mention that I could use your support.  Doing this work is hard, and requires a lot of my time.  In the past I've had companies like Mozilla generously help me stay on track.  If you or your company would like to find ways to partner or support this work, please get in touch.  Also, if you're hiring new developers or interns, please consider hiring some of these amazing students I've been teaching.  I know they would be grateful to talk to you as well.

Thanks for your continued interest in what we're doing.  I see lots of you out there in the wild, doing reviews, commenting on pull requests, giving students a favourite on Twitter, leaving a comment in their blog.  Open source works because people take the time to help one another.

The Rust Programming Language Blog: Rust's 2019 roadmap

Each year the Rust community comes together to set out a roadmap. This year, in addition to the survey, we put out a call for blog posts in December, which resulted in 73 blog posts written over the span of a few weeks. The end result is the recently-merged 2019 roadmap RFC. To get all of the details, please give it a read, but this post lays out some of the highlights.

The theme: Maturity

In short, 2019 will be a year of rejuvenation and maturation for the Rust project. We shipped a lot of stuff last year, and grew a lot. Now it's time to take a step back, take stock, and prepare for the future.

The work we have planned for this year falls into three major categories:

  • Governance: improving how the project is run
  • Finish long-standing requests: closing out work we've started but never finished
  • Polish: improving the overall quality of the language and tooling


Over the last three years, the Rust project has grown a lot. Rust used to have a core team of 8 members. When we added sub-teams in 2015, we grew to 23 members. We've now grown to over 100 — that's bigger than many companies! And of course, looking beyond the teams, the size of the Rust community as a whole has grown tremendously as well. As a result of this growth, we've found that the processes which served us well when we were a smaller project are starting to feel some strain.

Many of the teams have announced plans to revamp their processes to scale better. Often this can be as simple as taking the time to write down things that previously were understood only informally — sometimes it means establishing new structures.

Because of this widespread interest in governance, we've also created a new Governance Working Group. This group is going to be devoted to working with each team to hone its governance structure and to help pass lessons and strategies between teams.

Additionally, the RFC process has been a great boon for Rust, but as we've grown, there have been times when it didn't work as well. We may look at revising the process this year.

Long-standing requests

There are a number of exciting initiatives that have been sitting in a limbo state — the majority of the design is done, but there are some lingering complications that we haven't had time to work out. This year we hope to take a fresh look at some of these designs and push to resolve those remaining complications.

Examples include:

  • The Cargo team and custom registries
  • The Language team is taking a look at async/await, specialization, const generics, and generic associated types
  • The Libs team wants to finish custom allocators


Finally, the last few years have also seen a lot of foundational work. The compiler, for example, was massively refactored to support incremental compilation and to be better prepared for IDEs. Now that we've got these pieces in place, we want to do the “polish” work that really makes for a great experience.



This post only covered a few examples of the plans we've been making. If you'd like to see the full details, take a look at the RFC itself.

Here's to a great 2019 for Rust!

Mozilla VR Blog: VoxelJS: Chunking Magic


A couple of weeks ago I relaunched VoxelJS with modern ThreeJS and modules support. Today I'm going to talk a little bit about how VoxelJS works internally, specifically how voxels are represented and drawn. This is the key magic part of a voxel engine, and I owe a tremendous debt to Max Ogden, James Halliday, and Mikola Lysenko.

Voxels are represented by numbers in a large three dimensional array. Each number says what type of block occupies that slot, with 0 representing empty. The challenge is how to represent a potentially infinite set of voxels without slowing the computer to a crawl. The only way to do this is to load just a portion of the world at a time.
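A minimal sketch of that representation might look like the following. Note that the class and method names here are illustrative, not VoxelJS's actual API: a flat typed array holds one block type per voxel, with a small indexing helper to flatten (x, y, z) into an array offset.

```javascript
// Sketch: a voxel grid stored as a flat typed array.
// Names (VoxelGrid, index, get, set) are illustrative only.
class VoxelGrid {
  constructor(sizeX, sizeY, sizeZ) {
    this.sizeX = sizeX;
    this.sizeY = sizeY;
    this.sizeZ = sizeZ;
    // One byte per voxel; 0 means empty, other values are block types.
    this.data = new Uint8Array(sizeX * sizeY * sizeZ);
  }
  index(x, y, z) {
    // Flatten (x, y, z) into a single array offset.
    return x + this.sizeX * (y + this.sizeY * z);
  }
  get(x, y, z) {
    return this.data[this.index(x, y, z)];
  }
  set(x, y, z, blockType) {
    this.data[this.index(x, y, z)] = blockType;
  }
}
```

A `Uint8Array` keeps memory predictable (one byte per voxel), which matters once worlds get large.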

The Challenge

We could do this by generating a large structure, say 512 x 512 x 512 blocks, then setting entries in the array whenever the player creates or destroys a block.

This process would work, but we have a problem. A grid of numbers is great for representing voxels, but we don’t have a way to draw them. ThreeJS (and by extension, WebGL) only thinks in terms of polygons. So we need a process to turn the voxel grid into a polygon mesh. This process is called meshing.

The simple way to do meshing would be to create a cube object for every voxel in the original data. This would work but it would be incredibly slow. Plus most cubes are inside terrain and would never be seen, but they would still take up memory in the GPU and have to be culled from the view. Clearly we need to find a way to create more efficient meshes.

Even if we have a better way to do the meshing we still have another problem. When the player creates or destroys a block we need to update the data. Updating a single entry in an array is easy, but then we have to recreate the entire polygon mesh. That’s going to be incredibly slow if we must rebuild the entire mesh of the entire world every time the player creates a block, which will happen a lot in a game like Minecraft. VoxelJS has a solution to both of our problems: chunks.


The world is divided into large cubes called chunks, each composed of a uniform set of blocks. By default each chunk is 16x16x16 blocks, though you can adjust this depending on your needs. Each chunk is represented by an array of block types, just as before, and when any block in the chunk changes the entire chunk is transformed into a new polygon mesh and sent to the GPU. Because the chunk is so much smaller than before this shouldn’t take very long. In practice it can be done in less than a few milliseconds on a laptop. Only the single chunk which changed needs to be updated, the rest can stay in GPU memory and just be redrawn every frame (which is something modern GPUs excel at).
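The bookkeeping behind this is mostly coordinate arithmetic: every world position maps to one chunk plus a local offset inside it. Here is a sketch of that mapping (the function name is mine, not VoxelJS's), assuming the default 16x16x16 chunk size. The extra modulo dance keeps local offsets non-negative for negative world coordinates, where JavaScript's `%` would otherwise return a negative remainder.

```javascript
// Sketch: map a world coordinate to a chunk coordinate plus a local
// offset within that chunk. Assumes 16x16x16 chunks.
const CHUNK_SIZE = 16;

// Normalize a remainder into the range [0, CHUNK_SIZE).
function localOffset(w) {
  return ((w % CHUNK_SIZE) + CHUNK_SIZE) % CHUNK_SIZE;
}

function worldToChunk(wx, wy, wz) {
  return {
    chunk: {
      x: Math.floor(wx / CHUNK_SIZE),
      y: Math.floor(wy / CHUNK_SIZE),
      z: Math.floor(wz / CHUNK_SIZE),
    },
    local: {
      x: localOffset(wx),
      y: localOffset(wy),
      z: localOffset(wz),
    },
  };
}
```

With this in hand, editing a block is: find the chunk, update one entry in its array, and re-mesh only that chunk.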

As the player moves around the world VoxelJS maintains a list of chunks that are nearby. This is defined as chunks within a certain radius of the player (also configurable). As chunks drop out of the nearby list they are deleted and new chunks are created. Thus at any given time only a small number of chunks are in memory.

This chunking system works pretty well with the naive algorithm that creates a cube for every voxel, but it still ends up using a tremendous amount of GPU memory and can be slow to draw, especially on mobile GPUs. We can do better.


The first optimization is to only create cube geometry for voxels that are actually visible from the outside. All internal voxels can be culled. In VoxelJS this is implemented by the CullingMesher in the main src directory.
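The core test behind culling is simple: a face of a voxel is visible only if the neighbouring voxel on that side is empty. A sketch of that check, simplified from what an actual mesher like CullingMesher must do (`voxelAt` is an assumed lookup that returns 0 outside the loaded region):

```javascript
// Sketch of the culling test: emit a face only when the neighbour
// in that direction is empty (0). Names are illustrative.
const DIRECTIONS = [
  [1, 0, 0], [-1, 0, 0],
  [0, 1, 0], [0, -1, 0],
  [0, 0, 1], [0, 0, -1],
];

function visibleFaces(voxelAt, x, y, z) {
  if (voxelAt(x, y, z) === 0) return []; // empty voxel: nothing to draw
  // Keep only directions where the neighbouring voxel is empty.
  return DIRECTIONS.filter(
    ([dx, dy, dz]) => voxelAt(x + dx, y + dy, z + dz) === 0
  );
}
```

A solid voxel buried on all six sides produces zero faces, which is exactly why culling removes the vast bulk of the naive mesh.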

Culling works a lot better than naive meshing, but we could still do better. If there is a 10 x 10 wall it would be represented by 100 cubes (multiplied by another 12 when each cube is turned into triangles, for 1,200 triangles); yet in theory the wall could be drawn with a single box object.

To handle this case VoxelJS has another algorithm called the GreedyMesher. This produces vastly more efficient meshes and takes only slightly longer to compute than the culling mesh.

There is a downside to greedy meshing, however. Texture mapping is very easy on cubes, especially when using a texture atlas. The same goes for ambient occlusion. However, for arbitrarily sized rectangular shapes the texturing and lighting calculations become a lot more complicated.


Once we have the voxels turned into geometry we can upload it to the GPU, but this won’t let us draw different block types differently. We could do this with vertex colors, but for a real Minecraft like environment we need textures. A mesh can only be drawn with a single texture in WebGL. (Actually it is possible to use multiple textures per polygon using Texture Arrays but those are only supported with WebGL 2). In order to render multiple textures on a single mesh, which is required for chunks that have more than one block type, we must combine the different images into a single texture called a texture atlas.

To use a texture atlas we need to tell the GPU which part of the atlas to draw for each triangle in the mesh. VoxelJS does this by adding a subrects attribute to the buffer that holds the vertex data. Then the fragment shader can use this data to calculate the correct UV values for the required sub-texture.
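The arithmetic involved is straightforward to sketch. Assuming a square atlas divided into equally sized tiles laid out row by row (the function name and layout are my assumptions, not necessarily how VoxelJS packs its atlas), a tile index maps to a UV rectangle like this:

```javascript
// Sketch: convert a tile's index in a texture atlas into a UV rectangle.
// Assumes a square atlas of tilesPerRow x tilesPerRow equal tiles,
// laid out row by row. Names are illustrative.
function tileToUV(tileIndex, tilesPerRow) {
  const size = 1 / tilesPerRow;           // width/height of one tile in UV space
  const col = tileIndex % tilesPerRow;
  const row = Math.floor(tileIndex / tilesPerRow);
  return {
    u0: col * size,
    v0: row * size,
    u1: (col + 1) * size,
    v1: (row + 1) * size,
  };
}
```

The per-vertex subrect data lets the fragment shader do the equivalent remapping on the GPU for each triangle.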

Animated textures

VoxelJS can also handle animated textures for special blocks like water and lava. This is done by uploading all of the animation frames next to each other in the texture atlas. A frameCount attribute is passed to the shader along with a time uniform. The shader uses this information to offset the texture coordinates to get the particular frame that should be drawn.
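The frame-selection arithmetic the shader performs can be sketched in plain JavaScript. This assumes frames sit side by side in the atlas, each `frameWidth` wide in UV space; the function name and the fixed `framesPerSecond` parameter are my illustration of the time-uniform-plus-frameCount scheme described above.

```javascript
// Sketch: pick the horizontal UV offset of the current animation frame.
// Assumes frames are packed side by side, each frameWidth wide in UV space.
function frameOffsetU(time, frameCount, framesPerSecond, frameWidth) {
  // Advance through frames at a fixed rate, wrapping around at frameCount.
  const frame = Math.floor(time * framesPerSecond) % frameCount;
  return frame * frameWidth;
}
```

In the real shader this offset is simply added to the sub-texture's base U coordinate before sampling.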


This animation system works pretty well but it does require all animations to have the same speed. To fix this I think we would need to have a speed attribute passed to the shader.

Currently VoxelJS supports both culled and greedy meshing. You can switch between them at runtime by swapping out the mesher and regenerating all chunks. Ambient occlusion lighting works wonderfully with culled meshing but does not currently work correctly with the greedy algorithm. Additionally the texturing code for greedy meshing is far more complicated and less efficient than it should be. All of these are open issues to be fixed.

Getting Involved

I hope this gives you insight into how VoxelJS Next works and how you can customize it to your needs.

If you'd like to get involved I've created a list of issues that are great for people new to the project.

Mozilla B-Team: happy bmo push day!

Bugfixes + enabling the new security feature for API keys.

release tag

the following changes have been pushed to bugzilla.mozilla.org:

  • [1541303] Default component bug type is not set as expected; enhancement severity is still used for existing bugs
  • [1543760] When cloning a bug, the bug is added to ‘Regressed by’ of the new bug
  • [1543718] Obsolete attachments should have a strikethrough
  • [1543798] Do not treat email addresses with invalid.bugs as unassigned…


Daniel Stenberg: curl + hackerone = TRUE

There seems to be no end to updated posts about bug bounties in the curl project these days. Not long ago I mentioned the then new program that sadly enough was cancelled only a few months after its birth.

Now we are back with a new and refreshed bug bounty program! The curl bug bounty program reborn.

This new program, which hopefully will manage to survive a while, is set up in cooperation with the major bug bounty player out there: HackerOne.

Basic rules

If you find or suspect a security related issue in curl or libcurl, report it! (and don’t speak about it in public at all until an agreed future date.)

You’re entitled to ask for a bounty for each and every valid, confirmed security problem that wasn’t already reported and that exists in the latest public release.

The curl security team will assess the report and the problem, and will then award money depending on bug severity and other details.

Where does the money come from?

We intend to use funds from wherever we can get them. The HackerOne Internet Bug Bounty program helps us; donations collected over at opencollective will be used, as will dedicated company sponsorships.

We will of course also greatly appreciate any direct sponsorships from companies for this program. You can help curl get even better by adding funds to the bounty program and helping us reward hard-working researchers.

Why bounties at all?

We compete for the security researchers’ time and attention with other projects, both open and proprietary. The projects that can help put food on these researchers’ tables might have a better chance of getting them to use their tools, time, skills and fingers to find our problems instead of someone else’s.

Finding and disclosing security problems can be very time and resource consuming. We want to make it less likely that people give up their attempts before they find anything. We can help full and part time security engineers sustain their livelihood by paying for the fruits of their labor. At least a little bit.

Only released code?

The state of the code repository in git is not subject to bounties. We need to allow developers to make mistakes and to experiment a little in the git repository, while we expect and want every actual public release to be free from security vulnerabilities.

So yes, the obvious downside with this is that someone could spot an issue in git and decide not to report it since it doesn’t give any money, hope that the flaw will linger around and ship in the release, and then report it and claim the reward money. I think we just have to trust that this will not be a standard practice, and if we in fact notice that someone tries to exploit the bounty in this manner, we can consider counter-measures then.

How about money for the patches?

There’s of course always a discussion as to why we should pay anyone for bugs, and then why we pay only for reported security problems and not for the heroes who authored the code in the first place, nor for the good people who write the patches to fix the reported issues. Those are valid questions, and we would of course rather pay every contributor a lot of money, but we don’t have the funds for that. Getting funding for this kind of dedicated bug bounty seems to be doable, whereas a generic fund to pay contributors is trickier: it is harder to attract money, and it is also really hard to distribute fairly in an open project of curl’s nature.

How much money?

At the start of this program the award amounts are as follows. We reward up to these amounts for vulnerabilities at the following severity levels:

  • Critical: 2,000 USD
  • High: 1,500 USD
  • Medium: 1,000 USD
  • Low: 500 USD

Depending on how things go, how fast we drain the fund and how much companies help us refill, the amounts may change over time.

Found a security flaw?

Report it!

Niko Matsakis: AiC: Collaborative summary documents

In my previous post, I talked about the idea of mapping the solution space:

When we talk about the RFC process, we always emphasize that the point of RFC discussion is not to select the best answer; rather, the point is to map the solution space. That is, to explore what the possible tradeoffs are and to really look for alternatives. This mapping process also means exploring the ups and downs of the current solutions on the table.

One of the challenges I see with how we often do design is that this “solution space” is actually quite implicit. We are exploring it through comments, but each comment is only tracing out one path through the terrain. I wanted to see if we could try to represent the solution space explicitly. This post is a kind of “experience report” on one such experiment, what I am calling a collaborative summary document (in contrast to the more standard summary comment that we often do).

The idea: a collaborative summary document

I’ll get into the details below, but the basic idea was to create a shared document that tried to present, in a neutral fashion, the arguments for and against a particular change. I asked people to stop commenting in the thread and instead read over the document, look for things they disagreed with, and offer suggestions for how it could be improved.

My hope was that we could not only get a thorough summary from the process, but also do something deeper: change the focus of the conversation from “advocating for a particular point of view” towards “trying to ensure a complete and fair summary”. I figured that after this period was done, people were likely to go back to being advocates for their position, but at least for some time we could try to put those feelings aside.

So how did it go?

Overall, I felt very positive about the experience and I am keen to try it again. I think that something like “collaborative summary documents” could become a standard part of our process. Still, I think it’s going to take some practice trying this a few times to figure out the best structure. Moreover, I think it is not a silver bullet: to realize the full potential, we’re going to have to make other changes too.

What I did in depth

What I did more specifically was to create a Dropbox Paper document. This document contained my best effort at summarizing the issue at hand, but it was not meant to be just my work. The idea was that we would all jointly try to produce the best summary we could.

After that, I made an announcement on the original thread asking people to participate in the document. Specifically, as the document states, the idea was for people to do something like this:

  • Read the document, looking for things they didn’t agree with or felt were unfairly represented.
  • Leave a comment explaining their concern; or, better, supply alternate wording that they did agree with
    • The intention was always to preserve what they felt was the sense of the initial comment, but to make it more precise or less judgemental.

I was then playing the role of editor, taking these comments and trying to incorporate them into the whole. The idea was that, as people edited the document, we would gradually approach a fixed point, where there was nothing left to edit.

Structure of the shared document

Initially, when I created the document, I structured it into two sections – basically “pro” and “con”. The issue at hand was a particular change to the Futures API (the details don’t matter here). In this case, the first section advocated for the change, and the second section advocated against it. So, something like this (for a fictional problem):


We should make this change because of X and Y. The options we have now (X1, X2) aren’t satisfying because of problem Z.


This change isn’t needed. While it would make X easier, there are already other useful ways to solve that problem (such as X1, X2). Similarly, the goal of X isn’t very desirable in the first place because of A, B, and C.

I quickly found this structure rather limiting. It made it hard to compare the arguments – as you can see here, there are often “references” between the two sections (e.g., the con section refers to the argument X and tries to rebut it). Trying to compare and consider these points required a lot of jumping back and forth between the sections.

Using nested bullets to match up arguments

So I decided to restructure the document to integrate the arguments for and against. I created nesting to show when one point was directly in response to another. For example, it might read like this (this is not an actual point; those were much more detailed):

  • Pro: We should make this change because of X.
    • Con: However, there is already the option of X1 and X2 to satisfy that use-case.
      • Pro: But X1 and X2 suffer from Z.
  • Pro: We should make this change because of Y and Z.
    • Con: Those goals aren’t as important because of A, B, and C.

Furthermore, I tried to make the first bullet point a bit special – it would be the one that encapsulated the heart of the dispute, from my POV, with the later bullet points getting progressively more into the weeds.

Nested bullets felt better, but we can do better still I bet

I definitely preferred the structure of nested bullets to the original structure, but it didn’t feel perfect. For one thing, it requires me to summarize each argument into a single paragraph. Sometimes this felt “squished”. I didn’t love the repeated “pro” and “con”. Also, things don’t always fit neatly into a tree; sometimes I had to “cross-reference” between points on the tree (e.g., referencing another bullet that had a detailed look at the trade-offs).

If I were to do this again, I might tinker a bit more with the format. The most extreme option would be to try and use a “wiki-like” format. This would allow for free inter-linking, of course, and would let us hide details into a recursive structure. But I worry it’s too much freedom.

Adding “narratives” on top of the “core facts”

One thing I found that surprised me a bit: the summary document aimed to summarize the “core facts” of the discussion – in so doing, I hoped to summarize the two sides of the argument. But I found that facts alone cannot give a “complete” summary: to give a complete summary, you also need to present those facts “in context”. Or, put another way, you also need to explain the weighting that each side puts on the facts.

In other words, the document did a good job of enumerating the various concerns and “facets” of the discussion. But it didn’t do a good job of explaining why you might fall on one side or the other.

I tried to address this by crafting a “summary comment” on the main thread. This comment had a very specific form. It began by trying to identify the “core tradeoff” – the crux of the disagreement:

So the core tradeoff here is this:

  • By leaving the design as is, we keep it as simple and ergonomic as it can be;
    • but, if we wish to pass implicit parameters to the future when polling, we must use TLS.

It then identified some of the “facets” of the space, which different people weight in different ways:

So, which way you fall will depend on

  • how important you think it is for Future to be ergonomic
    • and naturally how much of an ergonomic hit you believe this to be
  • how likely you think it is for us to want to add implicit parameters
  • how much of a problem you think it is to use TLS for those implicit parameters

And then it tried to tell a series of “narratives”. Basically, it told the story of each group that was involved and why that led them to assign different weights to the points above. Those weights in turn led to a different opinion on the overall issue.

For example:

I think a number of people feel that, by now, between Rust and other ecosystems, we have a pretty good handle on what sort of data we want to thread around and what the best way is to do it. Further, they feel that TLS or passing parameters explicitly is the best solution approach for those cases. Therefore, they prefer to leave the design as is, and keep things simple. (More details in the doc, of course.)

Or, on the other side:

Others, however, feel like there is additional data they want to pass implicitly and they do not feel convinced that TLS is the best choice, and that this concern outweighs the ergonomic costs. Therefore, they would rather adopt the PR and keep our options open.

Finally, it’s worth noting that there aren’t always just two sides. In fact, in this case I identified a third camp:

Finally, I think there is a third position that says that this controversy just isn’t that important. The performance hit of TLS, if you wind up using it, seems to be minimal. Similarly, the clarity/ergonomics of Future are not as critical, as users who write async fn will not implement it directly, and/or perhaps the effect is not so large. These folks probably could go either way, but would mostly like us to stop debating it and start building stuff. =)

One downside of writing the narratives in a standard summary comment was that it was not “part of” the main document. In fact, it feels to me like these narratives are a pretty key part of the whole thing. In fact, it was only once I added these narratives that I really felt I started to understand why one might choose one way or the other when it came to this decision.

If I were to do this again, I would make narratives more of a first-class entity in the document itself. I think I would also focus on some other “meta-level reasoning”, such as fears and risks. I think it’s worth asking, for any given decision, “what if we make the wrong call?” – e.g., in this case, what happens if we decide not to future proof, but then we regret it; in contrast, what happens if we decide to add future proofing, but we never use it.

We never achieved “shared ownership” of the summary

One of my goals was that we could, at least for a moment, disconnect people from their particular position and turn their attention towards the goal of achieving a shared and complete summary. I didn’t feel that we were very successful in this goal.

For one thing, most participants simply left comments on parts they disagreed with; they didn’t themselves suggest alternate wording. That meant that I personally had to take their complaint and try to find some “middle ground” that accommodated the concern but preserved the original point. This was stressful for me and a lot of work. More importantly, it meant that most people continued to interact with the document as advocates for their point-of-view, rather than trying to step back and advocate for the completeness of the summary.

In other words: when you see a sentence you disagree with, it is easy to say that you disagree with it. It is much harder to rephrase it in a way that you do agree with – but which still preserves (what you believe to be) the original intent. Doing so requires you to think about what the other person likely meant, and how you can preserve that.

However, one possible reason that people may have been reluctant to offer suggestions is that, often, it was hard to make “small edits” that addressed people’s concerns. Especially early on, I found that, in order to address some comment, I would have to make larger restructurings. For example, taking a small sentence and expanding it to a bullet point of its own.

Finally, some people who were active on the thread didn’t participate in the doc. Or, if they did, they did so by leaving comments on the original GitHub thread. This is not surprising: I was asking people to do something new and unfamiliar. Also, this whole process played out relatively quickly, and I suspect some people just didn’t even see the document before it was done.

If I were to do this again, I would want to start it earlier in the process. I would also want to consider synchronous meetings, where we could try to process edits as a group (though I think it would take some thought to figure out how to run such a meeting).

In terms of functioning asynchronously, I would probably switch to a Google Doc instead of Dropbox Paper. Google Docs have a better workflow for suggesting edits, I believe, as well as a richer permissions model.

Finally, I would try to draw a harder line in trying to get people to “own” the document and suggest edits of their own. I think the challenge of trying to neutrally represent someone else’s point of view is pretty powerful.

Concluding remarks

Conducting this exercise taught me some key lessons:

  • We should experiment with the best way to describe the back-and-forth (I found it better to put closely related points together, for example, rather than grouping the arguments into ‘pro and con’).
  • We should include not only the “core facts” but also the “narratives” that weave those facts together.
  • We should do this summary process earlier and we should try to find better ways to encourage participation.

Overall, I felt very good about the idea of “collaborative summary documents”. I think they are a clear improvement over the “summary comment”, which was the prior state of the art.

If nothing else, the quality of the summary itself was greatly improved by being a collaborative document. I felt like I had a pretty good understanding of the question when I started, but getting feedback from others on the things they felt I misunderstood, or just the places where my writing was unclear, was very useful.

But of course my aims run larger. I hope that we can change how design work feels, by encouraging all of us to deeply understand the design space (and to understand what motivates the other side). My experiment with this summary document left me feeling pretty convinced that it could be a part of the solution.


I’ve created a discussion thread on the internals forum where you can leave questions or comments. I’ll definitely read them and I will try to respond, though I often get overwhelmed1, so don’t feel offended if I fail to do so.


  1. So many things, so little time.

The Servo BlogThis Month In Servo 128

In the past month, we merged 189 PRs in the Servo organization’s repositories.

Planning and Status

Our roadmap is available online, including the team’s plans for 2019.

This week’s status updates are here.

Exciting works in progress

Notable Additions

  • ferjm added support for replaying media that has ended.
  • ferjm fixed a panic that could occur when playing audio on certain platforms.
  • nox ensured that a source of unsafety in layout now panics instead of causing undefined behaviour.
  • soniasinglad removed the need for OpenSSL binaries to be present in order to run tests.
  • ceyusa implemented support for EGL-only hardware accelerated media playback.
  • Manishearth improved the transform-related parts of the WebXR implementation.
  • AZWN exposed some hidden unsafe behaviour in Promise-related APIs.
  • ferjm added support for using MediaStream as media sources.
  • georgeroman implemented support for the media canPlayType API.
  • JHBalaji and others added support for value curve automation in the WebAudio API.
  • jdm implemented a sampling profiler.
  • gterzian made the sampling profiler limit the total number of samples stored.
  • Manishearth fixed a race in the WebRTC backend.
  • kamal-umudlu added support for using the fullscreen capabilities of the OS for the Fullscreen API.
  • jdm extended the set of supported GStreamer packages on Windows.
  • pylbrecht added measurements for layout queries that are forced to wait on an ongoing layout operation to complete.
  • TheGoddessInari improved the MSVC detection in Servo’s build system on Windows.
  • sbansal3096 fixed a panic when importing a stylesheet via CSSOM APIs.
  • georgeroman implemented the missing XMLSerializer API.
  • KwanEsq fixed a web compatibility issue with a CSSOM API.
  • aditj added support for the DeleteCookies WebDriver API.
  • peterjoel redesigned the preferences support to better handle preferences at compile time.
  • gterzian added a thread pool for the network code.
  • lucasfantacuci refactored a bunch of code that makes network requests to use a builder pattern.
  • cdeler implemented the missing DOMException constructor API.
  • gterzian and jdm added Linux support to the thread sampling implementation.

New Contributors

Interested in helping build a web browser? Take a look at our curated list of issues that are good for new contributors!

Firefox NightlyThese Weeks in Firefox: Issue 57


Toggling the print styles on wikipedia

Controlling letter-spacing from the Fonts panel

  • The Web Console now groups content blocking messages, if you flip the devtools.webconsole.groupWarningMessages pref. Much easier to read the output!
  • The new remote debugging is ON in Nightly now. Check out about:debugging to see the new experience!
    • An “Intent to unship” notice for WebIDE and the Connect dialog has been sent.
  • The Oxidation of Firefox Sync continues, with the team writing new components in Rust! 🦀
  • Firefox Front-end Performance Update #16 posted, highlighting some performance improvements that are going out in Firefox 67
  • We enabled the FIDO U2F API in Nightly, targeting an uplift to Firefox 67
  • Access to the logins list from the entry points not tied to a specific website (about:preferences and the main menu) has nearly doubled in the week-and-a-half since adding the main menu item.

  • Access from a page context (filtered to show logins for that domain) has grown over 50x in just two days since enabling the autocomplete footer!

Friends of the Firefox team

Here’s a list of all resolved bugs by volunteers

Fixed more than one bug

  • Carolina Jimenez Gomez
  • Dhyey Thakore [:dhyey35]
  • Hemakshi Sachdev [:hemakshis]
  • Ian Moody [:Kwan] (UTC+0)
  • Martin Stránský [:stransky]
  • Mellina Y.
  • PhoenixAbhishek
  • Suriyaa Sundararuban [:suriyaa]
  • Trishul
  • Yuan Cheng

New contributors (🌟 = first patch)

Project Updates

Activity Stream

Discovery Stream

Add-ons / Web Extensions


Firefox Accounts

Developer Tools

  • Layout Tools
    • Starting to work on Inactive CSS. This lets users see when CSS declarations are valid but do not have any effect on the page (bug, mockup #1 “how inactive CSS declarations look in the inspector”, mockup #2 “example of tooltips telling users why a declaration is inactive”).
    • Coming soon: Making CSS warnings in the console more useful. That means, e.g., not emitting warnings for vendor-prefixed properties when corresponding unprefixed properties exist. Or linking warnings to DOM nodes in the inspector. Thanks to jdescottes, nchevobbe and emilio for working on the platform support.
  • Debugger
    • Uplifting several fixes for column breakpoints and windowless workers.
    • Column Breakpoints are pretty solid now. As always, please keep an eye out for issues and report any if needed.
    • Most of the team is busy with general debugger quality issues they’ve prioritized.
  • Console
    • Clicking on a location in the console now opens the debugger at the expected column (thanks Mellina (yogmel), bug).
    • Switch `devtools.webconsole.input.autocomplete` if you want to turn autocompletion off entirely (thanks Dhruvi, bug).
  • Fission
    • Platform work on this requires the DevTools toolbox to be loaded via `<iframe type="content"/>`. This means the toolbox document is slightly more “sandboxed” into its frame. This unblocks Fission, which is good! (see bug and bug).





Firefox for Android
  • Nightly 68 now has an ARM64 JIT. ARM64 will improve stability (fewer out-of-memory crashes) and security (better ASLR), but is not expected to have much impact on performance at this time.
  • Nightly 68 has enabled Android PGO. Improves Speedometer score by 5%!
  • Nightly 68 has reduced paint suppression delay to improve First Contentful Paint time.
  • Nightly 68 has enabled Retained Display Lists to improve responsiveness on complex pages.

Password Manager


Policy Engine

Privacy / Security

Search and Navigation

Quantum Bar
  • Fixed a regression causing us to get search suggestions for file:// URIs
  • Refining details about the first Quantum Bar experiment, to be run in Q2
  • 35 bugs fixed in Quantum Bar in the last 2 weeks, 13 open bugs to reach MVP

Christopher Arnold

At last year’s Game Developers Conference I had the chance to experience new immersive video environments that are being created by game developers releasing titles for the new Oculus, HTC Vive and Google Daydream platforms.  One developer at the conference, Opaque Multimedia, demonstrated "Earthlight", which gave the participant an opportunity to crawl on the outside of the International Space Station as the earth rotated below.  In the simulation, a Microsoft Kinect sensor was following the position of my hands.  But what I saw in the visor was that my hands were enclosed in an astronaut’s suit.  The visual experience was so compelling that when my hands missed the rungs of the ladder I felt a palpable sense of urgency because the environment was so realistically depicted.  (The space station was rendered as a scale model of the actual space station using the "Unreal" game physics engine.)  The experience was far beyond what I’d experienced a decade ago with crowd-sourced simulated environments like Second Life, where artists create 3D worlds in a server-hosted environment that other people could visit as avatars.

Since that time I’ve seen some fascinating demonstrations at Mozilla’s Virtual Reality developer events.  I’ve had the chance to witness a 360 degree video of a skydive, used the WoofbertVR application to visit real art gallery collections displayed in a simulated art gallery, spectated a simulated launch and lunar landing of Apollo 11, and browsed 360 photography depicting dozens of fascinating destinations around the globe.  This is quite a compelling and satisfying way to experience visual splendor depicted spatially.  With The New York Times and IMAX now entering the industry, we can anticipate a wealth of media content to take us to places in the world we might never have a chance to go.

Still, the experiences of these simulated spaces seem very ethereal, which brings me to another emerging field.  At Mozilla Festival in London a few years ago, I had a chance to meet Yasuaki Kakehi of Keio University in Japan, who was demonstrating a haptic feedback device called Techtile.  The Techtile was akin to a microphone for physical feedback that could then be transmitted over the web to another mirror device.  When he put marbles in one cup, another person holding an empty cup could feel the rattle of the marbles as if the same marble impacts were happening on the sides of the empty cup held by the observer.  The sense was so realistic, it was hard to believe that it was entirely synthesized and transmitted over the Internet.  Subsequently, at the Consumer Electronics Show, I witnessed another of these haptic speakers.  But this one conveyed the sense not by mirroring precise physical impacts, but by giving precisely timed pulses, which the holder could feel as an implied sense of force direction without the device actually moving the user's hand at all.  It was a haptic illusion instead of a precise physical sensation.

As haptics work advances, it has the potential to impact common everyday experiences beyond the theoretical and experimental demonstrations I experienced.  Haptic devices are now available in new Honda cars as Road Departure Mitigation, whereby the steering wheel can simulate rumble strips on the sides of a lane just by sensing the painted lines on the pavement with cameras.

I am also very excited to see this field expand to include music.  At Ryerson University's SMART lab, Dr. Maria Karam, Dr. Deborah Fels and Dr. Frank Russo applied the concepts of haptics and somatosensory depiction of music to people who didn't have the capability of appreciating music aurally.  Their first product, called the Emoti-chair, breaks the frequency range of music apart to depict different audio qualities spatially across the listener's back.  This is based on the concept that the human cochlea is essentially a large coiled surface upon which sounds of different frequencies resonate and are felt at different locations.  While I don't have perfect pitch, I think having a spatial perception of tonal scale would allow me to develop a cognitive sense of pitch correctness to compensate, using a listening aid like this.  Fortunately, Dr. Karam is advancing this work to introduce new form factors to the commercial market in coming years.

Over many years I have had the chance to study various forms of folk percussion.  One of the most interesting drumming experiences I have had was a visit to Lombok, Indonesia, where I had the chance to see a Gamelan performance in a small village, accompanied by the large Gendang Belek drums.  The Gendang Belek is a large barrel drum worn with a strap that goes over the shoulders.  When the drum is struck, the reverberation is so fierce and powerful that it shakes the entire body by resonating through the spine.  I also had an opportunity to study Japanese Taiko while living in Japan.  The taiko resonates in the listener's chest.  But the experience of bone-conduction through the spine is an altogether more intense way to experience rhythm.

Because I am such an avid fan of physical experiences of music, I frequently gravitate toward bassy music.  I tend to play it in a sub-woofer-heavy car stereo, or seek out experiences to hear this music in nightclub or festival performances where large speakers animate the lower frequencies of music.  I can imagine that if more people had the physical experience of drumming that I've had, instead of just the auditory experience of it, more people would enjoy making music themselves.

As more innovators like TADs Inc. (an offshoot of the Ryerson University project) bring physical experiences of music to the general consumer, I look forward to experiencing my music in greater depth.

Christopher ArnoldHow a speech-based internet will change our perceptions

A long time ago I remember reading Stephen Pinker discussing the evolution of language.  I had read Beowulf, Chaucer and Shakespeare, so I was quite interested in these linguistic adaptations over time.  Language shifts rapidly through the ages, to the point that even English of 500 years ago sounds foreign to us now.  His thesis in the piece was about how language is going to shift toward the Chinese pronunciation of it.  Essentially, the majority of speakers will determine the rules of the language’s direction.  There are more Chinese in the world than native English speakers, so as they adopt and adapt the language, more of us will speak like the greater factions of our language’s custodians.  The future speakers of English will determine its course.  By force of "majority rules", language will go in the direction of its greatest use, which will be the Pangea of the global populace seeking common linguistic currency with others of foreign tongues.  Just as the US dollar is an “exchange currency” standard at present between foreign economies, English is the shortest path between any two ESL speakers, no matter their background.

Subsequently, I heard these concepts reiterated in a Scientific American podcast.  The concept there being that English, when spoken by those who learned it as a second language, is easier for other speakers to understand than native-spoken English.  British, Indian, Irish, Aussie, New Zealand and American English are relics in a shift, very fast, away from all of them.  As much as we appreciate each, they are all toast.  Corners will be cut, idiomatic usage will be lost, as the fastest path to information conveyance determines the path that language takes in its evolution.  English will continue to be a mutt language flavored by those who adopt and co-opt it.  Ultimately this means that no matter what the original language was, the common use of it will set the rules of the future.  So we can say goodbye to grammar as native speakers know it.  There is a greater shift happening than our traditions.  And we must brace as this evolution takes us with it to a linguistic future determined by others.

I’m a person who has greatly appreciated idiomatic and aphoristic usage of English.  So I’m one of those now-old codgers who cringes at the gradual degradation of language.  But I’m listening to an evolution in process, a shift toward a language of broader and greater utility.  So the cringes I feel are reactions to the time-saving adaptations of our language as it becomes something greater than it has been in the past.  Brits likely thought/felt the same as their linguistic empire expanded.  Now is just a slightly stranger shift.

This evening I was in the kitchen, and I decided to ask Amazon Alexa to play some Led Zeppelin.  This was a band that used to exist in the 1970’s era during which I grew up.  I knew their entire corpus very well.  So when I started hearing one of my favorite songs, I knew this was not what I had asked for.  It was a good rendering for sure, but it was not Robert Plant singing.  Puzzled, I asked Alexa who was playing.  She responded “Lez Zeppelin”.  This was a new band to me.  A very good cover band I admit.  (You can read about them here: http://www.lezzeppelin.com/)
But why hadn't Alexa wanted to respond to my initial request?  Was it because Atlantic Records hadn't licensed Led Zeppelin's actual catalog for Amazon Prime subscribers?

Two things struck me.  First, we aren’t going to be tailoring our English to Chinese ESL common speech patterns as Mr. Pinker predicted.  We’re probably also going to be shifting our speech patterns to what Alexa, Siri, Cortana and Google Home can actually understand.  They are the new ESL vector that we hadn't anticipated a decade ago.  It is their use of English that will become conventional, as English is already the de facto language of computing, and therefore our language is now the slave to code.

What this means for that band (that used to be called Zeppelin) is that such entity will no longer be discoverable.  In the future, if people say “Led Zeppelin” to Alexa, she’ll respond with Lez Zeppelin (the rights-available version of the band formerly known as "Led Zeppelin").  Give humanity 100 years or so, and the idea of a band called Led Zeppelin will seem strange to folk.  Five generations removed, nobody will care who the original author was.  The "rights" holder will be irrelevant.  The only thing that will matter in 100 years is what the bot suggests.

Our language isn't ours.  It is the path to the convenient.  In bot speak, names are approximate and rights (ignoring the stalwart protectors) are meaningless.  Our concepts of trademarks, rights ownership, etc. are going to be steam-rolled by other factors, other "agents" acting at the user's behest.  The language and the needs of the spontaneous are immediate!

Christopher ArnoldMy 20 years of web

Twenty years ago I resigned from my former job at a financial news wire to pursue a career in San Francisco.  We were transitioning our news service (Jiji Press, a Japanese wire service similar to Reuters) to being a web-based news site.  I had followed the rise and fall of Netscape and the Department of Justice anti-trust case over Microsoft's bundling of IE with Windows.  But what clinched it for me was a Congressional testimony by the Chairman of the Federal Reserve (the US central bank) about his inability to forecast the potential growth of the Internet.

Working in the Japanese press at the time gave me a keen interest in international trade.  Prime Minister Hashimoto negotiated with United States Trade Representative Mickey Kantor to enhance trade relations and reduce protectionist tariffs that the countries used to artificially subsidize domestic industries.  Japan was the second largest global economy at the time.  I realized that if I was going to play a role in international trade, it was probably going to be in Japan or on the west coast of the US.  I decided that because Silicon Valley was where much of the industry growth in internet technology was happening, I had to relocate there if I wanted to engage in this industry.  So I packed up all my belongings and moved to San Francisco to start my new career.

At the time, there were hundreds of small agencies that would build websites for companies seeking to establish or expand their internet presence.  I worked with one of these agencies to build Japanese versions of clients' English websites.  My goal was to focus my work on businesses seeking international expansion.

At the time, I encountered a search engine company called LookSmart, which aspired to offer business-to-business search engines to major portals. (Business-to-business is often abbreviated B2B and is a tactic of supporting companies that have their own direct consumers, called business-to-consumer, which is abbreviated B2C.)  Their model was similar to Yahoo.com, but instead of trying to get everyone to visit one website directly, they wanted to distribute the search infrastructure to other companies, combining the aggregate resources needed to support hundreds of companies into one single platform that was customized on demand for those other portals.

At the time LookSmart had only English language web search.  So I proposed launching their first foreign language search engine and entering the Japanese market to compete with Yahoo!'s largest established user base outside the US.  Looksmart's President had strong confidence in my proposal and expanded her team to include a Japanese division to invest in the Japanese market launch.  After we delivered our first version of the search engine, Microsoft's MSN licensed it to power their Japanese portal and Looksmart expanded their offerings to include B2B search services for Latin America and Europe.

I moved to Tokyo, where I networked with the other major portals of Japan to power their web search as well.  Because at the time Yahoo! Japan wasn't offering such a service, a dozen companies signed up to use our search engine.  Once the combined reach of Looksmart Japan rivaled that of the destination website of Yahoo! Japan, our management brokered a deal for LookSmart Japan to join Yahoo! Japan.  (I didn't negotiate that deal by the way.  Corporate mergers and acquisitions tend to happen at the board level.)

By this time Google was freshly independent of its exclusive contract with Yahoo! to provide what they called "algorithmic backfill" for the Yahoo! Directory service that Jerry Yang and David Filo had pioneered at Stanford University.  Google started a B2C portal and began offering their own B2B publishing service by acquiring Yahoo! partner Applied Semantics, giving them the ability to put Google ads into every webpage on the internet without needing users to conduct searches anymore.  Yahoo!, fearing competition from Google in B2B search, acquired the Inktomi, Altavista, Overture, and Fast search engines, three of which were leading B2B search companies.  At this point Yahoo!'s Overture division hired me to work on market launches across Asia Pacific beyond Japan.

With Yahoo! I had excellent experiences negotiating search contracts with companies in Japan, Korea, China, Australia, India and Brazil before moving into their Corporate Partnerships team to focus on the US search distribution partners.

Then in 2007 Apple launched their first iPhone.  Yahoo! had been operating a lightweight mobile search engine for html that was optimized for being shown on mobile phones.  One of my projects in Japan had been to introduce Yahoo!'s mobile search platform as an expansion to the Overture platform.  However, with the ability for the iPhone to actually show full web pages, the market was obviously going to shift.

Several of my colleagues and I became captivated by the potential of developing specifically for the iPhone ecosystem.  So I resigned from Yahoo! to launch my own company, ncubeeight.  Similar to the work I had been doing at LookSmart and prior, we focused on companies that had already launched on the desktop internet and were now seeking to expand to the mobile internet ecosystem.

Being a developer in a nascent ecosystem was fascinating.  But it's much more complex than the open internet, because discovery of content on the phone depends on going through a marketplace, which is something like a business directory.  Apple and Google knew there were great business models in being the discovery gateway for this specific type of content.  Going "direct to consumer" is an amazing challenge of marketing on small-screen devices.  And gaining visibility in Apple iTunes and Google Play is an even more challenging marketing problem than publicizing your services on the desktop Internet.

Next I joined Mozilla to work on Firefox platform partnerships.  It has been fascinating working with this team, which originated from the Netscape browser in the 1990's and transformed into an open-source non-profit focused on the advancement of internet technology in conjunction with, rather than solely in competition against, Netscape's former competitors.

What is interesting from the outside perspective is most likely that companies that used to compete against each other for engagement (by which I mean your attention) are now unified in the idea of working together to enhance the ecosystem of the web.  Google, Mozilla and Apple now all embrace open source for the development of their web rendering engines.  Now these companies are beholden to an ecosystem of developers who create end-user experiences as well as the underlying platforms that each company provides as a custodian of the ecosystem.  The combined goals of a broad collaborative ecosystem are more important and impactful than any single platform or company.  A side note: Amazon is active in the wings here, basing their software on spin-off code from Google's Android open source software.  Also, after their mobile phone platform faltered, they started focusing on a space where they could completely pioneer a new web interface: voice.

When I first came to the web, much of it was static html.  Over the past decade, web pages shifted to dynamically assembled pages and content feeds determined by individual user customizations.  This is a fascinating transition that I witnessed while at Yahoo!, and it has been the subject of many books.  (My favorite being Sarah Lacy's Once You're Lucky, Twice You're Good.)

Sometimes in reflective moments, one thinks back to what one's own personal legacy will be.  In this industry, dramatic shifts happen every three months.  Websites and services I used to enjoy tremendously 10 or 20 years ago have long since been acquired, shut down or pivoted into something new.  So what's going to exist that you could look back on after 100 years?  Probably very little, except for the content that has been created by website developers themselves.  It is the diversity of web content accessible that brings us every day to the shores of the world wide web.

There is a service called the Internet Archive that registers historical versions of web pages.  I wonder what the current web will look like from the future perspective, in this current era of dynamically-customized feeds that differ based on the user viewing them.  If an alien landed on Earth and started surfing the history of the Internet Archive's "Wayback Machine", I imagine they'll see a dramatic drop-off in content that was published in static form after 2010.

The amazing thing about the Internet is the creativity it brings out of the people who engage with it.  Back when I started telling the story of the web to people, I realized I needed to have my own web page.  So I needed to figure out what I wanted to amplify to the world.  Because I admired folk percussion that I'd seen while I was living in Japan, I decided to make my website about the drums of the world.  I used a web editor called Geocities to create this web page you see at right.  I decided to leave it in its original 1999 Geocities template design for posterity's sake.  Since then my drum pursuits have expanded to include various other web projects, including a YouTube channel dedicated to traditional folk percussion and a Flickr channel dedicated to drum photos.  Subsequently I launched a Soundcloud channel and a Mixcloud DJ channel for sharing music I'd composed or discovered over the decades.

The funny thing is, when I created this website, people found me who I never would have met or found otherwise.  I got emails from people around the globe who were interested in identifying drums they'd found.  Even Cirque du Soleil wrote me asking for advice on drums they should use in their performances!

Since I'd opened the curtains on my music exploration, I started traveling around to regions of the world that had unique percussion styles.  What had started as a small web development project became a broader crusade in my life, taking me to various remote corners of the world I never would have traveled to otherwise.  And naturally, this spawned a new website with another YouTube channel dedicated to travel videos.

The web is an amazing place where we can express ourselves, discover and broaden our passions and of course connect to others across the continents. 

When I first decided to leave the journalism industry, it was because I believed the industry itself was inherently about waiting for other people to do or say interesting things.  In the industry I pursued, the audience was waiting for me to do that interesting thing myself.  The Internet is tremendously valuable as a medium.  It has been an amazing 20 years watching it evolve.  I'm very proud to have had a small part in its story.  I'm riveted to see where it goes in the next two decades!  And I'm even more riveted to see where I go, with its help.

On the web, the journey you start seldom ends where you thought it would go!

Mozilla Localization (L10N)L10n report: April edition

Please note some of the information provided in this report may be subject to change as we are sometimes sharing information about projects that are still in early stages and are not final yet.


New localizers

Are you a locale leader and want us to include new members in our upcoming reports? Contact us!

New content and projects

What’s new or coming up in Firefox desktop

The deadline to ship localization updates in Firefox 67 is quickly approaching (April 30). Firefox 68 is going to be an ESR version, so it’s particularly important to ship the best localization possible. The deadline for that will be June 25.

The migration to Fluent is progressing steadily, and we are approaching two important milestones:

  • 3 thousand FTL messages.
  • Less than 2 thousand DTD strings.

What’s new or coming up in mobile

Lots of things have been happening on the mobile front, and much more is going to follow shortly.

One of the first things we’d like to call out is that Fenix browser strings have arrived for localization! While work has been opened up to only a small subset of locales, you can expect us to add more progressively, quite soon. More details around this can be found here and here.

We’ve also exposed strings for Firefox Reality, Mozilla’s mixed-reality browser! Also open to only a subset of locales, we expect to be able to add more locales once the in-app locale switcher is in place. Read more about this here.

There are more new and exciting projects coming up in the next few weeks, so as usual, stay tuned to the Dev.l10n mailing list for more announcements!

Concerning existing projects: the Firefox iOS v17 l10n cycle is going to start within the next few days, so keep an eye out on your Pontoon folder.

And concerning Fennec, just like for Firefox desktop, the deadline to ship localization updates in Firefox 67 is quickly approaching (April 30). Please read the section above for more details.

What’s new or coming up in web projects

  • firefox/whatsnew_67.lang: The page must be fully localized by 3 May to be included in the Firefox 67 product release.
  • navigation.lang: The file has been available on Pontoon for more than 2 months. This newly designed navigation menu will be switched on whether it is localized or not. This means every page you browse on mozilla.org will show the new layout. If the file is not fully localized, you will see the menu mixed with English text.
  • Three new pages will be opened up for localization in select locales: adblocker,  browser history and what is a browser. Be on the lookout on Pontoon.

What’s new or coming up in SuMo

What’s new or coming up in Fluent

Fluent Syntax 1.0 has been published! The syntax is now stable. Thanks to everyone who shared their feedback about Fluent in the past; you have made Fluent better for everyone. We published a blog post on Mozilla Hacks with more details about this release and about Fluent in general.

Fluent is already used in over 3000 messages in Firefox, as well as in Firefox Send and Common Voice. If you localize these projects, chances are you already localized Fluent messages. Thanks to the efforts of Matjaž and Adrian, Fluent is already well-supported in Pontoon. We continue to improve the Fluent experience in Pontoon and we’re open to your feedback about how to make it best-in-class.

You can learn more about the Fluent Syntax on the project’s website, through the Syntax Guide, and in the Mozilla localizer documentation. If you want to quickly see it in action, try the Fluent Playground—an online editor with shareable Fluent snippets.


  • Want to showcase an event coming up that your community is participating in? Reach out to any l10n-driver and we’ll include that (see links to emails at the bottom of this report)

Friends of the Lion

  • Kudos to Sonia who introduced Mozilla and Pontoon to her fellow attendees. She ran a short workshop on localization at Dive into Open Source event held in Jalandhar, India in late March. After the event, she onboarded and mentored Anushka, Jasmine, and Sanja who have started contributing to various projects in Punjabi.

Know someone in your l10n community who’s been doing a great job and should appear here? Contact one of the l10n-drivers and we’ll make sure they get a shout-out (see list at the bottom)!

Useful Links

Questions? Want to get involved?

Did you enjoy reading this report? Let us know how we can improve by reaching out to any one of the l10n-drivers listed above.

The Mozilla BlogAndroid Browser Choice Screen in Europe

Today, Google announced a new browser choice screen in Europe. We love an opportunity to show more people our products, like Firefox for Android. Independent browsers that put privacy and security first (like Firefox) are great for consumers and an important part of the Android ecosystem.

There are open questions, though, about how well this implementation of a choice screen will enable people to easily adopt options other than Chrome as their default browser. The details matter, and the true measure will be the impact on competition and ultimately consumers. As we assess the results of this launch on Mozilla’s Firefox for Android, we’ll share our impressions and the impact we see.

The post Android Browser Choice Screen in Europe appeared first on The Mozilla Blog.

Niko MatsakisAiC: Adventures in consensus

In the talk I gave at Rust LATAM, I said that the Rust project has always emphasized finding the best solution, rather than winning the argument. I think this is one of our deepest values. It’s also one of the hardest for us to uphold.

Let’s face it – when you’re having a conversation, it’s easy to get attached to specific proposals. It’s easy to have those proposals change from “Option A” vs “Option B” to “my option” and “their option”. Once this happens, it can be very hard to let them “win” – even if you know that both options are quite reasonable.

This is a problem I’ve been thinking a lot about lately. So I wanted to start an irregular series of blog posts entitled “Adventures in consensus”, or AiC for short. These posts are my way of exploring the topic, and hopefully getting some feedback from all of you while I’m at it.

This first post dives into what a phrase like “finding the best solution” even means (is there a best?) as well as the mechanics of how one might go about deciding if you really have the “best” solution. Along the way, we’ll see a few places where I think our current process could do better.

Beyond tradeoffs

Part of the challenge here, of course, is that often there is no “best” solution. Different solutions are better for different things.

This is the point where we often talk about tradeoffs – and tradeoffs are part of it. But I’m also wary of the term. It often brings to mind a simplistic, zero-sum approach to the problem, where we can all too easily decide that we have to pick A or B and leave it at that.

But often when we are faced with two irreconcilable options, A or B, there is a third one waiting in the wings. This third option often turns on some hidden assumption that – once lifted – allows us to find a better overall approach; one that satisfies both A and B.

Example: the ? operator

I think a good example is the ? operator. When thinking about error handling, we seem at first to face two irreconcilable options:

  • Explicit error codes, like in C, make it easy to see where errors can occur, but they require tedious code at the call site of each function to check for errors, when most of the time you just want to propagate the error anyway. This seems to favor explicit reasoning at the expense of the happy path.
    • (In C specifically, there is also the problem of it being easy to forget to check for errors in the first place, but let’s leave that for now.)
  • Exceptions propagate the error implicitly, making the happy path clean, but making it very hard to see where an error occurs.

By now, a number of languages have seen that there is a third way – a kind of “explicit exception”, where you make it very easy and lightweight to propagate errors. In Rust, we do this via the ? operator (which desugars to a match). In Swift (if I understand correctly) invoking a method that throws an exception is done by adding a prefix, like try foo(). Joe Duffy describes a similar mechanism in the Midori language in his epic article dissecting Midori error handling.

Having used ? for a long time now, I can definitely attest that (a) it is very nice to be able to propagate errors in a lightweight fashion and (b) having the explicit marker is very useful. Many times I’ve found bugs by scrutinizing the code for ?, uncovering surprising control flow I wasn’t considering.
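
To make the “explicit exception” idea concrete, here is a minimal sketch of `?` and roughly what it expands to (the real desugaring also routes the error through `From::from` so error types can convert):

```rust
use std::num::ParseIntError;

// A fallible function that propagates errors with `?`:
// the happy path stays clean, but the `?` marks exactly
// where control flow can leave the function early.
fn double(input: &str) -> Result<i32, ParseIntError> {
    let n: i32 = input.parse()?; // returns early on Err
    Ok(n * 2)
}

// Roughly what `?` desugars to: an explicit match that either
// unwraps the value or returns the error to the caller.
fn double_desugared(input: &str) -> Result<i32, ParseIntError> {
    let n: i32 = match input.parse() {
        Ok(v) => v,
        Err(e) => return Err(e),
    };
    Ok(n * 2)
}
```

Both versions behave identically; the point is that the single-character marker buys the explicitness of C-style error codes without the per-call-site boilerplate.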

There is no free lunch

Of course, I’d be remiss if I didn’t point out that the discussion over ? was a really difficult one for our community. It was one of the longest RFC threads in history, and one in which the same arguments seemed to rise up again and again in a kind of cycle. Moreover, we’re still wrestling with what extensions (if any) we might want to consider to the basic mechanism (e.g., try blocks, perhaps try fns, etc).

I think part of the reason for this is that “the third option ain’t free”. In other words, the ? operator did a nice job of sidestepping the dichotomy that seemed to be presented by previous options (clear but tedious vs elegant but hidden), but it did so by coming into contact with other principles. In this case, the primary debate was over whether to consider some mechanism like Haskell’s do syntax for working with monads.1

I think this is generally true. All the examples that I can come up with where we’ve found a third option generally come at some sort of price – but often it’s a price we’re content to pay. In the case of ?, this means that we have some syntax in the language that is dedicated to errors, when perhaps it could have been more general (but that might itself have come with limitations, or meant more complexity elsewhere).

Rust’s origin story

Overcoming tradeoffs is, in my view, the core purpose of Rust. After all, the ur-tradeoff of them all is control vs safety:

  • Control – let the programmer decide about memory layout, threads, runtime.
  • Safety – avoid crashes.

This choice used to be embodied by having to decide between using C++ (and gaining the control, and thus often performance) or a garbage-collected language like Java (and sacrifice control, often at the cost of performance). Deciding whether or not to use threads was a similar choice between peril and performance.

Ownership and borrowing eliminated that tradeoff – but not for free! They come with a steep learning curve, after all, and they impose some limitations of their own. (Flattening the slope of that learning curve – and extending the range of patterns that we accept – was of course a major goal of the non-lexical lifetimes effort, and I think will continue to be an area of focus for us going forward.)

Tradeoffs after all – but the right ones

So, even though I maligned tradeoffs earlier as simplistic thinking, perhaps in the end it does all come down to tradeoffs. Rust is definitely a language for people who prefer to measure twice and cut once, and – for such folks – learning ownership and borrowing has proven to be worth the effort (and then some!). But this clearly isn’t the right choice for all people and all situations.

I guess then that the trick is being sure that you’re trading the right things. You will probably have to trade something, but it may not be the things you’re discussing right now.

Mapping the solution space

When we talk about the RFC process, we always emphasize that the point of RFC discussion is not to select the best answer; rather, the point is to map the solution space. That is, to explore what the possible tradeoffs are and to really look for alternatives. This mapping process also means exploring the ups and downs of the current solutions on the table.

What does mapping the solution space really mean?

When you look at it, “mapping the solution space” is actually a really complex task. There are a lot of pieces to it:

  • Identifying stakeholders: figuring out who are the people affected by this change, for good or ill.
  • Clarifying motivations: what exactly are we aiming to solve with a given proposal? It’s interesting how often this is left unstated (and, I suspect, not fully understood). Often we have a general idea of the problem, but we could sharpen it quite a bit. It’s also very useful to figure out which parts of the problem are most important.
  • Finding the pros and cons of the current proposals: what works well with each solution and what are its costs.
  • Identifying new possibilities: finding new ways to solve the motivations. Sometimes this may not solve the complete problem we set out to attack, but only the most important part – and that can be a good thing, if it avoids some of the downsides.
  • Finding the hidden assumption(s): This is in some way the same as identifying new possibilities, but I thought it was worth pulling out separately. There often comes a point in the design where you feel like you are faced with two bad options – and then you realize that one of the design constraints you took as inviolate isn’t, really, all that essential. Once you weaken that constraint, or drop it entirely, suddenly the whole design falls into place.

Our current process mixes all of these goals

Looking at that list of tasks, is it any wonder that some RFC threads go wrong? The current process doesn’t really try to separate out these various tasks in any way or even to really highlight them. We sort of expect people to “figure it out” on their own.

Worse, I think the current process often starts with a particular solution. This encourages people to react to that solution. The RFC author, then, is naturally prone to be defensive and to defend their proposal. We are right away kicking things off with an “A or B” mindset, where ideas belong to people, rather than the process. I think ‘disarming’ the attachment of people to specific ideas, and instead trying to focus everyone’s attention on the problem space as a whole, is crucial.

Now, I am not advocating for some kind of “waterfall” process here. I don’t think it’s possible to cleanly separate each of the goals above and handle them one at a time. It’s always a bit messy – you start with a fuzzy idea of the problem (and some stakeholders) and you try to refine it. Then you take a stab at what a solution might look like, which helps you to understand better the problem itself, but which also starts to bring in more stakeholders. Figuring out the pros and cons may spark new ideas. And so forth.

But just because we can’t use waterfall doesn’t mean we can’t give more structure. Exploring what that might mean is one of the things I hope to do in subsequent blog posts.


Ultimately, this post is about the importance of being thorough and deliberate in our design efforts. If we truly want to find the best design – well, I shouldn’t say the best design. If we want to find the right design for Rust, it’s often going to take time. This is because we need to take the time to elaborate on the implications of the decisions we are making, and to give time for a “third way” to be found.

But – lo – even here there is a tradeoff. We are trading away time, it seems, for optimality. And this clearly isn’t always the right choice. After all, “real artists ship”. Often, there comes a point where further exploration yields increasingly small improvements (“diminishing returns”).

As we explore ways to improve the design process, then, we should try to ensure we are covering the whole design space, but we also have to think about knowing when to stop and move on to the next thing.

Oh, one last thing…

Also, by the by, if you’ve not already read aturon’s 3-part series on “listening and trust”, you should do so.


I’ve created a discussion thread on the internals forum where you can leave questions or comments. I’ll definitely read them and I will try to respond, though I often get overwhelmed2, so don’t feel offended if I fail to do so.


  1. If you’d like to read more about the ? decision, this summary comment tried to cover the thread and lay out the reasoning behind the ultimate decision.

  2. So many things, so little time.

Hacks.Mozilla.OrgIntroducing Mozilla WebThings

The Mozilla IoT team is excited to announce that after two years of development and seven quarterly software updates that have generated significant interest from the developer & maker community, Project Things is graduating from its early experimental phase and from now on will be known as Mozilla WebThings.

Mozilla’s mission is to “ensure the Internet is a global public resource, open and accessible to all. An Internet that truly puts people first, where individuals can shape their own experience and are empowered, safe and independent.”

The Mozilla IoT team’s mission is to create a Web of Things implementation which embodies those values and helps drive IoT standards for security, privacy and interoperability.

Mozilla WebThings is an open platform for monitoring and controlling devices over the web, including:

  • WebThings Gateway – a software distribution for smart home gateways focused on privacy, security and interoperability
  • WebThings Framework – a collection of reusable software components to help developers build their own web things

WebThings Gateway UI


We look forward to a future in which Mozilla WebThings software is installed on commercial products that can provide consumers with a trusted agent for their “smart”, connected home.

WebThings Gateway 0.8

The WebThings Gateway 0.8 release is available to download today. If you have an existing Things Gateway it should have automatically updated itself. This latest release includes new features which allow you to privately log data from all your smart home devices, a new alarms capability and a new network settings UI.


Have you ever wanted to know how many times the door was opened and closed while you were out? Are you curious about energy consumption of appliances plugged into your smart plugs? With the new logs features you can privately log data from all your smart home devices and visualise that data using interactive graphs.

Logs UI

In order to enable the new logging features go to the main menu ➡ Settings ➡ Experiments and enable the “Logs” option.

Experiment Settings

You’ll then see the Logs option in the main menu. From there you can click the “+” button to choose a device property to log, including how long to retain the data.

Add log UI

The time series plots can be viewed by hour, day, or week, and a scroll bar lets users scroll back through time. This feature is still experimental, but viewing these logs will help you understand the kinds of data your smart home devices are collecting and think about how much of that data you are comfortable sharing with others via third party services.

Note: If booting WebThings Gateway from an SD card on a Raspberry Pi, please be aware that logging large amounts of data to the SD card may make the card wear out more quickly!


Home safety and security are among the big potential benefits of smart home systems. If one of your “dumb” alarms is triggered while you are at work, how will you know? Even if someone in the vicinity hears it, will they take action? Do they know who to call? WebThings Gateway 0.8 provides a new alarms capability for devices like smoke alarms, carbon monoxide alarms or burglar alarms.

Alarm Capability

This means you can now check whether an alarm is currently active, and configure rules to notify you if an alarm is triggered while you’re away from home.

Network Settings

In previous releases, moving your gateway from one wireless network to another when the previous Wi-Fi access point was still active could not be done without console access and command line changes directly on the Raspberry Pi. With the 0.8 release, it is now possible to re-configure your gateway’s network settings from the web interface. These new settings can be found under Settings ➡ Network.

Network Settings UI

You can either configure the Ethernet port (with a dynamic or static IP address) or re-scan available wireless networks and change the Wi-Fi access point that the gateway is connected to.

WebThings Gateway for Wireless Routers

We’re also excited to share that we’ve been working on a new OpenWrt-based build of WebThings Gateway, aimed at consumer wireless routers. This version of WebThings Gateway will be able to act as a Wi-Fi access point itself, rather than just connect to an existing wireless network as a client.

This is the beginning of a new phase of development of our gateway software, as it evolves into a software distribution for consumer wireless routers. Look out for further announcements in the coming weeks.

Online Documentation

Along with a refresh of the Mozilla IoT website, we have made a start on some online user & developer documentation for the WebThings Gateway and WebThings Framework. If you’d like to contribute to this documentation you can do so via GitHub.

Thank you for all the contributions we’ve received so far from our wonderful Mozilla IoT community. We look forward to this new and exciting phase of the project!

The post Introducing Mozilla WebThings appeared first on Mozilla Hacks - the Web developer blog.

Mark SurmanGetting crisper about ‘better AI’

As I wrote a few weeks back, Mozilla is increasingly coming to the conclusion that making sure AI serves humanity rather than harms it is a key internet health issue. Our internet health movement building efforts will be focused in this area in the coming years.

In 2019, this means focusing a big part of our investment in fellowships, awards, campaigns and the Internet Health Report on AI topics. It also means taking the time to get crisper on what we mean by ‘better’ and defining a specific set of things we’d like to see happen around the politics, technology and use of AI. Thinking this through now will tee up work in the years to come.

We started this thinking in an ‘issue brief’ that looks at AI issues through the lens of internet health and the Mozilla Manifesto. It builds from the idea that intelligent systems are ultimately designed and shaped by humans — and then looks at areas where we need to collectively figure out how we want these systems used, who should control them and how we should mitigate their risks. The purpose of this brief is to spark discussions in and around Mozilla that will help us come up with more specific goals and plans of action.

As we dig into this thinking, one thing is starting to become clear: the most likely way for Mozilla to push the future of AI in a good direction is to focus on how automated decision making is being used in consumer products and services. The big tech companies in the US and China that provide the bulk of the internet products we use every day are also the biggest creators and users of AI technology. The ways they deploy AI — from home assistants to ad targeting to navigation and delivery to content recommendation — also impact a growing majority of people on the planet. They are also major vendors of AI tools — like facial recognition software — used by governments and other companies. As we look around, it feels like not enough people are investigating how AI is playing out in big tech companies — and in the products and services they create. Also, consumer tech has always been the place where Mozilla has focused. It makes sense to stay focused here as we look at AI.

Beyond this constraint, the universe of possible goals for this work is quite broad. Some of the options that we are batting around include:

  • Should we focus on user empowerment, ensuring people can shape, question or opt out from automated decisions?
  • Should Mozilla serve as a watchdog, ensuring companies are held accountable for the decisions made by the systems they create?
  • Could Mozilla play a role in democratizing AI by encouraging researchers and industry to make their software and training data open source?
  • Is there a particular region we could focus on, like Europe, where the chance that AI which respects privacy and rights takes hold is greater than elsewhere?
  • Or, should Mozilla focus more broadly on ensuring that automated systems respect rights like privacy, freedom of expression and protection from discrimination?

These are the sorts of questions we’re starting to debate — and will invite you to debate with us — over the coming months.

It’s worth noting that all of these possible goals are focused on outcomes for users and society, and not on core AI technology. Mozilla is doing important and interesting technology work with things like Deep Speech and Common Voice, showing that collaborative, inclusionary, open source AI approaches are possible. However, Mozilla’s work on AI technology is modest at this point. This is one of the reasons that we decided to make ‘better machine decision making’ a focus of our movement building work right now. AI represents the next wave of computing and will shape what the internet looks like — how things work out with AI will have a huge impact on whether we live in a healthy digital environment, or not. It is critical that Mozilla weigh in on this early and strongly, and this includes going beyond what we’re able to do directly through writing code. The internet health movement building work we’ve been doing over the last few years gives us a way to do this, working with allies around the world who are also trying to nudge the future of AI in a good direction.

If you have thoughts on where this work is going — or should go — I’d love to hear them. You can comment on this blog, tweet or send me an email. There is also a wiki where you can track this work. And, there will be more specific opportunities for feedback on potential goals for our work coming over the next couple of months.

PS. I will write more about the topic of consumer tech and why we should focus on this area in an upcoming post.

The post Getting crisper about ‘better AI’ appeared first on Mark Surman.

Mozilla B-Teamhappy bmo push day!

happy bmo push day!

Note that I’ve missed the last two push announcements, so you’ll want to check https://wiki.mozilla.org/BMO/Recent_Changes#Recent_Changes to be fully up to date. That said, we’ve been very busy in the past 30 days:

6 authors have pushed 76 commits to master and 81 commits to all branches.
On master, 213 files have changed and there have been 2,852 additions
and 850 deletions.

Below the fold are all…

View On WordPress

Chris H-CDistributed Teams: A Test Failing Because It’s Run West of Newfoundland and Labrador

(( Not quite 500 mile email-level of nonsense, but might be the closest I get. ))

A test was failing.

Not really unusual, that. Tests fail all the time. It’s how we know they’re good tests: protecting us developers from ourselves.

But this one was failing unusually. Y’see, it was failing on my machine.

(Yes, har har, it is a common-enough occurrence given my obvious lack of quality as a developer how did you guess.)

The unusual part was that it was failing only for me… and I hadn’t even touched anything yet. It wasn’t failing on our test infrastructure “try”, and it wasn’t failing on the machine of :raphael, the fellow responsible for the integration test harness itself. We were building Firefox the same way, running telemetry-tests-client the same way… but I was getting different results.

I fairly quickly locked down the problem to be an extra “main” ping with reason “environment-change” being sent during the initial phases of the test. By dumping some logging into Firefox, rebuilding it, and then routing its logs to console with --gecko-log "-" I learned that we were sending a ping because a monitored user preference had been changed: browser.search.region.

When Firefox starts up the first time, it doesn’t know where it is. And it needs to know where it is to properly make a first guess at what language you want and what search engines would work best. Google’s results are pretty bad in Canada unless you use “google.ca”, after all.

But while Firefox doesn’t know where it is, it does know what timezone it’s in from the settings in your OS’s clock. On top of that it knows what language your OS is set to. So we make a first guess at which search region we’re in based on whether or not the timezone overlaps a US timezone and if your OS’s locale is `en-US` (United States English).

What this fails to take into account is that United States English is the “default” locale reported by many OSes even if you aren’t in the US. And how even if you are in a timezone that overlaps with the US, you might not be there.

So to account for that, Mozilla operates a location service to double-check that the search region is appropriately set. This takes a little time to get back with the correct information, if it gets back to us at all. So if you happen to be in a US-overlapping timezone with an English-language OS Firefox assumes you’re in the US. Then if the location service request gets back with something that isn’t “US”, browser.search.region has to be updated.
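
The logic described above might be sketched like this (a hypothetical illustration only; the real implementation lives inside Firefox's search service and these function names are invented):

```rust
// First guess at the search region: a US-overlapping timezone plus an
// en-US OS locale is taken as "probably in the US". Contiguous US
// timezones plus Hawaii span roughly UTC-10 through UTC-4 (EDT).
fn guess_search_region(utc_offset_hours: i32, os_locale: &str) -> &'static str {
    let overlaps_us = (-10..=-4).contains(&utc_offset_hours);
    if overlaps_us && os_locale == "en-US" {
        "US" // provisional; the location service may correct this later
    } else {
        "unknown"
    }
}

// When the location service responds with a different region, the
// browser.search.region pref changes -- which changes the Telemetry
// Environment, which sends a "main" ping, which broke the test.
fn region_pref_must_change(guess: &str, service_region: &str) -> bool {
    guess != service_region
}
```

With a Canadian machine in an Eastern (US-overlapping) timezone and a default English locale, the first guess is "US", the service later says "CA", and the pref update fires mid-test.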

And when it updates, it changes the Telemetry Environment.

And when the Environment changes, we send a “main” ping.

And when we send a “main” ping, the test breaks.

…all because my timezone overlaps the US and my language is “Default” English.

I feel a bit persecuted, but this shows another strength of Distributed Teams. No one else on my team would be able to find this failure. They’re in Germany, Italy, and the US. None of them have that combination of “Not in the US, but in a US timezone” needed to manifest the bug.

So on one hand this sucks. I’m going to have to find a way around this.

But on the other hand… I feel like my Canadianness is a bit of a bug-finding superpower. I’m no Brok Windsor or Captain Canuck, but I can get the job done in a way no one else on my team can.

Not too bad, eh?


Mozilla Open Policy & Advocacy BlogMozilla reacts to European Parliament plenary vote on EU Terrorist Content regulation

This evening lawmakers in the European Parliament voted to adopt the institution’s negotiating position on the EU Terrorist Content regulation.

Here is a statement from Owen Bennett, Internet Policy Manager, reacting to the vote –

As the recent atrocities in Christchurch underscored, terrorism remains a serious threat to citizens and society, and it is essential that we implement effective strategies to combat it. But the Terrorist Content regulation passed today in the European Parliament is not that. This legislation does little but chip away at the rights of European citizens and further entrench the same companies the legislation was aimed at regulating. By demanding that companies of all sizes take down ‘terrorist content’ within one hour the EU has set a compliance bar that only the most powerful can meet.

Our calls for targeted, proportionate and evidence-driven policy responses to combat the evolving threat, were not accepted by the majority, but we will continue to push for a more effective regulation in the next phase of this legislative process. The issue is simply too important to get wrong, and the present shortfalls in the proposal are too serious to let stand.

The post Mozilla reacts to European Parliament plenary vote on EU Terrorist Content regulation appeared first on Open Policy & Advocacy.

Nicholas NethercoteA better DHAT

DHAT is a heap profiler that comes with Valgrind. (The name is short for “Dynamic Heap Analysis Tool”.) It tells you where all your heap allocations come from, and can help you find the following: places that cause excessive numbers of allocations; leaks; unused and under-used allocations; short-lived allocations; and allocations with inefficient data layouts. This old blog post goes into some detail.
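
As a small example of the kind of thing DHAT surfaces, consider a hot loop that makes many short-lived allocations (run the binary under `valgrind --tool=dhat` to see the allocation site ranked by blocks and bytes; this toy program and its alternative are mine, not from the post):

```rust
// A pattern DHAT is good at flagging: one short-lived heap allocation
// per loop iteration. Each String is created only to measure its
// length, then dropped immediately.
fn sum_digit_counts_allocating(n: u32) -> u32 {
    (0..n).map(|i| i.to_string().len() as u32).sum()
}

// The same computation with zero heap traffic: count digits by
// repeated division instead of formatting into a String.
fn sum_digit_counts_no_alloc(n: u32) -> u32 {
    (0..n)
        .map(|mut i| {
            let mut digits: u32 = 1;
            while i >= 10 {
                i /= 10;
                digits += 1;
            }
            digits
        })
        .sum()
}
```

Under DHAT the first version shows up as an allocation site with a block count proportional to `n` and very short block lifetimes; the second disappears from the profile entirely.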

In the new Valgrind 3.15 release I have given DHAT a thorough overhaul.

The old DHAT was very useful and I have used it a lot while profiling the Rust compiler. But it had some rather annoying limitations, which the new DHAT overcomes.

First, the old DHAT dumped its data as text at program termination. The new DHAT collects its data in a file which is read by a graphical viewer that runs in a web browser. This gives several advantages.

  • The separation of data collection and data presentation means you can run a program once under DHAT and then sort and filter the data in various ways, instead of having to choose a particular sort order in advance. Also, full data is in the output file, and the graphical viewer chooses what to omit.
  • The data can be sorted in more ways than previously. Some of these sorts involve useful filters such as “short-lived” and “zero reads or zero writes”.
  • The graphical viewer allows parts of the data to be hidden and unhidden as necessary.

Second, the old DHAT divided its output into records, where each record consisted of all the heap allocations sharing the same allocation stack trace. The choice of stack trace depth could greatly affect the output.

In contrast, the new DHAT is based around trees of stack traces that avoid the need to choose stack trace depth. This avoids both the problem of not enough depth (when records that should be distinct are combined, and may not contain enough information to be actionable) and the problem of too much depth (when records that should be combined are separated, making them seem less important than they really are).

Third, the new DHAT also collects and/or shows data that the old DHAT did not.

  • Byte and block measurements are shown with a percentage relative to the global measurements, which helps gauge relative significance of different parts of the profile.
  • Byte and block measurements are also shown with an allocation rate (bytes and blocks per million instructions), which enables comparisons across multiple profiles, even if those profiles represent different workloads.
  • Both global and per-node measurements are taken at the global heap peak, which gives Massif-like insight into the point of peak memory use.
  • The final/lifetimes stats are a bit more useful than the old deaths stats. (E.g. the old deaths stats didn’t take into account lifetimes of unfreed blocks.)

Finally, the new DHAT has a better handling of realloc. The sequence p = malloc(100); realloc(p, 200); now increases the total block count by 2 and the total byte count by 300. In the old DHAT it increased them by 1 and 200. The new handling is a more operational view that better reflects the effect of allocations on performance. It makes a significant difference in the results, giving paths involving reallocation (e.g. repeated pushing to a growing vector) more prominence.
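To make the difference concrete, the two accounting schemes can be modeled with a small sketch (the helper functions below are hypothetical illustrations, not DHAT's actual code):

```python
# Model the accounting for:  p = malloc(100); realloc(p, 200);
# Each entry in `sizes` is one allocation event, in order.

def old_accounting(sizes):
    # Old DHAT: a realloc'd block counts once, at its final size.
    return 1, sizes[-1]

def new_accounting(sizes):
    # New DHAT: every allocation event counts as a fresh block,
    # so repeated reallocation accumulates bytes and blocks.
    return len(sizes), sum(sizes)

print(old_accounting([100, 200]))  # (1, 200)
print(new_accounting([100, 200]))  # (2, 300)
```

This is why paths that repeatedly reallocate, such as pushing to a growing vector, gain prominence under the new scheme.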

Overall these changes make DHAT more powerful and easier to use.

The following screenshot gives an idea of what the new graphical viewer looks like.

Sample output from DHAT's viewer

The new DHAT can be run using the --tool=dhat flag, in contrast with the old DHAT, which was an “experimental” tool and so used the --tool=exp-dhat flag. For more details see the documentation.

Hacks.Mozilla.OrgFluent 1.0: a localization system for natural-sounding translations

Fluent is a family of localization specifications, implementations and good practices developed by Mozilla. It is currently used in Firefox. With Fluent, translators can create expressive translations that sound great in their language. Today we’re announcing version 1.0 of the Fluent file format specification. We’re inviting translation tool authors to try it out and provide feedback.

The Problem Fluent Solves

With almost 100 supported languages, Firefox faces many localization challenges. Using traditional localization solutions, these are difficult to overcome. Software localization has been dominated by an outdated paradigm: translations that map one-to-one to the source language. The grammar of the source language, which at Mozilla is English, imposes limits on the expressiveness of the translation.

Consider the following message which appears in Firefox when the user tries to close a window with more than one tab.

tabs-close-warning-multiple =
    You are about to close {$count} tabs.
    Are you sure you want to continue?

The message is only displayed when the tab count is 2 or more. In English, the word tab will always appear as plural tabs. An English-speaking developer may be content with this message. It sounds great for all possible values of $count.

<figcaption>In English, a single variant of the message is enough for all values of $count.</figcaption>

Many translators, however, will quickly point out that the word tab will take different forms depending on the exact value of the $count variable.

In traditional localization solutions, the onus of fixing this message is on developers. They need to account for the fact that other languages distinguish between more than one plural form, even if English doesn’t. As the number of languages supported in the application grows, this problem scales up quickly—and not well.

  • In some languages, nouns have genders which require different forms of adjectives and past participles. In French, connecté, connectée, connectés and connectées all mean connected.
  • Style guides may require that different terms be used depending on the platform the software runs on. In English Firefox, we use Settings on Windows and Preferences on other systems, to match the wording of the user’s operating system. In Japanese, the difference is starker: some computer-related terms are spelled with a different writing system depending on the user’s OS.
  • The context and the target audience of the application may require adjustments to the copy. In English, software used in accounting may format numbers differently than a social media website. Yet in other languages such a distinction may not be necessary.

There are many grammatical and stylistic variations that don’t map one-to-one between languages. Supporting all of them using traditional localization solutions isn’t straightforward. Some language features require trade-offs in order to support them, or aren’t possible at all.

Asymmetric Localization

Fluent turns the localization landscape on its head. Rather than require developers to predict all possible permutations of complexity in all supported languages, Fluent keeps the source language as simple as it can be.

We make it possible to cater to the grammar and style of other languages, independently of the source language. All of this happens in isolation; the fact that one language benefits from more advanced logic doesn’t require any other localization to apply it. Each localization is in control of how complex the translation becomes.

Consider the Czech translation of the “tab close” message discussed above. The word panel (tab) must take one of two plural forms: panely for counts of 2, 3, and 4, and panelů for all other numbers.

tabs-close-warning-multiple = {$count ->
    [few] Chystáte se zavřít {$count} panely. Opravdu chcete pokračovat?
   *[other] Chystáte se zavřít {$count} panelů. Opravdu chcete pokračovat?
}

Fluent empowers translators to create grammatically correct translations and leverage the expressive power of their language. With Fluent, the Czech translation can now benefit from correct plural forms for all possible values of the $count variable.

<figcaption>In Czech, $count values of 2, 3, and 4 require a special plural form of the noun.</figcaption>
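The variant selection a Fluent runtime performs here follows the CLDR plural rules for Czech. A minimal Python sketch of that logic, simplified to integer counts only (the real CLDR rules also cover fractions), might look like:

```python
def czech_plural_category(count: int) -> str:
    """Simplified CLDR plural rule for Czech, integers only."""
    if count == 1:
        return "one"        # singular: panel
    if 2 <= count <= 4:
        return "few"        # panely
    return "other"          # panelů

def close_warning(count: int) -> str:
    # Mirrors the Fluent message above; the message is only shown
    # for counts of 2 or more, and *[other] is the fallback variant.
    noun = {"few": "panely"}.get(czech_plural_category(count), "panelů")
    return f"Chystáte se zavřít {count} {noun}. Opravdu chcete pokračovat?"
```

For example, `close_warning(3)` selects the `few` form and `close_warning(7)` falls back to `other`.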

At the same time, no changes are required to the source code nor the source copy. In fact, the logic added by the Czech translator to the Czech translation doesn’t affect any other language. The same message in French is a simple sentence, similar to the English one:

tabs-close-warning-multiple =
    Vous êtes sur le point de fermer {$count} onglets.
    Voulez-vous vraiment continuer ?

The concept of asymmetric localization is the key innovation of Fluent, built upon 20 years of Mozilla’s history of successfully shipping localized software. Many key ideas in Fluent have also been inspired by XLIFF and ICU’s MessageFormat.

At first glance, Fluent looks similar to other localization solutions that allow translations to use plurals and grammatical genders. What sets Fluent apart is the holistic approach to localization. Fluent takes these ideas further by defining the syntax for the entire text file in which multiple translations can be stored, and by allowing messages to reference other messages.

Terms and References

A Fluent file may consist of many messages, each translated into the translator’s language. Messages can refer to other messages in the same file, or even to messages from other files. At runtime, Fluent combines files into bundles, and references are resolved in the scope of the current bundle.

Referencing messages is a powerful tool for ensuring consistency. Once defined, a translation can be reused in other translations. Fluent even has a special kind of message, called a term, which is best suited for reuse. Term identifiers always start with a dash.

-sync-brand-name = Firefox Account

Once defined, the -sync-brand-name term can be referenced from other messages, and it will always resolve to the same value. Terms help enforce style guidelines; they can also be swapped in and out to modify the branding in unofficial builds and on beta release channels.

sync-dialog-title = {-sync-brand-name}
sync-headline-title =
    {-sync-brand-name}: The best way to bring
    your data always with you
sync-signedout-account-title =
    Connect with your {-sync-brand-name}

Using terms verbatim in the middle of a sentence may cause trouble for inflected languages or for languages with different capitalization rules than English. Terms can define multiple facets of their value, suitable for use in different contexts. Consider the following definition of the -sync-brand-name term in Italian.

-sync-brand-name = {$capitalization ->
   *[uppercase] Account Firefox
    [lowercase] account Firefox
}

Thanks to the asymmetric nature of Fluent, the Italian translator is free to define two facets of the brand name. The default one (uppercase) is suitable for standalone appearances as well as for use at the beginning of sentences. The lowercase version can be explicitly requested by passing the capitalization parameter, when the brand name is used inside a larger sentence.

sync-dialog-title = {-sync-brand-name}
sync-headline-title =
    {-sync-brand-name}: il modo migliore
    per avere i tuoi dati sempre con te

# Explicitly request the lowercase variant of the brand name.
sync-signedout-account-title =
    Connetti il tuo {-sync-brand-name(capitalization: "lowercase")}

Defining multiple term variants is a versatile technique which allows the localization to cater to the grammatical needs of many languages. In the following example, the Polish translation can use declensions to construct a grammatically correct sentence in the sync-signedout-account-title message.

-sync-brand-name = {$case ->
   *[nominative] Konto Firefox
    [genitive] Konta Firefox
    [accusative] Kontem Firefox
}

sync-signedout-account-title =
    Zaloguj do {-sync-brand-name(case: "genitive")}
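The mechanics of term variants can be sketched in a few lines of Python (a hypothetical illustration of the resolution logic, not the python-fluent API): the variant marked with `*` is the default, and a parameter passed at the reference site selects another one.

```python
# Variants of the Polish -sync-brand-name term from the example above.
SYNC_BRAND_NAME = {
    "nominative": "Konto Firefox",   # *-marked default variant
    "genitive": "Konta Firefox",
    "accusative": "Kontem Firefox",
}

def resolve_term(variants, default, **params):
    # Use the variant named by the first parameter, if any;
    # otherwise fall back to the default variant.
    key = next(iter(params.values()), default)
    return variants.get(key, variants[default])

# Zaloguj do {-sync-brand-name(case: "genitive")}
title = "Zaloguj do " + resolve_term(SYNC_BRAND_NAME, "nominative", case="genitive")
print(title)  # Zaloguj do Konta Firefox
```

An unknown or absent parameter simply yields the default variant, which is why simple references like `{-sync-brand-name}` keep working unchanged.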

Fluent makes it possible to express linguistic complexities when necessary. At the same time, simple translations remain simple. Fluent doesn’t impose complexity unless it’s required to create a correct translation.

sync-signedout-caption = Take Your Web With You
sync-signedout-caption = Il tuo Web, sempre con te
sync-signedout-caption = Zabierz swoją sieć ze sobą
sync-signedout-caption = So haben Sie das Web überall dabei.

Fluent Syntax

Today, we’re announcing the first stable release of the Fluent Syntax. It’s a formal specification of the file format for storing translations, accompanied by beta releases of parser implementations in JavaScript, Python, and Rust.

You’ve already seen a taste of Fluent Syntax in the examples above. It has been designed with non-technical people in mind, and to make the task of reviewing and editing translations easy and error-proof. Error recovery is a strong focus: it’s impossible for a single broken translation to break the entire file, or even the translations adjacent to it. Comments may be used to communicate contextual information about the purpose of a message or a group of messages. Translations can span multiple lines, which helps when working with longer text or markup.

Fluent files can be opened and edited in any text editor, lowering the barrier to entry for developers and localizers alike. The file format is also well supported by Pontoon, Mozilla’s open-source translation management system.

<figcaption>Fluent Playground is an online sandbox for trying out Fluent live inside the browser.</figcaption>

You can learn more about the syntax by reading the Fluent Syntax Guide. The formal definition can be found in the Fluent Syntax specification. And if you just want to quickly see it in action, try the Fluent Playground—an online editor with shareable Fluent snippets.

Request for Feedback

Firefox has been the main driver behind the development of Fluent so far. Today, there are over 3000 Fluent messages in Firefox. The migration from legacy localization formats started early last year and is now in full swing. Fluent has proven to be a stable and flexible solution for building complex interfaces, such as the UI of Firefox Preferences. It is also used in a number of Mozilla websites, such as Firefox Send and Common Voice.

We think Fluent is a great choice for applications that value simplicity and a lean runtime, and at the same time require that elements of the interface depend on multiple variables. In particular, Fluent can help create natural-sounding translations in size-constrained UIs of mobile apps; in information-rich layouts of social media platforms; and in games, to communicate gameplay statistics and mechanics to the player.

We’d love to hear from projects and localization vendors outside of Mozilla. Because we’re developing Fluent with a future standard in mind, we invite you to try it out and let us know if it addresses your challenges. With your help, we can iterate and improve Fluent to address the needs of many platforms, use cases, and industries.

We’re open to your constructive feedback. Learn more about Fluent on the project’s website and please get in touch on Fluent’s Discourse.

The post Fluent 1.0: a localization system for natural-sounding translations appeared first on Mozilla Hacks - the Web developer blog.

Daniel StenbergOne year in still no visa

One year ago today. On the sunny Tuesday of April 17th 2018 I visited the US embassy in Stockholm, Sweden and applied for a visa. I’m still waiting for them to respond.

My days-since-my-visa-application counter page is still counting. Technically speaking, I had already applied but that was the day of the actual physical in-person interview that served as the last formal step in the application process. Most people then get their visa application confirmed within weeks.

Initially I emailed them a few times after that interview since the process took so long (little did I know back then), but now I haven’t done it for many months. Their last response assured me that they are “working on it”.

Lots of things have happened in my life since last April. I quit my job at Mozilla and started a new job at wolfSSL, again working for a US-based company. One that I cannot go visit.

During this year I missed out on a Mozilla all-hands, I’ve been invited to the US several times to talk at conferences but had to decline, and a friend is getting married there this summer and I can’t go. And more.

Going forward I will miss more interesting meetings and speaking opportunities and I have many friends whom I cannot visit. This is a mild blocker to things I would want to do and it is an obstacle to my profession and career.

I guess I might get my rejection notice before my counter reaches two full years, based on stories I’ve heard from other people in situations similar to mine. I don’t know yet what I’ll do when it eventually happens. I don’t think there are any rules that prevent me from reapplying, other than the fact that I need to pay more money, and I can’t think of any particular reason why they would change their minds just because I try again. I will probably give it a rest a while first.

I’m lucky and fortunate that people and organizations have adapted to my situation – a situation I of course share with many others, so it’s not uniquely mine – and lots of meetings and events have been held outside of the US at least partially to accommodate me. I’m most grateful for this and I understand that at times it won’t work and I then can’t attend. These days most things are at least partly accessible via video streams etc, repairing the harm a little. (And yes, this is a first-world problem and I’m fortunate that I can still travel to most other parts of the world without much trouble.)

Finally: no, I still have no clue as to why they act like this and I don’t have any hope of ever finding out.

The Mozilla BlogLatest Firefox for iOS Now Available

Today’s Firefox for iPhone and iPad offers enhancements that get you to what you want faster: new links within your library, easier management of your logins and passwords, and the ability to delete your history as recent as the last hour.

Leave no trace with your web history

With today’s release, we made it easier to clear your web history with one tap on the history page. In the menu or on the Firefox Home page, tap ‘Your Library’, then ‘History’, and ‘Clear Recent History’.

Because we all make wrong turns on the web from time to time, you can now choose to delete your history from the last hour, from that day or the day before, or, as always, your full browsing history.

Clear your web history with one tap

Shortcuts in your library

Everyone likes a shortcut that gets you quickly to the place you need to go. We created links in your library to get you to your bookmarks, history, reading list and downloads all from the Firefox Home screen.

Get you to your bookmarks, history, reading list and downloads all from the Firefox Home screen

Get to your logins and passwords faster

We simplified the place where you can find your logins and passwords in the menu. Go to the menu and tap ‘Logins & Passwords’. Also, from there you can enable Face ID or password authentication in Settings to keep your passwords even more secure. It’s located in the Face ID & Passcode option.

Find your logins and passwords easily

To get the latest version of Firefox for iOS, visit the App Store.


The post Latest Firefox for iOS Now Available appeared first on The Mozilla Blog.

Mozilla Cloud Services BlogMaking Firefox Accounts more transparent in Firefox

Over the past few months the Firefox Accounts team has been working on making users more aware of Firefox Accounts and the benefits of having an account. This phase had an emphasis on our desktop users because we believe that is where it would have the highest impact.

The problem

Based on user testing and feedback, most of our users did not clearly understand all of the advantages of having a Firefox Account. Most users failed to understand the value proposition of an account and why they should create one. Additionally, if they had an account, users were not always aware of their current signed-in status in the browser. This meant that users could be syncing their private data without being fully aware they were doing so.

The benefits of an account we wanted to highlight were:

  • Sync bookmarks, passwords, tabs, add-ons and history among all your Firefox Browsers
  • Send tabs between Firefox Browsers
  • Store your data encrypted
  • Use one account to log in to supported Firefox services (Ex. Monitor, Lockbox, Send)

Previously, users that downloaded Firefox would only see the outlined benefits at a couple of touch points in the browser, specifically the points below:

First run page (New installation)

What’s New page (Firefox installation upgraded)

Browsing to preferences and clicking Firefox Account menu

If a user failed to create an account and log in during the first two points, it was very unlikely that they would organically discover Firefox Accounts at point three. Having only these touch points meant that users could not always set up a Firefox Account at their own pace.

Our solution

Our team decided that we needed to make this easier and solicited input from our Growth and Marketing teams, particularly on how to best showcase and highlight our features. From these discussions, we decided to experiment with putting a top level account menu item next to the Firefox application menu. Our hypothesis was that having a top level menu would drive engagement and reinforce the benefits of Firefox Accounts.

We believed that having an account menu item at this location would give users more visibility into their account status and allow them to quickly manage it.

While most browsers have some form of a top level account menu, we decided to experiment with the feature because Firefox users are more privacy focused and might not behave as other browser users.

New Firefox Account Toolbar menu

Our designs

The initial designs for this experiment had a toolbar menu left of the Firefox application menu. This menu could not be removed and was always fixed. After consulting with engineering teams, we learned that a fixed menu could more easily be achieved as a native browser feature. However, because of the Firefox browser’s development cycle (a new release every 6 weeks), we would have to wait 6 weeks to test our experiment as a native feature.

Initial toolbar design

If the requirement that the menu be fixed was lifted, then we could ship a Shield web extension experiment and get results much more quickly (2-3 weeks). Shield experiments are not tied to a specific Firefox release schedule and users can opt in and out of them. This means that Firefox can install Shield experiments, run them and then uninstall them at the end of the experiment.

After discussions with product and engineering teams, we decided to develop and ship the Shield web extension experiment (with these known limitations) and, while that experiment was gathering data, start development of the native browser version of the originally specced design. There was a consensus between product and engineering teams that if the experiment was successful, it should eventually live as a native feature in the browser and not as a web extension.

Account toolbar Web Extension Version

Experiment results

Our experiment ran for 28 days and at the end of it, users were given a survey to fill out. There were a treatment group (installed the web extension) and a control group (did not install the web extension). Below is a summary of the results:

  • Treatment users authenticated into Firefox Account roughly 8% more
  • From survey
    • 45% of users liked the avatar
    • 45% of users were indifferent
    • 10% didn’t like it

The overall feedback for the top level menu was positive and most users were not bothered by it. Based on this we decided to update and iterate on the design for the final native browser version.

Iterating designs

While the overall feedback was positive for the new menu item, there were a few tweaks to the final design. Most notably:

  • New menu is fully customizable (can be placed anywhere in Firefox browser vs fixed position)
    • After discussing with the team, we decided not to fix the menu in place, even though the original designs called for it: that was how the experiment ran, and it was more in line with the Firefox brand of being personalizable.
  • Ability to easily connect another device via SMS
  • View currently synced tabs
  • Added ability to send current and multiple tabs to a device
  • Added ability to force a sync of data

Because we started working on the native feature while the experiment was running, we did not have to dedicate as many development resources to complete new requirements.

Final toolbar design

Check out the development bug here for more details.

Next steps

We are currently researching ways to best surface new account security features and services for Firefox Accounts using the new toolbar menu. Additionally, there are a handful of usability tweaks that we would like to add in future versions. Stay tuned!

The Firefox Account toolbar menu will be available starting in desktop Firefox 67. If you want to experiment early with it, you can download Firefox Beta 67 or Firefox Nightly.

Big thank you to all the teams and folks that helped make this feature happen!

Mozilla Open Policy & Advocacy BlogBrussels Mozilla Mornings: A policy blueprint for internet health

On 14 May, Mozilla will host the next installment of our Mozilla Mornings series – regular breakfast meetings where we bring together policy experts, policymakers and practitioners for insight and discussion on the latest EU digital policy developments.

This event will coincide with the launch of the 2019 Mozilla Foundation Internet Health Report. We’re bringing together an expert panel to discuss some of the report’s highlights, and their vision for how the next EU political mandate can enhance internet health in Europe.


Prabhat Agarwal
Deputy Head of Unit, E-Commerce & Platforms
European Commission, DG CNECT

Claudine Vliegen
Telecoms & Digital Affairs attaché
Permanent Representation of the Netherlands to the EU

Mark Surman
Executive Director, Mozilla Foundation

Introductory remarks by Solana Larsen
Editor in Chief, Internet Health Report

Moderated by Jennifer Baker, EU tech journalist

Logistical information

14 May, 2019
Radisson Red, Rue d’Idalie 35, 1050 Brussels

Hacks.Mozilla.OrgPyodide: Bringing the scientific Python stack to the browser

Pyodide is an experimental project from Mozilla to create a full Python data science stack that runs entirely in the browser.

Density of 311 calls in Oakland, California

The impetus for Pyodide came from working on another Mozilla project, Iodide, which we presented in an earlier post.  Iodide is a tool for data science experimentation and communication based on state-of-the-art web technologies.  Notably, it’s designed to perform data science computation within the browser rather than on a remote kernel.

Unfortunately, the “language we all have” in the browser, JavaScript, doesn’t have a mature suite of data science libraries, and it’s missing a number of features that are useful for numerical computing, such as operator overloading. We still think it’s worthwhile to work on changing that and moving the JavaScript data science ecosystem forward. In the meantime, we’re also taking a shortcut: we’re meeting data scientists where they are by bringing the popular and mature Python scientific stack to the browser.

It’s also been argued more generally that Python not running in the browser represents an existential threat to the language—with so much user interaction happening on the web or on mobile devices, it needs to work there or be left behind. Therefore, while Pyodide tries to meet the needs of Iodide first, it is engineered to be useful on its own as well.

Pyodide gives you a full, standard Python interpreter that runs entirely in the browser, with full access to the browser’s Web APIs.  In the example above (50 MB download), the density of calls to the City of Oakland, California’s “311” local information service is plotted in 3D. The data loading and processing is performed in Python, and then it hands off to JavaScript and WebGL for the plotting.

For another quick example, here’s a simple doodling script that lets you draw in the browser window:

from js import document, iodide

canvas = iodide.output.element('canvas')
canvas.setAttribute('width', 450)
canvas.setAttribute('height', 300)
context = canvas.getContext("2d")
context.strokeStyle = "#df4b26"
context.lineJoin = "round"
context.lineWidth = 5

pen = False
lastPoint = (0, 0)

def onmousemove(e):
    global lastPoint

    if pen:
        newPoint = (e.offsetX, e.offsetY)
        # Draw a short segment from the last point to the new one.
        # Without beginPath()/stroke(), nothing would appear on screen.
        context.beginPath()
        context.moveTo(lastPoint[0], lastPoint[1])
        context.lineTo(newPoint[0], newPoint[1])
        context.closePath()
        context.stroke()
        lastPoint = newPoint

def onmousedown(e):
    global pen, lastPoint
    pen = True
    lastPoint = (e.offsetX, e.offsetY)

def onmouseup(e):
    global pen
    pen = False

canvas.addEventListener('mousemove', onmousemove)
canvas.addEventListener('mousedown', onmousedown)
canvas.addEventListener('mouseup', onmouseup)

And this is what it looks like:

Interactive doodle example

The best way to learn more about what Pyodide can do is to just go and try it! There is a demo notebook (50MB download) that walks through the high-level features. The rest of this post will be more of a technical deep-dive into how it works.

Prior art

There were already a number of impressive projects bringing Python to the browser when we started Pyodide.  Unfortunately, none addressed our specific goal of supporting a full-featured mainstream data science stack, including NumPy, Pandas, Scipy, and Matplotlib.

Projects such as Transcrypt transpile (convert) Python to JavaScript. Because the transpilation step itself happens in Python, you either need to do all of the transpiling ahead of time, or communicate with a server to do that work. This doesn’t really meet our goal of letting the user write Python in the browser and run it without any outside help.

Projects like Brython and Skulpt are rewrites of the standard Python interpreter in JavaScript; therefore, they can run strings of Python code directly in the browser.  Unfortunately, since they are entirely new implementations of Python, and in JavaScript to boot, they aren’t compatible with Python extensions written in C, such as NumPy and Pandas. Therefore, there’s no data science tooling.

PyPyJs is a build of the alternative just-in-time compiling Python implementation, PyPy, for the browser, using emscripten.  It has the potential to run Python code really quickly, for the same reasons that PyPy does.  Unfortunately, it has the same performance issues with C extensions that PyPy does.

All of these approaches would have required us to rewrite the scientific computing tools to achieve adequate performance.  As someone who used to work a lot on Matplotlib, I know how many untold person-hours that would take: other projects have tried and stalled, and it’s certainly a lot more work than our scrappy upstart team could handle.  We therefore needed to build a tool that was based as closely as possible on the standard implementations of Python and the scientific stack that most data scientists already use.  

After a discussion with some of Mozilla’s WebAssembly wizards, we saw that the key to building this was emscripten and WebAssembly: technologies to port existing code written in C to the browser.  That led to the discovery of an existing but dormant build of Python for emscripten, cpython-emscripten, which was ultimately used as the basis for Pyodide.

emscripten and WebAssembly

There are many ways of describing what emscripten is, but most importantly for our purposes, it provides two things:

  1. A compiler from C/C++ to WebAssembly
  2. A compatibility layer that makes the browser feel like a native computing environment

WebAssembly is a new language that runs in modern web-browsers, as a complement to JavaScript.  It’s a low-level assembly-like language that runs with near-native performance intended as a compilation target for low-level languages like C and C++.  Notably, the most popular interpreter for Python, called CPython, is implemented in C, so this is the kind of thing emscripten was created for.

Pyodide is put together by:

  • Downloading the source code of the mainstream Python interpreter (CPython), and the scientific computing packages (NumPy, etc.)
  • Applying a very small set of changes to make them work in the new environment
  • Compiling them to WebAssembly using emscripten’s compiler

If you were to just take this WebAssembly and load it in the browser, things would look very different to the Python interpreter than they do when running directly on top of your operating system. For example, web browsers don’t have a file system (a place to load and save files). Fortunately, emscripten provides a virtual file system, written in JavaScript, that the Python interpreter can use. By default, these virtual “files” reside in volatile memory in the browser tab, and they disappear when you navigate away from the page.  (emscripten also provides a way for the file system to store things in the browser’s persistent local storage, but Pyodide doesn’t use it.)
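Because emscripten’s virtual file system implements the usual POSIX-style calls, ordinary Python file I/O works unchanged inside Pyodide. A minimal sketch (the filename is made up; the same code also runs natively, where it hits the real disk instead of MEMFS):

```python
import os

# Standard file I/O; under Pyodide these calls are served by
# emscripten's in-memory virtual file system (MEMFS).
path = "notes.txt"
with open(path, "w") as f:
    f.write("saved in the browser tab's memory")

with open(path) as f:
    contents = f.read()

print(contents)   # in the browser, this "file" lives only as long as the tab
os.remove(path)   # clean up the virtual (or real) file
```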

By emulating the file system and other features of a standard computing environment, emscripten makes moving existing projects to the web browser possible with surprisingly few changes. (Some day, we may move to using WASI as the system emulation layer, but for now emscripten is the more mature and complete option).

Putting it all together, to load Pyodide in your browser, you need to download:

  • The compiled Python interpreter as WebAssembly.
  • A bunch of JavaScript provided by emscripten that provides the system emulation.
  • A packaged file system containing all the files the Python interpreter will need, most notably the Python standard library.

These files can be quite large: Python itself is 21MB, NumPy is 7MB, and so on. Fortunately, these packages only have to be downloaded once, after which they are stored in the browser’s cache.

Using all of these pieces in tandem, the Python interpreter can access the files in its standard library, start up, and then start running the user’s code.

What works and doesn’t work

We run CPython’s unit tests as part of Pyodide’s continuous testing to get a handle on what features of Python do and don’t work.  Some things, like threading, don’t work now, but with the newly-available WebAssembly threads, we should be able to add support in the near future.  

Other features, like low-level networking sockets, are unlikely to ever work because of the browser’s security sandbox.  Sorry to break it to you, but your hopes of running a Python Minecraft server inside your web browser are probably still a long way off. Nevertheless, you can still fetch things over the network using the browser’s APIs (more details below).

How fast is it?

Running the Python interpreter inside a JavaScript virtual machine adds a performance penalty, but that penalty turns out to be surprisingly small — in our benchmarks, around 1x-12x slower than native on Firefox and 1x-16x slower on Chrome. Experience shows that this is very usable for interactive exploration.

Notably, code that runs a lot of inner loops in Python tends to be slower by a larger factor than code that relies on NumPy to perform its inner loops. Below are the results of running various pure-Python and NumPy benchmarks in Firefox and Chrome compared to natively on the same hardware.

Pyodide benchmark results: Firefox and Chrome vs. native
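To get an intuition for why the gap differs, compare a hand-written Python loop with a reduction whose loop runs in compiled code. This toy sketch uses the built-in sum as a stand-in for a NumPy reduction so it runs anywhere; it illustrates the effect and is not one of the actual benchmarks:

```python
def py_sum(values):
    # Inner loop in pure Python: every iteration pays interpreter
    # overhead, which the WebAssembly build multiplies further.
    total = 0.0
    for v in values:
        total += v
    return total

data = [0.5] * 10_000

# The built-in sum (like a NumPy reduction) runs its loop in compiled
# code, so only a single interpreted call is made.
assert py_sum(data) == sum(data) == 5000.0
```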

Interaction between Python and JavaScript

If all Pyodide could do is run Python code and write to standard out, it would amount to a cool trick, but it wouldn’t be a practical tool for real work.  The real power comes from its ability to interact with browser APIs and other JavaScript libraries at a very fine level. WebAssembly has been designed to easily interact with the JavaScript running in the browser.  Since we’ve compiled the Python interpreter to WebAssembly, it too has deep integration with the JavaScript side.

Pyodide implicitly converts many of the built-in data types between Python and JavaScript.  Some of these conversions are straightforward and obvious, but as always, it’s the corner cases that are interesting.

Conversion of data types between Python and JavaScript

Python treats dicts and object instances as two distinct types. dicts (dictionaries) are just mappings of keys to values.  On the other hand, objects generally have methods that “do something” to those objects. In JavaScript, these two concepts are conflated into a single type called Object.  (Yes, I’ve oversimplified here to make a point.)
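A minimal Python illustration of the distinction (the names here are made up for the example):

```python
# In Python, a plain mapping and an object with behavior are distinct types.
config = {"retries": 3}            # a dict: just keys mapped to values

class Downloader:                  # an object: carries methods that "do something"
    def __init__(self, retries):
        self.retries = retries

    def fetch(self):
        return f"fetching with {self.retries} retries"

d = Downloader(config["retries"])
assert isinstance(config, dict) and not isinstance(d, dict)
print(d.fetch())
```

A JavaScript Object arriving from the browser could correspond to either of these two Python types.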

Without really understanding the developer’s intention for the JavaScript Object, it’s impossible to efficiently guess whether it should be converted to a Python dict or object.  Therefore, we have to use a proxy and let “duck typing” resolve the situation.

Proxies are wrappers around a variable in the other language.  Rather than simply reading the variable in JavaScript and rewriting it in terms of Python constructs, as is done for the basic types, the proxy holds on to the original JavaScript variable and calls methods on it “on demand”.  This means that any JavaScript variable, no matter how custom, is fully accessible from Python. Proxies work in the other direction, too.

Duck typing is the principle that rather than asking a variable “are you a duck?” you ask it “do you walk like a duck?” and “do you quack like a duck?” and infer from that that it’s probably a duck, or at least does duck-like things.  This allows Pyodide to defer the decision on how to convert the JavaScript Object: it wraps it in a proxy and lets the Python code using it decide how to handle it. Of course, this doesn’t always work; the duck may actually be a rabbit. Thus, Pyodide also provides ways to explicitly handle these conversions.
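The proxy-plus-duck-typing mechanism can be sketched in plain Python. This toy Proxy class is only an illustration of the idea, not Pyodide’s actual proxy implementation:

```python
class Proxy:
    """Toy stand-in for a cross-language proxy: it holds on to the
    original object and forwards attribute access on demand."""

    def __init__(self, wrapped):
        self._wrapped = wrapped

    def __getattr__(self, name):
        # Looked up lazily, so any attribute or method the underlying
        # object has (or grows later) is automatically reachable.
        return getattr(self._wrapped, name)


class Duck:
    def quack(self):
        return "quack"


proxied = Proxy(Duck())
# Duck typing: we never ask "are you a Duck?", we just call quack().
print(proxied.quack())  # prints: quack
```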

It’s this tight level of integration that allows a user to do their data processing in Python, and then send it to JavaScript for visualization. For example, in our Hipster Band Finder demo, we show loading and analyzing a data set in Python’s Pandas, and then sending it to JavaScript’s Plotly for visualization.

Accessing Web APIs and the DOM

Proxies also turn out to be the key to accessing the Web APIs, or the set of functions the browser provides that make it do things.  For example, a large part of the Web API is on the document object. You can get that from Python by doing:

from js import document

This imports the document object in JavaScript over to the Python side as a proxy.  You can start calling methods on it from Python:

document.getElementById("my-element")

All of this happens through proxies that look up what the document object can do on-the-fly.  Pyodide doesn’t need to include a comprehensive list of all of the Web APIs the browser has.

Of course, using the Web API directly doesn’t always feel like the most Pythonic or user-friendly way to do things.  It would be great to see the creation of a user-friendly Python wrapper for the Web API, much like how jQuery and other libraries have made the Web API easier to use from JavaScript.  Let us know if you’re interested in working on such a thing!

Multidimensional Arrays

There are important data types that are specific to data science, and Pyodide has special support for these as well.  Multidimensional arrays are collections of (usually numeric) values, all of the same type. They tend to be quite large, and knowing that every element is the same type has real performance advantages over Python’s lists or JavaScript’s Arrays that can hold elements of any type.

In Python, NumPy arrays are the most common implementation of multidimensional arrays. JavaScript has TypedArrays, which contain only a single numeric type, but they are single dimensional, so the multidimensional indexing needs to be built on top.

Since in practice these arrays can get quite large, we don’t want to copy them between language runtimes.  Not only would that take a long time, but having two copies in memory simultaneously would tax the limited memory the browser has available.

Fortunately, we can share this data without copying.  Multidimensional arrays are usually implemented with a small amount of metadata that describes the type of the values, the shape of the array and the memory layout. The data itself is referenced from that metadata by a pointer to another place in memory. Conveniently, this memory lives in a special area called the “WebAssembly heap,” which is accessible from both JavaScript and Python.  We can simply copy the metadata (which is quite small) back and forth between the languages, keeping the pointer to the data referring to the WebAssembly heap.

Sharing memory for arrays between Python and JavaScript
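Python’s own buffer protocol shows the same metadata/data split in miniature: a memoryview carries the type and shape metadata but points at the original bytes rather than copying them. This standard-library-only sketch is an analogy for what Pyodide does across the WebAssembly heap, not Pyodide code:

```python
import array

# A buffer of doubles, standing in for a NumPy array's data block.
data = array.array("d", range(1000))

# The memoryview copies only metadata (format, shape, itemsize);
# the underlying bytes stay exactly where they are.
view = memoryview(data)
assert view.format == "d" and view.shape == (1000,)

# Writing through the view mutates the original buffer: no copy was
# made, just as sharing the WebAssembly heap avoids copies.
view[0] = 42.0
assert data[0] == 42.0
```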

This idea is currently implemented for single-dimensional arrays, with a suboptimal workaround for higher-dimensional arrays.  We need improvements to the JavaScript side to have a useful object to work with there. To date there is no one obvious choice for JavaScript multidimensional arrays. Promising projects such as Apache Arrow and xnd’s ndarray are working exactly in this problem space, and aim to make the passing of in-memory structured data between language runtimes easier.  Investigations are ongoing to build off of these projects to make this sort of data conversion more powerful.

Real-time interactive visualization

One of the advantages of doing the data science computation in the browser rather than in a remote kernel, as Jupyter does, is that interactive visualizations don’t have to communicate over a network to reprocess and redisplay their data.  This greatly reduces the latency — the round trip time it takes from the time the user moves their mouse to the time an updated plot is displayed to the screen.

Making that work requires all of the technical pieces described above to function together in tandem.  Let’s look at an interactive example that shows how log-normal distributions work, using Matplotlib. First, the random data is generated in Python using NumPy. Next, Matplotlib takes that data and draws it using its built-in software renderer. It sends the pixels back to the JavaScript side using Pyodide’s support for zero-copy array sharing, where they are finally rendered into an HTML canvas.  The browser then handles getting those pixels to the screen. Mouse and keyboard events used to support interactivity are handled by callbacks that call from the web browser back into Python.

Interacting with distributions in matplotlib


The Python scientific stack is not a monolith: it’s actually a collection of loosely-affiliated packages that work together to create a productive environment.  Among the most popular are NumPy (for numerical arrays and basic computation), SciPy (for more sophisticated general-purpose computation, such as linear algebra), Matplotlib (for visualization) and Pandas (for tabular data or “data frames”).  You can see the full and constantly updated list of the packages that Pyodide builds for the browser here.

Some of these packages were quite straightforward to bring into Pyodide. Generally, anything written in pure Python without any extensions in compiled languages is pretty easy. In the moderately difficult category are projects like Matplotlib, which required special code to display plots in an HTML canvas. On the extremely difficult end of the spectrum, SciPy has been and remains a considerable challenge.

Roman Yurchak worked on making the large amount of legacy Fortran in SciPy compile to WebAssembly. Kirill Smelkov improved emscripten so shared objects can be reused by other shared objects, bringing SciPy to a more manageable size. (The work of these outside contributors was supported by Nexedi.)  If you’re struggling to port a package to Pyodide, please reach out to us on GitHub: there’s a good chance we may have run into your problem before.

Since we can’t predict which of these packages the user will ultimately need to do their work, they are downloaded to the browser individually, on demand.  For example, when you import NumPy:

import numpy as np

Pyodide fetches the NumPy library (and all of its dependencies) and loads them into the browser at that time.  Again, these files only need to be downloaded once, and are stored in the browser’s cache from then on.

Adding new packages to Pyodide is currently a semi-manual process that involves adding files to the Pyodide build. We’d prefer, long term, to take a distributed approach to this so anyone could contribute packages to the ecosystem without going through a single project.  The best-in-class example of this is conda-forge. It would be great to extend their tools to support WebAssembly as a platform target, rather than redoing a large amount of effort.

Additionally, Pyodide will soon have support to load packages directly from PyPI (the main community package repository for Python), if the package is written in pure Python and distributed in the wheel format.  This gives Pyodide access to around 59,000 packages, as of today.

Beyond Python

The early success of Pyodide has already inspired developers from other language communities, including Julia, R, OCaml, and Lua, to make their language runtimes work well in the browser and integrate with web-first tools like Iodide.  We’ve defined a set of levels to encourage implementors to create tighter integrations with the JavaScript runtime:

  • Level 1: Just string output, so it’s useful as a basic console REPL (read-eval-print-loop).
  • Level 2: Converts basic data types (numbers, strings, arrays and objects) to and from JavaScript.
  • Level 3: Sharing of class instances (objects with methods) between the guest language and JavaScript.  This allows for Web API access.
  • Level 4: Sharing of data-science-related types (n-dimensional arrays and data frames) between the guest language and JavaScript.

We definitely want to encourage this brave new world, and are excited about the possibilities of having even more languages interoperating together.  Let us know what you’re working on!


If you haven’t already tried Pyodide in action, go try it now! (50MB download)

It’s been really gratifying to see all of the cool things that have been created with Pyodide in the short time since its public launch.  However, there’s still lots to do to turn this experimental proof-of-concept into a professional tool for everyday data science work. If you’re interested in helping us build that future, come find us on gitter, github and our mailing list.

Huge thanks to Brendan Colloran, Hamilton Ulmer and William Lachance, for their great work on Iodide and for reviewing this article, and Thomas Caswell for additional review.

The post Pyodide: Bringing the scientific Python stack to the browser appeared first on Mozilla Hacks - the Web developer blog.

Mozilla VR Blog: Announcing the Hubs Discord Bot

Announcing the Hubs Discord Bot

We’re excited to announce an official Hubs integration with Discord, a platform that provides text and voice chat for communities. In today's digital world, the ways we stay connected with our friends, family, and co-workers are evolving. Our established social networks span different platforms, and we believe that shared virtual reality should build on those relationships and enhance the way we communicate with the people we care about. Being co-present as avatars in a shared 3D space is a natural progression for the tools we use today, and we’re building on that idea with Hubs to allow you to create private spaces where your conversations, content, and data are protected.

In recent years, Discord has grown in popularity for communities organized around games and technology, and is the platform we use internally on the Hubs development team for product development discussions. Using Discord as a persistent platform that is open to the public gives us the ability to be open about our ongoing work and initiatives on the Hubs team and integrate the community’s feedback into our product planning and development. If you’re a member of the Discord server for Hubs, you may have already seen the bot in action during our internal testing this month!

The Hubs Discord integration allows members to use their Discord identity to connect to rooms and connects a Discord channel with a specific Hubs room in order to capture the text chat, photos taken, and media shared between users in each space. With the ability to add web content to rooms in Hubs, users who are co-present together are able to collaborate and entertain one another, watch videos, chat, share their screen / webcam feed, and pull in 3D objects from Sketchfab and Google Poly. Users will be able to chat in the linked Discord channel to send messages, see the media added to the connected Hubs room, and easily get updates on who has joined or left at any given time.

Announcing the Hubs Discord Bot

We believe that embodied avatar presence will empower communities to be more creative, communicative, and collaborative - and that all of that should be doable without replacing or excluding your existing networks. Your rooms belong to you and the people you choose to share them with, and we feel strongly that everyone should be able to meet in secure spaces where their privacy is protected.

In the coming months, we plan to introduce additional platform integrations and new tools related to room management, authentication, and identity. While you will be able to continue to use Hubs without a persistent identity or login, having an account for the Hubs platform or using your existing identity on a platform such as Discord grants you additional permissions and abilities for the rooms you create. We plan to work closely with communities who are interested in joining the closed beta to help us understand how embodied communication works for them in order to focus our product planning on what meets their needs.

You can see the Hubs Discord integration in action live on the public Hubs Community Discord server for our weekly meetings. If you run a Discord server and are interested in participating in the closed beta for the Hubs Discord bot, you can learn more on the Hubs website.

This Week In Rust: This Week in Rust 282

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's crate is sendfd, a simple way to send file descriptors over UNIX sockets. Thanks to Léo Gaspard for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

241 pull requests were merged in the last week

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

Tracking Issues & PRs

New RFCs

Upcoming Events

Asia Pacific
North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

No quote was selected for QotW.

Please submit your quotes for next week!

This Week in Rust is edited by: nasa42, llogiq, and Flavsditz.

Discuss on r/rust.

Mozilla Security Blog: Mozilla’s Common CA Database (CCADB) promotes Transparency and Collaboration

The Common CA Database (CCADB) is helping us protect individuals’ security and privacy on the internet and deliver on our commitment to use transparent community-based processes to promote participation, accountability and trust. It is a repository of information about Certificate Authorities (CAs) and their root and subordinate certificates that are used in the web PKI, the publicly-trusted system which underpins secure connections on the web. The Common CA Database (CCADB) paves the way for more efficient and cost-effective management of root stores and helps make the internet safer for everyone. For example, the CCADB automatically detects and alerts root store operators when a root CA has outdated audit statements or a gap between audit periods. This is important, because audit statements provide assurance that a CA is following required procedures so that they do not issue fraudulent certificates.

Through the CCADB we are extending the checks and balances on root CAs to subordinate CAs to provide similar assurance that the subordinate CAs are not issuing fraudulent certificates. Root CAs, who are directly included in Mozilla’s program, can have subordinate CAs who also issue SSL/TLS certificates that are trusted by Firefox. There are currently about 150 root certificates in Mozilla’s root store, which leads to over 3,100 subordinate CA certificates that are trusted by Firefox. In our efforts to ensure that all subordinate CAs follow the rules, we require that they be disclosed in the CCADB along with their audit statements.

Additionally, the CCADB is making it possible for Mozilla to implement Intermediate CA Preloading in Firefox, with the goal of improving performance and privacy. Intermediate CA Preloading is a new way to handle websites that are not properly configured to serve up the intermediate certificate along with its SSL/TLS certificate. When other browsers encounter such websites they use a mechanism to connect to the CA and download the certificate just-in-time. Preloading the intermediate certificate data (aka subordinate CA data) from the CCADB avoids the just-in-time network fetch, which delays the connection. Avoiding the network fetch improves privacy, because it prevents disclosing user browsing patterns to the CA that issued the certificate for the misconfigured website.

Mozilla created and runs the CCADB, which is also used and contributed to by Microsoft, Google, Cisco, and Apple. Even though the common CA data is shared, each root store operator has a customized experience in the CCADB, allowing each root store operator to see the data sets that are important for managing root certificates included in their program.


  • Makes root stores more transparent through public-facing reports, encouraging community involvement to help ensure that CAs and subordinate CAs are correctly issuing certificates.
    • For example, the crt.sh website combines information from the CCADB and Certificate Transparency (CT) logs to identify problematic certificates.
  • Adds automation to improve the level and accuracy of management and rule enforcement.
  • Enables CAs to provide their annual updates in one centralized system, rather than communicating those updates to each root store separately; and in the future will enable CAs to apply to multiple root stores with a single application process.

Maintaining a root store containing only credible CAs is vital to the security of our products and the web in general. The major root store operators are using the CCADB to promote efficiency in maintaining root stores, to improve internet security by raising the quality and transparency of CA and subordinate CA data, and to make the internet safer by enforcing regular and contiguous audits that provide assurances that root and subordinate CAs do not issue fraudulent certificates. As such, the CCADB is enabling us to help ensure individuals’ security and privacy on the internet and deliver on our commitment to use transparent community-based processes to promote participation, accountability and trust.

The post Mozilla’s Common CA Database (CCADB) promotes Transparency and Collaboration appeared first on Mozilla Security Blog.

QMO: Firefox 67 Beta 10 Testday Results

Hello Mozillians!

As you may already know, last Friday April 12th – we held a new Testday event, for Firefox 67 Beta 10.

Thank you all for helping us make Mozilla a better place: Rok Žerdin, noelonassis, gaby2300, Kamila kamciatek.

From Mozilla Bangladesh Community: Sayed Ibn Masud, Md.Rahimul Islam, Shah Yashfique Bhuian, Maruf Rahman, Niyaz Bin Hashem, Md. Almas Hossain, Ummay hany eiti, Kazi Ashraf Hossain.


– 2 bugs verified: 1522078, 1522237.

– 1 bug triaged: 1544069

– several test cases executed for: Graphics compatibility & support and Session Restore.

Thanks for yet another awesome testday, we appreciate your contribution 🙂

We hope to see you all in our next events. Keep an eye on QMO; we will make announcements as soon as something shows up!

The Mozilla Blog: The Bug in Apple’s Latest Marketing Campaign

Apple’s latest marketing campaign — “Privacy. That’s iPhone” — made us raise our eyebrows.

It’s true that Apple has an impressive track record of protecting users’ privacy, from end-to-end encryption on iMessage to anti-tracking in Safari.

But a key feature in iPhones has us worried, and makes their latest slogan ring a bit hollow.

Each iPhone that Apple sells comes with a unique ID (called an “identifier for advertisers” or IDFA), which lets advertisers track the actions users take when they use apps. It’s like a salesperson following you from store to store while you shop and recording each thing you look at. Not very private at all.

The good news: You can turn this feature off. The bad news: Most people don’t know that feature even exists, let alone that they should turn it off. And we think that they shouldn’t have to.

That’s why we’re asking Apple to change the unique IDs for each iPhone every month. You would still get relevant ads — but it would be harder for companies to build a profile about you over time.

If you agree with us, will you add your name to Mozilla’s petition?

If Apple makes this change, it won’t just improve the privacy of iPhones — it will send Silicon Valley the message that users want companies to safeguard their privacy by default.

At Mozilla, we’re always fighting for technology that puts users’ privacy first: We publish our annual *Privacy Not Included shopping guide. We urge major retailers not to stock insecure connected devices. And our Mozilla Fellows highlight the consequences of technology that makes publicity, and not privacy, the default.

The post The Bug in Apple’s Latest Marketing Campaign appeared first on The Mozilla Blog.

The Firefox Frontier: How you can take control against online tracking

Picture this. You arrive at a website you’ve never been to before and the site is full of ads for things you’ve already looked at online. It’s not a difficult … Read more

The post How you can take control against online tracking appeared first on The Firefox Frontier.

Niko Matsakis: More than coders

Lately, the compiler team has been changing up the way that we work. Our goal is to make it easier for people to track what we are doing and – hopefully – get involved. This is an ongoing effort, but one thing that has become clear immediately is this: the compiler team needs more than coders.

Traditionally, when we’ve thought about how to “get involved” in the compiler team, we’ve thought about it in terms of writing PRs. But more and more I’m thinking about all the other jobs that go into maintaining the compiler. “What kinds of jobs are these?”, you’re asking. I think there are quite a few, but let me give a few examples:

  • Running a meeting – pinging folks, walking through the agenda.
  • Design documents and other documentation – describing how the code works, even if you didn’t write it yourself.
  • Publicity – talking about what’s going on, tweeting about exciting progress, or helping to circulate calls for help. Think steveklabnik, but for rustc.
  • …and more! These are just the tip of the iceberg, in my opinion.

I think we need to surface these jobs more prominently and try to actively recruit people to help us with them. Hence, this blog post.

“We need an open source whenever”

In my keynote at Rust LATAM, I quoted quite liberally from an excellent blog post by Jessica Lord, “Privilege, Community, and Open Source”. There’s one passage that keeps coming back to me:

We also need an open source whenever. Not enough people can or should be able to spare all of their time for open source work, and appearing this way really hurts us.

This passage resonates with me, but I also know it is not as simple as she makes it sound. Creating a structure where people can meaningfully contribute to a project with only small amounts of time takes a lot of work. But it seems clear that the benefits could be huge.

I think looking to tasks beyond coding can be a big benefit here. Every sort of task is different in terms of what it requires to do it well – and I think the more ways we can create for people to contribute, the more people will be able to contribute.

The context: working groups

Let me back up and give a bit of context. Earlier, I mentioned that the compiler has been changing up the way that we work, with the goal of making it much easier to get involved in developing rustc. A big part of that work has been introducing the idea of a working group.

A working group is basically an (open-ended, dynamic) set of people working towards a particular goal. These days, whenever the compiler team kicks off a new project, we create an associated working group, and we list that group (and its associated Zulip stream) on the compiler-team repository. There is also a central calendar that lists all the group meetings and so forth. This makes it pretty easy to quickly see what’s going on.

Working groups as a way into the compiler

Working groups provide an ideal vector to get involved with the compiler. For one thing, they give people a more approachable target – you’re not working on “the entire compiler”, you’re working towards a particular goal. Each of your PRs can then be building on a common part of the code, making it easier to get started. Moreover, you’re working with a smaller group of people, many of whom are also just starting out. This allows people to help one another and form a community.

Running a working group is a big job

The thing is, running a working group can be quite a big job – particularly a working group that aims to incorporate a lot of contributors. Traditionally, we’ve thought of a working group as having a lead – maybe, at best, two leads – and a bunch of participants, most of whom are being mentored:

           +-------------+
           |   Lead(s)   |
           +-------------+

  +--+  +--+  +--+  +--+  +--+  +--+
  |  |  |  |  |  |  |  |  |  |  |  |
  +--+  +--+  +--+  +--+  +--+  +--+

Now, if all these participants are all being mentored to write code, that means that the set of jobs that fall on the leads is something like this:

  • Running the meeting
  • Taking and posting minutes from the meeting
  • Figuring out the technical design
  • Writing the big, complex PRs that are hard to mentor
  • Writing the design documents
  • Writing mentoring instructions
  • Writing summary blog posts and trying to call attention to what’s going on
  • Synchronizing with the team at large to give status updates etc
  • Being a “point of contact” for questions
  • Helping contributors debug problems
  • Triaging bugs and ensuring that the most important ones are getting fixed

Is it any wonder that the vast majority of working group leads are full-time, paid employees? Or, alternatively, is it any wonder that many of those tasks often just don’t get done?

(Consider the NLL working group – there, we had both Felix and me working as full-time leads, essentially. Even so, we had a hard time writing out design documents, and there were never enough summary blog posts.)

Running a working group is really a lot of smaller jobs

The more I think about it, the more I think the flaw is in the way we’ve talked about a “lead”. Really, “lead” for us was mostly shorthand for “do whatever needs doing”. I think we should get more precise about what those things are, and then try to split those roles out among more people.

For example, how awesome would it be if major efforts had some people who were just trying to ensure that the design was documented – working on rustc-guide chapters, for example, showing the major components and how they communicated. This is not easy work. It requires a pretty detailed technical understanding. It does not, however, really require writing the PRs in question – in fact, ideally, it would be done by different people, which ensures that there are multiple people who understand how the code works.

There will still be a need, I suspect, for some kind of “lead” who is generally overseeing the effort. But, these days, I like to think of it in a somewhat less… hierarchical fashion. Perhaps “organizer” is the right term. I’m not sure.

Each job is different

Going back to Jessica Lord’s post, she continues:

We need everything we can get and are thankful for all that you can contribute whether it is two hours a week, one logo a year, or a copy-edit twice a year.

Looking over the list of tasks that are involved in running a working-group, it’s interesting how many of them have distinct time profiles. Coding, for example, is a pretty intensive activity that can easily take a kind of “unbounded” amount of time, which is something not everyone has available. But consider the job of running a weekly sync meeting.

Many working groups use short, weekly sync meetings to check up on progress and to keep everything progressing. It’s a good place for newcomers to find tasks, or to triage new bugs and make sure they are being addressed. One easy, and self-contained, task in a working group might be to run the weekly meetings. This could be as simple as coming onto Zulip at the right time, pinging the right people, and trying to walk through the status updates and take some minutes. However, it might also get more complex – e.g., it might involve doing some pre-triage to try and shape up the agenda.

But note that, however you do it, this task is relatively time-contained – it occurs at a predictable point in the week. It might be a way for someone to get involved who has a fixed hole in their schedule, but can’t afford the more open-ended, coding tasks.

Just as important as code

In my last quote from Jessica Lord’s post, I left out the last sentence from the paragraph. Let me give you the paragraph in full (emphasis mine):

We need everything we can get and are thankful for all that you can contribute whether it is two hours a week, one logo a year, or a copy edit twice a year. You, too, are a first class open source citizen.

I think this is a pretty key point. I think it’s important that we recognize that working on the compiler is more than coding – and that we value those tasks – whether they be organizational tasks, writing documentation, whatever – equally.

I am worried that if we had working groups where some people are writing the code and there is somebody else who is “only” running the meetings, or “only” triaging bugs, or “only” writing design docs, that those people will feel like they are not “real” members of the working group. But to my mind they are equally essential, if not more essential. After all, it’s a lot easier to find people who will spend their free time writing PRs than it is to find people who will help to organize a meeting.

Growing the compiler team

The point of this post, in case you missed it, is that I would like to grow our conception of the compiler team beyond coders. I think we should be actively recruiting folks with a lot of different skill sets and making them full members of the compiler team:

  • organizers and project managers
  • documentation authors
  • code evangelists

I’m not really sure what this full set of roles should be, but I know that the compiler team cannot function without them.

Beyond the compiler team

One other note: I think that when we start going down this road, we’ll find that there is overlap between the “compiler team” and other teams in the rust-lang org. For example, the release team already does a great job of tracking and triaging bugs and regressions to help ensure the overall quality of the release. But perhaps the compiler team also wants to do its own triaging. Will this lead to a “turf war”? Personally, I don’t really see the conflict here.

One of the beauties of being an open-source community is that we don’t need to form strict managerial hierarchies. We can have the same people be members of both the release team and the compiler team. As part of the release team, they would presumably be doing more general triaging and so forth; as part of the compiler team, they would be going deeper into rustc. But still, it’s a good thing to pay attention to. Maybe some things don’t belong in the compiler-team proper.


I don’t quite have a call to action here, at least not yet. This is still a WIP – we don’t know quite the right way to think about these non-coding roles. I think we’re going to figure that out, though, as we gain more experience with working groups.

I guess I can say this, though: If you are a project manager or a tech writer, and you think you’d like to get more deeply involved with the compiler team, now’s a good time. =) Start attending our steering meetings, or perhaps the weekly meetings of the meta working group, or just ping me over on the rust-lang Zulip.

Alex GibsonMy sixth year working at Mozilla

<figcaption>Photo of San Francisco's skyline taken at the Monday night event for Mozilla All-Hands, June 2018.</figcaption>

This week marks my sixth year working at Mozilla! I’ll be honest, this year’s mozillaversary came by so fast I nearly forgot all about writing this blog post. It feels hard to believe that I’ve been working here for a full six years. I guess I’ve grown and learned a lot in that time, but it still doesn’t feel like all that long ago when I first joined full time. Years start to blur together. So, what’s happened in the past 12 months?

Building a design system

Mozilla’s website design system, named Protocol, is now a real product. You can install it via NPM and build on-brand Mozilla web pages using its components. Protocol builds on a system of atoms, molecules, and organisms, following the concepts first made popular in Atomic Web Design. Many of the new design system components can be seen in use on the recently redesigned www.mozilla.org pages.

Sections of the mozilla.org homepage, built using Protocol components. <figcaption>Sections of the mozilla.org homepage, built using Protocol components.</figcaption>

It was fun to help get this project off the ground, and to see it finally in action on a live website. Making a flexible, intuitive design system is not easy, and we learned a lot in the first year of the project that can help us to improve Protocol over the coming months. By the end of the year, our hope is to have fully ported all mozilla.org content to use Protocol. This is not an easy task for a small team and a large website that’s been around for over a decade. It’ll be an interesting challenge!

Measuring clicks to installs

Supporting the needs of experimentation on Firefox download pages is something that our team has been helping to manage and facilitate for several years now. The breadth of data required to fully understand the effectiveness of experiments is a lot more complex today than when we first started. Product retention (i.e. how often people actively use Firefox) is now the key metric of success. Measuring how many clicks a download button on a web page receives is relatively straightforward, but understanding how many of those people go on to actually run the installer, and then how often they end up actively using the product, requires a multi-step funnel of measurement. Our team has continued to build custom tools to capture this kind of data in experiments, so that we can make better-informed product decisions.
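As a sketch of what such a funnel looks like, each stage converts some fraction of the previous one, and the product of those rates turns raw button clicks into an estimate of retained users. All numbers below are invented for illustration; they are not real Firefox data.

```python
# Illustrative download-to-retention funnel (every rate here is made up).
clicks = 10_000          # download-button clicks measured on the page
run_rate = 0.60          # hypothetical: fraction of clicks that run the installer
install_rate = 0.80      # hypothetical: fraction of runs that finish installing
retention_rate = 0.40    # hypothetical: fraction of installs still active later

retained = round(clicks * run_rate * install_rate * retention_rate)
print(retained)  # → 1920
```

The point of the multi-step funnel is visible in the arithmetic: a small change at any one stage multiplies through to the retention number at the end.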

Publishing systems

One of our team’s main objectives is to enable people at Mozilla to publish quality content to the web quickly and easily, whether that be on mozilla.org, a microsite, or an official blog. We’re a small team, however, and the marketing organisation has a great appetite for new content at a fast pace. This was one of the (many) reasons why we invested in building a design system, so that we can create on-brand web pages faster and with less repetitive manual work. We also invested in building more custom publishing systems, so that other teams can work more independently. We’ve long had publishing systems in place for things like Firefox release notes, and now we also have some initial systems in place for publishing marketing content, such as what can currently be seen on the mozilla.org homepage.

Individual contributions

  • I made over 167 commits to bedrock this past year.
  • I made over 78 commits to protocol this past year.
  • We moved to GitHub issues for most of our projects over the past year, so my Bugzilla usage has dropped. But I’ve now filed over 534 bugs, made over 5185 comments, and been assigned more than 656 bugs on Bugzilla in total. Yikes.


I got to visit Portland and San Francisco for Mozilla’s June all-hands event, and also Orlando for December’s all-hands. I brought the family along to Disney World for an end-of-year vacation afterwards, and they all had an amazing time. We’re very lucky!

<figcaption>Family photo outside the Disney Swan & Dolphin, December 2018.</figcaption>

Daniel StenbergHow to curl up 2020?

We’re running a short poll asking people about where and how we should organize curl up 2020 – our annual curl developers conference. I’m not making any promises, but getting people’s opinions will help us when preparing for next year.

Take the poll

I’ll leave the poll open for a couple of days so please respond asap.

Asa DotzlerMy New Role at Mozilla

Several months ago I took on a new role at Mozilla, product manager for Firefox browser accessibility. I couldn’t be more excited about this. It’s an area I’ve been interested in for nearly my entire career at Mozilla.

It was way back in 2000, after talking with Aaron Leventhal at a Netscape/Mozilla developer event, that I first started thinking about accessibility in Mozilla products and how well the idea of inclusivity fit with some of my personal reasons for working on the Mozilla project. If I remember correctly, Aaron was working on a braille reader or similar assistive technology, and he was concerned that the new Mozilla browser, which used a custom UI framework, wasn’t accessible to that assistive technology. Aaron persisted and Mozilla browser technologies became some of the most accessible available.

Thanks in big part to Aaron’s advocacy, hacking, and other efforts over many years, accessibility became “table stakes” for Mozilla applications. The browsers we shipped over the years were always designed for everyone, and “accessible to all” became part of the Mozilla mission.

Our mission is to ensure the Internet is a global public resource, open and accessible to all. An Internet that truly puts people first, where individuals can shape their own experience and are empowered, safe and independent.

I’m excited to be working on something so directly tied to Mozilla’s core values. I’m also super-excited to be working with so many great Firefox teams, and in particular the Firefox Accessibility Engineering team, who have been doing amazing work on Firefox’s accessibility features for many years.

I’m still just getting my feet wet, and I’ve got a lot more to learn. Stay tuned to this space for the occasional post around my new role with a focus on our efforts to ensure that Firefox is the best experience possible for people with disabilities. I expect to write at least monthly updates as we prioritize, fix, test and ship improvements to our core accessibility features like keyboard navigation, screen reader support, high contrast mode, narration, and the accessibility inspector and auditors, etc.

François MarierSecure ssh-agent usage

ssh-agent was in the news recently due to the matrix.org compromise. The main takeaway from that incident was that one should avoid the ForwardAgent (or -A) functionality whenever ProxyCommand can do the job, and consider multi-factor authentication on the server side, for example using libpam-google-authenticator or libpam-yubico.
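In practice that usually means a jump-host setup. For example, OpenSSH's ProxyJump directive (shorthand for a common ProxyCommand) lets you reach an internal machine through a bastion while your agent stays on your local machine; the host names below are placeholders:

```
# ~/.ssh/config — placeholder host names for illustration
Host internal
    HostName internal.example.com
    ProxyJump bastion.example.com
```

With this in place, `ssh internal` tunnels through the bastion without ever exposing your agent to it.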

That said, there are also two options to ssh-add that can help reduce the risk of someone else with elevated privileges hijacking your agent to make use of your ssh credentials.

Prompt before each use of a key

The first option is -c which will require you to confirm each use of your ssh key by pressing Enter when a graphical prompt shows up.

Simply install an ssh-askpass frontend like ssh-askpass-gnome:

apt install ssh-askpass-gnome

and then use this when adding your key to the agent:

ssh-add -c ~/.ssh/key

Automatically removing keys after a timeout

ssh-add -D will remove all identities (i.e. keys) from your ssh agent, but requires that you remember to run it manually once you're done.

That's where the second option comes in. Specifying -t when adding a key will automatically remove that key from the agent after a while.

For example, I have found that this setting works well at work:

ssh-add -t 10h ~/.ssh/key

where I don't want to have to type my ssh password every time I push a git branch.

At home on the other hand, my use of ssh is more sporadic and so I don't mind a shorter timeout:

ssh-add -t 4h ~/.ssh/key

Making these options the default

I couldn't find a configuration file to make these settings the default and so I ended up putting the following line in my ~/.bash_aliases:

alias ssh-add='ssh-add -c -t 4h'

so that I can continue to use ssh-add as normal and not have to remember to include these extra options.

Daniel StenbergTest servers for curl

curl supports some twenty-three protocols (depending on exactly how you count).

In order to properly test and verify curl’s implementations of each of these protocols, we have a test suite. In the test suite we have a set of handcrafted servers that speak the server-side of these protocols. The more used a protocol is, the more important it is to have it thoroughly tested.

We believe in having test servers that are “stupid” and that offer buttons, levers and thresholds for us to control and manipulate how they act and how they respond for testing purposes. The control of what to send should be dictated as much as possible by the test case description file. If we want a server to send back a slightly broken protocol sequence to check how curl supports that, the server must be open for this.

In order to do this with a large degree of freedom and without restrictions, we’ve found that using “real” server software for this purpose is usually not good enough. Testing the broken and bad cases is typically not easily done that way. Actual server software tries hard to do the right thing and obey standards and protocols, while we would rather the server not make any decisions by itself at all but just send exactly the bytes we ask it to. Simply put.

Of course we don’t always get what we want and some of these protocols are fairly complicated which offer challenges in sticking to this policy all the way. Then we need to be pragmatic and go with what’s available and what we can make work. Having test cases run against a real server is still better than no test cases at all.


“SOCKS is an Internet protocol that exchanges network packets between a client and server through a proxy server. Practically, a SOCKS server proxies TCP connections to an arbitrary IP address, and provides a means for UDP packets to be forwarded.”

(according to Wikipedia)

Recently we fixed a bug in how curl sends credentials to a SOCKS5 proxy, as it turned out the protocol itself only supports user name and password lengths of up to 255 bytes each, while curl normally has no such limits and could pass on credentials of virtually infinite length. OK, that was silly and we fixed the bug. Now curl will properly return an error if you try such long credentials with your SOCKS5 proxy.
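The 255-byte limit comes straight from the wire format: the SOCKS5 username/password sub-negotiation (RFC 1929) prefixes each field with a single length byte. Here is a minimal sketch of building that request; this is illustrative Python, not curl's actual code:

```python
def socks5_userpass_request(user: bytes, password: bytes) -> bytes:
    """Build an RFC 1929 username/password auth request.

    Each field is prefixed by one length byte, so neither field can
    exceed 255 bytes — which is why longer credentials must be rejected.
    """
    if len(user) > 255 or len(password) > 255:
        raise ValueError("SOCKS5 limits user name and password to 255 bytes")
    return (bytes([0x01, len(user)]) + user
            + bytes([len(password)]) + password)

# 0x01 is the sub-negotiation version byte; then length-prefixed fields.
msg = socks5_userpass_request(b"alice", b"hunter2")
```

Since a length byte can only hold 0–255, there is simply no way to encode a longer field, no matter what the client would like to send.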

As a general rule, fixing a bug should mean adding at least one new test case, right? Up to this time we had been testing the curl SOCKS support by firing up an ssh client and having it set up a SOCKS proxy that connects to the other test servers.

curl -> ssh with SOCKS proxy -> test server

Since this setup doesn’t support SOCKS5 authentication, it turned out complicated to add a test case to verify that this bug was actually fixed.

This test problem was fixed by the introduction of a newly written SOCKS proxy server dedicated for the curl test suite (which I simply named socksd). It does the basic SOCKS4 and SOCKS5 protocol logic and also supports a range of commands to control how it behaves and what it allows so that we can now write test cases against this server and ask the server to misbehave or otherwise require fun things so that we can make really sure curl supports those cases as well.

It also has the additional bonus that it works without ssh being present so it will be able to run on more systems and thus the SOCKS code in curl will now be tested more widely than before.

curl -> socksd -> test server

Going forward, we should also be able to create even more SOCKS tests with this and make sure to get even better SOCKS test coverage.

Firefox UXPaying Down Enterprise Content Debt: Part 3

Paying Down Enterprise Content Debt

Part 3: Implementation & governance

Summary: This series outlines the process to diagnose, treat, and manage enterprise content debt, using Firefox add-ons as a case study. Part 1 frames the Firefox add-ons space in terms of enterprise content debt. Part 2 lists the eight steps to develop a new content model. This final piece describes the deliverables we created to support that new model.

<figcaption>@neonbrand via Unsplash</figcaption>

Content guidelines for the “author experience”

“Just as basic UX principles tell us to help users achieve tasks without frustration or confusion, author experience design focuses on the tasks and goals that CMS users need to meet — and seeks to make it efficient, intuitive, and even pleasurable for them to do so.” — Sara Wachter-Boettcher, Content Everywhere

A content model is a useful tool for organizations to structure, future-proof, and clean up their content. But that content model is only brought to life when content authors populate the fields you have designed with actual content. And the quality of that content is dependent in part on how the content system supports those authors in their endeavor.

We had discovered through user research that developers create extensions for a great variety of reasons — including as a side hobby or for personal enjoyment. They may not have the time, incentive, or expertise to produce high-quality, discoverable content to market their extensions, and they shouldn’t be expected to. But, we can make it easier for them to do so with more actionable guidelines, tools, and governance.

An initial review of the content submission flow revealed that the guidelines for developers needed to evolve. Specifically, we needed to give developers clearer requirements, explain why each content field mattered and where that content showed up, and provide them with examples. On top of that, we needed to give them writing exercises and tips when they hit a dead end.

So, to support our developer authors in creating our ideal content state, I drafted detailed content guidelines that walked extension developers through the process of creating each content element.

<figcaption>Draft content guidelines for extension elements, mocked up in a rough Google Site for purposes of feedback and testing.</figcaption>

Once a draft was created, we tested it with Mozilla extension developer, Dietrich Ayala. Dietrich appreciated the new guidelines, and more importantly, they helped him create better content.

<figcaption>Sample of previous Product Page content</figcaption>
<figcaption>Sample of revised Product Page content</figcaption>
<figcaption>Sample of revised Product Page content: New screenshots to illustrate how extension works</figcaption>

We also conducted interviews with a cohort of developers in a related project to redesign the extensions submission flow (i.e., the place in which developers create or upload their content). As part of that process, we solicited feedback from 13 developers about the new guidelines:

  • Developers found the guidelines to be helpful and motivating for improving the marketing and SEO of their extensions, thereby better engaging users.
  • The clear “do this/not that” section was very popular.
  • They had some suggestions for improvement, which were incorporated into the next version.

Excerpts from developer interviews:

“If all documentation was like this, the world would be a better place…It feels very considered. The examples of what to do, what not do is great. This extra mile stuff is, frankly, something I don’t see on developer docs ever: not only are we going to explain what we want in human language, but we are going to hold your hand and give you resources…It’s [usually] pulling teeth to get this kind of info [for example, icon sizes] and it’s right here. I don’t have to track down blogs or inscrutable documentation.”
“…be more upfront about the stuff that’s possible to change and the stuff that would have consequences if you change it.”

Finally, to deliver the guidelines in a useful, usable format, we partnered with the design agency, Turtle, to build out a website.

<figcaption>Draft content guidelines page</figcaption>
<figcaption>Draft writing exercise on the subtitle content guidelines page</figcaption>

Bringing the model to life

Now that the guidelines were complete, it was time to develop the communication materials to accompany the launch.

To bring the content to life, and put a human face on it, we created a video featuring our former Director of Firefox UX, Madhava Enros, and an extension developer. The video conveys the importance of creating good content and design for product pages, as well as how to do it.

<figcaption>Product Page Content & Design Video</figcaption>

Preliminary results

Our content model had a tall order to fill, as detailed in our objectives and measurements template.

So, how did we do against those objectives? While the content guidelines had not yet been published at the time of this blog post, here’s a snapshot of preliminary results:

<figcaption>Snapshot of preliminary results</figcaption>

And, for illustration, some examples in action:

1. Improved Social Share Quality

<figcaption>Previous Facebook Share example</figcaption>
<figcaption>New Facebook Share example</figcaption>

2. New Google search snippet model reflective of SEO best practices and what we’d learned from user research — such as the importance of social proof via extension ratings.

<figcaption>Previous Google “search snippet”</figcaption>
<figcaption>New Google “search snippet”</figcaption>

3. Promising, but early, SEO findings:

  • Organic search traffic decline slowed down by 2x compared to summer 2018.
  • Impressions in search for the AMO site are much higher (30%) than before. Shows potential of the domain.
  • Overall rankings are stable, and top 10 rankings are a bit up.
  • Extension/theme installs via organic search are stable after previous year declines.
  • Conversion rate to installs via organic search is growing, from 42% to 55%.

Conclusion & resources

There isn’t a quick or easy fix for enterprise content debt, but investing the time and energy to think about your content as structured data, to cultivate a content model based upon your business goals, and to develop the guidance and guardrails to realize that model with your content authors pays dividends over the long haul.

Hopefully this series has provided a set of steps and tools to figure out your individual payment plan. For more on this topic:

And, if you need more help, and you’re fortunate enough to have a content strategist on your team, you can always ask that friendly content nerd for some content credit counseling.

Thank you to Michelle Heubusch, Jennifer Davidson, Emanuela Damiani, Philip Walmsley, Kev Needham, Mike Conca, Amy Tsay, Jorge Villalobos, Stuart Colville, Caitlin Neiman, Andreas Wagner, Raphael Raue, and Peiying Mo for their partnership in this work.

Paying Down Enterprise Content Debt: Part 3 was originally published in Firefox User Experience on Medium, where people are continuing the conversation by highlighting and responding to this story.

Firefox UXPaying Down Enterprise Content Debt: Part 2

Paying Down Enterprise Content Debt

Part 2: Developing Solutions

Summary: This series outlines the process to diagnose, treat, and manage enterprise content debt, using Firefox add-ons as a case study. In Part 1, I framed the Firefox add-ons space in terms of an enterprise content debt problem. In this piece, I walk through the eight steps we took to develop solutions, culminating in a new content model. See Part 3 for the deliverables we created to support that new model.

<figcaption>Source: Sam Truong Dan</figcaption>
  • Step 1: Stakeholder interviews
  • Step 2: Documenting content elements
  • Step 3: Data analysis — content quality
  • Step 4: Domain expert review
  • Step 5: Competitor compare
  • Step 6: User research — What content matters?
  • Step 7: Creating a content model
  • Step 8: Refine and align

Step 1: Stakeholder interviews

To determine a payment plan for our content debt, we needed to first get a better understanding of the product landscape. Over the course of a couple of weeks, the team’s UX researcher and I conducted stakeholder interviews:

Who: Subject matter experts, decision-makers, and collaborators. May include product, engineering, design, and other content folks.

What: Schedule an hour with each participant. Develop a spreadsheet with questions that get at the heart of what you are trying to understand. Ask the same set of core questions to establish trends and patterns, as well as a smaller set specific to each interviewee’s domain expertise.

<figcaption>Sample question template, including content-specific inquiries below</figcaption>

After completing the interviews, we summarized the findings and walked the team through them. This helped build alignment with our stakeholders around the issues and prime them for the potential UX and content solutions ahead.

Stakeholder interviews also allowed us to clarify our goals. To focus our work and make ourselves accountable to it, we broke down our overarching goal — improve Firefox users’ ability to discover, trust, install, and enjoy extensions — into detailed objectives and measurements using an objectives and measurements template. Our main objectives fell into three buckets: improved user experience, improved developer experience, and improved content structure. Once the work was done, we could measure our progress against those objectives using the measurements we identified.

Step 2: Documenting content elements

Product environment surveyed, we dug into the content that shaped that landscape.

Extensions are recommended and accessed not only through AMO, but in a variety of places, including the Firefox browser itself, in contextual recommendations, and in external content. To improve content across this large ecosystem, we needed to start small…at the cellular content level. We needed to assess, evolve, and improve our core content elements.

By “content elements,” I mean all of the types of content or data that are attached to an extension — either by developers in the extension submission process, by Mozilla on the back-end, or by users. So, very specifically, these are things like description, categories, tags, ratings, etc. For example, the following image contains three content elements: icon, extension name, summary:

Using Excel, I documented existing content elements. I also documented which elements showed up where in the ecosystem (i.e., “content touchpoints”):

<figcaption>Excerpt of content elements documentation</figcaption>
<figcaption>Excerpt of content elements documentation: content touchpoints</figcaption>

The content documentation Excel served as the foundational document for all the work that followed. As the team continued to acquire information to shape future solutions, we documented those learnings in the Excel, evolving the recommendations as we went.

Step 3: Data analysis — content quality

Current content elements identified, we could now assess the state of said content. To complete this analysis, we used a database query (created by our product manager) that populated all of the content for each content element for every extension and theme. Phew.

We developed a list of questions about the content…

<figcaption>Sample selection of data questions</figcaption>

…and then answered those questions for each content element field.

<figcaption>Sample data analysis for “Extension Name”</figcaption>
  • For quantitative questions (like minimum/maximum content length per element), we used Excel formulas.
  • For questions of content quality, we analyzed a sub-section of the data. For example, what’s the content quality state of extension names for the top 100 extensions? What patterns, good and bad, do we see?
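The quantitative pass is easy to reproduce outside Excel as well. For instance, computing minimum and maximum content length per element is a one-liner over the exported column; the sample names below are made up for illustration:

```python
# Made-up sample of the "extension name" column pulled by the database query.
extension_names = ["Tab Manager", "uBlock Origin", "A"]

# Character length of each name, then the shortest and longest observed.
lengths = [len(name) for name in extension_names]
print(min(lengths), max(lengths))
```

The same pattern works for any of the other content element fields (summary, long description, and so on) once they are pulled out of the dump.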

Step 4: Domain expert review

I also needed input from domain experts on the content elements, including content reviewers, design, and localization. Through this process, we discovered pain points, areas of opportunity, and considerations for the new requirements.

For example, we had been contemplating a 10-character minimum for our long description field. Conversations with localization expert, Peiying Mo, revealed that this would not work well for non-English content authors…while 10 characters is a reasonable expectation in English, it’s asking for quite a bit of content when we are talking about 10 Chinese characters.

Because improving search engine optimization (SEO) for add-ons was a priority, review by SEO specialist, Raphael Raue, was especially important. Based on user research and analytics, we knew users often find extensions, and make their way to the add-ons site, through external search engines. Thus, their first impression of an add-on, and the basis upon which they may assess their interest to learn more, is an extension title and description in Google search results (also called a “search snippet”). So, our new content model needed to be optimized for these search listings.

<figcaption>Sample domain expert review comments for “Extension Name”</figcaption>

Step 5: Competitor compare

A picture of the internal content issues and needs was starting to take shape. Now we needed to look externally to understand how our content compared to competitors and other successful commercial sites.

Philip Walmsley, UX designer, identified those sites and audited their content elements, identifying surplus, gaps, and differences from Firefox. We discussed the findings and determined what to add, trim, or tweak in Firefox’s content element offerings depending on value to the user.

<figcaption>Excerpt of competitive analysis</figcaption>

Step 6: User research — what content matters?

A fair amount of user research about add-ons had already been done before we embarked on this journey, and Jennifer Davidson, our UX researcher, led additional, targeted research over the course of the year. That research informed the content element issues and needs. In particular, a large site survey, add-ons team think-aloud sessions, and in-person user interviews identified how users discover and decide whether or not to get an extension.

Regarding extension product pages in particular, we asked:

  • Do participants understand and trust the content on the product pages?
  • What type of information is important when deciding whether or not to get an extension?
  • Is there content missing that would aid in their discovery and comprehension?

Through this work, we deepened our understanding of the relative importance of different content elements (for example: extension name, summary, long description were all important), what elements were critical to decision making (such as social proof via ratings), and where we had content gaps (for example, desire for learning-by-video).

Step 7: Creating a content model

“…content modeling gives you systemic knowledge; it allows you to see what types of content you have, which elements they include, and how they can operate in a standardized way — so you can work with architecture, rather than designing each one individually.” — Sara Wachter-Boettcher, Content Everywhere, 31

Learnings from steps 1–6 informed the next, very important content phase: identifying a new content model for an add-ons product page.

A content model defines all of the content elements in an experience. It details the requirements and restrictions for each element, as well as the connections between elements. Content models take diverse shapes and forms depending on project needs, but the basic steps often include documentation of the content elements you have (step 2 above), analysis of those elements (steps 3–6 above), and then charting new requirements based on what you’ve learned and what the organization and users need.
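
As a hypothetical illustration (element names, limits, and guidance are invented here, not the actual add-ons requirements), a fragment of such a model could be encoded and checked like this:

```python
# Hypothetical content model fragment: each element carries its
# requirements and restrictions, so tooling can validate content
# against the model instead of reviewing each page individually.

CONTENT_MODEL = {
    "extension_name": {
        "required": True,
        "max_chars": 40,
        "guidance": "Distinct and memorable; communicates core function.",
    },
    "summary": {
        "required": True,
        "max_chars": 135,
        "guidance": "One or two sentences on the value to the user.",
    },
}

def meets_model(element: str, value: str) -> bool:
    rules = CONTENT_MODEL[element]
    if rules["required"] and not value.strip():
        return False
    return len(value) <= rules["max_chars"]
```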

Creating a content model takes quite a bit of information and input upfront, but it pays dividends in the long-term, especially when it comes to addressing and preventing content debt. The add-ons ecosystem did not have a detailed, updated content model and because of that, developers didn’t have the guardrails they needed to create better content, the design team didn’t have the content types it needed to create scalable, user-focused content, and users were faced with varying content quality.

A content model can feel prescriptive and painfully detailed, but each content element within it should provide the flexibility and guidance for content creators to produce content that meets their goals and the goals of the system.

<figcaption>Sample content model for “Extension Name”</figcaption>

Step 8: Refine and align

Now that we had a draft content model — in other words, a list of recommended requirements for each content element — we needed review and input from our key stakeholders.

This included conversations with add-ons UX team members, as well as partners from the initial stakeholder interviews (like product, engineering, etc.). It was especially important to talk through the content model elements with designers Philip and Emanuela, and to pressure test whether each new element’s requirements and file type served design needs across the ecosystem. One of the ways we did this was by applying the new content elements to future designs, with both best and worst-case content scenarios.


<figcaption>Re-designed product page with new content elements (note — not a final design, just a study). Design lead: Philip Walmsley</figcaption>
<figcaption>Draft “universal extension card” with new content elements (note — not a final design, just a study). This card aims to increase user trust and learnability when user is presented with an extension offering anywhere in the ecosystem. Design lead: Emanuela Damiani</figcaption>

Based on this review period and usability testing on usertesting.com, we made adjustments to our content model.

Okay, content model done. What’s next?

Now that we had our new content model, we needed to make it a reality for the extension developers creating product pages.

In Part 3, I walk through the creation and testing of deliverables, including content guidelines and communication materials.

Thank you to Michelle Heubusch, Jennifer Davidson, Emanuela Damiani, Philip Walmsley, Kev Needham, Mike Conca, Amy Tsay, Jorge Villalobos, Stuart Colville, Caitlin Neiman, Andreas Wagner, Raphael Raue, and Peiying Mo for their partnership in this work.

Paying Down Enterprise Content Debt: Part 2 was originally published in Firefox User Experience on Medium, where people are continuing the conversation by highlighting and responding to this story.

Firefox UXPaying Down Enterprise Content Debt: Part 1

Paying Down Enterprise Content Debt

Part 1: Framing the problem

Summary: This series outlines the process to diagnose, treat, and manage enterprise content debt, using Firefox add-ons as a case study. This first part frames the enterprise content debt issue. Part 2 lists the eight steps to develop a new content model. Part 3 describes the deliverables we created to support that new model.

<figcaption>QuinceMedia via Wikimedia</figcaption>


If you want to block annoying ads or populate your new tab with sassy cats, you can do it…with browser extensions and themes. Users can download thousands of these “add-ons” from Firefox’s host site, addons.mozilla.org (“AMO”), to customize their browsing experience with new functionality or a dash of whimsy.

<figcaption>The Tabby Cat extension takes over your new tab with adorable cats</figcaption>

Add-ons can be a useful and delightful way for people to improve their web experience — if they can discover, understand, trust, and appreciate their offerings. Over the last year, the add-ons UX pod at Firefox, in partnership with the larger add-ons team, worked on ways to do just that.

One of the ways we did this was by looking at these interconnected issues through the lens of content structure and quality. In this series, I’ll walk you through the steps we took to develop a new content model for the add-ons ecosystem.

Understanding the problem

Add-ons are largely created by third-party developers, who also create the content that describes the add-ons for users. That content includes things like extension name, icon, summary, long description, screenshots, et cetera:

<figcaption>Sample developer-provided content for the Momentum extension</figcaption>

With 10,000+ extensions and 400,000+ themes, we are talking about a lot of content. And while the add-ons team completely appreciated the value of the add-ons themselves, we didn’t really understand how valuable the content was, and we didn’t use it to its fullest potential.

The first shift we made was recognizing that what we had was enterprise content — structured content and metadata stored in a formal repository, reviewed, sometimes localized, and published in different forms in multiple places.

Then, when we assessed the value of it to the enterprise, we uncovered something called content debt.

Content debt is the hidden cost of not managing the creation, maintenance, utility, and usability of digital content. It accumulates when we don’t treat content like an asset with financial value, when we value expediency over the big picture, and when we fail to prioritize content management. You can think of content debt like home maintenance. If you don’t clean your gutters now, you’ll pay in the long term with costly water damage.

AMO’s content debt included issues of quality (missed opportunities to communicate value and respond to user questions), governance (varying content quality with limited organizational oversight), and structure (the need for new content types to evolve site design and improve social shares and search descriptions).

A few examples of content debt in action:

<figcaption>Facebook social share experience: Confusing image accompanied by text describing how to report an issue with the extension</figcaption>
<figcaption>Google Search results example for an extension. Lacks description of basic functionality and value proposition. No SEO-optimized keywords or social proof like average rating.</figcaption>
<figcaption>A search for “Investing” doesn’t yield the most helpful results</figcaption>

All of this equals quite a bit of content debt that prevents developers and users from being as successful as they could be in achieving their interconnected goals: offering and obtaining extensions. It also hinders the ecosystem as a whole when it comes to things like SEO (Search Engine Optimization), which the team wanted to improve.

Given the AMO site’s age (15 years), and the amount of content it contains, debt is to be expected. And it’s rare for a content strategist to be given a content challenge that doesn’t involve some debt because it’s rare that you are building a system completely from scratch. But, that’s not the end of the story.

When we considered the situation from a systems-wide perspective, we realized that we needed to move beyond thinking of the content as something created by an individual developer in a vacuum. Yes, the content is developer-generated — and with that a certain degree of variation is to be expected — but how could we provide developers with the support and perspective to create better content that could be used not only on their product page, but across the content ecosystem? While the end goal is more users with more extensions by way of usable content, we needed to create the underpinning rules that allowed for that content to be expressed across the experience in a successful way.

In part 2, I walk through the specific steps the team took to diagnose and develop solutions for our enterprise content debt. Meanwhile, speaking of the team…

Meet our ‘UX supergroup’

“A harmonious interface is the product of functional, interdisciplinary communication and clear, well-informed decision making.” — Erika Hall, Conversational Design, 108

While the problem space is framed in terms of content strategy, the diagnosis and solutions were very much interdisciplinary. For this work, I was fortunate to be part of a “UX supergroup,” i.e., an embedded project team that included a user researcher (Jennifer Davidson), a designer with strength in interaction design (Emanuela Damiani), a designer with strength in visual design (Philip Walmsley), and a UX content strategist (me).

We were able to do some of our best work by bringing to bear our discipline-specific expertise in an integrated and deeply collaborative way. Plus, we had a mascot — Pusheen the cat — inspired in part by our favorite tab-changing extension, Tabby Cat.

<figcaption>Left to right: Jennifer, Emanuela, Philip, Meridel #teampusheen</figcaption>

See Part 2, Paying Down Enterprise Content Debt: Developing Solutions

Thank you to Michelle Heubusch, Jennifer Davidson, Emanuela Damiani, Philip Walmsley, Kev Needham, Mike Conca, Amy Tsay, Jorge Villalobos, Stuart Colville, Caitlin Neiman, Andreas Wagner, Raphael Raue, and Peiying Mo for their partnership in this work.

Paying Down Enterprise Content Debt: Part 1 was originally published in Firefox User Experience on Medium, where people are continuing the conversation by highlighting and responding to this story.

Mozilla VR BlogFirefox Reality 1.1.3

Firefox Reality 1.1.3

Firefox Reality 1.1.3 will soon be available for all users in the Viveport, Oculus, and Daydream app stores.

This release includes some major new features: support for 6DoF controllers, new environments, an option for a curved browser window, greatly improved YouTube support, and many bug fixes.

New features:
  • Improved support for 6DoF Oculus controllers and user height.
  • Added support for 6DoF VIVE Focus (WaveVR) controllers.
  • Updated the Meadow environment and added new Offworld, Underwater, and Winter environments (Settings > Environments).
  • Added new option for curved browser window (Settings > Display).
  • Improved default playback quality of YouTube videos (defaults to 1440p HD or next best).

Improvements/Bug Fixes:

  • Fixed User-Agent override to fix Delight-VR video playback.
  • Changed the layout of the Settings window so it’s easier and faster to find the option you need to change.
  • Performance improvements, including dynamic clock levels and Fixed Foveated Rendering on Oculus.
  • Improved resolution of text rendering in UI widgets.
  • Plus a myriad of web-content handling improvements from GeckoView 68.
  • … and numerous other fixes

Full release notes can be found in our GitHub repo here.

Looking ahead, we are exploring content sharing and syncing across browsers (including bookmarks), multiple windows, as well as continuing to invest in baseline features like performance. We appreciate your ongoing feedback and suggestions — please keep them coming!

Firefox Reality is available right now.

Download for Oculus
(supports Oculus Go)

Download for Daydream
(supports all-in-one devices)

Download for Viveport (Search for Firefox Reality in Viveport store)
(supports all-in-one devices running VIVE Wave)

Mozilla VR BlogWrapping up a week of WebVR experiments

Wrapping up a week of WebVR experiments

Earlier this week, we kicked off a week of WebVR experiments with our friends at Glitch.com. Glitch creator and WebVR expert Andrés Cuervo put together seven projects that are fun, unique, and will challenge you to learn advanced techniques for building Virtual Reality experiences on the web.

If you are just getting started with WebVR, we recommend you check out this WebVR starter kit which will walk you through creating your very first WebVR experience.

Today, we launched the final experiment. If you haven't been following along, you can catch up on all of them below:

Motion Capture Dancing in VR

Learn how to use free motion capture data to animate a character running, dancing, or cartwheeling across a floor.

Adding Models and Shaders

Learn about how to load common file types into your VR scene.

Using 3D Shapes like Winding Knots

Learn how to work with the torus knot shape and the animation component that is included with A-frame.

Animated Torus Knot Rings

Learn about template and layout components while you continue to build on the previous Winding Knots example.

Generated Patterns

Create some beautiful patterns using some flat geometry in A-Frame with clever tilting.

Creating Optical Illusions

This is a simple optical illusion made possible by virtual reality.

Including Dynamic Content

Learn how to use an API to serve random images that are used as textures in this VR scene.

We hope you enjoyed learning and remixing these experiments (We really enjoyed putting them together). Follow Andrés Cuervo on Glitch for even more WebVR experiments.

The Firefox FrontierHow to stay safe online while on vacation

Vacations are a great time to unwind, sip a fruity drink with a tiny umbrella in it and expose your personal information to hackers if you’re traveling with a laptop, … Read more

The post How to stay safe online while on vacation appeared first on The Firefox Frontier.

Giorgio MaoneCross-Browser NoScript hits the Chrome Store

I'm pleased to announce that, some hours ago, the first public beta of cross-browser NoScript (10.6.1) passed Google's review process and has been published on the Chrome Web Store.
This is a major milestone in the history of NoScript, which started on May 13th, 2005 (next year we will celebrate our 15th birthday!).

Over all these years NoScript has undergone many transformations, ports, and migrations:

  • three distinct Android ports (one for Fennec "classic", one for Firefox Mobile, the last as a WebExtension);
  • one partial rewrite, to make it multi-process compatible;
  • one full, long and quite dramatic rewrite, to migrate it to the WebExtensions API (in whose design and implementation Mozilla involved me as a contributor, in order to make this possible).

And finally, today, we've got a unified code base compatible with both Firefox and Chromium, and possibly, in the future, with other browsers that support the WebExtensions API to a sufficient extent.
One difference Chromium users need to be aware of: on their browser, NoScript's XSS filter is currently disabled. At least for the time being, they'll have to rely on the browser's built-in "XSS Auditor", which unfortunately has proved over time not to be as effective as NoScript's "Injection Checker". The latter could not be ported yet, though, because it requires asynchronous processing of web requests: one of several capabilities provided to extensions by Firefox only. To be honest, during the "big switch" to the WebExtensions API, which was largely inspired by Chrome, Mozilla involved me in its design and implementation with the explicit goal of ensuring that it supported NoScript's use cases as much as possible. Regrettably, the additions and enhancements which resulted from this work have not been picked up by Google.

Let me repeat: this is a beta, and I urge early adopters to report issues in the "Support" section of the NoScript Forum, and more development-oriented ones to file technical bug reports and/or contribute patches at the official source code repository. With your help as beta testers, I plan to bless NoScript 11 as a "stable Chromium-compatible release" by the end of June.

I couldn't thank enough the awesome Open Technology Fund folks for the huge support they gave to this project, and to NoScript in general. I'm really excited at the idea that, under the same umbrella, next week Simply Secure will start working on improving NoScript's usability and accessibility. At the same time, integration with the Tor Browser is getting smoother and smoother.

The future of NoScript has never been brighter :)

See also ZDNet's and GHacks' coverage of the announcement.

Daniel Stenbergno more global dns cache in curl

In January 2002, we added support for a global DNS cache in libcurl. All transfers set to use it would share and use the same global cache.

We rather quickly realized that having a global cache without locking was error-prone and not really advisable, so already in March 2004 we added comments in the header file suggesting that users should not use this option.

It remained in the code and time passed.

In the autumn of 2018, another fourteen years later, we finally addressed the issue when we announced a plan for this option's deprecation. We announced a date for when it would become deprecated and disabled in code (7.62.0), and said that six months later, if no major incidents or outcries occurred, we would delete the code completely.

That time has now arrived. All code supporting a global DNS cache in curl has been removed. Any libcurl-using program that sets this option from now on will simply not get a global cache and instead proceed with the default handle-oriented cache, and the documentation is updated to clearly indicate that this is the case. This change will ship in curl 7.65.0 due to be released in May 2019 (merged in this commit).

If a program still uses this option, the only really noticeable effect should be a slightly worse name resolving performance, assuming the global cache had any point previously.

Programs that want to continue to have a DNS cache shared between multiple handles should use the share interface, which allows shared DNS cache and more – with locking. This API has been offered by libcurl since 2003.
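
A minimal sketch of that share interface via the pycurl bindings (assumes the pycurl package is installed; no transfer is performed here, so the sketch stays network-free):

```python
# Two easy handles sharing one locked DNS cache through the share
# interface, the recommended replacement for the removed global cache.
import pycurl

share = pycurl.CurlShare()
share.setopt(pycurl.SH_SHARE, pycurl.LOCK_DATA_DNS)  # share the DNS cache

handles = [pycurl.Curl() for _ in range(2)]
for h in handles:
    h.setopt(pycurl.SHARE, share)  # both handles now use the shared cache
    h.setopt(pycurl.URL, "https://example.com/")
# Calling h.perform() on each handle would reuse cached name lookups;
# omitted here to keep the sketch network-free.
```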

Hacks.Mozilla.OrgDeveloper Roadshow 2019 returns with VR, IoT and all things web

Mozilla Developer Roadshow is a meetup-style, Mozilla-focused event series for people who build the web. In 2017, the Roadshow reached more than 50 cities around the world. We shared highlights of the latest and greatest Mozilla and Firefox technologies. Now, we’re back to tell the story of how the web continues to democratize opportunities for developers and digital creators.

New events in New York and Los Angeles

To open our 2019 series, Mozilla presents two events with VR visionary Nonny de la Peña and the Emblematic Group in Los Angeles (April 23) and in New York (May 20-23). de la Peña’s pioneering work in virtual reality, widely credited with helping create the genre of immersive journalism, has been featured in Wired, Inc., The New York Times, and on the cover of The Wall Street Journal. Emblematic will present their latest project, REACH in WebVR. Their presentation will include a short demo of their product. During the social hour, the team will be available to answer questions and share their learnings and challenges of developing for the web.

Funding and resource scarcity continue to be key obstacles in helping the creative community turn their ideas into viable products. Within the realm of cutting edge emerging technologies, such as mixed reality, it’s especially challenging for women. Because women receive less than 2% of total venture funding, the open distribution model of the web becomes a viable and affordable option to build, test, and deploy their projects.

Upcoming DevRoadshow events

The DevRoadshow continues on the road with eight more upcoming sessions in Europe and the Asia Pacific regions throughout 2019. Locations and dates will be announced soon. We’re eager to invite coders and creators around the world to join us this year. The Mozilla Dev Roadshow is a great way to make new friends and stay up to date on new products. Come learn about services and opportunities that extend the power of the web as the most accessible and inclusive platform for immersive experiences.

Check back to this post for updates, visit our DevRoadshow site for up to date registration opportunities, and follow along our journey on @mozhacks or sign up for the weekly Mozilla Developer Newsletter. We’ll keep you posted!

The post Developer Roadshow 2019 returns with VR, IoT and all things web appeared first on Mozilla Hacks - the Web developer blog.

Mozilla Future Releases BlogFirefox Beta for Windows 10 on Qualcomm Snapdragon Always Connected PCs Now Available

Whether it’s checking the weather forecast or movie times, you can always count on the web to give you the information you’re seeking. Your choice of operating system and computer shouldn’t change your online experience. As part of Mozilla’s mission, we built Firefox as a trusted user agent for people on the web and it’s one of the reasons why we’re always looking to work with companies to optimize Firefox Quantum for their devices.

Last December, we announced our collaboration with Qualcomm, to create an ARM64-native build of Firefox for Snapdragon-powered Windows 10 Always Connected PCs. Today, we’re excited to report its availability in our beta release channel, a channel aimed at developers or early tech adopters to test upcoming features before they’re released to consumers.

Today’s release builds on the performance work done for Firefox Quantum, which uses multiple processes as efficiently as possible. Working with Qualcomm’s Snapdragon compute platform, we’re able to push the multi-core paradigm one step further, offering octa-core CPUs. We’re also taking advantage of Rust’s fearless concurrency to intelligently divide browsing tasks across those cores to deliver a fast, personal, and convenient experience.

Snapdragon-powered Always Connected PCs are ideal for the road warrior because they are thin, fanless, and lightweight, with long battery life and lightning-fast cellular connectivity, and built to seamlessly perform daily work tasks on the go. We’re no stranger to optimizing the Firefox browser for any device. From Fire TV to the new iPad, we’ve custom-tailored Firefox browsers for a number of different devices, because it shouldn’t matter what device you use.

Test Drive Firefox’s ARM64 for Windows 10 on Snapdragon

Your feedback is valuable for us to fine-tune this experience for a future release. If you have an ARM64 device running Windows 10, you can help by reporting bugs or submitting crash reports or simply sharing feedback. If you have any questions about your experience you can visit here for assistance.

To try the ARM64-native build of Firefox on Windows beta version, you can download it here.

The post Firefox Beta for Windows 10 on Qualcomm Snapdragon Always Connected PCs Now Available appeared first on Future Releases.

Mozilla Localization (L10N)Implementing Fluent in a localization tool

In order to produce natural sounding translations, Fluent syntax supports gender, plurals, conjugations, and virtually any other grammatical category. The syntax is designed to be simple to read, but translators without developer background might find more complex concepts harder to deal with.

That’s why we designed a Fluent-specific user interface in Pontoon, which unleashes Fluent powers to localizers who aren’t coders. Any other general purpose Translation Management System (TMS) with support for popular localization formats can follow the example. Let’s have a closer look at Fluent implementation in Pontoon.

Storing Fluent strings in the DB

We store Fluent translations in the Translation table differently compared to other file formats.

In the string column we usually only store the translatable content, which is sufficient for representing the translation across the application. For example, string hello = Hello, world! in the .properties file format is stored as Hello, world! in Translation.string. In Fluent, we store the entire string: hello = Hello, world!.

The main reason for storing full Fluent strings is attributes, which can be present in Fluent Messages and Terms in addition to their values. Having multiple localizable values stored together allows us to build advanced UIs like the ones we use for access keys (see below).
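
A toy sketch of that storage rule (simplified string handling, not the real .properties or Fluent parsers):

```python
# For .properties, only the value reaches Translation.string;
# for Fluent, the entire serialized entry is stored unchanged,
# so the id, value, and any attributes all survive together.

def properties_db_value(line: str) -> str:
    # "hello = Hello, world!" -> "Hello, world!"
    _, _, value = line.partition("=")
    return value.strip()

def fluent_db_value(entry: str) -> str:
    # Stored verbatim, attributes included.
    return entry
```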

The alternative to storing the serialized string is to store its AST. At the time of implementing Fluent support in Pontoon, the AST wasn’t as stable as the syntax, but today you should be fine with using either of these approaches.

Different presentation formats

Due to their potential complexity, Fluent strings can be presented in 4 different ways in Pontoon:

  1. In simplified form: used in string list, History tab and various other places across Pontoon, it represents the string as it appears to the end-user in the application. We generate it using FluentSerializer.serializeExpression(). For example, hello = Hello, world! would appear simply as Hello, world!.
  2. In read-only form: used to represent the source string.
  3. In editable form: used to represent the translation in the editor.
  4. As source: used to represent the translation in the source editor.
Presentation formats

Where the different presentation formats are used. You can access the source view by clicking on the FTL button.

Example walk-through

The following is a list of various Fluent strings and their appearances in the Pontoon translation interface, with the source string panel on top and the translation editor below. In the first batch are the strings using the existing Pontoon UI, shared with other file formats.

Simple strings

title = About Localization

Multiline strings

feedbackUninstallCopy =
    Your participation in Firefox Test Pilot means
    a lot! Please check out our other experiments,
    and stay tuned for more to come!

Strings with a single attribute

emailOptInInput =
    .placeholder = email goes here :)

Next up are Fluent strings, which have attributes in addition to the value or multiple attributes. A separate text input is available for each of them.

Strings with a value and an attribute

shotIndexNoExpirationSymbol = ∞
    .title = This shot does not expire

Since Fluent lets you combine labels and access keys within the same string, we designed a dedicated UI that lists all the allowed access keys as you translate the label. You can also type in a custom key if none of the candidates meets your criteria.

Access keys

file-menu =
    .label = File
    .accesskey = F
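
The candidate list for that access-key UI can be sketched as follows (a simplified assumption about how candidates are derived from the label):

```python
# Suggest access-key candidates from the translated label:
# the unique letters of the label, in order of appearance.

def access_key_candidates(label: str) -> list[str]:
    seen: list[str] = []
    for ch in label:
        if ch.isalpha() and ch.lower() not in seen:
            seen.append(ch.lower())
    return seen

access_key_candidates("File")  # ["f", "i", "l", "e"]
```

A custom key typed by the translator would simply bypass this list, as the UI described above allows.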

Terms are similar to regular messages but they can only be used as references in other messages. Hence, they are best used to define vocabulary and glossary items which can be used consistently across the localization of the entire product. Terms can have multiple variants, and Pontoon makes it easy to add/remove them.


# Translated string
-brandShortName = { $case ->
    [nominative] Firefox Račun
   *[genitive] Firefox Računa
}

Selectors are a powerful feature of Fluent. They are used when there’s a need for multiple variants of the string based on an external variable, in this case PLATFORM().


platform = { PLATFORM() ->
    [win] Options
   *[other] Preferences
}

A common use case of selectors are plurals. When you translate a pluralized source string, Pontoon renders empty CLDR plural categories used in the target locale, each accompanied by an example number.


delete-all-message = { $num ->
    [one] Delete this download?
   *[other] Delete { $num } downloads?
}
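
The category lookup behind the example above can be hand-written for English (real implementations use CLDR plural rule data for each target locale):

```python
# English CLDR plural rule, hand-written: "one" for exactly 1,
# "other" for everything else. Pontoon renders one input field
# per category of the target locale.

def plural_category_en(n: int) -> str:
    return "one" if n == 1 else "other"

TRANSLATIONS = {
    "one": "Delete this download?",
    "other": "Delete {num} downloads?",
}

def delete_all_message(num: int) -> str:
    return TRANSLATIONS[plural_category_en(num)].format(num=num)

delete_all_message(3)  # "Delete 3 downloads?"
```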

Selectors can also be used in attributes. Pontoon hides most of the complexity.

Selectors in attributes

download-choose-folder =
    .label = { PLATFORM() ->
        [macos] Choose…
       *[other] Browse…
    }
    .accesskey = { PLATFORM() ->
        [macos] e
       *[other] o
    }

If a value or an attribute contains multiple selectors, Pontoon indents them for better readability.

Strings with multiple selectors

selector-multi = There { $num ->
    [one] is one email
   *[other] are many emails
} for { $gender ->
   *[masculine] him
    [feminine] her
}

Strings with nested selectors are not supported in Fluent editor, in which case Pontoon falls back to the source editor. The source editor is always available, and you can switch to it by clicking on the FTL button.

Unsupported strings

Next steps

The current state of the Fluent UI in Pontoon allows us to use Fluent to localize Firefox and various other projects. We don’t want to stop here, though. Instead, we’re looking to improve the Fluent experience further and make it the best among available localization formats. You can track the list of Fluent-related Pontoon bugs in Bugzilla. Some of the highlights include:

  • Ability to change default variant
  • Ability to add/remove variants
  • Ability to add selectors to simple messages
  • Making machinery usable with more complex strings
  • Prefilling complex message skeleton in source editor

Wladimir PalantBogus security mechanisms: Encrypting localhost traffic

Nowadays it is common for locally installed applications to also offer installing browser extensions that will take care of browser integration. Securing the communication between extensions and the application is not entirely trivial, something that Logitech had to discover recently for example. I’ve also found a bunch of applications with security issues in this area. In this context, one has to appreciate RememBear password manager going to great lengths to secure this communication channel. Unfortunately, while their approach isn’t strictly wrong, it seems to be based on a wrong threat assessment and ends up investing far more effort into this than necessary.

The approach

It is pretty typical for browser extensions and applications to communicate via WebSockets. In the case of RememBear, the application listens on port 8734, so the extension creates a connection to ws://localhost:8734. After that, messages can be exchanged in both directions. So far it’s all pretty typical. The untypical part is RememBear using TLS to communicate on top of this unencrypted connection.

So the browser extension contains a complete TLS client implemented in JavaScript. It generates a client key, and on first connection the user has to confirm that this client key is allowed to connect to the application. It also remembers the server’s public key and will reject connecting to another server.

Why use their own TLS implementation instead of letting the browser establish an encrypted connection? Because the browser would verify TLS certificates, whereas the scheme here is based on self-signed certificates. Also, browsers never managed to solve authentication via client keys without degrading the user experience.

The supposed threat

Now I could maybe find flaws in the forge TLS client they are using. Or criticize them for using deprecated 1024-bit RSA keys. But that would be pointless, because the whole construct addresses the wrong threat.

According to RememBear, the threat here is a malicious application disguising as RememBear app towards the extension. So they encrypt the communication in order to protect the extension, making sure that it only talks to the real application.

Now the sad reality of password managers is: once there is a malicious application on the computer, you’ve lost already. Malware does things like logging keyboard input and should be able to steal your master password this way. Even if malware is “merely” running with the user’s privileges, it can go as far as letting a trojanized version of RememBear run instead of the original.

But hey, isn’t all this setting the bar higher? Like, messing with local communication would have been easier than installing a modified application? One could accept this line of argumentation, of course. The trouble is: messing with that WebSocket connection is still trivial. If you check your Firefox profile directory, there will be a file called browser-extension-data/ff@remembear.com/storage.js. Part of this file: the extension’s client key and the RememBear application’s public key, in plain text. Malware can easily read these out (if it wants to connect to the application) or modify them (if it wants to fake the application towards the extension). With Chrome the data format is somewhat more complicated but equally unprotected.

Rusty lock not attached to anything
Image by Joybot

The actual threat

It’s weird how the focus is on protecting the browser extension. Yet the browser extension has no data that a malicious application could steal. If anything, malware might be able to trick the extension into compromising websites. Usually, however, malware manages to do this on its own, without help.

In fact, the by far more interesting target is the RememBear application, the one with the password data. Yet protecting it against malware is a lost cause: whatever a browser extension running in the browser sandbox can do, malware can easily do as well.

The realistic threat here is actually regular websites. You see, the same-origin policy isn’t enforced for WebSockets. Any website can establish a connection to any WebSocket server. It’s up to the WebSocket server to check the Origin HTTP header and reject connections from untrusted origins. If the connection is being established by a browser extension however, the different browsers are very inconsistent about setting the Origin header, so recognizing legitimate connections is difficult.
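
What such an Origin check comes down to can be sketched in a few lines. This is an illustrative sketch only (the function name and the extension origins are made up, this is not RememBear’s code):

```rust
/// Decide whether a WebSocket handshake should be accepted, based on
/// the Origin header. `origin` is None when the header is absent.
fn is_trusted_origin(origin: Option<&str>) -> bool {
    // Hypothetical extension origins; real ones are per-install IDs.
    const ALLOWED: &[&str] = &[
        "chrome-extension://abcdefghijklmnopabcdefghijklmnop",
        "moz-extension://12345678-1234-1234-1234-123456789abc",
    ];
    match origin {
        Some(o) => ALLOWED.contains(&o),
        // An absent Origin cannot be told apart from a non-browser
        // client, so a cautious server rejects it.
        None => false,
    }
}

fn main() {
    // An ordinary website always sends its own https://... origin.
    assert!(!is_trusted_origin(Some("https://evil.example")));
    assert!(!is_trusted_origin(None));
    assert!(is_trusted_origin(Some(
        "chrome-extension://abcdefghijklmnopabcdefghijklmnop"
    )));
}
```

The hard part, as noted above, is not this check but knowing which origins to allow when browsers set the header inconsistently for extensions.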

In the worst case, the WebSocket server doesn’t do any checks and accepts any connection. That was the case with the Logitech application mentioned earlier: it could be reconfigured by any website.

Properly protecting applications

If the usual mechanisms to ensure connection integrity don’t work, what do you do? You can establish a shared secret between the extension and the application. I’ve seen extensions requiring you to copy a secret key from the application into the extension. Another option would be the extension generating a secret and requiring users to approve it in the application, much like RememBear does it right now with the extension’s client key. Add that shared secret to every request made by the extension and the application will be able to identify it as legitimate.
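
The verification step on the application side can be sketched as follows, assuming pairing has already produced the same secret on both sides (the function name and the constant-time comparison are illustrative, not RememBear’s implementation):

```rust
/// Compare the secret attached to a request with the secret
/// established during pairing. The bitwise fold avoids returning
/// early, so a plain == comparison cannot leak the secret's prefix
/// through timing differences.
fn secret_matches(presented: &[u8], expected: &[u8]) -> bool {
    if presented.len() != expected.len() {
        return false;
    }
    presented
        .iter()
        .zip(expected)
        .fold(0u8, |acc, (a, b)| acc | (a ^ b))
        == 0
}

fn main() {
    let shared = b"k3yfrompairing";
    assert!(secret_matches(b"k3yfrompairing", shared));
    assert!(!secret_matches(b"wrongsecret!!!", shared));
    assert!(!secret_matches(b"short", shared));
}
```

Any request lacking a matching secret is simply dropped; websites never learn the secret, so their connection attempts fail this check.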

Wait, no encryption? After all, somebody called out 1Password for sending passwords in cleartext on a localhost connection (article has been removed since). That’s your typical bogus vulnerability report however. Data sent to localhost never leaves your computer. It can only be seen on your computer and only with administrator privileges. So we would again be either protecting against malware or a user with administrator privileges. Both could easily log your master password when you enter it and decrypt your password database, “protecting” localhost traffic wouldn’t achieve anything.

But there is actually an even easier solution. Using WebSockets is unnecessary; browsers implement the native messaging API, which is meant specifically to let extensions and their applications communicate. Unlike WebSockets, this API cannot be used by websites, so the application can be certain that any request coming in originates from the browser extension.
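
Native messaging frames each message as a 32-bit length prefix in native byte order followed by that many bytes of UTF-8-encoded JSON on the host’s stdin. A sketch of the receiving side (the payload is made up; a real host would parse the JSON and act on it):

```rust
use std::io::{self, Read};

/// Read one native-messaging message: a 4-byte length prefix in
/// native byte order, followed by that many bytes of UTF-8 JSON.
fn read_message(input: &mut impl Read) -> io::Result<String> {
    let mut len_buf = [0u8; 4];
    input.read_exact(&mut len_buf)?;
    let len = u32::from_ne_bytes(len_buf) as usize;
    let mut body = vec![0u8; len];
    input.read_exact(&mut body)?;
    String::from_utf8(body)
        .map_err(|e| io::Error::new(io::ErrorKind::InvalidData, e))
}

fn main() -> io::Result<()> {
    // Simulate a message from the browser with an in-memory reader;
    // a real host would read from io::stdin() instead.
    let payload = br#"{"command":"get-logins"}"#;
    let mut framed = (payload.len() as u32).to_ne_bytes().to_vec();
    framed.extend_from_slice(payload);
    let msg = read_message(&mut framed.as_slice())?;
    assert_eq!(msg, r#"{"command":"get-logins"}"#);
    Ok(())
}
```

Because the browser itself spawns the host process and owns its stdin/stdout, no website can inject messages into this channel.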

Conclusion and outlook

There is no reasonable way to protect a password manager against malware. With some luck, the malware functionality will be too generic to compromise your application. Once you expect it to have code targeting your specific application, there is really nothing you can do any more. Any protective measures on your end are easily circumvented.

Security design needs to be guided by a realistic threat assessment. Here, by far the most important threat is communication channels being taken over by a malicious website. This threat is easily addressed by authenticating the client via a shared secret, or simply using native messaging which doesn’t require additional authentication. Everything else is merely security theater that doesn’t add any value.

This isn’t the only scenario where bogus vulnerability reports prompted an overreaction however. Eventually, I want to deconstruct research scolding password managers for leaving passwords in memory when locked. Here as well, a threat scenario has been blown out of proportion.

The Rust Programming Language BlogAnnouncing Rust 1.34.0

The Rust team is happy to announce a new version of Rust, 1.34.0. Rust is a programming language that is empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, getting Rust 1.34.0 is as easy as:

$ rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website.

What's in 1.34.0 stable

The largest feature in this release is the introduction of alternative cargo registries. The release also includes support for ? in documentation tests, some improvements for #[attribute(..)]s, as well as the stabilization of TryFrom. Read on for a few highlights, or see the detailed release notes for additional information.

Alternative cargo registries

Since before 1.0, Rust has had a public crate registry, crates.io. People publish crates with cargo publish and it's easy to include these crates in the [dependencies] section of your Cargo.toml.

However, not everyone wants to publish their crates to crates.io. People maintaining proprietary/closed-source code cannot use crates.io, and instead are forced to use git or path dependencies. This is usually fine for small projects, but if you have a lot of closed-source crates within a large organization, you lose the benefit of the versioning support that crates.io has.

With this release, Cargo gains support for alternate registries. These registries coexist with crates.io, so you can write software that depends on crates from both crates.io and your custom registry. Crates on crates.io cannot however depend on external registries.

To use an alternate registry, you must add these lines to your .cargo/config. This file can be in your home directory (~/.cargo/config) or relative to the package directory.

[registries]
my-registry = { index = "https://my-intranet:8080/git/index" }

Depending on a crate from an alternate registry is easy. When specifying dependencies in your Cargo.toml, use the registry key to let Cargo know that you wish to fetch the crate from the alternate registry:

other-crate = { version = "1.0", registry = "my-registry" }

As a crate author, if you wish to publish your crate to an alternate registry, you first need to save the authentication token into ~/.cargo/credentials with the cargo login command:

cargo login --registry=my-registry

You can then use the --registry flag to indicate which registry to use when publishing:

cargo publish --registry=my-registry

There is documentation on how to run your own registry.

? in documentation tests

RFC 1937 proposed adding support for using the ? operator in fn main(), #[test] functions, and doctests, allowing them to return Option<T> or Result<T, E>, with error values causing a nonzero exit code in the case of fn main(), and a test failure in the case of the tests.

Support in fn main() and #[test] was implemented many releases ago. However, the support within documentation tests was limited to doctests that have an explicit fn main().

In this release, full support for ? in doctests has been added. Now, you can write this in your documentation tests:

/// ```rust
/// use std::io;
/// let mut input = String::new();
/// io::stdin().read_line(&mut input)?;
/// # Ok::<(), io::Error>(())
/// ```
fn my_func() {}

You still have to specify the error type being used at the bottom of the documentation test.

Custom attributes accept arbitrary token streams

Procedural macros in Rust can define custom attributes that they consume. Until now, such attributes were restricted to being trees of paths and literals according to a specific syntax, like:

#[foo = "bar"]
#[foo = 0]
#[foo(bar = true)]
#[foo(bar, baz(quux, foo = "bar"))]

Unlike procedural macros, these helper attributes could not accept arbitrary token streams in delimiters, so you could not write #[range(0..10)] or #[bound(T: MyTrait)]. Procedural macro crates would instead use strings for specifying syntaxes like this, e.g. #[range("0..10")].

With this Rust release, custom attributes #[attr($tokens)] now accept arbitrary token streams in $tokens, bringing them on par with macros. If you're the author of a procedural macro crate, please check if your custom attributes have unnecessary strings in their syntax and if they can be better expressed with token streams.

TryFrom and TryInto

The TryFrom and TryInto traits were stabilized to allow fallible type conversions.

For example, the from_be_bytes and related methods on integer types take arrays, but data is often read in via slices. Converting between slices and arrays is tedious to do manually. With the new traits, it can be done inline with .try_into().

let num = u32::from_be_bytes(slice.try_into()?);

For conversions that cannot fail, such as u8 to u32, the Infallible type was added. This also permits a blanket implementation of TryFrom for all existing From implementations. In the future, we hope to turn Infallible into an alias for the ! (never) type.
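
For example, the same trait covers both directions: the narrowing conversion returns a Result that must be handled, while the widening one can never fail thanks to the blanket impl:

```rust
use std::convert::TryFrom;

fn main() {
    // Fallible: 300 does not fit into a u8.
    assert!(u8::try_from(300i32).is_err());
    assert_eq!(u8::try_from(200i32).unwrap(), 200u8);

    // Infallible: every u8 fits into a u32, so the blanket impl
    // derived from From<u8> for u32 can never return an error.
    assert_eq!(u32::try_from(5u8).unwrap(), 5u32);
}
```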

fn before_exec deprecated in favor of unsafe fn pre_exec

On Unix-like systems, the function CommandExt::before_exec allows you to schedule a closure to be run before exec is invoked.

The closure provided will be run in the context of the child process after a fork. This means that resources, such as file descriptors and memory-mapped regions, may get duplicated. In other words, you can now copy a value of a non-Copy type into a different process while retaining the original in the parent. This makes it possible to cause undefined behavior and break libraries assuming non-duplication.

The function before_exec should therefore have been marked as unsafe. In this release of Rust, we have deprecated fn before_exec in favor of the unsafe fn pre_exec. When calling CommandExt::pre_exec, it is your responsibility to make sure that the closure does not violate library invariants by making invalid use of these duplicates. If you provide a library that is in a similar situation as before_exec, consider deprecating and providing an unsafe alternative as well.
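
A minimal Unix-only sketch of the new API; the closure body here is a placeholder for real setup work such as dropping privileges or adjusting signal dispositions:

```rust
use std::io;
use std::os::unix::process::CommandExt;
use std::process::Command;

fn main() -> io::Result<()> {
    let mut cmd = Command::new("true");
    // The closure runs in the child between fork and exec. The
    // parent's threads, locks and file descriptors have all been
    // duplicated at that point, hence the unsafe marker: it is on
    // the caller to avoid invalid use of those duplicates.
    unsafe {
        cmd.pre_exec(|| {
            // placeholder: e.g. libc::setuid(...) would go here
            Ok(())
        });
    }
    let status = cmd.status()?;
    assert!(status.success());
    Ok(())
}
```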

Library stabilizations

In 1.34.0, the set of stable atomic integer types was expanded, with signed and unsigned variants from 8 (AtomicU8) to 64 bits now available.
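
For example, an AtomicU8 can now be shared across threads as a one-byte counter:

```rust
use std::sync::atomic::{AtomicU8, Ordering};
use std::sync::Arc;
use std::thread;

fn main() {
    // A shared 8-bit counter, incremented from several threads.
    let counter = Arc::new(AtomicU8::new(0));
    let handles: Vec<_> = (0..4)
        .map(|_| {
            let c = Arc::clone(&counter);
            thread::spawn(move || {
                c.fetch_add(1, Ordering::SeqCst);
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    assert_eq!(counter.load(Ordering::SeqCst), 4);
}
```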

Previously, non-zero unsigned integer types, e.g. NonZeroU8, were stabilized. This gave Option<NonZeroU8> the same size as u8. With this Rust release, signed versions, e.g. NonZeroI8, have been stabilized.
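
The size guarantee and the zero check are easy to verify:

```rust
use std::mem::size_of;
use std::num::{NonZeroI8, NonZeroU8};

fn main() {
    // The forbidden zero value is reused to represent Option's None,
    // so wrapping in Option costs no extra space.
    assert_eq!(size_of::<Option<NonZeroU8>>(), size_of::<u8>());
    assert_eq!(size_of::<Option<NonZeroI8>>(), size_of::<i8>());

    // Construction fails for zero and succeeds otherwise.
    assert!(NonZeroI8::new(0).is_none());
    assert_eq!(NonZeroI8::new(-5).unwrap().get(), -5);
}
```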

The functions iter::from_fn and iter::successors have been stabilized. The former allows you to construct an iterator from FnMut() -> Option<T>. To pop elements from a vector iteratively, you can now write from_fn(|| vec.pop()). Meanwhile, the latter creates a new iterator where each successive item is computed based on the preceding one.
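
Both in action:

```rust
use std::iter::{from_fn, successors};

fn main() {
    // from_fn: drain a vector from the back until pop() returns None.
    let mut stack = vec![1, 2, 3];
    let drained: Vec<i32> = from_fn(|| stack.pop()).collect();
    assert_eq!(drained, [3, 2, 1]);

    // successors: each item is computed from the previous one; the
    // iterator ends when the closure returns None (here, when the
    // multiplication would overflow a u8).
    let powers: Vec<u8> =
        successors(Some(1u8), |&x| x.checked_mul(2)).collect();
    assert_eq!(powers, [1, 2, 4, 8, 16, 32, 64, 128]);
}
```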

Additionally, a number of other APIs have become stable; see the detailed release notes for the full list and more details.

The Firefox FrontierFirst photo of a black hole or cosmic cousin of the Firefox logo?

A photo of a small, fiery circular shape floating in blackness will go down in history as the first photo of a black hole. It might not look like much, … Read more

The post First photo of a black hole or cosmic cousin of the Firefox logo? appeared first on The Firefox Frontier.

Mozilla GFXWebRender newsletter #43

WebRender is a GPU-based 2D rendering engine for the web written in Rust, currently powering Mozilla’s research web browser Servo and on its way to becoming Firefox’s rendering engine.

A week in Toronto

The gfx team got together in Mozilla’s Toronto office last week. These gatherings are very valuable since the team is spread over many timezones (in no particular order, there are graphics folks in Canada, Japan, France, various parts of the US, Germany, England, Australia and New Zealand).

It was an intense week, filled with technical discussions and planning. I’ll go over some of them below:

New render task graph

Nical continues investigating a more powerful and expressive render task graph for WebRender. The work is happening in the toy render-graph repository, and during the week a lot of task scheduling and texture memory allocation strategies were discussed. It’s an interesting and open problem space where various trade-offs will play out differently on different platforms.
One of the things that came out of these discussions is the need for tools to understand the effects of the different graph scheduling and allocation strategies, and to help debug their effects and bugs once we get to integrating a new system into WebRender. As a result, Nical started building a standalone command-line interface for the experimental task graph that generates SVG visualizations.


Battery life

So far our experimentation has shown that energy consumption in WebRender (as well as in Firefox’s painting and compositing architecture) is strongly correlated with the amount of pixels that are manipulated. In other words, it is dominated by memory bandwidth, which is stressed by high screen resolutions. This is perhaps no surprise for someone who has worked with graphics rendering systems, but what’s nice about this observation is that it gives us a very simple metric to measure and to build optimizations and heuristics around.

Avenues for improvement in power consumption therefore include Doug’s work on document splitting and Markus’s work on better integration with macOS’s compositing window manager via the Core Animation API. No need to tell the window manager to redraw the whole window when only the upper part changed (the document containing the browser’s UI, for example).
The browser can break the window up into several surfaces and let the window manager composite them together which saves some work and potentially substantial amounts of memory bandwidth.
On Windows the same can be achieved with the Direct Composition API. On Linux with Wayland we could use sub-surfaces, although in our preliminary investigation we couldn’t find a reliable/portable way to obtain the composited content for the whole browser window, which is important for our testing infrastructure and other browser functionalities. Only recently did Android introduce similar functionality with the SurfaceControl API.

We made some short and long term plans around the theme of better integration with the various window manager APIs. This is an area where I hope to see WebRender improve a lot this year.


Project Fission

Ryan gave us an overview of the architecture and progress of project Fission, which he has been involved with for some time. The goal of the project is to further isolate content from different origins by dramatically increasing the number of content processes. There are several challenging aspects to this. Reducing the per-process memory overhead is an important one, as we really want to remain better than Chrome in overall memory usage. WebRender actually helps in this area as it moves most of the rendering out of content processes. There are also fun (with some notion of “fun”) edge cases to deal with, such as a page from domain A nested into an iframe of domain B nested into an iframe of domain A, and what this type of sandwichery implies in terms of processes, communication, and what should happen when a CSS filter is applied to that kind of stack.

Fun stuff.

WebRender and software rendering

There will always be hardware and driver configurations that are too old, too buggy, or both, for us to support with WebRender using the GPU. For some time Firefox will fall back to the pre-WebRender architecture, but we’ll eventually want to get to a point where we can phase out this legacy code while still working for all of our users. So WebRender needs some way to work when the GPU can’t be used.

We discussed several avenues for this, one of which is to leverage WebRender’s OpenGL implementation with software emulation such as SwiftShader. It’s unclear at this point whether or not we’ll be able to get acceptable performance this way, but Glenn’s early experiments show that there is a lot of low-hanging fruit in optimizing such a software implementation, hopefully to the point where it provides a good user experience.

Other options include a dedicated CPU rendering backend, which we could safely expect to reach at least Firefox’s current software rendering performance, at the expense of more engineering resources.

Removing code

As WebRender replaces Firefox’s current rendering architecture, we’ll be able to remove large amounts of fairly complex code in the gfx and layout modules, which is an exciting prospect. We discussed how much we can simplify and at which stages of WebRender’s progressive rollout.

WebGPU status

Kvark gave us an update on the status of the WebGPU specification effort. Things are coming along in a lot of areas, although the binding model and shader format are still very much under debate. Apple proposes to introduce a brand new shading language called WebHLSL, while Mozilla and Google want a bytecode format based on a safe subset of SPIR-V (the Khronos Group’s shader bytecode standard used in Vulkan, OpenCL, and OpenGL through an extension). Having both a bytecode and a high-level language is also on the table, although fierce debates continue around the merits of introducing a new language instead of using and/or extending GLSL, already used in WebGL.

From what I have seen of the already-agreed-upon areas so far, WebGPU’s specification is shaping up to be a very nice and modern API, leaning towards Metal in terms of level of abstraction. I’m hopeful that it’ll eventually be extended into providing some of Vulkan’s lower level controls for performance.

Display list improvements

Display lists can be quite large and costly to process. Gankro is working on compression opportunities to reduce the IPC overhead, and Glenn presented plans to move some of the interning infrastructure to the API endpoints so as to send deltas instead of the entire display lists at each layout update, further reducing IPC cost.

Debugging workshops

Kvark presented his approach to investigating WebRender bugs and shared clever tricks around manipulating and simplifying frame recordings.

Glenn presented his Android development and debugging workflow and shared his observations about WebRender’s performance characteristics on mobile GPUs so far.

To be continued

We’ve covered about half of the week’s topics and it’s already a lot for a single newsletter. We will go over the rest in the next episode.

Enabling WebRender in Firefox Nightly

In about:config, enable the pref gfx.webrender.all and restart the browser.

Reporting bugs

The best place to report bugs related to WebRender in Firefox is the Graphics :: WebRender component in bugzilla.

Note that it is possible to log in with a github account.

Using WebRender in a Rust project

WebRender is available as a standalone crate on crates.io (documentation)

Mozilla Open Policy & Advocacy BlogUS House Votes to Save the Internet

Today, the House took a firm stand on behalf of internet users across the country. By passing the Save the Internet Act, members have made it clear that Americans have a fundamental right to access the open internet. Without these protections in place, big corporations like Comcast, Verizon, and AT&T could block, slow, or levy tolls on content at the expense of users and small businesses. We hope that the Senate will recognize the need for strong net neutrality protections and pass this legislation into law. In the meantime, we will continue to fight in the courts as the DC Circuit considers Mozilla v. FCC, our effort to restore essential net neutrality protections for consumers through litigation.

The post US House Votes to Save the Internet appeared first on Open Policy & Advocacy.

Mozilla Open Policy & Advocacy BlogWhat we think about the UK government’s ‘Online Harms’ white paper

The UK government has just outlined its plans for sweeping new laws aimed at tackling illegal and harmful content and activity online, described by the government as ‘the toughest internet laws in the world’. While the UK proposal has some promising ideas for what the next generation of content regulation should look like, there are several aspects that would have a worrying impact on individuals’ rights and the competitive ecosystem. Here we provide our preliminary assessment of the proposal, and offer some guidance on how it could be improved.

According to the UK white paper, companies of all sizes would be under a ‘duty of care’ to protect their users from a broad class of so-called ‘online harms’, and a new independent regulator would be established to police them. The proposal responds to legitimate public policy concerns around how platforms deal with illegal and harmful content online, as well as the general public demand for tech companies to ‘do more’. We understand that in many respects the current regulatory paradigm is not fit for purpose, and we support an exploration of what codified content ‘responsibility’ might look like.

The UK government’s proposed regulatory architecture (a duty of care overseen by an independent regulator) has some promising potential. It could shift focus to regulating systems and instilling procedural accountability, rather than a focus on individual pieces of content and the liability/no liability binary. If implemented properly, its principles-based approach could allow for scalability, fine tailoring, and future-proofing; features that are presently lacking in the European content regulation paradigm (see for instance the EU Copyright directive and the EU Terrorist Content regulation).

Yet while the high-level architecture has promise, the UK government’s vision for how this new regulatory model could be realised in practice contains serious flaws. These must be addressed if this proposal is to reduce rather than encourage online harms.

  • Scope issues: The duty of care would apply to an extremely broad class of online companies. While this is not necessarily problematic if the approach is a principles-based one, there is a risk that smaller companies will be disproportionately burdened if the Codes of Practice are developed with only the tech incumbents in mind. In addition, the scope would include both hosting services and messaging services, despite the fact that they have radically different technical structures.
  • A conflation of terms: The duty of care would apply not only to a range of types of content – from illegal content like child abuse material to legal but harmful material like disinformation – but also harmful activities – from cyber bullying, to immigration crime, to ‘intimidation’. This conflation of content/activities and legal/harmful is concerning, given that many content-related ‘activities’ are almost impossible to proactively identify, and there is rarely a shared understanding of what ‘harmful’ means in different contexts.
  • The role of the independent regulator: Given that this regulator will have unprecedented power to determine how online content control works, it is worrying that the proposal doesn’t spell out safeguards that will be put in place to ensure its Codes of Practice are rights-protective and workable for different types of companies. In addition, it doesn’t give any clarity as to how the development of the codes will be truly co-regulatory.

Yet as we noted earlier, the UK government’s approach holds some promise, and many of the above issues could be addressed if the government is willing. There’s some crucial changes that we’ll be encouraging the UK government to adopt when it brings forward the relevant legislation. These relate to:

  • The legal status: There is a whole corpus of jurisprudence around duties of care and negligence law that has developed over centuries. It is therefore essential that the UK government clarifies how this proposal would interact with and relate to existing duties of care.
  • The definitions: There needs to be much more clarity on what is meant by ‘harmful’ content. Similarly, there must be much more clarity on what is meant by ‘activities’ and the duty of care must acknowledge that each of these categories of ‘online harms’ requires a different approach.
  • The independent regulator: The governance structure must be truly co-regulatory, to ensure the measures are workable for companies and protective of individuals’ rights.

We look forward to engaging with the UK government as it enters into a consultation period on the new white paper. Our approach to the UK government will mirror the one that we are taking vis-à-vis the EU – that is, building out a vision for a new content regulation paradigm that addresses lawmakers’ legitimate concerns, but in a way which is rights and ecosystem protective. Stay tuned for our consultation response in late June.


The post What we think about the UK government’s ‘Online Harms’ white paper appeared first on Open Policy & Advocacy.

Hacks.Mozilla.OrgTeaching machines to triage Firefox bugs

Many bugs, not enough triage

Mozilla receives hundreds of bug reports and feature requests from Firefox users every day. Getting bugs to the right eyes as soon as possible is essential in order to fix them quickly. This is where bug triage comes in: until a developer knows a bug exists, they won’t be able to fix it.

Given the large number of bugs filed, it is unworkable to make each developer look at every bug (at the time of writing, we’d reached bug number 1536796!). This is why, on Bugzilla, we group bugs by product (e.g. Firefox, Firefox for Android, Thunderbird, etc.) and component (a subset of a product, e.g. Firefox::PDF Viewer).

Historically, the product/component assignment has been mostly done manually by volunteers and some developers. Unfortunately, this process fails to scale, and it is effort that would be better spent elsewhere.

Introducing BugBug

To help get bugs in front of the right Firefox engineers quickly, we developed BugBug, a machine learning tool that automatically assigns a product and component for each new untriaged bug. By presenting new bugs quickly to triage owners, we hope to decrease the turnaround time to fix new issues. The tool is based on another technique that we have implemented recently, to differentiate between bug reports and feature requests. (You can read more about this at https://marco-c.github.io/2019/01/18/bugbug.html).

High-level architecture of BugBug training and operation

Training a model

We have a large training set of data for this model: two decades worth of bugs which have been reviewed by Mozillians and assigned to products and components.

Clearly we can’t use the bug data as-is: any change to the bug after triage has been completed would be inaccessible to the tool during real operation. So, we “roll back” the bug to the time it was originally filed. (This sounds easy, but in practice there are a lot of corner cases to take into account!)

Also, although we have thousands of components, we only really care about a subset of these. In the past ~2 years, out of 396 components, only 225 components had more than 49 bugs filed. Thus, we restrict the tool to only look at components with a number of bugs that is at least 1% of the number of bugs of the largest component.

We use features collected from the title, the first comment, and the keywords/flags associated with each bug to train an XGBoost model.

High-level overview of the BugBug model

During operation, we only perform the assignment when the model is confident enough of its decision: at the moment, we are using a 60% confidence threshold. With this threshold, we are able to assign the right component with a very low false positive ratio (> 80% precision, measured using a validation set of bugs that were triaged between December 2018 and March 2019).
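
The thresholding step itself is simple. A sketch in Rust (illustrative only: BugBug is a Python project, and the function, component names, and scores here are made up):

```rust
/// Pick a component only when the model's top probability clears the
/// confidence threshold; otherwise return None so the bug is left
/// for human triage.
fn assign_component<'a>(
    scores: &[(&'a str, f64)],
    threshold: f64,
) -> Option<&'a str> {
    scores
        .iter()
        .cloned()
        // Find the component with the highest predicted probability.
        .max_by(|a, b| a.1.partial_cmp(&b.1).unwrap())
        // Discard the prediction if the model is not confident enough.
        .filter(|&(_, p)| p >= threshold)
        .map(|(name, _)| name)
}

fn main() {
    let confident = [("Firefox::PDF Viewer", 0.72), ("Core::Graphics", 0.18)];
    assert_eq!(assign_component(&confident, 0.6), Some("Firefox::PDF Viewer"));

    // No single component clears the 60% bar: leave it untriaged.
    let unsure = [("Firefox::PDF Viewer", 0.41), ("Core::Graphics", 0.39)];
    assert_eq!(assign_component(&unsure, 0.6), None);
}
```

Raising the threshold trades coverage (fewer bugs auto-assigned) for precision (fewer wrong assignments), which is exactly the trade-off described above.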

BugBug’s results

Training the model on 2+ years of data (around 100,000 bugs) takes ~40 minutes on a 6-core machine with 32 GB of RAM. The evaluation time is in the order of milliseconds. Given that the tool does not pause and is always ready to act, the tool’s assignment speed is much faster than manual assignment (which, on average, takes around a week).

Since we deployed BugBug in production at the end of February 2019, we’ve triaged around 350 bugs. The median time for a developer to act on triaged bugs is 2 days. (9 days is the average time to act, but it’s only 4 days when we remove outliers.)

BugBug in action, performing changes on a newly opened bug

Plans for the future

We have plans to use machine learning to assist in other software development processes, for example:

  • Identifying duplicate bugs. In the case of bugs which crash Firefox, users will typically report the same bug several times, in slightly different ways. These duplicates are eventually found by the triage process, and resolved, but finding duplicate bugs as quickly as possible provides more information for developers trying to diagnose a crash.
  • Providing additional automated help for developers, such as detecting bugs in which “steps to reproduce” are missing and asking reporters to provide them, or detecting the type of bug (e.g. performance, memory usage, crash, and so on).
  • Detecting bugs that might be important for a given Firefox release as early as possible.

Right now our tool only assigns components for Firefox-related products. We would like to extend BugBug to automatically assign components for other Mozilla products.

We also encourage other organizations to adopt BugBug. If you use Bugzilla, adopting it will be very easy; otherwise, we’ll need to add support for your bug tracking system. File an issue on https://github.com/mozilla/bugbug and we’ll figure it out. We are willing to help!

The post Teaching machines to triage Firefox bugs appeared first on Mozilla Hacks - the Web developer blog.

Mozilla Security BlogDNS-over-HTTPS Policy Requirements for Resolvers

Over the past few months, we’ve been experimenting with DNS-over-HTTPS (DoH), a protocol which uses encryption to protect DNS requests and responses, with the goal of deploying DoH by default for our users. Our plan is to select a set of Trusted Recursive Resolvers (TRRs) that we will use for DoH resolution in Firefox. Those resolvers will be required to conform to a specific set of policies that put privacy first.

To that end, today we are releasing a list of DoH requirements, available on the Mozilla wiki, that we will use to vet potential resolvers for Firefox. The requirements focus on three areas: 1) limiting data collection and retention from the resolver, 2) ensuring transparency for any data retention that does occur, and 3) limiting any potential use of the resolver to block access or modify content. This is intended to cover resolvers that Firefox will offer by default and resolvers that Firefox might discover in the local network.

In publishing this policy, our goal is to encourage adherence to practices for DNS that respect modern standards for privacy and security, not just for our potential DoH partners, but for all DNS resolvers.

The post DNS-over-HTTPS Policy Requirements for Resolvers appeared first on Mozilla Security Blog.

Mozilla Future Releases BlogProtections Against Fingerprinting and Cryptocurrency Mining Available in Firefox Nightly and Beta

At Mozilla, we have been working hard to protect you from threats and annoyances on the web, so you can live your online life with less to worry about. Last year, we told you about adapting our approach to anti-tracking given the added importance of keeping people’s information on the web private in today’s climate. We talked about blocking tracking while also offering a clear set of controls to give our users more choice over what information they share with sites. One of the three key initiatives we listed was mitigating harmful practices like fingerprinting and cryptomining. We have added a feature to block fingerprinting and cryptomining in Firefox Nightly as an option for users to turn on.

What are fingerprinting and cryptomining scripts?

A variety of popular “fingerprinting” scripts are invisibly embedded on many web pages, harvesting a snapshot of your computer’s configuration to build a digital fingerprint that can be used to track you across the web, even if you clear your cookies. Fingerprinting violates Firefox’s anti-tracking policy.

Another category of scripts called “cryptominers” run costly operations on your web browser without your knowledge or consent, using the power of your computer’s CPU to generate cryptocurrency for someone else’s benefit. These scripts slow down your computer, drain your battery and rack up your electric bill.

How will Firefox block these harmful scripts?

To combat these threats, we are pleased to announce new protections against fingerprinters and cryptominers. In collaboration with Disconnect, we have compiled lists of domains that serve fingerprinting and cryptomining scripts. Now in the latest Firefox Nightly and Beta versions, we give users the option to block both kinds of scripts as part of our Content Blocking suite of protections.

In Firefox Nightly 68 and Beta 67, these new protections against fingerprinting and cryptomining are currently disabled by default. You can enable them with the following steps:


  1. Click the Firefox main menu:
  2. Choose “Preferences”:
  3. Click on the “Privacy and Security” tab at left:
  4. Under “Content Blocking”, click on “Custom”:
  5. Finally, check “Cryptominers” and “Fingerprinters” so that they are both blocked:

Once enabled, Firefox will block any scripts that Disconnect has identified as participating in cryptomining or fingerprinting. (These protections will be turned on by default in Nightly in the coming weeks.)

Testing these protections by default

In the coming months, we will start testing these protections with small groups of users and will continue to work with Disconnect to improve and expand the set of domains blocked by Firefox. We plan to enable these protections by default for all Firefox users in a future release.

As always, we welcome your reports of any broken websites you may encounter. Just click on the Tracking Protection “shield” in the address bar and click “Report a Problem”:

Help us by reporting broken websites

We invite you to check out this feature to keep users safe on the current Nightly and Beta releases.

The post Protections Against Fingerprinting and Cryptocurrency Mining Available in Firefox Nightly and Beta appeared first on Future Releases.

The Firefox Frontier10 unicorn themes for Firefox to make Unicorn Day extra magical.

If today feels a little extra magical, that’s because it’s Unicorn Day! That’s right, a whole day devoted to these enchanting fairytale creatures. Unicorn Day is a new holiday created … Read more

The post 10 unicorn themes for Firefox to make Unicorn Day extra magical. appeared first on The Firefox Frontier.

This Week In RustThis Week in Rust 281

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Important Update: The This Week in Rust privacy policy has changed due to our migration to GitHub pages for hosting. The current policy can be accessed here. The git-diff can be viewed here.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's crate is interact, a framework for online introspection of the running program state. Thanks to Willi Kappler for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

198 pull requests were merged in the last week

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.


No RFCs are currently in final comment period.

Tracking Issues & PRs

New RFCs

Upcoming Events

Asia Pacific
North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

Sadly there was no suggestion this week.

Please submit your quotes for next week!

This Week in Rust is edited by: nasa42, llogiq, and Flavsditz.

Discuss on r/rust.

Mike ConleyFirefox Front-End Performance Update #16

With Firefox 67 only a few short weeks away, I thought it might be interesting to take a step back and talk about some of the work that the Firefox Front-end Performance team is shipping to users in that particular release.

To be clear, this is not an exhaustive list of the great performance work that’s gone into Firefox 67 – but I picked a few things that the front-end team has been focused on to talk about.

Stop loading things we don’t need right away

The fastest code is the code that doesn’t run at all. Sometimes, as the browser evolves, we realize that there are components that don’t need to be loaded right away during start-up, and can instead be deferred until sometime after start-up. Sometimes, that means we can wait until the very last moment to initialize some component – that’s called loading something lazily.
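The lazy-loading idea can be sketched generically. This is an illustrative Python sketch (Firefox's actual mechanism is JavaScript lazy module getters, not this class), with hypothetical names:

```python
class LazyComponent:
    """Defer constructing an expensive component until first use."""

    def __init__(self, factory):
        self._factory = factory   # cheap to store; nothing loaded yet
        self._instance = None

    @property
    def instance(self):
        if self._instance is None:          # first access pays the cost
            self._instance = self._factory()
        return self._instance               # later accesses are free
```

The start-up win comes from the first branch: if the component is never used during a session, the factory never runs at all.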

Here’s a list of things that either got deferred until sometime after start-up, or made lazy:

FormAutofillContent and FormValidationChild

These are modules that support, you guessed it, Form Autofill – that part of the browser that helps you fill in web forms, and makes sure forms are passing validation. We were loading these modules too early, and now we load them only when there are forms to auto-fill or validate on a page.

The hidden window

The Hidden Window is a mysterious chunk of code that manages the state of the global menu bar on macOS when there are no open windows. The Hidden Window is also sometimes used as a singleton DOM window where various operations can take place. On Linux and Windows, it turns out we were creating this Hidden Window far earlier than we needed to, and now it’s quite a bit lazier.

Page style

Page Style is a menu you can find under View in the main menu bar, and it’s used to switch between alternative style sheets on a page. It’s a pretty rarely used feature from what we can tell, but we were scanning pages for their alternative stylesheets far earlier than we needed to. We were also scanning pages that we know don’t have alternative stylesheets, like the about:home / about:newtab page. Now we only scan normal web pages, and we do so only after we service the idle event queue.

Cache invalidation

The Startup Cache is an important part of Firefox startup performance. Its primary job is to cache computations that occur during each startup so that they only have to happen every once in a while. For example, the mark-up of the browser UI often doesn’t change from startup to startup, so we can cache a version of the mark-up that’s faster to read from disk, and only invalidate that during upgrades.

We were invalidating the whole startup cache every time a WebExtension was installed or uninstalled. This used to be necessary for old-style XUL add-ons (since those could cause changes to the UI that would need to go into the cache), but with those add-ons no longer available, we were able to remove the invalidation. This means faster startups more often.

Don’t touch the disk

The disk is almost always the slowest part of the system. Reading and writing to the disk can take a long time, especially on spinning magnetic drives. The less we can read and write, the better. And if we’re going to read, best to do it off of the main thread so that the UI remains responsive.

Old XUL icons code

We were reading from the disk on the main thread to search for window-specific icons to display in the window titlebar.

Firefox doesn’t use window-specific icons, so we made it so that we skip these checks. This means less disk activity, which is great for responsiveness and start-up!

Hitting every directory on the way down

We noticed that when we were checking that a directory exists on Windows (to write a file to it), we were using the CreateDirectoryW Windows API. This API checks each folder on the way down to the last one to see if they exist. That’s a lot of disk IO! We can avoid this if we assume that the parent directories exist, and only fall into the slow path if we fail to write our file. This means that we hit the faster path with less IO more often, which is good for responsiveness and start-up time.
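The optimistic fast path described above can be sketched like this, in Python for illustration (Firefox's actual fix is in C++ around the Windows API; the function name here is hypothetical):

```python
import os


def write_file_fast(path, data):
    """Assume parent directories already exist; only create them on failure."""
    try:
        # Fast path: no directory-existence checks, just one write attempt.
        with open(path, "wb") as f:
            f.write(data)
    except FileNotFoundError:
        # Slow path: the write failed because a parent was missing,
        # so create the whole directory chain, then retry once.
        os.makedirs(os.path.dirname(path), exist_ok=True)
        with open(path, "wb") as f:
            f.write(data)
```

Since the parent directories almost always exist after the first write, nearly every call takes the single-IO fast path.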

Enjoy your Faster Fox!

Firefox 67 is slated to ship with these improvements on May 14th – just a little over a month away. Enjoy!

Firefox UXDesigning Better Security Warnings

Security messages are very hard to get right, but it’s very important that you do. The world of internet security is increasingly complex and scary for the layperson. While in-house security experts play a key role in identifying the threats, it’s up to UX designers and writers to communicate those threats in ways that enlighten and empower users to make more informed decisions.

We’re still learning what works and what doesn’t in the world of security messages, but there are some key insights from recent studies from the field at large. We had a chance to implement some of those recommendations, as well as learnings from our own in-house research, in a recent project to overhaul Firefox’s most common security certificate warning messages.


Websites prove their identity via security certificates (i.e., www.example.com is in fact www.example.com, and here’s the documentation to show it). When you try to visit any website, your browser will review the certificate’s authenticity. If everything checks out, you can safely proceed to the site.

If something doesn’t check out, you’ll see a security warning. 3% of Firefox users encounter a security certificate message on a daily basis. Nearly all users who see a security message see one of five different message types. So, it’s important that these messages are clear, accurate, and effective in educating and empowering users to make the informed (ideally, safer) choice.

These error messages previously included some vague, technical jargon nestled within a dated design. Given their prevalence, and Firefox’s commitment to user agency and safety, the UX and security team partnered up to make improvements. Using findings from external and in-house research, UX Designer Bram Pitoyo and I collaborated on new copy and design.

Old vs. New Designs

<figcaption>Example of an old Firefox security certificate message</figcaption>
<figcaption>Example of a new Firefox security message</figcaption>


Business goals:

  1. User safety: Prevent users from visiting potentially unsafe sites.
  2. User retention: Keep Firefox users who encounter these errors from switching to another browser.

User experience goals:

  1. Comprehension: The user understands the situation and can make an informed decision.
  2. Adherence: The user makes the informed, pro-safety choice. In the case of security warnings, this means the user does not proceed to a potentially unsafe site, especially if the user does not fully understand the situation and implications at hand.(1)


We met our goals, as illustrated by three different studies:

1. A qualitative usability study (remote, unmoderated on usertesting.com) of a first draft of redesigned and re-written error pages. The study evaluated the comprehensibility, utility, and tone of the new pages. Our internal user researcher, Francis Djabri, tested those messages with eight participants and we made adjustments based on the results.

2. A quantitative survey comparing Firefox’s new error pages, Firefox’s current error pages, and Chrome’s comparable pages. This was a paid panel study that asked users about the source of the message, how they felt about the message, and what actions they would take as a result of the message. Here’s a snapshot of the results:

  • When presented with the redesigned error message, we saw a 22–50% decrease in users stating they would attempt to ignore the warning message.
  • When presented with the redesigned error message, we saw a 29–60% decrease in users stating they would attempt to access the website via another browser. (Only 4.7–8.5% of users who saw the new Firefox message said they would try another browser, in contrast to 10–11.3% of users who saw a Chrome message.)

(Source: Firefox Strategy & Insights, Tyler Downer, November 2018 Survey Highlights)

3. A live study comparing the new and old security messages with Firefox users confirmed that the new messages did not significantly impact usage or retention in a negative way. This gave us the green light to go live with the redesigned pages for all users.

How we did it:

<figcaption>The process of creating new security messages</figcaption>

In this blog post, I identify the eight design and content tips — based on outside research and our own — for creating more successful security warning messages.

Content & Design Tips

1. Avoid technical jargon, and choose your words carefully

Unless your particular users are more technical, it’s generally good practice to avoid technical terms — they aren’t helpful or accessible for the general population. Words like “security credentials,” “encrypted,” and even “certificate” are too abstract and thus ineffective in achieving user understanding.(2)

It’s hard to avoid some of these terms entirely, but when you do use them, explain what they mean. In our new messages, we don’t use the technical term, “security certificates,” but we do use the term “certificates.” On first usage, however, we explain what “certificate” means in plain language:

Some seemingly common terms can also be problematic. Our own user study showed that the term, “connection,” confused people. They thought, mistakenly, that the cause of the issue was a bad internet connection, rather than a bad certificate.(3) So, we avoid the term in our final heading copy:

2. Keep copy concise and scannable…because people are “cognitive misers”

When confronted with decisions online, we all tend to be “cognitive misers.” To minimize mental effort, we make “quick decisions based on learned rules and heuristics.” This efficiency-driven decision making isn’t foolproof, but it gets the job done. It means, however, that we cut corners when consuming content and considering outcomes.(4)

Knowing this, we kept our messages short and scannable.

  • Since people tend to read in an F-shaped pattern, we served up our most important content in the prime real estate of the heading and upper left-hand corner of the page.
  • We used bolded headings and short paragraphs so the reader can find and consume the most important information quickly and easily. Employing headers and prioritizing content into a hierarchy in this way also makes your content more accessible:

We also streamlined the decision-making process with opinionated design and progressive disclosure (read on below).

3. Employ opinionated design, to an appropriate degree

“Safety is an abstract concept. When evaluating alternatives in making a decision, outcomes that are abstract in nature tend to be less persuasive than outcomes that are concrete.” — Ryan West, “The Psychology of Security”

When users encounter a security warning, they can’t immediately access content or complete a task. Between the two options — proceed and access the desired content, or retreat to avoid some potential and ambiguous threat — the former provides a more immediate and tangible reward. And people like rewards.(5)

Knowing that safety may be the less appealing option, we employed opinionated design. We encourage users to make the safer choice by giving it a design advantage as the “clear default choice.”(6) At the same time, we have to be careful that we aren’t being a big brother browser. If users want to proceed, and take the risk, that’s their choice (and in some cases, the informed user can do so knowing they are actually safe from the particular certificate error at hand). It might be tempting to add ten click-throughs and obscure the unsafe choice, but we don’t want to frustrate people in the process. And, the efficacy of additional hurdles depends on how difficult those hurdles are.(7)

Striving for balance, we:

  • Made the pro-safety choice the most prominent and accessible. The blue button pops against the gray background, and contains text to indicate it is indeed the “recommended” course of action. The color blue is also often used in traffic signals to indicate guidance and direction, which is fitting for the desired pro-safety path.
  • In contrast, the “Advanced…” button is a muted gray, and, after selecting this button, the user is presented with one last barrier. That barrier is additional content explaining the risk. It’s followed by the button to continue to the site in a muted gray with the foreboding text, “Accept the risk…” We used the word “risk” intentionally to capture the user’s attention and be clear that they are putting themselves in a potentially precarious position.

4. Communicate the risk, and make it tangible

In addition to “safety” being an abstract concept, users tend to believe that they won’t be the ones to fall prey to the potential threat (i.e., those kind of things happen to other people…they won’t happen to me).(8) And, save for our more tech-savvy users, the general population might not care what particular certificate error is taking place and its associated details.

So, we needed to make the risk as concrete as possible, and communicate it in more personal terms. We did the following:

  • Used the word “Warning” in our header to capture the user’s attention.
  • Explained the risk in terms of real potential outcomes. The old message simply said, “To protect your information from being stolen…” Our new message is much more explicit, including examples of what was at risk of being stolen. Google Chrome employs similarly concrete wording.
  • Communicated the risk early in our content hierarchy — in our case, this meant the first paragraph (rather than burying it under the “Advanced” section).

5. Practice progressive disclosure

While the general population might not need or want to know the technical details, you should provide them for the users that do…in the right place.

Users rarely click on links like “Learn more” and “More Information.”(9) Our own usability study confirmed this, as half of the participants did not notice or feel compelled to select the “Advanced” button.(10) So, we privileged content that is more broadly accessible and immediately important on our first screen, but provided more detail and technical information on the second half of the screen, or behind the “Advanced” button. Knowing users aren’t likely to click on “Advanced,” we moved any information that was more important, such as content about what action the user could take, to the first screen.

The “Advanced” section thus serves as a form of progressive disclosure. We avoided cluttering up our main screen with technical detail, while preserving a less obtrusive place for that information for the users who want it.

6. Be transparent (no one likes the internet browser who cried wolf)

In the case of security errors, we don’t know for sure if the issue at hand is the result of an attack, or simply a misconfigured site. Hackers could be hijacking the site to steal credit card information…or a site may just not have its security certificate credentials in order, for example.

When there is chance of attack, communicate the potential risk, but be transparent about the uncertainty. Our messages employ language like “potential” and “attackers could,” and we acknowledge when there are two potential causes for the error (the former unsafe, the latter safe):

The website is either misconfigured or your computer clock is set to the wrong time.

Explain why you don’t trust a site, and offer the ability to learn more in a support article:

Websites prove their identity via certificates. Firefox does not trust example.com because its certificate issuer is unknown, the certificate is self-signed, or the server is not sending the correct intermediate certificates. Learn more about this error

A participant in our usability study shared his appreciation for this kind of transparency:

“I’m not frustrated, I’m enlightened. Often software tries to summarize things; they think the user doesn’t need to know, and they’ll just say something a bit vague. As a user, I would prefer it to say ‘this is what we think and this is how we determined it.’”— Participant from a usability study on redesigned error messages (User Research Firefox UX, Francis Djabri, 2018)

7. Choose imagery and color carefully

Illustration, iconography, and color treatment are important communication tools to accompany the copy. Visual cues can be even “louder” than words and so it’s critical to choose these carefully.

We wanted users to understand the risk at hand but we didn’t want to overstate the risk so that browsing feels like a dangerous act. We also wanted users to know and feel that Firefox was protecting them from potential threats.

Some warning messages employ more dramatic imagery like masked eyes, a robber, or police officer, but their efficacy is disputed.(11) Regardless, that sort of explicit imagery may best be reserved for instances in which we know the danger to be imminent, which was not our case.

The imagery must also be on brand and consistent with your design system. At Firefox, we don’t use human illustration within the product — we use whimsical critters. Critters would not be an appropriate choice for error messages communicating a threat. So, we decided to use iconography that telegraphs risk or danger.

We also selected color scaled according to threat level. At Firefox, yellow is a warning and red signifies an error or threat. We used a larger yellow icon for our messages as there is a potential risk but the risk is not guaranteed. We also added a yellow border as an additional deterrent for messages in which the user had the option to proceed to the unsafe site (this isn’t always the case).

<figcaption>Example of a yellow border around one of the new error messages</figcaption>

8. Make it human

Any good UX copy uses language that sounds and feels human, and that’s an explicit guiding principle in Firefox’s own Voice and Tone guidelines. By “human,” I mean language that’s natural and accessible.

If the context is right, you can go a bit further and have some fun. One of our five error messages did not actually involve risk to the user — the user simply needed to adjust her clock. In this case, Communication Design Lead, Sean Martell, thought it appropriate to create an “Old Timey Berror” illustration. People in our study responded well… we even got a giggle:

<figcaption>New clock-related error message</figcaption>


The field of security messaging is challenging on many levels, but there are things we can do as designers and content strategists to help users navigate this minefield. Given the amount of frustration error messages can cause a user, and the risk these obstructions pose to business outcomes like retention, it’s worth the time and consideration to get these oft-neglected messages right…or, at least, better.

Thank you

Special thanks to my colleagues: Bram Pitoyo for designing the messages and being an excellent thought partner throughout, Johann Hofmann and Dana Keeler for their patience and security expertise, and Romain Testard and Tony Cinotto for their mad PM skills. Thank you to Sharon Bautista, Betsy Mikel, and Michelle Heubusch for reviewing an earlier draft of this post.



  1. Adrienne Porter Felt et al., “Improving SSL Warnings: Comprehension and Adherence.” (Philadelphia: Google, 2015).
  2. Ibid.
  3. User Research, Firefox UX, Francis Djabri, 2018.
  4. West, Ryan. “The Psychology of Security.” Communications of the ACM 51, no. 4 (April 2008): 34–40. doi:10.1145/1330311.1330320.
  5. West, Ryan. “The Psychology of Security.” Communications of the ACM 51, no. 4 (April 2008): 34–40. doi:10.1145/1330311.1330320.
  6. Adrienne Porter Felt et al., “Experimenting At Scale With Google Chrome’s SSL Warning.” (Toronto: CHI2014, April 26 — May 01 2014). https://dl.acm.org/citation.cfm?doid=2556288.2557292
  7. Ibid.
  8. West, Ryan. “The Psychology of Security.” Communications of the ACM 51, no. 4 (April 2008): 34–40. doi:10.1145/1330311.1330320.
  9. Devdatta Akhawe and Adrienne Porter Felt, “Alice in Warningland: A Large-Scale Field Study of Browser Security Warning Effectiveness.” Semantic Scholar, 2015.
  10. User Research, Firefox UX, Francis Djabri, 2018.
  11. Devdatta Akhawe and Adrienne Porter Felt, “Alice in Warningland: A Large-Scale Field Study of Browser Security Warning Effectiveness.” Semantic Scholar, 2015.

Designing Better Security Warnings was originally published in Firefox User Experience on Medium, where people are continuing the conversation by highlighting and responding to this story.

Mozilla VR BlogVoxelJS Reboot

VoxelJS Reboot

If you’ve ever played Minecraft then you have used a voxel engine. My 7 year old son is a huge fan of Minecraft and asked me to make a Minecraft for VR. After some searching I found VoxelJS, a great open source library created by Max Ogden (@maxogden) and James Halliday (@substack). Unfortunately it hasn’t been updated for about five years and doesn't work with newer libraries.

So what to do? Simple: I dusted it off, ported it to modern ThreeJS & Javascript, then added WebXR support. I call it VoxelJS Next.

VoxelJS Reboot

What is it?

VoxelJS Next is a graphics engine, not a game. I think of Minecraft as a category of game, not a particular instance. I’d like to see Minecraft-style voxels used for all sorts of experiences: chemistry, water simulation, infinite runners, and many more.

VoxelJS lets you build your own Minecraft-like game easily on the web. It can run in desktop mode, on a touch screen, full screen, and even in VR thanks to WebXR support. VoxelJS is built on top of ThreeJS.

How Does it Work:

I’ll talk about how data is stored and drawn on screen in a future blog, but the short answer is this:

The world is divided into chunks. Each chunk contains a grid of blocks and is created on demand. These chunks are turned into ThreeJS meshes which are then added to the scene. The chunks come and go as the player moves around the world. So even if you have an infinitely large world only a small number of chunks need to be loaded at a time.
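The chunk lifecycle can be sketched roughly as follows. This is an illustrative Python sketch of the idea, not VoxelJS code, and all names and constants are hypothetical:

```python
CHUNK_SIZE = 16   # blocks per chunk edge (a common choice; an assumption here)
VIEW_RADIUS = 2   # how many chunks around the player to keep loaded


def chunk_coord(x, z):
    """Map a world position to the coordinate of the chunk containing it."""
    return (x // CHUNK_SIZE, z // CHUNK_SIZE)


def update_chunks(loaded, player_x, player_z, generate):
    """Keep only chunks within VIEW_RADIUS of the player; create the rest lazily."""
    cx, cz = chunk_coord(player_x, player_z)
    wanted = {(cx + dx, cz + dz)
              for dx in range(-VIEW_RADIUS, VIEW_RADIUS + 1)
              for dz in range(-VIEW_RADIUS, VIEW_RADIUS + 1)}
    for key in set(loaded) - wanted:    # player moved away: drop the chunk
        del loaded[key]                 # (in the engine: remove its mesh)
    for key in wanted - set(loaded):    # newly visible: generate on demand
        loaded[key] = generate(key)     # (in the engine: build a mesh)
    return loaded
```

Because `loaded` only ever holds the chunks near the player, memory and draw cost stay bounded no matter how large the world is.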

VoxelJS is built as ES6 modules with a simple entity system. This lets you load only the parts you want. There is a module each for desktop controls, touch controls, VR, and more. Thanks to modern browser module support you don’t need to use a build tool like Webpack. Everything works just by importing modules.

How Do I get it?

Check out a live demo and get the code.

Next steps:

I don’t want to oversell VoxelJS Next. This is a super alpha release. The code is incredibly buggy, performance isn’t even half of what it should be, there are only a few textures, and tons of features are missing. VoxelJS Next is just a start. However, it’s better to get early feedback than to build features no one wants, so please try it out.

You can find the full list of features and issues here. And here is a good set of issues for beginners to start with.

I also created a #voxels channel in the ThreeJS slack group. You can join it here.

I find voxels really fun to play with, as they give you great freedom to experiment. I hope you'll enjoy building web creations with VoxelJS Next.

Hacks.Mozilla.OrgSharpen your WebVR skills with experiments from Glitch and Mozilla

Join us for a week of Web VR experiments.

Earlier this year, we partnered with Glitch.com to produce a WebVR starter kit. In case you missed it, the kit includes a free, 5-part video course with interactive code examples that teach the fundamentals of WebVR using A-Frame. The kit is intended to help anyone get started – no coding experience required.

Today, we are kicking off a week of WebVR experiments. These experiments build on the basic fundamentals laid out in the starter kit. Each experiment is unique and is meant to teach and inspire as you craft your own WebVR experiences.

To build these, we once again partnered with the awesome team at Glitch.com as well as Glitch creator Andrés Cuervo. Andrés has put together seven experiments that range from incorporating motion capture to animating torus knots in 3D.

Today we are releasing the first three WebVR experiments, and we’ll continue to release a new one every day. If you want to follow along, you can click here and bookmark this page. The first three are shared below:

Dancing VR Scene

This example takes free motion capture data and adds it to a VR scene.



Noisy Sphere

The Noisy Sphere will teach you how to incorporate 3D models and textures into your project.



Winding Knots

In this example, you get a deeper dive on the torus knot shape and the animation component that is included in A-Frame.



If you are enjoying these experiments, we invite you to visit this page throughout the week as we’ll add a new experiment every day. We will post the final experiment this Friday.

Visit a week of WebVR Experiments.

The post Sharpen your WebVR skills with experiments from Glitch and Mozilla appeared first on Mozilla Hacks - the Web developer blog.

Mozilla Addons BlogRecommended Extensions program — coming soon

In February, we blogged about the challenge of helping extension users maintain their safety and security while preserving their ability to choose their browsing experience. The blog post outlined changes to the ecosystem to better protect users, such as making them more aware of the risks associated with extensions, reducing the visibility of extensions that haven’t been vetted, and putting more emphasis on curated extensions.

One of the ways we’re helping users discover vetted extensions will be through the Recommended Extensions program, which we’ll roll out in phases later this summer. This program will foster a curated list of extensions that meet our highest standards of security, utility, and user experience. Recommended extensions will receive enhanced visibility across Mozilla websites and products, including addons.mozilla.org (AMO).

We anticipate the list will eventually number in the hundreds, but we’ll start smaller and build the program carefully. We’re currently in the process of identifying candidates and will begin reaching out to selected developers later this month. You can expect to see changes on AMO by the end of June.

How will Recommended extensions be promoted?

On AMO, Recommended extensions will be visually identifiable by distinct badging. Furthermore, AMO search results and filtering will be weighted higher toward Recommended extensions.

Recommended extensions will also supply the personalized recommendations on the “Get Add-ons” page in the Firefox Add-ons Manager (about:addons), as well as any extensions we may include in Firefox’s Contextual Feature Recommender.

How are extensions selected to be part of the program?

Editorial staff will select the initial batch of extensions for the Recommended list. In time, we’ll provide ways for people to nominate extensions for inclusion.

When evaluating extensions, curators are primarily concerned with the following:

  • Is the extension really good at what it does? All Recommended extensions should not only do what they promise, but be very good at it. For instance, there are many ad blockers out there, but not all ad blockers are equally effective.
  • Does the extension offer an exceptional user experience? Recommended extensions should be delightful to use. Curators look for content that’s intuitive to manage and well-designed. Common areas of concern include the post-install experience (i.e. once the user installs the extension, is it clear how to use it?), settings management, user interface copy, etc.
  • Is the extension relevant to a general audience? The tightly curated nature of Recommended extensions means we will be selective, and will only recommend extensions that are appealing to a general Firefox audience.
  • Is the extension safe? We’re committed to helping protect users against third-party software that may—intentionally or otherwise—compromise user security. Before an extension receives Recommended status, it undergoes a security review by staff reviewers. (Once on the list, each new version of a Recommended extension must also pass a full review.)

Participation in the program will require commitment from developers in the form of active development and a willingness to make improvements.

How will the list be maintained?

It’s our intent to develop a Recommended list that can remain relevant over time, which is to say we don’t anticipate frequent turnover in the program. The objective is to promote Recommended extensions that users can trust to be useful and safe for the lifespan of the software they install.

We recognize the need to keep the list current, and will make room for new, emerging extensions. Firefox users want the latest, greatest extensions. Talented developers all over the world continue to find creative ways to leverage the powerful capabilities of extensions and deliver fantastic new features and experiences. Once the program launches later this summer, we’ll provide ways for people to suggest extensions for inclusion in the program.

Will the community be involved?

We believe it’s important to maintain community involvement in the curatorial process. The Community Advisory Board—which for years has contributed to helping identify featured content—will continue to be involved in the Recommended extensions program.

We’ll have more details to share in the coming months as the Recommended extensions program develops. Please feel free to post questions or comments on the add-ons Discourse page.

The post Recommended Extensions program — coming soon appeared first on Mozilla Add-ons Blog.

Daniel Stenbergcurl says bye bye to pipelining

HTTP/1.1 Pipelining is the protocol feature where the client sends off a second HTTP/1.1 request before the answer to the previous request has (completely) arrived from the server. It is defined in the original HTTP/1.1 spec and is a way to avoid waiting times and reduce latency.

HTTP/1.1 Pipelining was badly supported by curl for a long time, in the sense that we had a series of known bugs and it was a fragile feature without enough tests. Pipelining is also fairly tricky to debug due to its timing sensitivity: very often, enabling debug output or similar completely changes the behavior, and the problem no longer reproduces!

HTTP pipelining was never enabled by default by the large desktop browsers due to all the issues with it, like broken server implementations. Both Firefox and Chrome dropped pipelining support entirely a long time ago. Over time, curl in fact became more and more lonely in supporting pipelining.

The bad state of HTTP pipelining was a primary driving factor behind HTTP/2 and its multiplexing feature. HTTP/2 multiplexing is truly and really “pipelining done right”. It is way more solid, practical and solves the use case in a better way with better performance and fewer downsides and problems. (curl enables multiplexing by default since 7.62.0.)

In 2019, pipelining should be abandoned and HTTP/2 should be used instead.

Starting with this commit, to be shipped in release 7.65.0, curl no longer has any code that supports HTTP/1.1 pipelining. It has already been disabled in the code since 7.62.0, so applications and users on a recent version should not notice any difference.

Pipelining was always offered on a best-effort basis and there was never any guarantee that requests would actually be pipelined, so we can remove this feature entirely without breaking API or ABI promises. Applications that ask libcurl to use pipelining can still do so; it just won’t have any effect.

Emily DunhamRustacean Hat Pattern


Based on feedback from the crab plushie pattern, I took more pictures this time.

There are 40 pictures of the process below the fold.

Materials required

  • About 1/4 yard or 1/4 meter of orange fabric. Maybe more if it’s particularly narrow. Polar fleece is good because it stretches a little and does not fray near seams.
  • A measuring device. You can just use a piece of string and mark it.
  • Scissors, a sewing machine, pins, orange thread
  • Scraps of black and white cloth to make the face
  • The measurements of the hat wearer’s head. I’m using a hat to guess the measurements from.
  • A pen or something to mark the fabric with is handy.

Constructing the pattern pieces

If you’re using polar fleece, you don’t have to pre-wash it. Fold it in half. In these pictures, I have the fold on the left and the selvedges on the right.

The first step is to chop off a piece from the bottom of the fleece. We’ll use it to make the legs and spines later. Basically like this:


Next, measure the circumference you want the hat to be. I’ve measured on a hat to show you.


Find 1/4 of that circumference. If you measured with a string, you can just fold it, like I folded the tape measure. Or you could use maths.


That quarter-of-the-circumference is the distance that you fold over the left side of the big piece of fabric. Like so:


Leave it folded over, we’ll get right back to it. Guesstimate the height that a hat piece might need to be, so that we can sketch a piece of the hat on it. I do this by measuring front to back on a hat and folding the string, I mean tape measure, in half:


Back on the piece we folded over, put down the measurement so we make sure not to cut the hat too short. That measurement tells you roughly where to draw a curvy triangle on the folded fabric, just like this:


Now cut it out. Make sure not to cut off that folded edge. Like this:


Congratulations, you just cut out the lining of the hat! It should be all one piece. If we unfold the bit we just cut and the bit we cut it from, it’ll look like this:


Now we’re going to use that lining piece as a template to cut the outside pieces. Set it down on the fabric like so:


And cut around it. Afterwards you have 1 lining piece and 2 outer pieces:


Now grab that black and white scrap fabric and cut a couple eye sized black circles, and a couple bits of white for the light glints on the eyes. Also cut a black D shape to be the mouth if you want your hat to have a happy little mouth as well as just eyes.



Put the black and white together, and sew an eye glint kind of shape in the same spot on both, like so:


Pull the top threads to the back so the stitching looks all tidy. Then cut off the excess white fabric so it looks all pretty:


Now the fun part: Grab one of those outside pieces we cut before. Doesn’t matter which. Sew the eyes and mouth onto it like so:


Now it’s time to give the hat some shape. On both outside pieces – the one with the face and also the one without – sew that little V shaped gap shut. Like so:


Now they look kind of 3D, like so:


Let’s sew up the lining piece next. It’s the bit we cut off of the fold earlier. Fold then sew the Vs shut, thusly:


Next, sew most of the remaining seam of the lining, but leave a gap at the top so we can turn the whole thing inside out later:


Now that the lining is sewn, let’s sew 10 little legs. Grab that big rectangular strip we cut out at the very beginning, and sew its layers together into a bunch of little triangles with open bottoms. Then cut them apart and turn them inside out to get legs. Here’s how I did those steps:


Those little legs should have taken up maybe 1/3 of the big rectangular strip. With the rest of it, let’s make some spines to go across Ferris’s back. They’re little triangles, wider than the legs, sewn up the same way.


Now put those spines onto one of the outside hat pieces. Leave some room at the bottom, because that’s where we’ll attach the claws that we’ll make later. The spines will stick toward the face when you pin them out, so when the whole thing turns right-side-out after sewing they’ll stick out.


Put the back of the outside onto this spine sandwich you’re building. Make sure the seam that sticks out is on the outside, because the outsides of this sandwich will end up inside the hat.


Pin and sew around the edge:


Note how the bottoms of the spines make the seam very bulky. Trim them closer to the seam, if you’re using a fabric which doesn’t fray, such as polar fleece.


The outer layer of the hat is complete!


At this point, we remember that Ferris has some claws that we haven’t accounted for yet. That’s ok because there was some extra fabric left over when we cut out the lining and outer for the hat. On that extra fabric, draw two claws. A claw is just an oval with a pie slice missing, plus a little stem for the arm. Make sure the arms are wide enough to turn the claw inside out through later. It’s ok to draw them straight onto the fabric with a pen, since the pen marks will end up inside the claw later.


Then sew around the claws. It doesn’t have to match the pen lines exactly; nobody will ever know (except the whole internet in this case). Here are the front and back of the cloth sandwich that I sewed claws with:


Cut them out, being careful not to snip through the stitching when cutting the bit that sticks inward, and turn them right-side out:


Now it’s time to attach the liner and the hat outer together. First we need to pin the arms and legs in, making another sandwich kind of like we did with the spines along the back. I like to pin the arms sticking straight up and covering the outer’s side seams, like so:


Remember those 10 little legs we sewed earlier? Well, we need those now. And I used an extra spine from when we sewed the spines along Ferris’s back, in the center back, as a tail. Pin them on, 5 on each side, like little legs.


And finally, remember that liner we sewed, with a hole in the middle? Go find that one real quick:


Now we’re going to put the whole hat outer inside of the lining, creating Ferris The Bowl. All the pretty sides of things are INSIDE the sandwich, so all the seam allowances are visible.


Rearrange your pins to allow sewing, then sew around the entire rim of Ferris The Bowl.


Snip off the extra bits of the legs and stuff, just like we snipped off the extra bits of the spines before, like this:


Now Ferris The Bowl is more like Ferris The Football:


Reach in through the hole in the end of Ferris The Football, grab the other end, and pull. First it’ll look like this...


And then he’ll look like this:


Sew shut that hole in the bottom of the lining...


Stuff that lining into the hat, to make the whole thing hat-shaped, and you’re done!


QMOFirefox 67 Beta 10 Testday, April 12th

Hello Mozillians,

We are happy to let you know that Friday, April 12th, we are organizing Firefox 67 Beta 10 Testday. We’ll be focusing our testing on: Graphics compatibility & support and Session Restore.

Check out the detailed instructions via this etherpad.

No previous testing experience is required, so feel free to join us on #qa IRC channel where our moderators will offer you guidance and answer your questions.

Join us and help us make Firefox better!

See you on Friday!

Firefox NightlyThese Weeks in Firefox: Issue 56


  • Firefox Lockbox now available on Android! Check it out!
  • Responsive Design Mode in Dev Tools supports Meta Viewport, behind the devtools.responsive.metaViewport.enabled pref!

Responsive Design Mode with broken meta viewport handling

Responsive Design Mode with fixed meta viewport handling

Screenshot of new "Logins and Passwords" app menu item

  • The autocomplete suggestion list has a spiffy new “View Saved Logins” button.

Screenshot of "View Saved Logins" button in autocomplete suggestion list

Screenshot of "Reload All Tabs" button in Privacy settings

Screenshot of new FxA avatar button in panel

  • Rust-powered bookmark syncing landed in Nightly, behind the services.sync.engine.bookmarks.buffer pref! This is an under-the-hood change that lets us share core sync logic between Desktop and mobile. Please file a bug if you notice issues.

Friends of the Firefox team

Here’s a list of all resolved bugs.

Fixed more than one bug

  • akshitha shetty
  • Carolina Jimenez Gomez
  • Dhruvi Butti
  • Fanny Batista Vieira [:fanny]
  • Florens Verschelde :fvsch
  • Helena Moreno (aka helenatxu)
  • Hemakshi Sachdev [:hemakshis]
  • Heng Yeow (:tanhengyeow)
  • Ian Moody [:Kwan]
  • Jawad Ahmed [:jawad]
  • Mellina Y.
  • Monika Maheshwari [:MonikaMaheshwari]
  • Neha
  • Nidhi Kumari
  • Syeda Asra Arshia Qadri [:aqadri]

New contributors (🌟 = first patch)

Project Updates

Activity Stream

  • Preparing to launch the Pin Tabs CFR message globally in 68
    • Currently just en-US in Beta (around 10% of the people that open the message will also pin the tab)
  • We’re investigating Pocket New Tab startup performance
    • There is a regression on the reference hardware compared to the default (66) New Tab experience
    • The current theory is that it is caused by network calls/processing for fetching images
    • The target is to make Pocket New Tab the default experience in 68
    • Working on lazy loading images
    • Meta Bug for performance work

Add-ons / Web Extensions


Services (Firefox Accounts / Sync / Push)
  • Thom is working on an HTTP client abstraction for our Rust components. We’ll use Necko via GeckoView on Android, and system networking libraries on iOS.
  • Thom and Ryan are also working on a crypto abstraction, using NSS and OS-level crypto, for encrypting and decrypting sync records and push messages.

Browser Architecture

Developer Tools

  • Thanks to Fanny Batista Vieira (GSOC), Adrian Anderson (Outreachy) and nikitarajput360 (Outreachy) for working on bug 1115363 – Add a Copy context menu item to the Storage Inspector.
  • Thanks to Avi Mathur for working on bug 1291427 – Table headers are not removed when selecting an empty storage
  • Good number of external contributions for the Console:
    • Helena Moreno added support for Cmd+K to clear the console – Bug 1532939
    • Helena Moreno added support to open URL of network messages with Cmd/Ctrl + click in the console – Bug 1466040
    • Erik Carillo made the console less painful to navigate with the keyboard by adding a role=main attribute on the output – Bug 1530936
    • Bisola Omisore implemented NOT clearing the console input when evaluating, for the editor mode – Bug 1519313
      • lots of in progress bugs
  • Grouping of tracking protection messages in Console is in progress – Bug 1524276
  • Column breakpoints in the Debugger are faster and more polished, esp. on reload
  • Network Panel now has support for resizable columns (Bug 1533764). Kudos to Lenka, who finished this feature during her Outreachy internship and who already picked up the next feature.
  • Landed a blank markup view bug related to a CORS issue (Bug 1535661)
  • Landed a redesigned settings panel for RDM (thanks to our design contributors @KrisKristin) and will soon land the ability to edit devices
  • You can now (remote) debug service workers in e10s multiprocess if you are also running the new ServiceWorkers implementation (dom.serviceWorkers.parent_intercept) (bug)
  • The all new and improved about:debugging is getting close to shipping (try it by enabling devtools.aboutdebugging.new-enabled or going to about:debugging-new). This new version allows you to debug Gecko on USB devices without launching WebIDE, amongst many other improvements.
    • If you test it and find bugs, please file them here and we’ll take care of them.
  • Removed Shader, Web audio, Canvas and shared components (see Intent to Unship for reference)


  • Ian Moody is rolling out the ESLint rule no-throw-literal across the tree.
    • This will help improve our error messages and handling.
  • Gijs is working on enabling at least basic ESLint parsing for XUL files.
  • ESLint is now enabled for docshell/, uriloader/, dom/browser-element/ and dom/url/
  • (Hopefully) landing soon:
    • ‘Automatic’ ESLint configuration for test directories.
    • Fewer .eslintrc.js files will be needed.
    • Directories where the path is of the following formats will be automatically configured and not need a .eslintrc.js file:
    • xpcshell:
      • **/test*/unit*/
      • **/test*/xpcshell/
    • browser-chrome mochitests: **/test*/**/browser/
    • plain mochitests: **/test*/mochitest/
    • chrome mochitests: **/test*/chrome/
    • I’m planning more follow-up work in the future to reorganise non-matching test directories to fit these structures where possible. Things like browser/base/content/test may get special exceptions.
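The automatic configuration described above can be sketched as a simple path classifier (an illustration of the glob patterns only, not the actual mozilla-central ESLint code; the function name and category labels are assumptions):

```javascript
// Sketch: the test-directory globs above, translated into regexes.
// Not the real implementation; patterns and names are illustrative.
const patterns = {
  'xpcshell': [/\/test[^\/]*\/unit[^\/]*\//, /\/test[^\/]*\/xpcshell\//],
  'browser-chrome': [/\/test[^\/]*\/(.*\/)?browser\//],
  'mochitest-plain': [/\/test[^\/]*\/mochitest\//],
  'mochitest-chrome': [/\/test[^\/]*\/chrome\//],
}

// Return the test type implied by a directory path, or null if the
// directory would still need its own .eslintrc.js file.
function testType(path) {
  for (const [type, regexes] of Object.entries(patterns)) {
    if (regexes.some((re) => re.test(path))) return type
  }
  return null
}
```

For example, a path like `/browser/components/tests/unit/` would be picked up as xpcshell, while a directory matching none of the patterns would keep its explicit configuration file.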


Firefox for Echo Show
Android Components

Password Manager


This is a graph of the ts_paint startup paint Talos benchmark. The highlighted node is the first mozilla-central build with the hidden window work. Lower is better, so this looks like a nice win!

Performance tools

  • Added “Build Type” and “Update Channel” information to header metadata panel.

Metadata panel on Firefox Profiler that includes “Build Type” and “Update Channel” information

  • Added a “PID” label under the global tracks.

Parent Process track with PID under it

  • Working on PII sanitization before sharing a profile. Will be ready within a couple of weeks.


Search and Navigation

  • Studies:
    • Federated learning should launch in April
    • NewTab Search in private browsing is live
    • Quantum Bar in nightly is live
  • Working on remaining unit test failures, before landing built-in WebExtension Search Engines on Nightly 68
Quantum Bar
  • Nightly study is ongoing, got useful bug reports, keep them coming
  • Lots of autofill fixes and burning down list of blockers
  • Work continues on accessibility, RTL, flicker
  • Initial API design for future experiments, under discussion