Blog of Data: This Week in Glean: The Three Roles of Data Engagements

(“This Week in Glean” is a series of blog posts that the Glean Team at Mozilla is using to try to communicate better about our work. They could be release notes, documentation, hopes, dreams, or whatever: so long as it is inspired by Glean.) All “This Week in Glean” blog posts are listed in the TWiG index.

I’ve just recently started my sixth year working at Mozilla on data and data-adjacent things. In those years I’ve started to notice some patterns in how data is approached, so I thought I’d set them down in a TWiG because Glean’s got a role to play in them.

Data Engagements

A Data Engagement is when there’s a question that needs to engage with data to be answered. Something like “How many bookmarks are used by Firefox users?”.

(No one calls these Data Engagements but me, and I only do because I need to call them _something_.)

I’ve noticed three roles in Data Engagements at Mozilla:

  1. Data Consumer: The Question-Asker. The Temperature-Taker. This is the one who knows what questions are important, and is frustrated without an answer until and unless data can be collected and analysed to provide it. “We need to know how many bookmarks are used to see if we should invest more in bookmark R&D.”
  2. Data Analyst: The Answer-Maker. The Stats-Cruncher. This is the one who can use Data to answer a Consumer’s Question. “Bookmarks are used by Canadians more than Mexicans most of the time, but only amongst profiles that have at least one bookmark.”
  3. Data Instrumentor: The Data-Digger. The Code-Implementor. This one can sift through product code and find the correct place to collect the right piece of data. “The Places database holds many things, we’ll need to filter for just bookmarks to count them.”

(diagrams courtesy of :brizental)

It’s through these three working in concert — The Consumer having a question that the Instrumentor instruments to generate data the Analyst can analyse to return an answer back to the Consumer — that a Data Engagement succeeds.

At Mozilla, Data Engagements succeed very frequently in certain circumstances. The Graphics team answers many deeply-technical questions about Firefox running in the wild to determine how well WebRender is working. The Telemetry team examines the health of the data collection system as a whole. Mike Conley’s old Tab Switcher Dashboard helped find and solve performance regressions in (unsurprisingly) Tab Switching. These go well, and there’s a common thread here that I think is the secret of why:

In these and the other high-success-rate Data Engagements, all three roles (Consumer, Analyst, and Instrumentor) are embodied by the same person.

It’s a common problem in the industry. It’s hard to build anything at all, but it’s least hard to build something for yourself. When you are in yourself the Question-Asker, Answer-Maker, and Data-Digger, you don’t often mistakenly dig the wrong data to create an answer that isn’t to the question you had in mind. And when you accidentally do make a mistake (because, remember, this is hard), you can go back in and change the instrumentation, update the analysis, or reword the question.

But when these three roles are in different parts of the org, or different parts of the planet, things get harder. Each role is now trying to speak the others’ languages and infer enough context to do their jobs independently.

In comes the Data Org at Mozilla, which has had great successes to date on the theme of “making it easier for anyone to be their own Analyst”: Data Democratization. When you’re your own Analyst, there are fewer situations where the roles are disparate: Instrumentors who are their own Analysts know when data won’t be the right shape to answer their own questions, and Consumers who are their own Analysts know when their questions aren’t well-formed.

Unfortunately we haven’t had as much success in making the other roles more accessible. Everyone can theoretically be their own Consumer: curiosity in a data-rich environment is as common as lanyards at an industry conference[1]. Asking _good_ questions is hard, though. Possible, but hard. You could just about imagine someone in a mature data organization becoming able to tell the difference between questions that are important and questions that are just interesting through self-serve tooling and documentation.

As for being your own Instrumentor… that is something that only a small fraction of folks have the patience to do. I (and Mozilla’s Community Managers) welcome you to try: it is possible to download and build Firefox yourself. It’s possible to find out which part of the codebase controls which pieces of UI. It’s… well, it’s more than possible, it’s actually quite pleasant to add instrumentation using Glean… but on the whole, if you are someone who _can_ Instrument Firefox Desktop you probably already have a copy of the source code on your hard drive. If you check right now and it’s not there, then there’s precious little likelihood that will change.

(Unless you come and work for Mozilla, that is.)

So let’s assume for now that democratizing instrumentation is impossible. Why does it matter? Why should it matter that the Consumer is a separate person from the Instrumentor?

Communication

Each role communicates with the others in a different language:

  • Consumers talk to Instrumentors and Analysts in units of Questions and Answers. “How many bookmarks are there? We need to know whether people are using bookmarks.”
  • Analysts speak Data, Metadata, and Stats. “The median number of bookmarks is, according to a representative sample of Firefox profiles, twelve (confidence interval 99.5%).”
  • Instrumentors speak Data and Code. “There’s a few ways we delete bookmarks; we should cover them all to make sure the count’s correct when the next ping’s sent.”

More of the Data Org’s and Mozilla’s greatest successes involve supplying context at the points in a Data Engagement where it’s most needed. We’ve gotten exceedingly good at surfacing context about data (metadata) to facilitate communication between Instrumentors and Analysts with tools like the Glean Dictionary.

Ah, but once again the weak link appears to be the communication of Questions and Answers between Consumers and Instrumentors. Taking the above example, does the number of bookmarks include folders?

The Consumer knows, but the further away they sit from the Instrumentor, the less likely that the data coming from the product and fueling the analysis will be the “correct” one.

(Either including or excluding folders would be “correct” for different cases. Which one do you think was “more correct”?)

So how do we improve this?

Glean

Well, actually, Glean doesn’t have a solution for this. I don’t actually know what the solutions are. I have some ideas. Maybe we should share more context between Consumers and Instrumentors somehow. Maybe we should formalize the act of question-asking. Maybe we should build into the Glean SDK a high-enough level of metric abstraction that instead of asking questions, Consumers learn to speak a language of metrics.

The one thing I do know is that Glean is absolutely necessary to making any of these solutions possible. Without Glean, we have too many systems that are fractally complex for any context to be relevantly shared. How can we talk about sharing context about bookmark counts when we aren’t even counting things consistently[2]?

Glean brings that consistency. And from there we get to start solving these problems.

Expect me to come back to this realm of Engagements and the Three Roles in future posts. I’ve been thinking about:

  • how tooling affects the languages the roles speak amongst themselves and between each other,
  • how the roles are distributed on the org chart,
  • which teams support each role,
  • how Data Stewardship makes communication easier by adding context and formality,
  • how Telemetry and Glean handle the same situations in different ways, and
  • what roles Users play in all this. No model about data is complete without considering where the data comes from.

I’m not sure how many I’ll actually get to, but at least I have ideas.

:chutten

[1] Other rejected similes include “as common as”: maple syrup on Canadian breakfast tables, frustration in traffic, sense isn’t.

[2] Counting is harder than it looks.

(( This post is a syndicated copy of the original. ))

hacks.mozilla.org: Hacks Decoded: Thomas Park, Founder of Codepip

Welcome to our Hacks: Decoded Interview series!

Once a month, Mozilla Foundation’s Xavier Harding speaks with people in the tech industry about where they’re from, the work they do, and what drives them to keep going forward. Follow Mozilla’s Hacks blog to find more articles in this series, and visit the Mozilla Foundation site to see more of our org’s work.

Meet Thomas Park 

Thomas Park is a software developer based in the U.S. (Philadelphia, specifically). Previously, he was a teacher and researcher at Drexel University and even worked at Mozilla Foundation for a stint. Now, he’s the founder of Codepip, a platform that offers games that teach players how to code. Park has made a couple games himself: Flexbox Froggy and Grid Garden.

We spoke with Thomas over email about coding, his favourite apps and his past life at Mozilla. Check it out below and welcome to Hacks: Decoded.

Where’d you get your start, Thomas? How did you end up working in tech, what was the first piece of code you wrote, what’s the Thomas Park origin story?

The very first piece of code I wrote was in elementary school. We were introduced to Logo, an educational programming language that was used to draw graphics with a turtle (a little cursor that was shaped like the animal). I drew a rudimentary weapon that shot an animated laser beam, with the word “LAZER” misspelled under it.

Afterwards, I took an extremely long hiatus from coding. Dabbled with HyperCard and HTML here and there, but didn’t pick it up in earnest until college.

Post-college, I worked in the distance education department at the Center for Talented Youth at Johns Hopkins University, designing and teaching online courses. It was there I realized how much the technology we used mediated the experience of our students. I also realized how much better the design of this tech should be. That motivated me to go to grad school to study human-computer interaction, with a focus on educational technology. I wrote a decent amount of code to build prototypes and analyze data during my time there.

What is Codepip? What made you want to create it? 

Codepip is a platform I created for coding games that help people learn HTML, CSS, JavaScript, etc. The most popular game is Flexbox Froggy.

Codepip actually has its roots in Mozilla. During grad school, I did an internship with the Mozilla Foundation. At the time, they had a code editor geared toward teachers and students called Thimble. For my internship, I worked with Mozilla employees to integrate a tutorial feature into Thimble.

Anyway, through this internship I got to attend Mozilla Festival. And there I met many people who did brilliant work inside and outside of Mozilla. One was an extremely talented designer named Luke Pacholski. By that time, he had created CSS Diner, a game about CSS selectors. And we got to chatting about other game ideas.

After I returned from MozFest, I worked weekends for about a month to create Flexbox Froggy. I was blown away by the reception, from both beginners who wanted to learn CSS, to more experienced devs curious about this powerful new CSS module called flexbox. To me, this affirmed that coding games could make a good complement to more traditional ways of learning. Since then, I’ve made other games that touch on CSS grid, JS math, HTML shortcuts with Emmet, and more.

Gamified online learning has become quite popular in the past couple of years, what are some old school methods that you still recommend and use?

Consulting the docs, if you can call that old school. I often visit the MDN Web Docs to learn some aspect of CSS or JS. The articles are detailed, with plenty of examples.

On occasion I find myself doing a deep dive into the W3C standards, though navigating the site can be tricky.

Same goes for any third-party library or framework you’re working with — read the docs!

What’s one thing you wish you knew when you first started to code?

I wish I knew git when I first started to code. Actually, I wish I knew git now.

It’s never too early to start version controlling your projects. Sign up for a free GitHub account, install GitHub’s client or learn a handful of basic git commands, and back up your code. You can opt for your code to be public if you’re comfortable with it, private if not. There’s no excuse.

Plus, years down the line when you’ve mastered your craft, you can get some entertainment value from looking back at your old code.

Whose work do you admire right now? Who should more people be paying attention to?

I’m curious how other people answer this. I feel like I’m out of the loop on this one.

But since you asked, I will say that when it comes to web design with high stakes, the teams at Stripe and Apple have been the gold standard for years. I’ll browse their sites and get inspired by the many small, almost imperceptible details that add up to something magical. Or something in your face that blows my mind.

On a more personal front, there’s the art of Diana Smith and Ben Evans, which pushes the boundaries of what’s possible with pure CSS. I love how Lynn Fisher commits to weird side projects. And I admire the approachability of Josh Comeau’s writings on technical subjects.

What’s a part of your journey that many may not realize when they look at your resume or LinkedIn page?

My resume tells a cohesive story that connects the dots of my education and employment. As if there was a master plan that guided me to where I am.

The truth is I never had it all figured out. I tried some things I enjoyed, tried other things which I learned I did not, and discovered whole new industries that I didn’t even realize existed. On the whole, the journey has been rewarding, and I feel fortunate to be doing work right now that I love and feel passionate about. But that took time and is subject to change.

Some beginners may feel discouraged that they don’t have their career mapped out from A to Z, like everyone else seemingly does. But all of us are on our own journeys of self-discovery, even if the picture we paint for prospective employers, or family and friends, is one of a singular path.

What’s something you’ve realized since we’ve been in this pandemic? Tech-related or otherwise?

Outside of tech, I’ve realized how grateful I am for all the healthcare workers, teachers, caretakers, sanitation workers, and food service workers who put themselves at risk to keep things going. At times I got a glimpse of what happens without them and it wasn’t pretty.

Tech-related, the pandemic has accelerated a lot of tech trends by years or even decades. Not everything is as stark as, say, Blockbuster getting replaced by Netflix, but industries are irreversibly changing and new technology is making that happen. It really underscores how in order to survive and flourish, we as tech workers have to always be ready to learn and adapt in a fast-changing world.

Okay a random one — you’re stranded on a desert island with nothing but a smartphone. Which three apps could you not live without?

Assuming I’ll be stuck there for a while, I’d definitely need my podcasts. My podcast app of choice has long been Overcast. I’d load it up with some 99% Invisible and Planet Money. Although I’d probably only need a single episode of Hardcore History to last me before I got rescued.

I’d also have Simplenote for all my note-taking needs. When it comes to notes, I prefer the minimalist, low-friction approach of Simplenote to manage my to-dos and projects. Or count days and nights in this case.

Assuming I have bars, my last app is Reddit. The larger subs get most of the attention, but there are plenty of smaller ones with strong communities and thoughtful discussion. Just avoid the financial investing advice from there.

Last question — what’s next for you?

I’m putting the finishing touches on a new coding game called Disarray. You play a cleaning expert who organizes arrays of household objects using JavaScript methods like push, sort, splice, and map, sparking joy in the homeowner.

And planning for a sequel. Maybe a game about databases…

Thomas Park is a software developer living in Philly. You can keep up with his work right here and keep up with Mozilla on Twitter and Instagram. Tune into future articles in the Hacks: Decoded series on this very blog.

The post Hacks Decoded: Thomas Park, Founder of Codepip appeared first on Mozilla Hacks - the Web developer blog.

The Mozilla Blog: Welcome Imo Udom, Mozilla’s new Senior Vice President, Innovation Ecosystems

I am delighted to share that Imo Udom has joined Mozilla as Senior Vice President, Innovation Ecosystems. Imo brings a unique combination of strategy, technical and product expertise and an entrepreneurial spirit to Mozilla and our work to design, develop and deliver new products and services. 

While Mozilla is no stranger to innovation, this role is a new and exciting one for us. Imo’s focus won’t only be on new products that complement the work already happening in each of our product organizations, but also on creating the right systems to nurture new ideas within Mozilla and with like-minded people and organizations outside the company. I’m convinced that our brightest future comes from a combination of the products we offer directly and our connection to a broad ecosystem of creators, founders, and entrepreneurs in the world who are also trying to build a better internet. 

“People deserve technology that not only makes their lives better and easier, but technology that they can trust,” said Udom. “Mozilla is one of the few companies already doing this work through its core products. I am thrilled to join the team to help Mozilla and others with the same mission build next generation products that bring people the best of modern technology while still keeping their best interests at the center.”

Previously, Imo was the Chief Strategy and Product Officer at Outmatch where he was responsible for ensuring the business and product strategy delivered value to customers, while never losing sight of its mission to match people with purpose. Prior to Outmatch, Imo co-founded and served as CEO of Wepow, a video interviewing solution that reduces interviewing time and improves hiring quality. Imo helped grow Wepow from a small side-project in 2010 to a successful enterprise platform supporting hundreds of global brands that was later acquired by Outmatch. 

Beyond Imo’s impressive experience and background, it was his passion for learning and commitment to impacting the world in a positive way that made it clear that he was the right person for this work. Imo will report directly to me and will also sit on the steering committee. 

I look forward to working closely with Imo as we write the next chapter of innovation at Mozilla.

The post Welcome Imo Udom, Mozilla’s new Senior Vice President, Innovation Ecosystems appeared first on The Mozilla Blog.

SUMO Blog: What’s up with SUMO – October 2021

Hey folks,

As we enter October, I hope you’re all pumped up to welcome the last quarter of the year and to wrap up the projects we have for the remainder of it. With that spirit, let’s start by welcoming the following folks into our community.

Welcome on board!

  1. Welcome to the support forum crazy.cat, Losa, and Zipyio!
  2. Also, welcome to Ihor from Ukraine, Static_salt from the Netherlands, as well as Eduardo and hcasellato from Brazil. Thanks for your contribution to the KB localization!

Community news

  • If you’ve been hearing about Firefox Suggest and are confused about what exactly it is, please read this contributor forum thread to find out more and join our discussion about it.
  • Last month, we welcomed Firefox Focus into the Play Store Support program. We connected the app to Conversocial so now, Play Store Support contributors should be able to reply to Google Play Store reviews for Firefox Focus from the tool. We also prepared this guideline on how to reply to the reviews.
  • Learn more about Firefox 93 here.
  • Another warm welcome for our new content manager, Abby Parise! She made a quick appearance in our community call last month. So go ahead and watch the call if you haven’t!
  • Check out the following release notes from Kitsune during the previous period:

Community call

  • Watch the monthly community call if you haven’t. Learn more about what’s new in September!
  • Reminder: Don’t hesitate to join the call in person if you can. We try our best to provide a safe space for everyone to contribute. You’re more than welcome to lurk in the call if you don’t feel comfortable turning on your video or speaking up. If you feel shy to ask questions during the meeting, feel free to add your questions on the contributor forum in advance, or put them in our Matrix channel, so we can address them during the meeting.

Community stats

KB

KB pageviews (*)

Month       Page views   Vs previous month
Sep 2021    8,244,817    -2.57%

* The KB pageviews number is the total of KB pageviews for /en-US/ only.

Top 5 KB contributors in the last 90 days: 

  1. AliceWyman
  2. Michele Rodaro
  3. Pierre Mozinet
  4. K_alex
  5. Julie

KB Localization

Top 10 locale based on total page views

Locale Sep 2021 pageviews (*) Localization progress (per Sep 14) (**)
de 8.13% 100%
zh-CN 7.56% 100%
fr 6.59% 88%
es 6.10% 39%
pt-BR 5.96% 60%
ja 3.85% 54%
ru 3.77% 100%
it 2.22% 100%
pl 2.09% 87%
zh-TW 1.91% 5%
* Locale pageviews is the overall pageviews for the given locale (KB and other pages)

** Localization progress is the percentage of localized articles out of all KB articles per locale

Top 5 localization contributors in the last 90 days: 

  1. Milupo
  2. Michele Rodaro
  3. Jim Spentzos
  4. Valery Ledovskoy
  5. Soucet

Forum Support

Forum stats

Month Total questions Answer rate within 72 hrs Solved rate within 72 hrs Forum helpfulness
Sep 2021 2274 85.31% 24.32% 65.89%

Top 5 forum contributors in the last 90 days: 

  1. FredMcD
  2. Cor-el
  3. Seburo
  4. Jscher2000
  5. Sfhowes

Social Support

Twitter stats

Channel (Sep 2021)   Total conv   Conv interacted
@firefox             3318         785
@FirefoxSupport      290          240

Top contributors in Q3 2021

  1. Christophe Villeneuve
  2. Felipe Koji
  3. Andrew Truong

Play Store Support

We don’t have enough data for the Play Store Support yet. However, you can check out the overall Respond Tool metrics here.

Product updates

Firefox desktop

  • FX Desktop 94 (Nov 2)
    • Monochromatic Themes (Personalize Fx by opting into a polished monochromatic theme from a limited set)
    • Avoid interruptions when closing Firefox
    • Fx Desktop addition to Windows App store
    • Video Playback testing on MacOS: (Decrease power consumption during full screen playback)

Firefox mobile

Major Release 2 Mobile (Nov 2)

Area Feature Android IOS Focus
Firefox Home Jump Back in (Open Tabs) X X
Recently saved/Reading List X
Recent bookmarks X X
Customize Pocket Articles X
Clutter Free Tabs Inactive Tabs X
Better Search History Highlights in Awesome bar X
Themes Settings Themes Settings X
  • Check out Android Beta, which has most of the major feature updates.
    • More features to come in FX Android V95/IOS V40 and beyond.

Other products / Experiments

  • Mozilla VPN V2.6 (Oct 20)
    • Multi-Account Containers: when used with Mozilla VPN on, MAC allows for even greater privacy by having separate WireGuard tunnels for each container. This will allow users to have tabs exit at different nodes in the same instance of the browser.
  • Firefox Relay Premium – launch (Oct 27)
    • Unlimited aliases
    • Create your own Domain name

Shout-outs!

  • Thanks to Selim and Chris for helping me with Turkish and Polish keywords for Conversocial.
  • Thanks to Wxie for helping recognize other zh-CN locale contributors, and for taking the lead. The team is lucky to have you as a locale leader!
  • Props to Julie for her video experiment in the KB and for sharing the stats with the rest of us. Thanks for bringing more colors to our Knowledge Base!
  • Thanks to Jefferson Scher for straightening out the Firefox Suggest confusion on Reddit. That definitely helps people understand the feature better.

If you know anyone that we should feature here, please contact Kiki and we’ll make sure to add them in our next edition.

Useful links:

Blog of Data: This Week in Glean: Designing a telemetry collection with Glean

(“This Week in Glean” is a series of blog posts that the Glean Team at Mozilla is using to try to communicate better about our work. They could be release notes, documentation, hopes, dreams, or whatever: so long as it is inspired by Glean.) All “This Week in Glean” blog posts are listed in the TWiG index.

Whenever I get a chance to write about Glean, I am usually writing about some aspect of working on Glean. This time around I’m going to turn that on its head by sharing my experience working with Glean as a consumer with metrics to collect, specifically in regard to designing a Nimbus health metrics collection. This post is about sharing what I learned from the experience and what I found to be the most important considerations when designing a telemetry collection.

I’ve been helping develop Nimbus, Mozilla’s new experimentation platform, for a while now. It is one of many cross-platform tools written in Rust and it exists as part of the Mozilla Application Services collection of components. With Nimbus being used in more and more products we have a need to monitor its “health”, or how well it is performing in the wild. I took on this task of determining what we would need to measure and designing the telemetry and visualizations because I was interested in experiencing Glean from a consumer’s perspective.

So how exactly do you define the “health” of a software component? When I first sat down to work on this project, I had some vague idea of what this meant for Nimbus, but it really crystallized once I started looking at the types of measurements enabled by Glean. Glean offers different metric types designed to measure everything from text values, to counts of things in several forms, to events that show how things occur in the flow of the application. For Nimbus, I knew that we would want to track errors, as well as a handful of numeric measurements like how much memory we used and how long it takes to perform certain critical tasks.

As a starting point, I began thinking about how to record errors, which seemed fairly straightforward. The first thing I had to consider was exactly what it was we were measuring (the “shape” of the data), and what questions we wanted to be able to answer with it. Since we have a good understanding of the context in which each of the errors can occur, we really only wanted to monitor the counts of errors to know if they increase or decrease. Counting things: that’s one of the things Glean is really good at! So my choice of metric type came down to flexibility and organization. Since there are 20+ different errors that are interesting to Nimbus, we could have used a separate counter metric for each of them, but this starts to get a little burdensome when declaring them in the metrics.yaml file, which would require a separate entry for each. The other problem with using a separate counter for each error is that it adds a bit of complexity to writing SQL for analysis or a dashboard: a query for analyzing the errors would need every error metric in its select statement, and any new errors that are added would require the query to be modified as well.
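The organizational difference can be sketched in miniature, with plain Python standing in for the real metrics and SQL (the error names here are illustrative, not Nimbus’s actual error kinds):

```python
# Separate counters: each error kind is its own name, so every analysis
# has to enumerate all of them, and adding a new error kind means editing
# both the instrumentation and every query that aggregates errors.
db_error_count = 4
json_error_count = 1
network_error_count = 7
total_errors = db_error_count + json_error_count + network_error_count

# One labeled counter: a single name with a label dimension. New error
# kinds are just new labels, and aggregations or group-bys keep working
# without being rewritten.
errors = {"db_error": 4, "json_error": 1, "network_error": 7}
total_labeled = sum(errors.values())
grouped = sorted(errors.items(), key=lambda kv: kv[1], reverse=True)
```

The second shape is what lets a dashboard query stay fixed while the set of labels grows.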

Instead of distinct counters for each error, I chose to model recording Nimbus errors after how Glean records its own internal errors, by using a LabeledCounterMetric. This means that all errors are collected under the same metric name, but carry an additional property, a “label”. Labels are like sub-categories within that one metric. That makes it a little easier to instrument, keeps clutter down in the metrics.yaml file, and makes it a little easier to create useful dashboards for monitoring error rates. We want to end up with a chart of errors that lets us see if we start to see an unusual spike or change in the trends, something like this:

A line graph showing multiple colored lines and their changes over time

We expect some small amount of errors (these are computers, after all), but we can easily establish a baseline for each type of error, which allows us to configure some alerts if things are too far outside expectations.
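Declaring such a labeled counter is compact in metrics.yaml. The following is only a sketch of what a declaration might look like; the category and metric names, labels, and URLs are placeholders, not Nimbus’s actual entry:

```yaml
# Hypothetical declaration; names, labels, and links are illustrative only.
nimbus_health:
  errors:
    type: labeled_counter
    description: >
      Counts of errors encountered by Nimbus, partitioned by error kind.
    labels:
      - database_error
      - json_error
      - network_error
    bugs:
      - https://bugzilla.mozilla.org/show_bug.cgi?id=0000000
    data_reviews:
      - https://example.com/data-review
    notification_emails:
      - nimbus-team@example.com
    expires: never
```

One entry covers every error kind; adding a new kind is a one-line label addition rather than a whole new metric.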

The next set of things I wanted to know about Nimbus were in the area of performance. We want to detect regressions or problems with our implementation that might not show up locally for a developer in a debug build, so we measure these things at scale to see what performance looks like for everyone using Nimbus. Once again, I needed to think about what exactly we wanted to measure, and what sort of questions we wanted to be able to answer with the data. Since the performance data we were interested in was a measurement of time or memory, we wanted to be able to measure samples from a client periodically and then look at how different measurements are distributed across the population. We also needed to consider exactly when and where we wanted to measure these things. For instance, was it more important or more accurate to measure the database size as we were initializing, or deinitializing? Finally, I knew we would be interested in how that distribution changes over time, so we needed some way to represent this by date or by version when we analyzed the data.

Glean gives us some great metric types to measure samples of things like time and size such as TimingDistributionMetrics and MemoryDistributionMetrics. Both of these metric types allow us to specify a resolution that we care about so that they can “bucket” up the samples into meaningfully sized chunks to create a sparse payload of data to keep things lean. These metric types also provide a “sum” so we can calculate an average from all the samples collected. When we sum these samples across the population, we end up with a histogram like the following, where measurements collected are on the x-axis, and the counts or occurrences of those measurements on the y-axis:

A histogram showing bell curve shaped data
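The bucketing idea can be sketched in plain Python. This is an illustration of log-scale bucketing in the spirit of Glean’s distribution metrics, not the SDK’s actual algorithm or API:

```python
import math
from collections import defaultdict

def bucket_for(sample, buckets_per_magnitude=8.0):
    """Map a positive sample to the lower edge of its log-scale bucket (sketch)."""
    if sample <= 0:
        return 0
    exponent = math.floor(math.log2(sample) * buckets_per_magnitude)
    return int(2 ** (exponent / buckets_per_magnitude))

def accumulate(samples):
    """Build a sparse {bucket: count} histogram plus the running sum."""
    histogram = defaultdict(int)
    total = 0
    for s in samples:
        histogram[bucket_for(s)] += 1
        total += s
    return dict(histogram), total

# Five timing samples (say, nanoseconds) collapse into three buckets; the
# payload stays sparse, and the recorded sum still lets us compute an
# average: total / number of samples.
hist, total = accumulate([120, 130, 3000, 3100, 3105])
```

Nearby samples share a bucket, so resolution stays meaningful at every magnitude while the payload stays small.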

This is a little limited because we can only look at the data at a single point in time along one dimension, whether that’s aggregated by time such as per day/week/year or aggregated on something else like the version of the Nimbus SDK or application. We can’t really see the change over time or version to see if something we added really impacted our performance. Ideally, we wanted to see how Nimbus performed compared to other versions or other weeks. When I asked around for good representations of this, it was suggested that a ridgeline chart would be a great visualization for this sort of data:

A ridgeline chart, represented as a series of histograms arranged to form visualization that looks like a mountain ridge

Ridgeline charts give us a great idea of how the distribution changes, but unfortunately I ran into a little setback when I found out that the tools we use don’t currently have a view like that, so I may be stuck in a bit of a compromise until it does. Here is another visualization example, this time with the data stacked on top of each other:

A series of histograms stacked on top of each other

Even though something like this is much harder to read than the ridgeline, we can still see some change from one version to the next; picking out the sequence just becomes much harder. So I’m still left with a bit of an issue in representing the performance data the way we wanted. I think it’s at least something that can be iterated on to be more usable in the future, perhaps using something similar to GLAM’s visualization of percentiles of a histogram.

To conclude, I really learned the value of planning and thinking about telemetry design before instrumenting anything. The most important things to consider when designing a collection are what you are measuring and what questions you will need to answer with the data. Both of those questions can affect not only which metric type you choose to represent your data, but also where you want to measure something. Thinking about what questions you want to answer ahead of time allows you to make sure that you are measuring the right things to answer those questions. Planning before instrumenting can also help you choose the right visualizations to make answering those questions easier, as well as letting you add things like alerts for when things aren’t quite right. So, take a little time to think about your telemetry collection ahead of instrumenting metrics, and don’t forget to validate the metrics once they are instrumented to ensure that they are, in fact, measuring what you think and expect. Plan ahead and I promise you, your data scientists will thank you.

Mozilla L10N: L10n Report: October Edition

October 2021 Report

Please note some of the information provided in this report may be subject to change as we are sometimes sharing information about projects that are still in early stages and are not final yet. 

Welcome!

New l10n-driver

Welcome eemeli, our new l10n-driver! He will be working on Fluent and Pontoon, and is part of our tech team along with Matjaž. We hope we can all connect soon so you can meet him.

New localizers

Katelem from Obolo locale. Welcome to localization at Mozilla!

Are you a locale leader and want us to include new members in our upcoming reports? Contact us!

New community/locales added

Obolo (ann) locale was added to Pontoon.

New content and projects

What’s new or coming up in Firefox desktop

A new major release (MR2) is coming for Firefox desktop with Firefox 94. The deadline to translate content for this version, currently in Beta, is October 24.

While MR2 is not as content heavy as MR1, there are changes to very visible parts of the UI, like the onboarding for both new and existing users. Make sure to check out the latest edition of the Firefox L10n Newsletter for more details, and instructions on how to test.

What’s new or coming up in mobile

Focus for Android and iOS have gone through a new refresh! This was done as part of our ongoing MR2 work – which has also covered Firefox for Android and iOS. You can read about all of this here.

Many of you have been heavily involved in this work, and we thank you for making this MR2 launch across all mobile products such a successful release globally.

We are now starting our next iteration of MR2 releases. We are still currently working on scoping out the mobile work for l10n, so stay tuned.

One thing to note is that the l10n schedule dates for mobile should now be aligned across product operating systems: one l10n release cycle for all of Android, and another release cycle for all of iOS. As always, Pontoon deadlines remain your source of truth for this.

What’s new or coming up in web projects
Firefox Accounts

The Firefox Accounts team has been working on transitioning from Gettext to Fluent. They are in the middle of migrating server.po to auth.ftl, the component that handles the email feature. Unlike previous migrations where the localized strings were not part of the plan, this time the team wanted to include them as much as possible. The initial attempt didn’t go as planned due to multiple technical issues. The new auth.ftl file made a brief appearance in Pontoon and is now disabled. They will give it another go after confirming that the identified issues have been addressed and tested.

Legal docs

All the legal docs are translated by our vendor. Some of you have reported translation errors or docs that are out of sync with the English source. If you spot any issues (wrong terminology, typos, or missing content, to name a few), you can file a bug. Generally we do not encourage localizers to provide translations because of the nature of the content. For minor changes, you can create a PR and ask for a peer review to confirm your change before it is merged. If the overall quality is bad, we will ask the vendor to change translators.

Please note, the locale support for legal docs varies from product to product. Starting this year, the number of supported locales has also decreased to under 20. Some of the previously localized docs are no longer updated. This might be the reason you see your language out of sync with the English source.

Mozilla.org

Five more mobile-specific pages were added since our last report. If you need to prioritize them, please give higher priority to the Focus, Index and Compare pages.

What’s new or coming up in SuMo

Lots of new stuff since our last update here in June. Here are some of the highlights:

  • We’re working on refreshing the onboarding experience in SUMO. The content preparation was mostly done in Q3, and the implementation is expected this quarter, before the end of the year.
  • Catch up on what’s new in our support platform by reading our release notes in Discourse. One highlight of the past quarter is that we integrated the Zendesk form for Mozilla VPN into SUMO. We can’t detect subscribers at the moment, so anyone can file a ticket for now, but we’re hoping to add that capability in the future.
  • Firefox Focus has joined our Play Store support. Contributors should now be able to reply to Google Play Store reviews for Firefox Focus from Conversocial. We also created this guideline to help contributors compose replies to Firefox Focus reviews.
  • We welcomed 2 new team members in Q3. Joe, our Support Operations Manager, is now taking care of the premium customer support experience. And Abby, the new Content Manager, is our team’s latest addition; she will be working closely with Fabi and our KB contributors to improve our help content.

You’re always welcome to join our Matrix or the contributor forum to talk more about anything related to support!

What’s new or coming up in Pontoon

Submit your ideas and report bugs via GitHub

We have enabled GitHub Issues in the Pontoon repository and made it the new place for tracking bugs, enhancements and tasks for Pontoon development. At the same time, we have disabled the Pontoon Component in Bugzilla, and imported all open bugs into GitHub Issues. Old bugs are still accessible on their existing URLs. For reporting security vulnerabilities, we’ll use a newly created component in Bugzilla, which allows us to hide security problems from the public until they are resolved.

Using GitHub Issues will make it easier for the development team to resolve bugs via commit messages and put them on a Roadmap, which will also be moved to GitHub soon. We also hope GitHub Issues will make suggesting ideas and reporting issues easier for the users. Let us know if you run into any issues or have any questions!

More improvements to the notification system coming

As part of our H1 effort to better understand how notifications are being used, the following features received the most votes in a localizer survey:

  • Notifications for new strings should link to the group of strings added.
  • For translators and locale managers, get notifications when there are pending suggestions to review.
  • Add the ability to opt-out of specific notifications.

Thanks to eemeli, the first item was resolved back in August. The second feature has also been implemented, which means reviewers will receive weekly notifications about newly created unreviewed suggestions within the last week. Work on the last item – ability to opt-out of specific notification types – has started.

Newly published localizer facing documentation

We published two new posts in the Localization category on Discourse:

Events

  • Michal Stanke shared his experience as a volunteer in the open source community at the annual International Translation Day event hosted by WordPress! Way to go!
  • Want to showcase an event coming up that your community is participating in? Reach out to any l10n-driver and we’ll include that (see links to emails at the bottom of this report)

Useful Links

Questions? Want to get involved?

  • If you want to get involved, or have any questions about l10n, reach out to:

Did you enjoy reading this report? Let us know how we can improve by reaching out to any one of the l10n-drivers listed above.

The Mozilla Blog: Hacked! Unravelling a data breach

This is a story about paying a steep price for a pair of cheap socks.

The first loose thread in June

One Tuesday morning as I* was having my coffee and toast before kicking off the work day, I got a text from my credit card company alerting me to a suspected fraud charge. Of course I was alarmed and started looking into it right away. 

I messaged my husband: Are you getting any fraud charge alerts? Nope, just me. 

Soon after, I received an email order confirmation (then another and another) for electronic goods I didn’t purchase. The email receipt showed my home billing address, with a different shipping address, which happened to be the location of a hotel in my city. I found it odd and scary that someone local had my credit card number matched to my actual name, home address and email address. I imagined them holed up in a hotel room opening boxes of stolen goods and reselling them on Craigslist. But wouldn’t the thief realize I (and other victims) would get these email messages? 

Wait. Was someone using my email account?! 

Hoping it wasn’t too late, I sprang into action, quickly changing my email password and verifying that my account wasn’t logged into any unfamiliar devices. Everything seemed okay there. I wondered if it could have been a mashup of data breaches and scrapes that allowed a thief to merge the information into a more complete picture. The thought crossed my mind that a keylogger was installed on my computer. 

Meanwhile, my credit card company canceled my cards and set about issuing new ones. As it turned out, what had actually happened didn’t target me personally — and here’s what I was able to weave together.

Backstitch to May

Like most people on Instagram, I love to see friends’ pics and scroll through other fun visual content. I don’t mind ads for movies and shows (hello entertaining videos that fill my playlist) or for clothes and accessories (hello virtual window shopping). One ad kept reappearing for custom print socks. So cute. I caved and ordered a pair of these socks for my husband for Father’s Day, featuring our kids’ faces. They arrived, as adorable as could be, and we all had a good laugh when he opened them.

Life went on. Then something else happened.

A tangled knot in July

Apparently the would-be credit card thief had also used FedEx for shipping, and when my credit card was declined, FedEx reverted to billing the shipper, which was the thief posing as me with my real address.

When I received the first invoice in the mail from FedEx, I called my credit card company, who assured me that the charge had been flagged as fraud. The representative advised me to ignore the letter, saying that FedEx knew the charge wasn’t mine. But the second letter from FedEx made it clear they weren’t giving up on collecting the fee billed to my “account,” even though the real me doesn’t have one.

When I called FedEx and gave the case number listed on the letter, the representative started asking what I felt were increasingly privacy-invading questions (wouldn’t the case number be enough information?), and I was worried this was a phishing expedition. Eventually, after a few more phone calls I was able to get this resolved. I think. No more letters. Fees removed. Still, it was unnerving.

Knitting the threads together in September

The email subject line caught my attention: Security Incident Notification. The e-commerce host for the adorable sock company I ordered from in May had been compromised. They wrote that:

The hosting company, by their own admission, forgot to enable one of the most basic security features, and this security oversight allowed our business to be attacked by an unknown 3rd party using a malicious file, allowing them to access some payment information.

The hosting company’s failure in ensuring traditional security and data-protection measures allowed the unknown 3rd-party to skim the information as it was entered.

So it appears the alarms that went off in June were related to a purchase I made in May. I can’t be sure that my data isn’t still out there, but at least my credit card has been replaced. I did check my credit report recently to make sure there wasn’t any suspicious activity.

The takeaway

I can only assume that the fraudsters had a huge dump of data, and they figured they could get away with theft from some people who wouldn’t even notice the charges. If the credit card hadn’t flagged the fraud, they might have gone unnoticed by someone who doesn’t review their monthly bill. It’s mildly inconvenient to have credit cards reissued, and it can also create problems with automatic bill-pays and urgent needs. Taking care of the fallout took time and effort. I’m assuming this is over, but maybe it’s not.

* * * * *

Truthfully, it could have been much worse. We can’t predict the future, but we can be prepared in case our personal information is ever part of a data breach. Luke Crouch, a cybersecurity expert with Mozilla, recommends people do the following when faced with a data breach:

  1. Lock down your email accounts by updating your passwords and setting up 2-factor authentication.
  2. Get a password manager.
  3. Use Firefox Monitor to see if your email has been part of any other breaches.

The bottom line: If you get snagged in a data breach, tie up any loose threads quickly to protect yourself, and stay on top of monitoring your accounts for suspicious activity.


*Ed note: This person’s name has been removed to protect their privacy.

At Mozilla, we work towards creating a safe and joyful Internet experience every day. That’s why this year for Cyber Security Awareness month, we’ll be featuring privacy and security experts as they weigh in on personal stories of cybercrime and more. Check back each week in October for a new story and expert advice on how to protect yourself online. In the meantime, kick start your own cyber security journey with products designed to keep you safe online, including Mozilla VPN, Firefox Monitor and Firefox Relay.


The post Hacked! Unravelling a data breach appeared first on The Mozilla Blog.

The Mozilla Blog: HTTPS and your online security

We have long advised Web users to look for HTTPS and the lock icon in the address bar of their favorite browser (Firefox!) before typing passwords or other private information into a website. These are solid tips, but it’s worth digging deeper into what HTTPS does and doesn’t do to protect your online security and what steps you need to take to be safer.

Trust is more than encryption

It’s true that looking for the lock icon and HTTPS will help you prevent attackers from seeing any information you submit to a website. HTTPS also prevents your internet service provider (ISP) from seeing what pages you visit beyond the top level of a website. That means they can see that you regularly visit https://www.reddit.com, for example, but they won’t see that you spend most of your time at https://www.reddit.com/r/CatGifs/. But while HTTPS does guarantee that your communication is private and encrypted, it doesn’t guarantee that the site won’t try to scam you.

Because here’s the thing: Any website can use HTTPS and encryption. This includes the good, trusted websites as well as the ones that are up to no good — the scammers, the phishers, the malware makers.

You might be scratching your head right now, wondering how a nefarious website can use HTTPS. You’ll be forgiven if you wonder in all caps HOW CAN THIS BE?

The answer is that the security of your connection to a website — which HTTPS provides — knows nothing about the information being relayed or the motivations of the entities relaying it. It’s a lot like having a phone. The phone company isn’t responsible for scammers calling you and trying to get your credit card. You have to be savvy about who you’re talking to. The job of HTTPS is to provide a secure line, not guarantee that you won’t be talking to crooks on it.

That’s your job. Tough love, I know. But think about it. Scammers go to great lengths to trick you, and their motives largely boil down to one: to separate you from your money. This applies everywhere in life, online and offline. Your job is to not get scammed.

How do you spot a scam website?

Consider the uniform. It generally evokes authority and trust. If a legit-looking person in a spiffy uniform standing outside your bank says she works for the bank and offers to take your cash in and deposit it, would you trust her? Of course not. You’d go directly to the bank yourself. Apply that same skepticism online.

Since scammers go to great lengths to trick you, you can expect them to appear in a virtual uniform to convince you to trust them. “Phishing” is a form of identity theft that occurs when a malicious website impersonates a legitimate one in order to trick you into giving up sensitive information such as passwords, account details or credit card numbers. Phishing attacks usually come from email messages that attempt to lure you, the recipient, into updating your personal information on fake but very real-looking websites. Those websites may also use HTTPS in an attempt to boost their legitimacy in your eyes.

Here are some things you should do.

Don’t click suspicious links.

I once received a message telling me that my Bank of America account had been frozen, and I needed to click through to fix it. It looked authentic; however, I don’t have a BofA account. That’s what phishing is — casting a line to bait someone. If I did have a BofA account, I might have clicked through and been hooked. A safer approach would be to go directly to the Bank of America website, or give them a call to find out if the email was fake.

If you get an email that says your bank account is frozen / your PayPal account has a discrepancy / you have an unpaid invoice / you get the idea, and it seems legitimate, go directly to the source. Do not click the link in the email, no matter how convinced you are.

Stop for alerts.

Firefox has a built-in Phishing and Malware Protection feature that will warn you when a page you visit has been flagged as a bad actor. If you see an alert, which looks like this, click the “Get me out of here!” button.

HTTPS matters

Most major websites that offer a customer login already use HTTPS. Think: financial institutions, media outlets, stores, social media. But it’s not universal; not every website out there uses HTTPS.

With HTTPS-Only Mode in Firefox, the browser forces all connections to websites to use HTTPS. Enabling this mode guarantees that all of your connections to websites are upgraded to HTTPS and hence secure. Some websites only support HTTP, and those connections cannot be upgraded. If HTTPS-Only Mode is enabled and an HTTPS version of a site is not available, you will see a “Secure Connection Not Available” page. If you click Continue to HTTP Site, you accept the risk and will visit an HTTP version of the site. HTTPS-Only Mode will be turned off temporarily for that site.

It’s not difficult for sites to convert. The website owner needs to get a certificate from a certificate authority to enable HTTPS. In December 2015, Mozilla joined with Cisco, Akamai, EFF and University of Michigan to launch Let’s Encrypt, a free, automated, and open certificate authority, run for the public’s benefit.

HTTPS across the web is good for Internet Health because it makes a more secure environment for everyone. It provides encryption, so your communication stays private; integrity, so a site can’t be modified in transit; and authentication, so users know they’re connecting to the legit site and not some attacker. Lacking any one of these three properties can cause problems. More non-secure sites means more risk for the overall web.

If you come across a website that is not using HTTPS, send them a note encouraging them to get on board. Post on their social media or send them an email to let them know it matters: @favoritesite I love your site, but I noticed it’s not secure. Get HTTPS from @letsencrypt to protect your site and visitors. If you operate a website, encrypting your site will make it more secure for you and your visitors, and contribute to the security of the web in the process.

In the meantime, share this article with your friends so they understand what HTTPS does and doesn’t do for their online security.

The post HTTPS and your online security appeared first on The Mozilla Blog.

hacks.mozilla.org: Lots to see in Firefox 93!

Firefox 93 comes with lots of lovely updates, including AVIF image format support, filling of XFA-based forms in its PDF viewer, and protection against insecure downloads by blocking downloads that rely on insecure connections.

Web developers are now able to use static initialization blocks within JavaScript classes, and there are some Shadow DOM and Custom Elements updates. The SHA-256 algorithm is now supported for HTTP Authentication using digests. This allows much more secure authentication than previously available using the MD5 algorithm.

This blog post provides merely a set of highlights; for all the details, check out the following:

AVIF Image Support

The AV1 Image File Format (AVIF) is a powerful, open source, royalty-free file format. AVIF has the potential to become the “next big thing” for sharing images in web content. It offers state-of-the-art features and performance, without the encumbrance of complicated licensing and patent royalties that have hampered comparable alternatives.

It offers much better compression than the PNG or JPEG formats, with support for higher color depths and transparency. As support is not yet comprehensive, you should include fallbacks to formats with better browser support (e.g. using the <picture> element).
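A fallback can look like the following (the file names are placeholders); the browser picks the first source it supports and otherwise falls back to the plain `<img>`:

```html
<picture>
  <!-- Served only to browsers that support AVIF -->
  <source srcset="photo.avif" type="image/avif" />
  <!-- JPEG fallback for everything else -->
  <img src="photo.jpg" alt="A scenic photo" />
</picture>
```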

Read more about the AVIF image format in the Image file type and format guide on MDN.

Static initialization blocks

Support for static initialization blocks in JavaScript classes is now available in Firefox 93. This enables more flexibility as it allows developers to run blocks of code when initializing static fields. This is handy if you want to set multiple fields from a single value or evaluate statements.

You can have multiple static blocks within a class and they come with their own scope. As they are declared within a class, they have access to a class’s private fields. You can find more information about static initialization blocks on MDN.
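A minimal sketch of the feature (the class and field names here are invented for illustration): the block runs once when the class is evaluated, can derive several static fields from one source, and can read the class’s private static fields.

```javascript
class ColorScheme {
  // Private static field, readable from the static block below.
  static #raw = { background: "#1c1b22", foreground: "#fbfbfe" };
  static names;
  static css;
  // Static initialization block: runs once, when the class is evaluated.
  static {
    // Set multiple static fields from a single source object.
    ColorScheme.names = Object.keys(ColorScheme.#raw);
    ColorScheme.css = Object.entries(ColorScheme.#raw)
      .map(([name, value]) => `--${name}: ${value};`)
      .join(" ");
  }
}
```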

Custom Elements & Shadow DOM

In Firefox 92 the Imperative Slotting API was implemented, giving developers more control over assigning slots within a custom element. Firefox 93 includes support for the slotchange event, which fires when the nodes within a slot change.

Also implemented in Firefox 93 is the HTMLElement.attachInternals() method. This returns an instance of ElementInternals, allowing control over an HTML element’s internal features. The ElementInternals.shadowRoot property was also added, meaning developers can gain access to the shadow root of elements, even if they themselves didn’t create the element.

If you want to learn more about Custom Elements and the Shadow DOM, check out MDN’s guides on the topics.

Other highlights

A few other features worth noting include:

The post Lots to see in Firefox 93! appeared first on Mozilla Hacks - the Web developer blog.

hacks.mozilla.org: Implementing form filling and accessibility in the Firefox PDF viewer

Intro

Last year, during lockdown, many discovered the importance of PDF forms when having to deal remotely with administrations and large organizations like banks. Firefox supported displaying PDF forms, but it didn’t support filling them: users had to print them, fill them by hand, and scan them back to digital form. We decided it was time to reinvest in the PDF viewer (PDF.js) and support filling PDF forms within Firefox to make our users’ lives easier.

While we invested more time in the PDF viewer, we also went through the backlog of work and prioritized improving the accessibility of our PDF reader for users of assistive technologies. Below we’ll describe how we implemented the form support, improved accessibility, and made sure we had no regressions along the way.

Brief Summary of the PDF.js Architecture

Overview of the PDF.js Architecture

To understand how we added support for forms and tagged PDFs, it’s first important to understand some basics about how the PDF viewer (PDF.js) works in Firefox.

First, PDF.js will fetch and parse the document in a web worker. The parsed document will then generate drawing instructions. PDF.js sends them to the main thread and draws them on an HTML5 canvas element.

Besides the canvas, PDF.js potentially creates three more layers that are displayed on top of it. The first layer, the text layer, enables text selection and search. It contains span elements that are transparent and line up with the text drawn below them on the canvas. The other two layers are the Annotation/AcroForm layer and the XFA form layer. They support form filling and we will describe them in more detail below.

Filling Forms (AcroForms)

AcroForms are one of two types of forms that PDF supports, the most common type of form.

AcroForm structure

Within a PDF file, the form elements are stored in the annotation data. Annotations in PDF are separate elements from the main content of a document. They are often used for things like taking notes on a document or drawing on top of it. AcroForm annotation elements support user input similar to HTML inputs, e.g. text fields, check boxes and radio buttons.

AcroForm implementation

In PDF.js, we parse a PDF file and create the annotations in a web worker. Then, we send them out from the worker and render them in the main process using HTML elements inserted in a div (annotation layer). We render this annotation layer, composed of HTML elements, on top of the canvas layer.

The annotation layer works well for displaying the form elements in the browser, but it was not compatible with the way PDF.js supports printing. When printing a PDF, we draw its contents on a special printing canvas, insert it into the current document and send it to the printer. To support printing form elements with user input, we needed to draw them on the canvas.

By inspecting (with the help of the qpdf tool) the raw PDF data of forms saved using other tools, we discovered that we needed to save the appearance of a filled field by using some PDF drawing instructions, and that we could support both saving and printing with a common implementation.

To generate the field appearance, we needed to get the values entered by the user. We introduced an object called annotationStorage to store those values by using callback functions in the corresponding HTML elements. The annotationStorage is then passed to the worker when saving or printing, and the values for each annotation are used to create an appearance.
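The shape of that store can be sketched roughly as below; the method names are illustrative, not PDF.js’s exact API. Field callbacks write values in, and a plain-data snapshot is handed to the worker for saving or printing:

```javascript
// Simplified sketch of a key/value store for user-entered form values.
class AnnotationStorage {
  constructor() {
    this._storage = new Map();
  }
  // Return the stored value for a field, or a default if it was never touched.
  getValue(fieldId, defaultValue) {
    return this._storage.has(fieldId)
      ? this._storage.get(fieldId)
      : defaultValue;
  }
  // Called from an HTML form element's input/change callback.
  setValue(fieldId, value) {
    this._storage.set(fieldId, value);
  }
  // Plain-object snapshot passed to the worker when saving or printing,
  // where it is used to generate each annotation's appearance.
  get serializable() {
    return Object.fromEntries(this._storage);
  }
}
```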

Example PDF.js Form Rendering

On top, a filled form in Firefox; on the bottom, the printed PDF opened in Evince.

Safely Executing JavaScript within PDFs

Thanks to our Telemetry, we discovered that many forms contain and use embedded JavaScript code (yes, that’s a thing!).

JavaScript in PDFs can be used for many things, but is most commonly used to validate data entered by the user or automatically calculate formulas. For example, in this PDF, tax calculations are performed automatically starting from user input. Since this feature is common and helpful to users, we set out to implement it in PDF.js.

The alternatives

From the start of our JavaScript implementation, our main concern was security. We did not want PDF files to become a new vector for attacks. Embedded JS code must be executed when a PDF is loaded or on events generated by form elements (focus, input, …).

We investigated using the following:

  1. JS eval function
  2. JS engine compiled in WebAssembly with emscripten
  3. Firefox JS engine ComponentUtils.Sandbox

The first option, while simple, was immediately discarded since running untrusted code in eval is very unsafe.

Option two, using a JS engine compiled with WebAssembly, was a strong contender since it would work with the built-in Firefox PDF viewer and the version of PDF.js that can be used in regular websites. However, it would have been a large new attack surface to audit. It would have also considerably increased the size of PDF.js and it would have been slower.

The third option, sandboxes, is a feature exposed to privileged code in Firefox that allows JS execution in a special isolated environment. The sandbox is created with a null principal, which means that everything within the sandbox can only be accessed by it and can only access other things within the sandbox itself (and by privileged Firefox code).

Our final choice

We settled on using a ComponentUtils.Sandbox for the Firefox built-in viewer. ComponentUtils.Sandbox has been used for years now in WebExtensions, so this implementation is battle tested and very safe: executing a script from a PDF is at least as safe as executing one from a normal web page.

For the generic web viewer (where we can only use standard web APIs, so we know nothing about ComponentUtils.Sandbox) and the pdf.js test suite we used a WebAssembly version of QuickJS (see pdf.js.quickjs for details).

The implementation of the PDF sandbox in Firefox works as follows:

  • We collect all the fields and their properties (including the JS actions associated with them) and then clone them into the sandbox;
  • At build time, we generate a bundle with the JS code to implement the PDF JS API (totally different from the web API we are accustomed to!). We load it in the sandbox and then execute it with the data collected during the first step;
  • In the HTML representation of the fields we added callbacks to handle the events (focus, input, …). The callbacks simply dispatch them into the sandbox through an object containing the field identifier and linked parameters. We execute the corresponding JS actions in the sandbox using eval (it’s safe in this case: we’re in a sandbox). Then, we clone the result and dispatch it outside the sandbox to update the states in the HTML representations of the fields.

We decided not to implement the PDF APIs related to I/O (network, disk, …) to avoid any security concerns.
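The dispatch pattern in the steps above can be sketched as follows. This is purely illustrative (not PDF.js’s real code, and the JSON round-trip stands in for structured cloning): events are serialized, handed to an isolated evaluator, and only plain data comes back out.

```javascript
// Illustrative sketch: dispatching a form-field event into an isolated
// evaluator and cloning the result back out so no sandbox objects leak.
function makeDispatcher(sandboxEval) {
  return function dispatchEvent(fieldId, name, value) {
    const event = { id: fieldId, name, value };
    // The sandboxed PDF JS API decides how to react to the event.
    const result = sandboxEval(event);
    // JSON round-trip as a stand-in for structured cloning: only plain
    // data survives, never functions or live sandbox references.
    return JSON.parse(JSON.stringify(result));
  };
}

// A toy "sandbox": uppercases text input, as a validation action might.
const dispatch = makeDispatcher((event) => ({
  id: event.id,
  value: String(event.value).toUpperCase(),
}));
```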

Yet Another Form Format: XFA

Our Telemetry also informed us that another type of PDF forms, XFA, was fairly common. This format has been removed from the official PDF specification, but many PDFs with XFA still exist and are viewed by our users so we decided to implement it as well.

The XFA format

The XFA format is very different from what is usually found in PDF files. A normal PDF is typically a list of drawing commands, with all layout statically defined by the PDF generator. XFA, however, is much closer to HTML and has a more dynamic layout that the PDF viewer must generate itself. In reality XFA is a totally different format that was bolted onto PDF.

The XFA entry in a PDF contains multiple XML streams: the most important being the template and datasets. The template XML contains all the information required to render the form: it contains the UI elements (e.g. text fields, checkboxes, …) and containers (subform, draw, …) which can have static or dynamic layouts. The datasets XML contains all the data used by the form itself (e.g. text field content, checkbox state, …). All these data are bound into the template (before layout) to set the values of the different UI elements.

Example Template
<template xmlns="http://www.xfa.org/schema/xfa-template/3.6/">
  <subform>
    <pageSet name="ps">
      <pageArea name="page1" id="Page1">
        <contentArea x="7.62mm" y="30.48mm" w="200.66mm" h="226.06mm"/>
        <medium stock="default" short="215.9mm" long="279.4mm"/>
      </pageArea>
    </pageSet>
    <subform>
      <draw name="Text1" y="10mm" x="50mm" w="200mm" h="7mm">
        <font size="15pt" typeface="Helvetica"/>
        <value>
          <text>Hello XFA &amp; PDF.js world!</text>
        </value>
        </draw>
    </subform>
  </subform>
</template>
Output From Template

Rendering of XFA Document

The XFA implementation

In PDF.js we already had a pretty good XML parser to retrieve metadata about PDFs: it was a good start.

We decided to map every XML node to a JavaScript object, whose structure is used to validate the node (e.g. possible children and their different numbers). Once the XML is parsed and validated, the form data needs to be bound in the form template and some prototypes can be used with the help of SOM expressions (kind of XPath expressions).
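The validation against "possible children and their different numbers" can be sketched as follows. The schema entries below are invented to mirror the template example earlier in the post; they are not the real pdf.js tables.

```javascript
// Illustrative node validation: each element name maps to its allowed
// children and the maximum number of occurrences of each. These entries
// are made up for the example.
const allowedChildren = {
  template: { subform: Infinity },
  subform: { subform: Infinity, draw: Infinity, pageSet: 1 },
  draw: { font: 1, value: 1 },
  value: { text: 1 },
};

function validateNode(node) {
  const allowed = allowedChildren[node.name] || {};
  const counts = Object.create(null);
  for (const child of node.children || []) {
    const max = allowed[child.name];
    if (max === undefined) return false;        // unexpected child element
    counts[child.name] = (counts[child.name] || 0) + 1;
    if (counts[child.name] > max) return false; // too many occurrences
    if (!validateNode(child)) return false;     // recurse into subtree
  }
  return true;
}

validateNode({
  name: "draw",
  children: [{ name: "font" }, { name: "value", children: [{ name: "text" }] }],
}); // → true
```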

The layout engine

In XFA, we can have different kinds of layouts and the final layout depends on the contents. We initially planned to piggyback on the Firefox layout engine, but we discovered that unfortunately we would need to lay everything out ourselves because XFA uses some layout features which don’t exist in Firefox. For example, when a container is overflowing, the extra contents can be put in another container (often on a new page, but sometimes also in another subform). Moreover, some template elements don’t have any dimensions; these must be inferred from their contents.

In the end we implemented a custom layout engine: we traverse the template tree from top to bottom and, following the layout rules, check if an element fits into the available space. If it doesn’t, we flush all the elements laid out so far into the current content area, and we move to the next one.
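The flush-and-advance loop can be sketched like this. It is a deliberately tiny model: the real engine also handles widths, overflow targets, and the inferred dimensions mentioned above.

```javascript
// Toy layout pass: each element has only a height, and each content area
// has a fixed height. When an element would overflow, flush what has been
// laid out so far and continue in the next area.
function layout(elements, areaHeight) {
  const areas = [];
  let current = [];
  let used = 0;
  for (const el of elements) {
    if (used + el.height > areaHeight && current.length > 0) {
      areas.push(current); // flush the elements laid out so far
      current = [];
      used = 0;
    }
    current.push(el);
    used += el.height;
  }
  if (current.length > 0) areas.push(current);
  return areas;
}

layout([{ height: 40 }, { height: 50 }, { height: 30 }], 80);
// → two content areas: the first holds the 40mm element,
//   the second the 50mm and 30mm elements
```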

During layout, we convert all the XML elements into JavaScript objects with a tree structure. Then, we send them to the main process to be converted into HTML elements and placed in the XFA layer.

The missing font problem

As mentioned above, the dimensions of some elements are not specified. We must compute them ourselves based on the font used in them. This is even more challenging because sometimes fonts are not embedded in the PDF file.

Not embedding fonts in a PDF is considered bad practice, but in reality many PDFs do not include some well-known fonts (e.g. the ones shipped by Acrobat or Windows: Arial, Calibri, …) as PDF creators simply expected them to be always available.

To have our output more closely match Adobe Acrobat, we decided to ship the Liberation fonts and glyph widths of well-known fonts. We used the widths to rescale the glyph drawing to have compatible font substitutions for all the well-known fonts.

Comparing glyph rescaling

On the left: default font without glyph rescaling. On the right: Liberation font with glyph rescaling to emulate MyriadPro.
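The width-based substitution idea boils down to a per-glyph horizontal scale factor. The metric values below are invented for illustration; real font programs store advance widths per glyph in their metric tables.

```javascript
// Sketch of glyph rescaling for font substitution: scale each substitute
// glyph horizontally so its advance width matches the metrics of the
// missing font. The width values here are made up for the example.
const targetWidths = { A: 667, B: 611 };     // metrics of the missing font
const substituteWidths = { A: 722, B: 667 }; // metrics of the shipped Liberation font

function glyphScale(glyph) {
  // Horizontal scale factor applied when drawing the substitute glyph,
  // so text occupies the same width the original font would have used.
  return targetWidths[glyph] / substituteWidths[glyph];
}
```

With each glyph drawn at its scale factor, line breaks and field overflow computations stay consistent with what the PDF author saw in Acrobat, even though a different font renders the text.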

The result

In the end the result turned out quite good: for example, you can now open PDFs such as 5704 – APPLICATION FOR A FISH EXPORT LICENCE in Firefox 93!

Making PDFs accessible

What is a Tagged PDF?

Early versions of PDF were not a friendly format for accessibility tools such as screen readers. This was mainly because, within a document, all text on a page is more or less absolutely positioned and there’s no notion of a logical structure such as paragraphs, headings or sentences. There was also no way to provide a text description of images or figures. For example, some pseudo code for how a PDF may draw text:

showText("This", 0 /*x*/, 60 /*y*/);
showText("is", 0, 40);
showText("a", 0, 20);
showText("Heading!", 0, 0);

This would draw the text as four separate lines, but a screen reader would have no idea that they were all part of one heading. To help with accessibility, later versions of the PDF specification introduced “Tagged PDF.” This allowed PDFs to define a logical structure that screen readers could then use. One can think of this as a concept similar to an HTML hierarchy of DOM nodes. Using the example above, one could add tags:

beginTag("heading 1");
showText("This", 0 /*x*/, 60 /*y*/);
showText("is", 0, 40);
showText("a", 0, 20);
showText("Heading!", 0, 0);
endTag("heading 1");

With the extra tag information, a screen reader knows that all of the lines are part of “heading 1” and can read it in a more natural fashion. The structure also allows screen readers to easily navigate to different parts of the document.

The above example is only about text, but tagged PDFs support many more features than this, e.g. alt text for images, table data, lists, etc.

How we supported Tagged PDFs in PDF.js

For tagged PDFs we leveraged the existing “text layer” and the browser’s built-in HTML ARIA accessibility features. We can illustrate this with a simple PDF that contains one heading and one paragraph. First, we generate the logical structure and insert it into the canvas:

<canvas id="page1">
  <!-- This content is not visible, 
  but available to screen readers   -->
  <span role="heading" aria-level="1" aria-owns="heading_id"></span>
  <span aria-owns="some_paragraph"></span>
</canvas>

In the text layer that overlays the canvas:

<div id="text_layer">
  <span id="heading_id">Some Heading</span>
  <span id="some_paragraph">Hello world!</span>
</div>

A screen reader would then walk the DOM accessibility tree in the canvas and use the `aria-owns` attributes to find the text content for each node. For the above example, a screen reader would announce:

Heading Level 1 Some Heading
Hello World!

For those not familiar with screen readers, having this extra structure also makes navigating around the PDF much easier: you can jump from heading to heading and read paragraphs without unneeded pauses.

Ensure there are no regressions at scale, meet reftests

Reference Test Analyzer

Crawling for PDFs

Over the past few months, we have built a web crawler to retrieve PDFs from the web and, using a set of heuristics, collect statistics about them (e.g. are they XFA? What fonts are they using? What formats of images do they include?).

We have also used the crawler with its heuristics to retrieve PDFs of interest from the “stressful PDF corpus” published by the PDF association, which proved particularly interesting as they contained many corner cases we did not think could exist.

With the crawler, we were able to build a large corpus of Tagged PDFs (around 32,000), PDFs using JS (around 1,900), and XFA PDFs (around 1,200), which we could use for manual and automated testing. Kudos to our QA team for going through so many PDFs! They now know everything about asking for a fishing license in Canada: life skills!

Reftests for the win

We did not only use the corpus for manual QA, but also added some of those PDFs to our list of reftests (reference tests).

A reftest is a test consisting of a test file and a reference file. The test file uses the pdf.js rendering engine, while the reference file doesn’t (to make sure it is consistent and can’t be affected by changes in the patch the test is validating). The reference file is simply a screenshot of the rendering of a given PDF from the “master” branch of pdf.js.

The reftest process

When a developer submits a change to the PDF.js repo, we run the reftests and ensure the rendering of the test file is exactly the same as the reference screenshot. If there are differences, we ensure that the differences are improvements rather than regressions.

After accepting and merging a change, we regenerate the references.
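At its core, the comparison step is a pixel-by-pixel equality check between two rendered screenshots. The sketch below is a minimal, invented illustration of that idea; a real harness also reports where the images differ so reviewers can judge improvement versus regression.

```javascript
// Minimal reftest-style comparison: two screenshots as RGBA byte arrays
// are equal only if every pixel matches exactly.
function compareScreenshots(test, reference) {
  if (test.length !== reference.length) {
    // Different dimensions: not comparable pixel-by-pixel.
    return { equal: false, differingPixels: -1 };
  }
  let differingPixels = 0;
  for (let i = 0; i < test.length; i += 4) { // RGBA: 4 bytes per pixel
    if (
      test[i] !== reference[i] ||
      test[i + 1] !== reference[i + 1] ||
      test[i + 2] !== reference[i + 2] ||
      test[i + 3] !== reference[i + 3]
    ) {
      differingPixels++;
    }
  }
  return { equal: differingPixels === 0, differingPixels };
}
```

The strictness of exact equality is what makes reftests both powerful and noisy: a one-pixel anti-aliasing difference fails the test just as a genuine rendering regression does, which is exactly the shortcoming discussed below.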

The reftest shortcomings

In some situations a test may have subtle differences in rendering compared to the reference due to, e.g., anti-aliasing. This introduces noise in the results, with “fake” regressions the developer and reviewer have to sift through. Sometimes, it is possible to miss real regressions because of the large number of differences to look at.

Another shortcoming of reftests is that they are often big. A regression in a reftest is not as easy to investigate as a failure of a unit test.

Despite these shortcomings, reftests are a very powerful regression prevention weapon in the pdf.js arsenal. The large number of reftests we have boosts our confidence when applying changes.

Conclusion

Support for AcroForms landed in Firefox v84. JavaScript execution in v88. Tagged PDFs in v89. XFA forms in v93 (tomorrow, October 5th, 2021!).

While all of these features have greatly improved form usability and accessibility, there are still more features we’d like to add. If you’re interested in helping, we’re always looking for more contributors and you can join us on element or github.

We also want to say a big thanks to two of our contributors, Jonas Jenwald and Tim van der Meij, for their ongoing help with the above projects.

The post Implementing form filling and accessibility in the Firefox PDF viewer appeared first on Mozilla Hacks - the Web developer blog.

Blog of DataMy first time experience at the SciPy conference

In July 2021, a few fellow Mozillians and I attended the SciPy conference with Mozilla as a diversity sponsor, meaning that our sponsorship went towards paying the stipend for the diversity speaker, Tess Tannenbaum. This was my first time attending a SciPy conference and also my first time supporting data science recruiting efforts at a conference. The conference showcased the latest open source Python projects advancing scientific computing. I was eager to meet the contributors of many commonly used data science Python packages and hear about new features in upcoming releases. I was excited about this opportunity, as I strongly believe that conference attendance is an extremely rewarding way to network and learn about industry trends. As a Data Scientist, my day-to-day work often involves using Python libraries such as scikit-learn, numpy and pandas to derive insights from data. It felt particularly close to heart for a technical and data science geek like me to learn about code developments and use cases from other enthusiasts in the industry.

One talk that I particularly enjoyed was on the topic of Time-to-Event Modeling in Python, led by Brian Kent and a few other data science experts. Time-to-Event Modeling is also referred to as survival analysis, which was traditionally used in biological research studies to predict lifespans. The speakers at the talk were contributors to some of the most popular survival analysis Python packages. For example, Lifelines is an introductory Python package that is a good starting point for survival analysis. Scikit-Survival is another package built on top of Scikit-learn, a commonly used package in machine learning. The focus of the talk was on how survival analysis can be useful in many different scenarios, such as customer analytics. There is also increasing usage of survival analysis in SaaS businesses, where it can be used to predict customer churn and help companies plan their retention strategies. I am curious how Mozilla can potentially apply survival analysis in ways that also respect data governance guidelines.

Like many other large group events in the past year, the conference was entirely virtual and used various platforms to host talks and engagement activities. In addition to Slack as a communication tool, the conference also used Airmeet and Gather Town this year. The various sessions, tutorials and recruiting booths were hosted in Airmeet. The more interactive talks took place in Gather Town, which I found quite entertaining and enjoyable. It is a game-like environment where everyone has a character that can walk around the virtual space; you can network or meet with others by walking up to their characters, and their video feeds appear as you approach. Conference organizers did a great job quickly adapting to hosting virtual gatherings and coordinating multiple tools to deliver a seamless experience.

When the SciPy conference happens again in 2022, I will dedicate more time to networking and attend more tutorials, hopefully in person. I am also hopeful that it will be an opportunity to meet some remote Mozilla colleagues face to face. Overall, the conference experience was definitely rewarding, as it is important to stay current with new developments and collaborate with other technical enthusiasts in the rapidly changing scientific computing industry.


Resources:

Mozillians sharing the 2021 SciPy Conference experience:

SciPy 2021 conference proceedings

SciPy 2021 YouTube Channel

hacks.mozilla.orgControl your data for good with Rally

Let’s face it, if you have ever used the internet or signed up for an online account, or even read a blog post like this one, chances are that your data has left a permanent mark on the interwebs and online services have exploited your data without your awareness for a very long time. 

The Fight for Privacy

The fight for privacy is compounded by the rise in misinformation and platforms like Facebook willingly sharing information that is untrustworthy, shutting down platforms like Crowdtangle and recently terminating the accounts of New York University researchers that built Ad Observer, an extension dedicated to bringing greater transparency to political advertising. We think a better internet is one where people have more control over their data. 

Contribute your data for good

In a world where data and AI are reshaping society, people currently have no tangible way to put their data to work for the causes they believe in. To address this, we built the Rally platform, a first-of-its-kind tool that enables you to contribute your data to specific studies and exercise consent at a granular level. Mozilla Rally puts you in control of your data while building a better Internet and a better society. 

Mozilla Rally

Like Mozilla, Rally is a community-driven open source project and we publish our code on GitHub, ensuring that it’s open-source and freely available for you to audit. Privacy, control and transparency are foundational to Rally. Participating is voluntary, meaning we won’t collect data unless you agree to it first, and we’ll provide you with a clear understanding of what we have access to at every step of the way.


With your help, we can create a safer, more transparent, and more equitable internet that protects people, not Big Tech. 

Interested?

Rally needs users and is currently available on Firefox. In the future, we will expand to other web browsers. We’re currently looking for users who are residents in the United States, age 19 and older. 

Protecting the internet and its users is hard work!  We’re also hiring to grow our Rally Team.


The post Control your data for good with Rally appeared first on Mozilla Hacks - the Web developer blog.

SUMO BlogIntroducing Abby Parise

Hi folks,

It’s with great pleasure that I introduce Abby Parise, who is the latest addition to the Customer Experience team. Abby is taking the role of Support Content Manager, so you’ll definitely see more of her in SUMO. If you were with us or have watched September’s community call, you might’ve seen her there.

Here’s a brief introduction from Abby:

Hi there! My name is Abby and I’m the new Support Content Manager for Mozilla. I’m a longtime Firefox user with a passion for writing compelling content to help users achieve their goals. I’m looking forward to getting to know our contributors and would love to hear from you on ideas to make our content more helpful and user-friendly!

Please join me to welcome Abby!

hacks.mozilla.orgTab Unloading in Firefox 93

Starting with Firefox 93, Firefox will monitor available system memory and, should it ever become so critically low that a crash is imminent, Firefox will respond by unloading memory-heavy but not actively used tabs. This feature is currently enabled on Windows and will be deployed later for macOS and Linux as well. When a tab is unloaded, the tab remains in the tab bar and will be automatically reloaded when it is next selected. The tab’s scroll position and form data are restored just like when the browser is restarted with the restore previous windows browser option.

On Windows, out-of-memory (OOM) situations are responsible for a significant number of the browser and content process crashes reported by our users. Unloading tabs allows Firefox to save memory, leading to fewer crashes, and avoids the associated interruption in using the browser.

We believe this may especially benefit people who are doing heavy browsing work with many tabs on resource-constrained machines. Or perhaps those users simply trying to play a memory-intensive game or using a website that goes a little crazy. And of course, there are the tab hoarders, (no judgement here). Firefox is now better at surviving these situations.

We have experimented with tab unloading on Windows in the past, but we could never find a satisfactory balance between decreasing the browser’s memory usage and annoying the user with the slight delay as a tab gets reloaded.

We have now approached the problem again by refining our low-memory detection and tab selection algorithm and narrowing the action to the case where we are sure we’re providing a user benefit: if the browser is about to crash. Recently we have been conducting an experiment on our Nightly channel to monitor how tab unloading affects browser use and the number of crashes our users encounter. We’ve seen encouraging results with that experiment. We’ll continue to monitor the results as the feature ships in Firefox 93.

With our experiment on the Nightly channel, we hoped to see a decrease in the number of OOM crashes hit by our users. However, after the month-long experiment, we found an overall significant decrease in browser crashes and content process crashes. Of those remaining crashes, we saw an increase in OOM crashes. Most encouragingly, people who had tab unloading enabled were able to use the browser for longer periods of time. We also found that average memory usage of the browser increased.

The latter may seem very counter-intuitive, but is easily explained by survivorship bias. Much like in the archetypal example of the Allied WWII bombers with bullet holes, browser sessions that had such high memory usage would have crashed and burned in the past, but are now able to survive by unloading tabs just before hitting the critical threshold.

The increase in OOM crashes, also very counter-intuitive, is harder to explain. Before tab unloading was introduced, Firefox already responded to Windows memory-pressure by triggering an internal memory-pressure event, allowing subsystems to reduce their memory use. With tab unloading, this event is fired after all possible unloadable tabs have been unloaded.

This may account for the difference. Another hypothesis is that it’s possible our tab unloading sometimes kicks in a fraction too late and finds the tabs in a state where they can’t even be safely unloaded any more.

For example, unloading a tab requires a garbage collection pass over its JavaScript heap. This needs some additional temporary storage that is not available, leading to the tab crashing instead of being unloaded but still saving the entire browser from going down.

We’re working on improving our understanding of this problem and the relevant heuristics. But given the clearly improved outcomes for users, we felt there was no point in holding back the feature.

When does Firefox automatically unload tabs?

When system memory is critically low, Firefox will begin automatically unloading tabs. Unloading tabs could disturb users’ browsing sessions so the approach aims to unload tabs only when necessary to avoid crashes. On Windows, Firefox gets a notification from the operating system (set up using CreateMemoryResourceNotification) indicating that the available physical memory is running low. The threshold for low physical memory is not documented, but appears to be around 6%. Once that occurs, Firefox starts periodically checking the commit space (MEMORYSTATUSEX.ullAvailPageFile).

When the commit space reaches a low-memory threshold, which is defined with the preference “browser.low_commit_space_threshold_mb”, Firefox will unload one tab, or if there are no unloadable tabs, trigger the Firefox-internal memory-pressure warning allowing subsystems in the browser to reduce their memory use. The browser then waits for a short period of time before checking commit space again and then repeating this process until available commit space is above the threshold.
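The control flow just described can be sketched as a loop. The real implementation lives in C++ inside Firefox; this JavaScript model only mirrors the logic, and the `sys` interface is invented for the example.

```javascript
// Sketch of the low-commit-space response loop: unload one tab per
// iteration, and fall back to the internal memory-pressure event when
// nothing can be unloaded. The `sys` object is a stand-in for the
// platform and browser internals.
function relieveMemoryPressure(sys, thresholdMb) {
  const actions = [];
  while (sys.availableCommitMb() < thresholdMb) {
    const tab = sys.pickUnloadableTab(); // least-recently-used first
    if (tab) {
      sys.unloadTab(tab);
      actions.push(`unload:${tab}`);
    } else {
      // No unloadable tabs left: let subsystems shrink their caches.
      sys.notifyMemoryPressure();
      actions.push("memory-pressure");
      break;
    }
    // The real browser waits a short period before re-checking commit
    // space; the loop condition models the re-check.
  }
  return actions;
}
```

One design point worth noting: checking commit space (rather than physical memory) is what makes the loop a predictor of genuine OOM crashes on Windows, for the reasons the next paragraphs explain.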

We found the checks on commit space to be essential for predicting when a real out-of-memory situation is happening. As long as there is still swap AND physical memory available, there is no problem. If we run out of physical memory and there is swap, performance will crater due to paging, but we won’t crash.

On Windows, allocations fail and applications will crash if there is low commit space in the system even though there is physical memory available because Windows does not overcommit memory and can refuse to allocate virtual memory to the process in this case. In other words, unlike Linux, Windows always requires commit space to allocate memory.

How do we end up in this situation? If some applications allocate memory but do not touch it, Windows does not assign physical memory to such untouched allocations, even though they still consume commit space. We have observed graphics drivers doing this, leading to low commit space while plenty of physical memory is available.

In addition, crash data we collected indicated that a surprising number of users with beefy machines were in this situation, some perhaps thinking that because they had a lot of memory in their machine, the Windows swap could be reduced to the bare minimum. You can see why this is not a good idea!

How does Firefox choose which tabs to unload first?

Ideally, only tabs that are no longer needed will be unloaded and the user will eventually restart the browser or close unloaded tabs before ever reloading them. A natural metric is to consider when the user has last used a tab. Firefox unloads tabs in least-recently-used order.

Tabs playing sound, using picture-in-picture, pinned tabs, or tabs using WebRTC (which is used for video and audio conferencing sites) are weighted more heavily so they are less likely to be unloaded. Tabs in the foreground are never unloaded. We plan to do more experiments and continue to tune the algorithm, aiming to reduce crashes while maintaining performance and being unobtrusive to the user.
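A ranking consistent with that description can be sketched as follows. The weight values are invented for the example; the real heuristics are tuned internally.

```javascript
// Illustrative unload ordering: foreground tabs are never candidates;
// the rest are sorted least-recently-used first, with weights pushing
// sound-playing, pinned, picture-in-picture, and WebRTC tabs toward the
// back of the queue. Weight values are made up for this sketch.
function unloadOrder(tabs) {
  return tabs
    .filter(t => !t.foreground) // never unload the foreground tab
    .map(t => {
      let weight = 0;
      if (t.playingSound) weight += 1000;
      if (t.pinned) weight += 1000;
      if (t.pictureInPicture) weight += 1000;
      if (t.usingWebRTC) weight += 1000;
      // Smaller score = unloaded sooner.
      return { id: t.id, score: t.lastAccessed + weight };
    })
    .sort((a, b) => a.score - b.score)
    .map(t => t.id);
}

unloadOrder([
  { id: "a", lastAccessed: 1, pinned: true },
  { id: "b", lastAccessed: 2 },
  { id: "c", lastAccessed: 3, foreground: true },
  { id: "d", lastAccessed: 5 },
]);
// → ["b", "d", "a"]: the pinned tab "a" goes last despite being the
//   least recently used, and the foreground tab "c" is excluded
```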

about:unloads

For diagnostic and testing purposes, a new page about:unloads has been added to display the tabs in their unload-priority-order and to manually trigger tab unloading. This feature is currently in beta and will ship with Firefox 94.

Screenshot of the about:unloads page in beta planned for Firefox 94.

Screenshot of the about:unloads page in beta planned for Firefox 94.

Browser Extensions

Some browser extensions already offer users the ability to unload tabs. We expect these extensions to interoperate with automatic tab unloading as they use the same underlying tabs.discard() API. Although it may change in the future, today automatic tab unloading only occurs when system memory is critically low, which is a low-level system metric that is not exposed by the WebExtensions API. (Note: an extension could use the native messaging support in the WebExtensions API to accomplish this with a separate application.) Users will still be able to benefit from tab unloading extensions and those extensions may offer more control over when tabs are unloaded, or deploy more aggressive heuristics to save more memory.
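For readers curious how an extension drives the same mechanism, here is a small example using the real WebExtensions tabs API (`tabs.query`, `tabs.discard`, `tab.lastAccessed`; the `"tabs"` permission is required). The thirty-minute inactivity threshold is our own choice for the example, not anything Firefox uses.

```javascript
// Discard tabs that haven't been used for a while, via the same
// underlying API Firefox's automatic tab unloading uses.
const THIRTY_MINUTES = 30 * 60 * 1000; // example threshold, not Firefox's

async function discardStaleTabs() {
  // Skip active, pinned, audible, and already-discarded tabs.
  const tabs = await browser.tabs.query({
    active: false,
    pinned: false,
    audible: false,
    discarded: false,
  });
  const now = Date.now();
  const stale = tabs
    .filter(t => now - t.lastAccessed > THIRTY_MINUTES)
    .map(t => t.id);
  if (stale.length) {
    // Unloaded tabs stay in the tab bar and reload on next selection.
    await browser.tabs.discard(stale);
  }
  return stale.length;
}
```

An extension might run this from an `alarms` listener to periodically reclaim memory on its own schedule, independent of the browser's critical-memory trigger.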

Let us know how it works for you by leaving feedback on ideas.mozilla.org or reporting a bug. For support, visit support.mozilla.org.

Firefox crash reporting and telemetry adheres to our data privacy principles. See the Mozilla Privacy Policy for more information.

Thanks to Gian-Carlo Pascutto, Toshihito Kikuchi, Gabriele Svelto, Neil Deakin, Kris Wright, and Chris Peterson, for their contributions to this blog post and their work on developing tab unloading in Firefox.

The post Tab Unloading in Firefox 93 appeared first on Mozilla Hacks - the Web developer blog.

The Mozilla BlogNews from Firefox Focus and Firefox on Mobile

One of our promises this year was to deliver ways to help you navigate the web easily and get you quickly where you need to go. We took a giant step in that direction earlier this year when we shared a new Firefox experience. We were on a mission to save you time and streamline your everyday use of the browser. This month, we continue to deliver on that mission with new features in our Firefox mobile products. For our Firefox Focus mobile users, we have a fresh redesign plus new features, including shortcuts that get you to the things you want faster. And this Cybersecurity Awareness Month, you can manage your passwords and take them wherever you go with the Firefox on Android app.

Fresh, new Firefox Focus 

Since its launch, Firefox Focus has been a favorite app thanks to its minimal design and streamlined features, ideal for those times when you want to do a super quick search without distractions. So, when it came to refreshing Firefox Focus, we wanted to offer a simple, privacy-by-default companion app that lets users quickly complete searches without distraction or worry of being tracked or bombarded with advertisements. We added a fresh new look with new colors, a new logo and a dark theme. We added a shortcut feature so that users can get to the sites they visit most. And with privacy in mind, the Tracking Protection Shield icon is now accessible from the search bar, so you can quickly turn individual trackers on or off by clicking it. Plus, we added a global counter that shows you all the trackers blocked for you. Check out the new Firefox Focus and try it for life’s “get in and get out” moments.

<figcaption>New shortcut feature to get you to the sites you visit most</figcaption>

Got a ton of passwords? Keep them safe on Firefox on Android

What do Superman, Black Widow and Wolverine have in common? They make horrible passwords. At least that’s what we discovered when we took a look to see how fortified superhero passwords are in the fight against hackers and breaches. You can see how your favorite superheroes fared in “Superhero passwords may be your kryptonite wherever you go online.”  

This Cybersecurity Awareness Month, we added new features to Firefox on Android to keep your passwords safe. We’ve become increasingly dependent on the web: whether it’s signing up for streaming services or finding new ways to connect with family and friends, we’ve all had to open new accounts and come up with completely new passwords. Whether it’s 10 or 100 passwords, you can take them all wherever you go with Firefox on Android. These new features will be available on iOS later this year. They include:

Creating and adding new passwords is easy – Now, when you create an account for any app on your mobile device, you can also create and add a new password, which you can save directly in the Firefox browser and you can use it on both mobile and desktop.  

<figcaption>Create and add new passwords</figcaption>

  • Take your passwords with you on the go – Now you can easily autofill your password on your phone and use any password you’ve saved in the browser to log into any online account like your Twitter or Instagram app. No need to open a web page. Plus, if you have a Firefox account then you can sync all your passwords across desktop and mobile devices. It’s that seamless and simple. 
<figcaption>Sync all your passwords across desktop and mobile devices</figcaption>

  • Unlock your passwords with your fingerprint or face – Now only you can safely open your accounts when you use your operating system’s biometric security, such as your face or your fingerprint touch to unlock the access page to your logins and passwords.

Firefox coming soon to a Windows store near you

Microsoft has loosened restrictions on its Windows Store that effectively banned third-party browsers from the store. We have been advocating for years for more user choice and control on the Windows operating system. We welcome the news that their store is now more open to companies and applications, including independent browsers like Firefox. We believe that a healthier internet is one where people have an opportunity to choose from a diverse range of browsers and browser engines. Firefox will be available in the Windows store later this year. 

Get the fast, private browser for your desktop and mobileFirefox on Android, Firefox for iOS and Firefox Focus today.

For more on Firefox:

11 secret tips for Firefox that will make you an internet pro

7 things to know (and love) about the new Firefox for Android

Modern, clean new Firefox clears the way to all you need online

Behind the design: A fresh new Firefox

The post News from Firefox Focus and Firefox on Mobile appeared first on The Mozilla Blog.

Web Application SecurityFirefox 93 features an improved SmartBlock and new Referrer Tracking Protections

We are happy to announce that the Firefox 93 release brings two exciting privacy improvements for users of Strict Tracking Protection and Private Browsing. With a more comprehensive SmartBlock 3.0, we combine a great browsing experience with strong tracker blocking. In addition, our new and enhanced referrer tracking protection prevents sites from colluding to share sensitive user data via HTTP referrers.

SmartBlock 3.0

In Private Browsing and Strict Tracking Protection, Firefox goes to great lengths to protect your web browsing activity from trackers. As part of this, the built-in content blocking will automatically block third-party scripts, images, and other content from being loaded from cross-site tracking companies reported by Disconnect. This type of aggressive blocking could sometimes bring small inconveniences, such as missing images or bad performance. In some rare cases, it could even result in a feature malfunction or an empty page.

To compensate, we developed SmartBlock, a mechanism that will intelligently load local, privacy-preserving alternatives to the blocked resources that behave just enough like the original ones to make sure that the website works properly.

The third iteration of SmartBlock brings vastly improved support for replacing the popular Google Analytics scripts and added support for popular services such as Optimizely, Criteo, Amazon TAM and various Google advertising scripts.

As usual, these replacements are bundled with Firefox and cannot track you in any way.

HTTP Referrer Protections

The HTTP Referer [sic] header is a browser signal that reveals to a website which location “referred” the user to that website’s server. It is included in navigations and sub-resource requests a browser makes and is frequently used by websites for analytics, logging, and cache optimization. When sent as part of a top-level navigation, it allows a website to learn which other website the user was visiting before.

This is where things get problematic. If the browser sends the full URL of the previous site, then it may reveal sensitive user data included in the URL. Some sites may want to avoid being mentioned in a referrer header at all.

The Referrer Policy was introduced to address this issue: it allows websites to control the value of the referrer header so that a stronger privacy setting can be established for users. In Firefox 87, we went one step further and decided to set the new default referrer policy to strict-origin-when-cross-origin which will automatically trim the most sensitive parts of the referrer URL when it is shared with another website. As such, it prevents sites from unknowingly leaking private information to trackers.

However, websites can still override the introduced default trimming of the referrer, and hence effectively deactivate this protection and send the full URL anyway. This would invite websites to collude with trackers by choosing a more permissive referrer policy and as such remains a major privacy issue.

With the release of version 93, Firefox will ignore less restrictive referrer policies for cross-site requests, such as ‘no-referrer-when-downgrade’, ‘origin-when-cross-origin’, and ‘unsafe-url’, thereby rendering such privacy violations ineffective. In other words, Firefox will always trim the HTTP referrer for cross-site requests, regardless of the website’s settings.

For same-site requests, websites can of course still send the full referrer URL.
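The trimming behavior is easy to sketch. The following Python function is an illustrative model of the strict-origin-when-cross-origin policy described above, not Firefox’s actual implementation:

```python
from urllib.parse import urlsplit

def trimmed_referrer(from_url: str, to_url: str):
    """Illustrative model of strict-origin-when-cross-origin.

    Returns the referrer value a browser would send when navigating
    from `from_url` to `to_url`, or None for no referrer at all.
    """
    f, t = urlsplit(from_url), urlsplit(to_url)
    # Downgrade (HTTPS -> HTTP): send no referrer at all.
    if f.scheme == "https" and t.scheme == "http":
        return None
    # Same origin (scheme + host + port): full URL, minus any fragment.
    if (f.scheme, f.netloc) == (t.scheme, t.netloc):
        return from_url.split("#")[0]
    # Cross-origin: trim down to the origin only.
    return f"{f.scheme}://{f.netloc}/"
```

So a navigation from https://example.com/account?id=12345 to another site would reveal only https://example.com/, keeping the path and query string private.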

Enabling these new Privacy Protections

As a Firefox user who is using Strict Tracking Protection and Private Browsing, you can benefit from the additionally provided privacy protection mechanism as soon as your Firefox auto-updates to Firefox 93. If you aren’t a Firefox user yet, you can download the latest version here to start benefiting from all the ways that Firefox works to protect you when browsing the internet.

The post Firefox 93 features an improved SmartBlock and new Referrer Tracking Protections appeared first on Mozilla Security Blog.

Web Application SecurityFirefox 93 protects against Insecure Downloads


Downloading files to your device still poses a major security risk and can ultimately lead to an entire system compromise by an attacker, especially because the security risks are not always apparent. To better protect you from the dangers of insecure, or even undesired, downloads, we integrated the following two security enhancements, which increase security when you download files on your computer. In detail, Firefox will:

  • block insecure HTTP downloads on a secure HTTPS page, and
  • block downloads in sandboxed iframes, unless the iframe is explicitly annotated with the allow-downloads attribute.


Blocking Downloads relying on insecure connections

Downloading files via an insecure HTTP connection generally exposes a major security risk because data transferred by the regular HTTP protocol is unprotected and transferred in clear text, such that attackers are able to view, steal, or even tamper with the transmitted data. Put differently, downloading a file over an insecure connection allows an attacker to replace the file with malicious content which, when opened, can ultimately lead to an entire system compromise.


Firefox 93 prompting the end user about a ‘Potential security risk’ when downloading a file using an insecure connection.


As illustrated in the figure above, if Firefox detects such an insecure download, it will initially block the download and prompt you with a “Potential security risk” warning. This prompt allows you either to stop the download and remove the file, or to override the decision and download the file anyway, though it’s safer to abandon the download at this point.


Blocking Downloads in sandboxed iframes

The Inline Frame sandbox attribute is the preferred way to lock down the capabilities of embedded third-party content. Currently, even with the sandbox attribute set, malicious content could initiate a drive-by download, prompting the user to download malicious files. Unless the sandboxed content is explicitly annotated with the ‘allow-downloads’ attribute, Firefox will protect you against such drive-by downloads. Put differently, downloads initiated from sandboxed contexts without this attribute will be canceled silently in the background, without disrupting the user’s browsing.


It’s Automatic!

As a Firefox user, you can benefit from the additionally provided security mechanism as soon as your Firefox auto-updates to version 93. If you aren’t a Firefox user yet, you can download the latest version here to start benefiting from all the ways that Firefox works to protect you when browsing the internet.

The post Firefox 93 protects against Insecure Downloads appeared first on Mozilla Security Blog.

Web Application SecuritySecuring Connections: Disabling 3DES in Firefox 93

As part of our continuing work to ensure that Firefox provides secure and private network connections, it periodically becomes necessary to disable configurations or even entire protocols that were once thought to be secure, but no longer provide adequate protection. For example, last year, early versions of the Transport Layer Security (TLS) protocol were disabled by default.

One of the options that goes into configuring TLS is the choice of which encryption algorithms to enable. That is, which methods are available to use to encrypt and decrypt data when communicating with a web server?

Goodbye, 3DES

3DES (“triple DES”, an adaptation of DES (“Data Encryption Standard”)) was for many years a popular encryption algorithm. However, as attacks against it have become stronger, and as other more secure and efficient encryption algorithms have been standardized and are now widely supported, it has fallen out of use. Recent measurements indicate that Firefox encounters servers that choose to use 3DES about as often as servers that use deprecated versions of TLS.

As long as 3DES remains an option that Firefox provides, it poses a security and privacy risk. Because it is no longer necessary or prudent to use this encryption algorithm, it is disabled by default in Firefox 93.

Addressing Compatibility

As with disabling obsolete versions of TLS, deprecating 3DES may cause compatibility issues. We hypothesize that the remaining uses of 3DES correspond mostly to outdated devices that use old cryptography and cannot be upgraded. It may also be that some modern servers inexplicably (perhaps unintentionally) use 3DES when other more secure and efficient encryption algorithms are available. Disabling 3DES by default helps with the latter case, as it forces those servers to choose better algorithms. To account for the former situation, Firefox will allow 3DES to be used when deprecated versions of TLS have manually been enabled. This will protect connections by default by forbidding 3DES when it is unnecessary while allowing it to be used with obsolete servers if necessary.

The post Securing Connections: Disabling 3DES in Firefox 93 appeared first on Mozilla Security Blog.

The Mozilla BlogDo you need a VPN at home? Here are 5 reasons you might.

You might have heard of VPNs — virtual private networks — at some point, and chalked them up to something only “super techy” people or hackers would ever use. At this point in the evolution of online life, however, VPNs have become more mainstream, and anyone may have good reasons to use one. VPNs are beneficial for added privacy when you’re connected to a public wifi network, and you might also want to use a VPN at home when you’re online as well. Here are five reasons to consider using a VPN at home.

Stop your ISP from watching you 

Did you know that when you connect to the internet at home through your internet service provider (ISP), it can track what you do online? Even though your traffic is usually encrypted using HTTPS, this doesn’t conceal which sites you are visiting. Your ISP can see every site you visit and track things like how often you visit sites and how long you’re on them. That’s rich personal — and private — information you’re giving away to your ISP every time you connect to the internet at home. The good news is that a VPN at home can prevent your ISP from snooping on you by encrypting your traffic before the ISP can see it.

How do VPNs work?

Get answers to nine common questions about VPNs

Secure yourself on a shared building network

Some apartment buildings offer wifi as an incentive to residents, but just like your ISP, anyone else on the network can see what sites you are visiting. Do you even know all your neighbors, let alone know if they’re bumbling true crime podcast fanatics or even actual cyber criminals? Do you know for sure that your landlord or building manager isn’t tracking your internet traffic? If you’re concerned about any of that, a VPN can add extra privacy on your shared network by encrypting your traffic between you and your VPN provider so that no one on your local network can decipher or modify it.

Block nosy housemates

Similar to a shared apartment network, sharing an internet connection could leave your browsing behavior vulnerable to snooping by housemates or any other untrustworthy person who accesses your network. A VPN at home adds an extra layer of encryption, preventing people on your network from seeing what websites you go to.

Increase remote work security

Working remotely, at least part of the time, is the new normal for millions of office workers, and some people are experiencing a VPN for the first time. Some employers offer an enterprise VPN for home workers, and some even require logging into one to access a company file server.

Explore the world at home

There are some fun reasons to use a VPN at home, too. You can get access to shows, websites and livestreams in dozens of different countries. See what online shopping is like in a different locale, and get the feeling of gaming from somewhere new.

The post Do you need a VPN at home? Here are 5 reasons you might. appeared first on The Mozilla Blog.

The Mozilla BlogMiracle Whip, Finstas, #InternationalPodcastDay, and #FreeBritney all made the Top Shelf this week

At Mozilla, we believe part of making the internet we want is celebrating the best of the internet, and that can be as simple as sharing a tweet that made us pause in our feed. Twitter isn’t perfect, but there are individual tweets that come pretty close.

Each week in Top Shelf, we will be sharing the tweets that made us laugh, think, Pocket them for later, text our friends, and want to continue the internet revolution each week.

Here’s what made it to the Top Shelf for the week of September 27, 2021, in no particular order.

From Licorice Pizza to McRibs to #NationalCoffeeDay, food-related topics boiled to the top of the trends this week on Twitter, though not every one of them is actually food… we’ll leave you to decide which!

Pocket Joy List Project

The Pocket Joy List Project

The stories, podcasts, poems and songs we always come back to

The post Miracle Whip, Finstas, #InternationalPodcastDay, and #FreeBritney all made the Top Shelf this week appeared first on The Mozilla Blog.

The Mozilla BlogAnalysis of Google’s Privacy Budget Proposal

Fingerprinting is a major threat to user privacy on the Web. Fingerprinting uses existing properties of your browser like screen size, installed add-ons, etc. to create a unique or semi-unique identifier which it can use to track you around the Web. Even if individual values are not particularly unique, the combination of values can be unique (e.g., how many people are running Firefox Nightly, live in North Dakota, have an M1 Mac and a big monitor, etc.).

This post discusses a proposal by Google to address fingerprinting called the Privacy Budget. The idea behind the Privacy Budget is to estimate the amount of information revealed by each piece of fingerprinting information (called a “fingerprinting surface”, e.g., screen resolution) and then limit the total amount of that information a site can obtain about you. Once the site reaches that limit (the “budget”), further attempts to learn more about you would fail, perhaps by reporting an error or returning a generic value. This idea has been getting a fair amount of attention and has been proposed as a potential privacy mitigation in some in-development W3C specifications.

While this seems like an attractive idea, our detailed analysis of the proposal raises questions about its feasibility.  We see a number of issues:

  • Estimating the amount of information revealed by a single surface is quite difficult. Moreover, because some values will be much more common than others, any total estimate is misleading. For instance, the Chrome browser has many users and so learning someone uses Chrome is not very identifying; by contrast, learning that someone uses Firefox Nightly is quite identifying because there are few Nightly users.
  • Even if we are able to set a common value for the budget, it is unclear how to determine whether a given set of queries exceeds that value. The problem is that these queries are not independent and so you can’t just add up each query. For instance, screen width and screen height are highly correlated and so once a site has queried one, learning the other is not very informative.
  • Enforcement is likely to lead to surprising and disruptive site breakage because sites will exceed the budget and then be unable to make API calls which are essential to site function. This will be exacerbated because the order in which the budget is used is nondeterministic and depends on factors such as the network performance of various sites, so some users will experience breakage and others will not.
  • It is possible that the privacy budget mechanism itself can be used for tracking by exhausting the budget with a particular pattern of queries and then testing to see which queries still work (because they already succeeded).
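The first two concerns can be made concrete with a little arithmetic. The “bits of identifying information” a fingerprinting surface reveals is the surprisal -log2(p), where p is the share of users exhibiting the observed value. The sketch below uses made-up shares purely for illustration:

```python
import math

def surprisal_bits(share: float) -> float:
    """Bits of identifying information revealed by observing a value
    that a fraction `share` of all users exhibit."""
    return -math.log2(share)

# Hypothetical population shares, for illustration only.
print(surprisal_bits(0.65))     # "uses Chrome": common, so under 1 bit
print(surprisal_bits(0.0005))   # "uses Firefox Nightly": rare, ~11 bits
# Naively summing the bits of correlated surfaces (e.g. screen width
# and screen height) overstates the total: once one is known, the
# other adds almost nothing.
```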

While we understand the appeal of a global solution to fingerprinting — and no doubt this is the motivation for the Privacy Budget idea appearing in specifications — the underlying problem here is the large amount of fingerprinting-capable surface that is exposed to the Web. There does not appear to be a shortcut around addressing that. We believe the best approach is to minimize the easy-to-access fingerprinting surface by limiting the amount of information exposed by new APIs and gradually reducing the amount of information exposed by existing APIs. At the same time, browsers can and should attempt to detect abusive patterns by sites and block those sites, as Firefox already does.

This post is part of a series of posts analyzing privacy-preserving advertising proposals.

For more on this:

Building a more privacy-preserving ads-based ecosystem

The future of ads and privacy

Privacy analysis of FLoC

Mozilla responds to the UK CMA consultation on google’s commitments on the Chrome Privacy Sandbox

Privacy analysis of SWAN.community and Unified ID 2.0

The post Analysis of Google’s Privacy Budget Proposal appeared first on The Mozilla Blog.

Open Policy & AdvocacyAddressing gender-based online harms in the DSA

Last year the European Commission published the Digital Services Act (DSA) proposal, a draft law that seeks to set a new standard for platform accountability. We welcomed the draft law when it was published, and since then we have been working to ensure it is strengthened and elaborated as it proceeds through the mark-up stage. Today we’re confirming our support for a new initiative that focuses on improving the DSA with respect to gender-based online harm, an objective that aligns with our policy vision and the Mozilla Manifesto addendum.

An overarching focus of our efforts to improve the DSA has been the draft law’s risk assessment and auditing provisions. In order to structurally improve the health of the internet ecosystem, we need laws that compel platforms to meaningfully assess and mitigate the systemic risks stemming from the design and operation of their services. While the draft DSA is a good start, it falls short when it comes to specifying the types of systemic risks that platforms need to address.

One such area of systemic risk that warrants urgent attention is gender-based online harm. Women and non-binary people are subject to massive and persistent abuse online, with 74% of women reporting experiencing some form of online violence in the EU in 2020. Women from marginalised communities, including LGBTQ+ people, women of colour, and Black women in particular, are often disproportionately targeted with online abuse.

In our own platform accountability research this untenable reality has surfaced time and time again. For instance, in one testimony submitted to Mozilla Foundation as part of our YouTube Regrets campaign, one person wrote “In coming out to myself and close friends as transgender, my biggest regret was turning to YouTube to hear the stories of other trans and queer people. Simply typing in the word “transgender” brought up countless videos that were essentially describing my struggle as a mental illness and as something that shouldn’t exist. YouTube reminded me why I hid in the closet for so many years.”

Another story read: “I was watching a video game series on YouTube when all of a sudden I started getting all of these anti-women, incel and men’s rights recommended videos. I ended up removing that series from my watch history and going through and flagging those bad recommendations as ‘not interested’. It was gross and disturbing. That stuff is hate, and I really shouldn’t have to tell YouTube that it’s wrong to promote it.”

Indeed, further Mozilla research into this issue on YouTube has underscored the role of automated content recommender systems in exacerbating the problem, to the extent that they can recommend videos that violate the platform’s very own policies, like hate speech.

This is not only a problem on YouTube, but on the web at large. And while the DSA is not a silver bullet for addressing gender-based online harm, it can be an important part of the solution. To underscore that belief, we – as the Mozilla Foundation – have today signed on to a joint Call with stakeholders from across the digital rights, democracy, and women’s rights communities. This Call aims to invigorate efforts to improve the DSA provisions around risk assessment and management, and ensure lawmakers appreciate the scale of gender-based online harm that communities face today.

This initiative complements other DSA-focused engagements that seek to address gender-based online harms. In July, we signaled our support for the Who Writes the Rules campaign, and we stand in solidarity with the just-published testimonies of gender-based online abuse faced by the initiative’s instigators.

The DSA has been rightly-billed as an accountability game-changer. Lawmakers owe it to those who suffer gender-based online harm to ensure those systemic risks are properly accounted for.

The full text of the Call can be read here.

The post Addressing gender-based online harms in the DSA appeared first on Open Policy & Advocacy.

The Mozilla BlogSuperhero passwords may be your kryptonite wherever you go online

A password is like a key to your house. In the online world, your password keeps your house of personal information safe, so a super strong password is like having a superhero in a fight of good vs. evil. In recognition of Cybersecurity Awareness month, we revisited our “Princesses make terrible passwords for Disney+ and every other account” post and took another look to see how fortified superhero passwords are in the fight against hackers and breaches. According to haveibeenpwned.com, here’s how many times these superhero passwords have shown up in breached datasets:

And if you thought maybe their real identities might make for a better password, think again!

Lucky for you, we’ve got a family of products from a company you can trust, Mozilla, a mission-driven company with a 20-year track record of fighting for online privacy and a healthier internet. Here are your best tools in the fight against hackers and breaches:

Keep passwords safe from cyber threats with this new Firefox super power on Firefox on Android

This Cybersecurity Awareness month, we added new features for Firefox on Android, to keep your passwords safe. You might not have every password memorized by heart, nor do you need to when you use Firefox. With Firefox, users will be able to seamlessly access Firefox saved passwords. This means you can use any password you’ve saved in the browser to log into any online account like your Twitter or Instagram app. No need to open a web page. It’s that seamless and simple. Plus, you can also use biometric security, such as your face or fingerprint, to unlock the app and safely access your accounts. These new features will be available next Tuesday with the latest Firefox on Android release. Here are more details on the upcoming new features:

  • Creating and adding new passwords is easy – Now, when you create an account for any app on your mobile device, you can also create and add a new password, which you can save directly in the Firefox browser and you can use it on both mobile and desktop.  
  • Take your passwords with you on the go – Now you can easily autofill your password on your phone and use any password you’ve saved in the browser to log into any online account like your Twitter or Instagram app. No need to open a web page. Plus, if you have a Firefox account then you can sync all your passwords across desktop and mobile devices. It’s that seamless and simple. 
  • Unlock your passwords with your fingerprint and face – Now only you can safely open your accounts when you use biometric security such as your fingerprint or face to unlock the access page to your logins and passwords.

Forget J.A.R.V.I.S, keep informed of hacks and breaches with Firefox Monitor 

Keep your spidey senses from tingling every time you hear about hacks and breaches by signing up for Firefox Monitor. You’ll be able to keep an eye on your accounts once you sign up, and get alerts delivered to your email whenever there’s been a data breach or your accounts have been hacked.

X Ray vision won’t work on a Virtual Private Network like Mozilla VPN

One of the reasons people use a Virtual Private Network (VPN), an encrypted connection that serves as a tunnel between your computer and VPN server, is to protect themselves whenever they use a public WiFi network. It sounds harmless, but public WiFi networks can be like a backdoor for hackers. With a VPN, you can rest assured you’re safe whenever you use the public WiFi network at your local cafe or library. Find and use a trusted VPN provider like our Mozilla VPN, a fast and easy-to-use VPN service. Thousands of people have signed up to subscribe to our Mozilla VPN, which provides encryption and device-level protection of your connection and information when you are on the Web.


How did we get these numbers? Unfortunately, we don’t have a J.A.R.V.I.S., so we looked these up in haveibeenpwned.com. We couldn’t access any data files, browse lists of passwords or link passwords to logins — that info is inaccessible and kept secure — but we could look up random passwords manually. Current numbers on the site may be higher than at time of publication as new datasets are added to HIBP. Alas, data breaches keep happening. There’s no time like the present to make sure all your passwords are built like Iron Man.
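If you’re wondering how such a lookup can work without revealing your password, the Pwned Passwords API behind haveibeenpwned.com uses a k-anonymity scheme: only the first five hex characters of the password’s SHA-1 hash are sent, and all matching hash suffixes come back for you to check locally. A minimal Python sketch (the endpoint URL is the service’s real range API; fetching and error handling are left out):

```python
import hashlib

def hibp_range_query(password: str):
    """Prepare a k-anonymity lookup for the Pwned Passwords range API.

    Only `url` (containing a 5-hex-char hash prefix) ever leaves your
    machine; you then scan the response lines ("SUFFIX:COUNT") for
    `suffix` locally to learn the breach count.
    """
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    return url, suffix
```

Fetching the returned URL and finding your suffix in the response tells you how many breached datasets contain that password, without the service ever learning the password itself.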

The post Superhero passwords may be your kryptonite wherever you go online appeared first on The Mozilla Blog.

hacks.mozilla.orgMDN Web Docs at Write the Docs Prague 2021

The MDN Web Docs team is pleased to sponsor Write the Docs Prague 2021, which is being held remotely this year. We’re excited to join hundreds of documentarians to learn more about collaborating with writers, developers, and readers to make better documentation. We plan to take part in all that the conference has to offer, including the Writing Day, Job Fair, and the virtual hallway track.

In particular, we’re looking forward to taking part in the Writing Day on Sunday, October 3, where we’ll be joining our friends from Open Web Docs (OWD) to work on MDN content updates together. We’re planning to invite our fellow conference attendees to take part in making open source documentation. OWD is also sponsoring Write the Docs; read their announcement to learn more.

The post MDN Web Docs at Write the Docs Prague 2021 appeared first on Mozilla Hacks - the Web developer blog.

Blog of DataThis Week in Glean: Announcement: Glean.js v0.19.0 supports Node.js

(“This Week in Glean” is a series of blog posts that the Glean Team at Mozilla is using to try to communicate better about our work. They could be release notes, documentation, hopes, dreams, or whatever: so long as it is inspired by Glean. You can find an index of all TWiG posts online.)


From the start, the Glean JavaScript SDK (Glean.js) was conceptualized as a JavaScript telemetry library for diverse JavaScript environments. When we built the proof-of-concept, we tested that idea out and created a library that worked in Qt/QML apps, websites, web extensions, Node.js servers and CLIs, and Electron apps.

However, the stakes are completely different when implementing a proof-of-concept library and a library to be used in production environments. Whereas for the proof-of-concept we wanted to try out as many platforms as possible, for the actual Glean.js library we want to minimize unnecessary work and focus on perfecting the features our users will actively benefit from. That meant, up until a few weeks ago, Glean.js supported browser extensions and Qt/QML apps. Today, that means it also supports Node.js environments.

🎉 (Of course, it’s always exciting to implement new features).

If you would also like to start using Glean.js in your Node.js project today, check out the “Adding Glean to your JavaScript project” guide over in the Glean book, but note that there is one caveat: the Node.js implementation does not contain persistent storage, which means every time the app is restarted the state is reset and Glean runs as if it were the first run ever of the app. In the spirit of not implementing things that are not required, we spoke to the users that requested Node.js support and concluded that for their use case persistent storage was not necessary. If your use case does require that, leave a comment over on Bug 1728807 and we will re-prioritize that work.

:brizental

SeaMonkeySeaMonkey 2.53.9.1 is out!

Hi everyone,

Please note that SeaMonkey 2.53.9.1 has been released.

For transparency and fyi, this release was delayed by a few days due to some technical difficulties (mine… started the release process last week but failed a few times over the weekend).

That stated, the updates are up and confirmed running so your 2.53.5+ systems should be updating to 2.53.9.1.

Please check out [1] and/or [2].

:ewong

[1] – https://www.seamonkey-project.org/releases/seamonkey2.53.9.1/

[2] – https://www.seamonkey-project.org/releases/2.53.9.1

The Mozilla BlogLocation history: How your location is tracked and how you can limit sharing it

In real estate, the age-old mantra is “location, location, location,” meaning that location drives value. That’s true when it comes to data collection in the online world, too — your location history is valuable, authentic information. In all likelihood, you’re leaving a breadcrumb trail of location data every day, but there are a few things you can do to clean that up and keep more of your goings-on to yourself.

What is location history?

When your location is tracked and stored over time, it becomes a body of data called your location history. This is rich personal data that shows when you have been at specific locations, and can include things like frequency and duration of visits and stops along the way. Connecting all of that location history, companies can create a detailed picture and make inferences about who you are, where you live and work, your interests, habits, activities, and even some very private things you might not want to share at all.

How is location data used?

For some apps, location helps them function better, like navigating with a GPS or following a map. Location history can also be useful for retracing your steps to past places, like finding your way back to that tiny shop in Florence where you picked up beautiful stationery two years ago.

On the other hand, marketing companies use location data for marketing and advertising purposes. They can also use location to conduct “geomarketing,” which is targeting you with promotions based on where you are. Near a certain restaurant while you’re out doing errands at midday? You might see an ad for it on your phone just as you’re thinking about lunch.

Location can also be used to grant or deny access to certain content. In some parts of the world, content on the internet is “geo-blocked” or geographically-restricted based on your IP address, which is kind of like a mailing address, associated with your online activity. Geo-blocking can happen due to things like copyright restrictions, limited licensing rights or even government control. 

Who can view your location data?

Any app that you grant permission to see your location has access to it. Unless you carefully read each data policy or privacy policy, you won’t know how your location data — or any personal data — collected by your apps is used. 

Websites can also detect your general location through your IP address or by asking directly what your location is, and some sites will take it a step further by requesting more specifics like your zip code to show you different site content or search results based on your locale.

How to disable location request prompts

Tired of websites asking for your location? Here’s how to disable those requests:

Firefox: Type “about:preferences#privacy” in the URL bar. Go to Permissions > Location > Settings. Select “Block new requests asking to access your location”. Get more details about location sharing in Firefox.

Safari: Go to Settings > Websites > Location. Select “When visiting other websites: Deny.”

Chrome: Go to Settings > Privacy and security > Site Settings. Then click on Location and select “Don’t allow sites to see your location”.

Edge: Go to Settings and more > Settings > Site permissions > Location. Select “Ask before accessing”.

Limit, protect and delete your location data

Most devices have the option to turn location tracking off for the entire device or for select apps. Here’s how to view and change your location privacy settings:

How to delete your Google Location History
Ready to delete your Google Location History in one fell swoop? There’s a button for that.

It’s also a good idea to review all of the apps on your devices. Check to see if you’re sharing your location with apps that don’t need it at all, or don’t need it all the time. Some of them might be set up just to get your location, giving you little benefit in return while sharing it with a network of third parties. Consider deleting apps that you don’t use or whose service you could just as easily get through a mobile browser, where you might have better location protection.

Blur your device’s location for next-level privacy

Learn more about Mozilla VPN

The post Location history: How your location is tracked and how you can limit sharing it appeared first on The Mozilla Blog.

The Mozilla BlogDid you hear about Apple’s security vulnerability? Here’s how to find and remove spyware.

Spyware has been in the news recently with stories like the Apple security vulnerability that allowed devices to be infected without the owner knowing it, and a former editor of The New York Observer being charged with a felony for unlawfully spying on his spouse with spyware. Spyware is a sub-category of malware aimed at surveilling the behavior of the person or people using the device it runs on. This surveillance could include, but is not limited to, logging keystrokes, capturing which websites you visit, looking at your locally stored files and passwords, and capturing audio or video near the device.

How does spyware work?

Spyware, much like any other malware, doesn’t just appear on a device. It often needs to first be installed or initiated. Depending on the type of device, this could happen in a variety of ways, but here are a few specific examples:

  • You could visit a website with your web browser and a pop-up prompts you to install a browser extension or addon.
  • You could visit a website and be asked to download and install some software you weren’t there to get.
  • You could visit a website that prompts you for access to your camera or microphone, even though the website has no legitimate need for them.
  • You could leave your laptop unlocked and unattended in a public place, and someone could install spyware on your computer.
  • You could share a computer or your password with someone, and they secretly install the spyware on your computer.
  • You could be prompted to install a new and unknown app on your phone.
  • You could install pirated software on your computer that additionally contains spyware functionality.

With all the above examples, the bottom line is that there could be software running with a surveillance intent on your device. Once installed, it’s often difficult for a layperson to be fully confident that their device can be trusted again; for many, though, the hard part is first detecting that surveillance software is running on the device at all.

How to detect spyware on your computer and phone

As mentioned above, spyware, like any malware, can be elusive and hard to spot, especially for a layperson. However, there are some ways to detect spyware on your computer or phone that aren’t overly complicated to check for.

Cameras

On many types of video camera devices, you get a visual indication that the camera is recording. This is often a hardware-controlled light of some kind that indicates the device is active. If you are not actively using your camera and these indicator lights are on, it could be a signal that you have software on your device that is actively recording you, and it could be some form of spyware.

Here’s an example of what camera indicator lights look like on some Apple devices, but active camera indicators come in all kinds of colors and formats, so be sure to understand how your device works. A good way to test is to turn on your camera and find out exactly where these indicator lights are on your devices.

Additionally, you could make use of a webcam cover. These are small mechanical devices that let you manually open and shut the camera only when it’s in use. They are generally a very cheap and low-tech way to protect against snooping via cameras.

Applications

One pretty basic means of detecting malicious spyware is simply to review your installed applications, and to keep only the applications you actively use.

On Apple devices, you can review your Applications folder and the App Store to see what applications are installed. If you notice something is installed that you don’t recognize, you can attempt to uninstall it. For Windows computers, you’ll want to check the Apps section in Settings.

Web extensions

Many browsers, like Firefox or Chrome, have extensive web extension ecosystems that allow users to customize their browsing experience. However, it’s not uncommon for malware authors to utilize web extensions as a medium to conduct surveillance activities of a user’s browsing activity.

On Firefox, you can visit about:addons and view all your installed web extensions. On Chrome, you can visit chrome://extensions and view all your installed web extensions. You are basically looking for any web extensions that you didn’t actively install on your own. If you don’t recognize a given extension, you can attempt to uninstall it or disable it.


How do you remove spyware from your device?

If you recall an odd link, attachment, download or website you interacted with around the time you started noticing issues, that could be a great place to start when trying to clean your system. There are various free online tools you can leverage to help get a signal on what caused the issues you are experiencing. VirusTotal, UrlVoid and HybridAnalysis are just a few examples. These tools can help you determine when the compromise of your system occurred. How they do this varies, but the general idea is that you give them the file or URL you are suspicious of, and they return a report showing what various computer security companies know about it. A point of infection combined with your browser’s search history gives you a starting list of accounts you will need to double-check for signs of fraudulent or malicious activity after you have cleaned your system. This isn’t strictly necessary in order to clean your system, but it helps jumpstart your recovery from a compromise.

There are a couple of paths that can be followed in order to make sure any spyware is entirely removed from your system and give you peace of mind:

Install an antivirus (AV) software from a well-known company and run scans on your system

  • If you have a Windows device, Windows Defender comes pre-installed, and you should double-check that you have it turned on.
  • If you currently have an AV software installed, make sure it’s turned on and that it’s up to date. Should it fail to identify and remove the spyware from your system, then it’s on to one of the following options.

Run a fresh install of your system’s operating system

  • While it might be tempting to back up files you have on your system, be careful: remember that your device was compromised, and the file causing the issue could end up back on your system, compromising it again.
  • The best way to do this would be to wipe the hard drive of your system entirely, and then reinstall from an external device.

How can you protect yourself from getting spyware?

There are a lot of ways to help keep your devices safe from spyware, and in the end it can all be boiled down to employing a little healthy skepticism and practicing good basic digital hygiene. These tips will help you stay on the right track:

Be wary. Don’t click on links or open/download attachments from unknown senders. This applies to messaging apps as well as emails.

Stay updated. Take the time to install updates/patches. This helps make sure your devices and apps are protected against known issues.

Check legitimacy. If you aren’t sure if a website or email is giving legitimate information, take the time to use your favorite search engine to find the legitimate website. This helps avoid issues with typos potentially leading you to a bad website.

Use strong passwords. Ensure all your devices have solid passwords that are not shared. It’s easier to break into a house that isn’t locked.

Delete extras. Remove applications you don’t use anymore. This reduces the total attack surface you are exposing, and has the added bonus of saving space for things you care about.

Use security settings. Enable built in browser security features. By default, Firefox is on the lookout for malware and will alert you to Deceptive Content and Dangerous Software.

The post Did you hear about Apple’s security vulnerability? Here’s how to find and remove spyware. appeared first on The Mozilla Blog.

Blog of DataThis Week in Glean: Glean & GeckoView

(“This Week in Glean” is a series of blog posts that the Glean Team at Mozilla is using to try to communicate better about our work. They could be release notes, documentation, hopes, dreams, or whatever: so long as it is inspired by Glean.) All “This Week in Glean” blog posts are listed in the TWiG index.


This is a followup post to Shipping Glean with GeckoView.

It landed!

It took us several more weeks to put everything into place, but we’re finally shipping the Rust parts of the Glean Android SDK with GeckoView and consuming them in Android Components and Fenix. And it all still works, collects data and sends pings! This also results in a slightly smaller APK.

This unblocks further work. Currently Gecko simply stubs out all calls to Glean when compiled for Android, but next we will enable recording Glean metrics within Gecko and expose them in pings sent from Fenix. We will also start moving other Rust components into mozilla-central so that they can use the Rust API of Glean directly. Changing how we deliver the Rust code also made testing Glean changes across these different components a bit more challenging, so I want to invest some time to make that easier again.

SUMO BlogWhat’s up with SUMO – September 2021

Hey SUMO folks,

September is the last month of Q3, so let’s see what we’ve been up to for the past quarter.

Welcome on board!

  1. Welcome to the SUMO family, Bithiah, mokich1one, handisutrian, and Pomarańczarz! Bithiah has been pretty active in contributing to the support forum for a while now, while Mokich1one, Handi, and Pomarańczarz are emerging localization contributors for Japanese, Bahasa Indonesia, and Polish respectively.

Community news

  • Read our post about the advanced customization in the forum and KB here and let us know if you still have any questions!
  • Please join me in welcoming Abby to the Customer Experience Team. Abby is our new Content Manager, who will be in charge of our Knowledge Base as well as our localization efforts. You can learn more about Abby soon.
  • Learn more about Firefox 92 here.
  • Can you imagine what’s gonna happen when we reach version 100? Learn more about the experiment we’re running in Firefox Nightly here and see how you can help!
  • Are you a fan of Firefox Focus? Join our upcoming foxfooding campaign for Focus. You can learn more about the campaign here.
  • No Kitsune update for this month. Check out SUMO Engineering Board instead to see what the team is currently doing.

Community call

  • Watch the monthly community call if you haven’t. Learn more about what’s new in August!
  • Reminder: Don’t hesitate to join the call in person if you can. We try our best to provide a safe space for everyone to contribute. You’re more than welcome to lurk in the call if you don’t feel comfortable turning on your video or speaking up. If you feel shy to ask questions during the meeting, feel free to add your questions on the contributor forum in advance, or put them in our Matrix channel, so we can address them during the meeting.

Community stats

KB

KB pageviews (*)

* KB pageviews number is a total of KB pageviews for /en-US/ only
Month | Page views | Vs previous month
----- | ---------- | -----------------
Aug 2021 | 8,462,165 | +2.47%

Top 5 KB contributors in the last 90 days: 

  1. AliceWyman
  2. Thomas8
  3. Michele Rodaro
  4. K_alex
  5. Pierre Mozinet

KB Localization

Top 10 locale based on total page views

Locale | Aug 2021 pageviews (*) | Localization progress (per Sep 7) (**)
------ | ---------------------- | --------------------------------------
de | 8.57% | 99%
zh-CN | 6.69% | 100%
pt-BR | 6.62% | 63%
es | 5.95% | 44%
fr | 5.43% | 91%
ja | 3.93% | 57%
ru | 3.70% | 100%
pl | 1.98% | 100%
it | 1.81% | 86%
zh-TW | 1.45% | 6%
* Locale pageviews is an overall pageviews from the given locale (KB and other pages)

** Localization progress is the percentage of localized article from all KB articles per locale

Top 5 localization contributors in the last 90 days: 

  1. Milupo
  2. Michele Rodaro
  3. Jim Spentzos
  4. Soucet
  5. Artist

Forum Support

Forum stats

Month | Total questions | Answer rate within 72 hrs | Solved rate within 72 hrs | Forum helpfulness
----- | --------------- | ------------------------- | ------------------------- | -----------------
Aug 2021 | 3523 | 75.59% | 17.40% | 66.67%

Top 5 forum contributors in the last 90 days: 

  1. FredMcD
  2. Cor-el
  3. Jscher2000
  4. Seburo
  5. Sfhowes

Social Support

Channel | Total conv (Aug 2021) | Conv interacted
------- | --------------------- | ---------------
@firefox | 2967 | 341
@FirefoxSupport | 386 | 270

Top contributors in Aug 2021

  1. Christophe Villeneuve
  2. Andrew Truong
  3. Pravin

Play Store Support

We don’t have enough data for the Play Store Support yet. However, you can check out the overall Respond Tool metrics here.

Product updates

Firefox desktop

Firefox mobile

Other products / Experiments

  • Mozilla VPN v2.5: expected to release 09/15
  • Fx Search experiment:
    • From Sept 6, 2021, 1% of the Desktop user base will be experimenting with Bing as the default search engine. The study will last into early 2022, likely wrapping up by the end of January.
    • Common response:
      • Forum: Search study – September 2021
      • Conversocial clipboard: “Mozilla – Search study sept 2021”
      • Twitter: Hi, we are currently running a study that may cause some users to notice that their default search engine has changed. To revert back to your search engine of choice, please follow the steps in the following article → https://mzl.la/3l5UCLr
  • Firefox Suggest + Data policy update (Sept 16 + Oct 5)
    • On September 16th, the Mozilla Privacy Policy will be updated to supplement the rollout of FX Suggest online mode. Currently, FX Suggest uses offline mode, which limits the data collected. Online mode will collect additional anonymized information after users opt in to this feature. Users can opt out of this experience by following the instructions here.

Shout-outs!

  • Kudos to Julie for her work on the Knowledge Base lately. She’s definitely adding a new color to our KB world with her videos and article improvements.
  • Thanks to those who contributed to the FX Desktop Topics Discussion
    • If you have input or questions please post them to the thread above

If you know anyone that we should feature here, please contact Kiki and we’ll make sure to add them in our next edition.

Useful links:

Blog of DataData and Firefox Suggest

Introduction

Firefox Suggest is a new feature that displays direct links to content on the web based on what users type into the Firefox address bar. Some of the content that appears in these suggestions is provided by partners, and some of the content is sponsored.

In building Firefox Suggest, we have followed our long-standing Lean Data Practices and Data Privacy Principles. Practically, this means that we take care to limit what we collect, and to limit what we pass on to our partners. The behavior of the feature is straightforward: suggestions are shown as you type, and are directly relevant to what you type.

We take the security of the datasets needed to provide this feature very seriously. We pursue multi-layered security controls and practices, and strive to make as much of our work as possible publicly verifiable.

In this post, we wanted to give more detail about what data is needed to provide this feature, and about how we handle it.

Changes with Firefox Suggest

The address bar experience in Firefox has long been a blend of results provided by partners (such as the user’s default search provider) and information local to the client (such as recently visited pages). For the first time, Firefox Suggest augments these data sources with search completions from Mozilla.

Firefox Suggest data flow diagram

In its current form, Firefox Suggest compares searches against a list of allowed terms that is local to the client. When the search text matches a term on the allowed list, a completion suggestion may be shown alongside the local and default search engine suggestions.
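As an illustration only (this is not Firefox's actual implementation, and the function name is invented for this sketch), matching against a client-local allowed list might look like:

```javascript
// Illustrative sketch of client-side allowed-list matching: a
// completion suggestion is only considered when the typed text
// matches a term on a list that is local to the client.
function matchSuggestion(typed, allowedTerms) {
  const query = typed.trim().toLowerCase();
  if (query.length === 0) return null;
  // Only matches against the local allowed list qualify; everything
  // else falls through to local and default search engine suggestions.
  return allowedTerms.find((term) => term.toLowerCase() === query) || null;
}
```

The design point is that the matching happens entirely on the client, so no keystrokes need to leave the device just to decide whether a suggestion applies.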

Data Collected by Mozilla for smarter contextual suggestions

We are in the process of rolling out a new offering in the Firefox Suggest experience — “Firefox Suggest with smarter contextual suggestions.” This feature requires access to new data and is only available to a small number of users via an opt-in prompt. Mozilla collects the following information to power Firefox Suggest when users have opted in to smarter contextual suggestions.

  • Search queries and suggest impressions: Firefox Suggest sends Mozilla search terms and information about engagement with Firefox Suggest, some of which may be shared with partners to provide and improve the suggested content.
  • Clicks on suggestions: When a user clicks on a suggestion, Mozilla receives notice that suggested links were clicked.
  • Location: Mozilla collects city-level location data along with searches, in order to properly serve location-sensitive queries.

How Data is Handled and Shared

Mozilla approaches handling this data conservatively. We take care to remove data from our systems as soon as it’s no longer needed. When passing data on to our partners, we are careful to only provide the partner with the minimum information required to serve the feature.

A specific example of this principle in action is the search’s location. The location of a search is derived from the Firefox client’s IP address. However, the IP address can identify a person far more precisely than is necessary for our purposes. We therefore convert the IP address to a more general location immediately after we receive it, and we remove the IP address from all datasets and reports downstream. Access to machines and (temporary, short-lived) datasets that might include the IP address is highly restricted, and limited only to a small number of administrators. We don’t enable or allow analysis on data that includes IP addresses.
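As a hypothetical sketch of that principle (the `coarsenRequest` and `geoLookup` names are invented here; `geoLookup` stands in for a real IP-to-city database, and none of this is Mozilla's actual pipeline code), the coarsening step could look like:

```javascript
// Hypothetical sketch: derive a city-level location from the IP
// address immediately on receipt, then drop the IP so downstream
// records never contain it.
function coarsenRequest(request, geoLookup) {
  const city = geoLookup(request.ip) || "unknown";
  // Build a fresh record that omits the precise identifier entirely.
  return { query: request.query, city };
}
```

The key point is that the returned record simply never contains the IP address, so no downstream dataset or report can retain it.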

We’re excited to be bringing Firefox Suggest to you. See the product announcement to learn more!

EDIT: 2021-10-20: Updated to clarify the purpose and scope of the new data collection.

hacks.mozilla.orgTime for a review of Firefox 92

Release time comes around so quickly! This month we have quite a few CSS updates, along with the new Object.hasOwn() static method for JavaScript.

This blog post provides merely a set of highlights; for all the details, check out the following:

CSS Updates

A couple of CSS features have moved from behind a preference and are now available by default: accent-color and size-adjust.

accent-color

The accent-color CSS property sets the color of an element’s accent. Accents appear in elements such as a checkbox or radio input. Its default value is auto, which represents a UA-chosen color that should match the accent color of the platform. You can also specify a color value. Read more about the accent-color property here.

size-adjust

The size-adjust descriptor for @font-face takes a percentage value which acts as a multiplier for glyph outlines and metrics. Another tool in the CSS box for controlling fonts, it can help to harmonize the designs of various fonts when rendered at the same font size. Check out some examples on the size-adjust descriptor page on MDN.

And more…

Along with both of those, the break-inside property now supports the values avoid-page and avoid-column, the font-size-adjust property now accepts two values, and, if that wasn’t enough, system-ui is now supported as a generic font family name for the font-family property.

break-inside property on MDN

font-size-adjust property on MDN

font-family property on MDN

Object.hasOwn arrives

A nice addition to JavaScript is the Object.hasOwn() static method. This returns true if the specified property is a direct property of the object (even if that property’s value is null or undefined). false is returned if the specified property is inherited or not declared. Unlike the in operator, this method does not check for the specified property in the object’s prototype chain.

Object.hasOwn() is recommended over Object.hasOwnProperty() as it works for objects created using Object.create(null) and with objects that have overridden the inherited hasOwnProperty() method.
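A quick example of the behaviors described above:

```javascript
const proto = { inherited: true };
const obj = Object.create(proto);
obj.own = 1;

Object.hasOwn(obj, "own");        // true: a direct (own) property
Object.hasOwn(obj, "inherited");  // false: comes from the prototype
"inherited" in obj;               // true: `in` walks the prototype chain

// Works where hasOwnProperty() is unavailable or overridden:
const bare = Object.create(null); // no Object.prototype methods at all
bare.x = undefined;
Object.hasOwn(bare, "x");         // true, even though the value is undefined
```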

Read more about Object.hasOwn() on MDN

The post Time for a review of Firefox 92 appeared first on Mozilla Hacks - the Web developer blog.

Blog of DataThis Week in Glean: Data Reviews are Important, Glean Parser makes them Easy

(“This Week in Glean” is a series of blog posts that the Glean Team at Mozilla is using to try to communicate better about our work. They could be release notes, documentation, hopes, dreams, or whatever: so long as it is inspired by Glean.) All “This Week in Glean” blog posts are listed in the TWiG index).

At Mozilla we put a lot of stock in Openness. Source? Open. Bug tracker? Open. Discussion Forums (Fora?)? Open (synchronous and asynchronous).

We also have an open process for determining if a new or expanded data collection in a Mozilla project is in line with our Privacy Principles and Policies: Data Review.

Basically, when a new piece of instrumentation is put up for code review (or before, or after), the instrumentor fills out a form and asks a volunteer Data Steward to review it. If the instrumentation (as explained in the filled-in form) is obviously in line with our privacy commitments to our users, the Data Steward gives it the go-ahead to ship.

(If it isn’t _obviously_ okay then we kick it up to our Trust Team to make the decision. They sit next to Legal, in case you need to find them.)

The Data Review Process and its forms are very generic. They’re designed to work for any instrumentation (tab count, bytes transferred, theme colour) being added to any project (Firefox Desktop, mozilla.org, Focus) and being collected by any data collection system (Firefox Telemetry, Crash Reporter, Glean). This is great for the process as it means we can use it and rely on it anywhere.

It isn’t so great for users _of_ the process. If you only ever write Data Reviews for one system, you’ll find yourself answering the same questions with the same answers every time.

And Glean makes this worse (better?) by including in its metrics definitions almost every piece of information you need in order to answer the review. So now you get to write the answers first in YAML and then in English during Data Review.

But no more! Introducing glean_parser data-review and mach data-review: command-line tools that will generate for you a Data Review Request skeleton with all the easy parts filled in. It works like this:

  1. Write your instrumentation, providing full information in the metrics definition.
  2. Call python -m glean_parser data-review <bug_number> <list of metrics.yaml files> (or mach data-review <bug_number> if you’re adding the instrumentation to Firefox Desktop).
  3. glean_parser will parse the metrics definitions files, pull out only the definitions that were added or changed in <bug_number>, and then output a partially-filled-out form for you.

Here’s an example. Say I’m working on bug 1664461 and add a new piece of instrumentation to Firefox Desktop:

fog.ipc:
  replay_failures:
    type: counter
    description: |
      The number of times the ipc buffer failed to be replayed in the
      parent process.
    bugs:
      - https://bugzilla.mozilla.org/show_bug.cgi?id=1664461
    data_reviews:
      - https://bugzilla.mozilla.org/show_bug.cgi?id=1664461
    data_sensitivity:
      - technical
    notification_emails:
      - chutten@mozilla.com
      - glean-team@mozilla.com
    expires: never

I’m sure to fill in the `bugs` field correctly (because that’s important on its own _and_ it’s what glean_parser data-review uses to find which data I added), and have categorized the data_sensitivity. I also included a helpful description. (The data_reviews field currently points at the bug I’ll attach the Data Review Request for. I’d better remember to come back before I land this code and update it to point at the specific comment…)

Then I can simply use mach data-review 1664461 and it spits out:

!! Reminder: it is your responsibility to complete and check the correctness of
!! this automatically-generated request skeleton before requesting Data
!! Collection Review. See https://wiki.mozilla.org/Data_Collection for details.

DATA REVIEW REQUEST
1. What questions will you answer with this data?

TODO: Fill this in.

2. Why does Mozilla need to answer these questions? Are there benefits for users?
   Do we need this information to address product or business requirements?

TODO: Fill this in.

3. What alternative methods did you consider to answer these questions?
   Why were they not sufficient?

TODO: Fill this in.

4. Can current instrumentation answer these questions?

TODO: Fill this in.

5. List all proposed measurements and indicate the category of data collection for each
   measurement, using the Firefox data collection categories found on the Mozilla wiki.

Measurement Name | Measurement Description | Data Collection Category | Tracking Bug
---------------- | ----------------------- | ------------------------ | ------------
fog_ipc.replay_failures | The number of times the ipc buffer failed to be replayed in the parent process.  | technical | https://bugzilla.mozilla.org/show_bug.cgi?id=1664461


6. Please provide a link to the documentation for this data collection which
   describes the ultimate data set in a public, complete, and accurate way.

This collection is Glean so is documented
[in the Glean Dictionary](https://dictionary.telemetry.mozilla.org).

7. How long will this data be collected?

This collection will be collected permanently.
**TODO: identify at least one individual here** will be responsible for the permanent collections.

8. What populations will you measure?

All channels, countries, and locales. No filters.

9. If this data collection is default on, what is the opt-out mechanism for users?

These collections are Glean. The opt-out can be found in the product's preferences.

10. Please provide a general description of how you will analyze this data.

TODO: Fill this in.

11. Where do you intend to share the results of your analysis?

TODO: Fill this in.

12. Is there a third-party tool (i.e. not Telemetry) that you
    are proposing to use for this data collection?

No.

As you can see, this Data Review Request skeleton comes partially filled out. Everything you previously had to mechanically fill out has been done for you, leaving you more time to focus on only the interesting questions like “Why do we need this?” and “How are you going to use it?”.

Also, this saves you from having to remember the URL to the Data Review Request Form Template each time you need it. We’ve got you covered.

And since this is part of Glean, this means this is already available to every project you can see here. This isn’t just a Firefox Desktop thing.

Hope this saves you some time! If you can think of other time-saving improvements we could add to Glean once, so that every Mozilla project can take advantage of them, please tell us on Matrix.

If you’re interested in how this is implemented, glean_parser’s part of this is over here, while the mach command part is here.

:chutten

(( This is a syndicated copy of the original post. ))

Web Application SecurityMozilla VPN Security Audit

To provide transparency into our ongoing efforts to protect your privacy and security on the Internet, we are releasing a security audit of Mozilla VPN that Cure53 conducted earlier this year.

The scope of this security audit included the following products:

  • Mozilla VPN Qt5 App for macOS
  • Mozilla VPN Qt5 App for Linux
  • Mozilla VPN Qt5 App for Windows
  • Mozilla VPN Qt5 App for iOS
  • Mozilla VPN Qt5 App for Android

Here’s a summary of the items discovered within this security audit that were medium or higher severity:

  • FVP-02-014: Cross-site WebSocket hijacking (High)
    • Mozilla VPN client, when put in debug mode, exposes a WebSocket interface to localhost to trigger events and retrieve logs (most of the functional tests are written on top of this interface). As the WebSocket interface was used only in pre-release test builds, no customers were affected.  Cure53 has verified that this item has been properly fixed and the security risk no longer exists.
  • FVP-02-001: VPN leak via captive portal detection (Medium)
    • Mozilla VPN client allows sending unencrypted HTTP requests outside of the tunnel to specific IP addresses, if the captive portal detection mechanism has been activated through settings.  However, the captive portal detection algorithm requires a plain-text HTTP trusted endpoint to operate. Firefox, Chrome, the network manager of MacOS and many applications have a similar solution enabled by default. Mozilla VPN utilizes the Firefox endpoint.  Ultimately, we have accepted this finding as the user benefits of captive portal detection outweigh the security risk.
  • FVP-02-016: Auth code could be leaked by injecting port (Medium)
    • When a user wants to log into Mozilla VPN, the VPN client will make a request to https://vpn.mozilla.org/api/v2/vpn/login/windows to obtain an authorization URL. The endpoint takes a port parameter that will be reflected in an <img> element after the user signs into the web page. It was found that the port parameter could be of an arbitrary value. Further, it was possible to inject the @ sign, so that the request would go to an arbitrary host instead of localhost (the site’s strict Content Security Policy prevented such requests from being sent). We fixed this issue by improving the port number parsing in the REST API component. The fix includes several tests to prevent similar errors in the future.

If you’d like to read the detailed report from Cure53, including all low and informational items, you can find it here.

More information on the issues identified in this report can be found in our MFSA2021-31 Security Advisory published on July 14th, 2021.

The post Mozilla VPN Security Audit appeared first on Mozilla Security Blog.

Open Policy & AdvocacyMozilla Mornings on the Digital Markets Act: Key questions for Parliament

On 13 September, Mozilla will host the next installment of Mozilla Mornings – our regular event series that brings together policy experts, policymakers and practitioners for insight and discussion on the latest EU digital policy developments.

For this installment, we’re checking in on the Digital Markets Act. Our panel of experts will discuss the key outstanding questions as the debate in Parliament reaches its fever pitch.

Speakers

Andreas Schwab MEP
IMCO Rapporteur on the Digital Markets Act
Group of the European People’s Party

Mika Shah
Co-Acting General Counsel
Mozilla

Vanessa Turner
Senior Advisor
BEUC

With opening remarks by Raegan MacDonald, Director of Global Public Policy, Mozilla

Moderated by Jennifer Baker
EU technology journalist

 

Logistical details

Monday 13 September, 17:00 – 18:00 CEST

Zoom Webinar

Register *here*

Webinar login details to be shared on day of event

The post Mozilla Mornings on the Digital Markets Act: Key questions for Parliament appeared first on Open Policy & Advocacy.

SeaMonkeyUpdating revisited..

I’d like to revisit something about Updates and versions < 2.49.x.  [This could possibly be a repeat note.]

If you check for updates for versions < 2.38 [iirc], you will get an “Update Failed” message along with an “Update XML file malformed (200)” message.

This is a known issue. It stems from the fact that the client doesn’t recognize the certificate that’s being supplied by the site.

Can something be done about this?  Aside from the user manually downloading and installing the latest version, unfortunately, there’s nothing that we can do.  At least, nothing that would make sense to do.

Why?

What needs to be done is to make the old installed version recognize the new certificates as well as support the correct set of SSL ciphers.  The former can probably be achieved by manually downloading the new certificate roots.  The latter, unfortunately, blocks the former as well, since it is a code-level issue, so patches would need to be created and applied to the old binaries.  I’d hazard a guess that no one will be able to take the time to create patches against the old source just to get old clients to update to the newest version.

So, all in all, it’d be a lot easier just to download and install the latest version, provided the operating system is supported by it.  [I can certainly post a rant on this support issue… but I digress..]

And with the possibility of aus2-community’s domain being decommissioned, versions <= 2.53.4  will require an addition to your “user.js” file to redirect to the actual updates site.
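Such a user.js addition would look roughly like the fragment below. The preference name app.update.url is the standard Gecko update preference, but the URL shown is a hypothetical placeholder; check the SeaMonkey project site for the actual updates endpoint:

```
// user.js sketch -- the pref name is the standard Gecko update pref,
// but this URL is a HYPOTHETICAL placeholder, not the real endpoint:
user_pref("app.update.url", "https://updates.example.org/…");
```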

Best regards,

:ewong

 

Blog of DataThis Week in Glean: Why choosing the right data type for your metric matters

(“This Week in Glean” is a series of blog posts that the Glean Team at Mozilla is using to try to communicate better about our work. They could be release notes, documentation, hopes, dreams, or whatever: so long as it is inspired by Glean. You can find an index of all TWiG posts online.)

One of my favorite tasks that comes up in my day-to-day adventure at Mozilla is the chance to work with the data collected by this amazing Glean thing my team has developed. This chance often arises when an engineer needs to verify something or a product manager needs a quick question answered. I am not a data scientist (and I always include that caveat when I provide a peek into the data), but I do understand how the data is collected, ingested, and organized, and I can often guide people to the correct tools and techniques to find what they are looking for.

In this regard, I often encounter challenges in trying to read or analyze data, which relates to another common task I find myself doing: advising engineering teams on how we intend Glean to be used and what metric types would best suit their needs. A recent example of this was a quick Q&A for a group of mobile engineers who all had similar questions. My teammate chutten and I were asked to explain the differences between Counter Metrics and Event Metrics, and to help them understand the situations where each was the most appropriate to use. It was a great session and I felt like the group came away with a deeper understanding of the Glean principles. But thinking about it afterwards, I realized that we do a lot of hand-wavy things when explaining why not to do things. Even in our documentation, we aren’t very specific about the overhead of things like Event Metrics. For example, from the Glean documentation section regarding “Choosing a Metric Type”, in a warning about events:

“Important: events are the most expensive metric type to record, transmit, store and analyze, so they should be used sparingly, and only when none of the other metric types are sufficient for answering your question.”

This is sufficiently scary to make me think twice about using events! But what exactly do we mean by “they are the most expensive”? What about recording, transmitting, storing, and analyzing makes them “expensive”? Well, that’s what I hope to dive into a little deeper with some real numbers and examples, rather than using scary hand-wavy words like “expensive” and “should be used sparingly”. I’ll mostly be focusing on events here, since they contain the “scariest” warning. So, without further ado, let’s take a look at some real comparisons between metric types, and what challenges someone looking at that data may encounter when trying to answer questions about it or with it.

Our claim is that events are expensive to record, store, and transmit, so let’s start by examining that a little more closely. The primary API surface for the Event Metric Type in Glean is the record() function. This function also takes an optional collection of “extra” information in a key-value shape, which is meant to record additional state that is important to the event. The “extras”, along with the category, name, and (relative) timestamp, make up the data that gets recorded, stored, and eventually transmitted to the ingestion pipeline for storage in the data warehouse.

Since Glean is built with Rust and then provides SDKs in various target languages, one of the first things we have to do is serialize the data from the shiny target-language object that Glean generates into something we can pass into the Rust that is at the heart of Glean. It is worth noting that the Glean JavaScript SDK does this a little differently, but the same ideas should apply to events. A similar structure is used to store the data and then transmit it to the telemetry endpoint when the Events Ping is assembled. A real-world example of this serialized data, coming from Fenix’s “Entered URL” event, looks like this JSON:

{
  "category": "events",
  "extra": {
    "autocomplete": "false"
  },
  "name": "entered_url",
  "timestamp": 33191
}
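To make that growth concrete, here is a small JavaScript sketch (purely illustrative, not the Glean SDK) of how one record per interaction accumulates in an events payload:

```javascript
// Illustrative sketch (not the Glean SDK): each record() call appends a
// full event object, so the stored payload grows linearly with usage.
function recordEvent(events, name, extra, timestamp) {
  events.push({ category: "events", name, extra, timestamp });
}

const events = [];
for (let i = 0; i < 10; i++) {
  // Simulate a user entering 10 URLs during a session.
  recordEvent(events, "entered_url", { autocomplete: "false" }, 33191 + i);
}

console.log(events.length);                 // 10 stored objects
console.log(JSON.stringify(events).length); // byte size grows with every event
```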

A similar amount of data is generated every time the metric is recorded, stored, and transmitted. So, if the user entered 10 URLs, then we would record this same thing 10 times, each with a different relative timestamp. To take a quick look at how this affects using this data for analysis: if I only needed to know how many users interacted with this feature and how often, I would have to count each event with this category and name for every user. To complicate the analysis a bit further, Glean doesn’t transmit events one at a time; it collects all events during a “session” (or until it hits 500 recorded events) and transmits them as an array within an Event Ping. This Event Ping then becomes a single row in the data, and nested in a column we find the array of events. In order to even count the events, I need to “unnest” them and flatten out the data. This involves cross joining each event in the array back to the parent ping record just to get at the category, name, timestamp, and extras. We end up with some SQL that looks like this (WARNING: this is just an example. Don’t run it; it could be expensive, and it shouldn’t work anyway because I left out the filter on the submission date):

SELECT *
FROM fenix
CROSS JOIN UNNEST (events) AS event

For an average day in Fenix we see 75-80 million Event Pings from clients on our release version, with an average of a little over 8 events per ping. That adds up to over 600 million events per day, and just for Fenix! So when we do this little bit of SQL flattening of the data structure, we end up manipulating over half a billion records for a single day, and that adds up really quickly if you start looking at more than one day at a time. This can take a lot of computing horsepower, both in processing the query and in trying to display the results in some visual representation. Now that I have the events flattened out, I can finally filter for the category and name of the event I am looking for and count how many of that specific event are present. Using the Fenix event “entered_url” from above, I end up with something like this to count the number of clients and events:

SELECT
  COUNT(DISTINCT client_info.client_id) AS client_count,
  COUNT(*) AS event_count,
  DATE(submission_timestamp) AS event_date
FROM
  fenix.events
CROSS JOIN
  UNNEST(events.events) AS event -- Yup, events.events, naming=hard
WHERE
  submission_timestamp >= '2021-08-12'
  AND event.category = 'events'
  AND event.name = 'entered_url'
GROUP BY
  event_date
ORDER BY
  event_date

Our query engine is pretty good: this only takes about 8 seconds to process, and it has narrowed down the data it needs to scan to a paltry 150 GB. But this is a very simple analysis of the data involved. I didn’t even dig into the “extra” information, which would require yet another level of flattening through UNNESTing the “extras” array inside each individual event.

As you can see, this explodes pretty quickly into some big datasets just for counting things. Don’t get me wrong: this is all very useful if you need to know the sequence of events that led the client to entering a URL; that’s what events are for, after all. To be fair, our lovely Data Engineering folks have taken the time and trouble to create views where these events are already unnested, so I could have avoided doing it manually and instead used the automatically flattened dataset. But I wanted to better illustrate the additional complexity that goes on downstream from events, and working with the “raw” data seemed the best way to do this.
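Conceptually, that UNNEST step is doing something like the following JavaScript sketch (a toy model of the pipeline, with made-up ping data): every ping row fans out into one row per nested event before we can filter or count anything.

```javascript
// Toy model of CROSS JOIN UNNEST (illustrative only): flatten an
// array-of-events column so each event becomes its own row, carrying
// the parent ping's client_id along with it.
function unnestEvents(pings) {
  const rows = [];
  for (const ping of pings) {
    for (const event of ping.events) {
      rows.push({ client_id: ping.client_id, ...event });
    }
  }
  return rows;
}

// Two pings, three events total (made-up data).
const pings = [
  { client_id: "a", events: [{ name: "entered_url" }, { name: "app_opened" }] },
  { client_id: "b", events: [{ name: "entered_url" }] },
];

const rows = unnestEvents(pings);
const urlEvents = rows.filter((r) => r.name === "entered_url");
console.log(rows.length);      // 3 rows after flattening
console.log(urlEvents.length); // 2 "entered_url" events
```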

If we really just need to know how many clients interact with a feature and how often, then a much lighter weight alternative recommended by the Glean team would be a Counter Metric. To return to what the data representation of this looks like, we can look at an internal Glean metric that counts the number of times Fenix enters the foreground per day (since the metrics ping is sent once per day). It looks like this:

"counter": {
"glean.validation.foreground_count": 1
}

No matter how many times we add() to this metric, it will always take up that same amount of space right there, only the value would change. So, we don’t end up with one record per event, but a single value that represents the count of the interactions. When I go to query this and find out how many clients this involved and how many times the app moved to the foreground of the device, I can do something like this in SQL (without all the UNNESTing):

SELECT
  COUNT(DISTINCT client_info.client_id) AS client_count,
  SUM(m.metrics.counter.glean_validation_foreground_count) AS foreground_count,
  DATE(submission_timestamp) AS event_date
FROM
  org_mozilla_firefox.metrics AS m
WHERE
  submission_timestamp >= '2021-08-12'
GROUP BY
  event_date
ORDER BY
  event_date

This runs in just under 7 seconds, but the query only has to scan about 5 GB of data instead of the 150 GB we saw with the event. And, for comparison, there were only about 8 million of those entered_url events per day compared to 80 million foreground occurrences per day. Even with many more incidents, the query that used the Counter Metric Type scanned 1/30th the amount of data. It is also fairly obvious which query is easier to understand. The foreground count is just a numeric counter value stored in a single row in the database along with all of the other metrics that are collected and sent on the daily metrics ping, and it ultimately results in selecting a single column value. Rather than having to unnest arrays and then count them, I can simply SUM the values stored in the column for the counter to get my result.
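The storage difference is easy to demonstrate with a toy sketch (illustrative only, not the Glean SDK): a counter stays a single number no matter how many times you add() to it, while events keep one full record per occurrence.

```javascript
// Toy comparison (not the Glean SDK): counter vs. event storage growth.
const store = {
  counters: { "glean.validation.foreground_count": 0 },
  events: [],
};

function addToCounter(name, amount = 1) {
  store.counters[name] += amount; // same single value; only the number changes
}

function recordEvent(name) {
  store.events.push({ category: "events", name }); // one object per occurrence
}

for (let i = 0; i < 1000; i++) {
  addToCounter("glean.validation.foreground_count");
  recordEvent("app_foregrounded");
}

console.log(store.counters["glean.validation.foreground_count"]); // 1000
console.log(store.events.length); // 1000 stored records
```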

Events do serve a beautiful purpose, like building an onboarding funnel to determine how well we retain users and which onboarding path best achieves that. We can’t do that with counters because they don’t have the richness needed to show the flow of interactions through the app. Counters also serve a purpose, and can answer questions about the usage of a feature with very little overhead. I just hope that as you read this, you will consider what questions you need to answer and remember that there is probably a well-suited Glean Metric Type just for your purpose, and if there isn’t, you can always request a new metric type! The Glean Team wants you to get the most out of your data while staying true to our lean data practices, and we are always available to discuss which metric type is right for your situation if you have any questions.

SUMO BlogWhat’s up with SUMO – August 2021

Hey SUMO folks,

Summer is here. Despite the current situation of the world, I hope you can still enjoy a bit of sunshine and the breezing air wherever you are. And while vacations are planned, SUMO is still busy with lots of projects and releases. So let’s get the recap started!

Welcome on board!

  1. Welcome Julie and Rowan! Thank you for diving into the KB world.

Community news

  • One of our goals for Q3 this year is to revamp the onboarding experience for contributors, focused on the /get-involved page. To support this work, we’re currently conducting a survey to understand how effective the current onboarding information we provide is. Please fill out the survey if you haven’t, and share it with your community and fellow contributors!
  • No Kitsune update for this month. Check out SUMO Engineering Board instead to see what the team is currently doing.

Community call

  • Watch the monthly community call if you haven’t. Learn more about what’s new in July!
  • Reminder: Don’t hesitate to join the call in person if you can. We try our best to provide a safe space for everyone to contribute. You’re more than welcome to lurk in the call if you don’t feel comfortable turning on your video or speaking up. If you feel shy to ask questions during the meeting, feel free to add your questions on the contributor forum in advance, or put them in our Matrix channel, so we can address them during the meeting.

Community stats

KB

KB pageviews (*)

Month Page views Vs previous month
Jul 2021 8,237,410 -10.81%
* KB pageviews number is a total of KB pageviews for /en-US/ only

Top 5 KB contributors in the last 90 days: 

  1. AliceWyman
  2. Michele Rodaro
  3. Pierre Mozinet
  4. Romado33
  5. K_alex

KB Localization

Top 10 locale based on total page views

Locale Apr 2021 pageviews (*) Localization progress (per Jul, 9)(**)
de 8.62% 99%
zh-CN 6.92% 100%
pt-BR 6.32% 64%
es 6.22% 45%
fr 5.70% 91%
ja 4.13% 55%
ru 3.61% 99%
it 2.08% 100%
pl 2.00% 84%
zh-TW 1.44% 6%
* Locale pageviews is an overall pageviews from the given locale (KB and other pages)

** Localization progress is the percentage of localized article from all KB articles per locale

Top 5 localization contributors in the last 90 days: 

  1. Milupo
  2. Soucet
  3. Jim Spentzos
  4. Michele Rodaro
  5. Mark Heijl

Forum Support

Forum stats

Month Total questions Answer rate within 72 hrs Solved rate within 72 hrs Forum helpfulness
Jul 2021 3175 72.13% 15.02% 81.82%

Top 5 forum contributors in the last 90 days: 

  1. FredMcD
  2. Cor-el
  3. Jscher2000
  4. Seburo
  5. Sfhowes

Social Support

Jul 2021 stats per channel:
Channel | Total conv | Conv interacted
@firefox | 2967 | 341
@FirefoxSupport | 386 | 270

Top 5 contributors in Q1 2021

  1. Christophe Villeneuve
  2. Andrew Truong
  3. Pravin

Play Store Support

We don’t have enough data for the Play Store Support yet. However, you can check out the overall Respond Tool metrics here.

Product updates

Firefox desktop

Firefox mobile

  • FX for Android V91 (August 10)
  • FX for iOS V36 (August 10)
    • Fixes: Tab preview not showing in tab tray

Other products / Experiments

  • Mozilla VPN V2.5 (September 8)
    • Multi-hop: using multiple VPN servers. This VPN server chaining method gives extra security and privacy.
    • Support for Local DNS: If there is a need, you can set a custom DNS server when the Mozilla VPN is on.
    • Getting help if you cannot sign in: ‘get support’ improvements.

Upcoming Releases

  • FX Desktop 92, FX Android 92, FX iOS V37 (September 7)
  • Updates to FX Focus (October)

Shout-outs!

  • Thanks to Felipe Koji for his great work on Social Support.
  • Thanks to Seburo for constantly championing support for Firefox mobile.

If you know anyone that we should feature here, please contact Kiki and we’ll make sure to add them in our next edition.

Useful links:

hacks.mozilla.orgSpring cleaning MDN: Part 2

An illustration of a blue coloured dinosaur sweeping with a broom

Illustration by Daryl Alexsy

 

The bags have been filled up with all the things we’re ready to let go of and it’s time to take them to the charity shop.

Archiving content

Last month we removed a bunch of content from MDN. MDN is 16 years old (and yes, it can drink in some countries); all that time ago, it was a great place for all of Mozilla to document all of their things. As MDN evolved and the web reference became our core content, other areas became less relevant to the overall site. We have ~11k active pages on MDN, so keeping them up to date is a big task, and we feel our focus should be there.

This was a big decision and had been in the works for over a year. It actually started before we moved MDN content to GitHub. You may have noticed a banner every now and again, saying certain pages weren’t maintained. Various topics were removed including all Firefox (inc. Gecko) docs, which you can now find here. Mercurial, Spidermonkey, Thunderbird, Rhino and XUL were also included in the archive.

So where is the content now?

It’s saved – it’s in this repo. We haven’t actually deleted it completely. Some of it is being re-hosted by various teams and we have the ability to redirect to those new places. It’s saved in both its rendered state and the raw wiki form. Just. In. Case.

The post Spring cleaning MDN: Part 2 appeared first on Mozilla Hacks - the Web developer blog.

Mozilla L10NL10n Report: August 2021 Edition

Please note some of the information provided in this report may be subject to change as we are sometimes sharing information about projects that are still in early stages and are not final yet. 

Welcome!

New localizers

Are you a locale leader and want us to include new members in our upcoming reports? Contact us!

New community/locales added

New content and projects

What’s new or coming up in Firefox desktop

In terms of new content, it’s been a pretty calm period for Firefox after the MR1 release, with less than 50 strings added over the last 6 weeks. We expect that to change in the coming weeks, starting with a few clean-ups that didn’t land in time for MR1, and brand new features.

These are the relevant deadlines for the next month:

  • Firefox 91 shipped last Tuesday (August 10), and we welcomed a new locale with it: Scots.
  • The deadline to localize Firefox 92 is August 29 (release will happen on September 7), while Firefox 93 just started its life cycle in Nightly.

A reminder that Firefox 91 is also the new ESR, and will be supported for about 1 year. We plan to update localizations for 91 ESR in a few weeks, to improve coverage and pick up some bug fixes.

What’s new or coming up in mobile

We have exciting news coming up on the mobile front. In case you haven’t heard yet, we just brought back Focus for iOS and Focus for Android to Pontoon for localization. We are eager to bring back these products to a global audience with updated translations!

Both Focus for Android and Focus for iOS should have all strings in by August 17th. The l10n deadline for both localizing and testing your work is September 6th. One difference you will notice is that iOS strings will be trickling in regularly, versus what we usually do for Firefox for iOS, where you get all strings in one batch.

Concerning Firefox for Android and Firefox for iOS: both projects are going to start landing strings for the next release, which promises to be a very interesting one. More info to come soon, please stay tuned on Matrix and Discourse for this!

What’s new or coming up in web projects

mozilla.org

A set of VPN pages landed recently. As the Mozilla VPN product expands to more markets, it would be great to get these pages localized. Do plan to take some time and work as a team to complete the 4000+ words of new content. The pages contain some basic information on what distinguishes Mozilla’s VPN from others on the market. You will find it useful for spreading the word and promoting the product in your language.

There will be a couple of new projects on the horizon. Announcements will be made through Discourse and Matrix.

Newly published localizer facing documentation

Events

Want to showcase an event coming up that your community is participating in? Reach out to any l10n-driver and we’ll include that (see links to emails at the bottom of this report)

Opportunities

International Translation Day

Call for community translator or manager as a panelist to represent the Mozilla l10n community:

As part of Translation Day 2021, the WordPress Polyglots team is organizing a handful of global events (in English) from Sept. 17 – 30, 2021. The planning team is still deciding on the format and dates for these events, but they will be virtual/online and accessible to anyone who’s interested. One of the events the team is putting together is a panel discussion between contributors from multiple open source or community-led translation projects. If you or anyone in your community would be interested in talking about your experience as a community translator and how translations work in your community or project, you would be a great fit!

Check out what the organizer and the communities were able to accomplish last year and what they are planning for this year. The panel discussion would involve localization contributors like you from other open source communities, sharing their experiences on the tools, process and creative ways to collaborate during the pandemic. We hope some of you can take the opportunity to share and learn.

Even if you are not able to participate in the event, maybe you can organize a virtual meeting within the community, meet and greet and celebrate this special day together.

Friends of the Lion

  • Congratulations to Temitope Olajide from the Yoruba l10n community, for your excellent job completing the Terminology project! Image by Elio Qoshi

Know someone in your l10n community who’s been doing a great job and should appear here? Contact one of the l10n-drivers and we’ll make sure they get a shout-out (see list at the bottom)!

Useful Links

Questions? Want to get involved?

  • If you want to get involved, or have any question about l10n, reach out to:

Did you enjoy reading this report? Let us know how we can improve by reaching out to any one of the l10n-drivers listed above.

The Mozilla Thunderbird BlogThunderbird 91 Available Now

The newest stable release of Thunderbird, version 91, is available for download on our website now. Existing Thunderbird users will be updated to the newest version in the coming weeks.

Thunderbird 91 is our biggest release in years with a ton of new features, bug fixes and polish across the app. This past year had its challenges for the Thunderbird team, our community and our users. But in the midst of a global pandemic, the important role that email plays in our lives became even more obvious. Our team was blown away by the support we received in terms of donations and open source contributions and we extend a big thanks to everyone who helped out Thunderbird in the lead up to this release.

There are a ton of changes in the new Thunderbird, you can see them all in the release notes. In this post we’ll focus on the most notable and visible ones.

Multi-Process Support (Faster Thunderbird)

Thunderbird has gotten faster with multi-process support. The new multi-process Thunderbird takes better advantage of the processor in your computer by splitting up the application into multiple smaller processes instead of running as one large one. That’s a lot of geekspeak to say that Thunderbird 91 will feel like it got a speed boost.

New Account Setup

One of the most noticeable changes for Thunderbird 91 is the new account setup wizard. The new wizard not only features a better look, but does auto-discovery of calendars and address books and allows most users to set them up with just a click. After setting up an account, the wizard also points users at additional (optional) things to do – such as adding a signature or setting up end-to-end encryption.

The New Account Setup Wizard

Attachments Pane + Drag-and-Drop Overlay

The attachments pane has been moved to the bottom of the compose window for better visibility of filenames as well as being able to see many at once. We’ve also added an overlay that appears when you drag-and-drop a file into the compose window asking how you would like to handle the file in that email (such as putting a picture in-line in your message or simply attaching it to the email).

The Thunderbird compose window with the attachment pane at the bottom.

The new attachment drag-and-drop overlay.

PDF Viewer

Thunderbird now has a built-in PDF viewer, which means you can read and even do some editing on PDFs sent to you as attachments. You can do all this without ever leaving Thunderbird, allowing you to return to your inbox without missing a beat.

The PDF Viewer in Thunderbird 91

UI Density Control

Depending on how you use Thunderbird and whether you are using it on a large desktop monitor or a small laptop touchscreen, you may want the icons and text of the interface to be larger and more spread out, or very compact. In Thunderbird 91, under View -> Density in the menu, you can select the UI density for the entire application. Three options are available: compact, which puts everything closer together; normal, the experience you are accustomed to in Thunderbird; and touch, which makes icons bigger and separates elements.

Play around with this new level of control and find what works best for you!

UI density control options

Calendar Sidebar Improvements

Managing multiple calendars has been made easier with the calendar sidebar improvements in this release. There is a quick enable button for disabled calendars, as well as a show/hide icon for easily toggling what calendars are visible. There is also a lock indicator for read-only calendars. Additionally, although not a sidebar improvement, there are now better color accents to highlight the current day in the calendar.

The improved calendar sidebar.

Better Dark Theme

Thunderbird’s Dark Theme got even better in this release. In the past some windows and dialogues looked a bit out of place if you had Thunderbird’s dark theme selected. Now almost every dialogue and window in Thunderbird is fully styled to respect the user’s color scheme preferences.

Dark Theme Screenshot

Other Notable Mentions

You really have to scroll through the release notes as there are a lot of little changes that make Thunderbird 91 feel really polished. Some other notable mentions are:

hacks.mozilla.orgHopping on Firefox 91

Hopping on Firefox 91

August is already here, which means so is Firefox 91! This release adds a Scottish locale and, if the ‘increased contrast’ setting is checked on macOS, automatically enables High Contrast mode.

Private browsing windows have an HTTPS-first policy and will automatically attempt to make all connections to websites secure. Connections will fall back to HTTP if the website does not support HTTPS.

For developers Firefox 91 supports the Visual Viewport API and adds some more additions to the Intl.DateTimeFormat object.

This blog post provides merely a set of highlights; for all the details, check out the following:

Visual Viewport API

Implemented back in Firefox 63, the Visual Viewport API was behind the pref dom.visualviewport.enabled in the desktop release. It is no longer behind that pref and is enabled by default, meaning the API is now supported in all major browsers.

There are two viewports on the mobile web, the layout viewport and the visual viewport. The layout viewport covers all the elements on a page and the visual viewport represents what is actually visible on screen. If a keyboard appears on screen, the visual viewport dimensions will shrink, but the layout viewport will remain the same.

This API gives you information about the size, offset and scale of the visual viewport and allows you to listen for resize and scroll events. You access it via the visualViewport property of the window interface.

In this simple example, we listen for the resize event and, when the user zooms in, hide an element in the layout so as not to clutter the interface.

const elToHide = document.getElementById('to-hide');
const viewport = window.visualViewport;

function resizeHandler() {
  if (viewport.scale > 1.3) {
    elToHide.style.display = "none";
  } else {
    elToHide.style.display = "block";
  }
}

viewport.addEventListener('resize', resizeHandler);

New formats for Intl.DateTimeFormat

A couple of updates to the Intl.DateTimeFormat object include new timeZoneName options for formatting how a timezone is displayed. These include the localized GMT formats shortOffset and longOffset, and generic non-location formats shortGeneric and longGeneric. The below code shows all the different options for the timeZoneName and their format.

const date = Date.UTC(2021, 11, 17, 3, 0, 42);
const timezoneNames = ['short', 'long', 'shortOffset', 'longOffset', 'shortGeneric', 'longGeneric'];

for (const zoneName of timezoneNames) {
  const formatter = new Intl.DateTimeFormat('en-US', {
    timeZone: 'America/Los_Angeles',
    timeZoneName: zoneName,
  });

  console.log(zoneName + ": " + formatter.format(date));
}

// expected output:
// > "short: 12/16/2021, PST"
// > "long: 12/16/2021, Pacific Standard Time"
// > "shortOffset: 12/16/2021, GMT-8"
// > "longOffset: 12/16/2021, GMT-08:00"
// > "shortGeneric: 12/16/2021, PT"
// > "longGeneric: 12/16/2021, Pacific Time"

You can now format date ranges as well with the new formatRange() and formatRangeToParts() methods. The former returns a localized and formatted string for the range between two Date objects:

const options = { weekday: 'long', year: 'numeric', month: 'long', day: 'numeric' };

const startDate = new Date(Date.UTC(2007, 0, 10, 10, 0, 0));
const endDate = new Date(Date.UTC(2008, 0, 10, 11, 0, 0));

const dateTimeFormat = new Intl.DateTimeFormat('en', options);
console.log(dateTimeFormat.formatRange(startDate, endDate));

// expected output: Wednesday, January 10, 2007 – Thursday, January 10, 2008

And the latter returns an array containing the locale-specific parts of a date range:

const startDate = new Date(Date.UTC(2007, 0, 10, 10, 0, 0)); // > 'Wed, 10 Jan 2007 10:00:00 GMT'
const endDate = new Date(Date.UTC(2007, 0, 10, 11, 0, 0));   // > 'Wed, 10 Jan 2007 11:00:00 GMT'

const dateTimeFormat = new Intl.DateTimeFormat('en', {
  hour: 'numeric',
  minute: 'numeric'
});
const parts = dateTimeFormat.formatRangeToParts(startDate, endDate);

for (const part of parts) {
  console.log(part);
}

// expected output (in GMT timezone):
// Object { type: "hour", value: "2", source: "startRange" }
// Object { type: "literal", value: ":", source: "startRange" }
// Object { type: "minute", value: "00", source: "startRange" }
// Object { type: "literal", value: " – ", source: "shared" }
// Object { type: "hour", value: "3", source: "endRange" }
// Object { type: "literal", value: ":", source: "endRange" }
// Object { type: "minute", value: "00", source: "endRange" }
// Object { type: "literal", value: " ", source: "shared" }
// Object { type: "dayPeriod", value: "AM", source: "shared" }

Securing the Gamepad API

There have been a few updates to the Gamepad API to fall in line with the spec. It is now only available in secure contexts (HTTPS) and is protected by Feature Policy: gamepad. If access to gamepads is disallowed, calls to Navigator.getGamepads() will throw an error and the gamepadconnected and gamepaddisconnected events will not fire.
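Given that behavior, code probing for gamepad support may want to treat a thrown error the same as an absent API. Here is a hedged sketch; the helper name is ours, and the probe logic is an assumption based on the spec'd behavior described above:

```javascript
// Hypothetical helper: report whether gamepads are usable, treating a
// Feature-Policy rejection (getGamepads() throwing) like an absent API.
function canUseGamepads(nav) {
  if (!nav || typeof nav.getGamepads !== 'function') {
    return false; // API missing, e.g. an insecure (non-HTTPS) context
  }
  try {
    nav.getGamepads();
    return true;
  } catch (e) {
    return false; // disallowed by Feature Policy: gamepad
  }
}

// In a page, you would call canUseGamepads(navigator) before wiring up
// gamepadconnected / gamepaddisconnected listeners.
```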

 

The post Hopping on Firefox 91 appeared first on Mozilla Hacks - the Web developer blog.

Web Application SecurityFirefox 91 Introduces Enhanced Cookie Clearing

We are pleased to announce a new, major privacy enhancement to Firefox’s cookie handling that lets you fully erase your browser history for any website. Today’s new version of Firefox, in Strict Mode, lets you easily delete all cookies and supercookies that were stored on your computer by a website or by any trackers embedded in it.

Building on Total Cookie Protection, Firefox 91’s new approach to deleting cookies prevents hidden privacy violations and makes it easy for you to see which websites are storing information on your computer.

When you decide to tell Firefox to forget about a website, Firefox will automatically throw away all cookies, supercookies and other data stored in that website’s “cookie jar”. This “Enhanced Cookie Clearing” makes it easy to delete all traces of a website in your browser without the possibility of sneaky third-party cookies sticking around.

What data websites are storing in your browser

Browsing the web leaves data behind in your browser. A site may set cookies to keep you logged in, or store preferences in your browser. There are also less obvious kinds of site data, such as caches that improve performance, or offline data which allows web applications to work without an internet connection. Firefox itself also stores data safely on your computer about sites you have visited, including your browsing history or site-specific settings and permissions.

Firefox allows you to clear all cookies and other site data for individual websites. Data clearing can be used to hide your identity from a site by deleting all data that is accessible to the site. In addition, it can be used to wipe any trace of having visited the site from your browsing history.

Why clearing this data can be difficult

To make matters more complicated, the websites that you visit can embed content, such as images, videos and scripts, from other websites. This “cross-site” content can also read and write cookies and other site data.

Let’s say you have visited facebook.com, comfypants.com and mealkit.com. All of these sites store data in Firefox and leave traces on your computer. This data includes typical storage like cookies and localStorage, but also site settings and cached data, such as the HTTP cache. Additionally, comfypants.com and mealkit.com embed a like button from facebook.com.

Firefox Strict Mode includes Total Cookie Protection, where the cookies and data stored by each website on your computer are confined to a separate cookie jar. In Firefox 91, Enhanced Cookie Clearing lets you delete all the cookies and data for any website by emptying that cookie jar. Illustration: Megan Newell and Michael Ham.

Embedded third-party resources complicate data clearing. Before Enhanced Cookie Clearing, Firefox cleared data only for the domain that was specified by the user. That meant that if you were to clear storage for comfypants.com, Firefox deleted the storage of comfypants.com and left the storage of any sites embedded on it (facebook.com) behind. Keeping the embedded storage of facebook.com meant that it could identify and track you again the next time you visited comfypants.com.

How Enhanced Cookie Clearing solves this problem

Total Cookie Protection, built into Firefox, makes sure that facebook.com can’t use cookies to track you across websites. It does this by partitioning data storage into one cookie jar per website, rather than using one big jar for all of facebook.com’s storage. With Enhanced Cookie Clearing, if you clear site data for comfypants.com, the entire cookie jar is emptied, including any data facebook.com set while embedded in comfypants.com.
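The cookie-jar idea can be illustrated with a toy model. This is a conceptual sketch only, not how Firefox actually stores data: storage is keyed by the top-level site it was set under, not just by the domain that set it, so clearing one jar also removes embedded third-party data.

```javascript
// Toy model of partitioned storage ("one cookie jar per website").
class PartitionedStorage {
  constructor() {
    this.jars = new Map(); // topLevelSite -> Map(settingDomain -> data)
  }
  set(topLevelSite, settingDomain, data) {
    if (!this.jars.has(topLevelSite)) this.jars.set(topLevelSite, new Map());
    this.jars.get(topLevelSite).set(settingDomain, data);
  }
  // Enhanced Cookie Clearing: dropping the jar removes everything set
  // under that site, including data set by embedded third parties.
  clearSite(topLevelSite) {
    this.jars.delete(topLevelSite);
  }
  has(topLevelSite, settingDomain) {
    return this.jars.get(topLevelSite)?.has(settingDomain) ?? false;
  }
}
```

With this model, clearing comfypants.com also deletes the facebook.com data that was set while embedded there, while data set during a direct visit to facebook.com lives in its own jar and survives.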

Now, if you click on Settings > Privacy and Security > Cookies and Site Data > Manage Data, Firefox no longer shows individual domains that store data. Instead, Firefox lists a cookie jar for each website you have visited. That means you can easily recognize and remove all data a website has stored on your computer, without having to worry about leftover data from third parties embedded in that website. Here is how it looks:

In Firefox’s Privacy and Security Settings, you can manage cookies and other site data stored on your computer. In Firefox 91 ETP Strict Mode, Enhanced Cookie Clearing ensures that all data for any site you choose has been completely removed.

How to Enable Enhanced Cookie Clearing

In order for Enhanced Cookie Clearing to work, you need to have Strict Tracking Protection enabled. Once enabled, Enhanced Cookie Clearing will be used whenever you clear data for specific websites, for example when using “Clear cookies and site data” in the identity panel (lock icon) or in the Firefox preferences. Find out how to clear site data in Firefox.

If you not only want to remove a site’s cookies and caches, but want to delete it from history along with any data Firefox has stored about it, you can use the “Forget About This Site” option in the History menu:

Firefox’s History menu lets you clear all history from your computer of any site you have visited. Starting in Firefox 91 in ETP Strict Mode, Enhanced Cookie Clearing ensures that third-party cookies that were stored when you visited that site are deleted as well.

Thank you

We would like to thank the many people at Mozilla who helped and supported the development and deployment of Enhanced Cookie Clearing, including Steven Englehardt, Stefan Zabka, Tim Huang, Prangya Basu, Michael Ham, Mei Loo, Alice Fleischmann, Tanvi Vyas, Ethan Tseng, Mikal Lewis, and Selena Deckelmann.


The post Firefox 91 Introduces Enhanced Cookie Clearing appeared first on Mozilla Security Blog.

Web Application Security: Firefox 91 introduces HTTPS by Default in Private Browsing


We are excited to announce that, starting in Firefox 91, Private Browsing Windows will favor secure connections to the web by default. For every website you visit, Firefox will automatically establish a secure, encrypted connection over HTTPS whenever possible.

What is the difference between HTTP and HTTPS?

The Hypertext Transfer Protocol (HTTP) is a key protocol through which web browsers and websites communicate. However, data transferred by the traditional HTTP protocol is unprotected and transferred in clear text, such that attackers are able to view, steal, or even tamper with the transmitted data. The introduction of HTTP over TLS (HTTPS) fixed this privacy and security shortcoming by allowing the creation of secure, encrypted connections between your browser and the websites that support it.

In the early days of the web, the use of HTTP was dominant. But, since the introduction of its secure successor HTTPS, and further with the availability of free, simple website certificates, the large majority of websites now support HTTPS. While there remain many websites that don’t use HTTPS by default, a large fraction of those sites do support the optional use of HTTPS. In such cases, Firefox Private Browsing Windows now automatically opt into HTTPS for the best available security and privacy.

How HTTPS by Default works

Firefox’s new HTTPS by Default policy in Private Browsing Windows represents a major improvement in the way the browser handles insecure web page addresses. As illustrated in the Figure below, whenever you enter an insecure (HTTP) URL in Firefox’s address bar, or you click on an insecure link on a web page, Firefox will now first try to establish a secure, encrypted HTTPS connection to the website. In the cases where the website does not support HTTPS, Firefox will automatically fall back and establish a connection using the legacy HTTP protocol instead:

If you enter an insecure URL in the Firefox address bar, or if you click an insecure link on a web page, Firefox Private Browsing Windows checks if the destination website supports HTTPS. If YES: Firefox upgrades the connection and establishes a secure, encrypted HTTPS connection. If NO: Firefox falls back to using an insecure HTTP connection.
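The upgrade-then-fallback flow can be sketched as a small function. This is an illustration of the policy described above, not Firefox's internals; `attemptLoad` is a hypothetical callback that resolves when a page loads and rejects when the server can't serve that scheme:

```javascript
// Sketch of "HTTPS by default with HTTP fallback".
async function loadWithHttpsFirst(url, attemptLoad) {
  const u = new URL(url);
  if (u.protocol !== "http:") {
    return attemptLoad(u.href); // already https (or another scheme)
  }
  const upgraded = new URL(u.href);
  upgraded.protocol = "https:";
  try {
    return await attemptLoad(upgraded.href); // try the secure scheme first
  } catch {
    return attemptLoad(u.href); // site has no HTTPS: legacy HTTP fallback
  }
}
```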

(Note that this new HTTPS by Default policy in Firefox Private Browsing Windows is not directly applied to the loading of in-page components like images, styles, or scripts in the website you are visiting; it only ensures that the page itself is loaded securely if possible. However, loading a page over HTTPS will, in the majority of cases, also cause those in-page components to load over HTTPS.)

We expect that HTTPS by Default will expand beyond Private Windows in the coming months. Stay tuned for more updates!

It’s Automatic!

As a Firefox user, you can benefit from the additionally provided security mechanism as soon as your Firefox auto-updates to version 91 and you start browsing in a Private Browsing Window. If you aren’t a Firefox user yet, you can download the latest version here to start benefiting from all the ways that Firefox works to protect you when browsing the internet.

Thank you

We are thankful for the support of our colleagues at Mozilla including Neha Kochar, Andrew Overholt, Joe Walker, Selena Deckelmann, Mikal Lewis, Gijs Kruitbosch, Andrew Halberstadt and everyone who is passionate about building the web we want: free, independent and secure!

The post Firefox 91 introduces HTTPS by Default in Private Browsing appeared first on Mozilla Security Blog.

Blog of Data: This Week in Glean: Building a Mobile Acquisition Dashboard in Looker

(“This Week in Glean” is a series of blog posts that the Glean Team at Mozilla is using to try to communicate better about our work. They could be release notes, documentation, hopes, dreams, or whatever: so long as it is inspired by Glean.) All “This Week in Glean” blog posts are listed in the TWiG index (and on the Mozilla Data blog).

As part of the DUET (Data User Engagement Team) working group, some of my day-to-day work involves building dashboards for visualizing user engagement aspects of the Firefox product. At Mozilla, we recently decided to use Looker to create dashboards and interactive views on our datasets. It’s a new system to learn but provides a flexible model for exploring data. In this post, I’ll walk through the development of several mobile acquisition funnels built in Looker.

The most familiar form of engagement modeling is probably funnel analysis — measuring engagement by capturing a cohort of users as they flow through various acquisition channels into the product. Typically, you’d visualize the flow as a Sankey or funnel plot, counting retained users at every step. The chart can help build intuition about bottlenecks or the performance of campaigns.

Mozilla owns a few mobile products; there is Firefox for Android, Firefox for iOS, and then Firefox Focus on both operating systems (also known as Klar in certain regions). We use Glean to instrument these products. The foremost benefit of Glean is that it encapsulates many best practices from years of instrumenting browsers; as such, all of the tables that capture anonymized behavior activity are consistent across the products. One valuable idea from this setup is that writing a query for a single product should allow it to extend to others without too much extra work. In addition, we pull in data from both the Google Play Store and Apple App Store to analyze the acquisition numbers. Looker allows us to take advantage of similar schemas with the ability to templatize queries.

ETL Pipeline

The pipeline brings all of the data into BigQuery so it can be referenced in a derived table within Looker.

  1. App Store data is exported into a table in BigQuery.
  2. Glean data flows into the org_mozilla_firefox.baseline table.
  3. A derived org_mozilla_firefox.baseline_clients_first_seen table is created from the baseline table. An org_mozilla_firefox.baseline_clients_daily table is created that references the first seen table.
  4. A Looker explore references the baseline_clients_daily table in a parameterized SQL query, alongside data from the Google Play Store.
  5. A dashboard references the explore to communicate important statistics at first glance, alongside configurable parameters.

Peculiarities of Data Sources

Before jumping off into implementing a dashboard, it’s essential to discuss the quality of the data sources. For one, Mozilla and the app stores count users differently, which leads to subtle inconsistencies.

For example, there is no way for Mozilla to tie a Glean client back to the Play Store installation event in the Play Store. Each Glean client is assigned a new identifier for each device, whereas the Play Store only counts new installs by account (which may have several devices). We can’t track a single user across this boundary, and instead have to rely on the relative proportions over time. There are even more complications when trying to compare numbers between Android and iOS. Whereas the Play Store may show the number of accounts that have visited a page, the Apple App Store shows the total number of page visits instead. Apple also only reports users that have opted into data collection, which under-represents the total number of users.

These differences can be confusing to people who are not intimately familiar with the peculiarities of these systems. Therefore, an essential part of putting together this view is documenting these caveats and educating dashboard users so they understand the data better.

Building a Looker Dashboard

There are three components to building a Looker dashboard: a view, an explore, and a dashboard. These files are written in a markup called LookML. In this project, we consider three files:

  • mobile_android_country.view.lkml
    • Contains the templated SQL query for preprocessing the data, parameters for the query, and a specification of available metrics and dimensions.
  • mobile_android_country.explore.lkml
    • Contains joins across views, and any aggregate tables suggested by Looker.
  • mobile_android_country.dashboard.lkml
    • Generated dashboard configuration for purposes of version-control.

View

The view is the bulk of data modeling work. Here, there are a few fields that are particularly important to keep in mind. First, there is a derived table alongside parameters, dimensions, and measures.

The derived table section allows us to specify the shape of the data that is visible to Looker. We can either refer to a table or view directly from a supported database (e.g., BigQuery) or write a query against that database. Looker will automatically re-run the derived table as necessary. We can also template the query in the view for a dynamic view into the data.

derived_table: {
  sql: with period as (SELECT ...),
      play_store_retained as (
          SELECT
          Date AS submission_date,
          COALESCE(IF(country = "Other", null, country), "OTHER") as country,
          SUM(Store_Listing_visitors) AS first_time_visitor_count,
          SUM(Installers) AS first_time_installs
          FROM
            `moz-fx-data-marketing-prod.google_play_store.Retained_installers_country_v1`
          CROSS JOIN
            period
          WHERE
            Date between start_date and end_date
            AND Package_name IN ('org.mozilla.{% parameter.app_id %}')
          GROUP BY 1, 2
      ),
      ...
      ;;
}

Above is the derived table section for the Android query. Here, we’re looking at the play_store_retained statement inside the common table expression (CTE). Inside of this SQL block, we have access to everything available to BigQuery in addition to view parameters.

# Allow swapping between various applications in the dataset
parameter: app_id {
  description: "The name of the application in the `org.mozilla` namespace."
  type:  unquoted
  default_value: "fenix"
  allowed_value: {
    value: "firefox"
  }
  allowed_value: {
    value: "firefox_beta"
  }
  allowed_value: {
    value:  "fenix"
  }
  allowed_value: {
    value: "focus"
  }
  allowed_value: {
    value: "klar"
  }
}

View parameters trigger updates to the view when changed. These are referenced using the liquid templating syntax:

AND Package_name IN ('org.mozilla.{% parameter.app_id %}')
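To make the substitution concrete, here is an illustration (not Looker's actual templating engine) of what the liquid tag expands to before the query reaches BigQuery:

```javascript
// Illustration only: expand a liquid-style `{% parameter.name %}` tag.
// Looker's real engine does much more; this mimics the one substitution
// used in the SQL fragment above.
function expandParameter(sql, params) {
  return sql.replace(/{%\s*parameter\.(\w+)\s*%}/g, (_, name) => params[name]);
}

const fragment = "AND Package_name IN ('org.mozilla.{% parameter.app_id %}')";
// expandParameter(fragment, { app_id: "fenix" })
//   -> "AND Package_name IN ('org.mozilla.fenix')"
```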

For Looker to be aware of the shape of the final query result, we must define dimensions and measures corresponding to columns in the result. Here is the final statement in the CTE from above:

SELECT
    submission_date,
    country,
    max(play_store_updated) AS play_store_updated,
    max(latest_date) AS latest_date,
    sum(first_time_visitor_count) AS first_time_visitor_count,
    ...
    sum(activated) AS activated
FROM play_store_retained
FULL JOIN play_store_installs
USING (submission_date, country)
FULL JOIN last_seen
USING (submission_date, country)
CROSS JOIN period
WHERE submission_date BETWEEN start_date AND end_date
GROUP BY 1, 2
ORDER BY 1, 2


Generally, in an aggregate query like this, the grouping columns become dimensions while the aggregated values become measures. A dimension is a column that we can filter on or drill down into to get a different slice of the data model:

dimension: country {
  description: "The country code of the aggregates. The set is limited by those reported in the play store."
  type: string
  sql: ${TABLE}.country ;;
}

Note that we can refer to the derived table using the ${TABLE} variable (not unlike interpolating a variable in a bash script).

A measure is a column that represents a metric. This value is typically dependent on the dimensions.

measure: first_time_visitor_count {
  description: "The number of first time visitors to the play store."
  type: sum
  sql: ${TABLE}.first_time_visitor_count ;;
}
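To make the dimension/measure split concrete, here is a small sketch (with made-up rows) of what the SQL's GROUP BY is doing: the grouping columns (dimensions) index each result row, and the aggregates (measures) are computed per group.

```javascript
// Made-up rows shaped like the CTE's output.
const rows = [
  { submission_date: "2021-08-01", country: "CA", first_time_visitor_count: 10 },
  { submission_date: "2021-08-01", country: "CA", first_time_visitor_count: 5 },
  { submission_date: "2021-08-01", country: "MX", first_time_visitor_count: 7 },
];

// GROUP BY submission_date, country (the dimensions) ...
function groupAndSum(rows) {
  const out = new Map();
  for (const r of rows) {
    const key = `${r.submission_date}|${r.country}`;
    // ... SUM(first_time_visitor_count) (the measure)
    out.set(key, (out.get(key) ?? 0) + r.first_time_visitor_count);
  }
  return out;
}
```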

We must ensure that all dimensions and measures are declared to make them available to explores. Looker provides a few ways to create these fields automatically. For example, if you create a view directly from a table, Looker can autogenerate them from the schema. Likewise, the SQL editor has options to generate a view file directly. Whatever the method, some manual modification will be necessary to build a clean data model.

Explore

One of the more compelling features of Looker is the ability for folks to drill down into data models without the need to write SQL. They provide an interface where the dimensions and measures can be manipulated and plotted in an easy-to-use graphical interface. To do this, we need to declare which view to use. Often, just declaring the explore is sufficient:

include: "../views/*.view.lkml"

explore: mobile_android_country {
}

We include the view from a location relative to the explore file. Then we name an explore that shares the same name as the view. Once committed, the explore becomes available from a drop-down menu in the main UI.

The explore can join multiple views and provide default parameters. In this project, we utilize a country view that we can use to group countries into various buckets. For example, we may have a group for North American countries, another for European countries, and so forth.

explore: mobile_android_country {
  join: country_buckets {
    type: inner
    relationship: many_to_one
    sql_on:  ${country_buckets.code} = ${mobile_android_country.country} ;;
  }
  always_filter: {
    filters: [
      country_buckets.bucket: "Overall"
    ]
  }
}

Finally, the explore is also the place where Looker will materialize certain portions of the view. Materialization is only relevant when copying the materialized segments from the exported dashboard code. An example of what this looks like follows:

aggregate_table: rollup__submission_date__0 {
  query: {
    dimensions: [
      # "app_id" is filtered on in the dashboard.
      # Uncomment to allow all possible filters to work with aggregate awareness.
      # app_id,
      # "country_buckets.bucket" is filtered on in the dashboard.
      # Uncomment to allow all possible filters to work with aggregate awareness.
      # country_buckets.bucket,
      # "history_days" is filtered on in the dashboard.
      # Uncomment to allow all possible filters to work with aggregate awareness.
      # history_days,
      submission_date
    ]
    measures: [activated, event_installs, first_seen, first_time_visitor_count]
    filters: [
      # "country_buckets.bucket" is filtered on by the dashboard. The filter
      # value below may not optimize with other filter values.
      country_buckets.bucket: "tier-1",
      # "mobile_android_country.app_id" is filtered on by the dashboard. The filter
      # value below may not optimize with other filter values.
      mobile_android_country.app_id: "firefox",
      # "mobile_android_country.history_days" is filtered on by the dashboard. The filter
      # value below may not optimize with other filter values.
      mobile_android_country.history_days: "7"
    ]
  }  # Please specify a datagroup_trigger or sql_trigger_value
  # See https://looker.com/docs/r/lookml/types/aggregate_table/materialization
  materialization: {
    sql_trigger_value: SELECT CURRENT_DATE();;
  }
}

Dashboard

Looker provides the tooling to build interactive dashboards that are more than the sum of their parts. Often, the purpose is to present easily digestible information that has been vetted and reviewed by peers. To build a dashboard, you start by adding charts and tables from various explores. Looker provides widgets for filters and for markdown text used to annotate charts. It’s an intuitive process that can be somewhat tedious, depending on how complex the information you’re trying to present is.

Once you’ve built the dashboard, Looker provides a button to get a YAML representation to check into version control. The configuration file contains all the relevant information for constructing the dashboard and could even be written by hand with enough patience.

Strengths and Weaknesses of Looker

Now that I’ve gone through building a dashboard end-to-end, here are a few points summarizing my experience and takeaways from putting together this dashboard.

Parameterized queries allow flexibility across similar tables

I worked with Glean-instrumented data in another project by parameterizing SQL queries using Jinja2 and running queries multiple times. Looker effectively brings this process closer to runtime and allows the ETL and visualization to live on the same platform. I’m impressed by how well it works in practice. The combination of consistent data models in bigquery-etl (e.g. clients_first_seen) and the ability to parameterize based on app-id was surprisingly straightforward. The dashboard can switch between Firefox for Android and Focus for Android without a hitch, even though they are two separate products with two separate datasets in BigQuery.

I can envision many places where we may not want to precompute all results ahead of time, but instead compute just a subset of columns or dates on demand. The costs of precomputing and materializing data are non-negligible, especially for large, expensive queries that are viewed once in a blue moon, or for dimensions that fall in the long tail. Templating and parameters provide a great way to build these into the data model without resorting to manually written SQL.

LookML in version control allows room for software engineering best practices

While Looker appeals to the non-technical crowd, it also affords many conveniences for data practitioners who are familiar with software development practices.

Changes to LookML files are version controlled (e.g., git). Being able to create branches and work on multiple features in parallel has been handy at times. It’s a relief to be able to try something new in my own instance of the Looker files without losing my place. In addition, configuring LookML views, explores, and dashboards in code allows the process of creating new dashboards to incorporate best practices like code review.

In addition, it’s nice to be able to use a real editor for mass revision. I was able to create a new dashboard for iOS data that paralleled the Android dashboard by copying over files, modifying the SQL in the view, and making a few edits to the dashboard code directly.

Workflow management is clunky for deploying new dashboards

While there are many upsides to having LookML explores and dashboards in code, there are several pain points while working with the Looker interface.

In particular, the workflow for editing a dashboard goes something like this. First, you copy the dashboard into a personal folder that you can edit. Next, you make whatever modifications you need using the UI. Afterward, you export the result and copy-paste it into the dashboard code. While not ideal, this prevents the dashboard from going out of sync with the one you’re editing directly (since there won’t be conflicts between the UI and the code in version control). However, it would be nice to edit the dashboard directly, with Looker performing any conflict resolution internally, instead of making a copy.

There have been moments where I’ve had to fight with the git interface built into Looker’s development mode. Reverting a commit on a particular branch or dealing with merge conflicts can be an absolute nightmare. And if you do pull the project into a local environment, you can’t validate your changes locally; you need to push, pull into Looker, and then validate and fix anything there. Finally, the code formatter is bound to a keyboard shortcut that the browser already uses.

Conclusion: Iterating on Feedback

Simply building a dashboard is not enough to demonstrate that it has value. It’s important to gather feedback from peers and stakeholders to determine the best path forward. Some things benefit from a concrete implementation, though: differences between platforms and inconsistencies in the data may only become apparent after putting together an initial draft of a project.

While it hits its goals of making data from the app stores and our user populations visible, the funnel dashboard still has room for improvement. Having the dashboard in Looker makes iterating that much easier, though. The feedback cycle from changing the query to seeing the results is short, and changes are easy to roll back. The tool is promising, and I look forward to seeing how it transforms the data landscape at Mozilla.

Mozilla Add-ons Blog: Thank you, Recommended Extensions Community Board!

Given the broad visibility of Recommended extensions across addons.mozilla.org (AMO), the Firefox Add-ons Manager, and other places we promote extensions, we believe our curatorial process should include a wide range of perspectives from our global community of contributors. That’s why we have the Recommended Extensions Advisory Board—an ongoing project that involves a rotating group of contributors to help identify and evaluate new extension candidates for the program.

Our most recent community board just completed their six-month project and I’d like to take a moment to thank Sylvain Giroux, Jyotsna Gupta, Chandan Baba, Juraj Mäsiar, and Pranjal Vyas for sharing their time, passion, and knowledge of extensions. Their insights helped usher a wave of new extensions into the Recommended program, including really compelling content like I Don’t Care About Cookies (A+ cookie manager), Tab Stash (highly original take on tab management), Custom Scrollbars (neon colored scrollbar? Yes please!), PocketTube (great way to organize a bunch of YouTube subscriptions), and many more. 

On behalf of the entire Add-ons staff, thank you one and all!

Now we’ll turn our attention to forming the next community board for another six-month project dedicated to evaluating new Recommended candidates. If you have a passion for browser extensions and you think you could make an impact contributing your insights to our curatorial process, we’d love to hear from you by Monday, 30 August. Just drop us an email at amo-featured [at] mozilla.org along with a brief note letting us know a bit about your experience with extensions—whether as a developer, user, or both—and why you’d like to participate on the next Recommended Extensions Community Advisory Board.

The post Thank you, Recommended Extensions Community Board! appeared first on Mozilla Add-ons Community Blog.

Open Policy & Advocacy: Advancing advertising transparency in the US Congress

At Mozilla we believe that greater transparency into the online advertising ecosystem can empower individuals, safeguard advertisers’ interests, and address systemic harms. Lawmakers around the world are stepping up to help realize that vision, and in this post we’re weighing in with some preliminary reflections on a newly-proposed ad transparency bill in the United States Congress: the Social Media DATA Act.

The bill – put forward by Congresswoman Lori Trahan of Massachusetts – mandates that very large platforms create and maintain online ‘ad libraries’ that would be accessible to academic researchers. The bill also seeks to advance the policy discourse around transparency of platform systems beyond advertising (e.g. content moderation practices; recommender systems; etc), by directing the Federal Trade Commission to develop best-practice guidelines and policy recommendations on general data access.

We’re pleased to see that the bill has many welcome features that mirror Mozilla’s public policy approach to ad transparency:

  • Clarity: The bill spells out precisely what kind of data should be made available, and includes many overlaps with Mozilla’s best practice guidance for ad archive APIs. This approach provides clarity for companies that need to populate the ad archives, and a clear legal footing for researchers who wish to avail of those archives.
  • Asymmetric rules: The ad transparency provisions would only be applicable to very large platforms with 100 million monthly active users. This narrow scoping ensures the measures only apply to the online services for whom they are most relevant and where the greatest public interest risks lie.
  • A big picture approach: The bill recognizes that questions of transparency in the platform ecosystem go beyond simply advertising, but that more work is required to define what meaningful transparency regimes should look like for things like recommender systems and automated content moderation systems. It provides the basis for that work to ramp up.

Yet while this bill has many positives, it is not without its shortcomings. Specifically:

  • Access: Only researchers with academic affiliations will be able to benefit from the transparency provisions. We believe that academic affiliation should not be the sole determinant of who gets to benefit from ad archive access. Data journalists, unaffiliated public interest researchers, and certain civil society organizations can also be crucial watchdogs.
  • Influencer ads: This bill does not specifically address risks associated with some of the novel forms of paid online influence. For instance, our recent research into influencer political advertising on TikTok has underscored that this emergent phenomenon needs to be given consideration in ad transparency and accountability discussions.
  • Privacy concerns: Under this bill, ad archives would include data related to the targeting and audience of specific advertisements. If targeting parameters for highly micro-targeted ads are disclosed, this data could be used to identify specific recipients and pose a significant data protection risk.

Fortunately, these shortcomings are not insurmountable, and there are already some ideas for how they could be addressed if and when the bill proceeds to mark-up. In that regard, we look forward to working with Congresswoman Trahan and the broader policy community to fine-tune the bill and improve it.

We’ve long-believed that transparency is a crucial prerequisite for accountability in the online ecosystem. This bill signals an encouraging advancement in the policy discourse.


The post Advancing advertising transparency in the US Congress appeared first on Open Policy & Advocacy.

hacks.mozilla.org: How MDN’s autocomplete search works

Last month, Gregor Weber and I added an autocomplete search to MDN Web Docs that lets you quickly jump straight to the document you’re looking for by typing parts of the document title. This is the story of how that’s implemented. If you stick around to the end, I’ll share an “easter egg” feature that, once you’ve learned it, will make you look really cool at dinner parties. Or perhaps you just want to navigate MDN faster than mere mortals.

MDN's autocomplete search in action

In its simplest form, the input field has an onkeypress event listener that filters through a complete list of every single document title (per locale). At the time of writing, there are 11,690 different document titles (and their URLs) for English (US). You can see a preview by opening https://developer.mozilla.org/en-US/search-index.json. Yes, it’s huge, but it’s not too huge to load all into memory. After all, together with the code that does the searching, it’s only loaded when the user has indicated intent to type something. And speaking of size, because it’s compressed with Brotli, the file is only 144KB over the network.

Implementation details

By default, the only JavaScript code that’s loaded is a small shim that watches for onmouseover and onfocus on the search <input> field. There’s also an event listener on the whole document that looks for a certain keystroke. Pressing / at any point acts the same as if you had used your mouse cursor to put focus into the <input> field. As soon as focus is triggered, the first thing it does is download two JavaScript bundles that turn the <input> field into something much more advanced. In its simplest (pseudo) form, here’s how it works:

<input
  type="search"
  name="q"
  onfocus="startAutocomplete()"
  onmouseover="startAutocomplete()"
  placeholder="Site search...">

let started = false;
function startAutocomplete() {
  if (started) {
    return false;
  }
  started = true; // guard against loading the bundle twice
  const script = document.createElement("script");
  script.src = "/static/js/autocomplete.js";
  document.head.appendChild(script);
}

Then it loads /static/js/autocomplete.js which is where the real magic happens. Let’s dig deeper with the pseudo code:

(async function() {
  const response = await fetch('/en-US/search-index.json');
  const documents = await response.json();
  
  const inputValue = document.querySelector(
    'input[type="search"]'
  ).value;
  const flex = FlexSearch.create();
  documents.forEach(({ title }, i) => {
    flex.add(i, title);
  });

  const indexResults = flex.search(inputValue);
  const foundDocuments = indexResults.map((index) => documents[index]);
  displayFoundDocuments(foundDocuments.slice(0, 10));
})();

As you can probably see, this is an oversimplification of how it actually works, but it’s not yet time to dig into the details. The next step is to display the matches. We use (TypeScript) React to do this, but the following pseudo code is easier to follow:

function displayFoundDocuments(documents) {
  const container = document.createElement("ul");
  documents.forEach(({url, title}) => {
    const row = document.createElement("li");
    const link = document.createElement("a");
    link.href = url;
    link.textContent = title;
    row.appendChild(link);
    container.appendChild(row);
  });
  document.querySelector('#search').appendChild(container);
}

Then with some CSS, we display this as an overlay just beneath the <input> field. We highlight each title according to the inputValue, and various keystroke event handlers take care of highlighting the relevant row as you navigate up and down.
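The per-title highlighting can be sketched with a small helper like the one below (the helper name and approach are illustrative, not MDN’s actual implementation): it wraps the first case-insensitive occurrence of the query in a <mark> tag.

```javascript
// Illustrative sketch: wrap the matched part of a title in <mark> tags.
function highlightTitle(title, inputValue) {
  if (!inputValue) {
    return title; // empty query: leave the title untouched
  }
  const index = title.toLowerCase().indexOf(inputValue.toLowerCase());
  if (index === -1) {
    return title; // no match
  }
  const end = index + inputValue.length;
  return `${title.slice(0, index)}<mark>${title.slice(index, end)}</mark>${title.slice(end)}`;
}

console.log(highlightTitle("Array.prototype.forEach()", "foreac"));
// "Array.prototype.<mark>forEac</mark>h()"
```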

Ok, let’s dig deeper into the implementation details

We create the FlexSearch index just once and re-use it for every new keystroke. Because the user might type more while waiting for the network, the search is reactive: it executes once all the JavaScript and the JSON XHR have arrived.
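That “run the search as soon as everything has arrived” behavior can be sketched like this (all names here are hypothetical, not MDN’s actual code): the latest keystroke is remembered, and the search catches up once the index is ready.

```javascript
// Illustrative sketch: defer the search until the index has loaded.
let searchIndex = null; // becomes the search index once JSON + JS arrive
let pendingQuery = "";

function onKeystroke(query) {
  pendingQuery = query;
  if (searchIndex) {
    return runSearch(query); // index ready: search immediately
  }
  return null; // index still loading: the query waits
}

function onIndexLoaded(index) {
  searchIndex = index;
  if (pendingQuery) {
    return runSearch(pendingQuery); // catch up with whatever was typed
  }
  return null;
}

function runSearch(query) {
  return searchIndex.search(query);
}
```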

Before we dig into what this FlexSearch is, let’s talk about how the display actually works. For that we use a React library called downshift which handles all the interactions, displays, and makes sure the displayed search results are accessible. downshift is a mature library that handles a myriad of challenges with building a widget like that, especially the aspects of making it accessible.

So, what is this FlexSearch library? It’s another third-party library that makes sure searching on titles is done with natural language in mind. It describes itself as the “Web’s fastest and most memory-flexible full-text search library with zero dependencies”, and it’s a lot more performant and accurate than attempting to simply look for one string in a long list of other strings.

Deciding which result to show first

In fairness, if the user types foreac, it’s not that hard to reduce a list of 10,000+ document titles down to only those that contain foreac in the title. The harder part is deciding which result to show first. Our implementation relies on pageview stats: we record, for every single MDN URL, how many pageviews it gets, as a way of determining “popularity”. The documents that most people decide to arrive on are most probably what the user was searching for.

Our build process, which generates the search-index.json file, knows each URL’s number of pageviews. We actually don’t care about absolute numbers, but we do care about the relative differences. For example, we know that Array.prototype.forEach() (that’s one of the document titles) is a more popular page than TypedArray.prototype.forEach(), so we leverage that and sort the entries in search-index.json accordingly. Now, with FlexSearch doing the reduction, we use the “natural order” of the array as the trick that tries to give users the document they were probably looking for. It’s actually the same technique we use for Elasticsearch in our full site-search. More about that in: How MDN’s site-search works.
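A toy version of this pre-sorted-by-popularity trick (the data and URLs below are illustrative): because the index is already ordered by pageviews, a plain filter keeps that order, so the first match is the most popular match.

```javascript
// Illustrative sketch: the index is pre-sorted by pageviews, so filtering
// preserves popularity order and the first hit is the most popular one.
const searchIndexSortedByPageviews = [
  { title: "Array.prototype.forEach()", url: "/Array/forEach" }, // more pageviews
  { title: "TypedArray.prototype.forEach()", url: "/TypedArray/forEach" },
];

function search(query) {
  const q = query.toLowerCase();
  return searchIndexSortedByPageviews.filter((doc) =>
    doc.title.toLowerCase().includes(q)
  );
}

console.log(search("foreac")[0].title); // the more popular page comes first
```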

The easter egg: How to search by URL

Actually, it’s not a whimsical easter egg, but a feature that came from the fact that this autocomplete needs to work for our content creators. You see, when you work on MDN content you start a local “preview server”, a complete copy of all documents running locally as a static site under http://localhost:5000. There, you don’t want to rely on a server to do searches. Content authors need to quickly move between documents, which is much of the reason the autocomplete search is done entirely in the client.

As commonly implemented in editors like VS Code and Atom, you can do “fuzzy searches” to find and open files simply by typing portions of the file path. For example, searching for whmlemvo should find the file files/web/html/element/video. You can do that with MDN’s autocomplete search too. The way you do it is by typing / as the first input character.
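A minimal fuzzy matcher in the spirit of that feature (a sketch, not MDN’s actual implementation): the query matches if its characters appear in the URL in order, though not necessarily adjacently.

```javascript
// Illustrative sketch: subsequence-style fuzzy matching on a URL.
function fuzzyMatch(query, url) {
  let i = 0;
  for (const char of url) {
    if (char === query[i]) {
      i += 1;
      if (i === query.length) return true; // consumed the whole query
    }
  }
  return query.length === 0;
}

console.log(fuzzyMatch("whmlemvo", "files/web/html/element/video")); // true
console.log(fuzzyMatch("whmlemvo", "files/web/css/color")); // false
```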

Activate "fuzzy search" on MDN

It makes it really quick to jump straight to a document if you know its URL but don’t want to spell it out exactly.
In fact, there’s another way to navigate and that is to first press / anywhere when browsing MDN, which activates the autocomplete search. Then you type / again, and you’re off to the races!

How to get really deep into the implementation details

The code for all of this is in the Yari repo which is the project that builds and previews all of the MDN content. To find the exact code, click into the client/src/search.tsx source code and you’ll find all the code for lazy-loading, searching, preloading, and displaying autocomplete searches.

The post How MDN’s autocomplete search works appeared first on Mozilla Hacks - the Web developer blog.

Mozilla Add-ons BlogNew tagging feature for add-ons on AMO

There are multiple ways to find great add-ons on addons.mozilla.org (AMO). You can browse the content featured on the homepage, use the top navigation to drill down into add-on types and categories, or search for specific add-ons or functionality. Now, we’re adding another layer of classification and opportunities for discovery by bringing back a feature called tags.

We introduced tagging long ago, but ended up discontinuing it because the way we implemented it wasn’t as useful as we thought. Part of the problem was that it was too open-ended, and anyone could tag any add-on however they wanted. This led to spamming, over-tagging, and general inconsistencies that made it hard for users to get helpful results.

Now we’re bringing tags back, but in a different form. Instead of free-form tags, we’ll provide a set of predefined tags that developers can pick from. We’re starting with a small set of tags based on what we’ve noticed users looking for, so it’s possible many add-ons don’t match any of them. We will expand the list of tags if this feature performs well.

The tags will be displayed on the listing page of the add-on. We also plan to display tagged add-ons in the AMO homepage.

Example of a tag shelf in the AMO homepage


We’re only just starting to roll this feature out, so we might be making some changes to it as we learn more about how it’s used. For now, add-on developers should visit the Developer Hub and set any relevant tags for their add-ons. Any tags that had been set prior to July 22, 2021 were removed when the feature was retooled.

The post New tagging feature for add-ons on AMO appeared first on Mozilla Add-ons Community Blog.

Web Application SecurityMaking Client Certificates Available By Default in Firefox 90

 

Starting with version 90, Firefox will automatically find and offer to use client authentication certificates provided by the operating system on macOS and Windows. This security and usability improvement has been available in Firefox since version 75, but previously end users had to manually enable it.

When a web browser negotiates a secure connection with a website, the web server sends a certificate to the browser to prove its identity. Some websites (most commonly corporate authentication systems) request that the browser sends a certificate back to it as well, so that the website visitor can prove their identity to the website (similar to logging in with a username and password). This is sometimes called “mutual authentication”.

Starting with Firefox version 90, when you connect to a website that requests a client authentication certificate, Firefox will automatically query the operating system for such certificates and give you the option to use one of them. This feature will be particularly beneficial when relying on a client certificate stored on a hardware token, since you do not have to import the certificate into Firefox or load a third-party module to communicate with the token on behalf of Firefox. No manual task or preconfiguration will be necessary when communicating with your corporate authentication system.

If you are a Firefox user, you don’t have to do anything to benefit from this usability and security improvement to load client certificates. As soon as your Firefox auto-updates to version 90, you can simply select your client certificate when prompted by a website. If you aren’t a Firefox user yet, you can download the latest version here to start benefiting from all the ways that Firefox works to protect you when browsing the web.

The post Making Client Certificates Available By Default in Firefox 90 appeared first on Mozilla Security Blog.

Blog of DataThis Week in Glean: Shipping Glean with GeckoView

(“This Week in Glean” is a series of blog posts that the Glean Team at Mozilla is using to try to communicate better about our work. They could be release notes, documentation, hopes, dreams, or whatever: so long as it is inspired by Glean.) All “This Week in Glean” blog posts are listed in the TWiG index (and on the Mozilla Data blog).


Glean SDK

The Glean SDK is Mozilla’s telemetry library, used in most mobile products and now for Firefox Desktop as well. By now it has grown to a sizable code base with a lot of functionality beyond just storing some metric data. Since its first release as a Rust crate in 2019 we have managed to move more and more logic from the language SDKs (previously also known as “language bindings”) into the core Rust crate. This allows us to maintain the business logic only once and easily share it across different implementations and platforms. The Rust core is shipped precompiled for multiple target platforms, with the language SDKs distributed through the respective package managers.

I talked about how this all works in more detail last year and this year, and blogged about it in a previous TWiG.

GeckoView

GeckoView is Mozilla’s alternative implementation for WebViews on Android, based on Gecko, the web engine that also powers Firefox Desktop. It is used as the engine behind Firefox for Android (also called Fenix). The visible parts of what makes up Firefox for Android are written in Kotlin, but they all delegate to the underlying Gecko engine, written in a combination of C++, Rust & JavaScript.

The GeckoView code resides in the mozilla-central repository, next to all the other Gecko code. From there releases are pushed to Mozilla’s own Maven repository.

One Glean too many

Initially Firefox for Android was the only user of the Glean SDK. Up until today it consumes Glean through its release as part of Android Components, a collection of libraries to build browser-like applications.

But the Glean SDK is also available outside of Android Components, as its own package. And additionally it’s available for other languages and platforms too, including a Rust crate. Over the past year we’ve been busy getting Gecko to use Glean through the Rust crate to build its own telemetry on top.

With the Glean SDK used in all these applications we’re in a difficult position: There’s a Glean in Firefox for Android that’s reporting data. Firefox for Android is using Gecko to render the web. And Gecko is starting to use Glean to report data.

That’s one Glean too many if we want coherent data from the full application.

Shipping it all together, take one

Of course we knew about this scenario for a long time. It’s been one of the goals of Project FOG to transparently collect data from Gecko and the embedding application!

We set out to find a solution so that we can connect both sides and have only one Glean be responsible for the data collection & sending.

We started with more detailed planning all the way back in August of last year and agreed on a design in October. Due to changed priorities & availability of people we didn’t get into the implementation phase until earlier this year.

By February I had a first rough prototype in place. When Gecko was shipped as part of GeckoView it would automatically look up the Glean library that is shipped as a dynamic library with the Android application. All function calls to record data from within Gecko would thus ultimately land in the Glean instance that is controlled by Fenix. Glean and the abstraction layer within Gecko would do the heavy work, but users of the Glean API would notice no difference, except their data would now show up in pings sent from Fenix.

This integration was brittle. It required finding the right dynamic library, looking up symbols at runtime as well as reimplementing all metric types to switch to the FFI API in a GeckoView build. We abandoned this approach and started looking for a better one.

Shipping it all together, take two

After the first failed approach the issue was acknowledged by other teams, including the GeckoView and Android teams.

Glean is not the only Rust project shipped for mobile; the application-services team also ships components written in Rust. They bundle all components into a single library, dubbed the megazord. This reduces its size (dependencies & the Rust standard library are only linked once) and simplifies shipping, because there’s only one library to ship. We always talked about pulling Glean into such a megazord as well, but ultimately didn’t do it (except for iOS builds).

With that in mind we decided it’s now the time to design a solution, so that eventually we can bundle multiple Rust components in a single build. We came up with the following plan:

  • The Glean Kotlin SDK will be split into 2 packages: a glean-native package, that only exists to ship the compiled Rust library, and a glean package, that contains the Kotlin code and has a dependency on glean-native.
  • The GeckoView-provided libxul library (that’s “Gecko”) will bundle the Glean Rust library and export the C-compatible FFI symbols, that are used by the Glean Kotlin SDK to call into Glean core.
  • The GeckoView Kotlin package will then use Gradle capabilities to replace the glean-native package with itself (this is actually handled by the Glean Gradle plugin).

Consumers such as Fenix will depend on both GeckoView and Glean. At build time the Glean Gradle plugin will detect this and will ensure the glean-native package, and thus the Glean library, is not part of the build. Instead it assumes libxul from GeckoView will take that role.

This has some advantages. First off, everything is compiled together into one big library. Rust code gets linked together and even Rust consumers within Gecko can directly use the Glean Rust API. Next up, we can ensure that the version of the Glean core library matches the Glean Kotlin package used by the final application. It is important that the code matches, otherwise calling native functions could lead to memory or safety issues.

Glean is running ahead here, paving the way for more components to be shipped the same way. Eventually the experimentation SDK called Nimbus and other application-services components will start using the Rust API of Glean. This will require compiling Glean alongside them, which is exactly the case that mozilla-central handles for GeckoView.

Now the unfortunate truth is: these changes have not landed yet. It’s been implemented for both the Glean SDK and mozilla-central, but also requires changes for the build system of mozilla-central. Initially that looked like simple changes to adopt the new bundling, but it turned into bigger changes across the board. Some of the infrastructure used to build and test Android code from mozilla-central was untouched for years and thus is very outdated and not easy to change. With everything else going on for Firefox it’s been a slow process to update the infrastructure, prepare the remaining changes and finally getting this landed.

But we’re close now!

Big thanks to Agi for connecting the right people, driving the initial design and helping me with the GeckoView changes. He also took on the challenge of changing the build system. And also thanks to chutten for his reviews and input. He’s driving the FOG work forward and thus really really needs us to ship GeckoView support.

SeaMonkeySeaMonkey 2.53.7, en-US/zh_TW..

Just for the sake of jogging memory (as I just forgot about this and was trying to figure out what I did wrong):

SeaMonkey 2.53.7 en-US/zh-TW users need to manually upgrade to 2.53.8.1.

Or, for the sake of ‘fun’,  just install 2.53.8 and have the system upgrade it to 2.53.8.1.  Why would you do that?  *shrug*  For <something> and giggles..   Just so to prove it works.

:ewong

 

SeaMonkeySeaMonkey 2.53.8.1 is out!

Hi All,

Just a quick note that SeaMonkey 2.53.8.1 is out!  The Updates part should work as intended…

Please do read the Release notes at [1].

:ewong

[1] – https://www.seamonkey-project.org/releases/seamonkey2.53.8.1/

 

SUMO BlogIntroducing Joseph Cuevas

Hey folks,

Please join me in welcoming Joseph Cuevas (Joe) to the Customer Experience team and the broader SUMO family. Joe is going to be working as an Operations Manager, specifically to build a premium customer experience for Mozilla’s current and future paid products.

Here’s a brief introduction from Joe:

Hi everyone! My name is Joe and I am the new User Support Operations Manager joining the Customer Experience Team. I’ll be working with my team to build a premium customer support experience for Mozilla VPN. I’m looking forward to working alongside and getting to know my fellow Mozillians. I just know we’re going to have a great time!

Welcome, Joe!

hacks.mozilla.orgSpring Cleaning MDN: Part 1

As we’re all aware by now, we made some big platform changes at the end of 2020. Whilst the big move has happened, it’s given us a great opportunity to clear out the cupboards and closets.

An illustration of a salmon coloured dinosaur sweeping with a broom

                                  Illustration by Daryl Alexsy

 

Most notably, MDN now manages its content from a repository on GitHub. Prior to this, the content was stored in a database and edited by logging in to the site and modifying content via an in-page (WYSIWYG) editor, aka ‘The Wiki’. Since the big move, MDN accounts no longer serve a purpose for our users. If you want to edit or contribute content, you sign in to GitHub, not MDN.

Because of this, we’ll be disabling account functionality and removing all of the account data from our database. This is consistent with our Lean Data Practices principles and our commitment to user privacy. It’s also the perfect opportunity to do this now, as we’re moving our database from MySQL to PostgreSQL this week.

Accounts will be disabled on MDN on Thursday, 22nd July.

Don’t worry though – you can still contribute to MDN! That hasn’t changed. All the information on how to help is here in this guide.

The post Spring Cleaning MDN: Part 1 appeared first on Mozilla Hacks - the Web developer blog.

Web Application SecurityStopping FTP support in Firefox 90

 

The File Transfer Protocol (FTP) has long been a convenient file exchange mechanism between computers on a network. While this standard protocol has been supported in all major browsers almost since its inception, it’s by now one of the oldest protocols still in use and suffers from a number of serious security issues.

The biggest security risk is that FTP transfers data in cleartext, allowing attackers to steal, spoof and even modify the data transmitted. To date, many malware distribution campaigns launch their attacks by compromising FTP servers and delivering malware to end users’ devices over FTP.

 

Discontinuing FTP support in Firefox 90

Aligning with our intent to deprecate non-secure HTTP and increase the percentage of secure connections, we, as well as other major web browsers, decided to discontinue support for the FTP protocol.

Removing FTP brings us closer to a fully-secure web, one that is on a path to becoming HTTPS only. Modern automated upgrading mechanisms such as HSTS, or Firefox’s HTTPS-Only Mode, which automatically upgrade any connection to become secure and encrypted, do not apply to FTP.

The FTP protocol itself has been disabled by default since version 88, and now the time has come to end an era and discontinue support for this outdated and insecure protocol: Firefox 90 no longer supports FTP.

If you are a Firefox user, you don’t have to do anything to benefit from this security advancement. As soon as your Firefox auto-updates to version 90, any attempt to launch an attack relying on the insecure FTP protocol will be rendered useless, because Firefox does not support FTP anymore. If you aren’t a Firefox user yet, you can download the latest version here to start benefiting from all the ways that Firefox works to protect you when browsing the web.

 

The post Stopping FTP support in Firefox 90 appeared first on Mozilla Security Blog.

hacks.mozilla.orgGetting lively with Firefox 90

Getting lively with Firefox 90

As the summer rolls around for those of us in the northern hemisphere, temperatures are soaring and unwinding with a cool iced tea is high on the agenda. Isn’t it lucky, then, that Background Update is here for Windows, which means Firefox can update even if it’s not running. We can just sit back and relax!

Also this release we see a few nice JavaScript additions, including private fields and methods for classes, and the at() method for Array, String and TypedArray global objects.

This blog post just provides a set of highlights; for all the details, check out the following:

Classes go private

A feature JavaScript has lacked since its inception, private fields and methods are now enabled by default in Firefox 90. These allow you to declare private properties within a class. You cannot reference these private properties from outside of the class; they can only be read or written within the class body.

Private names must be prefixed with a ‘hash mark’ (#) to distinguish them from any public properties a class might hold.

This shows how to declare private fields as opposed to public ones within a class:

class ClassWithPrivateProperties {

  #privateField;
  publicField;

  constructor() {
    // can be referenced within the class, but not accessed outside
    this.#privateField = 42;

    // can be referenced within the class as well as outside
    this.publicField = 52;
  }

  // again, can only be used within the class
  #privateMethod() {
    return 'hello world';
  }

  // can be called when using the class
  getPrivateMessage() {
    return this.#privateMethod();
  }
}
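To make the encapsulation concrete, here is a small, self-contained check of the behavior described above (a sketch, not from the original post; the Counter class is hypothetical):

```javascript
// Illustrative sketch: private members are only reachable inside the class.
class Counter {
  #count = 0; // private field: invisible outside the class body

  #increment() { // private method
    this.#count += 1;
  }

  tick() { // public wrapper exposing the private pieces
    this.#increment();
    return this.#count;
  }
}

const counter = new Counter();
console.log(counter.tick()); // 1
console.log(counter.tick()); // 2
console.log("#count" in counter); // false: the private field is not a normal property
```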

Static fields and methods can also be private. For a more detailed overview and explanation, check out the great guide: Working with private class features. You can also read what it takes to implement such a feature in our previous blog post Implementing Private Fields for JavaScript.

JavaScript at() method

The relative indexing method at() has been added to the Array, String and TypedArray global objects.

Passing a positive integer to the method returns the item or character at that position. The highlight of this method, however, is that it also accepts negative integers, which count back from the end of the array or string. For example, at(1) returns the second item or character and at(-1) returns the last item or character.

This example declares an array of values and uses the at() method to select an item in that array from the end.

const myArray = [5, 12, 8, 130, 44];

let arrItem = myArray.at(-2);

// arrItem = 130

It’s worth mentioning that there are other common ways of doing this, but this one looks quite neat.
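For comparison, here are the classic alternatives to at(-2) on the same array (a quick sketch, not from the original post):

```javascript
// Illustrative comparison of ways to read the second-to-last item.
const values = [5, 12, 8, 130, 44];

console.log(values[values.length - 2]); // 130: works, but repeats the array name
console.log(values.slice(-2)[0]); // 130: allocates a throwaway array
console.log(values.at(-2)); // 130: reads the intent directly
```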

Conic gradients for Canvas

The 2D Canvas API has a new createConicGradient() method, which creates a gradient around a point (rather than from it, like createRadialGradient() does). This feature allows you to specify where you want the center to be and in which direction the gradient should start. You then add the colours you want and where they should begin (and end).

This example creates a conic gradient with 5 colour stops, which we use to fill a rectangle.

var canvas = document.getElementById('canvas');

var ctx = canvas.getContext('2d');

// Create a conic gradient
// The start angle is 0
// The centre position is 100, 100
var gradient = ctx.createConicGradient(0, 100, 100);

// Add five color stops
gradient.addColorStop(0, "red");
gradient.addColorStop(0.25, "orange");
gradient.addColorStop(0.5, "yellow");
gradient.addColorStop(0.75, "green");
gradient.addColorStop(1, "blue");

// Set the fill style and draw a rectangle
ctx.fillStyle = gradient;
ctx.fillRect(20, 20, 200, 200);

The result looks like this:

Rainbow radial gradient

New Request Headers

Fetch metadata request headers provide information about the context from which a request originated. This allows the server to decide whether a request should be allowed based on where the request came from and how the resource will be used. Firefox 90 enables these headers by default.
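To illustrate how a server might act on Fetch metadata, here is a sketch of a simple resource-isolation policy. The Sec-Fetch-Site, Sec-Fetch-Mode and Sec-Fetch-Dest header names come from the Fetch Metadata specification; the policy itself is illustrative, not from the original post.

```javascript
// Illustrative sketch of a server-side resource-isolation check
// using Fetch metadata request headers.
function isRequestAllowed(headers) {
  const site = headers["sec-fetch-site"];
  // Older browsers don't send the header: fail open for compatibility.
  if (site === undefined) return true;
  // Same-origin, same-site and direct navigations are fine.
  if (site === "same-origin" || site === "same-site" || site === "none") {
    return true;
  }
  // Cross-site requests: only allow plain top-level navigations.
  return (
    headers["sec-fetch-mode"] === "navigate" &&
    headers["sec-fetch-dest"] !== "object" &&
    headers["sec-fetch-dest"] !== "embed"
  );
}

console.log(isRequestAllowed({ "sec-fetch-site": "same-origin" })); // true
console.log(isRequestAllowed({ "sec-fetch-site": "cross-site", "sec-fetch-mode": "cors" })); // false
```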

The post Getting lively with Firefox 90 appeared first on Mozilla Hacks - the Web developer blog.

Web Application SecurityFirefox 90 introduces SmartBlock 2.0 for Private Browsing

Today, with the launch of Firefox 90, we are excited to announce a new version of SmartBlock, our advanced tracker blocking mechanism built into Firefox Private Browsing and Strict Mode. SmartBlock 2.0 combines a great web browsing experience with robust privacy protection, by ensuring that you can still use third-party Facebook login buttons to sign in to websites, while providing strong defenses against cross-site tracking.

At Mozilla, we believe that privacy is a fundamental right. As part of the effort to provide a strong privacy option, Firefox includes the built-in Tracking Protection feature that operates in Private Browsing windows and Strict Mode to automatically block scripts, images, and other content from being loaded from known cross-site trackers. Unfortunately, blocking such cross-site tracking content can break website functionality.

Ensuring smooth logins with Facebook

Logging into websites is, of course, a critical piece of functionality. For example: many people value the convenience of being able to use Facebook to sign up for, and log into, a website. However, Firefox Private Browsing blocks Facebook scripts by default: that’s because our partner Disconnect includes Facebook domains on their list of known trackers. Historically, when Facebook scripts were blocked, those logins would no longer work.

For instance, if you visit etsy.com in a Private Browsing window, the front page gives the following options to sign in, including a button to sign in using Facebook’s login service. If you click on the Enhanced Tracking Protection shield in the address bar and click on Tracking Content, however, you will see that Firefox has automatically blocked third-party tracking content from Facebook to prevent any possible tracking of you by Facebook on that page:

Etsy Sign In form using "Continue with Facebook"

Prior to Firefox 90, if you were using a Private Browsing window and clicked on the “Continue with Facebook” button to sign in, the sign-in would fail to proceed because the required third-party Facebook script had been blocked by Firefox.

Now, SmartBlock 2.0 in Firefox 90 eliminates this login problem. Initially, Facebook scripts are all blocked, just as before, ensuring your privacy is preserved. But when you click on the “Continue with Facebook” button to sign in, SmartBlock reacts by quickly unblocking the Facebook login script just in time for the sign-in to proceed smoothly. When this script gets loaded, you can see that unblocking indicated in the list of blocked tracking content:

SmartBlock 2.0 provides this new capability on numerous websites. On all websites where you haven’t signed in, Firefox continues to block scripts from Facebook that would be able to track you. That’s right — you don’t have to choose between being protected from tracking or using Facebook to sign in. Thanks to Firefox SmartBlock, you can have your cake and eat it too!

And we’re baking more cakes! We are continuously working to expand SmartBlock’s capabilities in Firefox Private Browsing and Strict Mode to give you an even better experience on the web while continuing to provide strong protection against trackers.

Thank you

Our privacy protections are a labor of love. We want to acknowledge the work and support of many people at Mozilla that helped to make SmartBlock possible, including Paul Zühlcke, Johann Hofmann, Steven Englehardt, Tanvi Vyas, Wennie Leung, Mikal Lewis, Tim Huang, Dimi Lee, Ethan Tseng, Prangya Basu, and Selena Deckelmann.

The post Firefox 90 introduces SmartBlock 2.0 for Private Browsing appeared first on Mozilla Security Blog.

SUMO BlogWhat’s up with SUMO – July 2021

Hey SUMO folks,

Welcome to a new quarter. Lots of projects and planning are underway. But first, let’s take a step back and see what we’ve been doing for the past month.

Welcome on board!

  1. Hello to strafy, Naheed, Taimur Ahmad, and Felipe. Thanks for contributing to the forum and welcome to SUMO!

Community news

  • The advanced search syntax is available on our platform now (read more about it here).
  • Our wiki has a new face now. Please take a look and let us know if you have any feedback.
  • Another reminder to check out Firefox Daily Digest to get daily updates about Firefox. Go check it out and subscribe if you haven’t already.
  • Check out the following release notes from Kitsune this month:

Community call

  • Watch the monthly community call if you haven’t. Learn more about what’s new in June!
  • Reminder: Don’t hesitate to join the call in person if you can. We try our best to provide a safe space for everyone to contribute. You’re more than welcome to lurk in the call if you don’t feel comfortable turning on your video or speaking up. If you feel shy to ask questions during the meeting, feel free to add your questions on the contributor forum in advance, or put them in our Matrix channel, so we can address them during the meeting.

Community stats

KB

KB Page views

Month Page views Vs previous month
June 2021 9,125,327 +20.04%

Top 5 KB contributors in the last 90 days: 

  1. AliceWyman
  2. Michele Rodaro
  3. Pierre
  4. Romado33
  5. wsmwk

KB Localization

Top 10 locales (besides en) based on total page views

Locale Apr 2021 page views Localization progress (per Jul, 8)
de 10.21% 100%
fr 7.51% 89%
es 6.58% 46%
pt-BR 5.43% 65%
ru 4.62% 99%
zh-CN 4.23% 99%
ja 3.98% 54%
pl 2.49% 84%
it 2.42% 100%
id 1.61% 2%

Top 5 localization contributors in the last 90 days: 

  1. Milupo
  2. JimSp472
  3. Soucet
  4. Michele Rodaro
  5. Artist

Forum Support

Forum stats

Month Total questions Answer rate within 72 hrs Solved rate within 72 hrs Forum helpfulness
Jun 2021 4676 63.58% 15.93% 78.33%

Top 5 forum contributors in the last 90 days: 

  1. Cor-el
  2. Jscher2000
  3. FredMcD
  4. Seburo
  5. Sfhowes

Social Support

Channel Jun 2021
Total conv Conv handled
@firefox 7082 160
@FirefoxSupport 1274 448

Top 5 contributors in Q1 2021

  1. Christophe Villeneuve
  2. Pravin
  3. Emin Mastizada
  4. Md Monirul Alom
  5. Andrew Truong

Play Store Support

We don’t have enough data for the Play Store Support yet. However, you can check out the overall Respond Tool metrics here.

Product updates

Firefox desktop

  • FX Desktop V90 (07/13)
    • Shimming Exceptions UI (SmartBlock)
    • DNS over HTTPS – remote settings config
    • Background Update Agent (BAU)
    • About:third-party

Firefox mobile

  • FX Android V90 (07/13)
    • Credit Card Auto-Complete
  • FX IOS V35 (07/13)
    • Folders for your Bookmarks
    • Opt-in or out of Experiments

Other products / Experiments

  • Mozilla VPN V2.4 (07/13)
    • Split Tunneling (Windows and Linux)
    • Support for Local DNS
    • Addition of in-app feedback submission
    • Variable Pricing addition (EU and US)
    • Expansion Phase 2 to EU (Spain, Italy, Belgium, Austria, Switzerland)

Shout-outs!

  • Kudos for everyone who’s been helping with the Firefox 89 release.
  • Franz for helping with the forum and for the search handover insight.

If you know anyone that we should feature here, please contact Kiki and we’ll make sure to add them in our next edition.

Useful links:

Web Application SecurityFirefox 90 supports Fetch Metadata Request Headers

 

We are pleased to announce that Firefox 90 will support Fetch Metadata Request Headers, which allow web applications to protect themselves and their users against various cross-origin threats, like (a) cross-site request forgery (CSRF), (b) cross-site leaks (XS-Leaks), and (c) speculative cross-site execution side channel (Spectre) attacks.

 

Cross-site attacks on Web Applications

The fundamental security problem underlying cross-site attacks is that the web, by its open nature, does not allow web application servers to easily distinguish whether a request originates from their own application or from a malicious (cross-site) application, potentially opened in a different browser tab.

 

Firefox 90 sends Fetch Metadata (Sec-Fetch-*) request headers, which allow web application servers to protect themselves against all sorts of cross-site attacks.

 

For example, as illustrated in the Figure above, let’s assume you log into your banking site hosted at https://banking.com and you conduct some online banking activities. Simultaneously, an attacker-controlled website, opened in a different browser tab and illustrated as https://attacker.com, performs some malicious actions.

Innocently, you continue to interact with your banking site, which ultimately causes the banking web server to receive some actions. Unfortunately, the banking web server has little to no way of telling who initiated an action: you, or the attacker in the malicious website in the other tab. Hence the banking server, or web application servers generally, will most likely simply execute any action received and allow the attack to succeed.

 

Introducing Fetch Metadata

As illustrated in the attack scenario above, the HTTP request header Sec-Fetch-Site allows the web application server to distinguish between a same-origin request from the corresponding web application and a cross-origin request from an attacker-controlled website.

Inspecting Sec-Fetch-* headers ultimately allows the web application server to reject or ignore malicious requests because of the additional context they provide. In total there are four different Sec-Fetch-* headers: Dest, Mode, Site, and User, which together allow web applications to protect themselves and their end users against the previously mentioned cross-site attacks.
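As a minimal sketch of what such server-side inspection might look like (this is not Mozilla code; the exact policy of which sites, modes, and destinations to allow is an assumption that a real application would tune to its needs):

```python
def is_request_allowed(headers, method="GET"):
    """Sketch of a resource-isolation check built on Sec-Fetch-* headers.

    `headers` is a dict of incoming HTTP request headers.
    """
    site = headers.get("Sec-Fetch-Site")
    # Older browsers don't send the header at all; don't lock them out.
    if site is None:
        return True
    # Requests from our own origin/site, or user-initiated ones
    # (typed URLs, bookmarks), are fine.
    if site in ("same-origin", "same-site", "none"):
        return True
    # Allow simple cross-site top-level navigations (GET) to our pages,
    # but reject cross-site POSTs and subresource loads (images, scripts),
    # which is where CSRF and XS-Leaks typically hide.
    if (headers.get("Sec-Fetch-Mode") == "navigate"
            and method == "GET"
            and headers.get("Sec-Fetch-Dest") not in ("object", "embed")):
        return True
    return False
```

A server would run this check early in request handling and respond with an error (e.g. 403) when it returns False.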

 

Going Forward

While Firefox will soon ship with its new Site Isolation security architecture, which will combat a few of the above issues, we recommend that web applications make use of the newly supported Fetch Metadata headers, which provide a defense-in-depth mechanism for applications of all sorts.

As a Firefox user, you can benefit from the additionally provided headers as soon as your Firefox auto-updates to version 90. If you aren’t a Firefox user yet, you can download the latest version here to start benefiting from all the ways that Firefox works to protect you when browsing the internet.

The post Firefox 90 supports Fetch Metadata Request Headers appeared first on Mozilla Security Blog.

Open Policy & AdvocacyMozilla publishes policy recommendations for EU Digital Markets Act

As the Digital Markets Act (DMA) progresses through the legislative mark-up phase, we’re today publishing our policy recommendations on how lawmakers in the European Parliament and EU Council should amend it.

We welcomed the publication of the DMA in December 2020, and we believe that a vibrant and open internet depends on fair conditions, open standards, and opportunities for a diversity of market participants. With targeted improvements and effective enforcement, we believe the DMA could help restore the internet as the universal platform where any company can advertise itself and offer its services, any developer can write code and collaborate with others to create new technologies on a fair playing field, and any consumer can navigate information, use critical online services, connect with others, find entertainment, and improve their livelihood.

Our key recommendations can be summarised as follows:

  • Consumer Control: The DMA should ban dark patterns and other forms of manipulative design techniques. Data portability should also be included in the proposal to reduce switching costs for consumers.
  • Interoperability: We propose to expand the interoperability mandate to allow regulators to restrain gatekeepers from behaviour that explicitly goes against the spirit of interoperability. It should also be extended to cover not only ancillary services but the relationship between core services.
  • Innovation not discrimination: We propose to broaden the prohibition on self-preferencing in ranking systems to a general prohibition so as to address any problematic affiliated preferencing by gatekeepers of their own products in operating systems.
  • Meaningful Privacy: We underline our support for the provision which prohibits data sharing between gatekeeper verticals, and encourage the effective enforcement of the GDPR.
  • Effective Oversight & Enforcement: We recommend the oversight framework involve National Regulatory Authorities to reduce bottlenecks in investigations and enforcement.

We spell out these recommendations in detail in our position paper, and provide practical guidance for lawmakers on how to amend the DMA draft law to incorporate them. As the DMA discussions continue in earnest, we look forward to working with EU lawmakers and the broader community of policy stakeholders to help ensure a final legislative text that promotes a healthy internet that puts competition and consumer choice first.

The post Mozilla publishes policy recommendations for EU Digital Markets Act appeared first on Open Policy & Advocacy.

SeaMonkeyUpdates..

Hi All,

I hope everyone is doing well.

tl;dr:

After struggling for so long, I’m happy to announce that I’ve pushed the updates server live.  While I (and others) have tested against our installed versions, the user base is so broad that it’s difficult to find the edge cases.  Hopefully this will help us improve the updates server.

Long form:

So updates are now being served with the following caveats:

  •  versions <= 2.49.5: unfortunately not supported.  The main rationale for this is that the changes from <= 2.49.5 to 2.53.* could corrupt client data. [*][**]
  • 2.53.1 -> 2.53.4: These versions still use the old aus2-community.mozilla.org domain.  A simple user.js addition can fix this.  That said, this domain will be decommissioned (not under our control, but I’m guessing this is an eventuality).  So when it does get decommissioned, all versions <= 2.53.4 will get a “server not found” or similar notice on the Update dialog.   While the user.js change can fix it until the client is updated, it’s the only workaround.
  • 2.53.x: are not affected and can be updated but with the following sub-caveat:
    • 2.53.7, en-US/zh-TW: Due to a build issue, the en-US client would request zh-TW updates, which it would have gotten, confusing the user.  Workaround: manually update to 2.53.7.1 to get the update to 2.53.8…  or just install 2.53.8.  Again, this applies *only* to 2.53.7 and to locales en-US and zh-TW. All other locales are unaffected.

[*] – Manual update is still possible (provided that your operating system is supported).  Please *always* back up your profile.  While it is possible to use the same user.js to get clients to use the new update server, versions <= 2.33 won’t actually recognize the new server due to certificate trust issues.  Unfortunately, this can’t be helped and it is impractical to fix (especially given the small group we have at SeaMonkey central).

[**] – Technically speaking, it is possible to get updates served for at least versions >= 2.38. (Anything less is not possible due to certificate trust issues, and updating these clients’ certificate info has come to the stage of being ‘impractical’ given our resources.)  That said, while it is technically possible, it might well be infeasible/impractical (again, given the resources we have).  To be honest, I’m not happy about this myself; but realistically speaking, we all have only so much time to spend on this project.  I feel this might be considered a cop-out/excuse.

When I first started the updates server project, I had the original intention of updating every (yes, I mean *every*, incl. 1.x) single SeaMonkey version to the latest (depending on OS support); but as I mentioned before during my update investigations, it isn’t possible.

I’ll summarize here:

  • 1.x and 2.0.x are no longer supported (I doubt anyone can support these versions… not even sure the code can compile on modern-day computers w/ modern-day compilers… but, having not compiled anything lately, I could be wrong).
  • Versions <= 2.33 have issues with certificate trust, so they can’t even connect to the new update server without giving you an “Untrusted connection” issue, if and when you use the user.js changes to point the clients to the new update server.
  • 2.49.1 – 2.49.5: These can be updated to 2.53.8 but require the following:
    • a special update package, due to the change in update format, which upgrades 2.49.5 to 2.53.1, which can then upgrade to 2.53.8.
    • user.js changes/additions.
    • a supported OS: 2.49.5 is the last version that supports Windows XP (and earlier) and OS X 10.8 (and earlier).
  • 2.53.1 – 2.53.4: Need a change/addition to user.js to update to 2.53.8 due to server change.
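For the curious, the user.js change mentioned above is a one-line pref override. As a sketch only: `app.update.url` is the standard Mozilla update pref, but the URL below is a placeholder, not the actual SeaMonkey endpoint (use whatever the SeaMonkey project publishes):

```
// Hypothetical example only -- substitute the real update server URL.
user_pref("app.update.url", "https://updates.example.org/update/%PRODUCT%/%VERSION%/update.xml");
```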

On the page it looks simple, and one may wonder why the hell we can’t update anything we want to the latest and greatest (again, OS dependent).  I can tell you it isn’t for lack of trying.  I know that’s not saying a lot, really.  Since I’ve been struggling with this for so long, I sometimes wonder if someone else might do a better job… and I think someone can.  The project just needs to find that someone.  In the meantime, I’m continually plugging away at this.

My next project is the crash reporter server.  (Context-switching is difficult.) Hopefully, once that’s out of the way, I can go back and help out with the code.

Anyway, I’ve written a lot this time, and I do appreciate everyone’s patience (especially,  my fellow devs).

It’s been very tough on everyone lately due to the pandemic so I hope everyone is keeping safe and healthy.

Best Regards,

:ewong

Mozilla L10NBetter Understanding Pontoon Notifications to Improve Them

As l10n-drivers, we strongly believe that notifications are an important tool to help localizers organize, improve, and prioritize their work in Pontoon. In order to make them more effective, and focus our development work, we first needed to better understand how localizers use them (or don’t).

In the second quarter of 2021, we ran a couple of experiments and a survey to get a clearer picture of the current status, and this blog post describes in detail the results of this work.

Experiments

First of all, we needed a baseline to understand if the experiments were making significant changes. Unfortunately, this data is quite hard to measure, since there are a lot of factors at play:

  • Localizers are more active close to deadlines or large releases, and those happen randomly.
  • The number of notifications sent heavily depends on new content showing up in the active projects (31), and that has unpredictable spikes over time.

With that in mind, we decided to repeat the same process every month:

  • Look at the notifications sent in the first 2 weeks of the month (“observation period”, starting with a Monday, and ending with a Monday two weeks later).
  • After 2 additional weeks, measure data about notifications (sent, read), recipients, how many of the recipients read at least 1 notification, and how many users were logged in (over the whole 4 weeks).
  BASELINE EXPERIMENT 1 EXPERIMENT 2
Observation period April 5-19 May 3-17 May 31 – June 14
Data collected on May 3 May 31 June 28
Sent 27043 12593 15383
Read 3172 1571 2198
Recipients 3072 2858 3370
Read 1+ 140 (4.56%) 125 (4.37%) 202 (5.99%)
Users logged in 517 459 446

Experiment 1

For the 1st experiment, we decided to promote the Pontoon Add-on. This add-on, among other things, allows users to read Pontoon notifications directly in the browser (even if Pontoon is not currently open), and receive a system notification when there are new messages to read.

Pontoon Add-on Promotion

Pontoon would detect if the add-on is already installed. If not, it would display an infobar suggesting to install the add-on. Users could also choose to dismiss the notification: while we didn’t track how many saw the banner, we know that 393 dismissed it over the entire quarter.

Unfortunately, this experiment didn’t seem to have an immediate positive impact on the number of users reading notifications (it actually decreased slightly). On the other hand, the number of active users of the add-on has been slowly but steadily increasing, so we hope that will have an impact in the long term.

Pontoon Add-on Statistics over last 90 days

Thanks to Michal Stanke for creating the add-on in the first place, and helping us implement the necessary changes to make the infobar work in Pontoon. In the process, we also made this an “official” add-on on AMO, undergoing a review for each release.

Experiment 2

For the 2nd experiment, we made a slight change to the notifications icon within Pontoon, since we had always suspected that the existing one was not very intuitive. The original bell icon would change color from gray to red when new notifications were available; the new one displays the number of unread notifications as a badge over the icon, a popular UX pattern.

Pontoon Notification

This seemed to have a positive impact on the number of users reading notifications, as the ratio of recipients reading notifications has increased by over 30%. Note that it’s hard to isolate the results of this experiment from the other work raising awareness around notifications (first experiment, blog posts, outreach, or even the survey).

Survey

Between May 26 and June 20, we ran a survey targeting users who were active in Pontoon within the last 2 years. In this context, “active” means that they submitted at least one translation over that period.

We received 169 complete responses, and these are the most significant points (you can find the complete results here).

On a positive note, the spread of the participants’ experience was surprisingly even: 34.3% have been on Pontoon for less than a year, 33.1% between 1 and 4 years, 32.5% for more than 4 years.

7% of participants claim that they don’t know what their role is in Pontoon. That’s significant, even more so if we account for participants who might have picked “translator” while they’re actually contributors (I translate, therefore I’m a translator). Clearly, we need to do some work to onboard new users and help them understand how roles work in Pontoon, or what’s the lifecycle of a suggestion.

53% of people don’t check Pontoon notifications. More importantly, almost 63% of these users — about 33% of all participants — didn’t know Pontoon had them in the first place! 19% feel like they don’t need notifications, which is not totally surprising: volunteers contribute when they can, not necessarily when there’s work to do. Here lies a significant problem though: notifications are used for more than just telling localizers “this project has new content to localize”. For example, we use notifications for commenting on specific errors in translations, to provide more background on a specific string or a project.

As for areas where to focus development, while most features were considered between 3 and 5 on a 1-5 importance scale, the highest rated items were:

  • Notifications for new strings should link to the group of strings added.
  • For translators and locale managers, get notifications when there are pending suggestions to review.
  • Add the ability to opt-out of specific notifications.

What’s next?

First of all, thanks to all the localizers who took the time to answer the survey, as this data really helps us. We’ll need to run it again in the future, after we do more changes, in particular to understand how the data evolves around notifications discoverability and awareness.

As an immediate change, given the results of experiment 2, we plan to keep the updated notification icon as the new default.

SeaMonkeySeaMonkey 2.53.8 is out!

Hi All,

This is a quick announcement that SeaMonkey 2.53.8 has been released!

Please check it out at [1] or [2].

:ewong

PS: Updates are being tested..

[1] – https://www.seamonkey-project.org/releases/seamonkey2.53.8/

[2] – https://www.seamonkey-project.org/releases/2.53.8#official

 

Mozilla Add-ons BlogReview Articles on AMO and New Blog Name

I’m very happy to announce a new feature that we’ve released on AMO (addons.mozilla.org). It’s a series of posts that review some of the best add-ons we have available on AMO. So far we have published three articles:

Our goal with this new channel is to provide user-friendly guides into the add-ons world, focused on topics that are at the top of Firefox users’ minds. And, because we’re publishing directly on AMO, you can install the add-ons directly from the article pages.

Screenshot of article

A taste of the new look and feel

All add-ons that are featured in these articles have been reviewed and should be safe to use. If you have any feedback on these articles or the add-ons we’ve included in them, please let us know in the Discourse forum. I’ll be creating new threads for each article we publish.

New blog name

These posts are being published in a new section on AMO called “Firefox Add-on Reviews”. So, while we’re not calling it a “blog”, it could still cause some confusion with this blog.

In order to reduce confusion, we’ve decided to rename this blog from “Add-ons Blog” to “Add-ons Community Blog”, which we think better represents its charter and content. Nothing else will change: the URL will remain the same and this will continue to be the destination for add-on developer and add-on community news.

I hope you like the new content we’re making available for you. Please share it around and let us know what you think!

The post Review Articles on AMO and New Blog Name appeared first on Mozilla Add-ons Community Blog.

Open Policy & AdvocacyMozilla joins call for fifth FCC Commissioner appointment

In a letter sent to the White House on Friday, June 11, 2021, Mozilla joined over 50 advocacy groups and unions asking President Biden and Vice President Harris to appoint the fifth FCC Commissioner. Without a full team of appointed Commissioners, the Federal Communications Commission (FCC) is limited in its ability to move forward on crucial tech agenda items such as net neutrality and on addressing the country’s digital divide.

“Net neutrality preserves the environment that creates room for new businesses and new ideas to emerge and flourish, and where internet users can choose freely the companies, products, and services that they want to interact with and use. In a marketplace where consumers frequently do not have access to more than one internet service provider (ISP), these rules ensure that data is treated equally across the network by gatekeepers. We are committed to restoring the protections people deserve and will continue to fight for net neutrality,” said Amy Keating, Mozilla’s Chief Legal Officer.

In March 2021, we sent a joint letter to the FCC asking for the Commission to reinstate net neutrality as soon as it is in working order. Mozilla has been one of the leading voices in the fight for net neutrality for almost a decade, together with other advocacy groups. Mozilla has defended user access to the internet, in the US and around the world. Our work to preserve net neutrality has been a critical part of that effort, including our lawsuit against the FCC to keep these protections in place for users in the US.

The post Mozilla joins call for fifth FCC Commissioner appointment appeared first on Open Policy & Advocacy.

SUMO BlogWhat’s up with SUMO – June 2021

Hey SUMO folks,

Welcome to the month of June 2021. A new mark for Firefox with the release of Firefox 89. Lots of excitement and anticipation for the changes.

Let’s see what we’re up to these days!

Welcome on board!

  1. Welcome and thanks to TerryN21 and Mamoon for being active in the forum.

Community news

  • June is the month of Major Release 1 (MR1) or commonly known as Proton release. We have prepared a spreadsheet to list down the changes for this release, so you can easily find the workarounds, related bugs, and common responses for each issue. You can join Firefox 89 discussion in this thread and find out about our tagging plan here.
  • If an advanced topic like pref modification in the about:config is something that you’re interested in, please join our discussion in this community thread. We talked about how we can accommodate this in a more responsible and safer way without harming our normal users.
  • What do you think of supporting Firefox users on Facebook? Join our discussion here.
  • We said goodbye to Joni last month and Madalina has also bid farewell to us in our last community call (though she’ll stay until the end of the quarter). It’s sad to let people go, but we know that changes are normal and expected. We’re grateful for what both Joni and Madalina have done in SUMO and hope the best for whatever comes next for them.
  • Another reminder to check out Firefox Daily Digest to get daily updates about Firefox. Go check it out and subscribe if you haven’t already.
  • There’s only one update from our dev team in the past month:

Community call

  • Find out what we talked about in our community call in May.
  • Reminder: Don’t hesitate to join the call in person if you can. We try our best to provide a safe space for everyone to contribute. You’re more than welcome to lurk in the call if you don’t feel comfortable turning on your video or speaking up. If you feel shy to ask questions during the meeting, feel free to add your questions on the contributor forum in advance, or put them in our Matrix channel, so we can address them during the meeting.

Community stats

KB

KB Page views

Month Page views Vs previous month
May 2021 7,601,709 -13.02%

Top 5 KB contributors in the last 90 days: 

  1. AliceWyman
  2. Jeff
  3. Michele Rodaro
  4. Underpass
  5. Marcelo Ghelman

KB Localization

Top 10 locales based on total page views

Locale Apr 2021 page views Localization progress (per Jun, 3)
de 10.05% 99%
zh-CN 6.82% 100%
es 6.71% 42%
pt-BR 6.61% 65%
fr 6.37% 86%
ja 4.33% 53%
ru 3.54% 95%
it 2.28% 98%
pl 2.17% 84%
zh-TW 1.04% 6%

Top 5 localization contributors in the last 90 days: 

  1. Milupo
  2. Artist
  3. Markh2
  4. Soucet
  5. Goudron

Forum Support

Forum stats

Month Total questions Answer rate within 72 hrs Solved rate within 72 hrs Forum helpfulness
May 2021 3091 65.97% 13.62% 63.64%

Top 5 forum contributors in the last 90 days: 

  1. Cor-el
  2. FredMcD
  3. Jscher2000
  4. Seburo
  5. Databaseben

Social Support

Channel May 2021
Total conv Conv handled
@firefox 4012 212
@FirefoxSupport 367 267

Top 5 contributors in Q1 2021

  1. Christophe Villeneuve
  2. Md Monirul Alom
  3. Devin E
  4. Andrew Truong
  5. Dayana Galeano

Play Store Support

We don’t have enough data for the Play Store Support yet. However, you can check out the overall Respond Tool metrics here.

Product updates

Firefox desktop

  • Fx 89 / MR1 released (June 1)
    • BIG THANKS – to all the contributors who helped with article revisions, localization, and for the help with ongoing MR1 Rapid Feedback Collection reporting
  • Fx 90 (July 13)
    • Background Update Agent
    • SmartBlock UI improvements
    • About:third-party addition

Firefox mobile

  • Fx for Android 89 (June 1)
    • Improved menus
    • Redesigned Top Sites
    • Easier access to Synced Tabs
  • Fx for iOS V34 (June 1)
    • Updated Look
    • Search enhancements
    • Tab improvements
  • Fx for Android 90 (July 13th)
    • CC autocomplete

Other products / Experiments

  • Sunset of Firefox lite (June 1)
    • Effective June 30, this app will no longer receive security or other updates. Get the official Firefox Android app now for a fast, private & safe web browser
  • Mozilla VPN V2.3 (June 8)
    • Captive Portal Alerts
  • Mozilla VPN V2.4 (July 14)
    • Split tunneling for Windows
    • Local DNS: user settings for local dns server

Shout-outs!

  • Thanks to Danny Colin and Monirul Alom for helping with the MR1 feedback collection project! 🙌

If you know anyone that we should feature here, please contact Kiki, and we’ll make sure to add them in our next edition.

Useful links:

Mozilla L10NL10n Report: June 2021 Edition

Please note some of the information provided in this report may be subject to change as we are sometimes sharing information about projects that are still in early stages and are not final yet. 

Welcome!

Are you a locale leader and want us to include new members in our upcoming reports? Contact us!

New content and projects

Firefox 89 (MR1)

On June 1st, Mozilla released Firefox 89. That was a major milestone for Firefox, and a lot of work went into this release (internally called MR1, which stands for Major Release 1). This new update was well received (see for example this recent article from ZDNet), and that’s also thanks to the amazing work done by our localization community.

For the first time in over a decade, we looked at Firefox holistically, making changes across the board to improve messages, establish a more consistent tone, and modernize some dialogs. This inevitably generated a lot of new content to localize.

Between November 2020 and May 2021, we added 1637 strings (6798 words). To get a point of reference, that’s almost 14% of the entire browser. What’s amazing is that the completion levels didn’t fall drastically:

  • Nov 30, 2020: 89.03% translated across all shipping locales, 99.24% for the top 15 locales.
  • May 24, 2021: 87.85% translated across all shipping locales, 99.39% for the top 15 locales.

The completion level across all locales is lower, but that’s mostly due to locales that are completely unmaintained, and that we’ll likely need to drop from release later this year. If we exclude those 7 locales, overall completion increased by 0.10% (to 89.84%).

Once again, thanks to all the volunteers who contributed to this successful release of Firefox.

What’s new or coming up in Firefox desktop

These are the important deadlines for Firefox 90, currently in Beta:

  • Firefox 90 will be released on July 13. It will be possible to update localizations until July 4.
  • Firefox 91 will move to beta on July 12 and will be released on August 10.

Keep in mind that Firefox 91 is also going to be the next ESR version. Once that moves to release, it won’t generally be possible to update translations for that specific version.

Talking about Firefox 91, we’re planning to add a new locale: Scots. Congratulations to the team for making it to release so quickly!

On a final note, expect to see more updates to the Firefox L10n Newsletter, since this has proved to be an important tool to provide more context to localizers, and help them with testing.

What’s new or coming up in mobile

Next l10n deadlines for mobile projects:

  • Firefox for Android v91: July 12
  • Firefox for iOS v34.1: June 9

Once more, we want to thank all the localizers who worked hard for the MR1 (Proton) mobile release. We really appreciate the time and effort spent on helping ensure all these products are available globally (and of course, also on desktop). THANK YOU!

What’s new or coming up in web projects

AMO

There are a few strings exposed in Pontoon that do not require translation. Only Mozilla staff with the admin role for the product would be able to see them. The developer of the feature will add a comment of “no need to translate”, or context, to these strings at a later time. We don’t know when this will happen. For the time being, please ignore them. Most of the strings with a source string ID of src/olympia/scanners/templates/admin/* can be ignored. However, there are still a handful of strings that fall outside that category.

MDN

The project continues to be on hold in Pontoon. The product repository doesn’t pick up any changes made in Pontoon, so fr, ja, zh-CN, and zh-TW are read-only for now. The MDN site, however, still maintains the articles localized in these languages, plus ko, pt-BR, and ru.

Mozilla.org

The websites in the ar, hi-IN, id, ja, and ms languages have been fully localized through vendor service since our last report. Communities of these languages are encouraged to help promote the sites through various social media platforms to increase downloads, conversions, and new profile creation.

What’s new or coming up in SuMo

Lots of exciting things happening in SUMO in Q2. Here’s a recap of what’s happening:

  • You can now subscribe to Firefox Daily Digest to get updates about what people are saying about Firefox and other Mozilla products on social media like Reddit and Twitter.
  • We now have release notes for Kitsune in Discourse. The latest one was about advanced search syntax, which is a replacement for the former Advanced Search feature.
  • We are trying something new for Firefox 89 by collecting MR1 (Major Release 1) specific feedback from across channels (support forum, Twitter, and Reddit). You can look into how we’re doing it on the contributor thread and learn more about MR1 changes from a list that we put together on this spreadsheet.

As always, feel free to join the SUMO Matrix room to discuss or just say hi to the rest of the community.

What’s new or coming up in Pontoon

Since May, we’ve been running experiments in Pontoon to increase the number of users reading notifications. For example, as part of this campaign, you might have seen a banner encouraging you to install the Pontoon Add-on — which you really should do — or noticed a slightly different notification icon in the top right corner of the window.

Recently, we also sent an email to all Pontoon accounts active in the past 2 years, with a link to a survey specifically about further improving notifications. If you haven’t completed the survey yet, or haven’t received the email, you can still take the survey here (until June 20th).

Look out for pilcrows

When a source string includes line breaks, Pontoon will show a pilcrow character (¶) where the line break happens.

This is what the Fluent file looks like:

onboarding-multistage-theme-tooltip-automatic-2 =
    .title =
        Inherit the appearance of your operating
        system for buttons, menus, and windows.

While in most cases the line break is not relevant — it’s just used to make the source file more readable — double check the resource comment: if the line break is relevant, it will be pointed out explicitly.

If they’re not relevant, you can just put your translation on one line.
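
For example, the string above could be kept on a single line (shown here with the English text simply copied over, just for illustration):

```fluent
onboarding-multistage-theme-tooltip-automatic-2 =
    .title = Inherit the appearance of your operating system for buttons, menus, and windows.
```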

If you want to preserve the line breaks in your translation, you have a few options:

  • Use SHIFT+ENTER to create a new line while translating.
  • Click the ¶ character in the source: that will create a new line in the position where your cursor currently sits.
  • Use the COPY button to copy the source, then edit it. That’s not really efficient, as your locale might need a line break in a different place.

Do not select the source text with your mouse and copy it into the translation field: that will copy the literal ¶ character into the translation, and it will be displayed in the final product, causing bugs.

If you see the ¶ character in the translation field (see red arrow in the image below), it will also appear in the product you are translating, which is most likely not what you want. On the other hand, it’s expected to see the ¶ character in the list of translations under the translation field (green arrow), as it is in the source string and the string list.

Events

  • We held our first Localization Workshop Zoom event on Saturday, June 5th. The next iterations will happen on Friday, June 11th and Saturday, June 12th. We have invited active managers and translators from a subset of locales. If this experience turns out to be useful, we will consider opening it up to a larger audience with expanded locales.
  • Want to showcase an event coming up that your community is participating in? Reach out to any l10n-driver and we’ll include that (see links to emails at the bottom of this report)

Friends of the Lion

Know someone in your l10n community who’s been doing a great job and should appear here? Contact one of the l10n-drivers and we’ll make sure they get a shout-out (see list at the bottom)!

Useful Links

Questions? Want to get involved?

  • If you want to get involved, or have any question about l10n, reach out to:

Did you enjoy reading this report? Let us know how we can improve by reaching out to any one of the l10n-drivers listed above.

Open Policy & Advocacy: Working in the open: Enhancing privacy and security in the DNS

In 2018, we started pioneering work on securing one of the oldest parts of the Internet, one that had till then remained largely untouched by efforts to make the web safer and more private: the Domain Name System (DNS). We passed a key milestone in that endeavor last year, when we rolled out DNS-over-HTTPS (DoH) technology by default in the United States, thus improving privacy and security for millions of people. Given the transformative nature of this technology and in line with our mission commitment to transparency and collaboration, we have consistently sought to implement DoH thoughtfully and inclusively. Today we’re sharing our latest update on that continued effort.

Between November 2020 and January 2021 we ran a public comment period, to give the broader community who care about the DNS – including human rights defenders; technologists; and DNS service providers – the opportunity to provide recommendations for our future DoH work. Specifically, we canvassed input on our Trusted Recursive Resolver (TRR) policies, the set of privacy, security, and integrity commitments that DNS recursive resolvers must adhere to in order to be considered as default partner resolvers for Mozilla’s DoH roll-out.

We received rich feedback from stakeholders across the world, and we continue to reflect on how it can inform our future DoH work and our TRR policies. As we continue that reflection, we’re today publishing the input we received during the comment period – acting on a commitment to transparency that we made at the outset of the process. You can read the comments here.

During the comment period and prior, we received substantial input on the blocklist publication requirement of our TRR policies. This requirement means that resolvers in our TRR programme must publicly release the list of domains that they block access to. This blocking could be the result of legal requirements that the resolver is subject to, or of a user explicitly consenting to certain forms of DNS blocking. We are aware of the downsides associated with blocklist publication in certain contexts, and one of the primary reasons for undertaking our comment period was to solicit constructive feedback and suggestions on how best to ensure meaningful transparency when DNS blocking takes place. Therefore, while we reflect on the input regarding our TRR policies and solutions for blocking transparency, we will relax this blocklist publication requirement: from here on, current and prospective TRR partners will not be required to publish DNS blocklists.

DoH is a transformative technology. It is relatively new and, as such, is of interest to a variety of stakeholders around the world. As we bring the privacy and security benefits of DoH to more Firefox users, we will continue our proactive engagement with internet service providers, civil society organisations, and everyone who cares about privacy and security in the internet ecosystem.

We look forward to this collaborative work. Stay tuned for more updates in the coming months.

The post Working in the open: Enhancing privacy and security in the DNS appeared first on Open Policy & Advocacy.

Firefox UX: Content design considerations for the new Firefox

How we collaborated on a major redesign to clean up debt and refresh the product.

Co-authored with Meridel Walkington

Introducing the redesigned Firefox browser, featuring the Alpenglow theme.

We just launched a major redesign of the Firefox desktop browser to 240 million users. The effort was so large that we put our full content design team — all two of us — on the case. Over the course of the project, we updated nearly 1,000 strings, re-architected our menus, standardized content patterns, established new principles, and cleaned up content debt.

Creating and testing language to inform visual direction

The primary goal of the redesign was to make Firefox feel modern. We needed to concretize that term to guide the design and content decisions, as well as to make the measurement of visual aesthetics more objective and actionable.

To do this, we used the Microsoft Desirability Toolkit, which measures people’s attitudes towards a UI with a controlled vocabulary test. Content design worked with our UX director to identify adjectives that could embody what “modern” meant for our product. The UX team used those words for early visual explorations, which we then tested in a qualitative usertesting.com study.

Based on the results, we had an early idea of where the designs were meeting goals and where we could make adjustments.

Sampling of qualitative feedback from the visual appeal test with word cloud and participant comments.

Improving way-finding in menus

Over time, our application menu had grown unwieldy. Sub-menus proliferated like dandelions. It was difficult to scan, resulting in high cognitive load. The grouping of items was not intuitive. By re-organizing the items, prioritizing high-value actions, using clear language, and removing icons, the new menu better supports people’s ability to move quickly and efficiently in the Firefox browser.

To finalize the menu’s information architecture, we leveraged a variety of inputs. We studied usage data, reviewed past user research, and referenced external sources like the Nielsen Norman Group for menu design best practices. We also consulted with product managers to understand the historical context of prior decisions.

The Firefox application menu, before and after the information architecture redesign.
Changes made to the Firefox application menu include removing icons, grouping like items together, and reducing the number of sub-menus.

As a final step, we created principles to document the rationale behind the menu redesign so a consistent approach could be applied to other menu-related decisions across the product and platforms.

Content design developed these principles, such as ‘Use icons sparingly’ and ‘Write options as verb phrases,’ to help establish a consistent approach for other menus in the product.

Streamlining high-visibility messages

Firefox surfaces a number of messages to users while they use the product. Those messages had dated visuals, inconsistent presentation, and clunky copy.

We partnered with our UX and visual designers to redesign those message types using a content-first approach. By approaching the redesign this way, we better ensured the resulting components supported the message needs. Along the way, we were able to make some improvements to the existing copy and establish guidelines so future modals, infobars, and panels would be higher quality.

Cleaning up paper cuts in modal dialogs

A modal sits on top of the main content of a webpage. It’s a highly intrusive message that disables background content and requires user interaction. By redesigning it we made one of the most interruptive browsing moments smoother and more cohesive.

Annotated example of the content decisions in a redesigned Firefox modal dialog, shown before and after.

Defining new content patterns for permissions panels

Permissions panels get triggered when you visit certain websites. For example, a website may request to send you notifications, know your location, or gain access to your camera and microphone. We addressed inconsistencies and standardized content patterns to reduce visual clutter. The redesigned panels are cleaner and more concise.

Annotated example of the content decisions in a redesigned Firefox permissions panel, shown before and after.

Closing thoughts

This major refresh appears simple and somewhat effortless, which was the goal. A large amount of work happened behind the scenes to make that end result possible — a whole lot of auditing, iteration, communication, collaboration, and reviews. As usual, the lion’s share of content design happened before we put ‘pen to paper.’

Like any major renovation project, we navigated big dreams, challenging constraints, tough compromises, and a whole lot of dust. Software is never ‘done,’ but we cleared significant content weeds and co-created a future-forward design experience.

Thank you, team!

As anyone who has contributed to a major redesign knows, this involved months of collaboration between our user experience team, engineers, and product managers, as well as our partners in localization, accessibility, and quality assurance. We were fortunate to work with such a smart, hard-working group.


Content design considerations for the new Firefox was originally published in Firefox User Experience on Medium, where people are continuing the conversation by highlighting and responding to this story.

hacks.mozilla.org: Implementing Private Fields for JavaScript

This post is cross-posted from Matthew Gaudet’s blog

When implementing a language feature for JavaScript, an implementer must make decisions about how the language in the specification maps to the implementation. Sometimes this is fairly simple, where the specification and implementation can share much of the same terminology and algorithms. Other times, pressures in the implementation make it more challenging, requiring or pressuring the implementation strategy to diverge from the language specification.

Private fields are an example of where the specification language and implementation reality diverge, at least in SpiderMonkey, the JavaScript engine which powers Firefox. To understand more, I’ll explain what private fields are, describe a couple of models for thinking about them, and explain why our implementation diverges from the specification language.

Private Fields

Private fields are a language feature being added to the JavaScript language through the TC39 proposal process, as part of the class fields proposal, which is at Stage 4 in the TC39 process. We will ship private fields and private methods in Firefox 90.

The private fields proposal adds a strict notion of ‘private state’ to the language. In the following example, #x may only be accessed by instances of class A:

class A {
  #x = 10;
}

This means that it is impossible to access that field from outside of the class, unlike public fields, as the following example shows:

class A {
  #x = 10; // Private field
  y = 12; // Public Field
}

var a = new A();
a.y; // Accessing public field y: OK
a.#x; // Syntax error: reference to undeclared private field

Even various other tools that JavaScript gives you for interrogating objects are prevented from accessing private fields (e.g. Object.getOwnProperty{Symbols,Names} don’t list private fields; there’s no way to use Reflect.get to access them).
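
You can check this directly (a quick sketch, runnable in any engine that ships class fields):

```javascript
class A {
  #x = 10; // private field: invisible to reflection
  y = 12;  // public field: an ordinary own property
}

const a = new A();
console.log(Object.getOwnPropertyNames(a));   // ["y"]
console.log(Object.getOwnPropertySymbols(a)); // []
console.log(Reflect.ownKeys(a));              // ["y"]
```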

A Feature Three Ways

When talking about a feature in JavaScript, there are often three different aspects in play: the mental model, the specification, and the implementation.

The mental model provides the high-level thinking that we expect programmers to mostly use. The specification in turn provides the detail of the semantics required by the feature. The implementation can look wildly different from the specification text, so long as the specification semantics are maintained.

These three aspects shouldn’t produce different results for people reasoning through things (though, sometimes a ‘mental model’ is shorthand, and doesn’t accurately capture semantics in edge case scenarios).

We can look at private fields using these three aspects:

Mental Model

The most basic mental model one can have for private fields is what it says on the tin: fields, but private. Now, JS fields become properties on objects, so the mental model is perhaps ‘properties that can’t be accessed from outside the class’.

However, when we encounter proxies, this mental model breaks down a bit; trying to specify the semantics for ‘hidden properties’ and proxies is challenging. (What happens when a Proxy is trying to provide access control to properties, if you aren’t supposed to be able to see private fields with Proxies? Can subclasses access private fields? Do private fields participate in prototype inheritance?) In order to preserve the desired privacy properties, an alternative mental model became the way the committee thinks about private fields.

This alternative model is called the ‘WeakMap’ model. In this mental model you imagine that each class has a hidden weak map associated with each private field, such that you could hypothetically ‘desugar’

class A {
  #x = 15;
  g() {
    return this.#x;
  }
}

into something like

class A_desugared {
  static InaccessibleWeakMap_x = new WeakMap();
  constructor() {
    A_desugared.InaccessibleWeakMap_x.set(this, 15);
  }

  g() {
    return A_desugared.InaccessibleWeakMap_x.get(this);
  }
}

The WeakMap model is, surprisingly, not how the feature is written in the specification, but it is an important part of the design intention behind it. I will cover later how this mental model shows up in practice.

Specification

The actual specification changes are provided by the class fields proposal, specifically the changes to the specification text. I won’t cover every piece of this specification text, but I’ll call out specific aspects to help elucidate the differences between specification text and implementation.

First, the specification adds the notion of [[PrivateName]], which is a globally unique field identifier. This global uniqueness is to ensure that two classes cannot access each other’s fields merely by having the same name.

function createClass() {
  return class {
    #x = 1;
    static getX(o) {
      return o.#x;
    }
  };
}

let [A, B] = [0, 1].map(createClass);
let a = new A();
let b = new B();

A.getX(a); // Allowed: Same class
A.getX(b); // Type Error, because different class.

The specification also adds a new ‘internal slot’ (a specification-level piece of internal state associated with an object in the spec), called [[PrivateFieldValues]], to all objects. [[PrivateFieldValues]] is a list of records of the form:

{
  [[PrivateName]]: Private Name,
  [[PrivateFieldValue]]: ECMAScript value
}

To manipulate this list, the specification adds four new algorithms:

  1. PrivateFieldFind
  2. PrivateFieldAdd
  3. PrivateFieldGet
  4. PrivateFieldSet

These algorithms largely work as you would expect: PrivateFieldAdd appends an entry to the list (though, in the interest of providing errors eagerly, if a matching Private Name already exists in the list, it will throw a TypeError; I’ll show how that can happen later). PrivateFieldGet retrieves a value stored in the list, keyed by a given Private Name, and so on.
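
The four algorithms can be sketched in plain JavaScript (an illustrative model of the spec text only; privateFieldValues stands in for the [[PrivateFieldValues]] internal slot, and this is not how any engine implements them):

```javascript
// Model of the spec's list-based private field algorithms.
function PrivateFieldFind(P, O) {
  for (const entry of O.privateFieldValues) {
    if (entry.privateName === P) return entry;
  }
  return null; // "empty" in spec terms
}

function PrivateFieldAdd(P, O, value) {
  // Eager error: the field must not already exist.
  if (PrivateFieldFind(P, O) !== null) {
    throw new TypeError("private field already present");
  }
  O.privateFieldValues.push({ privateName: P, privateFieldValue: value });
}

function PrivateFieldGet(P, O) {
  const entry = PrivateFieldFind(P, O);
  if (entry === null) throw new TypeError("private field not present");
  return entry.privateFieldValue;
}

function PrivateFieldSet(P, O, value) {
  const entry = PrivateFieldFind(P, O);
  if (entry === null) throw new TypeError("private field not present");
  entry.privateFieldValue = value;
}
```

Note that unlike ordinary property access, both Get and Set throw when the field is absent, and Add throws when it is present.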

The Constructor Override Trick

When I first started to read the specification, I was surprised to see that PrivateFieldAdd could throw. Given that it was only called from a constructor on the object being constructed, I had fully expected that the object would be freshly created, and therefore you’d not need to worry about a field already being there.

This turns out to be possible, a side effect of some of the specification’s handling of constructor return values. To be more concrete, the following is an example provided to me by André Bargull, which shows this in action.

class Base {
  constructor(o) {
    return o; // Note: We are returning the argument!
  }
}

class Stamper extends Base {
  #x = "stamped";
  static getX(o) {
    return o.#x;
  }
}

Stamper is a class which can ‘stamp’ its private field onto any object:

let obj = {};
new Stamper(obj); // obj now has private field #x
Stamper.getX(obj); // => "stamped"

This means that when we add private fields to an object we cannot assume it doesn’t have them already. This is where the pre-existence check in PrivateFieldAdd comes into play:

let obj2 = {};
new Stamper(obj2);
new Stamper(obj2); // Throws 'TypeError' due to pre-existence of private field

This ability to stamp private fields into arbitrary objects interacts with the WeakMap model a bit here as well. For example, given that you can stamp private fields onto any object, that means you could also stamp a private field onto a sealed object:

var obj3 = {};
Object.seal(obj3);
new Stamper(obj3);
Stamper.getX(obj3); // => "stamped"

If you imagine private fields as properties, this is uncomfortable, because it means you’re modifying an object that was sealed by a programmer to future modification. However, using the weak map model, it is totally acceptable, as you’re only using the sealed object as a key in the weak map.

PS: Just because you can stamp private fields into arbitrary objects, doesn’t mean you should: Please don’t do this.
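
The same point can be made with an ordinary WeakMap (a minimal sketch of the mental model):

```javascript
// A sealed object works fine as a WeakMap key, because nothing is
// ever added to the object itself.
const sealedObj = Object.seal({});
const hiddenX = new WeakMap(); // stands in for the per-field hidden map

hiddenX.set(sealedObj, "stamped");
console.log(hiddenX.get(sealedObj)); // "stamped"

// By contrast, adding a real property to a sealed object fails:
let added = false;
try {
  Object.defineProperty(sealedObj, "x", { value: 1 });
  added = true;
} catch (e) {
  console.log(e instanceof TypeError); // true
}
console.log(added); // false
```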

Implementing the Specification

When faced with implementing the specification, there is a tension between following the letter of the specification, and doing something different to improve the implementation on some dimension.

Where it is possible to implement the steps of the specification directly, we prefer to do that, as it makes maintenance of features easier as specification changes are made. SpiderMonkey does this in many places. You will see sections of code that are transcriptions of specification algorithms, with step numbers for comments. Following the exact letter of the specification can also be helpful where the specification is highly complex and small divergences can lead to compatibility risks.

Sometimes however, there are good reasons to diverge from the specification language. JavaScript implementations have been honed for high performance for years, and there are many implementation tricks that have been applied to make that happen. Sometimes recasting a part of the specification in terms of code already written is the right thing to do, because that means the new code is also able to have the performance characteristics of the already written code.

Implementing Private Names

The specification language for Private Names already almost matches the semantics around Symbols, which already exist in SpiderMonkey. So adding PrivateNames as a special kind of Symbol is a fairly easy choice.

Implementing Private Fields

Looking at the specification for private fields, a direct implementation would be to add an extra hidden slot to every object in SpiderMonkey, containing a reference to a list of {PrivateName, Value} pairs. However, implementing this directly has a number of clear downsides:

  • It adds memory usage to objects without private fields.
  • It requires the invasive addition of either new bytecodes or extra complexity on performance-sensitive property access paths.

An alternative option is to diverge from the specification language, and implement only the semantics, not the actual specification algorithms. In the majority of cases, you really can think of private fields as special properties on objects that are hidden from reflection or introspection outside a class.

If we model private fields as properties, rather than a special side-list that is maintained with an object, we are able to take advantage of the fact that property manipulation is already extremely optimized in a JavaScript engine.

However, properties are subject to reflection. So if we model private fields as object properties, we need to ensure that reflection APIs don’t reveal them, and that you can’t get access to them via Proxies.

In SpiderMonkey, we elected to implement private fields as hidden properties in order to take advantage of all the optimized machinery that already exists for properties in the engine. When I started implementing this feature André Bargull – a SpiderMonkey contributor for many years – actually handed me a series of patches that had a good chunk of the private fields implementation already done, for which I was hugely grateful.

Using our special PrivateName symbols, we effectively desugar

class A {
  #x = 10;
  x() {
    return this.#x;
  }
}

to something that looks closer to

class A_desugared {
  constructor() {
    this[PrivateSymbol(#x)] = 10;
  }
  x() {
    return this[PrivateSymbol(#x)];
  }
}

Private fields have slightly different semantics than properties, however. They are designed to issue errors on patterns expected to be programming mistakes, rather than silently accepting them. For example:

  1. Accessing a property on an object that doesn’t have it returns undefined. Private fields are specified to throw a TypeError, as a result of the PrivateFieldGet algorithm.
  2. Setting a property on an object that doesn’t have it simply adds the property. Private fields will throw a TypeError in PrivateFieldSet.
  3. Adding a private field to an object that already has that field also throws a TypeError in PrivateFieldAdd. See “The Constructor Override Trick” above for how this can happen.
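
The first two error cases are observable in any engine that ships class fields (Box is a hypothetical class, just for illustration):

```javascript
class Box {
  #v = 1;
  static get(o) { return o.#v; }  // PrivateFieldGet semantics
  static set(o, x) { o.#v = x; }  // PrivateFieldSet semantics
}

const box = new Box();
console.log(Box.get(box)); // 1: the field is present

try {
  Box.get({}); // a plain object has no #v
} catch (e) {
  console.log(e instanceof TypeError); // true: throws, rather than undefined
}

try {
  Box.set({}, 2); // set does not silently create the field
} catch (e) {
  console.log(e instanceof TypeError); // true
}
```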

To handle the different semantics, we modified the bytecode emission for private field accesses. We added a new bytecode op, CheckPrivateField which verifies an object has the correct state for a given private field. This means throwing an exception if the property is missing or present, as appropriate for Get/Set or Add. CheckPrivateField is emitted just before using the regular ‘computed property name’ path (the one used for A[someKey]).

CheckPrivateField is designed such that we can easily implement an inline cache using CacheIR. Since we are storing private fields as properties, we can use the Shape of an object as a guard, and simply return the appropriate boolean value. The Shape of an object in SpiderMonkey determines what properties it has, and where they are located in the storage for that object. Objects that have the same shape are guaranteed to have the same properties, and it’s a perfect check for an IC for CheckPrivateField.

Other modifications we made to the engine include omitting private fields from the property enumeration protocol, and allowing the extension of sealed objects when adding a private field.
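Both of those modifications are observable from script. Here is an illustrative reconstruction of the constructor-override Stamper trick (the exact definition appears earlier in the post; this version is mine), used to stamp a private field onto a sealed object:

```javascript
class Stamper extends class {
  // The base constructor returns its argument, so the derived class's
  // field initializers run against that object instead of a fresh `this`.
  constructor(obj) { return obj; }
} {
  #x = "stamped";
  static getX(obj) { return obj.#x; }
}

// Private fields are not ordinary properties: stamping works even on a
// sealed object...
const sealed = Object.seal({});
new Stamper(sealed);
console.log(Stamper.getX(sealed)); // "stamped"

// ...and the field is invisible to the property enumeration protocol.
console.log(Object.keys(sealed));                  // []
console.log(Object.getOwnPropertySymbols(sealed)); // []
```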

Proxies

Proxies presented us a bit of a new challenge. Concretely, using the Stamper class above, you can add a private field directly to a Proxy:

let obj3 = {};
let handler = {}; // an empty handler suffices for this example
let proxy = new Proxy(obj3, handler);
new Stamper(proxy);

Stamper.getX(proxy); // => "stamped"
Stamper.getX(obj3);  // TypeError: the private field is stamped
                     // onto the Proxy, not the target!

I definitely found this surprising initially. I had expected that, like other operations, the addition of a private field would tunnel through the proxy to the target. However, once I was able to internalize the WeakMap mental model, this example made much more sense. The trick is that in the WeakMap model, it is the Proxy, not the target object, that is used as the key in the #x WeakMap.

These semantics presented a challenge to our implementation choice to model private fields as hidden properties, however, as SpiderMonkey’s Proxies are highly specialized objects that do not have room for arbitrary properties. In order to support this case, we added a new reserved slot for an ‘expando’ object. The expando is an object allocated lazily that acts as the holder for dynamically added properties on the proxy. This pattern is already used for DOM objects, which are typically implemented as C++ objects with no room for extra properties. So if you write document.foo = "hi", this allocates an expando object for document, and puts the foo property and value in there instead. Returning to private fields: when #x is accessed on a Proxy, the proxy code knows to go and look in the expando object for that property.

In Conclusion

Private Fields is an instance of implementing a JavaScript language feature where directly implementing the specification as written would be less performant than re-casting the specification in terms of already optimized engine primitives. Yet, that recasting itself can require some problem solving not present in the specification.

At the end, I am fairly happy with the choices made for our implementation of Private Fields, and am excited to see it finally enter the world!

Acknowledgements

I have to thank, again, André Bargull, who provided the first set of patches and laid down an excellent trail for me to follow. His work made finishing private fields much easier, as he’d already put a lot of thought into decision making.

Jason Orendorff has been an excellent and patient mentor as I have worked through this implementation, including two separate implementations of the private field bytecode, as well as two separate implementations of proxy support.

Thanks to Caroline Cullen, and Iain Ireland for helping to read drafts of this post, and to Steve Fink for fixing many typos.

The post Implementing Private Fields for JavaScript appeared first on Mozilla Hacks - the Web developer blog.

Open Policy & AdvocacyThe Van Buren decision is a strong step forward for public interest research online

In a victory for security research and other public interest work, yesterday the U.S. Supreme Court held that the Computer Fraud and Abuse Act’s (CFAA) “exceeding authorized access” provision should be narrowly interpreted and cannot be used to criminalize every single violation of a computer-use policy. This is encouraging news for journalists, bug bounty hunters, social science researchers, and many other practitioners who could legitimately access information in a myriad of ways but were at risk of being prosecuted as criminals.

As we stated in our joint amicus brief to the Court in July 2020, over the years some federal circuit courts had interpreted the CFAA so broadly as to threaten practices important to protecting the public, including research and disclosure of software vulnerabilities by those in the security community. Such broad interpretation went beyond security management and was also used to stifle legitimate public interest research, such as looking into the advertising practices of online platforms, something Mozilla has pushed back against in the past.

In its ruling, the Supreme Court held that authorized access under the CFAA is not exceeded when information is accessed on a computer for a purpose that the system owner considers improper. For example, the ruling clarifies that employees would not violate the CFAA simply by using a work computer to check personal email if that is contrary to the company’s computer use policies. The decision overrules some of the most expansive interpretations of the CFAA and makes it less likely that the law will be used to chill legitimate research and disclosures. The decision does, however, leave some open questions on the role of contractual limits in the CFAA that will likely have to be settled via litigation over the coming years.

However, the net impact of the decision leaves the “exceeding authorized access” debate under the CFAA in a much better place than when it began and should be celebrated as a clear endorsement of the years of efforts by various digital rights organizations to limit its chilling effects with the goal of protecting public interest research, including in cybersecurity.

The post The Van Buren decision is a strong step forward for public interest research online appeared first on Open Policy & Advocacy.

about:communityFirefox 89: The New Contributors To MR1

Firefox 89 would not have been possible without our community, and it is a great privilege for us to thank all the developers who contributed their first code change to MR1, 44 of whom were brand new volunteers!

hacks.mozilla.orgLooking fine with Firefox 89

While we’re sitting here feeling a bit frumpy after a year with reduced activity, Firefox 89 has smartened up and brings with it a slimmed down, slightly more minimalist interface.

Along with this new look, we get some great styling features, including the forced-colors media feature and better control over how fonts are displayed. The long-awaited top-level await keyword for JavaScript modules is now enabled, as is the PerformanceEventTiming interface, another addition to the performance suite of APIs: 89 really has been working out!

This blog post provides merely a set of highlights; for all the details, check out the following:

forced-colors media feature

The forced-colors CSS media feature detects whether a user agent restricts the color palette used on a web page. For instance, Windows has a High Contrast mode. If it’s turned on, using forced-colors: active within a CSS media query would apply the styles nested inside.

In this example we have a .button class that declares a box-shadow property, giving any HTML element using that class a nice drop-shadow.

If forced-colors mode is active, this shadow would not be rendered, so instead we’re declaring a border to make up for the shadow loss:

.button {
  border: 0;
  padding: 10px;
  box-shadow: -2px -2px 5px gray, 2px 2px 5px gray;
}

@media (forced-colors: active) {
  .button {
    /* Use a border instead, since box-shadow is forced to 'none' in forced-colors mode */
    border: 2px ButtonText solid;
  }
}

Better control for displayed fonts

Firefox 89 brings with it the line-gap-override, ascent-override and descent-override CSS properties. These allow developers more control over how fonts are displayed. The following snippet shows just how useful these properties are when using a local fallback font:

@font-face {
  font-family: web-font;
  src: url("https://example.com/font.woff");
}

@font-face {
  font-family: local-font;
  src: local(Local Font);
  ascent-override: 90%;
  descent-override: 110%;
  line-gap-override: 120%;
}

These new properties help to reduce layout shift when fonts are loading, as developers can better match the intricacies of a local font with a web font. They work alongside the size-adjust property which is currently behind a preference in Firefox 89.

Top-level await

If you’ve been writing JavaScript over the past few years, you’ve more than likely become familiar with async functions. Now the await keyword, usually confined to use within an async function, has been given independence and allowed to go it alone. As long as it stays within modules, that is.

In short, this means that a JavaScript module whose child modules use top-level await waits for those children to execute before running itself, all without blocking other child modules from loading.

Here is a very small example of a module using the Fetch API and specifying await within the export statement. Any modules that include this will wait for the fetch to resolve before running any code.

// fetch request
const colors = fetch('../data/colors.json')
  .then(response => response.json());

export default await colors;

PerformanceEventTiming

No new look would be complete without a mention of performance. There’s a plethora of Performance APIs, which give developers granular power over their own bespoke performance tests. The PerformanceEventTiming interface is now available in Firefox 89 and provides timing information for a whole array of events. It adds yet another extremely useful feature for developers by cleverly giving information about when a user-triggered event starts and when it ends. A very welcome addition to the new release.
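For instance, a page can watch event timing entries with a PerformanceObserver. This is a browser-only sketch (it won't run outside a document), using the standard Event Timing API entry type:

```javascript
// Browser-only sketch: log timing for user-triggered events.
const observer = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    // Each entry is a PerformanceEventTiming: startTime marks when the
    // event fired, and duration spans from start until the next paint
    // after the handlers finish.
    console.log(entry.name, entry.startTime, entry.duration);
  }
});
// `buffered: true` replays entries recorded before the observer was created.
observer.observe({ type: "event", buffered: true });
```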

The post Looking fine with Firefox 89 appeared first on Mozilla Hacks - the Web developer blog.