SUMO Blog: Freshening up the Knowledge Base for spring 2024

Hello, SUMO community!

This spring we’re happy to announce that we’re refreshing the Mozilla Firefox Desktop and Mobile knowledge bases. This is a project we’ve been working on for the past several months, and now we’re finally ready to share it with you all!

So, what does this mean for SUMO?

Introduction of Article Categories

When exploring our knowledge base, we realized there are so many articles that it’s important to set expectations for users. We’ll be introducing four article types:

  • About – Article that aims to be educational and informs the reader about a certain feature.
  • How To – Article that aims to teach a user how to interact with a feature or complete a task.
  • Troubleshooting – Article that aims to provide solutions to an issue a user might encounter.
  • FAQ – Article that focuses on answering frequently asked questions that a user might have.

We will standardize titles and how articles are formatted per category, so users know what to expect when interacting with an article.

Downsizing and consolidating articles

There are hundreds upon hundreds of articles in our knowledge base. However, many of them are repetitive and contain similar information. We want to reduce the number of articles and improve the quality of our content. We will be archiving articles and revising active articles throughout this refresh.

Style guideline update focused on reducing cognitive load

As mentioned in a previous post, we will be updating the style guidelines, aiming to reduce the cognitive load on users with new guidance on things like in-line images. These aren’t huge changes, but we’ll go over them in more detail when we release the updated style guidelines.

Have questions or feedback? Drop us a message in this SUMO forum thread.

The Mozilla Thunderbird Blog: April 2024 Community Office Hours: Rust and Exchange Support

We admit it. Thunderbird is getting a bit Rusty, but in a good way! In our monthly Development Digests, we’ve been updating the community about enabling Rust in Thunderbird to implement native support for Exchange. Now, we’d like to invite you for a chat with Team Thunderbird and the developers making this change possible. As always, send your questions in advance to officehours@thunderbird.net! This is a great way to get answers even if you can’t join live.

Be sure to note the change in day of the week and the UTC time. (At least the time changes are done for now!) We had to shift our calendar a bit to fit everyone’s schedules and time zones!

April Office Hours: Rust and Exchange

This month’s topic is a new and exciting change to the core functionality: using Rust to natively support Microsoft Exchange. Join us and talk with the three key Thunderbird developers responsible for this shiny (rusty) new addition: Sean Burke, Ikey Doherty, and Brendan Abolivier! You’ll find out why we chose Rust, what challenges we encountered, and how we used Rust to interface with XPCOM and Necko to provide Exchange support. We’ll also give you a peek into some future plans around Rust.

Catch Up On Last Month’s Thunderbird Community Office Hours

While you’re thinking of questions to ask, watch last month’s office hours, where we answered some of your recent frequently asked questions. You can watch clips of specific questions and answers on our TILvids channel. If you’d prefer a written summary, this blog post has you covered.

Join The Video Chat

We’ve also got a shiny new BigBlueButton room, thanks to KDE! We encourage everyone to check out their Get Involved page. We’re grateful for their support, and for having an open source web conferencing solution for our community office hours.

Date and Time: Tuesday, April 23 at 16:00 UTC

Direct URL to Join: https://meet.thunderbird.net/b/hea-uex-usn-rb1

Access Code: 964573

The post April 2024 Community Office Hours: Rust and Exchange Support appeared first on The Thunderbird Blog.

The Mozilla Thunderbird Blog: Team Thunderbird Answers Your Most Frequently Asked Questions

We know the Thunderbird community has LOTS of questions! We get them on Mozilla Support, Mastodon, and X.com (formerly Twitter). They pop up everywhere, from the Thunderbird subreddit to the teeming halls of conferences like FOSDEM and SCaLE. During our March Community Office Hours, we took your most frequently asked questions to Team Thunderbird and got some answers. If you couldn’t watch the full session, or would rather have the answers in abbreviated text clips, this post is for you!

Thunderbird for Android / K-9 Mail

The upcoming release on Android is definitely on everyone’s mind! We received lots of questions about this at our conference booths, so let’s answer them!

Will there be Exchange support for Thunderbird for Android?

Yes! Implementing Exchange in Rust in the Thunderbird Desktop client will enable us to reuse those Rust crates as shared libraries with the Mobile client. Stay up to date on Exchange support progress via our monthly Developer Digests.

Will Thunderbird Add-ons be available on Android?

Right now, no, they will not be available. K-9 Mail uses a different code base than Thunderbird Desktop. Thunderbird add-ons are designed for a desktop experience, not a mobile one. We want to have add-ons in the future, but this will likely not happen within the next two years.

When Thunderbird for Android launches, will it be available on F-Droid?

It absolutely will.

When Thunderbird for Android is ready to be released, what will the upgrade path be?

We know some in the K-9 Mail community love their adorable robot dog and don’t want to give him up yet. So we will support K-9 Mail (same code, different brand) in parallel for a year or two, until the product is more mature, and we see that more K-9 Mail users are organically switching.

Because of Android security, users will need to manually migrate from K-9 Mail to Thunderbird for Android, versus an automatic migration. We want to make that effortless and unobtrusive, and the Sync feature using Mozilla accounts will be a large part of that. We are exploring one-tap migration tools that will prompt you to switch easily and keep all your data and settings – and your peace of mind.

Will CalDAV and CardDAV be available on Thunderbird for Android?

Probably! We’re still determining this, but we know our users like having their contacts and calendars inside one app for convenience, as well as out of privacy concerns. While it would be a lot of engineering effort, we understand the reasoning behind these requests. As we consider how to go forward, we’ll release all these explorations and ideas in our monthly updates, where people can give us feedback.

Will the K-9 Mail API provide the ability to download the saved preferences that Sync stores locally, to plug into automation like Ansible?

Yes! Sync is open source, so users can self-host their own instance instead of using Mozilla’s services. This question touches on the differences in data structure between desktop and mobile, and how each handles settings. So this will take a while, but once we have something stable in a beta release, we’ll have articles on how to hook up your own sync server and do your own automation.


Thunderbird for Desktop

When will we have native Exchange support for desktop Thunderbird?

We hope to land this in the next ESR (Extended Support Release), version 128, in limited capacity. Users will still need the Owl add-on for all situations where the standard Exchange Web Services are not available. We don’t yet know if native calendar and address book support will be included in the ESR. We want to support every aspect of Exchange, but there is a lot of code complexity and a history of changes from Microsoft. So our primary goal for the next ESR is good, stable support for email by default, plus calendar and address book if possible.

When will conversations and a true threaded view be added to Thunderbird?

Viewing your own sent emails is an important component of a true conversation view. This is a top priority and we’re actively working towards it. Unfortunately, this requires overhauling the backend database that underlies Thunderbird, which is 20 years old. Our legacy database is not built to handle conversation views with received and sent messages listed in the same thread. Restructuring a two-decades-old database is not easy. Our goal is to have a new global message database in place by May 31. If nothing has exploded, it should then be much easier to enable conversation view in the front end.

When will we get a full sender name column with the raw email address of the sender? This will help further avoid phishing and spam.

We plan to make this available in the next ESR — Thunderbird 128 — which is due July 2024.

Will there ever be a browser-based view of Thunderbird?

Despite our foundations in Firefox, this is a huge effort that would have to be built from scratch. It isn’t on our roadmap or in our plans for now. If there were high demand, we might examine how feasible it could be. Alex explains this in more detail in the short video below:

The post Team Thunderbird Answers Your Most Frequently Asked Questions appeared first on The Thunderbird Blog.

The Mozilla Blog: Open Source in the Age of LLMs

(To read the complete Mozilla.ai publication featuring all our OSS contributions, please visit the Mozilla.ai blog)

Like our parent company, Mozilla.ai has a founding story rooted in open-source principles and community collaboration. Since our start last year, our key focus has been exploring state-of-the-art methods for evaluating and fine-tuning large-language models (LLMs).

Throughout this process, we’ve been diving into the open-source ecosystem around LLMs.  What we’ve found is an electric environment where everyone is building. As Nathan Lambert writes in his post, “It’s 2024, and they just want to learn.”

“While everything is on track across multiple communities, that also unlocks the ability for people to tap into excitement and energy that they’ve never experienced in their career (and maybe lives).”

The energy in the space, with new model releases every day, is made even more exciting by the promise of open source: as I’ve observed before, anyone can make a contribution and have it be meaningful regardless of credentials, and there are plenty of contributions to be made. If the fundamental question of the web is “Why wasn’t I consulted?”, open source in machine learning today offers the answer: “You are, as long as you can productively contribute PRs. Come have a seat at the table.”

Even though some of us have been active in open-source work for some time, building and contributing to it at a team and company level is a qualitatively different and rewarding feeling. And it’s been especially fun watching upstreamed work make its way into both the communities and our own projects.

At a high level, here’s what we’ve learned about the process of successful open-source contributions: 

1. Start small when you’re new to a project. If you’re contributing to a project for the first time, it takes time to understand its norms: how fast reviews happen, who the key people are, preferences for communication, code review style, build systems, and more. It’s like starting a new job entirely from scratch.

Be gentle with both yourself and the reviewers, and pick something like a documentation task or an issue labeled “good first issue” just to get a feel for how things work.

2. Be easy to work with. There are specific norms around working in open source, and they closely follow the advice in this fantastic post on being an effective developer: “As a developer you have two jobs: to write code, and be easy to work with.”

In open source, being easy to work with means different things to different people, but I generally see it as:

a. Submitting clean PRs with working code that passes tests or gets as close as possible. No one wants to fix your build. 

b. Making small code changes on your own, and proposing larger architectural changes to the group before writing the code for approval. Ask “What do you think about this?” and always try to propose a solution instead of posing more problems to maintainers: they are busy!

c. Writing unit tests if you’re adding a significant feature, where significant is anything more than a single line of code.

d. Remembering Chesterton’s fence: that code is there for a reason; study it before you suggest removing it.

3. Assume good intent, but make intent explicit. When you’re working with people in writing, asynchronously, potentially in other countries or time zones, it’s extremely easy for context, tone, and intent to get lost in translation, and implicit knowledge becomes rife. Assume people are doing the best they can with what they have, and if you don’t understand something, ask about it first.

4. The AI ecosystem moves quickly. Extremely quickly. New models come out every day and are implemented in downstream modules by tomorrow. Make sure you’re OK with this speed and can match the pace. Before you submit PRs, follow issues on the repo, and the repo itself, to get a sense of how quickly things move and get approved. If you’re into fast-moving projects, jump in. Otherwise, pick one that moves at a slower cadence.

5. The LLM ecosystem is currently bifurcated between HuggingFace and OpenAI compatibility. An interesting pattern has developed in my open-source LLM development work. It’s become clear to me that, in this new space of developer tooling around transformer-style language models at industrial scale, you generally conform to be downstream of one of two interfaces:

a. Models that are trained and hosted using HuggingFace libraries, with the HuggingFace Hub in particular as infrastructure.

b. Models that are available via API endpoints, particularly as hosted by OpenAI.

If you want to be successful in this space today, you as a library or service provider have to be able to interface with both of these, as the sketch below illustrates.
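To make that concrete, here’s a minimal Python sketch of the two interfaces. The model names and prompt are illustrative placeholders, not anything from our stack:

```python
# Two ways downstream tooling typically consumes an LLM today.
# Assumes `pip install transformers openai` and an OPENAI_API_KEY in the
# environment; model names are placeholders.

# (a) HuggingFace-style: weights are pulled from the Hub and run locally.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
print(generator("It's 2024, and they just want to learn", max_new_tokens=20))

# (b) OpenAI-style: requests go to a hosted, OpenAI-compatible endpoint.
from openai import OpenAI

client = OpenAI()  # many providers mimic this API via a custom base_url
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Why target two interfaces?"}],
)
print(response.choices[0].message.content)
```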

6. Sunshine is the best disinfectant. As the recent xz backdoor showed, open code is better code, and issues get fixed more quickly. This means: don’t be afraid to work out in the open. All code has bugs, even yours and mine, and discovering those bugs is a natural part of learning and developing better code rather than a personal failing.

We’re looking forward to continuing our contributions, upstreaming them, and learning from them as we continue our product development work.

Read the whole publication and subscribe to future ones on the Mozilla.ai blog.

The post Open Source in the Age of LLMs appeared first on The Mozilla Blog.

The Mozilla Blog: Marek Tuszynski reflects on curating thought-provoking experiences at the intersection of technology and activism

At Mozilla, we know we can’t create a better future alone. That’s why each year, through our Rise 25 Awards, we highlight the work of 25 digital leaders using technology to amplify voices, effect change, and build new technologies globally. These storytellers, innovators, activists, advocates, builders and artists are helping make the internet more diverse, ethical, responsible and inclusive.

This week, we chatted with Marek Tuszynski, an artist and curator who is the Executive Director and co-founder of Tactical Tech, an organization dedicated to supporting initiatives focused on promoting better privacy and digital rights. We talked with Marek about how his travels abroad shaped his career, what sparks his inspiration, and the future challenges we face online.

So the first question that I had that I wanted to ask you about this is very straightforward, but what initially inspired you to co-found Tactical Tech? What was the first thing that made you really want to start the work that you do?

Marek Tuszynski: Things were not looked at the ways that I thought it’s important to look at them, especially technical things that have been around for over 20 years now. But the truth is, it was actually more excitement. Excitement that there’s such a massive opportunity, such a massive chance for many different people — actors, places, nations — to do things different and outside of the constraints that they are in, where they are at the time, and so on. I worked internationally at the time in sub-Saharan Africa and Southeast Asia and was just thinking that a lot of tech is being damned there in these places — I’m also from Eastern Europe, so we’ve never seen the leading edge of the technology, we’ve seen the tail if we’re lucky if the censorship allowed. So I just wanted to use technology as not only an opportunity that I have been kind of given by chance — it just happened I was in the right place in the right time — but also to bring it, and to co-develop it and do things together with people. My focus early on was with open source with software, and that was the driving force behind how we think about which technology is better for society, which is a more right one, which gives you more freedoms, not restrict them, and so on, etc. The story now, as we know, it turned kind of dark, but the inspiration was fascination with the possibilities of technology in terms of the tool for how we can access knowledge and information, the tool for how we can refine the way we see the world. It’s still there, and I think was there at the very beginning, and I think that’s a major force.

You mentioned some of the traveling that you did. I’m very much of the belief that traveling is one of the best ways for us to learn. How much did the traveling and the places that you’ve been to and that you’ve seen influence or impact the work that you did and give you a bigger perspective on some of the things that you wanted to do?

That’s interesting because I come from the art background and one of my fascinations was a period in history of ours where some of the makers spent some of their time when they were learning on traveling. To go and visit other artists or studios, vendors and makers, etc., to see how they do things because there was no other way of information to travel, and for me that was very important. But I think you are right. It is not cool to talk about traveling these days, because travel also means burning fossil fuels and all this kind of stuff — which is unfortunately true, but you can do other travel, I’m doing a lot of sailing these days. But the initial traveling for me, coming from an entirely restricted part of the world and where I was growing up, I actually thought, “I’m never going to travel” in my mind. This is like something I’m going to read in the books, see on films, but never experience because I can’t get a passport. I couldn’t get the basic rights to leave the border that I was constrained with. So the moment I could do that I did it, and I practically never come back. I’ve left the place I’m from 35 years ago.

The question that you’re asking is very important because travel teaches you a lot of things, and you become much more humble. Your eyes and brain and heart opens up much more. And you see that the world is unique, that people around you are unique — you’re not that unique yourself and there’s a lot to learn from that. Initially, when you travel, you have this arrogance — I had this arrogance at least —  that I’m going to go to places, learn them, understand them, and turn into something, etc. And then you go, and you learn, and you never stop. Learning is endless, and that’s the most fascinating part. But I think that the biggest privilege was to meet people and meet them on their own ground with their own ideas about everything. And then you confront yourself and rip the way you frame things because you just come from a place with ideas that somebody else put into your education system. So for me, yeah, it (traveling) worked in a way. And it’s not necessarily the travel in the physical space that we need. Now you can do it virtually like we have the conversation, and you get to know people and, you travel in some way. You may not see the actual architecture of the space you are in, but you are seeing “Okay, this is a person coming from somewhere with a certain set of ideas, questions, and so on, etc.” You look at them and think how we can have the conversation knowing that we come from very different places. Travel gives you that. 

When it gets to like some of the ideation process, what sparks the inspiration for the experiences you try to create? Is there any research or data that you look at? Are there just trends that you look at to get inspired by? What kind of like starts that? 

There’s a mixture. I think we’re probably the least analyzing organization. Even if it’s a trend or some kind of mainstream stuff with the social media or all media, etc., it’s too late. I think for me personally, it’s always observing what’s being talked about. With AI, it’s what aspects of AI actually are being totally omitted. There’s a lot of focus now, for example, on elections and the visible impact of the AI, so how we can amplify this information, confusion and basically deteriorate trust into what we see, what we hear, what we read. And it’s super important. People are going to learn that very quickly. 

I think what is happening is the invisible part, where businesses that are interested in influencing political or non-political opinions around issues that are critical for people will be using AI for analyzing the data. For hyper fast profiling of people in much more clever ways of addressing them with more clever advertisements, in that it won’t necessarily be paid — it doesn’t have to be. Or even how you design certain strategies, etc. that can be augmented by how you use AI. And I think this is where I will be focusing on rather than talking about that the deepfakes which we’ve done already. But to the core of the question you asked about how we have topics that we focus on, from the first day of Tactical Tech, we’ve always based our work on partners, collaborators and people that we work with — and often that we are invited to work with like a group of people or institutional organizations, etc. And good listening is for me part of the critical research. So instead of coming in a view and ideas of research questions, we listen to what questions are already there. Where the curiosity is, where the fears are, where the hopes are, and so on, and sometimes start to build backwards toward and think about what you can bring from the position you occupy. 

Is there one collaboration or organization that you’ve worked with before that you felt was really impactful that you’re the most proud of? 

There are hundreds of those (laughs). We had this group of people from Tajikistan — many people don’t know what Tajikistan is. And they were amazing. They were basically like a sponge, just sucking everything in and trying to engage with the culture cross between us and them and everybody else and people from probably other countries and so on — it was massive. But everybody was positive and trying to figure out some kind of common language that is bizarre English being mixed with some other languages, and so on. And that first encounter turned into a friendship and collaboration that led them to be the key people that brought the whole open source tool to the school system in Tajikistan, and so on, etc. I think that was one of these examples where it was very positive, encouraging. 

We do a lot of collaboration. So we have hundreds of partners now, especially on our educational work. And I think the most successful series was with a number of groups in Brazil that we collaborated with because they really took the content and the collaboration to the next level and took ownership of that. And for me, the ideal scenario over the work we do is that we may be developing something on the stage together, but there should be a moment when we disappear from the stage and somebody else take that stage. And that’s what happened. And that’s just beautiful to see. And when somebody tells you about this product that is amazing, and they don’t even know that you were part of that, that is the best compliment you can get. 

Marek Tuszynski at Mozilla’s Rise25 award ceremony in October 2023.

What do you think is the biggest challenge we face in the world this year on and offline? How do we combat it?

I think people have different challenges in different places. The first thing that we are going to launch is a series of election influence situation rooms, which is just kind of a creative space that has the front of the house and back of the house that you would like to mount around some of the elections that are happening — like U.S. or European Parliament, or many other elections. What we would like to showcase and kind of demystify is not only the way we elect people, but how much the entire system works or doesn’t work. 

And what we’re trying to illustrate with this project “the situation rooms” is to unpack it and show how important, for a lot of entities, confusion is. How important polarization is, how important the lack of trust towards specific formats of communication are, or how institutions fail, because in this environment it is much easier to put forward everything from conspiracy theories to fake information, but it also makes people frustrated. It makes people to be much more black and white and aligning themselves with things that they would not align with earlier. I think this is the major thing that we are focusing on this year. So the set of elections and how we can get people engaged without any paranoia and any kind of dystopian way of thinking about it. It helps ourselves in a way to build apparatus for recognizing this situation we are in. And then let’s build some methods for what we need to know, to understand what is happening and then how that can be useful for democratic person elections, but how also it can be extremely harmful. 

What do you think is one action that everyone should take to make the world and our lives online a little bit better?

Technology is the bridge. Technologies open the channel of communications, so on, etc. Use technology for that. Don’t focus on yourself as much as you can focus on the world and other people. And if technology can help you to understand where they’re coming from, what the needs are and what kind of role you can play from whatever level of privilege you may have, use it. And the fact that you may use better technologies and some people have access to it already, that’s a certain kind of privilege that I think we should be able to share widely. 

You use technology to engage other people who don’t want to engage. Who lost trust. Don’t give up on them. Don’t give up on people on the other side, those voting for people you don’t have any respect for. They are lost. And part of the reason they are lost is the technology they’re using.

It’s challenging the information they see and making them believe in things that they shouldn’t be believing in, and that they probably wouldn’t if that technology didn’t happen. I think we’ve passed this kind of libertarian way of using technology for individual good that’s going to turn everything into a better world. It has proven itself wrong many times by now.

We started Rise25 to celebrate Mozilla’s 25th anniversary, what do you hope people are celebrating in the next 25 years?

I think Rise 25 was fantastic because it was nice to see all the kinds of people. The 25 people who are so different in such different ways thinking about tech and ideas, and so on, etc., and so diverse in many ways that you don’t see very often. And it gave people the space to actually vocalize what they think and so on. And that was definitely unique. So I think Mozilla plays a specific role in this kind of in-between sector thing where it’s a corporation in a city in San Francisco that builds tools, but it’s also a foundation. 

Mozilla has unique access to people around the world that do a lot of creative work around technology. And I think that should be celebrated. I don’t think there’s enough of that. Usually people are celebrated for what they achieve with technology, usually making a lot of money, and so on, etc.; there’s very little talk about the technology that has a positive impact on society and the world. And I think Mozilla should be touching and showcasing that, and I think there are plenty of things that can be celebrated.

What gives you hope about the future of our world?

You know, it’s going to be okay. Don’t worry. But we are going to see people that’ll be happy and there will be people that separate. I think the more we can do now for the next generations of people that are coming up now, the better. And if you’re lucky to live long, you may feel proud for what you’ve been doing — focus on that, rather than imagining or picturing some future machine. 

The post Marek Tuszynski reflects on curating thought-provoking experiences at the intersection of technology and activism appeared first on The Mozilla Blog.

The Mozilla Blog: Rachel Hislop reflects on working for Beyoncé, creating community for Black women and the power of storytelling

At Mozilla, we know we can’t create a better future alone. That’s why each year, through our Rise 25 Awards, we highlight the work of 25 digital leaders using technology to amplify voices, effect change, and build new technologies globally. These storytellers, innovators, activists, advocates, builders and artists are helping make the internet more diverse, ethical, responsible and inclusive.

This week, we chatted with creator Rachel Hislop, a true storyteller at heart who is currently the VP of content and editor-in-chief at OkayAfrica. We talked with Rachel about the ways the internet allows us to tell our own stories, working for Beyoncé and what’s to come in the next chapter of her career.

So the first question I have is what was your favorite Beyoncé project to work on? I want to know. Also, what’s your favorite Beyoncé album?  

I’ll say that my favorite project to work on was definitely Lemonade. It was just so different from anything that had ever existed. It was so culturally relevant. It was well-timed. It was honest and just really beautiful. You can sometimes be jaded by the work when you’re too close to it, and I think that’s across the board in any industry, but there was never a moment of working on that project that I didn’t understand and appreciate just how beautiful everything was. It was really just like, all of you people that I see at work every day, this came out of your minds? And then, it was just a lot of love, and it was a really important time in culture, I think. And I really enjoyed being part of that.

My favorite Beyoncé album … I don’t know that I can answer that because every part has had a very important impact on my life. I had The Writing’s on the Wall on a cassette tape that I used to play in my little boombox. And Dangerously in Love, I was in high school and experiencing little crushes for the first time. The albums grew. I met Sasha Fierce when I’m in college and learning the dualities of self when I’m away from home, and so on and so forth. So, it’s hard to pick a favorite when each of those moments were so tied to different portions of my life.

I’m curious to know what types of stories pull you in and influence the work that you do as a writer?

It’s living, honestly. I’m a really curious person, and I think all the best writers are. It’s even weird calling myself a writer sometimes because so much of what I write right now is just for me, and projects that I really believe in. It’s really just the curiosity. I want to know how everything went. How did you get here? What inspired you? I’m good for going out alone and sitting next to a stranger and then learning their whole life story because I am just truly interested in people and there is no parallel in lived life. I also love reading fiction. I am not the girl that’s like, “yeah, self-help books and self-improvement” and things like that. I want to fully escape into a story. I want to be able to turn my brain on and imagine things and fully detach and escape from this world. And then I love reading old magazine articles from when people were allowed to have long, luxurious deadlines and follow subjects for a really long time. I remember this article that they would make us read in journalism school, which was “Frank Sinatra Has a Cold,” and I love that voyeurism journalism where if you can’t get to the subject, you’re talking about the things around them. Everything is so interesting to me. I also love TikTok. I learn so much. I think that it’s a really valuable form of storytelling. In short-form, I think it’s really hard — it’s harder than people give a lot of these creators credit for. My grand hope is that those small insights through those short videos are piquing curiosity and sending people on rabbit holes to go discover and read and just be deeper into the internet. I remember back in college there was this internet plugin called StumbleUpon, and it would roulette the internet and land on these random pages and learn things. That is how my brain is always processing. I’ll see something that’ll interest me and then be like, “I want to know more about that.”

Rachel Hislop at Mozilla’s Rise25 award ceremony in October 2023.

You’ve been in the content media space for a while. What do you think is the biggest misconception people have about working in this space?

You know that saying that if you do what you love every day, you’ll never work a day in your life? I think it’s that. There is a pressure that you feel when you are following your actual passion. This is not just my job, this is what I like to do in my free time, this is what I think is important. This is what I feel like I was born to do, right? To storytell. And every day, it feels like work. And it feels even harder than work because it feels like a calling. From the outside, I’m sure people are like, “Oh, you get to do this cool stuff. You get to talk to all these interesting people. You get to talk about the things that you care about.” But there is truly a pressure to document this stuff in a way that pays homage to everyone and everything that came before it that solidifies its place in history to come, and to handle things with delicacy and care and importance. There are just so many layers to it. So, it’s never just about the one thing that you love. It’s about the responsibility that you now have to this thing because you love it. And when you’re working in culture spaces specifically, there is always someone that you have to give their flowers to for you to be able to do the work that you’re doing, but you also have to be forward. You have to look forward and innovate, while you are also honoring what has come before. And there can be a misconception that it’s easy. Or “I can do that” — we see a lot of that with interviewers I love, where people say, “Shannon Sharpe didn’t ask the right questions. Insert podcaster here who got a really great hit and didn’t ask the right questions.” We’ve lost the art; because people see only the end product, they believe that it is easy. And they forget that everything that journalists are doing is in service to the audience and not in service to themselves. And we’re seeing this really weird landscape now where everyone is in service to themselves and to their own popularity and to growing their own audiences, but then they don’t serve those audiences with ethics. That for me is the misconception — that it’s just so easy, anyone could do it. Now everyone is doing it, and they’re wondering where the value is and the premium stories and all of these things like that.

Who are the Black women you’re inspired by and the people you go to when you’re faced with so many of the challenges Black people have in the content/media industry?

I am very intentional about friendships, and I treat my friends like extensions of myself. They’re the board of directors for my life, and these are often friends from college. I think people really discount the friends that you make in your first big girl job and how you’re learning everything together. Those have become the people that I call on throughout my career, who I call on to collaborate with projects, who I call on when things are falling apart. I’ve been just so blessed to have those people in my life as my board of directors for all things. And I serve as the same for them. I am going to tell you the truth: I don’t know that I’ve met a Black woman that I’m not inspired by. I truly am just so in awe of so many women. 

I made a practice after the pandemic of being like, “I don’t want these people who I like online to just be my online friends.” I wanted to meet these people in real life. I started just DM’ing people and being like, “Hey, we’ve been on here a while. Can we grab lunch?” And then just continuing that connection in person has been so, so fantastic.

I do also want to shout out Poynter Institute. I did a training with them in 2019 right before the pandemic for women in leadership positions in newsrooms. When I tell you it was a week-long, intensive, seven days we were in a classroom … it was like therapy for work in a way that I didn’t know I needed, and I didn’t know was available to any of us. We built such strong bonds, even though some people worked at competing publications. But when we put all of that aside, it was just women of all ages and all backgrounds who were working in an industry and really, really cared about the work they were doing and wanted to be their best. And through that group of people, I have just made true, lifelong friends. When things were falling apart in 2020, I was calling on my cohort members and building deep, deep connections from there.

What do you think is the biggest challenge we face in the world this year on and offline? How do we combat it?

There’s so much happening in the world that I think the one thing that I can say that continues to be dangerous is misinformation online. I’m going to speak specifically about Palestine. We see the power of storytelling from the front lines in a way that we have never witnessed before with any conflict, right? And we’ve seen that unfold into just horrors that we would never know were happening if we did not see it coming from the front lines. I just give kudos to the journalists that are on the ground and the people who have become journalists by force who have documented us through situations that we couldn’t even fathom. Even if we tried, we couldn’t fathom what it’s like to work through that. And while that is really helpful in illuminating the evils of the world, on the other side of that there’s so much misinformation, because everyone is trying to be fast to discredit what we’re seeing with our own eyes, and the framing that used to be able to take place is not available anymore. And I’m not going to speak to that being a political tool or otherwise, but the framing is not as readily available. I think we saw this with our election in America. Specifically, we’re in an election year — we saw this with our elections, four, eight years ago — and now as technology starts to grow and change faster than we are learning how to master it, I really do believe that misinformation is going to be one of the hardest things to combat.

I really don’t know how to combat it. Things are just changing so quickly and there’s just so much access to so much. I think the answer as it is with most things is community and people coming together to dream together. There’s not going to be a single person that solves for all of this. It’s going to take collective efforts to help make sure that we’re doing our best.

What is one action that you think everyone should take to make the world and our lives online a little better?

I think everyone can be a little bit more curious. We don’t need to trust things at face value the first time. We need to be more curious about the sources of the information that we are taking in. And it doesn’t mean that we have to be constantly engaging in combat with it. You can take things at face value and then do more research to inform yourself about all sides of a story, and I think that that’s one action that we can take in our day-to-day lives to just be better.  

We started Rise25 to celebrate Mozilla’s 25th anniversary. What do you hope people are celebrating in the next 25 years?

I hope we get those flying cars that we were promised (laughs). But truly, I hope that in 25 years, we are celebrating the earth more. I really do hope that we’re celebrating the earth still housing us and that we’re all just being a little kinder and more thoughtful in the ways that we’re engaging with nature and ourselves, and that people are spending a little bit more time reconnecting with who they are offline.

What gives you hope about the future of our world?

I mentor with the Lower East Side Girls Club, which is a nonprofit here in New York, and we do a mentee outing once a month. The girls are aged middle school through high school, and they are so bright and so well-rounded and smart, but they’re also funny, and they have great social cues and they think so deeply about the world and they’re really compassionate. They don’t allow the things that used to trip me up and trip me and my peers up as like middle schoolers, like they’re so evolved past that. Every time that I think that mentoring means me teaching, I realize that it really means me learning, and when I leave those girls, I’m like, “alright, we’re going to be okay.” They give me some hope.

The post Rachel Hislop reflects on working for Beyoncé, creating community for Black women and the power of storytelling appeared first on The Mozilla Blog.

hacks.mozilla.org: Prototype even faster with the Gradio UI for Figma component library

Generative AI is moving quickly as an industry, and that requires teams exploring new ideas and technologies to move quickly as well. To do so, we have been using Gradio, a low-code prototyping toolkit from Hugging Face, to spin up experiments and experiences. Gradio has allowed us to validate concepts through prototyping without large investments of time, effort, or infrastructure.
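To give a feel for that low-code speed, here’s roughly what a minimal Gradio prototype looks like. It’s a sketch: the respond function is a stand-in for a real model call.

```python
import gradio as gr  # pip install gradio

def respond(prompt: str) -> str:
    # Stand-in for a real model call; swap in your inference code here.
    return f"Echo: {prompt}"

# Gradio builds a full web UI around the function; no frontend code needed.
demo = gr.Interface(
    fn=respond,
    inputs=gr.Textbox(label="Prompt"),
    outputs=gr.Textbox(label="Response"),
)

if __name__ == "__main__":
    demo.launch()
```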

Although Gradio has made the development phase of prototyping easier, the design phase has been largely the same. Even with Gradio, designers have had to create components in Figma, outline expected user flows and behaviors, and hand off designs for developers in the same way they have always done. While working on a recent exploration, we realized something was needed: a set of Figma components based on Gradio that enabled designers to create wireframes quickly.

Today, we are releasing our library of design components for Gradio for others to use. The components are based on version 4.23.0 of Gradio and will be available through our Figma profile: Mozilla Innovation Projects, https://www.figma.com/@futureatmozilla. We hope these components help teams accelerate their discovery and experimentation with ML and generative AI.

You can find out more about Gradio at https://www.gradio.app/ and more about innovation at Mozilla at https://future.mozilla.org

Thanks to Amy Chiu and Anais Ron who created the components and to the Gradio team for their work. Happy designing!

What’s Inside Gradio UI for Figma?

Because Gradio is an ever-changing prototyping kit, current components are based on version 4.23.0 of Gradio. We selected components based on their wide array of potential uses. Here is a list of the components inside the kit:

  • Typography (e.g. headers, body fonts)
  • Iconography (e.g. chevrons, arrows, corner expanders) 

Small Components:

  • Buttons
  • Checkbox
  • Radio
  • Sliders
  • Tabs
  • Accordion
  • Delete Button
  • Error Message
  • Media Type Labels
  • Media Player Controller

Big Components:

  • Label + Textbox
  • Accordion with Label + Input
  • Video Player
  • Label + Counter
  • Label + Slider
  • Accordion + Label
  • Checkbox with Label
  • Radio with Label
  • Accordion with Content
  • Accordion with Label + Input
  • Top navigation

How to Access and Use Gradio UI for Figma

To start using the library, follow these simple steps:

  1. Access the Library: Access the component library directly by visiting our public Figma profile (https://www.figma.com/@futureatmozilla) or by searching for “Gradio UI for Figma” within the Figma Community section of your web or desktop Figma application.
  2. Explore the Documentation: Familiarize yourself with the components and guidelines to make the most out of your design process.
  3. Connect with Us: Connect with us by following our Figma profile or emailing us at innovations@mozilla.com

The post Prototype even faster with the Gradio UI for Figma component library appeared first on Mozilla Hacks - the Web developer blog.

The Mozilla Thunderbird Blog: Thunderbird for Android / K-9 Mail: March 2024 Progress Report

a dark background with Thunderbird and K-9 Mail logos centered, with the text "Thunderbird for Android, March 2024 Progress Report"

If you’ve been wondering how the work to turn K-9 Mail into Thunderbird for Android is coming along, you’ve found the right place. This blog post contains a report of our development activities in March 2024. 

We’ve published monthly progress reports for a while now. If you’re interested in what happened previously, check out February’s progress report. The report for the preceding month is usually linked in the first section of a post. But you can also browse the Android section of our blog to find progress reports and release announcements.

Fixing bugs

For K-9 Mail, new stable releases typically include a lot of changes. K-9 Mail 6.800 was no exception. That means a lot of opportunities to accidentally introduce new bugs. And while we test the app in several ways – manual tests, automated tests, and via beta releases – there are always some bugs that aren’t caught and make it into a stable version. So we typically spend a couple of weeks after a new major release fixing the bugs reported by our users.

K-9 Mail 6.801

Stop capitalizing email addresses

One of the known bugs was that some software keyboards automatically capitalized words when entering the email address in the first account setup screen. A user opened a bug and provided enough information (❤) for us to reproduce the issue and come up with a fix.

Line breaks in single line text inputs

At the end of the beta phase a user noticed that K-9 Mail wasn’t able to connect to their email account even though they copy-pasted the correct password to the app. It turned out that the text in the clipboard ended with a line break. The single line text input we use for the password field didn’t automatically strip that line break and didn’t give any visual indication that there was one.

While we knew about this issue, we decided it wasn’t important enough to delay the release of K-9 Mail 6.800. After the release we took some time to fix the problem.

DNSSEC? Is anyone using that?

When setting up an account, the app attempts to automatically find the server settings for the given email address. One part of this mechanism is looking up the email domain’s MX record. We intended for this lookup to support DNSSEC and specifically looked for a library supporting this.

Thanks to a beta tester we learned that DNSSEC signatures were never checked. The solution turned out to be embarrassingly simple: use the library in a way that it actually validates signatures.
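The class of mistake is easy to illustrate. Here’s a Python sketch using dnspython, purely as an illustration (K-9 Mail is an Android app and uses a different DNS library): requesting DNSSEC records is not the same as checking that validation actually happened.

```python
# Illustration only, not K-9 Mail's actual code: asking for DNSSEC
# records does nothing unless you also check the validation result.
import dns.flags
import dns.resolver

resolver = dns.resolver.Resolver()
resolver.use_edns(0, dns.flags.DO, 1232)  # request DNSSEC records (DO bit)

answer = resolver.resolve("example.org", "MX")

# With a validating upstream resolver, the AD (Authenticated Data) flag
# reports whether signatures were checked. Skip this check and "DNSSEC
# support" silently degrades to a plain, unvalidated lookup.
if not (answer.response.flags & dns.flags.AD):
    raise RuntimeError("MX lookup was not DNSSEC-validated")
```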

Strange error message on OAuth 2.0 failure

A user in our support forum reported a strange error message (“Cannot serialize abstract class com.fsck.k9.mail.oauth.XOAuth2Response”) when using OAuth 2.0 while adding their email account. Our intention was to display the error message returned by the OAuth server. Instead an internal error occurred. 

We tracked this down to the tool optimizing the app by stripping unused code and resources when building the final APK. The optimizer was removing a bit too much. But once the issue was identified, the fix was simple enough.
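For readers unfamiliar with this class of bug: on Android, the R8/ProGuard optimizer removes classes it believes are unused, which breaks code that reaches them through reflection or serialization. The remedy is usually a keep rule along these lines; this one is a hypothetical reconstruction based on the class named in the error, not necessarily the exact fix we shipped.

```
# Hypothetical R8/ProGuard keep rule (illustration only): prevent the
# optimizer from stripping the OAuth response classes that are
# serialized at runtime.
-keep class com.fsck.k9.mail.oauth.** { *; }
```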

Crash when downloading an attachment

Shortly after K-9 Mail 6.800 was made available on Google Play, I checked the list of reported app crashes in the developer console. Not a lot of users had gotten the update yet. So there were only very few reports. One was about a crash that occurred when the progress dialog was displayed while downloading an attachment. 

The crash had been reported before. But the number of crashes never crossed the threshold where we consider a crash important enough to actually look at. 

It turned out that the code contained the bug since it was first added in 2017. It was a race condition that was very timing sensitive. And so it worked fine much more often than it did not. 

The fix was simple enough. So now this bug is history.

Don’t write novels in the subject line

The app was crashing when trying to send a message with a very long subject line (around 1000 characters). This, too, wasn’t a new bug. But the crash occurred rarely enough that we didn’t notice it before.

The bug is fixed now. But it’s still best practice to keep the subject short!

Work on K-9 Mail 6.802

Even though we fixed quite a few bugs in K-9 Mail 6.801, there’s still more work to do. Besides fixing a couple of minor issues, K-9 Mail 6.802 will include the following changes.

F-Droid metadata

In preparation for building two apps (Thunderbird for Android and K-9 Mail), we moved the app description and screenshots used for F-Droid’s app listing to a new location inside our source code repository. We later found out that this new location is not supported by F-Droid, leading to an empty app description on the F-Droid website and inside their app.

We switched to a different approach and hope this will fix the app description once K-9 Mail 6.802 is released.

Push not working due to missing permission

Fresh installs of the app on Android 14 no longer automatically get the permission to schedule exact alarms. But this permission is necessary for Push to work. This was a known issue. But since it only affects new installs and users can manually grant this permission via Android settings, we decided not to delay the stable release until we added a user interface to guide the user through the permission flow.

K-9 Mail 6.802 will include a first step to improve the user experience. If Push is enabled but the permission to schedule exact alarms hasn’t been granted, the app will change the ongoing Push notification to ask the user to grant this permission.

In a future update we’ll expand on that and ask the user to grant the permission before allowing them to enable Push.

What about new features?

Of course we haven’t forgotten about our roadmap. As mentioned in February’s progress report we’ve started work on switching the user interface to use Material 3 and adding/improving Android 14 compatibility.

There’s not much to show yet. Some Material 3 changes have been merged already. But the user interface in our development version is currently very much in a transitional phase.

The Android 14 compatibility changes will be tested in beta versions first, and then back-ported to K-9 Mail 6.8xx.

Releases

In March 2024 we published the following stable release:

  • K-9 Mail 6.801

There was no new beta release in March.

The post Thunderbird for Android / K-9 Mail: March 2024 Progress Report appeared first on The Thunderbird Blog.

The Mozilla Thunderbird Blog: Automated Testing: How We Catch Thunderbird Bugs Before You Do

Since the release of Thunderbird 115, a big focus has been on improving the state of our automated testing. Automated testing increases software quality by minimizing the number of bugs accidentally introduced by changes to the code. For each change made to Thunderbird, our testing machines run a set of tests across Windows, macOS, and Linux to detect mistakes and unintended consequences. For a single change (or a group of changes that land at the same time), 60 to 80 hours of machine time is used running tests.

Our code is going to be under more pressure than ever before – with a bigger team making more changes, and monthly releases reducing the time code spends on testing channels before being released.

We want to find the bugs before our users do.

Why We’re Testing

We’re not writing tests merely to make ourselves feel better. Tests improve Thunderbird by:

  • Preventing mistakes
    If we test that some code behaves in an expected way, we’ll find out immediately if it no longer behaves that way. This means a shorter feedback loop, and we can fix the problem before it annoys the users.
  • Finding out when somebody upstream breaks us
    Thunderbird is built from the Firefox code. The Firefox code, which we are not responsible for, is 30 to 40 times the size of the code we are responsible for. When something inevitably changes in Firefox that affects us, we want to know about it immediately so that we can respond.
  • Freeing up human testers
    If we use computers to prove that the program does what it’s supposed to do, particularly if we avoid tedious repetition and difficult-to-set-up tasks, then the limited human resources we have can do more things that humans are better at.
    For example, I’ve recently added tests that check 22 ways to trigger fetching mail, and 10 circumstances fetching mail might not work. There’s no way our human testers (great though they are) are testing all of them, but our automated tests can and do, several times a day.
  • Thinking through what the code should be doing
    Testing forces an engineer to look at the code from a different point-of-view, and this is helpful to think about what the code is supposed to do in more circumstances. It also makes it easier to prove that the code does work in obscure circumstances.
  • Finding existing bugs
    In software terms we’re working with some very old code, and much of it is untested. Testing it puts a fresh set of eyes on the code and reveals some of the mistakes of the past, and where the ravages of time have broken things. It also helps the person writing the tests to understand what the code does, a lot better than just reading the code does.

We’re not trying to completely cover a feature or every edge case in tests. We are trying to create a testing framework around the feature so that when we find a bug, as well as fixing it, we can easily write a test preventing the bug from happening again without being noticed. For too much of the code, this has been impossible without a weeks-long detour into tests.

Breaking New Ground

In the past few months we’ve figured out how to make automated tests for things that were previously impossible:

  • Communication with mail servers using encrypted channels.
  • OAuth2 authentication with mail servers.
  • Communication with web servers where a specific address must be used and an unencrypted channel must not be used.
  • Servers at any given host name or port. Previously, if we wanted to start a server for automated testing, it had to be on the local machine at a non-standard location. Now we can pretend that the server is anywhere, using standard ports, which is needed for proper testing of account configuration features. (Actually, this was possible before, but now it’s much easier.) A toy sketch of the idea follows below.
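To show the flavor of that trick, here’s a toy sketch in Python (our real tests live in Thunderbird’s JavaScript harness; all names here are hypothetical): start a fake server locally, then redirect name resolution so the code under test believes it’s talking to a real host on a standard port.

```python
# Toy illustration of "the server can be anywhere": run a local fake
# server, then patch name resolution so any hostname resolves to it.
import socket
import socketserver
import threading

class FakeImapHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # Greet like a mail server would; a real test would script replies.
        self.wfile.write(b"* OK fake server ready\r\n")

server = socketserver.TCPServer(("127.0.0.1", 0), FakeImapHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
local_port = server.server_address[1]

real_getaddrinfo = socket.getaddrinfo

def fake_getaddrinfo(host, port, *args, **kwargs):
    # Redirect every lookup to the local fake server.
    return real_getaddrinfo("127.0.0.1", local_port, *args, **kwargs)

socket.getaddrinfo = fake_getaddrinfo

# The code under test connects to what it thinks is a standard host/port.
conn = socket.create_connection(("imap.example.org", 993))
print(conn.recv(64))  # b'* OK fake server ready\r\n'
```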

These new abilities are being used to wrap better testing around account set-up features, ahead of the new Account Hub development, so that we can be sure nothing breaks without being noticed. They’re also helping test that collecting mail works when it should, or gives the error prompts we expect when it doesn’t.

Code coverage

We record every line of code that runs during our tests. Collecting all that data tells us what code doesn’t run during our tests. If a block of code doesn’t run during any of our tests, nothing will tell us when it breaks until somebody uses the code and complains.

Our code coverage data can be viewed at coverage.thunderbird.net. You can also look at Firefox’s data at coverage.moz.tools.

Looking at the data, you might notice that our overall number is now lower than it was when we started measuring. This doesn’t mean that our testing got worse; it actually shows where we added a lot of code (that isn’t maintained by us) in the third_party directory. For a better reflection of the progress we’ve made, check out the individual directories, especially mail/base which contains the most important user interface code.

  • Just setting up the code coverage tools and looking at the results uncovered several memory leaks. (A memory leak is where memory is allocated for a task and not released when it is no longer needed.) We fixed these leaks and some more that existed in our test code. We now have very low levels of memory leaking in our test runs, so if we make a mistake it is easy to spot.
  • Code coverage data can also point to code that is no longer used. We’ve removed some big chunks of this dead code, which means we’re not wasting time maintaining it.

Mozmill no more

Towards the end of last year we finally retired an old test suite known as Mozmill. Those tests were partially migrated to a different test suite (Mochitest) about four years ago, and things were mostly working fine, so it wasn’t a priority to finish. These tests now do things in a more conventional way instead of relying on a bunch of clever but weird tricks.

How much of the code is test code?

About 27%. This is a very rough estimate based on the files in our code repository (minus some third-party directories) and whether they are inside a directory with “test” in the name. That’s up from about 19% five years ago.
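
This isn’t the exact script we used, but a comparable rough number can be produced with a short Python walk over a checkout (the skipped directory name is an assumption):

    import os

    def estimate_test_share(repo_root, skip=("third_party",)):
        """Rough split of files into test vs. non-test by directory name."""
        test_files = other_files = 0
        for dirpath, dirnames, filenames in os.walk(repo_root):
            dirnames[:] = [d for d in dirnames if d not in skip]
            if any("test" in part.lower() for part in dirpath.split(os.sep)):
                test_files += len(filenames)
            else:
                other_files += len(filenames)
        total = test_files + other_files
        return 100.0 * test_files / total if total else 0.0

    print(f"~{estimate_test_share('.'):.0f}% of files sit under a test directory")

Counting files rather than lines keeps the estimate crude but cheap, which is all a trend line like this needs.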

There is no particular goal in mind, but I can imagine a future where there is as much test code as non-test code. If we achieve that, Thunderbird will be in a very healthy place.

A stacked area chart showing the estimated lines of test code (in red) and non-test code (in blue) over time, from January 2019 to January 2024. The chart indicates both types of code increase over this period.

Looking ahead, we’ll be asking contributors to add tests to their patches more often. This obviously depends on the circumstance. But if you’re adding or fixing something, that is the best time to ensure it continues to work in the future. As always, feel free to reach out if you need help writing or running tests, either via Matrix or our Topicbox mailing lists.

Geoff Lankow, Staff Engineer

The post Automated Testing: How We Catch Thunderbird Bugs Before You Do appeared first on The Thunderbird Blog.

SUMO BlogKeeping you in the loop: What’s new in our Knowledge Base?

Hello, SUMO community!

We’re setting the stage for something big: a revamp of our style guide designed to make our support content not just user-friendly, but user-delightful. To get a clearer picture of the SUMO user experience, we enlisted the help of an external agency, embarking on a research project designed to peel back the layers of how users interact with our platform. The results were quite revealing. Many users, it turns out, find themselves overwhelmed by the vast amount of information available, often feeling confused and struggling to pinpoint the exact answers they’re searching for. To address this, we’re rolling out targeted improvements and focused enhancements to our style guides and contributor resources, aiming to refine how we organize, categorize, and present our support content in SUMO for a smoother, more intuitive user journey.

Have questions or feedback? Drop us a message in this SUMO forum thread.

Refreshing the content taxonomy

A key takeaway from the research was the users’ difficulty in navigating our content categories. This prompted us to rethink our approach to organizing support content, aiming for a better alignment with user needs and industry best practices. This project is in full swing, and we’ll be ready to share more details with you shortly.

Auditing the Firefox content

In our effort to align our content with user needs, we’ve initiated a comprehensive audit of all Firefox support articles. This exhaustive review aims to identify areas where we can reclassify content, eliminate outdated information, and consolidate similar topics. Our goal is to ensure that every piece of information in the KB is relevant, easy to understand, and directly beneficial to our users.

We’re gearing up to share how you can contribute to this exciting initiative. Mark your calendars for the SUMO Community Meeting on Wednesday, April 10, 2024, where we’ll unveil more about this project.

Updating the article types

Using consistent content types for our knowledge base articles has many benefits including ease of navigation and improved clarity and organization, in addition to helping us create content more effectively. We are transitioning to categorizing external knowledge base articles into four types, each serving a specific purpose:

  • About: These articles address “What is…” questions, providing essential information to help readers understand a topic.
  • How-to: These articles focus on answering “How to…?” questions, guiding readers through the steps required to achieve a specific goal or procedure.
  • Troubleshooting: These articles assist users in identifying, diagnosing, and resolving common issues they may encounter with a product, service, or feature by addressing “How to…?” questions related to problem-solving.
  • FAQ: These articles contain concise answers to frequently asked questions on a single topic, which may not fit within other individual KB articles, providing a quick reference for common inquiries.

Stay tuned for additional training and documentation on these article types!

Reducing cognitive load

We believe finding information should not be akin to a mental obstacle course. Focused on minimizing cognitive load, we’ve outlined a series of strategies aimed at guiding users directly to the information they need, no fuss involved. Below are the key strategies we’re implementing:

  • Straight to the point with inline images and icons: We’re transitioning from textual guidance to visual demonstrations. By embedding inline targeted UI captures and icons directly into the article flow, we aim to provide a more visual path for users, minimizing the need for mental translation of text into actions. But, hang on – we haven’t forgotten about making these changes work for everyone. For those using screen readers, we’re counting on you to help us ensure every image and icon comes with comprehensive alt text, making every visual accessible through sound. And on the localization front, your skills are more important than ever. We’re calling on you to assist in capturing and adding alt text to localized images, ensuring it’s accessible and resonant for every member of our global community. For details see Effective use of inline images.
  • Cleaner, more focused images with SUI (simplified user interfaces): To make things even clearer, we’re simplifying our product’s UI in screenshots to just the essentials. This not only makes the images easier to follow but also means they’ll stay accurate longer, even if small UI changes happen. For more info, see Simplifications.
  • Streamlined steps with annotated screenshots: For tasks that necessitate two or more clicks or actions on a single screen, we’re shifting to a more intuitive approach: using screenshots marked with numbered annotations. This strategy will clear away the need for multiple, similar screenshots, making instructions easier to follow while minimizing scrolling.

Keep an eye out for the updated style guides – they’re coming soon!

What this means for you

Our updates will be rolling out from Q2 to Q4 2024, and we’re thrilled to have you on board as we bring these changes to life. The kickoff is just around the corner, so stay tuned for updates! Have thoughts to share or looking to contribute? We’re all ears. Engage with us directly on this SUMO forum thread. Your feedback and involvement are crucial as we progress together.

Thank you for making a difference!

Open Policy & AdvocacyMozilla provides feedback to ACM’s DSA Guidelines

The EU’s Digital Services Act (DSA) has taken effect, ushering in a new era of accountability, transparency, and responsibility for digital platforms. Mozilla has actively supported the DSA –  and its aim to build a safer digital ecosystem – since the legislation was first proposed, and continues to contribute to conversations about how to implement it effectively.

Technology companies that offer services in the EU must “designate a sufficiently mandated legal representative in the Union and provide information relating to their legal representatives to the relevant authorities,” and each EU country must appoint a Digital Services Coordinator to interpret and enforce the DSA.

In January of this year, the Netherlands Authority for Consumers and Markets (ACM) published draft guidelines for its interpretation and enforcement of the DSA. Mozilla recently provided feedback, focused largely on areas where further detail or clarification would be helpful, as well as on challenges that small and mid-sized platforms may face during implementation.

Specifically, Mozilla recommended the following:

  • Clarification of “ancillary services.”  

The ACM’s draft guidelines note that Recital 13 of the DSA exempts “ancillary services” where, as with the comment section of a newspaper’s website, “the possibility of posting comments… is only an incidental characteristic of the main service.” Mozilla recommends that this “ancillary services” exception also expressly include services for tech support and product feedback, and similar platforms that exist only to support a primary product that is not itself subject to the DSA. Such forums are clearly ancillary to the main products, as their purpose is to help address bugs and other product-specific issues within those products.

  • Refining the definition of “traders.” 

The DSA imposes additional requirements on platforms that host B2C online marketplaces, by requiring that the platforms track and store data about “traders” that operate on their platform.  DSA Recital 23, which presumes that traders in an online marketplace are offering goods or services for a price, highlights that this provision is intended to cover those platforms that facilitate online commerce. Mozilla recommends that the guidelines make this intent clear, by expressly stating that: (i) “traders” do not include those providing free online services, and (ii) platforms which do not incur profits or facilitate the exchange of money are not B2C online marketplaces.

  • Allowing platforms the flexibility to address spam.

The DSA’s obligations do not apply when platforms act to address “deceptive, high-volume commercial content.” For effective implementation of the guidelines, we believe there needs to be more clarification of how such content is defined. The ACM guidance indicates that the exception applies where someone intentionally manipulates a service through the use of bots, fake accounts, or deceptive practices. Mozilla recommends that the guidance be supplemented to ensure that platforms have the ability to address evolving threats: including clarifying that the references to bots and fake accounts are non-exhaustive examples and not intended to further constrain the spam exception, and establishing a plan to periodically update the guidance to address changing circumstances and developing technologies.

  • Clarifying the Statement of Reasons requirement.

Both the DSA itself and the ACM guidance require platforms to provide a statement of reasons whenever they moderate content or restrict a user account, explaining the legal or contractual provision on which their action was based. Mozilla asked that ACM provide additional details on what such statements should contain; this would provide greater clarity and standardization for platforms and ensure that moderation (particularly of illegal content) remains workable at scale.

  • Allowing platforms flexibility on suspensions.

The ACM guidance allows a platform to permanently suspend users for “manifestly illegal content related to serious crimes.”  However, it requires that a platform always issue a warning, before suspending a user. Mozilla recommends that the ACM expressly confirm platforms have the right to suspend users for violating their Terms of Service, even if their activity is not illegal.  Mozilla also recommends that the warning requirement be clarified, and reduced in cases where having to warn a user might prevent platforms from responding to serious offenses in a timely manner.

As a longtime advocate for the DSA and for platform accountability, Mozilla is enthusiastic about the legislation’s potential to create a safer Internet ecosystem for all. Our comments to the ACM, and our ongoing work on this subject, aim to further that goal without overly burdening small and mid-sized platforms. We look forward to working with the ACM and other European regulators in the coming months, as this legislation continues to take shape.

The post Mozilla provides feedback to ACM’s DSA Guidelines appeared first on Open Policy & Advocacy.

The Mozilla Thunderbird BlogThunderbird Time Machine: Was Thunderbird 3.0 Worth The Wait?

A screenshot of the Mozilla Thunderbird 3.0 email client's options menu, with the "General" tab open, displaying settings for the start page and new message alerts.

Let’s step back into the Thunderbird Time Machine and teleport ourselves back to December 2009. If you were on the bleeding edge, maybe you were upgrading your computer to the newly released Windows 7 (or checking out Ubuntu 9.10 “Karmic Koala”.) Perhaps you were pouring all your free time into Valve’s ridiculously fun team-based survival shooter Left 4 Dead 2. And maybe, just maybe, you were eagerly anticipating installing Thunderbird 3.0 — especially since it had been a lengthy two years since Thunderbird 2.0 had launched.

What happened during those two years? The Thunderbird developer community — and Mozilla Messaging — clearly stayed busy and productive. Thunderbird 3.0 introduced several new feature milestones!

1) The Email Account Wizard

We take it for granted now, but in the 2000s, adding an account to an email client wasn’t remotely simple. Traditionally you needed to know your IMAP/POP3 and SMTP server URLs, port numbers, and authentication settings. When Thunderbird 3.0 launched, all that was required was your username and password for most mainstream email service providers like Yahoo, Hotmail, or Gmail. Thunderbird went out and detected the rest of the settings for you. Neat!
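
The modern descendant of this feature consults a shared, public database of provider settings (the ISPDB). As an illustration rather than Thunderbird’s actual lookup code, querying the publicly documented v1.1 endpoint for a domain looks something like this:

    import urllib.request

    def lookup_ispdb(domain):
        """Fetch a domain's published mail settings from the ISPDB."""
        url = f"https://autoconfig.thunderbird.net/v1.1/{domain}"
        with urllib.request.urlopen(url, timeout=10) as response:
            return response.read().decode("utf-8")

    # The XML names IMAP/SMTP hosts, ports, socket types and auth methods.
    print(lookup_ispdb("gmail.com")[:300])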

2) A New Tabbed Interface

With Firefox at its core, Thunderbird followed in the footsteps of most web browsers by offering a tabbed interface. Imagine! Being able to quickly tab between various searches and emails without navigating a chaotic mess of separate windows!

3) A New Add-on Manager

Screenshot from HowToGeek’s Thunderbird 3.0 review.

Speaking of Firefox, Thunderbird quickly adopted the same kind of Add-on Manager that Firefox had recently integrated. No need to fire up a browser to search for useful extensions to Thunderbird — now you could search and install new functionality from right inside Thunderbird itself.

4) Advanced Search Options

Searching your emails got a massive boost in Thunderbird 3.0. Advanced filtering tools meant you could filter your results by sender, attachments, people, folders, and more. A shiny new timeline view was also introduced, letting you jump directly to a certain date’s results.

5) The Migration Assistant

Tying this all together was a simple but wonderful migration assistant. It served as a way to introduce users to certain new features (like per-account IMAP synchronization), and visually toggle them on or off (useful for displaying the revised Message Toolbar and giving users a choice of where to enjoy it). To me, this particular addition felt ahead of its time. We’ve been discussing the idea of re-introducing it in a future Thunderbird release, but one of the steep hurdles to doing so now is localization. If it’s something you’d like to see, let us know in the comments.

Try It Out For Yourself

If you want to personally step into the Thunderbird Time Machine, every version ever released for Windows, Linux, and macOS is available in this archive. I ran mine inside of a Windows 7 virtual machine, since my native Linux install complained about missing libraries when trying to get Thunderbird 3.0 running.

Regardless of whether you’re a new Thunderbird user or a veteran who’s been with us since 2003, thanks for being on the journey with us!

The post Thunderbird Time Machine: Was Thunderbird 3.0 Worth The Wait? appeared first on The Thunderbird Blog.

Web Application SecurityRapidly Leveling up Firefox Security

At Mozilla, we believe in an open web that is safe to use. To that end, we improve and maintain the security of people using Firefox around the world. This includes a solid track record of responding to security bugs in the wild, especially with bug bounty programs such as Pwn2Own. As soon as we discover a critical security issue in Firefox, we plan and ship a rapid fix. This post describes how we recently fixed an exploit discovered at Pwn2Own in less than 21 hours, a success only made possible through the collaborative and well-coordinated efforts of a global cross-functional team of release and QA engineers, security experts, and other stakeholders.

A Bit Of Context

Pwn2Own is an annual computer hacking contest where participants aim to find security vulnerabilities in major software such as browsers. Two weeks ago, this event took place in Vancouver, Canada, where participants investigated everything from Chrome, Firefox, and Safari to MS Word and even the code currently running on your car. Without getting into the technical details of the exploit here, this blog post will describe how Mozilla quickly responds to and ships updated builds for exploits found during Pwn2Own.

To give you a sense of scale, Firefox is a massive piece of software: 30 million+ lines of code, six platforms (Windows 32 & 64bit, GNU/Linux 32 & 64bit, Mac OS X and Android), 90 languages, plus installers, updaters, etc. Releasing such a beast involves coordination across many cross-functional teams spanning the entire globe.

The timing of the Pwn2Own event is known weeks beforehand, so Mozilla is always ready when it rolls around! The Firefox train release calendar takes into consideration the timing of Pwn2Own. We try not to ship a new version of Firefox to end users on the release channel on the same day as Pwn2Own to hopefully avoid multiple updates close together. This also means that we are prepared to ship a patched version of Firefox as soon as we know what vulnerabilities were discovered, if any at all.

So What Happened?

The specific exploit disclosed at Pwn2Own consisted of two bugs, a necessity when typical web content is rendered inside a proverbial browser sandbox. These two sophisticated exploits took an admirable amount of effort to find and leverage. Nevertheless, as soon as the exploit was disclosed, Mozilla engineers got to work, shipping a new release within 21 hours! We certainly weren’t the only browser “pwned”, but we were the first of all to patch our vulnerability. That’s right: before you knew about this exploit, we had already protected you from it.

As scary as this might sound, Sandbox Escapes, like many web browser exploits, are an issue common to all browsers, thanks to the evolving nature of the internet. Firefox developers are always eager to find and resolve these security issues as quickly as possible to ensure our users stay safe. We do this continuously by shipping new mitigations like win32k lockdown and site isolation, investing in security fuzzing, and promoting bug bounties for similar escapes. In the interest of openness and transparency, we also continuously invite and reward security researchers who share their newest attacks, which helps us keep our product safe even when there isn’t a Pwn2Own to participate in.

Related Resources

If you’re interested in learning more about Mozilla’s security initiatives or Firefox security, here are some resources to help you get started:

Mozilla Security
Mozilla Security Blog
Bug Bounty Program
Mozilla Security playlist on YouTube

Furthermore, if you want to kickstart your own security research in Firefox, we invite you to follow our deeply technical blog at Attack & Defense – Firefox Security Internals for Engineers, Researchers, and Bounty Hunters.

Past Pwn2Own Blog: https://hacks.mozilla.org/2018/03/shipping-a-security-update-of-firefox-in-less-than-a-day/

The post Rapidly Leveling up Firefox Security appeared first on Mozilla Security Blog.

The Mozilla Thunderbird BlogThunderSnap! Why We’re Helping Maintain The Thunderbird Snap On Linux

We love our Linux users across all Linux distributions. That is why we’ve stepped up to help maintain the Thunderbird Snap available in the Snap Store.

Last year we took ownership of the Thunderbird Flatpak, and it has been our officially recommended package for Linux users. However, we are expanding our horizons to make sure the Thunderbird Snap experience is officially supported too. We at Thunderbird are team “free software”, independent of the packaging technology. This will mostly affect our Ubuntu users but there are plenty of other Snap users out there as well. 

Why support both the Snap and Flatpak?

In the spirit of free software, we want to support as many of our users as possible without discriminating on their package preferences. We are not a large company with infinite resources, so we can’t support everything under the sun. But we can make informed decisions that reach the majority of our Linux users.

The Thunderbird Snap has been well maintained by the Ubuntu desktop team for years, and we felt it was time to step up and help out.

What does this mean for me?

If you are an Ubuntu user, then you may already be using the Thunderbird Snap. The next release of Ubuntu is 24.04 (available April 25) and will be the first Ubuntu release that seeds the Thunderbird Snap on the ISO. So if you do a fresh full install of Ubuntu, you will be using the Thunderbird Snap that you know is directly supported by the Thunderbird team.

If you are not an Ubuntu user but Snaps are still a part of your life, then you will still benefit from the same rolling updates provided by the Snap experience.

What changes are expected?

From a user perspective, you should see no changes. Just keep using whichever Thunderbird Snap channel you are comfortable with.

From a developer perspective, we have added the Snap build to our build infrastructure on treeherder. This means whenever a full build is triggered automatically from commits, the Snap is built as well for testing. Whenever the build is one we want to release to the public, this will trigger a general flow:

  1. A version bump is pushed to the existing Thunderbird Snap GitHub repository.
  2. The existing Launchpad mirror will pick up this change and automatically build the Snap for x86 and arm64.
  3. If the Launchpad Snap build succeeds, the Snap will be uploaded to the designated Snap Store channel.

So all we are changing is adding the Snap build into the Thunderbird build infrastructure and plugging it into the existing automation that feeds the Snap Store.

Where do I report a bug on the Thunderbird Snap?

As with all supported package types of Thunderbird, we would like bugs about the Thunderbird Snap to be reported on bugzilla.mozilla.org under the Thunderbird project.

The post ThunderSnap! Why We’re Helping Maintain The Thunderbird Snap On Linux appeared first on The Thunderbird Blog.

Mozilla Add-ons BlogDeveloper Spotlight: Control Panel for Twitter

You can’t predict how or when success will come. In the case of Control Panel for Twitter — a Firefox extension that gives users authority over the amount of algorithmic content they’re fed — it went viral in Japan a few years ago and word spread fast. One devoted fan even jumped into the open-source code and quickly localized the extension in Japanese, further catapulting its appeal. Today, Control Panel for Twitter has more than 250,000 users from all over the world enjoying it across various browsers.

A comprehensive Options page gives you easy, intuitive control over your Twitter/X experience.

“Most of my extensions are for sites I’m a long-time user of, fixing issues which bug me, and adding missing features,” explains developer Jonny Buchannon. One of the first issues he addressed was designing a feature that moved retweets into a separate tab.

“If you don’t like the algorithmic ‘For you’ timeline, it’s usually because it’s full of random tweets about topics you’re not interested in, or worse, deliberate engagement bait. If you look at all the retweets in your timeline, they tend to have a similar problem,” explains Buchannon. “By default, following someone on Twitter lets them put any tweet in your timeline with no effort — a single click or tap — without having to add their own comment, and sometimes they do that because the tweet in question made them feel strong negative emotions; sometimes people will also retweet a string of tweets about similar topics, filling up your timeline.”

To fix this problem the extension swaps the “For you” timeline for the “Following” (chronological) version. Control Panel for Twitter can also hide other types of Twitter/X content like the “See new Tweets” button, “Who to follow,” “Follow some topics,” all the X Premium upsell prompts, and more.

Even with gobs of current customization features, Buchannon says there’s a “huge backlog” of potential enhancements in their GitHub Issues. New features coming soon include the ability to control what you see in Notifications (like hiding Likes and retweets) and improvements to viewing a conversation under a focused tweet.

App-solutely atrocious experience — try Twitter/X on the mobile web!

Control Panel for Twitter is also available on Firefox for Android (addons.mozilla.org [AMO] recently launched an open ecosystem of extensions on Firefox for Android). While it may seem strange to use a mobile browser to access Twitter/X instead of the app, Buchannon says he primarily added mobile support for his own personal use. “I’m the #1 user on that front,” he says before issuing a “warning” to prospective users of his extension on Firefox for Android: “Once you get used to the changes Control Panel for Twitter makes to the experience, default Twitter is unusable — be it the app or the website.”

There are also mobile-specific features, such as changes it brings to Twitter/X search functionality. In standard Twitter/X, when you tap the Search nav you’re brought to the Explore page, which is loaded with algorithmic content. Control Panel for Twitter can hide that so you’re simply presented with a streamlined search field.

Apparently Buchannon isn’t alone in his preference to experience the mobile web version of Twitter/X while using his extension. He claims Control Panel for Twitter has only been available on the App Store for Safari for a little over a year, but already 78% of its Safari users are using it on the iPhone.

Based on the same philosophical functionality as Control Panel for Twitter, Buchannon just released Control Panel for YouTube.

“One of the main focuses of the initial version was improving the Subscription pages by automatically hiding any content you don’t want to see in there like Shorts, live streams, ‘upcoming’ videos you can’t watch now, and hiding videos you’ve already watched, so it acts more like an inbox, where videos disappear as you watch them.”

Sounds great, can’t wait to try it out. Less is often more with social media.

Do you have an intriguing extension development story? Do tell! Maybe your story should appear on this blog. Contact us at amo-featured [at] mozilla [dot] org and let us know a bit about your extension development journey.

The post Developer Spotlight: Control Panel for Twitter appeared first on Mozilla Add-ons Community Blog.

The Mozilla BlogEmpowering Choice: Firefox Partners with Qwant for a Better Web

Your tech choices matter more than ever. That’s why at Firefox, we believe in empowering users to make informed decisions that align with their values. In that spirit, we’re excited to announce our partnership with Qwant, a search engine that prioritizes user privacy and tracker blocking. 

Did you know you can choose your preferred search engine right from your Firefox URL bar? Whether you prioritize privacy, climate protection, or simply want a search experience tailored to your preferences, we’ve got you covered.

Qwant is a privacy-focused search engine that puts your needs first while protecting your personal data. By blocking trackers and advertisements, Qwant helps ensure your search results remain unbiased and comprehensive. Just like Firefox, they are committed to protecting your privacy and preserving the decentralized nature of the web, where people have control over their online experiences.

Together, Firefox and Qwant are contributing to a more open, inclusive web, and above all — one where you can make an informed choice about what tech you use, and why. Your tech choices make a difference. 

As Firefox continues to champion user empowerment and innovation, we invite you to join us in shaping a web that works for everyone. Together, let’s make a positive impact—one search at a time.

The post Empowering Choice: Firefox Partners with Qwant for a Better Web  appeared first on The Mozilla Blog.

The Mozilla BlogThe cost of cutting-edge: Scaling compute and limiting access

(To read the complete Mozilla.ai publication on LLM evaluation, please visit the Mozilla.ai blog)

In a year marked by extraordinary advancements in artificial intelligence, largely driven by the evolution of large language models (LLMs), one factor stands out as a universal accelerator: the exponential growth in computational power.

Over the last few years, researchers have continuously pushed the boundaries of what a ‘large’ language model means, increasing both the size of these models and the volume of data they were trained on. This exploration has revealed a consistent trend: as the models have grown, so have their capabilities.

Fast forward to today, and the landscape has radically transformed. Training state-of-the-art LLMs like ChatGPT has become an immensely costly endeavor. The bulk of this expense stems from the staggering amount of computational resources required. To train an LLM, researchers process enormous datasets using the latest, most advanced GPUs (graphics processing units). The cost of acquiring just one of these GPUs, such as the H100, can reach upwards of $30,000. Moreover, these units are power-hungry, contributing to significant electricity usage. 

While there have been substantial efforts by researchers and organizations towards openness in the development of large language models, in addition to the compute challenges, three major hurdles have also hampered efforts to level the playing field:

  • Limited Transparency: The specifics of model architecture, data sources, and tuning parameters are often closely guarded.
  • Challenging Reproducibility: Due to the swift pace of innovation, independently replicating these models on your own infrastructure can be very difficult.
  • Disjointed Evaluation: The absence of a universally accepted benchmark for LLM evaluation complicates direct comparisons. The multitude of available tasks and frameworks, each assessing different capabilities, means comparisons are often inconclusive.

The high compute requirements, coupled with the opaqueness and the complexities of developing and evaluating LLMs, are hampering progress for researchers and smaller organizations striving for openness in the field. This not only threatens to reduce the diversity of innovation but also risks centralizing control of powerful models in the hands of a few large entities. The critical question arises: are these entities prepared to shoulder the ethical and moral responsibilities this control entails? Moreover, what steps can we take to bridge the divide between open innovators and those who hold the keys to the leading LLM technology?

Why LLMs make model evaluation harder than ever

Evaluation of LLMs involves assessing their performance and capabilities across various tasks and benchmarks; it provides a measure of progress and highlights areas where models excel or need improvement.

‘Traditional’ machine learning evaluation is quite straightforward: if we develop a model to predict lung cancer from X-ray images, we can test its accuracy by using a collection of X-rays that doctors have already diagnosed as either having cancer (YES) or not (NO). By comparing the model’s predictions with the doctor-diagnosed cases, we can assess how well it matches the expert classifications.
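
As a minimal sketch of that comparison (the labels below are invented for illustration), accuracy is simply the share of cases where the model agrees with the experts:

    def accuracy(predictions, labels):
        """Fraction of X-rays where the model matches the doctors."""
        correct = sum(p == l for p, l in zip(predictions, labels))
        return correct / len(labels)

    doctor_labels = ["YES", "NO", "NO", "YES", "NO"]   # expert diagnoses
    model_output  = ["YES", "NO", "YES", "YES", "NO"]  # model predictions
    print(f"accuracy: {accuracy(model_output, doctor_labels):.0%}")  # 80%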

In contrast, LLMs can complete an almost endless number of tasks: summarization, autocompletion, reasoning, generating recommendations for movies and recipes, writing essays, telling stories, generating good code, and so on. Evaluation of performance therefore becomes much, much harder.
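
A tiny illustration of why the old yardstick breaks down: with free-form output there is no single correct answer to compare against, so exact-match scoring (the strings below are invented) rejects perfectly good responses.

    reference = "Thunderbird is an open-source email client."
    candidates = [
        "Thunderbird is an open source e-mail client.",
        "An email client called Thunderbird, which is open source.",
    ]
    # Exact-match scoring rejects two perfectly reasonable answers.
    print([c == reference for c in candidates])  # [False, False]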

At Mozilla.ai, we believe that making open-source model fine-tuning and evaluation as accessible as possible is an important step to helping people own and trust the technology they use. Currently, this still requires expertise and the ability to navigate a rapidly evolving ecosystem of techniques, frameworks, and infrastructure requirements. We want to make this less overwhelming for organizations and developers, which is why we’re building tools that:

  • Help them find the right open-source LLM for their needs
  • Make it simple to fine-tune an open-source LLM on their own data
  • Make language model evaluation and validation more accessible

We think there will be a significant opportunity for many organizations to use these tools to develop their own small and specialized language models that address a range of meaningful use cases in a cost-effective way. We strive to empower developers and organizations to engage with and trust the open-source ecosystem and minimize their dependence on using closed-source models over which they have no ownership or control.

Read the whole publication and subscribe to future ones in the Mozilla.ai blog.

The post The cost of cutting-edge: Scaling compute and limiting access appeared first on The Mozilla Blog.

The Mozilla BlogGoogle’s Protected Audience Protects Advertisers (and Google) More Than It Protects You

The announcement that Google would remove the ability to track people using cookies in its Chrome browser was met with some consternation from advertisers. After all, when your business relies so heavily on tracking, as is common in the advertising industry, removing the key means of performing that tracking is a bit of a big deal.

Google relies on tracking too, but this change has the potential to skew the advertising market in Google’s favor. For online advertisers looking to perform individualized ad targeting, tracking is a significant source of visitor information. Smaller ad networks and websites depend on being able to source information about what people do on other sites in order to be competitive in the ruthless online advertising marketplace. Without that, these smaller players might be less able to connect visitors with the — often shady — trade in personal data.

On the other hand, entities like Google, which operate large sites, might rely less on information from other sites. Losing the information that comes from tracking people might affect them far less when they can use the information they gather from their many services.

So, while the privacy gains are clear — reducing tracking means a reduction in the collection and trade of information about what people do online — the competition situation is awkward. Here we have a company that dominates both the advertising and browser markets, proposing a change that comes with clear privacy benefits, but it will also further entrench its own dominance in the massively profitable online advertising market.

Protected Audience is a cornerstone of Google’s response to pressure from competition regulators, in particular the UK Competition and Markets Authority, with whom Google entered into a voluntary agreement in 2022. Protected Audience seeks to provide some counterbalance to the effects of better privacy in the advertising market.

We find Google’s claims about the effect of Protected Audience on advertising competition credible. The proposal could make targeted advertising better for sites that heavily relied on tracking in the past. That comes with a small caveat: complexity might cause a larger share of advertising profits to go to ad tech intermediaries.

However, the proposal fails to meet its own privacy goals. The technical privacy measures in Protected Audience fail to prevent sites from abusing the API to learn about what you did on other sites.

To say that the details are a bit complicated would be something of an understatement.  Protected Audience is big and involved, with lots of moving parts, but it can be explained with a simple analogy.

The idea behind Protected Audience is that it creates something like an alternative information dimension inside of your (Chrome) browser. In this alternative dimension, tracking what you saw and did online is possible. Any website can push information into that dimension. While we normally avoid mixing data from multiple sites, those rules are changed to allow that. Sites can then process that data in order to select advertisements. However, no one can see into this dimension, except you. Sites can only open a window for you to peek into that dimension, but only to see the ads they chose.

Leaving the details aside for the moment, the idea that personal data might be made available for specific uses like this is quite appealing. A few years ago, something like Protected Audience might have been the stuff of science fiction. Protected Audience might be flawed, but it demonstrates real potential. If this is possible, people might get more of a say in how their data is used. Rather than just have someone spy on your every action and then use that information as they like, you might be able to specify what they can and cannot do. The technology could guarantee that your choice is respected.

Maybe advertising is not the first thing you would do with this newfound power, but maybe if the advertising industry is willing to fund investments in new technology that others could eventually use, that could be a good thing.

Sadly, Protected Audience fails in two ways. To be successful, it must process data without leaks, and in a design this complex a few holes are almost to be expected. The bigger problem, however, is that the browser needs to stop websites from seeing the information that they process.

Preventing advertising companies from looking at the information they process makes it extremely difficult to use Protected Audience. In response to concerns from these companies, Google loosened privacy protections in a number of places to make it easier to use. Of course, by weakening protections, the current proposal provides no privacy.  In other words, to help make Protected Audience easier to use, they made the design even leakier.

A lot of these leaks are temporary. Google has a plan and even a timeline for closing most of the holes that were added to make Protected Audience easier to use for advertisers. The problem is that there is no credible fix for some of the information leaks embedded in Protected Audience’s architecture. 

A stronger Protected Audience might lead us to ask some fairly challenging questions. Do objections to targeted advertising arise solely from the privacy problems of current technology, or is targeted manipulation itself the problem? Targeted advertising is more effective because it uses greater information about its audience — you — to better influence your decisions. So would a system that prevented information collection, but still allowed advertisers to exercise that influence, be acceptable?

We would also need to decide to what extent a browser — a user agent — can justifiably act in ways that are not directly in the interests of its user. Protected Audience exists for the benefit of the advertising industry. A system that makes it easier to make websites supported by advertising has benefits. After all, advertising does have the potential to make content more widely accessible to people of different means, with richer people effectively subsidizing content for those of lesser means. A stronger Protected Audience might then provide people with a real, if indirect, benefit. Does that benefit outweigh the costs of giving advertisers greater influence over our decision-making?

With Protected Audience as it is today, we can simply set those questions aside. In failing to achieve its own privacy goals, Protected Audience is not now — and maybe not ever — a good addition to the Web.

Read our much longer analysis of Protected Audience for more details.

The post Google’s Protected Audience Protects Advertisers (and Google) More Than It Protects You appeared first on The Mozilla Blog.

The Mozilla Thunderbird BlogThunderbird Monthly Development Digest: March 2024

Stylized Thunderbird icon with a code prompt in its center, against a purple background.

Hello Thunderbird Community! March is over, which means it’s time for another Development Digest to share the current progress and product direction of Thunderbird development.

Is this your first time reading the Development Digest? Find them all using the Dev Digest tag!

Rust and Exchange

It seems that this section is part of every Development Digest! But that’s the reality of these large efforts, spanning across multiple months with slow but steady progress.

This month we completed initial Exchange Autodiscovery and compatibility with OAuth in our account setup flow, as well as fetching and rendering of all folders. Some areas still need polish and cleanup, but work continues towards having things behind a pref in the next beta release. You can follow the progress in this bug.
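
The implementation itself is written in Rust against Thunderbird’s networking stack, but to give a feel for the protocol being spoken, here is a bare-bones Python sketch of a POX Autodiscover probe. The host and mailbox are illustrative, and the real flow tries several candidate URLs and layers OAuth on top.

    import urllib.request

    # Request body per Microsoft's published POX Autodiscover schema.
    BODY = """<?xml version="1.0" encoding="utf-8"?>
    <Autodiscover xmlns="http://schemas.microsoft.com/exchange/autodiscover/outlook/requestschema/2006">
      <Request>
        <EMailAddress>user@example.com</EMailAddress>
        <AcceptableResponseSchema>http://schemas.microsoft.com/exchange/autodiscover/outlook/responseschema/2006a</AcceptableResponseSchema>
      </Request>
    </Autodiscover>"""

    request = urllib.request.Request(
        "https://autodiscover.example.com/autodiscover/autodiscover.xml",
        data=BODY.encode("utf-8"),
        headers={"Content-Type": "text/xml; charset=utf-8"},
    )
    # The response XML names the EWS endpoint the client should use.
    with urllib.request.urlopen(request, timeout=10) as response:
        print(response.read().decode("utf-8")[:300])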

Meanwhile, if you need to parse the Microsoft Exchange Web Services data set and the current crates for serializing and deserializing XML don’t serve you well, here’s a goodie to try: https://github.com/thunderbird/xml_struct

List management

Shout out to Magnus for implementing the first step towards a more manageable mailing list subscription flow. An initial implementation of the List Management feature just landed on daily and beta, and it was recently announced in the tb-beta mailing list with a screenshot to show it in action.

It’s currently accessible via a context menu on the List ID. But we’re planning to do some UX and UI explorations to find the best way to expose it without making it annoying.

You can follow the work from this bug.

ESMification completed!

Another big shout out to Magnus for finishing the ESMification effort! As users, you won’t see or notice any difference, but for developers this substantial architectural change saw the removal of all .jsm files in favor of standard JavaScript modules. 

A huge win for a more standardized code base! This allows us to leverage all the nice features of modern JavaScript in Thunderbird development. 

Tiny changes and improvements in Thunderbird development

A lot of nice quality-of-life improvements tend to happen in small chunks that are not easy to spot right away.

Here’s a list of the projects we’re actively working on and will be focusing on for the next month:

  • Cards view UI completion.
  • A fix for the missing FindBar in multimessage and browser views.
  • Implementation of a new visual selection paradigm.
  • Improvements to usability and accessibility of the Quick Filter bar.
  • Completion of the email setup in the new Account Hub.
  • Many add-ons API improvements and additions (big shout out to John).
  • Support for viewing nested signed messages and other OpenPGP improvements.

Stay tuned and make sure to sign up to our mailing lists to get detailed updates on all the items in this list, and a lot more.

As usual, if you want to see things as they land you can always check the pushlog and try running daily, which would be immensely helpful for catching bugs early.

See ya next time in our April Development Digest.

Alessandro Castellani (he, him)
Director of Product Engineering

If you’re interested in joining the technical discussion around Thunderbird development, consider joining one or several of our mailing list groups here.

The post Thunderbird Monthly Development Digest: March 2024 appeared first on The Thunderbird Blog.

The Mozilla BlogMozilla Announces Call for Entries for the 2nd Annual Rise25 Awards in Dublin, Ireland

Haven’t filled out the nomination form yet? You’re in luck. We are extending the deadline for nominations for 2024’s Rise25 cohort until 5PM PT Friday, April 12th. You can nominate someone you know who is making an impact with AI (or yourself) below:

www.mozilla.org/rise25/nominate/

On the heels of Mozilla’s Rise25 Awards in Berlin last year, we’re excited to announce that we’ll be returning once again with a special celebration that will take place in Dublin, Ireland later this year.

The 2nd Annual Rise25 Awards will feature familiar categories, but with an emphasis on trustworthy AI. We will be honoring 25 people who are leading the next wave of AI — who are using philanthropy, collective power, and the principles of open source to make sure the future of AI is responsible, trustworthy, inclusive and centered around human dignity.

2023 was indeed the year of AI, and as more people adopt it, we know it is a technology that will continue to impact our culture and society, act as a catalyst for innovation and creation, and be a medium to engage people from all walks of life in conversations thanks to its growing ubiquity in our everyday lives.

We know we cannot do this alone: At Mozilla, we believe the most groundbreaking innovations emerge when people from diverse backgrounds unite to collaborate and openly trade ideas. 

So if you know someone who you think should be celebrated, we want to hear from you.

Five winners from each of the five categories below will be selected to make up our 2024 Rise25 cohort: 

Advocates: Guiding AI towards a responsible future

These are the policymakers, activists, and thinkers ensuring AI is developed ethically, inclusively, and transparently. This category also includes those who are adept at translating complex AI concepts for the broader public — including journalists, content creators, and cultural commentators. They champion digital rights and accessible AI, striving to make AI a force for societal good.

Builders: Developing AI through ethical innovation

They are the architects of trustworthy AI, including engineers and data scientists dedicated to developing AI’s open-source language infrastructure. They focus on technical proficiency and responsible and ethical construction. Their work ensures AI is secure, accessible, and reliable, aiming to create tools that empower and advance society. 

Artists: Reimagining AI’s creative potential

They transcend traditional AI applications, like synthesizing visuals or using large language models. Their projects, whether interactive websites, films, or digital media, challenge our perceptions and demonstrate how AI can amplify and empower human creativity. Their work provokes thought and offers fresh perspectives on the intersection of AI and art.

Entrepreneurs: Fueling AI’s evolution with visionary ventures

These daring individuals are transforming imaginative ideas into reality. They’re crafting businesses and solutions with AI to meet societal needs, improve everyday life and forge new technological paths. They embody innovation, steering startups and projects with a commitment to ethical standards, inclusiveness and enhancing human welfare through technology.

Change Agents: Cultivating inclusive AI

They are challengers that lead the way in diversifying AI, bringing varied community voices into tech. They focus on inclusivity in AI development, ensuring technology serves and represents everyone, especially those historically excluded from the tech narrative. They are community leaders, corporate leaders, activists and outside-the-box thinkers finding ways to amplify the impacts of AI for marginalized communities. Their work fosters an AI environment of equality and empowerment.

This year’s awards build upon the success of last year’s programming and community event in Berlin, which brought to life what a future trustworthy Internet could look like. Last year’s event crowned trailblazers and visionaries across five distinct categories: Builders, Activists, Artists, Creators, and Advocates. (Psst! Stay tuned as we unveil their inspiring stories in a video series airing across Mozilla channels throughout the year, leading up to the 2nd Annual Rise25 Awards.)

So join us as we honor the innovators, advocates, entrepreneurs, and communities who are working to build a happier, healthier web. Click here to submit your nomination today.

The post Mozilla Announces Call for Entries for the 2nd Annual Rise25 Awards in Dublin, Ireland appeared first on The Mozilla Blog.

Open Policy & AdvocacyHow the U.S. Government is leading by example on artificial intelligence

For years, the U.S. government has seen the challenges and opportunities of leveraging AI to advance its mission. Federal agencies have tried to use facial recognition to identify suspects and taxpayers, raising serious concerns about bias and privacy. Some agencies have tried to use AI to identify veterans at higher risk of suicide, where incorrect predictions in either direction can harm veterans’ health and well-being.

On the flip side, federal agencies are already harnessing AI in promising ways — from making it easier to forecast the weather, to predicting failures of air navigation equipment, to simply automating paperwork. If harnessed well, AI promises to improve the many federal services that Americans rely upon every day.

That’s why we’re thrilled that, today, the White House established a strong policy to empower federal agencies to responsibly harness the power of AI for public benefit. The policy carefully identifies riskier uses of AI and sets up strong guardrails to ensure those applications are responsible. And, the policy simultaneously creates leadership and incentives for agencies to fully leverage the potential of AI.

The policy is rooted in a simple observation: not all applications of AI are equally risky or equally beneficial. For example, it’s far less risky to use AI for digitizing paper documents than to use AI for determining who receives asylum. The former doesn’t need more scrutiny beyond existing rules, but the latter introduces risks to human rights and should be held to a much higher bar.

Diagram explaining how this policy mitigates AI risks.

Hence, the policy takes a risk-based approach to prioritize resources for AI accountability. This approach largely ignores AI applications that are low risk or appropriately managed by other policies, and focuses on AI applications that could meaningfully impact people’s safety or rights. For example, to use AI in electrical grids or autonomous vehicles, it needs to have an impact assessment, real-world testing, independent evaluation, ongoing monitoring, and appropriate public notice and human override. And, to use AI to filter resumes and approve loans, it needs to include the aforementioned protections for safety, mitigate against bias, incorporate public input, conduct ongoing monitoring, and provide reasonable opt-outs. These protections are based on common sense: AI that’s integral to domains like critical infrastructure, public safety, and government benefits should be tested, monitored, and include human overrides. The specifics of these protections are aligned with years of rigorous research and incorporate public comment so that the interventions are more likely to be both effective and feasible.

The policy applies a similar approach to AI innovation. It calls for agencies to create AI strategies with a focus on prioritizing top AI use cases, reducing barriers to AI adoption, setting goals around AI maturity, and building the capacity needed to harness AI in the long run. This, paired with actions in the AI Executive Order that surge AI talent to high-priority locations across the federal government, sets agencies up to better deploy AI where it can be most impactful.

These rules are also coupled with oversight and transparency. Agencies are required to appoint senior Chief AI Officers who oversee both the accountability and innovation mandates in the policy, and agencies also have to publish their plans to comply with these rules and stop using AI that doesn’t. In general, federal agencies also have to report their AI applications in annual AI use case inventories, and provide additional information about how they are managing risks from safety- and rights-impacting AI. The Office of Management and Budget (OMB) will oversee compliance, and that office is required to have sufficient visibility into any exemptions sought by agencies to the AI risk mitigation practices outlined in the policy.

These practices are slated to be highly impactful. Federal law enforcement agencies — including immigration and border enforcement — should now have many of their uses of facial recognition and predictive analytics subject to strong risk mitigation practices. Millions of people work for the U.S. Government, and now these federal workers will have the protections outlined in this policy if their employers try to surveil and manage their movements and behaviors via AI. And, when federal agencies try to use AI to identify fraud in programs such as food stamps and financial aid, those agencies will now have to make sure that the AI actually works and doesn’t discriminate.

These rules also apply regardless of whether a federal agency builds the AI themselves or purchases it from a vendor. That will have a large market-shaping impact, as the U.S. government is the largest purchaser of goods and services in the world, and agencies will now be incentivized to only purchase AI services that comply with the policy. The policy further directs agencies to share their AI code, models, and data — promoting open-source approaches that are vital for the AI ecosystem broadly. Additionally, when procuring AI services, the policy recommends that agencies promote market competition and interoperability among AI vendors, and avoid self-preferential treatment and vendor lock-in. This all helps advance good government, making sure taxpayer dollars are spent on safe and effective AI solutions, not on risky and over-hyped snake oil from contractors.

Now, federal agencies will work to comply with this policy in the coming months. They will also develop follow-up guidance to support the implementation of this policy, advance better procurement of AI, and govern the use of AI in national security applications. The hard work is not over; there are still outstanding questions to tackle as part of this future work, such as figuring out how to embed open source requirements more explicitly as part of the AI procurement process, helping to reduce agencies’ dependencies on specific AI vendors.

Amidst a flurry of government activity on AI, it’s worth stepping back and reflecting: today is a big day for AI policy. The U.S. government is leading by example with its own rules for AI, and Mozilla stands ready to help make the implementation of this policy a success.

The post How the U.S. Government is leading by example on artificial intelligence appeared first on Open Policy & Advocacy.

SeaMonkeySeaMonkey 2.53.18.2 is out!

Hi everyone!

The SeaMonkey Project team is pleased to announce the immediate release of SeaMonkey 2.53.18.2, which is a security release. Please check out [1] and/or [2].

Please note that the updates are forthcoming.

:ewong

[1] – https://www.seamonkey-project.org/releases/seamonkey2.53.18.2

[2] – https://www.seamonkey-project.org/releases/2.53.18.2


The Mozilla BlogReadouts from the Columbia Convening on Openness and AI

On February 29, Mozilla and the Columbia Institute of Global Politics brought together over 40 leading scholars and practitioners working on openness and AI. These individuals — spanning prominent open source AI startups and companies, non-profit AI labs, and civil society organizations — focused on exploring what “open” should mean in the AI era. We previously wrote about the convening, why it was important, and who we brought together.

Today, we are publishing two readouts from the convening. 

The first is a technical memorandum that outlines three different approaches to openness in AI, and highlights different components and spectrums of openness. It includes an extensive appendix that outlines key components in the AI stack, and describes how more openness in each component can help advance system and societal goals. Finally, it outlines open questions that would be worthy of future exploration, digging deeper into the specifics of openness and AI. This memorandum will be helpful for technical leaders and practitioners who are shaping the future of AI, so that they can better incorporate principles of openness to make their own AI systems more effective for their goals and more beneficial for society. 

The second is a policy memorandum that outlines how and why policymakers should support openness in AI. It outlines the societal benefits from openness in AI, provides a higher-level overview of how different parts of the AI stack contribute to different opportunities and risks, and lays out a series of recommendations about how policymakers can advance openness in AI. This memorandum will be helpful for policymakers, especially those who are grappling with the details of policy interventions related to openness in AI.

In the coming weeks, we will also be publishing a longer document that goes into greater detail about the dimensions of openness in AI. This will help advance our broader work with partners and allies to tackle complex and important topics around openness, competition, and accountability in AI. We will continue to keep mozilla.org/research/cc updated with materials stemming from the Columbia Convening on Openness and AI.

The post Readouts from the Columbia Convening on Openness and AI appeared first on The Mozilla Blog.

Open Policy & AdvocacyPathways to a fairer digital world: shaping EU rules to increase consumer protection and choice online

In the evolving digital landscape, where every click, swipe, and interaction shapes people’s daily lives, the need for robust consumer protection has never been greater. The spread of deceptive design practices, aggressive personalization, and fake reviews can limit or distort choices online and harm people, particularly the most vulnerable, by tricking them into taking actions that are not in their best interest, causing financial loss and undermining their privacy, security, and well-being.

At Mozilla, we are committed to building a healthy Internet – an Internet that respects fundamental rights and constitutes a space where individuals can genuinely exercise their choices. Principles 4 and 5 of our Manifesto state that individuals must have the ability to shape the internet and their own experiences on it, while their security and privacy are fundamental and must not be treated as optional. In today’s interconnected world, these principles are increasingly at stake.

Voluntary commitments by industry are not sufficient, and legislation can play a crucial role in regulating such practices. Recent years have seen the EU act as a pioneer when it comes to online platform regulation. Updating existing EU consumer protection rules and ensuring strong and coherent enforcement of existing legislation will build on this framework to further protect EU citizens in the digital age.

Below, we summarise our recommendations to EU policymakers ahead of the next European Commission mandate 2024-2029 to build a fairer digital world for users and consumers:

  • Addressing harmful design practices – Harmful design practices in digital experiences – such as those that coerce, manipulate, or deceive consumers – are increasingly compromising user autonomy and reducing choice. They not only appear at the interface level but also lie deeper in the system’s architecture. We advocate for a clear shift towards ethical digital design through stronger regulation, particularly as technology evolves. This would include stronger enforcement of existing regulations addressing harmful design practices (e.g., GDPR, DSA, DMA). At the same time, the EU should update its consumer protection rules to prohibit milder ‘dark patterns’ and introduce an anti-circumvention clause to ensure that design techniques cannot be used to bypass legal requirements.
  • Balancing personalization & privacy online – Personalization in digital services enhances user interaction but poses significant privacy risks and potential biases, leading to the exposure of sensitive information and societal inequalities. To address these issues, we recommend rules that ensure consumer choices expressed through consent processes are actually enforced. Such rules should also incentivise the use and uptake of privacy-enhancing technologies through legislation (e.g. the Consumer Rights Directive) to strike the right balance between personalization practices and respect for privacy online.
  • Tackling fake reviews – The growing problem of fake reviews on online platforms has the potential to mislead consumers and distort product value. We recommend stronger enforcement of existing rules, meaningful transparency measures, including explicit disclosure requirements for incentivized reviews, increased accountability for consumer-facing online platforms, and consistency across the EU and internationally in review handling to ensure the integrity and trustworthiness of online reviews.
  • Rethinking the ‘average consumer’ – The traditional definition of the ‘average consumer’ in EU consumer law is characterised as “reasonably well informed, observant, and circumspect”. The digital age directly challenges this definition as consumers are increasingly more vulnerable online. Due to the ever-growing information asymmetry between traders and consumers, the yardstick of an ‘average consumer’ does not necessarily reflect existing consumer behaviour. For that reason, we ask for the reevaluation of this concept to reflect today’s reality. Such an update will actively lower the existing threshold and thus increase the overall level of protection and prevent the exploitation of vulnerable groups, especially in personalised commercial practices.

To read our detailed position, click here.

The post Pathways to a fairer digital world: shaping EU rules to increase consumer protection and choice online appeared first on Open Policy & Advocacy.

SUMO BlogIntroducing Konstantina

Hi folks,

I’m super excited to share that Konstantina is joining the Customer Experience team to help with the community in SUMO. Some of you may already know Konstantina because she’s been around Mozilla for quite a while. She’s transitioning internally from the Community Programs team under Marketing to the Customer Experience team under Strategy and Operations.

Here’s a bit more about Konstantina in her own words:

Hi everyone, my name is Konstantina and I am very happy I am joining your team! I have been involved with Mozilla since 2011, initially as a volunteer and then as a contractor (since late 2012). During my time here, I have had a lot of roles, from events organizer, community manager to program manager, from working with MDN, Support, Foxfooding, Firefox and many more. I am passionate about communities and how we bring their voices to create great products and I am joining your team to work with Kiki on creating a great community experience. I live in Berlin, Germany with my partner and our cat but I am originally from Athens, Greece. Fun fact about me, I studied geology and I used to do a lot of caving, so I know a lot about ropes and rappelling (though I am a bit rusty now). I also love building legos as you will soon see from my office background. Can’t wait to get to know you all more

Please join me to welcome Konstantina (back) to SUMO!

The Mozilla Blog6 takeaways from The Washington Post Futurist Tech Summit in D.C.

Journalists from The Washington Post, U.S. policymakers and influential business leaders gathered for a day of engaging discussions about technology on March 21 in the nation’s capital.

Mozilla sponsored “The Futurist Summit: The New Age of Tech,” an event focused on addressing the wide range of promise and risks associated with emerging technologies — chief among them Artificial Intelligence (AI). It featured interviews moderated by journalists from The Post, as well as interactive tech sessions for audience members in attendance at the paper’s offices in Washington, D.C.

Missed the event? Here are six takeaways from it that you should know about:

1. How OpenAI is preparing for the election.

The 2024 U.S. presidential election is one of the biggest topics of discussion involving the emergence and dangers of AI this year. It’s no secret that AI has incredible power to create misinformation and fake media content (video, photos, audio) that can unfairly influence and manipulate voters.

OpenAI, one of the biggest AI organizations, stressed the importance of providing transparency for its users to ensure its tools aren’t used in those negative ways to mislead the public.

“It’s four billion people voting, and that is really unprecedented, and we’re very, very cognizant of that,” OpenAI VP of Global Affairs Anna Makanju said. “And obviously, it’s one of the things that we work — to ensure that our tools are not used to deceive people and to mislead people.”

Makanju reiterated that AI concerns around the election exist at a very large scale, and that OpenAI is focused on engaging with other companies to shore up transparency in the 2024 race.

“This is like a whole of society issue,” Makanju said. “So that’s why we have engaged with other companies in this space as well. As you may have seen in the Munich Security Conference, we announced the Tech Accord, where we’re going to collaborate with social media companies and other companies that generate AI content, because there’s the issue of generation of AI content and the issue of distribution, and they’re quite different. So, for us, we really focus on things like transparency. … We of course have lots of teams investigating abuse of our systems or circumvention of the use case guidelines that are intended to prevent this kind of work. So, there are many teams at OpenAI working to ensure that these tools aren’t used for election interference.”

And OpenAI will be in the spotlight even more as the election inches closer. According to a report from Business Insider, OpenAI is preparing to launch GPT-5 this summer, which will reportedly eclipse the abilities of the ChatGPT chatbot.

The futurist summit focused on the wide range of promise and risks associated with emerging technologies.

2. Policymakers address the potential TikTok ban.

The House overwhelmingly voted 352-65 on March 13 to pass a measure that gives ByteDance, the parent company of TikTok, a choice: sell the social media platform or face a nationwide ban on all U.S. devices.

One of the top lawmakers on the Senate Intelligence Committee, Sen. Mark Warner (D-Va.), addressed the national security concerns around TikTok on a panel moderated by political reporter Leigh Ann Caldwell alongside Sen. Todd Young (R-Ind.).

“There is something uniquely challenging about TikTok because ultimately if this information is turned over to the Chinese espionage services that could be then potentially used for nefarious purposes, that’s not a good thing for America’s long-term national security interests,” Warner said. “End of the day, all we want is it could be an American company, it could be a British company, it could be a Brazilian company. It just needs not to be from one of the nation states, China being one of the four, that are actually named in American law as adversarial nations.”

Young chimed in shortly after Warner: “Though I have not authored a bill on this particular topic, I’ve been deeply involved, for several years running now, in this effort to harden ourselves against a country, China, that has weaponized our economic interdependence in various ways.”

The measure now heads to the Senate, which is not scheduled to vote on it soon.

3. Deep Media AI is fighting against fake media content.

AI to fight against AI? Yes, it’s possible!

AI being able to alter how we perceive reality through deepfakes — in other words, synthetic media — is another danger of the emerging technology. Deep Media AI founder Rijul Gupta is countering that AI problem with AI of his own.

In a video demonstration alongside tech columnist Geoffrey Fowler, Gupta showcased how Deep Media AI scans and detects deepfakes in photos, videos and audio files to combat the issue.

For example, Deep Media AI can determine if a photo is fake by looking at wrinkles, reflections and things humans typically don’t pay attention to. In the audio space, which Gupta described as “uniquely dangerous,” the technology analyzes the waves and patterns. It can detect video deepfakes by tracking motion of the face — how it moves, the shape and movement of lips — and changes in lighting.

A good sign: At the start of Gupta’s presentation, audience members were asked to identify the deepfake between two video clips (one real, one AI-generated by OpenAI). The majority of people in attendance guessed correctly. Even better: Deep Media AI also detected the fake, scoring a perfect 100/100 in its detection system.

“Generative AI is going to be awesome; it’s going to make us all rich; it’s going to be great,” Gupta said. “But in order for that to happen, we need to make it safe. We’re part of that, but we need militaries and governments. We need buy-in from the generative AI companies. We need buy-in from the tech ecosystem. We need detectors. And we need journalists to tell us what’s real, and what’s fake from a trusted source, right? I think it’s possible. We’re here to help, but we’re not the only ones here. We’re hoping to provide solutions that people use.”

VP of Global Policy at Mozilla, Linda Griffin, interviewed by The Washington Post’s Kathleen Koch.

4. Mozilla’s push for trustworthy AI

As we continue to shift towards a world with AI that’s helpful, it’s important we involve human beings in that process as much as possible. It’s concerning if companies are making AI while only thinking about profit and not the public. That hurts public trust and faith in big tech.

This work is urgent, and Mozilla has been delivering on it with our trustworthy AI report — first published in 2020 and updated this February with a status report — to aid in aligning with our vision of creating a healthy internet where openness, competition and accountability are the norms.

“We want to know what you think,” Mozilla VP of Global Policy Linda Griffin said. “We’re trying to map and guide where we think these conversations are. What is the point of AI unless more people can benefit from it more broadly? What is the point of this technology if it’s just in the hands of the handful of companies thinking about their bottom line?

“They do important and really interesting things with the technology; that’s great. But we need more; we need the public counterpoint. So, for us, trustworthy AI, it’s about accountability, transparency, and having humans in the loop thinking about people wanting to use these products and feeling safe and understanding that they have recourse if something goes wrong.”

5. AI’s ability to change rules in the NFL (yes!).

While the NFL is early in the process of incorporating AI into the game of football, the league has found ways to get the ball rolling (pun intended) on using its tools to make the game smarter and better.

One area is health and safety, a major priority for the NFL. The league uses AI and machine learning tools on the field to generate predictive analysis identifying the plays and body positions most likely to lead to player injuries. The league can then adjust rules and strategies accordingly.

For example, kickoffs. Concussions sustained on kickoffs dropped by 60 percent in the NFL last season, from 20 to eight. That is because kickoffs were returned less frequently after the league adjusted the rules governing kickoff returns during the previous offseason, so that a returner could signal for a fair catch no matter where the ball was kicked, and the ball would be placed on the 25-yard line. This change came after the NFL used AI tools to gather injury data on those plays.

“The insight to change that rule had come from a lot of the data we had collected with chips on the shoulder pads of our players of capturing data, using machine learning, and trying to figure out what is the safest way to play the game,” Brian Rolapp, Chief Media & Business Officer for the NFL, told media reporter Ben Strauss, “which led to an impact of rule change.”

While kickoff injuries have gone down, making this tweak to one of the most exciting plays in football is tough. So this year, the NFL is working on a compromise and exploring new ideas that can strike a balance between safety and excitement. Coaches, general managers and ownership will vote on it at league meetings this week.

6. Don’t forget about tech for accessibility.

With the new chapter of AI, the possibilities for investing in and creating tools for people with disabilities are endless. For those who are blind, have low vision or have trouble hearing, AI offers an entirely new slate of capabilities.

Apple has been one of the companies at the forefront of creating features for people with disabilities who use its products. For example, on iPhones, Apple has implemented live captions, sound recognition and voice control to assist users.

Sarah Herrlinger, Senior Director of Global Accessibility Policy & Initiatives at Apple, gave insight into how the tech giant decides what features to add and which ones to update. In doing so, she delivered one of the best talking points of the day.

“I think the key to that is really engagement with the communities,” Herrlinger said. “We believe very strongly in the disability mantra of, nothing about us without us, and so it starts with first off employing members of these communities within our ranks. We never build for a community. We build with them.”

Herrlinger was joined on stage by retired Judge David S. Tatel; Mike Buckley, the Chair & CEO of Be My Eyes; and disability reporter for The Post Amanda Morris. When asked about the future of accessibility for those who are blind, Tatel shared a touching sentiment many in the disability space resonate with.

“It’s anything that improves and enhances my independence, and enhances it seamlessly, that’s what I look for,” Tatel said. “That’s it. Independence, independence, independence.”


The post 6 takeaways from The Washington Post Futurist Tech Summit in D.C. appeared first on The Mozilla Blog.

The Mozilla Thunderbird BlogMarch 2024 Community Office Hours: Open Forum and FAQ

This month’s topics for our Thunderbird Community Office Hours will be decided by you! We’d like to invite the community to bring their questions, comments, and general conversation to Team Thunderbird for an informal and informational chat. As always, send your questions in advance to officehours@thunderbird.net!

Be sure to note the change in day of the week and time, especially if you’re in Europe and not on summer time yet!

March Office Hours: Open Forum and FAQ

While we love having community office hours with specific topics, from our design process to Add-ons, we want to make time for an open forum, where you bring the topics of discussion. Do you have a great idea for a feature request, or need help filing a bug? Or do you want to know how to use SUMO better, or get some Thunderbird tips? Maybe you want to know more about Team Thunderbird, from how we got started in open source to how we like our coffee. This is the time to ask these questions and more!

We also just got back from SCaLE21x, and we had so many great questions from people who stopped by the booth. So in addition to answering your questions, whether emailed or live, we’d like to tackle some of the things people asked most during our first SCaLE appearance.

Catch Up On Last Month’s Thunderbird Community Office Hours

While you’re thinking of questions to ask, watch last month’s office hours with John Bieling all about Add-on development. We had a fantastic chat about the history, present state, and future of Add-ons, with advice on getting involved in development and support. Watch the video below and read more about our guest at last month’s blog post.

Join The Video Chat

Date and Time: Wednesday, March 27 at 17:00 UTC

Direct URL to Join: https://mozilla.zoom.us/j/95272980798

Meeting ID: 95272980798

Password: 439169

Dial by your location:

  • +1 646 518 9805 US (New York)
  • +1 669 219 2599 US (San Jose)
  • +1 647 558 0588 Canada
  • +33 1 7095 0103 France
  • +49 69 7104 9922 Germany
  • +44 330 088 5830 United Kingdom
  • Find your local number: https://mozilla.zoom.us/u/adkUNXc0FO

The post March 2024 Community Office Hours: Open Forum and FAQ appeared first on The Thunderbird Blog.

Open Policy & AdvocacyMozilla, Center for Democracy and Technology call for openness and transparency in AI

Update | 27 March 2024: Mozilla has submitted its comments to the NTIA consultation on openness in AI models originally referenced in this blog post. Drawing on Mozilla’s own history as part of the open source movement, the submission seeks to help guide difficult conversations about openness in AI. First, we shine a light on the different dimensions of openness in AI, including on different components across the AI stack and development lifecycle. Second, we argue that openness in AI can spur competition and help diffuse innovation and its benefits more broadly across the economy and society as a whole; that it can advance open science and progress in the entire field of AI; and that it advances accountability and safety by enabling more research and supporting independent scrutiny as well as regulatory oversight. Openness has been a key tenet of U.S. leadership in technology, both historically and in view of recent progress in AI — but ill-conceived policy interventions could jeopardize that leadership. We also recently published the technical and policy readouts from the Columbia Convening on Openness and AI to serve as a resource to the community, both for this consultation and beyond.


Civil society and academics are joining together to defend AI openness and transparency. Mozilla and the Center for Democracy & Technology (CDT), along with members of civil society and academia, have united to underscore the importance of openness and transparency in AI. Nearly 50 signatories sent a letter to Secretary Gina Raimondo in response to the U.S. Commerce Department’s request for comment on openness in AI models.

“We are excited to collaborate with expert individuals and organizations who are committed to seeing more transparent AI innovation,” said Jenn Taylor Hodges, Director of US Public Policy & Government Relations at Mozilla. “Open models in AI will promote trustworthiness and accountability that will better serve society. Mozilla has a long history of promoting open source and fighting corporate consolidation on the Internet. We are bringing those values and experiences to the AI era, making sure that everyone has a say in shaping the future of AI.”

There has been a noticeable shift in the AI landscape toward closed systems, a trend that Mozilla has diligently worked to counter. As detailed in the recently released Accelerating Progress Toward Trustworthy AI report, prominent AI entities are adopting closed systems, prioritizing proprietary control over collaborative openness. These companies have advocated for increased opacity, citing fears of misuse. However, beneath these arguments lies a clear agenda to stifle competition and limit oversight in the AI market.

The joint letter was sent in advance of the Department of Commerce’s comment deadline on AI models, which closes March 27. Endorsed by science policy think tanks, advocates against housing discrimination, and computer science luminaries, it argued:

  • Open models have significant benefits to society: They help advance innovation, competition, research, civil and human rights protections, and safety and security.
  • Policy should look at marginal risks of open models compared to closed models: Commerce should look to recent Stanford and Princeton research, which emphasizes limited evidence that open models create new risks not present in closed models.
  • Policy should focus more on AI applications, not models: Where openness makes AI risks worse, policy interventions are more likely to succeed in going after how the AI system is deployed, not by restricting the sharing of information about AI models.
  • Policy should proactively advance openness: Policy on this topic must be developed and vetted by more than just national security agencies, and should promote more R&D into open approaches for AI and better standards for testing and releasing open models.

“The range of participants in this effort – from civil liberties to civil rights organizations, from progressive groups to more market-oriented groups, with advocates for openness in both government and industry, and a broad range of academic experts from law, policy, and computer science – demonstrates how the future of open innovation around powerful AI models is critically important to a wide variety of communities,” said Kevin Bankston, Senior Advisor on AI Governance for CDT. “As our letter highlights, the benefits of open models over closed models for competition, innovation, security and transparency are rather clear, while the risks compared to closed models aren’t. Therefore the White House and Congress should exercise great caution when considering whether and how to regulate the publication of open models.”

Mozilla’s upcoming longer submission to the Commerce Department’s request for comment will go into greater detail, including by expanding on Mozilla’s long history of increasing privacy, security, and functionality across the internet through its products, investments, and advocacy. It highlights key findings from the recent Columbia Convening on Openness and AI, and explains how openness is vital to innovation, competition, and accountability – including safety and security, as well as protecting rights and freedoms. It also takes on some of the most prominent arguments driving the push to limit access to AI models, such as claims of “unknown unknown” security risks.

The joint letter and Mozilla’s upcoming response to the call for comments demonstrate how openness can be an enabler of a better future – one where everyone can help build, shape, and test AI so that it works for everyone. That is the future we need, and it’s the one we must keep working toward through policy, technology, and advocacy alike.

The post Mozilla, Center for Democracy and Technology call for openness and transparency in AI appeared first on Open Policy & Advocacy.

The Mozilla BlogHow AI is unfairly targeting and discriminating against Black people

The rise of Artificial Intelligence (AI) is here, bringing a new era of technology that is already reshaping the world. It was the story of 2023, and the attention on it isn’t going anywhere anytime soon.

While the rapid creative growth of AI is a fascinating development for our society, it’s important to remember the harms that cannot be ignored, especially those pertaining to racial bias and discrimination against African-Americans.

In recent years, there has been research revealing that AI technologies have struggled to identify images and speech patterns of nonwhite people. Black AI researchers at tech giants creating AI technology have raised concerns about its harms against the Black community. 

The concerns surrounding AI’s racial biases and harms against Black people are serious and should be a big focus as 2024 gets underway. We invited University of Michigan professor, Harvard Faculty Associate and former Mozilla Foundation Senior Fellow in Trustworthy AI, Apryl Williams, to dive into this topic further. Williams studies experiences of gender and race at the intersection of digital spaces and algorithmic technocultures, and her most recent book, “Not My Type: Automating Sexual Racism in Online Dating,” exposes how race-based discrimination is a fundamental part of the most popular and influential dating algorithms.

To start, as a professor, I’m curious to know: how aware do you think students are of the dangers of the technology they’re using? Not just the simple things like the screen time notifications they might get, but bigger issues like AI problems, misinformation, etc.?

They don’t know. I show two key documentaries in my classes every semester. I teach a class called “Critical Perspectives on the Internet.” And then I have another class that’s called “Critical AI” and in both of those classes, the students are always shook. They always tell me, “You ruin everything for me, I can never look at the world the same,” which is great. That’s my goal. I hope that they don’t look at the world the same when they leave my classes, of course. But I show them  “Coded Bias” by Shalini Kantayya and when they watched that just this past semester they were like, “I can’t believe this is legal, like, how are they using facial recognition everywhere? How are they able to do these things on our phones? How do they do this? How do they do that? I can’t believe this is legal. And why don’t people talk about it?” And I’m like, “Well, people do talk about it. You guys just aren’t necessarily keyed into the places where people are talking about.” And I think that’s one of the feelings of sort of like these movements that we’re trying to build is that we’re not necessarily tapped into the kinds of places young people go to get information.

We often assume that AI machines are neutral in terms of race, but research has shown that some of them are not and can have biases against Black people. When we think about where this problem stems from, is it fair to say it begins with the tech industry’s lack of representation of people who understand and can work to address the potential harms of these technologies?

I would say, yes, that is a huge part of it. But the actual starting point is the norms of the tech industry. So we know that the tech industry was created by and large by the military-industrial complex — like the internet was a military device. And so because of that, a lot of the inequity or like inequality, social injustice of the time that the internet was created were baked into the structure of the internet. And then, of course, industries that spring up from the internet, right? We know that the military was using the internet for surveillance. And look now, we have in 2024, widespread surveillance of Black communities, of marginalized communities, of undocumented communities, right? So really, it’s the infrastructure of the internet that was built to support white supremacy, I would say, is the starting point. And because the infrastructure of the internet and of the tech industry was born from white supremacy, then, yes, we have these hiring practices, and not just the hiring practices, but hiring practices where, largely, they are just hiring the same kinds of people — cisgender, hetero white men. Increasingly white women, but still we’re not seeing the kinds of diversity that we should be seeing if we’re going to reach demographic parity. So we have the hiring. But then also, we have just the norms of the tech industry itself that are really built to service, I would say, the status quo, they’re not built to disrupt. They’re built to continue the norm. And if people don’t stop and think about that, then, yeah, we’re going to see the replication of all this bias because U.S. society was built on bias, right? Like it is a stratified society inherently. And because of that, we’re always going to see that stratification in the tech industry as well.

Issues of bias in AI tend to impact the people who are rarely in positions to develop the technology. How do you think we can enable AI communities to engage in the development and governance of AI to get it where it’s working toward creating systems that embrace the full spectrum of inclusion?

Yes, we should enable it. But also the tech industry, like people in these companies, need to sort of take the onus on themselves to reach out to communities in which they are going to deploy their technology, right? So if your target audience, let’s say on TikTok, is Black content creators, you need to be reaching out to Black content creators and Black communities before you launch an algorithm that targets those people. You should be having them at your headquarters. You should be doing listening sessions. You should be elevating Black voices. You should be listening to people, right? Listening to the concerns, having support teams in place, before you launch the technology, right? So instead of retroactively trying to Band-aid it when you have an oops or like a bad PR moment, you should be looking to marginalized communities as experts on what they need and how they see technology being implemented in their lives.

A lot of the issues with these technologies in relation to Black people is that they are not designed for Black people — and even the people they are designed for run into problems. It feels like this is a difficult spot for everyone involved?

Yeah, that’s an interesting question. I feel like it’s really hard for good people on the inside of tech companies to actually say, “Hey, this thing that we’re building might be generating money, but it’s not generating long-term longevity,” right? Or health for our users. And I get that — not every tech company is health oriented. They may act like they are, but they’re not, like to a lot of them, money is their bottom line. I really think it’s up to sort of like movement builders and tech industry shakers to say or to be able to create buy-in for programs, algorithms, ideas, that foster equity. But we have to be able to create buy-in for that. So that might look like, “Hey, maybe we might lose some users on this front end when we implement this new idea, but we’re going to gain a whole lot more users.” Folks of color, marginalized users, queer users, trans users, if they feel like they can trust us, and that’s worth the investment, right? So it’s really just valuing the whole person, rather than just sort of valuing the face value of the money only or what they think it is, but looking to see the potential of what would happen if people felt like their technology was actually trustworthy.

AI is rapidly growing. What are things we can add to it as it evolves, and what are things we should work to eliminate? 

I would say we need to expand our definition of safety. I think that safety should fundamentally include your mental health and well-being, and if the company that you’re using it for to find intimacy or to connect with friends is not actually keeping you safe as a person of color, as a trans person, as a queer person, then you can’t really have like full mental wellness if you are constantly on high alert, you’re constantly in this anxious position, you’re having to worry that your technology is exploiting you, right? So, if we’re going to have all of this buzz that I’m seeing about trust and safety, that can’t just stop at the current discourse that we’re having on trust and safety. It can’t just be about protecting privacy, protecting data, protecting white people’s privacy. That has to include reporting mechanisms for users of color when they encounter abuse. Whether that is racism or homophobia, right? Like it needs to be more inclusive. I would say that the way that we think about trust and safety and automated or  algorithmic systems needs to be more inclusive. We really need to widen the definition of safety. And probably the definition of trust also. 

In terms of subtracting, there are just a lot of things that we shouldn’t be doing, that we’re currently doing. Honestly, the thing that we need to subtract the most is this idea that we move fast and break things in tech culture. It’s sort of like, we are just moving for the sake of innovation. We might really need to dial back on this idea of moving for the sake of innovation, and actually think about moving towards a safer humanity for everybody, and designing with that goal in mind. We can innovate in a safe way. We might have to sacrifice speed, and I think we need to say, it’s okay to sacrifice speed in some cases.

When I started to think about the dangers of AI, I immediately remembered the situation with Robert Williams a few years ago, when he was wrongly accused by police that used AI facial recognition. There is more to it than just the strange memes and voice videos people create. What are the serious real world harms that you think of when it comes to Black people and AI that people are overlooking?

I don’t know that it’s overlooked, but I don’t think that Black people are aware of the amount of surveillance of everyday technologies. When you go to the airport, even if you’re not using Clear or other facial recognition technology at the airport for expedited security, they’re still using facial recognition technology. When you’re crossing borders, when you are even flying domestically, they’re still using that tech to look at your face. You look into the camera, they take your picture. They compare it to your ID. Like, that is facial recognition technology. I understand that that is for our national safety, but that also means that they’re collecting a lot of data on us. We don’t know what happens with that data. We don’t know if they keep it for 24 hours or if they keep it for 24 years. Are they keeping logs of what your face looks like every time you go? In 50 years, are we going to see a system that’s like “We’ve got these TSA files, and we’re able to track your aging from the time that you were 18 to the time that you’re 50, just based on your TSA data,” right? Like, we really don’t know what’s happening with the data. And that’s just one example. 

We have constant surveillance, especially in our cars. The smarter our cars get, the more they’re surveilling us. We are seeing increasing use of those systems in cars, and in police cases, to see if you were paying attention. Were you talking on your phone? Were you texting and driving? Things like that. There is automation in cars that’s designed to identify people and to stop right away to avoid hitting you. And as we know, a lot of the systems misidentify Black people as trash cans, and will instead hit them. There are so many instances where AI is part of our life, and I don’t think people realize the depth to which it really does drive our lives. And I think that’s the thing that scares me the most for people of color is that we don’t understand just how much AI is part of our everyday life. And I wish people would stop and sort of think about, yes, I get easy access to this thing, but what am I trading off to get that easy access? What does that mean for me? And what does that mean for my community? We have places like Project Blue Light, Project Green Light, where those systems are heavily surveilled in order to “protect communities.” But are those created to protect white communities at the expense of Black and brown communities? Right? That’s what we have to think about when we say that these technologies, especially surveillance technologies, are being used to protect people, who are they protecting? And who are they protecting people from? And is that idea that they’re protecting people from a certain group of people realistic? Or is that grounded in some cultural bias that we have.

Looking bigger picture this year: It’s an election year and AI will certainly be a large talking point for candidates. Regardless of who wins this fall, in what ways do you think the administration can ensure that policies and enforcement are instilled to address AI to make sure that racial and other inequities don’t continue and evolve?

They need to enforce or encourage that tech companies have the onus of transparency on them. There needs to be some kind of legislative prompting, there has to be some kind of responsibility where tech companies actually suffer consequences, legal consequences, economic consequences, when they violate trust with the public, when they extract data without telling people. There also needs to be more two-way conversations. Often tech companies will just tell you, “These are the terms of service, you have to agree with them,” and if you don’t, you opt-out, that means you can’t use the tech. There needs to be some kind of system where tech companies can say, “Okay, we’re thinking about rolling this out or updating our terms of service in this way, how does the community feel about that?” And a way that really they can be accountable to their users. I think we really just need some legislation that makes tech companies sort of put their feet to the fire in terms of them actually having responsibility to their users.

When it comes to fighting against racial biases and struggles, sometimes the most important people that can help create change and bring awareness are those not directly impacted by what’s going on — for example, a white person being an ally and protesting for a Black person. What do you think most normal people can do to influence change and bring awareness to AI challenges for Black people?

I would say, for those people who are in the know about what tech companies are doing, talk about that with your kids, right? When you’re sitting down and your kids are telling you about something that their friend posted, that’s a perfect time to be like, “Let’s talk about that technology that your friend is using or that you’re using.” Did you know that on TikTok, this happens? Did you know that on TikTok, often Black creator voices are hidden, or Black content creators are shadow-banned? Did you know what happens on Instagram? These kinds of regular conversations, that way, these kinds of tech injustices are part of the everyday vernacular for kids as they’re coming up so that they can be more aware, and also so that they can advocate for themselves and for their communities.


The post How AI is unfairly targeting and discriminating against Black people appeared first on The Mozilla Blog.

Mozilla Add-ons BlogManifest V3 & Manifest V2 (March 2024 update)

Calling all extension developers! With Manifest V3 picking up steam again, we wanted to provide some visibility into our current plans, as a lot has happened since we published our last update.

Back in 2022 we released our initial implementation of MV3, the latest version of the extensions platform, in Firefox. Since then, we have been hard at work collaborating with other browser vendors and community members in the W3C WebExtensions Community Group (WECG). Our shared goals were to improve extension APIs while addressing cross browser compatibility. That collaboration has yielded some great results to date and we’re proud to say our participation has been instrumental in shaping and designing those APIs to ensure broader applicability across browsers.

We continue to support DOM-based background scripts in the form of Event pages, and the blocking webRequest feature, as explained in our previous blog post. Chrome’s version of MV3 requires service worker-based background scripts, which we do not support yet. However, an extension can specify both and have it work in Chrome 121+ and Firefox 121+. Support for Event pages, along with support for blocking webRequest, is a divergence from Chrome that enables use cases that are not covered by Chrome’s MV3 implementation.
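To illustrate that cross-browser pattern, here’s a minimal manifest sketch (the file name is a placeholder, not something prescribed by this post). Firefox 121+ runs the event page declared under background.scripts, while Chrome 121+ uses background.service_worker and ignores the key it doesn’t support:

```json
{
  "manifest_version": 3,
  "name": "Cross-browser MV3 sketch",
  "version": "1.0",
  "background": {
    "scripts": ["background.js"],
    "service_worker": "background.js"
  }
}
```

Pointing both keys at the same file keeps behavior consistent across browsers, as long as the script sticks to APIs available in both background models.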

Well, what’s happening with MV2, you ask? Great question – in case you missed it, Google announced late last year its plans to resume its MV2 deprecation schedule. Firefox, however, has no plans to deprecate MV2 and will continue to support MV2 extensions for the foreseeable future. And even if we re-evaluate this decision at some point down the road, we anticipate providing a notice of at least 12 months for developers to adjust accordingly and not feel rushed.

As our plans solidify, future updates around our MV3 efforts will be shared via this blog. We are loosely targeting our next update for after the conclusion of the upcoming WECG meeting at the Apple offices in San Diego. For more information on adopting MV3, please refer to our migration guide. Another great resource worth checking out is the recent FOSDEM presentation a couple of team members delivered, Firefox, Android, and Cross-browser WebExtensions in 2024.

If you have questions, concerns or feedback on Manifest V3, we would love to hear from you in the comments section below or, if you prefer, drop us an email.

The post Manifest V3 & Manifest V2 (March 2024 update) appeared first on Mozilla Add-ons Community Blog.

Open Policy & AdvocacyMozilla joins allies to co-sign an amicus brief in State of Nevada vs. Meta Platforms defending end-to-end encryption

Mozilla recently signed onto an amicus brief – alongside the Electronic Frontier Foundation, the Internet Society, Signal, and a broad coalition of other allies – opposing the Nevada Attorney General’s recent attempt to limit encryption. The amicus brief signals a collective commitment from these organizations to the importance of encryption in safeguarding digital privacy and security as fundamental rights.

The core of this dispute is the Nevada Attorney General’s proposal to limit the application of end-to-end encryption (E2EE) for children’s online communications. It is a move that ostensibly aims to aid law enforcement but, in practice, could significantly weaken the privacy and security of all internet users, including children. Nevada argues that end-to-end encryption might impede some criminal investigations. However, as the amicus brief explains, encryption does not prevent either the sender or recipient from reporting concerning content to police, nor does it prevent police from accessing other metadata about communications via lawful requests. Blocking the rollout of end-to-end encryption would undermine privacy and security for everyone, for a marginal benefit that would be far outweighed by the harms such a draconian limitation could create.

The case, set for a hearing in Clark County, Nevada, encapsulates a broader debate on the balance between enabling law enforcement to combat online crimes and preserving robust online protections for all users – especially vulnerable populations like children. Mozilla’s involvement in this amicus brief is founded on its longstanding belief that encryption is an essential component of a core Manifesto tenet: privacy and security are fundamental online and should not be treated as optional.

The post Mozilla joins allies to co-sign an amicus brief in State of Nevada vs. Meta Platforms defending end-to-end encryption appeared first on Open Policy & Advocacy.

Open Policy & AdvocacyMozilla Joins Amicus Brief Supporting Software Interoperability

In modern technology, interoperability between programs is crucial to the usability of applications, user choice, and healthy competition. Today Mozilla has joined an amicus brief at the Ninth Circuit, to ensure that copyright law does not undermine the ability of developers to build interoperable software.

This amicus brief comes in the latest appeal in a multi-year courtroom saga between Oracle and Rimini Street. The sprawling litigation has lasted more than a decade and has already been up to the Supreme Court on a procedural question about court costs. Our amicus brief addresses a single issue: should the fact that a software program is built to be interoperable with another program be treated, on its own, as establishing copyright infringement?

We believe that most software developers would answer this question with: “Of course not!” But the district court found otherwise. The lower court concluded that even if Rimini’s software does not include any Oracle code, Rimini’s programs could be infringing derivative works simply “because they do not work with any other programs.” This is a mistake.

The classic example of a derivative work is something like a sequel to a book or movie. For example, The Empire Strikes Back is a derivative work of the original Star Wars movie. Our amicus brief explains that it makes no sense to apply this concept to software that is built to interoperate with another program. Not only that, interoperability of software promotes competition and user choice. It should be celebrated, not punished.

This case raises similar themes to another high profile software copyright case, Google v. Oracle, which considered whether it was copyright infringement to re-implement an API. Mozilla submitted an amicus brief there also, where we argued that copyright law should support interoperability. Fortunately, the Supreme Court reached the right conclusion and ruled that re-implementing an API was fair use. That ruling and other important fair use decisions would be undermined if a copyright plaintiff could use interoperability as evidence that software is an infringing derivative work.

In today’s brief Mozilla joins a broad coalition of advocates for openness and competition, including the Electronic Frontier Foundation, Creative Commons, Public Knowledge, iFixit, and the Digital Right to Repair Coalition. We hope the Ninth Circuit will fix the lower court’s mistake and hold that interoperability is not evidence of infringement.

The post Mozilla Joins Amicus Brief Supporting Software Interoperability appeared first on Open Policy & Advocacy.

hacks.mozilla.orgImproving Performance in Firefox and Across the Web with Speedometer 3

In collaboration with the other major browser engine developers, Mozilla is thrilled to announce Speedometer 3 today. Like previous versions of Speedometer, this benchmark measures what we think matters most for performance online: responsiveness. But today’s release is more open and more challenging than before, and is the best tool for driving browser performance improvements that we’ve ever seen.

This fulfills the vision set out in December 2022 to bring experts across the industry together in order to rethink how we measure browser performance, guided by a shared goal to reflect the real-world Web as much as possible. This is the first time the Speedometer benchmark, or any major browser benchmark, has been developed through a cross-industry collaboration supported by each major browser engine: Blink, Gecko, and WebKit. Working together means we can build a shared understanding of what matters to optimize, and facilitates broad review of the benchmark itself: both of which make it a stronger lever for improving the Web as a whole.

And we’re seeing results: Firefox got faster for real users in 2023 as a direct result of optimizing for Speedometer 3. This took a coordinated effort from many teams: understanding real-world websites, building new tools to drive optimizations, and making a huge number of improvements inside Gecko to make web pages run more smoothly for Firefox users. In the process, we’ve shipped hundreds of bug fixes across JS, DOM, Layout, CSS, Graphics, frontend, memory allocation, profile-guided optimization, and more.

We’re happy to see core optimizations in all the major browser engines turning into improved responsiveness for real users, and are looking forward to continuing to work together to build performance tests that improve the Web.

The post Improving Performance in Firefox and Across the Web with Speedometer 3 appeared first on Mozilla Hacks - the Web developer blog.

The Mozilla Thunderbird BlogThunderbird for Android / K-9 Mail: February 2024 Progress Report

Welcome to a new report on the progress of transforming K-9 Mail into Thunderbird for Android. I hope you’ve enjoyed the extra day in February. We certainly did and used this opportunity to release a new stable version on February 29.

If you’re new to this series or the unusually long February made you forget what happened the previous month, you might want to check out January’s progress report.

New stable release

We spent most of our time in February getting ready for a new stable release – K-9 Mail 6.800. That mostly meant fixing bugs and usability issues reported by beta testers. Thanks to everyone who tested the app and reported bugs ❤

Read all about the new release in our blog post Towards Thunderbird for Android – K-9 Mail 6.800 Simplifies Adding Email Accounts.

What’s next?

With the new account setup being mostly done, we’ll concentrate on the following two areas.

Material 3

The question of whether to update the user interface to match the design used by the latest Android version seems to have always split the K-9 Mail user base. One group prefers that we work on adding new features instead. The other group wants their email app of choice to look similar to the apps that ship with Android.

Never updating the user interface to the latest design is not really an option. At some point all third-party libraries we’re using will only support the latest platform design. Not updating those libraries is also not an option because Android itself is constantly changing and requires app/library updates just to keep existing functionality working.

I think we found a good balance by not being the first ones to update to Material 3. By now a lot of other app developers have done so and countless bugs related to Material 3 have been found and fixed. So it’s a good time for us to start switching to Android’s latest design system now.

We’re currently still in a research phase to figure out what parts of the app need changing. Once that’s done, we’ll change the base theme and fix up the app screen by screen. You will be able to follow along by becoming a beta tester and installing K-9 Mail 6.9xx beta versions once those become available.
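For readers wondering what a base-theme switch can look like in code, here’s a minimal, generic Jetpack Compose sketch of a Material 3 theme wrapper. This is purely illustrative (the AppTheme name and the default color schemes are our own), not K-9 Mail’s actual theming code:

```kotlin
import androidx.compose.foundation.isSystemInDarkTheme
import androidx.compose.material3.MaterialTheme
import androidx.compose.material3.darkColorScheme
import androidx.compose.material3.lightColorScheme
import androidx.compose.runtime.Composable

// Illustrative Material 3 base theme; a real app would define full
// color schemes and typography instead of using the library defaults.
@Composable
fun AppTheme(content: @Composable () -> Unit) {
    val colorScheme = if (isSystemInDarkTheme()) darkColorScheme() else lightColorScheme()
    MaterialTheme(colorScheme = colorScheme, content = content)
}
```

With a wrapper like this in place, screens can then be migrated to Material 3 components one at a time, which is roughly what fixing up the app screen by screen entails.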

Android 14 compatibility

K-9 Mail is affected by a couple of changes that were introduced with Android 14. We’ve started to look into which parts of the app need to be updated to be able to target Android 14.

We’ve already identified several of the changes that will be needed.

Our current plan is to include the necessary changes in updates to the K-9 Mail 6.8xx line.

Community Contributions

  • S Tanveer Hussain submitted a pull request to update the information about third-party libraries in K-9 Mail’s About screen (#7601)
  • GitHub user LorenzHo provided a patch to not focus the recipient input field when the Compose screen was opened using a mailto: URI (#7623). Unfortunately, this change had to be backed out later because of unintended side effects. But we’re hopeful a modified version of this change will make it into the app soon.

Thank you for your contributions!

Releases

In February 2024 we published the new stable release, K-9 Mail 6.800, along with several beta versions leading up to it.

The post Thunderbird for Android / K-9 Mail: February 2024 Progress Report appeared first on The Thunderbird Blog.

Open Policy & AdvocacyMozilla Mornings: Choice or Illusion? Tackling Harmful Design Practices

The first edition of Mozilla Mornings in 2024 will explore the impact of harmful design on consumers in the digital world and the role regulation can play in addressing such practices.

In the evolving digital landscape, deceptive and manipulative design practices, as well as aggressive personalisation and profiling pose significant threats to consumer welfare, potentially leading to financial loss, privacy breaches, and compromised security.

While existing EU regulations address some aspects of these issues, questions persist about their adequacy in combating harmful design patterns comprehensively. What additional measures are needed to ensure digital fairness for consumers and empower designers who want to act ethically?

To discuss these issues, we are delighted to announce that the following speakers will be participating in our panel discussion:

  • Egelyn Braun, Team Leader DG JUST, European Commission
  • Estelle Hary, Co-founder, Design Friction
  • Silvia de Conca, Amsterdam Law & Technology Institute, Vrije Universiteit Amsterdam
  • Finn Myrstad, Digital Policy Director, Norwegian Consumer Council

The event will also feature a fireside chat with MEP Kim van Sparrentak from Greens/EFA.

  • Date: Wednesday 20th March 2024
  • Location: L42, Rue de la Loi 42, 1000 Brussels
  • Time: 08:30 – 10:30 CET

To register, click here.

The post Mozilla Mornings: Choice or Illusion? Tackling Harmful Design Practices appeared first on Open Policy & Advocacy.

The Mozilla Thunderbird BlogTowards Thunderbird for Android – K-9 Mail 6.800 Simplifies Adding Email Accounts

Graphic announcing "Thunderbird for Android" with a Thunderbird icon, and "K-9 Mail 6.800 Released" with a red envelope icon representing K-9 Mail

We’re happy to announce the release of K-9 Mail 6.800. The main goal of this version is to make it easier for you to add your email accounts to the app.

With another item crossed off the list, this brings us one step closer towards Thunderbird for Android.

New account setup

Setting up an email account in K-9 Mail is something many new users have struggled with in the past. That’s mainly because automatic setup was only supported for a handful of large email providers. If you had an email account with another email provider, you had to manually enter the incoming and outgoing server settings. But finding the correct server settings can be challenging. 

So we set out to improve the setup experience. Since this part of the app was quite old and had a couple of other problems, we used this opportunity to rewrite the whole account setup component. This turned out to be more work than originally anticipated. But we’re quite happy with the result.

Let’s have a brief look at the steps involved in setting up a new account.

1. Enter email address

To get the process started, all you have to do is enter the email address of the account you want to set up in K-9 Mail.

2. Provide login credentials

After tapping the Next button, the app will use Thunderbird’s Autoconfig mechanism to try to find the appropriate incoming and outgoing server settings. Then you’ll be asked to provide a password or use the web login flow, depending on the email provider.
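
For the curious, here’s a rough Python sketch of the kind of Autoconfig lookup involved. This is a simplification, not K-9 Mail’s actual code (which is Kotlin and handles many more cases), but the two URL patterns shown are the documented Autoconfig endpoints:

import urllib.request
from urllib.parse import quote

def lookup_autoconfig(email):
    # Simplified sketch of the Autoconfig lookup; the real client tries
    # more locations and validates the returned XML.
    domain = email.rsplit("@", 1)[-1]
    candidates = [
        # Configuration served by the email provider itself.
        f"https://autoconfig.{domain}/mail/config-v1.1.xml?emailaddress={quote(email)}",
        # Fallback: Thunderbird's central ISP database (ISPDB).
        f"https://autoconfig.thunderbird.net/v1.1/{domain}",
    ]
    for url in candidates:
        try:
            with urllib.request.urlopen(url, timeout=10) as response:
                return response.read()  # XML describing server settings
        except OSError:
            continue
    return None  # Nothing found; fall back to manual setup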

The app will then try to log in to the incoming and outgoing server using the provided credentials.

3. Provide some basic information about the account

If your login credentials check out, you’ll be asked to provide your name for outgoing messages. For all the other inputs you can go with the defaults. All settings can be changed later, once an account has been set up.

If everything goes well, that’s all it takes to set up an account.

Of course, there are still cases where the app won’t be able to automatically find a configuration, and the user will be asked to manually provide the incoming and outgoing server settings. But we’ll be working with email providers to reduce how often this happens.

What else is new?

While the account setup rewrite was our main development focus, we’ve also made a couple of smaller changes and bug fixes. You can find a list of the most notable ones below.

Improvements and behavior changes

  • Made it harder to accidentally trigger swipe actions in the message list screen
  • IMAP: Added support for sending the ID command (which is required by some email providers)
  • Improved screen reader experience in various places
  • Improved display of some HTML messages
  • Changed background color in message view and compose screens when using dark theme
  • Adding to contacts once again lets you add an email address to an existing contact
  • Added image handling within the context menu for hyperlinks
  • A URI pasted when composing a message will now be surrounded by angle brackets
  • Don’t use nickname as display name when auto-completing recipient using the nickname
  • Changed compose icon in the message list widget to match the icon inside the app
  • Don’t attempt to open file: URIs in an email; tapping such a link will now copy the URL to the clipboard instead
  • Added option to return to the message list after marking a message as unread in the message view
  • Combined settings “Return to list after delete” and “Show next message after delete” into “After deleting or moving a message”
  • Moved “Show only subscribed folders” setting to “Folders” section
  • Added copy action to recipient dropdown in compose screen (to work around missing drag & drop functionality)
  • Simplified the app icon so it can be a vector drawable
  • Added support for the IMAP MOVE extension

Bug fixes

  • Fixed bug where the account name wasn’t displayed in the message view when it should have been
  • Fixed bugs with importing and exporting identities
  • The app will no longer ask to save a draft when no changes have been made to an existing draft message
  • Fixed bug where “Cannot connect to crypto provider” was displayed when the problem wasn’t the crypto provider
  • Fixed a crash caused by an interaction with OpenKeychain 6.0.0
  • Fixed inconsistent behavior when replying to messages
  • Fixed display issue with recipients in message view screen
  • Fixed display issues when rendering a message/rfc822 inline part
  • Fixed display issue when removing an account
  • Fixed notification sounds on WearOS devices
  • Fixed the app so it runs on devices that don’t support home screen widgets

Known issues

  • A fresh app install on Android 14 will be missing the “alarms & reminders” permission required for Push to work. Please allow setting alarms and reminders in Android’s app settings under Alarms & reminders.
  • Some software keyboards automatically capitalize words when entering the email address in the first account setup screen.
  • When a password containing line breaks is pasted during account setup, these line breaks are neither ignored nor flagged as an error. This will most likely lead to an authentication error when checking server settings.

Where To Get K-9 Mail Version 6.800

Version 6.800 has started gradually rolling out. As always, you can get it on the following platforms:

GitHub | F-Droid | Play Store

(Note that the release will gradually roll out on the Google Play Store, and should appear shortly on F-Droid, so please be patient if it doesn’t automatically update.)

The post Towards Thunderbird for Android – K-9 Mail 6.800 Simplifies Adding Email Accounts appeared first on The Thunderbird Blog.

Mozilla Add-ons BlogDeveloper Spotlight: YouTube Search Fixer

Like a lot of us during the pandemic lockdown, Shubham Bose found himself consuming more YouTube content than ever before. That’s when he started to notice all the unwanted oddities appearing in his YouTube search results — irrelevant suggested videos, shorts, playlists, etc. Shubham wanted a cleaner, more focused search experience, so he decided to do something about it. He built YouTube Search Fixer. The extension streamlines YouTube search results in a slew of customizable ways, like removing “For you,” “People also search for,” “Related to your search,” and so on. You can also remove entire types of content like shorts, live streams, auto-generated mixes, and more.

The extension makes it easy to customize YouTube to suit you.

Early versions of the extension were less customizable and removed most types of suggested search results by default, but over time Shubham learned that different users want different things in their search results. “I realized the line between ‘helpful’ and ‘distracting’ is very subjective,” explains Shubham. “What one person finds useful, another might not. Ultimately, it’s up to the user to decide what works best for them. That’s why I decided to give users granular control using an Options page. Now people can go about hiding elements they find distracting while keeping those they deem helpful. It’s all about striking that personal balance.”

Despite YouTube Search Fixer’s current wealth of customization options (a cool new feature automatically redirects Shorts to their normal length versions), Shubham plans to expand his extension’s feature set. He’s considering keyword highlighting and denylist options, which would give users extreme control over search filtering.

More than solving what he felt was a problem with YouTube’s default search results, Shubham was motivated to build his extension as a “way of giving back to a community I deeply appreciate… I’ve used Firefox since I was in high school. Like countless others, I’ve benefited greatly from the ever helpful MDN Web Docs and the incredible add-ons ecosystem Mozilla hosts and helps thrive. They offer nice developer tools and cultivate a helpful and welcoming community. So making this was my tiny way of giving back and saying ‘thank you’.”

When he’s not writing extensions that improve the world’s most popular video streaming site, Shubham enjoys photographing his home garden in Lucknow, India. “It isn’t just a hobby,” he explains. “Experimenting with light, composition and color has helped me focus on visual aesthetics (in software development). Now, I actively pay attention to little details when I create visually appealing and user-friendly interfaces.”

Do you have an intriguing extension development story? Do tell! Maybe your story should appear on this blog. Contact us at amo-featured [at] mozilla [dot] org and let us know a bit about your extension development journey. 

The post Developer Spotlight: YouTube Search Fixer appeared first on Mozilla Add-ons Community Blog.

The Mozilla Thunderbird BlogThunderbird Monthly Development Digest: February 2024

Stylized Thunderbird icon with a code prompt in its center, against a purple background.

Hello Thunderbird Community! I can’t believe it’s already the end of February. Time goes by very fast and it seems that there’s never enough time to do all the things that you set your mind to. Nonetheless, it’s that time of the month again for a juicy and hopefully interesting Thunderbird Development Digest.

If this is your first time reading our monthly Dev Digest, these are short posts to give our community visibility into features and updates being planned for Thunderbird, as well as progress reports on work that’s in the early stages of development.

Let’s jump right into it, because there’s a lot to get excited about!

Rust and Exchange

Things are moving steadily on this front. Maybe not as fast as we would like, but we’re handling a complicated implementation and we’re adding a new protocol for the first time in more than a decade, so some friction is to be expected.

Nonetheless, you can start following the progress in our Thundercell repository. We’re using this repo to temporarily “park” crates and other libraries we’re aiming to vendor inside Thunderbird.

We’re aiming to reach an alpha state that we can land in Thunderbird later next month, so we can start asking for user feedback on Daily.

Mozilla Account + Thunderbird Sync

Illustration of a continuous cycle with a web browser window, a sync or update icon, and a server rack, indicating a process of technological interaction or data exchange. Illustration by Alessandro Castellani.

Things are moving forward on this front as well. We’re currently in the process of setting up our own SyncServer and TokenStorage in order to allow users to log in with their Mozilla Account but sync their Thunderbird data in a location independent from their Firefox data. This gives us an extra layer of security, as it prevents one app from accessing the other app’s data and vice versa.

In case you didn’t know, you can already use a Mozilla account and Sync on Daily, but this only works with a staging server and you’ll need an alternate Mozilla account for testing. There are a couple of known bugs but overall things seem to be working properly. Once we switch to our storage server, we will expose this feature more and enable it on Beta for everyone to test.

Oh, Snap!

Our continuous efforts to own our packages and distribution methods are moving forward with the internal creation of a Snap package. (For background, last year we took ownership of the Thunderbird Flatpak.)

We’re currently testing the Beta internally, and things seem to work as expected. We will announce it publicly once it’s available from the Snap Store, with the objective of offering both Stable and Beta channels.

We’re exploring the possibility of also offering a Daily channel, but that’s a bit more complicated and we will need more time to make sure it’s doable and automated, so stay tuned.

As usual, if you want to see things as they land, you can always check the pushlog and try running Daily, which would be immensely helpful for catching bugs early.

See ya next month,

Alessandro Castellani (he, him)
Director of Product Engineering

If you’re interested in joining the technical discussion around Thunderbird development, consider joining one or several of our mailing list groups here.

The post Thunderbird Monthly Development Digest: February 2024 appeared first on The Thunderbird Blog.

The Mozilla Thunderbird BlogThunderbird for Android / K-9 Mail: January 2024 Progress Report

a dark background with Thunderbird and K-9 Mail logos centered, with the text "Thunderbird for Android, January 2024 dev digest"

A new year, a new progress report! Learn what we did in January on our journey to transform K-9 Mail into Thunderbird for Android. If you’re new here or you forgot where we left off last year, check out the previous progress report.

Account setup

In January most of our work went into polishing the user interface and user experience of the new and improved account setup. However, there was still one feature missing that we really wanted to get in there: the ability to configure special folders.

Special folders

K-9 Mail supports the following special folders:

  • Archive: When configured, an Archive action will be available that moves a message to the designated archive folder.
  • Drafts: When configured, the Save as draft action will be available in the compose screen.
  • Sent: Messages that have been successfully submitted to the outgoing server will be uploaded to this folder. If this special folder is set to None, the app won’t save a copy of sent messages.
    Note: There’s also the setting Upload sent messages that can be disabled to prevent sent messages from being uploaded, e.g. if your email provider automatically saves a copy of outgoing messages.
  • Spam: When configured, a Spam action will be available that moves a message to the designated spam folder. (Please note that K-9 Mail currently does not include spam detection. So besides moving the message, this doesn’t do anything on its own. However, moving a message to and from the spam folder often trains the server-side spam filter available at many email providers.)
  • Trash: When configured, deleting a message in the app will move it to the designated trash folder. If the special folder is set to None, emails are deleted permanently right away.

In the distant past, K-9 Mail was simply using common names for these folders and created them on the server if they didn’t exist yet. But some email clients were using different names. And so a user could end up with e.g. multiple folders for sent messages. Of course there was an option to manually change the special folder assignment. But usually people only noticed when it was too late and the new folder already contained a couple of messages. Manually cleaning this up and making sure all email clients are configured to use the same folders is not fun.

To solve this problem, RFC 6154 introduced the SPECIAL-USE IMAP extension. That’s a mechanism to save this special folder mapping on an IMAP server. Having this information on the server means all email clients can simply fetch that mapping and then there should be no disagreement on e.g. which folder is used for sent messages.
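
As an illustration (not code from K-9 Mail), here’s a minimal Python sketch showing how a client can observe these special-use attributes; servers implementing RFC 6154 may include them directly in their LIST responses:

import imaplib

# Hypothetical host and credentials, for illustration only.
conn = imaplib.IMAP4_SSL("imap.example.com")
conn.login("user@example.com", "app-password")

# RFC 6154 defines attributes such as \Sent, \Drafts, \Trash, \Junk,
# and \Archive that annotate mailboxes in LIST responses.
typ, mailboxes = conn.list()
for line in mailboxes:
    print(line.decode())  # e.g. (\HasNoChildren \Sent) "/" "Sent"

conn.logout()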

Unfortunately, there are still some email providers that don’t support this extension. There are also cases where the server supports the feature, but none of the special roles are assigned to any folder. When K-9 Mail added support for the SPECIAL-USE extension, it simply used the data from the server, even if that meant not using any special folders. Unfortunately, that could be even worse than creating new folders, because you might end up e.g. not having a copy of sent messages.

So now the app is displaying a screen to ask the user to assign special folders when setting up an account. 

This screen is skipped if the app receives a full mapping from the server, i.e. all special roles are assigned to a folder. Of course you’ll still be able to change the special folder assignment after the account has been created.

Splitting account options

We split what used to be the account options screen into two different screens: display options and sync options.

Improved server certificate error screen

The screen to display server certificate errors during account setup has received an overhaul.

Polishing the user experience

With the special folders screen done, we’re now feature complete. So we took a step back to look at the whole experience of setting up an account. And we’ve found several areas where we could improve the app. 

Here’s an (incomplete) list of things we’ve changed:

  • We reduced the font weight of the header text to be less distracting.
  • In some parts of the flow there’s enough content on the screen that a user has to scroll. The area between the header and the navigation buttons at the bottom can be very small depending on the device size. So we included the header in the scrollable area to improve the experience on devices with a small screen.
  • There are a couple of transient screens, e.g. when checking server settings. Previously the app first displayed a progress indicator when checking server settings, then a success message for 2 seconds, but allowed the user to skip this screen by pressing the Next button. This turned out to be annoying and confusing. Annoying because the user has to wait longer than necessary; and confusing because it looked like user input was required, but by the time the user realizes that, the app will have most likely switched to the next screen automatically.
    We updated these transient screens to always show a progress indicator and hide the Next button, so users know something is happening and there’s currently nothing for them to do.
  • We also fixed a couple of smaller issues, like the inbox not being synchronized during setup when an account was configured for manual synchronization.

Fixing bugs

Some of the more interesting bugs we fixed in January:

  • When rotating the screen while selecting a notification sound in settings, some of the notification settings were accidentally disabled (#7468). 
  • When importing settings a preview lines value of 0 was ignored and the default of 2 was used instead (#7493).
  • When viewing a message and long-pressing an image that is also a link, only menu items relevant for images were displayed, but not ones relevant for links (#7457).
  • Opening an attachment from K-9 Mail’s message view in an external app and then sharing the content to K-9 Mail opened the compose screen for a new message but didn’t add an attachment (#7557).

Community Contributions

new-sashok724 fixed a bug that prevented the use of IP addresses for incoming or outgoing servers (#7483).

Thank you ❤

Releases

If you want to help shape Thunderbird for Android, become a beta tester and provide feedback on new features while they are still in development.

The post Thunderbird for Android / K-9 Mail: January 2024 Progress Report appeared first on The Thunderbird Blog.

Mozilla L10NA Deep Dive Into the Evolution of Pretranslation in Pontoon

Quite often, an imperfect translation is better than no translation. So why even publish untranslated content when high-quality machine translation systems are fast and affordable? Why not immediately machine-translate content and progressively ship enhancements as they are submitted by human translators?

At Mozilla, we call this process pretranslation. We began implementing it in Pontoon before COVID-19 hit, thanks to Vishal who landed the first patches. Then we caught some headwinds and didn’t make much progress until the project received a significant development boost in 2022; we finally launched it for the general audience in September 2023.

So far, 20 of our localization teams (locales) have opted to use pretranslation across 15 different localization projects. Over 20,000 pretranslations have been submitted and none of the teams have opted out of using it. These efforts have resulted in a higher translation completion rate, which was one of our main goals.

In this article, we’ll take a look at how we developed pretranslation in Pontoon. Let’s start by exploring how it actually works.

How does pretranslation work?

Pretranslation is enabled upon a team’s request (it’s off by default). When a new string is added to a project, it gets automatically pretranslated using a 100% match from translation memory (TM), which also includes translations of glossary entries. If a perfect match doesn’t exist, a locale-specific machine translation (MT) engine is used, trained on the locale’s translation memory.

Pretranslation opt-in form.
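
In pseudocode, the decision logic looks roughly like this. It’s a simplified Python sketch, not Pontoon’s actual implementation, and the helper names are made up:

def pretranslate(source_string, locale):
    # 1. Prefer a perfect (100%) translation memory match, which also
    #    covers translations of glossary entries.
    tm_match = translation_memory_lookup(source_string, locale)
    if tm_match is not None and tm_match.quality == 100:
        return tm_match.translation

    # 2. Otherwise, fall back to the locale-specific machine translation
    #    engine trained on the locale's translation memory.
    return mt_engine_for(locale).translate(source_string)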

After pretranslations are retrieved and saved in Pontoon, they get synced to our primary localization storage (usually a GitHub repository) and are hence immediately made available for shipping, unless they fail our quality checks. In that case, they don’t propagate to repositories until errors or warnings are fixed during the review process.

Until reviewed, pretranslations are visually distinguishable from user-submitted suggestions and translations. This makes post-editing much easier and more efficient. Another key factor that influences pretranslation review time is, of course, the quality of pretranslations. So let’s see how we picked our machine translation provider.

Choosing a machine translation engine

We selected the machine translation provider based on two primary factors: quality of translations and the number of supported locales. To make translations match the required terminology and style as much as possible, we were also looking for the ability to fine-tune the MT engine by training it on our translation data.

In March 2022, we compared Bergamot, Google’s Cloud Translation API (generic), and Google’s AutoML Translation (with custom models). Using these services we translated a collection of 1,000 strings into 5 locales (it, de, es-ES, ru, pt-BR), and used automated scores (BLEU, chrF++) as well as manual evaluation to compare them with the actual translations.
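
For reference, here’s a sketch of how such automated scores can be computed with the sacrebleu Python package; our actual evaluation pipeline may have differed in details:

from sacrebleu.metrics import BLEU, CHRF

def evaluate(hypotheses, references):
    # hypotheses: machine translations; references: human translations.
    bleu = BLEU()
    chrf = CHRF(word_order=2)  # word_order=2 turns chrF into chrF++
    return {
        "BLEU": bleu.corpus_score(hypotheses, [references]).score,
        "chrF++": chrf.corpus_score(hypotheses, [references]).score,
    }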

Performance of tested MT engines for Italian (it).

Google’s AutoML Translation outperformed the other two candidates in virtually all tested scenarios and metrics, so it became the clear choice. It supports over 60 locales. Google’s Generic Translation API supports twice as many, but we currently don’t plan to use it for pretranslation in locales not supported by Google’s AutoML Translation.

Making machine translation actually work

Currently, around 50% of pretranslations generated by Google’s AutoML Translation get approved without any changes. For some locales, the rate is around 70%. Keep in mind however that machine translation is only used when a perfect translation memory match isn’t available. For pretranslations coming from translation memory, the approval rate is 90%.

Comparison of pretranslation approval rate between teams.

To reach that approval rate, we had to make a series of adjustments to the way we use machine translation.

For example, we convert multiline messages to single-line messages before machine-translating them. Otherwise, each line is treated as a separate message and the resulting translation is of poor quality.

Multiline message:

Make this password unique and different from any others you use.
A good strategy to follow is to combine two or more unrelated
words to create an entire pass phrase, and include numbers and symbols.

Multiline message converted to a single-line message:

Make this password unique and different from any others you use. A good strategy to follow is to combine two or more unrelated words to create an entire pass phrase, and include numbers and symbols.
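
The conversion itself is conceptually a one-liner; a minimal sketch:

def to_single_line(message):
    # Collapse hard-wrapped lines so the MT engine treats the message
    # as a single unit instead of translating each line separately.
    return " ".join(line.strip() for line in message.splitlines())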

Let’s take a closer look at two of the more time-consuming changes.

The first one is specific to our machine translation provider (Google’s AutoML Translation). During initial testing, we noticed it would often take a long time for the MT engine to return results, up to a minute. Sometimes it even timed out! Such a long response time not only slows down pretranslation, it also makes machine translation suggestions in the translation editor less useful – by the time they appear, the localizer has already moved to translate the next string.

After further testing, we began to suspect that our custom engine shuts down after a period of inactivity, thus requiring a cold start for the next request. We contacted support and our assumption was confirmed. To overcome the problem, we were advised to send a dummy query to the service every 60 seconds just to keep the system alive.
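
In practice, that means running a keep-alive loop along these lines (a simplified sketch; the mt_client object is hypothetical):

import threading

def keep_engine_alive(mt_client, interval_seconds=60):
    # Send a cheap dummy query so the custom MT engine doesn't shut down
    # from inactivity and force a cold start on the next real request.
    def ping():
        try:
            mt_client.translate("ping")
        except Exception:
            pass  # Occasional ServiceUnavailable errors still slip through.
        timer = threading.Timer(interval_seconds, ping)
        timer.daemon = True
        timer.start()

    ping()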


Of course, it’s reasonable to shut down inactive services to free up resources, but the way to keep them alive isn’t. We have to make (paid) requests to each locale’s machine translation engines every minute just to make sure they work when we need them. And sometimes even that doesn’t help – we still see about a dozen ServiceUnavailable errors every day. It would be so much easier if we could just customize the default inactivity period or pay extra for an always-on service.

The other issue we had to address is quite common in machine translation systems: they are not particularly good at preserving placeholders. In particular, extra space often gets added to variables or markup elements, resulting in broken translations.

Message with variables:

{ $partialSize } of { $totalSize }

Message with variables machine-translated to Slovenian (adding space after $ breaks the variable):

{$ partialSize} od {$ totalSize}

We tried to mitigate this issue by wrapping placeholders in <span translate="no">…</span>, which tells Google’s AutoML Translation to not translate the wrapped text. This approach requires the source text to be submitted as HTML (rather than plain text), which triggers a whole new set of issues — from adding spaces in other places to escaping quotes — and we couldn’t circumvent those either. So this was a dead-end.

The solution was to store every placeholder in the Glossary with the same value for both source string and translation. That approach worked much better and we still use it today. It’s not perfect, though, so we only use it to pretranslate strings for which the default (non-glossary) machine translation output fails our placeholder quality checks.
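
Conceptually, that means building glossary entries that map each placeholder to itself. A rough sketch, with a deliberately simplified placeholder pattern:

import re

# Simplified pattern; real messages use several placeholder syntaxes.
PLACEHOLDER = re.compile(r"\{\s*\$?\w+\s*\}")

def placeholder_glossary(source_string):
    # Map every placeholder to itself so the MT engine copies it
    # verbatim into the translation instead of mangling it.
    return {match: match for match in PLACEHOLDER.findall(source_string)}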

Making pretranslation work with Fluent messages

On top of the machine translation service improvements we also had to account for the complexity of Fluent messages, which are used by most of the projects we localize at Mozilla. Fluent is capable of expressing virtually any imaginable message, which means it is the localization system you want to use if you want your software translations to sound natural.

As a consequence, Fluent message format comes with a syntax that allows for expressing such complex messages. And since machine translation systems (as seen above) already have trouble with simple variables and markup elements, their struggles multiply with messages like this:

shared-photos =
 { $photoCount ->
    [one]
      { $userGender ->
        [male] { $userName } added a new photo to his stream.
        [female] { $userName } added a new photo to her stream.
       *[other] { $userName } added a new photo to their stream.
      }
   *[other]
      { $userGender ->
        [male] { $userName } added { $photoCount } new photos to his stream.
        [female] { $userName } added { $photoCount } new photos to her stream.
       *[other] { $userName } added { $photoCount } new photos to their stream.
      }
  }

That means Fluent messages need to be pre-processed before they are sent to the pretranslation systems. Only relevant parts of the message need to be pretranslated, while syntax elements need to remain untouched. In the example above, we extract the following message parts, pretranslate them, and replace them with pretranslations in the original message:

  • { $userName } added a new photo to his stream.
  • { $userName } added a new photo to her stream.
  • { $userName } added a new photo to their stream.
  • { $userName } added { $photoCount } new photos to his stream.
  • { $userName } added { $photoCount } new photos to her stream.
  • { $userName } added { $photoCount } new photos to their stream.

To be more accurate, this is what happens for languages like German, which uses the same CLDR plural forms as English. For locales without plurals, like Chinese, we drop plural forms completely and only pretranslate the remaining three parts. If the target language is Slovenian, two additional plural forms need to be added (two, few), which in this example results in a total of 12 messages needing pretranslation (four plural forms, with three gender forms each).
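
To give an idea of what this pre-processing involves, here’s a simplified sketch built on the python-fluent syntax package. Pontoon’s real implementation handles more cases (attributes, terms, patterns mixing text and selectors):

from fluent.syntax import FluentParser, ast

def leaf_patterns(pattern):
    # Yield the patterns that carry translatable text, recursing into
    # select expression variants (e.g. plural and gender forms).
    selects = [
        element.expression
        for element in pattern.elements
        if isinstance(element, ast.Placeable)
        and isinstance(element.expression, ast.SelectExpression)
    ]
    if not selects:
        yield pattern
        return
    for select in selects:
        for variant in select.variants:
            yield from leaf_patterns(variant.value)

resource = FluentParser().parse(ftl_source)  # ftl_source: FTL file contents
for entry in resource.body:
    if isinstance(entry, ast.Message) and entry.value:
        for leaf in leaf_patterns(entry.value):
            # Serialize the leaf, pretranslate it, and splice the result
            # back into the original message (omitted here).
            pass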

Finally, Pontoon translation editor uses custom UI for translating access keys. That means it’s capable of detecting which part of the message is an access key and which is a label the access key belongs to. The access key should ideally be one of the characters included in the label, so the editor generates a list of candidates that translators can choose from. In pretranslation, the first candidate is directly used as an access key, so no TM or MT is involved.

A screenshot of Notepad showing access keys in the menu.

Access keys (not to be confused with shortcut keys) are used for accessibility to interact with all controls or menu items using the keyboard. Windows indicates access keys by underlining the access key assignment when the Alt key is pressed. Source: Microsoft Learn.
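
As a sketch of the candidate generation (simplified; Pontoon’s actual heuristics differ in the details):

def access_key_candidates(label):
    # Prefer the first letter of each word, then any remaining letters,
    # keeping first-appearance order and ignoring case duplicates.
    seen = set()
    candidates = []
    initials = [word[0] for word in label.split() if word]
    for char in initials + [c for c in label if c.isalpha()]:
        if char.isalpha() and char.lower() not in seen:
            seen.add(char.lower())
            candidates.append(char)
    return candidates

# In pretranslation, the first candidate is used directly as the access
# key; no translation memory or machine translation is involved.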

Looking ahead

With every enhancement we shipped, the case for publishing untranslated text instead of pretranslations became weaker and weaker. And there’s still room for improvements in our pretranslation system.

Ayanaa has done extensive research on the impact of Large Language Models (LLMs) on translation efficiency. She’s now working on integrating LLM-assisted translations into Pontoon’s Machinery panel, from which localizers will be able to request alternative translations, including formal and informal options.

If the target locale could set the tone to formal or informal on the project level, we could benefit from this capability in pretranslation as well. We might also improve the quality of machine translation suggestions by providing existing translations into other locales as references in addition to the source string.

If you are interested in using pretranslation or already use it, we’d love to hear your thoughts! Please leave a comment, reach out to us on Matrix, or file an issue.

Mozilla L10NL10n Report: February 2024 Edition

Please note some of the information provided in this report may be subject to change as we are sometimes sharing information about projects that are still in early stages and are not final yet. 

New content and projects

What’s new or coming up in Firefox desktop

While the amount of content has been relatively small over the last few months in Firefox, there have been some UI changes and updates to privacy setting related text such as form autofill, Cookie Banner Blocker, passwords (about:logins), and cookie and site data*. One change happening here (and across all Mozilla products) is the move away from using the term “login” to describe the credentials for accessing websites and instead use “password(s).”

In addition, while the number of strings is low, Firefox’s PDF viewer will soon have the ability to highlight content. You can test this feature now in Nightly.

Most of these strings and translations can be previewed by checking a Nightly build. If you’re new to localizing Firefox or if you missed our deep dive, please check out our blog post from July to learn more about the Firefox release schedule.

*Recently in our L10N community matrix channel, someone from our community asked how the new strings for clearing browsing history and data (see screenshot below) from Cookie and Site Data could be shown in Nightly.

Pontoon screenshot showing the strings for clearing browsing history and data from Cookie and Site Data.

In order to show the strings in Nightly, the privacy.sanitize.useOldClearHistoryDialog preference needs to be set to false. To set the preference, type about:config in your URL bar and press enter. A warning may appear advising you to proceed with caution; click the button to continue. On the page that follows, paste privacy.sanitize.useOldClearHistoryDialog into the search field, then click the toggle button to change the value to false.

You can then trigger the new dialog by clicking “Clear Data…” from the Cookies and Site Data settings or “Clear History…” from the History settings. (You may need to quit Firefox and open it again for the change to take effect.)

In case of doubts about managing about:config, you can consult the Configuration Editor guide on SUMO.

What’s new or coming up in mobile

Much like desktop, mobile land has been pretty calm recently.

Having said that, we would like to call out the new Translation feature that is now available to test on the latest Firefox for Android v124 Nightly builds (this is possible only through the secret settings at the moment). It’s a built-in full page translation feature that allows you to seamlessly browse the web in your preferred language. As you navigate the site, Firefox continuously translates new content.

Check your Pontoon notifications for instructions on how to test it out. Note that the feature is not available on iOS at the moment.

In the past couple of months you may have also noticed strings mentioning a new shopping feature called “Review Checker” (that we mentioned for desktop in our November edition). The feature is still a bit tricky to test on Android, but there are instructions you can follow – these can also be found in your Pontoon notification archive.

For testing on iOS, you just need to have the latest Beta version installed and navigate to product pages on the US sites of amazon.com, bestbuy.com, and walmart.com. A logo will appear in the URL bar with a notification, letting you launch and test the feature.

Finally, another notable change that has been called out under the Firefox desktop section above: we are moving away from using the term “login” to describe the credentials for accessing websites and instead use “password(s).”

What’s new or coming up in Foundation projects

New languages have been added to Common Voice in 2023: Tibetan, Chichewa, Ossetian, Emakhuwa, Laz, Pular Guinée, Sindhi. Welcome!

What’s new or coming up in Pontoon

Improved support for mobile devices

The Pontoon translation workspace is now responsive, which means you can finally use Pontoon on your mobile device to translate and review strings! We developed a single-column layout for mobile phones and a two-column layout for tablets.

Screenshot of Pontoon UI on a smartphone running Firefox for Android.

2024 Pontoon survey

Thanks again to everyone who has participated in the 2024 Pontoon survey. The 3 top-voted features we commit to implement are:

  1. Add ability to edit Translation Memory entries (611 votes).
  2. Improve performance of Pontoon translation workspace and dashboards (603 votes).
  3. Add ability to propose new Terminology entries (595 votes).

Friends of the Lion

We started a series called “Localizer Spotlight” and have published two already. Do you know someone who should be featured there? Let us know here!

Also, is there someone in your l10n community who’s been doing a great job and should appear in this section? Contact us and we’ll make sure they get a shout-out!

Questions? Want to get involved?

If you want to get involved, or have any questions about l10n, reach out to us.

Did you enjoy reading this report? Let us know how we can improve it.

hacks.mozilla.orgAnnouncing Interop 2024

The Interop Project has become one of the key ways that browser vendors come together to improve the web platform. By working to identify and improve key areas where differences between browser engines are impacting users and web developers, Interop is a critical tool in ensuring the long-term health of the open web.

The web platform is built on interoperability based on common standards. This offers users a degree of choice and control that sets the web apart from proprietary platforms defined by a single implementation. A commitment to ensuring that the web remains open and interoperable forms a fundamental part of Mozilla’s manifesto and web vision, and is why we’re so committed to shipping Firefox with our own Gecko engine.

However, interoperability requires care and attention to maintain. When implementations ship with differences between the standard and each other, this creates a pain point for web authors; they have to choose between avoiding the problematic feature entirely and coding to specific implementation quirks. Over time, if enough authors produce implementation-specific content, interoperability is lost, and along with it user agency.

This is the problem that the Interop Project is designed to address. By bringing browser vendors together to focus on interoperability, the project allows identifying areas where interoperability issues are causing problems, or may do so in the near future. Tracking progress on those issues with a public metric provides accountability to the broader web community on addressing the problems.

The project works by identifying a set of high-priority focus areas: parts of the web platform where everyone agrees that making interoperability improvements will be of high value. These can be existing features where we know browsers have slightly different behaviors that are causing problems for authors, or they can be new features which web developer feedback shows is in high demand and which we want to launch across multiple implementations with high interoperability from the start. For each focus area a set of web-platform-tests is selected to cover that area, and the score is computed from the pass rate of these tests.
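
As a rough illustration of how such a metric can be computed (a sketch, not the actual wpt.fyi implementation):

def interop_score(results):
    # results: {test_name: {engine_name: passed}} for one focus area.
    # A single engine's score is its own pass rate; the overall
    # Interoperability score counts tests that pass in *all* engines.
    total = len(results)
    passing_everywhere = sum(
        1 for engines in results.values() if all(engines.values())
    )
    return passing_everywhere / total if total else 0.0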

Interop 2023

The Interop 2023 project covered high profile features like the new :has() selector, and web-codecs, as well as areas of historically poor interoperability such as pointer events.

The results of the project speak for themselves: every browser ended the year with scores in excess of 97% for the prerelease versions of their browsers. Moreover, the overall Interoperability score — that is the fraction of focus area tests that pass in all participating browser engines — increased from 59% at the start of the year to 95% now. This result represents a huge improvement in the consistency and reliability of the web platform. For users this will result in a more seamless experience, with sites behaving reliably in whichever browser they prefer.

For the :has() selector — which we know from author feedback has been one of the most in-demand CSS features for a long time — every implementation is now passing 100% of the web-platform-tests selected for the focus area. Launching a major new platform feature with this level of interoperability demonstrates the power of the Interop project to progress the platform without compromising on implementation diversity, developer experience, or user choice.

As well as focus areas, the Interop project also has “investigations”. These are areas where we know that we need to improve interoperability, but aren’t at the stage of having specific tests which can be used to measure that improvement. In 2023 we had two investigations. The first was for accessibility, which covered writing many more tests for ARIA computed role and accessible name, and ensuring they could be run in different browsers. The second was for mobile testing, which has resulted in both Mobile Firefox and Chrome for Android having their initial results in wpt.fyi.

Interop 2024

Following the success of Interop 2023, we are pleased to confirm that the project will continue in 2024 with a new selection of focus areas, representing areas of the web platform where we think we can have the biggest positive impact on users and web developers.

New Focus Areas

New focus areas for 2024 include, among other things:

  • Popover API – This provides a declarative mechanism to create content that always renders in the topmost-layer, so that it overlays other web page content. This can be useful for building features like tooltips and notifications. Support for popover was the #1 author request in the recent State of HTML survey.
  • CSS Nesting – This is a feature that’s already shipping, which allows writing more compact and readable CSS files, without the need for external tooling such as preprocessors. However different browsers shipped slightly different behavior based on different revisions of the spec, and Interop will help ensure that everyone aligns on a single, reliable, syntax for this popular feature.
  • Accessibility – Ensuring that the web is accessible to all users is a critical part of Mozilla’s manifesto. Our ability to include Accessibility testing in Interop 2024 is a direct result of the success of the Interop 2023 Accessibility Investigation in increasing the test coverage of key accessibility features.

The full list of focus areas is available in the project README.

Carryover

In addition to the new focus areas, we will carry over some of the 2023 focus areas where there’s still more work to be done. Of particular interest is the Layout focus area, which will combine the previous Flexbox, Grid and Subgrid focus areas into one area covering all the most important layout primitives for the modern web. On top of that, the Custom Properties, URL and Mouse and Pointer Events focus areas will be carried over. These represent cases where, even though we’ve already seen large improvements in Interoperability, we believe that users and web authors will benefit from even greater convergence between implementations.

Investigations

As well as focus areas, Interop 2024 will also feature a new investigation into improving the integration of WebAssembly testing into web-platform-tests. This will open up the possibility of including WASM features in future Interop projects. In addition we will extend the Accessibility and Mobile Testing investigations, as there is more work to be done to make those aspects of the platform fully testable across different implementations.

The post Announcing Interop 2024 appeared first on Mozilla Hacks - the Web developer blog.

hacks.mozilla.orgOption Soup: the subtle pitfalls of combining compiler flags

Firefox development uncovers many cross-platform differences and unique features of its combination of dependencies. Engineers working on Firefox regularly overcome these challenges, and while we can’t detail all of them, we think you’ll enjoy hearing about some. So here’s a sample of a recent technical investigation.

During the Firefox 120 beta cycle, a new crash signature appeared on our radars with significant volume.

At that time, the distribution across operating systems revealed that more than 50% of the crash volume originates from Ubuntu 18.04 LTS users.

The main process crashes in a CanvasRenderer thread, with the following call stack:

0  firefox  std::locale::operator=  
1  firefox  std::ios_base::imbue  
2  firefox  std::basic_ios<char, std::char_traits<char> >::imbue  
3  libxul.so  sh::InitializeStream<std::__cxx11::basic_ostringstream<char, std::char_traits<char>, std::allocator<char> > >  /build/firefox-ZwAdKm/firefox-120.0~b2+build1/gfx/angle/checkout/src/compiler/translator/Common.h:238
3  libxul.so  sh::TCompiler::setResourceString  /build/firefox-ZwAdKm/firefox-120.0~b2+build1/gfx/angle/checkout/src/compiler/translator/Compiler.cpp:1294
4  libxul.so  sh::TCompiler::Init  /build/firefox-ZwAdKm/firefox-120.0~b2+build1/gfx/angle/checkout/src/compiler/translator/Compiler.cpp:407
5  libxul.so  sh::ConstructCompiler  /build/firefox-ZwAdKm/firefox-120.0~b2+build1/gfx/angle/checkout/src/compiler/translator/ShaderLang.cpp:368
6  libxul.so  mozilla::webgl::ShaderValidator::Create  /build/firefox-ZwAdKm/firefox-120.0~b2+build1/dom/canvas/WebGLShaderValidator.cpp:215
6  libxul.so  mozilla::WebGLContext::CreateShaderValidator const  /build/firefox-ZwAdKm/firefox-120.0~b2+build1/dom/canvas/WebGLShaderValidator.cpp:196
7  libxul.so  mozilla::WebGLShader::CompileShader  /build/firefox-ZwAdKm/firefox-120.0~b2+build1/dom/canvas/WebGLShader.cpp:98

At first glance, we want to blame WebGL. The C++ standard library functions cannot be at fault, right?

But when looking at the WebGL code, the crash occurs in the perfectly valid lines of C++ summarized below:

std::ostringstream stream;
stream.imbue(std::locale::classic());

This code should never crash, and yet it does. In fact, taking a closer look at the stack gives a first lead for investigation:
Although we crash into functions that belong to the C++ standard library, these functions appear to live in the firefox binary.

This is an unusual situation that never occurs with official builds of Firefox.
It is, however, very common for distributions to change configuration settings and apply downstream patches to an upstream source, so no worries about that.
Moreover, there is only a single build of Firefox Beta that is causing this crash.

We know this thanks to a unique identifier associated with any ELF binary.
Here, if we choose any specific version of Firefox 120 Beta (such as 120b9), the crashes all embed the same unique identifier for firefox.

Now, how can we guess what build produces this weird binary?

A useful user comment mentions that they regularly experience this crash since updating to 120.0~b2+build1-0ubuntu0.18.04.1.
And by looking for this build identifier, we quickly reach the Firefox Beta PPA.
Then indeed, we are able to reproduce the crash by installing it in a Ubuntu 18.04 LTS virtual machine: it occurs when loading any WebGL page!
With the binary now at hand, running nm -D ./firefox confirms the presence of several symbols related to libstdc++ that live in the text section (T marker).

Templated and inline symbols from libstdc++ usually appear as weak (W marker), so there is only one explanation for this situation: firefox has been statically linked with libstdc++, probably through -static-libstdc++.

Fortunately, the build logs are available for all Ubuntu packages.
After some digging, we find the logs for the 120b9 build, which indeed contain references to -static-libstdc++.

But why?

Again, everything is well documented, and thanks to well trained digging skills we reach a bug report that provides interesting insights.
Firefox requires a modern C++ compiler, and hence a modern libstdc++, which is unavailable on old systems like Ubuntu 18.04 LTS.
The build uses -static-libstdc++ to close this gap.
This just explains the weird setup though.

What about the crash?

Since we can now reproduce it, we can launch Firefox in a debugger and continue our investigation.
When inspecting the crash site, we seem to crash because std::locale::classic() is not properly initialized.
Let’s take a peek at the implementation.

const locale& locale::classic()
{
  _S_initialize();
  return *(const locale*)c_locale;
}

_S_initialize() is in charge of making sure that c_locale will be properly initialized before we return a reference to it.
To achieve this, _S_initialize() calls another function, _S_initialize_once().

void locale::_S_initialize()
{
#ifdef __GTHREADS
  if (!__gnu_cxx::__is_single_threaded())
    __gthread_once(&_S_once, _S_initialize_once);
#endif

  if (__builtin_expect(!_S_classic, 0))
    _S_initialize_once();
}

In _S_initialize(), we first go through a wrapper for pthread_once(): the first thread that reaches this code consumes _S_once and calls _S_initialize_once(), whereas other threads (if any) are stuck waiting for _S_initialize_once() to complete.

This looks rather fail-proof, right?

There is even an extra direct call to _S_initialize_once() if _S_classic is still uninitialized after that.
Now, _S_initialize_once() itself is rather straightforward: it allocates _S_classic and puts it within c_locale.

void
locale::_S_initialize_once() throw()
{
  // Need to check this because we could get called once from _S_initialize()
  // when the program is single-threaded, and then again (via __gthread_once)
  // when it's multi-threaded.
  if (_S_classic)
    return;

  // 2 references.
  // One reference for _S_classic, one for _S_global
  _S_classic = new (&c_locale_impl) _Impl(2);
  _S_global = _S_classic;
  new (&c_locale) locale(_S_classic);
}

The crash looks as if we never went through _S_initialize_once(), so let’s put a breakpoint there and see what happens.
And just by doing this, we already notice something suspicious.
We do reach _S_initialize_once(), but not within the firefox binary: instead, we only ever reach the version exported by liblgpllibs.so.
In fact, liblgpllibs.so is also statically linked with libstdc++, such that firefox and liblgpllibs.so both embed and export their own _S_initialize_once() function.

By default, symbol interposition applies, and _S_initialize_once() should always be called through the procedure linkage table (PLT), so that every module ends up calling the same version of the function.
If symbol interposition were happening here, we would expect that liblgpllibs.so would reach the version of _S_initialize_once() exported by firefox rather than its own, because firefox was loaded first.

So maybe there is no symbol interposition.

This can occur when using -fno-semantic-interposition.

Each version of the standard library would live on its own, independent from the other versions.
But neither the Firefox build system nor the Ubuntu maintainer seem to pass this flag to the compiler.
However, by looking at the disassembly for _S_initialize() and _S_initialize_once(), we can see that the exported global variables (_S_once, _S_classic, _S_global) are subject to symbol interposition.

These accesses all go through the global offset table (GOT), so that every module ends up accessing the same version of the variable.
This seems strange given what we said earlier about _S_initialize_once().
Non-exported global variables (c_locale, c_locale_impl), however, are accessed directly without symbol interposition, as expected.

We now have enough information to explain the crash.

When we reach _S_initialize() in liblgpllibs.so, we actually consume the _S_once that lives in firefox, and initialize the _S_classic and _S_global that live in firefox.
But we initialize them with pointers to well initialized variables c_locale_impl and c_locale that live in liblgpllibs.so!
The variables c_locale_impl and c_locale that live in firefox, however, remain uninitialized.

So if we later reach _S_initialize() in firefox, everything looks as if initialization has happened.
But then we return a reference to the version of c_locale that lives in firefox, and this version has never been initialized.

Boom!

Now the main question is: why do we see interposition occur for _S_once but not for _S_initialize_once()?
If we step back for a minute, there is a fundamental distinction between these symbols: one is a function symbol, the other is a variable symbol.
And indeed, the Firefox build system uses the -Bsymbolic-functions flag!

The ld man page describes it as follows:

-Bsymbolic-functions

When creating a shared library, bind references to global function symbols to the definition within the shared library, if any.  This option is only meaningful on ELF platforms which support shared libraries.

As opposed to:

-Bsymbolic

When creating a shared library, bind references to global symbols to the definition within the shared library, if any.  Normally, it is possible for a program linked against a shared library to override the definition within the shared library. This option is only meaningful on ELF platforms which support shared libraries.

Nailed it!

The crash occurs because this flag makes us use a weird variant of symbol interposition, where symbol interposition happens for variable symbols like _S_once and _S_classic but not for function symbols like _S_initialize_once().

This results in a mismatch regarding how we access global variables: exported global variables are unique thanks to interposition, whereas every non-interposed function will access its own version of any non-exported global variable.

With all the knowledge that we have now gathered, it is easy to write a reproducer that does not involve any Firefox code:

/* main.cc */
#include <iostream>
#include <locale>

extern void pain();

int main() {
  pain();
  std::cout << "[main] " << std::locale::classic().name() << "\n";
  return 0;
}

/* pain.cc */
#include <iostream>
#include <locale>

void pain() {
  std::cout << "[pain] " << std::locale::classic().name() << "\n";
}

# Makefile
all:
   $(CXX) pain.cc -fPIC -shared -o libpain.so -static-libstdc++ -Wl,-Bsymbolic-functions
   $(CXX) main.cc -fPIC -c -o main.o
   $(CC) main.o -fPIC -o main /usr/lib/gcc/x86_64-redhat-linux/13/libstdc++.a -L. -Wl,-rpath=. -lpain -Wl,-Bsymbolic-functions
   ./main

clean:
   $(RM) libpain.so main

Understanding the bug is one step, and solving it is yet another story.
Should it be considered a libstdc++ bug that the code for locales is not compatible with -static-libstdc++ -Bsymbolic-functions?

It feels like combining these flags is a very nice way to dig our own grave, and that seems to be the opinion of the libstdc++ maintainers indeed.

Overall, perhaps the strangest part of this story is that this combination did not cause any trouble up until now.
Therefore, we suggested to the maintainer of the package to stop using -static-libstdc++.

There are other ways to use a different libstdc++ than available on the system, such as using dynamic linking and setting an RPATH to link with a bundled version.

Doing that allowed them to successfully deploy a fixed version of the package.
A few days after that, with the official release of Firefox 120, we noticed a very significant bump in volume for the same crash signature. Not again!

This time the volume was coming exclusively from users of NixOS 23.05, and it was huge!

After we shared the conclusions from our beta investigation with them, the maintainers of NixOS were able to quickly associate the crash with an issue that had not yet been backported for 23.05 and was causing the compiler to behave like -static-libstdc++.

To avoid such a mess in the future, we added detection for this particular setup in Firefox’s configure.

We are grateful to the people who have helped fix this issue, in particular:

  • Rico Tzschichholz (ricotz) who quickly fixed the Ubuntu 18.04 LTS package, and Amin Bandali (bandali) who provided help on the way;
  • Martin Weinelt (hexa) and Artturin for their prompt fixes for the NixOS 23.05 package;
  • Nicolas B. Pierron (nbp) for helping us get started with NixOS, which allowed us to quickly share useful information with the NixOS package maintainers.


The post Option Soup: the subtle pitfalls of combining compiler flags appeared first on Mozilla Hacks - the Web developer blog.

Open Policy & AdvocacyPlatform Tilt: Documenting the Uneven Playing Field for an Independent Browser Like Firefox

Browsers are the principal gateway connecting people to the open Internet, acting as their agent and shaping their experience. The central role of browsers has long motivated us to build and improve Firefox in order to offer people an independent choice. However, this centrality also creates a strong incentive for dominant players to control the browser that people use. The right way to win users is to build a better product, but shortcuts can be irresistible — and there’s a long history of companies leveraging their control of devices and operating systems to tilt the playing field in favor of their own browser.

This tilt manifests in a variety of ways. For example: making it harder for a user to download and use a different browser, ignoring or resetting a user’s default browser preference, restricting capabilities to the first-party browser, or requiring the use of the first-party browser engine for third-party browsers.

For years, Mozilla has engaged in dialog with platform vendors in an effort to address these issues. With renewed public attention and an evolving regulatory environment, we think it’s time to publish these concerns using the same transparent process and tools we use to develop positions on emerging technical standards. So today we’re publishing a new issue tracker where we intend to document the ways in which platforms put Firefox at a disadvantage and engage with the vendors of those platforms to resolve them.

This tracker captures the issues we experience developing Firefox, but we believe in an even playing field for everyone, not just us. We encourage other browser vendors to publish their concerns in a similar fashion, and welcome the engagement and contributions of other non-browser groups interested in these issues. We’re particularly appreciative of the efforts of Open Web Advocacy in articulating the case for a level playing field and for documenting self-preferencing.

People deserve choice, and choice requires the existence of viable alternatives. Alternatives and competition are good for everyone, but they can only flourish if the playing field is fair. It’s not today, but it’s also not hard to fix if the platform vendors wish to do so.

We call on Apple, Google, and Microsoft to engage with us in this new forum to speedily resolve these concerns.

The post Platform Tilt: Documenting the Uneven Playing Field for an Independent Browser Like Firefox appeared first on Open Policy & Advocacy.

Mozilla L10NAdvancing Mozilla’s mission through our work on localization standards

After the previous post highlighting what the Mozilla community and Localization Team achieved in 2023, it’s time to dive deeper into the work the team does in the area of localization technologies and standards.

A significant part of our work on localization at Mozilla happens within the space of Internet standards. We take seriously our commitments that stem from the Mozilla Manifesto:

We are committed to an internet that includes all the peoples of the earth — where a person’s demographic characteristics do not determine their online access, opportunities, or quality of experience.

To us, this means that it’s not enough to strive to improve the localization of our products, but that we need to improve the localizability of the Internet as a whole. We need to take the lessons we are learning from our work on Firefox, Thunderbird, websites, and all our other projects, and make them available to everyone, everywhere.

That’s a pretty lofty goal we’ve set ourselves, but to be fair it’s not just about altruism. With our work on Fluent and DOM Localization, we’re in a position where it would be far too easy to rest on our laurels, and to consider what we have “good enough”. To keep going forward and to keep improving the experiences of our developers and localizers, we need input from the outside that questions our premises and challenges us. One way for us to do that is to work on Internet standards, presenting our case to other experts in the field.

In 2023, a large part of our work on localization standards was focused on Unicode MessageFormat 2 (aka “MF2”), an upcoming message formatting specification, as well as other specifications building on top of it. Work on this has been ongoing since late 2019, and Mozilla has been one of the core participants from the start. The base MF2 spec is now slated for an initial “technology preview” release as part of the Spring 2024 Unicode CLDR release.

Compared to Fluent, MF2 corresponds to the syntax and formatting of a single message pattern. Separately, we’ve also been working on the syntax and representation of a resource format for messages (corresponding to Fluent’s FTL files), as well as championing JavaScript language proposals for formatting messages and parsing resources. Work on standardizing DOM localization (as in, being able to use just HTML to localize a website) is also getting started in W3C/WHATWG, but its development is contingent on all the preceding specifications reaching a more stable stage.

So, besides the long-term goal of improving localization everywhere, what are the practical results of these efforts? The nature of this work is exploratory, so predicting results has not been and will not be completely possible. One tangible benefit that we’ve already been able to identify and deploy is a reconsideration of how Fluent messages with internal selectors — like plurals — are presented to localizers: rather than showing a message in pieces, we’ve adopted the MF2 approach of presenting a message with its selectors (possibly more than one) applying to the whole message. This duplicates some parts of the message, but it also makes it easier to read and to translate via machine translation, as well as ensuring that it is internally consistent across all languages.

Another byproduct of this work is MF2’s message data model: Unlike anything before it, it is capable of representing all messages in all languages in all formats. We are currently refactoring our tools and internal systems around this data model, allowing us to deduplicate file format-specific tooling, making it easier to add new features and support new syntaxes. In Pontoon, this approach already made it easier to introduce syntax highlighting and improve the editing experience for right-to-left scripts. To hear more, you can join us at FOSDEM next month, where we’ll be presenting on this in more detail!
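
To make that claim more concrete, here is a deliberately toy Python sketch of what a format-agnostic message data model can look like. The class names and shapes below are invented for illustration; they do not reproduce the actual MF2 data model, which is defined in the Unicode MessageFormat 2 specification.

# Toy illustration only — not the real MF2 data model.
from dataclasses import dataclass, field

@dataclass
class Expression:
    # A placeholder, e.g. a reference to the variable $count.
    variable: str

@dataclass
class Variant:
    keys: list     # e.g. ["one"] for a plural category, ["*"] for the default
    pattern: list  # a mix of literal strings and Expressions

@dataclass
class Message:
    selectors: list = field(default_factory=list)  # empty = single-pattern message
    variants: list = field(default_factory=list)

# A plural message where the selector applies to the whole message:
msg = Message(
    selectors=[Expression("count")],
    variants=[
        Variant(["one"], ["You have ", Expression("count"), " new message."]),
        Variant(["*"],   ["You have ", Expression("count"), " new messages."]),
    ],
)

A representation along these lines can hold a plain string, a Fluent message, or a gettext plural equally well, which is what makes it useful as a shared backbone for tooling.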

At Mozilla, we do not presume to have all the answers, or to always be right. Instead, we try to share what we have, and to learn from others. With many points of view, we gain greater insights – and we help make the world a better place for all peoples of all demographic characteristics.

SeaMonkeyTeething problems with archives

Hi All,

I am currently fixing a mess with the archives for 2.53.18.1.

There are a lot of extraneous artifacts that were stored there and now I’m cleaning them up.

Thankfully, this will be the last time I’m using this way of pushing to release.

My apologies for the mess.

:ewong

SeaMonkeySeaMonkey 2.53.18.1 updates

Hi All,

Just want to mention that the updates will be available soon.

Thank you for your patience.

:ewong

SeaMonkeySeaMonkey 2.53.18.1 is out!

Hi All,

Happy New Year, everyone!

The SeaMonkey Project is pleased to announce the very first release of the year: SeaMonkey 2.53.18.1! As it is a security fix, please check out [1] and/or [2] for release notes.

Best regards,

:ewong

[1] – https://www.seamonkey-project.org/releases/seamonkey2.53.18.1

[2] – https://www.seamonkey-project.org/releases/2.53.18.1

 

Mozilla L10NMozilla Localization in 2023

A Year in Data

The Mozilla localization community had a busy and productive 2023. Let’s look at some numbers that defined our year:

  • 32 projects and 258 locales set up in Pontoon
  • 3,685 new user registrations
  • 1,254 active users, submitting at least one translation (on average 235 users per month)
  • 432,228 submitted translations
  • 371,644 approved translations
  • 23,866 new strings to translate

[Slide: summary of 2023 Pontoon activity, with the Mozilla Localization team logo and a lion cub holding a thank-you sign]

Thank you to all the volunteers who contributed to Mozilla’s localization efforts over the last 12 months!

In case you’re curious about the lion theme: localization is often referred to as l10n, a numeronym which looks like the word lion. That’s why our team’s logo is a lion head, stylized as the original Mozilla logo by artist Shepard Fairey.

Pontoon Development

A core area of focus in 2023 was pretranslation. From the start, our goal with this feature was to support the community by making it easier to leverage existing translations and provide a way to bootstrap translation of new content.

When pretranslation is enabled, any new string added in Pontoon will be pretranslated using a 100% match from translation memory or — if no match exists — the Google AutoML Translation engine, with a model custom-trained on the locale’s existing translation memory. Translations are stored in Pontoon with a special “pretranslated” status so that localizers can easily find and review them. Pretranslated strings are also saved to repositories (e.g. GitHub), and eventually ship in the product.
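
As a rough sketch of that fallback logic (the helper functions here are illustrative stand-ins, not Pontoon’s actual internals):

# Hypothetical sketch of the pretranslation fallback described above;
# the helpers are stand-ins for illustration, not Pontoon's real APIs.
TRANSLATION_MEMORY = {("Save", "fr"): "Enregistrer"}  # toy TM store

def tm_perfect_match(source, locale):
    # Return a 100% translation memory match, if one exists.
    return TRANSLATION_MEMORY.get((source, locale))

def automl_translate(source, locale):
    # Stand-in for the locale's custom-trained Google AutoML Translation model.
    return f"<machine translation of {source!r} into {locale}>"

def pretranslate(source, locale):
    translation = tm_perfect_match(source, locale) or automl_translate(source, locale)
    # Stored with a special status so localizers can easily find and review it.
    return {"string": translation, "status": "pretranslated"}

print(pretranslate("Save", "fr"))       # uses the TM match
print(pretranslate("Open file", "fr"))  # falls back to machine translation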

You can find more details on how we approached testing and involved the community in this blog post from July. Over the course of 2023 we pretranslated 14,033 strings for 16 locales across 15 projects.

Towards the end of the year, we also worked on two features that have been long requested by users: 1) it’s now possible to use Pontoon with a light theme; and 2) we improved the translation experience on mobile, with the original 3-column layout adapting to smaller screen sizes.

Screenshot of Pontoon’s UI with the light theme selected.

Screenshot of Pontoon UI on a smartphone running Firefox for Android

Listening to user feedback remains our priority: in case you missed it, we have just published the results of a new survey, where we asked localizers which features they would like to see implemented in Pontoon. We look forward to implementing some of your fantastic ideas in 2024!

Community

Community is at the core of Mozilla’s localization model, so it’s crucial to identify sustainability issues as early as possible. Relying only on completion levels, or on how quickly a locale can respond to urgent localization requests, is not sufficient to really understand the health of a community. Indeed, an extremely dedicated volunteer can mask deeper problems, and these issues only become visible — and urgent — when such a person leaves a project, potentially without a clear succession plan.

To prevent these situations, we’ve been researching ways to measure the health of each locale by analyzing multiple data points — for example, the number of new sign-ups actively contributing to localization and getting reviews from translators and managers — and we’ve started reaching out to specific communities to trial interventions. With the help of existing locale managers, this resulted in several promotions to translator (Arabic, Czech, German) or even manager (Czech, Russian, Simplified Chinese).
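
As a toy illustration of what combining several such signals might look like (the signal names and thresholds below are invented for this example and are not the team’s actual model):

# Invented example of multi-signal community health checks; the thresholds
# are illustrative, not the localization team's real criteria.
def locale_health_concerns(active_managers, active_translators,
                           newcomers_contributing, avg_review_wait_days):
    concerns = []
    if active_managers == 0:
        concerns.append("no active manager")
    if active_translators < 2:
        concerns.append("reviews depend on a single person")
    if newcomers_contributing == 0:
        concerns.append("new sign-ups are not being retained")
    if avg_review_wait_days > 14:
        concerns.append("suggestions wait too long for review")
    return concerns

print(locale_health_concerns(1, 1, 0, 21))
# ['reviews depend on a single person', 'new sign-ups are not being retained',
#  'suggestions wait too long for review']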

During these conversations with various local communities, we heard loud and clear how important in-person meetings are to understanding what Mozilla is working on, and how interacting with other volunteers and building personal connections is extremely valuable. Over the past few years, some unique external factors — COVID and an economic recession chief among them — made the organization of large scale events challenging. We investigated the feasibility of small-scale, local events organized directly by community members, but this initiative wasn’t successful since it required a significant investment of time and energy by localizers on top of the work they were already doing to support Mozilla with product localization.

To counterbalance the lack of in-person events and keep volunteers in the loop, we organized two virtual fireside chats for localizers in May and November (links to recordings).

What’s coming in 2024

In order to strengthen our connection with existing and potential volunteers, we’re planning to organize regular online events this year. We intend to experiment with different formats and audiences for these events, while also improving our presence on social networks (did you know we’re on Mastodon?). Keep an eye out on this blog and Matrix for more information in the coming months.

As many of you have asked in the past, we also want to integrate email functionalities in Pontoon; users should be able to opt in to receive specific communications via email on top of in-app notifications. We also plan to experiment with automated emails to re-engage inactive users with elevated permissions (translators, managers).

It’s clear that a community can only be sustainable if there are active managers and translators to support new contributors. On one side, we will work to create onboarding material for new volunteers so that existing managers and translators can focus on the linguistic aspects. On the other, we’ll engage the community to discuss a refined set of policies that foster a more inclusive and transparent environment. For example, what should the process be when a locale doesn’t have a manager or active translator, yet there are contributors not receiving reviews? How long should an account retain elevated permissions if it’s apparently gone silent? What are the criteria for promotions to translator or manager roles?

For both initiatives, we will reach out to the community for feedback in the coming months.

As for Pontoon, you can expect some changes under the hood to improve performance and overall reliability, but also new user-facing features (e.g. fine-grained search, better translation memory management).

Thank you!

We want to thank all the volunteers who have dedicated their time and skills to localizing Mozilla products. Your tireless efforts are essential in advancing the Mozilla mission of fostering an open and accessible internet for everyone.

Looking ahead, we are excited about the opportunities that 2024 brings. We look forward to working alongside our community to expand the impact of localization and continue breaking down language barriers. Your support is invaluable, and together, we will continue shaping a more inclusive digital world. Thank you for being an integral part of this journey.

Open Policy & AdvocacyMozilla Weighs in on State Comprehensive Privacy Proposals

[Read our letters to legislators in Massachusetts and Maine.]

Today, Mozilla is calling for the passage of strong state privacy protections, such as those modeled on the American Data Privacy and Protection Act at the federal level. This action came in the form of letters to relevant committee leadership in the Massachusetts and Maine legislatures encouraging them to consider and pass the proposals that have been introduced in their respective states.

At Mozilla, we believe that individuals’ security and privacy on the internet are fundamental and must not be treated as optional. In the best of worlds, this “privacy for all” mindset would mean a law at the federal level that protects all Americans from abuse and misuse of their data, which is why we have advocated for decisive action to pass a comprehensive Federal privacy law.

Recently, however, even more states are considering enacting privacy protections. These protections, if crafted incorrectly, could create a false facade of privacy for users and risk enshrining harmful data practices in the marketplace. If crafted correctly, they could provide vital privacy protections and drive further conversation of federal legislation.

The proposals we weighed in on today meet the Mozilla standard for privacy because they: require data minimization; create strong security requirements; prohibit deceptive design that impairs individual autonomy; prohibit algorithmic discrimination; and more.

Mozilla has previously supported legislative and regulatory action in California, and we hope to see more state legislatures introduce and pass strong privacy legislation.

The post Mozilla Weighs in on State Comprehensive Privacy Proposals appeared first on Open Policy & Advocacy.

SUMO BlogIntroducing Mandy and Donna

Hey everybody,

I’m so thrilled to start 2024 with good news for you all. Mandy Cacciapaglia and Donna Kelly are joining our Customer Experience team as a Product Support Manager for Firefox and a Content Strategist. Here’s a bit from them both:

Hi there! Mandy here — I am Mozilla’s new Product Support Manager for Firefox. I’m so excited to collaborate with this awesome group, and dive into Firefox reporting, customer advocacy and feedback, and product support so we can keep elevating our amazing browser. I’m based in NYC, and outside of work you will find me watercolor painting, backpacking, or reading mysteries.

Hi everyone! I’m Donna, and I am very happy to be here as your new Content Strategist on the Customer Experience team. I will be working on content strategy to improve our knowledge base, documentation, localization, and overall user experience! In my free time, I love hanging out with my dog (a rescue tri-pawd named Sundae), hiking, reading (big Stephen King fan), playing video games, and anything involving food. Looking forward to getting to know everyone!

You’ll hear more from them in our next community call (which will be on January 17). In the meantime, please join me to congratulate and welcome both of them into the team!

SUMO Blog2023 in a nutshell

Hey SUMO nation,

As we’re inching closer towards 2024, I’d like to take a step back to reflect on what we’ve accomplished in 2023. It’s a lot, so let’s dive in! 

  • Overall pageviews

From January 1 to the end of November, we got a total of 255+ million pageviews on SUMO. Pageviews have been dropping consistently since 2018, and this time around we’re down 7% from last year. This is far from bad, though, as it’s our smallest yearly drop since 2018.

  • Forum

In the forum, we’ve seen an average of 2.8k questions per month this year, a 6.67% downturn from last year. We also saw a downturn in our answer rate within 72 hours: 71%, compared to 75% last year. Our solved rate dropped as well, to 10% this year from 14% last year. In a typical month, around 200 contributors (excluding original posters) were active on the forum, compared to 240 last year.

*See Support glossary
  • KB

We did see an increase across KB contribution metrics this year, though. In total, we received 1,990 revisions (a 14% increase from last year) from 136 non-staff members. Our review rate this year is 80%, and our approval rate is 96% (compared to 73% and 95% in 2022). In total, we had 29 non-staff reviewers this year.

  • Localization

On the localization side, the numbers are overall pretty stable. Total revisions are around 13K (same as last year) from 400 non-staff members, with a 93% review rate and 99% approval rate (compared to 90% and 99% last year) from a total of 118 non-staff reviewers.

  • Social Support

Year to date, Social Support contributors have sent a total of 850 responses (compared to 908 last year) and interacted with 1,645 conversations. Our resolved rate dropped to 40.74%, compared to 70% last year. We made major improvements on other metrics, though: contributors were responsible for a larger share of our total responses this year (75%, compared to 39.6% last year), and our conversion rate improved from 20% in 2022 to 52% this year. In other words, our contributors took on more of the overall inbound volume and replied more consistently than last year.

  • Mobile Store Support

On the Mobile Store Support side, our contributors this year posted 1,260 replies and interacted with 3,149 conversations in total, putting our conversion rate at 36% this year compared to 46% last year. Those are mostly contributions to non-English reviews.


In addition to the regular contribution, here are some of the community highlights from 2023:

  • We did some internal assessment and external benchmarking in Q1, which informed our experiments in Q2. Learn the results of those experiments from this call.
  • We also updated our contributor guidelines, including article review guidelines and created a new policy around the use of generative AI.
  • By the end of the year, the Spanish community had done something really amazing: they managed to translate and update 70% of in-product desktop articles (as opposed to 11% when we started the call for help).

We’d also like to take this opportunity to highlight some of the Customer Experience team’s projects that we tackled this year (some with close involvement and help from the community).

The first, an effort to improve SUMO’s navigation and information architecture, was split into two concurrent projects:

  • Phase 1 Navigation Improvements — initial phase aims to:
    • Surface the community forums in a clearer way
    • Streamline the Ask a Question user flow
    • Improve link text and calls-to-action to better match what users might expect when navigating on the site
    • Updates to the main navigation and small changes to additional site UI (like sidebar menus, page headers, etc.) can be expected
  • Cross-system content structure and hierarchy — the goal of this project is to:
    • Improve our ability to gather data metrics across functional areas of SUMO (KB, ticketing, and forums)
    • Improve recommended “next steps” by linking related content across KB and Forums
    • Create opportunities for grouping and presenting content on SUMO by alternate categories and not just by product

Next came a SUMO customer experience research study. Project background:

    • This research was conducted between August 2023 and November 2023. The goal of this project is to provide actionable insights on how to improve the customer experience of SUMO.
    • Research approach:
      • Stakeholder engagement process
      • Surveyed 786 Mozilla Support users
      • Conducted three rounds of interviews recruited from survey respondents:
        • Sprint 1: Evaluated content and article structure
        • Sprint 2: Evaluated the overall SUMO customer experience
        • Sprint 3: Co-design of an improved SUMO experience
      • This research was conducted by PH1 Research, who have conducted similar research for Mozilla in 2022.
  • Please consider: Participants for this study were recruited via a banner ad in SUMO. As a result, these findings only reflect the experiences and needs of users who actively use SUMO. They do not reflect users who may not be aware of SUMO or have decided not to use it.

Executive Summary:

  • Users consider SUMO a trustworthy and content-rich resource. SUMO offers resources that can appropriately help users of different technical levels. The most common user flow is via Google search. Very few are logging in to SUMO directly.
  • The goal of SUMO should be to assist Mozilla users to improve their product experience. Content should be consolidated and optimized to show fewer, high quality results on Google search and SUMO search. The article experience should aim to boost relevance and task success. The SUMO website should aid users to diagnose systems, understand problems, find solutions, and discover additional resources when needed.

Recommendations:

  • Our recommendation is that SUMO’s strategy should be to provide a self-service experience that makes users feel that Mozilla cares about their problems and offers a range of solutions appealing to various persona types (technical/non-technical).
  • The pillars for making SUMO valuable to users should be:
    • Confidence: As a user, I need to be confident that the resource provided will resolve my problem.
    • Guidance: As a user, I need to feel guided through the experience of finding a solution, even when I don’t understand the problem or solutions available.
    • Trust: As a user, I need to trust that the resources have been provided by a trustworthy authority on the subject (SUMO scores well here because of Mozilla).
  • CMS modernization:
    • Modernizing our CMS can provide significant benefits in terms of user experience, performance, security, flexibility, collaboration, and analytics.
    • This resulted in a decision to move forward with the plan to migrate our CMS to Wagtail — a modern, open-source content management system focused on flexibility and user experience.
    • We are currently in the process of planning the next phases for implementation.
  • Pocket migration to SUMO:
    • We successfully migrated and published 100% of previously identified Pocket help center content from HelpScout’s CMS to SUMO’s CMS, with proper redirects in place to ensure a seamless transition for the user.
    • The localization community began efforts to help us localize the content, which had previously only been available in en-US.
  • Firefox account to Mozilla account rebrand in early November.
  • Officially supporting account users and a login-less support flow (read more about that here).
  • Database migration from MySQL to Postgres:
    • This was a very challenging project, not only because we had to migrate our large codebase and very large data set from MySQL, but also because of the challenge of performing the actual data migration within a reasonable period of time, on the order of a few hours at most, so that we could minimize the disruption to users and contributors. In the end, it was a multi-month project comprising coordinated research, planning, and effort between our engineering team and our SRE (Site Reliability Engineering) team. We’re now on a much better database foundation for the future, because:
      • Postgres is better suited for enterprise-level applications like ours, with very large datasets, frequent write operations, and complex queries.
      • We can also take advantage of connection pooling via PgBouncer, which will improve our resilience under huge and often malicious traffic spikes (which have been occurring much more frequently during the past year).
      • Last but not least, our database now supports the full Unicode character set, which means it can fully handle all characters, including emojis, in all languages. Our MySQL database had only limited Unicode support due to its initial configuration, and rather than invest in resolving that, which would have meant a significant chunk of work, we decided to invest in Postgres instead.

This year, you all continued to impress us with the persistence and dedication you show to Mozilla by contributing to our platform, despite the current state of our world. To every single one of you who contributed in one way or another to SUMO, I’d like to express my sincere gratitude, because without you all, our platform is just an empty shell. To celebrate this, we’ve prepared this simple dashboard with contribution data that you can filter by username, so you can see how much you’ve accomplished this year (we talked about this in our last community call this year).

Let’s be proud of what we’ve accomplished to keep the internet as a global & public resource for everybody, and let’s keep on rocking the helpful web through 2024 and beyond!

If you’ve been watching from the sidelines and are interested in contributing to Mozilla Support, please head over to our Contribute page to learn more about our programs!

Open Policy & AdvocacyMozilla’s Comments to FCC: Net Neutrality Essential for Competition, Innovation, Privacy

[UPDATE: Read our reply comments here]

[Read our full submission here]

Net neutrality – the concept that your internet provider should not be able to block, throttle, or prioritize elements of your internet service, such as to favor their own products or business partners – is on the docket again in the United States. With the FCC putting out a notice of proposed rulemaking (NPRM) to reinstate net neutrality, Mozilla weighed in last week with a clear message: the FCC should reestablish these common sense rules as soon as possible.

We have been fighting for net neutrality around the world for the better part of a decade and a half. Most notably, this included Mozilla’s challenge to the Trump FCC’s dismantling of net neutrality in 2018.

American internet users are on the cusp of renewed protections for the open internet. Our recently submitted comment to the FCC’s NPRM took a step back to remind the FCC and the public of the real benefits of net neutrality: Competition, Grassroots Innovation, Privacy, and Transparency and Accountability.

Simply put, if the FCC moves forward with reclassification of broadband as a Title II service, it will protect innovation in edge services; unlock vital privacy safeguards; and prevent ISPs from leveraging their market power to control people’s experiences online. With vast increases in our dependence on the internet since the COVID-19 pandemic, these protections are more important than ever.

We encourage others who are passionate about the open internet to file reply comments on the proceeding, which are due January 17, 2024.

You can read our full comment here.

The post Mozilla’s Comments to FCC: Net Neutrality Essential for Competition, Innovation, Privacy appeared first on Open Policy & Advocacy.

Mozilla L10N2024 Pontoon survey results

The results from the 2024 Pontoon survey are in and the 3 top-voted features we commit to implement are:

  1. Add ability to edit Translation Memory entries (611 votes).
  2. Improve performance of Pontoon translation workspace and dashboards (603 votes).
  3. Add ability to propose new Terminology entries (595 votes).

The remaining features ranked as follows:

  1. Add ability to preview Fluent strings in the editor (572 votes).
  2. Link project names in Concordance search results to corresponding strings (540 votes).
  3. Add “Copy translation from another locale as suggestion” batch action (523 votes).
  4. Add ability to receive automated notifications via email (521 votes).
  5. Add Timeline tab with activity to Project, Locale, ProjectLocale dashboards (501 votes).
  6. Add ability to read notifications one by one, or mark notifications as unread (495 votes).
  7. Add virtual keyboard with special characters to the editor (469 votes).

We thank everyone who dedicated their time to share valuable responses and suggest potential features for us to consider implementing!

A total of 365 Pontoon users participated in the survey, 169 of which voted on all features. Each user could give each feature 1 to 5 votes. Check out the full report.

We look forward to implementing these new features and working towards a more seamless and efficient translation experience with Pontoon. Stay tuned for updates!

Mozilla Add-ons BlogA new world of open extensions on Firefox for Android has arrived

Woo-hoo you did it! Hundreds of add-on developers heeded the call to make their desktop extensions compatible for today’s debut of a new open ecosystem of Firefox for Android extensions. More than 450 Firefox for Android extensions are now discoverable on the addons.mozilla.org (AMO) Android homepage. It’s a strong start to an exciting new frontier of mobile browser customization. Let’s see where this goes.

Are you a developer who hasn’t migrated your desktop extension to Firefox for Android yet? Here’s a good starting point for developing extensions for Firefox for Android.

If you’ve already embarked on the mobile extension journey and have questions/insights/feedback to offer as we continue to optimize the mobile development experience, we invite you to join the discussion about top APIs missing on Firefox for Android.

Have you found any Firefox for Android bugs? Do tell!

The post A new world of open extensions on Firefox for Android has arrived appeared first on Mozilla Add-ons Community Blog.

SeaMonkeyUpdates fixed

Hi All,

The updates have been fixed, as well as a lot of the missing files.

It seems I simply cannot handle multiple changes at the same time.

My apologies for the inconveniences caused.

:ewong

 

SeaMonkeyUpdates… erm.. update.

Hi all,

I have taken a look at what’s going on and am a bit puzzled.

  • Linux-i686 locales:
    • Missing: el, en-US, es-AR, es-ES, fi, fr, ka, nb-NO, nl, pl, pt-PT, ru, sk, sv-SE
    • Existing: cs, de, en-GB, hu, it, ja, pt-BR, zh-CN, zh-TW
  • Linux x86-64 locales:
    • Missing: de, el, en-US, es-ES, hu, it, ka, nb-NO, ru, sk, sv-SE, zh-TW
    • Existing: cs, en-GB, es-AR, fi, fr, ja, nl, pl, pt-BR, pt-PT, zh-CN
  • Mac locales:
    • Missing: cs, en-US, es-AR, fr, pt-BR, sk, zh-CN
    • Existing: de, el, en-GB, es-ES, fi, hu, it, ja-JP-mac, ka, nb-NO, nl, pl, pt-PT, ru, sv-SE, zh-TW
  • Win32 Locales:
    • Missing: cs, de, fi, nl, pl, pt-PT, ru, sv-SE
    • Existing: el, en-GB, en-US, es-AR, es-ES, fr, hu, it, ja, ka, nb-NO, pt-BR, sk, zh-CN, zh-TW
  • Win64 locales:
    • Missing: cs, de, en-GB, en-US, fr, it, ja, pl, pt-BR
    • Existing: el, es-AR, es-ES, fi, hu, ka, nb-NO, nl, pt-PT, ru, sk, sv-SE, zh-CN, zh-TW

No, I haven’t figured out the pattern behind the missing files.

So I’ll be changing the updates to use the ‘old’ place while I fix the ‘new’ place. (*wink*)

:ewong

SeaMonkeyMigration away from archive.mozilla.org addendum

Hi All,

In my previous blog post on the SeaMonkey Project migrating away from archive.mozilla.org, it seems there was some misunderstanding in the wording (I’ve just changed it at the request of Mozilla).

When I stated “We need to stop using archive.mozilla.org” and “They will most likely be left as is until Mozilla blows it away (or I do).”,  I literally meant “We” as in “the SeaMonkey Project”.

So in essence, what I *was* trying to state (and failing miserably) is that “The SeaMonkey Project needs to migrate away from archive.mozilla.org.” After 2023, when you go to https://archive.mozilla.org/pub/, you will not see seamonkey there.

End of an era.

:ewong

 

 

 

SeaMonkeyUpdates issue

Hi All,

It seems as if there are some missing updates, and I’m currently working on it.

Sorry for the inconvenience.

:ewong

 

hacks.mozilla.orgPuppeteer Support for the Cross-Browser WebDriver BiDi Standard

We are pleased to share that Puppeteer now supports the next-generation, cross-browser WebDriver BiDi standard. This new protocol makes it easy for web developers to write automated tests that work across multiple browser engines.

How Do I Use Puppeteer With Firefox?

The WebDriver BiDi protocol is supported starting with Puppeteer v21.6.0. When calling puppeteer.launch pass in "firefox" as the product option, and "webDriverBiDi" as the protocol option:

const browser = await puppeteer.launch({
  product: 'firefox',
  protocol: 'webDriverBiDi',
});

You can also use the "webDriverBiDi" protocol when testing in Chrome, reflecting the fact that WebDriver BiDi offers a single standard for modern cross-browser automation.

In the future we expect "webDriverBiDi" to become the default protocol when using Firefox in Puppeteer.

Doesn’t Puppeteer Already Support Firefox?

Puppeteer has had experimental support for Firefox based on a partial re-implementation of the proprietary Chrome DevTools Protocol (CDP). This approach had the advantage that it worked without significant changes to the existing Puppeteer code. However, the CDP implementation in Firefox is incomplete and has significant technical limitations. In addition, the CDP protocol itself is not designed to be cross-browser and undergoes frequent breaking changes, making it unsuitable as a long-term solution for cross-browser automation.

To overcome these problems, we’ve worked with the WebDriver Working Group at the W3C to create a standard automation protocol that meets the needs of modern browser automation clients: this is WebDriver BiDi. For more details on the protocol design and how it compares to the classic HTTP-based WebDriver protocol, see our earlier posts.

As the standardization process has progressed, the Puppeteer team has added a WebDriver BiDi backend in Puppeteer, and provided feedback on the specification to ensure that it meets the needs of Puppeteer users, and that the protocol design enables existing CDP-based tooling to easily transition to WebDriver BiDi. The result is a single protocol based on open standards that can drive both Chrome and Firefox in Puppeteer.

Are All Puppeteer Features Supported?

Not yet; WebDriver BiDi is still a work in progress, and doesn’t yet cover the full feature set of Puppeteer.

Compared to the Chrome+CDP implementation, there are some feature gaps, including support for accessing the cookie store, network request interception, some emulation features, and permissions. These features are actively being standardized and will be integrated as soon as they become available. For Firefox, the only missing feature compared to the Firefox+CDP implementation is cookie access. In addition, WebDriver BiDi already offers improvements, including better support for multi-process Firefox, which is essential for testing some websites. More information on the complete set of supported APIs can be found in the Puppeteer documentation, and as new WebDriver BiDi features are enabled in Gecko, we’ll publish details on the Firefox Developer Experience blog.

Nevertheless, we believe that the WebDriver-based Firefox support in Puppeteer has reached a level of quality which makes it suitable for many real automation scenarios. For example at Mozilla we have successfully ported our Puppeteer tests for pdf.js from Firefox+CDP to Firefox+WebDriver BiDi.

Is Firefox’s CDP Support Going Away?

We currently don’t have a specific timeline for removing CDP support. However, maintaining multiple protocols is not a good use of our resources, and we expect WebDriver BiDi to be the future of remote automation in Firefox. If you are using the CDP support outside of the context of Puppeteer, we’d love to hear from you (see below), so that we can understand your use cases, and help transition to WebDriver BiDi.

Where Can I Provide Feedback?

For any issues you experience when porting Puppeteer tests to BiDi, please open issues in the Puppeteer issue tracker, unless you can verify the bug is in the Firefox implementation, in which case please file a bug on Bugzilla.

If you are currently using CDP with Firefox, please join the #webdriver matrix channel so that we can discuss your use case and requirements, and help you solve any problems you encounter porting your code to WebDriver BiDi.

Update: The Puppeteer team have published “Harness the Power of WebDriver BiDi: Chrome and Firefox Automation with Puppeteer”.

The post Puppeteer Support for the Cross-Browser WebDriver BiDi Standard appeared first on Mozilla Hacks - the Web developer blog.

SeaMonkeySeaMonkey 2.53.18 is now out!

Hi All,

The SeaMonkey Project is pleased to announce the immediate release of version 2.53.18 of this long-standing Internet suite.

Please check out [1] and/or [2].  Also note, the updates should be up now.

:ewong

[1] – https://www.seamonkey-project.org/releases/seamonkey2.53.18

[2] – https://www.seamonkey-project.org/releases/2.53.18

SUMO BlogWhat’s up with SUMO – Q4 2023

Hi everybody,

The last part of our quarterly update for 2023 comes early with this post. That means we won’t have the data from December just yet (but we’ll make sure to update the post later). Lots of updates since last quarter, so let’s just dive in!

Welcome note and shout-outs from Q4

If you know anyone that we should feature here, please contact Kiki and we’ll make sure to add them in our next edition.

Community news

  • Kiki came back from maternity leave and we bid Sarto farewell, all in this quarter.
  • We have a new contributor policy around the use of generative AI tools. This was one of the things that Sarto initiated back then, so I’d like to give the credit to her. Please take some time to read and familiarize yourself with the policy.
  • Spanish contributors are pushing really hard to help localize the in-product and top articles for Firefox Desktop. I’m so proud that at the moment, 57.65% of Firefox Desktop in-product articles have been translated & updated in Spanish (compared to 11.8% when we started) and 80% of the top 50 articles are localized and updated in Spanish. Huge props to those who I mentioned in the shout-outs section above.
  • We’ve got new locale leaders for Catalan and Indonesian (as I mentioned above). Please join me in congratulating Handi S & Carlos Tomás on their new roles!
  • The Customer Experience team has officially moved out of the Marketing org and into the Strategy and Operations org led by Suba Vasudevan (more about that in our community meeting in Dec).
  • We’ve migrated the Pocket support platform (previously under Help Scout) to SUMO. That means Pocket help articles are now available on Mozilla Support, and people looking for Pocket premium support can also ask a question through SUMO.
  • Firefox accounts transitioned to Mozilla accounts in early November this year. Read this article to learn more about the background for this transition.
  • We did a SUMO sprint for the Review Checker feature with the release of Firefox 119, even though we couldn’t find lots of chatter about it.
  • Please check out this thread to learn more about recent platform fixes and improvements (including the use of emoji!)
  • We’ve also updated and moved Kitsune documentation to a GitHub page recently. Check out this thread to learn more.

Catch up

  • Watch the monthly community call if you haven’t. Learn more about what’s new in October, November, and December! Reminder: Don’t hesitate to join the call in person if you can. We try our best to provide a safe space for everyone to contribute. You’re more than welcome to lurk in the call if you don’t feel comfortable turning on your video or speaking up. If you feel shy to ask questions during the meeting, feel free to add your questions on the contributor forum in advance, or put them in our Matrix channel, so we can answer them during the meeting. First time joining the call? Check out this article to get to know how to join. 
  • If you’re an NDA’ed contributor, you can watch the recording of the Customer Experience weekly scrum meeting from AirMozilla to catch up with the latest product updates.
  • Consider subscribing to the Firefox Daily Digest to get daily updates about Firefox from across different platforms.

Check out SUMO Engineering Board to see what the platform team is currently doing and submit a report through Bugzilla if you want to report a bug/request for improvement.

Community stats

KB

KB pageviews (*)

* The KB pageviews number is the total of KB pageviews for /en-US/ only
Month | Page views | Vs previous month
Oct 2023 | 7,061,331 | 9.36%
Nov 2023 | 6,502,248 | -7.92%
Dec 2023 | TBD | TBD

Top 5 KB contributors in the last 90 days: 

KB Localization

Top 10 locales based on total page views

Locale | Oct 2023 pageviews (*) | Nov 2023 pageviews (*) | Dec 2023 pageviews (*) | Localization progress (per Dec 7) (**)
de | 10.66% | 10.97% | TBD | 93%
fr | 7.10% | 7.23% | TBD | 80%
zh-CN | 6.84% | 6.81% | TBD | 92%
es | 5.59% | 5.49% | TBD | 27%
ja | 5.10% | 4.72% | TBD | 33%
ru | 3.67% | 3.8% | TBD | 88%
pt-BR | 3.30% | 3.11% | TBD | 43%
it | 2.52% | 2.48% | TBD | 96%
zh-TW | 2.42% | 2.61% | TBD | 2%
pl | 2.13% | 2.11% | TBD | 83%
* Locale pageviews are the overall pageviews from the given locale (KB and other pages)

** Localization progress is the percentage of localized articles out of all KB articles per locale

Top 5 localization contributors in the last 90 days: 

Forum Support

Forum stats

Month | Total questions | Answer rate within 72 hrs | Solved rate within 72 hrs | Forum helpfulness
Oct 2023 | 3,897 | 66.33% | 10.01% | 59.68%
Nov 2023 | 2,660 | 64.77% | 9.81% | 65.74%
Dec 2023 | TBD | TBD | TBD | TBD

Top 5 forum contributors in the last 90 days: 

Social Support

Month | Total tweets | Total moderation by contributors | Total reply by contributors | Respond conversion rate
Oct 2023 | 311 | 209 | 132 | 63.16%
Nov 2023 | 245 | 137 | 87 | 63.50%
Dec 2023 | TBD | TBD | TBD | TBD

Top 5 Social Support contributors in the past 3 months: 

  1. Tim Maks 
  2. Wim Benes
  3. Daniel B
  4. Philipp T
  5. Pierre Mozinet

Play Store Support

Firefox for Android only

Month | Total reviews | Total conv interacted by contributors | Total conv replied by contributors
Oct 2023 | 6,334 | 45 | 18
Nov 2023 | 6,231 | 281 | 75
Dec 2023 | TBD | TBD | TBD

Top 5 Play Store contributors in the past 3 months: 

Product updates

To catch up on product release updates, please watch the recording of the Customer Experience scrum meeting on AirMozilla. You can also subscribe to the AirMozilla folder by clicking on the Subscribe button at the top right corner of the page to get notifications each time we add a new recording.

Useful links:

 

Web Application SecurityMozilla VPN Security Audit 2023

To provide transparency into our ongoing efforts to protect your privacy and security on the Internet, we are releasing a security audit of Mozilla VPN that Cure53 conducted earlier this year.

The scope of this security audit included the following products:

  • Mozilla VPN Qt6 App for macOS
  • Mozilla VPN Qt6 App for Linux
  • Mozilla VPN Qt6 App for Windows
  • Mozilla VPN Qt6 App for iOS
  • Mozilla VPN Qt6 App for Android

Here’s a summary of the items discovered within this security audit that the auditors rated as medium or higher severity:

  • FVP-03-003: DoS via serialized intent 
      • Data received via intents within the affected activity should be validated to prevent the Android app from exposing certain activities to third-party apps.
      • There was a risk that a malicious application could leverage this weakness to crash the app at any time.
      • This risk was addressed by Mozilla and confirmed by Cure53.
  • FVP-03-008: Keychain access level leaks WG private key to iCloud 
      • Cure53 confirmed that this risk has been addressed due to an extra layer of encryption, which protects the Keychain specifically with a key from the device’s secure enclave.
  • FVP-03-009: Lack of access controls on daemon socket
      • Access controls needed to be implemented to guarantee that the user sending commands to the daemon is permitted to initiate the intended action.
      • This risk has been addressed by Mozilla and confirmed by Cure53.
  • FVP-03-010: VPN leak via captive portal detection 
      • Cure53 advised that the captive portal detection feature be turned off by default to prevent an opportunity for IP leakage when using maliciously set up WiFi hotspots.
      • Mozilla addressed the risk by no longer pinging for a captive portal outside of the VPN tunnel.
  • FVP-03-011: Lack of local TCP server access controls
      • The VPN client exposes a local TCP interface running on port 8754, which is bound to localhost. Users on localhost can issue a request to the port and disable the VPN.
      • Mozilla addressed this risk as recommended by Cure53.
  • FVP-03-012: Rogue extension can disable VPN using mozillavpnnp (High)
      • mozillavpnnp does not sufficiently restrict the application caller.
      • Mozilla addressed this risk as recommended by Cure53.

If you’d like to read the detailed report from Cure53, including all low and informational items, you can find it here.

 

The post Mozilla VPN Security Audit 2023 appeared first on Mozilla Security Blog.

hacks.mozilla.orgFirefox Developer Edition and Beta: Try out Mozilla’s .deb package!

A month ago, we introduced our Nightly package for Debian-based Linux distributions. Today, we are proud to announce we made our .deb package available for Developer Edition and Beta!

We’ve set up a new APT repository for you to install Firefox as a .deb package. These packages are compatible with the same Debian and Ubuntu versions as our traditional binaries.

Your feedback is invaluable, so don’t hesitate to report any issues you encounter to help us improve the overall experience.

Adopting Mozilla’s Firefox .deb package offers multiple benefits:

  • you will get better performance thanks to our advanced compiler-based optimizations,
  • you will receive the latest updates as fast as possible because the .deb is integrated into Firefox’s release process,
  • you will get hardened binaries with all security flags enabled during compilation,
  • you can continue browsing after upgrading the package, meaning you can restart Firefox at your convenience to get the latest version.
To set up the APT repository and install the Firefox .deb package, simply follow these steps:

# Create a directory to store APT repository keys if it doesn't exist:
sudo install -d -m 0755 /etc/apt/keyrings

# Import the Mozilla APT repository signing key:
wget -q https://packages.mozilla.org/apt/repo-signing-key.gpg -O- | sudo tee /etc/apt/keyrings/packages.mozilla.org.asc > /dev/null

# The fingerprint should be 35BAA0B33E9EB396F59CA838C0BA5CE6DC6315A3
gpg -n -q --import --import-options import-show /etc/apt/keyrings/packages.mozilla.org.asc | awk '/pub/{getline; gsub(/^ +| +$/,""); print "\n"$0"\n"}'

# Next, add the Mozilla APT repository to your sources list:
echo "deb [signed-by=/etc/apt/keyrings/packages.mozilla.org.asc] https://packages.mozilla.org/apt mozilla main" | sudo tee -a /etc/apt/sources.list.d/mozilla.list > /dev/null

# Update your package list and install the Firefox .deb package:
sudo apt-get update && sudo apt-get install firefox-beta  # Replace "beta" by "devedition" for Developer Edition

And that’s it! You have now installed the latest Firefox Beta/Developer Edition .deb package on your Linux.

Firefox supports more than a hundred different locales. The packages mentioned above are in American English, but we have also created .deb packages containing the Firefox language packs. To install a specific language pack, replace fr in the example below with the desired language code:

sudo apt-get install firefox-beta-l10n-fr

To list all the available language packs, you can use this command after adding the Mozilla APT repository and running sudo apt-get update:

apt-cache search firefox-beta-l10n

The post Firefox Developer Edition and Beta: Try out Mozilla’s .deb package! appeared first on Mozilla Hacks - the Web developer blog.

Mozilla L10NVote for new Pontoon features

It’s been a while since we last asked Pontoon users which new features we should develop, so we have decided to run another survey now.

But first, let’s take a look at the top-voted features from the last round that are all live now:

  1. Provide new contributors with guidelines before adding their first suggestion (details).
  2. Notify suggestion authors when their suggestions get reviewed (details).
  3. Pre-fill editor with 100% Translation Memory matches when available (details).

In addition to those, we also implemented a couple of features that didn’t make it into top 3:

  • Expose managers on team dashboards to help users get in touch with them easily (details).
  • Add a light theme (details).

You asked, we listened! 🙂

2024 Survey

It’s now time to vote again! We’re working on the Pontoon roadmap for 2024, and we commit to implementing at least the 3 features top-voted by Pontoon users.

Please let us know by December 11 how important the features listed below are to you by filling out this quick 5-minute survey:

  • Add virtual keyboard with special characters to the editor, customizable per locale (details).
  • Add “Copy translation from another locale as suggestion” batch action (details).
  • Link project names in Concordance search results to their corresponding strings (details).
  • Add ability to edit Translation Memory entries (details).
  • Add ability to propose new Terminology entries (details).
  • Improve overall performance of Pontoon translation workspace and dashboards (details).
  • Add ability to preview Fluent strings in the editor (details).
  • Add ability to receive automated notifications via email (details).
  • Add ability to read notifications one by one, or mark notifications as unread (details).
  • Add Timeline tab with activity to Project, Locale, ProjectLocale dashboards (details).

Note that at the end of the survey you will be able to add your own ideas, which you are always welcome to submit on GitHub.

hacks.mozilla.orgIntroducing llamafile

A special thanks to Justine Tunney of the Mozilla Internet Ecosystem (MIECO), who co-authored this blog post.

Today we’re announcing the first release of llamafile and inviting the open source community to participate in this new project.

llamafile lets you turn large language model (LLM) weights into executables.

Say you have a set of LLM weights in the form of a 4GB file (in the commonly-used GGUF format). With llamafile you can transform that 4GB file into a binary that runs on six OSes without needing to be installed.

This makes it dramatically easier to distribute and run LLMs. It also means that as models and their weights formats continue to evolve over time, llamafile gives you a way to ensure that a given set of weights will remain usable and perform consistently and reproducibly, forever.

We achieved all this by combining two projects that we love: llama.cpp (a leading open source LLM chatbot framework) with Cosmopolitan Libc (an open source project that enables C programs to be compiled and run on a large number of platforms and architectures). It also required solving several interesting and juicy problems along the way, such as adding GPU and dlopen() support to Cosmopolitan; you can read more about it in the project’s README.

This first release of llamafile is a product of Mozilla’s innovation group and was developed by Justine Tunney, the creator of Cosmopolitan. Justine has recently been collaborating with Mozilla via MIECO, and through that program Mozilla funded her work on the 3.0 release of Cosmopolitan (Hacker News discussion). With llamafile, Justine is excited to be contributing more directly to Mozilla projects, and we’re happy to have her involved.

llamafile is licensed Apache 2.0, and we encourage contributions. Our changes to llama.cpp itself are licensed MIT (the same license used by llama.cpp itself) so as to facilitate any potential future upstreaming. We’re all big fans of llama.cpp around here; llamafile wouldn’t have been possible without it and Cosmopolitan.

We hope llamafile is useful to you and look forward to your feedback.

 

 

The post Introducing llamafile appeared first on Mozilla Hacks - the Web developer blog.

Mozilla Add-ons BlogOpen extensions on Firefox for Android debut December 14 (but you can get a sneak peek today)

Starting December 14, 2023, extensions marked as Android compatible on addons.mozilla.org (AMO) will be openly available to Firefox for Android users.

“We’ve been so impressed with developer enthusiasm and preparation,” said Giorgio Natili, Firefox Director of Engineering. “Just a few weeks ago it looked like we might have a couple hundred Android extensions for launch, but now we can safely say AMO will have 400+ new Firefox for Android extensions available on December 14. We couldn’t be more thankful to our developer community for embracing this exciting moment.”

In anticipation of the launch of open extensions on Android, we just added a link to “Explore all Android extensions” on AMO’s Android page to make it easy to discover new content. And just for fun and to offer a taste of what’s to come, we also released a couple dozen new open extensions for Android. You can find them listed beneath the Recommended Extensions collection on that AMO Android page. Try a few out!

Get your Firefox desktop extension ready for Android

There’s still time to make your desktop extension compatible with Firefox for Android if you want to be part of the December 14 launch. Senior Developer Relations Engineer Simeon Vincent recently hosted two webinars to help developers work through common migration hurdles. Here are recorded webinars from October (an introduction to mobile extension migration) and November (setup, testing, debugging).

Simeon also hosts open “office hours” every Monday and Tuesday for anyone interested in signing up to receive 1:1 guidance on Firefox for Android extension development. Office hours run through December, so be sure to tap Simeon’s expertise while time remains.

“Early Add-opter” t-shirts still available!

Are you a developer planning to make your desktop extension work with Firefox for Android by December 14? Do you like cool free t-shirts? Great! Then email us at firefox-android-addon-support [at] mozilla.com with a link to your extension’s AMO listing page and we’ll follow up with t-shirt order details. Better act fast though, we’ve only got 200 tees total and just a few remain.

The post Open extensions on Firefox for Android debut December 14 (but you can get a sneak peek today) appeared first on Mozilla Add-ons Community Blog.

hacks.mozilla.orgMozilla AI Guide Launch with Summarization Code Example

The Mozilla AI Guide has launched and we welcome you to read through and get acquainted with it. You can access it here.

Our vision is for the AI Guide to be the starting point for every developer new to the space and a place to revisit for clarity and inspiration, ensuring that AI innovations enrich everyday life. The AI Guide’s initial focus is language models, with the aim of becoming a collaborative, community-driven resource covering other types of models.

To start, the first few sections of the Mozilla AI Guide go in-depth on the most asked questions about Large Language Models (LLMs). AI Basics covers the concepts of AI, ML and LLMs, what these concepts mean and how they are related. This section also breaks down the pros and cons of using an LLM. Language Models 101 builds on that foundation and dives deeper into language models themselves. It answers questions such as “What does ‘training’ an ML model mean?” or “What is the ‘human in the loop’ approach?”

We will jump to the last section on Choosing ML Models and demonstrate, in the code below, what can be done with open source models to summarize text. You can access the Colab Notebook here or continue reading:

First Steps with Language Models

Unlike other guides, this one is designed to help pick the right model for whatever it is you’re trying to do, by:

  • teaching you how to always remain on the bleeding edge of published AI research
  • broadening your perspective on current open options for any given task
  • keeping you from being tied to a closed-source / closed-data large language model (e.g. OpenAI, Anthropic)
  • creating a data-led system for always identifying and using the state-of-the-art (SOTA) model for any particular task

We’re going to home in on “text summarization” as our first task.

So… why are we not using one of the popular large language models?

Great question. Most available LLMs worth their salt can do many tasks, including summarization, but not all of them may be good at the specific thing you want them to do. We need to figure out how to evaluate whether they actually are.

Also, many of the current popular LLMs are not open, are trained on undisclosed data and exhibit biases. Responsible AI use requires careful choices, and we’re here to help you make them.

Finally, most large language models require powerful GPU compute to run. While there are many models that you can use as a service, most of them cost money per API call. That’s unnecessary when many common tasks can be done at good quality with freely available open models and off-the-shelf hardware.

Why does using open models matter?

Over the last few decades, engineers have been blessed with being able to onboard by starting with open source projects, and eventually shipping open source to production. This default state is now at risk.

Yes, there are many open models available that do a great job. However, most guides don’t discuss how to get started with them using simple steps and instead bias towards existing closed APIs.

Funding is flowing to commercial AI projects, which have larger budgets than open source contributors to market their work. This inevitably leads to engineers starting with closed source projects and shipping expensive closed projects to production.

Our First Project – Summarization

We’re going to:

  • Find text to summarize.
  • Figure out how to summarize it using the current state-of-the-art open source models.
  • Write some code to do so.
  • Evaluate the quality of the results using relevant metrics.

For simplicity’s sake, let’s grab Mozilla’s Trustworthy AI Guidelines in string form.

Note that in the real world, you will likely have to use other libraries to extract content for any particular file type.
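For example, pulling the text out of a PDF might look like the sketch below, which assumes the third-party pypdf package (the file name here is hypothetical):

# Hypothetical example: extracting text from a PDF with the pypdf package
# before summarizing it. Install with: %pip install pypdf
from pypdf import PdfReader

reader = PdfReader("trustworthy-ai.pdf")  # made-up file name
content = "\n".join(page.extract_text() for page in reader.pages)

For this guide, though, an inline string is all we need: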

import textwrap

content = """Mozilla's "Trustworthy AI" Thinking Points:

PRIVACY: How is data collected, stored, and shared? Our personal data powers everything from traffic maps to targeted advertising. Trustworthy AI should enable people to decide how their data is used and what decisions are made with it.

FAIRNESS: We’ve seen time and again how bias shows up in computational models, data, and frameworks behind automated decision making. The values and goals of a system should be power aware and seek to minimize harm. Further, AI systems that depend on human workers should protect people from exploitation and overwork.

TRUST: People should have agency and control over their data and algorithmic outputs, especially considering the high stakes for individuals and societies. For instance, when online recommendation systems push people towards extreme, misleading content, potentially misinforming or radicalizing them.

SAFETY: AI systems can carry high risk for exploitation by bad actors. Developers need to implement strong measures to protect our data and personal security. Further, excessive energy consumption and extraction of natural resources for computing and machine learning accelerates the climate crisis.

TRANSPARENCY: Automated decisions can have huge personal impacts, yet the reasons for decisions are often opaque. We need to mandate transparency so that we can fully understand these systems and their potential for harm."""

Great. Now we’re ready to start summarizing.

A brief pause for context

The AI space is moving so fast that it requires a tremendous amount of catching up on scientific papers each week to understand the lay of the land and the state of the art.

It takes some effort for an engineer who is brand new to AI to:

  • discover which open models are even out there
  • figure out which models are appropriate for any particular task
  • learn which benchmarks are used to evaluate those models
  • see which models are performing well based on those evaluations
  • determine which models can actually run on available hardware

For the working engineer on a deadline, this is problematic. There’s not much centralized discourse on working with open source AI models. Instead there are fragmented X (formerly Twitter) threads, random private groups and lots of word-of-mouth transfer.

However, once you have a workflow that addresses all of the above, you will have the means to stay forever on the bleeding edge of published AI research.

How do I get a list of available open summarization models?

For now, we recommend Huggingface and their large directory of open models broken down by task. This is a great starting point. Note that larger LLMs are also included in these lists, so we will have to filter.
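If you’d rather query that directory programmatically, the huggingface_hub package exposes the same listing. Here’s a minimal sketch, assuming a recent version of the library (parameter names have varied across releases):

# Sketch: list the most-downloaded summarization models on the Hub
# Install with: %pip install huggingface_hub
from huggingface_hub import HfApi

api = HfApi()
for model in api.list_models(task="summarization", sort="downloads", limit=10):
    print(model.id)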

In this huge list of summarization models, which ones do we choose?

We don’t know what any of these models are trained on. For example, a summarizer trained on news articles will perform better on news articles than one trained on Reddit posts.

What we need is a set of metrics and benchmarks that we can use to do apples-to-apples comparisons of these models.

How do I evaluate summarization models?

The steps below can be used to evaluate any available model for any task. It requires hopping between a few sources of data for now, but we will be making this a lot easier moving forward.

Steps:

  1. Find the most common datasets used to train models for summarization.
  2. Find the most common metrics used to evaluate models for summarization across those datasets.
  3. Do a quick audit on training data provenance, quality and any exhibited biases, to keep in line with Responsible AI usage.

Finding datasets

The easiest way to do this is using Papers With Code, an excellent resource for finding the latest scientific papers by task that also have code repositories attached.

First, filter Papers With Code’s “Text Summarization” datasets by most cited text-based English datasets.

Let’s pick the most cited dataset as of this writing: the “CNN/DailyMail” dataset. Citation count is one rough marker of popularity.

Now, you don’t need to download this dataset. But we’re going to review the info Papers With Code has provided to learn more about it for the next step. This dataset is also available on Huggingface.

You want to check 3 things:

  • license
  • recent papers
  • whether the data is traceable and the methods are transparent

First, check the license. In this case, it’s MIT licensed, which means it can be used for both commercial and personal projects.

Next, see if the papers using this dataset are recent. You can do this by sorting Papers in descending order. This particular dataset has many papers from 2023 – great!

Finally, let’s check whether the data is from a credible source. In this case, the dataset was generated by IBM in partnership with the University of Montréal. Great.

Now, let’s dig into how we can evaluate models that use this dataset.

Evaluating models

Next, we look for measured metrics that are common across datasets for the summarization task. BUT, if you’re not familiar with the literature on summarization, you have no idea what those are.

To find out, pick a “Subtask” that’s close to what you’d like to see. We’d like to summarize the Mozilla guidelines text we pulled in above, so let’s choose “Abstractive Text Summarization”.

Now we’re in business! This page contains a significant amount of new information.

There are mentions of three new terms: ROUGE-1, ROUGE-2 and ROUGE-L. These are the metrics that are used to measure summarization performance.

There is also a list of models and their scores on these three metrics – this is exactly what we’re looking for.

Assuming we’re looking at ROUGE-1 as our metric, we now have the top 3 models that we can evaluate in more detail. All 3 are close to 50, which is a promising ROUGE score (read up on ROUGE).
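To make the metric concrete, here’s a toy sketch of the idea behind ROUGE-1: unigram overlap between a candidate summary and a reference summary, scored as an F1. Real evaluations use a maintained implementation such as the rouge_score package; this helper is purely illustrative:

# Toy ROUGE-1: F1 over unigram overlap between candidate and reference
from collections import Counter

def rouge_1_f1(candidate, reference):
    cand_counts = Counter(candidate.lower().split())
    ref_counts = Counter(reference.lower().split())
    # Unigrams appearing in both, respecting multiplicity
    overlap = sum((cand_counts & ref_counts).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand_counts.values())
    recall = overlap / sum(ref_counts.values())
    return 2 * precision * recall / (precision + recall)

print(rouge_1_f1("the cat sat on the mat", "a cat was sitting on the mat"))  # ~0.62

Real ROUGE implementations add stemming, ROUGE-2 (bigrams) and ROUGE-L (longest common subsequence), but the intuition is the same: how much of the reference does the candidate recover?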

Testing out a model

OK, we have a few candidates, so let’s pick a model that will run on our local machines. Many models get their best performance on GPUs, but some also generate summaries quickly on CPUs. Let’s pick one of those to start: Google’s Pegasus.

# first we install huggingface's transformers library
%pip install transformers sentencepiece

Then we find Pegasus on Huggingface. Note that the datasets Pegasus was trained on include CNN/DailyMail, which bodes well for our summarization task. Interestingly, there’s a variant of Pegasus from Google that’s trained only on our dataset of choice; we should use that.

from transformers import PegasusForConditionalGeneration, PegasusTokenizer 
import torch 

# Set the seed, this will help reproduce results. Changing the seed will 
# generate new results 
from transformers import set_seed 
set_seed(248602) 

# We're using the version of Pegasus specifically trained for summarization 
# using the CNN/DailyMail dataset 
model_name = "google/pegasus-cnn_dailymail"

# If you're following along in Colab, switch your runtime to a
# T4 GPU or other CUDA-compliant device for a speedup
device = "cuda" if torch.cuda.is_available() else "cpu" 

# Load the tokenizer
tokenizer = PegasusTokenizer.from_pretrained(model_name) 

# Load the model 
model = PegasusForConditionalGeneration.from_pretrained(model_name).to(device)

# Tokenize the entire content
batch = tokenizer(content, padding="longest", return_tensors="pt").to(device)

# Generate the summary as tokens
summarized = model.generate(**batch)

# Decode the tokens back into text
summarized_decoded = tokenizer.batch_decode(summarized, skip_special_tokens=True)
summarized_text = summarized_decoded[0]

# Compare
def compare(original, summarized_text):
  print(f"Article text length: {len(original)}\n")
  print(textwrap.fill(summarized_text, 100))
  print()
  print(f"Summarized length: {len(summarized_text)}")

compare(content, summarized_text)
Article text length: 1427

Trustworthy AI should enable people to decide how their data is used.<n>values and goals of a system
should be power aware and seek to minimize harm.<n>People should have agency and control over their
data and algorithmic outputs.<n>Developers need to implement strong measures to protect our data and
personal security.

Summarized length: 320

Alright, we got something! Kind of short though. Let’s see if we can make the summary longer…


set_seed(860912)

# Generate the summary as tokens, with a max_new_tokens
summarized = model.generate(**batch, max_new_tokens=800)
summarized_decoded = tokenizer.batch_decode(summarized, skip_special_tokens=True)
summarized_text = summarized_decoded[0]

compare(content, summarized_text)
Article text length: 1427

Trustworthy AI should enable people to decide how their data is used.<n>values and goals of a system
should be power aware and seek to minimize harm.<n>People should have agency and control over their
data and algorithmic outputs.<n>Developers need to implement strong measures to protect our data and
personal security.

Summarized length: 320

Well, that didn’t really work. Let’s try a different approach called ‘sampling’. This allows the model to pick the next word according to its conditional probability distribution (specifically, the probability that said word follows the words before it).

We’ll also be setting the ‘temperature’. This variable controls the level of randomness and creativity in the generated output.
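To build intuition first, here’s an illustrative numpy sketch (separate from our Pegasus pipeline) of how temperature rescales the model’s next-token logits before they become sampling probabilities:

import numpy as np

# Sketch: temperature divides the logits before the softmax, sharpening
# the distribution when below 1.0 and flattening it when above 1.0
def sampling_probs(logits, temperature=1.0):
    scaled = np.array(logits) / temperature
    exp = np.exp(scaled - scaled.max())  # subtract max for numerical stability
    return exp / exp.sum()

logits = [4.0, 2.0, 1.0]
print(sampling_probs(logits, temperature=0.5))  # sharper: heavily favors the top token
print(sampling_probs(logits, temperature=1.5))  # flatter: more adventurous choices

With that intuition, let’s sample from Pegasus: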

set_seed(118511)
summarized = model.generate(**batch, do_sample=True, temperature=0.8, top_k=0)
summarized_decoded = tokenizer.batch_decode(summarized, skip_special_tokens=True)
summarized_text = summarized_decoded[0]
compare(content, summarized_text)
Article text length: 1427

Mozilla's "Trustworthy AI" Thinking Points:.<n>People should have agency and control over their data
and algorithmic outputs.<n>Developers need to implement strong measures to protect our data.

Summarized length: 193

Shorter, but the quality is higher. Adjusting the temperature up will likely help.

set_seed(108814)
summarized = model.generate(**batch, do_sample=True, temperature=1.0, top_k=0)
summarized_decoded = tokenizer.batch_decode(summarized, skip_special_tokens=True)
summarized_text = summarized_decoded[0]
compare(content, summarized_text)
Article text length: 1427

Mozilla's "Trustworthy AI" Thinking Points:.<n>People should have agency and control over their data
and algorithmic outputs.<n>Developers need to implement strong measures to protect our data and
personal security.<n>We need to mandate transparency so that we can fully understand these systems
and their potential for harm.

Summarized length: 325

Now let’s play with one other generation approach called top_k sampling — instead of considering all possible next words in the vocabulary, the model only considers the top ‘k’ most probable next words.

This technique helps to focus the model on likely continuations and reduces the chances of generating irrelevant or nonsensical text.

It strikes a balance between creativity and coherence by limiting the pool of next-word choices, but not so much that the output becomes deterministic.
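Conceptually, top_k filtering looks something like this illustrative numpy sketch (again, not part of the Pegasus pipeline):

import numpy as np

# Sketch: keep only the k highest-scoring tokens, mask out the rest,
# then renormalize before sampling
def top_k_probs(logits, k):
    logits = np.array(logits, dtype=float)
    keep = np.argsort(logits)[-k:]  # indices of the k largest logits
    masked = np.full_like(logits, -np.inf)
    masked[keep] = logits[keep]
    exp = np.exp(masked - logits[keep].max())  # exp(-inf) becomes 0
    return exp / exp.sum()

print(top_k_probs([4.0, 2.0, 1.0, 0.5], k=2))  # only the top 2 tokens can be sampled

And here it is in action with Pegasus: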

set_seed(226012)
summarized = model.generate(**batch, do_sample=True, top_k=50)
summarized_decoded = tokenizer.batch_decode(summarized, skip_special_tokens=True)
summarized_text = summarized_decoded[0]
compare(content, summarized_text)
Article text length: 1427

Mozilla's "Trustworthy AI" Thinking Points look at ethical issues surrounding automated decision
making.<n>values and goals of a system should be power aware and seek to minimize harm.People
should have agency and control over their data and algorithmic outputs.<n>Developers need to
implement strong measures to protect our data and personal security.

Summarized length: 355

Finally, let’s try top_p sampling, also known as nucleus sampling. This is a strategy where the model considers only the smallest set of top words whose cumulative probability exceeds a threshold ‘p’.

Unlike top_k, which considers a fixed number of words, top_p adapts based on the distribution of probabilities for the next word. This makes it more dynamic and flexible. It helps create diverse and sensible text by allowing less probable words to be selected when the most probable ones don’t add up to ‘p’.
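Here’s an illustrative numpy sketch of the nucleus idea (separate from the Pegasus pipeline):

import numpy as np

# Sketch: keep the smallest set of most-probable tokens whose cumulative
# probability reaches p, then renormalize before sampling
def top_p_probs(probs, p):
    probs = np.array(probs, dtype=float)
    order = np.argsort(probs)[::-1]              # most probable first
    cumulative = np.cumsum(probs[order])
    cutoff = np.searchsorted(cumulative, p) + 1  # size of the smallest nucleus reaching p
    keep = order[:cutoff]
    filtered = np.zeros_like(probs)
    filtered[keep] = probs[keep]
    return filtered / filtered.sum()

print(top_p_probs([0.5, 0.3, 0.15, 0.05], p=0.9))  # drops the low-probability tail

And with Pegasus: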

set_seed(21420041)
summarized = model.generate(**batch, do_sample=True, top_p=0.9, top_k=50)
summarized_decoded = tokenizer.batch_decode(summarized, skip_special_tokens=True)
summarized_text = summarized_decoded[0]
compare(content, summarized_text)

# saving this for later.
pegasus_summarized_text = summarized_text
Article text length: 1427

Mozilla's "Trustworthy AI" Thinking Points:.<n>People should have agency and control over their data
and algorithmic outputs.<n>Developers need to implement strong measures to protect our data and
personal security.<n>We need to mandate transparency so that we can fully understand these systems
and their potential for harm.

Summarized length: 325

To continue with the code example, see a test with another model, and learn how to evaluate ML model results (a whole other section), click here to view the Python Notebook and click “Open in Colab” to experiment with your own custom code.

Note that this guide will be constantly updated, and new sections on Data Retrieval, Image Generation and Fine Tuning are coming next.

Developer Contributions Are Vital

Shortly after today’s launch of the Mozilla AI Guide, we will be publishing our community contribution guidelines. These will provide guidance on the type of content developers can contribute and how it can be shared. Get ready to share any great open source AI projects, implementations, and video and audio models.

Together, we can build a cohesive, collaborative and responsible AI community.

A special thanks to Kevin Li and Pradeep Elankumaran who pulled this great blog post together.

The post Mozilla AI Guide Launch with Summarization Code Example appeared first on Mozilla Hacks - the Web developer blog.

SeaMonkeyMigrating off archive.mozilla.org

Hi all,

This is a heads up to all.  The SeaMonkey project needs to stop using archive.mozilla.org by year end.

So, we are migrating our files from archive.mozilla.org to a storage system on Azure.  However, I want to note that there are files there that I will not be moving.  They will most likely be left as is until Mozilla blows it away (or I do).

The only folders being migrated are the “releases” and “nightly” folders. [I’m currently copying the files from releases to the new storage area, so nightly will need to take a back seat.] The rest will be “decommissioned”, as they are relics of a past that we can no longer go back to. Tinderbox is dead. Comm-* builds are no longer feasible or relevant. Candidates are also no longer relevant.

I’m hoping to set up the new storage system as a ‘static’ page so that one can just go to (for example) https://archive.seamonkey-project.org/ and see both the nightly and releases directories. At this point in time, I’m still working on it. The copying is taking the longest.

As someone who’s worked on and around archive.mozilla.org, I feel sad to see all those files go poof. I guess it’s a feeling of nostalgia (it makes me think of those years working with Callek to get the releases out on Buildbot) rather than actual utility. Once Jan 1 comes around, it will be the end of an era and, of course, the beginning of a new one.

Anyway, *if* you are interested in keeping those files, I’d suggest you download them as soon as possible.

Best regards,

:ewong

SUMO BlogWhat’s up with SUMO – Q3 2023

Hi everybody,

Sarto here! It’s been a great 4 months! The time really flew by. First and foremost, I would like to thank the community here at Mozilla for giving me grace and also showing me how passionate you all truly are. I’ve worked with a handful of communities in the past but, by far, Mozilla has the most engaged community I’ve come across. The work that you put into Mozilla is commendable and valuable. To the community members and contributors I was able to meet and interact with during my time here: thank you for sharing that passion with me. I’m handing the baton back over to Kiki. Till next time, keep on rocking the helpful web!

Cheers!

Welcome note and shout-outs from Q3

  • Big thanks to Paul, who helped investigate 3 different incidents for Firefox in the last 2 weeks. There has been a huge amount of work going on for the CX team this quarter, and your involvement in these incidents (providing forum examples, following up with users, and helping rally community folks to investigate) has been very helpful.
  • Thanks to Jscher2000, Danny Colin, Paul, jonzn4SUSE, Dan, TyDraniu, and Zulqarnainjabbar99 for your input in the thread about UX Pain points leading to users leaving Firefox in the first 30 days.
  • Thank you to everyone who contributed to the release of Firefox 117 for Desktop, as well as all of the contributors who participated in the release thread.
  • Shout out to Paul for his work updating the Browsing history in Firefox – View the websites you have visited article for Firefox 118.
  • Shout out to Mark Heijl for his amazing job getting Dutch article translations (incl. all the Pocket ones) to 100%! And thank you Tim for bringing this to our attention!

If you know anyone that we should feature here, please contact Kiki and we’ll make sure to add them in our next edition.

Community news

Catch up

  • Watch the monthly community call if you haven’t. Learn more about what’s new in July, August, and September! Reminder: don’t hesitate to join the call in person if you can. We try our best to provide a safe space for everyone to contribute. You’re more than welcome to lurk in the call if you don’t feel comfortable turning on your video or speaking up. If you feel shy about asking questions during the meeting, feel free to add your questions on the contributor forum in advance, or put them in our Matrix channel, so we can answer them during the meeting. First time joining the call? Check out this article to learn how to join.
  • If you’re an NDA’ed contributor, you can watch the recording of the Customer Experience weekly scrum meeting from AirMozilla to catch up with the latest product updates.
  • Consider subscribing to Firefox Daily Digest to get daily updates about Firefox from across different platforms.
  • Check out the SUMO Engineering Board to see what the platform team is currently doing, and submit a report through Bugzilla if you want to report a bug or request an improvement.

Community stats

KB

KB pageviews (*)

* KB pageviews number is a total of KB pageviews for /en-US/ only

Month Page views vs. previous month
Jul 2023 6,512,758 3.87%
Aug 2023 7,164,666 10.01%
Sep 2023 6,456,716 -9.88%

Top 5 KB contributors in the last 90 days: 

KB Localization

Top 10 locales based on total page views

Locale Jul 2023 pageviews (*) Aug 2023 pageviews (*) Sep 2023 pageviews (*) Localization progress (per Oct 30) (**)
de 11.09% 11.41% 11.12% 87%
zh-CN 6.98% 7.03% 6.67% 88%
fr 6.16% 5.95% 7.49% 80%
es 5.71% 5.50% 5.84% 23%
ja 4.81% 4.62% 4.84% 35%
ru 3.47% 3.48% 3.55% 84%
pt-BR 3.39% 3.66% 3.39% 43%
it 2.35% 1.98% 2.42% 91%
pl 2.06% 2.05% 1.99% 78%
zh-TW 1.91% 0.92% 2.16% 2%
* Locale pageviews is an overall pageviews from the given locale (KB and other pages)

** Localization progress is the percentage of localized articles out of all KB articles per locale

Top 5 localization contributors in the last 90 days: 

Forum Support

Forum stats

Month Total questions Answer rate within 72 hrs Solved rate within 72 hrs Forum helpfulness
Jul 2023 2,664 76.28% 11.71% 59.24%
Aug 2023 2,853 79.36% 12.72% 49.59%
Sep 2023 2,977 72.93% 11.89% 67.89%

Top 5 forum contributors in the last 90 days: 

Social Support

Month Total tweets Total moderation by contributors Total reply by contributors Respond conversion rate
Jul 2023 317 157 83 52.87%
Aug 2023 237 47 33 70.21%
Sep 2023 192 47 22 46.81%

Top 5 Social Support contributors in the past 3 months: 

  1. Daniel B.
  2. Théo Cannillo
  3. Wim Benes
  4. Ifeoma
  5. Peter Gallwas

Play Store Support*

Month Total reviews Total conv interacted by contributors Total conv replied by contributors
Jul 2023 6,072 191 40
Aug 2023 6,135 185 55
Sep 2023 6,111 75 23
* Firefox for Android only

Top 5 Play Store contributors in the past 3 months: 

  1. Wim Benes
  2. Tim Maks
  3. Damian Szabat
  4. Christophe Villeneuve
  5. Selim Şumlu

Product updates

To catch up on product release updates, please watch the recording of the Customer Experience scrum meeting from AirMozilla. You can also subscribe to the AirMozilla folder by clicking on the Subscribe button at the top right corner of the page to get notifications each time we add a new recording.

Useful links:

Mozilla L10NL10n Report: November 2023 Edition

Please note some of the information provided in this report may be subject to change as we are sometimes sharing information about projects that are still in early stages and are not final yet. 

New content and projects

What’s new or coming up in Firefox desktop

On October 24 we shipped Firefox 119 with a brand new locale: Santali (sat). This brings the overall number of locales supported in Firefox release to 102. Congratulations to Prasanta and the other Santali contributors for this huge accomplishment.

In terms of new content to translate, a couple of new features were responsible for most of the new strings over the last months: a new shopping feature (Review Checker), and a redesigned Firefox View page, which now includes more information to support the user (recent browsing, recently closed tabs, tabs from other devices, etc.).

Check your Pontoon notifications for instructions on how to test your localization for the Review Checker in Nightly.

In the current Nightly (121) we also migrated the integrated PDF Viewer to Fluent, finally replacing the unmaintained legacy l10n system (webl10n.js) used in this feature.

What’s new or coming up in mobile

Yesterday we officially launched the brand update from “Firefox Accounts” to the more general “Mozilla accounts” – a change you have probably noticed in recent string updates. Please make sure to address these strings so you keep products up to date with the rebranding.

You may have also noticed that a few Android strings have landed for add-ons, specifically to call out that we have hundreds of new extensions. If you would like to have this experiment available in your locale, make sure you go into the Firefox for Android project in Pontoon, and choose the Fenix file. Then search for these string IDs:

  • addon_ga_message_title
  • addon_ga_message_body
  • addon_ga_message_button

You can find these from the search bar, once you are in the Fenix file in Pontoon.

What’s new or coming up in web projects

Mozilla Accounts

In early October Mozilla announced a name change for Firefox accounts, and as of November 1 Firefox accounts is now officially Mozilla accounts. Even before this, starting in September, a significant number of new strings and changes related to this name change started making their way to you. Thank you for ensuring that your locales were updated and ready. The majority of locales shipping to production launched with all translations complete and ready for people around the world to use their Mozilla accounts in their own language. This is truly a result of your contributions! Now that these changes are live, please do reach out if you notice anything strange as you go about using your Mozilla account.

Mozilla.org

Since the last report, a few changes have landed in this project. In addition to the global change from Firefox account(s) to Mozilla account(s), the team also began to simplify the references to third-party brand names. The names are no longer inside a placeholder. This change will make it easier to translate long strings with many brand names, all too common in this project. Only Mozilla brands and product names will be coded in the placeholder. During this transition period, you will see a mixture of both. As we update a page or add a new page, the new approach will be applied.

A few new pages were added too. These are pages with file names ending in “-2023” or “-2”, replacing the older versions which will soon be removed from Pontoon. If you are working on these pages, make sure you are working on the new versions, not the old ones.

Relay Website

In the last report, we shared with you the news of migrating a few relay.firefox.com pages to mozilla.org. The migration was completed, which resulted in opening up Relay-specific pages to more locales. However, an internal decision has been made that these pages should remain on the current Relay product site and not move to mozilla.org.

We regret that the reversal of this decision came soon after the migration. We are having internal discussions around how we can better communicate changes in the future so that we can minimize the impact to our community volunteers.

The Mozilla.org and Relay teams will work closely with the l10n team to migrate the content back to the existing product site. All the work you have done will be stored in Pontoon. The l10n team will make its best effort to preserve the history of each of the translated strings. For the locales that didn’t opt in to the Relay Website project but participated in the localization of the pages on mozilla.org, we encourage you to consider opting in on the Relay project if the community is interested and has the bandwidth.

What’s new or coming up in SUMO

The Firefox Review Checker Sprint is happening as we launch Firefox 119. Please check out the sprint wiki to learn more about the details.

Firefox account is transitioning to Mozilla account. What do you need to know as a SUMO contributor?

The content team at SUMO is utilizing Bugzilla to collect content requests from other teams. If you’re contributing to content at SUMO, please check out these best practices for Bugzilla tickets.

What’s new or coming up in Pontoon

Light Theme

We are excited to announce that we have incorporated a light theme into Pontoon. The theme selector is available in two places:

  • Settings Page: Directly select the light theme.
  • User Profile Menu: Click on the profile icon (top right) and choose the light theme.

Newly published localizer-facing documentation

We have added documentation on how to use the theme selector feature to access the light theme in the settings page and user profile menu.

Events

We are hosting an L10n Fireside chat mid-November (date and time TBD). It will be live and recorded here. We are interested in your questions and topics! Please submit them in this form, or reach out directly to delphine at mozilla dot com if you prefer.

Want to showcase an event coming up that your community is participating in? Contact us and we’ll include it.

Friends of the Lion

We started a series called “Localizer Spotlight” and have published two already. Do you know someone who should be featured there? Let us know here!

Also, is there someone in your l10n community who’s been doing a great job and should appear in this section? Contact us and we’ll make sure they get a shout-out!

Useful Links

Questions? Want to get involved?

If you want to get involved, or have any question about l10n, reach out to:

Did you enjoy reading this report? Let us know how we can improve it.

SUMO BlogMozilla account rename – Changes on the support flows

If you’ve been contributing to the support forum on the Mozilla Support platform, you might be aware of the difficulties of supporting users with Firefox account problems: the lack of safety measures for dealing with PII (Personally Identifiable Information) in the forum, the ambiguity around some security terminology (recovery codes vs. recovery key) and, ultimately, the lack of infrastructure to help users recover their accounts without losing their data.

With the momentum of the Firefox account rebrand to Mozilla account, the Customer Experience team has prepared a new flow to support this transition, as well as building the foundation for a better support experience for account holders in the long run.

The new support flows

If you’re contributing to Mozilla Support, here’s what you need to know about the new support flow:

  • Mozilla account specific contact form

Users with Mozilla account issues can now submit their questions to the Mozilla account contact form that can be accessed from the Get Help fly-out menu. Questions submitted to Mozilla account contact form will be handled by dedicated support agents who are better equipped to deal with PII as well as have access to the infrastructure to solve a more complex case.

Screenshot of the new fly-out menu in the Mozilla Support platform

  • Login-less support

We also introduced login-less support for account holders who lose access to their account. This type of support can be accessed from the login prompt. Users who submit a question from this contact form will also be handled by dedicated support agents.

Screenshot of the new login prompt in the Mozilla Support platform

Implication for the Forum & Social Support contributors

If you’re a forum contributor or you have access to Verint, please help us direct any questions related to Mozilla accounts to the Mozilla account contact form. We have a forum common response for this called “Mozilla account contact form” and a clipping in Verint called “Mozilla account contact form” that you can use at your convenience.

Mozilla account as a product in SUMO

Technically, we have created a new product for Mozilla account in SUMO, which means that we’ll host future articles related to Mozilla account in this category. However, it won’t be visible as a tile on our product selection page. If you still see Firefox account mentioned in a KB article, or if you see an article that should be moved to the Mozilla account category, please notify the content team. You can also check out this article to learn more about editorial guidelines for Mozilla account in our Knowledge Base.

Implication for the locale teams

You should expect to see many translated articles become outdated due to the update that we’re doing with the English KB articles. Please check the Recent Revisions page to see the articles that we’ve updated as part of this launch.

Frequently asked questions

What to do when encountering users with Mozilla account problems?

Please direct any questions related to Mozilla account to the Mozilla account contact form, unless it can be solved with KB articles.

Does this also include users with Firefox Sync issues?

The login-less contact form is intended for users with login issues, while the signed-in contact form is intended for account-related issues. In short, Firefox Sync is out of scope for now.

Do we support account recovery now?

Account recovery is a complicated process, and we don’t yet have the infrastructure to handle every case. However, that’s part of the scope of this new support infrastructure, and you should direct users with this issue to file a ticket.


If you have other questions about this change, please join our discussion in this forum thread!

hacks.mozilla.orgDown and to the Right: Firefox Got Faster for Real Users in 2023

One of the biggest challenges for any software is to determine how changes impact user experience in the real world. Whether it’s the processing speed of video editing software or the smoothness of a browsing experience, there’s only so much you can tell from testing in a controlled lab environment. While local experiments can provide plenty of metrics, improvements to those metrics may not translate to a better user experience.

This can be especially challenging with complex client software running third-party code like Firefox, and it’s a big reason why we’ve undertaken the Speedometer 3 effort alongside other web browsers. Our goal is to build performance tests that simulate real-world user experiences so that browsers have better tools to drive improvements for real users on real webpages. While it’s easy to see that benchmarks have improved in Firefox throughout the year as a result of this work, what we really care about is how much those wins are being felt by our users.

In order to measure the user experience, Firefox collects a wide range of anonymized timing metrics related to page load, responsiveness, startup and other aspects of browser performance. Collecting data while holding ourselves to the highest standards of privacy can be challenging. For example, because we rely on aggregated metrics, we lack the ability to pinpoint data from any particular website. But perhaps even more challenging is analyzing the data once collected and drawing actionable conclusions. In the future we’ll talk more about these challenges and how we’re addressing them, but in this post we’d like to share how some of the metrics that are fundamental to how our users experience the browser have improved throughout the year.

Let’s start with page load. First Contentful Paint (FCP) is a better metric for felt performance than the `onload` event. We’re tracking the time it takes between receiving the first byte from the network to FCP. This tells us how much faster we are giving feedback to the user that the page is successfully loading, so it’s a critical metric for understanding the user experience. While much of this is up to web pages themselves, if the browser improves performance across the board, we expect this number to go down.

Graph of the median time between response start and first contentful paint, going from ~250 to ~215. Three distinct areas with a more pronounced slope are visible in mid-February, late April and, the largest, in late July.

Image 1 – Median time from Response Start to First Contentful Paint in milliseconds

We can see that this time improved from roughly 250ms at the start of the year to 215ms in October. This means that a user receives feedback on page loads almost 15% faster than they did at the start of the year. And it’s important to note that this is all the result of optimization work that didn’t even explicitly target pageload.

In order to understand where this improvement is coming from, let’s look at another piece of timing data: the amount of time that was spent executing JavaScript code during a pageload. Here we are going to look at the 95th percentile, representing the most JS heavy pages and highlighting a big opportunity for us to remove friction for users.

A graph of the 95th percentile of JS execution time during pageload. It runs from ~1560 in January 2023 to ~1260 by October 2023. In general it's a steady downward slope with a small downward jump in April and a large downward jump during August.

Image 2 – 95th Percentile of JS execution time during pageload in milliseconds

This shows the 95th percentile improving from ~1560ms at the beginning of the year, to ~1260ms in October. This represents a considerable improvement of 300ms, or almost 20%, and is likely responsible for a significant portion of the reduced FCP times. This makes sense, since Speedometer 3 work has led to significant optimizations to the SpiderMonkey JavaScript engine (a story for another post).

We’d also like to know how responsive pages are after they are loaded. For example, how smooth is the response when typing on the keyboard as I write this blogpost! The primary metric we collect here is the “keypress present latency”; the time between a key being pressed on the keyboard and its result being presented onto the screen. Rendering some text to the screen may sound simple, but there’s a lot going on to make that happen – especially when web pages run main thread JavaScript to respond to the keypress event. Most typing is snappy and primarily limited by hardware (e.g. the refresh rate of the monitor), but it’s extremely disruptive when it’s not. This means it’s important to mitigate the worst cases, so we’ll again look at the 95th percentile.

A graph of the 95th percentile of the keypress present latency, ranging from January 2023 to October 2023. It hovers fairly steady around 65ms, even seemingly going up a bit between March and May, before dropping to about 58-59ms over the course of August and September 2023.

Image 3 – 95th Percentile of the keypress present latency

Once again we see a measurable improvement. The 95th percentile hovered around 65ms for most of the year and dropped to under 59ms after the Firefox 116 and 117 releases in August. A 10% improvement to the slowest keypresses means users are experiencing more instantaneous feedback and fewer disruptions while typing.

We’ve been motivated by the improvements we’re seeing in our telemetry data, and we’re convinced that our efforts this year are having a positive effect on Firefox users. We have many more optimizations in the pipeline and will share more details about those and our overall progress in future posts.

The post Down and to the Right: Firefox Got Faster for Real Users in 2023 appeared first on Mozilla Hacks - the Web developer blog.

hacks.mozilla.orgBuilt for Privacy: Partnering to Deploy Oblivious HTTP and Prio in Firefox

Protecting user privacy is a core element of Mozilla’s vision for the web and the internet at large. In pursuit of this vision, we’re pleased to announce new partnerships with Fastly and Divvi Up to deploy privacy-preserving technology in Firefox.

Mozilla builds a number of tools that help people defend their privacy online, but the need for these tools reflects a world where companies view invasive data collection as necessary for building good products and making money. A zero-sum game between privacy and business interests is not a healthy state of affairs. Therefore, we dedicate considerable effort to developing and advancing new technologies that enable businesses to achieve their goals without compromising peoples’ privacy. This is a focus of our work on web standards, as well as in how we build Firefox itself.

Building an excellent browser while maintaining a high standard for privacy sometimes requires this kind of new technology. For example: we put a lot of effort into keeping Firefox fast. This involves extensive automated testing, but also monitoring how it’s performing for real users. Firefox currently reports generic performance metrics like page-load time but does not associate those metrics with specific sites, because doing so would reveal peoples’ browsing history. These internet-wide averages are somewhat informative but not particularly actionable. Sites are constantly deploying code changes and occasionally those changes can trigger performance bugs in browsers. If we knew that a specific site got much slower overnight, we could likely isolate the cause and fix it. Unfortunately, we lack that visibility today, which hinders our ability to make Firefox great.

This is a classic problem in data collection: We want aggregate data, but the naive way to get it involves collecting sensitive information about individual people. The solution is to develop technology that delivers the same insights while keeping information about any individual person verifiably private.

In recent years, Mozilla has worked with others to advance two such technologies — Oblivious HTTP and the Prio-based Distributed Aggregation Protocol (DAP) — towards being proper internet standards that are practical to deploy in production. Oblivious HTTP works by routing encrypted data through an intermediary to conceal its source, whereas DAP/Prio splits the data into two shares and sends each share to a different server [1]. Despite their different shapes, both technologies rely on a similar principle: By processing the data jointly across two independent parties, they ensure neither party holds the information required to reveal sensitive information about someone.
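To make the Prio side of this concrete, here is a toy Python sketch of additive secret sharing, the core trick that lets two aggregators compute a sum without either one ever seeing an individual value. This is illustrative only: the real DAP protocol adds zero-knowledge proofs that each share encodes a valid measurement, among other machinery.

import secrets

MODULUS = 2**61 - 1  # arithmetic happens modulo a large prime

def split_into_shares(value):
    """Split a measurement into two shares; each share alone is just a random number."""
    share_a = secrets.randbelow(MODULUS)
    share_b = (value - share_a) % MODULUS
    return share_a, share_b

# Each client splits its private measurement (say, a page-load time in ms)
measurements = [250, 215, 310]
shares = [split_into_shares(m) for m in measurements]

# Server A and Server B each sum only the shares they receive...
sum_a = sum(s[0] for s in shares) % MODULUS
sum_b = sum(s[1] for s in shares) % MODULUS

# ...and only the combined sums reveal the aggregate, never any single value
print((sum_a + sum_b) % MODULUS)  # 775 == 250 + 215 + 310

Neither server’s total says anything about an individual measurement; only the combination does, and that combination is exactly the aggregate we wanted in the first place.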

We therefore need to partner with another independent and trustworthy organization to deploy each technology in Firefox. Having worked for some time to develop and validate both technologies in staging environments, we’ve now taken the next step to engage Fastly to operate an OHTTP relay and Divvi Up to operate a DAP aggregator. Both Fastly and ISRG (the nonprofit behind Divvi Up and Let’s Encrypt) have excellent reputations for acting with integrity, and they have staked those reputations on the faithful operation of these services. So even in a mirror universe where we tried to persuade them to cheat, they have a strong incentive to hold the line.

Our objective at Mozilla is to develop viable alternatives to the things that are wrong with the internet today and move the entire industry by demonstrating that it’s possible to do better. In the short term, these technologies will help us keep Firefox competitive while adhering to our longstanding principles around sensitive data. Over the long term, we want to see these kinds of strong privacy guarantees become the norm, and we will continue to work towards such a future.


Footnotes:

[1] Each approach is best-suited to different scenarios, which is why we’re investing in both. Oblivious HTTP is more flexible and can be used in interactive contexts, whereas DAP/Prio can be used in situations where the payload itself might be identifying.

The post Built for Privacy: Partnering to Deploy Oblivious HTTP and Prio in Firefox appeared first on Mozilla Hacks - the Web developer blog.

Mozilla L10NLocalizer Spotlight: Meet Reza (Persian locale)

Welcome to our second localizer spotlight, this time presenting Reza from our Persian community.

Q. What first drew you to want to volunteer with Mozilla’s localization program?

The growing community of Persian users highlighted the need for a browser created by the people for the people. Thus, I began assisting the community in translating Firefox into Persian. Subsequently, we expanded our efforts to include other products like Firefox for phones.

Q. What have been some of the most rewarding or impactful projects you’ve localized for Mozilla?

The entire endeavor with Mozilla was driven by volunteering and a strong motivation to provide safe and open-source tools to the community. Given the substantial Persian (Farsi)-speaking population of over 110 million people, ensuring their access to interactive and helpful tools became a significant priority. We also focused on addressing issues related to Mozilla extensions, particularly the text-reader (Readaloud), to assist individuals with visual disabilities.

We discovered that a substantial number of people with visual impairments were utilizing Mozilla’s text-reader because it was one of the few free and open tools that catered to their specific needs. One day, I received an email from a Persian user with visual impairment, in which she highlighted the widespread utility of such tools for her and her friends. This instance made me realize that we needed to broaden our perspective beyond ordinary users, especially concerning localization, and emphasize accessibility as a key aspect of our work.

Q. What are some of the biggest challenges you’ve faced in translating Mozilla projects? How did you overcome them?

Translating a product is often not sufficient, especially when dealing with Right-To-Left (RTL) languages. It’s imperative to consider usability, accessibility, and how people with diverse language backgrounds perceive the product. Therefore, addressing all the UI/UX challenges and ensuring the product is user-friendly for the end users proved to be quite challenging.

Q. What skills or background do you think helps most for becoming an effective Mozilla translator?

I’m a computer scientist with a passion for open-source software. Naturally, my technical knowledge was sufficient to embark on this journey. However, I found it crucial to put myself in the shoes of end users, understanding how they wish to perceive the product and how we can create a better experience for them.

Q. What advice would you give to someone new wanting to get involved in localizing for Mozilla?

Think about the broader impact that your work has on the community. Translating can be challenging and sometimes even tedious, but we must remember that these small pieces of work drive the community forward and present new opportunities for them.

Interested in featuring in these spotlights? Or know someone you think we should interview? Fill out this form, or reach out directly to delphine at mozilla dot com. Interested in contributing to localization? Head on over here for more details!

Useful Links