At Mozilla, we know we can’t create a better future alone, which is why each year we highlight the work of 25 digital leaders using technology to amplify voices, effect change, and build new technologies globally through our Rise 25 Awards. These storytellers, innovators, activists, advocates, builders and artists are helping make the internet more diverse, ethical, responsible and inclusive.
This week, we chatted with winner Dr. J. Nathan Matias, a professor at Cornell University leading technology research to create change and impact digital rights. He leads the school’s Citizens and Technology Lab (CAT Lab) and is the co-founder of the Coalition for Independent Technology Research, a nonprofit defending the right to ethically study the impact of tech on society. We talk with Matias about his start in citizen science, his work advocating for researchers’ rights and more.
As a professor at Cornell, how would you gauge where students and Gen Z are at in terms of knowing the dangers of the internet?
As a researcher, I am very aware that my students are one narrow slice of Americans. I teach communication and technology. I teach this 500-student class, and I think the students I teach hear about people’s concerns about technology through media, through what they see online. And they’re really curious about whether that is true and what we can do about it. That’s one of the great joys of being a professor: I can introduce students to what we know, thanks to research and to all the advocacy and journalism, and also to what we don’t know, and encourage students to help create the answers for themselves, their communities and future generations.
To go a little bit further: as a professor, what are the things that you try to instill in them? What core concepts do you think are really important for them to know — the things you try to hammer home about the internet and the social impacts of all of these platforms?
If I’m known for one thing, it’s the idea that knowledge and power about digital technologies shouldn’t be constrained to just within the walls of universities and tech companies. Throughout my classes and throughout my work, I actively collaborate with and engage the general public to understand what people’s fears are, to collect evidence and to inform accountability. And so, my students have the opportunity to see how that works and participate in it themselves. And I think that’s especially important, because yeah, people come to a university to learn and grow and learn from what scholars have said before, but also, if we come out of our degrees without an appreciation for the deeply held knowledge that people have outside of universities, I think that’s a missed opportunity.
Beyond the data you collect in your field, what other types of data collection out there create change and inspire you to continue the work that you do?
I’m often inspired by people who do environmental citizen science because many of them live in context. We all live in contexts where our lives and our health and our futures are shaped by systems and infrastructures that are invisible, and that we might not appear to have much power over, right? It could be air or water, or any number of other environmental issues. And it’s similar for our digital environments. I’m often inspired by people who do work on data collection and advocacy and science on the environment when thinking about what we could do for our digital worlds. Last summer, I spent a week with a friend traveling throughout the California Central Valley, talking to educators, activists, organizers, farmworkers and communities working to understand and use data to improve their physical environment. We spent a day with Cesar Aguirre at the Central California Environmental Justice Network. You have neighborhoods in central California that are surrounded by oil wells, and people are affected by the pollution that comes out of those wells — some of them have long been abandoned and are just leaking. And it’s hard to convince people sometimes that you’re experiencing a problem and to document the problem in a way that can get things to change. Cesar talked about ways that people used air sensors and told their stories and created media and worked in their local council and at a state level to document the health impacts of these oil wells and actually get laws changed at the state level to improve safety across the state. Whenever I encounter a story like that, whether it’s people in Central California or folks documenting oil spills in Louisiana or people just around the corner from Cornell — indigenous groups advocating for safe water and water rights in Onondaga Lake — I’m inspired by the work that people have to do, and do, to make their concerns and experiences legible to powerful institutions to create change.
Sometimes it’s through the courts, sometimes it’s through basic science that finds new solutions. Sometimes it’s mutual aid, and often at the heart of these efforts, is some creative work to collect and share data that makes a difference.
When it pertains to citizen science and the work that you do, what do you think is the biggest challenge you and other researchers face? And by that I mean, is it kind of the inaction of tech companies and a lot of these institutions? Or is it maybe just the very cold online climate of the world today?
It’s always hard to point to one. I think the largest one is just that we have a lot more work to do to help people realize that they can participate in documenting problems and imagining solutions. We’re so used to the idea that tech companies will take care of things for us that when things go wrong, we might complain, but we don’t necessarily know how to organize or what to do next. And I think there’s a lot that we as people who are involved in these issues can do to make people aware and create pathways — and I know Mozilla has done a lot of work around awareness raising. Beyond that, we’ve kind of reached a point where I wish companies were indifferent, but the reality is that they’re actively working to hinder independent research and accountability. If you talk to anyone who’s behind the Coalition for Independent Tech Research, I think we would all say we kind of wish we didn’t have to create it, because spending years building a network to support and defend researchers when they come under attack by governments or tech companies for accountability and transparency work, for actually trying to solve problems — like, that’s not how you prefer to spend your time. But I think that on the whole, the more people realize that we can do something, and that our perspective and experience matters, and that it can be part of the solution, the better off we are with our ability to document issues and imagine a better future. And as a result, when it involves organizing in the face of opposition, the more people we’ll have on that journey.
Just looking at this year in general with so much going on, what do you think is the biggest challenge that we face this year and in the world? How do we combat it?
Here’s the one I’ve been thinking about. Wherever you live, we don’t live in a world where a person who has experienced a very real harm from a digital technology — whether it’s social media or some kind of AI system — can record that information and seek some kind of redress, or even know who to turn to, to address or fix the problem or harm. And we see this problem on so many levels, right? If someone’s worried about discrimination from an algorithm in hiring, who do you turn to? If you’re worried about the performance of your self-driving car, or you have a concern about mental health and social media this year? We haven’t had those cases in court yet. We’re seeing some efforts by governments to create standards, and we’re seeing new laws proposed. But it’s still not possible, right? If you get a jar of food from the supermarket that has harmful bacteria, we kind of know what to do. There’s a way you can report it, and that problem can be solved for lots of people. But that doesn’t yet exist in these spaces. My hope for 2024 is that on whatever issue people are worried about or focused on, we’ll be able to make some progress towards knowing how to create those pathways. Whether it’s going to be work so that courts know how to make sense of evidence about digital technologies — and I think there are going to be some big debates there — or whether it’s going to involve these standards conversations that are happening in Europe and the U.S. around how to report AI incidents and how to determine whether an AI system is safe or not, or safe for certain purposes, and any number of other issues. Will that happen and be solved this year? No, it’s a longer-term effort. But how could we possibly say that we have a tech ecosystem that respects people’s rights and treats them well and is safe if we don’t even have basic ways for people to be heard when things go wrong, whether it’s by courts or companies, or elsewhere?
And so I think that’s the big question that I’m thinking about both in our citizen science work and our broader policy work at CAT Lab.
There’s also a bigger problem: so many of these apps and platforms are very much dependent upon us having to do something, compared to them.
Absolutely. I think a lot of people have lost trust in companies to do things about those reports, because companies have a history of ignoring them. In fact, in my very first community participatory science project in this space, which started back in 2014, we pulled information from hundreds of women who faced online harassment. We looked at the kinds of things they experienced, and then whether Twitter, back then, was responding to people’s reports. It revealed a bunch of systemic problems in how the company handled it. I think we’ve reached the point where there’s some value in that reporting — sometimes for good, and sometimes those things are exploited for censorship purposes as well; people report things they disagree with to try to get them taken down. But even more deeply, those reports don’t get at the deeper systemic issues. They don’t address how to prevent problems in the first place, or how to change the underlying logics of those platforms, or how to incentivize companies differently, so that they don’t create the conditions for those problems in the first place. I think we’re all looking for what the right entities are. Some currently exist, and some we’re going to have to create, that will be able to take on what people experience and actually create change that matters.
We started Rise25 to celebrate Mozilla’s 25th anniversary, what do you hope people are celebrating in the next 25 years?
I love that question because my first true encounter with Mozilla would have been in 2012 at the Mozilla festival, and I was so inspired to be surrounded by a room of people who cared about making the Internet and our digital worlds better for people. And it was such a powerful statement that Mozilla convened people. Other tech institutions have these big events where the CEO stands on a stage and tells everyone why what they’re doing is revolutionary. And Mozilla did something radically different, which was to create a community and a space for people to envision the future together. I don’t know what the tech innovations or questions are going to be 25 years from now — there will probably be some enduring ones about access and equity and inclusion and safety for whatever the technologies are. My hope is that 25 years from now, Mozilla will continue to be an organization and a movement that listens and amplifies and supports a broad and diverse community to envision that together. It’s one of the things that makes Mozilla so special, and I think is one of the things that makes it so powerful.
What is one action you think that everybody can take to make the world and their lives online better?
I think the action is to believe yourself when you notice something unusual or have a question, and then to find other people who can corroborate and build a collective picture — whether it’s by participating in a study at CAT Lab or something else. I have a respiratory disability, and it’s so easy to doubt your own experience and so hard to convince other people sometimes that what you’re experiencing is real. And so I think the biggest step we can take is to believe ourselves, and to believe others when they talk about things they’ve experienced and are worried about, and use that experience as the beginning of something larger, because it can be so powerful and make such a huge difference when people believe in each other and take each other seriously.
What gives you hope about the future of our world?
So many things. I’m inspired every time I meet someone who is making things work under whatever circumstances they have — unsurprising, perhaps, for someone who does citizen and community science. I think about our conversations with Jasmine Walker, a community organizer who organizes these large spaces for Black communities online and has been doing it for ages, across many versions of technology and eras of time. And just to see the care and commitment that people have to their communities and families as it relates to technology — it could be our collaborators who are investigating hiring algorithms or communities we’ve talked to. We did a study that involved understanding the impact of smartphone design on people’s time use, and we met a bunch of people who are colorblind and advocates for accessibility. In each of those cases, there are people who care deeply about those around them — so much that they’re willing to do science to make a difference. I’m always inspired when we talk, and we find ways to support the work that they’re doing by creating evidence together that could make a difference. As scientists and researchers, we are sometimes along for the ride for just part of the journey. And so I’m always inspired when I see the commitment and dedication people have for a better world.
Everyone has a hobby. More generally, everyone has things they’re interested in or passionate about. And pursuing those interests is one of the big reasons that we use the Web. The online world is a great place to connect with our fellow hobbyists and enthusiasts, to learn from them, and to share our own knowledge and accomplishments.
But so much of this happens today in online spaces where things can quickly turn sour. Big social media platforms increasingly expose us to toxic behavior. Interest groups and forums can be unwelcoming or intimidating to newcomers. These bad experiences are driving more and more people off of the open Web and into the protected enclaves of the so-called “cozy web.” Additionally, social media distractions and the pressure to keep up with posting can stall your progress more often than they accelerate it.
Didthis, a Mozilla innovation project, is a new app for anyone with a project-oriented hobby or personal interest. Whether you’re learning to knit a sweater, crafting a side table, or practicing a new recipe, Didthis makes it easy to keep track of your passion projects, capturing photos, links, and notes along the way and assembling your updates into a timeline that tells the story of your project. It’s a personal record of your progress, an acknowledgement of what you learned from your setbacks, and a celebration of your growth as a hobbyist.
Didthis isn’t really “social media,” at least not yet. Didthis is about being useful to you as you pursue your personal interests. We’re not following the typical social media playbook, here, and that’s intentional. Everything you post on Didthis is private by default. If you want, you can choose to share a link to your project with anyone you want: friends, family, or fellow hobbyists on social media or the “cozy web.” If people like Didthis, we’ll add social and community functionality over time, but our focus will always be on healthy interactions over virality.
For now, we’ve set up our own Discord server where Didthis users can connect with us to share feedback. We’ve also got a dedicated “show and tell” channel where Didthis users can optionally share their project updates with fellow hobbyists in our small but growing community.
You can try Didthis on the Web by visiting https://didthis.app. Our Web app works on both desktop and mobile devices. We also have an early iOS app that is available in the App Store for the US and Canada (with Android to follow).
As this is still an experiment, we are eager for you to share your feedback at any time in the Didthis Discord channel. If you prefer to share more privately, you can email our entire team directly at didthis@mozilla.com.
At Mozilla, we know we can’t create a better future alone, which is why each year we highlight the work of 25 digital leaders using technology to amplify voices, effect change, and build new technologies globally through our Rise 25 Awards. These storytellers, innovators, activists, advocates, builders and artists are helping make the internet more diverse, ethical, responsible and inclusive.
This week, we chatted with activist Larissa May, the founder of #HalfTheStory, a nonprofit dedicated to empowering the next generation’s relationship with technology. We talked with May about the role technology played in her mental health, how #HalfTheStory evolved from a project in her college dorm room to what it is today, and her work in policy advocating for tech companies to build solutions to help youth thrive.
You know firsthand how toxic social media can be for kids. It has changed a lot in recent years, for the good and the bad. What do you think is the biggest danger kids face in 2024, and what can we do to combat it?
The average American teenager will spend approximately 30 years of their life behind screens. The greatest danger children, and indeed all of us, face lies in the uncertainties surrounding social media and its technologies. Technology evolves rapidly, outpacing both human understanding and legislative frameworks.
In 2024, we are witnessing the emergence of AI, with its potential for positive innovation, while also getting glimpses of its perilous side, whose full extent eludes us. Formerly innocuous interactions, such as a mere comment, now hold the potential to morph into deceptive deepfakes, amplifying the challenges posed by social media. The velocity of AI’s advancement often outpaces our comprehension, leading to profound emotional ramifications, not only for our children but also for our societal fabric and economy.
Watching the growth of the #HalfTheStory movement has certainly had a big impact on you. Has anything surprised you along the way that you weren’t expecting?
What surprised me most along the way was realizing that most adults grapple with their relationship with technology just as much as children do. Now, as an adult who was once a child with a dream and an idea – which became #HalfTheStory – I’ve come to understand that while our focus may be on safeguarding children, we must also provide support to the adults who guide them. Demonstrating and modeling healthy relationships with technology is a crucial piece of this puzzle.
From the spotlight you’ve received in recent years – Good Morning America, your Ted Talk, TIME, Forbes, NBC, etc. — which experience made you stop and reflect on the magnitude of the work you do?
There is no destination or pot of gold. In fact, the goalpost is always moving. There isn’t a day that I wake up without wonder and awe for the journey and where it’s taken me. Sometimes I struggle to fully understand the magnitude and the impact of this nonprofit. There are moments every week that surprise me, whether it be the people who slide into my DMs, full-circle moments, or people that I meet on the street who’ve known about #HalfTheStory or shared their own story with HTS many years ago.
Although the big accolades and TV segments are meaningful, I think the moments that are the most striking for me are the ones that happen behind closed doors, the messages that I receive, the one-off text messages with young people, and the aha moments that help me better understand the realities that young people are facing so that I can create a voice in every room where a decision is being made about them.
What do you think is the biggest challenge we face in the world this year on and offline?
Social media has perpetuated so many of the inequalities we see in the world. The online “realities” we see are not the whole story, and they make it more difficult for us to see where people come from and to walk in their shoes.
This year, with an election happening in America, this is especially dangerous, as social media often keeps us in our own ecosystems and echo chambers. It’s up to us to break through those so that we can understand multiple perspectives and have empathy for what other people are going through.
Social media feeds on emotions and combative behavior – that’s just how the algorithm works. We have to step outside of our algorithm and into our humanity.
Where do you draw inspiration from to continue your work as an activist today?
Teen work makes the dream work. I draw my inspiration from the future and the heartbeat of #HalfTheStory, our community.
What is one action that you think everyone should take to make the world and our lives a little better?
One simple action you can take is to put your phone down and engage in eye contact, genuinely seeking to understand someone’s story and background. Often, we become ensnared in our own egos, identities, and digital distractions, overlooking those right in front of us who may need our support the most.
To create more room for the present moment, I employ a few strategies. I set away messages for my text messages, switch my phone to grayscale mode, and strive to make my technology less addictive by hacking my algorithm. These practices help me liberate my mind and savor the moments between the hustle and bustle of daily life.
We started Rise25 to celebrate Mozilla’s 25th anniversary, what do you hope people are celebrating in the next 25 years?
In the next 25 years, I hope that humanity is celebrating humanity. I think for many years we’ve celebrated tech and innovation, and as we’ve done that, we’ve lost touch with ourselves, our souls, and the things that make us human. I do believe that we will see a pendulum swing — we are even seeing it with some of our teens now.
Being human and accessing screen-free experiences really is a luxury, and connection that is not simulated is one of the most precious things that we have. Time is a non-renewable resource, so I hope we don’t spend the next 25 years behind our screens. What gives me hope for the future is our teens.
What gives you hope about the future of our world?
Our society loves to paint a story of darkness and digital sickness, but I get to witness the digital wellness revolution unfold every day before my eyes.
Our teens are paving the path forward. They are the heart and soul of #HalfTheStory and I’m the lucky leader that gets to sail alongside them into a brighter horizon.
Developer Tools help developers write and debug websites on Firefox. This newsletter gives an overview of the work we’ve done as part of the Firefox 123 Nightly release cycle.
Firefox being an open source project, we are grateful to get contributions from people outside of Mozilla. Many thanks to Aaron, who added a “Save as File” context menu entry in the Network panel so (mostly) all responses can be saved to disk (#1221964).
A source map is a file that maps from a generated source, i.e. the actual JavaScript source the browser runs, to the original source, which could be a TypeScript, JSX, or even a regular JS file that was compressed. This enables DevTools to display code to developers in the way they wrote it.
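For readers unfamiliar with the format, here is a minimal sketch of what a version 3 source map file might look like. The file names are hypothetical and the `mappings` string is truncated for illustration — it is a Base64 VLQ encoding of the position correspondences, not something you would write by hand:

```json
{
  "version": 3,
  "file": "bundle.min.js",
  "sources": ["src/app.ts"],
  "sourcesContent": ["export function greet(name: string) { /* ... */ }"],
  "names": ["greet", "name"],
  "mappings": "AAAA,SAASA,MAAMC"
}
```

The generated file typically points at its map with a trailing comment such as `//# sourceMappingURL=bundle.min.js.map`, which is how the Debugger discovers the map in the first place — and the retrieval of that URL is what can fail in the scenario described below.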
It can happen that a referenced source map file can’t be retrieved by the Debugger, or that it’s invalid; in such cases, we’ll now show a warning message in the editor to indicate why we can only show the generated source (#1834725).
Still related to source maps, a link to the original source is now displayed in the footer when selecting a location in a generated source (#1834729).
We spent some time fixing common issues that were affecting the preview popup, which is displayed when hovering over variables (#1873147, #1872715, #1873149). It should now be more solid and reliable, but let us know if you’re still having problems with it!
Finally, we fixed a nasty bug that could crash the Debugger (#1874382).
Misc
We vastly improved console performance when there are a lot of messages being displayed (#1873066, #1874696) and fixed logging of cross-origin iframes’ contentWindow (#1867726) and arrays in workers (#1874695).
The Network panel timing markers for Service Worker interception are now displayed correctly (#1353798).
We fixed a couple of regressions in the Inspector. The first one was preventing double-click editing of attributes containing URLs in the markup view (#1870214), and the second was adding an extra line when copying/pasting rules from the Rules view (#1876220).
Thank you for reading this and using our tools — see you next month for a new round of updates!
Calling all extension developers! With Manifest V3 picking up steam again, we wanted to provide some visibility into our current plans as a lot has happened since we published our last update.
Back in 2022 we released our initial implementation of MV3, the latest version of the extensions platform, in Firefox. Since then, we have been hard at work collaborating with other browser vendors and community members in the W3C WebExtensions Community Group (WECG). Our shared goals were to improve extension APIs while addressing cross browser compatibility. That collaboration has yielded some great results to date and we’re proud to say our participation has been instrumental in shaping and designing those APIs to ensure broader applicability across browsers.
We continue to support DOM-based background scripts in the form of Event pages, and the blocking webRequest feature, as explained in our previous blog post. Chrome’s version of MV3 requires service worker-based background scripts, which we do not support yet. However, an extension can specify both and have it work in Chrome 121+ and Firefox 121+. Support for Event pages, along with support for blocking webRequest, is a divergence from Chrome that enables use cases that are not covered by Chrome’s MV3 implementation.
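As a rough sketch of the cross-browser pattern described above (the extension name and script file name are illustrative), an MV3 manifest can declare both background forms side by side — Firefox uses the `scripts` key to load an Event page, while Chrome uses the `service_worker` key, and each browser ignores the key it doesn’t support:

```json
{
  "manifest_version": 3,
  "name": "Cross-Browser Example",
  "version": "1.0",
  "background": {
    "scripts": ["background.js"],
    "service_worker": "background.js"
  }
}
```

For this to work, `background.js` itself should stick to logic that behaves correctly in both environments — for example, registering event listeners at the top level rather than relying on a persistent global state, since both Event pages and service workers can be shut down between events.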
Well, what’s happening with MV2, you ask? Great question — in case you missed it, Google announced late last year their plans to resume their MV2 deprecation schedule. Firefox, however, has no plans to deprecate MV2 and will continue to support MV2 extensions for the foreseeable future. And even if we re-evaluate this decision at some point down the road, we anticipate providing a notice of at least 12 months for developers to adjust accordingly and not feel rushed.
As our plans solidify, future updates around our MV3 efforts will be shared via this blog. We are loosely targeting our next update after the conclusion of the upcoming WECG meeting at the Apple offices in San Diego. For more information on adopting MV3, please refer to our migration guide. Another great resource worth checking out is the recent FOSDEM presentation a couple team members delivered, Firefox, Android, and Cross-browser WebExtensions in 2024.
If you have questions, concerns or feedback on Manifest V3 we would love to hear from you in the comments section below or if you prefer, drop us an email.
At Mozilla, we know we can’t create a better future alone, which is why each year we highlight the work of 25 digital leaders using technology to amplify voices, effect change, and build new technologies globally through our Rise 25 Awards. These storytellers, innovators, activists, advocates, builders and artists are helping make the internet more diverse, ethical, responsible and inclusive.
This week, we chatted with Nyamekye Wilson, a creator who is the founder and CEO of Black Sisters in STEM, a group building one of the largest talent pipelines of Black college women in STEM. Her passion for global STEM and bridging the gender gap gave birth to a six-figure tech company while she was working at Google. We talk with Nyamekye about the challenges she’s faced in her career, starting a Black nonprofit, where she draws inspiration from and more.
OK, first off, where did the phrase “the Moses of STEM” originate from for you?
It came to me at church. It was something that I just knew and heard, and my brain was like, “the Moses of STEM.” And then it was something that I spoke over with the team, like — that is really perfect, that’s exactly who you are.
The historical figure of Moses — he was someone who led people out of captivity. And so really with Black Sisters in STEM, it’s not just a workplace organization, it’s so much more than that. It is really that we are taking Black women out of a lot of the captivity that they’ve learned over time, from a very young age, of things that we cannot be, things that we cannot do, places we cannot go. Who we cannot be. And so, when it comes to the Moses of STEM, it’s really about unearthing and bringing people out of a lot of bondage, and most of that bondage is always in the mind.
You mentioned a lot of the different experiences and labels you’ve dealt with in your career — racism, sexism, classism — that we face in schooling and in the workforce in general. Which issue would you say was the one that really ignited the fire for you and the work that you do right now the most?
I would say it was really the concept of intersectionality. When I did leave my finance major in college, I went into sociology and women, gender and sexuality studies, that’s when I got introduced to Kimberlé Crenshaw and her concept of intersectionality. And that was the first time in life that I actually heard a philosophy that actually spoke to my experience.
What are the biggest challenges that you’ve had to face starting a Black nonprofit that most people might not be aware of?
We are not the ones in the world of philanthropy, typically, when it comes to running systems and running things at a large level. Even when you look at places like Africa or places like the Caribbean, or even Black American communities in the U.S., a lot of organizations aren’t run by us, aren’t founded by us in our community to support us. So that’s one thing, the funding structure and really understanding that some of the relationships are doing a lot of funding with the people that are running them.
Number two is that it can be very difficult to fight for something that you also share the identity of. This is something that we noticed in the civil rights movement — it’s a lot of fatigue to fight for the rights of your people when you are also the people that are inclusive of those fights, right? It’s a constant mental war, I would say, because it’s like, I’m not just talking about Black women from afar, I am the Black woman who’s been through that. I am the Black woman who did not have the support. I am the Black woman who, you know, went through the questioning and anxiety trying to get to the place where I’m trying to get my girls to. I am the Black woman who gained over 60 pounds trying to take my family out of poverty and be the first person in my family to have a six-figure career right out of college. That is a mental battle that you constantly have to fight. And so you really have to have a lot of mental fortitude. You really have to work to build the best interactions and best relationships — the ones that other populations are more likely to already have. They don’t have to do half the work, because if they came from a very wealthy area, it’s very likely they have that foundation already. And they likely have those people as family friends or family connections — it makes it so much easier when you can just go to their house, or you can just call them and say, “hey, I have this idea.” That’s how money moves: by relationship. Essentially, it always moves by relationship, because money is a trust factor. And when you have the relationship — someone who has known you since you were five, and they’ve been friends with your mom and dad for 20 years — that trust factor is already there. Versus a young Black girl coming to the phone with her story and her narrative, and you’ve never met her in your life. You don’t know anything about her. You don’t know anything about the organization. Now, I have to do 10 times the work.
And plead my case — who I am, what we’re doing at Black Sisters — to earn that trust, compared to someone who has to share far less information because you’ve known them your whole life. And that’s just human nature. But that human nature comes back, again, to the systems of racism. That effect of racism is now causing more work for me.
Where do you draw inspiration from in continuing the work that you do today?
I would say what inspires me really is my faith. I’m a very faith-based person, very spiritual person, and my faith in Christ is what keeps me going. Because if not, it would be very hard to do this work. Number two of what keeps me going is knowing that the people where I come from had even less access, even less opportunity, and seeing what they were able to create and who they were able to be is so inspiring.
I’ve always loved learning about civil rights movements, learning about things that MLK was a part of. All of these people — Sojourner Truth, Harriet Tubman — when you read and watch, you learn about the level of resilience, the level of fortitude, and the ability to feel and see a better world, completely at the expense of themselves. As much as there is still a need for a better world, the world I’m seeing is way better than the world they had. And if they could effect that global change, I can, too.
And then I would also say, in alignment with that, my own mother. She is a perfect example. Single mother. She really held the weight of my entire family on her shoulders. And she never gave up. One of the most consistent, most brilliant, most hardworking — if not the most hardworking — people I know. Everything that she’s given, everything that I already have, inspires me to do more.
What is one action that everybody can take to make our world a little bit better?
I would say take the time to learn. After going through sociology and women, gender and sexuality studies, I just realized there was a wealth of knowledge that everyone in this world should have. Unfortunately, that’s not how most of the education systems are. … To be a viable part of society, it is really important to do the reading and understand where society is right now. I don’t think a lot of people do enough research.
And then, number two, after you do that research, have some sort of goals around supporting people who are putting their efforts in changing that society and changing that world. And be very intentional about it. Look at who is running those companies. Look at the impact of their companies. Look at who they’re supporting — and everyone at every level.
Whether you give your time, whether you give your money, whether you give whatever, I don’t think there are ever enough people giving. You can even give your amplification, right? Amplifying something on social media. Making sure you forward a newsletter. Making sure you comment on something. You don’t understand what that could potentially do, especially if you have a certain network. If you’re on LinkedIn and you take the time to comment on a Black Sisters post, you are doing a lot for us. Because now your entire network is going to be seeing that consistently. And that is something that’s completely free and takes only a few minutes to do.
And then also, if you have the capacity, make sure you’re also giving on a yearly basis as much as you can budget for.
We started Rise 25 to celebrate Mozilla’s 25th anniversary. What do you hope that people are celebrating in the next 25 years?
I hope people are celebrating a society that provides opportunities based off of potential and not race, color, gender, etc. I hope that potential means opportunities and that people are celebrating the fact that they are in a city, space, etc. that allows for potential and opportunities to always be on equal footing. And not be based off of things that you cannot control.
What gives you hope for the future of our world?
What gives me hope is hope (laughs). What gives me hope is knowing that human beings have always had, and will always have, the ability to tell stories, to see things progress, to move and change the world. It’s something that has been done throughout history. So many people have different stories. And so, I really believe that if there’s a force that keeps me going, it’s that people can hold onto that.
Hello and welcome to another issue of This Week in Rust!
Rust is a programming language empowering everyone to build reliable and efficient software.
This is a weekly summary of its progress and community.
Want something mentioned? Tag us at @ThisWeekInRust on Twitter or @ThisWeekinRust on mastodon.social, or send us a pull request.
Want to get involved? We love contributions.
An important step for RFC implementation is for people to experiment with the
implementation and give feedback, especially before stabilization. The following
RFCs would benefit from user testing before moving forward:
If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing
label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature
need testing.
Always wanted to contribute to open-source projects but did not know where to start?
Every week we highlight some tasks from the Rust community for you to pick and get started!
Some of these tasks may also have mentors available, visit the task page for more information.
Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker. They are ordered below by when the CFP closes.
But also three admittedly small-ish regressions which seemed unanticipated and
were still large enough that I did not feel comfortable rubber-stamping them
with a perf-regression-triaged marking.
If you are running a Rust event please add it to the calendar to get
it mentioned here. Please remember to add a link to the event too.
Email the Rust Community Team for access.
Mozilla recently signed onto an amicus brief – alongside the Electronic Frontier Foundation, the Internet Society, Signal, and a broad coalition of other allies – opposing the Nevada Attorney General’s recent attempt to limit encryption. The amicus brief signals a collective commitment from these organizations to the importance of encryption in safeguarding digital privacy and security as fundamental rights.
The core of this dispute is the Nevada Attorney General’s proposition to limit the application of end-to-end encryption (E2EE) for children’s online communications. It is a move that ostensibly aims to aid law enforcement but, in practice, could significantly weaken the privacy and security of all internet users, including children. Nevada argues that end-to-end encryption might impede some criminal investigations. However, as the amicus brief explains, encryption does not prevent either the sender or recipient from reporting concerning content to police, nor does it prevent police from accessing other metadata about communications via lawful requests. Blocking the rollout of end-to-end encryption would undermine privacy and security for everyone for a marginal benefit that would be far outweighed by the harms such a draconian limitation could create.
The case, set for a hearing in Clark County, Nevada, encapsulates a broader debate on the balance between enabling law enforcement to combat online crimes and preserving robust online protections for all users – especially vulnerable populations like children. Mozilla’s involvement in this amicus brief is founded on its longstanding belief that encryption is an essential component of its core Manifesto tenet – privacy and security are fundamental online and should not be treated as optional.
Looking back at the past year it sure was different than the years before, again.
Obviously we left most of the pandemic isolation behind us and I got to meet more of my coworkers in person:
At the Mozilla All-Hands in Montreal, Canada, though that was cut short for me due to ... of course: Covid.
At PyCon DE and PyData here in Berlin.
And at another workweek with my extended team also here in Berlin.
My work also changed.
As predicted a year ago I branched out beyond the Glean SDK, took a look at our data pipeline,
worked on features across the stack and wrote a ton of stuff that is not code.
Most of that work spanned months and months and some is still not 100% finished.
For this year I'm focusing a bit more on the SDK and client-side world again.
With Glean used just about everywhere it's time we look into some optimizations.
In the past we made it correct first and paid less attention to optimizing resource usage (CPU & memory for example).
Now that we have more and more usage in Firefox Desktop (Use Counters!) we need to look into making data collection more efficient.
The first step is to get better insights where and how much memory we use.
Then we can optimize.
Firefox comes with some of its own tooling for that, which we need to integrate with; see about:memory for example.
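To make the "better insights" step concrete, here is a minimal, self-contained Rust sketch of one way to measure heap usage: a counting wrapper around the system allocator. This is purely illustrative and is not Glean's actual instrumentation (in Firefox the plan described above is to integrate with the browser's own memory reporting instead):

```rust
use std::alloc::{GlobalAlloc, Layout, System};
use std::sync::atomic::{AtomicUsize, Ordering};

// Wraps the system allocator and keeps a running total of live heap bytes.
struct CountingAllocator;

static ALLOCATED: AtomicUsize = AtomicUsize::new(0);

unsafe impl GlobalAlloc for CountingAllocator {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        let ptr = System.alloc(layout);
        if !ptr.is_null() {
            ALLOCATED.fetch_add(layout.size(), Ordering::Relaxed);
        }
        ptr
    }

    unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
        System.dealloc(ptr, layout);
        ALLOCATED.fetch_sub(layout.size(), Ordering::Relaxed);
    }
}

#[global_allocator]
static GLOBAL: CountingAllocator = CountingAllocator;

// Current number of heap bytes handed out and not yet freed.
fn heap_bytes_in_use() -> usize {
    ALLOCATED.load(Ordering::Relaxed)
}

fn main() {
    let before = heap_bytes_in_use();
    let data: Vec<u64> = (0u64..1024).collect();
    println!(
        "Vec of 1024 u64s added ~{} bytes",
        heap_bytes_in_use() - before
    );
    drop(data);
}
```

Once a library can answer "how many bytes do I hold right now?", the numbers can be fed into per-component reporting like about:memory.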
I'm also noticing some parts in our codebase where in hindsight I wish we had made different implementation decisions.
At the time we did make the right choices, but now we need to deal with the consequences (memory usage might be one of these, storage sure is another one).
And Mozilla more broadly?
It's changing. All the time.
We just had layoffs and reprioritization of projects.
That certainly dampens the mood.
Focus shifts and work changes.
But underneath there's still the need to use data to drive our decisions and so I'm rather confident that there's work for me to do.
Thank you
None of my work would happen if it weren't for my manager Alessio and teammates Chris, Travis, Perry, Bruno and Abhishek.
They make it fun to work here, always have some interesting things to share and they still endure my bad jokes all the time.
Thank you!
Thanks also goes out to the bigger data engineering team within Mozilla, and all the other people at Mozilla I work or chat with.
This post was planned to be published more than a week ago. It's still perfectly in time. I wasn't able to focus on it earlier.
In modern technology, interoperability between programs is crucial to the usability of applications, user choice, and healthy competition. Today Mozilla has joined an amicus brief at the Ninth Circuit, to ensure that copyright law does not undermine the ability of developers to build interoperable software.
This amicus brief comes in the latest appeal in a multi-year courtroom saga between Oracle and Rimini Street. The sprawling litigation has lasted more than a decade and has already been up to the Supreme Court on a procedural question about court costs. Our amicus brief addresses a single issue: should the fact that a software program is built to be interoperable with another program be treated, on its own, as establishing copyright infringement?
We believe that most software developers would answer this question with: “Of course not!” But the district court found otherwise. The lower court concluded that even if Rimini’s software does not include any Oracle code, Rimini’s programs could be infringing derivative works simply “because they do not work with any other programs.” This is a mistake.
The classic example of a derivative work is something like a sequel to a book or movie. For example, The Empire Strikes Back is a derivative work of the original Star Wars movie. Our amicus brief explains that it makes no sense to apply this concept to software that is built to interoperate with another program. Not only that, interoperability of software promotes competition and user choice. It should be celebrated, not punished.
This case raises similar themes to another high profile software copyright case, Google v. Oracle, which considered whether it was copyright infringement to re-implement an API. Mozilla submitted an amicus brief there also, where we argued that copyright law should support interoperability. Fortunately, the Supreme Court reached the right conclusion and ruled that re-implementing an API was fair use. That ruling and other important fair use decisions would be undermined if a copyright plaintiff could use interoperability as evidence that software is an infringing derivative work.
Over the past year, Servo has gone a long way towards reigniting the dream of a web rendering engine in Rust. This comes with a lot of potential, and not just towards becoming a viable alternative to WebKit and Chromium for embedded webviews. If we can make the web platform more modular and easily reusable in both familiar and novel ways, and help build web-platform-grade libraries in underlying areas like networking, graphics, and typography, we could really change the Rust ecosystem.
In theory, anything is possible in a free and open source project like ours, and we think projects like Servo and Ladybird have shown that building a web browser with limited resources is more achievable than many have assumed. But doing those things well does take time and money, and we can only achieve Servo’s full potential with your help.
We will stop accepting donations on LFX soon. Any funds left over will also be transferred to the Servo project, but recurring donations will be cancelled, so if you would like to continue your recurring donation, please do so on GitHub or Open Collective.
Both one-time and monthly donations are appreciated, and over 94% of the amount will go directly towards improving Servo, with the remaining 6% going to processing fees. The way the funds are used is decided in public via the Technical Steering Committee, but to give you a sense of scale…
at 100 USD/month, we can cover the costs of our website and other core infrastructure
at 1,000 USD/month, we can set up dedicated servers for faster Windows and macOS builds, better test coverage and reliability, and new techniques like fuzzing and performance testing
at 10,000 USD/month, we can sponsor a developer to make Servo their top priority
If you or your company are interested in making a bigger donation or funding specific work that would make Servo more useful to your needs, you can also reach out to us at join@servo.org.
The new Screenshots component (which replaces the Screenshots built-in extension) is now enabled by default on Nightly (bug 1789727)! This improves upon the extension version of the feature in a number of ways:
You can capture screenshots of about: pages and other pages that extensions cannot normally manipulate
Improved performance!
Greatly improved keyboard and visual accessibility
You can ensure it’s on by default by checking if screenshots.browser.component.enabled is set to true in about:config
You can access the Screenshot feature via the keyboard shortcut (Ctrl + Shift + S, or Cmd + Shift + S on macOS), the context menu, or by adding the Screenshot button to your toolbar via toolbar customization
The Firefox Profiler has a new “Network Bandwidth” feature to record the network bandwidth used between every profiler sample. Example profile
We’ve started early investigations on a profile backup feature. This feature will, in theory, allow users to create backups of their user profile in an archive on the local file system. It’s still very early days here, but we have a meta bug here that people can follow along with.
In collaboration with the other major browser engine developers, Mozilla is thrilled to announce Speedometer 3 today. Like previous versions of Speedometer, this benchmark measures what we think matters most for performance online: responsiveness. But today’s release is more open and more challenging than before, and is the best tool for driving browser performance improvements that we’ve ever seen.
This fulfills the vision set out in December 2022 to bring experts across the industry together in order to rethink how we measure browser performance, guided by a shared goal to reflect the real-world Web as much as possible. This is the first time the Speedometer benchmark, or any major browser benchmark, has been developed through a cross-industry collaboration supported by each major browser engine: Blink, Gecko, and WebKit. Working together means we can build a shared understanding of what matters to optimize, and facilitates broad review of the benchmark itself: both of which make it a stronger lever for improving the Web as a whole.
And we’re seeing results: Firefox got faster for real users in 2023 as a direct result of optimizing for Speedometer 3. This took a coordinated effort from many teams: understanding real-world websites, building new tools to drive optimizations, and making a huge number of improvements inside Gecko to make web pages run more smoothly for Firefox users. In the process, we’ve shipped hundreds of bug fixes across JS, DOM, Layout, CSS, Graphics, frontend, memory allocation, profile-guided optimization, and more.
We’re happy to see core optimizations in all the major browser engines turning into improved responsiveness for real users, and are looking forward to continuing to work together to build performance tests that improve the Web.
Welcome to a new report on the progress of transforming K-9 Mail into Thunderbird for Android. I hope you’ve enjoyed the extra day in February. We certainly did and used this opportunity to release a new stable version on February 29.
If you’re new to this series or the unusually long February made you forget what happened the previous month, you might want to check out January’s progress report.
New stable release
We spent most of our time in February getting ready for a new stable release – K-9 Mail 6.800. That mostly meant fixing bugs and usability issues reported by beta testers. Thanks to everyone who tested the app and reported bugs!
With the new account setup being mostly done, we’ll concentrate on the following two areas.
Material 3
The question of whether to update the user interface to match the design used by the latest Android version seems to have always split the K-9 Mail user base. One group prefers that we work on adding new features instead. The other group wants their email app of choice to look similar to the apps that ship with Android.
Never updating the user interface to the latest design is not really an option. At some point all third-party libraries we’re using will only support the latest platform design. Not updating those libraries is also not an option because Android itself is constantly changing and requires app/library updates just to keep existing functionality working.
I think we found a good balance by not being the first ones to update to Material 3. By now a lot of other app developers have done so and countless bugs related to Material 3 have been found and fixed. So it’s a good time for us to start switching to Android’s latest design system now.
We’re currently still in a research phase to figure out what parts of the app need changing. Once that’s done, we’ll change the base theme and fix up the app screen by screen. You will be able to follow along by becoming a beta tester and installing K-9 Mail 6.9xx beta versions once those become available.
Android 14 compatibility
K-9 Mail is affected by a couple of changes that were introduced with Android 14. We’ve started to look into which parts of the app need to be updated to be able to target Android 14.
Our current plan is to include the necessary changes in updates to the K-9 Mail 6.8xx line.
Community Contributions
S Tanveer Hussain submitted a pull request to update the information about third-party libraries in K-9 Mail’s About screen (#7601)
GitHub user LorenzHo provided a patch to not focus the recipient input field when the Compose screen was opened using a mailto: URI (#7623). Unfortunately, this change had to be backed out later because of unintended side effects. But we’re hopeful a modified version of this change will make it into the app soon.
Thank you for your contributions!
Releases
In February 2024 we published a new stable release:
Let's face it: Dawnmaker still has some important flaws. We're aware of that, and we are working on those flaws. One of the biggest remaining problems with the game was that one of its core mechanics, the oppression of the Smog, was… well, not explained at all. If you didn't have a developer behind your back to tell you, there was almost no way you could understand it.
What's the oppression of the Smog, you ask? It is the fact that the Smog gets stronger as the game progresses, and thus makes you consume luminoil faster. It's a simple formula that grows every time you shuffle your deck of cards: when that happens, your luminoil consumption increases by the level of your aerostation. We tried a few small things to explain that mechanism: we added an animation to the Smog, making it grow darker and closer to your city when the oppression increases. We also had a line in the luminoil tooltip showing how much luminoil is consumed by this oppression. But that was not nearly enough, and I started thinking about how to solve this with a complex UI inspired by Frostpunk. I am glad I did not go with that, as it would have been a nightmare to implement, and I now believe it would not have helped much.
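The formula itself is small enough to sketch in code. Here is an illustrative model, written in Rust for this sketch; the type and field names are hypothetical, not Dawnmaker's actual code:

```rust
// Hypothetical model of the Smog oppression mechanic: each deck
// reshuffle raises luminoil consumption by the aerostation's level.
struct City {
    aerostation_level: u32,
    base_luminoil_upkeep: u32,
    // Extra luminoil drained by the Smog's oppression.
    oppression: u32,
}

impl City {
    fn on_deck_shuffled(&mut self) {
        self.oppression += self.aerostation_level;
    }

    fn luminoil_consumed_per_turn(&self) -> u32 {
        self.base_luminoil_upkeep + self.oppression
    }
}

fn main() {
    let mut city = City {
        aerostation_level: 2,
        base_luminoil_upkeep: 5,
        oppression: 0,
    };
    // Two reshuffles at aerostation level 2 add 4 to the upkeep.
    city.on_deck_shuffled();
    city.on_deck_shuffled();
    println!("luminoil per turn: {}", city.luminoil_consumed_per_turn());
}
```

Simple as the rule is, nothing in the game surfaced it, which is exactly the problem described above.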
So, while I was busy not solving this problem, another one came to the front: the progression issue. You see, we have been struggling a lot with the meta progression we offer in Dawnmaker. We've made attempts at doing it Slay the Spire-like, with a map of limited paths and rewards after each stop. That didn't work very well: the roguelike structure (the progression on the world map) around an already roguelike structure (a single region / city) was weird and sometimes frustrating. So we started thinking about doing it differently, more like Hades does it with its mirror. There would be a new resource that you'd gain after securing each region, and that resource could be spent in order to improve your starting party, making each new region a tad easier to secure.
We like this idea, but it causes another problem: if the game gets easier and easier, there will come a point when it will become boring, as the challenge will be lost. We thus need to have a progression in the difficulty just as we have one in the player's strength. Hades does that with the Heat system, where you can choose how you increase the challenge each time you play. We cannot easily do something similar, so I once again started thinking about a complex question: how can we increase the difficulty of the game? What levers do we have to do that in a way that is challenging and doesn't feel too artificial or frustrating? There's an easy answer to that: the length of the game and the number of lighthouses, which we use in the currently called "Discovery" game, where we increase both the level to reach and the number of lighthouses to repair in order to win a game as you progress on the map. But that is not enough for a long-term progression, as it would quickly feel completely artificial. Luckily, there was another feature we could use, and did not: the oppression of the Smog. That is where those two problems converged, and led me to a single solution solving both: turning the Smog's behavior into a deck of cards.
Using cards to represent the Smog's behavior increases the affordance of the game: Smog cards look like building cards, and they sort of work like them. It's a card that has an effect, that you can read at any time, and because it uses the same wording, abilities and display as the buildings, it's easy to interpret. Changing the behavior simply means changing the card, and we can do a whole bunch of animations there to show that happening. We can also add tooltips around those elements to explain them further. That's a first big win! The second one is that we create and handle those decks in our content editor, and we can create as many cards and decks as we want. There we have it: near-infinite difficulty progression, simply by making different Smog cards and assembling them differently in various decks!
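The data-driven deck idea could be sketched like this; again this is Rust used purely for illustration, with hypothetical names rather than Dawnmaker's actual code or content format:

```rust
// Hypothetical sketch: Smog behavior as a deck of effect cards,
// reusing the same effect vocabulary as building cards.
#[derive(Clone, Debug)]
enum Effect {
    IncreaseOppression(u32),
    DrainLuminoil(u32),
}

#[derive(Clone, Debug)]
struct SmogCard {
    name: &'static str,
    effect: Effect,
}

struct SmogDeck {
    cards: Vec<SmogCard>,
    next: usize,
}

impl SmogDeck {
    fn new(cards: Vec<SmogCard>) -> Self {
        SmogDeck { cards, next: 0 }
    }

    // Draw the next card, cycling through the deck; difficulty is
    // tuned entirely by which cards a given deck contains.
    fn draw(&mut self) -> &SmogCard {
        let card = &self.cards[self.next % self.cards.len()];
        self.next += 1;
        card
    }
}

fn main() {
    let mut deck = SmogDeck::new(vec![
        SmogCard { name: "Creeping Haze", effect: Effect::IncreaseOppression(1) },
        SmogCard { name: "Choking Cloud", effect: Effect::DrainLuminoil(3) },
    ]);
    let first = deck.draw();
    println!("Smog plays: {} ({:?})", first.name, first.effect);
}
```

Because Smog cards share the buildings' effect vocabulary, the same tooltip and display code can explain them to the player for free.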
Note that we haven't done anything like that yet. So far we have only reproduced the previous system with cards. But the potential is here: when we start working on the meta progression, we can be confident that we'll have the tools to make the difficulty progress as well.
We definitely killed two nasty, vicious flying creatures with one deck. Neat!
This piece was initially sent out to the readers of our newsletter. Wanna join in on the fun? Head over to Dawnmaker's presentation page and fill out the form. You'll receive regular stories about how we're making this game, the latest news of its development, as well as exclusive access to Dawnmaker's alpha version!
The rustup team is happy to announce the release of rustup version 1.27.0.
Rustup is the recommended tool to install Rust, a programming language that is empowering everyone to build reliable and efficient software.
If you have a previous version of rustup installed, getting rustup 1.27.0 is as easy as stopping any programs which may be using Rustup (e.g. closing your IDE) and running:
$ rustup self update
Rustup will also automatically update itself at the end of a normal toolchain update:
$ rustup update
If you don't have it already, you can get rustup from the appropriate page on our website.
What's new in rustup 1.27.0
This long-awaited Rustup release has gathered all the new features and fixes since April 2023. These changes include improvements in Rustup's maintainability, user experience, compatibility and documentation quality.
Also, it's worth mentioning that Dirkjan Ochtman (djc) and rami3l (rami3l) have joined the team and are coordinating this new release.
At the same time, we have granted Daniel Silverstone (kinnison) and 二手掉包工程师 (hi-rustin) their well-deserved alumni status in this release cycle.
Kudos for your contributions over the years and your continuous guidance on maintaining the project!
The headlines for this release are:
Basic support for the fish shell has been added.
If you're using fish, PATH configs for your Rustup installation will be added automatically from now on.
Please note that this will only take effect on installation, so if you have already installed Rustup on your machine, you will need to reinstall it.
For example, if you have installed Rustup via rustup.rs, simply follow rustup.rs's instructions again;
if you have installed Rustup using some other method, you might want to reinstall it using that same method.
Rustup support for loongarch64-unknown-linux-gnu as a host platform has been added.
This means you should be able to install Rustup via rustup.rs and no longer have to rely on loongnix.cn or self-compiled installations.
Please note that as of March 2024, loongarch64-unknown-linux-gnu is a "tier 2 platform with host tools", so Rustup is guaranteed to build for this platform.
According to Rust's target tier policy, this does not imply that these builds are also guaranteed to work, but they often work to quite a good degree and patches are always welcome!
Like the rest of the Rust community, crates.io has been growing rapidly, with download and package counts increasing 2-3x year-on-year. This growth doesn't come without problems, and we have made some changes to download handling on crates.io to ensure we can keep providing crates for a long time to come.
The Problem
This growth has brought with it some challenges. The most significant of these is that all download requests currently go through the crates.io API, occasionally causing scaling issues. If the API is down or slow, it affects all download requests too. In fact, the number one cause of waking up our crates.io on-call team is "slow downloads" due to the API having performance issues.
Additionally, this setup is also problematic for users outside of North America, where download requests are slow due to the distance to the crates.io API servers.
The Solution
To address these issues, over the last year we have decided to make some changes:
Starting from 2024-03-12, cargo will begin to download crates directly from our static.crates.io CDN servers.
This change will be facilitated by modifying the config.json file on the package index. In other words: no changes to cargo or your own system are needed for the changes to take effect. The config.json file is used by cargo to determine the download URLs for crates, and we will update it to point directly to the CDN servers, instead of the crates.io API.
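For illustration, the dl field in the index's config.json is what tells cargo where to fetch .crate files from; after the change it points directly at the CDN rather than at the API. The snippet below is a simplified sketch of the file's shape and may not match it byte-for-byte:

```json
{
  "dl": "https://static.crates.io/crates",
  "api": "https://crates.io"
}
```

Previously, dl pointed at a crates.io API endpoint, which is why every download request used to pass through the API servers first.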
Over the past few months, we have made several changes to the crates.io backend to enable this:
We changed how downloads are counted. Previously, downloads were counted directly on the crates.io API servers. Now, we analyze the log files from the CDN servers to count the download requests.
This change in download counting has caused the download numbers of most crates to increase, as some download requests were not counted before. Specifically, crates.io mirrors were often downloading directly from the CDN servers already, and those downloads had previously not been counted. For crates with a lot of downloads these changes will be barely noticeable, but for smaller crates, the download numbers have increased quite a bit over the past few weeks since we enabled this change.
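Counting downloads from CDN logs is conceptually a tallying pass over access-log lines. Here is a minimal Rust sketch of that idea; the real pipeline and log format are not described in this post, so the line format used here ("crate-name version") is purely illustrative:

```rust
use std::collections::HashMap;

// Illustrative sketch: tally download counts per crate from access-log
// lines. Assumes a hypothetical "crate-name version" line format; the
// real CDN log format is different and richer.
fn count_downloads(log_lines: &[&str]) -> HashMap<String, u64> {
    let mut counts = HashMap::new();
    for line in log_lines {
        if let Some(crate_name) = line.split_whitespace().next() {
            *counts.entry(crate_name.to_string()).or_insert(0) += 1;
        }
    }
    counts
}

fn main() {
    let logs = ["serde 1.0.197", "rand 0.8.5", "serde 1.0.196"];
    let counts = count_downloads(&logs);
    println!("{:?}", counts);
}
```

The key property is that the tally now covers every request the CDN serves, including those from mirrors that never touched the API.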
Expected Outcomes
We expect these changes to significantly improve the reliability and speed of downloads, as the performance of the crates.io API servers will no longer affect the download requests. Over the next few weeks, we will monitor the performance of the system to ensure that the changes have the expected effects.
We have noticed that some non-cargo build systems are not using the config.json file of the index to build the download URLs. We will reach out to the maintainers of those build systems to ensure that they are aware of the change and to help them update their systems to use the new download URLs. The old download URLs will continue to work, but these systems will be missing out on the potential performance improvement.
We are excited about these changes and believe they will greatly improve the reliability of crates.io. We look forward to hearing your feedback!
(To read the complete Mozilla.ai learnings on LLM evaluation, please visit the Mozilla.ai blog)
Large language models (LLMs) have rapidly advanced, but determining their real-world performance remains a complex challenge in AI. Mozilla.ai participated in NeurIPS 2023, one of the most prominent machine learning conferences, by co-sponsoring a challenge that addressed model evaluation by focusing on efficient fine-tuning of LLMs and developing robust evaluation techniques.
The competition emphasized fine-tuning LLMs under precise hardware constraints. Fine-tuning involves updating specific parts of an existing LLM with curated datasets to specialize its behavior. The goal was to fine-tune models within 24 hours on a single GPU, making this process more accessible to those without access to high-performance computational clusters.
Mozilla.ai played a key role in evaluating the results of these fine-tuning experiments. We used tools like HELM, a framework developed at Stanford for running various tasks to assess LLM performance. However, evaluating LLMs is hard due to the stochastic nature of the responses of transformer models: a model can give different answers every time it is provided with a given prompt and there are many ways to measure these responses. This complexity makes it challenging to compare models objectively and decide which models are truly “best”.
The competition highlighted the rapidly evolving nature of LLMs. New models, fine-tuning techniques, and evaluation methods are being constantly introduced so reliable and standardized evaluation of LLMs will be crucial for understanding their capabilities and ensuring they are trustworthy.
Open source plays a big role in this area because evaluation is such a multifaceted problem. Being able to work in a collaborative manner and with open-source systems is crucial for moving forward toward a better framework that could eventually be used in the field by many people.
At Mozilla.ai we believe in the importance of establishing robust and transparent foundations for the entire evaluation landscape, which is why we are working on several tracks of work to support this. On the experimentation side, we are focused on research approaches that allow for clearly defined metrics, transparency, and repeatable evaluations. On the infrastructure side, we're developing reliable and replicable infrastructure to evaluate models and to store and introspect model results.
We brought together experts to tackle a critical question: What does openness mean for AI, and how can it best enable trustworthy and beneficial AI?
On February 29, Mozilla and the Columbia Institute of Global Politics brought together over 40 leading scholars and practitioners working on openness and AI. These individuals — spanning prominent open source AI startups and companies, non-profit AI labs, and civil society organizations — focused on exploring what “open” should mean in the AI era. Open source software helped make the internet safer and more robust in earlier eras of the internet — and offered trillions of dollars of value to startups and innovators as they created the digital services we all use today. Our shared hope is that open approaches can have a similar impact in the AI era.
To help unlock this significant potential, the Columbia Convening took an important step toward developing a framework for openness in AI and unifying the openness community around shared understandings and next steps. Participants noted that:
Openness in AI has the potential to advance key societal goals, including making AI safe and effective, unlocking innovation and competition in the AI market, and bringing underserved communities into the AI ecosystem.
Openness is a key characteristic to consider throughout the AI stack, and not just in AI models themselves. In components ranging from data to hardware to user interfaces, there are different types of openness that can be helpful for accomplishing different technical and societal goals. Participants reviewed research mapping dimensions of openness in AI, and noted the need to make it easier for developers of AI systems to understand where and how openness should be central to the technology they build.
Policy conversations need to be more thoughtful about the benefits and risks of openness in AI. For example, comparing the marginal risk that open systems pose in relation to closed systems is one promising approach to bringing rigor to this discussion. More work is needed across the board — from policy research on liability distribution, to more submissions to the National Telecommunications and Information Administration’s request for comment on “dual-use foundation models with widely available model weights.”
We need a stronger community and better organization to help build, invest, and advocate for better approaches to openness in AI. This convening showed that the openness community can have collaborative, productive discussions even when there are meaningful differences of opinion between its members. Mozilla committed to continuing to help build and foster community on this topic.
Getting “open” right for AI will be hard — but it’s never been more timely or important. Today, while everyone gushes about how generative AI can change the world, only a handful of products dominate the generative AI market. The lack of competition in AI products today is a real problem. It could mean that the new AI products we’ll begin to see in the next several years won’t be as innovative and safe as we need them to be – but instead, be built on the same closed, proprietary model that has defined roughly the last decade of online life. That’s why Mozilla’s recent report on Accelerating Progress Toward Trustworthy AI doubles down on openness, competition, and accountability as vital to the future of AI.
We know a better future is possible. During earlier eras of the Internet, open source technologies played a core role in promoting innovation and safety. Open source software made it easier to find and fix bugs in software. Attempts to limit open innovation — such as export controls on encryption in early web browsers — ended up being counterproductive, further exemplifying the value of openness. And, perhaps most importantly, open source technology has provided a core set of building blocks that software developers have used to do everything from create art to design vaccines to develop apps that are used by people all over the world; it is estimated that open source software is worth over $8 trillion in value.
For years, we saw similar benefits play out for AI. Industry researchers openly published foundational AI research and frameworks, making it easier for academics and startups to keep pace with AI advances and enabling an ecosystem of external experts who could challenge the big AI players. But, the benefits of this approach are not assured as we enter a new wave of innovation around AI. As training AI systems requires more compute and data, some key players are shifting their attention away from publishing research and toward consolidating competitive advantages and economies of scale to enable foundational models on demand. As AI risks are being portrayed as murkier and more hypothetical, it is becoming easier to argue that locking down AI models is the safest path forward. Today, it feels like the benefits and risks of AI depend on the whims of a few tech companies in Silicon Valley.
This can’t be the best approach to AI. If AI is truly so powerful and pervasive, shouldn’t AI be subject to real scrutiny from third-party assessments? If AI is truly so innovative and useful, shouldn’t there be more AI tools and systems that startups and small businesses can use?
We believe openness can and must play a key role in the future of AI — the question is how. Late last year, we and over 1,800 people signed our letter that noted that although the signatories represent different perspectives on open source AI, they all agree that open, responsible, and transparent approaches are critical to safety and security in the AI era. Indeed, across the AI ecosystem, some advocate for staged release of AI models, others believe other forms of openness in the AI stack are more important, and yet others believe every part of AI systems should be as open as possible. There are people who believe in openness for openness' sake, and others who view openness as a means to other societal goals — such as identifying civil rights and privacy harms, promoting innovation and competition in the market, and supporting consumers and workers who want a say about how AI is deployed in their communities. We were thrilled to bring together people with very divergent views and motivations for openness to collaborate on strengthening and leveraging openness in support of their missions.
We’re immensely grateful to the participants in the Columbia Convening on Openness and AI:
Anthony Annunziata — Head of AI Open Innovation and AI Alliance, IBM
Mitchell Baker — Chairwoman, Mozilla Foundation
Kevin Bankston — Senior Advisor on AI Governance, Center for Democracy and Technology
Adrien Basdevant — Tech Lawyer, Entropy Law
Ayah Bdeir — Senior Advisor, Mozilla
Philippe Beaudoin — Co-Founder and CEO, Waverly
Brian Behlendorf — Chief AI Strategist, The Linux Foundation
Stella Biderman — Executive Director, EleutherAI
John Borthwick — CEO, Betaworks
Zoë Brammer — Senior Associate for Cybersecurity & Emerging Technologies, Institute for Security and Technology
Glenn Brown — Principal, GOB Advisory
Kasia Chmielinski — Practitioner Fellow, Stanford Center on Philanthropy and Civil Society
Peter Cihon — Senior Policy Manager, GitHub
Julia Rhodes Davis — Chief Program Officer, Computer Says Maybe
Merouane Debbah — Senior Scientific AI Advisor, Technology Innovation Institute
Alix Dunn — Facilitator, Computer Says Maybe
Michelle Fang — Strategy, Cerebras Systems
Camille François — Faculty Affiliate, Institute for Global Politics at Columbia University’s School of Public and International Affairs
Stefan French — Product Manager, Mozilla.ai
Yacine Jernite — Machine Learning and Society Lead, Hugging Face
Amba Kak — Executive Director, AI Now Institute
Sayash Kapoor — Ph.D. Candidate, Princeton University
Helen King-Turvey — Managing Partner, Philanthropy Matters
Kevin Klyman — AI Policy Researcher, Stanford Institute for Human-Centered AI
Nathan Lambert — ML Scientist, Allen Institute for AI
Yann LeCun — Vice President and Chief AI Scientist, Meta
Stefano Maffulli — Executive Director, Open Source Initiative
Nik Marda — Technical Lead, AI Governance, Mozilla
Ryan Merkley — CEO, Conscience
Mohamed Nanabhay — Managing Partner, Mozilla Ventures
Deval Pandya — Vice President of AI Engineering, Vector Institute
Deb Raji — Fellow at Mozilla and PhD Student, UC Berkeley
Sarah Myers West — Managing Director, AI Now Institute
In the coming weeks, we intend to publish more content related to the convening. We will release resources to help practitioners and policymakers grapple with the opportunities and risks from openness in AI, such as determining how openness can help make AI systems safer and better. We will also continue to bring similar communities together, helping to keep pushing forward on this important work.
Xunlei Accelerator (迅雷客户端), a.k.a. Xunlei Thunder, by the China-based Xunlei Ltd. is a wildly popular application. According to the company's annual report, 51.1 million active users were counted in December 2022. The company's Google Chrome extension 迅雷下载支持, while not mandatory for using the application, had 28 million users at the time of writing.
I've found this application to expose a massive attack surface. This attack surface is largely accessible to arbitrary websites that an application user happens to be visiting. Some of it can also be accessed from other computers on the same network or by attackers with the ability to intercept the user's network connections (Man-in-the-Middle attack).
It does not appear like security concerns were considered in the design of this application. Extensive internal interfaces were exposed without adequate protection. Some existing security mechanisms were disabled. The application also contains large amounts of third-party code which didn’t appear to receive any security updates whatsoever.
I’ve reported a number of vulnerabilities to Xunlei, most of which allowed remote code execution. Still, given the size of the attack surface it felt like I barely scratched the surface.
Last time Xunlei made security news, it was due to distributing a malicious software component. Back then it was an inside job: some employees had turned rogue. However, the application's flaws allowed the same effect to be easily achieved from any website a user of the application happened to be visiting.
What is Xunlei Accelerator?
Wikipedia lists Xunlei Limited’s main product as a Bittorrent client, and maybe a decade ago it really was. Today however it’s rather difficult to describe what this application does. Is it a download manager? A web browser? A cloud storage service? A multimedia client? A gaming platform? It appears to be all of these things and more.
It’s probably easier to think of Xunlei as an advertising platform. It’s an application with the goal of maximizing profits through displaying advertising and selling subscriptions. As such, it needs to keep the users on the platform for as long as possible. That’s why it tries to implement every piece of functionality the user might need, while not being particularly good at any of it of course.
So there is a classic download manager that will hijack downloads initiated in the browser, with the promise of speeding them up. There is also a rudimentary web browser (two distinctly different web browsers in fact) so that you don’t need to go back to your regular web browser. You can play whatever you are downloading in the built-in media player, and you can upload it to the built-in storage. And did I mention games? Yes, there are games as well, just to keep you occupied.
Altogether this is a collection of numerous applications, built with a wide variety of different technologies, often implementing competing mechanisms for the same goal, yet trying hard to keep the outward appearance of a single application.
The built-in web browser
The trouble with custom Chromium-based browsers
Companies love bringing out their own web browsers. The reason is not that their browser is any better than the other 812 browsers already on the market. It’s rather that web browsers can monetize your searches (and, if you are less lucky, also your browsing history) which is a very profitable business.
Obviously, profits from that custom-made browser are higher if the company puts as little effort into maintenance as possible. So they take the open source Chromium, slap their branding on it, maybe also a few half-hearted features, and they call it a day.
Trouble is: a browser has a massive attack surface which is exposed to arbitrary web pages (and ad networks) by definition. Companies like Mozilla or Google invest enormous resources into quickly plugging vulnerabilities and bringing out updates every six weeks. And that custom Chromium-based browser also needs updates every six weeks, or it will expose users to known (and often widely exploited) vulnerabilities.
Even merely keeping up with Chromium development is tough, which is why it almost never happens. In fact, when I looked at the unnamed web browser built into the Xunlei application (internal name: TBC), it was based on Chromium 83.0.4103.106. Being released in May 2020, this particular browser version was already three and a half years old at that point. For reference: Google fixed eight actively exploited zero-day vulnerabilities in Chromium in the year 2023 alone.
Among others, the browser turned out to be vulnerable to CVE-2021-38003. There is this article which explains how this vulnerability allows JavaScript code on any website to gain read/write access to raw memory. I could reproduce this issue in the Xunlei browser.
Protections disabled
It is hard to tell whether not having a pop-up blocker in this browser was a deliberate choice or merely a consequence of the browser being so basic. Either way, websites are free to open as many tabs as they like. Adding the --autoplay-policy=no-user-gesture-required command-line flag definitely happened intentionally however, turning off video autoplay protections.
It's also notable that Xunlei revives Flash Player in their browser. Flash Player support was disabled in all browsers in December 2020, for various reasons including security. Xunlei didn't merely decide to ignore this reasoning: they shipped Flash Player 29.0.0.140 (released in April 2018) with their browser. Adobe's support website lists numerous Flash Player security fixes published after April 2018 and before the end of support.
Censorship included
Interestingly, Xunlei browser won’t let users visit the example.com website (as opposed to example.net). When you try, the browser redirects you to a page on static.xbase.cloud. This is an asynchronous process, so chances are good that you will catch a glimpse of the original page first.
Automated translation of the text: “This webpage contains illegal or illegal content and access has been stopped.”
As it turns out, the application will send every website you visit to an endpoint on api-shoulei-ssl.xunlei.com. That endpoint will either accept your choice of navigation target or instruct the browser to redirect you to a different address. So when you navigate to example.com, the following request is sent:
POST /xlppc.blacklist.api/v1/check HTTP/1.1
Content-Length: 29
Content-Type: application/json
Host: api-shoulei-ssl.xunlei.com
{"url":"http://example.com/"}
Interestingly, giving it the address http://example.com./ (note the trailing dot) will result in the response {"code":403,"msg":"params error","data":null}. With the endpoint being unable to handle this address, the browser will allow you to visit it.
Native API
In an interesting twist, the Xunlei browser exposed window.native.CallNativeFunction() method to all web pages. Calls would be forwarded to the main application where any plugin could register its native function handlers. When I checked, there were 179 such handlers registered, though that number might vary depending on the active plugins.
Among the functions exposed were ShellOpen (used Windows shell APIs to open a file), QuerySqlite (query database containing download tasks), SetProxy (configure a proxy server to be used for all downloads) or GetRecentHistorys (retrieve browsing history for the Xunlei browser).
My proof-of-concept exploit abused this interface; calling the ShellOpen function with the path of an executable is already enough to run attacker-chosen code.
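Roughly, such a call looks like this (the target path is an assumption, and window.native is stubbed here so the snippet runs standalone; in the Xunlei browser the application injects the real object):

```javascript
// Hedged reconstruction of the exploit call. In the Xunlei browser,
// window.native is provided by the application; we stub it so this
// snippet can run outside that environment.
const native = globalThis.native ?? {
  CallNativeFunction: (name, ...args) => `forwarded ${name}(${args.join(", ")})`,
};

// Ask the main application to open an arbitrary executable via the
// Windows shell, i.e. run attacker-chosen code:
const result = native.CallNativeFunction(
  "ShellOpen",
  "c:\\windows\\system32\\calc.exe"
);
console.log(result);
```

In the real application, the call is forwarded to whichever plugin registered the ShellOpen handler.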
Access to these native functions was supposedly restricted to an allowlist of trusted websites. However, the isUrlInDomains() check responsible didn't actually validate the host name against the list but merely looked for substring matches in the entire address. So https://malicious.com/?www.xunlei.com is also considered a trusted address, allowing for a trivial circumvention of this "protection."
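The flaw can be reproduced in a few lines. This is a hypothetical reconstruction: the allowlist contents are an assumption, while the substring matching mirrors the described bug.

```javascript
// Assumed allowlist; the real list at least included xunlei.com domains.
const trustedSites = ["xunlei.com"];

function isUrlInDomains(url) {
  // Bug: searches the entire address, not just the host name.
  return trustedSites.some((site) => url.includes(site));
}

console.log(isUrlInDomains("https://www.xunlei.com/"));               // true
console.log(isUrlInDomains("https://malicious.com/?www.xunlei.com")); // also true
console.log(isUrlInDomains("https://example.com/"));                  // false
```

A correct check would parse the URL and compare the host name against the allowlist, which is essentially what the later fix did.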
Getting into the Xunlei browser
Now most users hopefully won’t use Xunlei for their regular browsing. These should be safe, right?
Unfortunately not, as there is a number of ways for webpages to open the Xunlei browser. The simplest way is using a special thunderx:// address. For example, thunderx://eyJvcHQiOiJ3ZWI6b3BlbiIsInBhcmFtcyI6eyJ1cmwiOiJodHRwczovL2V4YW1wbGUuY29tLyJ9fQ== will open the Xunlei browser and load https://example.com/ into it. From the attacker’s point of view, this approach has a downside however: modern browsers ask the user for confirmation before letting external applications handle addresses.
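The thunderx:// payload is nothing but Base64-encoded JSON; decoding the address from the text above shows the instruction it carries:

```javascript
// Decode the thunderx:// payload quoted in the text.
const payload =
  "eyJvcHQiOiJ3ZWI6b3BlbiIsInBhcmFtcyI6eyJ1cmwiOiJodHRwczovL2V4YW1wbGUuY29tLyJ9fQ==";
const decoded = Buffer.from(payload, "base64").toString("utf8");
console.log(decoded);
// → {"opt":"web:open","params":{"url":"https://example.com/"}}
```

Any website can construct such an address for an arbitrary target URL.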
There are alternatives however. For example, the Xunlei browser extension (28 million users according to Chrome Web Store) is meant to pass on downloads to the Xunlei application. It could be instrumented into passing on thunderx:// links without any user interaction however, and these would immediately open arbitrary web pages in the Xunlei browser.
More ways to achieve this are exposed by the XLLite application’s API which is introduced later. And that’s likely not even the end of it.
The fixes
While Xunlei never communicated any resolution of these issues to me, as of Xunlei Accelerator 12.0.8.2392 (built on February 2, 2024 judging by executable signatures) several changes have been implemented. First of all, the application no longer packages Flash Player. It still activates Flash Player if it is installed on the user’s system, so some users will still be exposed. But chances are good that this Flash Player installation will at least be current (as much as software can be “current” three years after being discontinued).
The isUrlInDomains() function has been rewritten, and the current logic appears reasonable. It will now only check the allowlist against the end of the hostname, matches elsewhere in the address won’t be accepted. So this now leaves “only” all of the xunlei.com domain with access to the application’s internal APIs. Any cross-site scripting vulnerability anywhere on this domain will again put users at risk.
The outdated Chromium base appears to remain unchanged. It still reports as Chromium 83.0.4103.106, and the exploit for CVE-2021-38003 still succeeds.
The browser extension 迅雷下载支持 also received an update, version 3.48 on January 3, 2024. According to automated translation, the changelog entry for this version reads: “Fixed some known issues.” The fix appears to be adding a bunch of checks for the event.isTrusted property, making sure that the extension can no longer be instrumented quite as easily. Given these restrictions, just opening the thunderx:// address directly likely has higher chances of success now, especially when combined with social engineering.
The main application
Outdated Electron framework
The main Xunlei application is based on the Electron framework. This means that its user interface is written in HTML and displayed via the Chromium web browser (renderer process). And here again it’s somewhat of a concern that the Electron version used is 83.0.4103.122 (released in June 2020). It can be expected to share most of the security vulnerabilities with a similarly old Chromium browser.
Granted, an application like that should be less exposed than a web browser as it won’t just load any website. But it does work with remote websites, so vulnerabilities in the way it handles web content are an issue.
Cross-site scripting vulnerabilities
Being HTML-based, the Xunlei application is potentially vulnerable to cross-site scripting. For the most part, this is mitigated by using the React framework. React doesn't normally work with raw HTML code, so there is no potential for vulnerabilities here.
Well, normally. Unless the dangerouslySetInnerHTML property is being used, which you should normally avoid. But it appears that Xunlei developers used this property in a few places, among them the code that renders message contents received from elsewhere in the application.
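A hedged sketch of the vulnerable pattern (the function and variable names are assumptions, not Xunlei's actual code): dangerouslySetInnerHTML ultimately assigns element.innerHTML, so markup in the message is parsed as HTML instead of being escaped like regular React children.

```javascript
// What dangerouslySetInnerHTML boils down to: raw assignment to innerHTML.
function renderMessage(container, message) {
  container.innerHTML = message;
}

// Standalone demonstration with a stub element instead of a real DOM node:
const container = { innerHTML: "" };
renderMessage(container, '<img src=x onerror="alert(location.href)">');
console.log(container.innerHTML); // the payload reaches the DOM unescaped
```

In a real DOM, the onerror handler of that img element fires immediately, executing the embedded JavaScript.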
If message content ever happens to be some malicious data, it could create HTML elements that will result in execution of arbitrary JavaScript code.
How would malicious data end up here? The easiest way is via the browser. There is, for example, the MessageBoxConfirm native function, whose arguments end up in exactly this message-rendering code.
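A hedged reconstruction of such a call follows; the argument shape of MessageBoxConfirm is an assumption, and window.native is again stubbed so the snippet runs standalone:

```javascript
// Stub for the application-injected object, so this runs outside the browser.
const native = globalThis.native ?? {
  CallNativeFunction: (name, payload) => ({ name, payload }),
};

// Pass message contents containing an HTML payload. Rendered via
// dangerouslySetInnerHTML in the main application, the onerror handler
// then runs attacker-controlled JavaScript there.
const call = native.CallNativeFunction(
  "MessageBoxConfirm",
  JSON.stringify({
    title: "Hello",
    message: '<img src=x onerror="alert(location.href)">',
  })
);
console.log(call.name);
```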
When executed on a "trusted" website in the Xunlei browser, such a call would make the main application display a message and, as a side-effect, run the JavaScript code alert(location.href).
Impact of executing arbitrary code in the renderer process
Electron normally sandboxes renderer processes, making certain that these have only limited privileges and vulnerabilities are harder to exploit. This security mechanism is active in the Xunlei application.
However, Xunlei developers at some point must have considered it rather limiting. After all, their user interface needed to perform lots of operations. And providing a restricted interface for each such operation was too much effort.
So they built a generic interface into the application. By means of messages like AR_BROWSER_REQUIRE or AR_BROWSER_MEMBER_GET, the renderer process can instruct the main (privileged) process of the application to do just about anything.
My proof-of-concept exploit successfully abused this interface by loading Electron’s shell module (not accessible to sandboxed renderers by regular means) and calling one of its methods. In other words, the Xunlei application managed to render this security boundary completely useless.
The (lack of) fixes
Looking at Xunlei Accelerator 12.0.8.2392, I could not recognize any improvements in this area. The application is still based on Electron 83.0.4103.122. The number of potential XSS vulnerabilities in the message rendering code didn’t change either.
It appears that Xunlei called it a day after making certain that triggering messages with arbitrary content became more difficult. I doubt that it is impossible however.
The XLLite application
Overview of the application
The XLLite application is one of the plugins running within the Xunlei framework. Given that I never created a Xunlei account to see this application in action, my understanding of its intended functionality is limited. Its purpose however appears to be integrating the Xunlei cloud storage into the main application.
As it cannot modify the main application’s user interface directly, it exposes its own user interface as a local web server, on a randomly chosen port between 10500 and 10599. That server essentially provides static files embedded in the application, all functionality is implemented in client-side JavaScript.
Privileged operations are provided by a separate local server running on port 21603. Some of the API calls exposed here are handled by the application directly, others are forwarded to the main application via yet another local server.
I originally got confused about how the web interface accesses the API server, as the latter fails to implement CORS correctly: OPTIONS requests don't get a correct response, so only basic requests succeed. It appears that Xunlei developers didn't manage to resolve this issue and instead resorted to proxying the API server on the user interface server. So any endpoints available on the API server are exposed by the user interface server as well, here correctly (but seemingly unnecessarily) using CORS to allow access from everywhere.
So the communication works like this: the Xunlei application loads http://127.0.0.1:105xx/ in a frame. The page then requests some API on its own port, e.g. http://127.0.0.1:105xx/device/now. When handling the request, the XLLite application requests http://127.0.0.1:21603/device/now internally. And the API server handler within the same process responds with the current timestamp.
This approach appears to make little sense. However, it’s my understanding that Xunlei also produces storage appliances which can be installed on the local network. Presumably, these appliances run identical code to expose an API server. This would also explain why the API server is exposed to the network rather than being a localhost-only server.
The “pan authentication”
With quite a few API calls having the potential to do serious damage or at the very least expose private information, these need to be protected somehow. As mentioned above, Xunlei developers chose not to use CORS to restrict access but rather decided to expose the API to all websites. Instead, they implemented their own “pan authentication” mechanism.
Their approach of generating authentication tokens was taking the current timestamp, concatenating it with a long static string (hardcoded in the application) and hashing the result with MD5. Such tokens would expire after 5 minutes, apparently an attempt to thwart replay attacks.
They even went as far as to perform time synchronization, making sure to correct for deviation between the current time as perceived by the web page (running on the user’s computer) and by the API server (running on the user’s computer). Again, this is something that probably makes sense if the API server can under some circumstances be running elsewhere on the network.
Needless to say that this “authentication” mechanism doesn’t provide any value beyond very basic obfuscation.
Achieving code execution via plugin installation
There are quite a few interesting API calls exposed here. For example, the device/v1/xllite/sign endpoint would sign data with one out of three private RSA keys hardcoded in the application. I don't know what this functionality is used for, but I sincerely hope that it's as far away from security and privacy topics as possible.
There is also the device/v1/call endpoint which is yet another way to open a page in the Xunlei browser. Both OnThunderxOpt and OpenNewTab calls allow that, the former taking a thunderx:// address to be processed and the latter a raw page address to be opened in the browser.
It’s fairly obvious that the API exposes full access to the user’s cloud storage. I chose to focus my attention on the drive/v1/app/install endpoint however, which looked like it could do even more damage. This endpoint in fact turned out to be a way to install binary plugins.
I couldn’t find any security mechanisms preventing malicious software from being installed this way, apart from the already mentioned useless “pan authentication.” However, I couldn’t find any actual plugins to use as an example either. In the end I figured out that a plugin had to be packaged in an archive containing a manifest.yaml file like the following:
ID: Exploit
Title: My exploit
Description: This is an exploit
Version: 1.0.0
System:
  - OS: windows
    ARCH: 386
Service:
  ExecStart: Exploit.exe
  ExecStop: Exploit.exe
The plugin would install successfully under Thunder\Profiles\XLLite\plugin\Exploit\1.0.1\Exploit but the binary wouldn’t execute for some reason. Maybe there is a security mechanism that I missed, or maybe the plugin interface simply isn’t working yet.
Either way, I started thinking: what if instead of making XLLite run my “plugin” I would replace an existing binary? It’s easy enough to produce an archive with file paths like ..\..\..\oops.exe. However, the Go package archiver used here has protection against such path traversal attacks.
The XLLite code deciding which folder to put the plugin into didn’t have any such protections on the other hand. The folder is determined by the ID and Version values of the plugin’s manifest. Messing with the former is inconvenient, it being present twice in the path. But setting the “version” to something like ..\..\.. achieved the desired results.
Two complications:
The application to be replaced cannot be running or the Windows file locking mechanism will prevent it from being replaced.
The plugin installation will only replace entire folders.
In the end, I chose to replace Xunlei’s media player for my proof of concept. This one usually won’t be running and it’s contained in a folder of its own. It’s also fairly easy to make Xunlei run the media player by using a thunderx:// link. Behold, installation and execution of a malicious application without any user interaction.
Remember that the API server is exposed to the local network, meaning that any devices on the network can also perform API calls. So this attack could not merely be executed from any website the user happened to be visiting, it could also be launched by someone on the same network, e.g. when the user is connected to a public WiFi.
The fixes
As of version 3.19.4 of the XLLite plugin (built January 25, 2024 according to its digital signature), the “pan authentication” method changed to use JSON Web Tokens. The authentication token is embedded within the main page of the user interface server. Without any CORS headers being produced for this page, the token cannot be extracted by other web pages.
It wasn’t immediately obvious what secret is being used to generate the token. However, authentication tokens aren’t invalidated if the Xunlei application is restarted. This indicates that the secret isn’t being randomly generated on application startup. The remaining possibilities are: a randomly generated secret stored somewhere on the system (okay) or an obfuscated hardcoded secret in the application (very bad).
While calls to other endpoints succeed after adjusting authentication, calls to the drive/v1/app/install endpoint result in a “permission denied” response now. I did not investigate whether the endpoint has been disabled or some additional security mechanism has been added.
Plugin management
The oddities
XLLite’s plugin system is actually only one out of at least five completely different plugin management systems in the Xunlei application. One other is the main application’s plugin system, the XLLite application is installed as one such plugin. There are more, and XLLiveUpdateAgent.dll is tasked with keeping them updated. It will download the list of plugins from an address like http://upgrade.xl9.xunlei.com/plugin?os=10.0.22000&pid=21&v=12.0.3.2240&lng=0804 and make sure that the appropriate plugins are installed.
Note the lack of TLS encryption here which is quite typical. Part of the issue appears to be that Xunlei decided to implement their own HTTP client for their downloads. In fact, they’ve implemented a number of different HTTP clients instead of using any of the options available via the Windows API for example. Some of these HTTP clients are so limited that they cannot even parse uncommon server responses, much less support TLS. Others support TLS but use their own list of CA certificates which happens to be Mozilla’s list from 2016 (yes, that’s almost eight years old).
Another common issue is that almost all of these update mechanisms run as part of the regular application process, meaning that they only have the user’s privileges. How do they manage to write to the application directory then? Well, Xunlei solved this issue: they made the application directory writable with user privileges! Another security mechanism successfully dismantled. And there is a bonus: they can store application data in the same directory rather than resorting to per-user nonsense like AppData.
Altogether, you had better not run Xunlei Accelerator on untrusted networks (meaning: any of them?). Anyone on your network, or anyone who manages to insert themselves into the path between you and the Xunlei update server, will be able to manipulate the server response. As a result, the application will install a malicious plugin without you noticing anything.
You had also better not run Xunlei Accelerator on a computer that you share with other people. Anyone on a shared computer will be able to add malicious components to the Xunlei application, so the next time you run it, your user account will be compromised.
Example scenario: XLServicePlatform
I decided to focus on XLServicePlatform because, unlike all the other plugin management systems, this one runs with system privileges. That’s because it’s a system service and any installed plugins will be loaded as dynamic libraries into this service process. Clearly, injecting a malicious plugin here would result in full system compromise.
The management service downloads the plugin configuration from http://plugin.pc.xunlei.com/config/XLServicePlatform_12.0.3.xml. Yes, there is no TLS encryption here, because the “HTTP client” in question isn’t capable of TLS. So anyone on the same WiFi network as you, for example, could redirect this request and serve a malicious response.
In fact, that HTTP client was rather badly written, and I found multiple Out-of-Bounds Read vulnerabilities despite not actively looking for them. It was fairly easy to crash the service with an unexpected response.
But it wasn’t just that. The XML response was parsed using libexpat 2.1.0. With that version being released more than ten years ago, there are numerous known vulnerabilities, including a number of critical remote code execution vulnerabilities.
I generally leave binary exploitation to other people however. Continuing with the high-level issues, a malicious plugin configuration will result in a DLL or EXE file being downloaded, yet it won’t run. There is a working security mechanism here: these files need a valid code signature issued to Shenzhen Thunder Networking Technologies Ltd.
But it still downloads. And there is our old friend: a path traversal vulnerability. Choosing the file name ..\XLBugReport.exe for that plugin will overwrite the legitimate bug reporter used by the Xunlei service. And crashing the service with a malicious server response will then run this trojanized bug reporter, with system privileges.
My proof of concept exploit merely created a file in the C:\Windows directory, just to demonstrate that it runs with sufficient privileges to do it. But we are talking about complete system compromise here.
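The client-side mistake enabling this is trusting a server-supplied file name. A hedged sketch of the missing validation, written in Python for brevity (the function and directory names are invented for illustration, not taken from Xunlei’s code):

```python
import posixpath

def is_safe_plugin_name(name: str, plugin_dir: str = "plugins") -> bool:
    # Normalize Windows-style separators, then check that the resulting
    # path cannot escape the plugin directory via ".." components.
    normalized = name.replace("\\", "/")
    candidate = posixpath.normpath(posixpath.join(plugin_dir, normalized))
    return candidate.startswith(plugin_dir + "/")

print(is_safe_plugin_name("XLPlugin.dll"))         # a plain file name is fine
print(is_safe_plugin_name(r"..\XLBugReport.exe"))  # traversal escapes the directory
```

Any downloader that writes server-controlled names to disk needs a check of this kind; without it, a signature requirement on loaded plugins does nothing to protect files elsewhere on the system.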
The (lack of?) fixes
At the time of writing, XLServicePlatform still uses its own HTTP client to download plugins which still doesn’t implement TLS support. Server responses are still parsed using libexpat 2.1.0. Presumably, the Out-of-Bounds Read and Path Traversal vulnerabilities have been resolved but verifying that would take more time than I am willing to invest.
The application will still render its directory writable for all users. It will also produce a number of unencrypted HTTP requests, including some that are related to downloading application components.
Outdated components
I’ve already mentioned the browser being based on an outdated Chromium version, the main application being built on top of an outdated Electron platform and a ten years old XML library being widely used throughout the application. This isn’t by any means the end of it however. The application packages lots of third-party components, and the general approach appears to be that none of them are ever updated.
Take for example the media player XMP a.k.a. Thunder Video which is installed as part of the application and can be started via a thunderx:// address from any website. This is also an Electron-based application, but it’s based on an even older Electron 59.0.3071.115 (released in June 2017). The playback functionality seems to be based on the APlayer SDK which Xunlei provides for free for other applications to use.
Now you might know that media codecs are extremely complicated pieces of software that are known for disastrous security issues. That’s why web browsers are very careful about which media codecs they include. Yet APlayer SDK features media codecs that have been discontinued more than a decade ago as well as some so ancient that I cannot even figure out who developed them originally. There is FFmpeg 2021-06-30 (likely a snapshot around version 4.4.4), which has dozens of known vulnerabilities. There is libpng 1.0.56, which was released in July 2011 and is affected by seven known vulnerabilities. Last but not least, there is zlib 1.2.8-4 which was released in 2015 and is affected by at least two critical vulnerabilities. These are only some examples.
So there is a very real threat that Xunlei users might get compromised via a malicious media file, either because they were tricked into opening it with Xunlei’s video player, or because a website used one of several possible ways to open it automatically.
As of Xunlei Accelerator 12.0.8.2392, I could not see any updates to these components.
Reporting the issues
Reporting security vulnerabilities is usually quite an adventure, and the language barrier doesn’t make it any easier. So I was pleasantly surprised to discover the XunLei Security Response Center, which was even discoverable via an English-language search thanks to the site heading being translated.
Unfortunately, there was a roadblock: submitting a vulnerability is only possible after logging in via WeChat or QQ. While these social networks are immensely popular in China, creating an account from outside China proved close to impossible. I’ve spent way too much time on verifying that.
That’s when I took a closer look and discovered an email address listed on the page as a fallback for people who are unable to log in. So I sent five vulnerability reports altogether on 2023-12-06 and 2023-12-07. The number of reported vulnerabilities was actually higher because the reports typically combined multiple vulnerabilities. The reports mentioned 2024-03-06 as the publication deadline.
I received a response a day later, on 2023-12-08:
Thank you very much for your vulnerability submission. XunLei Security Response Center has received your report. Once we have successfully reproduced the vulnerability, we will be in contact with you.
Just like most companies, they did not actually contact me again. I saw my proof-of-concept pages being accessed, so I assumed that the issues were being worked on and did not inquire further. Still, on 2024-02-10 I sent a reminder that the publication deadline was only a month away. I do this because, in my experience, companies will often “forget” about the deadline otherwise (more likely: they assume that I’m not serious about it).
I received another laconic reply a week later which read:
XunLei Security Response Center has verified the vulnerabilities, but the vulnerabilities have not been fully repaired.
That was the end of the communication. I don’t really know what Xunlei considers fixed and what they still plan to do. Whatever I could tell about the fixes here has been pieced together from looking at the current software release and might not be entirely correct.
It does not appear that Xunlei released any further updates in the month after this communication. Given the nature of the application with its various plugin systems, I cannot be entirely certain however.
Hello and welcome to another issue of This Week in Rust!
Rust is a programming language empowering everyone to build reliable and efficient software.
This is a weekly summary of its progress and community.
Want something mentioned? Tag us at @ThisWeekInRust on Twitter or @ThisWeekinRust on mastodon.social, or send us a pull request.
Want to get involved? We love contributions.
An important step for RFC implementation is for people to experiment with the
implementation and give feedback, especially before stabilization. The following
RFCs would benefit from user testing before moving forward:
No RFCs issued a call for testing this week.
If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing
label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature
need testing.
Always wanted to contribute to open-source projects but did not know where to start?
Every week we highlight some tasks from the Rust community for you to pick and get started!
Some of these tasks may also have mentors available, visit the task page for more information.
Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.
A bunch of noise this week, which has been dropped from the report (but may be present in the summary figures). As a result, the week looks pretty busy in terms of the number of changes, but the net effect is nearly neutral to a slight regression for most workloads.
If you are running a Rust event please add it to the calendar to get
it mentioned here. Please remember to add a link to the event too.
Email the Rust Community Team for access.
My experience with C++ is that, as I’ve become more of an expert in the language, I’ve become more disillusioned with it. It’s incredibly hard to do things that you should be able to do in software. And, it’s a huge problem for me to constantly be helping other engineers debug the same bugs over and over. It’s always another use after free. I’ve probably debugged 300 of those. [...]
In our experience using the Rust ecosystem for almost three years now, I don't think we found a bug in a single Rust crate that we've pulled off the shelf. We found a bug in one of them and that was a Rust crate wrapping a C library and the bug was in the C library. The software quality that you kind of get for free is amazing.
The first edition of Mozilla Mornings in 2024 will explore the impact of harmful design on consumers in the digital world and the role regulation can play in addressing such practices.
In the evolving digital landscape, deceptive and manipulative design practices, as well as aggressive personalisation and profiling pose significant threats to consumer welfare, potentially leading to financial loss, privacy breaches, and compromised security.
While existing EU regulations address some aspects of these issues, questions persist about their adequacy in combating harmful design patterns comprehensively. What additional measures are needed to ensure digital fairness for consumers and empower designers who want to act ethically?
To discuss these issues, we are delighted to announce that the following speakers will be participating in our panel discussion:
Egelyn Braun, Team Leader DG JUST, European Commission
Estelle Hary, Co-founder, Design Friction
Silvia de Conca, Amsterdam Law & Technology Institute, Vrije Universiteit Amsterdam
Finn Myrstad, Digital Policy Director, Norwegian Consumer Council
The event will also feature a fireside chat with MEP Kim van Sparrentak from Greens/EFA.
This blog post explores an alternative formulation of Rust’s type system that eschews lifetimes in favor of places. The TL;DR is that instead of having 'a represent a lifetime in the code, it can represent a set of loans, like shared(a.b.c) or mut(x). If this sounds familiar, it should: it’s the basis for polonius, but reformulated as a type system instead of a static analysis. This blog post is just going to give the high-level ideas. In follow-up posts I’ll dig into how we can use this to support interior references and other advanced borrowing patterns. In terms of implementation, I’ve mocked this up a bit, but I intend to start extending a-mir-formality to include this analysis.
Why would you want to replace lifetimes?
Lifetimes are the best and worst part of Rust. The best in that they let you express very cool patterns, like returning a pointer to data in the middle of your data structure. But they have some serious issues. For one, the idea of what a lifetime is is rather abstract and hard for people to grasp (“what does 'a actually represent?”). But also, Rust is not able to express some important patterns, most notably interior references, where one field of a struct refers to data owned by another field.
So what is a lifetime exactly?
Here is the definition of a lifetime from the RFC on non-lexical lifetimes:
Whenever you create a borrow, the compiler assigns the resulting reference a lifetime. This lifetime corresponds to the span of the code where the reference may be used. The compiler will infer this lifetime to be the smallest lifetime that it can have that still encompasses all the uses of the reference.
The place-based formulation changes this. Under it, 'a no longer represents a lifetime but rather an origin, i.e., it explains where the reference may have come from. We define an origin as a set of loans. Each loan captures some place expression (e.g. a or a.b.c) that has been borrowed, along with the mode in which it was borrowed (shared or mut).
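As a concrete (if informal) model: a loan is a (mode, place) pair and an origin is a set of loans. A tiny Python sketch of that data model, using my own encoding rather than anything from the post’s formal development:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Loan:
    mode: str   # "shared" or "mut"
    place: str  # a place expression like "a.b.c"

# An origin is just a set of loans; `'a` in `&'a u32` stands for one of these.
origin = frozenset({Loan("shared", "a.b.c"), Loan("mut", "x")})
print(origin)
```

Nothing about this requires thinking about spans of code: an origin is pure data about where a reference may have come from.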
Using origins, we can define Rust types roughly like this (obviously I’m ignoring a bunch of complexity here…):
Type = TypeName < Generic* >
| & Origin Type
| & Origin mut Type
TypeName = u32 (for now I'll ignore the rest of the scalars)
| () (unit type, don't worry about tuples)
| StructName
| EnumName
| UnionName
Generic = Type | Origin
Here is the first interesting thing to note: there is no 'a notation here! This is because I’ve not introduced generics yet. Unlike Rust proper, this formulation of the type system has a concrete syntax (Origin) for what 'a represents.
Explicit types for a simple program
Having a fully explicit type system also means we can easily write out example programs where all types are fully specified. This used to be rather challenging because we had no notation for lifetimes. Let’s look at a simple example, a program that ought to get an error:
let mut counter: u32 = 22_u32;
let p: &/*{shared(counter)}*/ u32 = &counter;
//      ---------------------
//      no syntax for this today!
counter += 1; // Error: cannot mutate `counter` while `p` is live
println!("{p}");
Apart from the type of p, this is valid Rust. Of course, it won’t compile, because we can’t modify counter while there is a live shared reference p (playground). As we continue, you will see how the new type system formulation arrives at the same conclusion.
Basic typing judgments
Typing judgments are the standard way to describe a type system. We’re going to phase in the typing judgments for our system iteratively. We’ll start with a simple, fairly standard formulation that doesn’t include borrow checking, and then show how we introduce borrow checking. For this first version, the typing judgment we are defining has the form
Env |- Expr : Type
This says, “in the environment Env, the expression Expr is legal and has the type Type”. The environment Env here defines the local variables in scope. The Rust expressions we are looking at for our sample program are pretty simple:
Expr = integer literal (e.g., 22_u32)
| & Place
| Expr + Expr
| Place (read the value of a place)
| Place = Expr (overwrite the value of a place)
| ...
Since we only support one scalar type (u32), the typing judgment for Expr + Expr is as simple as:
Env |- Expr1 : u32
Env |- Expr2 : u32
----------------------------------------- addition
Env |- Expr1 + Expr2 : u32
More interesting is the rule for borrow expressions & Place:
Env |- Place : Type
----------------------------------------- shared references
Env |- & Place : & {shared(Place)} Type
The rule just says that we figure out the type of the place Place being borrowed (here, the place is counter and its type will be u32) and then we have a resulting reference to that type. The origin of that reference will be {shared(Place)}, indicating that the reference came from Place:
&{shared(Place)} Type
Computing liveness
To introduce borrow checking, we need to phase in the idea of liveness.[1] If you’re not familiar with the concept, the NLL RFC has a nice introduction:
The term “liveness” derives from compiler analysis, but it’s fairly intuitive. We say that a variable is live if the current value that it holds may be used later.
Unlike with NLL, where we just computed live variables, we’re going to compute live places:
LivePlaces = { Place }
To compute the set of live places, we’ll introduce a helper function LiveBefore(Env, LivePlaces, Expr): LivePlaces. LiveBefore() returns the set of places that are live before Expr is evaluated, given the environment Env and the set of places that are live after the expression. I won’t define this function in detail, but it looks roughly like this:
// `&Place` reads `Place`, so add it to `LivePlaces`
LiveBefore(Env, LivePlaces, &Place) =
LivePlaces ∪ {Place}
// `Place = Expr` overwrites `Place`, so remove it from `LivePlaces`
LiveBefore(Env, LivePlaces, Place = Expr) =
LiveBefore(Env, (LivePlaces - {Place}), Expr)
// `Expr1` is evaluated first, then `Expr2`, so the set of places
// live after expr1 is the set that are live *before* expr2
LiveBefore(Env, LivePlaces, Expr1 + Expr2) =
LiveBefore(Env, LiveBefore(Env, LivePlaces, Expr2), Expr1)
... etc ...
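The clauses above translate almost directly into code. Here is a toy Python rendition over a tuple-encoded expression AST (my own encoding; I drop the Env parameter since this simplified version doesn’t need it):

```python
def live_before(live_after: set, expr: tuple) -> set:
    """Places live before `expr`, given the places live after it."""
    kind = expr[0]
    if kind == "borrow":        # &Place reads Place, so it becomes live
        return live_after | {expr[1]}
    if kind == "read":          # reading a place keeps it live
        return live_after | {expr[1]}
    if kind == "assign":        # Place = Expr overwrites Place
        _, place, rhs = expr
        return live_before(live_after - {place}, rhs)
    if kind == "add":           # Expr1 + Expr2, evaluated left to right,
        _, e1, e2 = expr        # so liveness threads backwards through e2 first
        return live_before(live_before(live_after, e2), e1)
    if kind == "lit":           # an integer literal touches no places
        return live_after
    raise ValueError(f"unknown expression kind: {kind}")

# counter += 1 desugars to counter = counter + 1:
expr = ("assign", "counter", ("add", ("read", "counter"), ("lit", 1)))
print(live_before({"p"}, expr))  # contains both 'p' and 'counter'
```

Note how the assignment first removes counter from the live set and only then threads liveness through the right-hand side, which reads counter and makes it live again.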
Integrating liveness into our typing judgments
To detect borrow check errors, we need to adjust our typing judgment to include liveness. The result will be as follows:
(Env, LivePlaces) |- Expr : Type
This judgment says, “in the environment Env, and given that the function will access LivePlaces in the future, Expr is valid and has type Type”. Integrating liveness in this way gives us some idea of what accesses will happen in the future.
For compound expressions, like Expr1 + Expr2, we have to adjust the set of live places to reflect control flow:
(Env, LiveAfter1) |- Expr1 : u32
(Env, LiveAfter2) |- Expr2 : u32
LiveAfter1 = LiveBefore(Env, LiveAfter2, Expr2)
----------------------------------------- addition with liveness
(Env, LiveAfter2) |- Expr1 + Expr2 : u32
We start out with LiveAfter2, i.e., the places that are live after the entire expression. These are also the places live after Expr2 is evaluated, since the + operation itself doesn’t reference or overwrite any places. We then compute LiveAfter1, i.e., the places live after Expr1 is evaluated, by looking at the places that are live before Expr2. This is a bit mind-bending and took me some time to see. The tricky bit is that liveness is computed backwards, but most of our typing rules (and our intuition) tend to flow forwards. If it helps, think of the “fully desugared” version of +:
let tmp0 = <Expr1>
// <-- the set LiveAfter1 is live here (ignoring tmp0, tmp1)
let tmp1 = <Expr2>
// <-- the set LiveAfter2 is live here (ignoring tmp0, tmp1)
tmp0 + tmp1
// <-- the set LiveAfter2 is live here
Borrow checking with liveness
Now that we know liveness information, we can use it to do borrow checking. We’ll introduce a “permits” judgment:
(Env, LiveAfter) permits Loan
that indicates that “taking the loan Loan would be allowed given the environment and the live places”. Here is the rule for assignments, modified to include liveness and the new “permits” judgment:
(Env, LiveAfter) |- Expr : Type
(Env, LiveAfter) |- Place : Type
(Env, LiveAfter) permits mut(Place)
----------------------------------------- assignment
(Env, LiveAfter) |- Place = Expr : ()
Before I dive into how we define “permits”, let’s go back to our example and get an intuition for what is going on here. We want to declare an error on this assignment:
let mut counter: u32 = 22_u32;
let p: &{shared(counter)} u32 = &counter;
counter += 1;    // <-- Error
println!("{p}"); // <-- p is live
Note that, because of the println! on the next line, p will be in our LiveAfter set. Looking at the type of p, we see that it includes the loan shared(counter). The idea then is that mutating counter is illegal because there is a live loan shared(counter), which implies that counter must be immutable.
Restating that intuition:
A set Live of live places permits a loan Loan1 if, for every live place Place in Live, the loans in the type of Place are compatible with Loan1.
Written more formally:
∀ Place ∈ Live {
(Env, Live) |- Place : Type
∀ Loan2 ∈ Loans(Type) { Compatible(Loan1, Loan2) }
}
-----------------------------------------
(Env, Live) permits Loan1
This definition makes use of two helper functions:
Loans(Type) – the set of loans that appear in the type
Compatible(Loan1, Loan2) – defines if two loans are compatible. Two shared loans are always compatible. A mutable loan is only compatible with another loan if the places are disjoint.
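Both helper functions, and the permits rule built on them, are small enough to sketch in Python. This is my own encoding (loans as (mode, place) pairs, places as dotted strings where “overlapping” means one path is a prefix of the other):

```python
def overlapping(p1: str, p2: str) -> bool:
    # a.b and a.b.c overlap; a.b and a.c are disjoint
    return p1 == p2 or p1.startswith(p2 + ".") or p2.startswith(p1 + ".")

def compatible(loan1: tuple, loan2: tuple) -> bool:
    mode1, place1 = loan1
    mode2, place2 = loan2
    if mode1 == "shared" and mode2 == "shared":
        return True  # two shared loans are always compatible
    # a mut loan is only compatible with loans of disjoint places
    return not overlapping(place1, place2)

def permits(live_types: dict, loan: tuple) -> bool:
    """live_types maps each live place to the set of loans in its type."""
    return all(compatible(loan, other)
               for loans in live_types.values()
               for other in loans)

# The counter example: p is live and its type carries shared(counter),
# so a write to counter (a mut loan) is rejected, while another read is fine.
live = {"p": {("shared", "counter")}}
print(permits(live, ("mut", "counter")))
print(permits(live, ("shared", "counter")))
```

This reproduces the error from the example program: the live loan shared(counter) in p’s type is incompatible with the mut(counter) loan the assignment needs.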
Conclusion
The goal of this post was to give a high-level intuition. I wrote it from memory, so I’ve probably overlooked a thing or two. In follow-up posts, though, I want to go deeper into how the system I’ve been playing with works and what new things it can support. Some high-level examples:
How to define subtyping, and in particular the role of liveness in subtyping
Important borrow patterns that we use today and how they work in the new system
Interior references that point at data owned by other struct fields, and how they can be supported
[1] If this is not obvious to you, don’t worry, it wasn’t obvious to me either. It turns out that using liveness in the rules is the key to making them simple. I’ll try to write a follow-up later about the alternatives I explored and why they don’t work. ↩︎
We’re happy to announce the release of K-9 Mail 6.800. The main goal of this version is to make it easier for you to add your email accounts to the app.
Setting up an email account in K-9 Mail is something many new users have struggled with in the past. That’s mainly because automatic setup was only supported for a handful of large email providers. If you had an email account with another email provider, you had to manually enter the incoming and outgoing server settings. But finding the correct server settings can be challenging.
So we set out to improve the setup experience. Since this part of the app was quite old and had a couple of other problems, we used this opportunity to rewrite the whole account setup component. This turned out to be more work than originally anticipated. But we’re quite happy with the result.
Let’s have a brief look at the steps involved in setting up a new account.
1. Enter email address
To get the process started, all you have to do is enter the email address of the account you want to set up in K-9 Mail.
2. Provide login credentials
After tapping the Next button, the app will use Thunderbird’s Autoconfig mechanism to try to find the appropriate incoming and outgoing server settings. Then you’ll be asked to provide a password or use the web login flow, depending on the email provider.
The app will then try to log in to the incoming and outgoing server using the provided credentials.
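For the curious, the Autoconfig mechanism works by probing a handful of well-known URLs derived from the domain of the email address, falling back to Mozilla’s ISPDB. A rough Python sketch of the candidate URLs based on the published Autoconfig documentation; the exact list and parameters K-9 Mail uses may differ:

```python
def autoconfig_candidates(email: str) -> list:
    """Candidate config URLs an Autoconfig-style lookup might try, in order.
    Sketched from the published Autoconfig mechanism; treat the exact URL
    patterns as an approximation, not K-9 Mail's actual implementation."""
    domain = email.rsplit("@", 1)[1].lower()
    return [
        # Hosted by the email provider itself:
        f"https://autoconfig.{domain}/mail/config-v1.1.xml?emailaddress={email}",
        f"https://{domain}/.well-known/autoconfig/mail/config-v1.1.xml",
        # Mozilla's central ISPDB as a fallback:
        f"https://autoconfig.thunderbird.net/v1.1/{domain}",
    ]

for url in autoconfig_candidates("user@example.org"):
    print(url)
```

Because providers can host their own config file, this approach scales far beyond the handful of hardcoded providers the old setup flow supported.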
3. Provide some basic information about the account
If your login credentials check out, you’ll be asked to provide your name for outgoing messages. For all the other inputs you can go with the defaults. All settings can be changed later, once an account has been set up.
If everything goes well, that’s all it takes to set up an account.
Of course, there are still cases where the app won’t be able to automatically find a configuration, and the user will be asked to manually provide the incoming and outgoing server settings. But we’ll be working with email providers to hopefully reduce the number of times this happens.
What else is new?
While the account setup rewrite was our main development focus, we’ve also made a couple of smaller changes and bug fixes. You can find a list of the most notable ones below.
Improvements and behavior changes
Made it harder to accidentally trigger swipe actions in the message list screen
IMAP: Added support for sending the ID command (that is required by some email providers)
Improved screen reader experience in various places
Improved display of some HTML messages
Changed background color in message view and compose screens when using dark theme
Adding to contacts should once again allow you to add the email address to an existing contact
Added image handling within the context menu for hyperlinks
A URI pasted when composing a message will now be surrounded by angle brackets
Don’t use nickname as display name when auto-completing recipient using the nickname
Changed compose icon in the message list widget to match the icon inside the app
Don’t attempt to open file: URIs in an email; tapping such a link will now copy the URL to the clipboard instead
Added option to return to the message list after marking a message as unread in the message view
Combined settings “Return to list after delete” and “Show next message after delete” into “After deleting or moving a message”
Moved “Show only subscribed folders” setting to “Folders” section
Added copy action to recipient dropdown in compose screen (to work around missing drag & drop functionality)
Simplified the app icon so it can be a vector drawable
Added support for the IMAP MOVE extension
Bug fixes
Fixed bug where account name wasn’t displayed in the message view when it should
Fixed bugs with importing and exporting identities
The app will no longer ask to save a draft when no changes have been made to an existing draft message
Fixed bug where “Cannot connect to crypto provider” was displayed when the problem wasn’t the crypto provider
Fixed a crash caused by an interaction with OpenKeychain 6.0.0
Fixed inconsistent behavior when replying to messages
Fixed display issue with recipients in message view screen
Fixed display issues when rendering a message/rfc822 inline part
Fixed display issue when removing an account
Fixed notification sounds on WearOS devices
Fixed the app so it runs on devices that don’t support home screen widgets
Known issues
A fresh app install on Android 14 will be missing the “alarms & reminders” permission required for Push to work. Please allow setting alarms and reminders in Android’s app settings under Alarms & reminders.
Some software keyboards automatically capitalize words when entering the email address in the first account setup screen.
When a password containing line breaks is pasted during account setup, these line breaks are neither ignored nor flagged as an error. This will most likely lead to an authentication error when checking server settings.
Where To Get K-9 Mail Version 6.800
Version 6.800 has started gradually rolling out. As always, you can get it on the following platforms:
(Note that the release will gradually roll out on the Google Play Store, and should appear shortly on F-Droid, so please be patient if it doesn’t automatically update.)
Like a lot of us during the pandemic lockdown, Shubham Bose found himself consuming more YouTube content than ever before. That’s when he started to notice all the unwanted oddities appearing in his YouTube search results — irrelevant suggested videos, shorts, playlists, etc. Shubham wanted a cleaner, more focused search experience, so he decided to do something about it. He built YouTube Search Fixer. The extension streamlines YouTube search results in a slew of customizable ways, like removing “For you,” “People also search for,” “Related to your search,” and so on. You can also remove entire types of content like shorts, live streams, auto-generated mixes, and more.
The extension makes it easy to customize YouTube to suit you.
Early versions of the extension were less customizable and removed most types of suggested search results by default, but over time Shubham learned that different users want different things in their search results. “I realized the line between ‘helpful’ and ‘distracting’ is very subjective,” explains Shubham. “What one person finds useful, another might not. Ultimately, it’s up to the user to decide what works best for them. That’s why I decided to give users granular control using an Options page. Now people can go about hiding elements they find distracting while keeping those they deem helpful. It’s all about striking that personal balance.”
Despite YouTube Search Fixer’s current wealth of customization options (a cool new feature automatically redirects Shorts to their normal length versions), Shubham plans to expand his extension’s feature set. He’s considering keyword highlighting and denylist options, which would give users extreme control over search filtering.
More than solving what he felt was a problem with YouTube’s default search results, Shubham was motivated to build his extension as a “way of giving back to a community I deeply appreciate… I’ve used Firefox since I was in high school. Like countless others, I’ve benefited greatly from the ever helpful MDN Web Docs and the incredible add-ons ecosystem Mozilla hosts and helps thrive. They offer nice developer tools and cultivate a helpful and welcoming community. So making this was my tiny way of giving back and saying ‘thank you’.”
When he’s not writing extensions that improve the world’s most popular video streaming site, Shubham enjoys photographing his home garden in Lucknow, India. “It isn’t just a hobby,” he explains. “Experimenting with light, composition and color has helped me focus on visual aesthetics (in software development). Now, I actively pay attention to little details when I create visually appealing and user-friendly interfaces.”
Do you have an intriguing extension development story? Do tell! Maybe your story should appear on this blog. Contact us at amo-featured [at] mozilla [dot] org and let us know a bit about your extension development journey.
Hello Thunderbird Community! I can’t believe it’s already the end of February. Time goes by very fast and it seems that there’s never enough time to do all the things that you set your mind to. Nonetheless, it’s that time of the month again for a juicy and hopefully interesting Thunderbird Development Digest.
If this is your first time reading our monthly Dev Digest, these are short posts to give our community visibility into features and updates being planned for Thunderbird, as well as progress reports on work that’s in the early stages of development.
Let’s jump right into it, because there’s a lot to get excited about!
Rust and Exchange
Things are moving steadily on this front. Maybe not as fast as we would like, but we’re handling a complicated implementation and we’re adding a new protocol for the first time in more than a decade, so some friction is to be expected.
Nonetheless, you can start following the progress in our Thundercell repository. We’re using this repo to temporarily “park” crates and other libraries we’re aiming to vendor inside Thunderbird.
We’re aiming to reach an alpha state that we can land in Thunderbird later next month, and to start asking for user feedback on Daily.
Mozilla Account + Thunderbird Sync
Illustration by Alessandro Castellani
Things are moving forward on this front as well. We’re currently in the process of setting up our own SyncServer and TokenStorage in order to allow users to log in with their Mozilla Account but sync their Thunderbird data in a location independent from their Firefox data. This gives us an extra layer of security, as it prevents one app from accessing the other app’s data and vice versa.
In case you didn’t know, you can already use a Mozilla account and Sync on Daily, but this only works with a staging server and you’ll need an alternate Mozilla account for testing. There are a couple of known bugs but overall things seem to be working properly. Once we switch to our storage server, we will expose this feature more and enable it on Beta for everyone to test.
Oh, Snap!
Our continuous effort to own our packages and distribution methods is moving forward with the internal creation of a Snap package. (For background: last year we took ownership of the Thunderbird Flatpak.)
We’re currently testing the Beta internally, and things seem to be working as expected. We will announce it publicly when it’s available from the Snap Store, with the objective of offering both Stable and Beta channels.
We’re exploring the possibility of also offering a Daily channel, but that’s a bit more complicated and we will need more time to make sure it’s doable and automated, so stay tuned.
As usual, if you want to see things as they land you can always check the pushlog and try running Daily, which would be immensely helpful for catching bugs early.
See ya next month,
Alessandro Castellani (he/him), Director of Product Engineering
If you’re interested in joining the technical discussion around Thunderbird development, consider joining one or several of our mailing list groups here.
Hello and welcome to another issue of This Week in Rust!
Rust is a programming language empowering everyone to build reliable and efficient software.
This is a weekly summary of its progress and community.
Want something mentioned? Tag us at @ThisWeekInRust on Twitter or @ThisWeekinRust on mastodon.social, or send us a pull request.
Want to get involved? We love contributions.
An important step for RFC implementation is for people to experiment with the
implementation and give feedback, especially before stabilization. The following
RFCs would benefit from user testing before moving forward:
No RFCs issued a call for testing this week.
If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing
label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature
need testing.
Always wanted to contribute to open-source projects but did not know where to start?
Every week we highlight some tasks from the Rust community for you to pick and get started!
Some of these tasks may also have mentors available, visit the task page for more information.
Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.
If you are an event organizer hoping to expand the reach of your event, please submit a link to the submission website through a PR to TWiR.
A rare week where regressions outweighed improvements, making the compiler roughly half a percent slower on average across nearly 100 benchmarks. Some regressions have fixes in the pipeline, but some remain elusive or were introduced to address correctness issues.
If you are running a Rust event please add it to the calendar to get
it mentioned here. Please remember to add a link to the event too.
Email the Rust Community Team for access.
<figcaption>Font fallback now works for Chinese, Japanese, and Korean.</figcaption>
A couple of weeks ago, Servo surpassed its legacy layout engine in a core set of CSS2 test suites (84.2% vs 82.8% in legacy), but now we’ve surpassed legacy in the whole CSS test suite (63.6% vs 63.5%) as well!
More on how we got there in a bit, but first let’s talk about new API support:
as of 2024-02-07, you can safely console.log() symbols and large arrays (@syvb, #31241, #31267)
as of 2024-02-07, we support CanvasRenderingContext2D.reset() (@syvb, #31258)
as of 2024-02-08, we support navigator.hardwareConcurrency (@syvb, #31268)
as of 2024-02-11, you can look up shorthands like ‘margin’ in getComputedStyle() (@sebsebmc, #31277)
as of 2024-02-15, we accept SVG with the image/svg+xml mime type (@KiChjang, #31318)
as of 2024-02-20, we support non-XR game controllers with the Gamepad API (@msub2, #31200)
as of 2024-02-23, we have basic support for ‘text-transform’ (@mrobinson, @atbrakhi, #31396)
— except ‘full-width’, ‘full-size-kana’, grapheme clusters, and language-specific transforms
As of 2024-02-12, we have basic support for font fallback (@mrobinson, #31254)!
This is especially important for pages that mix text from different languages.
More work is needed to support shaping across element boundaries and shaping complex scripts like Arabic, but the current version should be enough for Chinese, Japanese, and Korean.
If you encounter text that still fails to display, be sure to check your installed fonts against the page styles and Servo’s default font lists (Windows, macOS, Linux).
<figcaption>
Space Jam (1996) now has correct layout with --pref layout.tables.enabled.
</figcaption>
As of 2024-02-24, layout now runs in the script thread, rather than in a dedicated layout thread (@mrobinson, @jdm, #31346), though it can still spawn worker threads to parallelise layout work.
Since the web platform almost always requires layout to run synchronously with script, this should allow us to make layout simpler and more reliable without regressing performance.
Our experimental tables support (--pref layout.tables.enabled) has vastly improved:
Together with inline layout for <div align> and <center> (@Loirooriol, #31388) landing in 2024-02-24, we now render the classic Space Jam website correctly when tables are enabled!
As of 2024-02-24, we support videos with autoplay (@jdm, #31412), and windows containing videos no longer crash when closed (@jdm, #31413).
Many layout and CSS bugs have also been fixed:
as of 2024-01-28, correct rounding of clientLeft, clientTop, clientWidth, and clientHeight (@mrobinson, #31187)
as of 2024-01-30, correct cache invalidation of client{Left,Top,Width,Height} after reflow (@Loirooriol, #31210, #31219)
as of 2024-02-03, correct width and height for preloaded Image objects (@syvb, #31253)
as of 2024-02-07, correct [...spreading] and indexing[0] of style objects (@Loirooriol, #31299)
as of 2024-02-09, correct border widths in fragmented inlines (@mrobinson, #31292)
as of 2024-02-11, correct UA styles for <hr> (@sebsebmc, #31297)
as of 2024-02-24, correct positioning of absolutes with ‘inset: auto’ (@mrobinson, #31418)
Embedding, code health, and dev changes
We’ve landed a few embedding improvements:
we’ve removed several mandatory WindowMethods relating to OpenGL video playback (@mrobinson, #31209)
we’ve removed webrender_surfman, and WebrenderSurfman is now in gfx as RenderingContext (@mrobinson, #31184)
We’ve fixed one of the blockers for building Servo with clang 16 (@mrobinson, #31306), but a blocker for clang 15 still remains.
See #31059 for more details, including how to build Servo against clang 14.
We’ve also made some other dev changes:
we’ve removed the unmaintained libsimpleservo C API (@mrobinson, #31172), though we’re open to adding a new C API someday
we’ve upgraded surfman such that it no longer depends on winit (@mrobinson, #31224)
we’ve added support for building Servo on Asahi Linux (@arrynfr, #31207)
In the meantime, check out Rakhi’s recent talk Embedding Servo in Rust projects, which she gave at FOSDEM 2024 on 3 February 2024.
Here you’ll learn about the state of the art around embedding Servo and Stylo, including a walkthrough of our example browser servoshell, our ongoing effort to integrate Servo with Tauri, and a sneak peek into how Stylo might someday be usable with Dioxus:
Since Clippy v0.0.97, from before it was shipped with rustup, Clippy has
implicitly added a feature = "cargo-clippy" config [1] when linting your code
with cargo clippy.
Back in the day (2016) this was necessary to allow, warn or deny Clippy lints
using attributes:
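As a sketch of that historical pattern (lint_name stands in for a real lint name), the attribute was gated on the implicit feature, and in a normal build it was simply never applied:

```rust
// Sketch of the historical gating: `lint_name` is a placeholder.
// In a normal (non-Clippy) build the cfg is false, so the attribute
// is never applied and the code compiles as usual.
#[cfg_attr(feature = "cargo-clippy", allow(lint_name))]
fn answer() -> u32 {
    42
}

fn main() {
    println!("{}", answer()); // prints 42
}
```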
Doing this hasn't been necessary for a long time. Today, Clippy users will set
lint levels with tool lint attributes using the clippy:: prefix:
#[allow(clippy::lint_name)]
The implicit feature = "cargo-clippy" has only been kept for backwards
compatibility, but will be deprecated in upcoming nightlies and later in
1.78.0.
Alternative
As there is a rare use case for conditional compilation depending on Clippy,
we will provide an alternative. So in the future (1.78.0) you will be able to
use:
#[cfg(clippy)]
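For illustration, here is a small runnable sketch (the function name is ours, not part of Clippy) that branches on both the new cfg and the deprecated one via the cfg! macro; in a plain rustc or cargo build neither is set:

```rust
// Sketch: report how the crate is being compiled. Under `cargo clippy`
// (1.78+) cfg!(clippy) is true; older Clippy versions set the implicit
// "cargo-clippy" feature instead; a normal build sets neither.
fn build_kind() -> &'static str {
    if cfg!(clippy) {
        "linted by Clippy"
    } else if cfg!(feature = "cargo-clippy") {
        "linted by Clippy (deprecated cfg)"
    } else {
        "normal build"
    }
}

fn main() {
    println!("{}", build_kind());
}
```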
Transitioning
If you only use stable toolchains, you can wait to transition until
Rust 1.78.0 (2024-05-02) is released.
Should you have instances of feature = "cargo-clippy" in your code base, you
will see a warning from the new Clippy lint
clippy::deprecated_clippy_cfg_attr available in the latest nightly Clippy.
This lint can automatically fix your code. So if you see this lint
triggering, just run cargo clippy --fix.
If you have this config, you will have to update it yourself, by either changing
it to cfg(clippy) or taking this opportunity to transition to setting lint
levels in Cargo.toml directly.
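Since Cargo 1.74 you can set lint levels in the [lints] table of Cargo.toml; a minimal sketch (the lints chosen here are only examples):

```toml
# Example only: pick whichever Clippy lints matter to your project.
[lints.clippy]
unwrap_used = "warn"
dbg_macro = "deny"
```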
Motivation for Deprecation
Currently, there's a call for testing in order to stabilize checking
conditional compilation at compile time, aka cargo check -Zcheck-cfg. If we were to keep the feature = "cargo-clippy" config, users
would start seeing a lot of warnings on their feature = "cargo-clippy"
conditions. To work around this, they would either need to allow the lint or
have to add a dummy feature to their Cargo.toml in order to silence those
warnings:
[features]
cargo-clippy = []
We didn't think this would be user friendly, and decided that instead we want to
deprecate the implicit feature = "cargo-clippy" config and replace it with the
clippy config.
[1]: It's likely that you didn't even know that Clippy implicitly sets this
config (which was not a Cargo feature). This is intentional, as we stopped
advertising and documenting this a long time ago. ↩
On the heels of Mozilla’s Rise25 Awards in Berlin last year, we’re excited to announce that we’ll be returning once again with a special celebration that will take place in Dublin, Ireland later this year.
The 2nd Annual Rise25 Awards will feature familiar categories, but with an emphasis on trustworthy AI. We will be honoring 25 people who are leading that next wave of AI — who are using philanthropy, collective power, and the principles of open source to make sure the future of AI is responsible, trustworthy, inclusive and centered around human dignity.
2023 was indeed the year of AI, and as more people adopt it, we know it is a technology that will continue to impact our culture and society, act as a catalyst for innovation and creation, and be a medium to engage people from all walks of life in conversations thanks to its growing ubiquity in our everyday lives.
We know we cannot do this alone: At Mozilla, we believe the most groundbreaking innovations emerge when people from diverse backgrounds unite to collaborate and openly trade ideas.
Five winners from each of the five categories below will be selected to make up our 2024 Rise25 cohort:
Advocates – Guiding AI towards a responsible future
These are the policymakers, activists, and thinkers ensuring AI is developed ethically, inclusively, and transparently. This category also includes those who are adept at translating complex AI concepts for the broader public — including journalists, content creators, and cultural commentators. They champion digital rights and accessible AI, striving to make AI a force for societal good.
Builders – Developing AI through ethical innovation
They are the architects of trustworthy AI, including engineers and data scientists dedicated to developing AI’s open-source language infrastructure. They focus on technical proficiency and responsible and ethical construction. Their work ensures AI is secure, accessible, and reliable, aiming to create tools that empower and advance society.
Artists – Reimagining AI’s creative potential
They transcend traditional AI applications, like synthesizing visuals or using large language models. Their projects, whether interactive websites, films, or digital media, challenge our perceptions and demonstrate how AI can amplify and empower human creativity. Their work provokes thought and offers fresh perspectives on the intersection of AI and art.
Entrepreneurs – Fueling AI’s evolution with visionary ventures
These daring individuals are transforming imaginative ideas into reality. They’re crafting businesses and solutions with AI to meet societal needs, improve everyday life and forge new technological paths. They embody innovation, steering startups and projects with a commitment to ethical standards, inclusiveness and enhancing human welfare through technology.
Change Agents – Cultivating inclusive AI
They are challengers that lead the way in diversifying AI, bringing varied community voices into tech. They focus on inclusivity in AI development, ensuring technology serves and represents everyone, especially those historically excluded from the tech narrative. They are community leaders, corporate leaders, activists and outside-the-box thinkers finding ways to amplify the impacts of AI for marginalized communities. Their work fosters an AI environment of equality and empowerment.
This year’s awards build upon the success of last year’s programming and community event in Berlin, which brought to life what a future trustworthy Internet could look like. Last year’s event crowned trailblazers and visionaries across five distinct categories: Builders, Activists, Artists, Creators, and Advocates. (Psst! Stay tuned as we unveil their inspiring stories in a video series airing across Mozilla channels throughout the year, leading up to the 2nd Annual Rise25 Awards.)
So join us as we honor the innovators, advocates, entrepreneurs, and communities who are working to build a happier, healthier web. Click here to submit your nomination today.
Alex added a new option to log frame returns to the nascent JS Tracer feature (bug)
This can be really useful when paired with “Log function arguments and returned values”. See the example below, tracing a Redux reducer on about:home
The JS Tracer is a tool that the DevTools team is developing to log JavaScript execution at runtime. This can be very helpful when debugging or exploring a new JavaScript codebase!
Set devtools.debugger.features.javascript-tracing to true to enable the JS Tracer feature and try it out!
Nicolas fixed a performance issue in the Inspector when showing a lot of rules (bug)
For 5000 rules, on Nicolas’s engineering machine, this went from 7 seconds to 50 milliseconds!
Marco enabled cross-container tab searching in Nightly (1876743). Users can now search in the URL bar for tabs open in different containers. This behaviour can be controlled through the browser.urlbar.switchTabs.searchAllContainers boolean pref.
The Firefox View team has added new tab indicators to the Open Tabs section (sound playing, notifications, etc) and the option of sorting Open Tabs by recency. Both are slated to ship in Firefox 124!
Nicolas started the migration of DevTools to CodeMirror6. The first consumer is the EventTooltip, in the markup view (bug)
Hubert will manage the migration of the Debugger to the new CodeMirror version, which will happen incrementally behind a pref, until everything is ready (bug)
WebDriver BiDi
Thanks to Jing Zhu for updating various deserialization methods in RemoteAgent to match the specifications more closely (bug).
Kagami updated Marionette permissions to handle storage-related permissions (bug).
Sasha implemented the storage.getCookie command, which allows retrieving cookies (with support for partitions and filtering) (bug).
Sasha also implemented the storage.setCookie command to create new cookies (bug).
Julian added basic support for two network interception commands network.continueRequest and network.continueResponse (bug). We also removed the experimental flag gating the various network interception commands (bug).
Julian implemented the userContext parameter for browsingContext.create, which allows setting the user context (Firefox container) that owns a new tab or window (bug). We added the userContext field to events and payloads describing a browsing context, so that you can check which container owns a given browsing context (bug).
Julian fixed a bug where the network.fetchError event was missing when a network request failed very early (bug).
Henrik fixed a bug with Get Element Text (WebDriver classic) which could fail when used with web components (bug).
ESMification status
ESMified status:
browser: 100%!
toolkit: 99.83%
devtools: 89.09%
dom: 96%
services: 98.94%
Only 10 JSMs left in the tree!
Total: 99.34% (+0.86% from last week)
#esmification on Matrix
Information Management
Firefox View
More Open Tabs features we’re planning to ship in 125: Pinned Tabs, Bookmark and New Tab pinned Indicators
Drew updated the MDN suggestion code, weather suggestion code and quick suggest config to account for all three of those suggestion types being implemented in Rust (1877595, 1878441, 1878444)
Drew finalized the UX for Yelp suggestions (1878727)
Drew added a “Local recommendations” group label for Yelp suggestions (1879397)
Drew updated the Yelp desktop suggestions to accommodate certain prefix-matching rules that are applied to what the user types into the urlbar (1879642)
Daisuke implemented the result menu for Yelp suggestions (1878728, 1879637)
Daisuke ensured Yelp suggestions are labeled as “sponsored” (1878814)
Daisuke added a Nimbus variable for testing Yelp suggestions as a top pick (1877920)
Daisuke allowed Yelp suggestions to pull location information from the Merino service if that Yelp suggestion didn’t already have a location (1878206)
Search and SERP (Search Engine Result Page) telemetry
James and Stephanie have continued their work on the SERP categorization project (1848197, 1879737)
James fixed two issues with how we record SERP telemetry when the SERP is a single page app (1878062, 1879404)
Consolidated Search Configuration
Mandy fixed an issue where prior search settings were lost when using Amazon as the default search engine in Spain (1878277)
General Search Service
Standard8 simplified the way the search service loads engines (1879126)
General Improvements
Karandeep landed the first in a series of patches to remove an ultimately unused API for urlbar experiments (1855958)
The minimum requirements for Tier 1 toolchains targeting Windows will increase with the 1.78 release (scheduled for May 02, 2024).
Windows 10 will now be the minimum supported version for the *-pc-windows-* targets.
These requirements apply both to the Rust toolchain itself and to binaries produced by Rust.
Two new targets have been added with Windows 7 as their baseline: x86_64-win7-windows-msvc and i686-win7-windows-msvc.
They are starting as Tier 3 targets, meaning that the Rust codebase has support for them but we don't build or test them automatically.
Once these targets reach Tier 2 status, they will be available to use via rustup.
Affected targets
x86_64-pc-windows-msvc
i686-pc-windows-msvc
x86_64-pc-windows-gnu
i686-pc-windows-gnu
x86_64-pc-windows-gnullvm
i686-pc-windows-gnullvm
Why are the requirements being changed?
Prior to now, Rust had Tier 1 support for Windows 7, 8, and 8.1 but these targets no longer meet our requirements.
In particular, these targets could no longer be tested in CI which is required by the Target Tier Policy and are not supported by their vendor.
Finally getting back towards something approaching current. Firefox 123 is out, adding platform improvements, off-main-thread canvas and the ability to report problematic sites. Or, I dunno, sites that work just fine but claim they don't, like PG&E, the soulless natural monopolist Abilisks of northern California. No particular reason. The other reported improvement was PGO optimization improvements on Apple silicon Macs and Android. How cute! Meanwhile, our own PGO-LTO patch got simpler and I was able to drop the other changes we needed for Python 3.12 on Fedora 39, which now builds with this smaller PGO-LTO patch and .mozconfigs from Firefox 122. Some of you reported crashes on Fx122 but I haven't observed any with that release or this one built from source. Fingers crossed.
I was recently invited to join the Matrix “Spec Core Team”, the group who
stewards the Matrix protocol. From their own documentation:
The contents and direction of the Matrix Spec is governed by the Spec Core Team;
a set of experts from across the whole Matrix community, representing all aspects
of the Matrix ecosystem. The Spec Core Team acts as a subcommittee of the Foundation.
This was announced a couple of weeks ago and I’m just starting to get my feet
wet! You can see an interview between myself, Tulir (another new member of the Spec
Core Team), and Matthew (the Spec Core Team lead) in today’s This Week in Matrix.
We cover a range of topics including Thunderbird (and Instantbird), some
improvements I hope to make and more.
Matrix includes the ability for a client to request that the server
generate a “preview” for a URL. The client provides a URL to the server which
returns Open Graph data as a JSON response. This leaks any URLs detected in
the message content to the server, but protects the end user’s IP address, etc.
from the URL being previewed. [1] (Note that clients generally disable URL previews
for encrypted rooms, but it can be enabled.)
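As a concrete sketch of the request a client makes (hedged: the homeserver URL is a placeholder, and this only builds the request URL for the v3 media endpoint rather than performing a network call or authentication):

```rust
// Sketch: build the Matrix URL preview request for a target URL.
// The homeserver is a placeholder; a real client would also send an
// access token and read the Open Graph JSON from the response.
fn preview_request_url(homeserver: &str, target: &str) -> String {
    // Minimal percent-encoding of the target for the query string.
    let encoded: String = target
        .bytes()
        .map(|b| match b {
            b'A'..=b'Z' | b'a'..=b'z' | b'0'..=b'9' | b'-' | b'.' | b'_' | b'~' => {
                (b as char).to_string()
            }
            _ => format!("%{:02X}", b),
        })
        .collect();
    format!("{homeserver}/_matrix/media/v3/preview_url?url={encoded}")
}

fn main() {
    println!(
        "{}",
        preview_request_url("https://matrix.example", "https://matrix.org/")
    );
}
```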
Improvements
Synapse implements the URL preview endpoint, but it was a bit neglected. I was
one of the few main developers running with URL previews enabled and sank a bit of
time into improving them for my own sake. Some highlights of the improvements
made include (in addition to lots and lots of refactoring):
Apply url_preview_url_blacklist to oEmbed and pre-cached images:
#15601.
Results
Overall, the results improved (from my point of view). To summarize some of
the improvements, I tested 26 URLs (based on ones that had previously been
reported or found to give issues); see the table below for results at a few versions.
The error reason was also broken out into whether JavaScript was required or some
other error occurred. [2]
| Version | Release date | Successful preview | JavaScript required error | Found image & description? |
|---------|--------------|--------------------|---------------------------|----------------------------|
| 1.0.0   | 2019-06-11   | 15                 | 4                         | 14                         |
| 1.12.0  | 2020-03-23   | 18                 | 4                         | 17                         |
| 1.24.0  | 2020-12-09   | 20                 | 1                         | 16                         |
| 1.36.0  | 2021-06-15   | 20                 | 1                         | 16                         |
| 1.48.0  | 2021-11-30   | 20                 | 1                         | 11                         |
| 1.60.0  | 2022-05-31   | 21                 | 0                         | 21                         |
| 1.72.0  | 2022-11-22   | 22                 | 0                         | 21                         |
| 1.84.0  | 2023-05-23   | 22                 | 0                         | 21                         |
Future improvements
I am no longer working on Synapse, but some of the ideas I had for additional improvements included:
Use BeautifulSoup instead of a custom parser to handle some edge cases in HTML
documents better (WIP @ clokep/bs4).
Fixing any of the other issues with particular URLs (see this GitHub search).
Thumbnailing of SVG images (which sites tend to use for favicons) (#1309).
There’s also a ton more that could be done here if you wanted, e.g. handling more
data types (text and PDF are the ones I have frequently come across that would be
helpful to preview). I’m sure there are also many other URLs that don’t work right
now for some reason. Hopefully the URL preview code continues to improve!
See some ancient documentation on the tradeoffs and design of URL previews.
MSC4095 was recently written to bundle the URL preview information into
events.
This was done by instantiating different Synapse versions via Docker and
asking them to preview URLs. (See the code.) This is not a super realistic
test since it assumes that URLs are static over time. In particular some
sites (e.g. Twitter) like to change what they allow you to access without
being authenticated.
(In short: Mozilla has updated its take on the state of AI — and what we need to do to make AI more trustworthy. Read the paper and share your feedback: AIPaper@mozillafoundation.org.)
In 2020, when Mozilla first focused its philanthropy and advocacy on trustworthy AI, we published a paper outlining our vision. We mapped the barriers to a better AI ecosystem — barriers like centralization, algorithmic bias, and poor data privacy norms. We also mapped paths forward, like shifting industry norms and introducing new regulations and incentives.
The upshot of that report? We learned AI has a lot in common with the early web. So much promise, but also peril — with harms spanning privacy, security, centralization, and competition. Mozilla’s expertise in open source and holding incumbent tech players accountable put us in a good place to unpack this dynamic and take action.
A lot has changed since 2020. AI technology has grown more centralized, powerful, and pervasive; its risks and opportunities are not abstractions. Conversations about AI have grown louder and more urgent. Meanwhile, within Mozilla, we’ve made progress on our vision, from research and investments to products and grantmaking.
Today, we’re publishing an update to our 2020 report — the progress we’ve made so far, and the work that is left to do.
Our original paper focused on four strategic areas:
Changing AI development norms,
Building new tech and products,
Raising consumer awareness,
Strengthening AI regulations and incentives.
This update revisits those areas, outlining what’s changed for the better, what’s changed for the worse, and what’s stayed the same. At a very high level, our takeaways are:
Norms: The people that broke the internet are the ones building AI.
Products: More trustworthy AI products need to be mainstream.
Consumers: A more engaged public still needs better choices on AI.
Policy: Governments are making progress while grappling with conflicting influences.
A consistent theme across these areas is the importance and potential of openness for the development of more trustworthy AI — something Mozilla hasn’t been quiet about.
Our first trustworthy AI paper was both a guidepost and map, and this one will be, too. Within are Mozilla’s plans for engaging with AI issues and trends. The paper outlines five key steps Mozilla will take in the years ahead (like making open-source generative AI more trustworthy and mainstream), and also five steps the broader movement can take (like pushing back on regulations that would make AI even less open).
Our first paper was also “open source,” and this one is, too. We are seeking input on the report and on the state of the AI ecosystem more broadly. Through your comments and a series of public events, we will take feedback from the AI community and use it to strengthen our understanding and vision for the future. Please contact us at AIPaper@mozillafoundation.org and send us your feedback on the report, as well as examples of trustworthy AI approaches and applications.
The movement for trustworthy AI has made meaningful progress since 2020, but there’s still much more work to be done. It’s time to redouble our efforts and recommit to our core principles, and this report is Mozilla’s next step in doing that. It will take all of us, working together, to turn this vision into reality. There’s no time to waste — let’s get to work.
Hello and welcome to another issue of This Week in Rust!
Rust is a programming language empowering everyone to build reliable and efficient software.
This is a weekly summary of its progress and community.
Want something mentioned? Tag us at @ThisWeekInRust on Twitter or @ThisWeekinRust on mastodon.social, or send us a pull request.
Want to get involved? We love contributions.
An important step for RFC implementation is for people to experiment with the
implementation and give feedback, especially before stabilization. The following
RFCs would benefit from user testing before moving forward:
If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing
label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature
need testing.
Always wanted to contribute to open-source projects but did not know where to start?
Every week we highlight some tasks from the Rust community for you to pick and get started!
Some of these tasks may also have mentors available, visit the task page for more information.
Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.
If you are an event organizer hoping to expand the reach of your event, please submit a link to the submission website through a PR to TWiR.
Relatively few PRs affecting performance, but massive improvements thanks to the
update to LLVM 18 (PR #12005), as well as the merging of two related compiler
queries (PR #120919) and other small improvements from a rollup (PR #121055).
If you are running a Rust event please add it to the calendar to get
it mentioned here. Please remember to add a link to the event too.
Email the Rust Community Team for access.
We're writing this blog post to announce that the Rust Project will be participating in Google Summer of Code (GSoC) 2024. If you're not eligible or interested in participating in GSoC, then most of this post likely isn't relevant to you; if you are, this should contain some useful information and links.
Google Summer of Code (GSoC) is an annual global program organized by Google that aims to bring new contributors to the world of open-source. The program pairs organizations (such as the Rust Project) with contributors (usually students), with the goal of helping the participants make meaningful open-source contributions under the guidance of experienced mentors.
As of today, the organizations that have been accepted into the program have been announced by Google. The GSoC applicants now have several weeks to send project proposals to organizations that appeal to them. If their project proposal is accepted, they will embark on a 12-week journey during which they will try to complete their proposed project under the guidance of an assigned mentor.
We have prepared a list of project ideas that can serve as inspiration for potential GSoC contributors that would like to send a project proposal to the Rust organization. However, applicants can also come up with their own project ideas. You can discuss project ideas or try to find mentors in the #gsoc Zulip stream. We have also prepared a proposal guide that should help you with preparing your project proposals.
You can start discussing the project ideas with Rust Project maintainers immediately. The project proposal application period starts on March 18, 2024, and ends on April 2, 2024 at 18:00 UTC. Take note of that deadline, as there will be no extensions!
If you are interested in contributing to the Rust Project, we encourage you to check out our project idea list and send us a GSoC project proposal! Of course, you are also free to discuss these projects and/or try to move them forward even if you do not intend to (or cannot) participate in GSoC. We welcome all contributors to Rust, as there is always enough work to do.
This is the first time that the Rust Project is participating in GSoC, so we are quite excited about it. We hope that participants in the program can improve their skills, but also would love for this to bring new contributors to the Project and increase the awareness of Rust in general. We will publish another blog post later this year with more information about our participation in the program.
FOSDEM (Free and Open Source Software Developers’ European Meeting) is one of the largest gatherings of open-source enthusiasts, developers, and advocates worldwide. Each year there are many focused developer rooms (devrooms), managed by volunteers, and this year’s edition on 3-4 February saw the return of the Web Performance devroom managed by Peter Hedenskog from Wikimedia and myself (Dave Hunt) from Mozilla. Thanks to so many great talk proposals (we easily could have filled a full day), we were able to assemble a fantastic schedule, and at times the room was full, with as many people standing outside hoping to get in!
Dive into the talks
Thanks to the FOSDEM organisers and preparation from our speakers, we successfully managed to squeeze nine talks into the morning with a tight turnaround time. Here’s a rundown of the sessions:
1. The importance of Web Performance to Information Equity
Bas Schouten kicked off the morning with his informative talk on the vital role web performance plays in ensuring equal access to information and services for those with slower devices.
2. Let’s build a RUM system with open source tools
3. Better than loading fast… is loading instantly!
At this point the room was at capacity, with at least as many people waiting outside! Next, Barry Pollard shared details on how to score near-perfect Core Web Vitals in his talk on pre-fetching and pre-rendering.
4. Keyboard Interactions
Patricija Cerkaite followed with her talk on how she helped to improve measuring keyboard interactions, and how this influenced Interaction to Next Paint, leading to a better experience for Input Method Editors (IME).
5. Web Performance at Mozilla and Wikimedia
Midway through the morning, Peter Hedenskog and I shared some insights into how Wikimedia and Mozilla measure performance in our talk. Peter shared some public dashboards, and I ran through a recent example of a performance regression affecting our page load tests.
6. Understanding how the web browser works, or tracing your way out of (performance) problems
We handed the spotlight over to Alexander Timin for his talk on event tracing and browser engineering based on his experience working on the Chromium project.
7. Fast JavaScript with Data-Oriented Design
The morning continued to go from strength to strength, with Markus Stange demonstrating in his talk how to iterate and optimise a small example project and showing how easy it is to use the Firefox Profiler.
8. From Google AdSense to FOSS: Lightning-fast privacy-friendly banners
As we got closer to lunch, Tim Vereecke teased us with hamburger banner ads in his talk on replacing Google AdSense with open source alternative Revive Adserver to address privacy and performance concerns.
9. Insights from the RUM Archive
For our final session of the morning, Robin Marx introduced us to the RUM Archive, shared some insights and challenges with the data, and discussed the part real user monitoring plays alongside other performance analysis.
I would like to thank all the amazing FOSDEM volunteers for supporting the event. Thank you to our wonderful speakers and everyone who submitted a proposal for providing us with such an excellent schedule. Thank you to Peter Hedenskog for bringing his devroom management experience to the organisation and facilitation of the devroom. Thank you to Andrej Glavic, Julien Wajsberg, and Nazım Can Altınova for their help managing the room and ensuring everything ran smoothly. See you next year!
WebDriver is a remote control interface that enables introspection and control of user agents. As such, it can help developers verify that their websites are working and performing well with all major browsers. The protocol is standardized by the W3C and consists of two separate specifications: WebDriver classic (HTTP) and the new WebDriver BiDi (Bi-Directional).
This newsletter gives an overview of the work we’ve done as part of the Firefox 123 release cycle.
Contributions
With Firefox being an open source project, we are grateful to get contributions from people outside of Mozilla.
New: Support for the “browsingContext.locateNodes” command
Support for the browsingContext.locateNodes command has been introduced to find elements on the given page. Supported locators for now are CssLocator and XPathLocator. Additional support for locating elements by InnerTextLocator will be added in a later version.
This command encapsulates the logic for locating elements within a web page’s DOM, streamlining the process for users familiar with the Find Element(s) methods from WebDriver classic (HTTP). Alternatively, users can still utilize script.evaluate, although it necessitates knowledge of the appropriate JavaScript code for evaluation.
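For readers who have not used BiDi directly: commands are JSON messages sent over the session's websocket. A hedged sketch of what a browsingContext.locateNodes call might look like (the message id, context identifier, and selector are made-up values; the payload shape follows the WebDriver BiDi specification):

```json
{
  "id": 12,
  "method": "browsingContext.locateNodes",
  "params": {
    "context": "example-context-id",
    "locator": {
      "type": "css",
      "value": ".article h2"
    }
  }
}
```

The response carries the matched nodes as remote values that can then be passed to other commands, much like element references in WebDriver classic.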
We implemented this change to simplify the creation of tests that need to run across various desktop platforms and Android. Consequently, specific adjustments for new top-level browsing contexts are no longer required, enhancing the test creation process.
The Rust Survey Team is excited to share the results of our 2023 survey on the Rust programming language, conducted between December 18, 2023 and January 15, 2024.
As in previous years, the 2023 State of Rust Survey was focused on gathering insights and feedback from Rust users, and all those who are interested in the future of Rust more generally.
This eighth edition of the survey surfaced new insights and learning opportunities straight from the global Rust language community, which we will summarize below. In addition to this blog post, this year we have also prepared a report containing charts with aggregated results of all questions in the survey. Based on feedback from recent years, we have also tried to provide more comprehensive and interactive charts in this summary blog post. Let us know what you think!
Our sincerest thanks to every community member who took the time to express their opinions and experiences with Rust over the past year. Your participation will help us make Rust better for everyone.
There's a lot of data to go through, so strap in and enjoy!
Participation
Survey (Started / Completed / Completion rate / Views):
2022: 11,482 / 9,433 / 82.2% / 25,581
2023: 11,950 / 9,710 / 81.3% / 16,028
As shown above, in 2023 we received 37% fewer survey views than in 2022, but saw a slight uptick in starts and completions. There are many reasons why this could have been the case, but it’s possible that because we released the 2022 analysis blog post so late last year, the survey was fresh in many Rustaceans’ minds. This might have prompted fewer people to feel the need to open the most recent survey. Therefore, we find it doubly impressive that there were more starts and completions in 2023, despite the lower overall view count.
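The headline percentages can be checked directly from the participation figures; a small sketch (numbers taken from the figures above):

```python
# Participation figures from the 2023 State of Rust survey.
views_2022, views_2023 = 25_581, 16_028
started_2023, completed_2023 = 11_950, 9_710

# Year-over-year drop in survey views.
view_drop = (views_2022 - views_2023) / views_2022
print(f"{view_drop:.0%} fewer views")

# 2023 completion rate: completed / started.
completion_rate = completed_2023 / started_2023
print(f"{completion_rate:.1%} completion rate")
```

Running this prints a 37% drop in views and an 81.3% completion rate, matching the post.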
Community
This year, we have relied on automated translations of the survey, and we have asked volunteers to review them. We thank the hardworking volunteers who reviewed these automated survey translations, ultimately allowing us to offer the survey in seven languages: English, Simplified Chinese, French, German, Japanese, Russian, and Spanish. We decided not to publish the survey in languages without a translation review volunteer, meaning we could not issue the survey in Portuguese, Ukrainian, Traditional Chinese, or Korean.
The Rust Survey team understands that there were some issues with several of these translated versions, and we apologize for any difficulty this has caused. We are always looking for ways to improve going forward and are in the process of discussing improvements to this part of the survey creation process for next year.
We saw a 3pp increase in respondents taking this year’s survey in English – 80% in 2023 and 77% in 2022. Across all other languages, we saw only minor variations – all of which are likely due to us offering fewer languages overall this year due to having fewer volunteers.
Rust user respondents were asked which country they live in. The top 10 countries represented were, in order: United States (22%), Germany (12%), China (6%), United Kingdom (6%), France (6%), Canada (3%), Russia (3%), Netherlands (3%), Japan (3%), and Poland (3%). We were interested to see a small reduction in participants taking the survey in the United States in 2023 (down 3pp from the 2022 edition), which is a positive indication of the growing global nature of our community! You can try to find your country in the chart below:
Once again, the majority of our respondents reported being most comfortable communicating on technical topics in English at 92.7% — a slight difference from 93% in 2022. Again, Chinese was the second-highest choice for preferred language for technical communication at 6.1% (7% in 2022).
We also asked whether respondents consider themselves members of a marginalized community. Out of those who answered, 76% selected no, 14% selected yes, and 10% preferred not to say.
We asked the group that selected “yes” which specific groups they identified as being a member of. The majority of those who consider themselves a member of an underrepresented or marginalized group in technology identify as lesbian, gay, bisexual, or otherwise non-heterosexual. The second most selected option was neurodivergent at 41%, followed by trans at 31.4%. Going forward, it will be important for us to track these figures over time to learn how our community changes and to identify the gaps we need to fill.
As Rust continues to grow, we must acknowledge the diversity, equity, and inclusivity (DEI)-related gaps that exist in the Rust community. Sadly, Rust is not unique in this regard. For instance, only 20% of 2023 respondents to this representation question consider themselves a member of a racial or ethnic minority and only 26% identify as a woman. We would like to see more equitable figures in these and other categories. In 2023, the Rust Foundation formed a diversity, equity, and inclusion subcommittee on its Board of Directors whose members are aware of these results and are actively discussing ways that the Foundation might be able to better support underrepresented groups in Rust and help make our ecosystem more globally inclusive. One of the central goals of the Rust Foundation board's subcommittee is to analyze information about our community to find out what gaps exist, so this information is a helpful place to start. This topic deserves much more depth than is possible here, but readers can expect more on the subject in the future.
Rust usage
In 2023, we saw a slight jump in the number of respondents that self-identify as a Rust user, from 91% in 2022 to 93% in 2023.
31% of those who did not identify as Rust users cited the perception of difficulty as the primary reason for not having used it, with 67% reporting that they simply haven’t had the chance to prioritize learning Rust yet, which was once again the most common reason.
Of the former Rust users who participated in the 2023 survey, 46% cited factors outside their control (a decrease of 1pp from 2022), 31% stopped using Rust due to preferring another language (an increase of 9pp from 2022), and 24% cited difficulty as the primary reason for giving up (a decrease of 6pp from 2022).
Rust expertise has generally increased amongst our respondents over the past year! 23% can write (only) simple programs in Rust (a decrease of 6pp from 2022), 28% can write production-ready code (an increase of 1pp), and 47% consider themselves productive using Rust — up from 42% in 2022. While the survey is just one tool to measure the changes in Rust expertise overall, these numbers are heartening as they represent knowledge growth for many Rustaceans returning to the survey year over year.
In terms of operating systems used by Rustaceans, the situation is very similar to the results from 2022, with Linux being the most popular choice of Rust users, followed by macOS and Windows, which have a very similar share of usage.
Rust programmers target a diverse set of platforms with their Rust programs, even though the most popular target by far is still a Linux machine. We can see a slight uptick in users targeting WebAssembly, embedded and mobile platforms, which speaks to the versatility of Rust.
We cannot, of course, forget the favourite topic of many programmers: which IDE (development environment) they use. Visual Studio Code still seems to be the most popular option, with RustRover (which was released last year) also gaining some traction.
You can also take a look at the linked wordcloud that summarizes open answers to this question (the "Other" category), to see what other editors are also popular.
Rust at Work
We were excited to see a continued upward year-over-year trend of Rust usage at work. 34% of 2023 survey respondents use Rust in the majority of their coding at work — an increase of 5pp from 2022. Of this group, 39% work for organizations that make non-trivial use of Rust.
Once again, the top reason employers of our survey respondents invested in Rust was the ability to build relatively correct and bug-free software at 86% — a 4pp increase from 2022 responses. The second most popular reason was Rust’s performance characteristics at 83%.
We were also pleased to see an increase in the number of people who reported that Rust helped their company achieve its goals at 79% — an increase of 7pp from 2022. 77% of respondents reported that their organization is likely to use Rust again in the future — an increase of 3pp from the previous year. Interestingly, we saw a decrease in the number of people who reported that using Rust has been challenging for their organization to use: 34% in 2023 and 39% in 2022. We also saw an increase of respondents reporting that Rust has been worth the cost of adoption: 64% in 2023 and 60% in 2022.
There are many factors playing into this, but the growing awareness around Rust has likely resulted in the proliferation of resources, allowing new teams using Rust to be better supported.
In terms of technology domains, it seems that Rust is especially popular for creating server backends, web and networking services and cloud technologies.
You can scroll the chart to the right to see more domains. Note that the Database implementation and Computer Games domains were not offered as closed answers in the 2022 survey (they were merely submitted as open answers), which explains the large jump.
It is exciting to see the continued growth of professional Rust usage and the confidence so many users feel in its performance, control, security and safety, enjoyability, and more!
Challenges
As always, one of the main goals of the State of Rust survey is to shed light on challenges, concerns, and priorities on Rustaceans’ minds over the past year.
Of those respondents who shared their main worries for the future of Rust (9,374), the majority were concerned about Rust becoming too complex at 43% — a 5pp increase from 2022. 42% of respondents were concerned about a low level of Rust usage in the tech industry. 32% of respondents in 2023 were most concerned about Rust developers and maintainers not being properly supported — a 6pp increase from 2022.
We saw a notable decrease in respondents who were not at all concerned about the future of Rust, 18% in 2023 and 30% in 2022.
Thank you to all participants for your candid feedback which will go a long way toward improving Rust for everyone.
Closed answers marked with N/A were not present in the previous (2022) version of the survey.
In terms of features that Rust users want to be implemented, stabilized or improved, the most desired improvements are in the areas of traits (trait aliases, associated type defaults, etc.), const execution (generic const expressions, const trait methods, etc.) and async (async closures, coroutines).
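To make the trait-alias item concrete: trait aliases are not yet stable, so Rust users today typically emulate them with an empty "umbrella" trait plus a blanket impl. A minimal sketch (the ReadWrite name is made up for illustration):

```rust
use std::io::{Cursor, Read, Write};

// Stable-Rust workaround for the missing trait-alias feature:
// an empty trait with a blanket impl, usable wherever
// `Read + Write` would otherwise be spelled out.
trait ReadWrite: Read + Write {}
impl<T: Read + Write> ReadWrite for T {}

// Accepts any reader/writer via the umbrella trait.
fn greet(stream: &mut dyn ReadWrite) -> std::io::Result<()> {
    stream.write_all(b"hello")
}

fn main() -> std::io::Result<()> {
    let mut buf = Cursor::new(Vec::new());
    greet(&mut buf)?;
    assert_eq!(buf.into_inner(), b"hello");
    Ok(())
}
```

With real trait aliases (`trait ReadWrite = Read + Write;`) the extra impl block would disappear, which is part of why the feature keeps showing up on wishlists.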
It is interesting that 20% of respondents answered that they wish Rust would slow down the development of new features, which likely goes hand in hand with the previously mentioned worry about Rust becoming too complex.
The areas of Rust that Rustaceans seem to struggle with the most are asynchronous Rust, the traits and generics system, and the borrow checker.
Respondents of the survey want the Rust maintainers to mainly prioritize fixing compiler bugs (68%), improving the runtime performance of Rust programs (57%) and also improving compile times (45%).
Same as in recent years, respondents noted that compilation time is one of the most important areas that should be improved. However, it is interesting to note that respondents also seem to consider runtime performance to be more important than compile times.
Looking ahead
Each year, the results of the State of Rust survey help reveal the areas that need improvement in many areas across the Rust Project and ecosystem, as well as the aspects that are working well for our community.
We are aware that the survey has contained some confusing questions, and we will try to improve upon that in next year's survey.
If you have any suggestions for the Rust Annual survey, please let us know!
We are immensely grateful to those who participated in the 2023 State of Rust Survey and facilitated its creation. While there are always challenges associated with developing and maintaining a programming language, this year we were pleased to see a high level of survey participation and candid feedback that will truly help us make Rust work better for everyone.
If you’d like to dig into more details, we recommend browsing through the full survey report.
Let me state upfront that this strategy of keeping WebKit synchronized with parts of web-platform-tests has worked quite well for me, but I’m not at all an expert in this area so you might want to take advice from someone else.
Once I've identified what tests will be impacted by my changes to WebKit, including what additional coverage might be needed, I create a branch in my local web-platform-tests checkout to make the necessary changes to increase coverage. I try to be a little careful here so it'll result in a nice pull request against web-platform-tests later. I’ve been a web-platform-tests contributor quite a while longer than I’ve been a WebKit contributor so perhaps it’s not surprising that my approach to test development starts with web-platform-tests.
I then run import-w3c-tests web-platform-tests/[testsDir] -s [wptParentDir] --clean-dest-dir on the WebKit side to ensure it has the latest tests, including any changes I made. And then I usually run them and revise, as needed.
This has worked surprisingly well for a number of changes I made to date and hasn’t let me down. Two things to be mindful of:
On macOS, don’t put development work, especially WebKit, inside ~/Documents. You might not have a good time.
[wptParentDir] above needs to contain a directory named web-platform-tests, not wpt. This is annoyingly different from the default you get when cloning web-platform-tests (the repository was renamed to wpt at some point). Perhaps something to address in import-w3c-tests.
Mozilla Monitor Plus has been launched! This is a new subscription product (available only in the US for now) that will search for and scrub your personal information from data brokers.
Mozilla Monitor Plus lets you take back control over your personal information.
The new clear history dialog has been enabled by default in Nightly! The dialog now has a more modern look, consolidated clearing options, and shows the amount of data you clear based on time range. Additionally, all the entry points for clearing data have been unified to point to the same dialog. Congratulations to :harshitsohaney for getting the new dialog to this point!
Much cleaner than before!
Nicolas added support for registered properties (@property / CSS.registerProperty) in the DevTools Rules view (bug, bug). The registered properties are displayed in var() autocomplete (bug), as well as in property name autocomplete (bug).
Check it out by setting the pref layout.css.properties-and-values.enabled to true
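For context, a registered custom property can be declared from CSS with @property (or from JavaScript via CSS.registerProperty). A minimal sketch, with a made-up property name:

```css
/* Registering --accent-color gives it a type, an initial value,
   and defined inheritance, which lets the browser animate it and
   lets DevTools surface its definition. */
@property --accent-color {
  syntax: "<color>";
  inherits: false;
  initial-value: rebeccapurple;
}

.button {
  background: var(--accent-color);
}
```

The JavaScript equivalent would be CSS.registerProperty({ name: "--accent-color", syntax: "&lt;color&gt;", inherits: false, initialValue: "rebeccapurple" }).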
In Firefox 124, a new runtime.onPerformanceWarning API event has been introduced (Bug 1861445) for WebExtensions. This event will be emitted when Firefox detects that a content script is impacting a web page's responsiveness. It is meant to allow WebExtension developers to detect when their content scripts are slowing down pages.
This new API has been previously proposed through the W3C WebExtensions Community Group and tracked by this ticket.
Thanks to Dave Vandyke for contributing this new WebExtensions API!
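A sketch of how an extension might consume the event; the listener registration follows the usual pattern for runtime events, but the exact shape of the details object here (severity, category, description, tabId) is an assumption for illustration, not a verified API surface:

```javascript
// Hypothetical formatter for a performance warning event payload.
// The field names on `details` are assumptions, not a verified API.
function describeWarning(details) {
  return `[${details.severity}] ${details.category}: ` +
    `${details.description} (tab ${details.tabId})`;
}

// In a real extension background script the handler would be
// registered like any other runtime event:
// browser.runtime.onPerformanceWarning.addListener((details) => {
//   console.warn(describeWarning(details));
// });

// Standalone demonstration with a mock payload:
console.log(describeWarning({
  severity: "high",
  category: "content_script",
  description: "Content script blocked the main thread",
  tabId: 42,
}));
```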
Friends of the Firefox team
Introductions/Shout-Outs
Welcome to Nathan Barrett (:nbarrett), who is joining the New Tab team!
Ongoing High Contrast Mode Project – We’re reevaluating all occurrences of @media (prefers-contrast) to make sure they’re targeting BOTH Windows HCM and macOS Increase Contrast. If the code within the query is only for Windows (as often is the case 😁) the query should be switched to @media (forced-colors).
New revisions that use these queries will be blocked for review by the HCM-reviewers review group in phabricator.
You can read more about using these queries in our new documentation, and play around with this live site Morgan made. If you have any questions, please reach out to Morgan or Anna 🙂 Thanks!
Add-ons / Web Extensions
Addon Manager & about:addons
Thanks to :arai for having converted the last 3 jsm files from the XPIProvider internals to ES modules – Bug 1836480.
WebExtensions and AOM/XPIProvider internals are now 100% migrated away from legacy jsm files! 🎉
Thanks to :masayuki for fixing a bug related to keyboard shortcuts using non-english keyboard layouts (fixed as part of Bug 1874727 and tracked for WebExtensions keyboard shortcuts in Bug 1782660).
Developer Tools
DevTools
Oliver Schramm reported and fixed the geometry editor when the page is zoomed in (bug)
Nicolas added a preference to control the behavior of the Enter key when editing properties in the Rules view (bug), and reverted the behavior to what we had in Firefox 121 (bug, blog post update)
Alex made the console up to 70% faster (perf alert) when it reaches the limit of messages we show (bug)
Alex improved the tracer, by allowing it to trace on next reload or navigation (bug)
This relies on a new option in the context menu:
Nicolas fixed an issue where ServiceWorker files were not displayed in the debugger when using a URL with a port (bug)
If you’re working with Service Workers, please flip devtools.debugger.features.windowless-service-workers so you can debug them directly in the page tab toolbox (not via about:debugging). We’re looking for feedback on this before we enable it by default
Bomsy made the debugger no longer use Babel to detect if watch expressions have syntax errors (bug). This is part of a bigger project where we’re trying to completely remove Babel, which can be pretty slow on very large files
Alex fixed a bug in the Debugger where watch expressions and variable tooltip could show wrong values (bug)
WebDriver BiDi
Contributors
James Hendry updated the “WebDriver:SwitchToFrame” command to make the “id” parameter mandatory and raise an exception if it is missing (bug)
Sasha added support for the contexts attribute of the script.addPreloadScript command (BiDi), which allows assigning a preload script to specific browsing contexts (bug)
Henrik fixed the “WebDriver:NewWindow” command to always fallback to opening new tabs on Android, even if a new “window” was requested (bug)
Henrik updated our vendored Puppeteer version to v21.10.0, which comes with updated tests and support for BiDi features. The ./mach puppeteer-test command was also updated to run in headful mode by default (bug)
Henrik improved browsingContext.close to allow closing the last tab of a window (bug)
Julian implemented several commands to handle user contexts (containers) in WebDriver BiDi:
browser.createUserContext allows creating a new user context (bug)
browser.getUserContexts allows listing all the available user contexts, including the default one and contexts created outside of WebDriver BiDi (bug)
browser.removeUserContext allows removing a user context and closing all the related tabs (bug)
Julian also added partial support for two network interception commands, network.continueRequest and network.continueResponse. At the moment they only allow resuming an intercepted request, but additional parameters will later allow modifying the request/response (bug)
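As with the other BiDi commands, these are JSON messages over the session websocket. A hedged sketch of creating a user context (the message id is an arbitrary made-up value):

```json
{
  "id": 30,
  "method": "browser.createUserContext",
  "params": {}
}
```

The response carries a userContext identifier for the new container, which later commands accept when targeting or removing that context.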
It was originally integrated because we wanted to use JavaScript features that were at stage 3, whereas ESLint only supports them once they reach stage 4. We can reintroduce the integration later if need be, but for now let’s enjoy the slightly faster linting.
Migration Improvements
Welcome to fchasen and kpatenio, who are going to be joining us on making device migration smoother for our users!
The team has been mostly prototyping, consulting and building up their expertise on the various data stored in user profile directories, and how it can be safely copied during runtime.
Anna has fixed various accessibility issues around the urlbar, including providing interactive roles to the search bar button (1871980) and fixing Tab behaviour (1874277, 1875654), along with various test fixes
Trending suggestions are now enabled on Bing (1872409) for Nightly users.
Mandy and Mark have done a lot of work towards search-config-v2, which allows us to share search configuration across desktop and mobile (tracking bug 1833829)
At Mozilla, we know we can’t create a better future alone, which is why each year we highlight the work of 25 digital leaders using technology to amplify voices, effect change, and build new technologies globally through our Rise 25 Awards. These storytellers, innovators, activists, advocates, builders and artists are helping make the internet more diverse, ethical, responsible and inclusive.
This week, we chatted with winner Chris Smalls, an activist using technology to effect change and advocate for a better world. He’s the founder and president of the Amazon Labor Union in Staten Island that advocates for workers’ rights and conditions. In 2020, he was fired by Amazon after leading protests against its working conditions during the COVID-19 pandemic. We talk with Smalls about the early days of the union fight, his work in the community and how the digital world has impacted organizing efforts.
When people are fighting against Amazon, there are a lot of different fights — wages, time off, even remote work now. What was the main thing that you wanted to fight for during that time, when you began to fight for the union?
The pandemic, for sure. It was COVID-19. That initially was the reason why I spoke up. You know, after working there for a number of years — five years — and realizing that we weren’t prepared for the virus on a local level, it was a very alarming situation to be in, and this was before the vaccine, before mask testing, before we even really understood what the virus was doing. We knew it was wiping people out, so my fear was that it would spread like wildfire within the warehouse and within the whole Amazon network. So initially, I was just trying to go through the proper channels. And one thing led to another, you know, when I wasn’t met with an answer that I felt was sustainable for not just myself, but for everybody, that’s when I started to pretty much rebel. I try to still do that in a respectable manner, but unfortunately, the company decided to take an aggressive route by just quarantining myself out of the thousands of people, and I felt that wasn’t right at all. So initially it was over COVID-19, but as things unfolded, the demands changed over time. And it wasn’t until 2021 — the end of 2021, spring — was when we decided that we were going to form this independent Amazon labor union.
How did you get people on board with this? How did you convince people to buy into it?
I used Amazon’s principles — really, to be honest with you — earning the trust, building the relationships. One of my favorite principles out of the 14 was, have backbone, disagree and commit, so that’s exactly what I did. I disagreed with the way they were responding. I had a backbone to stand up to it, and I committed myself to the movement and committed myself to building relationships and earning the trust of the workers. So, over the course of 11 months, you know, organizing outside across the street, meeting people, having conversations, having barbecues, giving out free food — and yes, we did give out free weed — we did all these things, little things that mattered the most. Things that Amazon overlooked all the time – the little things. How do people get to work? How do they eat lunch every day? How do they get a ride to and from work in a snowstorm? We were there for them during those times, and we did those little bit of things with a little bit of money that we had from donations, and that’s ultimately how we defeated them, which is bringing people together from all different backgrounds.
<figcaption class="wp-element-caption">Chris Smalls at Mozilla’s Rise25 award ceremony in October 2023.</figcaption>
When you reflect on your time at Amazon, what do you remember most about that period in your life in terms of the work that you did there?
What I remember most is really just being allowed to be exactly who people see today. When I worked there, I was so well respected because I was a good employee, that I was allowed to pretty much create my culture within my own little department no matter what building I was in. I opened up 3 buildings for Amazon — one in New Jersey, Connecticut and Staten Island — and for me to go to each of these buildings and be able to have the respect of upper management and have the morale of the people underneath me to make them productive, and my team go number one in our department. I think people respected the fact that I was always siding with the workers, no matter what position I was in, and I was a supervisor. To have the morale that I had, I had to understand where people came from, and I understood where they came from because I was them at one point in time. I was an entry level worker on the line, picking and packing boxes just like the rest of them. So for me, I never forgot where I came from, and by having those types of skill sets, along with learning those principles, that’s what made me the best organizer I can possibly be.
The Daily Show is definitely up there, that was a cool one. The Breakfast Club, that was a cool one for me. Desus and Mero was a cool one for me. And of course, the White House. I’m not fond of the President, but to go to the White House as a young black man from where I came from is unheard of, so, that’s always going to be a highlight of my life, regardless of who the President is.
Where do you draw inspiration from to continue the work that you do today?
I draw definitely from the youth, the younger generation. I try to stay young and hip — I’m still 35 years old and I have kids already, I have kids about to be in high school. My kids are 11 going on 12, and they’re watching me on YouTube, especially on TikTok. I’m in their classroom. They’re talking to their friends about their dad. So for me, my inspiration is being a good role model, being a good father and understanding that the youth is paying attention now, and because of my uniqueness and our style, our swag, the way my union is so different, I want to continue to build off of that. I want to make sure that we’re making unionizing cool because before it was boring, you know, to talk about it. But now we’re trying to change the culture of what labor looks like.
What do you think is the biggest challenge that we face right now in the world, on and offline? How do you think we combat it?
Well, the biggest challenge is the opposition. The system that’s been in place is still operating against us, and they got a lot more money and power than we do. The reason why they continue to get away with the things that they do is because we’re still divided.
I’m a fast learner, and in my few years of organizing I’ve seen that the labor movement itself is in a small bubble. If you talk about social injustice, it’s in a small bubble. You talk about women’s rights, it’s in a small bubble. Climate is in a different bubble. We’re not really, truly connected until we see something like a George Floyd where everybody’s out in the streets, and that’s the problem with America. We all go out in the streets when we see things like George Floyd. But then, after a while, we forget about it, and then we go back to work. And then it’s like, “Oh well, I can’t, because of my own individual problems that I have.” And it’s not everybody’s fault; it’s the system that we live in that is designed to keep us distracted and not together. So I think the biggest issue that we’ve got to overcome is this: how do we connect all these different movements? Because at the end of the day, we’re all a part of the working class, no matter what movement. And if you’re in the labor movement, everybody here is a worker; no matter what job or industry you work in, you’re a worker. My goal one day is to connect trade unions to all the different movements and make this a class struggle. This is a class struggle. It’s 99.9% of us versus the one percent class, the billionaires. And I think if we all realize that we’re all poor compared to these billionaires, the ones who make the decisions for the rest of us and control these corporations, then we’ll be way better off than we are as a country.
What gives you hope about the future of our world to reach a place where we’re all much better?
What gives me hope now is that I’m walking into middle schools and these 10-year-olds are telling me that Jeff Bezos is a bad man. Back in the day I didn’t go to class, and there was no Chris Smalls walking into a classroom on Career Day. There were always police officers, firefighters, nurses and doctors. But there was never a young, Black, cool-looking, urban brother to come in and say, “Yo, you could be a trade union leader and still be as cool as a rapper.” It was none of that. So for me, what gives me hope is that the young generation — it’s a gift and a curse that they have access to iPads, because they get access to everything — but they’re much more conscious than we were. They’re much smarter and more advanced, and I know that could be a little scary, because they do have access to a lot of things at a younger age, but these kids are so smart now that they’re able to make decisions at a younger age. The younger generation is paying attention to the major issues of the world right now. I think we’re in a time that we’ve never seen before, and what gives me hope is that the younger generation is going to lead the way: instead of us passing the torch, they’re going to lead it.
A new year, a new progress report! Learn what we did in January on our journey to transform K-9 Mail into Thunderbird for Android. If you’re new here or you forgot where we left off last year, check out the previous progress report.
Account setup
In January most of our work went into polishing the user interface and user experience of the new and improved account setup. However, there was still one feature missing that we really wanted to get in there: the ability to configure special folders.
Special folders
K-9 Mail supports the following special folders:
Archive: When configured, an Archive action will be available that moves a message to the designated archive folder.
Drafts: When configured, the Save as draft action will be available in the compose screen.
Sent: Messages that have been successfully submitted to the outgoing server will be uploaded to this folder. If this special folder is set to None, the app won’t save a copy of sent messages. Note: There’s also the setting Upload sent messages that can be disabled to prevent sent messages from being uploaded, e.g. if your email provider automatically saves a copy of outgoing messages.
Spam: When configured, a Spam action will be available that moves a message to the designated spam folder. (Please note that K-9 Mail currently does not include spam detection. So besides moving the message, this doesn’t do anything on its own. However, moving a message to and from the spam folder often trains the server-side spam filter available at many email providers.)
Trash: When configured, deleting a message in the app will move it to the designated trash folder. If the special folder is set to None, emails are deleted permanently right away.
In the distant past, K-9 Mail was simply using common names for these folders and created them on the server if they didn’t exist yet. But some email clients were using different names. And so a user could end up with e.g. multiple folders for sent messages. Of course there was an option to manually change the special folder assignment. But usually people only noticed when it was too late and the new folder already contained a couple of messages. Manually cleaning this up and making sure all email clients are configured to use the same folders is not fun.
To solve this problem, RFC 6154 introduced the SPECIAL-USE IMAP extension. That’s a mechanism to save this special folder mapping on an IMAP server. Having this information on the server means all email clients can simply fetch that mapping and then there should be no disagreement on e.g. which folder is used for sent messages.
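To make the mechanism concrete, here is a small Python sketch of how a client can recover the special-folder mapping from RFC 6154 SPECIAL-USE attributes. The `LIST` response lines are hard-coded example data standing in for a real server connection, and the folder names are made up for illustration:

```python
import re

# Example server responses to `LIST (SPECIAL-USE) "" "*"` (RFC 6154);
# a real client would read these from the IMAP connection.
RESPONSES = [
    rb'(\HasNoChildren \Sent) "/" "Sent Mail"',
    rb'(\HasNoChildren \Trash) "/" "Deleted"',
    rb'(\HasNoChildren \Junk) "/" "Spam"',
]

# Special-use attributes defined by RFC 6154.
SPECIAL_USE = {rb'\Sent', rb'\Drafts', rb'\Trash', rb'\Junk', rb'\Archive'}

def special_folders(lines):
    """Map RFC 6154 special-use attributes to folder names."""
    mapping = {}
    for line in lines:
        m = re.match(rb'\(([^)]*)\) "[^"]*" "([^"]*)"', line)
        if not m:
            continue
        for attr in m.group(1).split():
            if attr in SPECIAL_USE:
                mapping[attr.decode()] = m.group(2).decode()
    return mapping

print(special_folders(RESPONSES))
```

With such a mapping in hand, every client connecting to the same account agrees on, say, which folder holds sent messages, without creating its own.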
Unfortunately, there are still some email providers that don’t support this extension. There are also cases where the server supports the feature, but none of the special roles are assigned to any folder. When K-9 Mail added support for the SPECIAL-USE extension, it simply used the data from the server, even if that meant not using any special folders. Unfortunately, that could be even worse than creating new folders, because you might end up e.g. not having a copy of sent messages.
So now the app is displaying a screen to ask the user to assign special folders when setting up an account.
This screen is skipped if the app receives a full mapping from the server, i.e. all special roles are assigned to a folder. Of course you’ll still be able to change the special folder assignment after the account has been created.
Splitting account options
We split what used to be the account options screen into two different screens: display options and sync options.
With the special folders screen done, we’re now feature complete. So we took a step back to look at the whole experience of setting up an account. And we’ve found several areas where we could improve the app.
Here’s an (incomplete) list of things we’ve changed:
We reduced the font weight of the header text to be less distracting.
In some parts of the flow there’s enough content on the screen that a user has to scroll. The area between the header and the navigation buttons at the bottom can be very small depending on the device size. So we included the header in the scrollable area to improve the experience on devices with a small screen.
There are a couple of transient screens, e.g. when checking server settings. Previously the app first displayed a progress indicator when checking server settings, then a success message for 2 seconds, but allowed the user to skip this screen by pressing the Next button. This turned out to be annoying and confusing. Annoying because the user has to wait longer than necessary; and confusing because it looked like user input was required, but by the time the user realizes that, the app will have most likely switched to the next screen automatically. We updated these transient screens to always show a progress indicator and hide the Next button, so users know something is happening and there’s currently nothing for them to do.
We also fixed a couple of smaller issues, like the inbox not being synchronized during setup when an account was configured for manual synchronization.
Fixing bugs
Some of the more interesting bugs we fixed in January:
When rotating the screen while selecting a notification sound in settings, some of the notification settings were accidentally disabled (#7468).
When importing settings a preview lines value of 0 was ignored and the default of 2 was used instead (#7493).
When viewing a message and long-pressing an image that is also a link, only menu items relevant for images were displayed, but not ones relevant for links (#7457).
Opening an attachment from K-9 Mail’s message view in an external app and then sharing the content to K-9 Mail opened the compose screen for a new message but didn’t add an attachment (#7557).
Community Contributions
new-sashok724 fixed a bug that prevented the use of IP addresses for incoming or outgoing servers (#7483).
Most of you still using a Power Mac as a daily or occasional driver are probably either running Linux, Tiger or Leopard, and a minority on OS 9. Despite many distributions no longer shipping 32-bit PPC installs, Gentoo Linux still has specific support along with a few others, as does Adélie Linux if you like musl for breakfast. Still, for server duties, where I come from, you bring on the BSDs. In this blog you've already met my long-suffering NetBSD Macintosh IIci which is still trucking to this day and more recently my also-NetBSD G4 Mac mini (which later needed, effectively, a logic board swap), but I also have a Quadra 605 with a full '040 running NetBSD I use for utility tasks and at one time I ran an intermediate incarnation of gopher.floodgap.com on a Power Macintosh 7300 with a Sonnet G3 running NetBSD too. I stuffed that system full with a gig of RAM and a SATA card and it did very well until I got the current POWER6 server in 2010.
NetBSD has the widest support, continuing to run on most 68Ks and PCI Power Macs to this day (leaving out only the NuBus Power Macs which aren't really supported by much of anything anymore, sadly). However, OpenBSD works fine on New World Macs, and FreeBSD has a very mature 32-bit PowerPC port — or, should I say, soon will have had one, since starting in FreeBSD 15 (13.x is the current release), ARMv6, 32-bit Intel and 32-bit PowerPC support will likely be removed. No new 32-bit support will be added, including for RISC-V.
Even though I have a large number of NetBSD systems, I still like FreeBSD, and one of my remote "island" systems runs it. The differences between BSDs are more subtle than with Linux distributions, but you can still enjoy the different flavours that result, and I even ported a little FreeBSD code to the NetBSD kernel so I could support automatic restarts after a power failure on the G4 mini. The fact that the userland and kernel are better matched together probably makes the BSDs better desktop clients, too, especially since on big-endian we're already used to some packages just not building right, so we don't lose a whole lot by running it. (Usually those are the same packages that wouldn't build on anything but Linux anyway.)
This isn't the end for the G5, which should still be able to run the 64-bit version of FreeBSD, and OpenBSD hasn't voiced any firm plans to cut 32-bit loose. However, NetBSD supports the widest range of Macs, including Macs far older than any Power Mac, and frankly if you want to use a Un*x on a Power Mac and have reasonable confidence it will still be running on it for years to come, it's undeniably the one with the best track record.
The topic for this month’s Thunderbird Community Office Hours takes a short break from the core of Thunderbird and ventures into the world of extensions we call Add-ons. These allow our users to add features and options beyond the customization already available in Thunderbird by default.
We want it to be easy to make Thunderbird yours, and so does our community. The Thunderbird Add-on page shows the power of community-driven extensions. There are Add-ons for everything, from themes to integrations, that add even more customization to Thunderbird.
Our guest for this month’s Thunderbird Community Office Hours is John Bieling, who is the person responsible for Thunderbird’s add-on component. This includes the WebExtension APIs, add-on documentation, as well as community support. He hosts a frequent open call about Add-on development and is welcoming to any developers seeking help. Come join us to learn about Add-on development and meet a key developer in the space.
Catch Up On Last Month’s Thunderbird Community Office Hours
Before you join us on February 22 at 18:00 UTC, watch last month’s office hours with UX Engineer Elizabeth Mitchell. We had some great discussion around the Message Context Menu and testing beta and daily images. Watch the video and read more about our guest at last month’s blog post.
<figcaption class="wp-element-caption">Watch January’s Office Hours session, all about the message context menu</figcaption>
Join Us On Zoom
(Yes, we’re still on Zoom for now, but a Jitsi server for future office hours is in the works!)
The Thunderbird Project enjoyed a fantastic 2023. From my point of view – as someone who engages with both the community and our team on a daily basis – the past year brought a renewed sense of purpose, sustainability, and excitement to Thunderbird. Let’s talk about a few of the awesome milestones Thunderbird achieved, but let’s also discuss where we stumbled and what lessons we learned along the way.
Our 2023 Milestones
The biggest milestone of 2023 was Thunderbird 115 “Supernova.” This release marked the first step towards a more flexible, reliable, and customizable Thunderbird that will accommodate different needs and workflows. Work has long been underway to modernize huge amounts of old code, with the aim of enabling Thunderbird to deliver new features even faster. The “Supernova” release represented the first fruits of those efforts, and there’s a lot more in the pipeline!
Alongside Supernova came a brand new Thunderbird logo to signal the revitalization of the project. We finally (even a bit reluctantly) said goodbye to our beloved “wig on an envelope” and ushered in a new era of Thunderbird with a refreshed, redesigned logo. But it was important to honor our roots, which is why we hired Jon Hicks – the designer of the original Firefox and Thunderbird logos – to help us bring it to life. (Now that you’ve all been living with it for the last several months, has it grown on you? Let us know in the comments of this post!)
One 2023 milestone that deserves more attention is that we hired a dedicated User Support Specialist! Roland Tanglao has been working enthusiastically to remove “documentation debt” and update the hundreds of Thunderbird support articles at support.mozilla.org (which you’ll see us refer to internally as “SUMO”). Beyond that, he keeps a watchful eye on our Matrix community support channel for emerging issues, and is in the forums answering as many help questions as humanly possible, alongside our amazing support volunteers. In a nutshell, Roland is doing everything he can to improve the experience of asking for and receiving support, modernize existing documentation, and create new guides and articles that make using Thunderbird easier.
These are some – not all – of our accomplishments from last year. But it’s time to shift focus to where we stumbled, and how we’ll do better.
The Lessons We Learned In 2023
In 2023, we failed to finish some of the great features we wanted to bring to Thunderbird, including Sync and Account Hub (both of which, however, are still in development). We also missed our target release window for Thunderbird on Android, after deciding it was worth the extra development time to add the kind of functionality and flexibility you expect from Thunderbird software.
Speaking of functionality you expect, we hear you loud and clear: you want Exchange support in Thunderbird. We’ve already done some exploratory work, and have enabled the usage of Rust in Thunderbird. This is a complex topic, but the short version is that this opens the doors for us to start implementing native support for the Exchange protocol. It’s officially on our roadmap!
We also believe our communication with you has fallen short of where it needs to be. There are times when we get so excited about things we’re working on that it seems like marketing hype. In other situations, we have over-promised and under-delivered because these projects haven’t been extensively scoped out.
We’re beginning to solve the latter issue with the recent hiring of Kelly McSweeney, Senior Technical PM. She joined our team late last year and brings 20 years of valuable experience to Thunderbird. In a nutshell, Kelly is building processes and tools to accurately gauge how long development time will realistically take, from extensive projects to the tiniest tasks. Basically, she’s getting us very organized and making things run much more efficiently! This not only means smoother operations across the organization, but also clearer communication with you going forward.
And communication is our biggest area of opportunity right now, specifically with our global Thunderbird community. We haven’t been as transparent as an open source project should be, nor have we discussed our future plans frequently enough. We’ve had several meetings about this over the past few weeks, and we’re taking immediate steps to do better.
To begin with, you’ll start seeing monthly Developer Digests like this one from Alex, aimed at giving you a closer look at the work currently being planned. We’re also increasing our activity on the Thunderbird mailing lists, where you can give us direct feedback about future improvements and features.
In 2024 you can also look forward to monthly community Office Hours sessions. This is where you can get some face time (or just voice time) with our team, and watch presentations about upcoming features and improvements by the developer(s) working on them.
One last thing: In 2023, Thunderbird’s Marketing & Communications team consisted of myself and Wayne Mery. This year Wayne and I are fortunate to be working alongside new team members Heather Ellsworth, Monica Ayhens-Madon, and Natalia Ivanova. Together, we’re going to work diligently to create more tutorials on the blog, more video guides, and more content to help you get the most out of Thunderbird – with a focus on productivity.
How To Stay Updated
Thank you for being on this journey with us! If you want to get more involved and stay in touch, here are the best places to keep up with what’s happening at Thunderbird:
We will be more active right here on this blog, so come back once or twice per month to see what’s new.
If you enjoy the technical bits, want to help test Thunderbird, or you’re part of our contributor community, these mailing lists at Topicbox are ideal.
Follow us on Mastodon or X/Twitter for more frequent – and fun – updates!
Oh, you don’t want any poison in your porridge.
But how about in your computer’s memory?
Papa Bear - too much poison
Papa Bear likes his chair hard, his porridge hot and his browser written in
a memory safe language that helps engineers avoid memory bugs like
buffer overruns and use after frees.
But even Papa Bear has to compromise: part of Firefox is written in a
memory safe language and the rest is written in C++. When using
C++ there are a variety of defenses programmers can employ to help
catch memory errors. One of those is called memory poisoning.
mozjemalloc, the memory allocator built into Firefox, will poison memory
by calling memset(aPtr, 0xE5, size); before freeing it.
Any memory containing the pattern 0xE5E5E5E5 is therefore very likely to be
memory that’s already been freed.
This has two and a half benefits:
If some code were to free and then dereference some memory
(a use-after-free bug), it would most likely cause the browser to crash,
which is much better than a potentially exploitable bug allowing
Goldilocks to steal Papa Bear’s banking credentials!
The other benefit is that when Firefox does crash due to such a
use-after-free, the presence of this pattern in the crash report allows
engineers to see the type of error that occurred and hopefully fix the
mistake.
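Here is a toy sketch of the idea, not mozjemalloc’s actual code: a tiny bump allocator over a bytearray (offsets stand in for pointers) whose free() overwrites the cell with 0xE5, so a later use-after-free reads the telltale pattern instead of stale data:

```python
POISON = 0xE5

class ToyHeap:
    """A tiny bump allocator; purely illustrative, never reuses memory."""

    def __init__(self, size):
        self.mem = bytearray(size)
        self.top = 0

    def alloc(self, size):
        offset = self.top
        self.top += size
        return offset

    def free(self, offset, size):
        # Poison before releasing, like memset(aPtr, 0xE5, size);
        self.mem[offset:offset + size] = bytes([POISON]) * size

heap = ToyHeap(64)
p = heap.alloc(4)
heap.mem[p:p + 4] = b"\xAB\xCD\x12\x34"   # live data
heap.free(p, 4)
# A "use after free" now reads the recognizable pattern:
print(heap.mem[p:p + 4].hex())  # e5e5e5e5
```

In real C++ the dereference of such a poisoned pointer typically crashes, and the 0xE5E5E5E5 value shows up in the crash report.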
You probably figured out by now that I’m going to persist with this
metaphor.
Mama Bear likes her chair soft, her porridge cold (and congealed (yuck)),
and her browser fast.
But how much faster is Mama Bear’s experience?
This is the question that was raised recently when
Randell Jesup was benchmarking various memory allocators in Firefox.
He noted that while mozjemalloc performs poisoning, many of the other
allocators do not and to compare the performance of the allocators more
fairly they should either all perform poisoning or none of them should.
And so Randell noted that, depending on the test,
Firefox could be
between 0.5% and 4%
faster
with poisoning disabled.
Here are some results I collected. The "sp2" (Speedometer 2) and "sp3"
(Speedometer 3) tests are browser benchmarks - larger numbers indicate
better performance.
The amazon and instagram tests are pageload tests measured in seconds with
the ContentfulSpeedIndex metric - smaller numbers indicate better
performance.
Test            sp2 (score)     sp3 (score)     amazon (sec)    instagram (sec)
Poison          178.84 ± 0.84   13.32 ± 1.03    243.20 ± 1.96   419.43 ± 1.04
No poisoning    179.42 ± 0.48   13.39 ± 0.31    237.55 ± 2.60   414.50 ± 0.80
The speedometer figures are pretty close, and these are the best pageload
figures (the others showed very little difference, but nothing regressed;
yes, I’m aware I’ve cherry-picked data).
This means that if it weren’t for the lack of security and debuggability,
Mama Bear would have the right approach.
Baby Bear
Baby Bear loves a compromise, they want their computer to be safe from
Goldilocks' hacking attempts but also love performance improvements.
One compromise may be to probabilistically poison memory some of the time,
e.g. with a roughly 5% chance of poisoning.
That’s more complex and involves a memory write anyway to keep the "time
until poison" counter updated.
We didn’t investigate it.
But it’s worth noting that it would be similar in spirit to the
Probabilistic Heap Checker (PHC)
that’s
rolling out
in Firefox or the similar GWP-ASan
capability in Chrome.
Instead we tested "what if we poison only the first cache line of a memory
cell".
Andrew McCreight and Olli Pettay
pointed out that
Element, a common DOM structure, is 128 bytes long and poisoning it is
useful to detect memory errors in DOM code, as a lot of DOM code will
involve Element.
We tested poisoning the first 64, 128 and 256 bytes of each structure.
We assume that managing the cache and writing cache lines back to RAM is
going to be the dominant cost. Therefore we round up our writes to the next
cache line boundary.
For example, consider a computer with 64-byte cache lines and a 96-byte
object allocated so that its first 32 bytes are in one cache line and the
next 64 bytes are in another. Our 64-byte write would cover parts of two
different cache lines. In this case we poison all 96 bytes, because
doing so writes to the same number of cache lines as the original 64-byte
write.
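The rounding rule can be expressed in a few lines. This is a simplified model, not Firefox’s actual code; the function name and the 64-byte constant are illustrative:

```python
CACHE_LINE = 64  # bytes; a common size on current hardware

def poison_length(addr, want, obj_size):
    """Bytes to poison: extend the requested write to the next
    cache-line boundary, capped at the object's size, since the cost
    is dominated by cache lines touched rather than bytes written."""
    end = addr + min(want, obj_size)
    rounded_end = (end + CACHE_LINE - 1) // CACHE_LINE * CACHE_LINE
    return min(rounded_end - addr, obj_size)

# The 96-byte object from the example above: it starts 32 bytes into
# a cache line, so a 64-byte poison touches two lines anyway, and
# poisoning all 96 bytes costs the same number of cache-line writes.
print(poison_length(0x1020, 64, 96))  # 96

# A cache-line-aligned object: 64 bytes already end on a boundary.
print(poison_length(0x1000, 64, 96))  # 64
```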
Let’s add these options to our table of results.
Test            sp2 (score)     sp3 (score)     amazon (sec)    instagram (sec)
Poison          178.84 ± 0.84   13.32 ± 1.03    243.20 ± 1.96   419.43 ± 1.04
Poison 256      179.50 ± 0.55   13.35 ± 0.33    240.47 ± 2.82   415.28 ± 1.30
Poison 128      179.19 ± 0.43   13.35 ± 0.59    241.62 ± 3.05   414.95 ± 1.15
Poison 64       179.09 ± 0.87   13.33 ± 0.83    242.13 ± 2.56   414.11 ± 0.91
No poisoning    179.42 ± 0.48   13.39 ± 0.31    237.55 ± 2.60   414.50 ± 0.80
As above, sp2 and sp3 are scores where bigger numbers are better, while
amazon and instagram are pageload tests where smaller numbers are better.
As expected the partial poisoning results fall between full and no
poisoning. But what’s a little bit surprising is that in some tests (sp2 and
amazon) poisoning a larger amount of memory made things faster.
This could be because the memset() routine or the hardware itself is able
to optimise larger writes more effectively.
That said it’s important to acknowledge that the standard deviation is
fairly high and doing the right statistical analysis is beyond this blog post.
Just right
Since poisoning more memory isn’t much slower, and in some cases is faster
than poisoning a little, we might as well poison 256 bytes. That comfortably
covers the Element object and most other structures, and for the larger ones
it likely covers many of their most-often accessed fields.
We’re confident that this is enough to help us catch many of the errors that
poisoning can catch, while still performing well, especially on the pageload
tests, where it is close to the performance available with poisoning
disabled.
We think that Baby Bear would agree: it is Just Right.
It gets better
With the Probabilistic Heap Checker (PHC) rolling out soon, we will have an
even greater ability to capture information related to memory errors.
I’ll be writing about this in the future.
Why is Papa Bear the safe one and Mama Bear the fast one?
In some ways it feels more natural to lean into (negative) gender
stereotypes where Papa Bear wants things fast and Mama Bear is the
cautious one. I considered this; however, it’s easier to explain poisoning
before explaining turning poisoning off, and the nursery tale describes
Papa Bear’s preferences first, so that’s the order I introduced them here.
Flipping the script on gender stereotypes was accidental.
Dawnmaker, the game I have been working on at Arpentor Studio for more than two years, now has a Steam page and a trailer. Have a look:
Today marks a significant moment in our journey, and I am thrilled to share some important news with you. After much thoughtful consideration, I have decided to transition from the role of CEO of Mozilla Corporation back to the position of Mozilla Corporation Executive Chairwoman, a role I held with great passion for many years.
During my 25 years at Mozilla, I’ve worn many hats, and this move is driven by a desire to streamline our focus and leadership for the challenges ahead. I’ve been leading the Mozilla business through a transformative period, while also overseeing Mozilla’s broader mission. It’s become evident that both endeavors need dedicated full-time leadership.
Enter Laura Chambers, a dynamic board member who will step into the CEO role for the remainder of this year. Laura brings a wealth of experience, having been an active and impactful member of the Mozilla board for three years. With an impressive background leading product organizations at Airbnb, PayPal, and eBay, and most recently as CEO of Willow Innovations, Laura is well-equipped to guide Mozilla through this transitional period.
Her focus will be on delivering successful products that advance our mission and building platforms that accelerate momentum. Laura and I will be working closely together throughout February to ensure a seamless transition, and in my role as Exec Chair I’ll continue to provide advice and engage in areas that touch on our unique history and Mozilla characteristics.
Laura’s focus will be on Mozilla Corporation with two key goals:
1. Vision and Strategy for the Future: Refining the company’s vision and aligning the corporate and product strategy behind it. This will be grounded in our mission and unique strengths and shaped by our point of view on technology’s future and our role in it.
2. Outstanding Execution: Focus, Processes, Capabilities: Doubling down on our core products, like Firefox, and building out our capabilities and innovation pipeline to bring new compelling products to market.
While Laura takes on the reins as CEO of Mozilla Corporation, I will return to supporting the CEO and leadership team as I have done previously as Exec Chair. In addition, I will expand my work in two critical areas:
1. More consistently representing Mozilla in the public – With a focus on policy, open source, and community — through speaking and direct engagement with the community.
2. Representing Mozilla as a unified entity – bigger than the sum of our parts — as we continue to strengthen and refine how all the entities work together to advance our policy and community goals with greater urgency and speed.
We’re at a critical juncture where public trust in institutions, governments, and the fabric of the internet has reached unprecedented lows. There’s a tectonic shift underway as everyone battles to own the future of AI. It is Mozilla’s opportunity and imperative to forge a better future. I’m excited about Laura’s day-to-day involvement and the chance for Mozilla to achieve more. Our power lies in the collective effort of people contributing to something better and I’m eager for Mozilla to meet the needs of this era more fully.
Thank you to everyone who participates in Mozilla, supports us, cheers us on, and works towards similar goals. Your dedication is the driving force behind Mozilla’s impact and success. Here’s to a future filled with innovation, collaboration, and continued success!
The Rust team is happy to announce a new version of Rust, 1.76.0. Rust is a programming language empowering everyone to build reliable and efficient software.
If you have a previous version of Rust installed via rustup, you can get 1.76.0 with:
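Using rustup’s standard update command:

```shell
rustup update stable
```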
If you'd like to help us out by testing future releases, you might consider updating locally to use the beta channel (rustup default beta) or the nightly channel (rustup default nightly). Please report any bugs you might come across!
What's in 1.76.0 stable
This release is relatively minor, but as always, even incremental improvements lead to a greater whole. A few of those changes are highlighted in this post, and others may yet fill more niche needs.
ABI compatibility updates
A new ABI Compatibility section in the function pointer documentation describes what it means for function signatures to be ABI-compatible. A large part of that is the compatibility of argument types and return types, with a list of those that are currently considered compatible in Rust. For the most part, this documentation is not adding any new guarantees, only describing the existing state of compatibility.
The one new addition is that it is now guaranteed that char and u32 are ABI compatible. They have always had the same size and alignment, but now they are considered equivalent even in function call ABI, consistent with the documentation above.
Type names from references
For debugging purposes, any::type_name::<T>() has been available since Rust 1.38 to return a string description of the type T, but that requires an explicit type parameter. It is not always easy to specify that type, especially for unnameable types like closures or for opaque return types. The new any::type_name_of_val(&T) offers a way to get a descriptive name from any reference to a type.
fn get_iter() -> impl Iterator<Item = i32> {
[1, 2, 3].into_iter()
}
fn main() {
let iter = get_iter();
let iter_name = std::any::type_name_of_val(&iter);
let sum: i32 = iter.sum();
println!("The sum of the `{iter_name}` is {sum}.");
}
This currently prints:
The sum of the `core::array::iter::IntoIter<i32, 3>` is 6.
Quite often, an imperfect translation is better than no translation. So why even publish untranslated content when high-quality machine translation systems are fast and affordable? Why not immediately machine-translate content and progressively ship enhancements as they are submitted by human translators?
At Mozilla, we call this process pretranslation. We began implementing it in Pontoon before COVID-19 hit, thanks to Vishal, who landed the first patches. Then we hit some headwinds and didn’t make much progress until 2022, when the project received a significant development boost; we finally launched it to a general audience in September 2023.
So far, 20 of our localization teams (locales) have opted to use pretranslation across 15 different localization projects. Over 20,000 pretranslations have been submitted and none of the teams have opted out of using it. These efforts have resulted in a higher translation completion rate, which was one of our main goals.
In this article, we’ll take a look at how we developed pretranslation in Pontoon. Let’s start by exploring how it actually works.
How does pretranslation work?
Pretranslation is enabled upon a team’s request (it’s off by default). When a new string is added to a project, it gets automatically pretranslated using a 100% match from translation memory (TM), which also includes translations of glossary entries. If a perfect match doesn’t exist, a locale-specific machine translation (MT) engine is used, trained on the locale’s translation memory.
After pretranslations are retrieved and saved in Pontoon, they get synced to our primary localization storage (usually a GitHub repository) and are hence immediately made available for shipping, unless they fail our quality checks. In that case, they don’t propagate to repositories until errors or warnings are fixed during the review process.
Until reviewed, pretranslations are visually distinguishable from user-submitted suggestions and translations. This makes post-editing much easier and more efficient. Another key factor that influences pretranslation review time is, of course, the quality of pretranslations. So let’s see how we picked our machine translation provider.
Choosing a machine translation engine
We selected the machine translation provider based on two primary factors: quality of translations and the number of supported locales. To make translations match the required terminology and style as much as possible, we were also looking for the ability to fine-tune the MT engine by training it on our translation data.
In March 2022, we compared Bergamot, Google’s Cloud Translation API (generic), and Google’s AutoML Translation (with custom models). Using these services we translated a collection of 1,000 strings into 5 locales (it, de, es-ES, ru, pt-BR), and used automated scores (BLEU, chrF++) as well as manual evaluation to compare them with the actual translations.
Performance of tested MT engines for Italian (it).
Google’s AutoML Translation outperformed the other two candidates in virtually all tested scenarios and metrics, so it became the clear choice. It supports over 60 locales. Google’s Generic Translation API supports twice as many, but we currently don’t plan to use it for pretranslation in locales not supported by Google’s AutoML Translation.
Making machine translation actually work
Currently, around 50% of pretranslations generated by Google’s AutoML Translation get approved without any changes. For some locales, the rate is around 70%. Keep in mind however that machine translation is only used when a perfect translation memory match isn’t available. For pretranslations coming from translation memory, the approval rate is 90%.
To reach that approval rate, we had to make a series of adjustments to the way we use machine translation.
For example, we convert multiline messages to single-line messages before machine-translating them. Otherwise, each line is treated as a separate message and the resulting translation is of poor quality.
Multiline message:
Make this password unique and different from any others you use.
A good strategy to follow is to combine two or more unrelated
words to create an entire pass phrase, and include numbers and symbols.
Multiline message converted to a single-line message:
Make this password unique and different from any others you use. A good strategy to follow is to combine two or more unrelated words to create an entire pass phrase, and include numbers and symbols.
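The conversion itself amounts to joining the trimmed lines with spaces. A minimal sketch, assuming single-space joining (Pontoon's actual implementation may differ):

```rust
// Collapse a multiline message into a single line so the MT engine
// sees one message instead of several unrelated fragments.
fn to_single_line(message: &str) -> String {
    message
        .lines()
        .map(str::trim)
        .filter(|line| !line.is_empty())
        .collect::<Vec<_>>()
        .join(" ")
}

fn main() {
    let multiline = "Make this password unique and different from any others you use.\n\
                     A good strategy to follow is to combine two or more unrelated\n\
                     words to create an entire pass phrase, and include numbers and symbols.";
    let single = to_single_line(multiline);
    assert!(single.starts_with("Make this password unique"));
    assert!(single.contains("you use. A good strategy"));
    println!("{single}");
}
```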
Let’s take a closer look at two of the more time-consuming changes.
The first one is specific to our machine translation provider (Google’s AutoML Translation). During initial testing, we noticed it would often take a long time for the MT engine to return results, up to a minute. Sometimes it even timed out! Such a long response time not only slows down pretranslation, it also makes machine translation suggestions in the translation editor less useful – by the time they appear, the localizer has already moved to translate the next string.
After further testing, we began to suspect that our custom engine shuts down after a period of inactivity, thus requiring a cold start for the next request. We contacted support and our assumption was confirmed. To overcome the problem, we were advised to send a dummy query to the service every 60 seconds just to keep the system alive.
Of course, it’s reasonable to shut down inactive services to free up resources, but the way to keep them alive isn’t. We have to make (paid) requests to each locale’s machine translation engines every minute just to make sure they work when we need them. And sometimes even that doesn’t help – we still see about a dozen ServiceUnavailable errors every day. It would be so much easier if we could just customize the default inactivity period or pay extra for an always-on service.
The other issue we had to address is quite common in machine translation systems: they are not particularly good at preserving placeholders. In particular, extra space often gets added to variables or markup elements, resulting in broken translations.
Message with variables:
{ $partialSize } of { $totalSize }
Message with variables machine-translated to Slovenian (adding space after $ breaks the variable):
{$ partialSize} od {$ totalSize}
We tried to mitigate this issue by wrapping placeholders in <span translate="no">…</span>, which tells Google’s AutoML Translation to not translate the wrapped text. This approach requires the source text to be submitted as HTML (rather than plain text), which triggers a whole new set of issues — from adding spaces in other places to escaping quotes — and we couldn’t circumvent those either. So this was a dead-end.
The solution was to store every placeholder in the Glossary with the same value for both source string and translation. That approach worked much better and we still use it today. It’s not perfect, though, so we only use it to pretranslate strings for which the default (non-glossary) machine translation output fails our placeholder quality checks.
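A placeholder quality check of the kind mentioned above can be as simple as comparing the placeholder sets of source and translation. This is a hypothetical sketch, not Pontoon's actual check:

```rust
// Extract the trimmed contents of every `{ ... }` placeholder.
fn placeholders(s: &str) -> Vec<String> {
    let mut out = Vec::new();
    let mut rest = s;
    while let Some(start) = rest.find('{') {
        match rest[start..].find('}') {
            Some(end) => {
                out.push(rest[start + 1..start + end].trim().to_string());
                rest = &rest[start + end + 1..];
            }
            None => break,
        }
    }
    out.sort();
    out
}

// The check passes only if the translation preserves every placeholder.
fn placeholders_intact(source: &str, translation: &str) -> bool {
    placeholders(source) == placeholders(translation)
}

fn main() {
    let source = "{ $partialSize } of { $totalSize }";
    assert!(placeholders_intact(source, "{ $partialSize } od { $totalSize }"));
    // The stray space after `$` corrupts the variable, so the check fails:
    assert!(!placeholders_intact(source, "{$ partialSize} od {$ totalSize}"));
    println!("ok");
}
```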
Making pretranslation work with Fluent messages
On top of the machine translation service improvements we also had to account for the complexity of Fluent messages, which are used by most of the projects we localize at Mozilla. Fluent is capable of expressing virtually any imaginable message, which means it is the localization system you want to use if you want your software translations to sound natural.
As a consequence, the Fluent message format comes with a syntax that allows for expressing such complex messages. And since machine translation systems (as seen above) already have trouble with simple variables and markup elements, their struggles multiply with messages like this:
shared-photos =
    { $photoCount ->
        [one]
            { $userGender ->
                [male] { $userName } added a new photo to his stream.
                [female] { $userName } added a new photo to her stream.
               *[other] { $userName } added a new photo to their stream.
            }
       *[other]
            { $userGender ->
                [male] { $userName } added { $photoCount } new photos to his stream.
                [female] { $userName } added { $photoCount } new photos to her stream.
               *[other] { $userName } added { $photoCount } new photos to their stream.
            }
    }
That means Fluent messages need to be pre-processed before they are sent to the pretranslation systems. Only relevant parts of the message need to be pretranslated, while syntax elements need to remain untouched. In the example above, we extract the following message parts, pretranslate them, and replace them with pretranslations in the original message:
{ $userName } added a new photo to his stream.
{ $userName } added a new photo to her stream.
{ $userName } added a new photo to their stream.
{ $userName } added { $photoCount } new photos to his stream.
{ $userName } added { $photoCount } new photos to her stream.
{ $userName } added { $photoCount } new photos to their stream.
To be more accurate, this is what happens for languages like German, which uses the same CLDR plural forms as English. For locales without plurals, like Chinese, we drop plural forms completely and only pretranslate the remaining three parts. If the target language is Slovenian, two additional plural forms need to be added (two, few), which in this example results in a total of 12 messages needing pretranslation (four plural forms, with three gender forms each).
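The arithmetic behind those counts is just the cross product of plural and gender forms. A small illustration (the CLDR category lists below are simplified assumptions, not data pulled from CLDR itself):

```rust
// Every (plural form, gender form) combination yields one message part
// that needs its own pretranslation.
fn parts_to_pretranslate(plural_forms: &[&str], gender_forms: &[&str]) -> usize {
    plural_forms.len() * gender_forms.len()
}

fn main() {
    let genders = ["male", "female", "other"];
    // German, like English, uses the `one`/`other` cardinal categories:
    assert_eq!(parts_to_pretranslate(&["one", "other"], &genders), 6);
    // Chinese has no plural distinction, so only the gender forms remain:
    assert_eq!(parts_to_pretranslate(&["other"], &genders), 3);
    // Slovenian adds `two` and `few`, giving 12 parts in this example:
    assert_eq!(parts_to_pretranslate(&["one", "two", "few", "other"], &genders), 12);
    println!("ok");
}
```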
Finally, Pontoon’s translation editor uses a custom UI for translating access keys. That means it’s capable of detecting which part of the message is an access key and which is the label the access key belongs to. The access key should ideally be one of the characters included in the label, so the editor generates a list of candidates that translators can choose from. In pretranslation, the first candidate is used directly as the access key, so no TM or MT is involved.
Access keys (not to be confused with shortcut keys) are used for accessibility to interact with all controls or menu items using the keyboard. Windows indicates access keys by underlining the access key assignment when the Alt key is pressed. Source: Microsoft Learn.
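The candidate generation described above might look roughly like this; `accesskey_candidates` is a hypothetical helper, not Pontoon's actual code:

```rust
// Collect the unique alphanumeric characters of the translated label,
// in order of appearance; these are the access-key candidates.
fn accesskey_candidates(label: &str) -> Vec<char> {
    let mut candidates = Vec::new();
    for c in label.chars().filter(|c| c.is_alphanumeric()) {
        if !candidates.contains(&c) {
            candidates.push(c);
        }
    }
    candidates
}

fn main() {
    let candidates = accesskey_candidates("Save As…");
    // Pretranslation would pick the first candidate, 'S', as the access key.
    assert_eq!(candidates.first(), Some(&'S'));
    println!("{candidates:?}");
}
```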
Looking ahead
With every enhancement we shipped, the case for publishing untranslated text instead of pretranslations became weaker and weaker. And there’s still room for improvements in our pretranslation system.
Ayanaa has done extensive research on the impact of Large Language Models (LLMs) on translation efficiency. She’s now working on integrating LLM-assisted translations into Pontoon’s Machinery panel, from which localizers will be able to request alternative translations, including formal and informal options.
If the target locale could set the tone to formal or informal on the project level, we could benefit from this capability in pretranslation as well. We might also improve the quality of machine translation suggestions by providing existing translations into other locales as references in addition to the source string.
If you are interested in using pretranslation or already use it, we’d love to hear your thoughts! Please leave a comment, reach out to us on Matrix, or file an issue.
Tab Previews! Congratulations to DJ for getting these landed. They’re currently disabled by default, but you can test them by setting `browser.tabs.cardPreview.enabled` to `true`.
sfoster landed a patch that improves the performance of restoring many tabs all at once, especially on older machines
Here’s a great blog post from the Localization team on how they’re working to advance Mozilla’s mission through localization standards
Thanks to Anna Yeddi, a missing label on the remove-shortcut icon in the extensions shortcuts management view of the about:addons page has been identified and added. Another accessibility issue caught by the a11y jobs 🥳 – Bug 1873304
WebExtensions Framework
As part of follow-ups to the work on the new taskcluster jobs to run the webextensions tp6 and tp6m perftest jobs (landed as tier-3 jobs as part of Bug 1859549 in December):
A new linter named condprof-addons has landed. It makes sure that the xpi files referenced in condprof customization files and the firefox-addons.tar archive (fetched through the related CI fetch task) do not go out of sync with each other – Bug 1868144
Thanks to ahal and sparky for their help and support on introducing this new linter
A new doc section has been added to the Raptor Browsertime doc page, to briefly provide a description of the webextensions tp6/tp6m perftests jobs and examples for how to run these tests locally and in try pushes – Bug 1874487
We will start with the migration away from Console.sys.mjs as that is closest to console.createInstance, and will look at Log.sys.mjs later. However, if teams want to investigate moving away sooner, and find issues, please file bugs blocking the appropriate metas (Console.sys.mjs, Log.sys.mjs)
The next wave of spotlight messages to encourage users without accounts to create one to aid in device migration should be going out in a week or so.
The infrastructure that allows for doing backups of active SQLite databases has landed. We’re hoping this can be part of the foundations for a backup-to-local-file utility.
New Tab Page
Mardak sent a message to the governance mailing list with some proposed updates on ownership for New Tab, Onboarding and In Product Messaging
On top of ongoing projects, the search team collectively worked on closing as many “Dragon Slayer” bugs (small story point effort bugs) as possible over the last two weeks. This included fixing visual and functional errors in the address bar, updating documentation, adding additional test coverage, and addressing tech debt. One such bug:
:mcheang made a change so that initiating a keyword search and ending it with a question mark no longer switches the search back to the default search engine. This could be helpful for users who use keyword queries as shortcuts to chatbots, since such queries can naturally end with a question mark.
Many people, including myself, have implemented garbage collection (GC)
libraries for Rust. Manish Goregaokar wrote up a fantastic survey of this
space a few years ago. These libraries aim to provide a safe API for their users
to consume: an unsafe-free interface which soundly encapsulates and hides the
library’s internal unsafe code. The one exception is their mechanism to
enumerate the outgoing GC edges of user-defined GC types, since failure to
enumerate all edges can lead the collector to believe that an object is
unreachable and collect it, despite the fact that the user still has a reference
to the reclaimed object, leading to use-after-free bugs.1 This
functionality is generally exposed as an unsafe trait for the user to
implement because it is the user’s responsibility, not the library’s, to uphold
this particular critical safety invariant.
However, despite providing safe interfaces, all of these libraries make
extensive use of unsafe code in their internal implementations. I’ve always
believed it was possible to write a garbage collection library without any
unsafe code, and no one I’ve asserted this to has disagreed, but there has
never been a proof by construction.
So, finally, I created the safe-gc crate: a garbage collection library for
Rust with zero unsafe code. No unsafe in the API. No unsafe in the
implementation. It even has a forbid(unsafe_code) pragma at the top.
That said, safe-gc is not a particularly high-performance garbage collector.
Using safe-gc
To use safe-gc, first we define our GC-managed types, using Gc<T> to define
references to other GC-managed objects, and implement the Trace trait to
report each of those GC edges to the collector:
use safe_gc::{Collector, Gc, Trace};

// Define a GC-managed object.
struct List {
    value: u32,

    // GC-managed references to the next and previous links in the list.
    prev: Option<Gc<List>>,
    next: Option<Gc<List>>,
}

// Report GC edges to the collector.
impl Trace for List {
    fn trace(&self, collector: &mut Collector) {
        if let Some(prev) = self.prev {
            collector.edge(prev);
        }
        if let Some(next) = self.next {
            collector.edge(next);
        }
    }
}
This looks pretty similar to other GC libraries in Rust, although it could
definitely benefit from an implementation of Trace for Option<T> and a
derive(Trace) macro. The big difference from existing GC libraries is that
Trace is safe to implement; more on this later.
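The wished-for `Trace for Option<T>` impl could look like the following sketch, written against toy stand-ins for safe-gc's `Trace` and `Collector` (stubs for illustration, not the real crate's types):

```rust
// Toy stand-in for safe-gc's Collector: just counts reported edges.
struct Collector {
    edges_seen: usize,
}

// Toy stand-in for safe-gc's Trace trait.
trait Trace {
    fn trace(&self, collector: &mut Collector);
}

// The blanket impl: tracing an Option traces its contents, if any.
// With this, `List::trace` could simply call
// `self.prev.trace(collector); self.next.trace(collector);`
// instead of matching on each `Option` by hand.
impl<T: Trace> Trace for Option<T> {
    fn trace(&self, collector: &mut Collector) {
        if let Some(inner) = self {
            inner.trace(collector);
        }
    }
}

// A leaf type that reports a single edge when traced.
struct Leaf;

impl Trace for Leaf {
    fn trace(&self, collector: &mut Collector) {
        collector.edges_seen += 1;
    }
}

fn main() {
    let mut collector = Collector { edges_seen: 0 };
    Some(Leaf).trace(&mut collector);
    None::<Leaf>.trace(&mut collector);
    assert_eq!(collector.edges_seen, 1);
    println!("ok");
}
```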
Next, we create one or more Heaps to allocate our objects within. Each heap is
independently garbage collected.
use safe_gc::Heap;

let mut heap = Heap::new();
And with a Heap in hand, we can allocate objects:
let a = heap.alloc(List {
    value: 42,
    prev: None,
    next: None,
});

let b = heap.alloc(List {
    value: 36,
    prev: Some(a.into()),
    next: None,
});

// Create a bunch of garbage! Who cares! It'll all be cleaned
// up eventually!
for i in 0..100 {
    let _ = heap.alloc(List {
        value: i,
        prev: None,
        next: None,
    });
}
The heap will automatically trigger garbage collections, as necessary, but we
can also force a collection if we want:
// Force a garbage collection!
heap.gc();
Rather than deref’ing Gc<T> pointers directly, we must index into the Heap
to access the referenced T object. This contrasts with other GC libraries and
is the key that unlocks safe-gc’s lack of unsafe code, allowing the
implementation to abide by Rust’s ownership and borrowing discipline.2
// Read from a GC object in the heap.
let b_value = heap[&b].value;
assert_eq!(b_value, 36);

// Write to a GC object in the heap.
heap[&b].value += 1;
assert_eq!(heap[&b].value, 37);
Finally, there are actually two types for indexing into Heaps to access GC
objects:
Gc<T>, which we have seen already, and
Root<T>, which we have also seen in action, but which was hidden from us by
type inference.
The Gc<T> type is Copy and should be used when referencing other GC-managed
objects from within a GC-managed object’s type definition, or when you can prove
that a garbage collection will not happen (i.e. you have a shared borrow of its
heap). A Gc<T> does not root its referenced T, keeping it alive across
garbage collections, and therefore Gc<T> should not be used to hold onto GC
references across any operation that can trigger a garbage collection.
A Root<T>, on the other hand, does indeed root its associated T object,
preventing the object from being reclaimed during garbage collection. This makes
Root<T> suitable for holding references to GC-managed objects across
operations that can trigger garbage collections. Root<T> is not Copy because
dropping it must remove its entry from the heap’s root set. Allocation returns
rooted references; all the heap.alloc(...) calls from our earlier examples
returned Root<T>s.
Peeking Under the Hood
A safe_gc::Heap is more similar to an arena newtype over a
Vec than an engineered heap with hierarchies of
regions like
Immix. Its
main storage is a hash map from std::any::TypeId to uniform arenas of the
associated type. This lets us ultimately use Vec as the storage for
heap-allocated objects, and we don’t need to do any unsafe pointer arithmetic or
worry about splitting large blocks in our free lists. In fact, the free lists
only manage indices, not blocks of raw memory.
pub struct Heap {
    // A map from `type_id(T)` to `Arena<T>`. The `ArenaObject`
    // trait facilitates crossing the boundary from an untyped
    // heap to typed arenas.
    arenas: HashMap<TypeId, Box<dyn ArenaObject>>,
    // ...
}

struct Arena<T> {
    elements: FreeList<T>,
    // ...
}

enum FreeListEntry<T> {
    /// An occupied entry holding a `T`.
    Occupied(T),
    /// A free entry that is also part of a linked list
    /// pointing to the next free entry, if any.
    Free(Option<u32>),
}

struct FreeList<T> {
    // The actual backing storage for our `T`s.
    entries: Vec<FreeListEntry<T>>,
    /// The index of the first free entry in the free list.
    free: Option<u32>,
    // ...
}
To allocate a new T in the heap, we first get the T object arena out of the
heap’s hash map, or create it if it doesn’t exist yet. Then, we check if the
arena has capacity to allocate our new T. If it does, we push the object onto
the arena and return a rooted reference. If it does not, we fall back to an
out-of-line slow path where we trigger a garbage collection to ensure that we
have space for the new object, and then try again.
impl Heap {
    #[inline]
    pub fn alloc<T>(&mut self, value: T) -> Root<T>
    where
        T: Trace,
    {
        let heap_id = self.id;
        let arena = self.ensure_arena::<T>();

        // Fast path for when we have available capacity for
        // allocating into.
        match arena.try_alloc(heap_id, value) {
            Ok(root) => root,
            Err(value) => self.alloc_slow(value),
        }
    }

    // Out-of-line slow path for when we need to GC to free
    // up or allocate additional space.
    #[inline(never)]
    fn alloc_slow<T>(&mut self, value: T) -> Root<T>
    where
        T: Trace,
    {
        self.gc();
        let heap_id = self.id;
        let arena = self.ensure_arena::<T>();
        arena.alloc_slow(heap_id, value)
    }
}
Arena<T> allocation bottoms out in allocating from a FreeList<T>, which will
attempt to use existing capacity by popping off its internal list of empty
entries when possible, or otherwise fall back to reserving additional capacity.
impl<T> FreeList<T> {
    fn try_alloc(&mut self, value: T) -> Result<u32, T> {
        if let Some(index) = self.free {
            // We have capacity. Pop the first free entry off
            // the free list and put the value in there.
            let i = usize::try_from(index).unwrap();
            let next_free = match self.entries[i] {
                FreeListEntry::Free(next_free) => next_free,
                FreeListEntry::Occupied { .. } => unreachable!(),
            };
            self.free = next_free;
            self.entries[i] = FreeListEntry::Occupied(value);
            Ok(index)
        } else {
            // No capacity to hold the value; give it back.
            Err(value)
        }
    }

    fn alloc(&mut self, value: T) -> u32 {
        self.try_alloc(value).unwrap_or_else(|value| {
            // Reserve additional capacity, since we didn't have
            // space for the allocation.
            self.double_capacity();
            // After which the allocation will succeed.
            self.try_alloc(value).ok().unwrap()
        })
    }
}
Accessing objects in the heap is straightforward: look up the arena for T and
index into it.
impl Heap {
    /// Get a shared borrow of the referenced `T`.
    pub fn get<T>(&self, gc: impl Into<Gc<T>>) -> &T
    where
        T: Trace,
    {
        let gc = gc.into();
        assert_eq!(self.id, gc.heap_id);
        let arena = self.arena::<T>().unwrap();
        arena.elements.get(gc.index)
    }

    /// Get an exclusive borrow of the referenced `T`.
    pub fn get_mut<T>(&mut self, gc: impl Into<Gc<T>>) -> &mut T
    where
        T: Trace,
    {
        let gc = gc.into();
        assert_eq!(self.id, gc.heap_id);
        let arena = self.arena_mut::<T>().unwrap();
        arena.elements.get_mut(gc.index)
    }
}
Before we get into how safe-gc actually performs garbage collection, we need
to look at how it implements the root set. The root set is the set of things
that are definitely alive; things that the application is actively using right
now or planning to use in the future. The goal of the collector is to identify
all objects transitively referenced by these roots, since these are the objects
that can still be used in the future, and recycle all others.
Each Arena<T> has its own RootSet<T>. For simplicity, a RootSet<T> is a
wrapper around a FreeList<Gc<T>>. When we add new roots, we insert them into
the FreeList, and when we drop a root, we remove it from the FreeList. This
does mean that the root set can contain duplicates and is therefore not a proper
set. The root set’s FreeList is additionally wrapped in an Rc<RefCell<...>>
so that we can implement Clone for Root<T>, which adds another entry in the
root set, and don’t need to explicitly pass around a Heap to hold additional
references to a rooted object.
Finally, I took care to design Root<T> and RootSet<T> such that Root<T>
doesn’t directly hold a Gc<T>. This allows for updating rooted GC pointers
after a collection, which is necessary for moving GC algorithms like
generational GC and compaction. In fact, I originally intended to implement a
copying collector, which is a moving GC algorithm, for safe-gc but ran into
some issues. More on those later. For now, we retain the possibility of
introducing moving GC at a later date.
struct Arena<T> {
    // ...

    // Each arena has a root set.
    roots: RootSet<T>,
}

// The set of rooted `T`s in an arena.
struct RootSet<T> {
    inner: Rc<RefCell<FreeList<Gc<T>>>>,
}

impl<T: Trace> RootSet<T> {
    // Rooting a `Gc<T>` adds an entry to the root set.
    fn insert(&self, gc: Gc<T>) -> Root<T> {
        let mut inner = self.inner.borrow_mut();
        let index = inner.alloc(gc);
        Root {
            roots: self.clone(),
            index,
        }
    }

    fn remove(&self, index: u32) {
        let mut inner = self.inner.borrow_mut();
        inner.dealloc(index);
    }
}

pub struct Root<T: Trace> {
    // Each `Root<T>` holds a reference to the root set.
    roots: RootSet<T>,
    // Index of this root in the root set.
    index: u32,
}

// Dropping a `Root<T>` removes its entry from the root set.
impl<T: Trace> Drop for Root<T> {
    fn drop(&mut self) {
        self.roots.remove(self.index);
    }
}
With all that out of the way, we can finally look at the core garbage collection
algorithm.
safe-gc implements simple mark-and-sweep garbage collection. We begin by
resetting the mark bits for each arena, and making sure that there are enough
bits for all of our allocated objects, since we keep the mark bits in an
out-of-line compact bitset rather than in each object’s header word or something
like that.
impl Heap {
    #[inline(never)]
    pub fn gc(&mut self) {
        // Reset/pre-allocate the mark bits.
        for (ty, arena) in &self.arenas {
            self.collector
                .mark_bits
                .entry(*ty)
                .or_default()
                .reset(arena.capacity());
        }

        // ...
    }
}
Next we begin the mark phase. This starts by iterating over each root and then
setting its mark bit and enqueuing it in the mark stack by calling
collector.edge(root).
impl Heap {
    #[inline(never)]
    pub fn gc(&mut self) {
        // ...

        // Mark all roots.
        for arena in self.arenas.values() {
            arena.trace_roots(&mut self.collector);
        }

        // ...
    }
}

trait ArenaObject: Any {
    fn trace_roots(&self, collector: &mut Collector);

    // ...
}

impl<T: Trace> ArenaObject for Arena<T> {
    fn trace_roots(&self, collector: &mut Collector) {
        self.roots.trace(collector);
    }

    // ...
}

impl<T: Trace> RootSet<T> {
    fn trace(&self, collector: &mut Collector) {
        let inner = self.inner.borrow();
        for (_, root) in inner.iter() {
            collector.edge(*root);
        }
    }
}
The mark phase continues by marking everything transitively reachable from those
roots in a fixed-point loop. If we discover an unmarked object, we mark it and
enqueue it for tracing. Whenever we see an already-marked object, we ignore it.
What is kind of unusual is that we don’t have a single mark stack. The Heap
has no T type parameter, and contains many different types of objects, so the
heap itself doesn’t know how to trace any particular object. However, each of
the heap’s Arena<T>s holds only a single type of object, and an arena does
know how to trace its objects. So we have a mark stack for each T, or
equivalently, each arena. This means that our fixed-point loop has two levels:
an outer loop that continues while any mark stack has work enqueued, and an
inner loop to drain a particular mark stack.
impl Heap {
    #[inline(never)]
    pub fn gc(&mut self) {
        // ...

        // Mark everything transitively reachable from the roots.
        while let Some(type_id) = self.collector.next_non_empty_mark_stack() {
            while let Some(index) = self.collector.pop_mark_stack(type_id) {
                self.arenas
                    .get_mut(&type_id)
                    .unwrap()
                    .trace_one(index, &mut self.collector);
            }
        }

        // ...
    }
}
While the driver loop for marking is inside the Heap::gc method, the actual
edge tracing and mark bit setting happens inside Collector and the arena
which, because it has a T type parameter, can call the correct Trace
implementation for each object.
trait ArenaObject: Any {
    fn trace_one(&mut self, index: u32, collector: &mut Collector);

    // ...
}

impl<T: Trace> ArenaObject for Arena<T> {
    fn trace_one(&mut self, index: u32, collector: &mut Collector) {
        self.elements.get(index).trace(collector);
    }

    // ...
}

pub struct Collector {
    heap_id: u32,
    // The mark stack for each type in the heap.
    mark_stacks: HashMap<TypeId, Vec<u32>>,
    // The mark bits for each type in the heap.
    mark_bits: HashMap<TypeId, MarkBits>,
}

impl Collector {
    pub fn edge<T: Trace>(&mut self, to: Gc<T>) {
        assert_eq!(to.heap_id, self.heap_id);

        // Get the mark bits for `T` objects.
        let ty = TypeId::of::<T>();
        let mark_bits = self.mark_bits.get_mut(&ty).unwrap();

        // Set `to`'s mark bit. If the bit was already set, we're
        // done.
        if mark_bits.set(to.index) {
            return;
        }

        // Otherwise this is the first time visiting this GC
        // object so enqueue it for further marking.
        let mark_stack = self.mark_stacks.entry(ty).or_default();
        mark_stack.push(to.index);
    }
}
Once our mark stacks are all empty, we’ve reached our fixed point, and that
means we’ve finished marking all objects reachable from the root set. Now we
transition to the sweep phase.
Sweeping iterates over each object in each arena. If that object’s mark bit is
not set, then it is unreachable from the GC roots, i.e. it is not a member of
the live set, i.e. it is garbage. We drop such objects and push their slots into
their arena’s free list, making the slot available for future allocations.
After sweeping each arena we check whether the arena is still close to running
out of capacity and, if so, reserve additional space for the arena. This
amortizes the cost of garbage collection and avoids a scenario that could
otherwise trigger a full GC on every object allocation:
The arena has zero available capacity.
The user tries to allocate, triggering a GC.
The GC is able to reclaim only one slot in the arena.
The user’s pending allocation fills the reclaimed slot.
Now the arena is out of capacity again, and the process repeats from the top.
By reserving additional space in the arena after sweeping, we avoid this failure
mode.
We could also compact the arena and release excess space back to the global
allocator if there was too much available capacity. This would additionally
require a method for updating incoming edges to the compacted objects, and
safe-gc does not implement compaction at this time.
impl Heap {
    #[inline(never)]
    pub fn gc(&mut self) {
        // ...

        // Sweep.
        for (ty, arena) in &mut self.arenas {
            let mark_bits = &self.collector.mark_bits[ty];
            arena.sweep(mark_bits);
        }
    }
}

trait ArenaObject: Any {
    // ...

    fn sweep(&mut self, mark_bits: &MarkBits);
}

impl<T: Trace> ArenaObject for Arena<T> {
    // ...

    fn sweep(&mut self, mark_bits: &MarkBits) {
        // Reclaim garbage slots.
        let capacity = self.elements.capacity();
        for index in 0..capacity {
            if !mark_bits.get(index) {
                self.elements.dealloc(index);
            }
        }

        // Amortize the cost of GC across allocations.
        let len = self.elements.len();
        let available = capacity - len;
        if available < capacity / 4 {
            self.elements.double_capacity();
        }
    }
}
After every arena is swept, garbage collection is complete!
Preventing Classic Footguns
Now that we know how safe-gc is implemented, we can explore a couple classic
GC footguns and analyze how safe-gc either completely nullifies them or
downgrades them from critical security vulnerabilities to plain old bugs.
Often an object might represent some external resource that should be cleaned up
when the object is no longer in use, like an open file descriptor. This
functionality is typically supported with finalizers, the GC-equivalent of C++
destructors and Rust’s Drop trait. Finalization of GC objects is usually
tricky because of the risks of either accessing objects that have already been
reclaimed by the collector (which is a use-after-free bug) or accidentally
entrenching objects and making them live again (which leads to memory
leaks). Because of these risks, Rust GC libraries often make finalization an
unsafe trait and even forbid allocating types that implement Drop in their
heaps.
However, safe-gc doesn’t need an unsafe finalizer trait, or even any
additional finalizer trait: it can just use Drop. Drop implementations
simply do not have access to a Heap, which is required to deref GC pointers,
so they cannot suffer from those finalization footguns.
Next up: why isn’t Trace an unsafe trait? And what happens if you don’t root
a Gc<T> and then index into a Heap with it after a garbage collection? These
are actually the same question: what happens if I use a dangling Gc<T>? As
mentioned at the start, if a Trace implementation fails to report all edges to
the collector, the collector may believe an object is unreachable and reclaim
it, and now the unreported edge is dangling. Similarly, if the user holds an
unrooted Gc<T>, rather than a Root<T>, across a garbage collection then the
collector might believe that the referenced object is garbage and reclaim it,
leaving the unrooted reference dangling.
Indexing into a Heap with a potentially-dangling Gc<T> will result in one of
three possibilities:
We got “lucky” and something else happened to keep the object alive. The
access succeeds as it otherwise would have and the potentially-dangling bug
is hidden.
The associated slot in the arena’s free list is empty and contains a
FreeListEntry::Free variant. This scenario will raise a panic.
A new object has since been allocated in the same arena slot. The access will
succeed, but it will be to the wrong object. This is an instance of the ABA
problem. We could, at the cost of
some runtime overhead, turn this into a loud panic instead of silent action
at a distance by adding a generation counter to our arenas.
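The generation-counter idea can be sketched as follows. This is a hypothetical, self-contained illustration of the technique, not safe-gc's actual arenas: each slot carries a generation that is bumped on reuse, and a stale handle panics loudly instead of silently aliasing the wrong object.

```rust
// Each slot remembers how many times it has been reused.
struct Slot<T> {
    generation: u32,
    value: Option<T>,
}

struct Arena<T> {
    slots: Vec<Slot<T>>,
}

// A handle records the generation it was created at.
#[derive(Clone, Copy)]
struct Handle {
    index: usize,
    generation: u32,
}

impl<T> Arena<T> {
    fn new() -> Self {
        Arena { slots: Vec::new() }
    }

    fn alloc(&mut self, value: T) -> Handle {
        // Reuse a free slot if one exists, bumping its generation.
        if let Some((index, slot)) = self
            .slots
            .iter_mut()
            .enumerate()
            .find(|(_, slot)| slot.value.is_none())
        {
            slot.generation += 1;
            slot.value = Some(value);
            return Handle { index, generation: slot.generation };
        }
        self.slots.push(Slot { generation: 0, value: Some(value) });
        Handle { index: self.slots.len() - 1, generation: 0 }
    }

    fn dealloc(&mut self, handle: Handle) {
        self.slots[handle.index].value = None;
    }

    fn get(&self, handle: Handle) -> &T {
        let slot = &self.slots[handle.index];
        // A stale generation means the slot was reclaimed and reused:
        // panic instead of returning the wrong object (the ABA problem).
        assert_eq!(slot.generation, handle.generation, "dangling handle");
        slot.value.as_ref().expect("freed slot")
    }
}

fn main() {
    let mut arena = Arena::new();
    let a = arena.alloc("old");
    arena.dealloc(a);
    let b = arena.alloc("new"); // reuses slot 0, bumping its generation
    assert_eq!(*arena.get(b), "new");
    // `arena.get(a)` would now panic instead of returning "new".
    println!("ok");
}
```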
Of course, it would be best if users always rooted GC references they held
across collections and correctly implemented the Trace trait but, should they
fail to do that, all three potential outcomes are 100% memory
safe.3 These failures can’t lead to memory corruption or
use-after-free bugs, which would be the typical results of this kind of thing
with an unsafe GC implementation.
Copying Collector False Start
I initially intended to implement a copying collector rather than
mark-and-sweep, but ultimately the borrowing and ownership didn’t pan out. That
isn’t to say it is impossible to implement a copying collector in safe Rust, but
it ended up feeling like more of a headache than it was worth. I spent several
hours trying to jiggle things around to experiment with different ownership
hierarchies and didn’t get anything satisfactory. When I decided to try
mark-and-sweep, it only took me about half an hour to get an initial prototype
working. I found this really surprising, since I had a strong intuition that a
copying collector, with its separate from- and to-spaces, should play well with
Rust’s ownership and borrowing.
Briefly, the algorithm works as follows:
We equally divide the heap into two semi-spaces.
At any given time in between collections, all objects live in one semi-space
and the other is sitting idle.
We bump allocate within the active semi-space, slowly filling it up, and
when the bump pointer reaches the end of the semi-space, we trigger a
collection.
During collection, as we trace the live set, we copy objects from the old
semi-space that has been active, to the other new semi-space that has been
idle. At the same time, we maintain a map from the live objects’ location in
the old semi-space to their location in the new semi-space. When we trace an
object’s edges, we also update those edges to point to their new
locations. Once tracing reaches a fixed-point, we’ve copied the whole live set
to the new semi-space, it becomes the active semi-space, and the
previously-active semi-space now sits idle until the next collection.
Copying collection has a number of desirable properties:
The algorithm is relatively simple and easy to understand.
Allocating new objects is fast: just bumping a pointer and checking that space
isn’t exhausted yet.
The act of copying objects to the new semi-space compacts the heap, defeating
fragmentation.
It also eliminates the need for a sweep phase, since the whole of the old
semi-space is garbage after the live set has been moved to the new semi-space.
Copying collection’s primary disadvantage is the memory overhead it imposes: we
can only ever use at most half of the heap to store objects.
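To make the "bump allocate" step concrete, here is a minimal sketch of bump allocation within a semi-space. The types and names here are illustrative, not safe-gc's API:

```rust
// Illustrative sketch of bump allocation in a semi-space; not safe-gc's API.
struct SemiSpace {
    storage: Vec<u8>, // fixed-capacity backing memory for this semi-space
    bump: usize,      // offset of the next free byte
}

impl SemiSpace {
    fn with_capacity(bytes: usize) -> Self {
        SemiSpace { storage: vec![0; bytes], bump: 0 }
    }

    /// Try to allocate `size` bytes. `None` means this semi-space is
    /// exhausted and the caller should trigger a collection.
    fn alloc(&mut self, size: usize) -> Option<usize> {
        let start = self.bump;
        let end = start.checked_add(size)?;
        if end > self.storage.len() {
            return None; // out of space: time to collect
        }
        self.bump = end;
        Some(start) // offset of the newly allocated object
    }
}
```

Allocation is just an add and a compare; a collection then resets `bump` to zero in the newly activated semi-space.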
When I think about a copying collector, I tend to imagine Lisp cons cells, as I
was first introduced to this algorithm in that context by SICP. Here is what a
very naive implementation of the core copying collection algorithm might look
like in safe Rust:
fn copy_collect(
    roots: &mut [usize],
    from: &[Cons],
    to: &mut Vec<Cons>,
) {
    // Contains a work list of the new indices of cons cells
    // that have been copied to `to` but haven't had their
    // edges traced and updated yet.
    let mut stack = Vec::with_capacity(roots.len());

    // The map from each live object's old location, to its new one.
    let mut old_to_new = HashMap::new();

    // Copy each root to the to-space, enqueue it for tracing, and
    // update its pointer to its new index in the to-space.
    for root in roots {
        visit_edge(from, to, &mut old_to_new, &mut stack, root);
    }

    // Now do the same for everything transitively reachable from
    // the roots. Copy each cell's edges out first so that we aren't
    // holding a borrow of `to` across the visits, which may push to it.
    while let Some(index) = stack.pop() {
        let (mut car, mut cdr) = (to[index].car, to[index].cdr);
        if let Some(car) = &mut car {
            visit_edge(from, to, &mut old_to_new, &mut stack, car);
        }
        if let Some(cdr) = &mut cdr {
            visit_edge(from, to, &mut old_to_new, &mut stack, cdr);
        }
        to[index].car = car;
        to[index].cdr = cdr;
    }
}

// Visit one edge. If the edge's referent has already been copied
// to the to-space, just update the edge's pointer so that it points
// to the new location. If it hasn't been copied yet, additionally
// copy it over and enqueue it in the stack for future tracing.
fn visit_edge(
    from: &[Cons],
    to: &mut Vec<Cons>,
    old_to_new: &mut HashMap<usize, usize>,
    stack: &mut Vec<usize>,
    edge: &mut usize,
) {
    let new_location = *old_to_new.entry(*edge).or_insert_with(|| {
        let new = to.len();
        // Copy the object over.
        to.push(from[*edge]);
        // Enqueue it for tracing.
        stack.push(new);
        new
    });
    *edge = new_location;
}
As written, this works and is 100% safe!4 So where do things start to
break down? We’ll get there, but first…
The old-to-new-location map needn’t be an additional, separate allocation. We
don’t need that hash map. Instead, we can reuse the from-space’s storage and
write the address of each copied object’s new location inline into its old
location. These are referred to as forwarding pointers. This is a super
standard optimization for copying collection; so much so that it’s rare to see a
copying collector without it.
Let’s implement inline forwarding pointers for our safe copying
collector. Because we are mutating the from-space to write the forwarding
pointers, we will need to change it from a shared borrow into an exclusive
borrow. Additionally, to differentiate between forwarding pointers and actual
cons cells, our semi-spaces must become slices of an enum rather than slices
of cons cells directly.
enum SemiSpaceEntry {
    // The cons cell. If we see this during tracing, that means
    // we haven't copied it over to the to-space yet.
    Occupied(Cons),
    // This cons cell has already been moved; here is its new
    // location.
    Forwarded(usize),
}

fn copy_collect(
    roots: &mut [usize],
    from: &mut [SemiSpaceEntry],
    to: &mut Vec<SemiSpaceEntry>,
) {
    // Same as before, but without `old_to_new`...
}

fn visit_edge(
    from: &mut [SemiSpaceEntry],
    to: &mut Vec<SemiSpaceEntry>,
    stack: &mut Vec<usize>,
    edge: &mut usize,
) {
    let new = match &mut from[*edge] {
        SemiSpaceEntry::Forwarded(new) => *new,
        SemiSpaceEntry::Occupied(cons) => {
            // Copy the cons cell out, ending the borrow of `from`,
            // so that we can write the forwarding pointer below.
            let cons = *cons;
            let new = to.len();
            // Copy the object over.
            to.push(SemiSpaceEntry::Occupied(cons));
            // Enqueue it for tracing.
            stack.push(new);
            // !!! Write the forwarding pointer. !!!
            from[*edge] = SemiSpaceEntry::Forwarded(new);
            new
        }
    };
    *edge = new;
}
Again, this copying collector with forwarding pointers works and is still 100%
safe code.
Things break down when we move away from a homogeneously-typed heap that only
contains cons cells towards a heterogeneously-typed heap that can contain any
type of GC object.
Recall how safe_gc::Heap organizes its underlying storage with a hash map
keyed by type id to get the Arena<T> storage for that associated type:
pub struct Heap {
    // A map from `type_id(T)` to `Arena<T>`.
    arenas: HashMap<TypeId, Box<dyn ArenaObject>>,
    // ...
}
My idea was that a whole Heap would be a semi-space, and if it was the active
semi-space, the heap would additionally have an owning handle to the idle
semi-space:
Note that we pass the whole to_heap into copy_collect, not from_arena’s
corresponding Arena<T> in the to-space, because there can be cross-type
edges. A Cat object can have a reference to a Salami object as a little
treat, and we need access to the whole to-space, not just its Arena<Cat>, to
copy that Salami over when tracing Cats.
But here’s where things break down: we also need mutable access to the whole
from-space when tracing Arena<Cat>s, because we need to write the forwarding
pointer in the from-space’s Arena<Salami> for the Salami’s new location in
the to-space. But we can’t have mutable access to the whole from-space because
we’ve already projected into one of its arenas. Yeah, I guess we could do
something like take the Arena<Cat> out of the from-space, and then pass both
the Arena<Cat> and the whole from-space into copy_collect. But then what do
we do for Cat-to-Cat edges? Have some kind of check to test whether we
need to follow a given edge into the from-space Heap or the Arena we are
currently tracing?
Like I said, I don’t think it is impossible to overcome these hurdles; the
question is: is overcoming them worth it? Everything I could think up got pretty
inelegant pretty quickly and/or would have laughably poor performance.5
When compared with how easy it was to implement mark-and-sweep, I just don’t
think a 100% unsafe-free copying collector that supports arbitrary,
heterogeneous types is worth the headache.
Why safe-gc?
safe-gc is certainly a point in the design space of garbage-collection
libraries in Rust. One could even argue it is an interesting — and maybe
even useful? — point in the design space!
Also, it was fun!
At the very least, you don’t have to wonder about the correctness of any
unsafe code in there, because there isn’t any. As long as the Rust language
and its standard library are sound, safe-gc is too.
Conclusion
The safe-gc crate implements garbage-collection-as-library for Rust with
zero unsafe code. It was fun to implement!
Thanks to Trevor Elliott and Jamey
Sharp for brainstorming with me and thanks to
Manish Goregaokar and again to Trevor Elliott
for reading early drafts of this blog post.
In the garbage collection literature, we think about the heap of
GC-managed objects as a graph where each object is a node in that graph and
the graph’s edges are the references from one object to another. ↩
The one exception to this statement that I’m aware of is the
gc-arena crate, although it
is only half an exception. Similar to safe-gc, it also requires threading
through a heap context (that it calls a Mutation) to access GC objects,
although only for allocation and mutable access to GC objects. Getting
shared, immutable borrows of GC objects doesn’t require threading in a heap
context. ↩
I do have sympathy for users writing these bugs! I’ve
written them myself. Remembering to root GC references across operations
that can trigger collections isn’t always easy! It can be difficult to
determine which things can trigger collections or whether some reference
you’re holding has a pointer to another structure which internally holds
onto a GC reference. The SpiderMonkey GC folks had to resort to implementing
a GCC static analysis plugin to find unrooted references held across
GC-triggering function
calls. This
analysis runs in Firefox’s CI because even the seasoned systems engineers
who work on SpiderMonkey and Firefox routinely make these mistakes and the
resulting bugs are so disastrous! ↩
Well, this collector works in principle; I haven’t actually compiled
it. I wrote it inside this text file, so it probably has some typos and
minor compilation errors or whatever. But the point stands: you could use
this collector for your next toy lisp. ↩
I’m not claiming that safe-gc has incredible performance; I haven’t
benchmarked anything, and it almost assuredly does not. But its performance
shouldn’t be laughably bad, and I’d like to think that with a bit of tuning
it would be competitive with just about any other unsafe-free Rust
implementation. ↩
Cargo and crates.io were developed in the rush leading up to the Rust 1.0 release to fill the need for a tool to manage dependencies and a registry that people could use to share code. This rapid work resulted in these tools being connected with an API that initially didn't return the correct HTTP response status codes. After the Rust 1.0 release, Rust's stability guarantees around backward compatibility made this non-trivial to fix, as we wanted older versions of Cargo to continue working with the current crates.io API.
When an old version of Cargo receives a non-"200 OK" response, it displays the raw JSON body like this:
error: failed to get a 200 OK response, got 400
headers:
HTTP/1.1 400 Bad Request
Content-Type: application/json; charset=utf-8
Content-Length: 171
body:
{"errors":[{"detail":"missing or empty metadata fields: description, license. Please see https://doc.rust-lang.org/cargo/reference/manifest.html for how to upload metadata"}]}
This was improved in pull request #6771, which was released in Cargo 1.34 (mid-2019). Since then, Cargo has supported receiving 4xx and 5xx status codes too and extracts the error message from the JSON response, if available.
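For illustration, here is a naive sketch of pulling the first "detail" message out of an error body of the shape shown above. This is not Cargo's actual implementation, and it ignores escaped quotes inside the message:

```rust
// Naive sketch, not Cargo's actual code: extract the first "detail"
// string from a body like {"errors":[{"detail":"..."}]}.
// It does not handle escaped quotes inside the message.
fn extract_detail(body: &str) -> Option<String> {
    let key = "\"detail\":\"";
    let start = body.find(key)? + key.len();
    let rest = &body[start..];
    let end = rest.find('"')?;
    Some(rest[..end].to_string())
}
```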
On 2024-03-04 we will switch the API from returning "200 OK" status codes for errors to the new 4xx/5xx behavior. Cargo 1.33 and below will keep working after this change, but will show the raw JSON body instead of a nicely formatted error message. We feel confident that this degraded error message display will not affect many users: according to the crates.io request logs, only a very small number of requests are made by Cargo 1.33 and older versions.
This is the list of API endpoints that will be affected by this change:
GET /api/v1/crates
PUT /api/v1/crates/new
PUT /api/v1/crates/:crate/:version/yank
DELETE /api/v1/crates/:crate/:version/unyank
GET /api/v1/crates/:crate/owners
PUT /api/v1/crates/:crate/owners
DELETE /api/v1/crates/:crate/owners
All other endpoints have already been using regular HTTP status codes for some time.
If you are still using Cargo 1.33 or older, we recommend upgrading to a newer version to get the improved error messages and all the other nice things that the Cargo team has built since then.
Please note some of the information provided in this report may be subject to change as we are sometimes sharing information about projects that are still in early stages and are not final yet.
New content and projects
What’s new or coming up in Firefox desktop
While the amount of content has been relatively small over the last few months in Firefox, there have been some UI changes and updates to privacy-settings-related text such as form autofill, Cookie Banner Blocker, passwords (about:logins), and cookie and site data*. One change happening here (and across all Mozilla products) is the move away from using the term “login” to describe the credentials for accessing websites, using “password(s)” instead.
In addition, while the number of strings is low, Firefox’s PDF viewer will soon have the ability to highlight content. You can test this feature now in Nightly.
Most of these strings and translations can be previewed by checking a Nightly build. If you’re new to localizing Firefox or if you missed our deep dive, please check out our blog post from July to learn more about the Firefox release schedule.
*Recently in our L10N community matrix channel, someone from our community asked how the new strings for clearing browsing history and data (see screenshot below) from Cookie and Site Data could be shown in Nightly.
In order to show the strings in Nightly, the privacy.sanitize.useOldClearHistoryDialog preference needs to be set to false. To set the preference, type about:config in your URL bar and press enter. A warning may pop up advising you to proceed with caution; click the button to continue. On the page that follows, paste privacy.sanitize.useOldClearHistoryDialog into the search field, then click the toggle button to change the value to false.
You can then trigger the new dialog by clicking “Clear Data…” from the Cookies and Site Data setting or “Clear History…” from the History section. (You may need to quit Firefox and open it again for the change to take effect.)
Much like desktop, mobile land has been pretty calm recently.
Having said that, we would like to call out the new Translation feature that is now available to test on the latest Firefox for Android v124 Nightly builds (this is possible only through the secret settings at the moment). It’s a built-in full page translation feature that allows you to seamlessly browse the web in your preferred language. As you navigate the site, Firefox continuously translates new content.
Check your Pontoon notifications for instructions on how to test it out. Note that the feature is not available on iOS at the moment.
In the past couple of months you may have also noticed strings mentioning a new shopping feature called “Review Checker” (that we mentioned for desktop in our November edition). The feature is still a bit tricky to test on Android, but there are instructions you can follow – these can also be found in your Pontoon notification archive.
For testing on iOS, you just need to have the latest Beta version installed and navigate to product pages on the US sites of amazon.com, bestbuy.com, and walmart.com. A logo will appear in the URL bar with a notification; use it to launch and test the feature.
Finally, another notable change that has been called out under the Firefox desktop section above: we are moving away from using the term “login” to describe the credentials for accessing websites, using “password(s)” instead.
What’s new or coming up in Foundation projects
New languages have been added to Common Voice in 2023: Tibetan, Chichewa, Ossetian, Emakhuwa, Laz, Pular Guinée, Sindhi. Welcome!
What’s new or coming up in Pontoon
Improved support for mobile devices
The Pontoon translation workspace is now responsive, which means you can finally use Pontoon on your mobile device to translate and review strings! We developed a single-column layout for mobile phones and a two-column layout for tablets.
Screenshot of Pontoon UI on a smartphone running Firefox for Android
2024 Pontoon survey
Thanks again to everyone who has participated in the 2024 Pontoon survey. The 3 top-voted features we commit to implement are:
We started a series called “Localizer Spotlight” and have published two already. Do you know someone who should be featured there? Let us know here!
Also, is there someone in your l10n community who’s been doing a great job and should appear in this section? Contact us and we’ll make sure they get a shout-out!
The Interop Project has become one of the key ways that browser vendors come together to improve the web platform. By working to identify and improve key areas where differences between browser engines are impacting users and web developers, Interop is a critical tool in ensuring the long-term health of the open web.
The web platform is built on interoperability based on common standards. This offers users a degree of choice and control that sets the web apart from proprietary platforms defined by a single implementation. A commitment to ensuring that the web remains open and interoperable forms a fundamental part of Mozilla’s manifesto and web vision, and is why we’re so committed to shipping Firefox with our own Gecko engine.
However, interoperability requires care and attention to maintain. When implementations ship with differences between the standard and each other, this creates a pain point for web authors; they have to choose between avoiding the problematic feature entirely and coding to specific implementation quirks. Over time, if enough authors produce implementation-specific content, interoperability is lost, and along with it user agency.
This is the problem that the Interop Project is designed to address. By bringing browser vendors together to focus on interoperability, the project makes it possible to identify areas where interoperability issues are causing problems, or may do so in the near future. Tracking progress on those issues with a public metric provides accountability to the broader web community on addressing the problems.
The project works by identifying a set of high-priority focus areas: parts of the web platform where everyone agrees that making interoperability improvements will be of high value. These can be existing features where we know browsers have slightly different behaviors that are causing problems for authors, or they can be new features which web developer feedback shows are in high demand and which we want to launch across multiple implementations with high interoperability from the start. For each focus area a set of web-platform-tests is selected to cover that area, and the score is computed from the pass rate of these tests.
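As a sketch of that scoring idea, assuming (as described above) that a test only counts toward the shared interoperability number when it passes in every participating engine:

```rust
// Illustrative sketch of "pass in all engines" scoring; this is not
// the official Interop scoring code, which weights focus areas.
fn interop_score(results: &[Vec<bool>]) -> f64 {
    // results[i][j] is whether test i passed in engine j.
    let passing_everywhere = results
        .iter()
        .filter(|test| test.iter().all(|&passed| passed))
        .count();
    passing_everywhere as f64 / results.len() as f64
}
```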
Interop 2023
The Interop 2023 project covered high profile features like the new :has() selector, and web-codecs, as well as areas of historically poor interoperability such as pointer events.
The results of the project speak for themselves: every browser ended the year with scores in excess of 97% for the prerelease versions of their browsers. Moreover, the overall Interoperability score — that is, the fraction of focus area tests that pass in all participating browser engines — increased from 59% at the start of the year to 95% now. This result represents a huge improvement in the consistency and reliability of the web platform. For users this will result in a more seamless experience, with sites behaving reliably in whichever browser they prefer.
For the :has() selector — which we know from author feedback has been one of the most in-demand CSS features for a long time — every implementation is now passing 100% of the web-platform-tests selected for the focus area. Launching a major new platform feature with this level of interoperability demonstrates the power of the Interop project to progress the platform without compromising on implementation diversity, developer experience, or user choice.
As well as focus areas, the Interop project also has “investigations”. These are areas where we know that we need to improve interoperability, but aren’t at the stage of having specific tests which can be used to measure that improvement. In 2023 we had two investigations. The first was for accessibility, which covered writing many more tests for ARIA computed role and accessible name, and ensuring they could be run in different browsers. The second was for mobile testing, which has resulted in both Mobile Firefox and Chrome for Android having their initial results in wpt.fyi.
Interop 2024
Following the success of Interop 2023, we are pleased to confirm that the project will continue in 2024 with a new selection of focus areas, representing areas of the web platform where we think we can have the biggest positive impact on users and web developers.
New Focus Areas
New focus areas for 2024 include, among other things:
Popover API – This provides a declarative mechanism to create content that always renders in the topmost-layer, so that it overlays other web page content. This can be useful for building features like tooltips and notifications. Support for popover was the #1 author request in the recent State of HTML survey.
CSS Nesting – This is a feature that’s already shipping, which allows writing more compact and readable CSS files, without the need for external tooling such as preprocessors. However different browsers shipped slightly different behavior based on different revisions of the spec, and Interop will help ensure that everyone aligns on a single, reliable, syntax for this popular feature.
Accessibility – Ensuring that the web is accessible to all users is a critical part of Mozilla’s manifesto. Our ability to include Accessibility testing in Interop 2024 is a direct result of the success of the Interop 2023 Accessibility Investigation in increasing the test coverage of key accessibility features.
The full list of focus areas is available in the project README.
Carryover
In addition to the new focus areas, we will carry over some of the 2023 focus areas where there’s still more work to be done. Of particular interest is the Layout focus area, which will combine the previous Flexbox, Grid and Subgrid focus areas into one area covering all the most important layout primitives for the modern web. On top of that the Custom Properties, URL and Mouse and Pointer Events focus areas will be carried over. These represent cases where, even though we’ve already seen large improvements in Interoperability, we believe that users and web authors will benefit from even greater convergence between implementations.
Investigations
As well as focus areas, Interop 2024 will also feature a new investigation into improving the integration of WebAssembly testing into web-platform-tests. This will open up the possibility of including WASM features in future Interop projects. In addition we will extend the Accessibility and Mobile Testing investigations, as there is more work to be done to make those aspects of the platform fully testable across different implementations.
This all started when I looked at whether it would be possible to build Firefox with Pointer Authentication Code for arm64 macOS. In case you're curious, the quick answer is no, because Apple essentially hasn't upstreamed the final ABI for it yet: only Xcode clang can produce it, and obviously Rust can't.
Anyways, the Rust compiler did recently add the arm64e-apple-darwin target (which, as mentioned above, turns out to be useless for now), albeit without a prebuilt libstd (so, requiring the use of the -Zbuild-std flag). And by recently, I mean in 1.76.0 (in beta as of writing).
So, after tricking the Firefox build system into accepting to build for that target, I ended up with a Firefox build that... crashed on startup, saying:
Hit MOZ_CRASH(unsafe precondition(s) violated: slice::from_raw_parts requires the pointer to be aligned and non-null, and the total size of the slice not to exceed isize::MAX) at /builds/worker/fetches/rustc/lib/rustlib/src/rust/library/core/src/panicking.rs:155
(MOZ_CRASH is what we get on explicit crashes, like MOZ_ASSERT in C++ code, or assert!() in Rust)
The caller of the crashing code was NS_InvokeByIndex, so at this point, I was thinking XPConnect might need some adjustment for arm64e.
But that was a build I had produced through the Mozilla try server. So I did a local non-optimized debug build to see what's up, which crashed with a different message:
Hit MOZ_CRASH(slice::get_unchecked requires that the index is within the slice) at /Users/glandium/.rustup/toolchains/nightly-aarch64-apple-darwin/lib/rustlib/src/rust/library/core/src/slice/index.rs:228
This comes from this code in rust libstd:
unsafe fn get_unchecked(self, slice: *const [T]) -> *const T {
debug_assert_nounwind!(
self < slice.len(),
"slice::get_unchecked requires that the index is within the slice",
);
// SAFETY: the caller guarantees that `slice` is not dangling, so it
// cannot be longer than `isize::MAX`. They also guarantee that
// `self` is in bounds of `slice` so `self` cannot overflow an `isize`,
// so the call to `add` is safe.
unsafe {
crate::hint::assert_unchecked(self < slice.len());
slice.as_ptr().add(self)
}
}
(I'm pasting the whole thing because it will be important later)
We're hitting the debug_assert_nounwind.
The calling code looks like the following:
let end = atoms.get_unchecked(STATIC_ATOM_COUNT) as *const _;
And what the debug_assert_nounwind means is that STATIC_ATOM_COUNT is greater than or equal to the slice size (spoiler alert: it is equal).
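Incidentally, a one-past-the-end address can be computed without any out-of-bounds indexing at all; here is a hedged sketch of one safe alternative using slice::as_ptr_range (not necessarily the fix Firefox adopted):

```rust
// Sketch: bounds-check an address against a slice using as_ptr_range,
// which safely yields the start and one-past-the-end pointers.
fn in_slice_range(slice: &[u32], addr: usize) -> bool {
    let range = slice.as_ptr_range();
    addr >= range.start as usize && addr < range.end as usize
}
```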
At that point, I started to suspect this might be a more general issue with the new Rust version, rather than something limited to arm64e. And I was kind of right? Mozilla automation did show crashes on all platforms when building with Rust beta (currently 1.76.0). But that was a different, and nonsensical, crash:
Hit MOZ_CRASH(attempt to add with overflow) at servo/components/style/gecko_string_cache/mod.rs:77
But this time, it was in the same vicinity as the crash I was getting locally.
Since this was talking about an overflowing addition, I wrapped both terms in dbg!() to see the numbers and... the overflow disappeared but now I was getting a plain crash:
application crashed [@ <usize as core::slice::index::SliceIndex<[T]>>::get_unchecked]
(still from the same call to get_unchecked, at least)
Well, the first thing to note is that despite there being a debug_assert, debug builds don't complain about the out-of-bounds use of get_unchecked. Only when using -Zbuild-std does it happen. I'm not sure whether that's intended, but I opened an issue about it to find out.
Second, in the code I pasted from get_unchecked, the hint::assert_unchecked is new in 1.76.0 (well, it was intrinsics::assume in 1.76.0 and became hint::assert_unchecked in 1.77.0, but it wasn't there before). This is why our broken code didn't cause actual problems until now.
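To make the mechanism concrete, here is a small sketch of the kind of hint involved, using the long-stable hint::unreachable_unchecked (assert_unchecked is essentially a branch to it). The promise is upheld here, so this runs fine; but if the promise were ever false, the program would have undefined behavior and the optimizer would be free to delete "impossible" branches:

```rust
// Sketch of an optimizer hint. The assert! makes this safe at runtime;
// the unsafe hint tells the compiler the out-of-bounds case cannot
// happen, so it may elide the bounds check on the final indexing.
fn get_with_hint(xs: &[u32], i: usize) -> u32 {
    assert!(i < xs.len());
    unsafe {
        if i >= xs.len() {
            // The compiler treats this branch as impossible.
            std::hint::unreachable_unchecked();
        }
    }
    xs[i]
}
```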
What about the addition overflow?
Well, this is where undefined behavior leads the optimizer to do what the user might perceive as weird things, but they actually make sense (as usual with these things involving undefined behavior). Let's start with a standalone version of the original code, simplifying the types used originally:
#![allow(non_upper_case_globals, non_snake_case, dead_code)]
#[inline]
fn static_atoms() -> &'static [[u32; 3]; STATIC_ATOM_COUNT] {
unsafe {
let addr = &gGkAtoms as *const _ as usize + kGkAtomsArrayOffset as usize;
&*(addr as *const _)
}
}
#[inline]
fn valid_static_atom_addr(addr: usize) -> bool {
unsafe {
let atoms = static_atoms();
let start = atoms.as_ptr();
let end = atoms.get_unchecked(STATIC_ATOM_COUNT) as *const _;
let in_range = addr >= start as usize && addr < end as usize;
let aligned = addr % 4 == 0;
in_range && aligned
}
}
fn main() {
println!("{:?}", valid_static_atom_addr(0));
}
Stick this code in a newly created crate (with e.g. cargo new testcase), and run it:
$ cargo +nightly run -q
false
Nothing obviously bad happened. So what went wrong in Firefox? In my first local attempt, I had -Zbuild-std, so let's try that:
$ cargo +nightly run -q -Zbuild-std --target=x86_64-unknown-linux-gnu
thread 'main' panicked at /home/glandium/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/slice/index.rs:228:9:
slice::get_unchecked requires that the index is within the slice
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
thread caused non-unwinding panic. aborting.
There we go, we hit that get_unchecked error. But what went bad in Firefox if the reduced testcase doesn't crash without -Zbuild-std? Well, Firefox is always built with optimizations on by default, even for debug builds.
$ RUSTFLAGS=-O cargo +nightly run -q
thread 'main' panicked at src/main.rs:10:20:
attempt to add with overflow
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
Interestingly, though, changing the addition to
let addr = dbg!(&gGkAtoms as *const _ as usize) + dbg!(kGkAtomsArrayOffset as usize);
doesn't "fix" it like it did with Firefox, but it shows:
[src/main.rs:10:20] &gGkAtoms as *const _ as usize = 94400145014784
[src/main.rs:10:59] kGkAtomsArrayOffset as usize = 61744
thread 'main' panicked at src/main.rs:10:20:
attempt to add with overflow
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
which is even funnier, because you can see that adding those two numbers is definitely not causing an overflow.
Let's take a look at what LLVM is doing with this code across optimization passes, with the following command (on the initial code without dbg!(), and with a #[inline(never)] on valid_static_atom_addr):
RUSTFLAGS="-C debuginfo=0 -O -Cllvm-args=-print-changed=quiet" cargo +nightly run -q
Here is what's most relevant to us. First, what the valid_static_atom_addr function looks like after inlining as_ptr into it:
The first basic block has two exits: 4 and 5, depending on how the add with overflow performed. Both of these basic blocks finish in... unreachable. The first one because it's the panic case for the overflow, and the second one because both values passed to get_unchecked are constants and equal, which the compiler has been hinted (with hint::assert_unchecked) that it's not possible. Thus, once get_unchecked is inlined, what's left is unreachable code. And because we're not rebuilding libstd, the debug_assert is not there before the unreachable annotation. Finally, the last basic block is now orphan.
Imagine you're an optimizer, and you want to optimize this code considering all its annotations. Well, you'll start by removing the orphan basic block. Then you see that the basic block 5 doesn't do anything, and doesn't have side effects, so you just remove it. Which means the branch leading to it can't happen. Basic block 4? There's a function call, so it would have to stay there, and so would the first basic block.
Guess what the Control-Flow Graph pass did? Just that:
And this is how a hint that undefined behavior can't happen transformed get_unchecked(STATIC_ATOM_COUNT) into an addition overflow that never happened.
Obviously, this all doesn't happen with -Zbuild-std, because in that case the get_unchecked branch has a panic call that is still relevant.
$ RUSTFLAGS=-O cargo +nightly run -q -Zbuild-std --target=x86_64-unknown-linux-gnu
thread 'main' panicked at /home/glandium/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/slice/index.rs:228:9:
slice::get_unchecked requires that the index is within the slice
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
thread caused non-unwinding panic. aborting.
What about non-debug builds?
$ cargo +nightly run --release -q
Illegal instruction
In those builds, because there is no call to display a panic, the entire function ends up unreachable:
Matthew Gaudet here from the SpiderMonkey team, giving Jan a break from newsletter writing.
Our newsletter is an opportunity to highlight some of the work that’s happened in SpiderMonkey land over the last couple of releases. Everyone is hard at work (though some of us are nicely rejuvenated from a winter break).
Feel free to email feedback on the shape of the newsletter to me, as I’d be interested in hearing what works for people and what doesn’t.
🚀 Performance
We’re continuing work on our performance story, with Speedometer 3 being the current main target. We like Speedometer 3 because it provides a set of workloads that we think better reflect the real web, driving improvements to real users too.
Here is a curated selection of just some of the performance related changes in this release:
I added a new optimization system called Fuses, which will allow us to make optimizations that depend on assumptions about the state of the virtual machine. The first optimization to make use of this landed in 123. While it wasn’t a noticeable improvement for Speedometer, it does provide about a 40% improvement on a destructuring microbenchmark. The hope is that this framework will be a foundation to build further improvement upon.
🔦 Contributor Spotlight: Mayank Bansal
Mayank Bansal has been a huge help to the Firefox project for years. Taking a special interest in performance, he is often one of the first to take note of a performance improvement or regression. He also frequently files performance bugs, some of which have identified fixable problems, along with comparative profiles which smooth the investigative process.
In his own words:
Mayank Bansal has been using Firefox Nightly for more than a decade. He is passionate about browser performance and scours the internet for interesting JavaScript test cases for the SM team to analyse. He closely monitors performance improvements and regressions on AWFY. You can check out some of the bugs he has filed by visiting the metabug here.
The SpiderMonkey team greatly appreciates all the help we get from Mayank. Thank you very much Mayank.
⚡ Wasm
Ben Visness enabled JIT Allocation of Structs, which helps improve Wasm GC performance by 5-15% depending on workload.
Ryan Hunt implemented the js-string-builtin proposal championed by Mozilla for fast access to strings from wasm in Bug 1863794.
I enabled ArrayBuffer.prototype.transfer by default (but André Bargull did all the real work in implementing this). This API provides ownership semantics to JS ArrayBuffers.
Contributor Jonatan Klemets has landed updates to our preliminary (disabled by default) support for Import Assertions.
I fixed a low volume crash related to synchronous events occurring while devtools is open on a page; this should eventually avoid about 10 crashes a week for people debugging in Firefox. As of 121.0.1 this should no longer occur. This was a fun investigation triggered by a seemingly impossible crash, and also an interesting case of a crash-report bug opened by a bot leading to an actionable fix.
⏰ Date parsing improvements
Contributor Vinny Diehl has continued improving our date parsing story, aiming to improve compatibility and handling of peculiar cases.
Fuzzing, that is, generating and running random test cases to see if they crash, turns out to be an unreasonably effective technique for finding bugs. The SpiderMonkey team works with a variety of fuzzers, both inside of Mozilla (👋 Hi fuzzing@!) and outside (Thank you all!).
Fuzzing can find test cases which are both very benign but worth fixing, as well as extremely serious security bugs. Security sensitive fuzz bugs are eligible for the Mozilla Bug Bounty Program.
To show off the kind of fun we have with fuzzing, I thought I’d curate some fun, interesting (and not hidden for security reasons) fuzz bugs.
Hello Thunderbird Community! I’m very happy to kick off a new monthly Thunderbird development recap in order to bring a deeper look and understanding of what we’re working on, and the status of these efforts. (We also publish monthly progress reports on Thunderbird for Android.)
These monthly digests will be in a very short format, focusing primarily on work that is currently being planned or initiated and not yet fully captured in Bugzilla. Nonetheless, we’re putting it out there to cherish and fully embrace the open nature of Thunderbird.
Without further ado, let’s get into it!
2024 Thunderbird Development Roadmaps Published
Over at DTN, we’ve published initial 2024 roadmaps for the work we have planned on Thunderbird for desktop, and Thunderbird for Android. These will be updated periodically as we continue to scope out each project.
Global Message Database
Our database is currently based on Mork, which is a very old paradigm that creates a lot of limitations, blocking us from doing anything remotely modern or expected (a real threaded conversation view is a classic example). Removing and reworking this implementation, which is at the very core of every message and folder interaction, is not an easy lift and requires a lot of careful planning and exploration, but the work is underway.
The first clean up effort is targeted at removing the old and bad paradigm of the “non-unique unique ID” (kudos to our very own Ben Campbell for coining this term), which causes all sorts of problems. You can follow the work in Bug 1806770.
Cards view final sprint
If you’re using Daily or Beta you might have already seen a lot of drastic differences from 115 for Cards View.
Currently, we’re shaping up the final sprint to polish what we’ve implemented and add extra needed features. We’re in the process of opening all the needed bugs and assigning resources for this final sprint. You can follow the progress by tracking this meta bug and all its child bugs.
As usual, we will continue sharing plans and mock-ups in the UX mailing list, so make sure to follow that if you’re interested in seeing early visual prototypes before any code is touched.
Rust Implementation and Exchange Support
This is a very large topic and exploration that requires dedicated posts (which are coming) and extensive recaps. The short story is that we were able to enable the usage of Rust in Thunderbird, therefore opening the doors for us to start implementing native support for the Exchange protocol by building and vendoring a Rust crate.
Once we have a stable and safe implementation, we will share that crate publicly on a GitHub repo so everyone will be able to vendor it and improve it.
Make sure to follow the tb-planning and tb-developers mailing lists to get more detailed and in-depth info on Rust and Exchange in Thunderbird soon.
As usual, if you want to see things as they land you can always check the pushlog and try running daily, which would be immensely helpful for catching bugs early.
Alessandro Castellani (he, him), Director of Product Engineering
If you’re interested in joining the discussion around Thunderbird development, consider joining one or several of our mailing list groups here.
Firefox development uncovers many cross-platform differences and unique features of its combination of dependencies. Engineers working on Firefox regularly overcome these challenges and while we can’t detail all of them, we think you’ll enjoy hearing about some so here’s a sample of a recent technical investigation.
This code should never crash, and yet it does. In fact, taking a closer look at the stack gives a first lead for investigation:
Although we crash into functions that belong to the C++ standard library, these functions appear to live in the firefox binary.
This is an unusual situation that never occurs with official builds of Firefox.
It is however very common for distributions to change configuration settings and apply downstream patches to an upstream source, so no worries about that.
Moreover, there is only a single build of Firefox Beta that is causing this crash.
We know this thanks to a unique identifier associated with any ELF binary.
Here, if we choose any specific version of Firefox 120 Beta (such as 120b9), the crashes all embed the same unique identifier for firefox.
Now, how can we guess what build produces this weird binary?
A useful user comment mentions that they regularly experience this crash since updating to 120.0~b2+build1-0ubuntu0.18.04.1.
And by looking for this build identifier, we quickly reach the Firefox Beta PPA.
Then indeed, we are able to reproduce the crash by installing it in an Ubuntu 18.04 LTS virtual machine: it occurs when loading any WebGL page!
With the binary now at hand, running nm -D ./firefox confirms the presence of several symbols related to libstdc++ that live in the text section (T marker).
Templated and inline symbols from libstdc++ usually appear as weak (W marker), so there is only one explanation for this situation: firefox has been statically linked with libstdc++, probably through -static-libstdc++.
Fortunately, the build logs are available for all Ubuntu packages.
After some digging, we find the logs for the 120b9 build, which indeed contain references to -static-libstdc++.
But why?
Again, everything is well documented, and thanks to well-trained digging skills we reach a bug report that provides interesting insights.
Firefox requires a modern C++ compiler, and hence a modern libstdc++, which is unavailable on old systems like Ubuntu 18.04 LTS.
The build uses -static-libstdc++ to close this gap.
This just explains the weird setup though.
What about the crash?
Since we can now reproduce it, we can launch Firefox in a debugger and continue our investigation.
When inspecting the crash site, we seem to crash because std::locale::classic() is not properly initialized.
Let’s take a peek at the implementation.
_S_initialize() is in charge of making sure that c_locale will be properly initialized before we return a reference to it.
To achieve this, _S_initialize() calls another function, _S_initialize_once().
void locale::_S_initialize()
{
#ifdef __GTHREADS
if (!__gnu_cxx::__is_single_threaded())
__gthread_once(&_S_once, _S_initialize_once);
#endif
if (__builtin_expect(!_S_classic, 0))
_S_initialize_once();
}
In _S_initialize(), we first go through a wrapper for pthread_once(): the first thread that reaches this code consumes _S_once and calls _S_initialize_once(), whereas other threads (if any) are stuck waiting for _S_initialize_once() to complete.
This looks rather fail-proof, right?
There is even an extra direct call to _S_initialize_once() if _S_classic is still uninitialized after that.
Now, _S_initialize_once() itself is rather straightforward: it allocates _S_classic and puts it within c_locale.
void
locale::_S_initialize_once() throw()
{
// Need to check this because we could get called once from _S_initialize()
// when the program is single-threaded, and then again (via __gthread_once)
// when it's multi-threaded.
if (_S_classic)
return;
// 2 references.
// One reference for _S_classic, one for _S_global
_S_classic = new (&c_locale_impl) _Impl(2);
_S_global = _S_classic;
new (&c_locale) locale(_S_classic);
}
The crash looks as if we never went through _S_initialize_once(), so let’s put a breakpoint there and see what happens.
And just by doing this, we already notice something suspicious.
We do reach _S_initialize_once(), but not within the firefox binary: instead, we only ever reach the version exported by liblgpllibs.so.
In fact, liblgpllibs.so is also statically linked with libstdc++, such that firefox and liblgpllibs.so both embed and export their own _S_initialize_once() function.
By default, symbol interposition applies, and _S_initialize_once() should always be called through the procedure linkage table (PLT), so that every module ends up calling the same version of the function.
If symbol interposition were happening here, we would expect that liblgpllibs.so would reach the version of _S_initialize_once() exported by firefox rather than its own, because firefox was loaded first.
So maybe there is no symbol interposition.
This can occur when using -fno-semantic-interposition.
Each version of the standard library would live on its own, independent from the other versions.
But neither the Firefox build system nor the Ubuntu maintainer seem to pass this flag to the compiler.
However, by looking at the disassembly for _S_initialize() and _S_initialize_once(), we can see that the exported global variables (_S_once, _S_classic, _S_global) are subject to symbol interposition:
These accesses all go through the global offset table (GOT), so that every module ends up accessing the same version of the variable.
This seems strange given what we said earlier about _S_initialize_once().
Non-exported global variables (c_locale, c_locale_impl), however, are accessed directly without symbol interposition, as expected.
We now have enough information to explain the crash.
When we reach _S_initialize() in liblgpllibs.so, we actually consume the _S_once that lives in firefox, and initialize the _S_classic and _S_global that live in firefox.
But we initialize them with pointers to well initialized variables c_locale_impl and c_locale that live in liblgpllibs.so!
The variables c_locale_impl and c_locale that live in firefox, however, remain uninitialized.
So if we later reach _S_initialize() in firefox, everything looks as if initialization has happened.
But then we return a reference to the version of c_locale that lives in firefox, and this version has never been initialized.
Boom!
Now the main question is: why do we see interposition occur for _S_once but not for _S_initialize_once()?
If we step back for a minute, there is a fundamental distinction between these symbols: one is a function symbol, the other is a variable symbol.
And indeed, the Firefox build system uses the -Bsymbolic-functions flag!
The ld man page describes it as follows:
-Bsymbolic-functions
When creating a shared library, bind references to global function symbols to the definition within the shared library, if any. This option is only meaningful on ELF platforms which support shared libraries.
As opposed to:
-Bsymbolic
When creating a shared library, bind references to global symbols to the definition within the shared library, if any. Normally, it is possible for a program linked against a shared library to override the definition within the shared library. This option is only meaningful on ELF platforms which support shared libraries.
Nailed it!
The crash occurs because this flag makes us use a weird variant of symbol interposition, where symbol interposition happens for variable symbols like _S_once and _S_classic but not for function symbols like _S_initialize_once().
This results in a mismatch regarding how we access global variables: exported global variables are unique thanks to interposition, whereas every non-interposed function will access its own version of any non-exported global variable.
With all the knowledge that we have now gathered, it is easy to write a reproducer that does not involve any Firefox code:
Understanding the bug is one step, and solving it is yet another story.
Should it be considered a libstdc++ bug that the code for locales is not compatible with -static-libstdc++ -Bsymbolic-functions?
Overall, perhaps the strangest part of this story is that this combination did not cause any trouble up until now.
Therefore, we suggested to the maintainer of the package to stop using -static-libstdc++.
There are other ways to use a different libstdc++ than available on the system, such as using dynamic linking and setting an RPATH to link with a bundled version.
Doing that allowed them to successfully deploy a fixed version of the package.
A few days after that, with the official release of Firefox 120, we noticed a very significant bump in volume for the same crash signature. Not again!
This time the volume was coming exclusively from users of NixOS 23.05, and it was huge!
After we shared the conclusions from our beta investigation with them, the maintainers of NixOS were able to quickly associate the crash with an issue that had not yet been backported for 23.05 and was causing the compiler to behave like -static-libstdc++.
We are grateful to the people who have helped fix this issue, in particular:
Rico Tzschichholz (ricotz) who quickly fixed the Ubuntu 18.04 LTS package, and Amin Bandali (bandali) who provided help on the way;
Martin Weinelt (hexa) and Artturin for their prompt fixes for the NixOS 23.05 package;
Nicolas B. Pierron (nbp) for helping us get started with NixOS, which allowed us to quickly share useful information with the NixOS package maintainers.
Right now during our relocation I'm not always in the same ZIP code as my T2, but we've still got to keep it up to date. To that end Firefox 122 is out with some UI improvements and new Web platform support.
A number of changes have occurred between Fx121 and Fx122 which improve our situation in OpenPOWER world, most notably being we no longer need to drag our WebRTC build changes around (and/or you can remove --disable-webrtc in your .mozconfig). However, on Fedora I needed to add ac_add_options --with-libclang-path=/usr/lib64 to my .mozconfigs (or ./mach build would fail during configuration because Rust bindgen could not find libclang.so), and I also needed to effectively fix bug 1865993 to get PGO builds to work again on Python 3.12, which Fedora 39 ships with. You may not need to do either of these things depending on your distro. There are separate weird glitches due to certain other components being deprecated in Python 3.12 that do not otherwise affect the build.
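Collected into a sketch, the .mozconfig tweak described above looks like this (needed on Fedora 39 in my case; whether you need it depends on your distro):

```
# Fedora 39: help Rust bindgen find libclang.so during configure
ac_add_options --with-libclang-path=/usr/lib64
```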
Developer Tools help developers write and debug websites on Firefox. This newsletter gives an overview of the work we’ve done as part of the Firefox 122 Nightly release cycle.
Firefox being an open source project, we are grateful to get contributions from people outside of Mozilla, like AAR.dev, who fixed a typo in the Profiler settings page (#1865895).
As said in previous newsletters, we worked on accessibility issues across the toolbox, and fixed a few of them in this release. First, there was a big focus in the Inspector view to make sure that various elements are all accessible and can be activated using only the keyboard:
the checkbox to disable/enable a property (#1844055)
While working on keyboard navigation in the Rules view, we felt like we could revisit the behavior of the Enter key when editing a selector, property name, or value. Since we know this is an important change, we wrote a specific blog post to explain our motivation behind it: https://fxdx.dev/rules-view-enter-key/
Finally, we fixed remaining focus indicator (#1865846, #186608) and color contrast (#1843332) issues, and properly labelled the button to toggle object properties in the console and debugger (#1844088).
This project is coming to an end, but we’ll likely have another project later this year to take care of remaining issues, especially in tools we didn’t investigate yet, like the Network panel.
Miscellaneous
Did you know that the console exposes two helper functions, $ and $$? They are similar to document.querySelector and document.querySelectorAll, the only difference being that $$ returns an array, while document.querySelectorAll returns a NodeList. Now those two helpers are eagerly evaluated, making it easier to query a specific element, as you get feedback about matching elements as you’re typing (#1616524).
You can now set beforeunload and unload event listener breakpoints in the Debugger, which should be pretty useful when investigating navigation/reload issues (#1569775).
The total transferred size in the Network Monitor does not include service worker requests any more (#1347146).
We fixed a few issues in the Inspector. First, we weren’t able to load stylesheet text, which could occur in projects using Vite (#1867816). We also introduced a bug in the Inspector markup view in 121, which was causing a single click to activate URLs in element attributes (e.g. the src attribute on <img> elements) (#1870214). Finally, using the clip-path editor could cause the value in the Rules view to be invalid (#1868263).
Thank you for reading this and using our tools. See you next month for a new round of updates!
as of 2024-01-09, ‘line-height’ and ‘vertical-align’ are now moderately supported (@mrobinson, #30902)
as of 2024-01-24, ‘Event#composedPath()’ is now supported (@gterzian, #31123)
We’ve started working on support for sticky positioning and tables in the new layout engine, with some very early sticky positioning code landing in 2023-11-30 (@mrobinson, #30686), the CSS tables tests now enabled (@mrobinson, #31131), and rudimentary table layout landing in 2024-01-20 under the layout.tables.enabled pref (@mrobinson, @Loirooriol, @Manishearth, #30799, #30868, #31121).
Geometry in our new layout engine is now being migrated from floating-point coordinates (f32) to fixed-point coordinates (i32 × 1/60) (@atbrakhi, #30825, #30894, #31135), similar to other engines like WebKit and Blink.
While floating-point geometry was thought to be better for transformation-heavy content like SVG, the fact that larger values are less precise than smaller values causes a variety of rendering problems and test failures (#29819).
As a result of these changes, we’ve made big strides in our WPT pass rates:
CSS2 floats (+3.3pp to 84.9%) and floats-clear (+5.6pp to 78.9%) continue to surge
we now surpass legacy layout in the CSS2 linebox tests (61.1% → 87.9%, legacy 86.4%)
we now surpass legacy layout in the css-flexbox tests (49.5% → 52.7%, legacy 52.2%)
we’ve closed 76% of the gap in key CSS2 tests (79.2% → 82.2%, legacy 83.1%)
Servo’s example browser now has Back and Forward buttons (@atbrakhi, #30805), and no longer shows the incorrect location when navigation takes a long time (@atbrakhi, #30518).
we now support Visual Studio 2022 on Windows (@mrobinson, #31148), the same version that rustup installs by default
Linux build issues
Several people have reported problems building Servo on newer Linux distro versions, particularly with clang 15 or with clang 16.
While we’re still working on fixing the underlying issues, there are some workarounds.
If your distro lets you install older versions of clang with a package like clang-14, you can tell Servo to use it with:
Alternatively you can try our new Nix-based dev environment, which should now work on any Linux distro (@delan, #31001).
Nix is a package manager with some unusual benefits.
Servo can use Nix to find the correct versions of all of its compilers and build dependencies without needing you to install them or run mach bootstrap.
All you need to do is install Nix, and export MACH_USE_NIX= to your environment.
See the wiki for more details!
WebDriver is a remote control interface that enables introspection and control of user agents. As such it can help developers to verify that their websites are working and performing well with all major browsers. The protocol is standardized by the W3C and consists of two separate specifications: WebDriver classic (HTTP) and the new WebDriver BiDi (Bi-Directional).
This newsletter gives an overview of the work we’ve done as part of the Firefox 122 release cycle.
Contributions
With Firefox being an open source project, we are grateful to get contributions from people outside of Mozilla.
The modifications outlined in this section are applicable to both WebDriver BiDi and Marionette, as both implementations utilize a shared set of common code:
New: Support for the “browsingContext.traverseHistory” command
The browsingContext.traverseHistory command enables clients to navigate pages within a specified browsing context backward and forward in history, similar to a user clicking the back and forward buttons in the browser’s toolbar. The command expects a delta number argument to specify how many history steps to traverse. For instance, to jump forward to the next page, delta should be set to 1. To navigate back 3 steps – and therefore skip 2 entries – delta should be -3, as in the example below:
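The original post’s example payload isn’t included here; a sketch of such a command message, following the WebDriver BiDi JSON message format (the id and context values are placeholders), could look like:

```json
{
  "id": 7,
  "method": "browsingContext.traverseHistory",
  "params": {
    "context": "<top-level browsing context id>",
    "delta": -3
  }
}
```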
Updates for the browsingContext.setViewport command
In preparation for the addition of emulating the device pixel ratio (DPR) in the browsingContext.setViewport command, a variant was needed to retain the current viewport size of the specified top-level browsing context. Using null as a value for the viewport argument, which is already supported, resets the viewport to its original size. Omitting the argument instead ensures that the viewport size remains unchanged.
This is a replacement for the extension implementation. The component should have full feature parity, and have improved performance and accessibility.
New keyboard accessibility has been added to the screenshots component. The selected region can now be moved/adjusted with the keyboard.
Until this is on by default, you can enable it manually by setting screenshots.browser.component.enabled to true in about:config.
Lee Salzman (:lsalzman) landed off-main-thread canvas for macOS, Linux and Android back in December for Firefox 123 – and that’s given us a 4-7% boost on Speedometer3! More details here.
Landed a fix to prevent AddonRepository.sys.mjs from mistakenly clearing add-ons metadata (stored in ProfD/addons.json) when an addon metadata refresh request is triggered while Gecko is disconnected from the network – Bug 1870905
Thanks to Hartmut Welpmann for fixing addon updates error handling of empty results and improving logging – Bug 1861372
WebExtensions Framework
Thanks to Gregory Pappas for contributing a fix to Bug 1870498, fixing a regression that prevented extension content scripts from accessing getCoalescedEvents() after it was marked as only available to Secure Contexts.
WebExtension APIs
Thanks to Cimbali for contributing changes to ContextualIdentityService.sys.mjs internals and the WebExtensions contextualIdentities API to introduce a new method that allows extensions to reorder the defined containers – Bug 1333395
Developer Tools
Aaron expanded the “Save as File” context menu to all types of Network responses in Netmonitor (bug). It was only enabled for images before.
v9 is changing to the new “flat” configuration. This changes the formats of plugins as well as the main configuration, which will help to fix some long standing issues.
Due to this change, we will be making some significant changes to how ESLint is configured across the tree over the coming months.
Migration Improvements
Device migration
We sent out a spotlight message a few weeks ago letting users without a Mozilla account who have lots of local data (like bookmarks, history, and passwords) know that they can use a Mozilla account to keep an end-to-end encrypted copy of that data in the cloud. This targeted clients in the English, Italian, French and German locales. We’re going to be doing the rest of the locales later this month, and include folks on the 115 ESR branch as well.
mconley has a prototype component that can create periodic snapshots of SQLite databases, which could end up being the basis of a runtime local profile backup system.
New Tab Page
The ASRouterAdmin code has been moved out from about:newtab. It now lives at about:asrouter. The DiscoveryStream tools still remain in about:newtab.
The off-main-thread Windows Jump List backend is currently enabled by default on Nightly, and the code to support it has ridden the trains to Beta 122. When that code reaches the Release channel after January 23rd, we plan to run an experiment to see if there’s a measurable improvement to input event response time with it enabled.
UPDATE: Our January Office Hours was fantastic! Here’s the full video replay.
A New Year of New Office Hours
We’re back from our end of year break, breaking in our new calendars, and ready to start 2024 with our renewed, refreshed, and refocused community office hours. Thank you to everyone who joined us for our November session! If you missed out on our chat about the new Cards View and the Thunderbird design process, you can find the video (which also describes the new format) in this blog post.
We’re excited for another year of bringing you expert insights from the Thunderbird Team and our broader community. To kick off 2024, and to build on November’s excellent discussion, we’ll be continuing our dive into another important aspect of the Thunderbird design process.
January Office Hours Topic: Message Context Menu
<figcaption class="wp-element-caption">Mock-up: designs shown are not final and subject to change. </figcaption>
We’ve been working on some significant (and what we think are pretty fantastic) UI changes to Thunderbird. Besides the new Cards View, we have some exciting overhauls to the Message Context Menu (aka the right-click menu) planned. UX Engineer Elizabeth Mitchell will discuss these changes, and most importantly, why we’re making them. Additionally, Elizabeth is one of the leaders on making Thunderbird accessible for all! We’re excited to hear how the new Message Context Menu will make your email experience easier and more effective.
If you’d like a sneak peek of the Context Menu plans, you can find them here.
And as always, if you have any questions you’d like to ask during the January office hours, you can e-mail them to officehours@thunderbird.net.
Join Us On Zoom
(Yes, we’re still on Zoom for now, but a Jitsi server for future office hours is in the works!)
Browsers are the principal gateway connecting people to the open Internet, acting as their agent and shaping their experience. The central role of browsers has long motivated us to build and improve Firefox in order to offer people an independent choice. However, this centrality also creates a strong incentive for dominant players to control the browser that people use. The right way to win users is to build a better product, but shortcuts can be irresistible — and there’s a long history of companies leveraging their control of devices and operating systems to tilt the playing field in favor of their own browser.
This tilt manifests in a variety of ways. For example: making it harder for a user to download and use a different browser, ignoring or resetting a user’s default browser preference, restricting capabilities to the first-party browser, or requiring the use of the first-party browser engine for third-party browsers.
For years, Mozilla has engaged in dialog with platform vendors in an effort to address these issues. With renewed public attention and an evolving regulatory environment, we think it’s time to publish these concerns using the same transparent process and tools we use to develop positions on emerging technical standards. So today we’re publishing a new issue tracker where we intend to document the ways in which platforms put Firefox at a disadvantage and engage with the vendors of those platforms to resolve them.
This tracker captures the issues we experience developing Firefox, but we believe in an even playing field for everyone, not just us. We encourage other browser vendors to publish their concerns in a similar fashion, and welcome the engagement and contributions of other non-browser groups interested in these issues. We’re particularly appreciative of the efforts of Open Web Advocacy in articulating the case for a level playing field and for documenting self-preferencing.
People deserve choice, and choice requires the existence of viable alternatives. Alternatives and competition are good for everyone, but they can only flourish if the playing field is fair. It’s not today, but it’s also not hard to fix if the platform vendors wish to do so.
We call on Apple, Google, and Microsoft to engage with us in this new forum to speedily resolve these concerns.
Back in November, we highlighted our ongoing efforts to make Servo more embeddable, and today we are a few steps closer!
Tauri is a framework for building desktop apps that combine a web frontend with a Rust backend, and work is already ongoing to expand it to mobile apps and other backend languages.
But unlike, say, Electron or React Native, Tauri is both engine-agnostic and frontend-agnostic, allowing you to use any frontend tooling you like and whichever web engine makes the most sense for your users.
To integrate Servo with Tauri, we need to add support for Servo in WRY, the underlying webview library, and the developers of Tauri have created a proof of concept doing exactly that!
While this is definitely not production-ready yet, you can play around with it by checking out the servo-wry-demo branch (permalink) and following the README.
While servoshell, our example browser, continues to be the “reference” for embedding Servo, this has its limitations in that servoshell’s needs are often simpler than those of a general-purpose embeddable webview.
For example, the “minibrowser” UI needs the ability to reserve space at the top of the window, and hook the presenting of new frames to do extra drawing, but it doesn’t currently need multiple webviews.
This is where working with the Tauri team has been especially invaluable for Servo — they’ve used their experience integrating with other embeddable webviews to guide changes on the Servo side.
Early changes include making it possible to position Servo webviews anywhere within a native window (@wusyong, #30088), and give them translucent or transparent backgrounds (@wusyong, #30488).
Support for multiple webviews in one window is needed for parity with the other WRY backends.
Servo currently has a fairly pervasive assumption that only one webview is active at a time.
We’ve found almost all of the places where this assumption was made (@delan, #30648), and now we’re breaking those findings into changes that can actually be reviewed and landed (@delan, #30840, #30841, #30842).
Support for multiple windows sounds similar, but it’s a lot harder.
Servo handles user input and drawing with a component known for historical reasons as the “compositor”.
Since the constellation — the heart of Servo — is currently associated with exactly one compositor, and the compositor is currently tightly coupled with the event loop of exactly one window, supporting multiple windows will require some big architectural changes.
@paulrouget’s extensive research and prior work on making Servo embeddable will prove especially helpful.
Offscreen rendering is critical for integrating Servo with apps containing non-Servo components.
For example, you might have a native app that uses Servo for online help or an OAuth flow, or a game that uses Servo for purchases or social features.
We can now draw Servo to an offscreen framebuffer and let the app decide how to present it (@delan, #30767), rather than assuming control of the whole window, and servoshell now uses this ability whenever the minibrowser is enabled (that is, unless run with --no-minibrowser).
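The division of labour here can be sketched in a few lines, engine-agnostically: the engine draws into a buffer it owns, and the embedding app decides what to do with the pixels (blit them into its own UI, upload them as a GPU texture, composite them with other content). None of these names are Servo's.

```rust
// Minimal sketch of the offscreen-rendering idea: render into a buffer,
// let the app present it. Illustrative only; not Servo's API.

struct OffscreenFramebuffer {
    width: usize,
    height: usize,
    pixels: Vec<u32>, // 0xAARRGGBB
}

impl OffscreenFramebuffer {
    fn new(width: usize, height: usize) -> Self {
        OffscreenFramebuffer {
            width,
            height,
            pixels: vec![0; width * height],
        }
    }

    /// Stand-in for the engine rendering a frame into the buffer.
    fn render_frame(&mut self, color: u32) {
        for px in &mut self.pixels {
            *px = color;
        }
    }
}

/// The app, not the engine, decides how to present the frame. Here the
/// "presentation" just samples the centre pixel, but it could be a texture
/// upload or a blit into a larger UI.
fn present(fb: &OffscreenFramebuffer) -> u32 {
    fb.pixels[(fb.height / 2) * fb.width + fb.width / 2]
}

fn main() {
    let mut fb = OffscreenFramebuffer::new(320, 240);
    fb.render_frame(0xFF00FF00); // opaque green
    assert_eq!(present(&fb), 0xFF00FF00);
    println!("frame presented by the app, not the engine");
}
```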
Precompiling mozangle and mozjs would improve developer experience by reducing initial build times.
We can now build the C++ parts of mozangle as a dynamic library (.so/.dylib/.dll) on Linux and macOS (@atbrakhi, mozangle#71), though more work is needed to distribute and make use of them.
We’re exploring two approaches to precompiling mozjs.
The easier approach is to build the C++ parts as a static library (.a/.lib) and cache the generated Rust bindings (@wusyong, mozjs#439).
Building a dynamic library (@atbrakhi, mozjs#432) will be more difficult, but it should reduce build times even further.
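The "cache the generated Rust bindings" part of the easier approach boils down to a familiar pattern, which this sketch illustrates with made-up names (the real mozjs work is more involved): fingerprint the inputs that feed the generator, and only regenerate when the fingerprint changes.

```rust
// Sketch of input-fingerprint caching for generated bindings.
// Illustrative only; not the actual mozjs build code.

use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Hash everything that influences the generated output: the header
/// contents and the generator version.
fn input_fingerprint(header_contents: &str, generator_version: &str) -> u64 {
    let mut h = DefaultHasher::new();
    header_contents.hash(&mut h);
    generator_version.hash(&mut h);
    h.finish()
}

/// Regenerate only when there is no cached fingerprint, or it differs.
fn needs_regeneration(cached: Option<u64>, current: u64) -> bool {
    cached != Some(current)
}

fn main() {
    let fp = input_fingerprint("struct JSContext;", "generator v1");
    assert!(needs_regeneration(None, fp)); // cold cache: regenerate
    assert!(!needs_regeneration(Some(fp), fp)); // warm cache: reuse
    let fp2 = input_fingerprint("struct JSContext; /* changed */", "generator v1");
    assert!(needs_regeneration(Some(fp), fp2)); // input changed: regenerate
    println!("bindings cache logic ok");
}
```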
2023 is over, so as I have for the past three years, it's time to take stock of what I did over the last twelve months. I'm starting this retrospective with the feeling of having done "nothing", but that's because I focused almost exclusively on one single project, our game Dawnmaker. As you'll see, the year was actually quite busy for me. On with the review!
Main projects
Arpentor Studio
Our company, co-founded with Alexis, stagnated this year. We are still two people, although we were three at two points in the year, with Aurélie on sound design early in the year and then Agathe on UX/UI design for two months. We have almost no income, we don't pay ourselves, and we've also cut operating expenses to the minimum to hold out as long as possible.
Mechanically, running the studio took less of my time this year. I had to put together an application for the final installment of a regional grant, go through several iterations of Dawnmaker's budget for negotiations (which unfortunately led nowhere) with a publisher, and handle the monthly administrative upkeep, which mostly means sending invoices to our accountant.
The main mistake I learned from, and corrected course on, in 2023 concerns our game's publishing strategy. Since early 2022 we had a roadmap that relied on finding a publisher, a partner who would finance and publish Dawnmaker. I now believe this was a mistake, especially given the current state of the video game industry: publishers are going through a financial drought, driven by many factors, including the 2021 financial bubble that followed COVID and the sharp rise in gaming habits, the many very big releases of 2023 (also delayed by COVID) that cannibalized indie game sales, and of course soaring bank interest rates. As a result, publishers were cautious in 2023, and it became very hard to sell them your game.
Basing your company's financial strategy on funding from an external partner you have no control over therefore strikes me as an enormous risk. Yet it's the strategy of the vast majority of game studios today, for a very simple reason: producing a video game is very expensive! For our part, we are currently fortunate to be able to work without a salary, thanks in particular to the RSA (France's basic income support). That situation, however, is neither enviable nor sustainable in the medium term.
Given all this, I decided to change our strategy for Dawnmaker. We are no longer planning around finding a publisher. Our main plan is now to release the game ourselves (self-publishing), on a timeline that lets us both bring it to a quality worthy of a commercial product and avoid sinking ourselves financially. So we have two deadlines: by early March we must have finished the game's vertical slice, a version that contains all of the game's systems but only part of its content. It's a very high-quality build, close to the expected final state, and therefore representative of what we want to make. We'll then take about three months to look for a publisher again, while running a marketing campaign and making a few improvements to the game based on feedback from our testers. If, come May, we haven't secured funding, we'll release the game ourselves, most likely at the end of June. It will be a cut-down version of the game, far from the content we'd like to have, but a functional, professional-quality version nonetheless. And of course, if a publisher commits to our game and funds it, we'll switch to the secondary plan: run a real production phase, hire a few more people, and release a complete version of the game, probably in early 2025.
In conclusion, Arpentor Studio is moving forward, but it's hard. 2024 will be a decisive year for the studio, bringing either a publisher for our first game or its release. Either way, it should bring money into the company, which I'm looking forward to!
Dawnmaker
The project Cities of Heksiga has changed names and is now called Dawnmaker! I spent most of my 2023 working on it, mainly on three fronts: programming, game design (designing the game's rules and content), and marketing.
Dawnmaker changed a lot this year. The game went from a very (very) basic 2D renderer to 3D at the start of the year, then came back to 2D over the summer. The move to 3D had been planned for a long time, but it turned out to be a mistake. Nearly every publisher we showed the game to pointed it out. The question that made us backtrack was: "what added value does 3D bring to the game?" We had a hard time answering it…
So we reversed course, though not entirely, since I took the opportunity to rewrite the whole renderer with a new technology optimized for 2D. It served us well: the game looks far better now! It also runs better on my aging machine, which bodes well for releasing it on mobile. I also improved our content editor so that Alexis can be as autonomous as possible when integrating building assets.
Here's a small snapshot of the game's progress through 2023:
In January
In May
In November
Beyond the visuals, we added a lot of content (around forty new buildings and around twenty new cards), important mechanics (notably a roguelike-style progression loop), and many playability improvements (new interfaces, thanks in particular to contributions from Menica Folden, drag-and-drop for playing cards, small animations everywhere…).
As announced in my 2022 retrospective, Dawnmaker made enormous progress this year, going from a prototype to a real video game. There's still a lot left to sort out, though: the progression loop still isn't functional, there's no onboarding for new players, and a large part of the game's interface needs rework… All of that, ideally, by March 2024! Suffice to say: it's going to be tight. But the game's release is getting closer, and that feels great! Maybe you'll even be able to buy Dawnmaker in 2024?
Side projects
Souls
As last year, I barely had a chance to touch Souls, my old competitive card game project. But only "barely", because yes, I did take it out of its box and played a game of it. It was a chance to remember where I'd left off, and above all to rediscover all the flaws of the current version. I'm still not actively working on it, but I'm hopeful I'll get back to it a bit in 2024.
Blog
At the start of the year, I set myself the goal of publishing 6 articles on this blog, one every two months. The goal was almost reached: I published 5.
Most of these articles are in English, since they also served as content for Arpentor Studio's newsletter, launched this year.
I had some trouble getting into a regular writing habit, but I built myself a system during the year (basically, a reminder every two months), and since then I've stuck to it properly! I'm hopeful I can keep up this pace in 2024 and continue sharing my experiences with you.
Other games
In 2023 I finally joined a local game designers' association, the Compagnie des zAuteurs Lyonnais (CAL). It's an informal group of board game designers who meet regularly in Lyon's board game bars. It was finally my way into that world, a chance to test some really nice prototypes, and above all to show my own. Because yes, I did keep working, occasionally, on board game prototypes.
The first goes by the code name "Little Brass Imhotep", because it's designed to sit at the crossroads of the play experiences of Little Town, Brass: Birmingham and Imhotep: The Duel. The central concept is this: there is a 5-by-5 board on which players construct buildings. These buildings can be activated to produce resources or victory points. Players have workers, which they place at the end of a row or column, activating every building in that row or column. Constructing a building scores victory points and creates or improves an engine, but it also gives your opponent opportunities to exploit it.
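The activation mechanic described above is concrete enough to sketch in a few lines. This is my own illustration of the rule as stated, not code from the prototype (which is a physical game); all the names are mine.

```rust
// Sketch of the "Little Brass Imhotep" worker-placement rule: placing a
// worker at the end of a row or column activates every building in it,
// including buildings the opponent constructed. Illustrative only.

const SIZE: usize = 5;

#[derive(Clone, Copy)]
struct Building {
    resources: u32, // resources granted when activated
}

struct Board {
    cells: [[Option<Building>; SIZE]; SIZE],
}

impl Board {
    fn new() -> Self {
        Board { cells: [[None; SIZE]; SIZE] }
    }

    fn build(&mut self, row: usize, col: usize, b: Building) {
        self.cells[row][col] = Some(b);
    }

    /// Sum the resources produced by every building in the row.
    fn activate_row(&self, row: usize) -> u32 {
        self.cells[row].iter().flatten().map(|b| b.resources).sum()
    }

    /// Same, for a column.
    fn activate_col(&self, col: usize) -> u32 {
        self.cells.iter().filter_map(|r| r[col]).map(|b| b.resources).sum()
    }
}

fn main() {
    let mut board = Board::new();
    board.build(1, 0, Building { resources: 2 });
    board.build(1, 3, Building { resources: 3 });
    board.build(4, 3, Building { resources: 1 });
    println!("row 1 yields {}", board.activate_row(1)); // 2 + 3 = 5
    println!("col 3 yields {}", board.activate_col(3)); // 3 + 1 = 4
}
```

The tension the design describes falls out naturally: every building you construct raises the value of its row and column for both players.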
The first playtests revealed a number of gaps in the game system, notably too much symmetry between resources and effects, which makes the core mechanic, constructing buildings, unappealing. The prototype has stayed there for now.
My second prototype of the year was born from the desire to cross the experience of a Magic draft, probably my favorite gaming experience, with the very short feedback loop of an autobattler. The first version of the game, code name "Cube Light" (yes, I know, I'm bad at names), looks like this: at a table of 8 players, everyone receives a deck of 4 cards (the same for each person), then starts drafting packs of 4 cards. Each player then builds a 7-card deck, discarding one of their cards. Then we play a 1-on-1 match: each player draws three cards, then simultaneously distributes those 3 cards, face down, across three locations laid out in the middle of the table. Once the cards are placed, they are revealed one at a time, alternating between players. Of course, each card has varied effects, as do the locations, so you have to place your cards in the right spot, and anticipate the next cards you'll draw, to create powerful combinations. At the end of the second turn the round is over, and you total the power of the characters played at each location. A player with strictly more power than their opponent at a location controls it, and the player who controls the most locations wins the round. Then a new draft phase begins, with players having changed seats. You build a 10-card deck and play a round of three turns. This repeats over 4 rounds, and at the end of the last round the player with the most victory points wins the game!
I realized, while producing this prototype, that it comes remarkably close to Challengers, an excellent game released in 2022 whose pitch is quite similar to mine: reproducing the autobattler experience as a board game. My goal, however, is an experience closer to Magic, that is, with more strategic decisions, both when choosing cards (the draft phase) and during the rounds.
The first playtest surfaced many areas for improvement, but the core of the game works well and is a solid base. I hope to take time this year to pick the prototype back up and turn it into a fun game, at least for my group of Magic players.
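The round-scoring rule above (strictly more power controls a location; most locations controlled wins the round) is simple enough to write down. Again, this is my own illustration of the stated rule, not code from the prototype.

```rust
// Sketch of Cube Light's round scoring: for each of the three locations,
// strictly more power means control; ties mean nobody controls it.

fn locations_controlled(mine: &[u32; 3], theirs: &[u32; 3]) -> (u32, u32) {
    let mut me = 0;
    let mut them = 0;
    for i in 0..3 {
        if mine[i] > theirs[i] {
            me += 1;
        } else if theirs[i] > mine[i] {
            them += 1;
        } // equal power: the location is uncontrolled
    }
    (me, them)
}

fn main() {
    // I control locations 0 and 2; location 1 is tied.
    assert_eq!(locations_controlled(&[5, 3, 4], &[2, 3, 1]), (2, 0));
    println!("scoring rule ok");
}
```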
My recommendations of the year
And that's it for the review of my 2023 work! Time to end this retrospective with a more fun part. Once again this year, I'd like to share the few cultural discoveries I most enjoyed over the last twelve months.
My video game of the year
2023 was a lean year for video games for me. Maybe spending my days working on a game keeps me from fully enjoying others? Maybe it's because I spent much of my playing time studying games related to Dawnmaker? Or is it simply circumstance that no game really grabbed me, or left a mark, this year?
In any case, the best game I played this year is Baldur's Gate 3. I'm a huge fan of the first two titles, on which I spent an enormous amount of time as a teenager. I approached the third installment with a lot of apprehension, but it didn't disappoint. It really feels like playing a true-blooded Baldur's Gate, but a modern one. Some characters are very endearing, the story is gripping, and the content is gigantic. That's almost the only real downside for me, actually: I hate missing things in a game, so I spent far too much time scouring everything. And I know I still missed plenty, because that's how the game is designed.
In short: Baldur's Gate 3 deserves its Game of the Year title.
My board games of the year
Too hard to pick a single game this year, so here are two: Spirit Island and Brass: Birmingham! Two heavy games that demand a lot of thought, one cooperative and the other competitive.
Spirit Island, the cooperative one, puts you in the role of the guardian spirits of an island being invaded by colonists. Each spirit has a different playstyle, special abilities, and a unique set of starting cards. Solo or with your allies, you must develop your resources (gain more energy to play your cards, acquire new, more powerful cards…) and use your cards to destroy the invaders, stop them from building towns and cities, and keep them from spreading blight across your lush island. It's a truly excellent game in which every turn is a big collective puzzle, with interactions between the players' abilities. Bonus: its complexity strongly limits the "alpha player" effect, where one player directs all the others.
Brass: Birmingham, by contrast, is a competitive game set in industrial revolution England. A pure management game: you construct buildings (coal mines, foundries, factories, manufactories…) to develop your resources and score victory points. You also build canals and railways, sell resources, and adapt to the cards in your hand to position yourself on the map. There's a strong planning component, counterbalanced by the importance of being opportunistic at times. It's not boardgamegeek's number 1 game for nothing!
My comic of the year
The Nice House on the Lake, volumes 1 and 2, wins my comic award for 2023! It's an American science fiction comic, a closed-circle story that starts very simply and turns, very quickly, toward something unsettling. There are interludes showing a dire future, a highly enigmatic character at the center of the plot, and stakes that build gradually toward a reveal at the end of volume 2 that really makes you want to read on! It's hard to say more, since all the pleasure of reading lies in discovering the plot, but it gets a big recommendation from me.
My book of the year
Incredibly, in 2023 my favorite book isn't fiction but a productivity book: How to Take Smart Notes. The author presents a note-taking method created by the sociologist Niklas Luhmann. The method is simple, but demands a certain discipline to reach its full potential. In short: take temporary notes constantly, then regularly turn them into "permanent" notes: self-sufficient, fully written out, and above all systematically linked to other notes. The idea is to build up a base of notes that you reread regularly, following links and, above all, creating new ones whenever it's relevant. It's both a way to learn better, by forcing yourself to write down what you learn and the ideas you develop, and a way to structure your thinking and articulate your ideas, turning them into novel, impactful tools.
Conclusions on 2023
Well then, what a strange year, as expected. Working this long on a single project, or nearly so, is exhausting. Fortunately, we were able to show concrete results during the year, notably thanks to our Discord server and the newsletter I launched. But now that I'm taking stock, the impression that nothing moved forward is very strong, even though it's completely false. At the end of 2022 I declared I still had plenty of energy; now, I have to admit that's less true. I'm counting on 2024 to shake all that up and give me a boost!
With that, I thank you warmly for reading, wish you a very happy new year 2024, and see you soon on this blog for a big announcement about Dawnmaker!
It's time, belatedly, to take stock of my 2022! As you'll read, the year was a busy one, which explains why I'm a bit late writing this post… But to make up for it, I've put a few cultural recommendations at the end!
So here's a summary of what I did in 2022…
Main projects
Arpentor Studio
My main project in 2022 was obviously the video game studio Alexis and I founded. I told most of the story in my post Starting a Games Studio [en], but I'd like to come back here to other aspects of that adventure, notably some of the mistakes we made.
At the start of the year, we joined the Let's GO incubator, run by the regional association Game Only. Applying was an excellent decision; the program brought us an enormous amount of knowledge, contacts, opportunities, and plenty of good fun too! But it led us into a fundamental mistake: we let ourselves be carried along by the knowledge being handed to us, without asking whether it was actually relevant to apply it at that moment.
Concretely, we changed our initial plan. We had wanted to focus on making a game relatively quickly, within a year to a year and a half. Swept along by the training sessions, notably those on funding, we revised that plan to make it bigger: involve more people, spend more money in order to ask for more, and so on. This change of strategy had several consequences:
We spent an enormous amount of time putting together funding applications, pitch decks and other money-hunting documents, and not enough actually working on our game. As a result, we fell far behind on its production. Yet without a reasonably polished game, without a real demo that shows off our skills, there's no hope of signing a contract with a publisher, without which we can't finish our game anyway.
We counted on funding that, as it turned out, was not as easy to obtain as expected. We started paying Alexis and myself, we hired an employee, we committed to travel expenses for trade shows… Failing to get the main public grant we were counting on put us in a situation that could have become critical: bankruptcy. Fortunately for us, we caught it early enough. Unfortunately, that meant letting our employee go, stopping our own salaries, and cutting future expenses.
We let our game grow, adding numerous features, to the point where I estimated we would have needed a team of more than 10 people for a year and a half to finish it. There too, we managed to cut the game's scope significantly and get back to something more reasonable for us, without compromising (too much) the vision we had.
This year was consequently grueling for me, a bit of a roller coaster: we spent part of the year dreaming of a big production, of lavish funding, of making a very ambitious game. Then the cinder block of reality came crashing down on the strawberry tartlet of our illusions, and we had to return to more reasonable things, make hard decisions, and hurt people.
Despite all that, or thanks to all that, I learned an enormous amount in 2022: about producing a game, company strategy, recruitment, relationships with publishers… The timing wasn't always right for learning these things, but I know we'll remember them when the time comes, and that it won't have been for nothing. The key, as a great man recently told me, isn't to stop making mistakes: it's to always make new ones.
If I had to start over tomorrow, I would stick to the plan of starting small and growing very gradually. Begin with what are practically jam games, made in just a few days, then a game in one month, then two, then four, and so on. The idea is to build skills slowly but surely across the whole production chain of a video game, and to build a reputation by regularly releasing content. It's a model that has worked well for other studios, and it strikes me as genuinely healthy for someone like me who doesn't have 10 years of industry experience. It's also, I believe, a good way to build a financially stable company in this difficult business.
To conclude, Arpentor Studio is doing fine. At the end of the year we made sure to right the ship, and we are now heading in a direction that feels more coherent and safer. We probably won't release a game in 2023, but we'll make huge progress on it, grow the team, and put everything in place to release the best possible game in 2024.
Status: in progress.
Cities of Heksiga
A video game studio obviously means a video game. It's not really a secret (even if I've said little about it): we've been working for a bit over a year on a game we currently call Cities of Heksiga. It's a solo strategy game, for PC and mobile, set in a steampunk fantasy universe. It's something like a digital board game, in the vein of Terraforming Mars, mixing deck building (improving a deck of cards over the course of a game by acquiring ever stronger or more synergistic cards) with tile placement on a board. I won't say more for now because we still have a lot to stabilize, but that will come soon enough. Know that we're currently aiming for a release in the first half of 2024.
On this game I'm in charge of programming (the game is built with web technologies, in TypeScript, with an interface that uses Svelte) as well as game design, that is, designing the game's mechanics. Alexis, for his part, is responsible for art direction, creating all the graphical assets, and the game's narrative. We are also joined by Aurélie, who creates the music and all the sound effects that enrich the experience.
In 2022 I worked on several prototypes of the game (I count at least a dozen according to our documentation), iterating each time on the game's core mechanics to find a formula that works. I made a few paper prototypes, but I quickly moved to digital versions, because our mechanics involved a whole set of calculations and automatic actions that are hard to perform by hand.
The Cities of Heksiga prototype as of January 12, 2023
I also worked on tooling, notably a content management tool for the game: I have a very simple interface that lets me quickly create a new building, or update an existing one, then export everything in a single click. Using web technologies lets me be very efficient here, and I'm hopeful I can set up a finely tuned game design workflow within a few months.
At the end of 2022 we are finally, if laboriously, wrapping up our prototyping phase. That is, we have consolidated the game's core mechanics and validated them (well, not quite, but it's underway and I'm confident), and we can now move on to what's next: building a really great demo, and gradually fleshing out the game with new mechanics and content.
As I said in the previous section, we spent too little time working on this game this year. But that had one advantage: we had time to get it playtested, and to gather calm, constructive feedback on the strengths and weaknesses of our various prototypes. In the end we were able to identify fundamental problems and fix them, which would have been harder with our heads down in the work. A blessing in disguise!
In 2023, Cities of Heksiga should really take shape, going from a prototype to a proper demo, then to a vertical slice, a version representative of what we want the final game to be. We currently plan to release the game in the first half of 2024.
Status: in progress.
Side projects
Souls
Souls, my online competitive card game, took a long break in 2022. In the middle of everything else, I simply didn't have time to get back to it. But all my other work is aimed at building skills and creating a context in which Souls can become a success. So in a way, it's still moving forward!
Status: on hold.
Board Game Jam 2
Here is my big side project of recent months: organizing a board game creation jam. It's an idea my friend Aurélien and I had had for a looong time, which finally came together in early 2020 through the Game Dev Party association… only to be cut off midway by the announcement of the first COVID lockdown. So I'm very happy to have finally seen a real Board Game Jam through to the end!
But what is this thing, you ask? A jam is a creation event, originally for video games, done in teams, usually over a weekend. You gather about fifty people in one physical place; they split into groups and spend their weekend creating a video game from scratch. In Lyon, the Game Dev Party association has made organizing these events its specialty since 2011, and I've been an organizing member since 2012. A Board Game Jam is the same principle, but for board games.
The table of materials made available to participants
The event took place in mid-January and was a resounding success: about 40 participants and 9 games created over the weekend. The weekend went off without any major hitch (let's forget the few technical glitches on Sunday evening), people seemed happy, and the games produced were incredibly engaging and varied.
I'm particularly delighted with this format. Working on a video game poses a real technical challenge: you have to program it, illustrate it, add sound… The iteration time is relatively long between having an idea and actually being able to test it, keyboard, mouse or controller in hand. With board games, that iteration time shrinks dramatically. A new card idea? A scrap of paper, a pencil, and presto, the card is created and ready to be tested.
Running this event was exhausting, but I'm proud of what we achieved, and I'm counting heavily on other people to organize more events like it. Because it really is frustrating to watch all these people create games and not take part!!!
Status: done.
Blog
So I'm noting this down for my future self: be careful to stay open. It's draining to make progress without anything concrete "shipping", without the satisfaction of having finished something. So, Adrian of 2022: don't forget to talk about what you're doing and to show your progress, even if it's ugly, even if it barely works, because it will give you a sense of moving forward, and it will help you a lot!
Missed! I only published two articles in 2022: How I did my market research on Steam [en] in March, then Starting a Games Studio [en] in August. The latter was a huge piece of work, done over several months, but it's still far too little for me. Fortunately, I did share my work, just elsewhere: on a Discord server we use for our game's playtests, and within the Let's GO incubator. I didn't feel the need to write more, even though it remains a goal I'd like to meet some day. I've learned a lot from people who shared their experiences before me, and I want to pay that service forward. That's the spirit in which I wrote those two posts, but I think I can do more.
So, goal for 2023: 6 posts over the year, one every two months!
My recommendations of the year
To wrap up this post, I want to try something new: recommending a few cultural works that made an impression on me this year.
My video game of the year
Without question, Planet Crafter was my game of 2022. It mixes survival, exploration and base building on an uninhabitable planet, with the goal of terraforming it. The game is in early access, but its content is already enormous, and the updates have all been very beneficial. I was blown away discovering certain places, I spent hours building myself a beautiful base, the progression is superbly paced, and there's always something to do. In short: I recommend you play Planet Crafter!
PS: I learned from January's CanardPC that Planet Crafter's creators are a couple from Toulouse. They made this game, just the two of them. Very impressive. :-)
My board game of the year
I was won over by Terraforming Mars: Expédition Arès. Blending the cards of the wonderful original Terraforming Mars with the shared-action mechanic of Race for the Galaxy hit the mark completely for me. It's everything I love: pure engine building, with planning, a touch of bluffing, and just the right amount of resource management. It's accessible, and it plays (relatively) fast, between 1h and 1h30.
My comic of the year
I award comic of the year to Bolchoï Arena, by Boulet and Aseyn. Volume 1 dates back to 2018, but I only discovered the series in 2022 with the release of volume 3, out of a planned five books. In this science fiction story, we follow the wanderings of a young woman through the Bolchoï, a gigantic online virtual world that reproduces the known universe in exact detail. Until, of course, wild things start happening that raise a ton of questions. There's adventure, exploration, geopolitics, existential questions about our relationship to virtual worlds, and much more that I can't reveal without spoiling. I really can't wait to read what comes next; the first three volumes are excellent!
My book of the year
Andy Weir, author of the SF novel The Martian, which was adapted into the eponymous film starring Matt Damon (an excellent adaptation, by the way), has released two other books: Artemis and Project Hail Mary. While Artemis is a very pleasant read, Project Hail Mary was a monumental shock. The delightfully cynical main character, the flashback narration that steadily builds up both understanding and stakes, and an incredible twist in the middle of the book that completely changes the game: I loved this book and can only recommend it to everyone, it's a marvel.
Conclusions on the year 2022
2022 was an even more demanding year than I had anticipated. But I learned a tremendous amount, about many things. I was in turn a programmer, game designer, producer, entrepreneur, recruiter, organizer… That's a lot for one person, and it's exhausting, but I have no regrets! Through it all, I genuinely managed to look after myself, to avoid overloading myself with work, to take (long) vacations, and that's a very good thing. I'm not burned out, I still have plenty of energy for 2023, and I'm confident about the future.
Happy 2023 to all of you, dear readers, and thank you from the bottom of my heart for following me on these adventures!
A significant part of our work on localization at Mozilla happens within the space of Internet standards. We take seriously our commitments that stem from the Mozilla Manifesto:
We are committed to an internet that includes all the peoples of the earth — where a person’s demographic characteristics do not determine their online access, opportunities, or quality of experience.
To us, this means that it’s not enough to strive to improve the localization of our products, but that we need to improve the localizability of the Internet as a whole. We need to take the lessons we are learning from our work on Firefox, Thunderbird, websites, and all our other projects, and make them available to everyone, everywhere.
That’s a pretty lofty goal we’ve set ourselves, but to be fair it’s not just about altruism. With our work on Fluent and DOM Localization, we’re in a position where it would be far too easy to rest on our laurels, and to consider what we have “good enough”. To keep going forward and to keep improving the experiences of our developers and localizers, we need input from the outside that questions our premises and challenges us. One way for us to do that is to work on Internet standards, presenting our case to other experts in the field.
In 2023, a large part of our work on localization standards has been focused on Unicode MessageFormat 2 (aka "MF2"), an upcoming message formatting specification, as well as other specifications building on top of it. Work on this has been ongoing since late 2019, and Mozilla has been one of the core participants from the start. The base MF2 spec is now slated for an initial "technology preview" release as part of the Unicode CLDR release in spring 2024.
Compared to Fluent, MF2 corresponds to the syntax and formatting of a single message pattern. Separately, we’ve also been working on the syntax and representation of a resource format for messages (corresponding to Fluent’s FTL files), as well as championing JavaScript language proposals for formatting messages and parsing resources. Work on standardizing DOM localization (as in, being able to use just HTML to localize a website) is also getting started in W3C/WHATWG, but its development is contingent on all the preceding specifications reaching a more stable stage.
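The proposed JavaScript APIs build on machinery the language already ships: the CLDR plural categories that selector-based messages depend on are exposed today through Intl.PluralRules. As a minimal sketch of the idea (the Intl.MessageFormat API itself is still a proposal, so this uses only stable, standardized APIs; the message and function names are illustrative):

```javascript
// Variant selection by CLDR plural category, as a selector-based
// message formatter would do it internally.
const rules = new Intl.PluralRules('en');

function formatEmails(count) {
  // One variant per plural category; '*' / 'other' is the fallback.
  const variants = {
    one: `You have ${count} unread email.`,
    other: `You have ${count} unread emails.`,
  };
  return variants[rules.select(count)] ?? variants.other;
}

console.log(formatEmails(1)); // "You have 1 unread email."
console.log(formatEmails(5)); // "You have 5 unread emails."
```

A standardized message formatter moves this variant table out of code and into translatable resources, which is precisely what the resource-format and parsing proposals above are about.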
So, besides the long term goal of improving localization everywhere, what are the practical results of these efforts? The nature of this work is exploratory, so predicting its results has not been, and will not be, entirely possible. One tangible benefit that we've already been able to identify and deploy is a reconsideration of how Fluent messages with internal selectors — like plurals — are presented to localizers: rather than showing a message in pieces, we've adopted the MF2 approach of presenting a message with its selectors (possibly more than one) applying to the whole message. This duplicates some parts of the message, but it also makes it easier to read and to translate via machine translation, as well as ensuring that it is internally consistent across all languages.
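To illustrate the difference (the MF2 syntax shown here is the draft technology-preview form and may still change; the message itself is a made-up example), compare a Fluent message whose plural selector wraps only part of the pattern with the same message presented MF2-style, where the selector applies to the whole message:

```
# Fluent (FTL): the selector covers only a fragment of the message
emails =
    You have { $count ->
        [one] one unread email
       *[other] { $count } unread emails
    } in your inbox.

# MF2 draft syntax: each variant is a complete message
.match {$count :number}
one {{You have one unread email in your inbox.}}
*   {{You have {$count} unread emails in your inbox.}}
```

The MF2 form repeats the shared text, but each variant now reads as a full sentence, which is easier for localizers and machine translation to handle correctly.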
Another byproduct of this work is MF2’s message data model: Unlike anything before it, it is capable of representing all messages in all languages in all formats. We are currently refactoring our tools and internal systems around this data model, allowing us to deduplicate file format-specific tooling, making it easier to add new features and support new syntaxes. In Pontoon, this approach already made it easier to introduce syntax highlighting and improve the editing experience for right-to-left scripts. To hear more, you can join us at FOSDEM next month, where we’ll be presenting on this in more detail!
At Mozilla, we do not presume to have all the answers, or to always be right. Instead, we try to share what we have, and to learn from others. With many points of view, we gain greater insights – and we help make the world a better place for all peoples of all demographic characteristics.