Patrick Cloke: Joining the Matrix Spec Core Team

I was recently invited to join the Matrix “Spec Core Team”, the group that stewards the Matrix protocol. From their own documentation:

The contents and direction of the Matrix Spec is governed by the Spec Core Team; a set of experts from across the whole Matrix community, representing all aspects of the Matrix ecosystem. The Spec Core Team acts as a subcommittee of the Foundation.

This was announced a couple of weeks ago and I’m just starting to get my feet wet! You can see an interview between myself, Tulir (another new member of the Spec Core Team), and Matthew (the Spec Core Team lead) in today’s This Week in Matrix. We cover a range of topics including Thunderbird (and Instantbird), some improvements I hope to make, and more.

Patrick Cloke: Synapse URL Previews

Matrix includes the ability for a client to request that the server generate a “preview” for a URL. The client provides a URL to the server, which returns Open Graph data as a JSON response. This leaks any URLs detected in the message content to the server, but protects the end user’s IP address, etc. from the site being previewed. [1] (Note that clients generally disable URL previews for encrypted rooms, but the feature can be enabled.)
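To illustrate, the preview endpoint (GET /_matrix/media/v3/preview_url in recent spec versions) returns a flat JSON object of Open Graph properties. The response body below is illustrative, not captured from a real homeserver, and `summarize_preview` is a hypothetical helper showing how a client might pull out the fields it cares about:

```python
import json

# An illustrative (not real) Open Graph response from the preview endpoint.
response_body = json.dumps({
    "og:title": "Example Page",
    "og:description": "A page used for illustrative examples.",
    "og:image": "mxc://example.org/abc123",
    "matrix:image:size": 12345,
})

def summarize_preview(body: str) -> dict:
    """Pull the title, description, and image out of an Open Graph response."""
    og = json.loads(body)
    return {
        "title": og.get("og:title"),
        "description": og.get("og:description"),
        "image": og.get("og:image"),  # an mxc:// URI served by the homeserver
    }

print(summarize_preview(response_body)["title"])  # -> Example Page
```

Note that the image is returned as an mxc:// URI, i.e. media re-hosted by the homeserver, which is part of how the previewing user's IP address stays hidden from the target site.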

Improvements

Synapse implements the URL preview endpoint, but it was a bit neglected. I was one of the few main developers running with URL previews enabled and sank a bit of time into improving them for my own sake. Some highlights of the improvements made include (in addition to lots and lots of refactoring):

I also helped review many changes by others:

  • Improved support for encodings: #10410.
  • Safer content-type support: #11936.
  • Attempts to fix Twitter previews: #11985.
  • Remove useless elements from previews: #12887.
  • Avoid crashes due to unbounded recursion: GHSA-22p3-qrh9-cx32.

And also fixed some security issues:

  • Apply url_preview_url_blacklist to oEmbed and pre-cached images: #15601.

Results

Overall, the results improved (from my point of view). I tested 26 URLs (based on ones that had previously been reported or found to give issues). See the table below for results at a few versions. The error reason was also broken out into whether JavaScript was required or some other error occurred. [2]

Version  Release date  Successful previews  JavaScript-required errors  Found image & description
1.0.0    2019-06-11    15                   4                           14
1.12.0   2020-03-23    18                   4                           17
1.24.0   2020-12-09    20                   1                           16
1.36.0   2021-06-15    20                   1                           16
1.48.0   2021-11-30    20                   1                           11
1.60.0   2022-05-31    21                   0                           21
1.72.0   2022-11-22    22                   0                           21
1.84.0   2023-05-23    22                   0                           21
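To put the raw counts in perspective, the success column can be converted into a rate over the 26 tested URLs (a quick back-of-the-envelope check using the figures above):

```python
# Successful-preview counts per release, taken from the table above.
results = {
    "1.0.0": 15,
    "1.12.0": 18,
    "1.24.0": 20,
    "1.36.0": 20,
    "1.48.0": 20,
    "1.60.0": 21,
    "1.72.0": 22,
    "1.84.0": 22,
}

TOTAL_URLS = 26  # number of problem URLs in the test set

for version, successes in results.items():
    rate = successes / TOTAL_URLS * 100
    print(f"{version}: {successes}/{TOTAL_URLS} ({rate:.0f}%)")
```

That works out to roughly 58% of the sampled URLs previewing successfully at 1.0.0 versus roughly 85% at 1.84.0.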

Future improvements

I am no longer working on Synapse, but some of the ideas I had for additional improvements included:

There’s also a ton more that could be done here if you wanted, e.g. handling more data types (text and PDF are the ones I have frequently come across that would be helpful to preview). I’m sure there are also many other URLs that don’t work right now for some reason. Hopefully the URL preview code continues to improve!

[1] See some ancient documentation on the tradeoffs and design of URL previews. MSC4095 was recently written to bundle the URL preview information into events.
[2] This was done by instantiating different Synapse versions via Docker and asking them to preview URLs. (See the code.) This is not a super realistic test since it assumes that URLs are static over time. In particular, some sites (e.g. Twitter) like to change what they allow you to access without being authenticated.

The Mozilla Blog: Next steps for Mozilla and Trustworthy AI

(In short: Mozilla has updated its take on the state of AI — and what we need to do to make AI more trustworthy. Read the paper and share your feedback: AIPaper@mozillafoundation.org.)

In 2020, when Mozilla first focused its philanthropy and advocacy on trustworthy AI, we published a paper outlining our vision. We mapped the barriers to a better AI ecosystem — barriers like centralization, algorithmic bias, and poor data privacy norms. We also mapped paths forward, like shifting industry norms and introducing new regulations and incentives. 

The upshot of that report? We learned AI has a lot in common with the early web. So much promise, but also peril — with harms spanning privacy, security, centralization, and competition. Mozilla’s expertise in open source and holding incumbent tech players accountable put us in a good place to unpack this dynamic and take action. 

A lot has changed since 2020. AI technology has grown more centralized, powerful, and pervasive; its risks and opportunities are not abstractions. Conversations about AI have grown louder and more urgent. Meanwhile, within Mozilla, we’ve made progress on our vision, from research and investments to products and grantmaking.

Today, we’re publishing an update to our 2020 report — the progress we’ve made so far, and the work that is left to do.

[Read: Accelerating Progress Toward Trustworthy AI]

Our original paper focused on four strategic areas: 

  • Changing AI development norms,
  • Building new tech and products,
  • Raising consumer awareness,
  • Strengthening AI regulations and incentives. 

This update revisits those areas, outlining what’s changed for the better, what’s changed for the worse, and what’s stayed the same. At a very high level, our takeaways are:

  • Norms: The people who broke the internet are the ones building AI. 
  • Products: More trustworthy AI products need to be mainstream. 
  • Consumers: A more engaged public still needs better choices on AI. 
  • Policy: Governments are making progress while grappling with conflicting influences. 

A consistent theme across these areas is the importance and potential of openness for the development of more trustworthy AI — something Mozilla hasn’t been quiet about.

Our first trustworthy AI paper was both a guidepost and map, and this one will be, too. Within are Mozilla’s plans for engaging with AI issues and trends. The paper outlines five key steps Mozilla will take in the years ahead (like making open-source generative AI more trustworthy and mainstream), and also five steps the broader movement can take (like pushing back on regulations that would make AI even less open). 

Our first paper was also “open source,” and this one is, too. We are seeking input on the report and on the state of the AI ecosystem more broadly. Through your comments and a series of public events, we will take feedback from the AI community and use it to strengthen our understanding and vision for the future. Please contact us at AIPaper@mozillafoundation.org and send us your feedback on the report, as well as examples of trustworthy AI approaches and applications.

The movement for trustworthy AI has made meaningful progress since 2020, but there’s still much more work to be done. It’s time to redouble our efforts and recommit to our core principles, and this report is Mozilla’s next step in doing that. It will take all of us, working together, to turn this vision into reality. There’s no time to waste — let’s get to work.

The post Next steps for Mozilla and Trustworthy AI appeared first on The Mozilla Blog.

This Week In Rust: This Week in Rust 535

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @ThisWeekInRust on Twitter or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

Official
Foundation
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs
Miscellaneous

Crate of the Week

This week's crate is kind, a helper crate for typed UUIDs.

Thanks to Denys Séguret for the self-suggestion!

Please submit your suggestions and votes for next week!

Call for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization. The following RFCs would benefit from user testing before moving forward:

If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Call for Participation; projects and speakers

CFP - Projects

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

CFP - Speakers

Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.

If you are an event organizer hoping to expand the reach of your event, please submit a link to the submission website through a PR to TWiR.

Updates from the Rust Project

508 pull requests were merged in the last week

Rust Compiler Performance Triage

Relatively few PRs affecting performance, but massive improvements thanks to the update to LLVM 18 (PR #12005), as well as the merging of two related compiler queries (PR #120919) and other small improvements from a rollup (PR #121055).

Triage done by @pnkfelix. Revision range: 74c3f5a1..5af21304

3 Regressions, 1 Improvement, 6 Mixed; 1 of them in rollups. 65 artifact comparisons made in total.

Full report here

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs
Tracking Issues & PRs
Rust
New and Updated RFCs

Upcoming Events

Rusty Events between 2024-02-21 - 2024-03-20 🦀

Virtual
Europe
North America
Oceania

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Jobs

Please see the latest Who's Hiring thread on r/rust

Quote of the Week

Shared mutable state is evil, and you can solve it by forbidding mutation, or by forbidding sharing. Rust supports both.

kornel on Lobste.rs

Thanks to Aleksey Kladov for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, cdmistman, ericseppanen, extrawurst, andrewpollack, U007D, kolharsam, joelmarcey, mariannegoldin, bennyvasquez.

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

The Rust Programming Language Blog: Rust participates in Google Summer of Code 2024

We're writing this blog post to announce that the Rust Project will be participating in Google Summer of Code (GSoC) 2024. If you're not eligible or interested in participating in GSoC, then most of this post likely isn't relevant to you; if you are, this should contain some useful information and links.

Google Summer of Code (GSoC) is an annual global program organized by Google that aims to bring new contributors to the world of open-source. The program pairs organizations (such as the Rust Project) with contributors (usually students), with the goal of helping the participants make meaningful open-source contributions under the guidance of experienced mentors.

As of today, the organizations that have been accepted into the program have been announced by Google. The GSoC applicants now have several weeks to send project proposals to organizations that appeal to them. If their project proposal is accepted, they will embark on a 12-week journey during which they will try to complete their proposed project under the guidance of an assigned mentor.

We have prepared a list of project ideas that can serve as inspiration for potential GSoC contributors that would like to send a project proposal to the Rust organization. However, applicants can also come up with their own project ideas. You can discuss project ideas or try to find mentors in the #gsoc Zulip stream. We have also prepared a proposal guide that should help you with preparing your project proposals.

You can start discussing the project ideas with Rust Project maintainers immediately. The project proposal application period starts on March 18, 2024, and ends on April 2, 2024 at 18:00 UTC. Take note of that deadline, as there will be no extensions!

If you are interested in contributing to the Rust Project, we encourage you to check out our project idea list and send us a GSoC project proposal! Of course, you are also free to discuss these projects and/or try to move them forward even if you do not intend to (or cannot) participate in GSoC. We welcome all contributors to Rust, as there is always enough work to do.

This is the first time that the Rust Project is participating in GSoC, so we are quite excited about it. We hope that participants in the program can improve their skills, but also would love for this to bring new contributors to the Project and increase the awareness of Rust in general. We will publish another blog post later this year with more information about our participation in the program.

Mozilla Performance Blog: Web Performance @ FOSDEM 2024

FOSDEM (Free and Open Source Software Developers’ European Meeting) is one of the largest gatherings of open-source enthusiasts, developers, and advocates worldwide. Each year there are many focused developer rooms (devrooms), managed by volunteers, and this year’s edition on 3-4 February saw the return of the Web Performance devroom managed by Peter Hedenskog from Wikimedia and myself (Dave Hunt) from Mozilla. Thanks to so many great talk proposals (we easily could have filled a full day), we were able to assemble a fantastic schedule, and at times the room was full, with as many people standing outside hoping to get in!

Dive into the talks

Thanks to the FOSDEM organisers and preparation from our speakers, we successfully managed to squeeze nine talks into the morning with a tight turnaround time. Here’s a rundown of the sessions:

1. The importance of Web Performance to Information Equity

Bas Schouten kicked off the morning with his informative talk on the vital role web performance plays in ensuring equal access to information and services for those with slower devices.



2. Let’s build a RUM system with open source tools

Next up we had Tsvetan Stoychev share what he’s learned working on Basic RUM – an open source real user monitoring system.



3. Better than loading fast… is loading instantly!

At this point the room was at capacity, with at least as many people waiting outside! Next, Barry Pollard shared details on how to score near-perfect Core Web Vitals in his talk on pre-fetching and pre-rendering.



4. Keyboard Interactions

Patricija Cerkaite followed with her talk on how she helped to improve measuring keyboard interactions, and how this influenced Interaction to Next Paint, leading to a better experience for Input Method Editors (IME).



5. Web Performance at Mozilla and Wikimedia

Midway through the morning, Peter Hedenskog & myself shared some insights into how Wikimedia and Mozilla measure performance in our talk. Peter shared some public dashboards, and I ran through a recent example of a performance regression affecting our page load tests.



6. Understanding how the web browser works, or tracing your way out of (performance) problems

We handed the spotlight over to Alexander Timin for his talk on event tracing and browser engineering based on his experience working on the Chromium project.



7. Fast JavaScript with Data-Oriented Design

The morning continued to go from strength to strength, with Markus Stange demonstrating in his talk how to iterate and optimise a small example project and showing how easy it is to use the Firefox Profiler.



8. From Google AdSense to FOSS: Lightning-fast privacy-friendly banners

As we got closer to lunch, Tim Vereecke teased us with hamburger banner ads in his talk on replacing Google AdSense with open source alternative Revive Adserver to address privacy and performance concerns.



9. Insights from the RUM Archive

For our final session of the morning, Robin Marx introduced us to the RUM Archive, shared some insights and challenges with the data, and discussed the part real user monitoring plays alongside other performance analysis.



Beyond the devroom

It was great to see that the topic of web performance wasn’t limited to our devroom, with talks such as Debugging HTTP/3 upload speed in Firefox in the Mozilla devroom, Web Performance: Leveraging Qwik to Meet Google’s Core Web Vitals in the JavaScript devroom, and Firefox power profiling: a powerful visualization of web sustainability in the main track.

Acknowledgements

I would like to thank all the amazing FOSDEM volunteers for supporting the event. Thank you to our wonderful speakers and everyone who submitted a proposal for providing us with such an excellent schedule. Thank you to Peter Hedenskog for bringing his devroom management experience to the organisation and facilitation of the devroom. Thank you to Andrej Glavic, Julien Wajsberg, and Nazım Can Altınova for their help managing the room and ensuring everything ran smoothly. See you next year!

Firefox Developer Experience: Firefox WebDriver Newsletter — 123

WebDriver is a remote control interface that enables introspection and control of user agents. As such it can help developers to verify that their websites are working and performing well with all major browsers. The protocol is standardized by the W3C and consists of two separate specifications: WebDriver classic (HTTP) and the new WebDriver BiDi (Bi-Directional).

This newsletter gives an overview of the work we’ve done as part of the Firefox 123 release cycle.

Contributions

With Firefox being an open source project, we are grateful to get contributions from people outside of Mozilla.

WebDriver code is written in JavaScript, Python, and Rust so any web developer can contribute! Read how to set up the work environment and check the list of mentored issues for Marionette.

WebDriver BiDi

New: Support for the “browsingContext.locateNodes” command

Support for the browsingContext.locateNodes command has been introduced to find elements on the given page. Supported locators for now are CssLocator and XPathLocator. Additional support for locating elements by InnerTextLocator will be added in a later version.

This command encapsulates the logic for locating elements within a web page’s DOM, streamlining the process for users familiar with the Find Element(s) methods from WebDriver classic (HTTP). Alternatively, users can still utilize script.evaluate, although it necessitates knowledge of the appropriate JavaScript code for evaluation.
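As a rough sketch of what such a command looks like on the wire (WebDriver BiDi messages are JSON objects with an id, a method name, and parameters; the browsing-context UUID and CSS selector below are made-up placeholders):

```python
import json

# A hypothetical browsingContext.locateNodes command message. The context
# UUID and the selector are placeholders, not values from a real session.
command = {
    "id": 1,
    "method": "browsingContext.locateNodes",
    "params": {
        "context": "8a32c8ea-a1ab-4bb8-8b5e-0c2f6c1a9d11",
        "locator": {"type": "css", "value": ".article h2"},
        "maxNodeCount": 10,
    },
}

# A client would send this JSON over the BiDi WebSocket connection.
wire_message = json.dumps(command)
print(wire_message)
```

Swapping the locator for `{"type": "xpath", "value": "//h2"}` would exercise the XPathLocator instead.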

New: Support for the “network.fetchError” event

Added support for the network.fetchError event that is emitted when a network request ends in an error.

Update for the “browsingContext.create” command

The browsingContext.create command has been improved on Android to seamlessly switch to opening a new tab if the type argument is specified as window.

We implemented this change to simplify the creation of tests that need to run across various desktop platforms and Android. Consequently, specific adjustments for new top-level browsing contexts are no longer required, enhancing the test creation process.
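The command itself is tiny; the sketch below (with an arbitrary message id) shows the request a cross-platform test might send, which desktop Firefox honors as a new window and Android now transparently serves with a new tab:

```python
import json

# A hypothetical browsingContext.create command. On Android, Firefox now
# falls back to opening a new tab even when "window" is requested.
new_window_command = {
    "id": 2,
    "method": "browsingContext.create",
    "params": {"type": "window"},
}

print(json.dumps(new_window_command))
```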

Bug Fixes

Marionette (WebDriver classic)

Bug Fixes

The Rust Programming Language Blog: 2023 Annual Rust Survey Results

Hello, Rustaceans!

The Rust Survey Team is excited to share the results of our 2023 survey on the Rust Programming language, conducted between December 18, 2023 and January 15, 2024. As in previous years, the 2023 State of Rust Survey was focused on gathering insights and feedback from Rust users, and all those who are interested in the future of Rust more generally.

This eighth edition of the survey surfaced new insights and learning opportunities straight from the global Rust language community, which we will summarize below. In addition to this blog post, this year we have also prepared a report containing charts with aggregated results of all questions in the survey. Based on feedback from recent years, we have also tried to provide more comprehensive and interactive charts in this summary blog post. Let us know what you think!

Our sincerest thanks to every community member who took the time to express their opinions and experiences with Rust over the past year. Your participation will help us make Rust better for everyone.

There's a lot of data to go through, so strap in and enjoy!

Participation

Survey  Started  Completed  Completion rate  Views
2022    11 482   9 433      82.2%            25 581
2023    11 950   9 710      81.3%            16 028

As shown above, in 2023 we received 37% fewer survey views than in 2022, but saw a slight uptick in starts and completions. There are many reasons why this could have been the case, but it’s possible that because we released the 2022 analysis blog so late last year, the survey was fresh in many Rustaceans’ minds. This might have prompted fewer people to feel the need to open the most recent survey. Therefore, we find it doubly impressive that there were more starts and completions in 2023, despite the lower overall view count.
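These participation figures are easy to sanity-check: the completion rate is simply completed divided by started, and the quoted 37% is the relative year-over-year drop in views.

```python
# Participation figures for the State of Rust survey.
started_2022, completed_2022, views_2022 = 11_482, 9_433, 25_581
started_2023, completed_2023, views_2023 = 11_950, 9_710, 16_028

# Completion rate = completed / started.
rate_2022 = completed_2022 / started_2022 * 100  # ~82.2%
rate_2023 = completed_2023 / started_2023 * 100  # ~81.3%

# Relative drop in views from 2022 to 2023.
drop = (views_2022 - views_2023) / views_2022 * 100  # ~37%

print(f"completion {rate_2022:.1f}% -> {rate_2023:.1f}%, views down {drop:.0f}%")
```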

Community

This year, we have relied on automated translations of the survey, and we have asked volunteers to review them. We thank the hardworking volunteers who reviewed these automated survey translations, ultimately allowing us to offer the survey in seven languages: English, Simplified Chinese, French, German, Japanese, Russian, and Spanish. We decided not to publish the survey in languages without a translation review volunteer, meaning we could not issue the survey in Portuguese, Ukrainian, Traditional Chinese, or Korean.

The Rust Survey team understands that there were some issues with several of these translated versions, and we apologize for any difficulty this has caused. We are always looking for ways to improve going forward and are in the process of discussing improvements to this part of the survey creation process for next year.

We saw a 3pp increase in respondents taking this year’s survey in English – 80% in 2023 and 77% in 2022. Across all other languages, we saw only minor variations – all of which are likely due to us offering fewer languages overall this year due to having fewer volunteers.

Rust user respondents were asked which country they live in. The top 10 countries represented were, in order: United States (22%), Germany (12%), China (6%), United Kingdom (6%), France (6%), Canada (3%), Russia (3%), Netherlands (3%), Japan (3%), and Poland (3%). We were interested to see a small reduction in participants taking the survey in the United States in 2023 (down 3pp from the 2022 edition), which is a positive indication of the growing global nature of our community! You can try to find your country in the chart below:

Once again, the majority of our respondents reported being most comfortable communicating on technical topics in English at 92.7% — a slight difference from 93% in 2022. Again, Chinese was the second-highest choice for preferred language for technical communication at 6.1% (7% in 2022).

[Chart: what are your preferred languages for technical communication?]

We also asked whether respondents consider themselves members of a marginalized community. Out of those who answered, 76% selected no, 14% selected yes, and 10% preferred not to say.

We have asked the group that selected “yes” which specific groups they identified as being a member of. The majority of those who consider themselves a member of an underrepresented or marginalized group in technology identify as lesbian, gay, bisexual, or otherwise non-heterosexual. The second most selected option was neurodivergent at 41% followed by trans at 31.4%. Going forward, it will be important for us to track these figures over time to learn how our community changes and to identify the gaps we need to fill.

[Chart: which marginalized group do you identify with?]

As Rust continues to grow, we must acknowledge the diversity, equity, and inclusivity (DEI)-related gaps that exist in the Rust community. Sadly, Rust is not unique in this regard. For instance, only 20% of 2023 respondents to this representation question consider themselves a member of a racial or ethnic minority and only 26% identify as a woman. We would like to see more equitable figures in these and other categories. In 2023, the Rust Foundation formed a diversity, equity, and inclusion subcommittee on its Board of Directors whose members are aware of these results and are actively discussing ways that the Foundation might be able to better support underrepresented groups in Rust and help make our ecosystem more globally inclusive. One of the central goals of the Rust Foundation board's subcommittee is to analyze information about our community to find out what gaps exist, so this information is a helpful place to start. This topic deserves much more depth than is possible here, but readers can expect more on the subject in the future.

Rust usage

In 2023, we saw a slight jump in the number of respondents that self-identify as a Rust user, from 91% in 2022 to 93% in 2023.

Of those who used Rust in 2023, 49% did so on a daily (or nearly daily) basis — a small increase of 2pp from the previous year.

[Chart: how often do you use Rust?]

Of those who did not identify as Rust users, the most common reason was once again that they simply haven’t had the chance to prioritize learning Rust yet (67%), while 31% cited the perception of difficulty as the primary reason for not having used it.

[Chart: why don't you use Rust?]

Of the former Rust users who participated in the 2023 survey, 46% cited factors outside their control (a decrease of 1pp from 2022), 31% stopped using Rust due to preferring another language (an increase of 9pp from 2022), and 24% cited difficulty as the primary reason for giving up (a decrease of 6pp from 2022).

[Chart: why did you stop using Rust?]

Rust expertise has generally increased amongst our respondents over the past year! 23% can write (only) simple programs in Rust (a decrease of 6pp from 2022), 28% can write production-ready code (an increase of 1pp), and 47% consider themselves productive using Rust — up from 42% in 2022. While the survey is just one tool to measure the changes in Rust expertise overall, these numbers are heartening as they represent knowledge growth for many Rustaceans returning to the survey year over year.

[Chart: how would you rate your Rust expertise?]

In terms of operating systems used by Rustaceans, the situation is very similar to the results from 2022, with Linux being the most popular choice of Rust users, followed by macOS and Windows, which have a very similar share of usage.

[Chart: which OS do you use?]

Rust programmers target a diverse set of platforms with their Rust programs, even though the most popular target by far is still a Linux machine. We can see a slight uptick in users targeting WebAssembly, embedded and mobile platforms, which speaks to the versatility of Rust.

[Chart: which OS do you target?]

We cannot, of course, forget the favourite topic of many programmers: which IDE (development environment) they use. Visual Studio Code still seems to be the most popular option, with RustRover (which was released last year) also gaining some traction.

[Chart: what IDE do you use?]

You can also take a look at the linked wordcloud that summarizes open answers to this question (the "Other" category), to see what other editors are also popular.

Rust at Work

We were excited to see a continued upward year-over-year trend of Rust usage at work. 34% of 2023 survey respondents use Rust in the majority of their coding at work — an increase of 5pp from 2022. Of this group, 39% work for organizations that make non-trivial use of Rust.

Once again, the top reason employers of our survey respondents invested in Rust was the ability to build relatively correct and bug-free software at 86% — a 4pp increase from 2022 responses. The second most popular reason was Rust’s performance characteristics at 83%.

[Chart: why do you use Rust at work?]

We were also pleased to see an increase in the number of people who reported that Rust helped their company achieve its goals at 79% — an increase of 7pp from 2022. 77% of respondents reported that their organization is likely to use Rust again in the future — an increase of 3pp from the previous year. Interestingly, we saw a decrease in the number of people who reported that Rust has been challenging for their organization to use: 34% in 2023 and 39% in 2022. We also saw an increase in respondents reporting that Rust has been worth the cost of adoption: 64% in 2023 and 60% in 2022.

[Chart: which statements apply to Rust at work?]

There are many factors playing into this, but the growing awareness around Rust has likely resulted in the proliferation of resources, allowing new teams using Rust to be better supported.

In terms of technology domains, it seems that Rust is especially popular for creating server backends, web and networking services and cloud technologies.

[Chart: technology domains]

You can scroll the chart to the right to see more domains. Note that the Database implementation and Computer Games domains were not offered as closed answers in the 2022 survey (they were merely submitted as open answers), which explains the large jump.

It is exciting to see the continued growth of professional Rust usage and the confidence so many users feel in its performance, control, security and safety, enjoyability, and more!

Challenges

As always, one of the main goals of the State of Rust survey is to shed light on challenges, concerns, and priorities on Rustaceans’ minds over the past year.

Of those respondents who shared their main worries for the future of Rust (9,374), the majority were concerned about Rust becoming too complex at 43% — a 5pp increase from 2022. 42% of respondents were concerned about a low level of Rust usage in the tech industry. 32% of respondents in 2023 were most concerned about Rust developers and maintainers not being properly supported — a 6pp increase from 2022.

We saw a notable decrease in respondents who were not at all concerned about the future of Rust, 18% in 2023 and 30% in 2022.

Thank you to all participants for your candid feedback which will go a long way toward improving Rust for everyone.

Closed answers marked with N/A were not present in the previous (2022) version of the survey.

In terms of features that Rust users want to be implemented, stabilized or improved, the most desired improvements are in the areas of traits (trait aliases, associated type defaults, etc.), const execution (generic const expressions, const trait methods, etc.) and async (async closures, coroutines).

<noscript> <img alt="which-features-do-you-want-stabilized" height="600" src="https://blog.rust-lang.org/images/2024-02-rust-survey-2023/which-features-do-you-want-stabilized.png" /> </noscript>

It is interesting that 20% of respondents answered that they wish Rust would slow down the development of new features, which likely goes hand in hand with the previously mentioned worry that Rust is becoming too complex.

The areas of Rust that Rustaceans struggle with the most seem to be asynchronous Rust, the traits and generics system, and the borrow checker.

<noscript> <img alt="which-problems-do-you-remember-encountering" height="400" src="https://blog.rust-lang.org/images/2024-02-rust-survey-2023/which-problems-do-you-remember-encountering.png" /> </noscript>

Respondents of the survey want the Rust maintainers to mainly prioritize fixing compiler bugs (68%), improving the runtime performance of Rust programs (57%) and also improving compile times (45%).

<noscript> <img alt="how-should-work-be-prioritized" height="800" src="https://blog.rust-lang.org/images/2024-02-rust-survey-2023/how-should-work-be-prioritized.png" /> </noscript>
[PNG] [SVG]

As in recent years, respondents noted that compilation time is one of the most important areas that should be improved. However, it is interesting to note that respondents now seem to consider runtime performance even more important than compile times.

Looking ahead

Each year, the results of the State of Rust survey help reveal the areas that need improvement across the Rust Project and ecosystem, as well as the aspects that are working well for our community.

We are aware that the survey has contained some confusing questions, and we will try to improve upon that in the next year's survey. If you have any suggestions for the Rust Annual survey, please let us know!

We are immensely grateful to those who participated in the 2023 State of Rust Survey and facilitated its creation. While there are always challenges associated with developing and maintaining a programming language, this year we were pleased to see a high level of survey participation and candid feedback that will truly help us make Rust work better for everyone.

If you’d like to dig into more details, we recommend browsing through the full survey report.

Anne van KesterenWebKit and web-platform-tests

Let me state upfront that this strategy of keeping WebKit synchronized with parts of web-platform-tests has worked quite well for me, but I’m not at all an expert in this area so you might want to take advice from someone else.

Once I've identified what tests will be impacted by my changes to WebKit, including what additional coverage might be needed, I create a branch in my local web-platform-tests checkout to make the necessary changes to increase coverage. I try to be a little careful here so it'll result in a nice pull request against web-platform-tests later. I’ve been a web-platform-tests contributor quite a while longer than I’ve been a WebKit contributor so perhaps it’s not surprising that my approach to test development starts with web-platform-tests.

I then run import-w3c-tests web-platform-tests/[testsDir] -s [wptParentDir] --clean-dest-dir on the WebKit side to ensure it has the latest tests, including any changes I made. And then I usually run them and revise, as needed.

This has worked surprisingly well for the changes I’ve made to date and hasn’t let me down. Two things to be mindful of:

  • On macOS, don’t put development work, especially WebKit, inside ~/Documents. You might not have a good time.
  • [wptParentDir] above needs to contain a directory named web-platform-tests, not wpt. This is annoyingly different from the default you get when cloning web-platform-tests (the repository was renamed to wpt at some point). Perhaps something to address in import-w3c-tests.
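Putting these caveats together, a concrete invocation might look like the following (the paths and the choice of the fetch test directory are hypothetical; adjust them to your own checkouts):

```shell
# The wpt checkout must be reachable under the name "web-platform-tests",
# so symlink it if you cloned it under the newer "wpt" name.
ln -s ~/src/wpt ~/src/web-platform-tests

# Re-import the fetch tests into the WebKit checkout, removing stale files.
cd ~/WebKit
import-w3c-tests web-platform-tests/fetch -s ~/src --clean-dest-dir
```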

Firefox NightlyMonitor, Plus More Improvements – These Weeks in Firefox: Issue 154

Highlights

  • Mozilla Monitor Plus has been launched! This is a new subscription product (available only in the US for now) that will search for and scrub your personal information from data brokers.
    • A screenshot of the Mozilla Monitor Plus dashboard for US customers. The dashboard shows that 16 exposures of user data have been manually fixed, and that 240 are in progress. At the bottom, a list of data brokers are listed for which data scrubbing is in progress. Those data brokers are "arivify.com" and "beenverified.com", and the user data is listed as being for sale.

      Mozilla Monitor Plus lets you take back control over your personal information.

  • The new clear history dialog has been enabled by default on Nightly! The dialog now has a more modern look, consolidated clearing options, and shows the amount of data you will clear based on the selected time range. Additionally, all the entry points for clearing data have been unified to point to the same dialog. Congratulations to :harshitsohaney for getting the new dialog to this point!
    • The new "Clear browsing data and cookies" dialog is shown. A dropdown for when to remove data for has "Last hour" selected. There are 4 checkboxes shown with the following labels: "History", "Cookies and site data (23.5 MB)", "Temporary cached files and pages (464 MB)", "Site settings". The first three are checked.

      Much cleaner than before!

  • Nicolas added support for registered properties (@property / CSS.registerProperty) in the DevTools Rules view (bug, bug). Registered properties are displayed in var() autocomplete (bug), as well as in property name autocomplete (bug).
    • Check it out by setting the pref layout.css.properties-and-values.enabled to true
    • 4 different sections of the Firefox DevTools are shown. All are demonstrating that custom CSS properties are more easily inspected. The top-left quadrant shows the property being inspected in the Style pane of the Inspector. The bottom-left quadrant shows an animated custom property listed in the CSS animation inspector. The top-right quadrant shows the CSS property in the Style pane of the Inspector being offered in an autofill tooltip, and showing what the CSS value resolves to (the colour "gold"). The bottom most pane shows the rule being included in the autofill tooltip when setting a style property in a selector.
  • In Firefox 124, a new runtime.onPerformanceWarning API event has been introduced (Bug 1861445) for WebExtensions. This event is emitted when Firefox detects that a content script is impacting a web page’s responsiveness. It is meant to let WebExtension developers detect when their content scripts are slowing down pages.
    • This new API has been previously proposed through the W3C WebExtensions Community Group and tracked by this ticket.
    • Thanks to Dave Vandyke for contributing this new WebExtensions API!
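A minimal sketch of listening for this event from an extension’s background script; note that the shape of the details object below is an assumption drawn from the API proposal, not something confirmed by this post:

```javascript
// Background script sketch — requires Firefox 124+.
// The fields on "details" (category, severity, description) are assumptions;
// check the documentation before relying on them.
browser.runtime.onPerformanceWarning.addListener((details) => {
  // e.g. details.category === "content_script_processing"
  console.warn(
    `Performance warning (${details.category}, ${details.severity}): ` +
    details.description
  );
});
```

An extension could react to these warnings by, for example, disabling an expensive observer or batching its DOM work in content scripts.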

Friends of the Firefox team

Introductions/Shout-Outs

  • Welcome to Nathan Barrett (:nbarrett), who is joining the New Tab team!

Resolved bugs (excluding employees)

Script to find new contributors from bug list

Volunteers that fixed more than one bug

  • Gregory Pappas [:gregp]
  • Itiel

New contributors (🌟 = first patch)

Project Updates

Accessibility

  • Ongoing High Contrast Mode Project – We’re reevaluating all occurrences of @media (prefers-contrast) to make sure they’re targeting BOTH Windows HCM and macOS Increase Contrast. If the code within the query is only for Windows (as is often the case 😁), the query should be switched to @media (forced-colors).
  • New revisions that use these queries will be blocked for review by the HCM-reviewers review group in phabricator.
  • You can read more about using these queries in our new documentation, and play around with this live site Morgan made. If you have any questions, please reach out to Morgan or Anna 🙂 Thanks!
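As a sketch of the distinction (the selector and declarations below are illustrative, not taken from Firefox’s actual code):

```css
/* Matches both Windows High Contrast Mode and macOS Increase Contrast. */
@media (prefers-contrast) {
  .toolbar-button {
    outline: 1px solid currentColor;
  }
}

/* Matches Windows HCM only; forced-colors mode is also where system color
   keywords such as ButtonText and Canvas are most meaningful. */
@media (forced-colors) {
  .toolbar-button {
    border-color: ButtonText;
    background-color: Canvas;
  }
}
```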

Add-ons / Web Extensions

Addon Manager & about:addons
  • Thanks to :arai for having converted the last 3 jsm files from the XPIProvider internals to ES modules – Bug 1836480.
    • WebExtensions and AOM/XPIProvider internals are now 100% migrated away from legacy jsm files! 🎉
  • Thanks to :masayuki for fixing a bug related to keyboard shortcuts using non-English keyboard layouts (fixed as part of Bug 1874727 and tracked for WebExtensions keyboard shortcuts in Bug 1782660).

Developer Tools

DevTools
  • Oliver Schramm reported and fixed the geometry editor when the page is zoomed in (bug)
  • Nicolas added a preference to control the behavior of the Enter key when editing properties in the  Rules view (bug), and reverted the behavior to what we had in Firefox 121 (bug, blog post update)
  • Alex made the console up to 70% faster (perf alert) when it reaches the limit of messages we show (bug)
  • Alex improved the tracer, by allowing it to trace on next reload or navigation (bug)
    • This relies on a new option in the context menu:
      • A menu popup for the Tracer tool in the Firefox DevTools. A new item has been added with a checkbox next to it: "Trace only on next page load (reload or navigation)".
  • Nicolas fixed an issue where ServiceWorker files were not displayed in the Debugger when using a URL with a port (bug)
    • If you’re working with Service Workers, please flip devtools.debugger.features.windowless-service-workers so you can debug them directly in the page tab toolbox (not via about:debugging). We’re looking for feedback on this before we enable it by default
  • Bomsy made the debugger no longer use Babel to detect if watch expressions have syntax errors (bug). This is part of a bigger project where we’re trying to completely remove Babel, which can be pretty slow on very large files
  • Alex fixed a bug in the Debugger where watch expressions and variable tooltip could show wrong values (bug)
WebDriver BiDi
  • Contributors
    • James Hendry updated the “WebDriver:SwitchToFrame” command to make the “id” parameter mandatory and raise an exception if it is missing (bug)
  • Sasha added support for the contexts attribute to the script.addPreloadScript command (BiDi), which allows assigning a preload script to specific browsing contexts (bug)
  • Henrik fixed the “WebDriver:NewWindow” command to always fall back to opening new tabs on Android, even if a new “window” was requested (bug)
  • Henrik updated our vendored Puppeteer version to v21.10.0, which comes with updated tests and support for BiDi features. The ./mach puppeteer-test command was also updated to run in headful mode by default (bug)
  • Henrik improved browsingContext.close to allow closing the last tab of a window (bug)
  • Julian implemented several commands to handle user contexts (containers) in WebDriver BiDi:
    • browser.createUserContext creates a new user context (bug)
    • browser.getUserContexts lists all the available user contexts (including the default one and contexts created outside of WebDriver BiDi) (bug)
    • browser.removeUserContexts removes a user context and closes all the related tabs (bug)
  • Julian added partial support for two network interception commands, network.continueRequest and network.continueResponse. At the moment they only allow resuming an intercepted request, but additional parameters will later allow modifying the request/response (bug)

ESMification status

  • Aria transitioned extensions-related modules and found some more modules under devtools’ performance-new to transition.
  • See also the New Tab Page update below.
  • ESMified status:
    • browser: 96.43%
    • toolkit: 99.83%
    • Total:  98.48% (+2% from last week)
  • #esmification on Matrix

Lint, Docs and Workflow

  • Removed .ini file support from ESLint, now that the transition to .toml is largely complete.
  • Removed Babel integration from ESLint.
    • This helps to speed up ESLint.
    • Originally integrated because we wanted to use JavaScript features that were at stage 3, whereas ESLint only supports them once they reach stage 4.
    • We can/will reintroduce the integration later if need be, but for now let’s enjoy the slightly faster linting.

Migration Improvements

  • Welcome to fchasen and kpatenio, who are going to be joining us on making device migration smoother for our users!
  • The team has been mostly prototyping, consulting and building up their expertise on the various data stored in user profile directories, and how it can be safely copied during runtime.

New Tab Page

Performance

Screenshots

Search and Navigation

  • Marc has implemented the UX spec for switch to tab across containers @ 1871980 and added voice support @ 1876759 (will be enabled in Nightly soon)
  • Anna has fixed various accessibility issues around the urlbar, including providing interactive roles to the search bar button (1871980), fixing TAB behaviour @ 1874277 and 1875654, and various test fixes
  • Trending suggestions are now enabled on Bing (1872409) for Nightly users.
  • Drew fixed the Weather suggestions UI @ 1878190
  • Dao fixed top pick alignment @ 1876020
  • Mandy and Mark have done a lot of work towards search-config-v2 that allows us to share search configuration across desktop and mobile, tracking bug @ 1833829

Storybook/Reusable Components

The Mozilla BlogActivist Chris Smalls reflects on taking on Amazon, forming worker unions and digital activism in 2024

At Mozilla, we know we can’t create a better future alone. That is why each year we highlight the work of 25 digital leaders using technology to amplify voices, effect change, and build new technologies globally through our Rise 25 Awards. These storytellers, innovators, activists, advocates, builders and artists are helping make the internet more diverse, ethical, responsible and inclusive.

This week, we chatted with winner Chris Smalls, an activist using technology to effect change and advocate for a better world. He’s the founder and president of the Amazon Labor Union in Staten Island that advocates for workers’ rights and conditions. In 2020, he was fired by Amazon after leading protests against its working conditions during the COVID-19 pandemic. We talk with Smalls about the early days of the union fight, his work in the community and how the digital world has impacted organizing efforts.

When people are fighting against Amazon, there are a lot of different fights — wages, time off, even remote work now. What was the main thing that you wanted to fight for during that time, when you began to fight for the union?

The pandemic, for sure. It was COVID-19. That initially was the reason why I spoke up. You know, after working there for a number of years — five years — and realizing that we weren’t prepared for the virus on a local level, it was a very alarming situation to be in, and this was before the vaccine, before mask testing, before we even really understood what the virus was doing. We knew it was wiping people out, so my fear was that it would spread like wildfire within the warehouse and within the whole Amazon network. So initially, I was just trying to go through the proper channels. And one thing led to another, you know, when I wasn’t met with an answer that I felt was sustainable for not just myself, but for everybody, that’s when I started to pretty much rebel. I try to still do that in a respectable manner, but unfortunately, the company decided to take an aggressive route by just quarantining myself out of the thousands of people, and I felt that wasn’t right at all. So initially it was over COVID-19, but as things unfolded, the demands changed over time. And it wasn’t until 2021 — the end of 2021, spring — was when we decided that we were going to form this independent Amazon labor union.

How did you get people on board with this? How did you convince people to buy into it?

I used Amazon’s principles — really, to be honest with you — earning the trust, building the relationships. One of my favorite principles out of the 14 was, have backbone, disagree and commit, so that’s exactly what I did. I disagreed with the way they were responding. I had a backbone to stand up to it, and I committed myself to the movement and committed myself to building relationships and earning the trust of the workers. So, over the course of 11 months, you know, organizing outside across the street, meeting people, having conversations, having barbecues, giving out free food — and yes, we did give out free weed — we did all these things, little things that mattered the most. Things that Amazon overlooked all the time – the little things. How do people get to work? How do they eat lunch every day? How do they get a ride to and from work in a snowstorm? We were there for them during those times, and we did those little bit of things with a little bit of money that we had from donations, and that’s ultimately how we defeated them, which is bringing people together from all different backgrounds.

<figcaption class="wp-element-caption">Chris Smalls at Mozilla’s Rise25 award ceremony in October 2023.</figcaption>

When you reflect on your time at Amazon, what do you remember most about that period in your life in terms of the work that you did there?

What I remember most is really just being allowed to be exactly who people see today. When I worked there, I was so well respected because I was a good employee, that I was allowed to pretty much create my culture within my own little department no matter what building I was in. I opened up 3 buildings for Amazon — one in New Jersey, Connecticut and Staten Island — and for me to go to each of these buildings and be able to have the respect of upper management and have the morale of the people underneath me to make them productive, and my team go number one in our department. I think people respected the fact that I was always siding with the workers, no matter what position I was in, and I was a supervisor. To have the morale that I had, I had to understand where people came from, and I understood where they came from because I was them at one point in time. I was an entry level worker on the line, picking and packing boxes just like the rest of them. So for me, I never forgot where I came from, and by having those types of skill sets, along with learning those principles, that’s what made me the best organizer I can possibly be.

You’ve gotten a lot of different spotlights — being on The Daily Show, meeting President Joe Biden, magazine features —  which experience from the last few years has kind of made you stop and realize the magnitude of what you did?

The Daily Show is definitely up there, that was a cool one. The Breakfast Club, that was a cool one for me. Desus and Mero was a cool one for me. And of course, the White House. I’m not fond of the President, but to go to the White House as a young black man from where I came from is unheard of, so, that’s always going to be a highlight of my life, regardless of who the President is. 

Where do you draw inspiration from to continue the work that you do today?

I draw definitely from the youth, the younger generation. I try to stay young and hip — I’m still 35 years old and I have kids already, I have kids about to be in high school. My kids are 11 going on 12, and they’re watching me on YouTube, especially on TikTok. I’m in their classroom. They’re talking to their friends about their dad. So for me, my inspiration is being a good role model, being a good father and understanding that the youth is paying attention now, and because of my uniqueness and our style, our swag, the way my union is so different, I want to continue to build off of that. I want to make sure that we’re making unionizing cool because before it was boring, you know, to talk about it. But now we’re trying to change the culture of what labor looks like.

What do you think is the biggest challenge that we face right now in the world, on and offline? How do you think we combat it?

Well, the biggest challenge is the opposition. The system that’s been in place is still operating against us, and they got a lot more money and power than we do. The reason why they continue to get away with the things that they do is because we’re still divided. 

I’m a fast learner, and in my few years of organizing I’ve seen that the labor movement itself is in a small bubble. If you talk about social injustice, it’s in a small bubble. You talk about women’s rights, it’s in the small bubble. Climate is in a different bubble. We’re not really, truly connected until we see something like a George Floyd where everybody’s out in the streets, and that’s the problem with America. We all go out in the streets when we see things like George Floyd. But then, after a while, we forget about it, and then we go back to work. And then it’s like, “Oh well, I can’t, because of my own individual problems that I have, and it’s not everybody’s fault. It’s the system that we live in that is designed to keep us distracted and not together.” So I think that’s the biggest issue that we got to overcome is, how do we connect all these different movements? Because at the end of the day, we’re all a part of the working class, no matter what movement, we’re all part of the working class. And if you’re in the labor movement, everybody here is a worker, no matter what job you work for or what industry you work for, you’re a worker. My goal one day is to connect trade unions to all the different movements and make this a class struggle. This is a class struggle. It’s 99.9% of us versus the one percent class, the billionaires. And I think if we all realize — that we’re all poor compared to these billionaires that are the ones who make the decisions for the rest of us and control these corporations — then we’ll be way better off than we are as a country.

What gives you hope about the future of our world to reach a place where we’re all much better?

What gives me hope now is that I’m walking into middle schools now and these 10-year-olds are telling me that Jeff Bezos is a bad man. Back in the day I didn’t go to class, and on Career Day, there was no Chris Smalls walking into a classroom on Career Day. There was always police officers, firefighters or nurses and doctors. But there was never a young, Black, cool-looking, Urban-like, brother to come in and say “Yo, you could be a trade union leader and still be as cool as a rapper.” It was none of that. So for me, that’s what gives me hope is that the young generation — it’s a gift and curse they have access to iPads because they get access to everything — but they’re much more conscious than we were. They’re much smarter and more advanced, and I know that could be a little scary, because they do have access to a lot of things at a younger age, but these kids are so smart now that they’re able to make decisions at a younger age. The younger generation is paying attention to the major issues of the world right now. I think we’re in a time that we’ve never seen before and that’s what gives me hope is that the younger generation is going to lead the way instead of us passing the torch, they’re going to lead it.

Get Firefox

Get the browser that protects what’s important

The post Activist Chris Smalls reflects on taking on Amazon, forming worker unions and digital activism in 2024 appeared first on The Mozilla Blog.

Mozilla ThunderbirdThunderbird for Android / K-9 Mail: January 2024 Progress Report

a dark background with Thunderbird and K-9 Mail logos centered, with the text "Thunderbird for Android, January 2024 dev digest"

A new year, a new progress report! Learn what we did in January on our journey to transform K-9 Mail into Thunderbird for Android. If you’re new here or you forgot where we left off last year, check out the previous progress report.

Account setup

In January most of our work went into polishing the user interface and user experience of the new and improved account setup. However, there was still one feature missing that we really wanted to get in there: the ability to configure special folders.

Special folders

K-9 Mail supports the following special folders:

  • Archive: When configured, an Archive action will be available that moves a message to the designated archive folder.
  • Drafts: When configured, the Save as draft action will be available in the compose screen.
  • Sent: Messages that have been successfully submitted to the outgoing server will be uploaded to this folder. If this special folder is set to None, the app won’t save a copy of sent messages.
    Note: There’s also the setting Upload sent messages that can be disabled to prevent sent messages from being uploaded, e.g. if your email provider automatically saves a copy of outgoing messages.
  • Spam: When configured, a Spam action will be available that moves a message to the designated spam folder. (Please note that K-9 Mail currently does not include spam detection. So besides moving the message, this doesn’t do anything on its own. However, moving a message to and from the spam folder often trains the server-side spam filter available at many email providers.)
  • Trash: When configured, deleting a message in the app will move it to the designated trash folder. If the special folder is set to None, emails are deleted permanently right away.

In the distant past, K-9 Mail was simply using common names for these folders and created them on the server if they didn’t exist yet. But some email clients were using different names. And so a user could end up with e.g. multiple folders for sent messages. Of course there was an option to manually change the special folder assignment. But usually people only noticed when it was too late and the new folder already contained a couple of messages. Manually cleaning this up and making sure all email clients are configured to use the same folders is not fun.

To solve this problem, RFC 6154 introduced the SPECIAL-USE IMAP extension. That’s a mechanism to save this special folder mapping on an IMAP server. Having this information on the server means all email clients can simply fetch that mapping and then there should be no disagreement on e.g. which folder is used for sent messages.
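For illustration, here is roughly what that exchange looks like on the wire: the client requests the mapping via the LIST extension from RFC 6154, and the server annotates folders with special-use attributes (the folder names below are made up):

```
C: a1 LIST "" "*" RETURN (SPECIAL-USE)
S: * LIST (\HasNoChildren) "/" "INBOX"
S: * LIST (\Sent \HasNoChildren) "/" "Sent Messages"
S: * LIST (\Trash \HasNoChildren) "/" "Deleted Messages"
S: a1 OK LIST completed
```

RFC 6154 defines the attributes \Archive, \Drafts, \Junk, \Sent, and \Trash (plus \All and \Flagged), which map onto the special folders described above.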

Unfortunately, there are still some email providers that don’t support this extension. There are also cases where the server supports the feature, but none of the special roles are assigned to any folder. When K-9 Mail added support for the SPECIAL-USE extension, it simply used the data from the server, even if that meant not using any special folders. Unfortunately, that could be even worse than creating new folders, because you might end up e.g. not having a copy of sent messages.

So now the app displays a screen asking the user to assign special folders when setting up an account.

This screen is skipped if the app receives a full mapping from the server, i.e. all special roles are assigned to a folder. Of course you’ll still be able to change the special folder assignment after the account has been created.

Splitting account options

We split what used to be the account options screen into two different screens: display options and sync options.

Improved server certificate error screen

The screen to display server certificate errors during account setup has received an overhaul.

Polishing the user experience

With the special folders screen done, we’re now feature complete. So we took a step back to look at the whole experience of setting up an account. And we’ve found several areas where we could improve the app. 

Here’s an (incomplete) list of things we’ve changed:

  • We reduced the font weight of the header text to be less distracting.
  • In some parts of the flow there’s enough content on the screen that a user has to scroll. The area between the header and the navigation buttons at the bottom can be very small depending on the device size. So we included the header in the scrollable area to improve the experience on devices with a small screen.
  • There are a couple of transient screens, e.g. when checking server settings. Previously the app first displayed a progress indicator when checking server settings, then a success message for 2 seconds, but allowed the user to skip this screen by pressing the Next button. This turned out to be annoying and confusing. Annoying because the user has to wait longer than necessary; and confusing because it looked like user input was required, but by the time the user realizes that, the app will have most likely switched to the next screen automatically.
    We updated these transient screens to always show a progress indicator and hide the Next button, so users know something is happening and there’s currently nothing for them to do.
  • We also fixed a couple of smaller issues, like the inbox not being synchronized during setup when an account was configured for manual synchronization.

Fixing bugs

Some of the more interesting bugs we fixed in January:

  • When rotating the screen while selecting a notification sound in settings, some of the notification settings were accidentally disabled (#7468). 
  • When importing settings a preview lines value of 0 was ignored and the default of 2 was used instead (#7493).
  • When viewing a message and long-pressing an image that is also a link, only menu items relevant for images were displayed, but not ones relevant for links (#7457).
  • Opening an attachment from K-9 Mail’s message view in an external app and then sharing the content to K-9 Mail opened the compose screen for a new message but didn’t add an attachment (#7557).

Community Contributions

new-sashok724 fixed a bug that prevented the use of IP addresses for incoming or outgoing servers (#7483).

Thank you ❤

Releases

If you want to help shape Thunderbird for Android, become a beta tester and provide feedback on new features while they are still in development.

The post Thunderbird for Android / K-9 Mail: January 2024 Progress Report appeared first on The Thunderbird Blog.

Cameron KaiserOne less Un*xy option for 32-bit PowerPC

Most of you still using a Power Mac as a daily or occasional driver are probably either running Linux, Tiger or Leopard, and a minority on OS 9. Despite many distributions no longer shipping 32-bit PPC installs, Gentoo Linux still has specific support along with a few others, as does Adélie Linux if you like musl for breakfast. Still, for server duties, where I come from, you bring on the BSDs. In this blog you've already met my long-suffering NetBSD Macintosh IIci which is still trucking to this day and more recently my also-NetBSD G4 Mac mini (which later needed, effectively, a logic board swap), but I also have a Quadra 605 with a full '040 running NetBSD I use for utility tasks and at one time I ran an intermediate incarnation of gopher.floodgap.com on a Power Macintosh 7300 with a Sonnet G3 running NetBSD too. I stuffed that system full with a gig of RAM and a SATA card and it did very well until I got the current POWER6 server in 2010.

NetBSD has the widest support, continuing to run on most 68Ks and PCI Power Macs to this day (leaving out only the NuBus Power Macs which aren't really supported by much of anything anymore, sadly). However, OpenBSD works fine on New World Macs, and FreeBSD has a very mature 32-bit PowerPC port — or, should I say, soon will have had one, since starting in FreeBSD 15 (13.x is the current release), ARMv6, 32-bit Intel and 32-bit PowerPC support will likely be removed. No new 32-bit support will be added, including for RISC-V.

Even though I have a large number of NetBSD systems, I still like FreeBSD, and one of my remote "island" systems runs it. The differences between BSDs are more subtle than with Linux distributions, but you can still enjoy the different flavours that result, and I even ported a little FreeBSD code to the NetBSD kernel so I could support automatic restarts after a power failure on the G4 mini. The fact that the userland and kernel are better matched together probably makes the BSDs better desktop clients, too, especially since on big-endian we're already used to some packages just not building right, so we don't lose a whole lot by running it. (Usually those are the same packages that wouldn't build on anything but Linux anyway.)

This isn't the end for the G5, which should still be able to run the 64-bit version of FreeBSD, and OpenBSD hasn't voiced any firm plans to cut 32-bit loose. However, NetBSD supports the widest range of Macs, including Macs far older than any Power Mac, and frankly if you want to use a Un*x on a Power Mac and have reasonable confidence it will still be running on it for years to come, it's undeniably the one with the best track record.

Mozilla ThunderbirdFebruary 2024 Community Office Hours: All About Add-Ons!

The topic for this month’s Thunderbird Community Office Hours takes a short break from the core of Thunderbird and brings us into the world of extensions we call Add-ons. These let our users add features and options beyond the customization already available in Thunderbird by default.

February Office Hours Topic: Add-ons

John Bieling: Sr. Software Engineer, Add-ons Ecosystem

We want it to be easy to make Thunderbird yours, and so does our community. The Thunderbird Add-on page shows the power of community-driven extensions. There are Add-ons for everything, from themes to integrations, that add even more customization to Thunderbird.

Our guest for this month’s Thunderbird Community Office Hours is John Bieling, who is the person responsible for Thunderbird’s add-on component. This includes the WebExtension APIs, add-on documentation, as well as community support. He hosts a frequent open call about Add-on development and is welcoming to any developers seeking help. Come join us to learn about Add-on development and meet a key developer in the space.

Catch Up On Last Month’s Thunderbird Community Office Hours

Before you join us on February 22 at 18:00 UTC, watch last month’s office hours with UX Engineer Elizabeth Mitchell. We had some great discussion around the Message Context Menu and testing beta and daily images. Watch the video and read more about our guest at last month’s blog post.

Watch January’s Office Hours session, all about the message context menu

Join Us On Zoom

(Yes, we’re still on Zoom for now, but a Jitsi server for future office hours is in the works!)

When: February 22 at 18:00 UTC (10am PST / 1pm EST / 7pm CET)

Direct URL To Join: https://mozilla.zoom.us/j/97506306527
Meeting ID: 97506306527
Password: 319424

Dial by your location:

  • +1 646 518 9805 US (New York)
  • +1 669 219 2599 US (San Jose)
  • +1 647 558 0588 Canada
  • +33 1 7095 0103 France
  • +49 69 7104 9922 Germany
  • +44 330 088 5830 United Kingdom
  • Find your local number: https://mozilla.zoom.us/u/adkUNXc0FO

The call will be recorded and this post will be updated with a link to the recording afterwards.

The post February 2024 Community Office Hours: All About Add-Ons! appeared first on The Thunderbird Blog.

This Week In RustThis Week in Rust 534

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @ThisWeekInRust on Twitter or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

Official
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs
Miscellaneous

Crate of the Week

This week's crate is microflow, a robust and efficient TinyML inference engine for embedded systems.

Thanks to matteocarnelos for the self-suggestion!

Please submit your suggestions and votes for next week!

Call for Participation; projects and speakers

CFP - Projects

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

CFP - Speakers

Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.

  • Devoxx PL 2024 | CFP closes 2024-03-01 | Krakow, Poland | Event date: 2024-06-19 - 2024-06-21
  • RustFest Zürich 2024 CFP closes 2024-03-31 | Zürich, Switzerland | Event date: 2024-06-19 - 2024-06-24

If you are an event organizer hoping to expand the reach of your event, please submit a link to the submission website through a PR to TWiR.

Updates from the Rust Project

466 pull requests were merged in the last week

Rust Compiler Performance Triage

Relatively balanced results this week, with more improvements than regressions. Some of the larger regressions are not relevant; however, there was a real, large regression on doc builds, caused by a correctness fix (rustdoc was doing the wrong thing before).

Triage done by @kobzol. Revision range: 0984becf..74c3f5a1

Summary:

(instructions:u)             mean    range            count
Regressions ❌ (primary)      2.1%   [0.2%, 12.0%]     44
Regressions ❌ (secondary)    5.2%   [0.2%, 20.1%]     76
Improvements ✅ (primary)    -0.7%   [-2.4%, -0.2%]   139
Improvements ✅ (secondary)  -1.3%   [-3.3%, -0.3%]    86
All ❌✅ (primary)           -0.1%   [-2.4%, 12.0%]   183

6 Regressions, 5 Improvements, 8 Mixed; 5 of them in rollups. 53 artifact comparisons made in total.

Full report here

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs
Tracking Issues & PRs
Rust
Cargo
New and Updated RFCs
  • No New or Updated RFCs were created this week.
Call for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization. The following RFCs would benefit from user testing before moving forward:

If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Upcoming Events

Rusty Events between 2024-02-14 - 2024-03-13 💕 🦀 💕

Virtual
Asia
Europe
North America
Oceania

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Jobs

Please see the latest Who's Hiring thread on r/rust

Quote of the Week

For some weird reason the Elixir Discord community has a distinct lack of programmer-socks-wearing queer furries, at least compared to Rust, or even most other tech-y Discord servers I’ve seen. It caused some weird cognitive dissonance. Why do I feel vaguely strange hanging out online with all these kind, knowledgeable, friendly and compassionate techbro’s? Then I see a name I recognized from elsewhere and my hindbrain goes “oh thank gods, I know for a fact she’s actually a snow leopard in her free time”. Okay, this nitpick is firmly tongue-in-cheek, but the Rust user-base continues to be a fascinating case study in how many weirdos you can get together in one place when you very explicitly say it’s ok to be a weirdo.

SimonHeath on the alopex Wiki's ElixirNitpicks page

Thanks to Brian Kung for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, cdmistman, ericseppanen, extrawurst, andrewpollack, U007D, kolharsam, joelmarcey, mariannegoldin, bennyvasquez.

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

Mozilla ThunderbirdThunderbird In 2023: The Milestones and The Lessons We Learned

The Thunderbird Project enjoyed a fantastic 2023. From my point of view – as someone who regularly engages with both the community and our team on a daily basis – the past year brought a renewed sense of purpose, sustainability, and excitement to Thunderbird. Let’s talk about a few of the awesome milestones Thunderbird achieved, but let’s also discuss where we stumbled and what lessons we learned along the way. 

Our 2023 Milestones

The biggest milestone of 2023 was Thunderbird 115 “Supernova.” This release marked the first step towards a more flexible, reliable, and customizable Thunderbird that will accommodate different needs and workflows. Work has been long underway to modernize huge amounts of old code, with the aim of modernizing Thunderbird to deliver new features even faster. The “Supernova” release represented the first fruits of those efforts, and there’s a lot more in the pipeline! 

Alongside Supernova came a brand new Thunderbird logo to signal the revitalization of the project. We finally (even a bit reluctantly) said goodbye to our beloved “wig on an envelope” and ushered in a new era of Thunderbird with a refreshed, redesigned logo. But it was important to honor our roots, which is why we hired Jon Hicks – the designer of the original Firefox and Thunderbird logos – to help us bring it to life. (Now that you’ve all been living with it for the last several months, has it grown on you? Let us know in the comments of this post!)

One 2023 milestone that deserves more attention is that we hired a dedicated User Support Specialist! Roland Tanglao has been working enthusiastically towards removing “documentation debt” and updating the 100s of Thunderbird support articles at support.mozilla.org (which you’ll see us refer to internally as “SUMO”). Beyond that, he keeps a watchful eye on our Matrix community support channel for emerging issues, and is in the forums answering as many help questions as humanly possible, alongside our amazing support volunteers. In a nutshell, Roland is doing everything he can to improve the experience of asking for and receiving support, modernize existing documentation, and create new guides and articles that make using Thunderbird easier.

These are some – not all – of our accomplishments from last year. But it’s time to shift focus to where we stumbled, and how we’ll do better. 

The Lessons We Learned In 2023

In 2023, we failed to finish some of the great features we wanted to bring to Thunderbird, including Sync and Account Hub (both of which, however, are still in development). We also missed our target release window for Thunderbird on Android, after deciding it was worth the extra development time to add the kind of functionality and flexibility you expect from Thunderbird software. 

Speaking of functionality you expect, we hear you loud and clear: you want Exchange support in Thunderbird. We’ve already done some exploratory work, and have enabled the usage of Rust in Thunderbird. This is a complex topic, but the short version is that this opens the doors for us to start implementing native support for the Exchange protocol. It’s officially on our roadmap!

We also believe our communication with you has fallen short of where it needs to be. There are times when we get so excited about things we’re working on that it seems like marketing hype. In other situations, we have over-promised and under-delivered because these projects haven’t been extensively scoped out.

We’re beginning to solve the latter issue with the recent hiring of Kelly McSweeney, Senior Technical PM. She joined our team late last year and brings 20 years of valuable experience to Thunderbird. In a nutshell, Kelly is building processes and tools to accurately gauge how long development time will realistically take, from extensive projects to the tiniest tasks. Basically, she’s getting us very organized and making things run much more efficiently! This not only means smoother operations across the organization, but also clearer communication with you going forward. 

And communication is our biggest area of opportunity right now, specifically with our global Thunderbird community. We haven’t been as transparent as an open source project should be, nor have we discussed our future plans frequently enough. We’ve had several meetings about this over the past few weeks, and we’re taking immediate steps to do better. 

To begin with, you’ll start seeing monthly Developer Digests like this one from Alex, aimed at giving you a closer look at the work currently being planned. We’re also increasing our activity on the Thunderbird mailing lists, where you can give us direct feedback about future improvements and features. 

In 2024 you can also look forward to monthly community Office Hours sessions. This is where you can get some face time (or just voice time) with our team, and watch presentations about upcoming features and improvements by the developer(s) working on them. 

One last thing: In 2023, Thunderbird’s Marketing & Communications team consisted of myself and Wayne Mery. This year Wayne and I are fortunate to be working alongside new team members Heather Ellsworth, Monica Ayhens-Madon, and Natalia Ivanova. Together, we’re going to work diligently to create more tutorials on the blog, more video guides, and more content to help you get the most out of Thunderbird – with a focus on productivity. 

How To Stay Updated

Thank you for being on this journey with us! If you want to get more involved and stay in touch, here are the best places to keep up with what’s happening at Thunderbird:

  • We will be more active right here on this blog, so come back once or twice per month to see what’s new.
  • If you enjoy the technical bits, want to help test Thunderbird, or you’re part of our contributor community, these mailing lists at Topicbox are ideal. 
  • Follow us on Mastodon or X/Twitter for more frequent – and fun – updates!
  • Join our Thunderbird Community Support room on Matrix if you need some help.

The post Thunderbird In 2023: The Milestones and The Lessons We Learned appeared first on The Thunderbird Blog.

Paul BoneThe right amount of poison

Oh, you don’t want any poison in your porridge. But how about in your computer’s memory?

Papa Bear - too much poison

Papa Bear likes his chair hard, his porridge hot and his browser written in a memory safe language that helps engineers avoid memory bugs like buffer overruns and use after frees.

But even Papa Bear has to compromise, part of Firefox is written in a memory safe language and the rest is written in C++. When using C++ there are a variety of defenses programmers can take to help catch memory errors. One of those is called memory poisoning.

mozjemalloc, the memory allocator built into Firefox, will poison memory by calling memset(aPtr, 0xE5, size); before freeing it. Any memory containing the pattern 0xE5E5E5E5 is therefore very likely to be memory that’s already been freed. This has two and a half benefits: if some code were to free and then dereference some memory (a use-after-free bug), it would most likely cause the browser to crash, which is much better than a potentially exploitable bug allowing Goldilocks to steal Papa Bear’s banking credentials! The other benefit is that when Firefox does crash due to such a use-after-free, the presence of this pattern in the crash report allows engineers to see the type of error that occurred and hopefully fix the mistake.
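The mechanism is simple enough to sketch. Here is an illustrative version in Rust (mozjemalloc itself is C++ inside Firefox’s allocator; this is not its actual code):

```rust
// Illustrative sketch of free-time poisoning in the spirit of mozjemalloc
// (the real implementation is C++ inside Firefox's allocator, not this code).
const POISON_BYTE: u8 = 0xE5;

// Equivalent of memset(aPtr, 0xE5, size) on an allocation about to be freed.
fn poison(buf: &mut [u8]) {
    buf.fill(POISON_BYTE);
}

// A pointer-sized value read out of freed memory will very likely be 0xE5E5E5E5.
fn looks_poisoned(word: u32) -> bool {
    word == 0xE5E5_E5E5
}

fn main() {
    let mut alloc = vec![0u8; 16];
    poison(&mut alloc);
    let word = u32::from_ne_bytes([alloc[0], alloc[1], alloc[2], alloc[3]]);
    assert!(looks_poisoned(word));
    println!("freed memory reads back as {word:#010x}"); // 0xe5e5e5e5
}
```

This is why the 0xE5E5E5E5 pattern showing up in a crash report is such a strong signal of a use-after-free.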

Note that back in March 2023 we moved the poison operation outside of the arena lock’s critical section; which improved performance in some tests.

Mama Bear - no poisoning

You probably figured out by now that I’m going to persist with this metaphor. Mama Bear likes her chair soft, her porridge cold (and congealed (yuck)), and her browser fast.

But how much faster is Mama Bear’s experience? This is the question that was raised recently when Randell Jesup was benchmarking various memory allocators in Firefox. He noted that while mozjemalloc performs poisoning, many of the other allocators do not, and that to compare the performance of the allocators more fairly they should either all perform poisoning or none of them should.

And so Randell noted that, depending on the test, Firefox could be between 0.5% and 4% faster with poisoning disabled.

Here are some results I collected. The "sp2" (Speedometer 2) and "sp3" (Speedometer 3) tests are browser benchmarks - larger numbers indicate better performance. The amazon and instagram tests are pageload tests measured in seconds with the ContentfulSpeedIndex metric - smaller numbers indicate better performance.

              sp2 (score)     sp3 (score)    amazon (sec)    instagram (sec)
Poison        178.84 ± 0.84   13.32 ± 1.03   243.20 ± 1.96   419.43 ± 1.04
No poisoning  179.42 ± 0.48   13.39 ± 0.31   237.55 ± 2.60   414.50 ± 0.80

The speedometer figures are pretty close and these are the best pageload figures (the others showed very little difference but nothing regressed, yes I’m aware I’ve cherry-picked data).

This means that if it weren’t for the loss of security and debuggability, Mama Bear would have the right approach.

Baby Bear

Baby Bear loves a compromise, they want their computer to be safe from Goldilocks' hacking attempts but also love performance improvements.

One compromise may be to probabilistically poison memory some of the time, e.g. with a roughly 5% chance of poisoning each free. That’s more complex, and it involves a memory write anyway to keep the "time until poison" counter updated. We didn’t investigate it. But it’s worth noting that it would be similar in spirit to the Probabilistic Heap Checker (PHC) that’s rolling out in Firefox, or the similar GWP-ASan capability in Chrome.
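We didn’t implement this, but a counter-based version could look like the following sketch (the names and the 1-in-20 rate are invented for illustration):

```rust
// Hypothetical sketch of counter-based probabilistic poisoning (~5% of frees).
// Note that the counter itself must be written on every free, which is part
// of the cost mentioned above.
struct PoisonPolicy {
    frees_until_poison: u32,
}

impl PoisonPolicy {
    fn new() -> Self {
        PoisonPolicy { frees_until_poison: 20 }
    }

    // Called on every free; returns true roughly once per 20 calls.
    fn should_poison(&mut self) -> bool {
        self.frees_until_poison -= 1;
        if self.frees_until_poison == 0 {
            self.frees_until_poison = 20;
            true
        } else {
            false
        }
    }
}

fn main() {
    let mut policy = PoisonPolicy::new();
    let poisoned = (0..100).filter(|_| policy.should_poison()).count();
    println!("poisoned {poisoned} of 100 frees"); // poisoned 5 of 100 frees
}
```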

Instead we tested "what if we poison only the first cache line of a memory cell". Andrew McCreight and Olli Pettay pointed out that Element, a common DOM structure, is 128 bytes long and poisoning it is useful to detect memory errors in DOM code, as a lot of DOM code will involve Element.

We tested poisoning the first 64, 128 and 256 bytes of each structure. We assume that cache management and writing cache lines back to RAM is going to be the dominant cost. Therefore we round up our writes to the next cache-line boundary.

For example, consider a computer with 64-byte cache lines and a 96-byte object allocated so that its first 32 bytes fall in one cache line and the remaining 64 bytes in another. Our 64-byte write would then cover halves of two different cache lines. In this case we poison all 96 bytes, because doing so writes to the same number of cache lines as the original 64-byte write.
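A sketch of that length computation, with invented names (the actual Firefox code differs):

```rust
const CACHE_LINE: usize = 64;

// Hypothetical sketch (our names, not the actual Firefox code): given the
// allocation's address and size and the nominal poison length (64, 128 or
// 256 bytes), extend the write to the end of the last cache line it already
// touches, without exceeding the allocation.
fn poison_len(addr: usize, alloc_size: usize, nominal: usize) -> usize {
    let want = nominal.min(alloc_size);
    // Round the end of the write up to the next cache-line boundary.
    let end = (addr + want + CACHE_LINE - 1) / CACHE_LINE * CACHE_LINE;
    (end - addr).min(alloc_size)
}

fn main() {
    // The 96-byte example from the text: the object starts 32 bytes into a
    // cache line, so the 64-byte write already touches two lines and we can
    // poison all 96 bytes at no extra cache-line cost.
    assert_eq!(poison_len(0x1000 + 32, 96, 64), 96);
    // A cache-line-aligned object needs no extension.
    assert_eq!(poison_len(0x1000, 96, 64), 64);
    println!("ok");
}
```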

Let’s add these options to our table of results.

              sp2 (score)     sp3 (score)    amazon (sec)    instagram (sec)
Poison        178.84 ± 0.84   13.32 ± 1.03   243.20 ± 1.96   419.43 ± 1.04
Poison 256    179.50 ± 0.55   13.35 ± 0.33   240.47 ± 2.82   415.28 ± 1.30
Poison 128    179.19 ± 0.43   13.35 ± 0.59   241.62 ± 3.05   414.95 ± 1.15
Poison 64     179.09 ± 0.87   13.33 ± 0.83   242.13 ± 2.56   414.11 ± 0.91
No poisoning  179.42 ± 0.48   13.39 ± 0.31   237.55 ± 2.60   414.50 ± 0.80

As above, sp2 and sp3 are scores - bigger numbers are better. While amazon and instagram are page load tests where smaller numbers are better.

As expected, the partial poisoning results fall between full and no poisoning. But what’s a little surprising is that in some tests (sp2 and amazon) poisoning a larger amount of memory made things faster. This could be because the memset() routine or the hardware itself is able to optimise larger writes more effectively. That said, it’s important to acknowledge that the standard deviation is fairly high, and doing the right statistical analysis is beyond this blog post.

Just right

Since poisoning more memory isn’t much slower, and in some cases is faster, than poisoning a little memory, we might as well poison 256 bytes. That comfortably covers the Element object and most others, and for larger objects it likely covers many of their most-often accessed fields. We’re confident that this is enough to help us catch many of the errors that can be caught with poisoning, while also performing well enough, especially in the pageload tests, where it is closer to the performance available with poisoning disabled. We think that Baby Bear would agree: it is Just Right.

It gets better

With the Probabilistic Heap Checker (PHC) rolling out soon we will have an even greater ability to catch information related to memory errors. I’ll be writing about this in the future.

Why is Papa Bear safe and Mama Bear fast?

In some ways it feels more natural to lean in to (negative) gender stereotypes, where Papa Bear wants things fast and Mama Bear is the cautious one. I considered this; however, it’s easier to explain poisoning before explaining turning poisoning off, and the nursery tale describes Papa Bear’s preferences first, so that’s the order I introduced them here. Flipping the script on gender stereotypes was accidental.

Adrian GaudebertDawnmaker has a Steam page AND a trailer

Dawnmaker, the game I’ve been working on at Arpentor Studio for more than two years, now has a Steam page and a trailer. I’ll let you discover it:

Did you like it? Don’t hesitate to add the game to your wishlist on Steam!

The Mozilla BlogA New Chapter for Mozilla: Focused Execution and an Expanded Role in Charting the Internet’s Future

Today marks a significant moment in our journey, and I am thrilled to share some important news with you. After much thoughtful consideration, I have decided to transition from the role of CEO of Mozilla Corporation back to the position of Mozilla Corporation Executive Chairwoman, a role I held with great passion for many years. 

During my 25 years at Mozilla, I’ve worn many hats, and this move is driven by a desire to streamline our focus and leadership for the challenges ahead. I’ve been leading the Mozilla business through a transformative period, while also overseeing Mozilla’s broader mission. It’s become evident that both endeavors need dedicated full-time leadership. 

Enter Laura Chambers, a dynamic board member who will step into the CEO role for the remainder of this year. Laura brings a wealth of experience, having been an active and impactful member of the Mozilla board for three years. With an impressive background leading product organization at Airbnb, PayPal, eBay, and most recently as CEO of Willow Innovations, Laura is well-equipped to guide Mozilla through this transitional period. 

Her focus will be on delivering successful products that advance our mission and building platforms that accelerate momentum. Laura and I will be working closely together throughout February to ensure a seamless transition, and in my role as Exec Chair I’ll continue to provide advice and engage in areas that touch on our unique history and Mozilla characteristics. 

Laura’s focus will be on Mozilla Corporation with two key goals: 

1. Vision and Strategy for the Future: Refining the company’s vision and aligning the corporate and product strategy behind it. This will be grounded in our mission and unique strengths and shaped by our point of view on technology’s future and our role in it.

2. Outstanding Execution: Focus, Processes, Capabilities: Doubling down on our core products, like Firefox, and building out our capabilities and innovation pipeline to bring new compelling products to market. 

While Laura takes on the reins as CEO of Mozilla Corporation, I will return to supporting the CEO and leadership team as I have done previously as Exec Chair. In addition, I will expand my work in two critical areas: 

1. More consistently representing Mozilla in the public – With a focus on policy, open source, and community — through speaking and direct engagement with the community.

2. Representing Mozilla as a unified entity – bigger than the sum of our parts — as we continue to strengthen and refine how all the entities work together to advance our policy and community goals with greater urgency and speed. 

We’re at a critical juncture where public trust in institutions, governments, and the fabric of the internet has reached unprecedented lows. There’s a tectonic shift underway as everyone battles to own the future of AI. It is Mozilla’s opportunity and imperative to forge a better future. I’m excited about Laura’s day-to-day involvement and the chance for Mozilla to achieve more. Our power lies in the collective effort of people contributing to something better and I’m eager for Mozilla to meet the needs of this era more fully. 

Thank you to everyone who participates in Mozilla, supports us, cheers us on, and works towards similar goals. Your dedication is the driving force behind Mozilla’s impact and success. Here’s to a future filled with innovation, collaboration, and continued success! 

The post A New Chapter for Mozilla: Focused Execution and an Expanded Role in Charting the Internet’s Future appeared first on The Mozilla Blog.

The Rust Programming Language BlogAnnouncing Rust 1.76.0

The Rust team is happy to announce a new version of Rust, 1.76.0. Rust is a programming language empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, you can get 1.76.0 with:

$ rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.76.0.

If you'd like to help us out by testing future releases, you might consider updating locally to use the beta channel (rustup default beta) or the nightly channel (rustup default nightly). Please report any bugs you might come across!

What's in 1.76.0 stable

This release is relatively minor, but as always, even incremental improvements lead to a greater whole. A few of those changes are highlighted in this post, and others may yet fill more niche needs.

ABI compatibility updates

A new ABI Compatibility section in the function pointer documentation describes what it means for function signatures to be ABI-compatible. A large part of that is the compatibility of argument types and return types, with a list of those that are currently considered compatible in Rust. For the most part, this documentation is not adding any new guarantees, only describing the existing state of compatibility.

The one new addition is that it is now guaranteed that char and u32 are ABI compatible. They have always had the same size and alignment, but now they are considered equivalent even in function call ABI, consistent with the documentation above.
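As a small illustration of what the new guarantee permits, the function-pointer transmute below is sound as of 1.76 (the value passed through the u32 signature must still be a valid char scalar value):

```rust
// char and u32 have always had the same size and alignment; as of Rust 1.76
// they are also guaranteed compatible in function call ABI, so transmuting
// between these function-pointer types is sound. The argument passed through
// the u32 signature must still be a valid char scalar value.
fn code_point(c: char) -> u32 {
    c as u32
}

fn main() {
    let f: fn(u32) -> u32 = unsafe {
        std::mem::transmute(code_point as fn(char) -> u32)
    };
    assert_eq!(f('A' as u32), 65);
    println!("{:#x}", f('🦀' as u32)); // prints 0x1f980
}
```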

Type names from references

For debugging purposes, any::type_name::<T>() has been available since Rust 1.38 to return a string description of the type T, but that requires an explicit type parameter. It is not always easy to specify that type, especially for unnameable types like closures or for opaque return types. The new any::type_name_of_val(&T) offers a way to get a descriptive name from any reference to a type.

fn get_iter() -> impl Iterator<Item = i32> {
    [1, 2, 3].into_iter()
}

fn main() {
    let iter = get_iter();
    let iter_name = std::any::type_name_of_val(&iter);
    let sum: i32 = iter.sum();
    println!("The sum of the `{iter_name}` is {sum}.");
}

This currently prints:

The sum of the `core::array::iter::IntoIter<i32, 3>` is 6.

Stabilized APIs

Other changes

Check out everything that changed in Rust, Cargo, and Clippy.

Contributors to 1.76.0

Many people came together to create Rust 1.76.0. We couldn't have done it without all of you. Thanks!

Mozilla Localization (L10N)A Deep Dive Into the Evolution of Pretranslation in Pontoon

Quite often, an imperfect translation is better than no translation. So why even publish untranslated content when high-quality machine translation systems are fast and affordable? Why not immediately machine-translate content and progressively ship enhancements as they are submitted by human translators?

At Mozilla, we call this process pretranslation. We began implementing it in Pontoon before COVID-19 hit, thanks to Vishal who landed the first patches. Then we caught some headwinds and didn’t make much progress until 2022, when we received a significant development boost, and we finally launched it for a general audience in September 2023.

So far, 20 of our localization teams (locales) have opted to use pretranslation across 15 different localization projects. Over 20,000 pretranslations have been submitted and none of the teams have opted out of using it. These efforts have resulted in a higher translation completion rate, which was one of our main goals.

In this article, we’ll take a look at how we developed pretranslation in Pontoon. Let’s start by exploring how it actually works.

How does pretranslation work?

Pretranslation is enabled upon a team’s request (it’s off by default). When a new string is added to a project, it gets automatically pretranslated using a 100% match from translation memory (TM), which also includes translations of glossary entries. If a perfect match doesn’t exist, a locale-specific machine translation (MT) engine is used, trained on the locale’s translation memory.
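Sketched in Rust with invented names (Pontoon itself is a Python/Django application; this only shows the shape of the decision):

```rust
use std::collections::HashMap;

// Simplified sketch of the pretranslation decision with invented names:
// a 100% translation-memory match wins; otherwise fall back to the
// locale-specific machine-translation engine.
fn pretranslate(
    tm: &HashMap<String, String>,
    machine_translate: impl Fn(&str) -> String,
    source: &str,
) -> String {
    match tm.get(source) {
        Some(perfect_match) => perfect_match.clone(),
        None => machine_translate(source),
    }
}

fn main() {
    let mut tm = HashMap::new();
    tm.insert("Cancel".to_string(), "Annuler".to_string());
    let mt = |s: &str| format!("MT({s})");

    assert_eq!(pretranslate(&tm, &mt, "Cancel"), "Annuler");
    assert_eq!(pretranslate(&tm, &mt, "New tab"), "MT(New tab)");
    println!("ok");
}
```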

Pretranslation opt-in form.

After pretranslations are retrieved and saved in Pontoon, they get synced to our primary localization storage (usually a GitHub repository) and hence immediately made available for shipping, unless they fail our quality checks. In that case, they don’t propagate to repositories until errors or warnings are fixed during the review process.

Until reviewed, pretranslations are visually distinguishable from user-submitted suggestions and translations. This makes post-editing much easier and more efficient. Another key factor that influences pretranslation review time is, of course, the quality of pretranslations. So let’s see how we picked our machine translation provider.

Choosing a machine translation engine

We selected the machine translation provider based on two primary factors: quality of translations and the number of supported locales. To make translations match the required terminology and style as much as possible, we were also looking for the ability to fine-tune the MT engine by training it on our translation data.

In March 2022, we compared Bergamot, Google’s Cloud Translation API (generic), and Google’s AutoML Translation (with custom models). Using these services we translated a collection of 1,000 strings into 5 locales (it, de, es-ES, ru, pt-BR), and used automated scores (BLEU, chrF++) as well as manual evaluation to compare them with the actual translations.

Performance of tested MT engines for Italian (it).

Google’s AutoML Translation outperformed the other two candidates in virtually all tested scenarios and metrics, so it became the clear choice. It supports over 60 locales. Google’s Generic Translation API supports twice as many, but we currently don’t plan to use it for pretranslation in locales not supported by Google’s AutoML Translation.

Making machine translation actually work

Currently, around 50% of pretranslations generated by Google’s AutoML Translation get approved without any changes. For some locales, the rate is around 70%. Keep in mind however that machine translation is only used when a perfect translation memory match isn’t available. For pretranslations coming from translation memory, the approval rate is 90%.

Comparison of pretranslation approval rate between teams.

To reach that approval rate, we had to make a series of adjustments to the way we use machine translation.

For example, we convert multiline messages to single-line messages before machine-translating them. Otherwise, each line is treated as a separate message and the resulting translation is of poor quality.

Multiline message:

Make this password unique and different from any others you use.
A good strategy to follow is to combine two or more unrelated
words to create an entire pass phrase, and include numbers and symbols.

Multiline message converted to a single-line message:

Make this password unique and different from any others you use. A good strategy to follow is to combine two or more unrelated words to create an entire pass phrase, and include numbers and symbols.
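The conversion can be sketched in a few lines of Python (a simplified illustration, not Pontoon's actual implementation):

```python
def to_single_line(message: str) -> str:
    """Collapse a multiline message into a single line so the MT engine
    treats it as one message rather than one message per line."""
    return " ".join(line.strip() for line in message.splitlines() if line.strip())

multiline = (
    "Make this password unique and different from any others you use.\n"
    "A good strategy to follow is to combine two or more unrelated\n"
    "words to create an entire pass phrase, and include numbers and symbols."
)
single = to_single_line(multiline)
```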

Let’s take a closer look at two of the more time-consuming changes.

The first one is specific to our machine translation provider (Google’s AutoML Translation). During initial testing, we noticed it would often take a long time for the MT engine to return results, up to a minute. Sometimes it even timed out! Such a long response time not only slows down pretranslation, it also makes machine translation suggestions in the translation editor less useful – by the time they appear, the localizer has already moved to translate the next string.

After further testing, we began to suspect that our custom engine shuts down after a period of inactivity, thus requiring a cold start for the next request. We contacted support and our assumption was confirmed. To overcome the problem, we were advised to send a dummy query to the service every 60 seconds just to keep the system alive.

Of course, it’s reasonable to shut down inactive services to free up resources, but the way to keep them alive isn’t. We have to make (paid) requests to each locale’s machine translation engines every minute just to make sure they work when we need them. And sometimes even that doesn’t help – we still see about a dozen ServiceUnavailable errors every day. It would be so much easier if we could just customize the default inactivity period or pay extra for an always-on service.
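The workaround boils down to a scheduled no-op request per engine. A minimal sketch of such a keep-alive loop follows; the `translate` callable is a hypothetical stand-in for the real Google API client, and the 60-second interval mirrors the advice we received:

```python
import threading

def keep_alive(translate, locales, interval=60.0):
    """Periodically send a dummy query to each locale's MT engine so it
    isn't shut down for inactivity. Returns an Event that stops the loop."""
    stop = threading.Event()

    def loop():
        while not stop.wait(interval):
            for locale in locales:
                try:
                    translate(locale, "ping")  # throwaway request, result ignored
                except Exception:
                    # The engine may still be cold (e.g. ServiceUnavailable);
                    # just try again on the next tick.
                    pass

    threading.Thread(target=loop, daemon=True).start()
    return stop
```

Note that this design spends a (paid) request per engine per tick purely to keep the service warm, which is exactly the overhead we'd rather avoid.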

The other issue we had to address is quite common in machine translation systems: they are not particularly good at preserving placeholders. In particular, extra space often gets added to variables or markup elements, resulting in broken translations.

Message with variables:

{ $partialSize } of { $totalSize }

Message with variables machine-translated to Slovenian (adding space after $ breaks the variable):

{$ partialSize} od {$ totalSize}

We tried to mitigate this issue by wrapping placeholders in <span translate="no">…</span>, which tells Google’s AutoML Translation not to translate the wrapped text. This approach requires the source text to be submitted as HTML (rather than plain text), which triggers a whole new set of issues, from adding spaces in other places to escaping quotes, and we couldn’t circumvent those either. So this was a dead end.

The solution was to store every placeholder in the Glossary with the same value for both source string and translation. That approach worked much better and we still use it today. It’s not perfect, though, so we only use it to pretranslate strings for which the default (non-glossary) machine translation output fails our placeholder quality checks.
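The resulting flow can be sketched like this. It is a simplified illustration: `mt_plain` and `mt_with_glossary` are hypothetical stand-ins for the two API calls, and the quality check is reduced to Fluent-style `{ $var }` variables:

```python
import re

VAR = re.compile(r"\{\s*(\$\w+)\s*\}")  # matches placeables like { $partialSize }

def placeholders_intact(source: str, translation: str) -> bool:
    """True if every variable in the source survives, unbroken, in the
    translation. A space after $ (e.g. "{$ partialSize}") fails the check."""
    return sorted(VAR.findall(source)) == sorted(VAR.findall(translation))

def pretranslate(source, mt_plain, mt_with_glossary):
    """Try plain MT first; fall back to the glossary-protected engine only
    when the plain output breaks a placeholder."""
    candidate = mt_plain(source)
    if placeholders_intact(source, candidate):
        return candidate
    return mt_with_glossary(source)
```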

Making pretranslation work with Fluent messages

On top of the machine translation service improvements we also had to account for the complexity of Fluent messages, which are used by most of the projects we localize at Mozilla. Fluent is capable of expressing virtually any imaginable message, which means it is the localization system you want to use if you want your software translations to sound natural.

As a consequence, Fluent message format comes with a syntax that allows for expressing such complex messages. And since machine translation systems (as seen above) already have trouble with simple variables and markup elements, their struggles multiply with messages like this:

shared-photos =
 { $photoCount ->
    [one]
      { $userGender ->
        [male] { $userName } added a new photo to his stream.
        [female] { $userName } added a new photo to her stream.
       *[other] { $userName } added a new photo to their stream.
      }
   *[other]
      { $userGender ->
        [male] { $userName } added { $photoCount } new photos to his stream.
        [female] { $userName } added { $photoCount } new photos to her stream.
       *[other] { $userName } added { $photoCount } new photos to their stream.
      }
  }

That means Fluent messages need to be pre-processed before they are sent to the pretranslation systems. Only relevant parts of the message need to be pretranslated, while syntax elements need to remain untouched. In the example above, we extract the following message parts, pretranslate them, and replace them with pretranslations in the original message:

  • { $userName } added a new photo to his stream.
  • { $userName } added a new photo to her stream.
  • { $userName } added a new photo to their stream.
  • { $userName } added { $photoCount } new photos to his stream.
  • { $userName } added { $photoCount } new photos to her stream.
  • { $userName } added { $photoCount } new photos to their stream.

To be more accurate, this is what happens for languages like German, which uses the same CLDR plural forms as English. For locales without plurals, like Chinese, we drop plural forms completely and only pretranslate the remaining three parts. If the target language is Slovenian, two additional plural forms need to be added (two, few), which in this example results in a total of 12 messages needing pretranslation (four plural forms, with three gender forms each).
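The effect of the target locale's plural categories on the example above can be illustrated with a small sketch (the CLDR category lists here are a simplified subset, not a full CLDR implementation):

```python
# Simplified subset of CLDR cardinal plural categories per locale.
CLDR_PLURALS = {
    "de": ["one", "other"],                # same categories as English
    "zh": ["other"],                       # no plural distinction
    "sl": ["one", "two", "few", "other"],  # two extra forms vs. English
}

GENDER_VARIANTS = ["male", "female", "other"]

def parts_to_pretranslate(locale: str) -> int:
    """Number of message parts the shared-photos example yields for a locale."""
    return len(CLDR_PLURALS[locale]) * len(GENDER_VARIANTS)
```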

Finally, the Pontoon translation editor uses a custom UI for translating access keys, which means it can detect which part of a message is an access key and which is the label the access key belongs to. The access key should ideally be one of the characters of the label, so the editor generates a list of candidates that translators can choose from. In pretranslation, the first candidate is used directly as the access key, so no TM or MT is involved.

A screenshot of Notepad showing access keys in the menu.

Access keys (not to be confused with shortcut keys) are used for accessibility to interact with all controls or menu items using the keyboard. Windows indicates access keys by underlining the access key assignment when the Alt key is pressed. Source: Microsoft Learn.
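Candidate generation for access keys can be sketched as follows (a simplified illustration of the behavior described above, not Pontoon's actual code):

```python
def access_key_candidates(label: str) -> list[str]:
    """Unique letters of the label, in order; in pretranslation the first
    one is used directly as the access key (no TM or MT involved)."""
    candidates = []
    for ch in label:
        c = ch.lower()
        if c.isalpha() and c not in candidates:
            candidates.append(c)
    return candidates
```

For a label like "Save As" this yields ['s', 'a', 'v', 'e'], so the pretranslated access key would be "s".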

Looking ahead

With every enhancement we shipped, the case for publishing untranslated text instead of pretranslations became weaker and weaker. And there’s still room for improvements in our pretranslation system.

Ayanaa has done extensive research on the impact of Large Language Models (LLMs) on translation efficiency. She’s now working on integrating LLM-assisted translations into Pontoon’s Machinery panel, from which localizers will be able to request alternative translations, including formal and informal options.

If the target locale could set the tone to formal or informal on the project level, we could benefit from this capability in pretranslation as well. We might also improve the quality of machine translation suggestions by providing existing translations into other locales as references in addition to the source string.

If you are interested in using pretranslation or already use it, we’d love to hear your thoughts! Please leave a comment, reach out to us on Matrix, or file an issue.

This Week In RustThis Week in Rust 533

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @ThisWeekInRust on Twitter or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

Official
Foundation
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs
Miscellaneous

Crate of the Week

This week's crate is embedded-cli-rs, a library that makes it easy to create CLIs on embedded devices.

Thanks to Sviatoslav Kokurin for the self-suggestion!

Please submit your suggestions and votes for next week!

Call for Participation; projects and speakers

CFP - Projects

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

CFP - Speakers

Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.

  • RustNL 2024 CFP closes 2024-02-19 | Delft, The Netherlands | Event date: 2024-05-07 & 2024-05-08
  • NDC Techtown CFP closes 2024-04-14 | Kongsberg, Norway | Event date: 2024-09-09 to 2024-09-12

If you are an event organizer hoping to expand the reach of your event, please submit a link to the submission website through a PR to TWiR.

Updates from the Rust Project

309 pull requests were merged in the last week

Rust Compiler Performance Triage

Rust's CI was down most of the week, leading to a much smaller collection of commits than usual. Results are mostly neutral for the week.

Triage done by @simulacrum. Revision range: 5c9c3c78..0984bec

0 Regressions, 2 Improvements, 1 Mixed; 1 of them in rollups. 17 artifact comparisons made in total.

Full report here

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

  • No RFCs were approved this week.
Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs
  • No RFCs entered Final Comment Period this week.
Tracking Issues & PRs
Language Reference
  • No Language Reference RFCs entered Final Comment Period this week.
Unsafe Code Guidelines
  • No Unsafe Code Guideline RFCs entered Final Comment Period this week.
New and Updated RFCs
Call for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization. The following RFCs would benefit from user testing before moving forward:

  • No RFCs issued a call for testing this week.

If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Upcoming Events

Rusty Events between 2024-02-07 and 2024-03-06 🦀

Virtual
Asia
Europe
North America
Oceania

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Jobs

Please see the latest Who's Hiring thread on r/rust

Quote of the Week

My take on this is that you cannot use async Rust correctly and fluently without understanding Arc, Mutex, the mutability of variables/references, and how async and await syntax compiles in the end. Rust forces you to understand how and why things are the way they are. It gives you minimal abstraction to do things that could’ve been tedious to do yourself.

I got a chance to work on two projects that drastically forced me to understand how async/await works. The first one is to transform a library that is completely sync and only requires a sync trait to talk to the outside service. This all sounds fine, right? Well, this becomes a problem when we try to port it into browsers. The browser is single-threaded and cannot block the JavaScript runtime at all! It is arguably the most weird environment for Rust users. It is simply impossible to rewrite the whole library, as it has already been shipped to production on other platforms.

What we did instead was rewrite the network part using async syntax, but using our own generator. The idea is simple: the generator produces a future when called, and the produced future can be awaited. But! The produced future contains an arc pointer to the generator. That means we can feed the generator the value we are waiting for, then the caller who holds the reference to the generator can feed the result back to the function and resume it. For the browser, we use the native browser API to derive the network communications; for other platforms, we just use regular blocking network calls. The external interface remains unchanged for other platforms.

Honestly, I don’t think any other language out there could possibly do this. Maybe C or C++, but which will never have the same development speed and developer experience.

I believe people have already mentioned it, but the current asynchronous model of Rust is the most reasonable choice. It does create pain for developers, but on the other hand, there is no better asynchronous model for Embedded or WebAssembly.

/u/Top_Outlandishness78 on /r/rust

Thanks to Brian Kung for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, cdmistman, ericseppanen, extrawurst, andrewpollack, U007D, kolharsam, joelmarcey, mariannegoldin, bennyvasquez.

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

The Mozilla BlogEntrepreneur Trisha Prabhu dishes on technology’s evolution, AI and her early career success

At Mozilla, we know we can’t create a better future alone. That is why each year we highlight the work of 25 digital leaders using technology to amplify voices, effect change, and build new technologies globally through our Rise 25 Awards. These storytellers, innovators, activists, advocates, builders and artists are helping make the internet more diverse, ethical, responsible and inclusive.

This week, we chatted with winner Trisha Prabhu, an award-winning innovator, social entrepreneur, technologist and advocate who has combated cyberbullying and online hate since the age of 13 to make the internet a better place for everyone. We talk with her about the evolution of her app to stop online hate, ReThink, her growth as a professional through college at Oxford and Harvard, her biggest inspirations, and how she views the future of the internet.

How has ReThink evolved its technology from the early days in 2013 to adapt to an online climate in 2024, where the internet has flooded with more hate speech and bullying?

I think ReThink has evolved in two key ways. So first, I’ll say one of our biggest advantages is that we are platform-agnostic. So, because the technology is a keyboard, and it works at the keyboard level, we’re able to work across any platform, whether it’s social media, to email and text. That ended up being really handy for us because when I developed and designed the technologies and keyboard way back in 2013, I had no idea that the internet was going to play the role that it does today in our lives a decade later. 

<figcaption class="wp-element-caption">Trisha Prabhu at Mozilla’s Rise25 award ceremony in October 2023.</figcaption>

In terms of how things have changed and what we’ve had to kind of account for over the last decade, I think one big change has definitely been the type of cyberbullying that we see. When I first started this work, it was very text based — people using text to say mean things to each other. Certainly with the advent of AI (artificial intelligence) now in the last year, we’ve seen how image and video based harassment has become much more pervasive — people using memes, people using explicit images to bully, to harass, and to intimidate other people. One thing that we really had to adapt to and what we’re currently working on now is developing ReThink for detection of offensive images and videos, acknowledging that the way that people are harassing each other are changing. No doubt that work is going to continue. 

As we think about the metaverse, something that I’m really worried about is a world in which — and we’ve already had instances that this happened — users can be physically harassed right in the metaverse. It’s a completely different level of harm. As the harm itself evolves, we’ve been evolving. 

Another thing we’ve had to do is adapt the languages that ReThink is available in, and the populations that it can serve. ReThink, as it was developed 10 years ago, was available in English. Today, on the Google Play Store, we’re available in nine languages. So, that’s been a huge evolution, to make the technology available for all those populations and think about the ways that cyberbullying is manifesting in these different contexts. 

What do you think is the biggest challenge we face online this year, and how do we combat that? 

I think the biggest issue that we face online this year would have to be the threat of AI to democracy and elections, just because there’s so much of the world that is voting in elections this year. Most recently, we had the Taiwanese elections in January. And, of course, here in the U.S., we’re going to have our election in November. I think that that is an issue that definitely stands out to me.

It’s not one that’s directly related to my work, but there are other issues that are constants. Child sexual exploitation, that’s an issue I do a lot of advocacy work on, and that’s an issue that remains constant and extremely important every year.

But if there’s one issue that I’d single out and say is very particular to this year specifically, it’s that it’s kind of the first year that we’re going to see our democratic institutions and these new (AI) technologies interact. I’m concerned, but I’m also hopeful in the sense that a lot of people are paying attention to this. So I think that this can be a really powerful year for learning and a chance to identify harms where they’re happening and hopefully take action. 

“I think the biggest issue that we face online this year would have to be the threat of AI to democracy and elections, just because there’s so much of the world that is voting in elections this year.”

Trisha Prabhu

You’ve achieved a lot of success at a very young age before you went to the University of Oxford and then Harvard. When you look back at that time in your life, what do you wish you would’ve known about entrepreneurship as a teenager? How did your college experience refine the work that you do now?

There are probably two things that I wish that I had known at that age that I definitely didn’t know. One was that, in entrepreneurship, failure is not a bad thing. It really is such an iterative process. For every success that I’ve had with ReThink, we’ve had so many moments of something not working, something not going through, us trying to figure something out. Certainly the work that we do, it’s so critical that we get it right. It’s so critical that we’re thinking about language in a really precise way, in a really nuanced way. It’s so critical that we’re thinking about what are the key concepts that we need to share with youth about anti hate and digital literacy? I figured, like a lot of young people — and a lot of people generally —  that if you fail at something, that means you must not be good at it or that it’s not working. And I think with entrepreneurship, there just needs to be a comfort with failure and a willingness to be open to learn from it. The best entrepreneurs are those that do that. I think some of the challenges we’ve seen on the internet today are not necessarily because of a discomfort or a failure, but an unwillingness to learn from it. I think that’s definitely something that I’ve learned since. 

Another thing that I didn’t know was that you don’t need to have a formal business education to be a great entrepreneur. I figured the best entrepreneurs were the ones who had gotten an MBA and had the fancy background. But the truth is, you can learn a lot on the job and you know your product and your mission, and you know the people that you’re trying to serve the best. Especially when you’re one of the people coming from the community that you’re trying to serve, and you have lived experience with the issue. That will take you so much further than an MBA ever will. I always felt a sense of insecurity of “I don’t have this formal training,” but I wish I could tell myself that what I did have, which was a knowledge of the ecosystem and the issues in a way that no adult in the room did, was tremendously more powerful. 

“My vision was, can I create an anti hate digital literacy resource that is written to youth in their voice that is actually something that as a 10-year-old, I would have wanted to read? That is fun, that is engaging, that is interesting. And so that was really what gave birth to ReThink The Internet.”

Trisha Prabhu

In terms of how college refined me, I think college was a time and an opportunity for me to start to get some of that formal education, and it was really, really powerful and helpful. But it also led me to say, “Hey, I actually did pretty good for not knowing all of this stuff.” It was the moment of realization that this is very powerful, and I’ve learned a lot. But also, “You can do a lot.” One of the biggest things I learned was that we are at the core of our education. So much of what I learned in school was not in the textbook. It was in conversation with other students or interrogating my own thoughts or perspectives. Recognizing your own source of power as an agent for change, as opposed to thinking that there is some prescribed way to make an impact, but that’s something that college affirmed for me that I didn’t really know as a young person that I would definitely tell myself now. You just have so much more capability and ability than you realize.

Obviously, you’ve won many different awards. You’ve received a lot of recognition. You’ve traveled to a lot of different places — Shark Tank, the White House, TED Talks. Is there one experience you’ve done that surprised you or felt really special to you?

A lot of things come to mind, including Mozilla’s Rise 25 award. If I were to pick something, I’d probably say the TED Talk that I did in India back in 2017. I went back to Mumbai, and it was actually a talk that I delivered in Hindi, which is not my first language. So, it was an interesting offer because it was a chance to talk about an issue that in India is very stigmatized — cyberbullying and mental health — to an audience that wasn’t maybe necessarily ready to hear the message and from someone who is not from the country. It surprised me because it was a chance to challenge myself. It was a chance to push myself out of my comfort zone to deliver a talk not in my native language. And it was also a chance, I think, to push the folks that I was speaking to out of their comfort zone and to say, “Hey, these aren’t topics that we talk about that we need to.” In the end, through that partnership with TED, they actually worked with a local television program in India and were able to televise the talks to 650 million Indian viewers, which is incredible. It was part of changing narratives of how we see certain issues. That was really powerful to me because it was anti hate advocacy in its most impactful form. 

I think to be able to do that work in a space where so few people were talking about these issues and know that I was igniting conversations, that felt really gratifying and super important. And it was awesome for me personally to have to push my own boundaries and kinda step out of my comfort zone a little.

We wanted to ask you about your book, “ReThink the Internet: How to Make The Digital World a Lot Less Sucky.” What inspired you to do that in 2022 at that point in your career? What was the most challenging part of that book to write? 

The inspiration for the book came from my experience traveling globally and talking with youth about the anti hate educational experiences they had, and coming away with this common thread, which was that it’s just so boring. (They think that) internet education is not exciting, it’s not interesting. (Youth felt that) It’s not engaging to me, I don’t like the resources that I’m being presented with, it makes us tune out.

And so my vision was, can I create an anti hate digital literacy resource that is written to youth in their voice that is actually something that as a 10-year-old, I would have wanted to read? That is fun, that is engaging, that is interesting. And so that was really what gave birth to ReThink The Internet.

It’s structured less as an educational guide and more as a series of seven fun vignettes and stories that teach seven lessons about responsible digital citizenship, but also offer opportunities for actually putting those lessons into practice, reflecting critically. So it’s a really nice balance, and the biggest piece of feedback I’ve heard is, “Wow, I actually really enjoyed reading this, and I went to it.” That was my vision, I wanted to create something young people actually liked.

What was the hardest part? It was probably thinking about how to do that. It was thinking back to me at, like, 10, 11, 12, like, what got me into a book. And how do I take these really complex topics like distinguishing inaccurate or misleading information from true information on the internet. How do I take a really big topic like that and make it accessible to a young audience and make it fun? So it was a lot of talking with young people about their experiences, reflecting on my own and brainstorming in creative ways to share stories that young people could resonate with. 

Where do you draw inspiration from in continuing the work that you do today?  

I draw inspiration from the young people that I work with. That is the young people who I have a chance to serve through my work with ReThink, who I advocate with for better internet. There are still so many young people who are suffering online because of internet harms. We see it today — there was a Senate judiciary hearing with five big tech CEOs testifying about youth safety. But there are so many young people who are survivors of internet challenges, parents that are survivors of internet challenges. And they, to me, are my constant inspiration and reminder that this work is not finished and that we have so much more to do and that we’ve got to press on and keep working for a better digital universe. 

“Way back when Web 2.0 was being launched, we didn’t have a diverse group of technologists creating our digital world. I think today, we’re starting to see that paradigm shift, where those voices are finally starting to be invited into the fold, and also we’re starting to demand our seat at the table.”

Trisha Prabhu

What is one action that you think everyone should take to be able to make the internet a little better?

I guess this is very consistent with my work, but I genuinely do think it’s a really small and yet super powerful thing that everyone can do: just to pause and think before you post and share. And that’s not just with respect to what you’re saying, but it’s even with respect to an article that you might be sharing. A lot of people don’t think of retweeting an article as a form of cyberbullying or hate, but depending on what you’re amplifying, especially if you haven’t taken the time to read that article, you might be inadvertently contributing to the spread of information that is less than the gold standard. If you’re composing a tweet or a message, you might say something in the heat of the moment looking at a phone instead of someone’s face that you regret later that you never say to someone in person. If everyone can just take a second to pause and think before they say something, I imagine our internet would look very different.

We started Rise 25 to celebrate Mozilla’s 25th anniversary. What do you hope people are celebrating in the next 25 years?  

I hope people are celebrating an internet that is more kind and an internet that celebrates difference and affirms people as they are. An internet that protects and safeguards our rights. And Mozilla has really been at the forefront of that fight, protecting people’s privacy, protecting people’s agency, protecting people’s right to have the digital experience that they want to have. I hope that the next 25 years are spent celebrating an internet where users are at the forefront of our digital experience and that it is one that is fundamentally safe, free and open.

What gives you hope about the future of the internet? 

I think what gives me hope about the future of the internet is the number of incredible young people, and members of historically underrepresented communities, that are stepping up and demanding a seat at the table when it comes to building a better internet and when it comes to building technologies of the future. I think that gap is one of the biggest reasons that we’ve seen so many internet harms today, being that we didn’t have young people that were a part of that process. Way back when Web 2.0 was being launched, we didn’t have a diverse group of technologists creating our digital world. I think today, we’re starting to see that paradigm shift, where those voices are finally starting to be invited into the fold, and also we’re starting to demand our seat at the table. And we’re starting to — as activists, as technologists, as builders, as creators, as visionaries — see the internet that we want and start putting it into place. I think that that gives me a lot of hope because with our perspectives and lived experiences at the forefront, I think we really can create an internet that belongs to everyone. 

Get Firefox

Get the browser that protects what’s important

The post Entrepreneur Trisha Prabhu dishes on technology’s evolution, AI and her early career success appeared first on The Mozilla Blog.

Firefox NightlyA Preview of Tab Previews – These Weeks in Firefox: Issue 153

Highlights

  • Tab Previews! Congratulations to DJ for getting these landed. Currently disabled by default, but you can test them by setting `browser.tabs.cardPreview.enabled` to true

A tab preview showing the page contents of another background tab

A comparison showing increased contrast and lower brightness for images displayed in dark theme Reader Mode

Increased contrast and reduced brightness make images easier on the eyes (right: old changes, left: new changes)

Friends of the Firefox team

Resolved bugs (excluding employees)

Script to find new contributors from bug list

Volunteers that fixed more than one bug

  • Gregory Pappas [:gregp]

New contributors (🌟 = first patch)

Project Updates

Add-ons / Web Extensions

Addon Manager & about:addons
  • Thanks to Anna Yeddi, a missing label on the remove-shortcut icon in the extension shortcuts management view of the about:addons page has been identified and added. Another accessibility issue caught by the a11y jobs 🥳 – Bug 1873304
WebExtensions Framework
  • As part of follow ups to the work on the new taskcluster jobs to run webextensions tp6 and tp6m perftests jobs (landed as tier-3 jobs as part of Bug 1859549 in December):
    • A new linter named condprof-addons has landed; it makes sure that the xpi files referenced in condprof customization files and the firefox-addons.tar archive (fetched through the related CI fetch task) do not go out of sync with each other – Bug 1868144
      • The condprof-addons linter is documented here
      • Thanks to ahal and sparky for their help and support on introducing this new linter
    • A new doc section has been added to the Raptor Browsertime doc page, to briefly provide a description of the webextensions tp6/tp6m perftests jobs and examples for how to run these tests locally and in try pushes – Bug 1874487
      • The new section is already available here

Developer Tools

DevTools
  • Alex added a notice at the bottom of the Debugger editor when the source map file is invalid or unavailable (bug)

Notification about a Source Map Error due to an unexpected non-whitespace character

  • Hubert delayed getting information about sources functions until we need to display them, which made opening files faster (bug)
  • Hubert fixed an issue where the Debugger would crash (bug)
  • Alex added options to the console :trace command to limit the depth and the number of top-level frames being traced (bug)
    • (still behind devtools.debugger.features.javascript-tracing)
    • :trace --max-depth N --max-records M
  • Alex improved performance of the console when it’s receiving a very large number of messages (bug, bug)
  • Nicolas made Ctrl+Enter (Cmd+Enter on MacOS) on Rules view input advance the focus to the next editable property, i.e. like the Tab key (bug)
  • Nicolas added a hint about the new Enter key behavior in Rules view input, linking to an explanatory blog post (bug)

Notification explaining that Enter key no longer changes focus in Rules view

WebDriver BiDi
  • Julian added support for the network.fetchError event, which is emitted when a request ends in an error state (bug)
  • Julian implemented the network.failRequest command, which forces an intercepted request to fail and fires a network.fetchError event (bug)
  • Sasha made script.evaluate, script.callFunction and script.disown ignore the realm argument when a context argument is passed (bug)
  • Henrik fixed an issue with the browsingContext.create command, aligning with Chrome for a consistent cross-browser experience (bug)

ESMification status

  • Some changes landed after today’s numbers were generated (see new tab page section below) – that brings us to the 90% mark on browser/
  • ESMified status:
    • browser: 89%
    • toolkit: 99%
    • Total:  96.48% (no change)
  • #esmification on Matrix

Lint, Docs and Workflow

Migration Improvements

  • The next wave of spotlight messages to encourage users without accounts to create one to aid in device migration should be going out in a week or so.
  • The infrastructure that allows for doing backups of active SQLite databases has landed. We’re hoping this can be part of the foundations for a backup-to-local-file utility.

New Tab Page

Picture-in-Picture

  • emilio landed some patches fixing a regression where KDE/Wayland window rules for the PiP window were not working as intended (bug)

Performance

Reader Mode

Search and Navigation

The Mozilla BlogIntroducing Mozilla Monitor Plus, a new tool to automatically remove your personal information from data broker sites

Today, Mozilla Monitor (previously called Firefox Monitor), a free service that notifies you when your email has been part of a breach, announced its new paid subscription service offering: automatic data removal and continuous monitoring of your exposed personal information. 

<figcaption class="wp-element-caption">Introducing Mozilla Monitor Plus</figcaption>

According to a consumer privacy survey, 42% of young adults – aged 18-24 – want to learn more about the types of information that companies have about them. Yet taking the steps to request changes or delete personal data can be overwhelming. At Mozilla, we’re always looking for ways to protect people’s privacy and give them greater control when they go online. Enter Monitor Plus.

“When we launched Monitor, our goal was to help people discover where their personal info may have been exposed. Now, with Monitor Plus, we’ll help people take back their exposed data from data broker sites that are trying to sell it,” said Tony Amaral-Cinotto, Product Manager of Mozilla Monitor at Mozilla. “Our long-standing commitment to put people’s needs first and our easy step-by-step process makes Monitor Plus unique. Additionally, we combine breach alerts and data broker removal to offer an all-in-one protection tool and make it easier for people to feel and be safe online.” 

First step: Find out where your personal information has been exposed

More than 10 million people have signed up with Mozilla Monitor so they can be notified when their personal data has been involved in a data breach. Today, we are rolling out a new feature with a free one-time scan, where people can take the next step to see where their personal information has been exposed on sites selling it for profit. This could include information like your name, current and previous home addresses, and phone numbers. It could also go another layer deeper with information like family member names, criminal history, your kids’ school district, and even your hobbies.

To get your complimentary scan, you will need to provide your first and last name, the current city and state you live in, your date of birth, and your email address. This is the least amount of information we need to get the most accurate search results for you; it is encrypted and handled according to Mozilla’s privacy policy, which always puts people first. From there, you can see where your personal info is exposed, whether through a data breach or on broker sites. We also flag high-risk data breaches – exposures that may include social security numbers, credit card information, or bank account and PIN numbers – and show you how to resolve them.

brief GIF showing the fields where you enter some personal data, then a screen showing "scanning for exposure", then the dashboard where you can fix.<figcaption class="wp-element-caption">Take the step to see where your personal info has been exposed</figcaption>

Second step: Take back your personal information with Monitor Plus

If you’re the type who wants to set it and forget it, knowing the work is happening behind the scenes, we can automatically and continuously request removal of your personal information with a paid annual subscription of $8.99 per month ($107.88 a year). On your behalf, Mozilla Monitor will start with data removal requests, then scan every month to make sure your personal information stays off data broker sites. Monitor Plus will let you know once your personal information has been removed from more than 190 data broker sites – twice as many as other competitors.

<figcaption class="wp-element-caption">See the actual sites where your personal info has been exposed</figcaption>
<figcaption class="wp-element-caption">Mark as fixed in the dashboard</figcaption>

At launch, the Monitor Plus free scan and paid subscription service will be offered to people based in the United States. 

Privacy starts with a Mozilla Account

Mozilla has built a reputation for creating and delivering products – Firefox and Mozilla VPN – that put people’s privacy needs first, so you can count on Mozilla Monitor as an ally in reclaiming your privacy. To get a free scan and sign up for paid automated data removal, you’ll need a Mozilla Account (previously known as a Firefox Account). With a Mozilla Account, you’ll get security benefits such as two-factor authentication, all backed by Mozilla’s terms of service and privacy policy. To learn about the benefits of having a Mozilla Account, click here.

Find out where your private info is exposed – and take it back

Try a free scan today with Mozilla Monitor!

The post Introducing Mozilla Monitor Plus, a new tool to automatically remove your personal information from data broker sites  appeared first on The Mozilla Blog.

Nick FitzgeraldGarbage Collection Without Unsafe Code

Many people, including myself, have implemented garbage collection (GC) libraries for Rust. Manish Goregaokar wrote up a fantastic survey of this space a few years ago. These libraries aim to provide a safe API for their users to consume: an unsafe-free interface which soundly encapsulates and hides the library’s internal unsafe code. The one exception is their mechanism to enumerate the outgoing GC edges of user-defined GC types, since failure to enumerate all edges can lead the collector to believe that an object is unreachable and collect it, despite the fact that the user still has a reference to the reclaimed object, leading to use-after-free bugs.1 This functionality is generally exposed as an unsafe trait for the user to implement because it is the user’s responsibility, not the library’s, to uphold this particular critical safety invariant.

However, despite providing safe interfaces, all of these libraries make extensive use of unsafe code in their internal implementations. I’ve always believed it was possible to write a garbage collection library without any unsafe code, and no one I’ve asserted this to has disagreed, but there has never been a proof by construction.

So, finally, I created the safe-gc crate: a garbage collection library for Rust with zero unsafe code. No unsafe in the API. No unsafe in the implementation. It even has a forbid(unsafe_code) pragma at the top.

That said, safe-gc is not a particularly high-performance garbage collector.

Using safe-gc

To use safe-gc, first we define our GC-managed types, using Gc<T> to define references to other GC-managed objects, and implement the Trace trait to report each of those GC edges to the collector:

use safe_gc::{Collector, Gc, Trace};

// Define a GC-managed object.
struct List {
    value: u32,

    // GC-managed references to the next and previous links in the list.
    prev: Option<Gc<List>>,
    next: Option<Gc<List>>,
}

// Report GC edges to the collector.
impl Trace for List {
    fn trace(&self, collector: &mut Collector) {
        if let Some(prev) = self.prev {
            collector.edge(prev);
        }
        if let Some(next) = self.next {
            collector.edge(next);
        }
    }
}

This looks pretty similar to other GC libraries in Rust, although it could definitely benefit from an implementation of Trace for Option<T> and a derive(Trace) macro. The big difference from existing GC libraries is that Trace is safe to implement; more on this later.

Next, we create one or more Heaps to allocate our objects within. Each heap is independently garbage collected.

use safe_gc::Heap;

let mut heap = Heap::new();

And with a Heap in hand, we can allocate objects:

let a = heap.alloc(List {
    value: 42,
    prev: None,
    next: None,
});

let b = heap.alloc(List {
    value: 36,
    prev: Some(a.into()),
    next: None,
});

// Create a bunch of garbage! Who cares! It'll all be cleaned
// up eventually!
for i in 0..100 {
    let _ = heap.alloc(List {
        value: i,
        prev: None,
        next: None,
    });
}

The heap will automatically trigger garbage collections, as necessary, but we can also force a collection if we want:

// Force a garbage collection!
heap.gc()

Rather than deref’ing Gc<T> pointers directly, we must index into the Heap to access the referenced T object. This contrasts with other GC libraries and is the key that unlocks safe-gc’s lack of unsafe code, allowing the implementation to abide by Rust’s ownership and borrowing discipline.2

// Read from a GC object in the heap.
let b_value = heap[&b].value;
assert_eq!(b_value, 36);

// Write to a GC object in the heap.
heap[&b].value += 1;
assert_eq!(heap[&b].value, 37);

Finally, there are actually two types for indexing into Heaps to access GC objects:

  1. Gc<T>, which we have seen already, and
  2. Root<T>, which we have also seen in action, but which was hidden from us by type inference.

The Gc<T> type is Copy and should be used when referencing other GC-managed objects from within a GC-managed object’s type definition, or when you can prove that a garbage collection will not happen (i.e. you have a shared borrow of its heap). A Gc<T> does not root its referenced T, keeping it alive across garbage collections, and therefore Gc<T> should not be used to hold onto GC references across any operation that can trigger a garbage collection.

A Root<T>, on the other hand, does indeed root its associated T object, preventing the object from being reclaimed during garbage collection. This makes Root<T> suitable for holding references to GC-managed objects across operations that can trigger garbage collections. Root<T> is not Copy because dropping it must remove its entry from the heap’s root set. Allocation returns rooted references; all the heap.alloc(...) calls from our earlier examples returned Root<T>s.

Peeking Under the Hood

A safe_gc::Heap is more similar to an arena newtype over a Vec than an engineered heap with hierarchies of regions like Immix. Its main storage is a hash map from std::any::TypeId to uniform arenas of the associated type. This lets us ultimately use Vec as the storage for heap-allocated objects, and we don’t need to do any unsafe pointer arithmetic or worry about splitting large blocks in our free lists. In fact, the free lists only manage indices, not blocks of raw memory.

pub struct Heap {
    // A map from `type_id(T)` to `Arena<T>`. The `ArenaObject`
    // trait facilitates crossing the boundary from an untyped
    // heap to typed arenas.
    arenas: HashMap<TypeId, Box<dyn ArenaObject>>,

    // ...
}

struct Arena<T> {
    elements: FreeList<T>,

    // ...
}

enum FreeListEntry<T> {
    /// An occupied entry holding a `T`.
    Occupied(T),

    /// A free entry that is also part of a linked list
    /// pointing to the next free entry, if any.
    Free(Option<u32>),
}

struct FreeList<T> {
    // The actual backing storage for our `T`s.
    entries: Vec<FreeListEntry<T>>,

    /// The index of the first free entry in the free list.
    free: Option<u32>,

    // ...
}

To allocate a new T in the heap, we first get the T object arena out of the heap’s hash map, or create it if it doesn’t exist yet. Then, we check if the arena has capacity to allocate our new T. If it does, we push the object onto the arena and return a rooted reference. If it does not, we fall back to an out-of-line slow path where we trigger a garbage collection to ensure that we have space for the new object, and then try again.

impl Heap {
    #[inline]
    pub fn alloc<T>(&mut self, value: T) -> Root<T>
    where
        T: Trace,
    {
        let heap_id = self.id;
        let arena = self.ensure_arena::<T>();
        // Fast path for when we have available capacity for
        // allocating into.
        match arena.try_alloc(heap_id, value) {
            Ok(root) => root,
            Err(value) => self.alloc_slow(value),
        }
    }

    // Out-of-line slow path for when we need to GC to free
    // up or allocate additional space.
    #[inline(never)]
    fn alloc_slow<T>(&mut self, value: T) -> Root<T>
    where
        T: Trace,
    {
        self.gc();
        let heap_id = self.id;
        let arena = self.ensure_arena::<T>();
        arena.alloc_slow(heap_id, value)
    }
}

Arena<T> allocation bottoms out in allocating from a FreeList<T>, which will attempt to use existing capacity by popping off its internal list of empty entries when possible, or otherwise fall back to reserving additional capacity.

impl<T> FreeList<T> {
    fn try_alloc(&mut self, value: T) -> Result<u32, T> {
        if let Some(index) = self.free {
            // We have capacity. Pop the first free entry off
            // the free list and put the value in there.
            let i = usize::try_from(index).unwrap();
            let next_free = match self.entries[i] {
                FreeListEntry::Free(next_free) => next_free,
                FreeListEntry::Occupied(..) => unreachable!(),
            };
            self.free = next_free;
            self.entries[i] = FreeListEntry::Occupied(value);
            Ok(index)
        } else {
            // No capacity to hold the value; give it back.
            Err(value)
        }
    }

    fn alloc(&mut self, value: T) -> u32 {
        self.try_alloc(value).unwrap_or_else(|value| {
            // Reserve additional capacity, since we didn't have
            // space for the allocation.
            self.double_capacity();
            // After which the allocation will succeed.
            self.try_alloc(value).ok().unwrap()
        })
    }
}

Accessing objects in the heap is straightforward: look up the arena for T and index into it.

impl Heap {
    /// Get a shared borrow of the referenced `T`.
    pub fn get<T>(&self, gc: impl Into<Gc<T>>) -> &T
    where
        T: Trace,
    {
        let gc = gc.into();
        assert_eq!(self.id, gc.heap_id);
        let arena = self.arena::<T>().unwrap();
        arena.elements.get(gc.index)
    }

    /// Get an exclusive borrow of the referenced `T`.
    pub fn get_mut<T>(&mut self, gc: impl Into<Gc<T>>) -> &mut T
    where
        T: Trace,
    {
        let gc = gc.into();
        assert_eq!(self.id, gc.heap_id);
        let arena = self.arena_mut::<T>().unwrap();
        arena.elements.get_mut(gc.index)
    }
}

Before we get into how safe-gc actually performs garbage collection, we need to look at how it implements the root set. The root set is the set of things that are definitely alive: things that the application is actively using right now or planning to use in the future. The goal of the collector is to identify all objects transitively referenced by these roots, since these are the objects that can still be used in the future, and to recycle all others.

Each Arena<T> has its own RootSet<T>. For simplicity, a RootSet<T> is a wrapper around a FreeList<Gc<T>>. When we add a new root, we insert it into the FreeList, and when we drop a root, we remove it from the FreeList. This does mean that the root set can contain duplicates and is therefore not a proper set. The root set’s FreeList is additionally wrapped in an Rc<RefCell<...>> so that we can implement Clone for Root<T> (which adds another entry to the root set) without needing to explicitly pass around a Heap to hold additional references to a rooted object.

Finally, I took care to design Root<T> and RootSet<T> such that Root<T> doesn’t directly hold a Gc<T>. This allows for updating rooted GC pointers after a collection, which is necessary for moving GC algorithms like generational GC and compaction. In fact, I originally intended to implement a copying collector, which is a moving GC algorithm, for safe-gc but ran into some issues. More on those later. For now, we retain the possibility of introducing moving GC at a later date.

struct Arena<T> {
    // ...

    // Each arena has a root set.
    roots: RootSet<T>,
}

// The set of rooted `T`s in an arena.
struct RootSet<T> {
    inner: Rc<RefCell<FreeList<Gc<T>>>>,
}

impl<T: Trace> RootSet<T> {
    // Rooting a `Gc<T>` adds an entry to the root set.
    fn insert(&self, gc: Gc<T>) -> Root<T> {
        let mut inner = self.inner.borrow_mut();
        let index = inner.alloc(gc);
        Root {
            roots: self.clone(),
            index,
        }
    }

    fn remove(&self, index: u32) {
        let mut inner = self.inner.borrow_mut();
        inner.dealloc(index);
    }
}

pub struct Root<T: Trace> {
    // Each `Root<T>` holds a reference to the root set.
    roots: RootSet<T>,

    // Index of this root in the root set.
    index: u32,
}

// Dropping a `Root<T>` removes its entry from the root set.
impl<T: Trace> Drop for Root<T> {
    fn drop(&mut self) {
        self.roots.remove(self.index);
    }
}

With all that out of the way, we can finally look at the core garbage collection algorithm.

safe-gc implements simple mark-and-sweep garbage collection. We begin by resetting the mark bits for each arena, and making sure that there are enough bits for all of our allocated objects, since we keep the mark bits in an out-of-line compact bitset rather than in each object’s header word or something like that.

impl Heap {
    #[inline(never)]
    pub fn gc(&mut self) {
        // Reset/pre-allocate the mark bits.
        for (ty, arena) in &self.arenas {
            self.collector
                .mark_bits
                .entry(*ty)
                .or_default()
                .reset(arena.capacity());
        }

        // ...
    }
}

Next we begin the mark phase. This starts by iterating over each root and then setting its mark bit and enqueuing it in the mark stack by calling collector.edge(root).

impl Heap {
    #[inline(never)]
    pub fn gc(&mut self) {
        // ...

        // Mark all roots.
        for arena in self.arenas.values() {
            arena.trace_roots(&mut self.collector);
        }

        // ...
    }
}

trait ArenaObject: Any {
    fn trace_roots(&self, collector: &mut Collector);

    // ...
}

impl<T: Trace> ArenaObject for Arena<T> {
    fn trace_roots(&self, collector: &mut Collector) {
        self.roots.trace(collector);
    }

    // ...
}

impl<T: Trace> RootSet<T> {
    fn trace(&self, collector: &mut Collector) {
        let inner = self.inner.borrow();
        for (_, root) in inner.iter() {
            collector.edge(*root);
        }
    }
}

The mark phase continues by marking everything transitively reachable from those roots in a fixed-point loop. If we discover an unmarked object, we mark it and enqueue it for tracing. Whenever we see an already-marked object, we ignore it.

What is kind of unusual is that we don’t have a single mark stack. The Heap has no T type parameter, and contains many different types of objects, so the heap itself doesn’t know how to trace any particular object. However, each of the heap’s Arena<T>s holds only a single type of object, and an arena does know how to trace its objects. So we have a mark stack for each T, or equivalently, each arena. This means that our fixed-point loop has two levels: an outer loop that continues while any mark stack has work enqueued, and an inner loop to drain a particular mark stack.

impl Heap {
    #[inline(never)]
    pub fn gc(&mut self) {
        // ...

        // Mark everything transitively reachable from the roots.
        while let Some(type_id) = self
            .collector
            .next_non_empty_mark_stack()
        {
            while let Some(index) = self
                .collector
                .pop_mark_stack(type_id)
            {
                self.arenas
                    .get_mut(&type_id)
                    .unwrap()
                    .trace_one(index, &mut self.collector);
            }
        }

        // ...
    }
}

While the driver loop for marking is inside the Heap::gc method, the actual edge tracing and mark bit setting happens inside Collector and the arena which, because it has a T type parameter, can call the correct Trace implementation for each object.

trait ArenaObject: Any {
    fn trace_one(&mut self, index: u32, collector: &mut Collector);

    // ...
}

impl<T: Trace> ArenaObject for Arena<T> {
    fn trace_one(&mut self, index: u32, collector: &mut Collector) {
        self.elements.get(index).trace(collector);
    }

    // ...
}

pub struct Collector {
    heap_id: u32,
    // The mark stack for each type in the heap.
    mark_stacks: HashMap<TypeId, Vec<u32>>,
    // The mark bits for each type in the heap.
    mark_bits: HashMap<TypeId, MarkBits>,
}

impl Collector {
    pub fn edge<T: Trace>(&mut self, to: Gc<T>) {
        assert_eq!(to.heap_id, self.heap_id);

        // Get the mark bits for `T` objects.
        let ty = TypeId::of::<T>();
        let mark_bits = self.mark_bits.get_mut(&ty).unwrap();

        // Set `to`'s mark bit. If the bit was already set, we're
        // done.
        if mark_bits.set(to.index) {
            return;
        }

        // Otherwise this is the first time visiting this GC
        // object so enqueue it for further marking.
        let mark_stack = self.mark_stacks.entry(ty).or_default();
        mark_stack.push(to.index);
    }
}

Once our mark stacks are all empty, we’ve reached our fixed point, and that means we’ve finished marking all objects reachable from the root set. Now we transition to the sweep phase.

Sweeping iterates over each object in each arena. If that object’s mark bit is not set, then it is unreachable from the GC roots, i.e. it is not a member of the live set, i.e. it is garbage. We drop such objects and push their slots into their arena’s free list, making the slot available for future allocations.

After sweeping each arena we check whether the arena is still close to running out of capacity and, if so, reserve additional space for the arena. This amortizes the cost of garbage collection and avoids a scenario that could otherwise trigger a full GC on every object allocation:

  • The arena has zero available capacity.
  • The user tries to allocate, triggering a GC.
  • The GC is able to reclaim only one slot in the arena.
  • The user’s pending allocation fills the reclaimed slot.
  • Now the arena is out of capacity again, and the process repeats from the top.

By reserving additional space in the arena after sweeping, we avoid this failure mode.

We could also compact the arena and release excess space back to the global allocator if there was too much available capacity. This would additionally require a method for updating incoming edges to the compacted objects, and safe-gc does not implement compaction at this time.

impl Heap {
    #[inline(never)]
    pub fn gc(&mut self) {
        // ...

        // Sweep.
        for (ty, arena) in &mut self.arenas {
            let mark_bits = &self.collector.mark_bits[ty];
            arena.sweep(mark_bits);
        }
    }
}

trait ArenaObject: Any {
    // ...

    fn sweep(&mut self, mark_bits: &MarkBits);
}

impl<T: Trace> ArenaObject for Arena<T> {
    // ...

    fn sweep(&mut self, mark_bits: &MarkBits) {
        // Reclaim garbage slots.
        let capacity = self.elements.capacity();
        for index in 0..capacity {
            if !mark_bits.get(index) {
                self.elements.dealloc(index);
            }
        }

        // Amortize the cost of GC across allocations.
        let len = self.elements.len();
        let available = capacity - len;
        if available < capacity / 4 {
            self.elements.double_capacity();
        }
    }
}

After every arena is swept, garbage collection is complete!

Preventing Classic Footguns

Now that we know how safe-gc is implemented, we can explore a couple classic GC footguns and analyze how safe-gc either completely nullifies them or downgrades them from critical security vulnerabilities to plain old bugs.

Often an object might represent some external resource that should be cleaned up when the object is no longer in use, like an open file descriptor. This functionality is typically supported with finalizers, the GC-equivalent of C++ destructors and Rust’s Drop trait. Finalization of GC objects is usually tricky because of the risks of either accessing objects that have already been reclaimed by the collector (which is a use-after-free bug) or accidentally entrenching objects and making them live again (which leads to memory leaks). Because of these risks, Rust GC libraries often make finalization an unsafe trait and even forbid allocating types that implement Drop in their heaps.

However, safe-gc doesn’t need an unsafe finalizer trait, or even any additional finalizer trait: it can just use Drop. Drop implementations simply do not have access to a Heap, which is required to deref GC pointers, so they cannot suffer from those finalization footguns.

Next up: why isn’t Trace an unsafe trait? And what happens if you don’t root a Gc<T> and then index into a Heap with it after a garbage collection? These are actually the same question: what happens if I use a dangling Gc<T>? As mentioned at the start, if a Trace implementation fails to report all edges to the collector, the collector may believe an object is unreachable and reclaim it, and now the unreported edge is dangling. Similarly, if the user holds an unrooted Gc<T>, rather than a Root<T>, across a garbage collection then the collector might believe that the referenced object is garbage and reclaim it, leaving the unrooted reference dangling.

Indexing into a Heap with a potentially-dangling Gc<T> will result in one of three possibilities:

  1. We got “lucky” and something else happened to keep the object alive. The access succeeds as it otherwise would have and the potentially-dangling bug is hidden.

  2. The associated slot in the arena’s free list is empty and contains a FreeListEntry::Free variant. This scenario will raise a panic.

  3. A new object has since been allocated in the same arena slot. The access will succeed, but it will be to the wrong object. This is an instance of the ABA problem. We could, at the cost of some runtime overhead, turn this into a loud panic instead of silent action at a distance by adding a generation counter to our arenas.

Of course, it would be best if users always rooted GC references they held across collections and correctly implemented the Trace trait but, should they fail to do that, all three potential outcomes are 100% memory safe.3 These failures can’t lead to memory corruption or use-after-free bugs, which would be the typical results of this kind of thing with an unsafe GC implementation.

Copying Collector False Start

I initially intended to implement a copying collector rather than mark-and-sweep, but ultimately the borrowing and ownership didn’t pan out. That isn’t to say it is impossible to implement a copying collector in safe Rust, but it ended up feeling like more of a headache than it was worth. I spent several hours trying to jiggle things around to experiment with different ownership hierarchies and didn’t get anything satisfactory. When I decided to try mark-and-sweep, it only took me about half an hour to get an initial prototype working. I found this really surprising, since I had a strong intuition that a copying collector, with its separate from- and to-spaces, should play well with Rust’s ownership and borrowing.

Briefly, the algorithm works as follows:

  • We equally divide the heap into two semi-spaces.

  • At any given time in between collections, all objects live in one semi-space and the other is sitting idle.

  • We bump allocate within the active semi-space, slowly filling it up, and when the bump pointer reaches the end of the semi-space, we trigger a collection.

  • During collection, as we trace the live set, we copy objects from the old semi-space that has been active, to the other new semi-space that has been idle. At the same time, we maintain a map from the live objects’ location in the old semi-space to their location in the new semi-space. When we trace an object’s edges, we also update those edges to point to their new locations. Once tracing reaches a fixed-point, we’ve copied the whole live set to the new semi-space, it becomes the active semi-space, and the previously-active semi-space now sits idle until the next collection.

Copying collection has a number of desirable properties:

  • The algorithm is relatively simple and easy to understand.

  • Allocating new objects is fast: just bumping a pointer and checking that space isn’t exhausted yet.

  • The act of copying objects to the new semi-space compacts the heap, defeating fragmentation.

  • It also eliminates the need for a sweep phase, since the whole of the old semi-space is garbage after the live set has been moved to the new semi-space.

Copying collection’s primary disadvantage is the memory overhead it imposes: we can only ever use at most half of the heap to store objects.
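The fast-allocation point above is easy to make concrete. Here is a minimal sketch of bump allocation into a fixed-size semi-space (types simplified to one u32 word per object; all names are illustrative):

```rust
// Minimal sketch of bump allocation into a fixed-size semi-space.
struct SemiSpace {
    storage: Vec<u32>,
    next: usize, // the bump pointer
}

impl SemiSpace {
    fn with_capacity(capacity: usize) -> Self {
        SemiSpace {
            storage: vec![0; capacity],
            next: 0,
        }
    }

    // Allocation is just a bounds check and a pointer bump. `None`
    // signals that the semi-space is exhausted and a collection
    // should be triggered.
    fn bump_alloc(&mut self, value: u32) -> Option<usize> {
        if self.next == self.storage.len() {
            return None; // time to collect
        }
        let index = self.next;
        self.storage[index] = value;
        self.next += 1;
        Some(index)
    }
}
```

A real allocator would bump by the object's size in bytes rather than one fixed-width word, but the control flow is the same.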

When I think about a copying collector, I tend to imagine Lisp cons cells, as I was first introduced to this algorithm in that context by SICP. Here is what a very naive implementation of the core copying collection algorithm might look like in safe Rust:

fn copy_collect(
    roots: &mut [usize],
    from: &[Cons],
    to: &mut Vec<Cons>,
) {
    // Contains a work list of the new indices of cons cells
    // that have been copied to `to` but haven't had their
    // edges traced and updated yet.
    let mut stack = Vec::with_capacity(roots.len());

    // The map from each live object's old location, to its new one.
    let mut old_to_new = HashMap::new();

    // Copy each root to the to-space, enqueue it for tracing, and
    // update its pointer to its new index in the to-space.
    for root in roots {
        visit_edge(from, to, &mut old_to_new, &mut stack, root);
    }

    // Now do the same for everything transitively reachable from
    // the roots. Note that `Cons` must be `Copy`: we copy the cell
    // out, trace and update its edges, then write it back, which
    // keeps the borrow checker happy.
    while let Some(index) = stack.pop() {
        let mut cons = to[index];
        if let Some(car) = &mut cons.car {
            visit_edge(from, to, &mut old_to_new, &mut stack, car);
        }
        if let Some(cdr) = &mut cons.cdr {
            visit_edge(from, to, &mut old_to_new, &mut stack, cdr);
        }
        to[index] = cons;
    }
}

// Visit one edge. If the edge's referent has already been copied
// to the to-space, just update the edge's pointer so that it points
// to the new location. If it hasn't been copied yet, additionally
// copy it over and enqueue it in the stack for future tracing.
fn visit_edge(
    from: &[Cons],
    to: &mut Vec<Cons>,
    old_to_new: &mut HashMap<usize, usize>,
    stack: &mut Vec<usize>,
    edge: &mut usize,
) {
    let new_location = *old_to_new
        .entry(*edge)
        .or_insert_with(|| {
            let new = to.len();
            // Copy the object over.
            to.push(from[*edge]);
            // Enqueue it for tracing.
            stack.push(new);
            new
        });
    *edge = new_location;
}

As written, this works and is 100% safe!4 So where do things start to break down? We’ll get there, but first…

The old-to-new-location map needn’t be an additional, separate allocation. We don’t need that hash map. Instead, we can reuse the from-space’s storage and write the address of each copied object’s new location inline into its old location. These are referred to as forwarding pointers. This is a super standard optimization for copying collection; so much so that it’s rare to see a copying collector without it.

Let’s implement inline forwarding pointers for our safe copying collector. Because we are mutating the from-space to write the forwarding pointers, we will need to change it from a shared borrow into an exclusive borrow. Additionally, to differentiate between forwarding pointers and actual cons cells, our semi-spaces must become slices of an enum rather than slices of cons cells directly.

enum SemiSpaceEntry {
    // The cons cell. If we see this during tracing, that means
    // we haven't copied it over to the to-space yet.
    Occupied(Cons),
    // This cons cell has already been moved, here is its new
    // location.
    Forwarded(usize)
}

fn copy_collect(
    roots: &mut [usize],
    from: &mut [SemiSpaceEntry],
    to: &mut Vec<SemiSpaceEntry>,
) {
    // Same as before, but without `old_to_new`...
}

fn visit_edge(
    from: &mut [SemiSpaceEntry],
    to: &mut Vec<SemiSpaceEntry>,
    stack: &mut Vec<usize>,
    edge: &mut usize,
) {
    let new = match &from[*edge] {
        SemiSpaceEntry::Forwarded(new) => *new,
        SemiSpaceEntry::Occupied(_) => {
            let new = to.len();
            // !!! Write the forwarding pointer, moving the cons cell
            // out of the from-space in the process. !!!
            let old = std::mem::replace(
                &mut from[*edge],
                SemiSpaceEntry::Forwarded(new),
            );
            // Copy the object over.
            to.push(old);
            // Enqueue it for tracing.
            stack.push(new);
            new
        }
    };
    *edge = new;
}

Again, this copying collector with forwarding pointers works and is still 100% safe code.

Things break down when we move away from a homogeneously-typed heap that only contains cons cells towards a heterogeneously-typed heap that can contain any type of GC object.

Recall how safe_gc::Heap organizes its underlying storage with a hash map keyed by type id to get the Arena<T> storage for that associated type:

pub struct Heap {
    // A map from `type_id(T)` to `Arena<T>`.
    arenas: HashMap<TypeId, Box<dyn ArenaObject>>,

    // ...
}

My idea was that a whole Heap would be a semi-space, and if it was the active semi-space, the heap would additionally have an owning handle to the idle semi-space:

pub struct Heap {
    // ...

    idle_semi_space: Option<Box<Heap>>,
}

Given that, we would collect the heap by essentially (papering over some details) calling copy_collect on each of its internal arenas:

impl Heap {
    pub fn gc(&mut self) {
        let mut to_heap = self.idle_semi_space.take().unwrap();
        for (_ty, from_arena) in &mut self.arenas {
            copy_collect(from_arena, &mut to_heap);
        }
    }
}

Note that we pass the whole to_heap into copy_collect, not from_arena’s corresponding Arena<T> in the to-space, because there can be cross-type edges. A Cat object can have a reference to a Salami object as a little treat, and we need access to the whole to-space, not just its Arena<Cat>, to copy that Salami over when tracing Cats.

But here’s where things break down: we also need mutable access to the whole from-space when tracing Arena&lt;Cat&gt;s, because we need to write the forwarding pointer into the from-space’s Arena&lt;Salami&gt; for the Salami’s new location in the to-space. But we can’t have mutable access to the whole from-space, because we’ve already projected into one of its arenas. Yeah, I guess we could do something like take the Arena&lt;Cat&gt; out of the from-space, and then pass both the Arena&lt;Cat&gt; and the rest of the from-space into copy_collect. But then what do we do for Cat-to-Cat edges? Have some kind of check to test whether we need to follow a given edge into the from-space Heap or into the Arena we are currently tracing?
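The "take the arena out" workaround looks roughly like this when reduced to plain standard-library types. Everything here is illustrative, not safe-gc's actual code: temporarily remove one "arena" from the heap map so we can mutate it and the rest of the from-space at the same time, then put it back.

```rust
use std::collections::HashMap;

// Reduced illustration: mutate one arena and the rest of the heap
// simultaneously by moving the arena out of the map for the duration.
fn with_arena_taken<F>(
    heap: &mut HashMap<&'static str, Vec<u32>>,
    key: &'static str,
    f: F,
) where
    F: FnOnce(&mut Vec<u32>, &mut HashMap<&'static str, Vec<u32>>),
{
    let mut arena = heap.remove(key).expect("arena exists");
    f(&mut arena, heap);
    heap.insert(key, arena);
}
```

The catch is exactly the one described above: every edge traced inside `f` now has to first check whether it points into `arena` or into the rest of `heap`, and that bookkeeping spreads through the whole tracing loop.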

Like I said, I don’t think it is impossible to overcome these hurdles, the question is: is overcoming them worth it? Everything I could think up got pretty inelegant pretty quickly and/or would have laughably poor performance.5 When compared with how easy it was to implement mark-and-sweep, I just don’t think a 100% unsafe-free copying collector that supports arbitrary, heterogeneous types is worth the headache.

Why safe-gc?

safe-gc is certainly a point in the design space of garbage-collection libraries in Rust. One could even argue it is an interesting — and maybe even useful? — point in the design space!

Also, it was fun!

At the very least, you don’t have to wonder about the correctness of any unsafe code in there, because there isn’t any. As long as the Rust language and its standard library are sound, safe-gc is too.

Conclusion

The safe-gc crate implements garbage-collection-as-library for Rust with zero unsafe code. It was fun to implement!

Thanks to Trevor Elliott and Jamey Sharp for brainstorming with me and thanks to Manish Goregaokar and again to Trevor Elliott for reading early drafts of this blog post.


  1. In the garbage collection literature, we think about the heap of GC-managed objects as a graph where each object is a node in that graph and the graph’s edges are the references from one object to another. 

  2. The one exception to this statement that I’m aware of is the gc-arena crate, although it is only half an exception. Similar to safe-gc, it also requires threading through a heap context (that it calls a Mutation) to access GC objects, although only for allocation and mutable access to GC objects. Getting shared, immutable borrows of GC objects doesn’t require threading in a heap context. 

  3. I do have sympathy for users writing these bugs! I’ve written them myself. Remembering to root GC references across operations that can trigger collections isn’t always easy! It can be difficult to determine which things can trigger collections or whether some reference you’re holding has a pointer to another structure which internally holds onto a GC reference. The SpiderMonkey GC folks had to resort to implementing a GCC static analysis plugin to find unrooted references held across GC-triggering function calls. This analysis runs in Firefox’s CI because even the seasoned systems engineers who work on SpiderMonkey and Firefox routinely make these mistakes and the resulting bugs are so disastrous! 

  4. Well, this collector works in principle; I haven’t actually compiled it. I wrote it inside this text file, so it probably has some typos and minor compilation errors or whatever. But the point stands: you could use this collector for your next toy lisp. 

  5. I’m not claiming that safe-gc has incredible performance, I haven’t benchmarked anything and it almost assuredly does not. But its performance shouldn’t be laughably bad, and I’d like to think that with a bit of tuning it would be competitive with just about any other unsafe-free Rust implementation. 

The Rust Programming Language Blogcrates.io: API status code changes

Cargo and crates.io were developed in the rush leading up to the Rust 1.0 release to fill the need for a tool to manage dependencies and a registry that people could use to share code. This rapid work resulted in these tools being connected with an API that initially didn't return the correct HTTP response status codes. After the Rust 1.0 release, Rust's stability guarantees around backward compatibility made this non-trivial to fix, as we wanted older versions of Cargo to continue working with the current crates.io API.

When an old version of Cargo receives a non-"200 OK" response, it displays the raw JSON body like this:

error: failed to get a 200 OK response, got 400
headers:
    HTTP/1.1 400 Bad Request
    Content-Type: application/json; charset=utf-8
    Content-Length: 171

body:
{"errors":[{"detail":"missing or empty metadata fields: description, license. Please see https://doc.rust-lang.org/cargo/reference/manifest.html for how to upload metadata"}]}

This was improved in pull request #6771, which was released in Cargo 1.34 (mid-2019). Since then, Cargo has supported receiving 4xx and 5xx status codes too and extracts the error message from the JSON response, if available.

On 2024-03-04 we will switch the API from returning "200 OK" status codes for errors to the new 4xx/5xx behavior. Cargo 1.33 and below will keep working after this change, but will show the raw JSON body instead of a nicely formatted error message. We feel confident that this degraded error message display will affect very few users: according to the crates.io request logs, only a small number of requests are made by Cargo 1.33 and older versions.
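A third-party client that wants to be robust across this transition can treat both shapes uniformly: an error is signaled either by a 4xx/5xx status or by an `errors` array in a "200 OK" body. Here is a rough sketch of such a hypothetical helper (not Cargo's actual code), using naive string matching to keep the example dependency-free; a real client would use a JSON parser.

```rust
// Hypothetical helper: extract the error detail from a crates.io API
// response, handling both the old "200 OK with an `errors` body"
// behavior and the new 4xx/5xx behavior. Naive string matching is
// used here only to avoid pulling in a JSON parser.
fn extract_error_detail(status: u16, body: &str) -> Option<String> {
    if status >= 400 || body.contains("\"errors\"") {
        let detail_key = "\"detail\":\"";
        let start = body.find(detail_key)? + detail_key.len();
        let end = start + body[start..].find('"')?;
        Some(body[start..end].to_string())
    } else {
        None
    }
}
```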

This is the list of API endpoints that will be affected by this change:

  • GET /api/v1/crates
  • PUT /api/v1/crates/new
  • PUT /api/v1/crates/:crate/:version/yank
  • DELETE /api/v1/crates/:crate/:version/unyank
  • GET /api/v1/crates/:crate/owners
  • PUT /api/v1/crates/:crate/owners
  • DELETE /api/v1/crates/:crate/owners

All other endpoints have already been using regular HTTP status codes for some time.

If you are still using Cargo 1.33 or older, we recommend upgrading to a newer version to get the improved error messages and all the other nice things that the Cargo team has built since then.

Mozilla Localization (L10N)L10n Report: February 2024 Edition

Please note some of the information provided in this report may be subject to change as we are sometimes sharing information about projects that are still in early stages and are not final yet. 

New content and projects

What’s new or coming up in Firefox desktop

While the amount of content has been relatively small over the last few months in Firefox, there have been some UI changes and updates to privacy setting related text such as form autofill, Cookie Banner Blocker, passwords (about:logins), and cookie and site data*. One change happening here (and across all Mozilla products) is the move away from using the term “login” to describe the credentials for accessing websites and instead use “password(s).”

In addition, while the number of strings is low, Firefox’s PDF viewer will soon have the ability to highlight content. You can test this feature now in Nightly.

Most of these strings and translations can be previewed by checking a Nightly build. If you’re new to localizing Firefox or if you missed our deep dive, please check out our blog post from July to learn more about the Firefox release schedule.

*Recently in our L10N community matrix channel, someone from our community asked how the new strings for clearing browsing history and data (see screenshot below) from Cookie and Site Data could be shown in Nightly.

Pontoon screenshot showing the strings for clearing browsing history and data from Cookie and Site Data.

In order to show the strings in Nightly, the privacy.sanitize.useOldClearHistoryDialog preference needs to be set to false. To set the preference, type about:config in your URL bar and press enter. A page may appear warning you to proceed with caution; click the button to continue. On the page that follows, paste privacy.sanitize.useOldClearHistoryDialog into the search field, then click the toggle button to change the value to false.

You can then trigger the new dialog by clicking “Clear Data…” from the Cookies and Site Data settings or “Clear History…” from the History settings. (You may need to quit Firefox and open it again for the change to take effect.)

In case of doubts about managing about:config, you can consult the Configuration Editor guide on SUMO.

What’s new or coming up in mobile

Much like desktop, mobile land has been pretty calm recently.

Having said that, we would like to call out the new Translation feature that is now available to test on the latest Firefox for Android v124 Nightly builds (this is possible only through the secret settings at the moment). It’s a built-in full page translation feature that allows you to seamlessly browse the web in your preferred language. As you navigate the site, Firefox continuously translates new content.

Check your Pontoon notifications for instructions on how to test it out. Note that the feature is not available on iOS at the moment.

In the past couple of months you may have also noticed strings mentioning a new shopping feature called “Review Checker” (that we mentioned for desktop in our November edition). The feature is still a bit tricky to test on Android, but there are instructions you can follow – these can also be found in your Pontoon notification archive.

For testing on iOS, you just need to have the latest Beta version installed and navigate to the product pages on the US sites of amazon.com, bestbuy.com, and walmart.com. A logo with a notification will appear in the URL bar; use it to launch and test the feature.

Finally, another notable change that has been called out under the Firefox desktop section above: we are moving away from using the term “login” to describe the credentials for accessing websites and instead use “password(s).”

What’s new or coming up in Foundation projects

New languages have been added to Common Voice in 2023: Tibetan, Chichewa, Ossetian, Emakhuwa, Laz, Pular Guinée, Sindhi. Welcome!

What’s new or coming up in Pontoon

Improved support for mobile devices

The Pontoon translation workspace is now responsive, which means you can finally use Pontoon on your mobile device to translate and review strings! We developed a single-column layout for mobile phones and a 2-column layout for tablets.

Screenshot of Pontoon UI on a smartphone running Firefox for Android


2024 Pontoon survey

Thanks again to everyone who has participated in the 2024 Pontoon survey. The 3 top-voted features we commit to implement are:

  1. Add ability to edit Translation Memory entries (611 votes).
  2. Improve performance of Pontoon translation workspace and dashboards (603 votes).
  3. Add ability to propose new Terminology entries (595 votes).

Friends of the Lion

We started a series called “Localizer Spotlight” and have published two already. Do you know someone who should be featured there? Let us know here!

Also, is there someone in your l10n community who’s been doing a great job and should appear in this section? Contact us and we’ll make sure they get a shout-out!

Useful Links

Questions? Want to get involved?

If you want to get involved, or have any question about l10n, reach out to:

Did you enjoy reading this report? Let us know how we can improve it.

Frederik BraunHow Firefox gives special permissions to some domains

Today, I found someone tweeting about a neat security bug in Chrome, that bypasses how Chrome disallows extensions from injecting JavaScript into special domains like chrome.google.com. The intention of this block is that browsers give special permissions to some internal pages that allow troubleshooting, resetting the browser, installing …

Hacks.Mozilla.OrgAnnouncing Interop 2024

The Interop Project has become one of the key ways that browser vendors come together to improve the web platform. By working to identify and improve key areas where differences between browser engines are impacting users and web developers, Interop is a critical tool in ensuring the long-term health of the open web.

The web platform is built on interoperability based on common standards. This offers users a degree of choice and control that sets the web apart from proprietary platforms defined by a single implementation. A commitment to ensuring that the web remains open and interoperable forms a fundamental part of Mozilla’s manifesto and web vision, and is why we’re so committed to shipping Firefox with our own Gecko engine.

However, interoperability requires care and attention to maintain. When implementations ship with differences from the standard and from each other, this creates a pain point for web authors: they have to choose between avoiding the problematic feature entirely and coding to specific implementation quirks. Over time, if enough authors produce implementation-specific content, then interoperability is lost, and along with it user agency.

This is the problem that the Interop Project is designed to address. By bringing browser vendors together to focus on interoperability, the project identifies areas where interoperability issues are causing problems, or may do so in the near future. Tracking progress on those issues with a public metric provides accountability to the broader web community on addressing the problems.

The project works by identifying a set of high-priority focus areas: parts of the web platform where everyone agrees that making interoperability improvements will be of high value. These can be existing features where we know browsers have slightly different behaviors that are causing problems for authors, or they can be new features which web developer feedback shows is in high demand and which we want to launch across multiple implementations with high interoperability from the start. For each focus area a set of web-platform-tests is selected to cover that area, and the score is computed from the pass rate of these tests.

Interop 2023

The Interop 2023 project covered high-profile features like the new :has() selector and web-codecs, as well as areas of historically poor interoperability, such as pointer events.

The results of the project speak for themselves: every browser ended the year with scores in excess of 97% for the prerelease versions of their browsers. Moreover, the overall Interoperability score — that is the fraction of focus area tests that pass in all participating browser engines — increased from 59% at the start of the year to 95% now. This result represents a huge improvement in the consistency and reliability of the web platform. For users this will result in a more seamless experience, with sites behaving reliably in whichever browser they prefer.

For the :has() selector — which we know from author feedback has been one of the most in-demand CSS features for a long time — every implementation is now passing 100% of the web-platform-tests selected for the focus area. Launching a major new platform feature with this level of interoperability demonstrates the power of the Interop project to progress the platform without compromising on implementation diversity, developer experience, or user choice.

As well as focus areas, the Interop project also has “investigations”. These are areas where we know that we need to improve interoperability, but aren’t at the stage of having specific tests which can be used to measure that improvement. In 2023 we had two investigations. The first was for accessibility, which covered writing many more tests for ARIA computed role and accessible name, and ensuring they could be run in different browsers. The second was for mobile testing, which has resulted in both Mobile Firefox and Chrome for Android having their initial results in wpt.fyi.

Interop 2024

Following the success of Interop 2023, we are pleased to confirm that the project will continue in 2024 with a new selection of focus areas, representing areas of the web platform where we think we can have the biggest positive impact on users and web developers.

New Focus Areas

New focus areas for 2024 include, among other things:

  • Popover API – This provides a declarative mechanism to create content that always renders in the topmost-layer, so that it overlays other web page content. This can be useful for building features like tooltips and notifications. Support for popover was the #1 author request in the recent State of HTML survey.
  • CSS Nesting – This is a feature that’s already shipping, which allows writing more compact and readable CSS files, without the need for external tooling such as preprocessors. However different browsers shipped slightly different behavior based on different revisions of the spec, and Interop will help ensure that everyone aligns on a single, reliable, syntax for this popular feature.
  • Accessibility – Ensuring that the web is accessible to all users is a critical part of Mozilla’s manifesto. Our ability to include Accessibility testing in Interop 2024 is a direct result of the success of the Interop 2023 Accessibility Investigation in increasing the test coverage of key accessibility features.

The full list of focus areas is available in the project README.

Carryover

In addition to the new focus areas, we will carry over some of the 2023 focus areas where there’s still more work to be done. Of particular interest is the Layout focus area, which will combine the previous Flexbox, Grid and Subgrid focus area into one area covering all the most important layout primitives for the modern web. On top of that the Custom Properties, URL and Mouse and Pointer Events focus areas will be carried over. These represent cases where, even though we’ve already seen large improvements in Interoperability, we believe that users and web authors will benefit from even greater convergence between implementations.

Investigations

As well as focus areas, Interop 2024 will also feature a new investigation into improving the integration of WebAssembly testing into web-platform-tests. This will open up the possibility of including WASM features in future Interop projects. In addition we will extend the Accessibility and Mobile Testing investigations, as there is more work to be done to make those aspects of the platform fully testable across different implementations.

Partner Announcements

The post Announcing Interop 2024 appeared first on Mozilla Hacks - the Web developer blog.

Mike HommeyWhen undefined behavior causes a nonsensical error (in Rust)

This all started when I looked at whether it would be possible to build Firefox with Pointer Authentication Code for arm64 macOS. In case you're curious, the quick answer is no, because Apple essentially hasn't upstreamed the final ABI for it yet, only Xcode clang can produce it, and obviously Rust can't.

Anyways, the Rust compiler did recently add the arm64e-apple-darwin target (which, as mentioned above, turns out to be useless for now), albeit without a prebuilt libstd (so, requiring the use of the -Zbuild-std flag). And by recently, I mean in 1.76.0 (in beta as of writing).

So, after tricking the Firefox build system into accepting to build for that target, I ended up with a Firefox build that... crashed on startup, saying:

Hit MOZ_CRASH(unsafe precondition(s) violated: slice::from_raw_parts requires the pointer to be aligned and non-null, and the total size of the slice not to exceed isize::MAX) at /builds/worker/fetches/rustc/lib/rustlib/src/rust/library/core/src/panicking.rs:155"

(MOZ_CRASH is what we get on explicit crashes, like MOZ_ASSERT in C++ code, or assert!() in Rust)

The caller of the crashing code was NS_InvokeByIndex, so at this point, I was thinking XPConnect might need some adjustment for arm64e.

But that was a build I had produced through the Mozilla try server. So I did a local non-optimized debug build to see what's up, which crashed with a different message:

Hit MOZ_CRASH(slice::get_unchecked requires that the index is within the slice) at /Users/glandium/.rustup/toolchains/nightly-aarch64-apple-darwin/lib/rustlib/src/rust/library/core/src/slice/index.rs:228

This comes from this code in rust libstd:

    unsafe fn get_unchecked(self, slice: *const [T]) -> *const T {
        debug_assert_nounwind!(
            self < slice.len(),
            "slice::get_unchecked requires that the index is within the slice",
        );
        // SAFETY: the caller guarantees that `slice` is not dangling, so it
        // cannot be longer than `isize::MAX`. They also guarantee that
        // `self` is in bounds of `slice` so `self` cannot overflow an `isize`,
        // so the call to `add` is safe.
        unsafe {
            crate::hint::assert_unchecked(self < slice.len());
            slice.as_ptr().add(self)
        }
    }

(I'm pasting the whole thing because it will be important later)

We're hitting the debug_assert_nounwind.

The calling code looks like the following:

let end = atoms.get_unchecked(STATIC_ATOM_COUNT) as *const _;

And what the debug_assert_nounwind means is that STATIC_ATOM_COUNT is greater than or equal to the slice length (spoiler alert: it is equal).

At that point, I started to suspect this might be a more general issue with the new Rust version, rather than something limited to arm64e. And I was kind of right? Mozilla automation did show crashes on all platforms when building with Rust beta (currently 1.76.0). But that was a different, and nonsensical, crash:

Hit MOZ_CRASH(attempt to add with overflow) at servo/components/style/gecko_string_cache/mod.rs:77

But this time, it was in the same vicinity as the crash I was getting locally.

Since this was talking about an overflowing addition, I wrapped both terms in dbg!() to see the numbers and... the overflow disappeared but now I was getting a plain crash:

application crashed [@ <usize as core::slice::index::SliceIndex<[T]>>::get_unchecked]

(still from the same call to get_unchecked, at least)

The problem was fixed by essentially removing the entire code that was using get_unchecked. And they all lived happily ever after (めでたしめでたし).

But this was too weird to leave it at that.

So what's going on?

Well, the first is that despite there being a debug_assert, debug builds don't complain about the out-of-bounds use of get_unchecked. It only happens when using -Zbuild-std. I'm not sure whether that's intended, but I opened an issue about it to find out.

Second, in the code I pasted from get_unchecked, the hint::assert_unchecked is new in 1.76.0 (well, it was intrinsics::assume in 1.76.0 and became hint::assert_unchecked in 1.77.0, but it wasn't there before). This is why our broken code didn't cause actual problems until now.
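For context, std::hint::assert_unchecked tells the optimizer it may assume the condition holds; asserting a condition that is actually false, as the out-of-bounds get_unchecked call does, is undefined behavior, and everything downstream of it may be miscompiled. Here is a small illustrative sketch of the *sound* usage pattern (not the libstd code):

```rust
// The optimizer may use the asserted condition to, e.g., elide
// bounds checks. Asserting something false is undefined behavior,
// which is exactly what the broken get_unchecked call ran into.
fn first_half(slice: &[u32], index: usize) -> Option<u32> {
    if index < slice.len() / 2 {
        // Sound: on this path the asserted condition genuinely holds.
        unsafe { std::hint::assert_unchecked(index < slice.len()) };
        Some(slice[index])
    } else {
        None
    }
}
```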

What about the addition overflow?

Well, this is where undefined behavior leads the optimizer to do what the user might perceive as weird things, but they actually make sense (as usual with these things involving undefined behavior). Let's start with a standalone version of the original code, simplifying the types used originally:

#![allow(non_upper_case_globals, non_snake_case, dead_code)]

#[inline]
fn static_atoms() -> &'static [[u32; 3]; STATIC_ATOM_COUNT] {
    unsafe {
        let addr = &gGkAtoms as *const _ as usize + kGkAtomsArrayOffset as usize;
        &*(addr as *const _)
    }
}

#[inline]
fn valid_static_atom_addr(addr: usize) -> bool {
    unsafe {
        let atoms = static_atoms();
        let start = atoms.as_ptr();
        let end = atoms.get_unchecked(STATIC_ATOM_COUNT) as *const _;
        let in_range = addr >= start as usize && addr < end as usize;
        let aligned = addr % 4 == 0;
        in_range && aligned
    }
}

fn main() {
    println!("{:?}", valid_static_atom_addr(0));
}

Stick this code in a newly created crate (with e.g. cargo new testcase), and run it:

$ cargo +nightly run -q
false

Nothing obviously bad happened. So what went wrong in Firefox? In my first local attempt, I had -Zbuild-std, so let's try that:

$ cargo +nightly run -q -Zbuild-std --target=x86_64-unknown-linux-gnu
thread 'main' panicked at /home/glandium/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/slice/index.rs:228:9:
slice::get_unchecked requires that the index is within the slice
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
thread caused non-unwinding panic. aborting.

There we go, we hit that get_unchecked error. But what went bad in Firefox if the reduced testcase doesn't crash without -Zbuild-std? Well, Firefox is always built with optimizations on by default, even for debug builds.

$ RUSTFLAGS=-O cargo +nightly run -q
thread 'main' panicked at src/main.rs:10:20:
attempt to add with overflow
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

Interestingly, though, changing the addition to

        let addr = dbg!(&gGkAtoms as *const _ as usize) + dbg!(kGkAtomsArrayOffset as usize);

doesn't "fix" it like it did with Firefox, but it shows:

[src/main.rs:10:20] &gGkAtoms as *const _ as usize = 94400145014784
[src/main.rs:10:59] kGkAtomsArrayOffset as usize = 61744
thread 'main' panicked at src/main.rs:10:20:
attempt to add with overflow
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

which is even funnier, because you can see that adding those two numbers is definitely not causing an overflow.

Let's take a look at what LLVM is doing with this code across optimization passes, with the following command (on the initial code without dbg!(), and with a #[inline(never)] on valid_static_atom_addr):

RUSTFLAGS="-C debuginfo=0 -O -Cllvm-args=-print-changed=quiet" cargo +nightly run -q 

Here is what's most relevant to us. First, what the valid_static_atom_addr function looks like after inlining as_ptr into it:

*** IR Dump After InlinerPass on (_ZN8testcase22valid_static_atom_addr17h778b64d644106c67E) ***
; Function Attrs: noinline nonlazybind uwtable
define internal fastcc noundef zeroext i1 @_ZN8testcase22valid_static_atom_addr17h778b64d644106c67E(i64 noundef %0) unnamed_addr #3 {
  %2 = call fastcc noundef align 4 dereferenceable(31212) ptr @_ZN8testcase12static_atoms17hde3e2dda1d3edc34E()
  call void @llvm.experimental.noalias.scope.decl(metadata !4)
  %3 = call fastcc noundef align 4 dereferenceable(12) ptr @"_ZN4core5slice29_$LT$impl$u20$$u5b$T$u5d$$GT$13get_unchecked17he5e8081ea9f9099dE"(ptr noalias noundef nonnull readonly align 4 %2, i64 noundef 2601, i64 noundef 2601)
  %4 = icmp eq ptr %2, null
  ret i1 %4 
}

At this point, we've already done some constant propagation, and we can see the call to get_unchecked is done with constants.

What comes next, after inlining both static_atoms and get_unchecked:

*** IR Dump After InlinerPass on (_ZN8testcase22valid_static_atom_addr17h778b64d644106c67E) ***
; Function Attrs: noinline nonlazybind uwtable
define internal fastcc noundef zeroext i1 @_ZN8testcase22valid_static_atom_addr17h778b64d644106c67E(i64 noundef %0) unnamed_addr #2 {
  %2 = call { i64, i1 } @llvm.uadd.with.overflow.i64(i64 ptrtoint (ptr @_ZN8testcase8gGkAtoms17h338a289876067f43E to i64), i64 61744)
  %3 = extractvalue { i64, i1 } %2, 1             
  br i1 %3, label %4, label %5, !prof !4

4:                                                ; preds = %1
  call void @_ZN4core9panicking5panic17hae453b53e597714dE(ptr noalias noundef nonnull readonly align 1 @str.0, i64 noundef 28, ptr noalias noundef nonnull readonly align 8 dereferenceable(24) @2) #9                      
  unreachable

5:                                                ; preds = %1
  %6 = extractvalue { i64, i1 } %2, 0
  %7 = inttoptr i64 %6 to ptr
  call void @llvm.experimental.noalias.scope.decl(metadata !5)
  unreachable

8:                                                ; No predecessors!
  %9 = icmp eq ptr %7, null
  ret i1 %9
}

The first basic block has two exits: 4 and 5, depending on how the add with overflow performed. Both of these basic blocks finish in... unreachable. The first one because it's the panic case for the overflow, and the second one because both values passed to get_unchecked are constants and equal, a case the compiler has been told (via hint::assert_unchecked) cannot happen. Thus, once get_unchecked is inlined, what's left is unreachable code. And because we're not rebuilding libstd, the debug_assert is not there before the unreachable annotation. Finally, the last basic block is now an orphan.

Imagine you're an optimizer, and you want to optimize this code considering all its annotations. You start by removing the orphan basic block. Then you see that basic block 5 doesn't do anything and has no side effects, so you remove it too, which means the branch leading to it can't happen. Basic block 4? There's a function call, so it has to stay, and so does the first basic block.

Guess what the Control-Flow Graph pass did? Just that:

*** IR Dump After SimplifyCFGPass on _ZN8testcase22valid_static_atom_addr17h778b64d644106c67E ***
; Function Attrs: noinline nonlazybind uwtable
define internal fastcc noundef zeroext i1 @_ZN8testcase22valid_static_atom_addr17h778b64d644106c67E(i64 noundef %0) unnamed_addr #2 {
  %2 = call { i64, i1 } @llvm.uadd.with.overflow.i64(i64 ptrtoint (ptr @_ZN8testcase8gGkAtoms17h338a289876067f43E to i64), i64 61744)
  %3 = extractvalue { i64, i1 } %2, 1
  call void @llvm.assume(i1 %3)
  call void @_ZN4core9panicking5panic17hae453b53e597714dE(ptr noalias noundef nonnull readonly align 1 @str.0, i64 noundef 28, ptr noalias noundef nonnull readonly align 8 dereferenceable(24) @2) #9
  unreachable
}

Now, there's no point doing the addition at all, since we're not even looking at its result:

*** IR Dump After InstCombinePass on _ZN8testcase22valid_static_atom_addr17h778b64d644106c67E ***
; Function Attrs: noinline nonlazybind uwtable
define internal fastcc noundef zeroext i1 @_ZN8testcase22valid_static_atom_addr17h778b64d644106c67E(i64 noundef %0) unnamed_addr #2 {
  call void @llvm.assume(i1 icmp uge (i64 ptrtoint (ptr @_ZN8testcase8gGkAtoms17h338a289876067f43E to i64), i64 -61744))
  call void @_ZN4core9panicking5panic17hae453b53e597714dE(ptr noalias noundef nonnull readonly align 1 @str.0, i64 noundef 28, ptr noalias noundef nonnull readonly align 8 dereferenceable(24) @2) #9
  unreachable
}

And this is how a hint that undefined behavior can't happen transformed get_unchecked(STATIC_ATOM_COUNT) into an addition overflow that never happened.
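To see the mechanism in isolation, here is a minimal sketch (a hypothetical function, not the Firefox code) of how an unreachability hint licenses the optimizer to delete code; std::hint::unreachable_unchecked is the stable cousin of the assert_unchecked hint used inside get_unchecked:

```rust
use std::hint::unreachable_unchecked;

// The unsafe hint promises the compiler that `n >= TABLE.len()` can never
// happen, so the optimizer may delete the else branch and the bounds
// check entirely. If the promise is ever broken, that is undefined
// behavior, and the function can be folded into anything at all.
fn pick(n: usize) -> u32 {
    const TABLE: [u32; 4] = [10, 20, 30, 40];
    if n < TABLE.len() {
        TABLE[n]
    } else {
        // SAFETY: callers guarantee n < TABLE.len().
        unsafe { unreachable_unchecked() }
    }
}

fn main() {
    // Fine for in-range indices; `pick(7)` would be undefined behavior.
    println!("{}", pick(2));
}
```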

Obviously, this all doesn't happen with -Zbuild-std, because in that case the get_unchecked branch has a panic call that is still relevant.

$ RUSTFLAGS=-O cargo +nightly run -q -Zbuild-std --target=x86_64-unknown-linux-gnu
thread 'main' panicked at /home/glandium/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/slice/index.rs:228:9:
slice::get_unchecked requires that the index is within the slice
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
thread caused non-unwinding panic. aborting.

What about non-debug builds?

$ cargo +nightly run --release -q
Illegal instruction

In those builds, because there is no call to display a panic, the entire function ends up unreachable:

define internal fastcc noundef zeroext i1 @_ZN8testcase22valid_static_atom_addr17h9d1fc9abb5e1cc3aE(i64 noundef %0) unnamed_addr #4 {
  unreachable
} 

So thanks to the magic of hints and compiler optimization, we have code that invokes undefined behavior that

  • crashes when built with cargo build --release
  • works when built with cargo build
  • says there's an addition overflow when built with RUSTFLAGS=-O cargo build.

And none of those give a hint as to what the real problem is.
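For contrast, a defensive variant of the check (hypothetical names, not the original code) that uses checked_add and derives the one-past-the-end address from the slice length, instead of reaching past it with get_unchecked, would simply return false rather than invoke undefined behavior:

```rust
use std::mem::size_of;

// Sketch of an overflow- and bounds-safe address check standing in for
// valid_static_atom_addr: checked_add reports overflow instead of
// wrapping, and the end address is computed from the slice length
// without dereferencing or indexing out of bounds.
fn valid_atom_addr(base: usize, offset: usize, atoms: &[u32]) -> bool {
    let Some(addr) = base.checked_add(offset) else {
        return false; // overflow: cannot be a valid address
    };
    let start = atoms.as_ptr() as usize;
    let end = start + atoms.len() * size_of::<u32>();
    addr >= start && addr < end && addr % 4 == 0
}

fn main() {
    let atoms = [0u32; 8];
    let start = atoms.as_ptr() as usize;
    println!("{}", valid_atom_addr(start, 4, &atoms)); // inside the array
    println!("{}", valid_atom_addr(start, usize::MAX, &atoms)); // overflow
}
```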

The Mozilla BlogMozilla’s Biggest AI Moments

Just over a year ago Mozilla launched two big investments in AI: a $35 million investment in a responsible tech fund, Mozilla Ventures, and a $30 million investment in an R&D lab developing trustworthy AI, Mozilla.ai. Since those initial investments, Mozilla has accelerated our efforts to build and deploy AI that adheres to our mission of 25 years: putting people first, while being truly trustworthy and open.

While AI was the story of 2023, the emphasis on AI is not going anywhere. Today’s complex ethical tech landscape is shaped by the rise of AI and its profound impact on society, just as the browser battles shaped the tech landscape of the 90s. Mozilla is focused on using philanthropy, community and collective power to help create a new status quo where public good instead of profit defines the next wave of AI. The many possibilities of AI, both wondrous and harmful, cannot be ignored, so Mozilla has been doing our best to shape the future of AI to be responsible, trustworthy, inclusive and centered around human dignity.

Mozilla is uniquely set up to do this because of our structure as both a nonprofit research hub and a mission-driven, for-profit product organization. Mozilla’s distinct advantage in this moment is the ability to bring together advocacy and product, connecting cause to code and tackling issues in AI from every angle.

This is work we won’t stop doing.

Explore Mozilla’s latest advancements in more detail below:

November 2022 Mozilla launched $35M responsible tech fund Investment portfolio includes AI companies Fiddler, Lelapa and Themis

Mozilla launched a venture capital fund for early-stage (seed or series A) startups whose products or technologies protect privacy, decentralize digital power and build more trustworthy AI, advancing privacy, inclusion, transparency, human dignity and other values in the Mozilla Manifesto.

March 2023 Invested $30M in Mozilla.ai Launched R+D lab developing trustworthy and open-source AI models

After speaking to thousands of founders, engineers, scientists, artists, designers and activists who are taking a new approach to AI, one founded in human agency and transparency, but who weren’t seeing that approach from the big tech and cloud companies with the most power, Mozilla announced Mozilla.ai: a start-up and community dedicated to building a trustworthy and independent open-source AI ecosystem.

April 2023 Mozilla co-writes AI regulation policy brief guiding EU’s AI Act Garnering 50+ individual expert and institutional signatories

Mozilla, among a larger group of more than 50 notable artificial intelligence researchers, urged European politicians to adopt broader AI regulation and not exclude generative AI from the European Union’s AI Act as co-authors of the brief.

May 2023 Acquired Fakespot Expanded AI efforts with product that utilizes AI to discern deceptive customer reviews

For close to thirty years, commerce has been core to how people use the internet. The global ecommerce scale-up has brought convenience to people’s lives, but also new challenges and bad actors. Enter: Fakespot. Mozilla acquired Fakespot to continue investing in and enhancing its sophisticated AI and machine learning (ML) systems that flag deceptive reviews, helping people trust and enjoy their online shopping experience more.

May 2023 Hosted inaugural Responsible AI Challenge Challenged builders to design trustworthy AI solutions

Mozilla brought together some of the brightest thinkers, technologists, ethicists and business leaders who believe in trustworthy AI for a day of talks, workshops and working sessions to help them get their ideas off the ground through Mozilla’s Responsible AI Challenge. The goal was to inspire, support and invest in a community of builders working on responsible AI products and solutions. Mozilla invested $50,000 into the top applicant and projects presented.

June 2023 Open-Source Research & Investigations AI Team Launched Focusing on platform integrity amid elections

Funding gaps and aggressive actions by big platforms are hampering the important work of independent public interest research scrutinizing the technology industry’s impact on society. In order to fill this gap, produce more independent investigations and help inform better public policy, Mozilla launched The Open Source Research and Investigations (OSRI) team. OSRI’s work is largely community-driven, leveraging crowdsourced data donations with their first project focused on TikTok. 

July 2023 Introduced AI Help (Beta) on MDN Creating rapid access to extensive database and coding best practices

MDN launched AI Help as an assistant for web developers. The tool enhances search efficiency by distilling MDN articles into relevant answers. Users can ask questions, receive streamlined responses with sources, and directly test code in the MDN playground. AI Help boosts productivity, making navigation on MDN faster and more intuitive for developers.

September 2023 MozFest Debuted in Kenya Mobilizing East African AI community on critical issues

MozFest House debuted in Kenya, embodying the Africa Innovation Mradi’s work by confronting pressing realities at the intersection of emerging technology and the African continent, including digital extractivism and AI governance. MozFest House Kenya featured over 20 sessions aligned under the theme “Mobilizing African Communities for Trustworthy AI.”

September 2023 Mozilla AI Researchers featured in TIME 100 in AI Two researchers celebrated for their work

Mozilla Trustworthy AI researchers, Inioluwa Deborah Raji (Mozilla Fellow) and Abeba Birhane (Senior Adviser in AI Accountability at the Mozilla Foundation) were named to the TIME100 Most Influential People in AI. 

October 2023 AI Guide Launched Introduces online hub for builders to access resources on responsible AI development

Mozilla announced the availability of its AI Guide, a collaborative and interactive web resource that serves as the starting point for developers diving into the world of AI, especially large language models (LLMs). Not only do developers get access to learning modules and curated tools, they can also contribute through Github, making this a community-powered learning tool.

October 2023 Mozilla attended AI Safety Summit AI Leaders convened in London for a summit focused on AI safety

AI leaders from around the world, including Mozilla, convened near London for the AI Safety Summit organized by the UK government. In a joint declaration, the countries attending the summit pledged to collaborate on AI safety, just as the UK government announced the formation of an AI Safety Institute.

October 2023 Called for more Openness in AI Released statement on AI safety garnering 1,800+ signatories

Ahead of the Summit, Mozilla published a joint statement on the importance of openness for AI safety with collaborators in the open source community. The statement included 1,800+ signatories, including Nobel Peace Prize winner, Maria Ressa, and several government ministers.

October 2023 Co-signed Open Letter to UK Prime Minister Raised flags on lack of civil society representation prior to AI Safety Summit

Mozilla co-signed an open letter to Prime Minister Rishi Sunak emphasizing the lack of civil society representation at the Summit.

November 2023 Commented on Biden’s AI Executive Order Welcoming the order but pointing out gaps regarding open-source AI

The White House released a sweeping executive order on AI. The executive order covers a wide range of issues, from safety and security to privacy to civil rights and consumer protection. Mozilla President, Mark Surman spoke with numerous policy media including Gizmodo and Fedscoop to advocate for the importance of AI governance to advance privacy and open-source development in AI.

November 2023 Unveiled $200M AI Fund Collaborated on collective commitment from U.S. VP Harris and nine other foundations

Mozilla joined a coalition of 10 leading philanthropies, with leadership from US Vice President Kamala Harris, to invest $200 million in a more trustworthy AI ecosystem. In the coming years, the 10 philanthropies will focus their grantmaking on five key areas identified by Vice President Harris including the intersection of AI with democracy, international rules, and workers’ rights. 

November 2023 AI-Powered Fakespot Chat Launches Mozilla’s first large-language model and AI agent to help online shoppers

Fakespot Chat is a new AI agent that Mozilla started testing as Mozilla’s first LLM. Currently available to 100% of Fakespot.com users, it serves as an online shopping guide: it answers your product questions, suggests questions, and recommends alternatives so you can buy with confidence, creating an experience similar to talking to a customer service person in a physical store. Fakespot Chat uses AI and machine learning to find answers in product reviews, filtering out fake reviews to help shoppers not only save time, but also trust their purchasing decisions.

November 2023 Mozilla Joined ‘AI Insight Forum’ in U.S. Senate Discussed privacy and liability in AI with U.S. Senators

Mozilla Foundation President, Mark Surman, spoke with members of the US Senate, including Majority Leader Schumer, Senator Rounds, Senator Heinrich and Senator Young, about two of what Mozilla believes are the most critical questions we must ask if we’re to chart a better path forward with AI: How can we protect people’s privacy in the AI era? And how can we ensure that those who cause harm through AI can be held both accountable and liable?

November 2023 Director of Mozilla.ai Presents to UK’s House of Lords Shared position on open-source AI and technical liability concerns

Moez Draief, Director of Mozilla.ai presented Mozilla’s stance on open source AI and addressed issues around technical liability before the UK’s House of Lords’ Communications and Digital Committee during a formal inquiry into LLMs.

November 2023 Mozilla Meetups in D.C., London and Brussels Featured AI experts from senate, parliament, academia and nonprofits with 200 guests in attendance across markets

At our ‘Mozilla Meetup’ event in Washington, D.C., our SVP of Innovation Ecosystems, Imo Udom, engaged in a fireside chat with the White House Deputy CTO, focusing on the intersection between AI, open source, and privacy, followed by a panel discussion featuring experts from the Senate, civil society, and academia. Over 80 people attended the event.

Mozilla hosted a Policy Talks panel in Westminster, London where our VP, Global Policy, Linda Griffin discussed the nuances of open source AI and how to balance innovation with safety. Panelists included various AI policy experts and featured Alex Davies-Jones, Member of Parliament, and an associate director from the Ada Lovelace Institute, an independent research institute with a mission to ensure data and AI benefit society. 

Mozilla also reintroduced ‘Mozilla Mornings’ in Brussels. The event highlighted the interplay of AI with open markets and competition, and included a keynote by a Member of the European Parliament.

November 2023 Mozilla CEO Joined French Prime Minister’s Generative AI Committee Defended open-source values and discussed AI’s societal impacts in Paris

Mozilla CEO, Mitchell Baker defended open source values and discussed AI’s societal impacts with the Generative AI Committee set up by the French Prime Minister in Paris.

December 2023 Mozilla Innovation Week Celebrated and announced multiple AI-based innovation projects

Mozilla announced several AI-based innovation projects we are working on that explore the vast AI opportunities that exist, and invited the Mozilla community to join us in collaborative conversations on our AI Discord.

December 2023 Mozilla Announced Three Mozilla AI-based Innovation Projects Introducing Solo, MemoryCache, and llamafile

Mozilla shared three new experimental, prototype tools with the public: Solo, an AI website builder for solopreneurs; MemoryCache, an innovation project that augments an on-device, personal model with local files saved from the browser to reflect a more personalized and tailored experience through the lens of privacy and agency; and llamafile, an open-source initiative that collapses a full-stack LLM chatbot down to a single file that runs on six operating systems.

The post Mozilla’s Biggest AI Moments appeared first on The Mozilla Blog.

This Week In RustThis Week in Rust 532

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @ThisWeekInRust on Twitter or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs
Miscellaneous

Crate of the Week

This week's crate is Apache Iceberg Rust, a Rust implementation of a table format for huge analytic datasets.

Thanks to Renjie Liu for the self-suggestion!

Please submit your suggestions and votes for next week!

Call for Participation; projects and speakers

CFP - Projects

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

CFP - Speakers

Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.

No Calls for papers were submitted this week.

If you are an event organizer hoping to expand the reach of your event, please submit a link to the submission website through a PR to TWiR.

Updates from the Rust Project

409 pull requests were merged in the last week

Rust Compiler Performance Triage

This was a very quiet week with only one PR having any real impact on overall compiler performance. The removal of the internal StructuralEq trait saw a roughly 0.4% improvement on average across nearly 50 real-world benchmarks.

Triage done by @rylev. Revision range: d6b151fc7..5c9c3c7

Summary:

(instructions:u)             mean    range            count
Regressions ❌ (primary)      0.5%   [0.3%, 0.7%]       5
Regressions ❌ (secondary)    0.5%   [0.2%, 1.4%]      10
Improvements ✅ (primary)    -0.5%   [-1.5%, -0.2%]    48
Improvements ✅ (secondary)  -2.3%   [-7.7%, -0.4%]    36
All ❌✅ (primary)            -0.4%   [-1.5%, 0.7%]     53

0 Regressions, 4 Improvements, 4 Mixed; 3 of them in rollups. 37 artifact comparisons made in total.

Full report here

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs
Tracking Issues & PRs
Language Reference
  • No Language Reference RFCs entered Final Comment Period this week.
Unsafe Code Guidelines
  • No Unsafe Code Guideline RFCs entered Final Comment Period this week.
New and Updated RFCs
Call for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization. The following RFCs would benefit from user testing before moving forward:

  • No RFCs issued a call for testing this week.

If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Upcoming Events

Rusty Events between 2024-01-31 - 2024-02-28 🦀

Virtual
Asia
Europe
North America
Oceania

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Jobs

Please see the latest Who's Hiring thread on r/rust

Quote of the Week

The sheer stability of this program is what made me use rust for everything going forward. The social-service has a 100% uptime for almost 2.5 years now. It’s processed 12.9TB of traffic and is still using 1.5mb of ram just like the day we ran it 2.5 years ago. The resource usage is so low it brings tears to my eyes. As someone who came from Java, the lack of OOM errors or GC problems has been a huge benefit of rust and I don’t ever see myself using any other programming language. I’m a big fan of the mindset “build it once, but build it the right way” which is why rust is always my choice.

/u/Tiflotin on /r/rust

Thanks to Brian Kung for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, cdmistman, ericseppanen, extrawurst, andrewpollack, U007D, kolharsam, joelmarcey, mariannegoldin, bennyvasquez.

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

Spidermonkey Development BlogSpiderMonkey Newsletter (Firefox 122-123)

Hello everyone!

Matthew Gaudet here from the SpiderMonkey team, giving Jan a break from newsletter writing.

Our newsletter is an opportunity to highlight some of the work that’s happened in SpiderMonkey land over the last couple of releases. Everyone is hard at work (though some of us are nicely rejuvenated from a winter break).

Feel free to email feedback on the shape of the newsletter to me, as I’d be interested in hearing what works for people and what doesn’t.

🚀 Performance

We’re continuing work on our performance story, with Speedometer 3 being the current main target. We like Speedometer 3 because it provides a set of workloads that we think better reflect the real web, driving improvements to real users too.

Here is a curated selection of just some of the performance related changes in this release:

🔦 Contributor Spotlight: Mayank Bansal

Mayank Bansal has been a huge help to the Firefox project for years. Taking a special interest in performance, he is often one of the first to take note of a performance improvement or regression. He also frequently files performance bugs, some of which have identified fixable problems, along with comparative profiles which smooth the investigative process.

In his own words:

Mayank Bansal has been using Firefox Nightly for more than a decade. He is passionate about browser performance and scours the internet for interesting javascript test-cases for the SM team to analyse. He closely monitors the performance improvement and regressions on AWFY. You can check out some of the bugs he has filed by visiting the metabug here.

The SpiderMonkey team greatly appreciates all the help we get from Mayank. Thank you very much Mayank.

⚡ Wasm

👷🏽‍♀️ Other Work

⏰ Date parsing improvements

Contributor Vinny Diehl has continued improving our date parsing story, aiming to improve compatibility and handling of peculiar cases.

🐇 Fuzzing

Fuzzing, which generates and runs random testcases to see if they crash, turns out to be an unreasonably effective technique for finding bugs. The SpiderMonkey team works with a variety of fuzzers, both inside of Mozilla (👋 Hi fuzzing@!) and outside (Thank you all!).

Fuzzing can find test cases ranging from the benign but still worth fixing to extremely serious security bugs. Security-sensitive fuzz bugs are eligible for the Mozilla Bug Bounty Program.

To show off the kind of fun we have with fuzzing, I thought I’d curate some fun, interesting (and not hidden for security reasons) fuzz bugs.

Mozilla ThunderbirdThunderbird Monthly Development Digest: January 2024

Hello Thunderbird Community! I’m very happy to kick off a new monthly Thunderbird development recap in order to bring a deeper look and understanding of what we’re working on, and the status of these efforts. (We also publish monthly progress reports on Thunderbird for Android.)

These monthly digests will be in a very short format, focusing primarily on the work that is currently being planned or initiated that is not yet fully captured in BugZilla. Nonetheless, we’re putting it out there to cherish and fully embrace the open nature of Thunderbird.

Without further ado, let’s get into it!

2024 Thunderbird Development Roadmaps Published

Over at DTN, we’ve published initial 2024 roadmaps for the work we have planned on Thunderbird for desktop, and Thunderbird for Android. These will be updated periodically as we continue to scope out each project.

Global Message Database

Our database is currently based on Mork, which is a very old paradigm that creates a lot of limitations, blocking us from doing anything remotely modern or expected (a real threaded conversation view is a classic example). Removing and reworking this implementation, which is at the very core of every message and folder interaction, is not an easy lift and requires a lot of careful planning and exploration, but the work is underway.

You can follow the general effort in Bug 1572000.

The first clean up effort is targeted at removing the old and bad paradigm of the “non-unique unique ID” (kudos to our very own Ben Campbell for coining this term), which causes all sorts of problems. You can follow the work in Bug 1806770.

Cards view final sprint

If you’re using Daily or Beta you might have already seen a lot of drastic differences from 115 for Cards View.

Currently, we’re shaping up the final sprint to polish what we’ve implemented and add extra needed features. We’re in the process of opening all the needed bugs and assigning resources for this final sprint. You can follow the progress by tracking this meta bug and all its child bugs.

As usual, we will continue sharing plans and mock-ups in the UX mailing list, so make sure to follow that if you’re interested in seeing early visual prototypes before any code is touched.

Rust Implementation and Exchange Support

This is a very large topic and exploration that requires dedicated posts and extensive recaps. The short story is that we were able to enable the usage of Rust in Thunderbird, therefore opening the doors for us to start implementing native support for the Exchange protocol by building and vendoring a Rust crate.

Once we have a stable and safe implementation, we will share that crate publicly on a GitHub repo so everyone will be able to vendor it and improve it.

Make sure to follow tb-planning and tb-developers mailing lists to soon get more detailed and very in depth info on Rust and Exchange in Thunderbird.

As usual, if you want to see things as they land you can always check the pushlog and try running daily, which would be immensely helpful for catching bugs early.

Alessandro Castellani (he, him)
Director of Product Engineering

If you’re interested in joining the discussion around Thunderbird development, consider joining one or several of our mailing list groups here.

The post Thunderbird Monthly Development Digest: January 2024 appeared first on The Thunderbird Blog.

Hacks.Mozilla.OrgOption Soup: the subtle pitfalls of combining compiler flags

Firefox development uncovers many cross-platform differences and unique features of its combination of dependencies. Engineers working on Firefox regularly overcome these challenges and while we can’t detail all of them, we think you’ll enjoy hearing about some so here’s a sample of a recent technical investigation.

During the Firefox 120 beta cycle, a new crash signature appeared on our radars with significant volume.

At that time, the distribution across operating systems revealed that more than 50% of the crash volume originates from Ubuntu 18.04 LTS users.

The main process crashes in a CanvasRenderer thread, with the following call stack:

0  firefox  std::locale::operator=  
1  firefox  std::ios_base::imbue  
2  firefox  std::basic_ios<char, std::char_traits<char> >::imbue  
3  libxul.so  sh::InitializeStream<std::__cxx11::basic_ostringstream<char, std::char_traits<char>, std::allocator<char> > >  /build/firefox-ZwAdKm/firefox-120.0~b2+build1/gfx/angle/checkout/src/compiler/translator/Common.h:238
3  libxul.so  sh::TCompiler::setResourceString  /build/firefox-ZwAdKm/firefox-120.0~b2+build1/gfx/angle/checkout/src/compiler/translator/Compiler.cpp:1294
4  libxul.so  sh::TCompiler::Init  /build/firefox-ZwAdKm/firefox-120.0~b2+build1/gfx/angle/checkout/src/compiler/translator/Compiler.cpp:407
5  libxul.so  sh::ConstructCompiler  /build/firefox-ZwAdKm/firefox-120.0~b2+build1/gfx/angle/checkout/src/compiler/translator/ShaderLang.cpp:368
6  libxul.so  mozilla::webgl::ShaderValidator::Create  /build/firefox-ZwAdKm/firefox-120.0~b2+build1/dom/canvas/WebGLShaderValidator.cpp:215
6  libxul.so  mozilla::WebGLContext::CreateShaderValidator const  /build/firefox-ZwAdKm/firefox-120.0~b2+build1/dom/canvas/WebGLShaderValidator.cpp:196
7  libxul.so  mozilla::WebGLShader::CompileShader  /build/firefox-ZwAdKm/firefox-120.0~b2+build1/dom/canvas/WebGLShader.cpp:98

At first glance, we want to blame WebGL. The C++ standard library functions cannot be at fault, right?

But when looking at the WebGL code, the crash occurs in the perfectly valid lines of C++ summarized below:

std::ostringstream stream;
stream.imbue(std::locale::classic());

This code should never crash, and yet it does. In fact, taking a closer look at the stack gives a first lead for investigation:
Although we crash into functions that belong to the C++ standard library, these functions appear to live in the firefox binary.

This is an unusual situation that never occurs with official builds of Firefox.
It is, however, very common for distributions to change configuration settings and apply downstream patches to an upstream source; nothing unusual about that.
Moreover, only a single build of Firefox Beta is causing this crash.

We know this thanks to a unique identifier associated with any ELF binary.
Here, if we choose any specific version of Firefox 120 Beta (such as 120b9), the crashes all embed the same unique identifier for firefox.

Now, how can we guess what build produces this weird binary?

A useful user comment mentions that they regularly experience this crash since updating to 120.0~b2+build1-0ubuntu0.18.04.1.
And by looking for this build identifier, we quickly reach the Firefox Beta PPA.
Then indeed, we are able to reproduce the crash by installing it in an Ubuntu 18.04 LTS virtual machine: it occurs when loading any WebGL page!
With the binary now at hand, running nm -D ./firefox confirms the presence of several symbols related to libstdc++ that live in the text section (T marker).

Templated and inline symbols from libstdc++ usually appear as weak (W marker), so there is only one explanation for this situation: firefox has been statically linked with libstdc++, probably through -static-libstdc++.

Fortunately, the build logs are available for all Ubuntu packages.
After some digging, we find the logs for the 120b9 build, which indeed contain references to -static-libstdc++.

But why?

Again, everything is well documented, and thanks to well-trained digging skills we reach a bug report that provides interesting insights.
Firefox requires a modern C++ compiler, and hence a modern libstdc++, which is unavailable on old systems like Ubuntu 18.04 LTS.
The build uses -static-libstdc++ to close this gap.
This just explains the weird setup though.

What about the crash?

Since we can now reproduce it, we can launch Firefox in a debugger and continue our investigation.
When inspecting the crash site, we seem to crash because std::locale::classic() is not properly initialized.
Let’s take a peek at the implementation.

const locale& locale::classic()
{
  _S_initialize();
  return *(const locale*)c_locale;
}

_S_initialize() is in charge of making sure that c_locale will be properly initialized before we return a reference to it.
To achieve this, _S_initialize() calls another function, _S_initialize_once().

void locale::_S_initialize()
{
#ifdef __GTHREADS
  if (!__gnu_cxx::__is_single_threaded())
    __gthread_once(&_S_once, _S_initialize_once);
#endif

  if (__builtin_expect(!_S_classic, 0))
    _S_initialize_once();
}

In _S_initialize(), we first go through a wrapper for pthread_once(): the first thread that reaches this code consumes _S_once and calls _S_initialize_once(), whereas other threads (if any) are stuck waiting for _S_initialize_once() to complete.

This looks rather foolproof, right?

There is even an extra direct call to _S_initialize_once() if _S_classic is still uninitialized after that.
Now, _S_initialize_once() itself is rather straightforward: it allocates _S_classic and puts it within c_locale.

void
locale::_S_initialize_once() throw()
{
  // Need to check this because we could get called once from _S_initialize()
  // when the program is single-threaded, and then again (via __gthread_once)
  // when it's multi-threaded.
  if (_S_classic)
    return;

  // 2 references.
  // One reference for _S_classic, one for _S_global
  _S_classic = new (&c_locale_impl) _Impl(2);
  _S_global = _S_classic;
  new (&c_locale) locale(_S_classic);
}
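The pattern in these two functions is essentially guarded once-initialization. A standalone analogue, sketched here with std::call_once rather than the __gthread primitives libstdc++ actually uses, could look like this:

```cpp
#include <mutex>

// Guarded once-initialization, mirroring the shape of _S_initialize() and
// _S_initialize_once(): a once-flag for the threaded path, plus direct
// re-checks covering the single-threaded path.
static std::once_flag g_once;
static const int* g_classic = nullptr;

static void initialize_once() {
    if (g_classic)               // may legitimately be entered twice
        return;
    static const int storage = 42;
    g_classic = &storage;
}

const int& classic() {
    std::call_once(g_once, initialize_once);
    if (!g_classic)              // extra direct call, as in _S_initialize()
        initialize_once();
    return *g_classic;
}
```

The scheme only stays safe as long as every module observes the same once-flag and the same pointers, which is exactly the invariant the crash breaks.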

The crash looks as if we never went through _S_initialize_once(), so let’s put a breakpoint there and see what happens.
And just by doing this, we already notice something suspicious.
We do reach _S_initialize_once(), but not within the firefox binary: instead, we only ever reach the version exported by liblgpllibs.so.
In fact, liblgpllibs.so is also statically linked with libstdc++, such that firefox and liblgpllibs.so both embed and export their own _S_initialize_once() function.

By default, symbol interposition applies, and _S_initialize_once() should always be called through the procedure linkage table (PLT), so that every module ends up calling the same version of the function.
If symbol interposition were happening here, we would expect that liblgpllibs.so would reach the version of _S_initialize_once() exported by firefox rather than its own, because firefox was loaded first.

So maybe there is no symbol interposition.

This can occur when using -fno-semantic-interposition.

Each version of the standard library would live on its own, independent from the other versions.
But neither the Firefox build system nor the Ubuntu maintainer seems to pass this flag to the compiler.
However, by looking at the disassembly for _S_initialize() and _S_initialize_once(), we can see that the exported global variables (_S_once, _S_classic, _S_global) are subject to symbol interposition:

These accesses all go through the global offset table (GOT), so that every module ends up accessing the same version of the variable.
This seems strange given what we said earlier about _S_initialize_once().
Non-exported global variables (c_locale, c_locale_impl), however, are accessed directly without symbol interposition, as expected.

We now have enough information to explain the crash.

When we reach _S_initialize() in liblgpllibs.so, we actually consume the _S_once that lives in firefox, and initialize the _S_classic and _S_global that live in firefox.
But we initialize them with pointers to well initialized variables c_locale_impl and c_locale that live in liblgpllibs.so!
The variables c_locale_impl and c_locale that live in firefox, however, remain uninitialized.

So if we later reach _S_initialize() in firefox, everything looks as if initialization has happened.
But then we return a reference to the version of c_locale that lives in firefox, and this version has never been initialized.

Boom!

Now the main question is: why do we see interposition occur for _S_once but not for _S_initialize_once()?
If we step back for a minute, there is a fundamental distinction between these symbols: one is a function symbol, the other is a variable symbol.
And indeed, the Firefox build system uses the -Bsymbolic-functions flag!

The ld man page describes it as follows:

-Bsymbolic-functions

When creating a shared library, bind references to global function symbols to the definition within the shared library, if any.  This option is only meaningful on ELF platforms which support shared libraries.

As opposed to:

-Bsymbolic

When creating a shared library, bind references to global symbols to the definition within the shared library, if any.  Normally, it is possible for a program linked against a shared library to override the definition within the shared library. This option is only meaningful on ELF platforms which support shared libraries.

Nailed it!

The crash occurs because this flag makes us use a weird variant of symbol interposition, where symbol interposition happens for variable symbols like _S_once and _S_classic but not for function symbols like _S_initialize_once().

This results in a mismatch regarding how we access global variables: exported global variables are unique thanks to interposition, whereas every non-interposed function will access its own version of any non-exported global variable.

With all the knowledge that we have now gathered, it is easy to write a reproducer that does not involve any Firefox code:

/* main.cc */
#include <iostream>
#include <locale>

extern void pain();

int main() {
    pain();
    std::cout << "[main] " << std::locale::classic().name() << "\n";
    return 0;
}

/* pain.cc */
#include <iostream>
#include <locale>

void pain() {
    std::cout << "[pain] " << std::locale::classic().name() << "\n";
}

# Makefile
all:
	$(CXX) pain.cc -fPIC -shared -o libpain.so -static-libstdc++ -Wl,-Bsymbolic-functions
	$(CXX) main.cc -fPIC -c -o main.o
	$(CC) main.o -fPIC -o main /usr/lib/gcc/x86_64-redhat-linux/13/libstdc++.a -L. -Wl,-rpath=. -lpain -Wl,-Bsymbolic-functions
	./main

clean:
	$(RM) libpain.so main

Understanding the bug is one step, and solving it is yet another story.
Should it be considered a libstdc++ bug that the code for locales is not compatible with -static-libstdc++ -Bsymbolic-functions?

It feels like combining these flags is a very nice way to dig our own grave, and that seems to be the opinion of the libstdc++ maintainers indeed.

Overall, perhaps the strangest part of this story is that this combination did not cause any trouble up until now.
Therefore, we suggested that the package maintainer stop using -static-libstdc++.

There are other ways to use a newer libstdc++ than the one available on the system, such as linking dynamically against a bundled copy and setting an RPATH to locate it.

Doing that allowed them to successfully deploy a fixed version of the package.
A few days after that, with the official release of Firefox 120, we noticed a very significant bump in volume for the same crash signature. Not again!

This time the volume was coming exclusively from users of NixOS 23.05, and it was huge!

After we shared the conclusions from our beta investigation with them, the maintainers of NixOS were able to quickly associate the crash with an issue that had not yet been backported for 23.05 and was causing the compiler to behave like -static-libstdc++.

To avoid such a mess in the future, we added detection for this particular setup in Firefox’s configure.

We are grateful to the people who have helped fix this issue, in particular:

  • Rico Tzschichholz (ricotz) who quickly fixed the Ubuntu 18.04 LTS package, and Amin Bandali (bandali) who provided help on the way;
  • Martin Weinelt (hexa) and Artturin for their prompt fixes for the NixOS 23.05 package;
  • Nicolas B. Pierron (nbp) for helping us get started with NixOS, which allowed us to quickly share useful information with the NixOS package maintainers.

 

The post Option Soup: the subtle pitfalls of combining compiler flags appeared first on Mozilla Hacks - the Web developer blog.

The Talospace ProjectFirefox 122 on POWER

Right now during our relocation I'm not always in the same ZIP code as my T2, but we've still got to keep it up to date. To that end Firefox 122 is out with some UI improvements and new Web platform support.

A number of changes have occurred between Fx121 and Fx122 which improve our situation in OpenPOWER world, most notably that we no longer need to drag our WebRTC build changes around (and/or you can remove --disable-webrtc in your .mozconfig). However, on Fedora I needed to add ac_add_options --with-libclang-path=/usr/lib64 to my .mozconfigs (or ./mach build would fail during configuration because Rust bindgen could not find libclang.so), and I also needed to effectively fix bug 1865993 to get PGO builds to work again on Python 3.12, which Fedora 39 ships with. You may not need to do either of these things depending on your distro. There are separate weird glitches due to certain other components being deprecated in Python 3.12 that do not otherwise affect the build.

To that end, here is the updated PGO-LTO patch I'm using, as well as the current .mozconfigs:

Optimized

export CC=/usr/bin/gcc
export CXX=/usr/bin/g++

mk_add_options MOZ_MAKE_FLAGS="-j24" # or as you like
ac_add_options --enable-application=browser
ac_add_options --enable-optimize="-O3 -mcpu=power9 -fpermissive"
ac_add_options --enable-release
ac_add_options --enable-linker=bfd
ac_add_options --enable-lto=full
ac_add_options --without-wasm-sandboxed-libraries
ac_add_options --with-libclang-path=/usr/lib64
ac_add_options MOZ_PGO=1

export GN=/home/censored/bin/gn # if you haz
export RUSTC_OPT_LEVEL=2

Debug

export CC=/usr/bin/gcc
export CXX=/usr/bin/g++

mk_add_options MOZ_MAKE_FLAGS="-j24" # or as you like
ac_add_options --enable-application=browser
ac_add_options --enable-optimize="-Og -mcpu=power9 -fpermissive -DXXH_NO_INLINE_HINTS=1"
ac_add_options --enable-debug
ac_add_options --enable-linker=bfd
ac_add_options --without-wasm-sandboxed-libraries
ac_add_options --with-libclang-path=/usr/lib64

export GN=/home/censored/bin/gn # if you haz

Firefox Developer ExperienceFirefox DevTools Newsletter — 122

Developer Tools help developers write and debug websites on Firefox. This newsletter gives an overview of the work we’ve done as part of the Firefox 122 Nightly release cycle.

Firefox being an open source project, we are grateful to get contributions from people outside of Mozilla, like AAR.dev, who fixed a typo in the Profiler settings page (#1865895).

Want to help? DevTools are written in HTML, CSS and JS, so any web developer can contribute! Read how to set up the work environment and check the list of mentored issues.


Accessibility

As mentioned in previous newsletters, we have been working on accessibility issues across the toolbox, and fixed a few of them in this release. First, there was a big focus on the Inspector to make sure that its various elements are all accessible and can be activated using only the keyboard:

  • the checkbox to disable/enable a property (#1844055)
  • the selector highlighter icon (#1844053)
  • the button to show rules containing a given property (#1844058)
  • the expandable containers like Pseudo elements, keyframes, … (#1866986)
  • the new CSS property edit control (#1844057)
  • the toggle buttons in top toolbar (:hov, .cls, print and dark/light mode simulation) (#1844061)
  • the grid highlighter color picker (#1844072)

While working on keyboard navigation in the Rules view, we felt we could revisit the behavior of the Enter key when editing a selector, property name, or value. Since we know this is an important change, we wrote a dedicated blog post to explain the motivation behind it: https://fxdx.dev/rules-view-enter-key/

Finally, we fixed the remaining focus indicator issues (#1865846, #186608) and a color contrast issue (#1843332), and properly labelled the button that toggles object properties in the console and debugger (#1844088).

This project is coming to an end, but we’ll likely have another project later this year to take care of the remaining issues, especially in tools we haven’t investigated yet, like the Network panel.

Miscellaneous

Did you know that the console exposes two helper functions, $ and $$? They are similar to document.querySelector and document.querySelectorAll, the only difference being that $$ returns an array, while document.querySelectorAll returns a NodeList. Now those two helpers are eagerly evaluated, making it easier to query a specific element, as you get feedback about matching elements as you’re typing (#1616524).
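Conceptually, the helper simply wraps querySelectorAll and converts the result to a true array (a sketch of the behavior, not the DevTools source; the optional root parameter is our own addition for illustration):

```javascript
// $$-style helper: run querySelectorAll and return a real Array, so array
// methods like .map() and .filter() work directly on the result.
const $$ = (selector, root = document) =>
  Array.from(root.querySelectorAll(selector));
```

Because the result is a real array, something like $$("a").map(a => a.href) works without an intermediate Array.from.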

Screenshot: the console input filled with the expression `$$(":has(h1)")`; the eager evaluation result just below it displays `Array [ html, body ]`.

You can now set beforeunload and unload event listener breakpoints in the Debugger which should be pretty useful when investigating navigation/reload issues (#1569775).

The total transferred size in the Network Monitor does not include service worker requests any more (#1347146).

We fixed a few issues in the Inspector. First, stylesheet text could fail to load, which could occur in projects using Vite (#1867816). We also introduced a bug in the Inspector markup view in 121 which caused a single click to activate URLs in element attributes (e.g. the src attribute on <img> elements) (#1870214). Finally, using the clip-path editor could cause the value in the Rules view to become invalid (#1868263).

Thank you for reading this and using our tools, see you next month for a new round of updates 🙂

The Servo BlogTwo months in Servo: better inline layout, stable Rust, and more!

Servo nightly showing support for ‘text-align-last’, ‘text-align: justify’, ‘vertical-align: baseline’, and ‘position: sticky’

Servo has had some exciting changes land in our nightly builds over the last month:

  • as of 2023-12-27, the ‘text-align-last’ property is now supported (@mrobinson, #30905)
  • as of 2023-12-27, ‘text-align: justify’ is now supported (@mrobinson, #30807, #30866)
  • as of 2024-01-09, ‘line-height’ and ‘vertical-align’ are now moderately supported (@mrobinson, #30902)
  • as of 2024-01-24, ‘Event#composedPath()’ is now supported (@gterzian, #31123)
Servo nightly showing rudimentary support for table layouts when the pref is enabled

We’ve started working on support for sticky positioning and tables in the new layout engine, with some very early sticky positioning code landing on 2023-11-30 (@mrobinson, #30686), the CSS tables tests now enabled (@mrobinson, #31131), and rudimentary table layout landing on 2024-01-20 under the layout.tables.enabled pref (@mrobinson, @Loirooriol, @Manishearth, #30799, #30868, #31121).

Geometry in our new layout engine is now being migrated from floating-point coordinates (f32) to fixed-point coordinates (i32 × 1/60) (@atbrakhi, #30825, #30894, #31135), similar to other engines like WebKit and Blink. While floating-point geometry was thought to be better for transformation-heavy content like SVG, the fact that larger values are less precise than smaller values causes a variety of rendering problems and test failures (#29819).
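The idea behind the fixed-point representation can be sketched in a few lines (a toy illustration in C++ rather than Servo’s actual Rust types, with hypothetical names):

```cpp
#include <cmath>
#include <cstdint>

// A CSS length stored as an i32 count of 1/60ths of a pixel. Unlike f32,
// the spacing between representable values is uniform, so large
// coordinates are exactly as precise as small ones.
struct Au { int32_t units; };
constexpr int32_t AU_PER_PX = 60;

Au au_from_px(double px) {
    return Au{ static_cast<int32_t>(std::llround(px * AU_PER_PX)) };
}
double px_from_au(Au a) {
    return static_cast<double>(a.units) / AU_PER_PX;
}
```

The denominator 60 divides evenly by 2, 3, 4, 5 and 6, which keeps common fractions of a pixel exact.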

As a result of these changes, we’ve made big strides in our WPT pass rates:

  • CSS2 floats (+3.3pp to 84.9%) and floats-clear (+5.6pp to 78.9%) continue to surge
  • we now surpass legacy layout in the CSS2 linebox tests (61.1% → 87.9%, legacy 86.4%)
  • we now surpass legacy layout in the css-flexbox tests (49.5% → 52.7%, legacy 52.2%)
  • we’ve closed 76% of the gap in key CSS2 tests (79.2% → 82.2%, legacy 83.1%)

Updates, servoshell, and stability

GStreamer has been updated from 0.15 to 0.21 (@mrobinson, #30750), fixing long-standing breakage of video playback. WebGPU has been updated from 0.17 to 0.18 (@sagudev, #30926, #30954), and ANGLE has been updated from April 2019 to August 2023 (@sagudev, #30546).

Servo nightly showing Back and Forward buttons in the minibrowser

Servo’s example browser now has Back and Forward buttons (@atbrakhi, #30805), and no longer shows the incorrect location when navigation takes a long time (@atbrakhi, #30518).

Many stability improvements have landed, including fixes for a crash in inline layout (@atbrakhi, #30897), three WebGPU-related crashes (@lucasMontenegro, @gterzian, @Taym95, #30888, #30989, #31002), a crash in the PerformanceResourceTiming API (@delan, #31063), and several crashes due to script runtimes being dropped in the wrong order (@gterzian, #30896).

CI, code health, and dev changes

The intermittent macOS build failures on CI should now be fixed (@mrobinson, #30975).

Servo now has some preliminary Android build support (@mukilan, #31086), though more work needs to be done before Servo will run on Android.

The long-term effort to simplify how Servo is built continues (@mrobinson, #31075), and we’ve replaced the time crate with chrono and std::time in much of Servo (@Taym95, @augustebaum, #30927, #31020, #30639, #31079). We have also started migrating our DOM bindings to use typed arrays where possible (@gterzian, #30990, #31077, #31087, #31076, #31106), as part of an effort to reduce our unsafe code surface (#30889, #30862).

Several crates have been moved into our main repo:

We’ve also made some dev changes:

Linux build issues

Several people have reported problems building Servo on newer Linux distro versions, particularly with clang 15 or with clang 16. While we’re still working on fixing the underlying issues, there are some workarounds. If your distro lets you install older versions of clang with a package like clang-14, you can tell Servo to use it with:

export CC=/usr/bin/clang-14
export CXX=/usr/bin/clang++-14

Alternatively you can try our new Nix-based dev environment, which should now work on any Linux distro (@delan, #31001). Nix is a package manager with some unusual benefits. Servo can use Nix to find the correct versions of all of its compilers and build dependencies without needing you to install them or run mach bootstrap. All you need to do is install Nix, and export MACH_USE_NIX= to your environment. See the wiki for more details!

Firefox Developer ExperienceFirefox WebDriver Newsletter — 122

WebDriver is a remote control interface that enables introspection and control of user agents. As such it can help developers to verify that their websites are working and performing well with all major browsers. The protocol is standardized by the W3C and consists of two separate specifications: WebDriver classic (HTTP) and the new WebDriver BiDi (Bi-Directional).

This newsletter gives an overview of the work we’ve done as part of the Firefox 122 release cycle.

Contributions

With Firefox being an open source project, we are grateful to get contributions from people outside of Mozilla.

WebDriver code is written in JavaScript, Python, and Rust, so any web developer can contribute! Read how to set up the work environment and check the list of mentored issues for Marionette.

General

The modifications outlined in this section are applicable to both WebDriver BiDi and Marionette, as both implementations utilize a shared set of common code:

WebDriver BiDi

New: Support for the “browsingContext.traverseHistory” command

The browsingContext.traverseHistory command enables clients to navigate pages within a specified browsing context backward and forward in history, similar to a user clicking the back and forward buttons in the browser’s toolbar. The command expects a delta number argument to specify how many history steps to traverse. For instance, to jump forward to the next page, delta should be set to 1. To navigate back 3 steps (and therefore skip 2 entries), delta should be -3, as in the example below:

{
  "id": 68,
  "method": "browsingContext.traverseHistory",
  "params": {
    "context":"4143d9c5-09bd-4491-816c-8c8a50f89ab2",
    "delta": -3
  }
}

Updates for the browsingContext.setViewport command

In preparation for adding device pixel ratio (DPR) emulation to the browsingContext.setViewport command, a way was needed to retain the current viewport size of the specified top-level browsing context. Using null as a value for the viewport argument, which is already supported, resets the viewport to its original size. Omitting the argument instead ensures that the viewport size remains unchanged.
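For example (the id values below are illustrative), passing null resets the viewport to its original size:

```
{
  "id": 69,
  "method": "browsingContext.setViewport",
  "params": {
    "context": "4143d9c5-09bd-4491-816c-8c8a50f89ab2",
    "viewport": null
  }
}
```

whereas omitting the viewport field entirely leaves the current viewport size untouched:

```
{
  "id": 70,
  "method": "browsingContext.setViewport",
  "params": {
    "context": "4143d9c5-09bd-4491-816c-8c8a50f89ab2"
  }
}
```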

Bug Fixes

Marionette (WebDriver classic)

Bug Fixes

The Mozilla BlogGot a new device? Don’t skip this small step to make it safer

If you just scored a new phone, tablet or computer over the holidays or from an end-of-year sale, congratulations! Starting the new year with a new device is always exciting, and we know you’re eager to get it set up ASAP. 

But whichever device you received, one of the best first steps to personalizing it is setting your preferred or default browser (the screensaver debate isn’t going anywhere, we promise). While many of these devices come with pre-installed browsers, such as Apple’s Safari, you’re not actually stuck with them. Take a moment to consider the alternatives, which can lead to a more customized and efficient browsing experience.

In this article, we’ll dive into the easy steps you can take to make Firefox your default browser and why it might be the perfect fit for your browsing needs across the new devices you received last month during the holidays. 

1. First off, why Firefox?

Before we get into the nitty-gritty about how to add Firefox to your devices, let’s lay out why you should do this. 

While there are multiple browser options to choose from when setting up your new technology, keep in mind that Firefox is a browser backed by a non-profit whose sole mission is to ensure that the Internet is a global public resource, open and accessible to all. It’s a fast browser that puts reliability and security/privacy ahead of profits, with a focus on the needs of people, not shareholders. Firefox also has a wide range of extensions, too, that take your browsing experience to the next level — it’s one of the few browsers that supports extensions on Android devices, by the way. It can also sync all of your devices to take your favorite bookmarks, saved logins, passwords and browsing history wherever you go. Plus, send open tabs between your phone and laptop or computer to pick up where you left off.

2. Where to download and install Firefox

OK, let’s get right to it. The first step in making Firefox the default browser on your new devices is, well, downloading and installing it. Visit our Mozilla Firefox page, scroll down and select your preferred operating system. 

Once Firefox is downloaded on your computer or device, take a second and bask in it, exploring and getting familiar with the layout/features. Don’t get lost in it for too long, though.

3. How to set Firefox as your default browser

Now that Firefox is downloaded on your new device, let’s make it the default browser. Depending on which operating system you are using, the setup is different. Here’s what they look like for each platform:

Windows:

  • Open the Windows Settings menu.
  • Select “Apps” and then click on “Default apps.”
  • Scroll down to the “Web browser” section and choose Mozilla Firefox from the list.

Mac: 

  • Open the Apple menu and select “System Settings.”
  • Click on “Desktop & Dock” and scroll to find the “Default web browser” option.
  • Choose Firefox from the dropdown menu.

Android:

  • Open the Settings app on your device.
  • Scroll down and select “Apps.”
  • Find and tap on “Default apps” or “Browser app,” then choose Firefox.

iOS:

  • Go to the Settings app on your iOS device.
  • Scroll down and select “Firefox” from the list of installed apps.
  • Toggle the “Default Browser App” option to enable Firefox.

4. How to customize your Firefox experience 

With Firefox now set as your default browser, it’s time to customize your experience a bit. If you’re on an Android phone or a laptop or computer, take advantage of the browser themes and collection of extensions it offers — for blocking ads, fixing sound issues and more.

For phones and tablets, play around and tweak the settings to suit your preferences. Tab management, for example, is a great starting point to make browsing safer.

**

As you start your journey with your new devices, making Firefox the default browser is a simple and easy way to make your online experience more personalized and secure as you begin 2024! Enjoy the flexibility, speed and security Firefox brings.

Get Firefox

Get the browser that protects what’s important

The post Got a new device? Don’t skip this small step to make it safer appeared first on The Mozilla Blog.

The Mozilla Blog4 reasons to try Mozilla’s new Firefox Linux package for Ubuntu and Debian derivatives

Great news for Linux users: after months of testing, Mozilla today released a new package for Firefox on Linux (specifically for Ubuntu, Debian, and any Debian-based distribution). If you’ve heard about Linux, which is known for its open-source software and as an alternative to traditional operating systems (OS), and are curious to learn more, here are four reasons to give our new Firefox on Linux package a try.

1. Adaptable to fit your needs

Browsers are complex applications that support many scenarios in people’s daily lives, and we’ve been working on improving sandbox implementations. This is why, while Firefox remains fully compatible with Snap and Flatpak, we want to offer a native package too.

Firefox is available in several official formats on Linux including the Mozilla .tar.bz2 builds and sandboxed packages like Snap and Flatpak.

2. 100% built by Mozilla

We are grateful to those who choose Firefox on Linux, making it a popular option and, for many, their default browser. Previously, Firefox .deb packages needed the help of people and organizations outside of Mozilla (depending on the Linux distribution). With this new package, we offer Firefox assembled from its source code, without any modifications, built and supported by Mozilla. 💪

3. Better performance 

For more than 25 years, Mozilla has built a reputation for making free and open-source web browsers. Because the Firefox browser is open source, we know Firefox inside and out, including how to get the best from it. For example, we built Firefox with advanced compiler-based optimizations for better performance. Note: if you are using another .deb package, you may or may not get all the optimizations we intended; it depends on the package’s maintainers.

4. Faster updates 

Getting the latest version, with its features and security fixes, is key to a good experience whenever you use Firefox. Our new APT repository is directly connected to the Firefox release process, so you will receive the latest updates whenever we make them available. Tip: you will still need to restart Firefox to get the latest version. 😁

Good news: many Linux distributions come with Firefox pre-installed through their package manager and it’s already set as the default browser. 🙌

Can’t find it? Here’s a direct link to try our new Firefox .deb package on Linux, plus our guide on how to install Firefox on Linux.

Try our Firefox on Linux package today!

The Firefox on Linux package now available for Ubuntu and Debian derivatives
Try the latest Firefox on Linux package

The post 4 reasons to try Mozilla’s new Firefox Linux package for Ubuntu and Debian derivatives appeared first on The Mozilla Blog.

Firefox NightlyHappy New Year – These Weeks in Firefox: Issue 152

Highlights

Friends of the Firefox team

Introductions/Shout-Outs

  • [jepstein] Irene Ni joins the Front-end team through April. Welcome! She is starting on the Reader View work with Cieara, Sam, and Fred.

Resolved bugs (excluding employees)

Volunteers that fixed more than one bug

  • Gregory Pappas [:gregp]
  • Javi Rueda :javirid

New contributors (🌟 = first patch)

Project Updates

Add-ons / Web Extensions

Addon Manager & about:addons
  • Landed a fix to prevent AddonRepository.sys.mjs from mistakenly clearing add-ons metadata (stored in ProfD/addons.json) when an addon metadata refresh request is triggered while Gecko is disconnected from the network – Bug 1870905
  • Thanks to Hartmut Welpmann for fixing addon updates error handling of empty results and improving logging – Bug 1861372
WebExtensions Framework
  • Thanks to Gregory Pappas for contributing a fix to Bug 1870498, fixing a regression that was preventing extension content scripts from accessing getCoalescedEvents() after it was marked as only available to secure contexts
WebExtension APIs
  • Thanks to Cimbali for contributing changes to ContextualIdentityService.sys.mjs internals and the WebExtensions contextualIdentities API to introduce a new method that allows extensions to reorder the defined containers – Bug 1333395

Developer Tools

  • Aaron expanded the “Save as File” context menu to all types of Network responses in Netmonitor (bug). It was only enabled for images before.
  • Emilio fixed a styling issue on tooltips (bug)
  • Alex added the ability to show a link to the original source when selecting a location in a bundle in the Debugger (bug)
  • Nicolas fixed a few issues on the Preview popup in the Debugger (bug, bug)

Fluent

ESMification status

  • ESMified status:
    • browser: 89%
    • toolkit: 99%
    • Total:  96.48% (no change)
  • #esmification on Matrix

Lint, Docs and Workflow

Migration Improvements

  • Device migration
    • We sent out a spotlight message a few weeks ago letting users who have lots of local data (like bookmarks, history, and passwords) but no Mozilla account know that they can use a Mozilla account to keep an end-to-end encrypted copy of that data in the cloud. This targeted clients in the English, Italian, French and German locales. We’re going to be doing the rest of the locales later this month, and include folks on the 115 ESR branch as well.
    • mconley has a prototype component that can create periodic snapshots of SQLite databases, which could end up being the basis of a system for local profile backups at runtime.

New Tab Page

Performance

  • The off-main-thread Windows Jump List backend is currently enabled by default on Nightly, and the code to support it has ridden the trains to Beta 122. When that code reaches the Release channel after January 23rd, we plan to run an experiment to see if there’s a measurable improvement to input event response time with it enabled.

Screenshots

Search and Navigation

Below the fold

Mozilla ThunderbirdJanuary 2024 Community Office Hours: Context Menu Updates


UPDATE: Our January Office Hours was fantastic! Here’s the full video replay.

A New Year of New Office Hours

We’re back from our end of year break, breaking in our new calendars, and ready to start 2024 with our renewed, refreshed, and refocused community office hours. Thank you to everyone who joined us for our November session! If you missed out on our chat about the new Cards View and the Thunderbird design process, you can find the video (which also describes the new format) in this blog post.

We’re excited for another year of bringing you expert insights from the Thunderbird Team and our broader community. To kick off 2024, and to build on November’s excellent discussion, we’ll be continuing our dive into another important aspect of the Thunderbird design process.

January Office Hours Topic: Message Context Menu

Mock-up of the nested message context menu: an “Organize” submenu containing Tag, Archive, Move To, Copy To, and Convert To, with the Tag submenu open to show New Tag, Manage Tags, Remove All Tags, and color-coded tags (Important, Work, Personal, To Do, Later). Mock-up: designs shown are not final and subject to change.

We’ve been working on some significant (and what we think are pretty fantastic) UI changes to Thunderbird. Besides the new Cards View, we have some exciting overhauls to the Message Context Menu (aka the right-click menu) planned. UX Engineer Elizabeth Mitchell will discuss these changes, and most importantly, why we’re making them. Additionally, Elizabeth is one of the leaders on making Thunderbird accessible for all! We’re excited to hear how the new Message Context Menu will make your email experience easier and more effective.

If you’d like a sneak peek of the Context Menu plans, you can find them here.

And as always, if you have any questions you’d like to ask during the January office hours, you can e-mail them to officehours@thunderbird.net.

Join Us On Zoom

(Yes, we’re still on Zoom for now, but a Jitsi server for future office hours is in the works!)

When: January 25 at 18:00 UTC

Direct URL To Join: https://mozilla.zoom.us/j/92739888755
Meeting ID: 92739888755
Password: 365021

Dial by your location:

  • +1 646 518 9805 US (New York)
  • +1 669 219 2599 US (San Jose)
  • +1 647 558 0588 Canada
  • +33 1 7095 0103 France
  • +49 69 7104 9922 Germany
  • +44 330 088 5830 United Kingdom
  • Find your local number: https://mozilla.zoom.us/u/adkUNXc0FO

The call will be recorded and this post updated with a link to the recording afterwards.

Stay Informed About Future Thunderbird Releases and Events

Want to be notified about upcoming releases AND Community Office Hours? Subscribe to the Thunderbird Release and Events Calendar!

The post January 2024 Community Office Hours: Context Menu Updates appeared first on The Thunderbird Blog.

Mozilla Open Policy & Advocacy BlogPlatform Tilt: Documenting the Uneven Playing Field for an Independent Browser Like Firefox

Browsers are the principal gateway connecting people to the open Internet, acting as their agent and shaping their experience. The central role of browsers has long motivated us to build and improve Firefox in order to offer people an independent choice. However, this centrality also creates a strong incentive for dominant players to control the browser that people use. The right way to win users is to build a better product, but shortcuts can be irresistible — and there’s a long history of companies leveraging their control of devices and operating systems to tilt the playing field in favor of their own browser.

This tilt manifests in a variety of ways. For example: making it harder for a user to download and use a different browser, ignoring or resetting a user’s default browser preference, restricting capabilities to the first-party browser, or requiring the use of the first-party browser engine for third-party browsers.

For years, Mozilla has engaged in dialog with platform vendors in an effort to address these issues. With renewed public attention and an evolving regulatory environment, we think it’s time to publish these concerns using the same transparent process and tools we use to develop positions on emerging technical standards. So today we’re publishing a new issue tracker where we intend to document the ways in which platforms put Firefox at a disadvantage and engage with the vendors of those platforms to resolve them.

This tracker captures the issues we experience developing Firefox, but we believe in an even playing field for everyone, not just us. We encourage other browser vendors to publish their concerns in a similar fashion, and welcome the engagement and contributions of other non-browser groups interested in these issues. We’re particularly appreciative of the efforts of Open Web Advocacy in articulating the case for a level playing field and for documenting self-preferencing.

People deserve choice, and choice requires the existence of viable alternatives. Alternatives and competition are good for everyone, but they can only flourish if the playing field is fair. It’s not today, but it’s also not hard to fix if the platform vendors wish to do so.

We call on Apple, Google, and Microsoft to engage with us in this new forum to speedily resolve these concerns.

The post Platform Tilt: Documenting the Uneven Playing Field for an Independent Browser Like Firefox appeared first on Open Policy & Advocacy.

The Servo BlogTauri update: embedding prototype, offscreen rendering, multiple webviews, and more!

Back in November, we highlighted our ongoing efforts to make Servo more embeddable, and today we are a few steps closer!

Tauri is a framework for building desktop apps that combine a web frontend with a Rust backend, and work is already ongoing to expand it to mobile apps and other backend languages. But unlike, say, Electron or React Native, Tauri is both engine-agnostic and frontend-agnostic, allowing you to use any frontend tooling you like and whichever web engine makes the most sense for your users.

To integrate Servo with Tauri, we need to add support for Servo in WRY, the underlying webview library, and the developers of Tauri have created a proof of concept doing exactly that! While this is definitely not production-ready yet, you can play around with it by checking out the servo-wry-demo branch (permalink) and following the README.

While servoshell, our example browser, continues to be the “reference” for embedding Servo, this has its limitations in that servoshell’s needs are often simpler than those of a general-purpose embeddable webview. For example, the “minibrowser” UI needs the ability to reserve space at the top of the window, and hook the presenting of new frames to do extra drawing, but it doesn’t currently need multiple webviews.

This is where working with the Tauri team has been especially invaluable for Servo — they’ve used their experience integrating with other embeddable webviews to guide changes on the Servo side. Early changes include making it possible to position Servo webviews anywhere within a native window (@wusyong, #30088), and to give them translucent or transparent backgrounds (@wusyong, #30488).

Support for multiple webviews in one window is needed for parity with the other WRY backends. Servo currently has a fairly pervasive assumption that only one webview is active at a time. We’ve found almost all of the places where this assumption was made (@delan, #30648), and now we’re breaking those findings into changes that can actually be reviewed and landed (@delan, #30840, #30841, #30842).

Support for multiple windows sounds similar, but it’s a lot harder. Servo handles user input and drawing with a component known for historical reasons as the “compositor”. Since the constellation — the heart of Servo — is currently associated with exactly one compositor, and the compositor is currently tightly coupled with the event loop of exactly one window, supporting multiple windows will require some big architectural changes. @paulrouget’s extensive research and prior work on making Servo embeddable will prove especially helpful.

Offscreen rendering is critical for integrating Servo with apps containing non-Servo components. For example, you might have a native app that uses Servo for online help or an OAuth flow, or a game that uses Servo for purchases or social features. We can now draw Servo to an offscreen framebuffer and let the app decide how to present it (@delan, #30767), rather than assuming control of the whole window, and servoshell now uses this ability except when the minibrowser is disabled (--no-minibrowser).

Precompiling mozangle and mozjs would improve developer experience by reducing initial build times. We can now build the C++ parts of mozangle as a dynamic library (.so/.dylib/.dll) on Linux and macOS (@atbrakhi, mozangle#71), though more work is needed to distribute and make use of them.

We’re exploring two approaches to precompiling mozjs. The easier approach is to build the C++ parts as a static library (.a/.lib) and cache the generated Rust bindings (@wusyong, mozjs#439). Building a dynamic library (@atbrakhi, mozjs#432) will be more difficult, but it should reduce build times even further.

Many thanks to NLnet for sponsoring this work.

Adrian GaudebertThe State of Adrian 2023

The year 2023 is over, so as I've done for the past three years, it's time to take stock of what I did over the last twelve months. I'm starting this retrospective with the feeling of having done “nothing,” but that's because I focused almost exclusively on one single project, our game Dawnmaker. As you'll see, the year was actually quite busy for me. On with the review!

Main projects

Arpentor Studio

Our company, co-founded with Alexis, stagnated this year. We are still two people, although we were briefly three at two points during the year, with Aurélie doing sound design early in the year and then Agathe doing UX/UI design for two months. We have almost no income, we don't pay ourselves, and we've also cut operating expenses to the minimum in order to last as long as possible.

Mechanically, running the studio took less of my time this year. I had to put together a file requesting the balance of a grant from the Region, iterate several times on Dawnmaker's budget for negotiations (which unfortunately led nowhere) with a publisher, and handle the monthly administrative upkeep, which mostly means sending invoices to our accountant.

The main mistake I learned from, and corrected, in 2023 concerns our game's publishing strategy. Since early 2022, our roadmap had assumed the arrival of a publisher, a partner who would finance and publish Dawnmaker. I now believe that was a mistake, especially in the current context of the video game industry: publishers are going through a period of financial famine, driven by many factors: the 2021 financial bubble that followed COVID and the sharp rise in gaming habits; the many very big releases of 2023, also delayed by COVID, which cannibalized sales of independent games; and of course bank interest rates that have skyrocketed. As a result, in 2023 publishers are cautious and it has become very hard to sell them your game.

Basing a company's financial strategy on money from an external partner, over which we have no control, therefore seems like an enormous risk to me. Yet it's the strategy of the vast majority of video game studios today, for a very simple reason: producing a video game is very expensive! For our part, we are lucky enough to be able to work without a salary right now, thanks in particular to the RSA (the French minimum welfare income). That situation, however, is neither enviable nor sustainable in the medium term.

Faced with all this, I decided to change our strategy for Dawnmaker. We are no longer planning around finding a publisher. Our main plan, from now on, is to release the game ourselves (self-publishing), on a timeline that lets us both bring it to a quality acceptable for a commercial product and avoid sinking ourselves financially. We therefore have two deadlines: by early March we must have finished the game's vertical slice, a version that contains all of the game's systems but only part of its content. It is a very high-quality product, close to the expected final state, and therefore representative of what we want to make. We will then give ourselves about three months to look for a publisher again, while running a marketing campaign and adding a few improvements to the game based on feedback from our testers. If during May we haven't secured funding, we will release the game ourselves, most likely at the end of June. It will be a cut-down version of the game, far from the content we would like to have, but a functional, professional-quality version nonetheless. And of course, if a publisher commits to our game and finances it, we will switch to the secondary plan: run a real production phase, hire a few more people, and release a complete version of the game, probably in early 2025.

In conclusion, Arpentor Studio is moving forward, but it's hard. 2024 will be a decisive year for the studio, bringing either a publisher for our first game, or its release. Either way, it should bring some money into the company, which I'm looking forward to!

Dawnmaker

The Cities of Heksiga project has changed its name and is now called Dawnmaker! I spent most of my 2023 working on it, on three main fronts: programming, game design (designing the game's rules and content), and marketing.

Dawnmaker changed a lot this year. The game went from very (very) basic 2D rendering to 3D rendering at the beginning of the year, then back to 2D over the summer. The move to 3D had been planned for a long time, but it turned out to be a mistake. Almost every publisher we showed the game to pointed it out. The question that made us turn back was: “what value does 3D add to the game?” We had a very hard time answering it…

So we backtracked (well, not quite, since I took the opportunity to rewrite all the rendering with a new technology optimized for 2D). It served us well: the game looks much, much better now! It also runs better on my aging machine, which is a good sign for releasing it on mobile phones. I also improved our content editor so that Alexis can be as autonomous as possible when integrating building assets.

Here is a small timeline of the game's progress in 2023:

In January

In May

In November

Beyond the visuals, we added a lot of content (around forty new buildings, around twenty new cards), important mechanics (notably a roguelike-style progression loop), and many playability improvements (new interfaces, notably thanks to contributions from Menica Folden, drag-and-drop for playing cards, small animations all over the place…).

As announced in my 2022 retrospective, Dawnmaker made huge progress this year, going from a prototype to a real video game. There is still a lot left to sort out, though: the progression loop still isn't functional, there is no onboarding for new players, and a big chunk of the game's interface still needs rework… And all of that, ideally, by March 2024! Suffice to say: it's going to be tight. But the game's release is getting closer, and that feels great! Maybe you'll be able to buy Dawnmaker in 2024?

Side projects

Souls

As last year, I barely had a chance to touch Souls, my old competitive card game project. But only “barely,” because yes, I did take it out of its box and played a game of it. It was a chance to remember where I had left off, and above all to rediscover the flaws of the current version. I'm still not actively working on it, but I have good hope of getting back to it a little in 2024.

Blog

At the beginning of the year, I set myself the goal of publishing six articles on this blog, one every two months. The goal was almost reached: I published five.

Most of those articles are in English, because they also served as content for the Arpentor Studio newsletter, launched this year.

I had a bit of trouble getting into regular writing, but I built myself a system during the year (basically, a reminder every two months), and since then I've stuck to it properly! I have good hope of keeping this pace in 2024, and of continuing to share my experiences with you.

Other games

In 2023 I finally joined a local game designers' association, the Compagnie des zAuteurs Lyonnais (CAL). It's an informal group of board game designers that meets regularly in Lyon's board game bars. It was the opportunity for me to finally enter that world, to try out some very nice prototypes, and above all to show my own. Because yes, I did keep working, episodically, on game prototypes.

The first one goes by the code name “Little Brass Imhotep,” because it is designed to sit at the crossroads of the play experiences of Little Town, Brass: Birmingham and Imhotep: The Duel. The central concept is this: there is a 5-by-5 board on which the players construct buildings. Those buildings can be activated to produce resources or victory points. Players have workers, which they place at the end of a row or column, thereby activating all the buildings in that row or column. Constructing a building scores victory points and creates or improves an engine, but it also gives the opponent opportunities to exploit it.
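The worker-placement rule above can be sketched in a few lines of Python. This is a minimal illustration of the mechanic as described, not actual code from the prototype; all names and the building attributes are hypothetical:

```python
# Minimal sketch of the "Little Brass Imhotep" activation rule: placing a
# worker at the end of a row or column activates every building in that line.
# Building effects are simplified to (resources, points); names are illustrative.

SIZE = 5  # the board is 5x5


def make_board():
    """An empty 5x5 board; each cell holds a building (a dict) or None."""
    return [[None for _ in range(SIZE)] for _ in range(SIZE)]


def place_building(board, row, col, resources=0, points=0):
    board[row][col] = {"resources": resources, "points": points}


def activate_line(board, kind, index):
    """Activate all buildings in a row (kind='row') or column (kind='col').

    Returns the total resources and victory points produced."""
    if kind == "row":
        cells = board[index]
    else:
        cells = [board[r][index] for r in range(SIZE)]
    total = {"resources": 0, "points": 0}
    for cell in cells:
        if cell is not None:
            total["resources"] += cell["resources"]
            total["points"] += cell["points"]
    return total
```

Placing a worker is then just a call like `activate_line(board, "col", 2)`; note that the line activates buildings built by either player, which is exactly the tension the prototype is after.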

The first playtests revealed many gaps in the game system, in particular too much symmetry between resources and effects, which makes the main mechanic, constructing buildings, unappealing. The prototype has stayed there for now.

My second prototype of the year was born from the desire to cross the experience of a Magic draft (probably my favorite gaming experience) with the very short feedback loop of an autochess. The first version of the game, code name “Cube Light” (yes, I know, I'm bad at names), goes like this: at a table of 8 players, each player receives a deck of 4 cards (the same for everyone), then starts drafting packs of 4 cards. Next, each player builds a 7-card deck, discarding one of their cards. Then a 1-on-1 match is played: each player draws three cards, then simultaneously distributes those 3 cards, face down, across three locations laid out in the middle of the table. Once the cards are placed, they are revealed one at a time. Of course, each card has varied effects, as do the locations, so you have to place your cards in the right spots and anticipate the cards you will draw next in order to create powerful combinations. At the end of the second turn the round is over, and you tally the cumulative power of the characters played at each location. A player with strictly more power than their opponent at a location controls it, and the player who controls the most locations wins the round. Then a new draft phase begins, with the players having changed seats. You build a 10-card deck and play a round of three turns. This repeats over 4 rounds, and at the end of the last round the player with the most victory points wins the game!
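The round-scoring rule above (strictly more power controls a location, most locations wins the round) can be sketched like this. Again a minimal illustration, not actual game code; in particular, how a drawn round is resolved is my assumption, not something the prototype specifies:

```python
# Minimal sketch of the "Cube Light" round-scoring rule described above.
# a_power and b_power hold each player's cumulative power at the three
# locations. Returning "tie" for a drawn round is an assumption.

def score_round(a_power, b_power):
    """Return 'A', 'B', or 'tie' for a round over three locations."""
    # A location is controlled only with *strictly* more power than the opponent.
    a_locations = sum(1 for a, b in zip(a_power, b_power) if a > b)
    b_locations = sum(1 for a, b in zip(a_power, b_power) if b > a)
    if a_locations > b_locations:
        return "A"
    if b_locations > a_locations:
        return "B"
    return "tie"
```

For example, `score_round([5, 5, 0], [1, 1, 9])` returns `"A"`: player A controls the first two locations, player B only the third. Equal power at a location means nobody controls it.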

While building this prototype, I realized that it comes extremely close to Challengers, an excellent game released in 2022 whose pitch is quite close to mine: reproduce the autochess experience as a board game. My goal, however, is an experience closer to Magic's, that is, with more strategic decisions, both while choosing cards (the draft phase) and during the rounds.

The first playtest surfaced many areas for improvement, but the core of the game works well and makes a solid foundation. I hope to find time this year to pick the prototype back up and turn it into a fun game, at least for my group of Magic players.

My recommendations of the year

And that's it for the review of my work in 2023! Time to end this retrospective with a more fun part. Once again this year, I'd like to share the few cultural discoveries I enjoyed most over the last twelve months.

My video game of the year

2023 was a poor year for video games for me. Maybe spending my days working on a game keeps me from fully enjoying the others? Maybe it's because I spent a lot of my play time studying games related to Dawnmaker? Or is it simply circumstance, with no game really grabbing me, or leaving a mark, this year?

In any case, the best game I played this year is Baldur's Gate 3. I'm a huge fan of the first two titles, on which I spent an enormous amount of time as a teenager. I approached the third installment with a lot of apprehension, but it didn't disappoint. The game really feels like a pure Baldur's Gate, but a modern one. Some characters are very endearing, the story is gripping, and the content is gigantic. That is almost my only real complaint, in fact: I don't like missing out on things in a game, so I spent too much time searching everything. And I know I still missed plenty, because that's how the game is designed.

In short: Baldur's Gate 3 deserves its Game of the Year title.

My board games of the year

Too hard to pick just one game this year, so here are two: Spirit Island and Brass: Birmingham! Two heavy games that require a lot of thinking, one cooperative and the other competitive.

Spirit Island, the cooperative one, puts you in the role of the protective spirits of an island being invaded by colonists. Each spirit has different gameplay, special abilities, and a unique set of starting cards. Solo or with your allies, you must develop your resources (gain more energy to play your cards, acquire new, more powerful cards…) and use your cards to destroy the invaders, to stop them from building towns and cities, and to keep them from spreading blight across your lush island. It really is an excellent game, where every turn is a big collaborative puzzle with interactions between the players' abilities. Bonus: its complexity strongly limits the “alpha player” effect, where one player directs everyone else.

Brass: Birmingham, conversely, is a competitive game set in Industrial Revolution England. A pure management game, it has you building structures (coal mines, foundries, factories, mills…) to develop your resources and score victory points. You also build canals and railways, sell resources, and adapt to the cards in your hand to position yourself on the map. There is a big planning aspect, counterbalanced by the importance of being opportunistic at times. It isn't the number 1 game on BoardGameGeek for nothing!

My comic of the year

The Nice House on the Lake, volumes 1 and 2, take the prize for best comic of 2023! It's an American science fiction comic, a closed-room story that starts very simply and turns, very quickly, into something unsettling. There are interludes showing a dramatic future, a very enigmatic character at the center of the plot, and stakes that build up gradually toward an opening, at the end of volume 2, that really makes you want to read the sequel! Hard to say more, since all the pleasure of reading lies in discovering the plot, but it gets a big recommendation from me.

My book of the year

Incredibly, in 2023 my favorite book was not fiction but a productivity book: How to Take Smart Notes. The author presents a note-taking method created by the sociologist Niklas Luhmann. The method is simple, but it requires a certain discipline to reach its full potential. In short: take temporary notes constantly, then regularly turn them into “permanent” notes, self-sufficient, written out, and above all systematically linked to other notes. The idea is to build up a base of notes that you reread regularly, following links and above all creating new ones whenever it is relevant. It is both a way to learn better, by forcing yourself to write down what you learn and the ideas you develop, and a way to structure your thinking and articulate your ideas, in order to transform them into novel, impactful tools.

Conclusions on 2023

Well, what a strange year, as expected. Working that long on a single project, or almost a single one, is exhausting. Fortunately, we were able to show concrete things over the year, notably thanks to our Discord server and the newsletter I launched. But at review time, the impression that nothing moved forward is really strong, even though it is completely false. At the end of 2022 I said I still had a lot of energy; now, I have to admit that is less true. I'm counting on 2024 to shake all that up a bit and give me a boost!

With that, I warmly thank you for reading, I wish you a very happy 2024, and I'll see you soon on this blog for a big announcement about Dawnmaker!

Adrian GaudebertThe State of Adrian 2022

It is (belatedly) time to take stock of my year 2022! As you're about to read, the year was a busy one, which explains why I'm a bit late writing this post… But to make up for it, I've put a few cultural recommendations at the end!

So here is a summary of what I did in 2022…

Main projects

Arpentor Studio

My main project in 2022 was obviously the video game studio we created with Alexis. I told most of the story in my post Starting a Games Studio, but I'd like to come back here to other aspects of that adventure, in particular some mistakes we made.

At the beginning of the year, we joined the Let's GO incubator, run by the regional association Game Only. Applying was an excellent decision; the program brought us an enormous amount of knowledge, contacts, and opportunities, and some good fun too! But it led us to make a fundamental mistake: we let ourselves be carried along by the knowledge we were being given, without asking ourselves whether it was really relevant to act on it at that moment.

Concretely, we changed our initial plan. We had wanted to focus on creating a game relatively quickly, within a year to a year and a half. Swept along by the training sessions, notably those on funding, we revised that plan to make it bigger: involve more people, spend more money so we could ask for more, and so on. This change of strategy had several consequences:

  1. We spent an enormous amount of time on funding applications, pitch decks and other fundraising documents, and not enough actually working on our game. As a result, we fell far behind on its production. Yet without a reasonably polished game, without a real demo showing our know-how, there was no hope of signing a contract with a publisher, without which we couldn't finish our game anyway.
  2. We counted on the arrival of funding that turned out not to be as easy to obtain as expected. We started paying Alexis and myself, we hired an employee, we committed to travel expenses for trade shows… Not getting the main public funding we were counting on put us in a situation that could have become critical: bankruptcy. Luckily for us, we managed to correct course early enough. Unfortunately, that meant parting ways with our employee, stopping our own salaries, and cutting our future expenses.
  3. We let our game grow, adding feature after feature, until it reached a point where I estimated we would have needed a team of more than 10 people for a year and a half to finish it. There too, we managed to cut the game's scope back to something much more reasonable for us, without compromising (too much) the vision we had.

This year was therefore a draining rollercoaster for me: we spent part of the year dreaming of a big production, staggering amounts of funding, and a very ambitious game. Then the cinder block of reality crashed down on the strawberry tartlet of our illusions, and we had to come back to more reasonable things, make hard decisions, and hurt people.

Despite all of that, or thanks to all of that, I learned an enormous amount in 2022: about game production, company strategy, recruiting, relations with publishers… The timing wasn't always right for learning those things, but I know we'll remember them when the time comes, and it won't have been for nothing. The essential thing, as a great man recently told me, is not to stop making mistakes: it is to always make new ones.

Si je devais recommencer demain, je ferais en sorte de garder ce plan de commencer petit, et de grossir tout doucement. Commencer par faire quasiment des jeux de Jams, en quelques jours seulement, puis faire un jeu en un mois, puis en deux, puis en quatre, etc. L'idée étant de monter en compétence doucement mais sûrement, sur toute la chaîne de production d'un jeu vidéo, et de se faire connaître en sortant régulièrement du contenu. C'est un modèle qui a bien fonctionné pour d'autres studios, et qui me semble vraiment sain pour quelqu'un comme moi qui n'a pas 10 ans d'expérience dans l'industrie. C'est aussi, je crois, une bonne manière de créer une entreprise financièrement stable dans ce milieu difficile.

Pour conclure, Arpentor Studio va bien. En fin d'année, nous avons fait en sorte de bien redresser la barre, et nous nous dirigeons actuellement vers un cap qui nous semble plus cohérent, plus sûr. On ne sortira probablement pas de jeu en 2023, mais progressera énormément dessus, on fera grossir l'équipe, et on mettra en place tout ce qu'il faut pour sortir le meilleur jeu possible en 2024.

État : en cours.

Cities of Heksiga

A video game studio naturally implies a video game. It's not really a secret (even if I've said little about it): for a bit over a year now we've been working on a game we currently call Cities of Heksiga. It's a single-player strategy game, for PC and mobile, set in a steampunk fantasy universe. It's something like a digital board game, à la Terraforming Mars for example, mixing deck building (improving a deck of cards over the course of a game by acquiring ever stronger or more synergistic cards) with tile placement on a board. I won't tell you more for now because we still have a lot of things to stabilize, but it will come soon enough. Know that we're currently aiming for a release in the first half of 2024.

On this game I'm responsible for the programming (the game is built with Web technologies, in TypeScript, with an interface that uses Svelte) as well as the game design, that is, designing the game's mechanics. Alexis, for his part, is responsible for the art direction, the creation of all the graphical assets, and the game's narrative. We're also joined by Aurélie, who creates the music and all the sound effects that enrich the experience.

In 2022 I worked on several prototypes of the game (I count at least a dozen according to our documentation), iterating each time on the game's core mechanics to find a formula that works. I made a few paper prototypes, but I quickly moved to digital versions, because our mechanics involved a whole set of calculations and automatic actions that are hard to perform by hand.

Screenshot of the Cities of Heksiga prototype as of January 12, 2023

The Cities of Heksiga prototype as of January 12, 2023

I also worked on tooling, notably a content management tool for the game: I have a very simple interface that lets me quickly create a new building, or update an existing one, then export everything in a single click. Using Web technologies makes me very efficient here, and I have good hopes of setting up a finely tuned game design workflow within a few months.

At the end of 2022 we are finally, though with difficulty, wrapping up our prototyping phase. That is, we have consolidated the game's core mechanics, we have validated them (well, not quite, but it's in progress and I'm confident), and we can now move on to the next step: building a real demo that rocks, and slowly fleshing out the game with new mechanics and content.

As I said in the previous section, we spent too little time working on this game this year. But that had one upside: we had time to get it playtested, to gather calm and well-constructed feedback on the strengths and weaknesses of our various prototypes. In the end we were able to identify fundamental problems and fix them, which would have been harder if we had kept our heads down in the handlebars. A blessing in disguise!

In 2023, Cities of Heksiga should really take shape, going from a prototype to a proper demo, then to a vertical slice, a version representative of what we want the final game to be. We currently plan to release the game in the first half of 2024.

Status: in progress.

Side projects

Souls

Souls, my competitive online card game, took a long break in 2022. In the middle of everything else, I simply didn't have time to get back to it. But all my work on the side aims at building skills and creating a context in which it will be possible to make Souls a success. So in a way, it's still moving forward!

Status: on hold.

Board Game Jam 2

Here's my big side project of the past few months: organizing a board game creation jam. It's an idea my friend Aurélien and I had had for a veeeery long time, which finally materialized in early 2020 through the Game Dev Party association… only to get cut off right in the middle by the announcement of the first lockdown. So I'm very happy to have finally carried a real Board Game Jam through to the end!

But what is this thing, you ask? A jam is a creation event, originally for video games, in teams, usually over a weekend. You gather about fifty people in the same physical place, they split into groups and spend their weekend creating a video game from scratch. In Lyon, the Game Dev Party association has made organizing these events its specialty since 2011, and I've been an organizing member since 2012. A Board Game Jam is the same principle, but for board games.

The table of materials made available to participants

The event took place in mid-January and turned out to be a resounding success: around 40 participants and 9 games created over the weekend. The weekend went off without a major hitch (let's forget the few technical mishaps on Sunday evening), people seemed happy, and the games produced were incredibly engaging and varied.

I'm particularly delighted with this format. Working on a video game involves a real technical challenge: you have to program, illustrate, do the sound… The iteration time is relatively long between the moment you have an idea and the moment you can actually test it, keyboard, mouse or controller in hand. With board games, that iteration time shrinks dramatically. A new card idea? A scrap of paper, a pencil, and presto, the card is created and ready to be tested.

Carrying this event was exhausting, but I'm proud of what we achieved, and I'm strongly counting on other people to organize more events of this kind. Because it's still super frustrating to watch all these people create games without taking part yourself!!!

Status: done.

Blog

So I'm leaving a note for my future self to be careful to stay open: it's draining to keep moving forward without anything concrete "shipping", without the satisfaction of having finished something. So, Adrian of 2022: don't forget to talk about what you're doing, to show your progress, even if it's ugly, even if it barely works, because it will make you feel like you're progressing, and it will help you a lot!

Missed! I only published two articles in 2022: How I did my market research on Steam [en] in March, then Starting a Games Studio [en] in August. The latter was a huge undertaking, spread over several months, but that's still far from enough for me. Fortunately, I did share my work, just elsewhere: on a Discord server we use for our game's playtests, and within the Let's GO incubator. I didn't feel the need to write more, even though it remains a goal I'd like to meet one day. I learned a lot from people who shared their experiences before me, and I want to pay that service forward. That's the spirit in which I wrote those two posts, but I think I can do more.

Alright, goal for 2023: 6 posts over the year, one every two months!

My recommendations of the year

To wrap up this post, I want to try something new: recommending a few cultural works that left a mark on me this year.

My video game of the year

Without a doubt, Planet Crafter was my game of 2022. It mixes survival, exploration and base building on an uninhabitable planet, and your goal is to terraform it. The game is in early access, but it already has an enormous amount of content, and every update has been a clear improvement. I was blown away discovering certain places, I spent hours building myself a beautiful base, the progression is masterfully paced, there is always something to do; in short: I recommend you play Planet Crafter!

PS: I learned from the January issue of CanardPC that the creators of Planet Crafter are a couple from Toulouse. They made this game as a duo. Very impressive. :-)

My board game of the year

I was won over by Terraforming Mars: Ares Expedition. This blend of the cards from the marvelous original Terraforming Mars with the shared-action mechanic of Race for the Galaxy completely hit the mark for me. It's everything I love: pure engine building, with planning, a touch of bluffing, and just the right amount of resources. It's accessible, and plays (relatively) fast, between 1h and 1h30.

My comic of the year

I award comic of the year to Bolchoï Arena, by Boulet and Aseyn. Volume 1 dates from 2018, but I only discovered the series in 2022 with the release of volume 3, out of a planned 5 books. In this science fiction story, we follow the wanderings of a young woman inside the Bolchoï, a gigantic online virtual world that reproduces the known universe down to the last detail. Until, of course, crazy things start happening that raise a ton of questions. There's adventure, exploration, geopolitics, existential questions about our relationship to virtual worlds, and much more that I can't mention without spoiling. I can't wait to read what comes next; the first three volumes are excellent!

My book of the year

Andy Weir, author of the SF novel The Martian, adapted for the screen in the eponymous film with Matt Damon (a very good adaptation, by the way), has released two other books: Artemis and Project Hail Mary. While Artemis is a very pleasant read, Project Hail Mary was a monumental slap in the face. The delightfully cynical main character, the flashback narration that steadily builds up understanding and stakes, and an incredible twist in the middle of the book that completely changes the game: I loved this book, and I can only recommend it to everyone; it's a marvel.

Conclusions on the year 2022

2022 was an even more grueling year than I had expected. But I learned an enormous amount, about a great many things. I was by turns a programmer, game designer, producer, entrepreneur, recruiter, organizer… That's a lot for one man, and it's exhausting, but I have no regrets! Through it all, I still genuinely managed to take care of myself, to not overload myself with work, to take (long) vacations, and that's a very good thing. I'm not burned out, I still have plenty of energy for 2023, and I'm confident about the future.

Happy new year 2023 to all of you, dear readers, and thank you from the bottom of my heart for following me on these adventures!

Mozilla Localization (L10N)Advancing Mozilla’s mission through our work on localization standards

After the previous post highlighting what the Mozilla community and Localization Team achieved in 2023, it’s time to dive deeper into the work the team does on localization technologies and standards.

A significant part of our work on localization at Mozilla happens within the space of Internet standards. We take seriously our commitments that stem from the Mozilla Manifesto:

We are committed to an internet that includes all the peoples of the earth — where a person’s demographic characteristics do not determine their online access, opportunities, or quality of experience.

To us, this means that it’s not enough to strive to improve the localization of our products, but that we need to improve the localizability of the Internet as a whole. We need to take the lessons we are learning from our work on Firefox, Thunderbird, websites, and all our other projects, and make them available to everyone, everywhere.

That’s a pretty lofty goal we’ve set ourselves, but to be fair it’s not just about altruism. With our work on Fluent and DOM Localization, we’re in a position where it would be far too easy to rest on our laurels, and to consider what we have “good enough”. To keep going forward and to keep improving the experiences of our developers and localizers, we need input from the outside that questions our premises and challenges us. One way for us to do that is to work on Internet standards, presenting our case to other experts in the field.

In 2023, a large part of our work on localization standards was focused on Unicode MessageFormat 2 (aka “MF2”), an upcoming message formatting specification, as well as other specifications building on top of it. Work on this has been ongoing since late 2019, and Mozilla has been one of the core participants from the start. The base MF2 spec is now slated for an initial “technology preview” release as part of the Unicode CLDR release in spring 2024.

Compared to Fluent, MF2 corresponds to the syntax and formatting of a single message pattern. Separately, we’ve also been working on the syntax and representation of a resource format for messages (corresponding to Fluent’s FTL files), as well as championing JavaScript language proposals for formatting messages and parsing resources. Work on standardizing DOM localization (as in, being able to use just HTML to localize a website) is also getting started in W3C/WHATWG, but its development is contingent on all the preceding specifications reaching a more stable stage.

So, besides the long-term goal of improving localization everywhere, what are the practical results of these efforts? The nature of this work is exploratory, so its results have not been, and will not be, completely predictable. One tangible benefit that we’ve already been able to identify and deploy is a reconsideration of how Fluent messages with internal selectors — like plurals — are presented to localizers: rather than showing a message in pieces, we’ve adopted the MF2 approach of presenting a message with its selectors (possibly more than one) applying to the whole message. This duplicates some parts of the message, but it also makes it easier to read and to translate via machine translation, as well as ensuring that it is internally consistent across all languages.
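To illustrate the difference, here is a rough sketch of the same plural message, first split into Fluent variant pieces and then written in the whole-message selector style of MF2 (the MF2 syntax below follows the draft specification at the time of writing and may still change; the message and variable names are purely illustrative):

```
# Fluent: the selector wraps only the variant pieces
emails = { $count ->
    [one] You have one email.
   *[other] You have { $count } emails.
}

# MF2 draft: the selector applies to the whole message,
# so each variant repeats the full pattern
.match {$count :number}
one {{You have one email.}}
*   {{You have {$count} emails.}}
```

The duplication across variants is exactly what lets each translation be read, and machine-translated, as a complete sentence on its own.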

Another byproduct of this work is MF2’s message data model: Unlike anything before it, it is capable of representing all messages in all languages in all formats. We are currently refactoring our tools and internal systems around this data model, allowing us to deduplicate file format-specific tooling, making it easier to add new features and support new syntaxes. In Pontoon, this approach already made it easier to introduce syntax highlighting and improve the editing experience for right-to-left scripts. To hear more, you can join us at FOSDEM next month, where we’ll be presenting on this in more detail!

At Mozilla, we do not presume to have all the answers, or to always be right. Instead, we try to share what we have, and to learn from others. With many points of view, we gain greater insights – and we help make the world a better place for all peoples of all demographic characteristics.

Mozilla Localization (L10N)Mozilla Localization in 2023

A Year in Data

The Mozilla localization community had a busy and productive 2023. Let’s look at some numbers that defined our year:

  • 32 projects and 258 locales set up in Pontoon
  • 3,685 new user registrations
  • 1,254 active users, submitting at least one translation (on average 235 users per month)
  • 432,228 submitted translations
  • 371,644 approved translations
  • 23,866 new strings to translate

Slide summarizing the activity in Pontoon over 2023, featuring the Mozilla Localization team logo (a red and black lion head) and a cartoonish lion cub holding a thank-you sign.

Thank you to all the volunteers who contributed to Mozilla’s localization efforts over the last 12 months!

In case you’re curious about the lion theme: localization is often referred to as l10n, a numeronym which looks like the word lion. That’s why our team’s logo is a lion head, stylized as the original Mozilla logo by artist Shepard Fairey.

Pontoon Development

A core area of focus in 2023 was pretranslation. From the start, our goal with this feature was to support the community by making it easier to leverage existing translations and provide a way to bootstrap translation of new content.

When pretranslation is enabled, any new string added in Pontoon will be pretranslated using a 100% match from translation memory or — if no match exists — we’ll leverage the Google AutoML Translation engine, with a model custom-trained on the locale’s existing translation memory. Translations are stored in Pontoon with a special “pretranslated” status so that localizers can easily find and review them. Pretranslated strings are also saved to repositories (e.g. GitHub), and eventually ship in the product.
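The fallback logic described above can be sketched roughly like this (a simplified illustration, not Pontoon’s actual code; the function and the stand-in backends are hypothetical):

```python
from typing import Callable, Optional

def pretranslate(
    source: str,
    tm_lookup: Callable[[str], Optional[str]],
    machine_translate: Callable[[str], str],
) -> tuple[str, str]:
    """Return (translation, status) for a newly added string.

    tm_lookup returns a translation only on a 100% translation
    memory match; otherwise we fall back to machine translation.
    Either way the result is flagged "pretranslated" so that
    localizers can find and review it.
    """
    tm_match = tm_lookup(source)
    if tm_match is not None:
        return tm_match, "pretranslated"
    # No perfect match: use the locale's custom-trained MT engine.
    return machine_translate(source), "pretranslated"

# Tiny usage example with stand-in backends:
tm = {"Save": "Enregistrer"}.get      # acts as a 100% TM lookup
mt = lambda s: f"[MT:{s}]"            # stand-in for the MT engine
print(pretranslate("Save", tm, mt))   # TM hit
print(pretranslate("Cancel", tm, mt)) # MT fallback
```

The key design point is that both paths produce the same “pretranslated” status, so review tooling doesn’t need to care which backend produced the suggestion.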

You can find more details on how we approached testing and involved the community in this blog post from July. Over the course of 2023 we pretranslated 14,033 strings for 16 locales across 15 projects.

Towards the end of the year, we also worked on two features that have been long requested by users: 1) it’s now possible to use Pontoon with a light theme; and 2) we improved the translation experience on mobile, with the original 3-column layout adapting to smaller screen sizes.

Screenshot of Pontoon’s UI with the light theme selected.

Screenshot of Pontoon UI on a smartphone running Firefox for Android

Listening to user feedback remains our priority: in case you missed it, we have just published the results of a new survey, where we asked localizers which features they would like to see implemented in Pontoon. We look forward to implementing some of your fantastic ideas in 2024!

Community

Community is at the core of Mozilla’s localization model, so it’s crucial to identify sustainability issues as early as possible. Relying only on completion levels, or on how quickly a locale can respond to urgent localization requests, is not sufficient to really understand the health of a community. Indeed, an extremely dedicated volunteer can mask deeper problems, and these issues only become visible — and urgent — when such a person leaves a project, potentially without a clear succession plan.

To prevent these situations, we’ve been researching ways to measure the health of each locale by analyzing multiple data points — for example, the number of new sign-ups actively contributing to localization and getting reviews from translators and managers — and we’ve started reaching out to specific communities to trial interventions. With the help of existing locale managers, this resulted in several promotions to translator (Arabic, Czech, German) or even manager (Czech, Russian, Simplified Chinese).

During these conversations with various local communities, we heard loud and clear how important in-person meetings are to understanding what Mozilla is working on, and how interacting with other volunteers and building personal connections is extremely valuable. Over the past few years, some unique external factors — COVID and an economic recession chief among them — made the organization of large scale events challenging. We investigated the feasibility of small-scale, local events organized directly by community members, but this initiative wasn’t successful since it required a significant investment of time and energy by localizers on top of the work they were already doing to support Mozilla with product localization.

To counterbalance the lack of in-person events and keep volunteers in the loop, we organized two virtual fireside chats for localizers in May and November (links to recordings).

What’s coming in 2024

In order to strengthen our connection with existing and potential volunteers, we’re planning to organize regular online events this year. We intend to experiment with different formats and audiences for these events, while also improving our presence on social networks (did you know we’re on Mastodon?). Keep an eye out on this blog and Matrix for more information in the coming months.

As many of you have asked in the past, we also want to integrate email functionalities in Pontoon; users should be able to opt in to receive specific communications via email on top of in-app notifications. We also plan to experiment with automated emails to re-engage inactive users with elevated permissions (translators, managers).

It’s clear that a community can only be sustainable if there are active managers and translators to support new contributors. On one side, we will work to create onboarding material for new volunteers so that existing managers and translators can focus on the linguistic aspects. On the other, we’ll engage the community to discuss a refined set of policies that foster a more inclusive and transparent environment. For example, what should the process be when a locale doesn’t have a manager or active translator, yet there are contributors not receiving reviews? How long should an account retain elevated permissions if it’s apparently gone silent? What are the criteria for promotions to translator or manager roles?

For both initiatives, we will reach out to the community for feedback in the coming months.

As for Pontoon, you can expect some changes under the hood to improve performance and overall reliability, but also new user-facing features (e.g. fine-grained search, better translation memory management).

Thank you!

We want to thank all the volunteers who have dedicated their time and skills to localizing Mozilla products. Your tireless efforts are essential in advancing the Mozilla mission of fostering an open and accessible internet for everyone.

Looking ahead, we are excited about the opportunities that 2024 brings. We look forward to working alongside our community to expand the impact of localization and continue breaking down language barriers. Your support is invaluable, and together, we will continue shaping a more inclusive digital world. Thank you for being an integral part of this journey.

Mozilla Open Policy & Advocacy BlogMozilla Weighs in on State Comprehensive Privacy Proposals

[Read our letters to legislators in Massachusetts and Maine.]

Today, Mozilla is calling for the passage of strong state privacy protections, such as those modeled on the American Data Privacy and Protection Act at the federal level. This action came in the form of letters to relevant committee leadership in the Massachusetts and Maine legislatures, encouraging them to consider and pass the proposals that have been introduced in their respective states.

At Mozilla, we believe that individuals’ security and privacy on the internet are fundamental and must not be treated as optional. In the best of worlds, this “privacy for all” mindset would mean a law at the federal level that protects all Americans from abuse and misuse of their data, which is why we have advocated for decisive action to pass a comprehensive Federal privacy law.

Recently, however, even more states have been considering enacting privacy protections. If crafted incorrectly, these protections could create a false facade of privacy for users and risk enshrining harmful data practices in the marketplace. If crafted correctly, they could provide vital privacy protections and drive further conversation about federal legislation.

The proposals we weighed in on today meet the Mozilla standard for privacy because they: require data minimization; create strong security requirements; prohibit deceptive design that impairs individual autonomy; prohibit algorithmic discrimination; and more.

Mozilla has previously supported legislative and regulatory action in California, and we hope to see more state legislatures introduce and pass strong privacy legislation.

The post Mozilla Weighs in on State Comprehensive Privacy Proposals appeared first on Open Policy & Advocacy.

Firefox Developer Experience[Reverted] Fixing keyboard navigation in Inspector Rules view

2024-02-03 Update

Given the feedback we received on this blog post and in other channels, we’re reverting this change, and the Enter key will work the way it did previously. The fix is already in Firefox Beta/Developer Edition 123.0b6, and will be in Firefox 122.0.1, which should be released 2024-02-06.
If you liked the “new” behavior we were trying to introduce, you can enable it by navigating to about:config and setting devtools.inspector.rule-view.focusNextOnEnter to false. We also plan to expose this option in the settings UI (#1878490).

The new focus indicator style we introduced also revealed a couple of issues that we’ll tackle in upcoming releases:

  • The closing bracket is focusable, and hitting Enter or Space while it has the focus will add a new property to the rule. This is definitely not self-explanatory, so we’ll try to make it better (#1876676). Note that you can still click anywhere in the rule where there is no item to add a new property, and we’ll keep it that way.
  • It’s hard to tell when an element is focused whether it’s being edited or not (#1876674). This one is a bit trickier, as we want to limit layout shift when toggling edit mode, and we want a consistent focus indicator. We’ll experiment various solutions to find what feels right.

Finally, I wanted to emphasize that we do want to hear (hopefully constructive) feedback from you, web developers, so we can make better choices to support you. You can do that on Mastodon, Twitter, Discourse, Element and of course, on Bugzilla, our bug tracker (you can connect with a Github account). We’re a very small team, we definitely don’t know everything and we can’t test all the new libraries, frameworks and workflows that are created. So we really rely on your feedback and bug reports to make Firefox Developer Tools better, faster and more solid.


Original Article

Starting with Firefox 122, when editing a selector, a property name or a property value in the Inspector, the Enter key will no longer move the focus to the next input, but will validate what was entered and focus the matching element (#1861674). You can still use Ctrl + Enter (Cmd + Enter on macOS) or Tab to validate and move the focus to the next input.

Firefox DevTools Inspector panel. The Rules view is highlighted and shows a couple of rules. One of them has the property `background-color: #000000`. The `#000000` element has a focus indicator around it, a thick blue border.

The Rules view after the background-color value was modified and validated with the Enter key. The value element is now focused (hence the focus indicator). Previously, this would have enabled edit mode on the color property.

Why?

When you click on a selector, a property name or a property value, a text input appears to modify the underlying value. Previously, when the user hit Enter, we advanced the editor to the next editable property, which was also directly turned into a text input. This behavior seems to date back to the Firebug days, and every browser’s Developer Tools implemented it, as it allowed users to quickly edit multiple properties in a rule without leaving the keyboard.

In 2023 the Accessibility team at Mozilla ran an audit on DevTools and created a list of issues that needed to be fixed. One of the areas we focused on was the Inspector, and especially keyboard navigation in the Rules view. As we fixed those issues and made keyboard navigation better, it struck us that it was unnecessarily hard to exit “edit” mode with the keyboard only; the only way to do this was with the Esc key, but that also reverts any changes that were made in the text input! What I ended up doing most of the time was validating with Enter, which moves the focus to the next input, then hitting Esc to opt out of the edit mode.
This extra step (and the unnecessary CPU cycles that go with it) doesn’t seem justified when we already have other keyboard shortcuts that can validate the input and move to the next one: Tab, which already existed and works across all browsers, and Ctrl (Cmd on macOS) + Enter, which we added based on user feedback (#1873416).

On top of that, the old behavior could be confusing for non-sighted users. On the web, you navigate through the inputs of a form with the Tab key, and Enter should validate the form. The change we made brings the Rules view behavior closer to regular forms, which should be more comfortable for non-sighted users, as well as for people with no prior experience of the tool.
For those who’ve been using it for years or even decades (and all the DevTools team members fall into that category), we know this is going to take a bit of getting used to. We did fix some of the issues we saw in Tab and “edit mode” navigation, so if you hit Enter but wanted the focus to move to the next input, you should be able to hit Tab and then Enter to activate edit mode on the field you wanted to modify.

Again, we know this could be frustrating in the beginning, but, for us, the advantages this brings to the table make it worthwhile, and we hope it will for you too.

Eitan IsaacsonIntroducing Spiel

A New Speech API and Framework

Spiel Logo

I wrote the beginning of what I hope will be an appealing speech API for desktop Linux and beyond. It consists of two parts, a speech provider interface specification and a client library. My hope is that the simplicity of the design and its leverage of existing free desktop technologies will make adoption of this API easy.

Of course, Linux already has a speech framework in the form of Speech Dispatcher. I believe there have been a handful of technologies and recent developments in the free desktop space that offer a unique opportunity to build something truly special. They include:

D-Bus

D-Bus came about several years after Speech Dispatcher. It is worth pausing and thinking about the architectural similarities between a local speech service and a desktop IPC bus. The problems that Speech Dispatcher tackles, such as auto-spawning, wire protocols, IPC transports, session persistence, modularity, and others, have been generalized by D-Bus.

Instead of a specialized module for Speech Dispatcher, what if speech engines just exposed an interface on the session bus? With a service file they can automatically spawn and go away as needed.
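As a sketch of what that looks like in practice, a speech engine could ship a standard D-Bus service activation file so the bus spawns it on demand (the well-known name and the executable path below are invented for illustration; they are not part of any Spiel specification):

```ini
# Installed to /usr/share/dbus-1/services/org.example.Speech.service
# (hypothetical speech provider; the bus name is illustrative)
[D-BUS Service]
Name=org.example.Speech
Exec=/usr/libexec/example-speech-provider
```

With a file like this in place, the first method call to org.example.Speech on the session bus launches the provider automatically, and the process can exit again when idle.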

Flatpak (and Snap??)

Flatpak offers a standardized packaging format that can encapsulate complex setups into a sandboxed installation with little to no thought of the dependency hell Linux users have grown accustomed to. One neat feature of Flatpaks is that they support exposing fully sandboxed D-Bus services, such as a speech engine. Flatpaks offer an out-of-band distribution model that sidesteps the limitations and fragmentation of traditional distro package streams. Flatpak repositories like Flathub are the perfect vehicle for speech engines because of the mix of proprietary and peculiar licenses that are often associated with them, for example…

Neural text to speech

I have always been frustrated with the lack of natural-sounding speech synthesis in free software. It always seemed that the game was rigged and only the big tech platforms could afford to distribute nice-sounding voices. This is all quickly changing with a flurry of new speech systems covering many languages. It is very exciting to see this happening; it seems like there is a new innovation on this front every day. Because of the size of some of the speech models, and because of the eclectic copyrights associated with them, we can’t expect distros to preinstall them. Flatpaks and neural speech systems are a perfect match for this purpose.

Talking apps that aren’t screen readers

In recent years we have seen many new applications of speech synthesis entering the mainstream - navigation apps, e-book readers, personal assistants and smart speakers. When Speech Dispatcher was first designed, its primary audience was blind Linux users. As the use cases have ballooned so has the demand for a more generalized framework that will cater to a diverse set of users.

There is precedent for technology that was designed for disabled people becoming mainstream. Everyone benefits when a niche technology becomes conventional, especially those who depend on it most.

Questions and Answers

I’m sure you have questions, I have some answers. So now we will play our two roles, you the perplexed skeptic, unsure about why another software stack is needed, and me - a benevolent guide who can anticipate your questions.

Why are you starting from scratch? Can’t you improve Speech Dispatcher?

Speech Dispatcher is over 20 years old. Of course, that isn’t a reason to replace it. After all, some of your favorite apps are even older. Perhaps there is room for incremental improvements in Speech Dispatcher. But, as I wrote above, I believe there are several developments in recent years that offer an opportunity for a clean slate.

I love eSpeak, what is all this talk about “naturally sounding” voices?

eSpeak isn’t going anywhere. It has a permissive license, is very responsive, and is ergonomic for screen reader users who consume speech at high rates for long periods of time. We will have an eSpeak speech provider in this new framework.

Many other users, who rely on speech for narration or virtual assistants will prefer a more natural voice. The goal is to make those speech engines available and easy to install.

I know for a fact that you can use /insert speech engine/ with Speech Dispatcher

It is true that with enough effort you can plug anything into Speech Dispatcher.

Speech Dispatcher depends on a fraught set of configuration files, scripts, executables and shared libraries. A user who wants to use a synthesis engine other than the default bundled one in their distro needs to open a terminal, carefully place resources in the right place and edit configuration files.

What plan do you have to migrate all the current applications that rely on Speech Dispatcher?

I don’t. Both APIs can coexist. I’m not a contributor or maintainer of Speech Dispatcher. There might always be a need for the unique features in Speech Dispatcher, and it might have another 20 years of service ahead.

I couldn’t help but notice you chose to write libspiel in C instead of a modern memory safe language with a strong ownership model like Rust.

Yes.

Support.Mozilla.OrgIntroducing Mandy and Donna

Hey everybody,

I’m so thrilled to start 2024 with good news for you all. Mandy Cacciapaglia and Donna Kelly are joining our Customer Experience team as a Product Support Manager for Firefox and a Content Strategist. Here’s a bit from them both:

Hi there! Mandy here — I am Mozilla’s new Product Support Manager for Firefox. I’m so excited to collaborate with this awesome group, and dive into Firefox reporting, customer advocacy and feedback, and product support so we can keep elevating our amazing browser. I’m based in NYC, and outside of work you will find me watercolor painting, backpacking, or reading mysteries.

Hi everyone! I’m Donna, and I am very happy to be here as your new Content Strategist on the Customer Experience team. I will be working on content strategy to improve our knowledge base, documentation, localization, and overall user experience! In my free time, I love hanging out with my dog (a rescue tri-pawd named Sundae), hiking, reading (big Stephen King fan), playing video games, and anything involving food. Looking forward to getting to know everyone!

You’ll hear more from them in our next community call (which will be on January 17). In the meantime, please join me to congratulate and welcome both of them into the team!

Niko MatsakisWhat I'd like to see for Async Rust in 2024 🎄

Well, it’s that time of year, when thoughts turn to…well, Rust of course. I guess that’s every time of year. This year was a pretty big year for Rust, though I think a lot of what happened was more in the vein of “setting things up for success in 2024”. So let’s talk about 2024! I’m going to publish a series of blog posts about different aspects of Rust I’m excited about, and what I think we should be doing. To help make things concrete, I’m going to frame 2024 by using proposed project goals – basically a specific piece of work I think we can get done this year. In this first post, I’ll focus on async Rust.

What we did in 2023

On Dec 28, with the release of Rust 1.75.0, we stabilized async fn and impl trait in traits. This is a really big deal. Async fn in traits has been “considered hard” since 2019 and they’re at the foundation of basically everything that we need to do to make async better.

Async Rust to me showcases the best and worst of Rust. It delivers on that Rust promise of “high-level code, low-level performance”. Building on the highly tuned Tokio runtime, network services in Rust consistently have tighter tail latency and lower memory usage, which means you can service a lot more clients with a lot fewer resources. Alternatively, because Rust doesn’t hardcode the runtime, you can write async Rust code that targets embedded environments that don’t even have an underlying operating system, or anywhere in between.

And yet it continues to be true that, in the words of an Amazon engineer I talked to, “Async Rust is Rust on hard mode”. Truly closing this gap requires work in the language, standard library, and the ecosystem. We won’t get all the way there in 2024, but I think we can make some big strides.

Proposed goal: Solve the send bound problem in Q2

We made a lot of progress on async functions in traits last year, but we still can’t cover the use case of generic traits that can be used either with a work-stealing executor or without one. One very specific example of this is the Service trait from tower. To handle this use case, we need a solution to the send bound problem. We have a bunch of ideas for what this might be, and we’ve even got a prototype implementation for (a subset of) return type notation, so we are well positioned for success. I think we should aim to finish this by the end of Q2 (summer, basically). This in turn would unblock a 1.0 release of the tower crate, letting us have a stable trait for middleware.
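To make the problem concrete, here is a minimal sketch (the `Service` trait below is a simplified stand-in, not tower’s actual definition, and the return type notation shown in the comment is still only a proposal):

```rust
use std::future::Future;
use std::pin::pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Simplified stand-in for a trait like tower's `Service`.
trait Service {
    async fn call(&self, req: String) -> String;
}

struct Echo;
impl Service for Echo {
    async fn call(&self, req: String) -> String {
        format!("echo: {req}")
    }
}

// This compiles on Rust 1.75+, but there is currently no way on stable
// Rust to additionally require that the future returned by `call` is
// `Send`, which is what a work-stealing executor (e.g. tokio::spawn)
// needs:
//
//     fn spawn_call<S>(svc: S)
//     where
//         S: Service + Send + 'static,
//         // proposed return type notation: S: Service<call(..): Send>
//     { /* tokio::spawn(async move { svc.call(...).await }) */ }

// Minimal single-threaded poll loop so the example runs without an
// external runtime.
fn block_on<F: Future>(fut: F) -> F::Output {
    fn raw() -> RawWaker {
        fn clone(_: *const ()) -> RawWaker { raw() }
        fn noop(_: *const ()) {}
        static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    let waker = unsafe { Waker::from_raw(raw()) };
    let mut cx = Context::from_waker(&waker);
    let mut fut = pin!(fut);
    loop {
        if let Poll::Ready(v) = fut.as_mut().poll(&mut cx) {
            return v;
        }
    }
}

fn main() {
    let resp = block_on(Echo.call("ping".into()));
    assert_eq!(resp, "echo: ping");
}
```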

Proposed goal: Stabilize an MVP for async closures in Q3

The holy grail for async is that you should be able to easily make any synchronous function into an asynchronous one. The 2019 MVP supported only top-level functions and inherent methods. We’ve now extended that to include trait methods. In 2024, we should take the next step and support async closures. This will allow people to define combinator methods like iterator map and so forth and avoid the convoluted workarounds currently required.

For this first goal, I think we should be working to establish an MVP. Recently, Errs and I outlined an MVP we thought seemed quite doable. It began with creating AsyncFn traits that mirror the Fn trait hierarchy…

trait AsyncFnOnce<A> {
    type Output;
    
    async fn call_once(self, args: A) -> Self::Output;
}

trait AsyncFnMut<A>: AsyncFnOnce<A> {
    async fn call_mut(&mut self, args: A) -> Self::Output;
}

trait AsyncFn<A>: AsyncFnMut<A> {
    async fn call(&self, args: A) -> Self::Output;
}

…and the ability to write async closures like async || <expr>, as well as a bridge such that any function that returns a future also implements the appropriate AsyncFn traits. Async closures would unblock us from creating combinator traits, like a truly nice version of async iterators.

This MVP is not intended as the final state, but it is intended to be compatible with whatever final state we wind up with. There remains a really interesting question about how to integrate the AsyncFn traits with the regular Fn traits. Nonetheless, I think we can stabilize the above MVP in parallel with exploring that question.

Proposed goal: Author an RFC for “maybe async” in Q4 (or decide not to!)

One of the big questions around async is whether we should be supporting some way to write “maybe async” code. This idea has gone through a lot of names. Yosh and Oli originally kicked off something they called keyword generics and later rebranded as effect generics. I prefer the framing of trait transformers, and I wrote a blog post about how trait transformers can make async closures fit nicely.

There is significant skepticism about whether this is a good direction. There are other ways to think about async closures (though Errs pointed out an issue with this that I hope to write about in a future post). Boats has written a number of blog posts with concerns, and members of the types team have expressed fear about what will be required to write code that is generic over effects. These concerns make a lot of sense to me!

Overall, I still believe that something like trait transformers could make Rust feel simpler and help us scale to future needs. But I think we have to prove our case! My goal for 2024 then is to do exactly that. The idea would be to author an RFC laying out a “maybe async” scheme and to get that RFC accepted. To address the concerns of the types team, I think that will require modeling “maybe async” formally as part of a-mir-formality, so that everybody can understand how it will work.

Another possible outcome here is that we opt to abandon the idea. Maybe the complexity really is infeasible. Or maybe the lang design doesn’t feel right. I’m good with that too, but either way, I think we need to settle on a plan this year.

Stretch goal: stabilize generator syntax

As a stretch goal, it would be really cool to land support for generator expressions – basically a way to write async iterators. Errs recently opened a PR adding nightly support for async generators, and RFC #3513 proposed reserving the gen keyword for Rust 2024. Really stabilizing generators however requires us to answer some interesting questions about the best design for the async iteration trait. Thanks to the stabilization of async fn in trait, we can now have this conversation – and we have certainly been having it! Over the last month or so there has also been a lot of interesting back and forth about the best setup. I’m still digesting all the posts, and I hope to put up some thoughts this month (no promises). Regardless, I think it’s plausible that we could see async generators land in 2024, which would be great, as it would eliminate the major reason that people have to interact directly with Pin.

Conclusion: looking past 2024

If we accomplish the goals I outlined above, async Rust by the end of 2024 will be much improved. But there will still be a few big items before we can really say that we’ve laid out the pieces we need. Sadly, we can’t do it all, so these items would have to wait until after 2024, though I think we will continue to experiment and discuss their design:

  • Async drop: Once we have async closures, there remains one place where you cannot write an async function – the Drop trait. Async drop has a bunch of interesting complications (Sabrina wrote a great blog post on this!), but it is also a major pain point for users. We’ll get to it!
  • Dyn async trait: Besides send bounds, the other major limitation for async fn in trait is that traits using them do not yet support dynamic dispatch. We should absolutely lift this, but to me it’s lower in priority because there is an existing workaround of using a proc-macro to create a DynAsyncTrait type. It’s not ideal, but it’s not as fundamental a limitation as send bounds or the lack of async closures and async drop. (That said, the design work for this is largely done, so it is entirely possible that we land it this year as a drive-by piece of work.)
  • Traits for being generic over runtimes: Async Rust’s ability to support runtimes as varied as Tokio and Embassy is one of its superpowers. But the fact that switching runtimes or writing code that is generic over what runtime it uses is very hard to impossible is a key pain point, made even worse by the fact that runtimes often don’t play nice together. We need to build out traits for interop, starting with [async read + write] but eventually covering [task spawning and timers].
  • Better APIs: Many of the nastiest async Rust bugs come about when users are trying to manage nested tasks. Existing APIs like FutureUnordered and select have a lot of rough edges and can easily lead to deadlock. Tyler had a good post on this. I would like to see us take a fresh look at the async APIs we offer Rust programmers and build up a powerful, easy to use library that helps steer people away from potential sources of deadlock. Ideally this API would not be specific to the underlying runtime, but instead let users switch between different runtimes, and hopefully cleanly support embedded systems (perhaps with limited functionality). I don’t think we know how to do this yet, and I think that doing it will require us to have a lot more tools (things like send bounds, async closures, and quite possibly trait transformers or async drop).

Firefox Developer ExperienceGeckodriver 0.34.0 Released

We are proud to announce the next major release of geckodriver, 0.34.0. It ships with a new extension feature that has often been requested by the WebDriver community.

Contributions

With geckodriver being an open source project, we are grateful to get contributions from people outside of Mozilla:

  • Mitesh Gulecha updated the Print command to also allow numbers to be used for printing single pages as PDF.
  • James Hendry refactored our error handling code, now utilizing the anyhow and thiserror crates, and as such removed the unknown path error type which is not part of the WebDriver specification.
  • Razvan Cojocaru improved the Firefox version check to allow Firefox distributions with custom prefixes for the application name.

Geckodriver code is written in Rust so any web developer can contribute! Read how to set up the work environment and check the list of mentored issues for geckodriver.

New Features

Support for “Virtual Authenticators”

Virtual Authenticators serve as a WebDriver Extension designed to simulate user authentication (WebAuthn) on web applications during automated testing. This functionality encompasses a range of methods, including passwords, biometrics, and security keys.

Geckodriver supports all of the commands available for this extension.

Dynamic Port Selection: Adapting to Available Ports at Runtime

Specifying --port=0 as an argument allows geckodriver to dynamically find and use an available free port on the system. It’s important to note that when employing this argument, the final port value must be retrieved from the standard output (stdout).
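As a small illustration, the chosen port can be parsed out of the log line geckodriver writes to stdout (the sample line below is illustrative; check the exact format your geckodriver version actually prints):

```rust
// Extract the port from a geckodriver stdout line such as
// "geckodriver INFO Listening on 127.0.0.1:59905".
fn parse_port(line: &str) -> Option<u16> {
    // Take whatever follows the last ':' and parse it as a port number.
    line.rsplit_once(':')?.1.trim().parse().ok()
}

fn main() {
    // Hypothetical sample output; real logs include a timestamp prefix.
    let line = "geckodriver INFO Listening on 127.0.0.1:59905";
    assert_eq!(parse_port(line), Some(59905));
}
```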

Fixes

  • While searching for a default Firefox installation on the system, geckodriver used the Contents/MacOS/firefox-bin executable instead of the binary specified in the app bundle’s Info.plist file. This behavior resulted in a malfunction due to a regression in Firefox, particularly affecting the Firefox 121 release.

Downloads

As usual links to the pre-compiled binaries for popular platforms and the source code are available on the GitHub repository.

The Rust Programming Language BlogAnnouncing Rust 1.75.0

The Rust team is happy to announce a new version of Rust, 1.75.0. Rust is a programming language empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, you can get 1.75.0 with:

$ rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.75.0.

If you'd like to help us out by testing future releases, you might consider updating locally to use the beta channel (rustup default beta) or the nightly channel (rustup default nightly). Please report any bugs you might come across!

What's in 1.75.0 stable

async fn and return-position impl Trait in traits

As announced last week, Rust 1.75 supports use of async fn and -> impl Trait in traits. However, this initial release comes with some limitations that are described in the announcement post.

It's expected that these limitations will be lifted in future releases.

Pointer byte offset APIs

Raw pointers (*const T and *mut T) used to primarily support operations that operate in units of T. For example, <*const T>::add(1) would add size_of::<T>() bytes to the pointer's address. In some cases, working with byte offsets is more convenient, and these new APIs avoid requiring callers to cast to *const u8/*mut u8 first.
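For example, with `byte_add`, one of the byte offset APIs stabilized in this release:

```rust
fn main() {
    let arr: [u32; 3] = [10, 20, 30];
    let p: *const u32 = arr.as_ptr();

    // Element-based: add(1) advances by size_of::<u32>() = 4 bytes.
    assert_eq!(unsafe { *p.add(1) }, 20);

    // Byte-based (new in 1.75): byte_add(4) advances by exactly 4 bytes,
    // with no intermediate cast through *const u8.
    assert_eq!(unsafe { *p.byte_add(4) }, 20);
}
```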

Code layout optimizations for rustc

The Rust compiler continues to get faster, with this release including the application of BOLT to our binary releases, bringing a 2% mean wall time improvement on our benchmarks. This tool optimizes the layout of the librustc_driver.so library containing most of the rustc code, allowing for better cache utilization.

We are also now building rustc with -Ccodegen-units=1, which provides more opportunity for optimizations in LLVM. This optimization brought a separate 1.5% wall time mean win to our benchmarks.

In this release these optimizations are limited to x86_64-unknown-linux-gnu compilers, but we expect to expand that over time to include more platforms.

Stabilized APIs

These APIs are now stable in const contexts:

Other changes

Check out everything that changed in Rust, Cargo, and Clippy.

Contributors to 1.75.0

Many people came together to create Rust 1.75.0. We couldn't have done it without all of you. Thanks!

Support.Mozilla.Org2023 in a nutshell

Hey SUMO nation,

As we’re inching closer towards 2024, I’d like to take a step back to reflect on what we’ve accomplished in 2023. It’s a lot, so let’s dive in! 

  • Overall pageviews

From Jan 1st to the end of November, we’ve got a total of 255+ million pageviews on SUMO. Pageviews have declined consistently since 2018, and this time around, we’re down 7% from last year. This is far from bad, though, as it’s our lowest yearly drop since 2018.

  • Forum

In the forum, we’ve seen an average of 2.8K questions per month this year, a 6.67% downturn from last year. We also saw a downturn in our answer rate within 72 hours: 71%, compared to 75% last year. We also saw a drop in our solved rate: 10% this year compared to 14% last year. In a typical month, our average number of forum contributors excluding OPs is around 200 (compared to 240 last year).

*See Support glossary
  • KB

We did see an increase across different KB contribution metrics this year, though. In total, we’ve got 1990 revisions (a 14% increase from last year) from 136 non-staff members. Our review rate this year is 80%, while our approval rate is 96% (compared to 73% and 95% in 2022). In total, we’ve got 29 non-staff reviewers this year.

  • Localization

On the localization side, the numbers are overall pretty normal. Total revisions are around 13K (same as last year) from 400 non-staff members, with a 93% review rate and 99% approval rate (compared to 90% and 99% last year) from a total of 118 non-staff reviewers.

  • Social Support

Year to date, the Social Support contributors have sent a total of 850 responses (compared to 908 last year) and interacted with 1645 conversations. Our resolved rate has dropped to 40.74%, compared to 70% last year. We have made major improvements on other metrics, though. For example, this year our contributors were responsible for a larger share of our total responses (75% in total, compared to 39.6% last year). Our conversion rate also improved, from 20% in 2022 to 52% this year. This means our contributors have taken on more of the overall inbounds and have replied more consistently than last year.

  • Mobile Store Support

On the Mobile Store Support side, our contributors this year have contributed 1260 replies and interacted with 3149 conversations in total. That puts our conversion rate at 36% this year, compared to 46% last year. And those are mostly contributions to non-English reviews.


In addition to the regular contribution, here are some of the community highlights from 2023:

  • We did some internal assessment and external benchmarking in Q1, which informed our experiments in Q2. Learn the results of those experiments from this call.
  • We also updated our contributor guidelines, including article review guidelines and created a new policy around the use of generative AI.
  • By the end of the year, the Spanish community has done something really amazing. They have managed to translate and update 70% of in-product desktop articles (as opposed to 11% when we started the call for help).

We’d also like to take this opportunity to highlight some Customer Experience team’s projects that we’ve tackled this year (some with close involvement and help from the community).

We split this one into two concurrent projects:

  • Phase 1 Navigation Improvements — initial phase aims to:
    • Surface the community forums in a clearer way
    • Streamline the Ask a Question user flow
    • Improve link text and calls-to-action to better match what users might expect when navigating on the site
    • Updates to the main navigation and small changes to additional site UI (like sidebar menus, page headers, etc.) can be expected
  • Cross-system content structure and hierarchy — the goal of this project is to:
    • Improve our ability to gather data metrics across functional areas of SUMO (KB, ticketing, and forums)
    • Improve recommended “next steps” by linking related content across KB and Forums
    • Create opportunities for grouping and presenting content on SUMO by alternate categories and not just by product

Project Background:

    • This research was conducted between August 2023 and November 2023. The goal of this project is to provide actionable insights on how to improve the customer experience of SUMO.
    • Research approach:
      • Stakeholder engagement process
      • Surveyed 786 Mozilla Support users
      • Conducted three rounds of interviews recruited from survey respondents:
        • Sprint 1: Evaluated content and article structure
        • Sprint 2: Evaluated the overall SUMO customer experience
        • Sprint 3: Co-design of an improved SUMO experience
      • This research was conducted by PH1 Research, who have conducted similar research for Mozilla in 2022.
  • Please consider: Participants for this study were recruited via a banner ad in SUMO. As a result, these findings only reflect the experiences and needs of users who actively use SUMO. It does not reflect users who may not be aware of SUMO or have decided not to use it. 

Executive Summary:

  • Users consider SUMO a trustworthy and content-rich resource. SUMO offers resources that can appropriately help users of different technical levels. The most common user flow is via Google search. Very few are logging in to SUMO directly.
  • The goal of SUMO should be to assist Mozilla users to improve their product experience. Content should be consolidated and optimized to show fewer, high quality results on Google search and SUMO search. The article experience should aim to boost relevance and task success. The SUMO website should aid users to diagnose systems, understand problems, find solutions, and discover additional resources when needed.

Recommendations:

  • Our recommendation is that SUMO’s strategy should be to provide a self-service experience that makes users feel that Mozilla cares about their problems and offers a range of solutions appealing to various persona types (technical/non-technical).
  • The pillars for making SUMO valuable to users should be:
    • Confidence: As a user, I need to be confident that the resource provided will resolve my problem.
    • Guidance: As a user, I need to feel guided through the experience of finding a solution, even when I don’t understand the problem or solutions available.
    • Trust: As a user, I need to trust that the resources have been provided by a trustworthy authority on the subject (SUMO scores well here because of Mozilla).
      • Modernizing our CMS can provide significant benefits in terms of user experience, performance, security, flexibility, collaboration, and analytics.
      • This resulted in a decision to move forward with the plan to migrate our CMS to Wagtail — a modern, open-source content management system focused on flexibility and user experience.
      • We are currently in the process of planning the next phases for implementation.
    • Pocket migration to SUMO
      • We successfully migrated and published 100% of previously identified Pocket help center content from HelpScout’s CMS to SUMO’s CMS, with proper redirects in place to ensure a seamless transition for the user.
      • The localization community began efforts to help us localize the content, which had previously only been available in en-US.
    • Firefox account to Mozilla account rebrand in early November.
    • Officially supporting account users and login less support flow (read more about that here).
    • This was a very challenging project, not only because we had to migrate our large codebase and very large data set from MySQL, but also because of the challenge of performing the actual data migration within a reasonable period of time, on the order of a few hours at most, so that we could minimize the disruption to users and contributors. In the end, it was a multi-month project comprising coordinated research, planning and effort between our engineering team and our SRE (Site Reliability Engineering) team. We’re now on a much better database foundation for the future, because:
      • Postgres is better suited for enterprise-level applications like ours, with very large datasets, frequent write operations and complex queries.
      • We can also take advantage of connection pooling via PgBouncer, which will improve our resilience under huge and often malicious traffic spikes (which have been occurring much more frequently during the past year).
      • Last but not least, our database now supports the full Unicode character set, which means it can fully handle all characters, including emojis, in all languages. Our MySQL database had only limited Unicode support, due to its initial configuration, and rather than invest in resolving that, which would have meant a significant chunk of work, we decided to invest instead in Postgres.

This year, you all continue to impress us with the persistence and dedication that you show to Mozilla by contributing to our platform, despite the current state of our world right now. To every single one of you who contributed in one way or another to SUMO, I’d like to express my sincere gratitude because without you all, our platform is just an empty shell. To celebrate this, we’ve prepared this simple dashboard with contribution data that you can filter based on username so you can see how much you’ve accomplished this year (we talked about this in our last community call this year).

Let’s be proud of what we’ve accomplished to keep the internet as a global & public resource for everybody, and let’s keep on rocking the helpful web through 2024 and beyond!

If you’re a lurker and interested in contributing to Mozilla Support, please head over to our Contribute page to learn more about our programs!

Mozilla Open Policy & Advocacy BlogMozilla’s Comments to FCC: Net Neutrality Essential for Competition, Innovation, Privacy

[Read our full submission here]

Net neutrality – the concept that your internet provider should not be able to block, throttle, or prioritize elements of your internet service, such as to favor their own products or business partners – is on the docket again in the United States. With the FCC putting out a notice of proposed rulemaking (NPRM) to reinstate net neutrality, Mozilla weighed in last week with a clear message: the FCC should reestablish these common sense rules as soon as possible.

We have been fighting for net neutrality around the world for the better part of a decade and a half. Most notably, this included Mozilla’s challenge to the Trump FCC’s dismantling of net neutrality in 2018.

American internet users are on the cusp of renewed protections for the open internet. Our recently submitted comment to the FCC’s NPRM took a step back to remind the FCC and the public of the real benefits of net neutrality: Competition, Grassroots Innovation, Privacy, and Transparency and Accountability.

Simply put, if the FCC moves forward with reclassification of broadband as a Title II service, it will protect innovation in edge services; unlock vital privacy safeguards; and prevent ISPs from leveraging their market power to control people’s experiences online. With vast increases in our dependence on the internet since the COVID-19 pandemic, these protections are more important than ever.

We encourage others who are passionate about the open internet to file reply comments on the proceeding, which are due January 17, 2024.

You can read our full comment here.

The post Mozilla’s Comments to FCC: Net Neutrality Essential for Competition, Innovation, Privacy appeared first on Open Policy & Advocacy.

The Mozilla BlogCAPTCHA successor Privacy Pass has no easy answers for online abuse

As much as the Web continues to inspire us, we know that sites put up with an awful lot of abuse in order to stay online. Denial of service attacks, fraud and other flavors of abusive behavior are a constant pressure on website operators.

One way that sites protect themselves is to find some way to sort “good” visitors from “bad.” CAPTCHAs are a widely loathed and unreliable means of distinguishing human visitors from automated solvers. Even worse, beneath this sometimes infuriating facade is a system that depends extensively on invasive tracking and profiling.

(You can find a fun overview of the current state of CAPTCHA here.)

Finding a technical solution to this problem that does not involve such privacy violations is an appealing challenge, but a difficult one. Well-meaning attempts can easily fail without giving due consideration to other factors. For instance, Google’s Web Environment Integrity proposal fell flat because of its potential to be used to unduly constrain personal choice in how to engage online (see our position for details).

Privacy Pass is a framework published by the IETF that is seen as having the potential to help address this difficult problem. It is a generalization of a system originally developed by Cloudflare to reduce their dependence on CAPTCHAs and tracking. For the Web, the central idea is that Privacy Pass might provide websites with a clean indication that a visitor is OK, separate from the details of their browsing history.

The way Privacy Pass works is that one website hands out special tokens to people the site thinks are OK. Other sites can ask people to give them a token. The second site then knows that a visitor with a token is considered OK by the first site, but they don’t learn anything else. If the second site trusts the first, they might treat people with tokens more favorably than those without.

The cryptography that backs Privacy Pass provides two interlocked guarantees: 

  • authenticity: the recipient of a token can guarantee that it came from the issuer
  • privacy: the recipient of the token cannot trace the token to its issuance, which prevents them from learning who was issued each token

The central promise of Privacy Pass is that the privacy guarantee would allow the exchange of tokens to be largely automated, with your browser forwarding tokens between sites that trust you to sites that are uncertain. This would happen without your participation. Sites could use these tokens to reduce their dependence on annoying and ineffective CAPTCHAs.

Our analysis of Privacy Pass shows that while the technology is sound, applying that technology to an open system like the Web comes with a host of non-technical hazards.

We examine the privacy properties of Privacy Pass, how useful it might be, whether it could improve equity of access, and whether it might bias toward centralization. We find problems that are not technical in nature and that are hard to reconcile.

In considering how Privacy Pass might be deployed, there is a direct tension between privacy and open participation. The system requires token providers to be widely trusted to respect privacy, but our vision of an open Web means that restrictions on participation cannot be imposed lightly. Resolving this tension is necessary when deciding who can provide tokens.

The analysis concludes that the problem of abuse is not one that will yield to a technical solution like Privacy Pass. For a problem this challenging, technical options might not provide a comprehensive solution, but they need to do more than shift problems around. Technical solutions need to complement other measures. Privacy Pass does allow us to focus on the central problem of identifying abusive visitors, but there is a need to have safeguards in place that prevent a number of serious secondary problems.

Our analysis does not ultimately identify a path to building the non-technical safeguards necessary for a successful deployment of Privacy Pass on the Web.

Finally, we look at the deployments of Privacy Pass in Safari and Chrome browsers. We conclude that these deployments have inadequate safeguards for the problems we identify.

The post CAPTCHA successor Privacy Pass has no easy answers for online abuse appeared first on The Mozilla Blog.

The Mozilla BlogYour Rich BFF, Vivian Tu, On Creating Her Own Personal Finance Community

Here at Mozilla, we are the first to admit the internet isn’t perfect, but we know the internet is pretty darn magical. The internet opens up doors and opportunities, allows for human connection, and lets everyone find where they belong — their corners of the internet. We all have an internet story worth sharing. In My Corner Of The Internet, we talk with people about the online spaces they can’t get enough of, the sites and forums that shaped them, and what reclaiming the internet really looks like.

This month we chat with Vivian Tu, also known as Your Rich BFF, a former Wall Street trader-turned-expert, personal finance educator, and entrepreneur, about her new book (available everywhere now), the internet’s worst DJ, the personal finance corner of the internet, and her own community, the BFFs.

What is your favorite corner of the internet? (wherever you love on the internet and feel accepted/comfortable/inspired)?
Robert Irwin interacting with wild animals at the Australia Zoo! I grew up watching his dad on the Animal Planet channel and now it’s so fun to watch him continue his family’s legacy of love for animals. The content is soothing and many of the animals are so cute!

What is an internet deep dive that you can’t wait to jump back into?
Whatever we’re calling the Taylor Swift & Travis Kelce romance, it’s a topic that I can’t seem to escape.

What is the one tab you always regret closing?
This never happens because I don’t close tabs. I wish I was kidding. I’m always running ~40 tabs open at once, to the point where you can barely see each tab and my laptop sounds like a rocket ship about to take off. The BFFs always give me a hard time about this any time I show my computer screen on camera.

What can you not stop talking about on the internet right now?
DJ Mandy, the internet’s worst DJ! While most DJs that go viral on the internet are known for their incredible mashups or amazing sets, DJ Mandy is specifically known for intentionally DJing awfully. She’ll blend Hallelujah by Leonard Cohen and Act Up by City Girls and any time I watch her content I am laughing out loud within 45 seconds. I’ve sent her mixes to all of my friends. She makes me laugh and I have no idea how she does it with a straight face.

What was the first online community you engaged with?
The Personal Finance community! Since I create content around budgeting, saving, investing, etc. myself, one of the first communities I engaged with online was other folks on their personal finance journeys. It’s been so fun getting to know other finance creators, as well as the BFFs on their personal finance journeys.

What articles and videos are in your Pocket waiting to be read/watched right now?
There are so many!

https://www.washingtonpost.com/wellness/2023/10/14/grief-healing-families-joy/
https://fortune.com/2023/10/11/return-to-office-costs-commuting-lunch/
https://www.bonappetit.com/gallery/cheap-recipes

If you could create your corner of the internet, what would it look like?
I’m lucky because I feel like I have! Most of my audience, who I call the BFFs, come to my corner to learn more about financial tips and tricks! We cover everything from lowering your rent to buying luxury goods to saving on your tax bill, and so much more. That said, if I wasn’t a personal finance creator —  I would 100% be a Slime creator. I am weirdly drawn to watching people make different types of slime and the ASMR of listening to them squish it around!

Why do you think younger generations are more comfortable talking about money among their peers and online?
Since we’re often able to hide behind usernames & profile pictures of memes, there’s a level of anonymity! You can have a more candid conversation with an internet stranger than you might with someone in your day-to-day IRL life. Also, I will say social media has created an unprecedented level of transparency, with influencers telling us everything from how much they paid for their nose job, to how to travel hack a $2k-a-night hotel room, to how much student debt regular people have. This has made conversations around money more common, more comfortable, and more democratized. I, for one, LOVE this new level of financial honesty.

Former Wall Street trader-turned-expert, personal finance educator, public speaker, entrepreneur, and newly minted author, Vivian Tu AKA Your Rich BFF is on a global mission to make the financial industry less “male, pale, and stale.”  She is the founder and CEO of the financial equity phenomenon, “Your Rich BFF,” which she developed as a passion project to destigmatize and make the rules of personal finance accessible and digestible to non-experts and marginalized communities.  Her dedication to promoting financial literacy has earned her cross-platform fame and notoriety, having garnered 6 million followers and counting, as well as honors on both the Forbes’ ‘30 Under 30 – Social Media’ (2023) and inaugural ‘Top Creators’ (2022 + 2023) lists.  In addition to her breakout digital content, Vivian continues to spread her wealth of knowledge on her top-charting podcast, “Networth and Chill” (Audioboom Studios), a first-of-its-kind podcast offering accessible advice and lessons in finance, featuring Vivian alongside notable experts, professionals, and famous faces to break down the economics of our lives. She is also the author of the book Rich AF: The Winning Money Mindset That Will Change Your Life, available everywhere now.

The post Your Rich BFF, Vivian Tu, On Creating Her Own Personal Finance Community appeared first on The Mozilla Blog.

The Rust Programming Language BlogAnnouncing `async fn` and return-position `impl Trait` in traits

The Rust Async Working Group is excited to announce major progress towards our goal of enabling the use of async fn in traits. Rust 1.75, which hits stable next week, will include support for both -> impl Trait notation and async fn in traits.

This is a big milestone, and we know many users will be itching to try these out in their own code. However, we are still missing some important features that many users need. Read on for recommendations on when and how to use the stabilized features.

What's stabilizing

Ever since the stabilization of RFC #1522 in Rust 1.26, Rust has allowed users to write impl Trait as the return type of functions (often called "RPIT"). This means that the function returns "some type that implements Trait". This is commonly used to return closures, iterators, and other types that are complex or impossible to write explicitly.

/// Given a list of players, return an iterator
/// over their names.
fn player_names(
    players: &[Player]
) -> impl Iterator<Item = &String> {
    players
        .iter()
        .map(|p| &p.name)
}

Starting in Rust 1.75, you can use return-position impl Trait in trait (RPITIT) definitions and in trait impls. For example, you could use this to write a trait method that returns an iterator:

trait Container {
    fn items(&self) -> impl Iterator<Item = Widget>;
}

impl Container for MyContainer {
    fn items(&self) -> impl Iterator<Item = Widget> {
        self.items.iter().cloned()
    }
}

So what does all of this have to do with async functions? Well, async functions are "just sugar" for functions that return -> impl Future. Since these are now permitted in traits, we also permit you to write traits that use async fn.

trait HttpService {
    async fn fetch(&self, url: Url) -> HtmlBody;
//  ^^^^^^^^ desugars to:
//  fn fetch(&self, url: Url) -> impl Future<Output = HtmlBody>;
}

Where the gaps lie

-> impl Trait in public traits

The use of -> impl Trait is still discouraged for general use in public traits and APIs for the reason that users can't put additional bounds on the return type. For example, there is no way to write this function in a way that is generic over the Container trait:

fn print_in_reverse(container: impl Container) {
    for item in container.items().rev() {
        // ERROR:                 ^^^
        // the trait `DoubleEndedIterator`
        // is not implemented for
        // `impl Iterator<Item = Widget>`
        eprintln!("{item}");
    }
}

Even though some implementations might return an iterator that implements DoubleEndedIterator, there is no way for generic code to take advantage of this without defining another trait. In the future we plan to add a solution for this. For now, -> impl Trait is best used in internal traits or when you're confident your users won't need additional bounds. Otherwise you should consider using an associated type.1
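For instance, reusing the Container example from above (the Widget and MyContainer definitions here are illustrative stand-ins), an associated type names the iterator explicitly, so generic code can require extra bounds on it:

```rust
#[derive(Clone, Debug, PartialEq)]
pub struct Widget(pub u32);

pub trait Container {
    // The iterator type is named instead of being an opaque `impl Trait`.
    type Items: Iterator<Item = Widget>;
    fn items(&self) -> Self::Items;
}

pub struct MyContainer {
    pub items: Vec<Widget>,
}

impl Container for MyContainer {
    // The concrete iterator type must be nameable here.
    type Items = std::vec::IntoIter<Widget>;
    fn items(&self) -> Self::Items {
        self.items.clone().into_iter()
    }
}

// Generic code can now add the bounds it needs on the associated type:
pub fn print_in_reverse<C: Container>(container: &C)
where
    C::Items: DoubleEndedIterator,
{
    for item in container.items().rev() {
        eprintln!("{item:?}");
    }
}
```

The trade-off is that the implementation must be able to name its iterator type, which is not always possible until impl_trait_in_assoc_type stabilizes (see footnote 1).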

async fn in public traits

Since async fn desugars to -> impl Future, the same limitations apply. In fact, if you use bare async fn in a public trait today, you'll see a warning.

warning: use of `async fn` in public traits is discouraged as auto trait bounds cannot be specified
 --> src/lib.rs:7:5
  |
7 |     async fn fetch(&self, url: Url) -> HtmlBody;
  |     ^^^^^
  |
help: you can desugar to a normal `fn` that returns `impl Future` and add any desired bounds such as `Send`, but these cannot be relaxed without a breaking API change
  |
7 -     async fn fetch(&self, url: Url) -> HtmlBody;
7 +     fn fetch(&self, url: Url) -> impl std::future::Future<Output = HtmlBody> + Send;
  |

Of particular interest to users of async are Send bounds on the returned future. Since users cannot add bounds later, the warning is saying that you as a trait author need to make a choice: do you want your trait to work with multithreaded, work-stealing executors?

Thankfully, we have a solution that allows using async fn in public traits today! We recommend using the trait_variant::make proc macro to let your users choose. This proc macro is part of the trait-variant crate, published by the rust-lang org. Add it to your project with cargo add trait-variant, then use it like so:

#[trait_variant::make(HttpService: Send)]
pub trait LocalHttpService {
    async fn fetch(&self, url: Url) -> HtmlBody;
}

This creates two versions of your trait: LocalHttpService for single-threaded executors and HttpService for multithreaded work-stealing executors. Since we expect the latter to be used more commonly, it has the shorter name in this example. It has additional Send bounds:

pub trait HttpService: Send {
    fn fetch(
        &self,
        url: Url,
    ) -> impl Future<Output = HtmlBody> + Send;
}

This macro works for async because impl Future rarely requires additional bounds other than Send, so we can set our users up for success. See the FAQ below for an example of where this is needed.

Dynamic dispatch

Traits that use -> impl Trait and async fn are not object-safe, which means they lack support for dynamic dispatch. We plan to provide utilities that enable dynamic dispatch in an upcoming version of the trait-variant crate.

How we hope to improve in the future

In the future we would like to allow users to add their own bounds to impl Trait return types, which would make them more generally useful. It would also enable more advanced uses of async fn. The syntax might look something like this:

trait HttpService = LocalHttpService<fetch(): Send> + Send;

Since these aliases won't require any support on the part of the trait author, it will technically make the Send variants of async traits unnecessary. However, those variants will still be a nice convenience for users, so we expect that most crates will continue to provide them.

Of course, the goals of the Async Working Group don't stop with async fn in traits. We want to continue building features on top of it that enable more reliable and sophisticated use of async Rust, and we intend to publish a more extensive roadmap in the new year.

Frequently asked questions

Is it okay to use -> impl Trait in traits?

For private traits you can use -> impl Trait freely. For public traits, it's best to avoid them for now unless you can anticipate all the bounds your users might want (in which case you can use #[trait_variant::make], as we do for async). We expect to lift this restriction in the future.

Should I still use the #[async_trait] macro?

There are a couple of reasons you might need to continue using async-trait:

  • You want to support Rust versions older than 1.75.
  • You want dynamic dispatch.

As stated above, we hope to enable dynamic dispatch in a future version of the trait-variant crate.

Is it okay to use async fn in traits? What are the limitations?

Assuming you don't need to use #[async_trait] for one of the reasons stated above, it's totally fine to use regular async fn in traits. Just remember to use #[trait_variant::make] if you want to support multithreaded runtimes.

The biggest limitation is that a type must always decide whether it implements the Send or the non-Send version of a trait. It cannot implement the Send version conditionally on one of its generics. This comes up in the middleware pattern: for example, a RequestLimitingService<T> that should implement HttpService whenever T: HttpService.

Why do I need #[trait_variant::make] and Send bounds?

In simple cases you may find that your trait appears to work fine with a multithreaded executor. There are some patterns that just won't work, however. Consider the following:

fn spawn_task(service: impl HttpService + 'static) {
    tokio::spawn(async move {
        let url = Url::from("https://rust-lang.org");
        let _body = service.fetch(url).await;
    });
}

Without Send bounds on our trait, this would fail to compile with the error: "future cannot be sent between threads safely". By creating a variant of your trait with Send bounds, you avoid sending your users into this trap.

Note that you won't see a warning if your trait is not public, because if you run into this problem you can always add the Send bounds yourself later.

For a more thorough explanation of the problem, see this blog post.2

Can I mix async fn and impl trait?

Yes, you can freely move between the async fn and -> impl Future spelling in your traits and impls. This is true even when one form has a Send bound.3 This makes the traits created by trait_variant nicer to use.

trait HttpService: Send {
    fn fetch(&self, url: Url)
    -> impl Future<Output = HtmlBody> + Send;
}

impl HttpService for MyService {
    async fn fetch(&self, url: Url) -> HtmlBody {
        // This works, as long as `do_fetch(): Send`!
        self.client.do_fetch(url).await.into_body()
    }
}

Why don't these signatures use impl Future + '_?

For -> impl Trait in traits we adopted the 2024 Capture Rules early. This means that the + '_ you often see today is unnecessary in traits, because the return type is already assumed to capture input lifetimes. In the 2024 edition this rule will apply to all function signatures. See the linked RFC for more.

Why am I getting a "refine" warning when I implement a trait with -> impl Trait?

If your impl signature includes more detailed information than the trait itself, you'll get a warning:

pub trait Foo {
    fn foo(self) -> impl Debug;
}

impl Foo for u32 {
    fn foo(self) -> String {
//                  ^^^^^^
//  warning: impl trait in impl method signature does not match trait method signature
        self.to_string()
    }
}

The reason is that you may be leaking more details of your implementation than you meant to. For instance, should the following code compile?

fn main() {
    // Did the implementer mean to allow
    // use of `Display`, or only `Debug` as
    // the trait says?
    println!("{}", 32.foo());
}

Thanks to refined trait implementations it does compile, but the compiler asks you to confirm your intent to refine the trait interface with #[allow(refining_impl_trait)] on the impl.

Conclusion

The Async Working Group is excited to end 2023 by announcing the completion of our primary goal for the year! Thank you to everyone who helpfully participated in design, implementation, and stabilization discussions. Thanks also to the users of async Rust who have given great feedback over the years. We're looking forward to seeing what you build, and to delivering continued improvements in the years to come.

  1. Note that associated types can only be used in cases where the type is nameable. This restriction will be lifted once impl_trait_in_assoc_type is stabilized.

  2. Note that in that blog post we originally said we would solve the Send bound problem before shipping async fn in traits, but we decided to cut that from the scope and ship the trait-variant crate instead.

  3. This works because of auto-trait leakage, which allows knowledge of auto traits to "leak" from an item whose signature does not specify them.

Mozilla Localization (L10N)2024 Pontoon survey results

The results from the 2024 Pontoon survey are in and the 3 top-voted features we commit to implement are:

  1. Add ability to edit Translation Memory entries (611 votes).
  2. Improve performance of Pontoon translation workspace and dashboards (603 votes).
  3. Add ability to propose new Terminology entries (595 votes).

The remaining features ranked as follows:

  1. Add ability to preview Fluent strings in the editor (572 votes).
  2. Link project names in Concordance search results to corresponding strings (540 votes).
  3. Add “Copy translation from another locale as suggestion” batch action (523 votes).
  4. Add ability to receive automated notifications via email (521 votes).
  5. Add Timeline tab with activity to Project, Locale, ProjectLocale dashboards (501 votes).
  6. Add ability to read notifications one by one, or mark notifications as unread (495 votes).
  7. Add virtual keyboard with special characters to the editor (469 votes).

We thank everyone who dedicated their time to share valuable responses and suggest potential features for us to consider implementing!

A total of 365 Pontoon users participated in the survey, 169 of whom voted on all features. Each user could give each feature 1 to 5 votes. Check out the full report.

We look forward to implementing these new features and working towards a more seamless and efficient translation experience with Pontoon. Stay tuned for updates!

Firefox Developer ExperienceFirefox DevTools Newsletter — 121

Developer Tools help developers write and debug websites on Firefox. This newsletter gives an overview of the work we’ve done as part of the Firefox 121 release cycle.

text-wrap (https://developer.mozilla.org/en-US/docs/Web/CSS/text-wrap) is a new CSS property to control how text inside an element is wrapped. The balance value makes it so that:

text is wrapped in a way that best balances the number of characters on each line, enhancing layout quality and legibility. Because counting characters and balancing them across multiple lines is computationally expensive, this value is only supported for blocks of text spanning a limited number of lines

MDN

Sebastian Zartner added an inactive CSS warning shown when text-wrap: balance is ignored (#1851756), which can happen either because the text spans too many lines, or because the text is split across multiple columns.

Firefox Inspector Rules view, showing a rule with the text-wrap: balance property. The property is dimmed, and there's an info icon after its value. A tooltip is visible, pointing to the icon, and has the following text: "text-wrap has no effect on this element because it is fragmented, i.e. its content is split across multiple columns or pages. Avoid splitting the element’s content e.g. by removing the columns or by using page-break-inside:avoid"

Firefox being an open source project, we often get contributions from people outside of Mozilla. In 2023, we’re very grateful that 60 bugs were worked on by 30 unique external contributors. Thanks to all who helped us make Firefox DevTools better this year!

Want to help? DevTools are written in HTML, CSS and JS, so any web developer can contribute! Read how to set up the work environment and check the list of mentored issues.

:has(:not(.slow))

Yes, it’s here (finally, as people like to say on social media): the :has() pseudo-class (aka “parent” selector) landed in Firefox and is now available in all major browsers 🥳. We are probably as excited as you are about this new ability to style elements where we would previously have needed to add specific classes in JS.

As always with powerful technologies, this might come with a catch. The CSS engine now has to check more things to retrieve the elements matching a selector that uses :has(). In order to spot potential performance issues, an icon will be displayed next to rules that we think could be problematic (#1863006).

Firefox Inspector Rules view, showing a rule with the following selector: `:has(.mushroom:not(.gu-transit))`  After the selector, a yellow turtle icon is displayed. A tooltip is visible, pointing to the icon, and has the following text: "This selector uses unconstrained :has(), which can be slow"

This is not a silver bullet though: an unconstrained :has() might actually be okay if the page you’re using it on doesn’t have a lot of elements.

Elements relationships

Some HTML attributes (for, aria-labelledby, …) are used to reference specific element ids in the page. The markup view already allowed you to select the referenced node, either by Ctrl (Cmd on macOS) + click on the attribute, or via the context menu displayed on the attribute. As we got reports, even from Mozilla employees, asking for this already-existing feature, we figured out it wasn’t very discoverable.
As a result, there’s now an icon in the markup view next to the id reference (#1850953). Clicking on it will select the referenced node so you can quickly check its attributes, content or applied rules.

Debugger, don’t break

Did you know that you could pause in the Debugger without adding breakpoints in the UI? That’s right, adding a debugger statement in your code will make the Debugger pause at this very location. This can be extremely useful, but can also be frustrating if said debugger statement is in a hot path, or on a website you need to debug but don’t have access to the code.
We don’t like frustrated users, we want to make life as easy as possible for web developers, so we added a checkbox (#1578220) that you can uncheck to avoid pausing on debugger statements.

Screenshot of the Breakpoints section in the Firefox Debugger panel. There's a "Pause on debugger statement" checkbox displayed, checked.
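For illustration (the function and values here are made up), a debugger statement in frequently-called code triggers a pause on every invocation while the Debugger panel is open:

```javascript
function applyDiscount(price) {
  // With the Debugger open, execution pauses here on every call,
  // unless "Pause on debugger statement" is unchecked.
  debugger;
  return price * 0.9;
}

console.log(applyDiscount(100));
```

Unchecking the new checkbox lets the page run normally without having to edit the code.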

We also fixed a bug with the variable tooltip that could display erroneous values when line wrapping is enabled (#1815472).

Accessibility

As said in the previous newsletter, we’re working on accessibility issues in the toolbox, and fixed a few of them in this release:

  • Fixed color contrast for filtered properties, pseudo-class and unmatched selectors in the Rules view (#1843335, #1843343, #1863472, #1844958)
  • Added proper aria attributes on our Accordion component, used in the Debugger and Inspector (#1843318)
  • Made Debugger sections (Threads, Watch Expressions, …) appear as different semantic regions (#1843319)
  • Fixed keyboard navigation of stacktrace frame in the console (#1844092)

The most visible change here is the one we made to our focus indicators across the toolbox (#1855200, #1865047, #1862142), which should make keyboard navigation much easier.

Screenshot of the Firefox DevTools console. The category toggle buttons are visible (Errors, Warnings, Logs, …), and the Warnings button has a blue outline as a focus indicator.

The project is coming to an end, but we still have changes coming in future releases, and we’ll likely have another project next year to take care of issues in other tools like the Network panel.

Miscellaneous

SharedArrayBuffer objects logged from Worklets are now displayed in the Console (#1860888).
We addressed a bug impacting VueJS DevTools, where Custom formatters were not called with Proxy objects (#1857722).
Finally, we fixed an issue that could make the Network panel unusable (#1864905).


Thank you for reading this and using our tools, see you next month for a new round of exciting updates 🙂

Firefox Developer ExperienceFirefox WebDriver Newsletter — 121

WebDriver is a remote control interface that enables introspection and control of user agents. As such it can help developers to verify that their websites are working and performing well with all major browsers. The protocol is standardized by the W3C and consists of two separate specifications: WebDriver classic (HTTP) and the new WebDriver BiDi (Bi-Directional).

This newsletter gives an overview of the work we’ve done as part of the Firefox 121 release cycle.

Contributions

With Firefox being an open source project, we are grateful to get contributions from people outside of Mozilla.

WebDriver code is written in JavaScript, Python, and Rust so any web developer can contribute! Read how to set up the work environment and check the list of mentored issues for Marionette.

WebDriver BiDi

New: “browsingContext.contextDestroyed” event

browsingContext.contextDestroyed is a new event that allows clients to be notified when a context is discarded. This event will be emitted for instance when a tab is closed or when a frame is removed from the DOM. The event’s payload contains the context which was destroyed, its URL, and the parent context id (for child contexts). Note that when closing a tab containing iframes, only a single event will be emitted for the top-level context to avoid unnecessary protocol traffic.
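As an illustration (the ids and URL are made up, and the payload is abridged), such an event might look like the following, in the same shape as the other payloads in this post:

```json
{
  "type": "event",
  "method": "browsingContext.contextDestroyed",
  "params": {
    "context": "d743efa4-5e80-47a5-b0c8-dbf84a53f1f4",
    "url": "https://example.com/frame.html",
    "parent": "67b77507-0728-496f-b951-72650ead8c8a"
  }
}
```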

Support for “userActivation” parameter in script.callFunction and script.evaluate

The userActivation parameter is a boolean which allows the script.callFunction and script.evaluate commands to execute JavaScript while simulating that the user is currently interacting with the page. This can be useful to use features which are only available on user activation, such as interacting with the clipboard. The default value for this parameter is false.
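As a sketch (the context id is made up), a script.evaluate command using clipboard access under simulated user activation could be sent as:

```json
{
  "id": 12,
  "method": "script.evaluate",
  "params": {
    "expression": "navigator.clipboard.writeText('hello')",
    "target": { "context": "67b77507-0728-496f-b951-72650ead8c8a" },
    "awaitPromise": true,
    "userActivation": true
  }
}
```

Without "userActivation": true, such a clipboard call would typically be rejected because the document has no transient user activation.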

Support for “defaultValue” field in browsingContext.userPromptOpened event

The browsingContext.userPromptOpened event will now provide a defaultValue field set to the default value of user prompts of type “prompt”. If the default value was not provided (or was an empty string), the defaultValue field is omitted.

Here is an example payload for a window.prompt usage:

{
  "type": "event",
  "method": "browsingContext.userPromptOpened",
  "params": {
    "context": "67b77507-0728-496f-b951-72650ead8c8a",
    "type": "prompt",
    "message": "What is your favorite automation protocol",
    "defaultValue": "WebDriver BiDi"
  }
}
Prompt example on a webpage.

Updates for the browsingContext.captureScreenshot command

The browsingContext.captureScreenshot command received several updates, some of which are non backwards-compatible.

First, the scrollIntoView parameter was removed. The parameter could lead to confusing results as it does not ensure the scrolled element becomes fully visible. If needed, it is easy to scroll into view using script.evaluate.
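For example (the context id and selector are made up), scrolling an element into view before taking the screenshot is a one-liner with script.evaluate:

```json
{
  "id": 8,
  "method": "script.evaluate",
  "params": {
    "expression": "document.querySelector('footer').scrollIntoView()",
    "target": { "context": "67b77507-0728-496f-b951-72650ead8c8a" },
    "awaitPromise": false
  }
}
```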

The clip parameter value BoxClipRectangle renamed its type property from “viewport” to “box”.

Finally, a new origin parameter was added with two possible values: “document” or “viewport” (defaults to “viewport”). This argument allows clients to define the origin and bounds of the screenshot. Typically, in order to take “full page” screenshots, using the “document” value will allow the screenshot to expand beyond the viewport, without having to scroll manually. In combination with the clip parameter, this should allow more flexibility to take page, viewport or element screenshots.

Typically, you can use the origin set to “document” and the clip type “element” to take screenshots of elements without worrying about the scroll position or the viewport size:

{
  "context": "67b77507-0728-496f-b951-72650ead8c8a",
  "origin": "document",
  "clip": {
    "type": "element",
    "element": { 
      "sharedId": "67b77507-0728-496f-b951-72650ead8c8a" 
    }
  }
}
Left: an example page scrolled to the top. Right: screenshot of the page footer, which was scrolled out and taller than the viewport, using origin “document” and clip type “element”.

Added context property for Window serialization

Serialized Window or Frame objects now contain a context property which contains the corresponding context id. This id can then be used to send commands to this Window/Frame and can also be exchanged with WebDriver Classic (Marionette).

Bug Fixes

Marionette (WebDriver classic)

Added support for Window and Frame serialization

Marionette now supports serialization and deserialization of Window and Frame objects.

Mozilla ThunderbirdWhen Will Thunderbird For Android Be Released?

When will Thunderbird for Android be released? This is a question that comes up quite a lot, and we appreciate that you’re all excited to finally put Thunderbird in your pocket. It’s not a simple answer, but we’ll do our best to explain why things are taking longer than expected.

We have always been a bit vague on when we were going to release Thunderbird for Android. At first this was because we still had to figure out what features we wanted to add to K-9 Mail before we were comfortable calling it Thunderbird. Once we had a list, we estimated how long it would take to add those features to the app. Then something happened that always happens in software projects – things took longer than expected. So we cut down on features and aimed for a release at the end of 2023. As we got closer to the end of the year, it became clear that even with the reduced set of features, the release date would have almost certainly slipped into early 2024.

We then sat together and reevaluated the situation. In the end we decided that there’s no rush. We’ll work on the features we wanted in the app in the first place, because you deserve the best mobile experience we can give you. Once those features have been added, we’ll release the app as Thunderbird for Android.

Why Wait? Try K-9 Mail Now

But of course you don’t have to wait until then. All our development happens out in the open. The stable version of K-9 Mail contains all of the features we have already completed. The beta version of K-9 Mail contains the feature(s) we’re currently working on.

Both stable and beta versions can be installed via F-Droid or Google Play.

K-9 Mail’s Future

Side note: Quite a few people seem to love K-9 Mail and have asked us to keep the robot dog around. We believe it should be relatively little effort to build two apps from one code base. The apps would be virtually identical and only differ in app name, app icon, and the color scheme. So our current plan is to keep K-9 Mail around.

Whether you prefer metal dogs or mythical birds, we’ve got you covered.

The post When Will Thunderbird For Android Be Released? appeared first on The Thunderbird Blog.

Mozilla ThunderbirdThunderbird for Android / K-9 Mail: November/December 2023 Progress Report

a dark background with thunderbird and k-9 mail logos centered, with the text "Thunderbird for Android, November 2023 progress report"

In February 2023 we started publishing monthly reports on the progress of transforming K-9 Mail into Thunderbird for Android. Somewhat to my surprise, we managed to keep this up throughout the entire year. 

But since the end of the year company shutdown is coming up and both Wolf and I have some vacation days left, this will be the last progress report of the year, covering both November and December. If you need a refresher on where we left off previously, know that the progress report for October is only one click away.

New Home On Google Play

If you’ve recently visited K-9 Mail’s page on Google Play you might have noticed that the developer name changed from “K-9 Dog Walkers” to “Mozilla Thunderbird”. That’s because we finally got around to moving the app to a developer account owned by Thunderbird.

I’d like to use this opportunity to thank Jesse Vincent, who not only founded the K-9 Mail project, but also managed the Google Play developer account for all these years. Thank you ♥

Asking For Android permissions

Previously, the app asked the user to grant the permission to access contacts when the message list or compose screens were displayed. 

The app asked for the contacts permission every time one of these screens was opened. That’s not as bad as it sounds. Android automatically ignores such a request after the user has selected the “deny” option twice. Unfortunately, dismissing the dialog e.g. by using the back button, doesn’t count as denying the permission request. So users who chose that option to get rid of the dialog were asked again and again. Clearly not a great experience.

So we changed it. Now, the app no longer asks for the contacts permission in those screens. Instead, asking the user to grant permissions is now part of the onboarding flow. After adding the first account, users will see the following screen:

The keen observer will have noticed that the app is now also asking for the permission to create notifications. Since the introduction of notification categories in Android 8, users have always had the option to disable some or all notifications created by an app. But starting with Android 13, users now have to explicitly grant the permission to create notifications.

While the app will work without the notification permission, you should still grant it to the app, at least for now. Currently, some errors (e.g. when sending an email has failed) are only communicated via a notification. 

And don’t worry, granting the permission doesn’t mean you’ll be bombarded with notifications. You can still configure whether you want to get notifications for new messages on a per account basis.

Improved Account Setup

This section has been a fixture in the last couple of progress reports. The new account setup code has been a lot of work. And we’re still not quite done yet. However, it already is in a state where it’s a vast improvement over what we had previously.

Bug fixes

Thanks to feedback from beta testers, we identified and fixed a couple of bugs.

  • The app was crashing when trying to display an error message after the user had entered an invalid or unsupported email address.
  • While fixing the bug above, we also noticed that some placeholder code to validate email addresses was still used. We replaced that code and improved error messages, e.g. when encountering a syntactically valid, but deliberately unsupported email address like test@[127.0.0.1].
  • A user reported a crash when trying to set up an account with a particular email domain. We tracked this down to an MX DNS record containing an underscore. That’s not a valid character for a hostname. The app already checked for that, but the error wasn’t caught and so crashed the app.

User experience improvements

Thanks to feedback from people who went through the manual setup flow multiple times, we identified a couple of usability issues. We made some changes like disabling auto-correct in the server name text field and copying the password entered in the incoming server settings screen to the outgoing server settings screen.

Hopefully, automatic account setup will just work for you. But if you have to use the manual setup route, at least now it should be a tiny bit less annoying.

Edit server settings

Editing incoming or outgoing server settings is not strictly part of setting up an account. However, the same screens used in the manual account setup flow are also used when editing server settings of an existing account (e.g. by going to Settings → [Account] → Fetching mail → Incoming server). 

The screens don’t behave exactly the same in both instances, so some changes were necessary. In November we finally got around to adapting the screens. And now the new UI is also used when editing server settings.

Targeting Android 13

Every year Google requires Android developers to change their apps to support the new (security) features and restrictions of the Android version that was released the prior year. This is automatically enforced by only allowing developers to publish app updates on Google Play when they “target” the required Android version. This year’s deadline was August 31.

There was only one change in Android 13 that affected K-9 Mail. Once an app targets this Android version, it has to ask the user for permission before being able to create notifications. Since our plans already included adding a new screen to ask for permissions during onboarding, we didn’t spend too much time worrying about the deadline.

But due to us being busy working on other features, we only got around to adding the permission screen in November. We requested an extension to the deadline, which (to my surprise) seems to have been granted automatically. Still, there was a brief period of time where we weren’t able to publish new beta versions because we missed the extended deadline by a couple of days.

We’ll prioritize updating the app to target the latest Android version in the future.

Push Not Working On Android 14

When Push is enabled, K-9 Mail uses what the developer documentation calls “exact alarms” to periodically refresh its Push connection to the server. Starting with Android 12, apps need to request a separate permission to use exact alarms. But the permission itself was granted automatically.

In Android 14 (released in October 2023) Google changed the behavior and Android no longer pre-grants this permission to newly installed apps. However, instead of limiting this to apps targeting Android 14, for some reason they decided to extend this behavior change to apps targeting Android 13.

This unfortunate choice by the creator of Android means that Push is currently not working for users who perform a fresh install of K-9 Mail 6.712 or newer on Android 14. Upgrading from a previous version of K-9 Mail should be fine since the permission was then granted automatically in the past.

At the beginning of next year we’ll be working on adding a screen to guide the user to grant the necessary permission when enabling Push on Android 14. Until then, you can manually grant the permission by opening Android’s App info screen for the app, then enable Allow setting alarms and reminders under Alarms & reminders.

Community Contributions

In November and December the following contributions by community members were merged into K-9 Mail:

Thanks for the contributions! ❤

Releases

If you want to help shape future versions of the app, become a beta tester and provide feedback on new features while they are still in development.

The post Thunderbird for Android / K-9 Mail: November/December 2023 Progress Report appeared first on The Thunderbird Blog.

Mozilla Performance BlogPerformance Testing Newsletter (Q4 Edition)

Welcome to the latest edition of the Performance Testing Newsletter! The PerfTools team empowers engineers with tools to continuously improve the performance of Mozilla products. See below for highlights from the changes made in the last quarter.

Highlights  🎉

Blog Posts  ✍️

Contributors

  • MyeongJun Go [:myeongjun]

If you have any questions, or are looking to add performance testing for your code component, you can find us in #perftest on Element, or #perf-help on Slack.

P.S. If you’re interested in including updates from your teams in a quarterly newsletter like this, and you are not currently covered by another newsletter, please reach out to me (:sparky). I’m interested in making a more general newsletter for these.

The Talospace ProjectFirefox 121

We're still in the process of finding a place to live at the new job and alternating back and forth to the tune of 400 miles each way. Still, this weekend I updated Firefox on the Talos II to Fx121, which fortunately also builds fine with the WebRTC patch from Fx116 (or --disable-webrtc in your .mozconfig), the PGO-LTO patch from Fx117 and the .mozconfigs from Firefox 105.

Unfortunately I had intended to also sit down with the Blackbird and do a test upgrade to Fedora 39 before doing so on the Talos II, but the Blackbird BMC's persistent storage seems to be hosed, the BMC password is whacked and the clock is permanently stuck in June 2022, causing signature checks on the upgrade to fail (even with --nopgpcheck). This is going to require a little work with a serial console and I just didn't have enough spare cycles over the weekend, so I'll do that over the Christmas holiday when we have a few free days. Hopefully I can also get some more work done on upstreaming the JIT at the same time.

The Servo BlogThis year in Servo: over 1000 pull requests and beyond

Servo is well and truly back.

Bar chart: 453 (44%) by Igalia, 195 (19%) by non-Igalia, 389 (37%) by bots <figcaption>Contributors to servo/servo in 2023.</figcaption>

This year, to date, we’ve had 53 unique contributors (+140% over 22 last year), landing 1037 pull requests (+382% over 215) and 2485 commits (+375% over 523), and that’s just in our main repo!

Individual contributors are especially important for the health of the project, and of the pull requests made by humans (rather than our friendly bots), 30% were by people outside Igalia, and 18% were by non-reviewers.

Servo has been featured in six conference talks this year, including at RustNL, Web Engines Hackfest, LF Europe Member Summit, Open Source Summit Europe, GOSIM Workshop, and GOSIM Conference.

Servo now has a usable “minibrowser” UI, supports offscreen rendering, has updated its experimental WebGPU support (--pref dom.webgpu.enabled), and is listed on wpt.fyi again (click Edit to add Servo).

Our new layout engine is now proving its strengths, with support for iframes, floats, stacking context improvements, inline layout improvements, margin collapsing, ‘position: sticky’, ‘min-width’ and ‘min-height’, ‘max-width’ and ‘max-height’, ‘align-content’, ‘justify-content’, ‘white-space’, ‘text-indent’, ‘text-align: justify’, ‘outline’ and ‘outline-offset’, and ‘filter: drop-shadow()’.

Bar chart: 17% + 64pp in floats, 18% + 55pp in floats-clear, 63% + 15pp in key CSS2 tests, 80% + 14pp in abspos, 34% + 14pp in CSS position module, 67% + 13pp in margin-padding-clear, 49% + 13pp in CSSOM, 51% + 10pp in all CSS tests, 49% + 6pp in all WPT tests <figcaption style="margin: 0 auto;">Pass rates in parts of the Web Platform Tests with our new layout engine, showing the improvement we’ve made since the start of our data in April 2023.</figcaption>

Floats are notoriously tricky, to the point we found them impossible to implement correctly in our legacy layout engine, but thanks to the move from eager to opportunistic parallelism, they are now supported fairly well. Whereas legacy layout was only ever able to reach 53.9% in the floats tests and 68.2% in floats-clear, we’re now at 82.2% in floats (+28.3pp over legacy) and 73.3% in floats-clear (+5.1pp over legacy).

Acid1 now passes in the new layout engine, and we’ve also surpassed legacy layout in the CSS2 abspos (by 50.0pp), CSS2 positioning (by 6.5pp), and CSS Position (by 4.4pp) test suites, while making big strides in others, like the CSSOM tests (+13.1pp) and key parts of the CSS2 test suite (+15.8pp).

Next year, our funding will go towards maintaining Servo, releasing nightlies on Android, finishing our integration with Tauri (thanks to NLNet), and implementing tables and better support for floats and non-Latin text (thanks to NLNet).

Servo will also be at FOSDEM 2024, with Rakhi Sharma speaking about embedding Servo in Rust projects on 3 February at 16:45 local time (15:45 UTC). See you there!

There’s a lot more we would like to do, so if you or a company you know are interested in sponsoring the development of an embeddable, independent, memory-safe, modular, parallel web rendering engine, we want to hear from you! Head over to our sponsorship page, or email join@servo.org for enquiries.

In a decade that many people feared would become the nadir of browser engine diversity, we hope we can help change that with Servo.

The Rust Programming Language BlogLaunching the 2023 State of Rust Survey

It’s time for the 2023 State of Rust Survey!

Since 2016, the Rust Project has collected valuable information and feedback from the Rust programming language community through our annual State of Rust Survey. This tool allows us to more deeply understand how the Rust Project is performing, how we can better serve the global Rust community, and who our community is composed of.

Like last year, the 2023 State of Rust Survey will likely take you between 10 and 25 minutes, and responses are anonymous. We will accept submissions until Monday, January 15th, 2024. Trends and key insights will be shared on blog.rust-lang.org as soon as possible in 2024.

We invite you to take this year’s survey whether you have just begun using Rust, you consider yourself an intermediate to advanced user, or you have not yet used Rust but intend to one day. Your responses will help us improve Rust over time by shedding light on gaps to fill in the community and development priorities, and more.

Once again, we are offering the State of Rust Survey in the following languages (if you speak multiple languages, please pick one). Language options are available on the main survey page:

  • English
  • Simplified Chinese
  • French
  • German
  • Japanese
  • Russian
  • Spanish

Please help us spread the word by sharing the survey link via your social media networks, at meetups, with colleagues, and in any other community that makes sense to you.

This survey would not be possible without the time, resources, and attention of members of the Survey Working Group, the Rust Foundation, and other collaborators. Thank you!

If you have any questions, please see our frequently asked questions.

We appreciate your participation!

Click here to read a summary of last year's survey findings.

Patrick ClokeMatrix Intentional Mentions explained

Previously I have written about how push rules generate notifications and how read receipts mark notifications as read in the Matrix protocol. This article is about a change that I instigated to improve when a “mention” (or “ping”) notification is created. (This is a “highlight” notification in the Matrix specification.)

This was part of the work I did at Element to reduce unintentional pings. I preferred thinking of it in the positive — that we should only generate a mention on purpose, hence “intentional” mentions. MSC3952 details the technical protocol changes, but this serves as a bit of a higher-level overview (some of this content is copied from the MSC).

Note

This blog post assumes that the default push rules are enabled; they can be heavily modified, disabled, etc., but that is ignored in this post.

Legacy mentions

The legacy mention system searches for the current user’s display name or the localpart of the Matrix ID [1] in the text content of an event. For example, an event like the following would generate a mention for me:

{
  // Additional fields ignored.
  "content": {
    "body": "Hello @clokep:matrix.org!"
  }
}

A body content field [2] containing clokep or Patrick Cloke would cause a “highlight” notification (displayed as red in Element). This isn’t uncommon in chat protocols and is how IRC and XMPP handle mentions.
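The legacy behavior can be approximated with a case-insensitive, word-boundary search. This is an illustrative simplification, not Synapse’s exact matching logic:

```python
import re

def legacy_mention(body: str, display_name: str, localpart: str) -> bool:
    """Approximate the legacy rule: highlight if the body contains the
    user's display name or the localpart of their Matrix ID."""
    pattern = rf"\b({re.escape(localpart)}|{re.escape(display_name)})\b"
    return re.search(pattern, body, re.IGNORECASE) is not None

# Both the localpart and the display name trigger a highlight.
print(legacy_mention("Hello @clokep:matrix.org!", "Patrick Cloke", "clokep"))  # True
```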

Some of the issues with this are:

There were some prior attempts to fix this, but I would summarize them as attempting to reduce edge-cases instead of attempting to rethink how mentions are done.

Intentional mentions

I chose to call this “intentional” mentions since the protocol now requires explicitly referring to the Matrix IDs to mention in a dedicated field, instead of implicit references in the text content.

The overall change is simple: include a list of mentioned users in a new content field, e.g.:

{
  // Additional fields ignored.
  "content": {
    "body": "Hello @clokep:matrix.org!",
    "m.mentions": {
      "user_ids": ["@clokep:matrix.org"]
    }
  }
}

Only the m.mentions field is used to generate mentions, the body field is no longer involved. Not only does this remove a whole class of potential bugs, but also allows for “hidden” mentions and paves the way for mentions in extensible events (see MSC4053).
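A minimal sketch of the new check (a hypothetical helper, not Synapse’s actual push-rule code):

```python
def is_mentioned(event: dict, user_id: str) -> bool:
    """Mention check under intentional mentions: only the m.mentions
    field counts; the body text is never searched."""
    mentions = event.get("content", {}).get("m.mentions", {})
    return user_id in mentions.get("user_ids", [])
```

Given the event above, is_mentioned(event, "@clokep:matrix.org") is true regardless of what the body says.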

That’s the gist of the change, although the MSC goes deeper into backwards compatibility, and interacting with replies or edits.

Comparison to other protocols

The m.mentions field is similar to how Twitter, Mastodon, Discord, and Microsoft Teams handle mentioning users. The main downside of this approach is that it is not obvious where in the text the user’s mention is (and allows for hidden mentions).

The other seriously considered approach was searching for “pills” in the HTML content of the event. This is similar to how Slack handles mentions, where the user ID is encoded with some markup [3]. This has a major downside of requiring HTML parsing on a hotpath of processing notifications (and it is unclear how this would work for non-HTML clients).

Can I use this?

You can! The MSC was approved and included in Matrix 1.7, and Synapse has had support since v1.86.0; it is pretty much up to clients to implement it!

Element Web has handled (and sent) intentional mentions since v1.11.37, although I’m not aware of other clients which do (Element X might now). Hopefully it will become used throughout the ecosystem since many of the above issues are still common complaints I see with Matrix.

[1]This post ignores room-mentions, but they’re handled very similarly.
[2]Note that the plaintext content of the event is searched not the “formatted” content (which is usually HTML).
[3]This solution should also reduce the number of unintentional mentions, but doesn’t allow for hidden mentions.

Patrick ClokeMatrix Presence

I put together some notes on presence when implementing multi-device support for presence in Synapse, maybe this is helpful to others! This is a combination of information from the specification, as well as some information about how Synapse works.

Note

These notes are true as of the v1.9 of the Matrix spec and also cover some Matrix spec changes which may or may not have been merged since.

Presence in Matrix

Matrix includes basic presence support, which is explained decently from the specification:

Each user has the concept of presence information. This encodes:

  • Whether the user is currently online
  • How recently the user was last active (as seen by the server)
  • Whether a given client considers the user to be currently idle
  • Arbitrary information about the user’s current status (e.g. “in a meeting”).

This information is collated from both per-device (online, idle, last_active) and per-user (status) data, aggregated by the user’s homeserver and transmitted as an m.presence event. Presence events are sent to interested parties where users share a room membership.

A user’s presence state is represented by the presence key, which is an enum with one of the following values:

  • online : The default state when the user is connected to an event stream.
  • unavailable : The user is not reachable at this time e.g. they are idle. [1]
  • offline : The user is not connected to an event stream or is explicitly suppressing their profile information from being sent.

MSC3026 defines a busy presence state:

the user is online and active but is performing an activity that would prevent them from giving their full attention to an external solicitation, i.e. the user is online and active but not available.

Presence information is returned to clients in the presence key of the sync response as a m.presence EDU which contains:

  • currently_active: Whether the user is currently active (boolean)
  • last_active_ago: The time since this user last performed some action, in milliseconds.
  • presence: online, unavailable, or offline (or busy)
  • status_msg: An optional description to accompany the presence.

Updating presence

Clients can call PUT /_matrix/client/v3/presence/{userId}/status to update the presence state & status message, or can set the presence state via the set_presence parameter on a /sync request.

Note that when using the set_presence parameter, offline is equivalent to “do not make a change”.
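For example, the request body for PUT /_matrix/client/v3/presence/@clokep:matrix.org/status might look like the following (the status message is illustrative):

```json
{
  "presence": "online",
  "status_msg": "in a meeting"
}
```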

User activity

From the Matrix spec on last active ago:

The server maintains a timestamp of the last time it saw a pro-active event from the user. A pro-active event may be sending a message to a room or changing presence state to online. This timestamp is presented via a key called last_active_ago which gives the relative number of milliseconds since the pro-active event.

If the presence is set to online then last_active_ago is not part of the /sync response and currently_active is returned instead.

Idle timeout

From the Matrix spec on automatically idling users:

The server will automatically set a user’s presence to unavailable if their last active time was over a threshold value (e.g. 5 minutes). Clients can manually set a user’s presence to unavailable. Any activity that bumps the last active time on any of the user’s clients will cause the server to automatically set their presence to online.

MSC3026 also recommends:

If a user’s presence is set to busy, it is strongly recommended for implementations to not implement a timer that would trigger an update to the unavailable state (like most implementations do when the user is in the online state).

Presence in Synapse

Note

This describes Synapse’s behavior after v1.93.0. Before that version Synapse did not account for multiple devices, essentially meaning that the latest device update won.

This also only applies to local users; per-device information for remote users is not available, only the combined per-user state.

A user’s devices can set a device’s presence state and the user’s status message. A user’s device knows better than the server whether they’re online and should send that state as part of /sync calls (e.g. sending online or unavailable or offline).

Thus a device is only ever able to set the “minimum” presence state for the user. Presence states are coalesced across devices as busy > online > unavailable > offline. You can build simple truth tables of how these combine with multiple devices:

Device 1      Device 2      User state
online        unavailable   online
busy          online        busy
unavailable   offline       unavailable
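This coalescing rule amounts to taking the maximum over a priority ordering. A minimal sketch (illustrative code, not Synapse’s implementation):

```python
# Ordering from the text: busy > online > unavailable > offline.
PRESENCE_PRIORITY = {"offline": 0, "unavailable": 1, "online": 2, "busy": 3}

def coalesce_presence(device_states: list[str]) -> str:
    """Combine per-device presence states into a single user state."""
    if not device_states:
        return "offline"
    return max(device_states, key=PRESENCE_PRIORITY.__getitem__)

print(coalesce_presence(["online", "unavailable"]))  # online
```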

Additionally, users expect to see the latest activity time across all devices. (And therefore if any device is online and the latest activity is recent then the user is currently active).

The status message is global and setting it should always override any previous state (and never be cleared automatically).

Automatic state transitions

Note

Note that the below only describes the logic for local users. Data received over federation is handled differently.

If a device is unavailable or offline it should transition to online if a “pro-active event” occurs. This includes sending a receipt or event, or syncing without set_presence or set_presence=online.

If a device is offline it should transition to unavailable if it is syncing with set_presence=unavailable.

If a device is online (either directly or implicitly via user actions) it should transition to unavailable (idle) after a period of time [2] if the device is continuing to sync. (Note that this implies the sync is occurring with set_presence=unavailable as otherwise the device is continuing to report as online). [3]

If a device is online or unavailable it should transition to offline after a period of time if it is not syncing and not making other actions which would transition the device to online. [4]

Note if a device is busy it should not transition to other states. [5]

There’s a huge testcase which checks all these transitions.

Examples
  1. Two devices continually syncing, one online and one unavailable. The end result should be online. [6]
  2. One device syncing with set_presence=unavailable but had a “pro-active” action, after a period of time the user should be unavailable if no additional “pro-active” actions occurred.
  3. One device that stops syncing (and no other “pro-active” actions are occurring), after a period of time the user should be offline.
  4. Two devices continually syncing, one online and one unavailable. The online device stops syncing, after a period of time the user should be unavailable.
[1]This should be called idle.
[2]The period of time is implementation specific.
[3]Note that syncing with set_presence=offline does not transition to offline, it is equivalent to not syncing. (It is mostly for mobile applications to process push notifications.)
[4]The spec doesn’t seem to ever say that devices can transition to offline.
[5]See the open thread on the MSC3026.
[6]This is essentially the bug illustrated by the change in Element Web’s behavior.

The Rust Programming Language BlogA Call for Proposals for the Rust 2024 Edition

The year 2024 is soon to be upon us, and as long-time Rust aficionados know, that means that a new Edition of Rust is on the horizon!

What is an Edition?

You may be aware that a new version of Rust is released every six weeks. New versions of the language can both add things as well as change things, but only in backwards-compatible ways, according to Rust's 1.0 stability guarantee.

But does that mean that Rust can never make backwards-incompatible changes? Not quite! This is what an Edition is: Rust's mechanism for introducing backwards-incompatible changes in a backwards-compatible way. If that sounds like a contradiction, there are three key properties of Editions that preserve the stability guarantee:

  1. Editions are opt-in; crates only receive breaking changes if their authors explicitly ask for them.

  2. Crates that use older editions never get left behind; a crate written for the original Rust 2015 Edition is still supported by every Rust release, and can still make use of all the new goodies that accompany each new version, e.g. new library APIs, compiler optimizations, etc.

  3. An Edition never splits the library ecosystem; crates using new Editions can depend on crates using old Editions (and vice-versa!), so nobody ever has to worry about Edition-related incompatibility.

In order to keep churn to a minimum, a new Edition of Rust is only released once every three years. We've had the 2015 Edition, the 2018 Edition, the 2021 Edition, and soon, the 2024 Edition. And we could use your help!

A call for proposals for the Rust 2024 Edition

We know how much you love Rust, but let's be honest, no language is perfect, and Rust is no exception. So if you've got ideas for how Rust could be better if only that pesky stability guarantee weren't around, now's the time to share! Also note that potential Edition-related changes aren't just limited to the language itself: we'll also consider changes to both Cargo and rustfmt as well.

Please keep in mind that the following criteria determine the sort of changes we're looking for:

  1. A change must be possible to implement without violating the strict properties listed in the prior section. Specifically, the ability of crates to have cross-Edition dependencies imposes restrictions on changes that would take effect across crate boundaries, e.g. the signatures of public APIs. However, we will occasionally discover that an Edition-related change that was once thought to be impossible actually turns out to be feasible, so hope is not lost if you're not sure if your idea meets this standard; propose it just to be safe!
     • We strive to ensure that nearly all Edition-related changes can be applied to existing codebases automatically (via tools like cargo fix), in order to make upgrading to a new Edition as painless as possible.

  2. Even if an Edition could make any given change, that doesn't mean that it should. We're not looking for hugely-invasive changes or things that would fundamentally alter the character of the language. Please focus your proposals on things like fixing obvious bugs, changing annoying behavior, unblocking future feature development, and making the language easier and more consistent.

To spark your imagination, here's a real-world example. In the 2015 and 2018 Editions, iterating over a fixed-length array via [foo].into_iter() will yield references to the iterated elements; this is surprising because, on other types, calling .into_iter() produces an iterator that yields owned values rather than references. This limitation existed because older versions of Rust lacked the ability to implement traits for all possible fixed-length arrays in a generic way. Once Rust finally became able to express this, all Editions at last gained the ability to iterate over owned values in fixed-length arrays; however, in the specific case of [foo].into_iter(), altering the existing behavior would have broken lots of code in the wild. Therefore, we used the 2021 Edition to fix this inconsistency for the specific case of [foo].into_iter(), allowing us to address this long-standing issue while preserving Rust's stability guarantees.

How to contribute

Just like other changes to Rust, Edition-related proposals follow the RFC process, as documented in the Rust RFCs repository. Please follow the process documented there, and please consider publicizing a draft of your RFC to collect preliminary feedback before officially submitting it, in order to expedite the RFC process once you've filed it for real! (And in addition to the venues mentioned in the prior link, please feel free to announce your pre-RFC to our Zulip channel.)

Please file your RFCs as soon as possible! Our goal is to release the 2024 Edition in the second half of 2024, which means we would like to get everything implemented (not only the features themselves, but also all the Edition-related migration tooling) by the end of May, which means that RFCs should be accepted by the end of February. And since RFCs take time to discuss and consider, we strongly encourage you to have your RFC filed by the end of December, or the first week of January at the very latest.

We hope to have periodic updates on the ongoing development of the 2024 Edition. In the meantime, if you have any questions or if you would like to help us make the new Edition a reality, we invite you to come chat in the #edition channel in the Rust Zulip.