hacks.mozilla.org: Fuzzing rust-minidump for Embarrassment and Crashes – Part 2

This is part 2 of a series of articles on rust-minidump. For part 1, see here.

So to recap, we rewrote breakpad’s minidump processor in Rust, wrote a ton of tests, and deployed to production without any issues. We killed it, perfect job.

And we still got massively dunked on by the fuzzer. Just absolutely destroyed.

I was starting to pivot off of rust-minidump work because I needed a bit of a palate cleanser before tackling round 2 (handling native debuginfo, filling in features for other groups who were interested in rust-minidump, adding extra analyses that we’d always wanted but were too much work to do in Breakpad, etc etc etc).

I was still getting some PRs from people filling in the corners they needed, but nothing that needed too much attention, and then @5225225 smashed through the windows and released a bunch of exploding fuzzy rabbits into my office.

I had no idea who they were or why they were there. When I asked they just lowered one of their seven pairs of sunglasses and said “Because I can. Now hold this bunny”. I did as I was told and held the bunny. It was a good bun. Dare I say, it was a true bnnuy: it was libfuzzer. (Huh? You thought it was gonna be AFL? Weird.)

As it turns out, several folks had built out some really nice infrastructure for quickly setting up a decent fuzzer for some Rust code: cargo-fuzz. They even wrote a little book that walks you through the process.

Apparently those folks had done such a good job that 5225225 had decided it would be a really great hobby to just pick up a random rust project and implement fuzzing for it. And then to fuzz it. And file issues. And PRs that fix those issues. And then implement even more fuzzing for it.

Please help my office is drowning in rabbits and I haven’t seen my wife in weeks.

As far as I can tell, the process seems to genuinely be pretty easy! I think their first fuzzer for rust-minidump was basically just:

  • check out the project
  • run cargo fuzz init (which autogenerates a bunch of config files)
  • write a file with this:

use libfuzzer_sys::fuzz_target;
use minidump::*;

fuzz_target!(|data: &[u8]| {
    // Parse a minidump like a normal user of the library
    if let Ok(dump) = minidump::Minidump::read(data) {
        // Ask the library to get+parse several streams like a normal user.

        let _ = dump.get_stream::<MinidumpAssertion>();
        let _ = dump.get_stream::<MinidumpBreakpadInfo>();
        let _ = dump.get_stream::<MinidumpCrashpadInfo>();
        let _ = dump.get_stream::<MinidumpException>();
        let _ = dump.get_stream::<MinidumpLinuxCpuInfo>();
        let _ = dump.get_stream::<MinidumpLinuxEnviron>();
        let _ = dump.get_stream::<MinidumpLinuxLsbRelease>();
        let _ = dump.get_stream::<MinidumpLinuxMaps>();
        let _ = dump.get_stream::<MinidumpLinuxProcStatus>();
        let _ = dump.get_stream::<MinidumpMacCrashInfo>();
        let _ = dump.get_stream::<MinidumpMemoryInfoList>();
        let _ = dump.get_stream::<MinidumpMemoryList>();
        let _ = dump.get_stream::<MinidumpMiscInfo>();
        let _ = dump.get_stream::<MinidumpModuleList>();
        let _ = dump.get_stream::<MinidumpSystemInfo>();
        let _ = dump.get_stream::<MinidumpThreadNames>();
        let _ = dump.get_stream::<MinidumpThreadList>();
        let _ = dump.get_stream::<MinidumpUnloadedModuleList>();
    }
});

And that’s… it? And all you have to do is type cargo fuzz run and it downloads, builds, and spins up an instance of libfuzzer and finds bugs in your project overnight?

Surely that won’t find anything interesting. Oh it did? It was largely all bugs in code I wrote? Nice.

cargo fuzz is clearly awesome but let’s not downplay the amount of bafflingly incredible work that 5225225 did here! Fuzzers, sanitizers, and other code analysis tools have a very bad reputation for drive-by contributions.

I think we’ve all heard stories of someone running a shiny new tool on some big project they know nothing about, mass filing a bunch of issues that just say “this tool says your code has a problem, fix it” and then disappearing into the mist and claiming victory.

This is not a pleasant experience for someone trying to maintain a project. You’re dumping a lot on my plate if I don’t know the tool, have trouble running the tool, don’t know exactly how you ran it, etc.

It’s also very easy to come up with a huge pile of issues with very little sense of how significant they are.

Some things are only vaguely dubious, while others are horribly terrifying exploits. We only have so much time to work on stuff, you’ve gotta help us out!

And in this regard 5225225’s contributions were just, bloody beautiful.

Like, shockingly fantastic.

They wrote really clear and detailed issues. When I skimmed those issues and misunderstood them, they quickly clarified and got me on the same page. And then they submitted a fix for the issue before I even considered working on the fix. And quickly responded to review comments. I didn’t even bother asking them to squash their commits because damnit they earned those 3 commits in the tree to fix one overflow.

Then they submitted a PR to merge the fuzzer. They helped me understand how to use it and debug issues. Then they started asking questions about the project and started writing more fuzzers for other parts of it. And now there’s like 5 fuzzers and a bunch of fixed issues!

I don’t care how good cargo fuzz is, that’s a lot of friggin’ really good work! Like I am going to cry!! This was so helpful??? 😭

That said, I will take a little credit for this going so smoothly: both Rust itself and rust-minidump are written in a way that’s very friendly to fuzzing. Specifically, rust-minidump is riddled with assertions for “hmm this seems messed up and shouldn’t happen but maybe?” and Rust turns integer overflows into panics (crashes) in debug builds (and index-out-of-bounds is always a panic).

Having lots of assertions everywhere makes it a lot easier to detect situations where things go wrong. And when you do detect that situation, the crash will often point pretty close to where things went wrong.

As someone who has worked on detecting bugs in Firefox with sanitizer and fuzzing folks, let me tell you what really sucks to try to do anything with: “Hey so on my machine this enormous complicated machine-generated input caused Firefox to crash somewhere this one time. No, I can’t reproduce it. You won’t be able to reproduce it either. Anyway, try to fix it?”

That’s not me throwing shade on anyone here. I am all of the people in that conversation. The struggle of productively fuzzing Firefox is all too real, and I do not have a good track record of fixing those kinds of bugs. 

By comparison I am absolutely thriving under “Yeah you can deterministically trip this assertion with this tiny input you can just check in as a unit test”.

And what did we screw up? Some legit stuff! It’s Rust code, so I am fairly confident none of the issues were security concerns, but they were definitely quality of implementation issues, and could have been used to at very least denial-of-service the minidump processor.

Now let’s dig into the issues they found!

#428: Corrupt stacks caused infinite loops until OOM on ARM64


As noted in the background, stackwalking is a giant heuristic mess and you can find yourself going backwards or stuck in an infinite loop. To keep this under control, stackwalkers generally require forward progress.

Specifically, they require the stack pointer to move down the stack. If the stack pointer ever goes backwards or stays the same, we just call it quits and end the stackwalk there.

However, you can’t be so strict on ARM because leaf functions may not change the stack size at all. Normally this would be impossible because every function call at least has to push the return address to the stack, but ARM has the link register which is basically an extra buffer for the return address.

The existence of the link register in conjunction with an ABI that makes the callee responsible for saving and restoring it means leaf functions can have 0-sized stack frames!

To handle this, an ARM stackwalker must allow for there to be no forward progress for the first frame of a stackwalk, and then become more strict. Unfortunately I hand-waved that second part and ended up allowing infinite loops with no forward progress:

// If the new stack pointer is at a lower address than the old,
// then that's clearly incorrect. Treat this as end-of-stack to
// enforce progress and avoid infinite loops.
// NOTE: this check allows for equality because arm leaf functions
// may not actually touch the stack (thanks to the link register
// allowing you to "push" the return address to a register).
if frame.context.get_stack_pointer() < self.get_register_always("sp") as u64 {
    trace!("unwind: stack pointer went backwards, assuming unwind complete");
    return None;
}

So if the ARM64 stackwalker ever gets stuck in an infinite loop on one frame, it will just build up an infinite backtrace until it’s killed by an OOM. This is very nasty because it’s a potentially very slow denial-of-service that eats up all the memory on the machine!

This issue was actually originally discovered and fixed in #300 without a fuzzer, but when I fixed it for ARM (32-bit) I completely forgot to do the same for ARM64. Thankfully the fuzzer was evil enough to discover this infinite looping situation on its own, and the fix was just “copy-paste the logic from the 32-bit impl”.
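The intended rule can be sketched like this (a hypothetical standalone function, not the actual rust-minidump implementation):

```rust
/// Hypothetical sketch of a forward-progress rule: a 0-sized frame is
/// tolerated only for the first frame of the walk (ARM leaf functions),
/// after which the caller's stack pointer must be strictly higher than
/// the callee's.
fn unwind_may_continue(caller_sp: u64, callee_sp: u64, is_first_frame: bool) -> bool {
    if caller_sp > callee_sp {
        true // normal forward progress (unwinding moves to higher addresses)
    } else if caller_sp == callee_sp {
        is_first_frame // 0-sized leaf frame: allowed exactly once
    } else {
        false // stack pointer went backwards: treat as end-of-stack
    }
}

fn main() {
    // A stuck frame is fine once, but the second time we bail,
    // which is what prevents the infinite backtrace.
    assert!(unwind_may_continue(0x1000, 0x1000, true));
    assert!(!unwind_may_continue(0x1000, 0x1000, false));
}
```

The bug was, in effect, forgetting the `is_first_frame` restriction on the equality case in the ARM64 path.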

Because this issue was actually encountered in the wild, we know this was a serious concern! Good job, fuzzer!

(This issue specifically affected minidump-processor and minidump-stackwalk)

#407: MinidumpLinuxMaps address-based queries didn’t work at all


MinidumpLinuxMaps is an interface for querying the dumped contents of Linux’s /proc/self/maps file. This provides metadata on the permissions and allocation state for mapped ranges of memory in the crashing process.

There are two use cases for this: just getting a full dump of all the process state, and specifically querying the memory properties for a specific address (“hey is this address executable?”). The dump use case is handled by just shoving everything in a Vec. The address use case requires us to create a RangeMap over the entries.

Unfortunately, a comparison was flipped in the code that created the keys to the RangeMap, which resulted in every correct memory range being discarded AND invalid memory ranges being accepted. The fuzzer was able to catch this because the invalid ranges tripped an assertion when they got fed into the RangeMap (hurray for redundant checks!).

// The flipped comparison: this discards every *valid* range
if self.base_address < self.final_address {
    return None;
}

Although tests were written for MinidumpLinuxMaps, they didn’t include any invalid ranges, and just used the dump interface, so the fact that the RangeMap was empty went unnoticed!
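For contrast, the intended check can be sketched like this (illustrative names, not the actual rust-minidump types):

```rust
/// Illustrative sketch of the *corrected* validity check: a mapped
/// range only makes sense if its base address is strictly below its
/// end address. The shipped bug had this comparison flipped.
fn range_map_key(base_address: u64, final_address: u64) -> Option<(u64, u64)> {
    if base_address >= final_address {
        // Inverted or empty range: discard it instead of feeding
        // garbage into the RangeMap.
        return None;
    }
    Some((base_address, final_address))
}

fn main() {
    assert_eq!(range_map_key(0x1000, 0x2000), Some((0x1000, 0x2000)));
    assert_eq!(range_map_key(0x2000, 0x1000), None); // inverted: rejected
}
```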

This probably would have been quickly found as soon as anyone tried to actually use this API in practice, but it’s nice that we caught it beforehand! Hooray for fuzzers!

(This issue specifically affected the minidump crate which technically could affect minidump-processor and minidump-stackwalk. Although they didn’t yet actually do address queries, they may have crashed when fed invalid ranges.)

#381: OOM from reserving memory based on untrusted list length


Minidumps have lots of lists which we end up collecting up in a Vec or some other collection. It’s quite natural and more efficient to start this process with something like Vec::with_capacity(list_length). Usually this is fine, but if the minidump is corrupt (or malicious), then this length could be impossibly large and cause us to immediately OOM.

We were broadly aware that this was a problem, and had discussed the issue in #326, but then everyone left for the holidays. #381 was a nice kick in the pants to actually fix it, and gave us a free simple test case to check in.

Although the naive solution would be to fix this by just removing the reserves, we opted for a solution that guarded against obviously-incorrect array lengths. This allowed us to keep the performance win of reserving memory while also making rust-minidump fast-fail instead of vaguely trying to do something and hallucinating a mess.

Specifically, @Swatinem introduced a function for checking that the amount of memory left in the section we’re parsing is large enough to even hold the claimed amount of items (based on their known serialized size). This should mean the minidump crate can only be induced to reserve O(n) memory, where n is the size of the minidump itself.
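The idea can be sketched like this (hypothetical signature; the real check lives in the minidump crate):

```rust
/// Sketch of a length sanity check: only pre-allocate for `claimed_len`
/// entries if the bytes remaining in the stream could actually contain
/// them, given each entry's known serialized size.
fn checked_capacity(claimed_len: usize, entry_size: usize, bytes_remaining: usize) -> Option<usize> {
    let needed = claimed_len.checked_mul(entry_size)?;
    if needed > bytes_remaining {
        None // impossibly large list length: corrupt or malicious input
    } else {
        Some(claimed_len)
    }
}

fn main() {
    // A plausible list fits; an absurd claimed length fails fast.
    assert_eq!(checked_capacity(10, 8, 1024), Some(10));
    assert_eq!(checked_capacity(usize::MAX, 8, 1024), None);
}
```

Because the check is against the bytes actually present in the file, a minidump can never talk us into reserving more memory than its own size.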

For some scale:

  • A minidump for Firefox’s main process with about 100 threads is about 3MB.
  • A minidump for a stackoverflow from infinite recursion (8MB stack, 9000 calls) is about 8MB.
  • A breakpad symbol file for Firefox’s main module can be about 200MB.

If you’re symbolicating, minidumps probably won’t be your memory bottleneck. 😹

(This issue specifically affected the minidump crate and therefore also minidump-processor and minidump-stackwalk.)

The Many Integer Overflows and My Greatest Defeat

The rest of the issues found were relatively benign integer overflows. I claim they’re benign because rust-minidump should already be working under the assumption that all the values it reads out of the minidump could be corrupt garbage. This means its code is riddled with “is this nonsense” checks and those usually very quickly catch an overflow (or at worst print a nonsense value for some pointer).

We still fixed them all, because that’s shaky as heck logic and we want to be robust. But yeah none of these were even denial-of-service issues, as far as I know.

To demonstrate this, let’s discuss the most evil and embarrassing overflow which was definitely my fault and I am still mad about it but in a like “how the heck” kind of way!?

The overflow is back in our old friend the stackwalker. Specifically in the code that attempts to unwind using frame pointers. Even more specifically, when offsetting the supposed frame-pointer to get the location of the supposed return address:

let caller_ip = stack_memory.get_memory_at_address(last_bp + POINTER_WIDTH)?;
let caller_bp = stack_memory.get_memory_at_address(last_bp)?;
let caller_sp = last_bp + POINTER_WIDTH * 2;

If the frame pointer (last_bp) was ~u64::MAX, the offset on the first line would overflow and we would instead try to load ~null. All of our loads are explicitly fallible (we assume everything is corrupt garbage!), and nothing is ever mapped to the null page in normal applications, so this load would reliably fail as if we had guarded the overflow. Hooray!

…but the overflow would panic in debug builds because that’s how debug builds work in Rust!
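The two behaviours can be seen side by side with Rust’s explicit arithmetic methods (a standalone demonstration, not the rust-minidump code):

```rust
const POINTER_WIDTH: u64 = 8;

/// Returns (wrapped_address, checked_result) for a given frame pointer.
/// Release builds wrap silently on plain `+`; debug builds panic.
/// `wrapping_add` and `checked_add` make both outcomes explicit.
fn offset_return_address(last_bp: u64) -> (u64, Option<u64>) {
    (last_bp.wrapping_add(POINTER_WIDTH), last_bp.checked_add(POINTER_WIDTH))
}

fn main() {
    // A frame pointer near the top of the address space, as a corrupt
    // minidump can freely supply.
    let (wrapped, checked) = offset_return_address(u64::MAX - 3);
    assert_eq!(wrapped, 4); // the load would be attempted at ~null
    assert_eq!(checked, None); // the overflow is caught instead of panicking
}
```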

This was actually found, reported, and fixed without a fuzzer in #251. All it took was a simple guard:

(All the casts are because this specific code is used in the x86 impl and the x64 impl.)

if last_bp as u64 >= u64::MAX - POINTER_WIDTH as u64 * 2 {
    // Although this code generally works fine if the pointer math overflows,
    // debug builds will still panic, and this guard protects against it without
    // drowning the rest of the code in checked_add.
    return None;
}

let caller_ip = stack_memory.get_memory_at_address(last_bp as u64 + POINTER_WIDTH as u64)?;
let caller_bp = stack_memory.get_memory_at_address(last_bp as u64)?;
let caller_sp = last_bp + POINTER_WIDTH * 2;

And then it was found, reported, and fixed again with a fuzzer in #422.

Wait what?

Unlike the infinite loop bug, I did remember to add guards to all the unwinders for this problem… but I did the overflow check in 64-bit even for the 32-bit platforms.

slaps forehead

This made the bug report especially confusing at first because the overflow was like 3 lines away from a guard for that exact overflow. As it turns out, the mistake wasn’t actually as obvious as it sounds! To understand what went wrong, let’s talk a bit more about pointer width in minidumps.

A single instance of rust-minidump has to be able to handle crash reports from any platform, even ones it isn’t natively running on. This means it needs to be able to handle both 32-bit and 64-bit platforms in one binary. To avoid the misery of copy-pasting everything or making everything generic over pointer size, rust-minidump prefers to work with 64-bit values wherever possible, even for 32-bit platforms.

This isn’t just us being lazy: the minidump format itself does this! Regardless of the platform, a minidump will refer to ranges of memory with a MINIDUMP_MEMORY_DESCRIPTOR whose base address is a 64-bit value, even on 32-bit platforms!

  ULONG64                      StartOfMemoryRange;

So quite naturally rust-minidump’s interface for querying saved regions of memory just operates on 64-bit (u64) addresses unconditionally, and 32-bit-specific code casts its u32 address to a u64 before querying memory.

That means the code with the overflow guard was manipulating those values as u64s on x86! The problem is that after all the memory loads we would then go back to “native” sizes and compute caller_sp = last_bp + POINTER_WIDTH * 2. This would overflow a u32 and crash in debug builds. 😿

But here’s the really messed up part: getting to that point meant we were successfully loading memory up to that address. The first line where we compute caller_ip reads it! So this overflow means… we were… loading memory… from an address that was beyond u32::MAX…!?


The fuzzer had found an absolutely brilliantly evil input.

It abused the fact that MINIDUMP_MEMORY_DESCRIPTOR technically lets 32-bit minidumps define memory ranges beyond u32::MAX even though they could never actually access that memory! It could then have the u64-based memory accesses succeed but still have the “native” 32-bit operation overflow!
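The arithmetic at the heart of the trick can be demonstrated standalone (illustrative values; not the actual unwinder code):

```rust
const POINTER_WIDTH: u32 = 4; // x86

/// Illustration of the mixed-width trap: memory is queried through a
/// u64 address (so a descriptor may legally point past u32::MAX and
/// the load can succeed), while the stack-pointer math was done in
/// native u32, where the very same value overflows.
fn demo(last_bp: u32) -> (u64, Option<u32>) {
    let load_addr = last_bp as u64 + POINTER_WIDTH as u64; // widened first: no overflow possible
    let caller_sp = last_bp.checked_add(POINTER_WIDTH * 2); // native u32 math: can overflow
    (load_addr, caller_sp)
}

fn main() {
    let (load_addr, caller_sp) = demo(u32::MAX - 6);
    assert_eq!(load_addr, u32::MAX as u64 - 2); // representable as a u64 load address
    assert_eq!(caller_sp, None); // but the native u32 addition overflows
}
```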

This is so messed up that I didn’t even comprehend that it had done this until I wrote my own test and realized that it wasn’t actually failing because I foolishly had limited the range of valid memory to the mere 4GB a normal x86 process is restricted to.

And I mean that quite literally: this is exactly the issue that creates Parallel Universes in Super Mario 64.

But hey my code was probably just bad. I know google loves sanitizers and fuzzers, so I bet google breakpad found this overflow ages ago and fixed it:

uint32_t last_esp = last_frame->context.esp;
uint32_t last_ebp = last_frame->context.ebp;
uint32_t caller_eip, caller_esp, caller_ebp;

if (memory_->GetMemoryAtAddress(last_ebp + 4, &caller_eip) &&
    memory_->GetMemoryAtAddress(last_ebp, &caller_ebp)) {
    caller_esp = last_ebp + 8;
    trust = StackFrame::FRAME_TRUST_FP;
} else {

Ah. Hmm. They don’t guard for any kind of overflow for those uint32_t’s (or the uint64_t’s in the x64 impl).

Well ok GetMemoryAtAddress does actual bounds checks so the load from ~null will generally fail like it does in rust-minidump. But what about the Parallel Universe overflow that lets GetMemoryAtAddress succeed?

Ah well surely breakpad is more principled with integer width than I was–

virtual bool GetMemoryAtAddress(uint64_t address, uint8_t*  value) const = 0;
virtual bool GetMemoryAtAddress(uint64_t address, uint16_t* value) const = 0;
virtual bool GetMemoryAtAddress(uint64_t address, uint32_t* value) const = 0;
virtual bool GetMemoryAtAddress(uint64_t address, uint64_t* value) const = 0;

Whelp congrats to 5225225 for finding an overflow that’s portable between two implementations in two completely different languages by exploiting the very nature of the file format itself!

In case you’re wondering what the implications of this overflow are: it’s still basically benign. Both rust-minidump and google-breakpad will successfully complete the frame pointer analysis and yield a frame with a ~null stack pointer.

Then the outer layer of the stackwalker which runs all the different passes in sequence will see something succeeded but that the frame pointer went backwards. At this point it will discard the stack frame and terminate the stackwalk normally and just calmly output whatever the backtrace was up to that point. Totally normal and reasonable operation.

I expect this is why no one would notice this in breakpad even if you run fuzzers and sanitizers on it: nothing in the code actually does anything wrong. Unsigned integers are defined to wrap, the program behaves reasonably, everything is kinda fine. We only noticed this in rust-minidump because all integer overflows panic in Rust debug builds.

However this “benign” behaviour is slightly different from properly guarding the overflow. Both implementations will normally try to move on to stack scanning when the frame pointer analysis fails, but in this case they give up immediately. It’s important that the frame pointer analysis properly identifies failures so that this cascading can occur. Failing to do so is definitely a bug!

However in this case the stack is partially in a parallel universe, so getting any kind of useful backtrace out of it is… dubious to say the least.

So I totally stand by “this is totally benign and not actually a problem” but also “this is sketchy and we should have the bounds check so we can be confident in this code’s robustness and correctness”.

Minidumps are all corner cases — they literally get generated when a program encounters an unexpected corner case! It’s so tempting to constantly shrug off situations as “well no reasonable program would ever do this, so we can ignore it”… but YOU CAN’T.

You would not have a minidump at your doorstep if the program had behaved reasonably! The fact that you are trying to inspect a minidump means something messed up happened, and you need to just deal with it!

That’s why we put so much energy into testing this thing, it’s a nightmare!

I am extremely paranoid about this stuff, but that paranoia is based on the horrors I have seen. There are always more corner cases.

There are ALWAYS more corner cases. ALWAYS.


The post Fuzzing rust-minidump for Embarrassment and Crashes – Part 2 appeared first on Mozilla Hacks - the Web developer blog.

The Mozilla Blog: Joey Amato, Publisher of Pride Journeys, Shares What Brings Him Joy Online

Here at Mozilla, we are the first to admit the internet isn’t perfect, but we are also quick to point out that the internet is pretty darn magical. The internet opens up doors and opportunities, allows for people to connect with others, and lets everyone find where they belong — their corners of the internet. We all have an internet story worth sharing. In My Corner of the Internet, we talk with people about the online spaces they can’t get enough of, what we should save in Pocket to read later and what sites and forums shaped them.

With Pride celebrations taking place throughout June, we’re featuring LGBTQ+ leaders this month as part of our My Corner of the Internet series. In this next installment, Joey Amato, publisher of Pride Journeys, shares his love of travel, Peloton and his nostalgia of MySpace.

What is your favorite corner of the internet?

I enjoy reading Billboard.com to keep up with music industry news as well as CNBC.com for business and finance news. I’m a stock market junkie so I love reading about financial news, especially in today’s crazy market.

What is an internet deep dive that you can’t wait to jump back into? 

I like keeping up with music trends and reading up on new artists and concert tours. I used to be in the music industry when I lived in New York and Nashville, so to stay connected, I check out Rolling Stone, Music Row and Pollstar almost on a daily basis.

What is the one tab on your browser you always regret closing? 

Amazon – I spend way too much money there! Say what you want about Amazon and Jeff Bezos, but he changed the world. Our lives are completely different because of this company and the convenience they provide.

Who is an LGBTQ+ person with a very online presence that is a role model for you? 

I really enjoy Matty Maggiacomo, who is a trainer on Peloton. He’s easy on the eyes and also pushes me in every workout. I began subscribing to Peloton earlier this year. I don’t own a Peloton bike, but I love their other workouts.

What can you not stop talking about on the internet right now?

The new ABBA show in London. I can’t wait to attend in person. The reviews thus far have been quite incredible.

What was the first online community you engaged with?

Probably MySpace… yes, I’m old! It was the first time I began connecting with people I hadn’t seen in a while or shared an interest with. I liked being able to customize my own space.

What articles and videos are in your Pocket waiting to be read/watched right now?

Since I have been traveling a lot recently, I am behind in my financial research and launching my new site for the LGBTQ community… details coming soon! I also need to catch up on my travel writing.

If you could create your own corner of the internet what would it look like? 

It would definitely be an LGBTQ related corner. I have been in the space for most of my professional career, so I feel like this is an area I can really contribute to.

Joey Amato is the publisher of Pride Journeys, a website dedicated to LGBTQ travel. He also publishes an annual LGBTQ travel directory which features destinations from around the world looking to reach LGBTQ travelers. Most recently, Pride Journeys published the Ultimate Pride Guide, a calendar of pride festivals for those looking to travel with pride.


The post Joey Amato, Publisher of Pride Journeys, Shares What Brings Him Joy Online appeared first on The Mozilla Blog.

The Mozilla Blog: Understanding Apple’s Private Click Measurement

Private advertising technology proposals could greatly improve privacy for web users. Web advertising has a reputation for poor privacy practices. Firefox, other browsers, and the web community are collaborating on finding ways to support advertising while maintaining strong, technical privacy protections for users.

This series of posts aims to contribute to the ongoing conversation regarding the future of advertising on the web by providing technical analyses on proposals that have been put forward by various players in the ecosystem to address the questions of what might replace third-party cookies. In this installment, we look at Apple’s Private Click Measurement (PCM).

PCM differs from the subject of our other analyses in that it is more than a mere proposal. PCM is available in Safari and iOS today.

The goal of PCM is to support conversion measurement through a new in-browser API. Conversion measurement aims to measure how effective advertising is by observing which advertisements lead to sales or other outcomes like page views and sign-ups. PCM measures the most direct kind of conversion, where a user clicks an ad on one site then later takes an action on the advertiser’s site, like buying a product.

The way that PCM works is that the browser records when clicks and conversions occur. When a click is followed by a conversion, the browser creates a report. PCM aims to safeguard privacy by strictly limiting the information that is included in the report and by submitting reports to the websites through an anonymization service after a delay.

Our analysis concludes that PCM is a poor trade-off between user privacy and advertising utility:

  • Although PCM prevents sites from performing mass tracking, it still allows them to track a small number of users.
  • The measurement capabilities PCM provides are limited relative to the practices that advertisers currently employ, with long delays and too few identifiers for campaigns being the most obvious of the shortcomings.

The poor utility of PCM offers sites no incentive to use it over tracking for browsers that continue to offer cross-site cookies, like Chrome. For browsers that limit the use of cookies — like Firefox or Safari, especially with stricter controls like Total Cookie Protection in Firefox — tracking is already difficult. If Firefox implemented PCM, it would enable a new way to perform cross-site tracking. While some sites might choose to use PCM as intended, nothing in the design of PCM prevents sites from using it for tracking.

Overall, the design choices in PCM that aim to safeguard privacy provide insufficient privacy protections, while still making the API less useful for measurement.

The Private Advertising Technology Community Group (PATCG) in the W3C is currently discussing options for measurement in advertising.

For more on this:

Building a more privacy-preserving ads-based ecosystem

The future of ads and privacy

Privacy analysis of FLoC

Mozilla responds to the UK CMA consultation on Google’s commitments on the Chrome Privacy Sandbox

Privacy analysis of SWAN.community and Unified ID 2.0

Analysis of Google’s Privacy Budget proposal

Privacy Preserving Attribution for Advertising: our Interoperable Private Attribution proposal

The post Understanding Apple’s Private Click Measurement appeared first on The Mozilla Blog.

The Mozilla Blog: Reflecting on 10 years of time well spent with Pocket

Ten years ago, a small, yet mighty team launched Pocket because we felt that people deserved a better way to consume content on the internet. We wanted it to be easy — “as simple an action as putting it in your pocket” — and empowering, giving people the means to engage with the web on their own terms. We championed the save as a fundamental internet action — akin to browse, search and share — but more than any other, allowing you to create your own corner of the internet. 

Right away, Pocket’s save became powerful — a means to grab what you wanted from the waves of posts and tweets, long reads and memes and come back to it when it made the most sense for you. And it continues to lay the groundwork for the future ahead of us.  

A couple of interesting things happened as we started to lean into the save.

First, we remembered the power of stories. People understand the world through stories. The stories that people consume give them power, knowledge, and ideas. Stories allow people to access who they are and who they aspire to be. 

But we get distracted from those stories when too much of the focus from media and platforms tend to be on breaking news — who is first and loudest. In addition, platforms have created incentives all about the click — quality be damned. We have committed Pocket to being a place where quality stories, and those who crave them, can breathe and thrive.

Second, we’ve embraced curation. The modern web is vast, messy and noisy. There is some inherent beauty and innovation in this. As the Mozilla Manifesto states, the open, global internet is the most powerful communication and collaboration resource we have ever seen. It embodies some of our deepest hopes for human progress. It enables new opportunities for learning, building a sense of shared humanity, and solving the pressing problems facing people everywhere. 

But as the web has evolved, it has created a challenging and often overwhelming environment for online content consumers. People rightly feel like they’ve lost agency with their online time. Curation is our way to help readers fight back. We built Pocket’s approach to recommendations with the same intention as the save — content discovery grounded in surfacing stories worthy of your time and attention and taking an approach that brings together algorithms and a human touch so that our commitment to quality is always clear and felt. We often hear from users that “we really get them” even when they are viewing recommendations on non-personalized surfaces — what I believe they are responding to is the quality of what we curate and recommend to our users. 

Since we launched Pocket, people have saved more than 6.5 billion pieces of content. Every month, through Firefox New Tab and our Pocket Hits newsletter, more than 40 million people see our Pocket recommendations. We are strong believers and participants in the broader content ecosystem — and we want to direct users to quality content wherever it happens to live, championing creators and publishers, big and small. Pocket can always be trusted to bring you the good stuff.

In 2020, our recommendations evolved further, with the introduction of Pocket Collections. In the uncertain and disorienting early days of the pandemic, and then after George Floyd was murdered, we saw a need for high-quality content to contextualize and help people navigate events. We also saw an opportunity to elevate voices and use our platform to dig into complicated, systemic issues with room for nuance. Just reading the breaking news or knowing dates and names, the things that pop up in simple Google searches, wasn’t adequate.

We began digging in deeper and inviting experts to create collections to bring a broader perspective, animated by the idea that some topics require more than just a single article or point of view to bring understanding. Pocket might not be where you come to learn who won an election. But it will be where you come to understand why. Since those initial collections around covid and racial justice, we’ve continued to build and explore where best to use this medium. We now have hundreds of collections, ranging from how to talk to people you disagree with and managing Zoom brain to great long reads on scams and the science of the multiverse. 

This is the future of Pocket we are continuing to expand — improving our ability to find the right piece of content and recommending it to you at that right moment. We are also trying in our own way to elevate unique voices, helping creators and fantastic independent publishers get in front of new audiences. In time, we may even explore opportunities to tap our users as experts in specific topics and pass the mic to them. As the internet and digital content overload economy continue to evolve, so will we. We won’t pretend to be able to solve the internet’s problems, but we do want to help connect people with the hope and talent and wonder we know still exists on the web. 

The internet doesn’t have to be a place that leaves people feeling overwhelmed. For the past 10 years, Pocket has been a corner of the internet where you can joy-scroll and satiate your curiosity about how we got here. Ultimately, our goal is to make sure that using Pocket is time well spent — and will continue to be 10 years from now. 

10 years of fascinating reads

From the top article of each year to the way Pocket readers kind of predicted the future, these collections will certainly spark your interest.

The post Reflecting on 10 years of time well spent with Pocket appeared first on The Mozilla Blog.

The Mozilla Blog: Queer Singer/Songwriter Criibaby Finds Pride and Joy at the Intersection of Tech and Music

Here at Mozilla, we are the first to admit the internet isn’t perfect, but we are also quick to point out that the internet is pretty darn magical. The internet opens up doors and opportunities, allows for people to connect with others, and lets everyone find where they belong — their corners of the internet. We all have an internet story worth sharing. In My Corner of the Internet, we talk with people about the online spaces they can’t get enough of, what we should save in Pocket to read later and what sites and forums shaped them.

With Pride celebrations taking place throughout June, we’re featuring LGBTQ+ leaders this month as part of our My Corner of the Internet series. In this next installment, queer music artist Criibaby shares her love of the intersection of music and tech, her process to creating music and how her art extends to shoe design.

What is your favorite corner of the internet?

I have a special place in my heart for a website called Soundtrap, which is basically online software that lets you record songs and collaborate with other artists from anywhere you can connect to the internet. Before I had access to a real recording studio and long before I ever released any music as Criibaby, I was writing demos at my kitchen table after work. It reminded me of when I was a kid and would spend hours after school making mashups and recording songs on GarageBand. In my early twenties, I was going through a pretty hard time processing grief, and getting in touch with my inner child in that way felt, well, just right. I have tons of voice memos on my phone, probably into the three digits at this point, and Soundtrap was one of the first places I could put them all together and get back into multi-track recording. Years ago, I sent a Soundtrap demo I’d recorded on my phone to this guy called Pandaraps who was making really inspiring, introspective queer lofi hip hop. Thanks to that demo, I got an invitation to the studio he was recording at, and was introduced to my now-producer Surfer Dave.

What is an internet deep dive that you can’t wait to jump back into?

I always try to tap into Kiefer and knxwledge’s Twitch streams whenever a notification pops up that they’re live. Those are two producers that I really look up to, so it’s cool to go behind the scenes with them and learn about their creative process or hear what they’ve been working on lately. I’ve also been learning a bit about web3 stuff in Discords like Mashibeats hosted by Marc De Clive Lowe, and StemsDAO, which hosts a producer remix game that I’m hoping to submit to once I finish the LP I’m in the middle of writing. I’m super interested in the intersection of music and tech, so while I’m totally new to that scene, I’m curious to see where it goes.

What is the one tab on your browser you always regret closing?

Any Bill Wurtz video. Watching one always leads to watching 100, so it’s a bit of a rabbit hole. But with those catchy melodies and ridiculous graphics, how could you not get sucked in for like an hour at a time? Someone asked me if I consider myself a Bill Wurtz superfan and I’d say the answer is a resounding yes. A few years ago I recreated the Converse from the Bill Wurtz classic “Christmas isn’t real” and gave a pair to my partner as a birthday present. Not gonna lie, I’m pretty proud of that. That’s peak gifting. 

Who is an LGBTQ+ person with a very online presence that is a role model for you?

Two words: Queer memes. Shout out to Instagram meme dealers @hypedykes, @tofuminati, @themmememes and @bi_astrology — I’d call this niche “relatable queer chaos” and their memes are truly some of the best in the game. This breed of hyper-specific, exaggerated surrealist queer commentary that pulls no punches and hits the nail directly on the head brings me endless joy. It’s weirdly affirming to know my emotional rollercoaster of a life is right on track, according to these absurd lil windows into queer life circa 2022.

What can you not stop talking about on the internet right now? 

My new Criibaby EP “crii” came out on June 10 so I’m grateful for these songs to be out in the world! One thing that distinguishes me from other indie artists is that I write all my songs without any gender-specific lyrics. No binary pronouns, no heteronormativity, all written from an openly queer perspective as part of a conscious effort to create more inclusive music. That was the intent behind my debut popsoul EP love songs for everyone and I got some really sweet notes from fellow queer folks who said it really resonated with them and made them feel seen, so I’m excited to be releasing more gender-neutral music. (It also premiered in Billboard and played on BBC 6 Music, which was crazy validating! Turns out critics can be nice??) 

With love songs for everyone I wanted to send a message that no matter how you identify, this feel-good, empowering music is for you — and you are welcome here. Exist! Freely! My new EP is pretty different — for starters it’s a lot more personal, and some of it comes from a bit of a darker place because that’s just honestly where I was at when I wrote these songs. And I didn’t want to hide from those emotions. I think as queer people, we feel all this pressure to put on a good face as a way to say, hey, I’m queer and I’m thriving! But sometimes you’re just not ok… and that’s ok. I guess that’s my main message with this EP: frankly, you don’t always have to have your sh*t together. Which is something I’m still trying to learn for myself. Sonically it’s inspired by a mix of neo soul, lofi, jazz, and alternative aesthetics, but I wouldn’t really define it as any of those. As someone who is often assumed to be straight based on how I appear and present, I want to show the world that queerness doesn’t have any one look, sound, or genre. They’re some of the best songs I’ve ever made and I’m really proud of them. I hope you check it out and let me know what you think!

What was the first online community you engaged with? 

Yikes, this is really embarrassing, but I think it would have been Yahoo Answers. If you don’t know what that is (good), it was basically like Reddit on training wheels, but … worse than whatever you’re imagining? I can’t remember everything I posted (thank god) but I almost certainly asked strangers on the internet if they thought I might be gay. Good times. 

I also got really into Daytrotter, a website for this studio in Illinois that hosted recording sessions with indie artists, kinda like NPR’s Tiny Desk, before that was a thing. I found a lot of new music from people I’d never heard of that way, and this was during my absolute raging hipster phase, so it fueled a ton of music discovery. In classic hipster fashion, I was really into listening to stuff my friends hadn’t heard of. That and Radiohead, tons of Radiohead. 

What articles and videos are in your Pocket waiting to be read/watched right now?

Them released a new queer music monthly that I’m excited to read up on and listen through. I hope to be included in one of those someday — it would really mean a lot to me to be recognized by that magazine. 

If you could create your own corner of the internet what would it look like?

I’m working on it! Right now I’m building up a Patreon-style group of VIP fans that get free merch, secret listening parties ahead of my release dates, and eventually backstage access to my live shows. DM me on Instagram if you want to come along for the ride — it’s free, and shaping up to be a really sweet community!

Rising queer artist Criibaby invites listeners to look deeper within themselves with her genre-bending, intimate music. Using a unique songwriting method she describes as possessing “no names, no genders, only the feelings, only the bare truths,” Criibaby’s songs contain only lyrics without binary pronouns, written from an openly queer perspective in a conscious effort to create more inclusive music. Drawing upon neo soul, lofi, jazz, and alternative aesthetics to form something entirely her own, Criibaby’s intricate layers of delicate, emotional vocals and emotionally bare lyricism come together to form a new kind of introspective sonic escapism. You can find her on Instagram, Twitter, LinkedIn and Facebook and can listen to her new EP “crii” on all streaming platforms.  


The post Queer Singer/Songwriter Criibaby Finds Pride and Joy at the Intersection of Tech and Music appeared first on The Mozilla Blog.

The Mozilla Blog: Firefox Presents: Redefining love and identity, one viral video at a time

Every Monday and Thursday, queer comedy writer and actor Brandon Kyle Goodman asks their 176,000 Instagram followers to tell them something good or messy. 

People who tune into the Instagram Stories series, called “Messy Mondays” and “Thot Thursdays,” send in submissions that are indeed good, messy or both. Some share their kinks. Others inquire about techniques they want to try in bed. One asked for advice for people who struggle with getting in the mood. Another confessed to having slept with their therapist.

In turn, Brandon asks their own questions (“what’s your favorite phrase to tell someone you want them?”), shares musings (“I think ‘Babe’ was a gay movie. I think it was my gay awakening”), gives smart advice (“find a new therapist”) and does demonstrations. NSFW memes are deployed. 

One can’t help but laugh along with Brandon, who also writes for and voices the queer character Walter the Lovebug in the Netflix animated show “Big Mouth” and its spinoff, “Human Resources.” They’re also the author of “You Gotta Be You,” part memoir and part humorous self-help guide out this September, that Brandon hopes will help people like them — who’s nonbinary and Black — feel less alone. 

This intention is also what’s behind Brandon’s social media series, which, while irreverently funny, can also turn tender. 

Once, Brandon’s husband made an appearance and shared his new fondness for being kissed on his stomach, an area he used to be sensitive about.

Sitting next to him, Brandon said “this is why I love talking about sex, because of the ability to reclaim your body and your sexuality.” 

Brandon makes periodic reminders that their account is a safe space, where people can be joyous about queer sex. 

Brandon Kyle Goodman and their husband Matthew sit next to each other, smiling in front of a phone and a ring light. (Photo: Nita Hong for Mozilla)

The 35-year-old first gained notice on social media during a particularly dark time in the world. In 2020, the comedy writer had virtually gone back to work after a lunch break, during which they watched the video of a white police officer killing George Floyd.

“I logged back onto my Zoom and sat there for the next three or four hours pitching jokes with my non-Black and white colleagues,” Brandon said. “No one had mentioned it. It had just happened, and so they might not have seen it. But it was wild to me that I was sitting there, not speaking about it and continuing my day as if I didn’t just watch this Black man get murdered for nothing.”

Afterwards, Brandon posted a 7-minute video on Instagram addressed to “my white friends.”

“These are my thoughts and I hope it will compel you and yours to be ACTIVE in the fight for our Black lives,” the caption says. 

Brandon went to sleep after making the video. When they woke up, the clip had nearly a million views. Their followers had jumped to 30,000. 

“I remember being in my living room and feeling my grandmother, who was a minister, and feeling ancestors, whose names I don’t know,” Brandon said. “It was a question of, you can completely ignore this and go about your business. Or, you can step into this. I heard very clearly, ‘Step in. You can do this.’”

At first, Brandon hesitated to identify as an activist, “not wanting to take away” from organizers who march and lead rallies on the streets. But they realized that activism has always been laced into their work.

“Viola Davis said, ‘Me just showing up as a dark-skinned, Black woman on your camera is activism.’ I think that’s true, too,” Brandon said. 

They continued to make videos for social media and started “Messy Mondays” because they needed a break from the heaviness.

“When you think about sex education, it’s trash in our country,” Brandon said. “It doesn’t include queer people at all. Here was this chance to redefine that.”

They added that when it comes to sex, it’s about the whole person. 

“Your Blackness, your queerness, your womanhood, your nonbinaryhood, your manhood. It’s about you,” Brandon said. “This is part of my activism, my relationship to sex. And so for me to be a part of a collective of influencers and educators and activists and people who are boldly talking about it is so liberating. I wish that young Brandon had that.”

Brandon Kyle Goodman dances. (Photo: Nita Hong for Mozilla)

Brandon was raised by their mother and grandmother in Queens, New York. They attended boarding school in Georgia and the Tisch School of the Arts at New York University before moving to Los Angeles to pursue a writing and acting career. They didn’t come out as gay until they were 21 years old. 

“Growing up Black, queer, nonbinary, there’s so much messaging that there’s something wrong with me,” Brandon said. “That’s in addition to the fact that when you turn on TV or read books in school, there’s just not a Black protagonist, let alone a Black queer protagonist. So it became really important to reclaim and refocus and reframe that to ‘I am enough. There’s nothing wrong with me. Even if the world is not ready to see me or hold me, I have to see and hold myself.’”

Brandon said their mother, who became deeply religious after their grandmother died, could not accept their queerness. The two haven’t spoken in 10 years.

Blood family may not always be capable of taking you through your human existence, Brandon said. But they found their own family and community, including online. 

The internet has its dark side, Brandon acknowledged, but it can also be a glorious place. Their advice: “Just remember that you are enough.”

Firefox is exploring all the ways the internet makes our planet an awesome place. Almost everything we do today ties back to the online world in some way — so, join us in highlighting the funny, weird, inspiring and courageous stories that remind us why we love the world wide web.


The post Firefox Presents: Redefining love and identity, one viral video at a time appeared first on The Mozilla Blog.

The Mozilla Blog: Kids are growing up in a very online world. What’s a concerned parent to do?

Technology is easy to blame for the mental health crisis that kids are facing. But according to experts, it’s not that simple. 

A rare public advisory from the U.S. surgeon general in December 2021 warned that young people are facing unprecedented challenges that have had a “devastating” effect on their mental health. These difficulties were already widespread before the pandemic started in 2020 — with up to 1 in 5 people in the U.S. ages 3 to 17 having a reported mental, emotional, developmental or behavioral disorder. 

We often attribute the crisis to technology, particularly social media. After all, young people today are spending much of their time glued to screens like no generation before them. One study conducted in 2021 found that teens spent 7.7 hours per day in front of screens for activities unrelated to school. But there is not a definitive correlation between mental health and social media use.

Over 7 hours of screen time may sound excessive. But as more and more of life moves online for people of all ages, it is to be expected. Technology brings with it challenges and opportunities for today’s youngest generations. Even as its use rises and youth mental health declines, researchers haven’t found a clear link between social media and mental health — instead, they see many interconnecting factors of modern life, technology among them.

What researchers are learning about social media and kids’ mental health

There’s been a lot of research on the subject over the last decade, but we still haven’t learned anything conclusive.

Amanda Lenhart, a researcher who focuses on how technology affects families and children for the Data & Society Research Institute, said studies generally show a slight negative impact of social media on people’s well-being, or none at all.

“We have this narrative about it being a negative thing, but we’re not seeing it in the data,” Lenhart said.

In a project by the Universities of Amsterdam and Tilburg, researchers have been exploring the relationship between adolescents’ social media use and their well-being. Early data from that project, Lenhart said, suggests that about 70% don’t feel any different after using social media; 19% feel a lot better; and 10% feel worse.

“I think one big takeaway is that there’s a subgroup of people for whom using these platforms is not great,” Lenhart said. More research is now focusing on identifying this group and what their experience is like.

Lenhart said results are showing a model called differential susceptibility, meaning different people can have different responses to the same platform.

For example, a child of color may have a more negative experience on a social media platform than others because they face harassment. A platform could also worsen feelings of depression for someone who’s already depressed.

“It’s not that a platform always makes people feel worse or better, but that it exacerbates their highs and exacerbates their lows,” Lenhart explained.

Experts are concerned that social media platforms are designed with adults as the user in mind, grabbing their attention and keeping them scrolling. Tech companies can do a lot better with content moderation and ensuring age-appropriate and health-promoting content, said Michael Robb, who for the last 15 years has been researching the effects of media and technology on children’s development. He’s currently the senior director of research at Common Sense Media, a U.S. nonprofit that advocates for internet safety for families.

“Expecting that you could iterate and iterate when you’re dealing with a sensitive population without having given real care to what’s going on developmentally can be irresponsible,” Robb said, adding that the concept of “move fast and break things” in the tech space should not be applied to children. 

Lenhart expressed the same sentiment, pointing to Snapchat’s Snapstreaks feature, which encourages two people to send each other a photo or video snap every day to keep up a streak.

“I think when they were built, the idea was this was going to be a really fun thing that would be a chance for people to feel connected, get people coming back every day,” Lenhart said. “But I think they failed to realize that in particular contexts, particularly among young people, peer relationships are very important and intense.”

In some instances, the feature resulted in unproductive behavior and an obsession with keeping streaks alive. People would give others their passwords so their streaks wouldn’t break when they went places without reliable internet or when they were sick.

“It became a thing that was very agitating and disturbing for a subset of young people,” Lenhart said. 

How social media affects people’s well-being could also depend on age, according to a large study conducted in the U.K. from 2011 to 2018. It identified two periods when heavy social media use predicted a drop in “life satisfaction” ratings a year later: ages 14-15 and 19 for males, and ages 11-13 and 19 for females. The inverse was also true: Lower social media use in those periods predicted an increase in life satisfaction ratings.

The study argues that by looking at the issue through a developmental lens, research could make “much needed progress in understanding how social media use affects well-being.”

A ‘lifeline’ for many young people

During the pandemic, social media has played an outsized role among young people seeking connection and help with their mental health. 

A study from Common Sense Media concluded that social media and other online tools have “proven to be the lifeline that many young people needed to get through this last year.”

The study surveyed 14- to 22-year-olds across the U.S. from September to November 2020. More than 85% of them went online in search of health information, 69% used mobile apps related to health issues, 47% connected with a provider online and 40% sought out people experiencing similar health concerns.

From 2018 to 2020, more teens and young adults reported relying on social media platforms for support, community and self-expression, the study found.

A table shows the results of a survey by Common Sense Media on the importance of social media during the coronavirus pandemic.

Social media can pose real problems among young people, especially if they’re already vulnerable, like those experiencing depression or those who come from marginalized communities, said Robb, who co-authored the report. “But many kids are also using social media and the internet to look up things related to their mental health, or look for other people who may be experiencing similar things.”

He added, “If you’re a teen who’s experiencing symptoms of depression or anxiety looking for others who have had similar experiences, being able to take solace in the fact that you’re not alone, or perhaps get tips or advice or other kinds of support from others who are in your community – I don’t want to undervalue social media in those respects, because that’s the other side of the coin that I think does not get talked about quite as often.”

Tips for families

Current research suggests that there’s no clear, direct line from internet use and screen time to mental health concerns in children. Like everything in life, context matters.

“We worry a lot about how much screen time every kid is having, and there are no conclusive studies saying how much screen time exactly is good or bad,” said Melanie Pinola, who spoke to various experts in writing a guide to how and when to limit kids’ tech use for The New York Times. She now covers tech and privacy at Consumer Reports.

She noted that even the American Academy of Pediatrics has changed its recommendations on screen time a couple of times.

With a new generation of people who never lived in a world without the constant presence of technology, there are still a lot of unknowns.

“We’re always learning and trying to adapt to what we know,” Pinola said. 

An illustration shows a phone surrounded by emojis. (Illustration: Nick Velazquez / Mozilla)

How to help kids have a better internet experience

While there’s no consensus as to how, exactly, children can be raised to have a good relationship with technology, here’s what families can do:

1. Be open

Lenhart suggests parents learn about online platforms and use them with their children. “Have an attitude of openness about them, remembering that even as you limit your child’s use of these platforms, you’re also potentially limiting their access to social connection, learning and relationships, because that’s where a lot of these things happen,” Lenhart said. 

She acknowledged that there are a lot of good reasons why some platforms shouldn’t be used by children as young as 12 or 13, but said that ideally, adults should figure it out with their kids. She suggested families ask themselves: Is the platform something that you can use together, or that children can use independently?

2. Find good content

Robb noted that there’s plenty of content online, from YouTube channels to video games, that’s great for younger kids.

Common Sense Media rates movies, TV shows, books, games and other media for age appropriateness and “learning potential.” 

“The PBS Kids apps are a lifesaver,” said Lucile Vareine, who works with Mozilla’s communications team and has two young kids. “We go to the public library to research animals we just discovered on PBS shows.”

The Mozilla Foundation also creates a guide that analyzes the online security of different products, from mental health apps to toys and games.

3. Think of your child’s well-being outside of technology

Instead of just dialing down children’s screen time, Robb suggests focusing on things that are essential to children’s healthy development. Think about whether they’re getting quality sleep, enough socialization with friends, family time and good nutrition.

“There are lots of things that are really important that are much better supported by data in terms of kids’ mental and physical health than just how many hours of screen use,” Robb said. “I wouldn’t worry so much, if it’s an hour or three hours. I’d look over the course of the week and see, ‘Is my kid doing the things that I hoped that they would be doing?’”

4. Set boundaries

Pinola said it helps, just like with other aspects of parenting, to set some boundaries. She suggests starting slowly, like having a “no tech dinner” rule.

“When I tried that with my [16-year-old] daughter, it worked,” Pinola said. “Now, we’re actually having conversations over dinner, which is what I was used to growing up. If you start small like that, you start to introduce the feeling for kids that they can be off their devices, and there might be a better experience for them.”

5. Use parental controls with your child’s consent, but give them room to grow

Whether it’s time limits or filters, there are a lot of tools within platforms that parents of younger children can use. Lenhart recommends using these tools with your child’s knowledge, making sure they understand why you’re using them, and having a plan to ease oversight over time.

“We need to teach them slowly but surely how to manage themselves in these spaces,” Lenhart said. “Giving them opportunities to fail, with you around to help pick them back up and talk about it with, is good. There can be false starts. But it’s definitely something that we have to do.”

Adults shouldn’t be surprised if their kids get around these tools.

“Young people are much more adaptive than we would think,” Pinola said.

“When we try to limit their tech use too restrictively or try to monitor them a lot, that can be counterproductive because they are so good at [technology], they’re going to find ways to bypass whatever barriers you put,” she said. “It’s just a matter of balance between you and your child.”

The internet is a great place for families. It gives us new opportunities to discover the world, connect with others and just generally make our lives easier and more colorful. But it also comes with new challenges and complications for parents raising the next generation. Mozilla wants to help parents make the best online decisions for their kids, whatever that looks like, in our latest series, Parental Control.


The post Kids are growing up in a very online world. What’s a concerned parent to do? appeared first on The Mozilla Blog.

hacks.mozilla.org: Hacks Decoded: Bikes and Boomboxes with Samuel Aboagye

Welcome to our Hacks: Decoded Interview series!

Once a month, Mozilla Foundation’s Xavier Harding speaks with people in the tech industry about where they’re from, the work they do and what drives them to keep going forward. Follow Mozilla’s Hacks blog to find more articles in this series, and visit the Mozilla Foundation site to see more of our org’s work.

Meet Samuel Aboagye!

Samuel Aboagye is a genius. Aboagye is 17 years old. In those 17 years, he’s crafted more inventions than you have, probably. Among them: a solar-powered bike and a Bluetooth speaker, both using recycled materials. We caught up with Ghanaian inventor Samuel Aboagye over video chat in hopes that he’d talk with us about his creations, and ultimately how he’s way cooler than any of us were at 17.


Samuel, you’ve put together lots of inventions like an electric bike and Bluetooth speaker and even a fan. What made you want to make them?

For the speaker, I thought of how I could minimize the rate at which yellow plastic containers pollute the environment. I tried to make good use of one after it had served its purpose. So, with the little knowledge I acquired in my science lessons, instead of the empty container just lying there and polluting the environment, I tried to create something useful with it.

After the Bluetooth speaker was successful, I realized there was more in me I could show to the universe. More importantly, we live in a very poorly ventilated room and we couldn’t afford an electric fan, so the room was unbearably hot. That situation triggered and motivated me to build a fan to solve this family problem.

With the bike, I thought it would be wise to make life easier for the physically challenged because I was always sad to see them go through all these challenges just to live their daily lives. Electric motors are very expensive and not common in my country, so I decided to do something to help.

Since solar energy is almost always readily available in my part of the world and able to renew itself, I thought that if I am able to make a bike with it, it would help the physically challenged to move from one destination to another without stress or thinking of how to purchase a battery or fuel.  

So how did you go about making them? Did you run into any trouble?

I went around my community gathering used items and old gadgets like radio sets and other electronics, and then removed parts that could help in my work. With the electrical training my science teacher has given me since he discovered me in JHS1, I was able to apply what I learned and combine it with my God-given talent.

Whenever I need some sort of technical guidance, I call on my teacher Sir David. He has also been my financial help for all my projects.  Financing projects has always been my biggest struggle and most times I have to wait on him to raise funds for me to continue.

The tricycle: Was it much harder to make than a bike?

Yes, it was a little bit harder to make the tricycle than the bike. It’s time-consuming and also costs more than a bike. It needs extra technical and critical thinking too.

You made the bike and speaker out of recycled materials. This answer is probably obvious but I’ve gotta ask: why recycled materials?  Is environment-friendly tech important to you?

I used recycled materials because they were readily available and comparatively cheap and easy to get. With all my inventions I make sure they are environmentally friendly so as not to pose any danger, now or in the future, to the beings on Earth. But also, I want the world to be a safe and healthy place to be.


The post Hacks Decoded: Bikes and Boomboxes with Samuel Aboagye appeared first on Mozilla Hacks - the Web developer blog.

SeaMonkey: SeaMonkey 2.53.13 Beta 1 is out!

Hi All,

The SeaMonkey Project is pleased to announce the immediate release of 2.53.13 Beta 1.

Please check out [1] and/or [2].   Updates will be available shortly.



[1] – https://www.seamonkey-project.org/releases/seamonkey2.53.13/

[2] – https://www.seamonkey-project.org/releases/2.53.13b1

The Mozilla Blog: How to easily switch from Chrome to Firefox

There’s never been a better time to switch from Chrome to Firefox, if we do say so ourselves. 

Some of the internet’s most popular ad blockers, such as uBlock Origin — tools that save our digital sanity from video ads that auto-play, banners that take up half the screen and pop-up windows with infuriatingly tiny “x” buttons — will become less effective on Google’s web browser thanks to a set of changes in Chrome’s extensions platform.

At Mozilla, we’re all about protecting your privacy and security – all while offering add-ons and features that enhance performance and functionality so you can experience the very best of the web. We know that content blockers are very important to Firefox users, so rest assured that despite changes to Chrome’s extensions platform, we’ll continue to ensure access to the best privacy tools available – including content-blocking extensions that not only stop creepy ads from following you around the internet, but also allow for a faster and more seamless browsing experience. In addition, Firefox has recently enabled Total Cookie Protection by default for all users, making Firefox the most private and secure major browser available across Windows, Mac and Linux.

Longtime Chrome user? We know change can be hard. But we’re here to help you make the move with any data you want to bring along, including your bookmarks, saved passwords and browsing history.  

Here’s how to easily switch from Chrome to Firefox as your desktop browser in five steps:

Step 1: Download and install Firefox from Mozilla’s download page

Step 2: If you have Chrome running, quit the app. But don’t delete it just yet.

Step 3: Open Firefox. The import tool should pop up. 

In case it doesn’t, click the menu button > Bookmarks > Manage Bookmarks > Import > Import Data from Another Browser.

Step 4: In the Import Settings and Data window, choose Chrome. Then in the next screen, select what you want to import:

  • Cookies: Bits of data stored on your computer by some websites. They’re used to remember information like your log-in, language preference and even items you added to an online shopping cart. With Total Cookie Protection on by default in Firefox, cookies are confined to the site where they were created — preventing tracking companies from using your data.
  • Browsing History: A list of web pages you’ve visited. If there’s an article you didn’t quite finish last week, bring over your browsing history so you can find it later. (Pro tip: Save it in your Pocket list next time!) 
  • Saved Passwords: Usernames and passwords you saved in Chrome. Here’s why you can trust Firefox with your passwords.
  • Bookmarks: Web pages that you bookmarked in Chrome. 

Step 5: Once you pick which data to bring to Firefox, click Continue > Done. 

If you imported your bookmarks, you’ll find them in a folder named “From Google Chrome” in the Firefox Bookmarks Toolbar. 

In case the toolbar is hidden, click the menu button > More Tools… > Customize Toolbar > Toolbars > Bookmarks Toolbar > set to Always Show, Never Show or Only Show on New Tab > Done.

We may be a little biased, but we truly believe that Mozilla’s commitment to privacy helps make the internet better and safer for everyone. We wouldn’t be able to do it without the support of our community of Firefox users, so we’d love for you to join us.  

Related stories:

The post How to easily switch from Chrome to Firefox appeared first on The Mozilla Blog.

Firefox UX: How we created inclusive writing guidelines for Firefox

Prioritizing inclusion work as a small team, and what we learned in the process.

Sharpened pencil with pencil shavings and pencil sharpener on white background.<figcaption>Photo by Eduardo Casajús Gorostiaga on Unsplash.</figcaption>

Our UX content design team recently relaunched our product content guidelines. In addition to building out more robust guidance on voice, tone, and mechanics, we also introduced new sections, including inclusive writing.

At the time, our content design team had just two people. We struggled to prioritize inclusion work while juggling project demands and string requests. If you’re in a similar cozy boat, here’s how we were able to get our inclusive writing guidelines done.

1. Set a deadline, even if it’s arbitrary

It’s hard to prioritize work that doesn’t have a deadline or isn’t a request from your product manager. You keep telling yourself you’ll eventually get to it… and you don’t.

So I set a deadline. That deadline applied to our entire new content guidelines, but this section had its own due date to track against. To hold myself accountable, I scheduled weekly check-ins to review drafts.

Spoiler: I didn’t hit my deadline. But, I made significant progress. When the deadline came and went, the draft was nearly done. This made it easier to bring the guidelines across the finish line.

2. Gather inspiration

Developing inclusive writing guidance from scratch felt overwhelming. Where would I even start? Fortunately, a lot of great work has already been done by other product content teams. I started by gathering inspiration.

There are dozens of inclusive writing resources, but not all focus exclusively on product content. These were good models to follow:

I looked for common themes as well as how other organizations tackled content specific to their products. As a content designer, I also paid close attention to how to make writing guidelines digestible and easy to understand.

Based on my audit, I developed a rough outline:

  • Clarify what we mean by ‘inclusive language’
  • Include best practices for how to consider your own biases and write inclusively
  • Provide specific writing guidance for your own product
  • Develop a list of terms to avoid and why they’re problematic. Suggest alternate terms.

3. Align on your intended audience

Inclusivity has many facets, particularly when it comes to language. Inclusive writing could apply to job descriptions, internal communications, or marketing content. To start, our focus would be writing product content only.

Our audience was mainly Firefox content designers, but occasionally product designers, product managers, and engineers might reference these guidelines as well.

Having a clear audience was helpful when our accessibility team raised questions about visual and interaction design. We debated including color and contrast guidance. Ultimately, we decided to keep scope limited to writing for the product. At a later date, we could collaborate with product designers to draft more holistic accessibility guidance for the larger UX team.

4. Keep the stakes low for your first draft

This was our first attempt at capturing how to write inclusively for Firefox. I was worried about getting it wrong, but didn’t want that fear to stop me from even getting started.

So I let go of the expectation I’d write perfect, ready-to-ship guidance on the first try. I simply tried to get a “good start” on paper. Then I brought my draft to our internal weekly content team check-in. This felt like a safe space to bring unfinished work.

The thoughtful conversations and considerations my colleagues raised helped me move the work forward. Through multiple feedback loops, we worked collaboratively to tweak, edit, and revise.

5. Gather input from subject matter experts

I then sought feedback from our Diversity & Inclusion and Accessibility teams. Before asking them to weigh in, I wrote a simple half-page brief to clarify scope, deadlines, and the type of feedback we needed.

Our cross-functional peers helped identify confusing language and suggested further additions. With their input, I made significant revisions that made the guidelines even stronger.

6. Socialize, socialize, socialize

The work doesn’t stop once you hit publish. Documentation has a tendency to collect dust on the shelf unless you make sure people know it exists. Our particular strategy includes:

  • Include on our internal wiki, with plans to publish it to our new design system later this year
  • Seek placement in our internal company-wide newsletter
  • Promote in internal Slack channels
  • Look for opportunities to reference the guidelines in internal conversations and company meetings

7. Treat the guidelines as a living document

We take a continuous learning approach to inclusive work. I expect our inclusive writing guidance to evolve.

To encourage others to participate in this work, we will continue to be open to contributions and suggestions outside of our team, making updates as we go. We also intend to review the guidelines as a content team each quarter to see what changes we may need to make.

Wrapping up

My biggest learning from the process of creating new inclusive writing guidelines is this: Your impact can start small. It can even start as a messy Google Doc. But the more you bring other people in to contribute, the stronger the end result will be in a continuing journey of learning and evolution.


Thank you to the following individuals who contributed to the inclusive writing guidelines: Meridel Walkington, Leslie Gray, Asa Dotzler, Jainaba Seckan, Steven Potter, Kelsey Carson, Eitan Isaacson, Morgan Rae Reschenberg, Emily Wachowiak

How we created inclusive writing guidelines for Firefox was originally published in Firefox User Experience on Medium, where people are continuing the conversation by highlighting and responding to this story.

The Mozilla Blog: How to set Firefox as your default browser on Windows

During a software update, your settings can sometimes change or revert back to their original state. For example, if your computer has multiple browsers installed, you may end up with a different default browser than Firefox. That’s easy to fix so that Firefox is there for you when you expect it, like when you click on web links in email and other apps.

With Firefox set as your default Windows browser, you’ll be automatically guarded from invasive tracking methods like fingerprinting and cryptomining, thanks to Firefox technology, including Total Cookie Protection, that blocks more than 10,000,000,000 trackers every day. (See what Firefox has blocked for you.) Your bookmarks, history, open tabs, form information and passwords are accessible wherever you’re logged into Firefox, from your PC to your phone to your tablet.

If you’re using a PC — like an HP, Dell, Lenovo or Acer brand computer — that runs the Microsoft Windows operating system, here’s how to set Firefox as your default:

  1. Click the menu button (three horizontal lines) and select Options.
  2. In the General panel, click Make Default.
  3. The Windows Settings app will open with the Choose default apps screen.
  4. Scroll down and click the entry under Web browser.
  5. Click on Firefox in the dialog that opens with a list of available browsers.

Firefox is now listed as your default browser. Close the Settings window by clicking the X in the upper right to save your changes.

Another option is to go through the Windows 10 operating systems settings:

  1. Go to the Windows Start menu and click the Settings icon.
  2. Click Apps, then choose Default Apps on the left pane.
  3. Scroll down and click the entry under Web browser.
  4. Click on Firefox in the dialog that opens with a list of available browsers.
  5. Firefox is now listed as your default browser. Close the Settings window to save your changes.

Originally published January 20, 2020.

See also:
Make Firefox your default browser on Mac
Make Firefox the default browser on Android
Make Firefox the default browser for iOS

The post How to set Firefox as your default browser on Windows appeared first on The Mozilla Blog.

hacks.mozilla.org: Everything Is Broken: Shipping rust-minidump at Mozilla – Part 1

For the last year I’ve been leading the development of rust-minidump, a pure-Rust replacement for the minidump-processing half of google-breakpad.

Well actually in some sense I finished that work, because Mozilla already deployed it as the crash processing backend for Firefox 6 months ago, it runs in half the time, and seems to be more reliable. (And you know, isn’t a terrifying ball of C++ that parses and evaluates arbitrary input from the internet. We did our best to isolate Breakpad, but still… yikes.)

This is a pretty fantastic result, but there’s always more work to do because Minidumps are an inky abyss that grows deeper the further you delve… wait no I’m getting ahead of myself. First the light, then the abyss. Yes. Light first.

What I can say is that we have a very solid implementation of the core functionality of minidump parsing+analysis for the biggest platforms (x86, x64, ARM, ARM64; Windows, MacOS, Linux, Android). But if you want to read minidumps generated on a PlayStation 3 or process a Full Memory dump, you won’t be served quite as well.

We’ve put a lot of effort into documenting and testing this thing, so I’m pretty confident in it!

Unfortunately! Confidence! Is! Worth! Nothing!

Which is why this is the story of how we did our best to make this nightmare as robust as we could and still got 360 dunked on from space by the sudden and incredible fuzzing efforts of @5225225.

This article is broken into two parts:

  1. what minidumps are, and how we made rust-minidump
  2. how we got absolutely owned by simple fuzzing

You are reading part 1, wherein we build up our hubris.

Background: What’s A Minidump, and Why Write rust-minidump?

Your program crashes. You want to know why your program crashed, but it happened on a user’s machine on the other side of the world. A full coredump (all memory allocated by the program) is enormous — we can’t have users sending us 4GB files! Ok let’s just collect up the most important regions of memory like the stacks and where the program crashed. Oh and I guess if we’re taking the time, let’s stuff some metadata about the system and process in there too.

Congratulations you have invented Minidumps. Now you can turn a 100-thread coredump that would otherwise be 4GB into a nice little 2MB file that you can send over the internet and do postmortem analysis on.

Or more specifically, Microsoft did. So long ago that their docs don’t even discuss platform support. MiniDumpWriteDump’s supported versions are simply “Windows”. Microsoft Research has presumably developed a time machine to guarantee this.

Then Google came along (circa 2006-2007) and said “wouldn’t it be nice if we could make minidumps on any platform”? Thankfully Microsoft had actually built the format pretty extensibly, so it wasn’t too bad to extend the format for Linux, MacOS, BSD, Solaris, and so on. Those extensions became google-breakpad (or just Breakpad) which included a ton of different tools for generating, parsing, and analyzing their extended minidump format (and native Microsoft ones).
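Whatever the platform, every minidump still opens with the same small fixed-size header. Here is an illustrative sketch of parsing it (the field layout follows Microsoft’s MINIDUMP_HEADER; the struct and helper names are invented for this example and are not rust-minidump’s actual API):

```rust
/// A few fields of the MINIDUMP_HEADER that starts every minidump file.
/// (Illustrative only; rust-minidump's real types live in the minidump crate.)
#[derive(Debug, PartialEq)]
struct MinidumpHeader {
    version: u32,
    stream_count: u32,
    stream_directory_rva: u32,
}

fn read_u32(buf: &[u8], at: usize) -> Option<u32> {
    let bytes = buf.get(at..at + 4)?;
    Some(u32::from_le_bytes(bytes.try_into().ok()?))
}

fn parse_header(buf: &[u8]) -> Option<MinidumpHeader> {
    // Every minidump starts with the magic bytes "MDMP".
    if buf.get(0..4)? != b"MDMP" {
        return None;
    }
    Some(MinidumpHeader {
        version: read_u32(buf, 4)?,
        stream_count: read_u32(buf, 8)?,
        stream_directory_rva: read_u32(buf, 12)?,
    })
}

fn main() {
    // Hand-build a 32-byte header claiming 2 streams, directory at offset 32.
    let mut dump = b"MDMP".to_vec();
    dump.extend_from_slice(&0xA793u32.to_le_bytes()); // version
    dump.extend_from_slice(&2u32.to_le_bytes()); // stream count
    dump.extend_from_slice(&32u32.to_le_bytes()); // stream directory RVA
    dump.extend_from_slice(&[0; 16]); // checksum, timestamp, flags
    let header = parse_header(&dump).expect("valid header");
    assert_eq!(header.stream_count, 2);
}
```

The stream directory the header points at is just an array of `stream_count` entries, each naming a stream type and where its bytes live, which is what makes the format so easy to extend with new platform-specific streams.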

Mozilla helped out with this a lot because apparently, our crash reporting infrastructure (“Talkback”) was miserable circa 2007, and this seemed like a nice improvement. Needless to say, we’re pretty invested in breakpad’s minidumps at this point.

Fast forward to the present day and, in a hilarious twist of fate, products like VSCode mean that Microsoft now supports applications that run on Linux and MacOS. So Microsoft runs breakpad in production and has to handle non-Microsoft minidumps somewhere in its crash reporting infra: someone else’s extension of their own format is somehow their problem now!

Meanwhile, Google has kind-of moved on to Crashpad. I say kind-of because there’s still a lot of Breakpad in there, but they’re more interested in building out tooling on top of it than improving Breakpad itself. Having made a few changes to Breakpad: honestly fair, I don’t want to work on it either. Still, this was a bit of a problem for us, because it meant the project became increasingly under-staffed.

By the time I started working on crash reporting, Mozilla had basically given up on upstreaming fixes/improvements to Breakpad, and was just using its own patched fork. But even without the need for upstreaming patches, every change to Breakpad filled us with dread: many proposed improvements to our crash reporting infrastructure stalled out at “time to implement this in Breakpad”.

Why is working on Breakpad so miserable, you ask?

Parsing and analyzing minidumps is basically an exercise in writing a fractal parser of platform-specific formats nested in formats nested in formats. For many operating systems. For many hardware architectures. And all the inputs you’re parsing and analyzing are terrible and buggy so you have to write a really permissive parser and crawl forward however you can.

Some specific MSVC toolchain that was part of Windows XP had a bug in its debuginfo format? Too bad, symbolicate that stack frame anyway!

The program crashed because it horribly corrupted its own stack? Too bad, produce a backtrace anyway!

The minidump writer itself completely freaked out and wrote a bunch of garbage to one stream? Too bad, produce whatever output you can anyway!

Hey, you know who has a lot of experience dealing with really complicated permissive parsers written in C++? Mozilla! That’s like the core functionality of a web browser.

Do you know Mozilla’s secret solution to writing really complicated permissive parsers in C++?

We stopped doing it.

We developed Rust and ported our nastiest parsers to it.

We’ve done it a lot, and when we do we’re always like “wow this is so much more reliable and easy to maintain and it’s even faster now”. Rust is a really good language for writing parsers. C++ really isn’t.

So we Rewrote It In Rust (or as the kids call it, “Oxidized It”). Breakpad is big, so we haven’t actually covered all of its features. We’ve specifically written and deployed:

  • dump_syms which processes native build artifacts into symbol files.
  • rust-minidump which is a collection of crates that parse and analyze minidumps. Or more specifically, we deployed minidump-stackwalk, which is the high-level cli interface to all of rust-minidump.

Notably missing from this picture is minidump writing, or what google-breakpad calls a client (because it runs on the client’s machine). We are working on a rust-based minidump writer, but it’s not something we can recommend using quite yet (although it has sped up a lot thanks to help from Embark Studios).

This is arguably the messiest and hardest work because it has a horrible job: use a bunch of native system APIs to gather up a bunch of OS-specific and Hardware-specific information about the crash AND do it for a program that just crashed, on a machine that caused the program to crash.

We have a long road ahead but every time we get to the other side of one of these projects it’s wonderful.


Background: Stackwalking and Calling Conventions

One of rust-minidump’s (minidump-stackwalk’s) most important jobs is to take the state for a thread (general purpose registers and stack memory) and create a backtrace for that thread (unwind/stackwalk). This is a surprisingly complicated and messy job, made only more complicated by the fact that we are trying to analyze the memory of a process that got messed up enough to crash.

This means our stackwalkers are inherently working with dubious data, and all of our stackwalking techniques are based on heuristics that can go wrong and we can very easily find ourselves in situations where the stackwalk goes backwards or sideways or infinite and we just have to try to deal with it!

It’s also pretty common to see a stackwalker start hallucinating, which is my term for “the stackwalker found something that looked plausible enough and went on a wacky adventure through the stack and made up a whole pile of useless garbage frames”. Hallucination is most common near the bottom of the stack where it’s also least offensive. This is because each frame you walk is another chance for something to go wrong, but also increasingly uninteresting because you’re rarely interested in confirming that a thread started in The Same Function All Threads Start In.
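For intuition, here’s a toy version of the stack-scanning heuristic that makes hallucination possible (the module range, stack values, and function name are all invented for this sketch; the real scanner applies many more plausibility checks):

```rust
use std::ops::Range;

/// Scan up the stack for the first word that could plausibly be a return
/// address. Here "plausible" just means "points into a known module's code".
fn scan_for_return_address(stack_words: &[u64], module: Range<u64>) -> Option<(usize, u64)> {
    for (i, &word) in stack_words.iter().enumerate() {
        if module.contains(&word) {
            // This might be a real return address... or random garbage that
            // happens to point at code. Guess wrong and the walk hallucinates.
            return Some((i, word));
        }
    }
    None
}

fn main() {
    // Two garbage words, then something that lands in our fake module.
    let stack = [0x12u64, 0xFFFF_FFFF, 0x7abc, 0x7def];
    assert_eq!(scan_for_return_address(&stack, 0x7000..0x8000), Some((2, 0x7abc)));
}
```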

All of these problems would basically go away if everyone agreed to properly preserve their cpu’s PERFECTLY GOOD DEDICATED FRAME POINTER REGISTER. Just kidding, turning on frame pointers doesn’t really work either because Microsoft invented chaos frame pointers that can’t be used for unwinding! I assume this happened because they accidentally stepped on the wrong butterfly while they were traveling back in time to invent minidumps. (I’m sure it was a decision that made more sense 20 years ago, but it has not aged well.)
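As a concrete illustration of why frame pointers are so appealing when they do work, here’s a toy frame-pointer unwinder over a captured chunk of stack memory (a sketch with fabricated addresses and values, not rust-minidump’s actual walker):

```rust
/// Read a little-endian u64 out of captured stack memory, where `base` is the
/// address the captured bytes start at. (Invented helper for this sketch.)
fn read_u64(stack: &[u8], base: u64, addr: u64) -> Option<u64> {
    let off = addr.checked_sub(base)? as usize;
    let bytes = stack.get(off..off + 8)?;
    Some(u64::from_le_bytes(bytes.try_into().ok()?))
}

/// Follow the saved-frame-pointer chain, collecting return addresses.
fn walk_frame_pointers(stack: &[u8], base: u64, mut fp: u64) -> Vec<u64> {
    let mut frames = Vec::new();
    while fp != 0 {
        // On x86-64 each frame's fp points at [saved fp][return address].
        let Some(caller_fp) = read_u64(stack, base, fp) else { break };
        let Some(ret_addr) = read_u64(stack, base, fp + 8) else { break };
        // Guard against walking backwards or looping forever on corrupt data.
        if caller_fp != 0 && caller_fp <= fp {
            break;
        }
        frames.push(ret_addr);
        fp = caller_fp;
    }
    frames
}

fn main() {
    fn put(s: &mut [u8], off: usize, v: u64) {
        s[off..off + 8].copy_from_slice(&v.to_le_bytes());
    }
    // Fabricate 48 bytes of stack at address 0x1000 holding three frames.
    let mut stack = vec![0u8; 48];
    put(&mut stack, 0, 0x1010);  // frame 0: saved fp -> next frame
    put(&mut stack, 8, 0xAAAA);  // frame 0: return address
    put(&mut stack, 16, 0x1020); // frame 1
    put(&mut stack, 24, 0xBBBB);
    put(&mut stack, 32, 0);      // frame 2: fp chain ends here
    put(&mut stack, 40, 0xCCCC);
    assert_eq!(walk_frame_pointers(&stack, 0x1000, 0x1000), vec![0xAAAA, 0xBBBB, 0xCCCC]);
}
```

Note how much defensive checking even this trivial strategy needs: every read can fall off the captured memory, and the chain itself can be corrupt.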

If you would like to learn more about the different techniques for unwinding, I wrote about them over here in my article on Apple’s Compact Unwind Info. I’ve also attempted to document breakpad’s STACK WIN and STACK CFI unwind info formats here, which are more similar to the DWARF and PE32 unwind tables (which are basically tiny programming languages).

If you would like to learn more about ABIs in general, I wrote an entire article about them here. The end of that article also includes an introduction to how calling conventions work. Understanding calling conventions is key to implementing unwinders.


How Hard Did You Really Test Things?

Hopefully you now have a bit of a glimpse into why analyzing minidumps is an enormous headache. And of course you know how the story ends: that fuzzer kicks our butts! But of course to really savor our defeat, you have to see how hard we tried to do a good job! It’s time to build up our hubris and pat ourselves on the back.

So how much work actually went into making rust-minidump robust before the fuzzer went to work on it?

Quite a bit!

I’ll never argue all the work we did was perfect, but we definitely did some good work here, both for synthetic inputs and real-world ones. Probably the biggest “flaw” in our methodology was the fact that we were only focused on getting Firefox’s use case to work. Firefox runs on a lot of platforms and sees a lot of messed up stuff, but it’s still a fairly coherent product that only uses so many features of minidumps.

This is one of the nice benefits of our recent work with Sentry, which is basically a Crash Reporting As A Service company. They are way more liable to stress test all kinds of weird corners of the format that Firefox doesn’t, and they have definitely found (and fixed!) some places where something is wrong or missing! (And they recently deployed it into production too! 🎉)

But hey don’t take my word for it, check out all the different testing we did:

Synthetic Minidumps for Unit Tests

rust-minidump includes a synthetic minidump generator which lets you come up with a high-level description of the contents of a minidump, and then produces an actual minidump binary that we can feed into the full parser:

// Let’s make a synth minidump with this particular Crashpad Info…

let module = ModuleCrashpadInfo::new(42, Endian::Little)
    .add_simple_annotation("simple", "module")
    .add_annotation_object("string", AnnotationValue::String("value".to_owned()))
    .add_annotation_object("invalid", AnnotationValue::Invalid)
    .add_annotation_object("custom", AnnotationValue::Custom(0x8001, vec![42]));

let crashpad_info = CrashpadInfo::new(Endian::Little)
    .add_simple_annotation("simple", "info");

let dump = SynthMinidump::with_endian(Endian::Little).add_crashpad_info(crashpad_info);

// convert the synth minidump to binary and read it like a normal minidump
let dump = read_synth_dump(dump).unwrap();

// Now check that the minidump reports the values we expect…

minidump-synth intentionally avoids sharing layout code with the actual implementation so that incorrect changes to layouts won’t “accidentally” pass tests.

A brief aside for some history: this testing framework was started by the original lead on this project, Ted Mielczarek. He started rust-minidump as a side project to learn Rust when 1.0 was released and just never had the time to finish it. Back then he was working at Mozilla and also a major contributor to Breakpad, which is why rust-minidump has a lot of similar design choices and terminology.

This case is no exception: our minidump-synth is a shameless copy of the synth-minidump utility in breakpad’s code, which was originally written by our other coworker Jim Blandy. Jim is one of the only people in the world that I will actually admit writes really good tests and docs, so I am totally happy to blatantly copy his work here.

Since this was all a learning experiment, Ted was understandably less rigorous about testing than usual. This meant a lot of minidump-synth was unimplemented when I came along, which also meant lots of minidump features were completely untested. (He built an absolutely great skeleton, just hadn’t had the time to fill it all in!)

We spent a lot of time filling in more of minidump-synth’s implementation so we could write more tests and catch more issues, but this is definitely the weakest part of our tests. Some stuff was implemented before I got here, so I don’t even know what tests are missing!

This is a good argument for some code coverage checks, but it would probably come back with “wow you should write a lot more tests” and we would all look at it and go “wow we sure should” and then we would probably never get around to it, because there are many things we should do.

On the other hand, Sentry has been very useful in this regard because they already have a mature suite of tests full of weird corner cases they’ve built up over time, so they can easily identify things that really matter, know what the fix should roughly be, and can contribute pre-existing test cases!

Integration and Snapshot Tests

We tried our best to shore up coverage issues in our unit tests by adding more holistic tests. There’s a few checked in Real Minidumps that we have some integration tests for to make sure we handle Real Inputs properly.

We even wrote a bunch of integration tests for the CLI application that snapshot its output to confirm that we never accidentally change the results.

Part of the motivation for this is to ensure we don’t break the JSON output, which we also wrote a very detailed schema document for and are trying to keep stable so people can actually rely on it while the actual implementation details are still in flux.

Yes, minidump-stackwalk is supposed to be stable and reasonable to use in production!

For our snapshot tests we use insta, which I think is fantastic and more people should use. All you need to do is assert_snapshot! any output you want to keep track of and it will magically take care of the storing, loading, and diffing.

Here’s one of the snapshot tests where we invoke the CLI interface and snapshot stdout:

fn test_evil_json() {
    // For a while this didn't parse right
    let bin = env!("CARGO_BIN_EXE_minidump-stackwalk");
    let output = Command::new(bin)
        // (arguments omitted in this excerpt; the real test points the CLI
        // at the evil input and requests pretty JSON output)
        .output()
        .unwrap();

    let stdout = String::from_utf8(output.stdout).unwrap();
    let stderr = String::from_utf8(output.stderr).unwrap();

    insta::assert_snapshot!("json-pretty-evil-symbols", stdout);
    assert_eq!(stderr, "");
}

Stackwalker Unit Testing

The stackwalker is easily the most complicated and subtle part of the new implementation, because every platform can have slight quirks and you need to implement several different unwinding strategies and carefully tune everything to work well in practice.

The scariest part of this was the call frame information (CFI) unwinders, because they are basically little virtual machines we need to parse and execute at runtime. Thankfully breakpad had long ago smoothed over this issue by defining a simplified and unified CFI format, STACK CFI (well, nearly unified, x86 Windows was still a special case as STACK WIN). So even if DWARF CFI has a ton of complex features, we mostly need to implement a Reverse Polish Notation Calculator except it can read registers and load memory from addresses it computes (and for STACK WIN it has access to named variables it can declare and mutate).
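To make that concrete, here is a toy postfix evaluator in the spirit of STACK CFI expressions (names and semantics are simplified and invented for this sketch; the real evaluator also supports assignments, STACK WIN’s named variables, and far better error reporting):

```rust
use std::collections::HashMap;

/// Toy "Reverse Polish Notation" evaluator in the spirit of breakpad's
/// STACK CFI expressions. Illustrative only.
fn eval_cfi_expr(
    expr: &str,
    regs: &HashMap<&str, u64>,
    memory: &HashMap<u64, u64>,
) -> Option<u64> {
    let mut stack: Vec<u64> = Vec::new();
    for token in expr.split_whitespace() {
        match token {
            "+" | "-" | "*" => {
                let rhs = stack.pop()?;
                let lhs = stack.pop()?;
                // checked_* so garbage inputs become errors, not overflows.
                stack.push(match token {
                    "+" => lhs.checked_add(rhs)?,
                    "-" => lhs.checked_sub(rhs)?,
                    _ => lhs.checked_mul(rhs)?,
                });
            }
            // `^` dereferences: load a word from the (simulated) dump memory.
            "^" => {
                let addr = stack.pop()?;
                stack.push(*memory.get(&addr)?);
            }
            name => {
                if let Some(&val) = regs.get(name) {
                    stack.push(val); // register reference like `$esp`
                } else {
                    stack.push(name.parse().ok()?); // integer literal
                }
            }
        }
    }
    // A well-formed expression leaves exactly one value on the stack.
    if stack.len() == 1 { stack.pop() } else { None }
}

fn main() {
    let regs = HashMap::from([("$esp", 0x1000u64)]);
    let memory = HashMap::from([(0x1004u64, 0xDEADu64)]);
    // "$esp 4 + ^": take $esp, add 4, dereference: the saved return address.
    assert_eq!(eval_cfi_expr("$esp 4 + ^", &regs, &memory), Some(0xDEAD));
}
```

Even in this stripped-down form you can see why the corner cases multiply: every operator can underflow the stack, every arithmetic op can overflow, and every dereference can point at memory the dump didn’t capture.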

Unfortunately, Breakpad’s description for this format is pretty underspecified so I had to basically pick some semantics I thought made sense and go with that. This made me extremely paranoid about the implementation. (And yes I will be more first-person for this part, because this part was genuinely where I personally spent most of my time and did a lot of stuff from scratch. All the blame belongs to me here!)

The STACK WIN / STACK CFI parser+evaluator is 1700 lines. 500 of those lines are detailed documentation and discussion of the format, and 700 of those lines are an enormous pile of ~80 test cases where I tried to come up with every corner case I could think of.

I even checked in two tests I knew were failing just to be honest that there were a couple cases to fix! One of them is a corner case involving dividing by a negative number that almost certainly just doesn’t matter. The other is a buggy input that old x86 Microsoft toolchains actually produce and parsers need to deal with. The latter was fixed before the fuzzing started.

And 5225225 still found an integer overflow in the STACK WIN preprocessing step! (Not actually that surprising, it’s a hacky mess that tries to cover up for how messed up x86 Windows unwinding tables were.)
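For flavor, here's the general shape of the fix for that class of bug, using Rust's checked arithmetic (a hypothetical sketch; `region_end` and its types are made up, not the actual rust-minidump code):

```rust
/// Hypothetical sketch: computing the end of a STACK WIN region.
/// A raw `base + size` can overflow on hostile input; `checked_add`
/// turns that into a recoverable `None` instead of a panic (debug
/// builds) or a silent wraparound (release builds).
fn region_end(base: u32, size: u32) -> Option<u32> {
    base.checked_add(size)
}

fn main() {
    assert_eq!(region_end(0x1000, 0x20), Some(0x1020));
    // A fuzzer-style input: size pushes past u32::MAX.
    assert_eq!(region_end(0xFFFF_FF00, 0x200), None);
}
```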

(The code isn’t terribly interesting here, it’s just a ton of assertions that a given input string produces a given output/error.)

Of course, I wasn’t satisfied with just coming up with my own semantics and testing them: I also ported most of breakpad’s own stackwalker tests to rust-minidump! This definitely found a bunch of bugs I had, but also taught me some weird quirks in Breakpad’s stackwalkers that I’m not sure I actually agree with. But in this case I was flying so blind that even being bug-compatible with Breakpad was some kind of relief.

Those tests also included several tests for the non-CFI paths, which were similarly wobbly and quirky. I still really hate a lot of the weird platform-specific rules they have for stack scanning, but I’m forced to work on the assumption that they might be load-bearing. (I definitely had several cases where I disabled a breakpad test because it was “obviously nonsense” and then hit it in the wild while testing. I quickly learned to accept that Nonsense Happens And Cannot Be Ignored.)

One major thing I didn’t replicate was some of the really hairy hacks for STACK WIN. Like there are several places where they introduce extra stack-scanning to try to deal with the fact that stack frames can have mysterious extra alignment that the Windows unwinding tables just don’t tell you about? I guess?

There’s almost certainly some exotic situations that rust-minidump does worse on because of this, but it probably also means we do better in some random other situations too. I never got the two to perfectly agree, but at some point the divergences were all in weird enough situations, and as far as I was concerned both stackwalkers were producing equally bad results in a bad situation. Absent any reason to prefer one over the other, divergence seemed acceptable to keep the implementation cleaner.

Here’s a simplified version of one of the ported breakpad tests, if you’re curious (thankfully minidump-synth is based off of the same binary data mocking framework these tests use):

async fn test_x86_frame_pointer() {
    let mut f = TestFixture::new();
    let frame0_ebp = Label::new();
    let frame1_ebp = Label::new();
    let mut stack = Section::new();

    // Set up the stack and registers so frame pointers will work
    stack = stack
        .append_repeated(12, 0) // frame 0: space
        .mark(&frame0_ebp)      // frame 0 %ebp points here
        .D32(&frame1_ebp)       // frame 0: saved %ebp
        .D32(0x40008679)        // frame 0: return address
        .append_repeated(8, 0)  // frame 1: space
        .mark(&frame1_ebp)      // frame 1 %ebp points here
        .D32(0)                 // frame 1: saved %ebp (stack end)
        .D32(0);                // frame 1: return address (stack end)
    f.raw.eip = 0x4000c7a5;
    f.raw.esp = stack.start().value().unwrap() as u32;
    f.raw.ebp = frame0_ebp.value().unwrap() as u32;

    // Check the stackwalker's output:
    let s = f.walk_stack(stack).await;
    assert_eq!(s.frames.len(), 2);

    let f0 = &s.frames[0];
    assert_eq!(f0.trust, FrameTrust::Context);
    assert_eq!(f0.context.valid, MinidumpContextValidity::All);
    assert_eq!(f0.instruction, 0x4000c7a5);

    let f1 = &s.frames[1];
    assert_eq!(f1.trust, FrameTrust::FramePointer);
    assert_eq!(f1.instruction, 0x40008678);
}

A Dedicated Production Diffing, Simulating, and Debugging Tool

Because minidumps are so horribly fractal and corner-casey, I spent a lot of time terrified of subtle issues that would become huge disasters if we ever actually tried to deploy to production. So I also spent a bunch of time building socc-pair, which takes the id of a crash report from Mozilla’s crash reporting system and pulls down the minidump, the old breakpad-based implementation’s output, and extra metadata.

It then runs a local rust-minidump (minidump-stackwalk) implementation on the minidump and does a domain-specific diff over the two outputs. The most substantial part of this is a fuzzy diff on the stackwalks that tries to better handle situations like when one implementation adds an extra frame but the two otherwise agree. It also uses the reported techniques each implementation used to try to identify whose output is more trustworthy when they totally diverge.
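That fuzzy stackwalk diff is morally an alignment problem: tolerate an inserted frame on one side as long as the sequences otherwise agree. A toy sketch of the idea (not socc-pair's actual algorithm; frames are reduced to bare function names here):

```rust
/// Toy fuzzy comparison of two stackwalks, identified just by function
/// name. Returns true if the walks agree modulo at most `max_skips`
/// extra frames inserted on either side.
fn fuzzy_match(a: &[&str], b: &[&str], max_skips: usize) -> bool {
    let (mut i, mut j, mut skips) = (0, 0, 0);
    while i < a.len() && j < b.len() {
        if a[i] == b[j] {
            i += 1;
            j += 1;
        } else {
            // One side has an extra frame; skip a frame on the longer side.
            skips += 1;
            if skips > max_skips {
                return false;
            }
            if a.len() - i > b.len() - j { i += 1 } else { j += 1 }
        }
    }
    // Any trailing unmatched frames also count as skips.
    skips + (a.len() - i) + (b.len() - j) <= max_skips
}

fn main() {
    let old = ["crash", "helper", "caller", "main"];
    let new = ["crash", "caller", "main"]; // one walker dropped a frame
    assert!(fuzzy_match(&old, &new, 1));
    assert!(!fuzzy_match(&old, &new, 0));
    assert!(!fuzzy_match(&["a", "b"], &["c", "d"], 1));
}
```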

I also ended up adding a bunch of mocking and benchmarking functionality to it as well, as I found more and more places where I just wanted to simulate a production environment.

Oh, also: I added really detailed trace-logging to the stackwalker so that I could easily debug, post-mortem, why it made the decisions it did.
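The idea is trivial but invaluable: record every decision the unwinder makes so a failed walk can be replayed after the fact. A minimal sketch (the real trace output in minidump-stackwalk is much richer; this type is made up):

```rust
/// Toy unwinder-decision trace: record why each frame was produced so
/// a bad stackwalk can be debugged after the fact.
struct Trace(Vec<String>);

impl Trace {
    fn log(&mut self, frame: usize, msg: &str) {
        self.0.push(format!("[frame {frame}] {msg}"));
    }
}

fn main() {
    let mut trace = Trace(Vec::new());
    trace.log(0, "context frame, trust=Context");
    trace.log(1, "CFI lookup failed, falling back to frame pointer");
    trace.log(1, "found saved ebp, trust=FramePointer");
    assert_eq!(trace.0.len(), 3);
    assert!(trace.0[1].contains("falling back"));
}
```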

This tool found so many issues and more importantly has helped me quickly isolate their causes. I am so happy I made it. Because of it, we know we actually fixed several issues that happened with the old breakpad implementation, which is great!

Here’s a trimmed down version of the kind of report socc-pair would produce (yeah I abused diff syntax to get error highlighting. It’s a great hack, and I love it like a child):

comparing json...

: {
    crash_info: {
        address: 0x7fff1760aca0
        crashing_thread: 8
    crashing_thread: {
        frames: [
            0: {
                file: wrappers.cpp:1750da2d7f9db490b9d15b3ee696e89e6aa68cb7
                frame: 0
                function: RustMozCrash(char const*, int, char const*)
                function_offset: 0x00000010
-               did not match
+               line: 17
-               line: 20
                module: xul.dll


    unloaded_modules: [
        0: {
            base_addr: 0x7fff48290000
-           local val was null instead of:
            code_id: 68798D2F9000
            end_addr: 0x7fff48299000
            filename: KBDUS.DLL
        1: {
            base_addr: 0x7fff56020000
            code_id: DFD6E84B14000
            end_addr: 0x7fff56034000
            filename: resourcepolicyclient.dll
~   ignoring field write_combine_size: "0"

- Total errors: 288, warnings: 39

benchmark results (ms):
    2388, 1986, 2268, 1989, 2353, 
    average runtime: 00m:02s:196ms (2196ms)
    median runtime: 00m:02s:268ms (2268ms)
    min runtime: 00m:01s:986ms (1986ms)
    max runtime: 00m:02s:388ms (2388ms)

max memory (rss) results (bytes):
    267755520, 261152768, 272441344, 276131840, 279134208, 
    average max-memory: 258MB (271323136 bytes)
    median max-memory: 259MB (272441344 bytes)
    min max-memory: 249MB (261152768 bytes)
    max max-memory: 266MB (279134208 bytes)

Output Files: 
    * (download) Minidump: b4f58e9f-49be-4ba5-a203-8ef160211027.dmp
    * (download) Socorro Processed Crash: b4f58e9f-49be-4ba5-a203-8ef160211027.json
    * (download) Raw JSON: b4f58e9f-49be-4ba5-a203-8ef160211027.raw.json
    * Local minidump-stackwalk Output: b4f58e9f-49be-4ba5-a203-8ef160211027.local.json
    * Local minidump-stackwalk Logs: b4f58e9f-49be-4ba5-a203-8ef160211027.log.txt

Staging and Deploying to Production

Once we were confident enough in the implementation, a lot of the remaining testing was taken over by Will Kahn-Greene, who’s responsible for a lot of the server-side details of our crash-reporting infrastructure.

Will spent a bunch of time getting machinery set up to manage the deployment and monitoring of rust-minidump. He also did a lot of the hard work of cleaning up all our server-side configuration scripts to handle any differences between the two implementations. (Although I spent a lot of time on compatibility, we both agreed this was a good opportunity to clean up old cruft and mistakes.)

Once all of this was set up, he turned it on in staging and we got our first look at how rust-minidump actually worked in ~production.


Our staging servers take in about 10% of the inputs that also go to our production servers, but even at that reduced scale we very quickly found several new corner cases and we were getting tons of crashes, which is mildly embarrassing for the thing that handles other people’s crashes.

Will did a great job here in monitoring and reporting the issues. Thankfully they were all fairly easy for us to fix. Eventually, everything smoothed out and things seemed to be working just as reliably as the old implementation on the production server. The only places where we were completely failing to produce any output were for horribly truncated minidumps that may as well have been empty files.

We originally did have some grand ambitions of running socc-pair on everything the staging servers processed or something to get really confident in the results. But by the time we got to that point, we were completely exhausted and feeling pretty confident in the new implementation.

Eventually Will just said “let’s turn it on in production” and I said “AAAAAAAAAAAAAAA”.

This moment was pure terror. There had always been more corner cases. There’s no way we could just be done. This will probably set all of Mozilla on fire and delete Firefox from the internet!

But Will convinced me. We wrote up some docs detailing all the subtle differences and sent them to everyone we could. Then the moment of truth finally came: Will turned it on in production, and I got to really see how well it worked in production:

*dramatic drum roll*

It worked fine.

After all that stress and anxiety, we turned it on and it was fine.

Heck, I’ll say it: it ran well.

It was faster, it crashed less, and we even knew it fixed some issues.

I was in a bit of a stupor for the rest of that week, because I kept waiting for the other shoe to drop. I kept waiting for someone to emerge from the mist and explain that I had somehow bricked Thunderbird or something. But no, it just worked.

So we left for the holidays, and I kept waiting for it to break, but it was still fine.

I am honestly still shocked about this!

But hey, as it turns out we really did put a lot of careful work into testing the implementation. At every step we found new problems but that was good, because once we got to the final step there were no more problems to surprise us.

And the fuzzer still kicked our butts afterwards.

But that’s part 2! Thanks for reading!


The post Everything Is Broken: Shipping rust-minidump at Mozilla – Part 1 appeared first on Mozilla Hacks - the Web developer blog.

The Mozilla Blog: Firefox rolls out Total Cookie Protection by default to all users worldwide

Take back your privacy

Starting today, Firefox is rolling out Total Cookie Protection by default to all Firefox users worldwide, making Firefox the most private and secure major browser available across Windows, Mac and Linux. Total Cookie Protection is Firefox’s strongest privacy protection to date, confining cookies to the site where they were created, thus preventing tracking companies from using these cookies to track your browsing from site to site.

Whether it’s applying for a student loan, seeking treatment or advice through a health site, or browsing an online dating app, massive amounts of your personal information are online — and this data is leaking all over the web. The hyper-specific-to-you ads you so often see online are made possible by cookies that are used to track your behavior across sites and build an extremely sophisticated profile of who you are.

Recent stories (including an excellent Last Week Tonight episode) have shown how robust, yet under-the-radar, the data selling economy is and how easy it is for anyone to buy your data, combine it with more data about you and use it for a variety of purposes, even beyond advertising.

It’s an alarming reality — the possibility that your every move online is being watched, tracked and shared — and one that’s antithetical to the open web we at Mozilla have strived to build. That’s why we developed Total Cookie Protection to help keep you safe online.

What is Total Cookie Protection?

Total Cookie Protection offers strong protections against tracking without affecting your browsing experience.

Total Cookie Protection creates a separate cookie jar for each website you visit. (Illustration: Meghan Newell)

Total Cookie Protection works by creating a separate “cookie jar” for each website you visit. Instead of allowing trackers to link up your behavior on multiple sites, they just get to see behavior on individual sites. Any time a website, or third-party content embedded in a website, deposits a cookie in your browser, that cookie is confined to the cookie jar assigned to only that website. No other websites can reach into the cookie jars that don’t belong to them and find out what the other websites’ cookies know about you — giving you freedom from invasive ads and reducing the amount of information companies gather about you. 

This approach strikes the balance between eliminating the worst privacy properties of third-party cookies – in particular the ability to track you – and allowing those cookies to fulfill their less invasive use cases (e.g. to provide accurate analytics). With Total Cookie Protection in Firefox, people can enjoy better privacy and have the great browsing experience they’ve come to expect. 

Total Cookie Protection offers additional privacy protections beyond those provided by our existing anti-tracking features. Enhanced Tracking Protection (ETP), which we launched in 2018, works by blocking trackers based on a maintained list. If a party is on that list, they lose the ability to use third-party cookies. ETP was a huge privacy win for Firefox users, but we’ve known this approach has some shortcomings. If a tracker for some reason isn’t on that list, they can still track users and violate their privacy. And if an attacker wants to thwart ETP, they can set up a new tracking domain that isn’t on the list. Total Cookie Protection avoids these problems by restricting the functionality for all cookies, not just for those on a defined list.  
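Conceptually, the change is just a different storage key: cookies are partitioned by the top-level site they were set under, not only by the cookie's own origin. A toy sketch of that idea (not Firefox's actual implementation; the types here are made up):

```rust
use std::collections::HashMap;

/// Toy partitioned cookie store: cookies are keyed by
/// (top-level site, cookie origin) instead of origin alone, so a
/// tracker embedded on two sites gets two unrelated cookie jars.
#[derive(Default)]
struct CookieJars(HashMap<(String, String), Vec<String>>);

impl CookieJars {
    fn set(&mut self, top_level: &str, origin: &str, cookie: &str) {
        self.0
            .entry((top_level.to_string(), origin.to_string()))
            .or_default()
            .push(cookie.to_string());
    }

    fn get(&self, top_level: &str, origin: &str) -> &[String] {
        self.0
            .get(&(top_level.to_string(), origin.to_string()))
            .map(|v| v.as_slice())
            .unwrap_or(&[])
    }
}

fn main() {
    let mut jars = CookieJars::default();
    // The same tracker embedded on two different sites...
    jars.set("news.example", "tracker.example", "id=abc");
    // ...cannot see its cookie from the other site's jar.
    assert_eq!(jars.get("news.example", "tracker.example").len(), 1);
    assert!(jars.get("shop.example", "tracker.example").is_empty());
}
```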

A Culmination of Years of Anti-Tracking Work

Total Cookie Protection is the culmination of years of work to fight the privacy catastrophe that stems from online trackers. We first began to block tracking in 2015 with the release of Tracking Protection, a feature people could turn on by going into Private Browsing mode. We recognized at that time that browser makers couldn’t just sit back and let their users be abused. In 2018, we introduced Enhanced Tracking Protection and turned it on by default for all Firefox users in 2019, reflecting our commitment to actively protect our users rather than expect them to protect themselves. Since then, we have continued to make progress towards blocking trackers and ending cross-site tracking by introducing protections against fingerprinting and supercookies.

Today’s release of Total Cookie Protection is the result of experimentation and feature testing, first in ETP Strict Mode and Private Browsing windows, then in Firefox Focus earlier this year. We’re now making it a default feature for all Firefox desktop users worldwide.

Our long history of fighting online tracking manifests itself in our advocacy to policy makers and other tech companies to reinforce their own privacy protections. We also push to make privacy an industry priority through our efforts in industry standards bodies when shaping the future of online advertising. Furthermore, we created the Privacy Not Included guide to simplify the very complicated privacy landscape and help consumers shop smarter and safer for products that connect to the internet.

Over more than a decade, Mozilla has proudly been leading the fight to build a more private internet. Bringing Total Cookie Protection to all Firefox users is our next step towards creating a better internet, one where your privacy is not optional.

Take back your privacy by downloading Firefox today.

Firefox browser logo

Get Firefox

Get the browser that protects what’s important

The post Firefox rolls out Total Cookie Protection by default to all users worldwide appeared first on The Mozilla Blog.

The Mozilla Blog: Calling for Antitrust Reform

Mozilla supports the American Innovation and Choice Online Act (AICOA). The time for change is now.

It’s time for governments to address the reality that five tech companies—not everyday consumers—control our online experiences today. Updated competition laws are essential for the internet to be private, secure, interoperable, open, accessible, and transparent, and to strike a balance between commercial profit and public benefit. This is Mozilla’s vision for the internet. For a number of years, we have shared our views supporting government competition efforts globally to achieve it.

One such proposal now under discussion in the US Congress is the American Innovation and Choice Online Act (AICOA). This bill is an important step in correcting two decades of digital centralization by creating a level playing field for smaller, independent software companies to compete. We support this bipartisan effort led by Senators Amy Klobuchar and Chuck Grassley and Representatives David Cicilline and Ken Buck. 

We believe that AICOA will facilitate innovation and consumer choice by ensuring that big tech companies cannot give preference to their own products and services over the rich diversity of competitive options offered by others. Mozilla—and many other independent companies—cannot effectively compete without this antitrust law. We are disadvantaged by the fact that current and future Firefox users, many of whom are privacy and security focused, cannot easily install and keep Firefox as their preferred browser because of confusing operating system messages and settings. We are further challenged by app store rules designed to keep out Gecko, our independent browser engine that powers Firefox, Tor and other browsers. We are stuck when big tech companies do not offer us and other developers open APIs and other functionality needed for true interoperability. 

A fair playing field is vital to ensure that Mozilla and other independent companies can continue to act as a counterweight to big tech and shape the future of the internet to be more private and more secure. We understand that the bill sponsors intend AICOA to regulate only gatekeeper companies and their controlled products. It is not intended to regulate or impact the agreements or product offerings of non-regulated independent companies like Mozilla that partner with gatekeepers for critical services. Nor does it require trading off privacy and security in order to enhance competition.

We disagree with the position taken by opponents to AICOA that competition legislation will undermine privacy and security. It is true that companies like Apple and Google offer key privacy features and security services that protect millions; for example, Apple’s App Tracking Transparency (ATT) approach and Google’s Safebrowsing service. Mozilla advocated for Apple to implement ATT, and Firefox (and all major browsers) use Safebrowsing. We do not believe these technologies would be impacted by AICOA because they can be provided without engaging in problematic self-preferencing behavior and because the bill includes clear protections for privacy and security.

Our view is that self-preferencing is preventing internet development from being more private and secure than it is today. For example, Mozilla was at the forefront of developing technology against cross-site tracking. Yet we have never released this technology to Firefox users on iOS because of App Store rules preferring Apple’s own browser engine over alternatives. As another example, Android’s affiliated browser Chrome does not offer anti-tracking technology. This leaves the majority of people on the planet without effective privacy protections. Real browser competition would empower millions to choose freely.

This year marks the 15th anniversary of the Mozilla Manifesto and two decades of our advocacy for a better internet. There has been progress in many areas, but the time has come for government action. 15 years of every major platform deciding for you that you should use their software is far too long. Enabling a level playing field for independent options is good for people’s online experiences, good for innovation and the economy, and ultimately good for a healthy, open internet.  We applaud those leading the charge on antitrust reform in the US and across the globe. The time for change is now.

The post Calling for Antitrust Reform appeared first on The Mozilla Blog.

The Mozilla Thunderbird Blog: Frequently Asked Questions: Thunderbird Mobile and K-9 Mail

Today, we announced our detailed plans for Thunderbird on mobile. We also welcomed the open-source Android email client K-9 Mail into the Thunderbird family. Below, you’ll find an evolving list of frequently asked questions about this collaboration and our future plans.

Why not develop your own mobile client?

The Thunderbird team had many discussions on how we might provide a great mobile experience for our users. In the end, we didn’t want to duplicate effort if we could combine forces with an existing open-source project that shared our values. Over years of discussing ways K-9 and Thunderbird could collaborate, we decided it would best serve our users to work together.

Should I install K-9 Mail now or wait for Thunderbird?

If you want to help shape the future of Thunderbird on Android, you’re encouraged to install K-9 Mail right now. Leading up to the first official release of Thunderbird for Android, the user interface will probably change a few times. If you dislike somewhat frequent changes in apps you use daily, you might want to hold off.

Will this affect desktop Thunderbird? How?

Many Thunderbird users have asked for a Thunderbird experience on mobile, which we intend to provide by helping make K-9 amazing (and turning it into Thunderbird on Android). K-9 will supplement the Thunderbird experience and enhance where and how users are able to have a great email experience. Our commitment to desktop Thunderbird is unchanged; most of our team is committed to making it a best-in-class email client, and it will remain that way.

What will happen to K-9 Mail once the official Thunderbird for Android app has been released?

K-9 Mail will be brought in-line with Thunderbird from a feature perspective, and we will ensure that syncing between Thunderbird and K-9/Thunderbird on Android is seamless. Of course, Thunderbird on Android and Thunderbird on Desktop are both intended to serve very different form factors, so there will be UX differences between the two. But we intend to allow similar workflows and tools on both platforms.

Will I be able to sync my Thunderbird accounts with K-9 Mail?

Yes. We plan to offer Firefox Sync as one option to allow you to securely sync accounts between Thunderbird and K-9 Mail. We expect this feature to be implemented in the summer of 2023.

Will Thunderbird for Android support calendars, tasks, feeds, or chat like the desktop app?

We are working on an amazing email experience first. We are looking at the best way to provide Thunderbird’s other functionality on Android but currently are still debating how best to achieve that. For instance, one method is to simply sync calendars, and then users are able to use their preferred calendar application on their device. But we have to discuss this within the team, and the Thunderbird and K-9 communities, then decide what the best approach is.

Going forward, how will K-9 Mail donations be used?

Donations made towards K-9 will be allocated to the Thunderbird project. Of course, Thunderbird in turn will provide full support for K-9 Mail’s development and activities that support the advancement and sustainability of the app.

Is a mobile Thunderbird app in development for iOS?

Thunderbird is currently evaluating the development of an iOS app.

How can I get involved?

1) Participate in our discussion and planning forum.
2) Developers are encouraged to visit https://developer.thunderbird.net to get started.
3) Obtain Thunderbird source code by visiting https://developer.thunderbird.net/thunderbird-development/getting-started.
4) K-9 Mail source code is available at: https://github.com/thundernest/k-9
5) You can financially support Thunderbird and K-9 Mail’s development by donating via this link: https://mzla.link/k9-give.

Thunderbird is the leading open-source, cross-platform email and calendaring client, free for business and personal use. We want it to stay secure and become even better. A donation will allow us to hire more developers, pay for infrastructure, expand our userbase, and continue to improve.

Click here to make a donation

The post Frequently Asked Questions: Thunderbird Mobile and K-9 Mail appeared first on The Thunderbird Blog.

The Mozilla Thunderbird Blog: Revealed: Our Plans For Thunderbird On Android

For years, we’ve wanted to extend Thunderbird beyond the desktop, and the path to delivering a great Thunderbird on Android™ experience started in 2018.

That’s when Thunderbird Product Manager Ryan Lee Sipes first met up with Christian Ketterer (aka “cketti”), the project maintainer for open-source Android email client K-9 Mail. The two instantly wanted to find a way for the two projects to collaborate. Throughout the following few years, the conversation evolved into how to create an awesome, seamless email experience across platforms.

But Ryan and cketti both agreed that the final product had to reflect the shared values of both projects. It had to be open source, respect the user, and be a perfect fit for power users who crave customization and a rich feature set.

“Ultimately,” Sipes says, “it made sense to work together instead of developing a mobile client from scratch.”

K-9 Mail Joins The Thunderbird Family

To that end, we’re thrilled to announce that today, K-9 Mail officially joins the Thunderbird family. And cketti has already joined the full-time Thunderbird staff, bringing along his valuable expertise and experience with mobile platforms.

Ultimately, K-9 Mail will transform into Thunderbird on Android.

That means the name itself will change and adopt Thunderbird branding. Before that happens, we need to reach certain development milestones that will bring K-9 Mail into alignment with Thunderbird’s feature set and visual appearance.

To accomplish that, we’ll devote finances and development time to continually improving K-9 Mail. We’ll be adding brand new features and introducing quality-of-life enhancements.

K-9 Mail and Thunderbird are both community-funded projects. If you want to help us improve and expand K-9 Mail faster, please consider donating at https://mzla.link/k9-give

Here’s a glimpse into our features roadmap:

  • Account setup using Thunderbird account auto-configuration.
  • Improved folder management.
  • Support for message filters.
  • Syncing between desktop and mobile Thunderbird.

“Joining the Thunderbird family allows K-9 Mail to become more sustainable and gives us the resources to implement long-requested features and fixes that our users want,” cketti says. “In other words, K-9 Mail will soar to greater heights with the help of Thunderbird.”

Thunderbird On Android: Join The Journey

Thunderbird users have long been asking for Thunderbird on their Android and iOS devices. This move allows Thunderbird users to have a powerful, privacy-respecting email experience today on Android. Plus, it lets the community help shape the transition of K-9 Mail into a fully-featured mobile Thunderbird experience.

This is only the beginning, but it’s a very exciting first step.

Want to talk directly with the Thunderbird team about it? Join us for a Twitter Spaces chat (via @MozThunderbird) on Wednesday, June 15 at 10am PDT / 1pm EDT / 7pm CEST. I’ll be there alongside cketti and Ryan to answer your questions, and discuss the future of Thunderbird on mobile devices.

Additional Links And Resources

FAQ: Frequently Asked Questions

We’ve published a separate FAQ here, addressing many of the community’s questions and concerns. Check back there from time to time, as we plan to update the FAQ as this collaboration progresses.

Thunderbird is the leading open-source, cross-platform email and calendaring client, free for business and personal use. We want it to stay secure and become even better. A donation will allow us to hire more developers, pay for infrastructure, expand our userbase, and continue to improve.

Click here to make a donation

The post Revealed: Our Plans For Thunderbird On Android appeared first on The Thunderbird Blog.

Mozilla Add-ons Blog: Manifest V3 Firefox Developer Preview — how to get involved

While MV3 is still in development, many major features are already included in the Developer Preview, which provides an opportunity to expose functionality for testing and feedback. With strong developer feedback, we’re better equipped to quickly address critical bugs, provide clear developer documentation, and reorient functionality.

Some features, such as a well-defined and documented lifecycle for Event Pages, are still works in progress. As we complete features, they’ll land in future versions of Firefox and you’ll be able to test and progress your extensions toward MV3 compatibility. In most ways, Firefox is committed to MV3 cross-browser compatibility. However, in some cases Firefox will offer distinct extension functionality.

Developer Preview is not available to regular users; it requires you to change preferences in about:config. Thus you will not be able to upload MV3 extensions to addons.mozilla.org (AMO) until we have an official release available to users.

The following are key considerations about migration at this time and areas we’d greatly appreciate developer feedback.

  1. Read the MV3 migration guide. MV3 contains many changes and our migration guide covers the major necessary steps, as well as linking to documentation to help understand further details.
  2. Update your extension to be compatible with Event Pages. One major difference in Firefox is our use of Event Pages, which provides an alternative to the existing Background Pages that allows idle timeouts and page restarts. This adds resilience to the background, which is necessary for resource constraints and mobile devices. For the most part, Event Pages are compatible with existing Background Pages, requiring only minor changes. We plan to release Event Pages for MV2 in an upcoming Firefox release, so preparation to use Event Pages can be included in MV2 addons soon. Many extensions may not need all the capabilities available in Event Pages. The background scripts are easily transferable to the Service Worker background when it becomes available in a future release. In the meantime, extensions attempting to support both Chrome and Firefox can take advantage of Event Pages in Firefox.
  3. Test your content scripts with MV3. There are multiple changes that will impact content scripts, ranging from tighter restrictions on CORS, CSP, remote code execution, and more. Not all extensions will run into issues in these cases, and some may only require minor modifications that will likely work within MV2 as well.
  4. Understand and consider your migration path for APIs that have changed or deprecated. Deprecated APIs will require code changes to utilize alternate or new APIs. Examples include New Scripting API (which will be part of MV2 in a future release), changing page and browser actions to the action API, etc.
  5. Test and plan migration for permissions. Most permissions are already available as optional permissions in MV2. With MV3, we’re making host permissions optional — in many cases by default. While we do not yet have the primary UI for user control in Developer Preview, developers should understand how these changes will affect their extensions.
  6. Let us know how it’s going! Your feedback will help us make the transition from MV2 to MV3 as smooth as possible. Through Developer Preview we anticipate learning about MV3 rough edges, documentation needs, new features to be fleshed out, and bugs to be fixed. We have a host of community channels you can access to ask questions, help others, report problems, or whatever else you desire to communicate as it relates to the MV3 migration process.
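To make steps 2 and 4 above more concrete, here is a minimal, hypothetical MV3 manifest for a Firefox extension, showing an event-page background and the "action" key that replaces "browser_action" (the extension name and file names are invented for this illustration):

```json
{
  "manifest_version": 3,
  "name": "Example Extension",
  "version": "1.0",
  "background": {
    "scripts": ["background.js"]
  },
  "action": {
    "default_title": "Example"
  }
}
```

The MV2 equivalent would declare "manifest_version": 2 and "browser_action"; in MV3 the background is non-persistent, so it should be written to tolerate the idle timeouts and restarts described in step 2.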

Stay in touch with us on any of these forums…


The post Manifest V3 Firefox Developer Preview — how to get involved appeared first on Mozilla Add-ons Community Blog.

hacks.mozilla.orgTraining efficient neural network models for Firefox Translations

Machine Translation is an important tool for expanding the accessibility of web content. Usually, people use cloud providers to translate web pages. State-of-the-art Neural Machine Translation (NMT) models are large and often require specialized hardware like GPUs to run inference in real-time.

If people were able to run a compact Machine Translation (MT) model on their local machine CPU without sacrificing translation accuracy it would help to preserve privacy and reduce costs.

The Bergamot project is a collaboration between Mozilla, the University of Edinburgh, Charles University in Prague, the University of Sheffield, and the University of Tartu with funding from the European Union’s Horizon 2020 research and innovation programme. It brings MT to the local environment, providing small, high-quality, CPU-optimized NMT models. The Firefox Translations web extension builds on the outcomes of the Bergamot project and brings local translations to Firefox.

In this article, we will discuss the components used to train our efficient NMT models. The project is open-source, so you can give it a try and train your model too!


NMT models are trained as language pairs, translating from language A to language B. The training pipeline was designed to train translation models for a language pair end-to-end, from environment configuration to exporting the ready-to-use models. The pipeline run is completely reproducible given the same code, hardware and configuration files.

The complexity of the pipeline comes from the requirement to produce an efficient model. We use Teacher-Student distillation to compress a high-quality but resource-intensive teacher model into an efficient CPU-optimized student model that still has good translation quality. We explain this further in the Compression section.

The pipeline includes many steps: compiling components, downloading and cleaning datasets, training teacher, student and backward models, decoding, quantization, evaluation, etc. (more details below). The pipeline can be represented as a Directed Acyclic Graph (DAG).


Firefox Translations training pipeline DAG

The workflow is file-based and employs self-sufficient scripts that use data on disk as input, and write intermediate and output results back to disk.

We use the Marian Neural Machine Translation engine. It is written in C++ and designed to be fast. The engine is open-sourced and used by many universities and companies, including Microsoft.

Training a quality model

The first task of the pipeline is to train a high-quality model that will be compressed later. The main challenge at this stage is to find a good parallel corpus that contains translations of the same sentences in both source and target languages and then apply appropriate cleaning procedures.


It turned out there are many open-source parallel datasets for machine translation available on the internet. The most interesting project that aggregates such datasets is OPUS. The Annual Conference on Machine Translation also collects and distributes some datasets for competitions, for example, WMT21 Machine Translation of News. Another great source of MT corpus is the Paracrawl project.

OPUS dataset search interface

It is possible to use any dataset on disk, but automating dataset downloading from open-source resources makes adding new language pairs easy, and whenever a dataset is expanded we can easily retrain the model to take advantage of the additional data. Make sure to check the licenses of the open-source datasets before usage.

Data cleaning

Most open-source datasets are somewhat noisy. Good examples are crawled websites and subtitle translations. Texts from websites can be poor-quality automatic translations or contain unexpected HTML, and subtitles are often free-form translations that change the meaning of the text.

It is well known in the world of Machine Learning (ML) that if we feed garbage into the model we get garbage as a result. Dataset cleaning is probably the most crucial step in the pipeline to achieving good quality.

We employ some basic cleaning techniques that work for most datasets, like removing sentences that are too short or too long and filtering the ones with an unrealistic source-to-target length ratio. We also use bicleaner, a pre-trained ML classifier that scores how likely it is that the two sides of a training example are mutual translations. We can then remove low-scoring translation pairs that may be incorrect or otherwise add unwanted noise.

Automation is necessary when your training set is large. However, it is always recommended to look at your data manually in order to tune the cleaning thresholds and add dataset-specific fixes to get the best quality.
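The basic length-based filters described above can be sketched in a few lines of Python (the thresholds here are invented for illustration; a real pipeline tunes them per dataset):

```python
# Minimal sketch of rule-based parallel-corpus cleaning.
# Thresholds are illustrative, not the pipeline's actual values.
def clean_pairs(pairs, min_words=1, max_words=100, max_ratio=3.0):
    """Keep (source, target) pairs that pass simple length-based sanity checks."""
    kept = []
    for src, trg in pairs:
        src_len, trg_len = len(src.split()), len(trg.split())
        # Drop sentences that are too short or too long.
        if not (min_words <= src_len <= max_words and min_words <= trg_len <= max_words):
            continue
        # Drop pairs with an unrealistic source-to-target length ratio.
        if max(src_len, trg_len) / min(src_len, trg_len) > max_ratio:
            continue
        kept.append((src, trg))
    return kept
```

A classifier like bicleaner would then score the surviving pairs so that low-scoring ones can be dropped as well.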

Data augmentation

There are more than 7000 languages spoken in the world and most of them are classified as low-resource for our purposes, meaning there is little parallel corpus data available for training. In these cases, we use a popular data augmentation strategy called back-translation.

Back-translation is a technique to increase the amount of training data available by adding synthetic translations. We get these synthetic examples by training a translation model from the target language to the source language. Then we use it to translate monolingual data from the target language into the source language, creating synthetic examples that are added to the training data for the model we actually want, from the source language to the target language.
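The back-translation procedure can be sketched as follows; train_model and translate are hypothetical stand-ins for the GPU-heavy Marian training and decoding steps:

```python
# Sketch of back-translation augmentation. `train_model` and `translate`
# stand in for real model training and decoding.
def back_translate(parallel, mono_target, train_model, translate):
    # 1. Train a backward model on the flipped corpus: target -> source.
    backward = train_model([(trg, src) for src, trg in parallel])
    # 2. Translate monolingual target-language text into the source language.
    #    Each synthetic source sentence is paired with its genuine target sentence.
    synthetic = [(translate(backward, trg), trg) for trg in mono_target]
    # 3. The forward model then trains on real plus synthetic pairs.
    return parallel + synthetic
```

The key point is that the target side of every synthetic pair is genuine human text; only the source side is machine-generated.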

The model

Finally, when we have a clean parallel corpus we train a big transformer model to reach the best quality we can.

Once the model converges on the augmented dataset, we fine-tune it on the original parallel corpus that doesn’t include synthetic examples from back-translation to further improve quality.


The trained model can be 800Mb or more in size depending on configuration and requires significant computing power to perform translation (decoding). At this point, it’s generally executed on GPUs and not practical to run on most consumer laptops. In the next steps we will prepare a model that works efficiently on consumer CPUs.

Knowledge distillation

The main technique we use for compression is Teacher-Student Knowledge Distillation. The idea is to decode a lot of text from the source language into the target language using the heavy model we trained (Teacher) and then train a much smaller model with fewer parameters (Student) on these synthetic translations. The student is supposed to imitate the teacher’s behavior and demonstrate similar translation quality despite being significantly faster and more compact.

We also augment the parallel corpus data with monolingual data in the source language for decoding. This improves the student by providing additional training examples of the teacher’s behavior.
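The corpus the student trains on can be sketched like this (teacher_translate is a hypothetical stand-in for decoding with the teacher model):

```python
# Sketch of assembling the student's distillation corpus.
# `teacher_translate` stands in for decoding with the heavy teacher model.
def distillation_corpus(teacher_translate, parallel, mono_source):
    # The student trains on the teacher's outputs rather than the human
    # references, for both the original source side of the parallel corpus
    # and extra monolingual source-language data.
    sources = [src for src, _ in parallel] + mono_source
    return [(src, teacher_translate(src)) for src in sources]
```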


Another trick is to use not just one teacher but an ensemble of 2-4 teachers independently trained on the same parallel corpus. It can boost quality a little bit at the cost of having to train more teachers. The pipeline supports training and decoding with an ensemble of teachers.


One more popular technique for model compression is quantization. We use 8-bit quantization, which essentially means that we store the weights of the neural net as int8 instead of float32. It saves space and speeds up matrix multiplication during inference.
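A toy sketch of the idea behind 8-bit quantization (this is a simplified symmetric scheme, not Marian's exact implementation):

```python
# Toy sketch of symmetric 8-bit quantization; real engines differ in detail.
def quantize_int8(weights):
    """Map float weights onto the int8 range [-127, 127] with one scale factor."""
    scale = max(abs(w) for w in weights) / 127.0
    return [round(w / scale) for w in weights], scale

def dequantize_int8(quantized, scale):
    """Recover approximate float weights from int8 values."""
    return [q * scale for q in quantized]

weights = [0.5, -1.0, 0.25]
quantized, scale = quantize_int8(weights)
restored = dequantize_int8(quantized, scale)
```

Storing int8 instead of float32 cuts weight storage by 4x, at the cost of a small rounding error per weight.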

Other tricks

Other features worth mentioning but beyond the scope of this already lengthy article are the specialized Neural Network architecture of the student model, half-precision decoding by the teacher model to speed it up, lexical shortlists, training of word alignments, and finetuning of the quantized student.

Yes, it’s a lot! Now you can see why we wanted to have an end-to-end pipeline.

How to learn more

This work is based on a lot of research. If you are interested in the science behind the training pipeline, check out reference publications listed in the training pipeline repository README and across the wider Bergamot project. Edinburgh’s Submissions to the 2020 Machine Translation Efficiency Task is a good academic starting article. Check this tutorial by Nikolay Bogoychev for a more practical and operational explanation of the steps.


The final student model is 47 times smaller and 37 times faster than the original teacher model and has only a small quality decrease!

Benchmarks for en-pt model and Flores dataset:

Model              Size   Total parameters  Decoding time (1 CPU core)  Quality (BLEU)
Teacher            798MB  192.75M           631s                        52.5
Student quantized  17MB   15.7M             17.9s                       50.7

We evaluate results using MT standard BLEU scores that essentially represent how similar translated and reference texts are. This method is not perfect but it has been shown that BLEU scores correlate well with human judgment of translation quality.
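To illustrate what BLEU measures, here is a toy version of its core ingredient, clipped n-gram precision. Real BLEU also combines precisions for n = 1..4 with a brevity penalty; use an established implementation such as sacreBLEU for actual evaluation:

```python
from collections import Counter

# Toy illustration of clipped n-gram precision, the heart of BLEU.
def ngram_precision(hyp_tokens, ref_tokens, n):
    hyp = Counter(tuple(hyp_tokens[i:i + n]) for i in range(len(hyp_tokens) - n + 1))
    ref = Counter(tuple(ref_tokens[i:i + n]) for i in range(len(ref_tokens) - n + 1))
    # Clip each hypothesis n-gram count by its count in the reference.
    overlap = sum(min(count, ref[gram]) for gram, count in hyp.items())
    return overlap / max(sum(hyp.values()), 1)

hyp = "the cat sat on the mat".split()
ref = "the cat is on the mat".split()
```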

We have a GitHub repository with all the trained models and evaluation results where we compare the accuracy of our models to popular APIs of cloud providers. We can see that some models perform similarly, or even outperform, the cloud providers which is a great result taking into account our model’s efficiency, reproducibility and open-source nature.

For example, here you can see evaluation results for the English to Portuguese model trained by Mozilla using open-source data only.

Evaluation results en-pt

Anyone can train models and contribute them to our repo. Those contributions can be used in the Firefox Translations web extension and other places (see below).


It is of course possible to run the whole pipeline on one machine, though it may take a while. Some steps of the pipeline are CPU bound and difficult to parallelize, while other steps can be offloaded to multiple GPUs. Most of the official models in the repository were trained on machines with 8 GPUs. A few steps, like teacher decoding during knowledge distillation, can take days even on well-resourced single machines. So to speed things up, we added cluster support to be able to spread different steps of the pipeline over multiple nodes.

Workflow manager

To manage this complexity we chose Snakemake which is very popular in the bioinformatics community. It uses file-based workflows, allows specifying step dependencies in Python, supports containerization and integration with different cluster software. We considered alternative solutions that focus on job scheduling, but ultimately chose Snakemake because it was more ergonomic for one-run experimentation workflows.

Example of a Snakemake rule (abbreviated; dependencies between rules are inferred from matching input and output files):

rule train_teacher:
    message: "Training teacher on all data"
    log: f"{log_dir}/train_teacher{{ens}}.log"
    conda: "envs/base.yml"
    threads: gpus_num*2
    resources: gpu=gpus_num
    output: model=f'{teacher_base_dir}{{ens}}/{best_model}'
    shell: '''bash pipeline/train/train.sh \
                teacher train {src} {trg} "{params.prefix_train}" \
                "{params.prefix_test}" "{params.dir}" \
                "{input.vocab}" {params.args} >> {log} 2>&1'''

Cluster support

To parallelize workflow steps across cluster nodes we use Slurm resource manager. It is relatively simple to operate, fits well for high-performance experimentation workflows, and supports Singularity containers for easier reproducibility. Slurm is also the most popular cluster manager for High-Performance Computers (HPC) used for model training in academia, and most of the consortium partners were already using or familiar with it.

How to start training

The workflow is quite resource-intensive, so you’ll need a pretty good server machine or even a cluster. We recommend using 4-8 Nvidia 2080-equivalent or better GPUs per machine.

Clone https://github.com/mozilla/firefox-translations-training and follow the instructions in the readme for configuration.

The most important part is to find parallel datasets and properly configure settings based on your available data and hardware. You can learn more about this in the readme.

How to use the existing models

The existing models are shipped with the Firefox Translations web extension, enabling users to translate web pages in Firefox. The models are downloaded to a local machine on demand. The web extension uses these models with the bergamot-translator Marian wrapper compiled to WebAssembly.

Also, there is a playground website at https://mozilla.github.io/translate where you can input text and translate it right away, also locally but served as a static website instead of a browser extension.

If you are interested in an efficient NMT inference on the server, you can try a prototype HTTP service that uses bergamot-translator natively compiled, instead of compiled to WASM.

Or follow the build instructions in the bergamot-translator readme to directly use the C++, JavaScript WASM, or Python bindings.


It is fascinating how far Machine Translation research has come in recent years. Local high-quality translations are the future and it’s becoming more and more practical for companies and researchers to train such models even without access to proprietary data or large-scale computing power.

We hope that Firefox Translations will set a new standard of privacy-preserving, efficient, open-source machine translation accessible for all.


I would like to thank all the participants of the Bergamot Project for making this technology possible, my teammates Andre Natal and Abhishek Aggarwal for the incredible work they have done bringing Firefox Translations to life, Lonnen for managing the project and editing this blog post and of course awesome Mozilla community for helping with localization of the web-extension and testing its early builds.

This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 825303 🇪🇺

The post Training efficient neural network models for Firefox Translations appeared first on Mozilla Hacks - the Web developer blog.

The Mozilla Thunderbird BlogWelcome To The Thunderbird 102 Beta! Resources, Links, And Guides

The wait for this year’s major new Thunderbird release is almost over! But you can test-drive many of the new features like the brand new Address Book, Matrix Chat support, import/export wizard, and refreshed visuals right now with the Thunderbird 102 Beta. Better still, you might be directly responsible for improving the final product via your feedback and bug reports.

Below, you’ll find all the resources you need for testing the Thunderbird 102 Beta. From technical guides to a community feedback forum to running the beta side-by-side with your existing stable version, we’ve got you covered.

Do you feel something is missing from this list? Please leave a comment here, or email me personally (jason at thunderbird dot net). I’ll make sure we get it added!

Thunderbird 102 Beta: First Steps

Here are some first steps and important considerations to take into account before deciding to install the beta.

Thunderbird 102 Beta: Guides, Links, And Resources

We want you to have the smoothest beta experience possible. Whether you’re reporting bugs, seeking solutions, or trying to run beta side-by-side with an existing Thunderbird installation, these resources should help.

From all of us here at Thunderbird: Have fun, and happy testing!

The post Welcome To The Thunderbird 102 Beta! Resources, Links, And Guides appeared first on The Thunderbird Blog.

SUMO BlogIntroducing Ryan Johnson

Hi folks,

Please join me to welcome Ryan Johnson to the Customer Experience team as a Staff Software Engineer. He will be working closely with Tasos to maintain and improve the Mozilla Support platform.

Here’s a short intro from Ryan:

Hello everyone! I’m Ryan Johnson, and I’m joining the SUMO engineering team as a Staff Software Engineer. This is a return to Mozilla for me, after a brief stint away, and I’m excited to work with Tasos and the rest of the Customer Experience team in reshaping SUMO to better serve the needs of Mozilla and all of you. In my prior years at Mozilla, I was fortunate to work on the MDN team, and with many of its remarkable supporters, and this time I look forward to meeting, working with, and learning from many of you.

Once again, please join me to congratulate and welcome Ryan to our team!

Open Policy & AdvocacyEnhancing trust and security on the internet – browsers are the first line of defence

Enhancing trust and security online is one of the defining challenges of our time – in the EU alone, 37% of residents do not believe they can sufficiently protect themselves from cybercrime. Individuals need assurance that their credit card numbers, social media logins, and other sensitive data are protected from cybercriminals when browsing. With that in mind, we’ve just unveiled an update to the security policies that protect people from cybercrime, demonstrating again the critical role Firefox plays in ensuring trust and security online.

Browsers like Firefox use encryption to protect individuals’ data from eavesdroppers when they navigate online (e.g. when sending credit card details to an online marketplace). But protecting data from cybercriminals when it’s on the move is only part of the risk we mitigate. Individuals also need assurance that they are sending data to the correct domain (e.g., “amazon.com”). If someone sends their private data to a cybercriminal instead of to their bank, for example, it is of little consolation that the data was encrypted while getting there.

To address this we rely on cryptographic website certificates, which allow a website to prove that it controls the domain name that the individual has navigated to. Websites obtain these certificates from certificate authorities, organisations that run checks to verify that websites are not compromised. Certificate authorities are a critical pillar of trust in this ecosystem – if they mis-issue certificates to cybercriminals or other malicious actors, the consequences for individuals can be catastrophic.

To keep Firefox users safe, we ensure that only certificate authorities that maintain high standards of security and transparency are trusted in the browser (i.e., included in our ‘root certificate store’). We also continuously monitor and review the behaviour of certificate authorities that we opt to trust to ensure that we can take prompt action to protect individuals in cases where a trusted certificate authority has been compromised.

Properly maintaining a root certificate store is a significant undertaking, not least because the cybersecurity threat landscape is constantly evolving. We aim to ensure our security standards are always one step ahead, and as part of that effort, we’ve just finalised an important policy update that will increase transparency and security in the certificate authority ecosystem. This update introduces new standards for how audits of certificate authorities should be conducted and by whom; phases out legacy encryption standards that some certificate authorities still deploy today; and requires more transparency from certificate authorities when they revoke certificates. We’ve already begun working with certificate authorities to ensure they can properly transition to the new higher security standards.

The policy update is the product of a several-month process of open dialogue and debate amongst various stakeholders in the website security space. It is a further case-in-point of our belief in the value of transparent, community-based processes across the board for levelling-up the website security ecosystem. For instance, before accepting a certificate authority in Firefox we process lots of data and perform significant due diligence, then publish our findings and hold a public discussion with the community. We also maintain a public security incident reporting process to encourage disclosure and learning from experts in the field.

Ultimately, this update process highlights once again how operating an independent root certificate store allows us to drive the website security ecosystem towards ever-higher standards, and to serve as the first line of defence for when web certificates are misused. It’s a responsibility we take seriously and we see it as critical to enhancing trust on the internet.

It’s also why we’re so concerned about draft laws under consideration in the EU (Article 45 of the ‘eIDAS regulation’) that would forbid us from applying our security standards to certain certificate authorities and block us from taking action if and when those certificate authorities mis-issue certificates. If adopted in its current form by the EU, Article 45 would be a major step back for security on the internet, because of how it would restrict browser security efforts and because of the global precedent it would set. A broad movement of digital rights organisations; consumer groups; and numerous independent cybersecurity experts (here, here, and here) has begun to raise the alarm and to encourage the EU to change course on Article 45. We are working hard to do so too.

We’re proud of our root certificate store and the role it plays in enhancing trust and security online. It’s part of our contribution to the internet – we’ll continue to invest in it with security updates like this one and work with lawmakers on ensuring legal frameworks continue to support this critical work.


Thumbnail photo credit:

Creative Commons Attribution-Share Alike 4.0 International license.
Attribution: Santeri Viinamäki


The post Enhancing trust and security on the internet – browsers are the first line of defence appeared first on Open Policy & Advocacy.

The Mozilla Thunderbird BlogThunderbird + RSS: How To Bring Your Favorite Content To The Inbox

The official RSS logo

I first discovered RSS feeds in 2004 when I fell in love with podcasting. That’s when I learned I could utilize RSS to bring my favorite web content to me, on my schedule. Whether it was weekly music podcasts, tech blogs, newspaper articles, or a local weather forecast, RSS became a way to more easily digest and disseminate the growing onslaught of content on the web. Back then, I used Google Reader (RIP). But now I use Thunderbird to manage and read all my news feeds, and I love it!

In this post I’ll explain what RSS is, why it’s useful, and how to get all set up with some feeds inside of Thunderbird.

What Is RSS?

Skip ahead if you’re old-school and already know this. But if you’re hearing about RSS for the first time, here’s a brief description since it’s not exactly a household name.

RSS stands for “Really Simple Syndication.” It’s a web technology that regularly monitors web pages for frequently updated content, and then delivers that content outside of the web page in a universal, computer-readable format. This is done through an auto-generated XML file that feed readers (and software like Thunderbird) transform back into a tidy, human-readable format.

A raw RSS feed is an XML file containing important information about, and enclosures for, web content.

Adding that resulting URL (for example https://blog.thunderbird.net/rss) to RSS-compatible software like Thunderbird gives you an always-updating, free subscription to that content.

And RSS supports more than text: it’s also built for images, audio and video “enclosures.” (As a Thunderbird user, just think of them as attachments!)
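To make that "computer-readable format" concrete, here is a tiny hand-written RSS 2.0 document (all contents invented for illustration) parsed with Python's standard library. A feed reader extracts the same kinds of fields: item titles, links, and enclosures:

```python
import xml.etree.ElementTree as ET

# A minimal RSS 2.0 document with one item and one audio enclosure.
RAW_FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Blog</title>
    <link>https://example.com</link>
    <item>
      <title>First post</title>
      <link>https://example.com/first</link>
      <enclosure url="https://example.com/ep1.mp3" type="audio/mpeg" length="1024"/>
    </item>
  </channel>
</rss>"""

channel = ET.fromstring(RAW_FEED).find("channel")
item_titles = [item.findtext("title") for item in channel.iter("item")]
enclosure_url = channel.find("item/enclosure").get("url")
```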

Want to learn more about RSS feeds? Read this great explainer.

Why You’ll Love RSS

Here are a few compelling reasons for using RSS feeds to consume your favorite web content:

  • You don’t have to track down the news. The news comes to you!
  • Stay on top of updates from your favorite sites without needing to subscribe to newsletters or remembering to manually check in. (Especially useful for sites that don’t update regularly.)
  • Organize your favorite content into categories, tags, folders and sub-folders just like your email.
  • Bypass algorithms, intrusive ads and privacy-invading trackers.
  • RSS feeds are free
  • All podcasts (except Spotify exclusives) use RSS feeds. So does YouTube!
  • It’s easy to move your RSS feed subscriptions to other desktop and mobile apps.
  • Shameless plug: You can read this Thunderbird blog in Thunderbird!

Why You’ll Love The Thunderbird + RSS Combo

It all clicked for me when I realized how intuitive it was treating news content just like email. If you already use Thunderbird as your main email client, you know how powerful it is. Put simply, I organize all the web content I subscribe to in Thunderbird the same way I organize my email.

My own Feeds (that’s what we call RSS subscriptions) are divided into subfolders like Music, Tech, Comics, Gaming, and World News.

Just like email, I can favorite/star articles I love or want to revisit later. I can utilize tags and Quick Filters and tabs. I can search them, save them, print them, and delete them. Plus, sharing something is as easy as forwarding an email.

And because of Thunderbird’s flexibility, you’re not just limited to text excerpts. You can read full web pages, meaning you consume the content as originally published. If you’re privacy-conscious, you can absolutely block intrusive trackers and ads with the uBlock Origin extension for Thunderbird.

Need a bonus reason? You can even watch your YouTube subscriptions inside Thunderbird via RSS! (Keep reading to learn how to easily discover a channel’s RSS feed.)

How To Set Up Thunderbird For RSS

First, add a New Feed Account that will contain all your RSS subscriptions

The first step is adding a Feed Account to Thunderbird. This process is similar to adding email accounts, but faster. You won’t need a username or password; this is simply a place to collect all your RSS feeds.

From the File or Menu, select New > Feed Account. Now, give it a name. Any name you like!

Here’s what I named my Feeds account

Having a unique “account” and name for your RSS feeds is helpful if you decide to export your subscriptions to another version of Thunderbird, or another app altogether.

And that’s it! Now you just need to find and add some feeds. Let’s tackle that next, and then we’ll get into some organizational tips to streamline your experience.

How To Find RSS Feeds

An RSS icon on the Thunderbird blog

In the image above, see the circled image that looks like a WiFi icon? That’s an RSS logo, and it’s how you’ll know a website offers a feed. Sometimes it’ll be orange (left), and it may be located on the sidebar or bottom footer of a page.

While they may not be advertised, the majority of websites do offer RSS feeds. Sometimes you just have to search a little harder to find them, occasionally needing to view a page’s source code to find any reference to it.

Fortunately, there are useful browser extensions that eliminate all the searching and guesswork. If a page offers an RSS feed, these extensions will auto-magically find the URL for you:

How To Add RSS Feeds To Thunderbird

Once you have that URL, here’s how to add it to Thunderbird.

First, just click on your Feed Account folder (in my case it’s “Jason’s News Hub” as highlighted in the image above). Then click “Manage feed subscriptions.”

It feels appropriate to use the Thunderbird blog as our first feed. Find the RSS icon at the top of this page, and copy the link. (Or just copy this: https://blog.thunderbird.net/feed/.)

Adding a new RSS feed to Thunderbird

Now paste that link inside the “Feed URL” box, and click the “Add” button. If it’s a valid link, you should see the Title populate, and then see it nested under your Feeds account.

Here’s what my personal Feeds account currently looks like. As you can see, I’ve loosely categorized my feed with subfolders:

I have some catching up to do!

If you use Thunderbird, you certainly know what to do next. Just click and enjoy your favorite content delivered to your inbox. Most web content these days is responsive, so articles and videos should adapt when adjusting your Thunderbird layout or window size.

If you want to read the content in its own tab, just double click the title as you would an email. It looks pretty slick, especially with the Dark Reader Add-on enabled to remove the original white background of this blog. Here’s an example using a recent article highlighting 7 great new features coming to Thunderbird 102:

Want More RSS Guides?

Using RSS with Thunderbird has dramatically changed how I consume content. This may sound borderline melodramatic, but it’s removed some anxiety surrounding my habit of constantly scouring the web for information or entertainment. Now, I invest some time upfront to find the sources I love, and then sit back and get all that content delivered to me.

Specifically, it’s rekindled my love of discovering new music. Thunderbird makes this particularly productive for me since I can manage, organize and manipulate all those new music recommendations with tags, filters, search, subfolders and more.

If you want to dive deeper into RSS and read more guides on managing your own RSS feed collection, leave a comment here and let me know! Have questions about using RSS? You can ask them in the comments, or reach out to us on Mastodon, Twitter, or Facebook.

Thanks for reading, and thanks for using Thunderbird!

The post Thunderbird + RSS: How To Bring Your Favorite Content To The Inbox appeared first on The Thunderbird Blog.

SUMO BlogWhat’s up with SUMO – May

Hi everybody,

Q2 is a busy quarter with so many exciting projects on the line. The onboarding project implementation is ongoing, the mobile support project is also going smoothly so far (we’ve even started scaling to support the Apple App Store), and we managed to audit our localization process (with the help of our amazing contributors!). Let’s dive into it without further ado.

Welcome note and shout-outs

  • Welcome to the social support program to Magno Reis and Samuel. They both are long-time contributors on the forum who are spreading their wings to Social Support.
  • Welcome to the world of the SUMO forum to Dropa, YongHan, jeyson1099, simhk, and zianshi17.
  • Welcome to the KB world to kaie, alineee, and rodrigo.bucheli.
  • Welcome to the KB localization to Agehrg4 (ru), YongHan (zh-tw), ibrahimakgedik3 (tr), gabriele.siukstaite (t), apokvietyte (lt), Anokhi (ms), erinxwmeow (ms), and dvyarajan7 (ms). Welcome to the SUMO family!
  • Thanks to the localization contributors who helped me understand their workflow and pain points in the localization process. So much insightful feedback, and so many things we may not have understood without your input. I can’t thank everybody enough!
  • Huge shout outs to Kaio Duarte Costa for stepping up as Social Support moderator. He’s been an amazing contributor to the program since 2020, and I believe that he’ll be a great role model for the community. Thank you and congratulations!

If you know anyone that we should feature here, please contact Kiki and we’ll make sure to add them in our next edition.

Community news

  • I highly recommend checking out KB call from May. We talked about many interesting topics, from KB review queue, to a group exercise on writing content for localization.
  • It’s been 2 months since we onboarded Dayana as a Community Support Advocate (read the intro blog post here), and we can’t wait to share more about our learnings and accomplishments!
  • If you’re experiencing trouble uploading images to Mozilla Support, please read this forum thread and this bug report.

Catch up

  • Watch the monthly community call if you haven’t. Learn more about what’s new in April! Reminder: Don’t hesitate to join the call in person if you can. We try our best to provide a safe space for everyone to contribute. You’re more than welcome to lurk in the call if you don’t feel comfortable turning on your video or speaking up. If you feel shy to ask questions during the meeting, feel free to add your questions on the contributor forum in advance, or put them in our Matrix channel, so we can answer them during the meeting.
  • There’s also KB call, this one was the recording for the month of May. Find out more about KB call from this wikipage.
  • If you’re an NDA’ed contributor, you can watch the recording of the Customer Experience weekly scrum meeting from AirMozilla to catch up with the latest product updates.
  • Check out SUMO Engineering Board to see what the platform team is currently doing.

Community stats


KB pageviews (*)

* KB pageviews number is a total of KB pageviews for /en-US/ only

Month Page views Vs previous month
Apr 2022 7,407,129 -1.26%

Top 5 KB contributors in the last 90 days: 

KB Localization

Top 10 locales based on total page views

Locale Mar 2022 pageviews (*) Localization progress (per Apr, 11)(**)
de 7.99% 98%
zh-CN 7.42% 100%
fr 6.27% 87%
es 6.07% 30%
pt-BR 4.94% 54%
ru 4.60% 82%
ja 3.80% 48%
it 2.36% 100%
pl 2.06% 87%
ca 1.67% 0%

* Locale pageviews is an overall pageviews from the given locale (KB and other pages)

** Localization progress is the percentage of localized article from all KB articles per locale

Top 5 localization contributors in the last 90 days: 

Forum Support

Forum stats



Top 5 forum contributors in the last 90 days: 

Social Support

Channel Total incoming messages Conversations interacted Resolution rate
Apr 2022 504 316 75.00%

Top 5 Social Support contributors in the past 2 months: 

  1. Bithiah Koshy
  2. Christophe Villeneuve
  3. Magno Reis
  4. Md Monirul Alom
  5. Felipe Koji

Play Store Support

Channel Apr 2022
Total priority reviews Total priority reviews replied Total reviews replied
Firefox for Android 1226 234 291
Firefox Focus for Android 109 0 4
Firefox Klar Android 1 0 0

Top 5 Play Store contributors in the past 2 months: 

  • Paul Wright
  • Tim Maks
  • Selim Şumlu
  • Bithiah Koshy

Product updates

Firefox desktop

Firefox mobile

  • TBD

Other products / Experiments

  • TBD

Useful links:

Open Policy & AdvocacyMozilla Meetups: The Building Blocks of a Trusted Internet

Join us on June 9 at 3 PM ET for a virtual conversation on how the digital policy landscape not only shapes, but is also shaped by, the way we move around online and what steps our policymakers need to take to ensure a healthy internet.

The post Mozilla Meetups: The Building Blocks of a Trusted Internet appeared first on Open Policy & Advocacy.

Blog of DataCrash Reporting Data Sprint

Two weeks ago the Socorro Eng/Ops team, in charge of Socorro and Tecken, had its first remote 1-day Data Sprint to onboard folks from ops and engineering.

The Sprint was divided into three main parts, according to the objectives we initially had:

  • Onboard new folks onto the team
  • Establish a rough roadmap for the next 3-6 months
  • Find a more efficient way to work together

The sprint was formatted as a conversation followed by a presentation guided by Will Kahn-Greene, who leads the efforts in maintaining and evolving the Socorro/Tecken platforms. In the end we went through the roadmap document to decide what our immediate future would look like.


Finding a more efficient way to work together

We wanted to track our work queue more efficiently and decided that a GitHub project would be a great candidate for the task. It is simple to set up and maintain and has different views that we can lightly customize.

That said, because our main issue tracker is Bugzilla, one slightly annoying thing we still have to do while creating an issue on our GitHub project is place the full URL to the bug in the issue title:

If we could place links as part of the title, then we could do something like:

Which is much nicer, but GitHub doesn’t support that.

Here’s the link to our work queue: https://github.com/orgs/mozilla-services/projects/16/views/1


Onboarding new people to Socorro/Tecken

This was a really interesting part of the day in which we went through different aspects of the crash reporting ecosystem and crash ingestion pipeline.

Story time

The story of Mozilla’s whole Crash Reporting system dates back to 2007, when the Socorro project was created. Since then, Crash Reporting has been an essential part of our products. It is present in all stages, from development to release, and is comprised of an entire ecosystem of libraries and systems maintained by different teams across Mozilla.

Socorro is one of the longer-running projects we have at Mozilla. Along with Antenna and Crash Stats it comprises the Crash Ingestion pipeline, which is maintained by the socorro-eng team. The team is also in charge of the Symbol and Symbolication Servers a.k.a. Tecken.

Along with that story we also learned interesting facts about Crash Reporting, such as:

    • Crash Reports are not the same as Crash Pings: Both things are emitted by the Crash Reporter when Firefox crashes, but Reports go to Socorro and Pings go to the telemetry pipeline
    • Not all Crash Reports are accepted: The collector throttles crash reports according to a set of rules that can be found here
    • Crash Reports are pretty big compared to telemetry pings: They’re around 600KB in aggregate, but stack-overflow crashes can be bigger than 25MB
    • Crash Reports are reprocessed regularly: Whenever something that is involved in generating crash signatures or crash stacks is changed or fixed we reprocess the Crash Reports to regenerate their signatures and stacks
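The collector's throttling mentioned above can be pictured as an ordered list of rules, each pairing a matching condition with an acceptance probability. The sketch below is purely illustrative (the rules and field names here are hypothetical, not Socorro's actual rule set, which lives in the link referenced above):

```python
import random

# Illustrative throttle rules: (predicate, accept_probability).
# These example rules and probabilities are hypothetical, not Socorro's real ones.
RULES = [
    # Always accept nightly-channel crashes.
    (lambda report: report.get("ReleaseChannel") == "nightly", 1.0),
    # Accept only a 10% sample of release-channel crashes.
    (lambda report: report.get("ReleaseChannel") == "release", 0.10),
]

def throttle(report, rng=random.random):
    """Return True if the crash report should be accepted for processing."""
    for predicate, probability in RULES:
        if predicate(report):
            return rng() < probability
    return False  # no rule matched: reject

# A nightly report always passes; a release report passes ~10% of the time.
```

The first matching rule wins, which lets high-value populations (like nightly) be collected in full while high-volume ones are sampled.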

What’s what

There are lots of names involved in Crash Reporting. We went over what most of them mean:

Symbols: A symbol is an entry in a .sym file that maps from a memory location (a byte number) to something that’s going on in the original code. Since binaries don’t contain information about the code such as code lines, function names and stack navigation, symbols are used to enrich minidumps emitted by binaries with such info. This process is called symbolication. More on symbol files: https://chromium.googlesource.com/breakpad/breakpad/+/HEAD/docs/symbol_files.md
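To make the address-to-name mapping concrete, here is a small sketch of reading FUNC records from a .sym file and resolving a module-relative address. It covers only the basic FUNC record form described in the linked symbol-file docs, and the module name and addresses in the sample are invented for illustration:

```python
def parse_sym_funcs(sym_text):
    """Parse FUNC records from a Breakpad .sym file into (start, size, name)."""
    funcs = []
    for line in sym_text.splitlines():
        if line.startswith("FUNC "):
            # FUNC <address> <size> <param_size> <name>  (hex fields)
            _, address, size, _param_size, name = line.split(" ", 4)
            funcs.append((int(address, 16), int(size, 16), name))
    return funcs

def symbolicate(address, funcs):
    """Map a module-relative address to a function name, or None."""
    for start, size, name in funcs:
        if start <= address < start + size:
            return name
    return None

# A made-up two-function symbol file:
SYM = """MODULE Linux x86_64 0123456789ABCDEF0 example.so
FUNC 1130 28 0 main
FUNC 1160 4c 0 helper(int)
"""
funcs = parse_sym_funcs(SYM)
# symbolicate(0x1134, funcs) → "main"
```

A real symbolicator also uses the per-line records and FILE entries to attach source files and line numbers, but the core lookup is this kind of range search.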

Crash Report: When an application process crashes, the Crash Reporter will submit a Crash Report with metadata annotations (BuildId, ProductName, Version, etc) and minidumps which contain info on the crashed processes.

Crash Signature: Generated for every Crash Report by an algorithm unique to Socorro, with the objective of grouping similar crashes.
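As a heavily simplified illustration of the idea (not Socorro's actual algorithm, and with an invented skip list), signature generation can be thought of as dropping uninteresting frames from the crash stack and joining the remaining top frames:

```python
# Hypothetical skip list: frames that say "something crashed" but not "what".
SKIP_PREFIXES = ("raise", "abort", "__libc")

def generate_signature(stack, max_frames=3):
    """Collapse a crash stack (innermost frame first) into a short signature."""
    meaningful = [f for f in stack if not f.startswith(SKIP_PREFIXES)]
    return " | ".join(meaningful[:max_frames]) or "EMPTY"

sig = generate_signature(["raise", "abort", "mozalloc_abort", "nsThread::Run"])
# sig → "mozalloc_abort | nsThread::Run"
```

Because similar bugs produce similar top frames, crashes with the same signature can be grouped and counted together on Crash Stats.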

Minidump: A file created and managed by the Breakpad library. It holds info on a crashed process such as CPU information, register contents, heap, loaded modules, threads, etc.
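A minidump starts with a small fixed header ("MDMP" magic, then a directory of typed streams holding the threads, modules, memory, and so on). The sketch below parses that header from a synthetic byte string; it's a minimal illustration of the layout, not a full minidump reader:

```python
import struct

MINIDUMP_SIGNATURE = 0x504D444D  # the bytes "MDMP", read as a little-endian u32

def parse_minidump_header(data):
    """Parse the fixed 32-byte MINIDUMP_HEADER at the start of a minidump."""
    (signature, version, stream_count, stream_dir_rva,
     checksum, timestamp, flags) = struct.unpack_from("<IIIIIIQ", data, 0)
    if signature != MINIDUMP_SIGNATURE:
        raise ValueError("not a minidump file")
    return {"stream_count": stream_count, "stream_directory_rva": stream_dir_rva,
            "timestamp": timestamp, "flags": flags}

# Build a tiny synthetic header: 3 streams, stream directory right after the header.
header = struct.pack("<IIIIIIQ", MINIDUMP_SIGNATURE, 0xA793, 3, 32, 0, 0, 0)
info = parse_minidump_header(header)
# info["stream_count"] → 3
```

Real readers then walk the stream directory at `stream_directory_rva` to locate each stream by its type code.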

Breakpad: A set of tools to work with minidump files. It defines the sym file format and includes components to extract information from processes as well as package, submit, and process them. More on Breakpad: https://chromium.googlesource.com/breakpad/breakpad/+/master/docs/getting_started_with_breakpad.md#the-minidump-file-format

A peek at how it works

Will also explained how things work under the hood, and we had a look at the diagrams that show what comprises Tecken and Socorro:



Tecken architecture diagram

Tecken (https://symbols.mozilla.org/) is a Django web application that uses S3 for storage and RDS for bookkeeping.

Eliot (https://symbolication.services.mozilla.com/) is a webapp that downloads sym files from Tecken for symbolication.



Socorro Architecture Diagram

Socorro has a Crash Report Collector, a Processor and a web application (https://crash-stats.mozilla.org/) for searching and analyzing crash data. Notice the Crash Ingestion pipeline processes Crash Reports and exports a safe form of the processed crash to Telemetry.

More on Crash Reports data

The Crash Reporter is an interesting piece of software, since it needs to do its work while the world around it is collapsing. That means a number of unusual things can happen to the data it collects to build a Report. That being said, there’s a good chance the data it collects is OK, and even when it isn’t, it can still be interesting.

A real concern toward Crash Report data is how toxic it can get: While some pieces of the data are things like the ProductName, BuildID, Version and so on, other pieces are highly identifiable such as URL, UserComments and Breadcrumbs.

Add that to the fact that minidumps contain copies of memory from the crashed processes, which can store usernames, passwords, credit card numbers and so on, and you end up with a very toxic dataset!


Establishing a roadmap for the next 3-6 months

Another interesting exercise that made that Sprint feel even more productive was going over the Tecken/Socorro roadmap and reprioritizing things. While Will was explaining the reasons why we should do certain things, I took that chance to also ask questions and get better context on the different decisions we made, our struggles of past and present and where we aim to be.



It was super productive to have a full day on which we could focus completely on all things Socorro/Tecken. That series of activities allowed us to improve the way we work, transfer knowledge and prioritize things for the not-so-distant future.

Big shout out to Will Kahn-Greene for organizing and driving this event, and also for the patience to explain things carefully and precisely.

The Mozilla Thunderbird BlogThunderbird By The Numbers: Our 2021 Financial Report

Transparency and open source go hand-in-hand. But just because Thunderbird’s development work, roadmap, and financials are public, doesn’t always mean they’re well publicized.

That’s where my role as Marketing Manager comes into focus. To shine a spotlight on the numbers, the features, the facts, and the future. I want to keep you informed without you needing to hunt down every scrap of information!

With that in mind, let’s talk about money. Specifically, Thunderbird’s income for 2021, and how it positively affects our team, our product, and our roadmap.

Thunderbird Income in 2021

You may have heard rumors of Thunderbird’s demise, or assumed that the project’s financial outlook was bleak. Thankfully, that’s not remotely close to reality.

In fact, 2021 was a very successful year for Thunderbird. Our income, which is sourced almost entirely by user donations, totaled $2,796,996. That represents a 21% increase over donations in 2020, and it’s more than twice the amount we received in 2018.

Thunderbird year-to-year donations: 2017 through 2021

Do we have other sources of income? Yes, but that non-donation income is negligible, and represents less than a fraction of a fraction of one percent. It comes from our partnerships with Gandi and Mailfence for users to get new email addresses. That said, we are exploring other potential revenue opportunities beyond user donations, but only those that align with our mission and values. (When we make concrete decisions regarding those opportunities, you’ll read about them here).

Thunderbird Spending in 2021

In total we spent $1,984,510 last year. The majority of that (78.1%) was allocated to Thunderbird personnel. These are the talented full-time employees and contractors paid to work on Thunderbird. And with an increase in generous donations, we’ve been able to grow our core staff.

We currently employ a talented team of 20 people in the following roles:

  • Technical Manager
  • Product and Business Development Manager
  • Community Manager
  • QA Engineer
  • Add-ons Coordinator
  • Lead UX Architect
  • Security Engineer
  • Senior Developers x3
  • Developers x4
  • Infra Team Lead
  • Build Engineer
  • Thunderbird Release Engineer / Web Infra Engineer
  • Director of Operations
  • Designer
  • Marketing Manager

But we’re not done expanding the team! For those interested, we are still hiring!

Total Thunderbird spending in 2021

The pie chart above breaks down the rest of Thunderbird’s spending in 2021.

“Professional Services” include things like HR (Human Resources), tax services, and agreements with other Mozilla entities (for example, access to build infrastructure). The remaining items help us to run the business, such as various services and technology that help us communicate and manage operations.

2022 and 2023: Not Surviving, THRIVING!

As 2021 came to a close, we had a total of $3,616,032 in the bank. This means we can afford to pursue a variety of bold initiatives that will radically improve Thunderbird. We don’t want to just meet your expectations of what a modern, best-in-class communication tool can be. We want to exceed them.

And you’ve graciously given us those resources!

Moving forward, you’ll see fantastic new features and quality-of-life improvements in Thunderbird 102 this June.

Also happening in June: we’ll be unveiling our plans to bring Thunderbird to Android, providing a much-needed open source alternative for mobile users. (June is going to be awesome!)

And in 2023, you can look forward to a modernized Thunderbird experience with a completely overhauled UX and UI.

One more thing before we sign off: having cash reserves doesn’t make us complacent. We are careful stewards of the donations we receive from you. We don’t just use it to enhance features; we invest it strategically and wisely towards ensuring long-term stability and viability of Thunderbird. That’s what you deserve!

Your ongoing contributions not only enable Thunderbird to survive, but to thrive. And we can’t thank you enough for that.

Thunderbird is the leading open-source, cross-platform email and calendaring client, free for business and personal use. We want it to stay secure and become even better. A donation will allow us to hire developers, pay for infrastructure, expand our userbase, and continue to improve.

Click here to make a donation

The post Thunderbird By The Numbers: Our 2021 Financial Report appeared first on The Thunderbird Blog.

Web Application SecurityUpgrading Mozilla’s Root Store Policy to Version 2.8

In accordance with the Mozilla Manifesto, which emphasizes the open development of policy that protects users’ privacy and security, we have worked with the Mozilla community over the past several months to improve the Mozilla Root Store Policy (MRSP) so that we can now announce version 2.8, effective June 1, 2022. These policy changes aim to improve the transparency of Certificate Authority (CA) operations and the certificates that they issue. A detailed comparison of the policy changes may be found here, and the significant policy changes that appear in this version are:

  • MRSP section 2.4: any matter documented in an audit as a qualification, a modified opinion, or a major non-conformity is also considered an incident and must have a corresponding Incident Report filed in Mozilla’s Bugzilla system;
  • MRSP section 3.2: ETSI auditors must be members of the Accredited Conformity Assessment Bodies’ Council and WebTrust auditors must be enrolled by CPA Canada as WebTrust practitioners;
  • MRSP section 3.3: CAs must maintain links to older versions of their Certificate Policies and Certification Practice Statements until the entire root CA certificate hierarchy operated in accordance with such documents is no longer trusted by the Mozilla root store;
  • MRSP section 4.1: before October 1, 2022, intermediate CA certificates capable of issuing TLS certificates are required to provide the Common CA Database (CCADB) with either the CRL Distribution Point for the full CRL issued by the CA certificate or a JSON array of partitioned CRLs that are equivalent to the full CRL for certificates issued by the CA certificate;
  • MRSP section 5.1.3: as of July 1, 2022, CAs cannot use the SHA-1 algorithm to issue S/MIME certificates, and effective July 1, 2023, CAs cannot use SHA-1 to sign any CRLs, OCSP responses, OCSP responder certificates, or CA certificates;
  • MRSP section 5.3.2: CA certificates capable of issuing working server or email certificates must be reported in the CCADB by July 1, 2022, even if they are technically constrained;
  • MRSP section 5.4: while logging of Certificate Transparency precertificates is not required by Mozilla, it is considered by Mozilla as a binding intent to issue a certificate, and thus, the misissuance of a precertificate is equivalent to the misissuance of a certificate, and CAs must be able to revoke precertificates, even if corresponding final certificates do not actually exist;
  • MRSP section 6.1.1:  specific RFC 5280 Revocation Reason Codes must be used under certain circumstances (see blog post Revocation Reason Codes for TLS Server Certificates)
  • MRSP section 8.4: new unconstrained third-party CAs must be approved through Mozilla’s review process that involves a public discussion.

These changes will provide Mozilla with more complete information about CA practices and certificate status. Several of these changes will require that CAs revise their practices, so we have also sent CAs a CA Communication and Survey to alert them about these changes and to inquire about their ability to comply with the new requirements by the effective dates.

In summary, these updates to the MRSP will improve the quality of information about CA operations and the certificates that they issue, which will increase security in the ecosystem by further enabling Firefox to keep your information private and secure.

The post Upgrading Mozilla’s Root Store Policy to Version 2.8 appeared first on Mozilla Security Blog.

Mozilla Add-ons BlogManifest v3 in Firefox: Recap & Next Steps

It’s been about a year since our last update regarding Manifest v3. A lot has changed since then, not least of which has been the formation of a community group under the W3C to advance cross-browser WebExtensions (WECG).

In our previous update, we announced that we would be supporting MV3 and mentioned Service Workers as a replacement for background pages. Since then, it became apparent that numerous use cases would be at risk if this were to proceed as is, so we went back to the drawing board. We proposed Event Pages in the WECG, which has been welcomed by the community and supported by Apple in Safari.

Today, we’re kicking off our Developer Preview program to gather feedback on our implementation of MV3. To set the stage, we want to outline the choices we’ve made in adopting MV3 in Firefox, some of the improvements we’re most excited about, and then talk about the ways we’ve chosen to diverge from the model Chrome originally proposed.

Why are we adopting MV3?

When we decided to move to WebExtensions in 2015, it was a long term bet on cross-browser compatibility. We believed then, as we do now, that users would be best served by having useful extensions available for as many browsers as possible. By the end of 2017 we had completed that transition and moved completely to the WebExtensions model. Today, many cross-platform extensions require only minimal changes to work across major browsers. We consider this move to be a long-term success, and we remain committed to the model.

In 2018, Chrome announced Manifest v3, followed by Microsoft adopting Chromium as the base for the new Edge browser. This means that support for MV3, by virtue of the combined share of Chromium-based browsers, will be a de facto standard for browser extensions in the foreseeable future. We believe that working with other browser vendors in the context of the WECG is the best path toward a healthy ecosystem that balances the needs of its users and developers. For Mozilla, this is a long term bet on a standards-driven future for WebExtensions.

Why is MV3 important to improving WebExtensions?

Manifest V3 is the next iteration of WebExtensions, and offers the opportunity to introduce improvements that would otherwise not be possible due to concerns about backward compatibility. MV2 had architectural constraints that made some issues difficult to address; MV3 lets us make the changes needed to address them.

One core part of the extension architecture is the background page, which lives forever by design. Due to memory or platform constraints (e.g. on Android), we can’t guarantee this state, and termination of the background page (along with the extension) is sometimes inevitable. In MV3, we’re introducing a new architecture: the background script must be designed to be restartable. To support this, we’ve reworked existing APIs and introduced new ones, enabling extensions to declare how the browser should behave without requiring the background script to be running.

Another core part of extensions is content scripts, which interact directly with web pages. We are blocking unsafe coding practices and offering more secure alternatives to improve the baseline security of extensions: string-based code execution has been removed from extension APIs. Moreover, to improve the isolation of data between different origins, cross-origin requests are no longer possible from content scripts unless the destination website opts in via CORS.

User controls for site access

Extensions often need to access user data on websites. While that has enabled extensions to provide powerful features and address numerous user needs, we’ve also seen misuse that impacts users’ privacy.

Starting with MV3, we’ll be treating all site access requests from extensions as optional, and provide users with transparency and controls to make it easier to manage which extensions can access their data for each website.

At the same time, we’ll be encouraging extensions to use models that don’t require permanent access to all websites, by making it easier to grant access for extensions with a narrow scope, or just temporarily. We are continuing to evaluate how to best handle cases, such as privacy and security extensions, that need the ability to intercept or affect all websites in order to fully protect our users.

What are we doing differently in Firefox?


One of the most controversial changes of Chrome’s MV3 approach is the removal of blocking WebRequest, which provides a level of power and flexibility that is critical to enabling advanced privacy and content blocking features. Unfortunately, that power has also been used to harm users in a variety of ways. Chrome’s solution in MV3 was to define a more narrowly scoped API (declarativeNetRequest) as a replacement. However, this will limit the capabilities of certain types of privacy extensions without adequate replacement.

Mozilla will maintain support for blocking WebRequest in MV3. To maximize compatibility with other browsers, we will also ship support for declarativeNetRequest. We will continue to work with content blockers and other key consumers of this API to identify current and future alternatives where appropriate. Content blocking is one of the most important use cases for extensions, and we are committed to ensuring that Firefox users have access to the best privacy tools available.

Event Pages

Chrome’s version of MV3 introduced Background Service Worker as a replacement for the (persistent) Background Page. Mozilla is working on extension Service Workers in Firefox for compatibility reasons, but also because we like that they’re an event-driven environment with defined lifetimes, already part of the Web Platform with good cross-browser support.

We’ve found Service Workers can’t fully support various use cases we consider important, especially around DOM-related features and APIs. Additionally, the worker environment is not as familiar to regular web developers, and our developer community has expressed that completely rewriting extensions can be tedious for thousands of independent developers of existing extensions.

In Firefox, we have decided to support Event Pages in MV3, and our developer preview will not include Service Workers (we’re continuing to work on supporting these for a future release). This will help developers to more easily migrate existing persistent background pages to support MV3 while retaining access to all of the DOM related features available in MV2. We will also support Event Pages in MV2 in an upcoming release, which will additionally aid migration by allowing extensions to transition existing MV2 extensions over a series of releases.
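As a rough illustration of the difference, an MV3 extension targeting Firefox's event-page model can declare its background logic with a scripts list, whereas Chrome's MV3 instead expects a service worker entry. This minimal manifest sketch is based on the developer preview described above; exact keys and requirements may evolve:

```json
{
  "manifest_version": 3,
  "name": "Example extension",
  "version": "1.0",
  "background": {
    "scripts": ["background.js"]
  }
}
```

The background script itself is then written around event listeners registered at the top level, so the browser can tear it down and restart it when the relevant events fire.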

Next Steps for Firefox

In launching our Developer Preview program for Manifest v3, our hope is that authors will test out our MV3 implementation to help us identify gaps or incompatibilities in our implementation. Work is continuing in parallel, and we expect to launch MV3 support for all users by the end of 2022. As we get closer to completion, we will follow up with more detail on timing and how we will support extensions through the transition.

For more information on the Manifest v3 Developer Preview, please check out the migration guide.  If you have questions or feedback on Manifest v3, we would love to hear from you on the Firefox Add-ons Discourse.

The post Manifest v3 in Firefox: Recap & Next Steps appeared first on Mozilla Add-ons Community Blog.

Web Application SecurityRevocation Reason Codes for TLS Server Certificates

In our continued efforts to improve the security of the web PKI, we are taking a multi-pronged approach to tackling some long-existing problems with revocation of TLS server certificates. In addition to our ongoing CRLite work, we added new requirements to version 2.8 of Mozilla’s Root Store Policy that will enable Firefox to depend on revocation reason codes being used consistently, so they can be relied on when verifying the validity of certificates during TLS connections. We also added a new requirement that CA operators provide their full CRL URLs in the CCADB. This will enable Firefox to pre-load more complete certificate revocation data, eliminating dependency on the infrastructure of CAs during the certificate verification part of establishing TLS connections. The combination of these two new sets of requirements will further enable Firefox to enforce revocation checking of TLS server certificates, which makes TLS connections even more secure.

Previous Policy Updates

Significant improvements have already been made in the web PKI, including the following changes to Mozilla’s Root Store Policy and the CA/Browser Forum Baseline Requirements (BRs), which reduced risks associated with exposure of the private keys of TLS certificates by reducing the amount of time that the exposure can exist.

  • TLS server certificates issued on or after 1 September 2020 MUST NOT have a Validity Period greater than 398 days.
  • For TLS server certificates issued on or after October 1, 2021, each dNSName or IPAddress in the certificate MUST have been validated within the prior 398 days.
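The first of these provisions boils down to a simple date comparison between a certificate's notBefore and notAfter fields. A minimal sketch of that check (illustrative only; real validation is done by the TLS stack against the parsed certificate):

```python
from datetime import datetime, timedelta

# BR limit for TLS server certificates issued on or after 1 September 2020.
MAX_VALIDITY_DAYS = 398

def validity_ok(not_before, not_after):
    """Check that a certificate's validity period is within 398 days."""
    return (not_after - not_before) <= timedelta(days=MAX_VALIDITY_DAYS)

# A one-year certificate is fine; a two-year certificate is not.
ok = validity_ok(datetime(2022, 6, 1), datetime(2023, 6, 1))        # True
too_long = validity_ok(datetime(2022, 6, 1), datetime(2024, 6, 1))  # False
```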

Under those provisions, the maximum validity period and maximum re-use of domain validation for TLS certificates roughly corresponds to the typical period of time for owning a domain name; i.e. one year. This reduces the risk of potential exposure of the private key of each TLS certificate that is revoked, replaced, or no longer needed by the original certificate subscriber.

New Requirements

In version 2.8 of Mozilla’s Root Store Policy we added requirements stating that:

  1. Specific RFC 5280 Revocation Reason Codes must be used under certain circumstances; and
  2. CA operators must provide their full CRL URLs in the Common CA Database (CCADB).

These new requirements will provide a complete accounting of all revoked TLS server certificates. This will enable Firefox to pre-load more complete certificate revocation data, eliminating the need for it to query CAs for revocation information when establishing TLS connections.

The new requirements about revocation reason codes account for the situations that can happen at any time during the certificate’s validity period, and address the following problems:

  • There were no policies specifying which revocation reason codes should be used and under which circumstances.
  • Some CAs were not using revocation reason codes at all for TLS server certificates.
  • Some CAs were using the same revocation reason code for every revocation.
  • There were no policies specifying the information that CAs should provide to their certificate subscribers about revocation reason codes.

Revocation Reason Codes

Section 6.1.1 of version 2.8 of Mozilla’s Root Store Policy states that when a TLS server certificate is revoked for one of the following reasons the corresponding entry in the CRL must include the revocation reason code:

  • keyCompromise (RFC 5280 Reason Code #1)
    • The certificate subscriber must choose the “keyCompromise” revocation reason code when they have reason to believe that the private key of their certificate has been compromised, e.g., an unauthorized person has had access to the private key of their certificate.
  • affiliationChanged (RFC 5280 Reason Code #3)
    • The certificate subscriber should choose the “affiliationChanged” revocation reason code when their organization’s name or other organizational information in the certificate has changed.
  • superseded (RFC 5280 Reason Code #4)
    • The certificate subscriber should choose the “superseded” revocation reason code when they request a new certificate to replace their existing certificate.
  • cessationOfOperation (RFC 5280 Reason Code #5)
    • The certificate subscriber should choose the “cessationOfOperation” revocation reason code when they no longer own all of the domain names in the certificate or when they will no longer be using the certificate because they are discontinuing their website.
  • privilegeWithdrawn (RFC 5280 Reason Code #9)
    • The CA will specify the “privilegeWithdrawn” revocation reason code when they obtain evidence that the certificate was misused or the certificate subscriber has violated one or more material obligations under the subscriber agreement or terms of use.

RFC 5280 Reason Codes that are not listed above shall not be specified in the CRL for TLS server certificates, for reasons explained in the wiki page.
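Collected as a lookup table, the permitted codes from the list above (keyed by their RFC 5280 numeric values) make a compliance check trivial. A small sketch:

```python
# Reason codes permitted in CRL entries for TLS server certificates under
# MRSP section 6.1.1, keyed by their RFC 5280 numeric values.
ALLOWED_REASONS = {
    1: "keyCompromise",
    3: "affiliationChanged",
    4: "superseded",
    5: "cessationOfOperation",
    9: "privilegeWithdrawn",
}

def reason_allowed(code):
    """Return True if this reason code may appear in a TLS server cert CRL entry."""
    return code in ALLOWED_REASONS

# reason_allowed(1) → True; reason_allowed(6) (certificateHold) → False
```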


These new requirements are important steps towards improving the security of the web PKI, and are part of our effort to resolve long-existing problems with revocation of TLS server certificates. The requirements about revocation reason codes will enable Firefox to depend on revocation reason codes being used consistently, so they can be relied on when verifying the validity of certificates during TLS connections. The requirement that CA operators provide their full CRL URLs in the CCADB will enable Firefox to pre-load more complete certificate revocation data, eliminating dependency on the infrastructure of CAs during the certificate verification part of establishing TLS connections. The combination of these two new sets of requirements will further enable Firefox to enforce revocation checking of TLS server certificates, which makes TLS connections even more secure.

The post Revocation Reason Codes for TLS Server Certificates appeared first on Mozilla Security Blog.

hacks.mozilla.orgImproved Process Isolation in Firefox 100


Firefox uses a multi-process model for additional security and stability while browsing: Web Content (such as HTML/CSS and JavaScript) is rendered in separate processes that are isolated from the rest of the operating system and managed by a privileged parent process. This way, the amount of control gained by an attacker that exploits a bug in a content process is limited.

Ever since we deployed this model, we have been working on improving the isolation of the content processes to further limit the attack surface. This is a challenging task since content processes need access to some operating system APIs to properly function: for example, they still need to be able to talk to the parent process. 

In this article, we would like to dive a bit further into the latest major milestone we have reached: Win32k Lockdown, which greatly reduces the capabilities of the content process when running on Windows. Together with two major earlier efforts (Fission and RLBox) that shipped before, this completes a sequence of large leaps forward that will significantly improve Firefox’s security.

Although Win32k Lockdown is a Windows-specific technique, it became possible because of a significant re-architecting of the Firefox security boundaries that Mozilla has been working on for around four years, which allowed similar security advances to be made on other operating systems.

The Goal: Win32k Lockdown

Firefox runs the processes that render web content with quite a few restrictions on what they are allowed to do when running on Windows. Unfortunately, by default they still have access to the entire Windows API, which opens up a large attack surface: the Windows API consists of many parts, for example, a core part dealing with threads, processes, and memory management, but also networking and socket libraries, printing and multimedia APIs, and so on.

Of particular interest to us is the win32k.sys API, which includes many graphics- and widget-related system calls that have a history of being exploitable. This situation likely traces back to Windows’ origins: Microsoft moved many operations that originally ran in user mode into the kernel to improve performance, around the Windows 95 and NT4 timeframe.

Never originally designed to run in this sensitive context, these APIs have been a traditional target for hackers to break out of application sandboxes and into the kernel.

In Windows 8, Microsoft introduced a new mitigation named PROCESS_MITIGATION_SYSTEM_CALL_DISABLE_POLICY that an application can use to disable access to win32k.sys system calls. That is a long name to keep repeating, so we’ll refer to it hereafter by our internal designation: “Win32k Lockdown“.
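Enabling the mitigation boils down to a single SetProcessMitigationPolicy call made by the process for itself. The following is a rough sketch using Python’s ctypes for illustration only — Firefox does this from C++ in its sandbox code. The constants mirror the Windows SDK, and the call only has an effect on Windows 8 or later:

```python
import ctypes
import sys

# Windows SDK constants (winnt.h)
ProcessSystemCallDisablePolicy = 4   # PROCESS_MITIGATION_POLICY enum value
DISALLOW_WIN32K_SYSTEM_CALLS = 0x1   # bit 0 of the policy's Flags bitfield

class PROCESS_MITIGATION_SYSTEM_CALL_DISABLE_POLICY(ctypes.Structure):
    # A DWORD bitfield; bit 0 is DisallowWin32kSystemCalls.
    _fields_ = [("Flags", ctypes.c_uint32)]

def enable_win32k_lockdown() -> bool:
    """Ask the OS to reject all win32k.sys system calls for this process.
    Returns False on non-Windows platforms, where the API does not exist."""
    if sys.platform != "win32":
        return False
    policy = PROCESS_MITIGATION_SYSTEM_CALL_DISABLE_POLICY()
    policy.Flags = DISALLOW_WIN32K_SYSTEM_CALLS
    ok = ctypes.windll.kernel32.SetProcessMitigationPolicy(
        ProcessSystemCallDisablePolicy,
        ctypes.byref(policy),
        ctypes.sizeof(policy),
    )
    return bool(ok)
```

Once the flag is set it cannot be unset for the lifetime of the process, which is exactly why all the remoting work described below was needed first.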

The Work Required

Flipping the Win32k Lockdown flag on the Web Content processes – the processes most vulnerable to potentially hostile web pages and JavaScript – means that those processes can no longer perform any graphical, window management, input processing, etc. operations themselves.

To accomplish these tasks, such operations must be remoted to a process that has the necessary permissions, typically the process that has access to the GPU and handles compositing and drawing (hereafter called the GPU Process), or the privileged parent process. 

Drawing web pages: WebRender

For painting the web pages’ contents, Firefox historically used various methods for interacting with the Windows APIs, ranging from using modern Direct3D based textures, to falling back to GDI surfaces, and eventually dropping into pure software mode.

These different options would have taken quite some work to remote, as most of the graphics API is off limits in Win32k Lockdown. The good news is that as of Firefox 92, our rendering stack has switched to WebRender, which moves all the actual drawing from the content processes to WebRender in the GPU Process.

Because WebRender removes the content process’s need to interact directly with the platform drawing APIs, it sidesteps any Win32k Lockdown problems. WebRender itself was also designed partially to be more similar to game engines, and thus less susceptible to driver bugs.

For the remaining drivers that are just too broken to be of any use, it still has a fully software-based mode, which means we have no further fallbacks to consider.

Webpages drawing: Canvas 2D and WebGL 3D

The Canvas API provides web pages with the ability to draw 2D graphics. In the original Firefox implementation, these JavaScript APIs were executed in the Web Content processes and the calls to the Windows drawing APIs were made directly from the same processes.

In a Win32k Lockdown scenario, this is no longer possible, so all drawing commands are remoted by recording and playing them back in the GPU process over IPC.

Although the initial implementation had good performance, there were nevertheless reports from some sites that experienced performance regressions (the web sites that became faster generally didn’t complain!). A particular pain point is applications that call getImageData() repeatedly: having the Canvas remoted means that GPU textures must now be obtained from another process and sent over IPC.

We compensated for this in the scenario where getImageData is called at the start of a frame, by detecting this and preparing the right surfaces proactively to make the copying from the GPU faster.
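The record-and-replay idea can be sketched generically: the content side serializes each drawing call instead of executing it, and the GPU side replays the stream against the real backend. All names here are invented for illustration — this is not Firefox’s actual IPC protocol:

```python
import json

class RecordingCanvas:
    """Content-process side: records drawing commands instead of executing them."""
    def __init__(self):
        self._commands = []

    def fill_rect(self, x, y, w, h, color):
        self._commands.append(("fill_rect", [x, y, w, h, color]))

    def serialize(self) -> bytes:
        # In a browser this would go over IPC; JSON stands in for the wire format.
        return json.dumps(self._commands).encode()

class ReplayCanvas:
    """GPU-process side: replays recorded commands against the real backend."""
    def __init__(self, backend):
        self._backend = backend

    def replay(self, payload: bytes):
        for op, args in json.loads(payload):
            getattr(self._backend, op)(*args)

class FakeBackend:
    """Stand-in for the process that actually owns the GPU surface."""
    def __init__(self):
        self.calls = []
    def fill_rect(self, *args):
        self.calls.append(("fill_rect", args))

rec = RecordingCanvas()
rec.fill_rect(0, 0, 100, 50, "red")
backend = FakeBackend()
ReplayCanvas(backend).replay(rec.serialize())
print(backend.calls)  # [('fill_rect', (0, 0, 100, 50, 'red'))]
```

The getImageData() pain point follows directly from this shape: reads have to travel back across the same boundary, which is why proactively preparing surfaces helps.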

Besides the Canvas API to draw 2D graphics, the web platform also exposes an API to do 3D drawing, called WebGL. WebGL is a state-heavy API, so properly and efficiently synchronizing child and parent (as well as parent and driver) takes great care.

WebGL originally handled all validation in Content, but with access to the GPU and the associated attack surface removed from there, we needed to craft a robust validating API between child and parent as well to get the full security benefit.

(Non-)Native Theming for Forms

HTML web pages have the ability to display form controls. While the overwhelming majority of websites provide a custom look and styling for those form controls, not all of them do, and if they do not you get an input GUI widget that is styled like (and originally was!) a native element of the operating system.

Historically, these were drawn by calling the appropriate OS widget APIs from within the content process, but those are not available under Win32k Lockdown.

This cannot easily be fixed by remoting the calls, as the widgets themselves come in an infinite number of sizes, shapes, and styles, can be interacted with, and need to respond to user input and dispatch messages. We settled on having Firefox draw the form controls itself, in a cross-platform style.

While changing the look of form controls has web compatibility implications, and some people prefer the more native look – on the few pages that don’t apply their own styles to controls – Firefox’s approach is consistent with that taken by other browsers, probably because of very similar considerations.

Scrollbars were a particular pain point: we didn’t want to draw the main scrollbar of the content window differently from the rest of the UX, since nested scrollbars would show up with different styles, which would look awkward. But, unlike the rather rare non-styled form widgets, the main scrollbar is visible on most web pages, and because it conceptually belongs to the browser UX we really wanted it to look native.

We therefore decided to draw all scrollbars to match the system theme, although it’s a bit of an open question how things should look if even the vendor of the operating system can’t seem to decide what the “native” look is.

Final Hurdles

Line Breaking

With the above changes, we thought we had all the usual suspects that would access graphics and widget APIs in win32k.sys wrapped up, so we started running the full Firefox test suite with win32k syscalls disabled. This caused at least one unexpected failure: Firefox was crashing when trying to find line breaks for some languages with complex scripts.

While Firefox is able to correctly determine word endings in multibyte character streams for most languages by itself, the support for Thai, Lao, Tibetan and Khmer is known to be imperfect, and in these cases, Firefox can ask the operating system to handle the line breaking for it. But at least on Windows, the functions to do so are covered by the Win32k Lockdown switch. Oops!

There are efforts underway to incorporate ICU4X and base all i18n related functionality on that, meaning that Firefox will be able to handle all scripts perfectly without involving the OS, but this is a major effort and it was not clear if it would end up delaying the rollout of win32k lockdown.

We did some experimentation with trying to forward the line breaking over IPC. Initially, this had bad performance, but when we added caching performance was satisfactory or sometimes even improved, since OS calls could be avoided in many cases now.
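The caching layer can be sketched as a simple memoized wrapper around the remoted call. The function names and the trivial break-finding placeholder below are invented for illustration; the real work happens in the OS line-breaking APIs:

```python
import functools

def os_line_breaks(text: str, lang: str) -> tuple:
    """Stand-in for the remoted OS call; returns allowed break offsets.
    A real implementation would ask the privileged process over IPC."""
    # Trivial placeholder: allow a break after every space.
    return tuple(i + 1 for i, ch in enumerate(text) if ch == " ")

@functools.lru_cache(maxsize=4096)
def cached_line_breaks(text: str, lang: str) -> tuple:
    """Cache results so repeated layout passes skip the expensive IPC round trip."""
    return os_line_breaks(text, lang)

cached_line_breaks("สวัสดี ครับ", "th")            # first call: cache miss, does the "IPC"
print(cached_line_breaks.cache_info().hits)        # 0 so far
cached_line_breaks("สวัสดี ครับ", "th")            # repeat: served from the cache
print(cached_line_breaks.cache_info().hits)        # 1 after the repeat
```

Since layout tends to re-break the same runs of text over and over, even a modest cache turns most lookups into local hits, which matches the observed result that performance sometimes improved over direct OS calls.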

DLL Loading & Third Party Interactions

Another complexity of disabling win32k.sys access is that so much Windows functionality assumes it is available by default, and specific effort must be taken to ensure the relevant DLLs do not get loaded on startup. Firefox itself for example won’t load the user32 DLL containing some win32k APIs, but injected third party DLLs sometimes do. This causes problems because COM initialization in particular uses win32k calls to get the Window Station and Desktop if the DLL is present. Those calls will fail with Win32k Lockdown enabled, silently breaking COM and features that depend on it such as our accessibility support. 

On Windows 10 Fall Creators Update and later we have a fix that blocks these calls and forces a fallback, which keeps everything working nicely. We measured that not loading the DLLs causes about a 15% performance gain when opening new tabs, adding a nice performance bonus on top of the security benefit.

Remaining Work

As hinted in the previous section, Win32k Lockdown will initially roll out on Windows 10 Fall Creators Update and later. On Windows 8, and unpatched Windows 10 (which unfortunately seems to be in use!), we are still testing a fix for the case where third party DLLs interfere, so support for those will come in a future release.

For Canvas 2D support, we’re still looking into improving the performance of applications that regressed when the processes were switched around. Simultaneously, there is experimentation underway to see if hardware acceleration for Canvas 2D can be implemented through WebGL, which would increase code sharing between the 2D and 3D implementations and take advantage of modern video drivers being better optimized for the 3D case.


Retrofitting a significant change in the separation of responsibilities in a large application like Firefox presents a large, multi-year engineering challenge, but it is absolutely required in order to advance browser security and to continue keeping our users safe. We’re pleased to have made it through and present you with the result in Firefox 100.

Other Platforms

If you’re a Mac user, you might wonder if there’s anything similar to Win32k Lockdown that can be done for macOS. You’d be right, and I have good news for you: we already quietly shipped the changes that block access to the WindowServer in Firefox 95, improving security and speeding process startup by about 30-70%. This too became possible because of the Remote WebGL and Non-Native Theming work described above.

For Linux users, we removed the connection from content processes to the X11 Server, which stops attackers from exploiting the unsecured X11 protocol. Although Linux distributions have been moving towards the more secure Wayland protocol as the default, we still see a lot of users that are using X11 or XWayland configurations, so this is definitely a nice-to-have, which shipped in Firefox 99.

We’re Hiring

If you found the technical background story above fascinating, I’d like to point out that our OS Integration & Hardening team is going to be hiring soon. We’re especially looking for experienced C++ programmers with some interest in Rust and in-depth knowledge of Windows programming.

If you fit this description and are interested in taking the next leap in Firefox security together with us, we’d encourage you to keep an eye on our careers page.

Thanks to Bob Owen, Chris Martin, and Stephen Pohl for their technical input to this article, and for all the heavy lifting they did together with Kelsey Gilbert and Jed Davis to make these security improvements ship.

The post Improved Process Isolation in Firefox 100 appeared first on Mozilla Hacks - the Web developer blog.

The Mozilla Thunderbird Blog7 Great New Features Coming To Thunderbird 102

Welcome back to the Thunderbird blog! We’re really energized about our major 2022 release and cannot wait to put it in your hands. Thunderbird 102 includes several major new features for our global community of users, and we’re confident you’ll love them. So grab your favorite beverage, and let’s highlight seven features from Thunderbird 102 we’re most excited about.

Before we jump in, it’s worth mentioning that we’ve been rapidly expanding our team in order to power up your productivity and improve your favorite email client. From major milestones like a completely modernized UI/UX in next year’s Thunderbird 114 (codenamed “Supernova”) to smaller touches like new iconography, elegant new address book functionality, and an Import/Export wizard, all of it happens for you and because of you. Thunderbird not only survives but thrives thanks to your generous donations. Every amount, large or small, makes a difference. Please consider donating what you can, and know that we sincerely appreciate your support!

OK! Here's an overview of the new features in Thunderbird 102. Stay tuned to our blog for in-depth updates and deeper dives leading up to the late June release.

#1: The New Address Book In Thunderbird 102

We’ve teased a new address book in the past, and it’s finally coming in Thunderbird 102. Not only does the refreshed design make it easier to navigate and interact with your contacts, but it also boasts new features to help you better understand who you’re communicating with.

Address Book gets a new look and enhanced functionality in Thunderbird 102

The new Address Book is compatible with the vCard spec, the de facto standard for saving contacts. If your app (like Google Contacts) or device (iPhone, Android) can export existing contacts into vCard format, Thunderbird can import them. And as you can see from the above screenshot, each contact card acts as a launchpad for messaging, email, or event creation involving that contact.
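To get a feel for the format, here is a deliberately minimal vCard reader in Python. This is illustrative only — Thunderbird’s importer handles the full RFC 6350 grammar (line folding, parameters, escaping), which this sketch does not:

```python
def parse_vcards(text: str):
    """Minimal vCard reader: yields one dict of properties per BEGIN/END block."""
    card = None
    for line in text.splitlines():
        line = line.strip()
        if line == "BEGIN:VCARD":
            card = {}
        elif line == "END:VCARD":
            yield card
            card = None
        elif card is not None and ":" in line:
            key, _, value = line.partition(":")
            # Drop property parameters, e.g. "EMAIL;TYPE=work" -> "EMAIL".
            card[key.split(";")[0]] = value

sample = """BEGIN:VCARD
VERSION:4.0
FN:Ada Lovelace
EMAIL:ada@example.org
END:VCARD"""

for contact in parse_vcards(sample):
    print(contact["FN"], contact["EMAIL"])  # Ada Lovelace ada@example.org
```

Because every major contacts app can emit this format, supporting it well is what makes the new Address Book a practical import target.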

We’re also adding several more fields to each contact entry, and they’re displayed in a much better, clearer way than before.

Your contacts are getting a serious upgrade in Thunderbird 102! There’s so much more to share on this front, so please watch this blog for a standalone deep-dive on the new Address Book in the near future.

#2: The Spaces Toolbar

One of the underlying themes of Thunderbird 102 is making the software easier to use, with smarter visual cues that can enhance your productivity. The new Spaces Toolbar is an easy, convenient way to move between all the different activities in the application, such as managing your email, working with contacts via that awesome new address book, using the calendar and tasks functionality, chat, and even add-ons!

The Spaces Toolbar, on the left-hand side of Thunderbird

If you want to save screen real estate, the Spaces Toolbar can be dismissed, and you can instead navigate the different activities Thunderbird offers with the new pinned Spaces tab. (Pictured to the left of the tabs at the top)

Pinned Spaces tab showing the different activities, to the left of the tabs

#3: Link Preview Cards

Want to share a link with your friends or your colleagues, but do it with a bit more elegance? Our new Link Preview Cards do exactly that. When you paste a link into the compose window, we’ll ask you (via a tooltip you can turn off) if you’d like to display a rich preview of the link. It’s a great way for your recipient to see at a glance what they’re clicking out to, and a nice way for your emails to have a bit more polish if desired!

Embedded Link Previews in Thunderbird 102

#4: Account Setup Hub In Thunderbird 102

In past releases, we have improved first-time account setup. When setting up an email, autodiscovery of calendars and address books works really well! But managing accounts and setting up additional accounts beyond your initial setup has lagged behind. We are updating that experience in Thunderbird 102.

Want to use Thunderbird without an email account? We know you exist, and we’re making this much easier for you! After installing the software, from now on you’ll be taken to the below account hub instead of being forced to set up a new mail account. You’re free to configure Thunderbird in the order you choose, and only the elements you choose.

New Account Setup Hub in Thunderbird 102

#5: Import/Export Wizard

And that’s a perfect segue into the brand new Import and Export tool. Moving accounts and data in and out of Thunderbird should be a breeze! Until now, you’ve had to use add-ons for this, but we’re excited to share that this is now core functionality with Thunderbird 102.

A step-by-step wizard will provide a guided experience for importing all that data that’s important to you. Moving from Outlook, SeaMonkey, or another Thunderbird installation will be easier than ever.

A screenshot from the new Import/Export wizard

We’ve also taken extra precautions to ensure that no data is accidentally duplicated in your profile after an import. To that end, none of the actions you choose are executed until the very last step in the process. As with the new Address Book, watch for a deeper dive into the new Import/Export tool in a future blog post.

#6: Matrix Chat Support

We obviously love open source, which is one of the reasons why we’ve added support for the popular, decentralized chat protocol Matrix into Thunderbird 102. Those of you enjoying the Beta version know it’s been an option since version 91, but it will finally be usable out-of-the-box in this new release. We’re going to continuously develop updates to the Matrix experience, and we welcome your feedback.

#7: Message Header Redesign

Another UX/Visual update can be seen in the redesign of the all-important message header. The refreshed design better highlights important info, making it more responsive and easier for you to navigate.

Redesigned message header in Thunderbird 102

All of these improvements are gradual but confident steps toward the major release of Thunderbird 114 “Supernova” in 2023, which is set to deliver a completely modernized overhaul to the Thunderbird interface.

Thunderbird 102 Availability?

We think you’re going to love this release and can’t wait for you to try it!

Interested in experiencing Thunderbird 102 early? It should be available in our Beta channel by the end of May 2022. We encourage you to try it! We’ve entered “feature freeze” for version 102, and are focusing on polishing it up now. That means your Beta experience should be quite stable.

For everyone who’s enjoying the Stable version, you can expect it by the end of June 2022.

Thunderbird is the leading open-source, cross-platform email and calendaring client, free for business and personal use. We want it to stay secure and become even better. A donation will allow us to hire developers, pay for infrastructure, expand our userbase, and continue to improve.

Click here to make a donation

The post 7 Great New Features Coming To Thunderbird 102 appeared first on The Thunderbird Blog.

SeaMonkeySeaMonkey 2.53.12 is out!

Hi All,

The SeaMonkey Project is pleased to announce the immediate release of 2.53.12!

Please check out [1] and [2].  Updates forthcoming.

Nothing beats a quick release.  🙂  Kudos to the guys driving these bug fixes.

[1] – https://www.seamonkey-project.org/releases/seamonkey2.53.12/

[2] – https://www.seamonkey-project.org/releases/2.53.12

The Mozilla Thunderbird BlogOpenPGP keys and SHA-1

As you may know, Thunderbird offers email encryption and digital email signatures using the OpenPGP technology and uses Ribose’s RNP library that provides the underlying functionality.

To strengthen the security of the OpenPGP implementation, a recent update of the RNP library included changes to refuse the use of several unsafe algorithms, such as MD5 and SHA-1. The Thunderbird team delivered RNP version 0.16.0 as part of the Thunderbird 91.8.0 update.

Unfortunately, this change resulted in some users no longer being able to use their OpenPGP keys. We learned that the affected users still depend on keys that were created or modified with OpenPGP software that used SHA-1 for the signatures that are part of OpenPGP keys.

After analyzing and discussing the issue, we decided to continue to allow SHA-1 for this use of signatures, also known as binding signatures. This matches the behavior of other popular OpenPGP software like GnuPG. Thunderbird 91.9.0 includes this fix and will be released today.

While some attacks on SHA-1 are possible, the currently known attacks are difficult to apply to OpenPGP binding signatures. In addition, RNP 0.16.0 includes SHA-1 collision detection code, which should make it even more difficult for an attacker to abuse the fact that Thunderbird accepts SHA-1 in binding signatures.
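The resulting policy can be pictured as a small allow-list check with a carve-out for binding signatures. This is a hypothetical sketch of the logic described above, not RNP’s actual code:

```python
# Hash algorithms considered too weak for general signature verification.
WEAK_HASHES = {"md5", "sha1"}

def hash_allowed(algorithm: str, context: str) -> bool:
    """Decide whether a signature hash is acceptable in a given context.

    Strong hashes are always allowed. Weak hashes are rejected,
    with one exception: SHA-1 remains accepted on key binding
    signatures, mirroring GnuPG's behavior.
    """
    if algorithm not in WEAK_HASHES:
        return True
    return algorithm == "sha1" and context == "binding-signature"

print(hash_allowed("sha256", "document-signature"))  # True
print(hash_allowed("sha1", "binding-signature"))     # True  (the carve-out)
print(hash_allowed("sha1", "document-signature"))    # False
print(hash_allowed("md5", "binding-signature"))      # False
```

The carve-out is deliberately narrow: SHA-1 stays rejected everywhere except the one place where legacy keys still depend on it.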

More details on the background, on the affected and future versions, and considerations for other OpenPGP software, can be found in the following knowledge base article:



Follow Thunderbird on Mastodon.

The post OpenPGP keys and SHA-1 appeared first on The Thunderbird Blog.

hacks.mozilla.orgCommon Voice dataset tops 20,000 hours

The latest Common Voice dataset, released today, has achieved a major milestone: More than 20,000 hours of open-source speech data that anyone, anywhere can use. The dataset has nearly doubled in the past year.

Why should you care about Common Voice?

  • Do you have to change your accent to be understood by a virtual assistant? 
  • Are you worried that so many voice-operated devices are collecting your voice data for proprietary Big Tech datasets?
  • Are automatic subtitles unavailable for you in your language?

Automatic Speech Recognition plays an important role in the way we access information; however, of the 7,000 languages spoken globally today, only a handful are supported by most products.

Mozilla’s Common Voice seeks to change the language technology ecosystem by supporting communities to collect voice data for the creation of voice-enabled applications for their own languages. 

Common Voice Dataset Release 

This release wouldn’t be possible without our contributors — from donating voice clips, to initiating their language in our project, to opening new opportunities for people to build voice technology tools that can support every language spoken across the world.

Access the dataset: https://commonvoice.mozilla.org/datasets

Access the metadata: https://github.com/common-voice/cv-dataset 

Highlights from the latest dataset:

  • The new release also features six new languages: Tigre, Taiwanese (Minnan), Meadow Mari, Bengali, Toki Pona and Cantonese.
  • Twenty-seven languages now have at least 100 hours of speech data. They include Bengali, Thai, Basque, and Frisian.
  • Nine languages now have at least 500 hours of speech data. They include Kinyarwanda (2,383 hours), Catalan (2,045 hours), and Swahili (719 hours).
  • Nine languages now have at least 45% of their gender tags as female. They include Marathi, Dhivehi, and Luganda.
  • The Catalan community fueled major growth. The Catalan community’s Project AINA — a collaboration between Barcelona Supercomputing Center and the Catalan Government — mobilized Catalan speakers to contribute to Common Voice. 
  • Community participation in decision making continues to grow. The Common Voice Language Rep Cohort has contributed feedback and learnings about optimal sentence collection, the inclusion of language variants, and more. 

Create with the Dataset 

How will you create with the Common Voice Dataset?

Take some inspiration from technologists who are creating conversational chatbots, spoken language identifiers, research papers and virtual assistants with the Common Voice Dataset by watching this talk: 


Share with us how you are using the dataset on social media using #CommonVoice, or share on our community Discourse. 


The post Common Voice dataset tops 20,000 hours appeared first on Mozilla Hacks - the Web developer blog.

hacks.mozilla.orgMDN Plus now available in more regions

At the end of March this year, we announced MDN Plus, a new premium service on MDN that allows users to customize their experience on the website.

We are very glad to announce today that it is now possible for MDN users around the globe to create an MDN Plus free account, no matter where they are.

Click here to create an MDN Plus free account*.

The premium version of the service is currently available as follows: in the United States, Canada (since March 24th, 2022), Austria, Belgium, Finland, France, United Kingdom, Germany, Ireland, Italy, Malaysia, the Netherlands, New Zealand, Puerto Rico, Sweden, Singapore, Switzerland, Spain (since April 28th, 2022), Estonia, Greece, Latvia, Lithuania, Portugal, Slovakia and Slovenia (since June 15th, 2022).

We continue to work towards expanding this list even further.

Click here to create an MDN Plus premium account**.

* Now available to everyone

** You will need to subscribe from one of the regions mentioned above to be able to have an MDN Plus premium account at this time

The post MDN Plus now available in more regions appeared first on Mozilla Hacks - the Web developer blog.

SeaMonkeySeaMonkey 2.53.12 Beta 1 is out!

Hi All,

The SeaMonkey Project is pleased to announce the immediate release of 2.53.12 Beta 1.

As it is a beta, please do backup your profile before updating to it.

Please check out [1] and [2].

Updates are slowly being turned on for 2.53.12b1 after I post this blog.  The last few times users had updated to the newest version even before I had posted the blog, so that somewhat confused people.   This shouldn’t be the case now.(After all, I had posted the blog, *then* I flipped the update bit, then updated this blog with this side note. :))

Best Regards,


[1] – https://www.seamonkey-project.org/releases/seamonkey2.53.12b1/

[2] – https://www.seamonkey-project.org/releases/2.53.12b1

SUMO BlogIntroducing Dayana Galeano

Hi everybody, 

I’m excited to welcome Dayana Galeano, our new Community Support Advocate, to the Customer Experience team.

Here’s a short introduction from Dayana: 

Hi everyone! My name is Dayana and I’ll be helping out with mobile support for Firefox. I’ll be pitching in to help respond to app reviews and identifying trends to help track feedback. I’m excited to join this community and work alongside all of you!

Since the Community Support Advocate role is new for Mozilla Support, we’d like to take a moment to describe the role and how it will enhance our current support efforts. 

Open-source culture has been at the center of Mozilla’s identity since the beginning, and this has been our guide for how we support our products. Our “peer to peer” support model, powered by the SUMO community, has enabled us to support Firefox and other products through periods of rapid growth and change, and it’s been a crucial strategy to our success. 

With the recent launches of premium products like Mozilla VPN and Firefox Relay, we’ve adapted our support strategy to meet the needs and expectations of subscribers. We’ve set up processes to effectively categorize and identify issues and trends, enabling us to pull meaningful insights out of each support interaction. In turn, this has strengthened our relationships with product teams and improved our influence when it comes to improving customer experience. With this new role, we hope to apply some of these processes to our peer to peer support efforts as well.

To be clear about our intentions, this is not a step away from peer to peer support at Mozilla. Instead, we are optimistic that this will deepen the impact our peer to peer support strategy will have with the product teams, enabling us to better segment our support data, share more insightful reports on releases, and showcase the hard work that our community is putting into SUMO each and every day. This can then pave the way for additional investment into resources, training, and more effective onboarding for new members of the community. 

Dayana’s primary focus will be supporting the mobile ecosystem, including Firefox for Android, Firefox for iOS, Firefox Focus (Android and iOS), as well as Firefox Klar. The role will initially emphasize support question moderation, including tagging and categorizing our inbound questions, and the primary support channels will be app reviews on iOS and Android. This will evolve over time, and we will be sure to communicate about these changes.

And with that, please join me to give a warm welcome to Dayana! 

Open Policy & AdvocacyThe FTC and DOJ merger guidelines review is an opportunity to reset competition enforcement in digital markets

As the internet becomes increasingly closed and centralized, consolidation and the opportunity for anti-competitive behavior rises. We are encouraged to see legislators and regulators in many jurisdictions exploring how to update consumer protection and competition policies. We look forward to working together to advance innovation, interoperability, and consumer choice.

Leveling the playing field so that any developer or company can offer innovative new products and people can shape their own online experiences has long been at the core of Mozilla’s vision and of our advocacy to policymakers. Today, we focus on the call for public comments on merger enforcement from the US Federal Trade Commission (FTC) and the US Department of Justice (DOJ) – a key opportunity for us to highlight how existing barriers to competition and transparency in digital markets can be addressed in the context of merger rules.

Our submission focuses on the below key themes, viewed particularly through the lens of increasing competition in browsers and browser engines – technologies that are central to how consumers engage on the web.

  • The Challenge of Centralization Online: For the internet to fulfill its promise as a driver for innovation, a variety of players must be able to enter the market and grow. Regulators need to be agile in their approach to tackle walled gardens and vertically-integrated technology stacks that tilt the balance against small, independent players.
  • The Role of Data: Data aggregation can be both the motive and the effect of a merger, with potential harms to consumers from vertically integrated data sharing being increasingly recognised and sometimes addressed. The roles of privacy and data protection in competition should be incorporated into merger analysis in this context.
  • Greater Transparency to Inform Regulator Interventions: Transparency tools can provide insight into how data is being used or how it is shared across verticals and are important both for consumer protection and to ensure effective competition enforcement. We need to create the right regulatory environment for these tools to be developed and used, including safe harbor access to data for public interest researchers.
  • Enabling Effective Interoperability as a Remedy: Interoperability should feature as an essential tool in the competition enforcer’s toolkit. In particular, web compatibility – ensuring that services and websites work equally no matter what operating system, browser, or device a person is using – may prove useful in addressing harms arising from a vertically integrated silo of technologies.
  • Critical Role of Open Standards in Web Compatibility: The role of Standards Development Organizations and standards processes is vital to an open and decentralized internet.
  • Harmful Design Practices Impede Consumer Choice: The FTC and the DOJ should ban design practices that inhibit consumer control. This includes Dark Patterns and Manipulative Design Techniques used by companies to trick consumers into doing something they don’t mean to do.


The post The FTC and DOJ merger guidelines review is an opportunity to reset competition enforcement in digital markets appeared first on Open Policy & Advocacy.

hacks.mozilla.orgAdopting users’ design feedback

On March 1st, 2022, MDN Web Docs released a new design and a new brand identity. Overall, the community responded to the redesign enthusiastically and we received many positive messages and kudos. We also received valuable feedback on some of the things we didn’t get quite right, like the browser compatibility table changes as well as some accessibility and readability issues.

For us, MDN Web Docs has always been synonymous with the Ubuntu philosophy, “I am because we are.” Translated into this context: “MDN Web Docs is the amazing resource it is because of our community’s support, feedback, and contributions.”

Since the initial launch of the redesign and of MDN Plus afterwards, we have been humbled and overwhelmed by the level of support we received from our community of readers. We do our best to listen to what you have to say and to act on suggestions so that together, we make MDN better. 

Here is a summary of how we went about addressing the feedback we received.

Eight days after the redesign launch, we started the MDN Web Docs Readability Project. Our first task was to triage all issues submitted by the community that related to readability and accessibility on MDN Web Docs. Next up, we identified common themes and collected them in this meta issue. Over time, this grew into 27 unique issues and several related discussions and comments. We collected feedback on GitHub and also from our communities on Twitter and Matrix.

With the main pain points identified, we opened a discussion on GitHub, inviting our readers to follow along and provide feedback on the changes as they were rolled out to a staging instance of the website. Today, roughly six weeks later, we are pleased to announce that all these changes are in production. This was not the effort of any one person but is made up of the work and contributions of people across staff and community.

Below are some of the highlights from this work.

Dark mode

In particular, we updated the color palette used in dark mode.

  • We reworked the initial color palette to use colors that are slightly more subtle in dark mode while ensuring that we still meet AA accessibility guidelines for color contrast.
  • We reconsidered the darkness of the primary background color in dark mode and settled on a compromise that improved the experience for the majority of readers.
  • We cleaned up the notecards that indicate notices such as warnings, experimental features, items not on the standards track, etc.


We got a clear sense from some of our community folks that readers found it more difficult to skim content and find sections of interest after the redesign. To address these issues, we made a number of improvements.

Browser compatibility tables

Another area of the site for which we received feedback after the redesign launch was the browser compatibility tables. Almost a project of its own inside the larger readability effort, the work we invested here resulted, we believe, in a much-improved user experience. All of the changes listed below are now in production:

  • We restored version numbers in the overview, which are now color-coded across desktop and mobile.
  • The font size has been bumped up for easier reading and skimming.
  • The line height of rows has been increased for readability.
  • We reduced the table cells to one focusable button element.
  • Browser icons have been restored in the overview header.
  • We reordered support history chronologically to make the version range that the support notes refer to visually unambiguous.

We also fixed the following bugs:

  • Color-coded pre-release versions in the overview
  • Made mouseover titles with release dates consistent
  • Added the missing footnote icon in the overview
  • Showed the correct support status for edge cases (e.g., omitting the prefix symbol when both prefixed and unprefixed support exist)
  • Streamlined mobile dark mode

We believe this is a big step in the right direction but we are not done. We can, and will, continue to improve site-wide readability and functionality of page areas, such as the sidebars and general accessibility. As with the current improvements, we invite you to provide us with your feedback and always welcome your pull requests to address known issues.

This was a collective effort, but we’d like to mention folks who went above and beyond. Schalk Neethling and Claas Augner from the MDN Team were responsible for most of the updates. From the community, we’d like to especially thank Onkar Ruikar, Daniel Jacobs, Dave King, and Queen Vinyl Da.i’gyu-Kazotetsu.


The post Adopting users’ design feedback appeared first on Mozilla Hacks - the Web developer blog.

hacks.mozilla.orgMozilla partners with the Center for Humane Technology

We’re pleased to announce that we have partnered with the Center for Humane Technology, a nonprofit organization that radically reimagines digital infrastructure. Its mission is to drive a comprehensive shift toward humane technology that supports collective well-being, democracy, and a shared information environment. Many of you may remember the Center for Humane Technology from the Netflix documentary ‘The Social Dilemma’, which popularized the saying “If you’re not paying for the product, then you are the product”. The Social Dilemma is all about the dark side of technology, focusing on the individual and societal impact of algorithms.

It’s no surprise that the decision to partner was an easy one: it supports our efforts for a safe and open web that is accessible and joyful for all. Many people do not understand how AI and algorithms regularly touch our lives, and feel powerless in the face of these systems. We are dedicated to making sure the public understands that we can and must have a say in when machines are used to make important decisions – and shape how those decisions are made.

Over the last few years, our work has been increasingly focused on building more trustworthy AI and safe online spaces. This ranges from challenging YouTube’s algorithm (Mozilla research shows that the platform keeps pushing harmful videos, recommending misinformation, violent content, hate speech, and scams to its over two billion users) to developing Enhanced Tracking Protection in Firefox, which automatically protects your privacy while you browse, and Pocket, which recommends high-quality, human-curated articles without collecting your browsing history or sharing your personal information with advertisers.

Let’s face it, most, if not all people, would probably prefer to use social media platforms that are safer and technologists should design products that reflect all users and without bias. As we collectively continue to think about our role in these areas — now and in the future, this course from the Center for Humane Tech is a great addition to the many tools necessary for change to take place. 

The course, rightly titled ‘Foundations of Humane Technology’, launched out of beta in March of this year, after rave reviews from hundreds of beta testers!

It explores the personal, societal, and practical challenges of being a humane technologist. Participants will leave the course with a strong conceptual framework, hands-on tools, and an ecosystem of support from peers and experts. Topics range from respecting human nature to minimizing harm to designing technology that deliberately avoids reinforcing inequitable dynamics of the past. 

The course is completely free of charge and is centered on building awareness and self-education through an online, at-your-own-pace (or binge-worthy) set of eight modules. It is aimed at professionals, with or without a technical background, who are involved in shaping tomorrow’s technology.

It includes interactive exercises and reflections to help you internalize what you’re learning, and regular optional Zoom sessions to discuss course content, connect with like-minded people, and learn from experts in the field. It even awards a credential upon completion that can be shared with colleagues and prospective employers.

The problem with tech is not a new one, but this course is a stepping stone in the right direction.

The post Mozilla partners with the Center for Humane Technology appeared first on Mozilla Hacks - the Web developer blog.

Blog of DataThis Week in Glean: What Flips Your Bit?

(“This Week in Glean” is a series of blog posts that the Glean Team at Mozilla is using to try to communicate better about our work. They could be release notes, documentation, hopes, dreams, or whatever: so long as it is inspired by Glean. You can find an index of all TWiG posts online.)

The idea of “soft-errors”, particularly “single-event upsets” often comes up when we have strange errors in telemetry. Single-event upsets are defined as: “a change of state caused by one single ionizing particle (ions, electrons, photons…) striking a sensitive node in a micro-electronic device, such as in a microprocessor, semiconductor memory, or power transistors. The state change is a result of the free charge created by ionization in or close to an important node of a logic element (e.g. memory “bit”)”. And what exactly causes these single-event upsets? Well, from the same Wikipedia article: “Terrestrial SEU arise due to cosmic particles colliding with atoms in the atmosphere, creating cascades or showers of neutrons and protons, which in turn may interact with electronic circuits”. In other words, energy from space can affect your computer and turn a 1 into a 0 or vice versa.

There are examples in the data Glean collects from Mozilla projects like Firefox that appear to be malformed by a single bit from the value we would expect. In almost every case we cannot find any plausible explanation or bug anywhere in the infrastructure from client to analysis, so we often shrug and say “oh well, it must be cosmic rays”. A totally fantastical explanation for an empirical measurement of some anomaly that we cannot explain.

What if it wasn’t just some fantastical explanation? What if there was some grain of truth in there, and somehow we could detect cosmic rays with browser telemetry data? I was personally struck by these questions recently, when I became aware of a newly filed bug that described just these sorts of errors in the data. The errors were showing up as strings with a single character different from the expected values (well, a single bit, actually). At about the same time, I read an article about a geomagnetic storm that hit at the end of March. Something clicked and I started to really wonder if we could possibly have detected a cosmic event through these single-event upsets in our telemetry data.

I did a little research to see if there was any data on the frequency of these events and found a handful of articles (for instance) that kept referring to a study done by IBM in the 1990’s that referenced 1 cosmic ray bit flip per 256MB of memory per month. After a little digging, I was able to come up with two papers by J.F. Ziegler, an IBM researcher. The first paper, from 1979, on “The Effects of Cosmic Rays on Computer Memories”, goes into the mechanisms by which cosmic rays can affect bits in computer memory, and makes some rough estimates on the frequency of such events, as well as the effect of elevation on the frequency. The later article from the 1990’s, “Accelerated Testing For Cosmic Soft-Error Rate”, went more in detail in measuring the soft-error rates of different chips by different manufacturers. While I never found the exact source of the “1 bit-flip per 256MB per month” quote in either of these papers, the figure could possibly be generalized from the soft-error rate data in the papers. So, while I’m not entirely sure that that number for the rate is accurate, it’s probably close enough for us to do some simple calculations.

So, now that I had checked out the facts behind cosmic-ray-induced errors, it was time to see if there was any evidence of this in our data. First of all, where could I find these errors, and where would I most likely find them? I thought about the types of data we collect and decided that it would be nearly impossible to detect a bit flip in a numeric field, unless it was a field with a very limited expected range. String fields seemed like easier candidates to search, since single bit flips tend to make strings a little weird, due to a single unexpected character. There are also some good places to go looking for bit flips in our error streams, such as when a column or table name is affected. Secondly, I had to make a few hand-wavy assumptions in order to crunch some numbers. The main assumption is that every bit in our data has the same chance of being flipped as any other bit in any other memory. The secondary assumption is that the bits are getting flipped on the client side of the connection, and not while on our servers.
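To make the string-field search concrete, here is a rough sketch of how one might flag values that are exactly one flipped bit away from a known-good label. This is a hypothetical helper, not code from the Glean pipeline, and the label names are made up for illustration:

```python
def bit_flip_candidates(observed, expected_labels):
    """Return (observed, expected) pairs where the observed string is
    exactly one flipped bit away from a known-good label."""
    def one_bit_apart(a, b):
        if len(a) != len(b):
            return False
        # Collect XORs of the bytes that differ; a single flipped bit
        # means exactly one byte differs, and its XOR is a power of two.
        diffs = [x ^ y for x, y in zip(a, b) if x != y]
        return len(diffs) == 1 and diffs[0] & (diffs[0] - 1) == 0

    expected = [(label, label.encode("utf-8")) for label in expected_labels]
    matches = []
    for value in observed:
        raw = value.encode("utf-8")
        for label, enc in expected:
            if one_bit_apart(raw, enc):
                matches.append((value, label))
    return matches

# "memory_pressurg" is "memory_pressure" with one bit of the final
# 'e' (0x65) flipped, producing 'g' (0x67).
print(bit_flip_candidates(["memory_pressurg", "startup"], ["memory_pressure"]))
# [('memory_pressurg', 'memory_pressure')]
```

Anything this turns up is only a candidate, of course: typos and encoding bugs can also produce strings that happen to sit one bit away from an expected label.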

We have a lot of users, and the little bit of data we collect from each client really adds up. Let’s convert that error rate to some more convenient units. Using the 1/256MB/month figure from the article, that’s 4096 cosmic soft-errors per terabyte per month. According to my colleague, chutten, we receive about 100 terabytes of data per day, or 2800 TB in a 4-week period. If we multiply that out, it looks like we have the potential to find 11,468,800 bit flips in a given 4-week period of our data. WHAT! That seemed like an awful lot of possibilities, even if I suspect a good portion of them are undetectable just by virtue of not being an “obvious” bit flip.
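That arithmetic is easy to reproduce; keep in mind that both input figures (the 1-per-256MB-per-month soft-error rate and the roughly 100 TB/day ingestion volume) are the rough assumptions quoted above, not measured constants:

```python
# Back-of-the-envelope estimate of bit flips in 4 weeks of telemetry,
# using the figures quoted in the post (both are rough assumptions).
flips_per_mb_per_month = 1 / 256   # IBM-derived soft-error rate
mb_per_tb = 1024 * 1024            # binary units

flips_per_tb_per_month = flips_per_mb_per_month * mb_per_tb  # 4096
tb_per_4_weeks = 100 * 28          # ~100 TB/day for 28 days

total_flips = flips_per_tb_per_month * tb_per_4_weeks
print(int(total_flips))  # 11468800
```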

Looking at the Bugzilla issue that had originally sparked my interest in this, it contained some evidence of labels embedded in the data being affected by bit-flips. This was pretty easy to spot because we knew what labels we were expecting and the handful of anomalies stood out. Not only that, the effect seemed to be somewhat localized to a geographical area. Maybe this wasn’t such a bad place to try and correlate this information with space-weather forecasts. Back to the internet and I find an interesting space-weather article that seems to line up with the dates from the bug. I finally hit a bit of a wall in this fantastical investigation when I found it difficult to get data on solar radiation by day and geographical location. There is a rather nifty site, SpaceWeatherLive.com which has quite a bit of interesting data on solar radiation, but I was starting to hit the limits of my current knowledge and the limits on time that I had set out for myself to write this blog post.

So, rather reluctantly, I had to set aside any deeper investigations into this for another day. I do leave the search here feeling that not only is it possible that our data contains signals for cosmic activity, but that it is very likely that it could be used to correlate or even measure the impact of cosmic ray induced single-event upsets. I hope that sometime in the future I can come back to this and dig a little deeper. Perhaps someone reading this will also be inspired to poke around at this possibility and would be interested in collaborating on it, and if you are, you can reach me via the Glean Channel on Matrix as @travis. For now, I’ve turned something that seemed like a crazy possibility in my mind into something that seems a lot more likely than I ever expected. Not a bad investigation at all.

Open Policy & AdvocacyCompetition should not be weaponized to hobble privacy protections on the open web

Recent privacy initiatives by major tech companies, such as Google’s Chrome Privacy Sandbox (GCPS) and Apple’s App Tracking Transparency, have brought into sharp focus a key underlying question – should we maintain pervasive data collection on the web under the guise of preserving competition?

Mozilla’s answer to this is that the choice between a more competitive or a more privacy-respecting web is a false one and should be scrutinized. Many parties on the Internet, including but also beyond the largest players, have built their business models to depend on extensive user tracking. Because this tracking is so baked into the web ecosystem, closing privacy holes necessarily means limiting various parties’ ability to collect and exploit that data. This ubiquity is not, however, a reason to protect a status quo that harms consumers and society. Rather, it is a reason to move away from that status quo to find and deploy better technology that continues to offer commercial value with better privacy and security properties.

None of this is to say that regulators should not intervene to prevent blatant self-preferencing by large technology companies, including in their advertising services. However, it is equally important that strategically targeted complaints not be used as a trojan horse to prevent privacy measures, such as the deprecation of third party cookies (TPCs) or restricting device identifiers, from being deployed more widely. As an example, bundling legitimate competition scrutiny of the GCPS proposals with the deprecation of third party cookies has led to the indefinite delay of this vital privacy improvement in one of the most commonly used browsers. Both the competition and privacy aspects warranted close attention, but leaning too much in favor of the former has left people unprotected.

Rather than asking regulators to look at the substance of privacy features so they do not favor dominant platforms (and there is undoubtedly work to be done on that front), vested interests have instead managed to spin the issue into one with a questionable end goal – to ensure they retain access to exploitative models of data extraction. This access, however, is coming at the cost of meaningful progress in privacy preserving advertising. Any attempt to curtail access to the unique identifiers by which people are tracked online (cookies or device IDs) is being painted as “yet another example” of BigTech players unfairly exercising dominant power. Mozilla agrees with the overall need for scrutiny of concentrated platforms when it comes to the implementation of such measures. However, we are deeply concerned that the scope creep of these complaints to include privacy protections, such as TPC deprecation which is already practiced elsewhere in the industry, is actively harming consumers.

Instead of standing in the way of privacy protection, the ecosystem should instead be working to create a high baseline of privacy protections and an even playing field for all players. That means foreclosing pervasive data collection for large and small parties alike. In particular, we urge regulators to consider advertising related privacy enhancements by large companies with the following goals:

  • Prevent Self-Preferencing: It is crucial to ensure that dominant platforms aren’t closing privacy holes for small players while leaving those holes in place for themselves. Dominant companies shouldn’t allow their services to exploit data at the platform-level that third party apps or websites can no longer access due to privacy preserving measures.
  • Restricting First Party Data Sharing: Regulatory interventions should limit data sharing within large technology conglomerates which have first party relationships with consumers across a variety of services. Privacy regulations already require companies to be explicit with consumers about who has access to their data, how it is shared, etc. Technology conglomerates conveniently escape these rules because the individual products and services are housed within the same company. Some would suggest that third party tracking identifiers are a means to counterbalance the dominance of large, first party platforms. However, we believe competition regulators can tackle dominance in first party data directly through targeted interventions governing how data can be shared and used within the holding structures of large platforms. This leverages classic competition remedies and is far better than using regulatory authority to prop up an outdated and harmful tracking technology like third party cookies.

Consumer welfare is at the heart of both competition and privacy enforcement, and leaving people’s privacy at risk shouldn’t be a remedy for market domination. Mozilla believes that the development of new technologies and regulations will need to go hand in hand to ensure that the future of the web is both private for consumers and remains a sustainable ecosystem for players of all sizes.

The post Competition should not be weaponized to hobble privacy protections on the open web appeared first on Open Policy & Advocacy.

SUMO BlogWhat’s up with SUMO – April 2022

Hi everybody,

April is a transition month: the season is starting to change from winter to spring, and a new quarter is beginning to unfold. There’s a lot to plan, but that also means a lot of things to be excited about. With that spirit, let’s see what the Mozilla Support community has been up to these days:

Welcome note and shout-outs

If you know anyone that we should feature here, please contact Kiki and we’ll make sure to add them in our next edition.

Community news

  • The result of the Mozilla Support Contributor Survey 2022 is out. You can check the summary and recommendations from this deck.
  • The TCP/ETP project has been running so well. The KB changes are on the way, and we finished the forum segmentation and found 2 TCP-related bugs. The final report of the project is underway.
  • We’re one version away from Firefox 100. Check out what to expect in Firefox 100!
  • For those of you who experience problems with media upload in SUMO, check out this contributor thread to learn more about the issue.
  • Mozilla Connect was officially soft-launched recently. Check out the Connect Campaign and learn more about how to get involved!
  • The buddy forum is now archived and replaced with the contributor introduction forum. However, due to a permission issue, we’re hiding the new introduction forum at the moment until we figure out the problem.
  • Previously, I mentioned that we were hoping to finish the onboarding project implementation by the end of Q1. However, we should expect a delay on this project, as our platform team is stretched thin at the moment.

Catch up

  • Watch the monthly community call if you haven’t. Learn more about what’s new in February and March! Reminder: Don’t hesitate to join the call in person if you can. We try our best to provide a safe space for everyone to contribute. You’re more than welcome to lurk in the call if you don’t feel comfortable turning on your video or speaking up. If you feel shy to ask questions during the meeting, feel free to add your questions on the contributor forum in advance, or put them in our Matrix channel, so we can answer them during the meeting.
  • If you’re an NDA’ed contributor, you can watch the recording of the Customer Experience weekly scrum meeting from AirMozilla to catch up with the latest product updates.
  • Check out SUMO Engineering Board to see what the platform team is currently doing.

Community stats


KB pageviews (*)

* KB pageviews number is a total of KB pageviews for /en-US/ only
Month     Page views   Vs previous month
Feb 2022  6,772,577    -14.56%
Mar 2022  7,501,867    +10.77%

Top 5 KB contributors in the last 90 days: 

  1. AliceWyman
  2. Pierre Mozinet
  3. Bithiah
  4. Danny C
  5. Seburo

KB Localization

Top 10 locales based on total page views

Locale Feb 2022 pageviews (*) Mar 2022 pageviews (*) Localization progress (per Apr, 11)(**)
de 9.56% 8.74% 97%
fr 6.83% 6.84% 89%
es 6.79% 6.56% 32%
zh-CN 5.65% 7.28% 100%
ru 4.30% 6.12% 86%
pt-BR 3.91% 4.61% 56%
ja 3.81% 3.82% 52%
it 2.64% 2.45% 99%
pl 2.51% 2.28% 87%
zh-TW 1.42% 1.19% 4%
* Locale pageviews is an overall pageviews from the given locale (KB and other pages)

** Localization progress is the percentage of localized article from all KB articles per locale

Top 5 localization contributors in the last 90 days: 

  1. Jim Spentzos
  2. Michele Rodaro
  3. TyDraniu
  4. Mark Heijl
  5. Milupo

Forum Support

Forum stats


Top 5 forum contributors in the last 90 days: 

  1. FredMcD
  2. Jscher2000
  3. Cor-el
  4. Seburo
  5. Sfhowes
  6. Davidsk

Social Support

Channel Total incoming conv Conv interacted Resolution rate
Feb 2022 229 217 64.09%
Mar 2022 360 347 66.14%

Top 5 Social Support contributors in the past 2 months: 

  1. Bithiah K
  2. Christophe Villeneuve
  3. Kaio Duarte
  4. Tim Maks
  5. Felipe Koji

Play Store Support

Channel                    | Feb 2022                                      | Mar 2022
                           | Priority | Priority replied | Total replied   | Priority | Priority replied | Total replied
Firefox for Android        | 1464     | 58               | 92              | 1387     | 346              | 411
Firefox Focus for Android  | 45       | 11               | 54              | 142      | 11               | 94
Firefox Klar Android       | 0        | 0                | 0               | 2        | 0                | 2

Top 3 Play Store contributors in the past 2 months: 

  1. Paul Wright
  2. Tim Maks
  3. Selim Şumlu

Product updates

Firefox desktop

  • V99 landed on Apr 5, 2022
    • Enable CC autofill UK, FR, DE
  • V100 is set for May 3, 2022
    • Picture in Picture
    • Quality Foundations
    • Privacy Segmentation (promoting Fx Focus)

Firefox mobile

  • Mobile V100 set to land on May 3, 2022
  • Firefox Android V100 (unconfirmed)
    • Wallpaper foundations
    • Task Continuity
    • New Tab Banner – messaging framework
    • Clutter-Free History
  • Firefox iOS V100 (unconfirmed)
    • Clutter Free History
  • Firefox Focus V100 (unconfirmed)
    • Unknown


Other products / Experiments

  • Pocket Android (End of April) [Unconfirmed]
    • Sponsored content
  • Relay Premium V22.03 staggered release cadence
    • Sign in with Alias Icon (April 27th)
    • Sign back in with Alias Icon (April 27th)
    • Promotional email blocker to free users (April 21st)
    • Non-Premium Waitlist (April 21st)
    • Replies count surfaced to users (unknown)
    • History section of News (unknown)
  • Mozilla VPN V2.8 (April 18)
    • Mobile onboarding/authentication flow improvements
    • Connection speed
    • Tunnel VPN through Port 53/DNS


Useful links:

Mozilla L10NL10n Report: April 2022 Edition

Please note some of the information provided in this report may be subject to change as we are sometimes sharing information about projects that are still in early stages and are not final yet. 


Are you a locale leader and want us to include new members in our upcoming reports? Contact us!

New content and projects

What’s new or coming up in Firefox desktop

Firefox 100 is now in beta, and will be released on May 3, 2022. The deadline to update localization is April 24.

As part of this release, users will see a special celebration message.

You can test this dialog by:

  • Opening about:welcome in Nightly.
  • Copying and pasting the following code in the Browser Console:

If you’re not familiar with the Browser Console, take a look at these old instructions to set it up, then paste the command provided above.

What’s new or coming up in mobile

Just like Firefox desktop, v100 is right around the corner for mobile.

  • Firefox for Android and Focus for Android: deadline is April 27.
  • Firefox for iOS and Focus for iOS: deadline is April 24.

Some strings landed late in the cycle – but everything should have arrived by now.

What’s new or coming up in web projects

Relay Website and add-on

The next release is on April 19th. This release includes new strings along with massive updates to both projects thanks to key terminology changes:

  • alias to mask
  • domain to subdomain
  • real email to true email

To learn more about the change, please check out this Discourse post. If you can’t complete the updates by the release date, there will be subsequent updates soon after the deadline so your work will be in production soon. Additionally, the obsolete strings will be removed once the products have caught up with the updates in most locales.

What’s new or coming up in SuMo

What’s new or coming up in Pontoon

Review notifications

We added a notification for suggestion reviews, so you’ll now know when your suggestions have been accepted or rejected. These notifications are batched and sent daily.

Changes to Fuzzy strings

Soon, we’ll be making changes to the way we treat Fuzzy strings. Since they aren’t used in the product, they’ll be displayed as Missing. You will no longer find Fuzzy strings on the dashboards and in the progress charts. The Fuzzy filter will be moved to Extra filters. You’ll still see the yellow checkmark in the History panel to indicate that a particular translation is Fuzzy.

Newly published localizer facing documentation


  • Want to showcase an event coming up that your community is participating in? Reach out to any l10n-driver and we’ll include that (see links to emails at the bottom of this report)

Friends of the Lion

Image by Elio Qoshi

  • Thanks to everybody on the TCP/ETP contributor focus group. You’re all amazing and the Customer Experience team can’t thank you enough for everyone’s collaboration on the project.

Know someone in your l10n community who’s been doing a great job and should appear here? Contact one of the l10n-drivers and we’ll make sure they get a shout-out (see list at the bottom)!

Useful Links

Questions? Want to get involved?

  • If you want to get involved, or have any question about l10n, reach out to:

Did you enjoy reading this report? Let us know how we can improve by reaching out to any one of the l10n-drivers listed above.

Open Policy & AdvocacyPhilippines’ SIM Card Registration Act will expose users to greater privacy and security risks online

While well-intentioned, the Philippines’ Subscriber Identity Module (SIM) Card Registration Act (2022) will set a worrying precedent for the privacy and anonymity of people on the internet. In its current state, approved by the Philippine Congress (House of Representatives and Senate) but awaiting Presidential assent, the law contains provisions requiring social media companies to mandatorily verify the real names and phone numbers of users who create accounts on their platforms. Such a move will not only limit the anonymity that is essential online (for example, for whistleblowing and protection from stalkers) but also reduce the privacy and security users can expect from private companies.

These provisions raise a number of concerns, both in principle as well as regarding implementation, which merit serious reconsideration of the law.

  • Sharing sensitive personal data with technology companies: Implementing the real name and phone number requirement in practice would entail users sending photos of government issued IDs to the companies. This will incentivise the collection of sensitive personal data from government IDs that are submitted for this verification, which can then be used to profile and target users. This is not hypothetical conjecture – we have already seen phone numbers collected for security purposes being used for profiling by some of the largest technology players in the world.
  • Harming smaller players: Such a move would entrench power in the hands of large players in the social media space who can afford to build and maintain such verification systems, harming the ability of innovation from smaller, more agile startups from being able to compete effectively within the Philippines. The broad definition of “social media” in the law also leaves the possibility of applying to many more players than intended, further harming the innovation economy.
  • Increased risk of data breaches: As we have seen from the deployment of digital identity systems around the world, such a move will also increase the risk of data breaches by creating large, single points of failure in the form of the systems where the identification documents used to verify real-world identity are stored by private social media companies. As evidence from far better protected systems has shown, such breaches are just a matter of time, with disastrous consequences for users that will extend far beyond their online interactions on such platforms.
  • Inadequate Solution: There is no evidence to prove that this measure will help fight crimes, misinformation or scams (its motivating factor), and it ignores the benefits that anonymity can bring to the internet, such as whistleblowing and protection from stalkers. Anonymity is an integral aspect of free speech online and such a move will have a chilling effect on public discourse.

For all of these reasons, it is critical that the Subscriber Identity Module (SIM) Card Registration Act not be approved into binding law and that these provisions be reconsidered to allow the residents of the Philippines to continue to enjoy an open internet.

The post Philippines’ SIM Card Registration Act will expose users to greater privacy and security risks online appeared first on Open Policy & Advocacy.

hacks.mozilla.orgPerformance Tool in Firefox DevTools Reloaded

In Firefox 98, we’re shipping a new version of the existing Performance panel. This panel is now based on the Firefox profiler tool that can be used to capture a performance profile for a web page, inspect visualized performance data and analyze it to identify slow areas.

The icing on the cake of this already extremely powerful tool is that you can upload collected profile data with a single click and share the resulting link with your teammates (or anyone really). This makes it easier to collaborate on performance issues, especially in a distributed work environment.

The new Performance panel is available in the Firefox DevTools Toolbox by default and can be opened with the Shift+F5 keyboard shortcut.


The only thing the user needs to do to start profiling is click the big blue button – Start recording. Check out the screenshot below.

As indicated by the onboarding message at the top of the new panel, the previous profiler will be available for some time and will eventually be removed entirely.

When profiling is started (i.e. the profiler is gathering performance data) the user can see two more buttons:

  • Capture recording – Stop recording, get what’s been collected so far and visualize it
  • Cancel recording – Stop recording and throw away all collected data

When the user clicks on Capture recording all collected data are visualized in a new tab. You should see something like the following:

The inspection capabilities of the UI are powerful and let the user inspect every bit of the performance data. You might want to follow this detailed UI Tour presentation created by the Performance team at Mozilla to learn more about all available features.


There are many options that can be used to customize how and what performance data should be collected to optimize specific use cases (see also the Edit Settings… link at the bottom of the panel).

To make customization easier, some presets are available, and the Web Developer preset is selected by default. The profiler can also be used for profiling Firefox itself, and Mozilla uses it extensively to make Firefox fast for millions of its users. The Web Developer preset is intended for profiling standard web pages; the rest are for profiling Firefox.

The Profiler can also be used directly from the Firefox toolbar, without the DevTools Toolbox being opened. The Profiler button isn’t visible in the toolbar by default, but you can enable it by loading https://profiler.firefox.com/ and clicking “Enable Firefox Profiler Menu Button” on the page.

This is what the button looks like in the Firefox toolbar.

As you can see from the screenshot above the UI is almost exactly the same (compared to the DevTools Performance panel).

Sharing Data

Collected performance data can be shared publicly. This is one of the most powerful features of the profiler since it allows the user to upload data to the Firefox Profiler online storage. Before uploading a profile, you can select the data that you want to include, and what you don’t want to include to avoid leaking personal data. The profile link can then be shared in online chats, emails, and bug reports so other people can see and investigate a specific case.

This is great for team collaboration and that’s something Firefox developers have been doing for years to work on performance. The profile can also be saved as a file on a local machine and imported later from https://profiler.firefox.com/

There are many more powerful features available and you can learn more about them in the extensive documentation. And of course, just like Firefox itself, the profiler tool is an open source project and you might want to contribute to it.

There is also a great case study on using the profiler to identify performance issues.

More is coming to DevTools, so stay tuned!

The post Performance Tool in Firefox DevTools Reloaded appeared first on Mozilla Hacks - the Web developer blog.

SeaMonkeySeaMonkey is released!

Hi All,

I hope everyone’s keeping safe.

The SeaMonkey Project team is pleased to announce the immediate release of SeaMonkey

As this is a security update, please ensure you’ve updated your SeaMonkey, either via automatic updates or, if automatic updates aren’t available to you, via manual download.

Please check out [1] or [2].


PS: Updates are gradually being enabled. Thanks.

[1] – https://www.seamonkey-project.org/releases/seamonkey2.53.11.1/

[2] – https://www.seamonkey-project.org/releases/


hacks.mozilla.orgIntroducing MDN Plus: Make MDN your own

MDN is one of the most trusted resources for information about web standards, code samples, tools, and everything you need as a developer to create websites. In 2015, we explored how we could expand beyond documentation to provide a structured learning experience. Our first foray was the Learning Area, with the goal of providing a useful addition to the regular MDN reference and guide material. In 2020, we added the first Front-end developer learning pathway. We saw a lot of interest and engagement from users, and the learning area contributed to about 10% of MDN’s monthly web traffic. These two initiatives were the start of our exploration into how we could offer more learning resources to our community. Today, we are launching MDN Plus, our first step to providing a personalized and more powerful experience while continuing to invest in our always free and open webdocs.

Build your own MDN Experience with MDN Plus

In 2020 and 2021 we surveyed over 60,000 MDN users and learned that many of the respondents wanted a customized MDN experience. They wanted to organize MDN’s vast library in a way that worked for them. With today’s premium subscription service, MDN Plus, we are releasing three new features that begin to address this need: Notifications, Collections and MDN Offline. More details about the features are listed below:

  • Notifications: Technology is ever changing, and we know how important it is to stay on top of the latest updates and developments. From tutorial pages to API references, you can now get notifications for the latest developments on MDN. When you follow a page, you’ll get notified when the documentation changes, CSS features launch, and APIs ship. Now, you can get a notification for significant events relating to the pages you want to follow. Read more about it here.

Screenshot of a list of notifications on mdn plus

  • Collections: Find what you need fast with our new collections feature. Not only can you pick the MDN articles you want to save, we also automatically save the pages you visit frequently. Collections help you quickly access the articles that matter the most to you and your work. Read more about it here.

Screenshot of a collections list on mdn plus

  • MDN offline: Sometimes you need to access MDN but don’t have an internet connection. MDN offline leverages a Progressive Web Application (PWA) to give you access to MDN Web Docs even when you lack internet access so you can continue your work without any interruptions. Plus, with MDN offline you can have a faster experience while saving data. Read more about it here.

Screenshot of offline settings on mdn plus

Today, MDN Plus is available in the US and Canada. In the coming months, we will expand to other countries including France, Germany, Italy, Spain, Belgium, Austria, the Netherlands, Ireland, United Kingdom, Switzerland, Malaysia, New Zealand and Singapore. 

Find the right MDN Plus plan for you

MDN is part of the daily life of millions of web developers. For many of us, MDN helped with getting that first job or landing a promotion. During our research we found many users who got so much value from MDN that they wanted to contribute financially. We were both delighted and humbled by this feedback. To provide folks with a few options, we are launching MDN Plus with three plans, including a supporter plan for those who want to spend a little extra. Here are the details of those plans:

  • MDN Core: For those who want to do a test drive before purchasing a plan, we created an option that lets you try a limited version for free.  
  • MDN Plus 5:  Offers unlimited access to notifications, collections, and MDN offline with new features added all the time. $5 a month or an annual subscription of $50.
  • MDN Supporter 10:  For MDN’s loyal supporters the supporter plan gives you everything under MDN Plus 5 plus early access to new features and a direct feedback channel to  the MDN team. It’s $10 a month or $100 for an annual subscription.  

Additionally, we will offer a 20% discount if you subscribe to one of the annual subscription plans.

We invite you to try the free trial version or sign up today for a subscription plan that’s right for you. MDN Plus is only available in selected countries at this time.


The post Introducing MDN Plus: Make MDN your own appeared first on Mozilla Hacks - the Web developer blog.

Mozilla Add-ons BlogA new API for submitting and updating add-ons

The addons.mozilla.org (AMO) external API has offered add-on developers the ability to submit new add-on versions for signing for a number of years; it can also be used to get data about published add-ons directly, and is used internally inside Firefox.

Current “signing” API

Currently, the signing API offers some functionality, but it’s limited: you can’t submit the first listed version of an add-on (extra metadata needs to be collected via the developer hub); you can’t edit existing submissions; you can’t submit or edit extra metadata about the add-on/version; and you can’t share the source code for an add-on when that’s needed to comply with our policies. For all of those tasks you need to use the forms on the appropriate developer hub web pages.

New Add-on “submission” API

The new add-on submission API aims to overcome these limitations and (eventually) allow developers to submit and manage all parts of their add-on via the API. It’s available now in our v5 API, and should be considered beta quality for now.

Submission Workflow

The submission workflow is split into two stages: uploading the file for validation, then attaching the validated file to a new add-on, or as a new version of an existing add-on.

  1. The add-on file to be distributed is uploaded via the upload create endpoint, along with the channel, returning an upload uuid.
  2. The upload detail endpoint can be polled for validation status.
  3. Once the response has "valid": true, it can be used to create either a new add-on, or a new version of an existing add-on. Sources may be attached if required.

Uploading the add-on file

Regardless of whether you are creating a new add-on or adding a new version to an existing add-on, you will need to upload the file for validation first. Here you will decide if the file will be associated with a public listing (listed), or will be self-hosted (unlisted). See our guide on signing and distribution for further details.

# Send a POST request to the upload create endpoint
# Pass addon.xpi as a file using multipart/form-data, along with the
# distribution channel.
curl -XPOST "https://addons.mozilla.org/api/v5/addons/upload/" \
  -H "Authorization: <JWT blob>" \
  -F "upload=@addon.xpi" -F "channel=listed" 

The response will provide information on successful validation; if valid is set to true you will be able to use the uuid in the next submission steps. The recommended polling interval is 5-10 seconds, and make sure your code times out after a maximum of 10 minutes.
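Putting the upload and polling steps together, a loop along these lines could work. This is only a sketch, not official tooling: the `poll_upload` helper name is made up, `$AMO_JWT` is assumed to hold your JWT blob, and the response is naively string-matched rather than parsed with a proper JSON tool like jq.

```shell
# Hypothetical helper: poll the upload detail endpoint every 5 seconds
# until validation finishes, giving up after 10 minutes (120 attempts).
poll_upload() {
  uuid="$1"
  tries=0
  while [ "$tries" -lt 120 ]; do
    resp=$(curl -s "https://addons.mozilla.org/api/v5/addons/upload/$uuid/" \
      -H "Authorization: $AMO_JWT")
    case "$resp" in
      *'"valid": true'*|*'"valid":true'*)
        echo "valid"     # the uuid can now be used to create an add-on/version
        return 0 ;;
    esac
    tries=$((tries + 1))
    sleep 5              # recommended 5-10 second polling interval
  done
  echo "timed out"
  return 1
}
```

In a real script you would capture the uuid from the upload create response and pass it to this helper before moving on to the creation endpoints below.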

Creating a new add-on

When creating a new add-on, we require some initial metadata to describe what the add-on does, as well as some optional fields that will allow you to create an appealing listing. Make a request to the add-ons create endpoint to attach the uploaded file to a new add-on:

# Send a POST request to the add-ons create endpoint
# Include the add-on metadata as JSON.
curl -XPOST "https://addons.mozilla.org/api/v5/addons/addon/" \
  -H "Authorization: <JWT blob>" \
  -H "Content-Type: application/json" -d @- <<EOF
{
  "categories": {
    "firefox": ["bookmarks"]
  },
  "summary": {
    "en-US": "This add-on does great things!"
  },
  "version": {
    "upload": "<upload-uuid>",
    "license": "MPL-2.0"
  }
}
EOF

When submitting to the self-hosted channel, you can omit extra metadata such as categories, summary or license.

Adding a version to an existing add-on

If instead you are adding a version to an existing add-on, the metadata has already been provided in the initial submission. The following request can be made to attach the version to the add-on:

# Send a POST request to the versions create endpoint.
# Include the upload uuid from the previous add-on upload
curl -XPOST "https://addons.mozilla.org/api/v5/addons/addon/<add-on id>/versions/" \
  -H "Authorization: <JWT blob>" -H "Content-Type: application/json" \
  -d '{ "upload": "<upload-uuid>" }'

Updating existing add-on or version metadata

Metadata on any existing add-ons or versions can be updated, regardless of how they have been initially submitted. To do so, you can use the add-on edit or version edit endpoints. For example:

# Send a PATCH request to the add-ons edit endpoint
# Set the slug and tags as JSON data.
curl -XPATCH "https://addons.mozilla.org/api/v5/addons/addon/<add-on id>/" \
  -H "Authorization: <JWT blob>" -H "Content-Type: application/json" \
  -d @- <<EOF
{
  "slug": "new-slug",
  "tags": ["chat", "music"]
}
EOF

Providing Source code

When an add-on/version submission requires source code, it can either be uploaded while creating the version, or as an update to an existing version. Files are always uploaded as multipart/form-data rather than JSON, so setting source can’t be combined with every other field.

# Send a PATCH request to the version edit endpoint
# Pass source.zip as a file using multipart/form-data, along with the license field.
curl -XPATCH "https://addons.mozilla.org/api/v5/addons/addon/<add-on id>/versions/<version-id>/"  \
  -H "Authorization: <JWT blob>" \
  -F "source=@source.zip" -F "license=MPL-2.0"

You may also provide the source code as part of adding a version to an existing add-on. Fields such as compatibility, release_notes or custom_license can’t be set at the same time because complex data structures (lists and objects) can only be sent as JSON.

# Send a POST request to the version create endpoint
# Pass source.zip as a file using multipart/form-data,
# along with the upload field set to the uuid from the previous add-on upload.
curl -XPOST "https://addons.mozilla.org/api/v5/addons/addon/<add-on id>/versions/" \
  -H "Authorization: <JWT blob>" \
  -F "source=@source.zip" -F "upload=500867eb-0fe9-47cc-8b4b-4645377136b3"


Future work and known limitations

There may be bugs – if you find any please file an issue! – and the work is still in progress, so there are some known limitations: not all add-on/version metadata that is editable via the developer hub can be changed yet, such as adding/removing add-on developers, or uploading icons and screenshots.

Right now the web-ext tool (or sign-addon) doesn’t use the new submission API (they use the signing API); updating those tools is next on the roadmap.

Longer term we aim to replace the existing developer hub with a new webapp that uses the add-on submission APIs directly, and also to deprecate the existing signing API, leaving a single method of uploading and managing all add-ons on addons.mozilla.org.

The post A new API for submitting and updating add-ons appeared first on Mozilla Add-ons Community Blog.

hacks.mozilla.orgMozilla and Open Web Docs working together on MDN

For both MDN and Open Web Docs (OWD), transparency is paramount to our missions. With the upcoming launch of MDN Plus, we believe it’s a good time to talk about how our two organizations work together and if there is a financial relationship between us. Here is an overview of how our missions overlap, how they differ, and how a premium subscription service fits all this.

History of our collaboration

MDN and Open Web Docs began working together after the creation of Open Web Docs in 2021. Our organizations were born out of the same ethos, and we constantly collaborate on MDN content, contributing to different parts of MDN and even teaming up for shared projects like the conversion to Markdown. We meet on a weekly basis to discuss content strategies and maintain an open dialogue on our respective roadmaps.

MDN and Open Web Docs are different organizations; while our missions and goals frequently overlap, our work is not identical. Open Web Docs is an open collective, with a mission to contribute content to open source projects that are considered important for the future of the Web. MDN is currently the most significant project that Open Web Docs contributes to.

Separate funding streams, division of labor

Mozilla and Open Web Docs collaborate closely on sustaining the Web Docs part of MDN. The Web Docs part is and will remain free and accessible to all. Each organization shoulders part of the costs of this labor, from our distinct budgets and revenue sources.

  • Mozilla covers the cost of infrastructure, development and maintenance of the MDN platform including a team of engineers and its own team of dedicated writers.
  • Open Web Docs receives donations from companies like Google, Microsoft, Meta, Coil and others, and from private individuals. These donations pay for Technical Writing staff and help finance Open Web Docs projects. None of the donations that Open Web Docs receives go to MDN or Mozilla; rather, they pay for a team of writers to contribute to MDN.

Transparency and dialogue but independent decision-making

Mozilla and OWD have an open dialogue on content related to MDN. Mozilla sits on the Open Web Docs’ Steering Committee, sharing expertise and experience but does not currently sit on the Open Web Docs’ Governing Committee. Mozilla does not provide direct financial support to Open Web Docs and does not participate in making decisions about Open Web Docs’ overall direction, objectives, hiring and budgeting.

MDN Plus: How does it fit into the big picture?

MDN Plus is a new premium subscription service by Mozilla that allows users to customize their MDN experience. 

As with so much of our work, our organizations engaged in a transparent dialogue regarding MDN Plus. When requested, Open Web Docs has provided Mozilla with feedback, but it has not been a part of the development of MDN Plus. The resources Open Web Docs has are used only to improve the free offering of MDN. 

The existence of a new subscription model will not detract from MDN’s current free Web Docs offering in any way. The current experience of accessing web documentation will not change for users who do not wish to sign up for a premium subscription. 

Mozilla’s goal with MDN Plus is to help ensure that MDN’s open source content continues to be supported into the future. While Mozilla has incorporated its partners’ feedback into its vision for the product, MDN Plus has been built only with Mozilla resources. Any revenue generated by MDN Plus will stay within Mozilla. Mozilla is looking into ways to reinvest some of these additional funds into open source projects contributing to MDN, but this is still in the early stages.

A subscription to MDN Plus gives paying subscribers extra MDN features provided by Mozilla while a donation to Open Web Docs goes to funding writers creating content on MDN Web Docs, and potentially elsewhere. Work produced via OWD will always be publicly available and accessible to all. 

Open Web Docs and Mozilla will continue to work closely together on MDN for the best possible web platform documentation for everyone!

Thanks for your continuing feedback and support.



The post Mozilla and Open Web Docs working together on MDN appeared first on Mozilla Hacks - the Web developer blog.

Blog of DataDocumenting outages to seek transparency and accountability

Mozilla Opens Access to Dataset on Network Outages

The internet doesn’t just have a simple on/off switch — rather, there are endless ways connectivity can be ruptured or impaired, both intentionally (cyber attacks) and unintentionally (weather events). While a difficult task, knowing more about how connectivity is affected and where can help us better understand the outages of today, as well as who (or what) is behind them to prevent them in the future.

Today, Mozilla is opening access to an anonymous telemetry dataset that will enable researchers to explore signals of network outages around the world. The aim of the release is to create more transparency around outages, a key step towards achieving accountability for a more open and resilient internet for all. We believe this data, which is anonymized and aggregated to ensure user privacy, will be valuable to a range of actors, from technical communities working on network resilience to digital rights advocates documenting internet outages.

While a number of outage measurements rely on hardware installations or require people experiencing outages to initiate their own measurements, Mozilla’s data originates from everyday use of Firefox browsers around the world, essentially creating a timeline of both regular and irregular connectivity patterns across large populations of internet users. In practice, this means that when significant numbers of Firefox clients experience connection failures for any reason, this registers in Mozilla’s telemetry once a connection is restored. At a country or city level, this can provide indications of whether an outage occurred.

In addition to being able to see city-specific outages, Mozilla’s dataset also offers a comparatively high degree of technical granularity which allows researchers to isolate different types of connectivity issues in a given time frame. Because outages are often shrouded in secrecy, researchers can sometimes only estimate the exact nature of a local outage. Combined with other data sources, for instance from companies like Google and Cloudflare, Mozilla’s dataset will be a valuable source to corroborate reports of outages.

Whenever internet connections are cut, the safety, security and health of millions of people may be at stake. Documenting outages is an important step in seeking transparency and accountability, particularly in contexts of uncertainty or insecurity around recent events.

“Mozilla is excited to make our relevant telemetry data available to researchers around the world to aid efforts toward transparency and accountability. Internet outages can be hard to measure and it is very fortunate that there is a dedicated international community that is focused on this crucial task. We look forward to interesting ways in which the community will use this anonymous dataset to help keep the internet an open, global public resource,” says Daniel McKinley, VP, Data Science and Analytics at Mozilla.

Over the course of 2020 and 2021, researchers from Internet Outage Detection and Analysis (IODA) of the Center for Applied Internet Data Analysis (CAIDA), Open Observatory of Network Interference (OONI), RIPE Network Coordination Center (RIPE NCC), Measurement Lab (M-Lab), Internews and Access Now joined a collaborative effort to compare existing data on outages with Mozilla’s dataset. Their feedback has uniformly stated that this data would be helpful to the internet outage measurement community in critical work across the world.

“We are thrilled that Mozilla’s dataset on outages is being published. Our own analysis of the data demonstrated that it is a valuable resource for investigating Internet outages worldwide, complementing other public datasets. Unlike other datasets, it provides geographical granularity with novel insights and new research opportunities. We are confident that it will serve as an extremely valuable resource for researchers, human rights advocates, and the broader Internet freedom community,” says Maria Xynou, the Research and Partnerships Director of OONI.

In order to gain access to the dataset, which is licensed under the Creative Commons Public Domain license (CC0) and contains data from January 2020 onward, researchers can apply via this Google Form, after which Mozilla representatives will reach out with next steps. More information and background on the project and the dataset can be found on Mozilla Wiki.

We look forward to seeing the exciting work that internet outage researchers will produce with this dataset and hope to inspire more use of aggregated datasets for public good.

This post was co-authored by Solana Larsen, Alessio Placitelli, Udbhav Tiwari.

hacks.mozilla.orgAnnouncing Interop 2022

A key benefit of the web platform is that it’s defined by standards, rather than by the code of a single implementation. This creates a shared platform that isn’t tied to specific hardware, a company, or a business model.

Writing high quality standards is a necessary first step to an interoperable web platform, but ensuring that browsers are consistent in their behavior requires an ongoing process. Browsers must work to ensure that they have a shared understanding of web standards, and that their implementation matches that understanding.

Interop 2022

Interop 2022 is a cross-browser initiative to find and address the most important interoperability pain points on the web platform. The end result is a public metric that will assess progress toward fixing these interoperability issues.

Interop 2022 scores. Chrome/Edge 71, Firefox 74, and Safari 73.

In order to identify the areas to include, we looked at two primary sources of data:

  • Web developer feedback (e.g., through developer-facing surveys, including MDN’s Web DNA Report) on the most common pain points they experience.
  • End user bug reports (e.g., via webcompat.com) that could be traced back to implementation differences between browsers.

During the process of collecting this data, it became clear there are two principal kinds of interoperability problems which affect end users and developers:

  • Problems where there’s a relatively clear and widely accepted standard, but where implementations are incomplete or buggy.
  • Problems where the standard is missing, unclear, or doesn’t match the behavior sites depend on.

Problems of the first kind have been termed “focus areas”. For these we use web-platform-tests: a large, shared testsuite that aims to ensure web standards are implemented consistently across browsers. It accepts contributions from anyone, and browsers, including Firefox, contribute tests as part of their process for fixing bugs and shipping new features.

The path to improvement for these areas is clear: identify or write tests in web-platform-tests that measure conformance to the relevant standard, and update implementations so that they pass those tests.

Problems of the second kind have been termed “investigate areas”. For these it’s not possible to simply write tests as we’re not really sure what’s necessary to reach interoperability. Such unknown unknowns turn out to be extremely common sources of developer and user frustration!

We’ll make progress here through investigation. And we’ll measure progress with more qualitative goals, e.g., working out what exact behavior sites depend on, and what can be implemented in practice without breaking the web.

In all cases, the hope is that we can move toward a future in which we know how to make these areas interoperable, update the relevant web standards for them, and measure them with tests as we do with focus areas.

Focus areas

Interop 2022 has ten new focus areas:

  • Cascade Layers
  • Color Spaces and Functions
  • Containment
  • Dialog Element
  • Forms
  • Scrolling
  • Subgrid
  • Typography and Encodings
  • Viewport Units
  • Web Compat

Unlike the others the Web Compat area doesn’t represent a specific technology, but is a group of specific known problems with already shipped features, where we see bugs and deviations from standards cause frequent site breakage for end users.

There are also five additional areas that have been adopted from Google and Microsoft’s “Compat 2021” effort:

  • Aspect Ratio
  • Flexbox
  • Grid
  • Sticky Positioning
  • Transforms

A browser’s test pass rate in each area contributes 6% (totaling 90% across the fifteen areas) to its Interop 2022 score.

We believe these are areas where the standards are in good shape for implementation, and where improving interoperability will directly improve the lives of developers and end users.

Investigate areas

Interop 2022 has three investigate areas:

  • Editing, contentEditable, and execCommand
  • Pointer and Mouse Events
  • Viewport Measurement

These are areas in which we often see complaints from end users, or reports of site breakage, but where the path toward solving the issues isn’t clear. Collaboration between vendors is essential to working out how to fix these problem areas, and we believe that Interop 2022 is a unique opportunity to make progress on historically neglected areas of the web platform.

The overall progress in these areas will contribute 10% to the overall Interop 2022 score. This score will be the same across all browsers, reflecting the fact that progress on the web platform requires browsers to collaborate on new or updated web standards and accompanying tests to achieve the best outcomes for end users and developers.
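As a rough illustration of the weighting just described, an overall score can be computed from per-area pass percentages. This is a sketch only: the pass rates and investigate progress values below are made up for the example, and actual Interop 2022 scoring is published by the dashboard, not computed this way by readers.

```shell
# Illustrative only: fifteen focus areas weighted 6% each (90% total),
# plus the shared investigate component weighted 10%.
focus_total=0
for pass_pct in 100 100 100 100 100 90 90 90 80 80 80 70 70 60 50; do
  focus_total=$((focus_total + 6 * pass_pct))   # 6% weight per focus area
done
investigate_pct=50                              # same value for every browser
score=$(( (focus_total + 10 * investigate_pct) / 100 ))
echo "overall score: $score"
```

With these invented pass rates the arithmetic lands at an overall score of 80, comparable in shape to the per-browser numbers shown on the dashboard.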

Contributions welcome!

Whilst the focus and investigate areas for 2022 are now set, there is still much to do. For the investigate areas, the detailed targets need to be set, and the complex work of understanding the current state of the art, and assessing the options to advance it, is just starting. Additional tests for the focus areas might be needed as well to address particular edge cases.

If this sounds like something you’d like to get involved with, follow the instructions on the Interop 2022 Dashboard.

Finally, it’s also possible that Interop 2022 is missing an area you consider to be a significant pain point. It won’t be possible to add areas this year, but, if the effort is a success, we may end up running further iterations. Feedback on browser differences that are making your life hard as a developer or end user is always welcome and will be helpful for identifying the correct focus and investigate areas for any future edition.

Partner announcements

Bringing Interop 2022 to fruition was a collaborative effort and you might be interested in the other announcements:

The post Announcing Interop 2022 appeared first on Mozilla Hacks - the Web developer blog.

SeaMonkeySeaMonkey 2.53.11 is out!

Hi Everyone,

The SeaMonkey project would like to announce the immediate release of SeaMonkey 2.53.11!

Please check out [1] and/or [2].


PS: Once again, I managed to jump the gun and set up updates before the updates were in place. My apologies to all, and sorry for the inconvenience caused.

[1] – https://www.seamonkey-project.org/releases/seamonkey2.53.11/

[2] – https://www.seamonkey-project.org/releases/2.53.11


hacks.mozilla.orgA new year, a new MDN

If you’ve accessed the MDN website today, you probably noticed that it looks quite different. We hope it’s a good different. Let us explain!

MDN has undergone many changes in its sixteen-year history, from its early beginnings as a wiki to its recent migration to a static site backed by GitHub. During that time MDN grew organically, with over 45,000 contributors and numerous developers and designers. It’s no surprise that the user experience became somewhat inconsistent throughout the website.

In mid-2021 we started to think about modernizing MDN’s design, to create a clean and inviting website that makes navigating our 44,000 articles as easy as possible. We wanted to create a more holistic experience for our users, with an emphasis on improved navigability and a universal look and feel across all our pages. 

A new Homepage, focused on community

The MDN community is the reason our content can be counted on to be both high quality and trustworthy. MDN content is scrutinized, discussed, and yes, in some cases argued about. Anyone can contribute to MDN, either by writing content, suggesting changes or fixing bugs.

We wanted to acknowledge and celebrate our awesome community and our homepage is the perfect place to do so.

The new homepage was built with a focus on the core concepts of community and simplicity. We made an improved search a central element on the page, while also showing users a selection of the newest and most-read articles. 

We also show the most recent contributions to our GitHub content repo and have added a contributor spotlight where we highlight MDN contributors.

Redesigned article pages for improved navigation

It’s been years—five of them, in fact—since MDN’s core content presentation has received a comprehensive design review. In those years, MDN’s content has evolved and changed, with new ways of structuring content, new ways to build and write docs, and new contributors. Over time, the documentation’s look and feel had become increasingly disconnected from the way it’s read and written.

While you won’t see a dizzying reinvention of what documentation is, you’ll find that most visual elements on MDN did get love and attention, creating a more coherent view of our docs. This redesign gives MDN content its due, featuring:

  • More consistent colors and theming
  • Better signposting of major sections, such as HTML, CSS, and JavaScript
  • Improved accessibility, such as increased contrast
  • A dark mode toggle for easy switching between modes


We’re especially proud of some subtle improvements and conveniences. For example, in-page navigation is always in view to show you where you are in the page as you scroll:

We’re also revisiting the way browser compatibility data appears, with better at-a-glance browser support. So you don’t have to keep version numbers in your head, we’ve put more emphasis on yes and no iconography for browser capabilities, with the option to view the detailed information you’ve come to expect from our browser compatibility data. We think you should check it out. 

And we’re not stopping there. The work we’ve done is far-reaching and there are still many opportunities to polish and improve on the design we’re shipping.

A new logo, chosen by our community

As we began working on both the redesign and expanding MDN beyond WebDocs we realized it was also time for a new logo. We wanted a modern and easily customizable logo that would represent what MDN is today while also strengthening its identity and making it consistent with Mozilla’s current brand.

We worked closely with branding specialist Luc Doucedame, narrowed our options down to eight potential logos, and put out a call inviting our community of users to vote on their favorite. We received over 10,000 votes in just three days and are happy to share with you “the MDN people’s choice.”

The winner was Option 4, an M monogram using underscore to convey the process of writing code. Many thanks to everyone who voted!

What you can expect next with MDN

Bringing content to the places where you need it most

In recent years, MDN content has grown more sophisticated for authors, such as moving from a wiki to Git and converting from HTML to Markdown. This has been a boon to contributors, who can use more powerful and familiar tools to create more structured and consistent content.

With better tools in place, we’re finally in a position to build more visible and systematic benefits to readers. For example, many of you probably navigate MDN via your favorite search engine, rather than MDN’s own site navigation. We get it. Historically, a wiki made large content architecture efforts impractical. But we’re now closer than ever to making site-wide improvements to structure and navigation.

Looking forward, we have ambitious plans to take advantage of our new tools to explore improved navigation, generated standardization and support summaries, and embedding MDN documentation in the places where developers need it most: in their IDE, browser tools, and more.

Coming soon: MDN Plus

MDN has built a reputation as a trusted and central resource for information about standards, code, tools, and everything you need as a developer to create websites. In 2015, we explored ways to be more than a central resource by creating a Learning Area, with the aim of providing a useful counterpart to the regular MDN reference and guide material.

In 2020, we added the first Front-end developer learning pathway to it. We saw a lot of interest and engagement from users; the Learning Area is currently responsible for 10% of MDN’s monthly web traffic. This started us on a path to see what more we can do in this area for our community.

Last year we surveyed users and asked them what they wanted out of their MDN experience. The top requested features included notifications, article collections and an offline experience on MDN. The overall theme we saw was that users wanted to be able to organize MDN’s vast library in a way that worked for them. 

We are always looking for ways to meet our users’ needs whether it’s through MDN’s free web documentation or personalized features. In the coming months, we’ll be expanding MDN to include a premium subscription service based on the feedback we received from web developers who want to customize their MDN experience. Stay tuned for more information on MDN Plus.

Thank you, MDN community

We appreciate the thousands of people who voted for the new logo, as well as everyone who has participated in the early beta testing phase since we started this journey. Also, many thanks to our partners at Open Web Docs, who gave us valuable feedback on the redesign and continue to make daily contributions to MDN content. Thanks to all of you, we were able to make this a reality, and we will continue to invest in further improving the MDN experience.

The post A new year, a new MDN appeared first on Mozilla Hacks - the Web developer blog.

Blog of DataThis Week in Glean: Your personal Glean data pipeline

(“This Week in Glean” is a series of blog posts that the Glean Team at Mozilla is using to try to communicate better about our work. They could be release notes, documentation, hopes, dreams, or whatever: so long as it is inspired by Glean.) All “This Week in Glean” blog posts are listed in the TWiG index (and on the Mozilla Data blog).

On February 11th, 2022 we hosted a Data Club Lightning Talk session. There I presented my small side project of setting up a minimal data pipeline & data storage for Glean.

The premise:

Can I build and run a small pipeline & data server to collect telemetry data from my own usage of tools?

To which I can now answer: Yes, it’s possible. The complete ingestion server is a couple hundred lines of Rust code. It’s able to receive pings conforming to the Glean ping schema, transform them, and store them in an SQLite database. It has been very robust, not crashing once on me (except when I created an infinite loop within it).

You can watch the lightning talk here:

Instead of creating some slides for the talk I created an interactive report. The full report can be read online.

Besides actually writing a small pipeline server this was also an experiment in trying out Irydium and Datasette to produce an interactive & live-updated data report.

Irydium is a set of tooling designed to allow people to create interactive documents using web technologies, started by wlach a while back. Datasette is an open source multi-tool for exploring and publishing data, created and maintained by simonw. Combining both makes for a nice experience, even though there’s still some things that could be simplified.

My pipeline server is currently not open source. I might publish it as an example at a later point.

Blog of DataThis Week in Glean: What If I Want To Collect All The Data?

(“This Week in Glean” is a series of blog posts that the Glean Team at Mozilla is using to try to communicate better about our work. They could be release notes, documentation, hopes, dreams, or whatever: so long as it is inspired by Glean. All “This Week in Glean” blog posts are listed in the TWiG index).

Mozilla’s approach to data is “as little as necessary to get the job done” as espoused in our Firefox Privacy Promise and put in a shape you can import into your own organization in Mozilla’s Lean Data Practices. If you didn’t already know, you’d find out very quickly by using it that Glean is a Mozilla project. All of its systems are designed with the idea that you’ve carefully considered your instrumentation ahead of time, and you’ve done some review to ensure that the collection aligns with your values.

(This happens to have some serious knock-on benefits for data democratization and tooling that allows Mozilla’s small Data Org to offer some seriously-powerful insights on a shoestring budget, which you can learn more about in a talk I gave to Ubisoft at their Data Summit in 2021.)

Less Data, as the saying goes, implies Greater Data and Greatest Data. Or in a less memetic way, Mozilla wants to collect less data… but less than what?

Less than more, certainly. But how much more? How much is too much?

How much is “all”?

Since my brain’s weird I decided to pursue this thought experiment of “What is the _most_ data you could collect from a software project being used?”.

Well, looking at Firefox, every button press and page load and scroll and click and and and… all of that matters. As does the state of Firefox when it’s being clicked and scrolled and so forth. Typing in the urlbar is different if you already have a page loaded. Opening your first tab is different from opening your nine-thousand-two-hundred-and-fiftieth.

And, underneath it all, is the code. How fast is it running? How much memory are we using? All these performance questions that Firefox Telemetry was originally built to answer. Is code on line 123 of file XYZ.cpp running? Is it running well? What do we run next?

For software this means that, to record all of the data, we’d need to know the full state of the program at every expression it runs in every line of code. At every advancement of the Program Counter, we’d need to dump the entire Stack and Heap.

Yikes! That’s gigabytes of data per clock cycle.
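To make that “gigabytes per clock cycle” concrete, here’s a back-of-envelope sketch. All the figures are illustrative assumptions (a 4 GiB snapshot, a 3 GHz core), not measurements:

```javascript
// Back-of-envelope estimate (illustrative assumptions, not measurements):
// dumping a modest 4 GiB of heap + stack at every cycle of a 3 GHz core.
const bytesPerSnapshot = 4 * 2 ** 30; // 4 GiB per program-counter step
const cyclesPerSecond = 3e9;          // 3 GHz clock
const bytesPerSecond = bytesPerSnapshot * cyclesPerSecond;
console.log(bytesPerSecond); // on the order of 1.3e19 bytes per second
```

That’s exabytes of data per second, per core, which is why the naive approach is a non-starter.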

Well, maybe we can be cleverer than this. Another one of those projects Mozilla incubated that now has a whole community of contributors and users (like Rust) is a lightweight record-and-replay debugger called rr. The rr debugger collects traces of a running piece of software and can deterministically replay it over and over again (backwards, even!), meaning it has all the information we need in it.

So a decent size estimate for “all the data” might be the size of one of these trace recordings. They’re big, but not “full heap and stack at every program counter” big. A short test run of Firefox came to about 2GB of trace for one minute of running (albeit without any user interaction or graphics).
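Extrapolating that measurement gives a feel for what a full day’s use would cost. This is hedged: real rr traces need not grow linearly, and the 8-hour figure is an assumption for illustration:

```javascript
// Extrapolating the post's measurement (~2 GB of trace per minute of
// Firefox) to a full day of use. Assumes linear growth and an 8-hour
// day -- both illustrative assumptions, not measurements.
const gbPerMinute = 2;
const minutesOfUse = 8 * 60; // one working day
console.log(gbPerMinute * minutesOfUse); // 960 GB of trace per user per day
```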

Could Glean collect traces like these? Or bigger ones after, say, a full day’s use? Not easily. Not without modification.

Let’s say we did those modifications. Let’s push this thought experiment further. What does that mean for analysis? Well, we’d have all these recordings we could spin up a VM to replay for us. If we want the number of open tabs, we could replay it and sample that count whenever we wanted.

This would be a seismic shift in how instrumentation interacted with analysis. We’d no longer have to ship code to instrument Firefox, we could “simply” (in quotes because using rr requires you to be a programming nerd) replay existing traces and extract the new data we needed.

It would also be absolutely horrible. We’d have to store every possible metric just in case we wanted _one_ of them. And there’s so much data in these traces that Mozilla doesn’t want to store: pictures you looked at, videos you watched, videos you uploaded… good grief. We don’t want any of that.

(( I’d like to take a second to highlight that this is a thought experiment: Mozilla doesn’t do this. We don’t have plans to do this. In fact, Mozilla’s Data Privacy Principles (specifically “Limited Data”) and Mozilla’s Manifesto (specifically Principle 4 “Individuals’ security and privacy on the internet are fundamental and must not be treated as optional.”) pretty clearly state how we think about data like this. ))

And processing these traces into a useful form for analysis to be performed would take the CPU processing power of a small country, over and over again.

(( And rr introduces a 20% performance penalty which really wouldn’t ingratiate us to our users. And it only works on Linux meaning the data we’d have access to wouldn’t be representative of our user base anyway. ))

And what was the point of this again? Right. We’re here to quantify what “less data” means. But how can we do that, even knowing as we do now what the size of “all data” is? Is the string value of the profile directory’s random portion comparable to the url the user visits the most? Are those both 1 piece of data that we can compare to the N pieces of data we get in a full rr trace? Mozilla doesn’t think they’re the same, since we categorize (and thus treat) these collections differently.

All in all, maybe figuring out the maximum amount of data you could collect, in order to contextualize how much less of it you are collecting, might not be meaningful.

Oh well.

I guess this means that the only way Mozilla (and you!) can continue to quantify “less data” is by comparing it to “no data” – the least possible amount of data.


(( This post is a syndicated copy of the original post. ))

SUMO BlogIntroducing Cindi Jordan

Hey everybody,

Please join me in welcoming Cindi Jordan to our Customer Experience team as a Sr. Customer Experience Program Manager.

Here’s a short introduction from Cindi:

Hi there, I’m Cindi Jordan, joining Mozilla as a Sr. Customer Experience Program Manager. I will be working closely with the team to find process efficiencies, document team strategy, and proactively identify ways we can all work together more effectively. I am a huge advocate for the user experience and its vast array of support channels: within the community, through content, and in product. I’m looking forward to learning much more about the organization and all of you, and to using my experience managing a social support team and in content/strategy management to help however I can.

Welcome, Cindi!

hacks.mozilla.orgVersion 100 in Chrome and Firefox

Chrome and Firefox will reach version 100 in a couple of months. This has the potential to cause breakage on sites that rely on identifying the browser version to perform business logic.  This post covers the timeline of events, the strategies that Chrome and Firefox are taking to mitigate the impact, and how you can help.

User-Agent string

User-Agent (UA) is a string that browsers send in HTTP headers, so servers can identify the browser.  The string is also accessible through JavaScript with navigator.userAgent. It’s usually formatted as follows:


For example, the latest release versions of browsers at the time of publishing this post are:

  • Chrome: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.54 Safari/537.36
  • Firefox: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:96.0) Gecko/20100101 Firefox/96.0
  • Safari: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/15.2 Safari/605.1.15
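As a minimal sketch of the kind of custom version-extraction code the post alludes to (real sites would typically use a parsing library; the regex here is purely illustrative):

```javascript
// Minimal sketch: pull a browser's major version out of a User-Agent
// string with a regex. Real parsing libraries handle far more edge cases.
function majorVersion(ua, browser) {
  const match = ua.match(new RegExp(browser + "/(\\d+)"));
  return match ? parseInt(match[1], 10) : null;
}

const chromeUA =
  "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 " +
  "(KHTML, like Gecko) Chrome/94.0.4606.54 Safari/537.36";

console.log(majorVersion(chromeUA, "Chrome")); // 94
```

Note that because `\d+` matches any number of digits, this particular pattern keeps working once versions hit three digits.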

Major version 100—three-digit version number

Major version 100 is a big milestone for both Chrome and Firefox. It also has the potential to cause breakage on websites as we move from a two-digit to a three-digit version number.  Web developers use all kinds of techniques for parsing these strings, from custom code to using User-Agent parsing libraries, which can then be used to determine the corresponding processing logic. The User-Agent and any other version reporting mechanisms will soon report a three-digit version number.

Version 100 timelines

Version 100 browsers will be first released in experimental versions (Chrome Canary, Firefox Nightly), then beta versions, and then finally on the stable channel.

Chrome (Release Schedule): March 29, 2022
Firefox (Release Schedule): May 3, 2022

Why can a three-digit version number be problematic?

When browsers first reached version 10 a little over 12 years ago, many issues were discovered with User-Agent parsing libraries as the major version number went from one digit to two.

Without a single specification to follow, different browsers use different formats for the User-Agent string, and sites apply their own site-specific parsing. It’s possible that some parsing libraries have hard-coded assumptions or bugs that don’t take three-digit major version numbers into account. Many libraries improved their parsing logic when browsers moved to two-digit version numbers, so hitting the three-digit milestone is expected to cause fewer problems. Mike Taylor, an engineer on the Chrome team, has done a survey of common UA parsing libraries which didn’t uncover any issues. Running Chrome experiments in the field has surfaced some issues, which are being worked on.
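As an illustration of the kind of hard-coded assumption that breaks (a hypothetical parser written for this post, not taken from any real library):

```javascript
// Hypothetical buggy parser: it assumes the major version is exactly two
// digits, which held from version 10 through 99 -- and fails at 100.
function buggyFirefoxMajor(ua) {
  const start = ua.indexOf("Firefox/") + "Firefox/".length;
  return ua.slice(start, start + 2); // hard-coded two-digit assumption!
}

console.log(buggyFirefoxMajor("Gecko/20100101 Firefox/99.0"));  // "99" -- correct
console.log(buggyFirefoxMajor("Gecko/20100101 Firefox/100.0")); // "10" -- wrong!
```

A site gating features on such a parser would suddenly treat Firefox 100 as the decade-old Firefox 10.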

What are browsers doing about it?

Both Firefox and Chrome have been running experiments where current versions of the browser report being at major version 100 in order to detect possible website breakage. This has led to a few reported issues, some of which have already been fixed. These experiments will continue to run until the release of version 100.

There are also backup mitigation strategies in place, in case version 100 release to stable channels causes more damage to websites than anticipated.

Firefox mitigation

In Firefox, the strategy will depend on how serious the breakage is. Firefox has a site interventions mechanism: Mozilla’s webcompat team can hot-fix broken websites in Firefox using it. If you type about:compat in the Firefox URL bar, you can see what is currently being fixed. If a site breaks when the major version is 100, it is possible to fix it by sending version 99 instead on that specific domain.

If the breakage is widespread and individual site interventions become unmanageable, Mozilla can temporarily freeze Firefox’s major version at 99 and then test other options.

Chrome mitigation

In Chrome, the backup plan is to use a flag to freeze the major version at 99 and report the real major version number in the minor version part of the User-Agent string (the code has already landed).

The Chrome version as reported in the User-Agent string follows the pattern <major_version>.<minor_version>.<build_number>.<patch_number>.

If the backup plan is employed, then the User-Agent string would look like this:


Chrome is also running experiments to ensure that reporting a three-digit value in the minor version part of the string does not result in breakage, since the minor version in the Chrome User-Agent string has reported 0 for a very long time. The Chrome team will decide on whether to resort to the backup option based on the number and severity of the issues reported.
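To sketch what UA consumers might see if the backup plan shipped (all version numbers here are made up for illustration, and the post doesn’t prescribe any client-side handling):

```javascript
// Hypothetical: Chrome's major frozen at 99, with the real major (here,
// 101) reported in the minor slot. Version numbers are illustrative only.
function realChromeMajor(version) {
  const [major, minor] = version.split(".").map(Number);
  // Chrome's minor slot has reported 0 for years, so a minor >= 100 next
  // to a frozen major of 99 would carry the true major version.
  return major === 99 && minor >= 100 ? minor : major;
}

console.log(realChromeMajor("99.101.4988.0")); // 101 (frozen-major form)
console.log(realChromeMajor("94.0.4606.54"));  // 94  (ordinary form)
```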

What can you do to help?

Every strategy that adds complexity to the User-Agent string has a strong impact on the ecosystem. Let’s work together to avoid yet another quirky behavior. In Chrome and Firefox Nightly, you can configure the browser to report the version as 100 right now and report any issues you come across.

Configure Firefox Nightly to report the major version as 100

  1. Open Firefox Nightly’s Settings menu.
  2. Search for “Firefox 100” and then check the “Firefox 100 User-Agent String” option.

Configure Chrome to report the major version as 100

  1. Go to chrome://flags/#force-major-version-to-100
  2. Set the option to `Enabled`.

Test and file reports

  • If you are a website maintainer, test your website with Chrome and Firefox 100. Review your User-Agent parsing code and libraries, and ensure they are able to handle three-digit version numbers. We have compiled some of the patterns that are currently breaking.
  • If you develop a User-Agent parsing library, add tests to parse versions greater than and equal to 100. Our early tests show that recent versions of libraries can handle it correctly. But the Web is a legacy machine, so if you have old versions of parsing libraries, it’s probably time to check and eventually upgrade.
  • If you are browsing the web and notice any issues with the major version 100, file a report on webcompat.com.

The post Version 100 in Chrome and Firefox appeared first on Mozilla Hacks - the Web developer blog.

Mozilla L10NL10n Report: February 2022 Edition

Please note some of the information provided in this report may be subject to change as we are sometimes sharing information about projects that are still in early stages and are not final yet. 


New localizers

Are you a locale leader and want us to include new members in our upcoming reports? Contact us!

New community/locales added

  • Tok: Toki Pona

New content and projects

What’s new or coming up in Firefox desktop

Things have been quiet in terms of new content, while Firefox is quickly approaching version 100.

  • Firefox 98 is currently in beta, and localization will be possible until February 27.
  • Firefox 99 is now shipping in nightly, and will move to beta on March 8.

We expect to have more substantial content updates in the coming months.

Unfortunately, it’s not all good news. There are a few locales, shipping in the release channel, that have been struggling to keep up, and are falling behind:

  • Afrikaans (af)
  • Macedonian (mk)
  • Sinhala (si)
  • Songhay (son)
  • Xhosa (xh)

If you speak one of these languages and want to help, please don’t hesitate to get in touch to learn how you can contribute.

What’s new or coming up in mobile

Things are starting to move around again in mobile land, after a small calm period. You can expect more content to progressively trickle in with the upcoming releases. As usual, we will keep you posted on what to expect, once we know more.

Stay tuned!

What’s new or coming up in web projects

Firefox Relay Website and Add-on

Both projects are on a 4-week release cycle. New content will be added to Pontoon up to about 7–10 days before each release. The last build to include localized content can be just hours before the release.


Two more languages were migrated from Pontoon to the vendor-supported platform: German and French. Mozilla staff will have access to edit the localized content when necessary. If you spot any errors or issues, feel free to report them by filing a bug or an issue in the repository.

What’s new or coming up in SuMo

  • The platform team is working on implementing the onboarding project design that has been pending for a few years now. You can now track the progress from this GitHub milestone. We can’t wait to see how it turns out!
  • If you want to get information about release updates sooner, you can now do so by subscribing to our weekly release scrum meeting that we host in Air Mozilla. You can even subscribe to the folder to get notifications whenever a new recording comes up.
  • Are you a sucker for data? The P2P dashboard made by JR might be a perfect playground for you to explore. We talked about it a bit more in our monthly blog post, so go check it out if you haven’t. We also share our monthly stats from across contribution areas in that regular blog post. Don’t miss it!
  • Are you a Knowledge Base contributor? Please make sure to fill out this survey before Jan 11, 2022.
  • Recently, we teamed up with the Firefox Focus team to get the messaging out for people to update their app to the latest version. Thanks to everybody who helped localize the banner strings for Firefox Focus. There are also shout-outs for you at the bottom!
  • A call for help to Indonesian contributors: help us prioritize localization for Firefox Focus, as Indonesia is one of the top countries in usage and profile creation for Firefox Focus for Android.

What’s new or coming up in Pontoon

  • PSA: We have temporarily removed the ability to change contributor email addresses due to security concerns. We’ll keep you updated when the feature becomes available again.
  • Thanks to Mitch, re-applying existing filters has become much simpler and finally works as expected. If you filter e.g. Unreviewed strings, review some and then want to refresh the list, simply select the Unreviewed filter.
  • We have changed the display of pinned comments, which have been rendered twice in the past. Another great work by Mitch!
  • Finally, thanks to Pike and April for many under-the-hood improvements to our codebase. One of the more obvious ones happened in the Concordance search, which now features infinite scroll.

Newly published localizer facing documentation

  • Thanks to gregdan3 for completing the style guide for Toki Pona before starting on a project.


  • Join us on Support Mozilla or SUMO Sprint for Firefox 97 this week to help users with issues. Interested to learn more? Check out our event page here!

Want to showcase an event coming up that your community is participating in? Reach out to any l10n-driver and we’ll include that (see links to emails at the bottom of this report).

Friends of the Lion

  • Thank you to 你我皆凡人, Marcelo G, Ihor H, and Michael Kohler who helped us to translate Firefox Focus banner on support.mozilla.org to invite people to update their app to the latest version. Because of them, we’re able to get a quick turnaround on most of our priority locales: de, en-US, en-GB, fr, id, pt-BR, and zh-CN. Thank you!

Know someone in your l10n community who’s been doing a great job and should appear here? Contact one of the l10n-drivers and we’ll make sure they get a shout-out (see list at the bottom)!

Useful Links

Questions? Want to get involved?

  • If you want to get involved, or have any question about l10n, reach out to:

Did you enjoy reading this report? Let us know how we can improve by reaching out to any one of the l10n-drivers listed above.

SUMO BlogWhat’s up with SUMO – February 2022

Hey SUMO folks,

We decided to skip January’s update since we published December’s data along with the 2021 retrospective. January was also filled with planning and many incidents, which made it quite a hectic month. But today, we’re finally here to give you another round of updates, so let’s dive into it!

Welcome note and shout-outs

If you know anyone that we should feature here, please contact Kiki and we’ll make sure to add them in our next edition.

Community news

  • If you’re an NDA contributor, you should now be able to watch the weekly support release scrum recording on Air Mozilla. The support release scrum meeting is where the Customer Experience team does a weekly catch-up on product releases. Make sure to use your NDA-ed email to log in to Air Mozilla. You can also subscribe to the folder to get notifications whenever a new recording comes up.
  • I spoke about this briefly at the community call in January. For those of you who have been missing the old contributor dashboard, you should be pleased: the P2P dashboard is now open to the public. This dashboard was originally created by JR to report on product support metrics. We think the data points are pretty similar to the old contributor dashboard, so we decided to make it public for contributors to play with. Just note that the data source may not be up to date if you choose a recent time frame, since somebody has to pull the data manually.
  • The implementation of the onboarding project is underway this quarter. We hope to be able to finish it before the end of March, so please stay tuned!
  • Are you a Knowledge Base contributor? Please make sure to fill out this survey before Feb 11, 2022
  • Check out the following release notes from Kitsune this month:
    • No Kitsune release notes for this month. Check out SUMO Engineering Board instead to see what the team is currently doing.

Community call

  • Watch the monthly community call if you haven’t. Learn more about what’s new in January!
  • Reminder: Don’t hesitate to join the call in person if you can. We try our best to provide a safe space for everyone to contribute. You’re more than welcome to lurk in the call if you don’t feel comfortable turning on your video or speaking up. If you feel shy to ask questions during the meeting, feel free to add your questions on the contributor forum in advance, or put them in our Matrix channel, so we can address them during the meeting.

Community stats


KB pageviews (*)

Month    | Page views | Vs previous month
Jan 2022 | 7,926,477  | -3.83%

* KB pageviews number is a total of KB pageviews for /en-US/ only

Top 5 KB contributors in the last 90 days: 

  1. AliceWyman
  2. Michele Rodaro
  3. Pierre Mozinet
  4. Romado33
  5. K_alex

KB Localization

Top 10 locale based on total page views

Locale | Jan 2022 pageviews (*) | Localization progress (per Feb 9) (**)
de     | 10.22% | 98%
fr     | 7.19%  | 91%
zh-CN  | 5.62%  | 100%
es     | 5.58%  | 35%
ru     | 5.70%  | 84%
ja     | 4.36%  | 52%
pt-BR  | 3.93%  | 59%
pl     | 3.62%  | 88%
it     | 2.61%  | 100%
zh-TW  | 2.35%  | 5%
* Locale pageviews is an overall pageviews from the given locale (KB and other pages)

** Localization progress is the percentage of localized article from all KB articles per locale

Top 5 localization contributors in the last 90 days: 

  1. Milupo
  2. Soucet
  3. Jim Spentzos
  4. Michele Rodaro
  5. Mark Heijl

Forum Support

Forum stats

Month    | Total questions | Answer rate within 72 hrs | Solved rate within 72 hrs | Forum helpfulness
Jan 2022 | 3175            | 72.13%                    | 15.02%                    | 81.82%

Top 5 forum contributors in the last 90 days: 

  1. FredMcD
  2. Cor-el
  3. Jscher2000
  4. Seburo
  5. Sfhowes

Social Support

Channel (Jan 2022) | Total conv | Conv interacted
@firefox           | 2967       | 341
@FirefoxSupport    | 386        | 270

Top 5 contributors in January 2022

  1. Bithiah Koshy
  2. Christophe Villeneuve
  3. Tim Maks van den Broek
  4. Kaio Duarte Costa
  5. Felipe Koji

Play Store Support

Channel (Jan 2022)        | Total priority review | Total priority review replied | Total review replied
Firefox for Android       | 1440                  | 29                            | 96
Firefox Focus for Android | 66                    | 15                            | 51

Top 5 contributors in January 2022

  1. Tim Maks van den Broek
  2. Selim Şumlu
  3. Matt Cianfarani
  4. Paul Wright
  5. Christian Noriega

Product updates

Firefox desktop

Firefox mobile

  • Version 97 for Android, iOS, and Focus went live Feb 8th
  • Mobile Version 98 scheduled to go live Mar 8th
    • Potential updates
      • Wallpapers functionality addition (Android, iOS)
      • Customize Search bar: Top or Bottom (iOS)
      • Inactive tabs work (iOS)
      • CC autofill work (Android)
      • HTTPS-only work (Focus Android)

Other products / Experiments

  • Mozilla VPN V2.7 went live Feb 1
    • List of changes
      • Multi-hop on mobile
      • Client update from ‘about us’ section
      • Localization of Multi account containers
  • Mozilla VPN V2.8 expected to land Mar 30
    • Potential updates
      • Connection speed reliability and screen redesign
      • In app authentication and FxA creation updates
  • Premium Relay V22.01 went live Feb 1
    • List of Changes
      • Custom Subdomain Education (register domain)
      • Create alias through Relay icon/sign in with Relay icon
      • Add-on panel redesign
  • Premium Relay V22.02 expected to land Mar 1
    • Potential updates
      • Critical emails
      • Google Chrome add-on for Relay
      • More custom subdomain education (Sub-domains, add-on, tool tips)
  • Pocket Migrating to Firefox Accounts
    • The Pocket team has started migrating users from their native account management system over to Firefox accounts.
  • TCP Breakage Updates
    • Experiment to improve how we identify and capture site breakage related issues and provide them back to the Anti-tracking engineering team
    • Hoping this experiment will help to formalize a new process to include the community with similar projects in the future.
  • Major Release
    • There will be at least 1 major release for Desktop along with 1 Major release for mobile in H1, 2022.
    • SUMO team will follow up with community support opportunities for these big releases.

Useful links:

Blog of Data – This Week in Glean: Migrating Legacy Telemetry Collections to Glean

(“This Week in Glean” is a series of blog posts that the Glean Team at Mozilla is using to try to communicate better about our work. They could be release notes, documentation, hopes, dreams, or whatever: so long as it is inspired by Glean. You can find an index of all TWiG posts online.)

One of the things I’ve spent a fair amount of time on as part of the Glean Team is helping with the migrations from legacy telemetry to Glean being performed by different Mozilla products. This includes everything from internal tools and libraries to the mobile browsers and even desktop Firefox. As it turns out, there were quite a few projects and products that needed to be migrated. While we have started migrating all of our active products, each of them is at a different stage and has a different timeline for completion. I thought it might be helpful to take a little narrative walk through what the migration process looks like, and so here we go!

Add Glean To Your Project

The first step in migration is adding Glean to the project. Depending on the platform, the precise steps vary slightly, but they ultimately result in adding Glean as a project dependency and initializing the SDK on startup. Libraries using Glean follow a slightly different path, since they aren’t responsible for initializing Glean: libraries just add the dependency at this point and rely on the base application’s Glean integration to initialize the SDK. Oh, and don’t forget to get a data review for adding Glean; this is an important step in ensuring that we are following the guidelines and policies.

Enable Ingestion

Now that the app or library can send Glean telemetry (and still is sending Legacy telemetry), we will need to inform the ingestion pipeline about the application so that the data will be handled properly by the telemetry endpoint. The process for this involves filing a bug, and someone on the data-engineering team will add the appropriate application ids to the Probe Scraper so that ingestion of the data can occur. For libraries, Probe Scraper also needs to know which applications depend on it so that the metrics from the library will be included in the datasets for those applications.

Verify The Initial Integration

Once the basic Glean integration is complete and live, the first steps of data verification begin. For applications migrating to Glean, this involves making sure that the baseline pings are flowing in. There are a few ways to do this, from writing some SQL to using Looker to explore the data. This is the opportunity to ensure that the application is showing up in our data tools like our Looker instance and the Glean Dictionary, and possibly to check for the data in GLAM.

The things that are important to check here are: that we are getting the data without unexpected ingestion errors, that the client and ping counts appear reasonable, and that the technical information in the baseline ping matches expected values. This is also the time to check that data is being received from the different distribution channels for the application. Typically we have “nightly”, “beta”, and “release” channels that need to be verified, so some of these analysis steps may need to be repeated for each channel. It’s also a good idea to look at some of the metrics that are critical for filtering, such as language/locale, ping start and end times, and duration, to ensure that everything matches our expectations.

Being confident that the integration is correct is the ultimate goal, but don’t be alarmed if you see some differences in things like client and ping counts between legacy and Glean: this is often expected due to differences in how each system is designed and works. Product teams are the experts in their product, so if the differences between legacy and Glean data seem wrong, don’t be afraid to find a Glean Team member or your friendly neighborhood data scientist to take a look and advise you if needed.

Enable Deletion Requests For Legacy Telemetry (if needed)

The next step in the process is to add the legacy telemetry identifier to the Glean deletion-request ping. The deletion-request ping is a special Glean ping that gets sent when a user opts out of telemetry collection; it informs the pipeline to delete the data associated with the identifier. Glean can also handle this step for legacy telemetry, but we need to add the legacy id as a secondary identifier in the deletion-request ping to make it work. Note that this is typically only a requirement for applications that are integrating Glean and not libraries, unless those libraries contain identifiers that reference data that may need to be deleted as part of a user’s request. It is also only required if the legacy system doesn’t already have a mechanism for sending its own “deletion-request”.

Plan The Migration

At this point we should have basic usage information coming from Glean, so the next step is to migrate the metrics that are specific to the application (or library). The Glean Team provides a spreadsheet to assist in this process: the product team starts by filling in all of the existing legacy metric collections, along with some information about where they are collected in the code, who owns the collection, and what questions they answer. Once the product team fills in this information, the Glean Team will advise on the correct Glean metric types to ensure that the metric collections in Glean record the same information needed to answer those questions. This is also a good opportunity for product teams to really audit their existing telemetry collections, to ensure that each collection is answering the questions they have and that it is genuinely needed. Such an audit can reduce the overall work required for the migration by eliminating unnecessary and unused metric collections, and it promotes lean data collection overall.

Migrate The Metrics

Now that the list of metrics to migrate has been settled upon, the work of instrumenting the metrics in Glean begins. This involves adding metrics to the metrics.yaml file and instrumenting the recording of each metric in the code. There are several strategies that could be used here, but I would recommend migrating metrics in logical units, such as a feature at a time, in order to better plan, prioritize, implement, and verify. An entire application likely has a lot of metrics as a whole, but looking at it feature by feature makes the process more manageable.

The instrumentation isn’t complete without adding test coverage for the metrics. For every metric type, Glean provides a testing API that lets you check that a valid value was recorded; the API extends to checking for errors in recording, as well as to testing custom pings. These tests should be a first line of validation and can help catch things that could potentially cause issues with the data. As each feature is migrated from legacy to Glean, the product team should look at the legacy and Glean data side by side to ensure that the Glean data is correct and complete. Part of this process should include verifying that any ETL (extract, transform, load) job that processes the data is also correct. This may require some data-science help if any questions arise, but it’s important to ensure that the collection is correct before calling it complete.
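
For illustration, a migrated counter metric in metrics.yaml might look something like the fragment below. The category, metric name, email address, and bug URLs are hypothetical placeholders, not a real collection; consult the Glean SDK documentation for the authoritative schema.

```yaml
# Hypothetical metrics.yaml entry for a migrated counter metric.
$schema: moz://mozilla.org/schemas/glean/metrics/2-0-0

browser.usage:
  feature_opened:
    type: counter
    description: |
      Counts how often the feature's panel is opened.
      (Hypothetical: migrated from a legacy scalar probe.)
    bugs:
      - https://bugzilla.mozilla.org/show_bug.cgi?id=0000000
    data_reviews:
      - https://bugzilla.mozilla.org/show_bug.cgi?id=0000000
    notification_emails:
      - owner@example.com
    expires: never
```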

Validate The Migration And Reporting

Once all the legacy collections have been migrated, there will be two telemetry systems instrumenting the application or library. While it might be tempting to remove the legacy system now, there is one more important consideration that must be taken into account before that work can proceed. All of the data that is collected goes somewhere, ending up in a query or dashboard that (we hope) someone is using to make decisions. Part of the migration work includes migrating those consumers to the new Glean data so that there is continuity of information and the ability to answer those business questions isn’t interrupted.

Don’t forget that it may not just be the product team that is consuming this data; it could be other teams interested in it for management dashboards, or for revenue dashboards, etc. It is important to understand all of the stakeholders in the legacy system’s data so that they can migrate to the Glean data along with the product. Finally, once everything from the instrumentation to the reporting infrastructure is migrated, and with the okay from data science that everything looks good, it should be safe to remove the legacy telemetry instrumentation.

There are a lot of steps and nuances to the migration process that might not be clear at first glance. My intention with this post is to illuminate the overall migration process a bit more, and perhaps help you to find where you are at in it and where to go if you are feeling a bit lost in the process. The Glean Team is always around to advise and help, but no one knows each product better than the product teams themselves, so understanding this process will hopefully help those teams have a better command and ownership over their telemetry collections and the questions they can answer with them.

hacks.mozilla.org – Improving the Storage Access API in Firefox

Before we roll out State Partitioning for all Firefox users, we intend to make a few privacy and ergonomic improvements to the Storage Access API. In this blog post, we’ll detail a few of the new changes we made.

With State Partitioning, third parties can’t access the same cookie jar when they’re embedded in different sites. Instead, they get a fresh cookie jar for each site they’re embedded in. This isn’t just limited to cookies either—all storage is partitioned in this way.

In an ideal world, this would stop trackers from keeping tabs on you wherever they’re embedded because they can’t keep a unique identifier for you across all of these sites. Unfortunately, the world isn’t so simple—trackers aren’t the only third parties that use storage. If you’ve ever used an authentication provider that requires an embedded resource, you know how important third-party storage can be.

Enter the Storage Access API. This API lets third parties request storage access as if they were a first party. This is called “unpartitioning” and it gives browsers and users control over which third parties can maintain state across first-party origins as well as determine which origins they can access that state from. This is the preferred way for third parties to keep sharing storage across sites.
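
In practice, a third party typically feature-detects and then requests access from inside its iframe. A minimal sketch follows; `ensureStorageAccess` is a hypothetical helper (it takes the document as a parameter so it can be exercised outside a browser), and in a real page `requestStorageAccess` must be called in response to a user gesture.

```javascript
// Minimal sketch of how an embedded third party might use the API.
// `doc` is the iframe's `document`; passing it in keeps the helper testable.
async function ensureStorageAccess(doc) {
  // Already have unpartitioned access? Nothing to do.
  if (await doc.hasStorageAccess()) {
    return true;
  }
  try {
    // Must run inside a user gesture (e.g. a click handler) in the
    // third-party iframe; the browser may prompt the user here.
    await doc.requestStorageAccess();
    return true;
  } catch (_err) {
    // Access denied: fall back to partitioned (per-site) storage.
    return false;
  }
}
```

A third party would call this from a click handler and only touch its unpartitioned cookies when it resolves to `true`.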

The Storage Access API leaves a lot of room for the browser to decide when to allow a third party unrestricted storage access. This is a feature that gives the browser freedom to make decisions it feels are best for the user and decide when to present choices about storage permissions to users directly. 

On the other hand, this means the Storage Access API can vary from browser to browser and version to version. As a result, the developer experience will suffer unless we do two things: 1) Design with the developer experience in mind; and 2) communicate what we’re doing. 

So let’s dive in! Here are four changes we’re making to the Storage Access API that will improve user privacy and maintain a strong developer experience…

Requiring User Consent for Third-Parties the User Never Interacted With

With the Storage Access API, the browser determines whether to involve the user in the decision to grant storage access to a third party. Previously, Firefox didn’t involve users until a third party had already been granted storage access on five different sites. From that point on, the third party’s storage access requests were presented to users to make a decision.

We’re allowing third parties some leeway to unpartition their storage on a few sites because we’re worried about overwhelming users with popup permission requests. We feel that allowing only a few permission grants per third party would keep the permission frequency down while still preventing any one party from tracking the user on many sites.

We also wanted to improve user privacy in our Storage Access API implementation by reducing the number of times third parties can automatically unpartition themselves without overwhelming the user with storage access requests. The improvement we settled on was requiring the user to have interacted with the third party recently to give them storage access without explicitly asking the user whether or not to allow it. We believe that removing automatic storage access grants for sites the user has never seen before captures the spirit of State Partitioning without having to bother the user too much more.

Careful readers may now be concerned that any embed-only pages, like some authentication services, will be heavily impacted by this. To tip the scales even further toward low user touch, we expanded the definition of “interacting with a site” to support embed-only contexts. Now, whenever a user grants storage access via permission popups or interacts with an iframe with storage access, these both count as user interactions. This change is the result of a lot of careful balancing between preserving legitimate use cases, protecting user privacy, and not annoying users with endless permission prompts. We think we found the sweet spot.

Changing the Scope of First-Party Storage Access to Site

While rolling out State Partitioning, we’ve seen the emergence of a fair number of use cases for the Storage Access API. One common use is to enable authentication using a third party.

We found that, on occasion, the login portal that gave first-party storage access to the authentication service was on a subdomain, like https://login.example.com. This caused problems when the user navigated to https://example.com after logging in… they were no longer logged in! This is because the storage access permission was only granted on the login subdomain and not the rest of the site. The authentication provider had access to its cookies on https://login.example.com, but not on https://example.com.

We fixed this by scoping the storage access permission to the site. This means that when a third party gets storage access on a page, it has access to unpartitioned storage on all pages of that same site. So in the example above, the authenticating third party would have access to the user’s login cookie on https://login.example.com, https://example.com, and https://any.different.subdomain.example.com! Yet it still wouldn’t have access to that login cookie on http://example.com or https://different-example.com.
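
To see why those URLs group together, note that a “site” here is roughly the scheme plus the registrable domain. The sketch below approximates the registrable domain with the last two host labels; real browsers consult the Public Suffix List, so treat this as an illustration, not an implementation.

```javascript
// Approximate "site" = scheme + registrable domain. Real implementations
// use the Public Suffix List; taking the last two labels is a naive stand-in.
function siteOf(urlString) {
  const url = new URL(urlString);
  const registrable = url.hostname.split(".").slice(-2).join(".");
  return `${url.protocol}//${registrable}`;
}

siteOf("https://login.example.com"); // "https://example.com"
siteOf("https://example.com");       // "https://example.com"  (same site)
siteOf("http://example.com");        // "http://example.com"   (different site)
```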

Cleaning Up User Interaction Requirements

Requiring user interaction when requesting storage access was one rough edge of the Storage Access API definition. Let’s talk about that requirement.

If a third party calls requestStorageAccess as soon as a page loads, it should not get that storage access. It needs to wait until the user interacts with their iframe. Scrolling or clicking are good ways to get this user interaction and it will expire a few seconds after it’s granted. Unfortunately, there were some corner cases in this requirement that we needed to clean up. 

One corner case concerns what to do with the user’s interaction state when they click Accept or Deny on a permission prompt. We decided that when a user clicks Deny on a storage access permission prompt, the third party should lose their user interaction. This prevents the third party from immediately requesting storage access again, bothering the user until they accept. 

Conversely, we decided to reset the timer for user interaction if the user clicks Accept to reflect that the user did interact with the third party. This will allow the third party to use APIs that require both storage access and user interaction with only one user interaction in their iframe.

Another corner case concerned how strict to be when requiring user interaction for storage access requests. As we’ve iterated on the Storage Access API, minor changes have been introduced. One of the changes has to do with the case of giving a third party storage access on a page, but then the page is reloaded. Does the third party have to get a user interaction before requesting storage access again? Initially, the answer was no, but now it is yes. We updated our implementation to reflect that change and align with other browsers. 

Integrating User Cookie Preferences

In the settings for Firefox Enhanced Tracking Protection, users can specify how they want the browser to handle cookies. By default, Firefox blocks cookies from known trackers. But we have a few other possible selections, such as allowing all cookies or blocking all third-party cookies. Users can alter this preference to their liking.

We have always respected this user choice when implementing the Storage Access API. However, this wasn’t clear to developers. For example, users that set Firefox to block all third-party cookies will be relieved to know the Storage Access API in no way weakens their protection; even a storage access permission doesn’t give a third party any access to storage. But this wasn’t clear to the third party’s developers.

Previously, the promise returned from requestStorageAccess would still resolve, indicating that the third party had access to its unpartitioned storage even when it did not. We endeavored to fix this. In Firefox 98, when the user has disabled third-party cookies via the preferences, requestStorageAccess will always return a rejected promise and hasStorageAccess will always return false.



The post Improving the Storage Access API in Firefox appeared first on Mozilla Hacks - the Web developer blog.

Open Policy & Advocacy – UK CMA’s mobile ecosystems report is a step toward improving choice for consumers; swift independent enforcement is still necessary

Consumers today face many barriers that prevent them from accessing and using a variety of software options on their devices. We welcome efforts by the UK Competition and Markets Authority (CMA) to better understand the situations faced by mobile device users and to address them.

Earlier today, we submitted our comments to the CMA’s interim report on mobile ecosystems. Their assessment adds to a growing body of work by regulators on the systemic barriers that prevent meaningful consumer choice and stifle innovation online. As these reports show, all devices run on operating systems, and concentration of operating systems and affiliated software harms developers and consumers alike. In addition, the CMA’s report is the first to chronicle the importance of web compatibility and the harmful network effects that result when popular software apps are incompatible with all browsers. It also dives into the importance of browser engines to a healthy internet ecosystem that is decentralized and open.

The report’s findings serve as a blueprint for regulators looking at these issues around the world. At the same time, we believe the CMA should go further in some crucial areas, most notably acting upon the need for swift independent enforcement rather than leaving it to the yet-to-be-established Digital Markets Unit (DMU).

Our submission focuses on the following key themes:

  • Ex-ante regulation should complement, and not replace, traditional enforcement – Mozilla strongly endorses efforts to reform competition law for the modern age in the form of ex-ante frameworks like the DMU in the UK and the Digital Markets Act (DMA) in the EU. However, if the harms identified in this report are not addressed soon, irreparable damage will be done to innovation and competition. We recommend that the CMA exercise its independent enforcement powers in certain sectors (such as the mobile browser market) and suggest measures that can limit the more egregious practices in the space.
  • Privacy and competition are complementary – We welcome the improved cooperation between regulators, such as the CMA and the Information Commissioner’s Office (ICO). We urge the CMA to work towards creating a high baseline of privacy protections and an even playing field for the open web. As we’ve said before, Chrome should be allowed to join all other major browsers to limit the use of third-party cookies for pervasive web tracking. At the same time, no dominant platform should be permitted to self-preference or indiscriminately share data internally without meaningful consumer consent. All platforms should also develop and deploy technologies relevant to the web ecosystem via formal processes at open standard bodies to ensure their privacy and competition aspects are adequately vetted in a neutral forum.
  • Importance of consumer experience and need for more evidence-based research – The CMA recognises that interventions such as choice screens have not led to meaningful changes in the market share of impacted services, highlighting the importance of consumer experience in the choices that users make on their devices. We recommend that the CMA invest in public research, more robust metrics collection, and other insights to explore remedies that meaningfully improve competition.

We look forward to working with the CMA over the coming months, both in the lead up to the final report and beyond, to see the insights from the report translate into regulatory action, increased consumer choice and a better, more interoperable internet.

More on this

The post UK CMA’s mobile ecosystems report is a step toward improving choice for consumers; swift independent enforcement is still necessary appeared first on Open Policy & Advocacy.

hacks.mozilla.org – Retrospective and Technical Details on the recent Firefox Outage

On January 13th 2022, Firefox became unusable for close to two hours for users worldwide. This incident interrupted many people’s workflow. This post highlights the complex series of events and circumstances that, together, triggered a bug deep in the networking code of Firefox.

What Happened?

Firefox has a number of servers and related infrastructure that handle several internal services. These include updates, telemetry, certificate management, crash reporting and other similar functionality. This infrastructure is hosted by different cloud service providers that use load balancers to distribute the load evenly across servers. For those services hosted on Google Cloud Platform (GCP), these load balancers have settings related to the HTTP protocol they should advertise, and one of these settings is HTTP/3 support, with three states: “Enabled”, “Disabled” or “Automatic (default)”. Our load balancers were set to “Automatic (default)”, and on January 13, 2022 at 07:28 UTC, GCP deployed an unannounced change making HTTP/3 the default. Because Firefox uses HTTP/3 when supported, from that point forward some connections that Firefox made to the services infrastructure would use HTTP/3 instead of the previously used HTTP/2 protocol.¹

Shortly after, we noticed a spike in crashes being reported through our crash reporter and also received several reports from inside and outside of Mozilla describing a hang of the browser.

A graph showing the curve of unprocessed crash reports quickly growing.

Backlog of pending crash reports building up and reaching close to 300K unprocessed reports.

As part of the incident response process, we quickly discovered that the client was hanging inside a network request to one of the Firefox internal services. However, at this point we neither had an explanation for why this would trigger just now, nor what the scope of the problem was. We continued to look for the “trigger” — some change that must have occurred to start the problem. We found that we had not shipped updates or configuration changes that could have caused this problem. At the same time, we were keeping in mind that HTTP/3 had been enabled since Firefox 88 and was actively used by some popular websites.

Although we couldn’t see it, we suspected that there had been some kind of “invisible” change rolled out by one of our cloud providers that somehow modified load balancer behavior. On closer inspection, none of our settings were changed. We then discovered through logs that for some reason, the load balancers for our Telemetry service were serving HTTP/3 connections while they hadn’t done that before. We disabled HTTP/3 explicitly on GCP at 09:12 UTC. This unblocked our users, but we were not yet certain about the root cause and without knowing that, it was impossible for us to tell if this would affect additional HTTP/3 connections.

¹ Some highly critical services such as updates use a special beConservative flag that prevents the use of any experimental technology for their connections (e.g. HTTP/3).

A Special Mix of Ingredients

It quickly became clear to us that there must be some combination of special circumstances for the hang to occur. We performed a number of tests with various tools and remote services and were not able to reproduce the problem, not even with a regular connection to the Telemetry staging server (a server only used for testing deployments, which we had left in its original configuration for testing purposes). With Firefox itself, however, we were able to reproduce the issue with the staging server.

After further debugging, we found the “special ingredient” required for this bug to happen. All HTTP/3 connections go through Necko, our networking stack. However, Rust components that need direct network access are not using Necko directly, but are calling into it through an intermediate library called viaduct.

In order to understand why this mattered, we first need to understand some things about the internals of Necko, in particular about HTTP/3 upload requests. For such requests, the higher-level Necko APIs² check whether the Content-Length header is present; if it isn’t, it is added automatically. The lower-level HTTP/3 code later relies on this header to determine the request size. This works fine for web content and other requests in our code.

When requests pass through viaduct first, however, viaduct will lower-case each header and pass it on to Necko. And here is the problem: the API checks in Necko are case-insensitive while the lower-level HTTP/3 code is case-sensitive. So if any code was to add a Content-Length header and pass the request through viaduct, it would pass the Necko API checks but the HTTP/3 code would not find the header.

It just so happens that Telemetry is currently the only Rust-based component in Firefox Desktop that uses the network stack and adds a Content-Length header. This is why users who disabled Telemetry would see this problem resolved even though the problem is not related to Telemetry functionality itself and could have been triggered otherwise.

A diagram showing the different network components in Firefox.

A specific code path was required to trigger the problem in the HTTP/3 protocol implementation.

² These are internal APIs, not accessible to web content.

The Infinite Loop

With the load balancer change in place, and a special code path in a new Rust service now active, the necessary final ingredient to trigger the problem for users was deep in Necko HTTP/3 code.

When handling a request, the code looked up the field in a case-sensitive way and failed to find the header as it had been lower-cased by viaduct. Without the header, the request was determined by the Necko code to be complete, leaving the real request body unsent. However, this code would only terminate when there was no additional content to send. This unexpected state caused the code to loop indefinitely rather than returning an error. Because all network requests go through one socket thread, this loop blocked any further network communication and made Firefox unresponsive, unable to load web content.

Lessons Learned

As so often is the case, the issue was a lot more complex than it appeared at first glance and there were many contributing factors working together. Some of the key factors we have identified include:

  • GCP’s deployment of HTTP/3 as the default was unannounced. We are actively working with them to improve the situation. We realize that an announcement (as is usually sent) might not have entirely mitigated the risk of an incident, but it would likely have prompted more controlled experiments (e.g. in a staging environment) and a more controlled deployment.

  • Our setting of “Automatic (default)” on the load balancers instead of a more explicit choice allowed the deployment to take place automatically. We are reviewing all service configurations to avoid similar mistakes in the future.

  • The particular combination of HTTP/3 and viaduct on Firefox Desktop was not covered in our continuous integration system. While we cannot test every possible combination of configurations and components, the choice of HTTP version is a fairly major change that should have been tested, as should the use of an additional networking layer like viaduct. Current HTTP/3 tests cover the low-level protocol behavior and the Necko layer as it is used by web content. We should run more system tests with different HTTP versions; doing so could have revealed this problem.

We are also investigating action points both to make the browser more resilient towards such problems and to make incident response even faster. Learning as much as possible from this incident will help us improve the quality of our products. We’re grateful to all the users who have sent crash reports, worked with us in Bugzilla or helped others to work around the problem.

The post Retrospective and Technical Details on the recent Firefox Outage appeared first on Mozilla Hacks - the Web developer blog.

Open Policy & Advocacy – Advocating for a “use-it-or-share-it” spectrum approach to bridge the digital divide in India

On January 10, Mozilla, in partnership with the Centre for Internet and Society, made a submission to TRAI regarding the upcoming 5G spectrum auction. We advocated for a “use-it-or-share-it” approach to spectrum to help small and medium operators ensure connectivity reaches underserved areas across India.

The COVID-19 pandemic has brought home to us all how important affordable, accessible broadband is to modern society.  Internet access has allowed millions to safely carry on their work, their education, their social connections, and more. Its value has multiplied because of this. Yet the unfortunate consequence of this is that those without affordable internet access fall further and further behind by default. They are quite literally invisible to the connected. The inescapable conclusion is that inclusiveness, making sure everyone has affordable access to broadband, must be a policy priority.

Restrictive licensing can serve as a barrier to access

Mozilla is committed to an internet that includes all the peoples of the earth and recognizes that, as internet growth is slowing, achieving a truly inclusive internet will require fresh ideas beyond those that got us this far.  This will require new business models, new technologies, and, in particular, new policies and regulations.  Central to affordable access, particularly in emerging markets, are mobile wireless technologies. However, the ability to deploy mobile wireless networks is dependent on spectrum licenses that guarantee exclusive access to wireless spectrum and which are often auctioned for millions of dollars each. While the value of these licenses can represent a windfall for government treasuries, they have the unfortunate side effect of excluding smaller, often more innovative, operators from participation in the market.  Worse still, the high prices paid for these licenses can operate as a disincentive for the license holder to invest in more sparsely populated, economically underdeveloped regions…exactly where affordable access is most desperately needed.

Sharing spectrum creates opportunity

Fortunately, there is a simple solution to this problem. Regulators can unlock access to wireless spectrum in regions that are most desperately in need of affordable access by making a subtle but important change to the way that spectrum is licensed. Traditionally spectrum licenses have guaranteed exclusivity of use to a license holder.  By shifting from a guarantee of exclusivity to a guarantee of ‘protection from interference’, the regulator opens the possibility of sharing that spectrum in areas where the primary license holder has no intention of building networks.

This “use-it-or-share-it” approach to spectrum licensing (which we’ve also advocated for in the past) has already been implemented in Mexico, the United States, the U.K., and Germany. It is currently under consideration in Canada and other countries. Spectrum sharing can unlock affordable access where it is needed most in disadvantaged rural areas.

While we encourage TRAI to consider this approach in their ongoing deliberations regarding the spectrum auction process in India, we also think that the issue merits a dedicated public consultation to incorporate wider stakeholder input and independent consideration from the 5G rollout process in India. We look forward to engaging with TRAI, the Department of Telecom and other stakeholders on this issue over the coming year.

The post Advocating for a “use-it-or-share-it” spectrum approach to bridge the digital divide in India appeared first on Open Policy & Advocacy.

hacks.mozilla.orgHacks Decoded: Adewale Adetona

Welcome to our Hacks: Decoded Interview series!

Once a month, Mozilla Foundation’s Xavier Harding speaks with people in the tech industry about where they’re from, the work they do and what drives them to keep going forward. Make sure you follow Mozilla’s Hacks blog to find more articles in this series and make sure to visit the Mozilla Foundation site to see more of our org’s work.

Meet Adetona Adewale Akeem!

Adewale Adetona

Adetona Adewale Akeem, more popularly known as iSlimfit, is a revered Nigeria-born digital technologist and marketing expert. He is the co-founder of Menopays, a fintech startup offering another Buy Now Pay Later (BNPL) option across Africa.

So, I’ve got to ask — where does the name iSlimfit come from?

“Slimfit” is a nickname from my University days. But when I wanted to join social media, Twitter, in particular, I figured out the username Slimfit was already taken. All efforts to reach and plead with the user — who even up until now has never posted anything on the account — to release the username for me proved abortive. Then I came up with another username by adding “i” (which signifies referring to myself) to the front of Slimfit.

How did you get started in the tech industry, iSlimfit?

My journey into tech started as far back as 2014, when I made the switch from working at a Media & Advertising Agency in Lagos Nigeria to working as a Digital Marketing Executive in a Fintech Company called SystemSpecs in Nigeria. Being someone that loved combining data with tech, I have always had a knack for growth marketing. So the opportunity to work in a fintech company in that capacity wasn’t something I could let slide.

Where are you based currently? And where are you from originally? How does where you’re from affect how you move through the tech industry?

I am currently based in Leeds, United Kingdom after recently getting a Tech Nation Global Talent endorsement by the UK government. I am from Ogun State, Nigeria. 

There is actually no negative impact from my background or where I am from as regards my work in tech. The Nigerian tech space is huge and the opportunities are enormous. Strategic positioning and working with a goal in mind has helped me in navigating my career in tech so far.

What brought about the idea of your new vlog Tech Chat with iSlimfit?

My desire to make an impact and contribute to the growth of upcoming tech professionals birthed the vlog. Also, I wanted to replicate what I do offline with Lagos Digital Summit, in an online manner. The vlog is basically a YouTube chat series where I bring in various people in tech — growth marketers, UI/UX designers, product managers, startup founders, mobile app developers, etc. — to share their background, career journey, transitions and learnings, and to answer general questions about their day-to-day job so that tech enthusiasts can learn from their expertise.

I have to bring up the fact that in 2021, you were endorsed by Tech Nation as an Exceptional agent in Digital Tech. What’s it feel like to achieve something like that?

The Tech Nation endorsement by the UK government is one of my biggest achievements. It made me realize how important my impact on the Nigerian tech industry over the years has been. The endorsement was granted based on my significant contribution to the Nigerian Digital Tech sector, my mentorship & leadership capabilities, and also the potential contribution my talent & expertise would add to the UK digital economy. I am particularly grateful for the opportunity to positively make an impact to the digital economy of the United Kingdom.

What’s something folks may not immediately realize about the tech sector in Nigeria if they’re not from there?

Easy: the fact that the tech sector in Nigeria is the biggest in Africa, and the impact of tech solutions developed in Nigeria is felt all over Africa. Also, as we can see from a recent report, Nigerian startups lead the list of African Startups that received funding in 2021.

What digital policy or policies do you think Nigeria (your home country) should pursue in order to accelerate digital development in the country?

The Nigerian government needs to come to terms with the fact that digital technology is the bedrock for the development of the Nation. They need to develop policies that will shape the Nation’s digital economy and design a roadmap for grassroots digital Tech empowerment of Nigeria’s agile population.

We also need more people to champion and improve on our quest for digital entrepreneurship development through various platforms.

You helped co-found a company called Menopays. What were some of the hurdles when it comes to getting a tech company off the ground over there? What about the opposite? What are the ways those in tech benefit from founding and working in Nigeria?

One of the main hurdles in starting a tech company is putting together the right team for the job. This cuts across legal, product, marketing, and the tech itself. The idea could be great but without the right team, execution is challenging.

A great benefit is that the continent of Africa is gaining in popularity and the world is watching, so a genuine team founding a business will get the benefits of foreign investments which is great in terms of dollar value.

Some take issue with Buy Now Pay Later apps and services like Menopays in how they may profit off of buyers who may have less. How is Menopays different? How does the company make money? What measures are in place to make sure you aren’t taking advantage of people?

Menopays is different because our focus goes beyond the profitability of the industry. We tailored a minimum spendable amount with a decent repayment period for the minimum wage in Nigeria. Our vision stands in the middle of every decision we make both business-wise and/or product development-wise. 

The measure in place is that decisions are guided by why we started Menopays, which is “to fight poverty”. We don’t charge customers exorbitant interest as it goes against what we are preaching as a brand. So our Vision is imprinted in the heart of all the team members working towards making Menopays a family brand.

You’ve mentioned Menopays is fighting poverty in Nigeria and eventually all of Africa, how so?

Thinking about one of the incidents that happened to one of our co-founders, Reuben Olawale Odumosu, about eight years back. He lost his best friend because of a substandard malaria medication. His best friend in high school died because his parents couldn’t afford NGN2,500 malaria medication at the time and point of need which led to them going for a cheaper drug that eventually led to his death. Menopays exists to prevent such situations by making basic needs like healthcare, groceries and clothing available to our customers even when they don’t have the money to pay at that moment.

So in light of this, at Menopays, we believe that if some particular things are taken care of, individuals stand a lot more chances of survival. Take, for instance, someone who earns NGN18,000, spends NGN5,000 on transport, NGN7,000 on food and rent, and some other miscellaneous expenses of NGN6,000; with Menopays, we take out the cost of transportation and food (by providing access to our merchants) and give them more time to pay over the next three months. This means each month the customer has a positive cash flow of NGN6,000. We turn a negative cash flow into a positive cash flow and savings, thereby fighting poverty.

If you didn’t help found Menopays, what would you be doing now instead?

I would probably be working on founding another tech startup doing something for the greater good of the world and helping brands achieve their desired marketing objectives.

How can the African tech diaspora help startups similar to Menopays?

One way African tech diaspora can help startups similar to Menopays is by promoting their services, sharing with potential users, and also by investing in it.

How did you come up with the idea for Lagos Digital Summit?

Lagos Digital Summit started in 2017 with just an idea in my small shared apartment back then in Lagos with my friend who is now in Canada. The goal back then was simply to facilitate a platform for the convergence of 50 to 60 digital marketing professionals and business thought leaders for the advancement of SMEs and Digital Media enthusiasts within our network.

Five years down the line, despite being faced with plenty of challenges, it’s been a big success story. We have had the privilege of empowering over 5,000 businesses and individuals with diverse digital marketing skills. 

What’s it been like arranging that sort of summit in the midst of a pandemic?

Lagos Digital Summit 2020 has been the only edition that we’ve had to do fully virtual because it was at the peak of the COVID-19 pandemic. Every other edition before then had been physical, fully packed with an average of 1,000 attendees. The 2021 edition was hybrid because COVID-19 restrictions had been relaxed; about 300 people attended physically and everyone else watched online.

What’s something you see everywhere in tech that you wish more people would talk about?

I wish more people would talk about the struggle, the disappointments, the challenges and the numerous sacrifices that come with building a tech startup. A lot of times, the media only portrays the success stories, especially when a startup raises funds; the headlines are always very inspiring and rosy.

What’s been the most impactful thing you’ve done since working in tech? What’s been the most memorable?

That should be founding Lagos Digital Summit; the kind of sponsors, corporate organisations, high-profiled speakers, volunteers and attendees that the Summit has been able to attract has been a memorable and proud feeling.

What sort of lasting impact do you want to have on the industry and the world? What keeps you going?

Waking up every day, knowing that a lot of people would have a smile on their faces because I have chosen to impact lives and make the world a better place through relevant tech solutions and platforms is the best feeling for me. The fact that I can read through reports and data and see the number of people using Menopays as a Buy Now Pay Later (BNPL) payment option to ease their lifestyle is a big motivation for me.

What’s some advice you’d give to others hoping to enter the tech world or hoping to start up a company?

Venturing into Tech or building a Startup takes a whole lot of concerted effort and determination. Getting the right set of partner(s) will, however, make the journey easier for you. Just have partners or cofounders with a similar vision and complementary skills.

You can keep up with Adewale’s work by following him here. Stay tuned for more Hacks Decoded Q&A’s!

The post Hacks Decoded: Adewale Adetona appeared first on Mozilla Hacks - the Web developer blog.

Blog of DataThis Week in Glean: Building and Deploying a Rust library on iOS

(“This Week in Glean” is a series of blog posts that the Glean Team at Mozilla is using to try to communicate better about our work. They could be release notes, documentation, hopes, dreams, or whatever: so long as it is inspired by Glean.) All “This Week in Glean” blog posts are listed in the TWiG index (and on the Mozilla Data blog).

We ship the Glean SDK for multiple platforms, one of them being iOS applications. Previously I talked about how we got it to build on the Apple ARM machines. Today we will take a closer look at how to bundle a Rust static library into an iOS application.

The Glean SDK project was set up in 2019 and we have evolved its project configuration over time. A lot has changed in Xcode since then, so for this article we’re starting with a fresh Xcode project and a fresh Rust library, putting it all together step by step.
This is essentially an update to the Building and Deploying a Rust library on iOS article from 2017.

For future readers: This was done using Xcode 13.2.1 and rustc 1.58.1.
One note: I learned iOS development to the extent required to ship Glean iOS. I’ve never written a full iOS application and lack a lot of experience with Xcode.

The application

The premise of our application is easy:

Show a non-interactive message to the user with data from a Rust library.

Let’s get started on that.

The project

We start with a fresh iOS project. Go to File -> New -> Project, then choose the iOS App template, give it a name such as ShippingRust, select where to store it and finally create it. You’re greeted with ContentView.swift and the following code:

import SwiftUI

struct ContentView: View {
    var body: some View {
        Text("Hello, world!")
    }
}

You can build and run it now. This will open the Simulator and display “Hello, world!”. We’ll get back to the Swift application later.

The Rust parts

First we set up the Rust library.

In a terminal navigate to your ShippingRust project directory. In there create a new Rust crate:

We will need a static library, so we change the crate type in the generated shipping-rust-ffi/Cargo.toml. Add the following below the package configuration:
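The snippet is missing from this copy; a minimal sketch of the addition, assuming the default library name (Cargo turns the hyphens of shipping-rust-ffi into underscores, which yields the libshipping_rust_ffi.a file we link against later), would be:

```toml
# Build this crate as a static library so Xcode can link it.
[lib]
crate-type = ["staticlib"]
```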

Let’s also turn the project into a Cargo workspace. Create a new top-level Cargo.toml with the content:
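The manifest content is missing from this copy; a minimal workspace manifest listing our single crate would be:

```toml
[workspace]
members = ["shipping-rust-ffi"]
```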

cargo build in the project directory should work now and create a new static library.

Let’s add some code to shipping-rust-ffi/src/lib.rs next. Nothing fancy, a simple function taking some arguments and returning the sum:
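The function body isn’t reproduced in this copy; judging from the call site later in the post (shipping_rust_addition(30, 1) displaying 31), it is presumably something like this (parameter names are my assumption):

```rust
// Exported under its exact name (no_mangle) with the C ABI
// so it can be called from Swift via the C bridging header.
#[no_mangle]
pub extern "C" fn shipping_rust_addition(left: i32, right: i32) -> i32 {
    left + right
}
```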

The no_mangle ensures the name lands in the compiled library as-is and the extern "C" makes sure it uses the right ABI.

We now have a Rust library exporting a C-ABI compatible interface. We can now consume this in our iOS application.

The Xcode parts

Before we can use the code we need a bit more setup. Strap in, there’s a lot of fiddly manual steps now1.

We start by linking against the libshipping_rust_ffi.a library. In your Xcode project open your target configuration2, go to “Build Phases”, then look for “Link Binary with Libraries”. Add a new entry: in the popup select “Add files” on the bottom left and look for the target/debug/libshipping_rust_ffi.a file. Yes, that’s actually the wrong target. This is just for the name; we’ll fix up the path next. Go to “Build Settings” and search for “Library Search Paths”. It probably has the path to the file in there right now for both Debug and Release builds. Remove that path for Debug, then add a new row by clicking the small + symbol. Select the Any Driverkit matcher. It doesn’t matter which matcher you choose or what value you give it, but when we overwrite this manually in the next step I’ll assume you chose Any Driverkit. Do the same for the Release configuration.

Once that’s done, save your project and go back to your project directory. We will modify the project configuration to have Xcode look for the library based on the target it is building for3. Open up ShippingRust.xcodeproj/project.pbxproj in a text editor, then search for the first line with "LIBRARY_SEARCH_PATHS[sdk=driverkit*]". It should be in a section saying /* Debug */. Remove the LIBRARY_SEARCH_PATHS line and add 3 new ones:

"LIBRARY_SEARCH_PATHS[sdk=iphoneos*][arch=arm64]" = "$(PROJECT_DIR)/target/aarch64-apple-ios/debug";
"LIBRARY_SEARCH_PATHS[sdk=iphonesimulator*][arch=arm64]" = "$(PROJECT_DIR)/target/aarch64-apple-ios-sim/debug";
"LIBRARY_SEARCH_PATHS[sdk=iphonesimulator*][arch=x86_64]" = "$(PROJECT_DIR)/target/x86_64-apple-ios/debug";

Look for the next line with "LIBRARY_SEARCH_PATHS[sdk=driverkit*]", now in a /* Release */ section and replace it with:

"LIBRARY_SEARCH_PATHS[sdk=iphoneos*][arch=arm64]" = "$(PROJECT_DIR)/target/aarch64-apple-ios/release";
"LIBRARY_SEARCH_PATHS[sdk=iphonesimulator*][arch=arm64]" = "$(PROJECT_DIR)/target/aarch64-apple-ios-sim/release";
"LIBRARY_SEARCH_PATHS[sdk=iphonesimulator*][arch=x86_64]" = "$(PROJECT_DIR)/target/x86_64-apple-ios/release";

Save the file and return focus back to Xcode. If you didn’t make any typos Xcode should still have your project open. In the settings you will find the library search paths as we’ve just defined them. If you messed something up Xcode will complain that it cannot read the project file if you try to go to the settings.

Next we need to teach Xcode how to compile Rust code. Once again go to your target settings, selecting the “Build Phases” tab again.

There add a new “Run Script” phase, give it the name “Build Rust library” (double-click the “Run Script” section header), and set the command to:
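The command is missing in this copy; given the script location described below (bin/compile-library.sh) and its two arguments (crate name and build variant), it would be along the lines of:

```shell
bash "${PROJECT_DIR}/bin/compile-library.sh" shipping-rust-ffi "${buildvariant}"
```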

The compile-library.sh script is going to do the heavy lifting. The first argument is the crate name we want to compile, the second is the build variant to select. This is not yet defined, so let’s do it first.

Go to the “Build Settings” tab and click the + button to add a new “User-Defined Setting”. Give it the name buildvariant and choose a value based on the build variant: debug for Debug and release for Release.

Now we need the actual script to build the Rust library for the right targets. It’s a bit long to write out, but the logic is not too complex: First we select the Cargo profile to use based on our buildvariant (that is whether to pass --release or not), then we set up LIBRARY_PATH if necessary and finally compile the Rust library for the selected target. Xcode passes the architectures to build in ARCHS. It’s either x86_64 for simulator builds on Intel Mac hardware or arm64. If it’s arm64 it can be either the simulator or an actual hardware target. Those differ, but we can know which is which from what’s in LLVM_TARGET_TRIPLE_SUFFIX and select the right Rust target.

Let’s put all of that into a compile-library.sh script. Create a new directory bin in your project directory. In there create the file with the following content:
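The script content is missing from this copy. A reconstruction following the logic described above — pick the Cargo profile from the buildvariant, set LIBRARY_PATH if necessary, then map Xcode’s ARCHS (and LLVM_TARGET_TRIPLE_SUFFIX for arm64) to the matching Rust target — might look like this; treat it as a sketch, and note that details such as the DEVELOPER_SDK_DIR handling are assumptions:

```shell
#!/usr/bin/env bash
set -euo pipefail

FFI_TARGET=$1       # crate to compile, e.g. shipping-rust-ffi
BUILD_VARIANT=$2    # debug or release, from the buildvariant setting

# Map the build variant to the Cargo profile flag.
case "$BUILD_VARIANT" in
  debug) RELFLAG="" ;;
  release) RELFLAG="--release" ;;
  *) echo "Unknown build variant: $BUILD_VARIANT" >&2; exit 1 ;;
esac

# Xcode sanitizes the environment; make sure the linker can still
# find the macOS system libraries when cargo builds host tooling.
if [ -n "${DEVELOPER_SDK_DIR:-}" ]; then
  export LIBRARY_PATH="${DEVELOPER_SDK_DIR}/MacOSX.sdk/usr/lib:${LIBRARY_PATH:-}"
fi

# ARCHS holds the architectures Xcode wants built. arm64 is ambiguous:
# it means the simulator on Apple Silicon when LLVM_TARGET_TRIPLE_SUFFIX
# is "-simulator", otherwise an actual device.
for arch in $ARCHS; do
  case "$arch" in
    x86_64)
      cargo build -p "$FFI_TARGET" --lib $RELFLAG --target x86_64-apple-ios
      ;;
    arm64)
      if [ "${LLVM_TARGET_TRIPLE_SUFFIX:-}" = "-simulator" ]; then
        cargo build -p "$FFI_TARGET" --lib $RELFLAG --target aarch64-apple-ios-sim
      else
        cargo build -p "$FFI_TARGET" --lib $RELFLAG --target aarch64-apple-ios
      fi
      ;;
  esac
done
```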

And now we’re done with the setup for compiling the Rust library automatically as part of the Xcode project build.

The code parts

We now have an Xcode project that builds our Rust library and links against it. Next we need to actually use this library!

Swift speaks Objective-C, which is an extension to C, but we need to tell it about the things available. In C land that’s done with a header. Let’s create a new file, select the “Header File” template and name it FfiBridge.h. This will create a new file with this content:
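The generated content isn’t shown in this copy; aside from a comment banner, Xcode’s Header File template produces an empty include guard roughly like:

```c
#ifndef FfiBridge_h
#define FfiBridge_h


#endif /* FfiBridge_h */
```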

Here we need to add the definition of our function. As a reminder this is its definition in Rust:
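The Rust definition is missing from this copy; judging from the call site in ContentView.swift, it is presumably (parameter names assumed):

```rust
#[no_mangle]
pub extern "C" fn shipping_rust_addition(left: i32, right: i32) -> i32 {
    left + right
}
```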

This translates to the following in C:
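The C declaration is also missing here; mapping Rust’s i32 to C’s int32_t (declared in stdint.h, which the header will need to include), it would be:

```c
int32_t shipping_rust_addition(int32_t left, int32_t right);
```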

Add that line between the #define and #endif lines. Xcode doesn’t know about that file yet, so once more into the Build Settings of the target. Search for Objective-C Bridging Header and set it to $(PROJECT_DIR)/ShippingRust/FfiBridge.h. In Build Phases add a new Header Phase. There you add the FfiBridge.h as well.

If it now all compiles we’re finally ready to use our Rust library.

Open up ContentView.swift and change the code to call your Rust library:

struct ContentView: View {
    var body: some View {
        Text("Hello, world! \(shipping_rust_addition(30, 1))")
    }
}

We simply interpolate the result of shipping_rust_addition(30, 1) into the string displayed.

Once we compile and run it in the simulator we see we’ve succeeded at satisfying our premise:

Show a non-interactive message to the user with data from a Rust library.

iOS simulator running our application showing “Hello, world! 31”

Compiling for any iOS device should work just as well.

The next steps

This was a lot of setup for calling one simple function. Luckily this is a one-time setup. From here on you can add more functions to your Rust library, declare them in the header file and call them from Swift. If you go that route you should really start using cbindgen to generate that header file automatically for you.

This time we looked at building an iOS application directly calling a Rust library. That’s not actually how Glean works. The Glean Swift SDK itself wraps the Rust library and exposes a Swift library. In the next blog post I’ll showcase how we ship that as a Swift package.

For Glean we’re stepping away from manually writing our FFI functions. We’re instead migrating our code base to use UniFFI. UniFFI will generate the C API from an API definition file and also comes with a bit of runtime code to handle conversion between Rust, C and Swift data types for us. We’re not there yet for Glean, but you can try it on your own. Read the UniFFI documentation and integrate it into your project. It should be possible to extend the setup we’ve done here to also run the necessary steps for UniFFI. Eventually I’ll document how we did it as well.


  1. And most of these steps are user-interface-dependent and might be different in future Xcode versions. 🙁↩︎

  2. Click your project name in the tree view on the left. This gets you to the project configuration (backed by the ShippingRust.xcodeproj/project.pbxproj file). You should then see the Targets, including your ShippingRust target and probably ShippingRustTests as well. We need the former.↩︎

  3. Previously we would have built a universal library containing the library of multiple targets. That doesn’t work anymore now that arm64 can stand for both the simulator and hardware targets. Thus linking to the individual libraries is the way to go, as the now-deprecated cargo-lipo also points out.↩︎

SeaMonkeySeaMonkey 2.53.11 Beta 1 is out!

Hi All,

The SeaMonkey Project team is happy to announce the release of 2.53.11 Beta 1!

Please check out [1] or [2].


[1] – https://www.seamonkey-project.org/releases/seamonkey2.53.11b1/

[2] – https://www.seamonkey-project.org/releases/2.53.11b1


Open Policy & AdvocacyPracticing lean data is a journey that can start anywhere

“It’s not about the destination, but about the journey.” I’m sure data and privacy are the furthest from your mind when you hear this popular saying. However, after a year of virtually sharing Mozilla’s Lean Data Practices (LDP), I’ve realized this quote perfectly describes privacy, LDP, and the process that stakeholders work through as they apply the principles to their projects, products, and policies.

LDP is Mozilla’s framework for applying privacy, security, and transparency to its products and practices. It consists of three pillars:

1. Audience engagement: keeping your audience (i.e. consumers, customers, etc.) informed and empowered over their data;

2. Stay lean: striving to minimize data collection to that which delivers value (rather than collecting without a purpose); and

3. Build in security: protecting the data that is entrusted to you.

Over the past year, I’ve been able to teach Mozilla’s LDP framework and practical ways to apply it to individuals all over the world (virtually) and in a variety of industries. From artists and technologists in the United States and various European countries, to product managers and engineers in India, to startups and entrepreneurs across the African continent, we aimed to reach as many individuals as we could around the world with the message of LDP and how to apply it in various contexts. We also reached a younger audience by teaching university engineering students for two semesters, introducing privacy and LDP concepts at an earlier stage with the hope that they can take the knowledge into their own future engineering designs.

One year later, here are my seven key observations regarding how participants in our LDP presentations approach privacy and data handling, regardless of their background:

1. Privacy makes people nervous.

Privacy can be seen as complex and confusing to many, especially now with laws like the General Data Protection Regulation (GDPR) or California Consumer Privacy Act (CCPA). In my experience, people know they should care, but are intimidated and often don’t know where to begin. When the privacy jargon is removed though, the concepts become easier to understand. Simply using terms like “individuals” rather than “data subjects” or “permission” instead of “consent” helps non-privacy professionals who are learning about LDP grasp the concepts and the reasons behind why it’s important. When your biggest stakeholders are non-privacy professionals, this change in framing helps get their buy-in because they are better able to understand the value and therefore implement what is needed to address consumer privacy expectations. Without knowing anything about privacy, many have been able to walk away after just a one hour LDP discussion with actionable steps they can take in their own area, whether it’s the arts, human resources, engineering, product, marketing, or something else.

2. People don’t think privacy rules apply to “them”.

Often, it is assumed that it’s the government’s responsibility or only for large corporations or specific departments (e.g. Compliance, Legal). However, through LDP conversations our audience was able to understand that they too have a critical role to play. If they have access to personal data or leverage it for their roles, LDP can apply to help their company to build trust with their consumers.

3. LDP is not one size fits all.

Every company has its own challenges. For example, an organization may struggle with being transparent with its users, but may have really strong security practices over the data that they do have. The level of risk of a data type can also vary depending on the company. For example, business contact information for a consumer-facing organization may be lower risk than business contact information for a business-to-business organization who sees it as competitive sales information. This is also why we remind our audience to be mindful of copying and pasting a privacy policy that they see online. What one company does with data is surely going to be different from their own needs, so it’s important to understand their data and how it’s being used in their environment to ensure they can be transparent with what is actually happening.

4. Deletion of data is often overlooked.

The last step of the data lifecycle is disposal of the data. This can be the most forgotten step for many across a variety of industries. The Stay Lean pillar of LDP is a good reminder to establish data retention policies that can actually be followed to ensure data does not remain for longer than necessary.

5. LDP is adaptable across many industries, each with their own unique challenges.

University engineering students can learn and grasp the concepts as they are building out innovative tools; creatives can apply it as they design solutions to tackle big problems such as racism and bias; and organizations can use it as they design and promote their products. I have always known privacy was applicable across industries, but it was eye opening to see it in practice, especially in the arts and creative space.

6. LDP applies globally.

LDP is adaptable globally, and it’s important to understand the local challenges to maximize its benefits. The sensitivity of data — for example, a mobile phone number — may vary depending on where you are in the world. I strive to incorporate local contexts into LDP presentations, but also learn from our participants the unique challenges they experience in their various geographies and how they can use LDP concepts to tackle them.

7. LDP empowers its practitioners to have more control of their own data.

There is an appetite to understand how we as consumers can hold companies accountable. One of the biggest surprises for me came when I would field questions at the end of a presentation, and people would ask about their rights as consumers and how they can hold companies accountable. For example, people wanted to understand their rights and recourse options if companies contacted them without permission, didn’t honor their unsubscribe requests, or did something else frustrating. I teach LDP for individuals to apply it in a business context, but we are all also consumers and customers. LDP can help us better understand how our own data should be handled and improve our understanding of what organizations are doing. We can then remember how we feel about certain situations and then ensure we are doing things in a more consumer-friendly way within our organizations.

Lean Data Practices is a journey. For many there won’t be an ultimate destination because it is an iterative process. If you try to apply all the principles across your entire organization at once, you will find yourself overwhelmed and likely unsuccessful. To maximize your chance of success, my advice — which is the same advice we give when we present — is to just start somewhere. Choose one aspect of your business and focus on that, one pillar at a time. Once you’ve successfully applied the principles, go to a different business unit and do the same. Remember to review and adapt as products and business needs (or data!) change as well. You may likely never reach your destination, but you will see your company improve in its practices along the way.

In 2022 I plan on continuing to spread our message of LDP, especially on the African continent. We will also have a course launching soon for anyone to take whenever they would like, which will help us reach more people compared to live discussions. Sign up here to receive a one-time email notification when the course is ready. Join us on the journey that is LDP.

The post Practicing lean data is a journey that can start anywhere appeared first on Open Policy & Advocacy.

Open Policy & AdvocacyEuropean Parliament green-lights crucial new rulebook for Big Tech

Today the European Parliament adopted its report on the draft Digital Services Act, the EU’s flagship proposal to improve internet health. Today’s vote is a crucial procedural step on the road to bringing the draft rules to reality, and we commend Members of Parliament for their efforts.

Speaking after the vote, Owen Bennett, Senior Policy Manager at Mozilla said:

“Today we’re a step closer to a better internet. The European Parliament has had its say on the Digital Services Act, and has set out a vision to meaningfully address the harms we experience with Big Tech.

We’re glad to see Parliament give researchers and oversight bodies what they need to identify hidden harms online, especially when it comes to online advertising. Harms fester when they happen in the dark, and so meaningful transparency in the ecosystem can help mitigate them. We believe that the future of online advertising should be more private, transparent, and give more control to people. The DSA doesn’t solve everything, but it’s a crucial step forward towards a healthier advertising ecosystem.

And while transparency is crucial, we also need the tools to take action; we need robust rules to ensure companies build and operate their products more responsibly. The DSA’s risk-based approach is a thoughtful way forward – it puts the onus on companies to assess and meaningfully address the risks their products may pose to individuals and society, and nudges those with the biggest problems to make the biggest effort.

The DSA is a once-in-a-generation opportunity for the EU to set Big Tech on a better course. The stakes are high, and the outcome will likely shape how these issues are debated and legislated in other regions. We commend the European Parliament for seizing the moment.”



In December 2020 the European Commission published the draft EU Digital Services Act. The law seeks to establish a new paradigm for tech sector regulation, and we see it as a crucial opportunity to address many of the challenges holding back the internet from what it should be.

The draft law contained many thoughtful policy approaches that Mozilla and our allies have long argued for. In particular, the draft law’s provisions on advertising transparency, oversight, and a risk-based approach to content responsibility can help us advance towards a better internet. Since the draft law was published, we have been working closely with EU lawmakers to fine-tune and improve the proposal, and today’s European Parliament vote is a crucial step forward as the draft law edges towards final adoption.

The European Parliament and the EU Council (which represents EU Member State governments) must now finalise the law, and have committed to doing so by April 2022.

The post European Parliament green-lights crucial new rulebook for Big Tech appeared first on Open Policy & Advocacy.

hacks.mozilla.orgContributing to MDN: Meet the Contributors

If you’ve ever built anything for the web, you’re probably familiar with MDN Web Docs. With about 13,000 pages documenting web technologies such as HTML, CSS and JavaScript, the site has about 8,000 people using it at any given moment.

MDN relies on contributors to help maintain its ever-expanding documentation and keep it up to date. MDN is supported by organizations such as Open Web Docs, Google, the W3C, Microsoft, Samsung and Igalia (to name a few), but contributions also come from community members. These contributions take many different forms, from fixing issues to contributing code to helping newcomers and localizing content.

We reached out to four long-time community contributors to talk about how and why they started contributing, why they kept going, and what advice they have for new contributors.

Meet the contributors

MDN contributors come from all over the world, have different backgrounds, and contribute in different ways. 

Irvin and Julien’s main area of contribution is localization. They are part of a diverse team of volunteers who ensure that MDN is translated into seven different languages. (Discover here how translation of MDN content happens.)

Since the end of 2020, the translation of MDN articles has happened on the new GitHub-based platform.


Irvin, @irvinfly, volunteer from Mozilla Taiwan Community

I have been a front-end engineer for more than a decade, and a casual contributor to MDN for a long time. I check MDN all the time when writing websites, but had only made some simple contributions, like fixing typos.

In early 2020, the MDN team asked us if the zh (Chinese) locale would like to join the early stage of the localization system on Yari, the new GitHub-based platform. We accepted the invitation and formed the zh-review-team. Since then, I have been contributing to MDN every week.

My primary work is collaborating with other zh reviewers to check and review the open pull requests in both the Traditional Chinese and Simplified Chinese locales. Our goal is to ensure that all the changes to the zh docs are well done, both in file format and in translation.


Sphinx (Julien) (he/him), @Sphinx_Twitt

Most of my contributions revolve around localizing MDN content in French (translating new articles and also maintaining existing pages). Since MDN moved to GitHub, contributing also encompasses reviewing others’ contributions.

I started to contribute when, having time as a student, I joined a collaborative translation project led by Framasoft. After a few discussions, I joined a mailing list and IRC. One of the first contribution proposals I saw was about improving the translation of the MDN Glossary in French to help newcomers.

I started helping and was welcomed by the team and community at that time. One thing led to another, and I started helping to translate other areas of MDN in French.

Tanner and Kenrick are also longtime volunteers. Their main areas of activity are contributing code, solving issues in MDN repositories, as well as reviewing and assisting the submissions of other contributors.

On MDN, all users can add issues to the issue tracker, as well as contribute fixes and review other people’s fixes.


Tanner Dolby, @tannerdolby 

 I contribute to MDN by being active in the issue tracker of MDN repositories. 

I tend to look through the issues and search for one I understand, then I read the conversation in the issue thread for context. If I have any questions or notice that the conversation wasn’t resolved, I comment in the thread to get clarification before moving forward. 

From there, I test my proposed changes locally and then submit a pull request to fix the issue on GitHub. The changes I submit are then reviewed by project maintainers. After the review, I implement recommended changes. 

Outside of this, I contribute to MDN by spotting bugs and creating new issues, fixing existing issues, making feature requests for things I’d like to see on the site, assisting in the completion of a feature request, participating in code review and interacting with other contributors on existing issues.

I started contributing to MDN by creating an issue in the mdn/yari repository. I was referencing documentation and wanted to clarify a bit of information that could be a typo. 

The MDN Web Docs team welcomed my offer to resolve the issue, so I submitted a PR, which was reviewed and merged. The Yari project maintainers explained things in detail, helping me to understand that the content for MDN Web Docs lived in mdn/content and not directly in the mdn/yari source. The issue I originally opened was transferred to mdn/content and the corresponding fix was merged.

My first OSS experience with MDN was really fun. It helped me to branch out and explore other issues/pull requests in MDN repositories to better understand how MDN Web Docs worked, so I could contribute again in the future.


Kenrick, @kenrick95

I’ve edited content and contributed code to MDN repositories: browser-compat-data, interactive-examples, and yari.

My first contribution to content was a long time ago, when we could directly edit on MDN. I can no longer recall what it was, probably fixing a typo. 

My first code contribution was to the “interactive-examples” repo. I noticed that the editor had some bugs, and I found the GitHub issue. After I read the code, it seemed to me that the bug could be easily fixed, so I went ahead and sent a pull request.

Why contribute?

Contributions are essential to the MDN project. When talking about why they deem contributing to MDN a critical task, contributors underlined different facets, stressing its importance as an open, reliable and easily accessible resource for programmers, web developers and learners.

Contributions to MDN documentation and infrastructure help ensure the constant improvement of this resource.

Contributions to MDN are important because they help provide a reliable and accessible source of information on the Web for developers. MDN Web Docs being open source allows bugs to be quickly spotted by contributors and feature requests to be readily prototyped.

Building in the open creates an environment that allows for contributors from all over the world to help make MDN a better resource for everyone and that is incredible. (Tanner)

Contributions to the platform and tools that power MDN are important to enhance the user experience. (Kenrick)

Small and big contributions are all significant and have a real impact. A common misconception about contributing to MDN is that you can only contribute code, but that is not the case! 

MDN is the primary place for people to check any references on web-dev tech. As small as fixing one typo, any contribution to MDN can always help thousands of programmers and learners. (Irvin)

Contribution to localization allows learners and developers to access this resource in languages other than English, making it more accessible. 

Especially for those who struggle with reading English docs, localization enables access to the latest and most solid knowledge. (Irvin)

Contributing to localization helps beginners on the Web find quality documentation and explanations so that they can build sites, apps and so on without having to know English. MDN is a technical reference, but also a fantastic learning ground to educate newcomers. From basic concepts to complex techniques, language should not be a barrier to building something on the Web. (Julien)

Contributing is a rewarding experience

We asked contributors why they find contributing to MDN a rewarding experience. They told us that contribution is a way to help others, but also to learn new things. They spoke about the relationship that volunteers build with other people while contributing, and the possibility to learn from and help others. 

The part of contributing that I enjoy most is providing a fix for something that positively impacts the experience for users browsing MDN Web Docs. This could be an update to documentation to help provide developers with accurate docs, or helping to land a new feature on the site that will provide users new or improved functionality. Before I started contributing to MDN, I referenced MDN Web Docs very often and really appreciated the hard work that was put into the site. To this day, I’m motivated to continue helping make MDN Web Docs the best resource it can be through open source contributions. (Tanner)

I enjoy finding different points of view on how to achieve the same things. This is natural, since the people I interact with come from different parts of the world, and we are all influenced by our local cultures. (Kenrick)

The part of contributing I most enjoy is definitely when I’m learning and discovering from what I’m translating (…). My best memory of contributing to MDN is that I had the great privilege of spending an evening watching a sunset of lava and sea with people related to MDN for whom I have the deepest esteem. (Julien)

The journey of contribution itself is important. The support of MDN maintainers and the exchange of ideas is essential. Contribution does not happen in a silo but is a collaborative effort between volunteers and the MDN team.

My best memory of contributing to MDN would have to be the journey of creating the copy-to-clipboard functionality for code snippets on MDN Web Docs. I remember prototyping the feature in mdn/yari locally and then beginning to see it come to life really quickly, which was wonderful to see.

The code review process for this feature was such a joy and incredibly motivating. Each step of the feature was tested thoroughly and every win was celebrated.

Each morning, I would wake up and eagerly check my email and see if any “Re: [mdn/yari]” labelled emails were there because it meant I could get back to collaborating with the MDN Web Docs team. This contribution really opened my eyes to how incredibly fun and rewarding open source software can be. (Tanner)

My best memory of contributing to MDN was working on https://github.com/mdn/yari/pull/172. The change in itself wasn’t big, but the solution changed several times after lengthy discussion. I’m amazed at how open the maintainers are in accepting different points of view to achieve the end goal. (Kenrick)

Contributions to be proud of

All contributions are important, but some hold a special place with each volunteer.

The contribution that I’m most proud of is adding copy-to-clipboard functionality to all code snippets for documentation pages on MDN Web Docs. I use this utility very often while browsing pages on MDN Web Docs and seeing a feature I helped build live on the site for other people to use is an amazing feeling.

This contribution was something I wanted to see on the site and after discussing the feature with the Yari team, I began prototyping and participating in code review until the feature was merged into the live site. This utility was one of the first “large” feature requests that I contributed to mdn/yari and is something I’m very proud of. (Tanner)

The contribution I am most proud of is having the HTML, CSS, and JavaScript sections complete and up to date in French in 2017 after being told this would be impossible. :) More recently, I helped rebuild tools for localizers on the new MDN platform with a tracking dashboard. (Julien)

Kenrick was most proud of adding a feature that marks the page you are looking at in the sidebar. This change makes a significant difference for visual learners. 

It was a simple change, but I felt that this UX improvement is important because it serves as a guide for readers, showing which documents are related to the one they are reading.

Getting started 

There are many ways to contribute to MDN! Our seasoned contributors suggest starting with reporting issues and trying to fix them, following the issue trackers, and getting familiar with GitHub. Don’t be afraid to ask questions or to make mistakes; there are people who will help you and review your work.

Go at your own pace; don’t hesitate to ask questions. If you can, try to hack on things to fix the issues you encounter on a project. If you are eager to learn about the Web, consider contributing to MDN as a way into open source. (Julien)

If you become aware of a bug in any MDN doc (such as a typo), you are welcome to fix it directly by clicking the “Edit on GitHub” button. The review team will ensure it’s good, so you don’t need to worry about making mistakes. (Irvin)

From these first steps, contributors can then progress to more difficult issues and contributions.

Don’t be afraid of reading code. Pick up any issue from GitHub, and you can easily start contributing code! (Kenrick)

My advice for new contributors, or those getting started with open source, is to get familiar with the project they wish to contribute to and then begin staying up to date with the issue tracker.

Start being active in the project by looking through issues and reading through the comments; this is a sure-fire way to learn about the project. If there is something that you aren’t ready to contribute to but want to have a conversation about, drop a comment in the issue thread or create a discussion in the repository; it’s a great way to inspire conversation about a topic.

Lastly, understanding version control software like Git is recommended for those considering starting to contribute to open source software. Be open to helping in any way you can when first getting started in open source. I started small with documentation fixes on MDN Web Docs and then gradually worked my way into more complex contributions as I became more familiar with the project. (Tanner)
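For readers new to Git, the local half of that contribution workflow can be sketched in a few commands. This is a simplified, hypothetical example — the repository, file, and branch names are made up, and the fork and pull-request steps happen on GitHub itself — using a throwaway repo so the commands are runnable anywhere:

```shell
# Simulate a docs repo containing a typo (a stand-in for a clone of your fork).
repo=$(mktemp -d) && cd "$repo" && git init -q
git config user.email "you@example.com" && git config user.name "You"
echo "Arrray.prototype.map()" > doc.md
git add doc.md && git commit -qm "initial docs"

# 1. Create a topic branch for your fix.
git checkout -qb fix-array-typo

# 2. Make the fix, stage it, and commit with a descriptive message.
sed -i.bak 's/Arrray/Array/' doc.md && rm doc.md.bak
git add doc.md && git commit -qm "Fix typo: Arrray -> Array"

# 3. In a real contribution you would now push the branch to your fork
#    and open a pull request for the maintainers to review:
#    git push origin fix-array-typo
```

Once the pull request is open, reviewers comment on it, you push follow-up commits to the same branch, and a maintainer merges it — exactly the review loop the contributors describe above.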

If you want to start contributing, please check out these resources:

If you have any questions, join the Matrix chat room for MDN.

The post Contributing to MDN: Meet the Contributors appeared first on Mozilla Hacks - the Web developer blog.

SeaMonkeyHappy New Year!.. well.. nearly

Hi everyone!

On behalf of the SeaMonkey Project, I’d like to wish everyone a Happy, Healthy, Safe and Prosperous New Year!

The past year has been, like the year before and the year before that, quite challenging for all. I had hoped to write an essay on this here, but it’s safe to say it’s not really necessary.

So all in all, I personally would like to wish everyone a Happy, Healthy, and Safe New Year. I hope this coming year will be an improvement over the past few and that the crazy conflicts around the world and in our society will lessen. This world really doesn’t need additional strife. The pandemic is enough to drive everyone mad.

To my fellow SeaMonkey devs, I’ll strive even harder to get my stuff together. The infrastructure should’ve been ready eons ago. I apologize for the delay.

Best Regards,



hacks.mozilla.orgHacks Decoded: Sara Soueidan, Award-Winning UI Design Engineer and Author

Welcome to our Hacks: Decoded Interview series!

Once a month, Mozilla Foundation’s Xavier Harding speaks with people in the tech industry about where they’re from, the work they do and what drives them to keep going forward. Make sure you follow Mozilla’s Hacks blog to find more articles in this series and make sure to visit the Mozilla Foundation site to see more of our org’s work.


Meet Sara Soueidan!

Sara Soueidan is an independent Web UI and design engineer, author, speaker, and trainer from Lebanon.

Sara has worked with companies around the world, building web user interfaces and design systems, and creating digital products that focus on responsive design and accessibility. She’s worked with companies like SuperFriendly, Herman Miller, Khan Academy, and has given workshops within companies like Netflix and Telus that focus on building scalable, resilient design.

When Sara isn’t offering keynote speeches at conferences (she’s done so a dozen times) she’s writing books like “Codrops CSS Reference” and “Smashing Book 5.” Currently, she’s working on a new course, “Practical Accessibility,” meant to teach devs and designers ways to make their products accessible.

In 2015, Sara was voted Developer of the Year in the net awards, and shortlisted for the Outstanding Contribution of the Year award. She also won an O’Reilly Web Platform Award for “exceptional leadership, creativity, and collaboration in the development of JavaScript, HTML, CSS, and the supporting Web ecosystem.”

We chatted with Sara about front-end web development, the importance of design and her appreciation of birds.

Where did you get your start? How did you end up working in tech?

I took my first HTML class in eighth grade. I instantly fell in love with it. It just made sense; and it felt like a second language that I found myself speaking fluently. But back then, it was just another class. As I continued my journey through high school, I considered architecture as a major. I never thought I’d major in anything even remotely related to tech. I always thought I’d choose a career that had nothing to do with computers. In fact, before choosing computer science as a major, I was preparing to study architecture in the Faculty of Arts.

Then, life happened. A series of events had me choosing CS as a major. And even after I did, I didn’t really think I’d make a career in tech. I spent 18 months after college pondering what I could do for a living with a CS major in Lebanon, but I didn’t find my calling anywhere.

My love for the web was rekindled when someone suggested I learn web development and try making websites for a living. The appeal of that was two-fold: I’d get to work remotely from the comfort of my home, and I’d get to be my own boss, with full control over my time and the work that I choose.

After a few weeks of learning modern HTML and CSS, and dipping my feet into JavaScript, I was hooked. I found myself spending more time learning and practicing. Codepen was new back then, and it was a great place to do quick code exercises and experiments. I also created a one-page Web site — because if you’re going to work freelance and accept work requests, you gotta have that!

As I continued learning and experimenting for a few months, I started sharing what I learned as articles on a blog that I started in 2013. A few weeks after I published my first article, I got my first client request to create the UI for a Facebook-like Web application. And over the course of the first year, I got one small client project after another.

My career really kicked off though in 2014. By then, I was writing more, getting more client work, and writing a CSS reference for Codrops. Conference speaking invitations started flooding in after I delivered my first talk at CSSConf in 2014. I gave my first workshop in LA in 2015. And I have been doing what I do now since.

I am grateful things didn’t work out the way I wanted them to after high school.

You’ve been programming for a while now, you’ve co-authored a book about the craft, you’ve created guides like the Codrops CSS Reference — what drives you?

A thirst for knowledge and a craving for variety in work. I don’t think I’d be inspired enough to do any kind of work that doesn’t satisfy both. I also need to feel like I’m doing something meaningful, like helping others. And I’ve been able to fulfill all of these needs in this field. That’s why I fell in love with it.

Being independent, I have full control over my time and the type of work I spend it on. While building websites is my main work and source of income, I do spend a large portion of my time switching between writing, editing, giving talks, running workshops (in-house and at events), making courses (this one’s new!) and working on personal projects.

Everything I do complements one another: I learn, to write, to teach; I code, to write, to speak; I code, to learn, to share. It’s a wonderful circle of creative work! This variety helps keep the spark alive, and helps me rekindle my passion for the web even after frequent burnouts.

I like that I must keep learning for a living! And that I get to also teach (another passion and — dare I say — talent of mine) as part of my job. I teach through writing, through speaking, through running workshops, and even through direct collaboration with designers and engineers on client projects.

I always think that even if I end up changing careers, I would still make some time to fiddle with code and make web projects on the side of whatever else I’d be doing for a living.

When it comes to front-end versus back-end versus full stack, you seem to be #TeamFrontEnd. What is it about front-end web and app development that calls your name (more so than back-end)?

I love working at the intersection of design and engineering! This is the area of the front end typically referred to as “the front of the front end.” It is the perfect sweet spot between design and engineering. It stimulates both parts of my brain, and keeps me inspired and challenged — a combination my brain needs to stay creative.

I find building interfaces fascinating. I love the fact that the interfaces I build are the bridge between people and the information they access online.

That comes with great responsibility, of course. Building for people is not easy because people are so diverse and so are the ways they access the Web. And it’s the interfaces they use that determine whether they can!

It is our responsibility as front-end developers and designers to ensure that what we create is inclusive of as many people as possible.

While this may sound intimidating and maybe even scary, I find it inspiring. It is what gives more meaning to what I do, and what pushes me to keep learning and trying to do better. The front of the front end is where I found my sweet spot: a place where I can be challenged and inspired.

A couple of years ago, I was feeling this so much that I shared that moment on Twitter. Among the many replies I got, this quote by Douglas Adams stuck with me:

“We all like to congregate, at boundary conditions. Where land meets water. Where earth meets air. Where body meets mind. Where space meets time.”

What do you love about coding? What’s your least favorite part?

My favorite part is the satisfaction of seeing my code “come to life”: the idea that I can write a few lines of code that computers understand, and that so many people can consume and interact with using various technologies, present and future.

I also appreciate the short feedback loop in modern code environments: you write code or make changes to existing code, and see the results immediately in the browser. It is almost magical. And who doesn’t like a little bit of magic in their lives?

My least favorite part, however, is that it requires so little movement. There is life in movement! One of my favorite yoga teachers once said: “Once you stop moving, you start dying.” And I felt that. Spending so much time in front of a screen is very taxing.

Regular exercise is crucial for my ability to continue doing what I do. But I still sometimes feel like I need more movement during my work sessions. So I got a standing desk a couple of years ago.

Switching between standing and sitting gives my body short “breathers” throughout the day and allows for better blood flow. A balanced lifestyle is crucial to maintaining good health when you spend as much time in front of a screen. Try to move, drink lots of water, and go outside more.

You’re based out of Lebanon. What’s something many folks may not realize about the tech scene there?

I know this isn’t the answer you’re expecting, but I think what many people don’t realize about the tech scene here is how challenging it is! In Lebanon, we live in a country that has a massive, serious, and ongoing power crisis.

This crisis, as you can imagine, affects almost every facet of our lives, including the digital. You need power to do work. And you need an internet connection to do work. We’ve always had problems with internet speed. And with the fuel shortage, full power outages, and reception problems, having a reliable connection is less likely than before.

But there are some incredibly talented designers and developers still making it work through this all. Living in Lebanon brings daily challenges, but being challenged in life is inevitable.

I try to look on the bright side of everything. Working on a slow connection has its upsides, you know. You learn to appreciate performance more and strive to make better, faster Web sites. You appreciate tech like Service Worker more, and learn to use it to make content available offline. If anything, living here has made many of us more resilient to change, and more creative with our solutions in the face of crisis.

How do you find supportive (tech) communities in Lebanon? And if not there, where does your community live?

I don’t. But that’s mainly because I live in an area with no active tech community. And I live far from where any tech meetups happen. I also don’t know any front-end focused developers in Lebanon. I’m sure they exist; it’s just that, being the introvert that I am, I don’t happen to know any. So my community is mainly online — on Twitter, and in a couple of not-very-busy Slack channels.

Ok, random question. We’ve gotta know about the birds. You’ve raised at least a dozen. What’s the story there?

It all started back in 2009, I think. A close friend had, for whatever reason, decided that I might enjoy taking care of baby birds. So, he got me a baby White-spectacled Bulbul (my favorite bird species currently), with all the bird food I needed to start. He taught me what I needed to know to take care of it. And he told me that, when it grows up, it won’t need to live in a cage because I would be its home. I had no idea back then how much I’d fall in love with that bird.

I’ve raised 10+ birds since. Not a single one of them was kept in a cage. I would raise them and train them so that, when they grew up, they would fly out in the morning, making friends and living like they were meant to, and return home before the end of the day.

They would drink from my tea cup, share my sandwiches, eat out of my plate (mainly rice) and spend most of the day either sitting on my shoulder and head, or napping on my arm. Friends have always told me that I was like a Disney princess with my birds. I’m not sure about that, but it did sometimes feel that way. x)

Here’s a photo of my last two baby birds from a couple of years ago. I took them out on a car ride to “explore the outside world” for the first time.

They just sat there chilling on my arm, as they watched the world (cars, mainly) pass by.

Years after my friend got me my first bird, I asked him why he did, and whether he knew about the connection that was going to happen. His answer was short. He said: “You have the heart of a bird. I knew you’d love creatures that are like you.”

Another random question: In an interview, you mentioned mainly working in the morning (6am-10am), and slowing down after lunch. You’re like me! How important is a flexible work day to your workflow? (And how do we convince more people that 9-to-5 work isn’t realistic for everyone? How do we normalize hard work in the morning, meetings and calls in the afternoon?) 

I can’t imagine myself working on a 9-to-5 schedule! That’s actually one of the few reasons I never took a full-time job. As I mentioned earlier, flexibility was a key factor in choosing a freelance career.

I am an early bird. On a typical day, I wake up no later than 5:30 in the morning. So my day starts very early. My brain’s information retention powers are at their highest early in the morning. So I get my best work done during that time. With my brain firing on all cylinders, I make quite a bit of headway with the day’s tasks. What makes this time even more productive is the fact that there are no expectations, nor interruptions: no emails, no client communication, not even any IRL interruptions.

Starting early, and knowing that most people are only really productive for about 4.5 hours a day, I believe it makes a lot of sense to slow down after lunch.

I realize this is easier said than done, though. Being freelance gives me this flexibility, but I realize others working full time may not have it. But with more companies going fully or partially remote now, I think more people will hopefully get to choose when they work during the day.

You’re working on an accessibility course, can you talk a bit about why you decided to develop this course and the importance of creating more accessible web interfaces?

Before COVID-19 hit, I traveled to run workshops at conferences and in-house at companies. The lockdown had us all, well, locked down, so that was put on temporary hold.

Over the years, I collected some amazing feedback on my accessibility workshop from former attendees. I knew I had useful content that many others would find helpful.

As many events went online, running the workshop online was the sensible plan B. But the fact that my Internet was unreliable made that a little risky — I wouldn’t want my internet connection to fail in the middle of an online workshop! So that plan was put on hold too.

On the other hand, working with designers and engineers on client projects made me realize that there was a big accessibility knowledge gap in most companies I’ve worked with. I love to teach teams I work with about accessibility at every chance I get, but there’s only so much you can share in Zoom meetings and Slack channels. In-house workshops were not always an option, and online training was not feasible at the time.

And last but not least, I noticed that there is quite a bit of misinformation and bad advice around accessibility circulating in the web community. You can cover a good amount of information in articles, but I already had a good bunch of content from the accessibility workshop that I could use as a foundation for a more comprehensive series of teaching materials — sort of like a mini curriculum.

By developing this course I am scratching my own itch. All the reasons mentioned above had me wishing I had created a course that I could share around, especially with client teams, and then with members of the community. So with the time I have in between client projects and speaking, I started working on it!

The course is called Practical Accessibility, and is still under active development, coming in 2022. The course content will be much more comprehensive than the workshop’s, covering much more ground, and will hopefully be a great foundation for anyone wanting to learn how to create more accessible websites.

Of everything you worked on, what’s your favorite?

Out of all the projects I’ve worked on, the one that stands out most for me is a project for Herman Miller that I collaborated on with SuperFriendly. The project was under NDA, and was discontinued a few weeks after COVID-19 hit and the world realized it was going to change moving forward; so I, unfortunately, don’t have any details to share about the project itself.

But what made this opportunity so special is that this was the first and only project that I was involved in from the very start — from early kick-off meetings and ideation, through research and user testing, UX and UI design, and development. I learned so much working with an amazing group of SuperFriends. The trip to the Herman Miller showroom in Atlanta, where we ran a workshop with the team at Herman Miller, was the last trip most of us took before the big lockdown.

Herman Miller is a furniture company. And what many people don’t know about me is how much I love interior design. I even took an interior design course last year! So, on this project, I got to (1) work with an amazing team (who I get to call my friends now 💕), (2) on a creative project, (3) for a company specializing in making modern furniture, (4) in the field of interior design! How could I not love that?!

The cherry on top of the cake was that I got a generous discount which I used to upgrade my office chair and desk to an ergonomic Herman Miller chair and standing desk. So even my body and health were thankful for this opportunity!

Photo: Sara’s home office, upgraded with an ergonomic Herman Miller chair and standing desk.

Final question: What would you tell folks learning a programming language or aspiring to be a front-end developer, or any sort of developer? What advice would you give them?

Learn the fundamentals — HTML, accessibility, CSS, and just enough vanilla JavaScript to get started. Build upon those skills with tools and frameworks as your work needs.

Don’t get intimidated or overwhelmed by what everybody else is doing. Learn what you need when you need it. And practice as much as you can. Practice won’t make you perfect because there is no Perfect in this field, but it will make you better!

This probably should have been the first piece of advice though: Put the user first. User experience should trump developer convenience. Once you let that guide your work, you’re already halfway to being a better developer than many others.

Oh and last but certainly not least: Create a personal website! Own your content. And share your work with the world!

You can keep up with Sara’s work by following her blog on her personal site here. Stay tuned for more Hacks Decoded Q&A’s!

The post Hacks Decoded: Sara Soueidan, Award-Winning UI Design Engineer and Author appeared first on Mozilla Hacks - the Web developer blog.

SeaMonkey: SeaMonkey is out!


Firstly, on behalf of the SeaMonkey Project, I’d like to wish everyone a very Merry Christmas and a Happy, Prosperous and Healthy New Year.

Secondly, and the main reason for this post, the SeaMonkey Project is pleased to announce the release of SeaMonkey

Please check out [1] and [2].

[1] – https://www.seamonkey-project.org/releases/seamonkey2.53.10.2/

[2] – https://www.seamonkey-project.org/releases/

Finally, a humble apology to all affected by my goof-up. I had enabled the updates before the actual files were available, which confused a lot of people. My apologies for the confusion. I had some trouble with this release, as I made some changes that didn’t work out; while this wasn’t the reason for the update gaffe, it delayed the release.

Best Regards,