Cameron Kaiser: Apple desktop users screwed again

Geez, Tim. You could have at least phoned in a refresh for the mini. Instead, we get a TV app and software function keys. Apple must be using the Mac Pro cases as actual trash cans by now.

Siri, is Phil Schiller insane?

That's a comfort.

(*Also, since my wife and I both own 11" MacBook Airs and like them as much as I can realistically like an Intel Mac, we'll mourn their passing.)

Support.Mozilla.Org: What’s Up with SUMO – 27th October

Hello, SUMO Nation!

How are you doing today? Here, the days are getting shorter and the nights are getting colder… But that’s not a problem, since we have a hot new round of updates from our side :-) Ah, I forgot to mention – you’re all our main guest stars, as usual!

Welcome, new contributors!

If you just joined us, don’t hesitate – come over and say “hi” in the forums!

Contributors of the week

Don’t forget that if you are new to SUMO and someone helped you get started in a nice way you can nominate them for the Buddy of the Month!

SUMO Community meetings

  • LATEST ONE: 26th of October – you can read the notes here and see the video at AirMozilla.
  • NEXT ONE: happening on the 26th of October!
  • If you want to add a discussion topic to the upcoming meeting agenda:
    • Start a thread in the Community Forums, so that everyone in the community can see what will be discussed and voice their opinion here before Wednesday (this will make it easier to have an efficient meeting).
    • Please do so as soon as you can before the meeting, so that people have time to read, think, and reply (and also add it to the agenda).
    • If you can, please attend the meeting in person (or via IRC), so we can follow up on your discussion topic during the meeting with your feedback.

  • A training with Tyler from the Respond team took place yesterday – if you missed it, email Sierra to get a scheduled training date and get on board as soon as you complete it!
  • Army of Awesome – the tool, not the people! – will be going away in about 3 weeks. We salute all those who have kept the social fire going strong using AoA!
  • One more thing – if you noticed the shoutout to Jhonatas above, and know someone else who deserves one – let Sierra know!

Support Forum

Knowledge Base & L10n


…and that’s it for now, dear readers! If you made it all the way here, you should enjoy this complimentary video of VR frolicking by members of the Firefox team.

We’ll see you around! <3 you all!

Bill McCloskey: Mozilla’s Quantum Project

A few months ago, Mozilla began a project to make significant changes to our Gecko rendering engine to make it faster and more reliable. The project was just announced. In this post I will fill in some technical details. Quantum was originally conceived to integrate technology from our Servo research browser into Gecko. While the project has evolved somewhat since then, Servo has heavily influenced the design. Like Servo and Rust, the unifying themes of Quantum are increased concurrency and parallelism, reduced latency, and better reliability.

Quantum is roughly divided into four distinct projects.

The Quantum CSS project will replace Gecko’s CSS engine with the one from Servo. Servo’s engine is heavily parallel while Gecko’s is not.

The Quantum DOM project will make Gecko more responsive, especially when there are a lot of background tabs open. When Quantum DOM is finished, JS code for different tabs (and possibly different iframes) will run in separate cooperatively scheduled threads; the code for some background tabs will never run at all.

Quantum Compositor moves Gecko’s compositor into its own process. Since graphics driver instability is a major source of Firefox crashes, we expect that moving code that interacts with the GPU into its own process will make Firefox more stable.

Finally, Quantum Rendering will replace Gecko’s graphics subsystem with the one from Servo, called WebRender. Servo uses the GPU more effectively than Gecko does, driving it more like a game would than a browser.

These projects are in varying stages of completeness. Quantum Compositor is fairly far along while Quantum Rendering is just getting started. There’s still a good deal of uncertainty about the projects. However, I wanted to write about Quantum DOM, the project that I’m working on.

Quantum DOM

Quantum DOM is primarily designed to make web content run more smoothly and to reduce “jank”–annoying little hangs and hiccups that make for a poor browsing experience. A lot of jank comes from background tabs and advertisements. You can find measurements in bug 1296486. An easy way to reduce this jank is to run each tab as well as each frame within the tab in its own process. Since the OS schedules processes preemptively, background frames can always be interrupted to handle work in the foreground.

Unfortunately, increasing the number of content processes also increases memory usage. In some preliminary experiments we’ve done, it looks unlikely that we’ll be able to increase the number of content processes beyond 8 in the near future; any more processes will increase memory usage unacceptably. Eight content processes are certainly better than one, but many users keep more than 8 tabs open. Mozilla developers are working to reduce the memory overhead of a content process, but we will never be able to reduce the overhead to zero. So we’re exploring some new ideas, such as cooperative scheduling for tabs and tab freezing.

Cooperative Scheduling

Quantum DOM is an alternative approach to reduce jank without increasing memory usage. Rather than putting frames in different processes, we will put them in separate threads. And rather than using OS threads, we will use cooperatively scheduled user-space threads. Threads are nice because they can share address space, which makes it much easier to share data. The downside of threads is that all this shared data needs to be protected with locks. Cooperative scheduling allows us to switch between threads only at “safe” points where the shared state is known to be consistent, making locks unnecessary (or much less necessary).

Firefox already has a number of natural safe points built in. The JavaScript engine is able to pause at function call entry and loop heads in order to run other code. We currently use this functionality to implement the “slow script dialog” that allows users to stop infinite loops. However, the same mechanism can be used to switch threads and start executing code from a different iframe.

We have already begun experimenting with this sort of switching. Bug 1279086, landed in Firefox 51, allows us to pause JavaScript execution so that we can paint during a tab switch. Telemetry shows that, as a consequence, the number of long (> 300ms) tab switches has been cut in half. We’re hoping to use the same facility in the near future so that we can paint during scrolling even when a background page is running JavaScript.

Ultimately, though, we want to run each frame in its own cooperatively scheduled thread. That will allow us to pause the execution of a background tab in order to process tasks (like input events or animations) in a foreground tab. Before we can do that, we need to “label” all the tasks in our event queue with the iframe that they correspond to. That way we can run each task on the thread that it belongs to.
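
As a rough illustration of what labeling buys us – this is a sketch of the idea in TypeScript with made-up names, not Gecko code – a labeled task queue lets a cooperative scheduler prefer foreground work at every safe point:

```typescript
// Illustrative sketch only (not Gecko's implementation): a task queue where every
// task carries a DocGroup label, so the scheduler can prefer foreground work and
// let background groups wait until a later safe point.
type DocGroupId = string;

interface LabeledTask {
  group: DocGroupId; // the set of same-origin, inter-reachable frames this task touches
  run(): void;
}

class CooperativeScheduler {
  private queue: LabeledTask[] = [];
  constructor(private foregroundGroups: Set<DocGroupId>) {}

  post(task: LabeledTask): void {
    this.queue.push(task);
  }

  // Called at safe points (between tasks, or when long-running JS hits an interrupt check).
  runNextTask(): void {
    // Prefer tasks belonging to foreground groups; background tasks simply wait.
    const index = this.queue.findIndex(t => this.foregroundGroups.has(t.group));
    const task = index >= 0 ? this.queue.splice(index, 1)[0] : this.queue.shift();
    task?.run();
  }
}
```

Without the labels, the scheduler has no way to tell which queued tasks belong to the foreground page, which is why the labeling work has to come first.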

Labeling tasks and prioritizing them is a big undertaking. Michael Layzell is building a “DocGroup” abstraction so that same-origin frames that can talk to each other, either via window.parent or window.opener, will run on the same thread (bug 1303196). Andreas Farre is implementing a mechanism to schedule low-priority tasks like GC (bug 1198381). Olli Pettay and Thinker Li are creating a similar mechanism for high-priority tasks like input events (bug 1306591). As our infrastructure for labeling improves over the next few weeks, task labeling will be a great place for contributors to help with the project. We’ll be posting updates to the dev-platform mailing list.

Eventually, we may want to run frames in their own preemptively scheduled OS threads. The advantage of OS threads over user-space threads is that they can take advantage of multiple cores. However, our user-space threads will still be split across 4 or 8 content processes, and most users don’t have more cores than that. As the project progresses, we will evaluate whether OS threads make sense for us.

As an addendum, I would be remiss if I didn’t point out that Opera pioneered a cooperatively scheduled architecture. They did all the good stuff first.

Tab Freezing

In our discussions about how to schedule background work, we began to wonder why we even run it at all. Firefox already throttles some background work, like setTimeout tasks, to run at most once per second. We’ll most likely do even more throttling in the future. But could we be even more aggressive and completely freeze certain background tabs?

Freezing isn’t always possible since some background tabs need to run. For example, you want Pandora to continue to play even when you switch away from it. And you want Gmail to notify you that an email has arrived even when the Gmail tab isn’t selected. But if the browser could identify such “important” tabs, we could freeze the rest. We’re still working out the heuristics that we’ll need to identify important background tabs. Ehsan Akhgari is doing some exploratory work and we’re hoping to land some experiments on Nightly soon.
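
To make that concrete, the decision might boil down to something like the sketch below; the signals and threshold here are purely illustrative assumptions, not the heuristics Firefox will actually use.

```typescript
// Purely illustrative heuristics, not Firefox's actual criteria for freezing tabs.
interface BackgroundTab {
  isPlayingAudio: boolean;            // e.g. Pandora should keep playing
  hasNotificationPermission: boolean; // e.g. Gmail may want to alert about new mail
  hasOpenWebSocket: boolean;          // probably waiting on live data
  secondsInBackground: number;
}

function canFreeze(tab: BackgroundTab): boolean {
  if (tab.isPlayingAudio || tab.hasNotificationPermission || tab.hasOpenWebSocket) {
    return false;                     // looks "important"; keep it running (perhaps throttled)
  }
  return tab.secondsInBackground > 60; // only freeze after a grace period
}
```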

Chris H-C: One-Year Moziversary

Holy crap, I totally forgot to note on the 19th that it had been a full year since I started working at Mozilla!

In that year I’ve done surprisingly little of what I was hired to do (coding in Gecko, writing webpages and dashboards for performance monitoring) and a whole lot more of stuff I’d never done before (data analysis and interpretation, data management, teaching, blogging, interviewing).

Which is pretty awesome, I have to say, even if I sorta-sleepwalked into some of these responsibilities.

Highlights include hosting a talk at MozLondon (Death-Defying Stats), running… three? iterations of Telemetry Onboarding (including a complete rewrite), countless phone screens for positions within and outside of the team, being a team lead for just long enough to help my two reports leave the team (one to another team, the other to another company :S), and becoming (somehow) a voice for Windows XP Firefox users (expect another blog post in that series, real soon).

For my New MozYear Resolutions, I resolve to blog more (I’ve certainly slacked on the blogging in the latter half of the year), think more, and mentor more. We’ll see how that goes.

Anyway, here’s to another year!


Air Mozilla: Connected Devices Weekly Program Update, 27 Oct 2016

Weekly project updates from the Mozilla Connected Devices team.

Soledad Penades: Talking about Servo in Hackference Birmingham 2016

I visited Birmingham for the very first time last week, to give a talk at Hackference. Apparently the organiser swears every time that it will be the last Hackference, and it has been “the last one” for the last four editions. Teehehe!

I spoke about Servo. They didn’t record the talks, but I made a screencast, so here it is:

📽 Here are the slides, if you want to follow along (or maybe run the demos!). The demos come from the servo-experiments repository, if you want to try more demos than the ones I showed.

If you watched this talk at ColdFrontConf, this one has more clarifications added to it, so complicated aspects should be easier to follow now (especially the explanations about layout calculations, optimisations, parallelisation and work stealing algorithms).

People enjoyed the talk!

Someone even forked the dogemania experiment to display other images:

And Martin got so intrigued about Servo, he even sent a PR!

I didn’t get to see much of the city, to be honest, but two things caught my attention:

a) it was quite empty even during ‘rush hours’
b) people were quite calm and chill

That’s perhaps why Jessica Rose is always saying that Birmingham is The Absolute Best place. I will have to find out some other time!

A very funny thing / activity / experiment happened in the slot before my talk. It was a reenactment of the BBC’s Just A Minute show, which I have never watched in my life. Essentially you have 4 participants and each one has a little device to “stop the show” when the active participant messes up. The active participant has to speak for a minute on a given topic, but they cannot hesitate or repeat words, so it starts getting challenging! This was organised and conducted by Andrew Faraday, who also helped run the Web Audio London meetup a while ago and is always an interesting and nice person to talk to.

So this, this was hilarious beyond anything I expected. I guess because I didn’t expect anything funny in particular, and also because I didn’t have any preconception of any of the participants being a “funny person” per se, so the whole comedy came from the situation and their reactions. It had some memorable moments, such as Terence Eden’s “unexploded item in bagging area” (in relation to the Samsung Galaxy Note 7 exploding fiasco, plus the very annoying voice that anyone who’s ever used a self-service checkout till in the UK will recognise 😏).

After so much laughing, I was super relaxed when the time for my talk came! Every conference should have Andrew doing this. It was excellent!

Other interesting talks and things:

  • Felienne Hermans’ talk on machine learning and bridge-playing AIs built with DSLs written in F# – I basically don’t know much about any of these subjects, so I figured this could be an interesting challenge. You can watch a recording of this talk from another conference, if intrigued.
  • Martin Splitt’s (aka geekonaut) talk on WebGL – if you have the chance to watch it, it will be quite informative for people who want to get started with WebGL and learn what it can do for them
  • I’m certainly not a Web Audio beginner, but I tend to watch those talks anyway as I am always curious to see how other people present on Web Audio. Hugh Rawlinson‘s presentation on Web Audio had a few interesting nuggets like Audiocrawl, which showcases the most interesting things in Web Audio. He also worked on meyda, which is a library to do feature detection using Web Audio.
  • Jonathan Kingsley gave one of the most depressing and hilarious talks I’ve seen in a long time. IoT is such a disaster, and the Dyn DDoS attack via IoT devices, just a couple hours after this talk, was so on point, it almost seemed deliberate.
  • Finally Remy declared his love for the web and encouraged everyone to get better and be better to others – and also stressed that you don’t need to use all the latest fashions in order to be a web developer. It’s good when renowned speakers like Remy admit to not liking frameworks, despite also seeing their strengths. A bit of balance never hurt anyone!

The conference had a very low-key tone; let’s say that it was a bit “organise as you go”, but due to the small scale of the conference that wasn’t much of a problem. As I mentioned before, everything was pretty chill and everyone was very approachable and willing to help you sort things out. It’s not like I had a terrible problem, anyway: my biggest problem was that my badge had been temporarily lost, but no one told me off for not wearing a badge while inside the venue, and I eventually got it, heheh. So yeah, nice and friendly people, both attendees and organisers.

I also liked that everything was super close to the train station. So there was no need for additional transportation, and we could use the many food places in the station to have lunch, which was super convenient.

Oh, and Jessica as MC was the best. I really enjoyed the introductions she prepared for each speaker and the way she led the time between talks, and she was really funny – unlike presenters who think they are funny (but aren’t).

If you have the chance, attend the next Hackference! It might be the last one! 😝

Here are other conference write ups: from Dan Pope and from Flaki (who stayed for the hackathon during the week-end).


David Bryant: A Quantum Leap for the Web

Over the past year, our top priority for Firefox was the Electrolysis project to deliver a multi-process browsing experience to users. Running Firefox in multiple processes greatly improves security and performance. This is the largest change we’ve ever made to Firefox, and we’ll be rolling out the first stage of Electrolysis to 100% of Firefox desktop users over the next few months.

But, that doesn’t mean we’re all out of ideas in terms of how to improve performance and security. In fact, Electrolysis has just set us up to do something we think will be really big.

We’re calling it Project Quantum.

Quantum is our effort to develop Mozilla’s next-generation web engine and start delivering major improvements to users by the end of 2017. If you’re unfamiliar with the concept of a web engine, it’s the core of the browser that runs all the content you receive as you browse the web. Quantum is all about making extensive use of parallelism and fully exploiting modern hardware. Quantum has a number of components, including several adopted from the Servo project.

The resulting engine will power a fast and smooth user experience on both mobile and desktop operating systems — creating a “quantum leap” in performance. What does that mean? We are striving for performance gains from Quantum that will be so noticeable that your entire web experience will feel different. Pages will load faster, and scrolling will be silky smooth. Animations and interactive apps will respond instantly, and be able to handle more intensive content while holding consistent frame rates. And the content most important to you will automatically get the highest priority, focusing processing power where you need it the most.

So how will we achieve all this?

Web browsers first appeared in the era of desktop PCs. Those early computers had single-core CPUs that processed commands in a single stream, so they truly could only do one thing at a time. Even today, in most browsers an individual web page runs primarily on a single thread on a single core.

But nowadays we browse the web on phones, tablets, and laptops that have much more sophisticated processors, often with two, four or even more cores. Additionally, it’s now commonplace for devices to incorporate one or more high-performance GPUs that can accelerate rendering and other kinds of computations.

One other big thing that has changed over the past fifteen years is that the web has evolved from a collection of hyperlinked static documents to a constellation of rich, interactive apps. Developers want to build, and consumers expect, experiences with zero latency, rich animations, and real-time interactivity. To make this possible we need a web platform that allows developers to tap into the full power of the underlying device, without having to agonize about the complexities that come with parallelism and specialized hardware.

And so, Project Quantum is about developing a next-generation engine that will meet the demands of tomorrow’s web by taking full advantage of all the processing power in your modern devices. Quantum starts from Gecko, and replaces major engine components that will benefit most from parallelization, or from offloading to the GPU. One key part of our strategy is to incorporate groundbreaking components of Servo, an independent, community-based web engine sponsored by Mozilla. Initially, Quantum will share a couple of components with Servo, but as the projects evolve we will experiment with adopting even more.

A number of the Quantum components are written in Rust. If you’re not familiar with Rust, it’s a systems programming language that runs blazing fast, while simplifying development of parallel programs by guaranteeing thread and memory safety. In most cases, Rust code won’t even compile unless it is safe.

We’re taking on a lot of separate but related initiatives as part of Quantum, and we’re revisiting many old assumptions and implementations. The high-level approach is to rethink many fundamental aspects of how a browser engine works. We’ll be re-engineering foundational building blocks, like how we apply CSS styles, how we execute DOM operations, and how we render graphics to your screen.

Quantum is an ambitious project, but users won’t have to wait long to start seeing improvements roll out. We’re going to ship major improvements next year, and we’ll iterate from there. A first version of our new engine will ship on Android, Windows, Mac, and Linux. Someday we hope to offer this new engine for iOS, too.

We’re confident Quantum will deliver significantly improved performance. If you’re a developer and you’d like to get involved, you can learn more about Quantum on the Mozilla wiki, and explore ways that you can contribute. We hope you’ll take the Quantum leap with us.

A Quantum Leap for the Web was originally published in Mozilla Tech on Medium.

Wil Clouser: Test Pilot Legacy Program Final Review

One of my Q3 goals was to migrate the Legacy Test Pilot users into our new Test Pilot program (some background on the two programs). The previous program was similar in that people could give feedback on experiments, but different enough that we didn't feel comfortable simply moving the users to the new program without some kind of notification and opting-in.

We decided the best way to do that was simply push out a new version of the legacy add-on which opened a new tab to the Test Pilot website and then uninstalled itself. This lets people interested in testing experiments know about the new program without being overbearing. Worst case scenario, they close the tab and have one less add-on loading every time Firefox is started.

In our planning meeting it was suggested that getting three percent of users from the old program to the new one would be a reasonable compromise between realistic and optimistic. I guffawed, pointed out that the audience had already opted in once, put 6% in as our goal, and figured it would be way higher. Spoiler alert: I was wrong.

I'll spare you the pain of writing the add-on (most of the trouble was that the legacy add-on was so old you had to restart Firefox to uninstall it which really broke up the flow). On August 24th, we pushed the update to the old program.

In the daily usage graph, you can see we successfully uninstalled ourselves from several hundred thousand profiles, but we still have a long tail that doesn't seem to be uninstalling. Fortunately, AMO has really great statistics dashboards (these stats are public, by the way) and we can dig a little deeper. So, as of today there are around 150k profiles with the old add-on still installed. About half of those are reporting running the latest version (the one that uninstalls itself) and about half are disabled by the user. I suspect those halves overlap and account for 75k of the installed profiles.

The second 75k profiles are on older add-on versions and are not upgrading to a new version. There could be many reasons when we're dealing with profiles this old: they could be broken, they might not have write permissions to their profile, their network traffic could be blocked, an internet security suite could be misbehaving, etc. I don't think there is much more we can do for these folks right now, unfortunately.

Let's talk about the overall goal though - how many people joined the new program as a result of the new tab?

As of the end of Q3, we had just over 26k conversions making for a 3.6% conversion rate. Quite close to what was suggested in the original meeting by the people who do this stuff for a living, and quite short of my brash guess.

Overall we got a 0.6 score on the quarterly OKR.

Since I'm writing this post a few weeks after the end of Q3, I can see that we're continuing to get about 80 new users per day from the add-on prompt. Overall that makes for about 28.5k total conversions as of Oct 27th.

Liz Henry: Captain’s log, stardate 2016-10-26

Once again I resolve to write about my work at Mozilla as a Firefox release manager. It’s hard to do, because even the smallest thing could fill LONG paragraphs with many links! Since I keep daily notes on what I work on, let me try translating that in brief. When moved, maybe I’ll go into depth.

This week we are coming into the home stretch of a 7-week release cycle. “My” release right now is Firefox 49, which has already shipped; I'm still juggling problems and responses and triaging for it every day. In a week and a half, we were scheduled to release Firefox 50. Today, after some discussion, we pushed that schedule back by a week.

Meanwhile, I am also helping a new release manager (Gerry) to go through tracked bugs, new regressions, top crash reports, and uplift requests for Aurora/Developer Edition (Firefox 51). I’m going through uplift requests for Firefox 45.5.0esr, the extended support release. There’s still more – I paid some attention to our “update orphaning” project to bring users stuck on older versions of Firefox forward to the current, better and safer versions.

As usual, this means talking to developers and managers across pretty much all the teams at Mozilla, so it is never boring. Our goal is to get fixes and improvements as fast as possible while making sure, as best we can, that those fixes aren’t causing worse problems. We also have the interesting challenges of working across many time zones around the world.

Firefox stuffed animal installed

Today I had a brief 1:1 meeting with my manager and went to the Firefox Product cross-functional meeting, which I always find useful as it brings together many teams. There was a long Firefox team all-hands discussion, and then I skipped going to another hour-long triage meeting with the platform/Firefox engineering managers. Whew! We had a lively discussion over the last couple of days about a performance regression (Bug 1304434). The issues are complicated to sort out. Everyone involved is super smart and the discussions have a collegial quality. No one is "yelling at each other", even though we regularly challenge each other's assumptions and are free to disagree – usually in public view on a mailing list or in our bug tracker. This is part of why I really love Mozilla. While we can get a bit heated and stressed, overall, the culture is good. YMMV of course.

By that time (11am) I had been working since 7:30am, setting many queries in motion in bugs, on IRC, and in emails, and making a lot of small but oddly difficult decisions. Often this meant exercising my wontfix powers on bugs — deferring an uplift (aka "backport") to 50 or 51, or leaving a fix in 52 to ride the trains to release some time next year. As I was feeling pretty good, I headed out to have lunch and work from a cafe downtown (Mazarine – the turkey salad sandwich was very good!).

This afternoon I’m focusing on ESR 45, and Aurora 51, doing a bit more bug triage. There are a couple of ESR uplifts stressing me out — seriously, I was having kittens over these patches — but now that we have an extra week until we release, it feels like a better position for asking for a 2nd code review, a bit more time for QA, and so on.

Heading out soon for drinks with friends across the street from this cafe, and then to the Internet Archive’s 20th anniversary party. Yay, Internet Archive!


Ben Hearsum: Rings of Power - Multiple Signoff in Balrog

I want to tell you about an important new Balrog feature that we're working on. But I also want to tell you about how we planned it, because I think that part is even more interesting than the project itself.

The Project

Balrog is Mozilla’s update server. It is responsible for deciding which updates to deliver for a given update request. Because updates deliver arbitrary code to users, bad data in the update server could result in orphaning users, or be used as an attack vector to infect users with malware. It is crucial that we make it more difficult for a single user’s account to make changes that affect large populations of users. Not only does this provide some footgun protection, but it also safeguards our users from attacks if an account is compromised or an employee goes rogue.

While the current version of Balrog has a notion of permissions, most people effectively have carte-blanche access to one or more products. This means that an under-caffeinated Release Engineer could ship the wrong thing, or a single compromised account can begin an attack. Requiring multiple different accounts to sign off on any sensitive changes will protect us against both of these scenarios.

Multiple sign offs may also be used to enhance Balrog’s ability to support workflows that are more reflective of reality. For example, the Release Management team are the final gatekeepers for most products (ie: we can’t ship without their sign off), but they are usually not the people in the best place to propose changes to Rules. A multiple sign off system that supports different types of roles would allow some people to propose changes and others to sign off on them.

The Planning Process

Earlier this year I blogged about Streamlining the throttled rollout of Firefox releases, which was the largest Balrog project to date at the time. While we did some up-front planning for it, it took significantly longer to implement than I'd originally hoped. This isn't uncommon for software projects, but I was still very disappointed with the slow pace. One of the biggest reasons for this was discovering new edge cases or implementation difficulties after we were deep into coding. Often this would result in needing to rework code that was thought to be finished already, or require new non-trivial enhancements. For Multiple Signoff, I wanted to do better. Instead of a few hours of brainstorming, we've taken a more formal approach with it, and I'd like to share both the process and the plan we've come up with.

Setting Requirements

I really enjoy writing code. I find it intellectually challenging and fun. This quality is usually very useful, but I think it can be a hindrance in the early stages of large projects, as I tend to jump straight to thinking about implementation before even knowing the full set of requirements. Recognizing this, I made a conscious effort to purge implementation-related thoughts until we had a full set of requirements for Multiple Signoff reviewed by all stakeholders. Forcing myself not to go down the (more fun) path of thinking about code made me spend more time thinking about what we want and why we want it. All of this, particularly the early involvement of stakeholders, uncovered many hidden requirements and things that not everyone agreed on. I believe that identifying them at such an early stage made them much easier to resolve, largely because there was no sunk cost to consider.

Planning the Implementation

Once our full set of requirements was written, I was amazed at how obvious much of the implementation was. New objects and pieces of data stood out like neon signs, and I simply plucked them out of the requirements section. Most of the interactions between them came very naturally as well. I wrote some use cases that acted almost as unit tests for the implementation proposal, and they identified a lot of edge cases and bugs in its first pass. In retrospect, I probably should've written the use cases at the same time as the requirements. Between all of that and another round of review from stakeholders, I have significantly more confidence that the proposed implementation will look like the actual implementation than I have had with any other project of similar size.

Bugs and Dependencies

Just like the implementation flowed easily from the requirements, the bugs and dependencies between them were easy to find by rereading the implementation proposal. In the end, I identified 18 distinct pieces of work, and filed them all as separate bugs. Because the dependencies were easy to identify, I was able to convince Bugzilla to give me a decent graph that helps identify the critical path, and which bugs are ready to be worked on.

Multiple Signoff Bug Graph

But will it even help?

Overall, we probably spent a couple of people-weeks of active time on the requirements and implementation proposal. This isn't an overwhelming amount of time upfront, but it's enough that it's important to know whether it was worthwhile before doing it again next time. This is a question that can only be answered in retrospect. If the work goes faster and the implementation has less churn, I think it's safe to say that it was time well spent. Those are both things that are relatively easy to measure, so I hope to be able to assess this fairly objectively in the end.

The Plan

If you're interested in reading the full set of requirements, implementation plan, and use cases, I've published them here. A HUGE thanks goes out to Ritu, Nick, Varun, Hal, Johan, Justin, Julien, Mark, and everyone else who contributed to it.

Air Mozilla: The Joy of Coding - Episode 77

mconley livehacks on real Firefox bugs while thinking aloud.

Daniel Pocock: FOSDEM 2017 Real-Time Communications Call for Participation

FOSDEM is one of the world's premier meetings of free software developers, with over five thousand people attending each year. FOSDEM 2017 takes place 4-5 February 2017 in Brussels, Belgium.

This email contains information about:

  • Real-Time communications dev-room and lounge,
  • speaking opportunities,
  • volunteering in the dev-room and lounge,
  • related events around FOSDEM, including the XMPP summit,
  • social events (the legendary FOSDEM Beer Night and Saturday night dinners provide endless networking opportunities),
  • the Planet aggregation sites for RTC blogs

Call for participation - Real Time Communications (RTC)

The Real-Time dev-room and Real-Time lounge is about all things involving real-time communication, including: XMPP, SIP, WebRTC, telephony, mobile VoIP, codecs, peer-to-peer, privacy and encryption. The dev-room is a successor to the previous XMPP and telephony dev-rooms. We are looking for speakers for the dev-room and volunteers and participants for the tables in the Real-Time lounge.

The dev-room is only on Saturday, 4 February 2017. The lounge will be present for both days.

To discuss the dev-room and lounge, please join the FSFE-sponsored Free RTC mailing list.

To be kept aware of major developments in Free RTC, without being on the discussion list, please join the Free-RTC Announce list.

Speaking opportunities

Note: if you used FOSDEM Pentabarf before, please use the same account/username

Real-Time Communications dev-room: deadline 23:59 UTC on 17 November. Please use the Pentabarf system to submit a talk proposal for the dev-room. On the "General" tab, please look for the "Track" option and choose "Real-Time devroom". Link to talk submission.

Other dev-rooms and lightning talks: some speakers may find their topic is in the scope of more than one dev-room. It is encouraged to apply to more than one dev-room and also consider proposing a lightning talk, but please be kind enough to tell us if you do this by filling out the notes in the form.

You can find the full list of dev-rooms on this page and apply for a lightning talk at

Main track: the deadline for main track presentations is 23:59 UTC 31 October. Leading developers in the Real-Time Communications field are encouraged to consider submitting a presentation to the main track.

First-time speaking?

FOSDEM dev-rooms are a welcoming environment for people who have never given a talk before. Please feel free to contact the dev-room administrators personally if you would like to ask any questions about it.

Submission guidelines

The Pentabarf system will ask for many of the essential details. Please remember to re-use your account from previous years if you have one.

In the "Submission notes", please tell us about:

  • the purpose of your talk
  • any other talk applications (dev-rooms, lightning talks, main track)
  • availability constraints and special needs

You can use HTML and links in your bio, abstract and description.

If you maintain a blog, please consider providing us with the URL of a feed with posts tagged for your RTC-related work.

We will be looking for relevance to the conference and dev-room themes: presentations aimed at developers of free and open source software, about RTC-related topics.

Please feel free to suggest a duration between 20 minutes and 55 minutes but note that the final decision on talk durations will be made by the dev-room administrators. As the two previous dev-rooms have been combined into one, we may decide to give shorter slots than in previous years so that more speakers can participate.

Please note FOSDEM aims to record and live-stream all talks. The CC-BY license is used.

Volunteers needed

To make the dev-room and lounge run successfully, we are looking for volunteers:

  • FOSDEM provides video recording equipment and live streaming; volunteers are needed to assist with this
  • organizing one or more restaurant bookings (depending upon the number of participants) for the evening of Saturday, 4 February
  • participation in the Real-Time lounge
  • helping attract sponsorship funds for the dev-room to pay for the Saturday night dinner and any other expenses
  • circulating this Call for Participation (text version) to other mailing lists

See the mailing list discussion for more details about volunteering.

Related events - XMPP and RTC summits

The XMPP Standards Foundation (XSF) has traditionally held a summit in the days before FOSDEM. There is discussion about a similar summit taking place on 2 and 3 February 2017. XMPP Summit web site - please join the mailing list for details.

We are also considering a more general RTC or telephony summit, potentially in collaboration with the XMPP summit. Please join the Free-RTC mailing list and send an email if you would be interested in participating, sponsoring or hosting such an event.

Social events and dinners

The traditional FOSDEM beer night occurs on Friday, 3 February.

On Saturday night, there are usually dinners associated with each of the dev-rooms. Most restaurants in Brussels are not so large so these dinners have space constraints and reservations are essential. Please subscribe to the Free-RTC mailing list for further details about the Saturday night dinner options and how you can register for a seat.

Spread the word and discuss

If you know of any mailing lists where this CfP would be relevant, please forward this email (text version). If this dev-room excites you, please blog or microblog about it, especially if you are submitting a talk.

If you regularly blog about RTC topics, please send details about your blog to the planet site administrators:

  • All projects: Free-RTC Planet (contact)
  • XMPP: Planet Jabber (contact)
  • SIP: Planet SIP (contact)
  • SIP (Español): Planet SIP-es (contact)

Please also link to the Planet sites from your own blog or web site as this helps everybody in the free real-time communications community.


For any private queries, please contact us directly; for any other queries, please ask on the Free-RTC mailing list.

The dev-room administration team:

Chris Cooper: RelEng & RelOps highlights - October 24, 2016

OK, I’ve given up the charade that these are weekly now. Welcome back.

Modernize infrastructure:

Thanks to an amazing effort from Ed Morley and the rest of the Treeherder team, Treeherder has been migrated to Heroku, giving us significantly more flexible infrastructure.

Git-internal is no longer a standalone single point of failure (SPOF)! A warm standby host is running, and repository mirroring is in place. We now also have a fully matching staging environment for testing.

Improve Release Pipeline:

Aki and Catlee attended the security offsite and came away with todo items and a list to prioritize to improve release security.

Aki released scriptworker 0.8.0; this gives us signed chain of trust artifacts from scriptworkers, and gpg key management for chain of trust verification.

Improve CI Pipeline:

We now have nightly Linux64, Linux32 and Android 4.0 API15+ builds running on the date branch on taskcluster. Kim’s work to refactor the nightly task graph to transform the existing build “kind” into a signing “kind” has made adding new platforms quite straightforward. See and for more details.

There is still some remaining setup to be done, mostly around updates and moving artifacts into the proper locations (beetmover). Releng will then begin internal testing of these new nightlies (essentially dogfooding) to ensure that important things like updates are working correctly before we uplift this code to mozilla-central.

We hope to make that switch for Linux/Android nightlies within the next month, with Mac and Windows coming later this quarter.


During a recent tree closing window (TCW), the database team managed to successfully switch the buildbot database from MyISAM to InnoDB format for improved stability. This is something we’ve wanted to do for many years and it’s good to see it finally done.


We’re currently on beta 10 for Firefox 50. This is noteworthy because in the next release cycle Firefox 52 will be uplifted to Aurora (Developer Edition), and Firefox 52 will be the last version of Firefox to support Windows XP, Windows Vista, and universal binaries on Mac. Firefox 52 is due for release in March of 2017. Don’t worry though, all these platforms will be moving to the Firefox 52 ESR branch where they will continue to receive security updates for another year beyond that.

See you soon!

Mozilla Privacy Blog: Mozilla Asks President Obama to Help Strengthen Cybersecurity

Last week’s cyber attack on Dyn that blocked access to popular websites like Amazon, Spotify, and Twitter is the latest example of the increasing threats to Internet security, making it more important that we acknowledge cybersecurity is a shared responsibility. Governments, companies, and users all need to work together to protect Internet security.

This is why Mozilla applauds Sens. Angus King Jr. (I-ME) and Martin Heinrich (D-NM) for calling on President Obama to establish enduring government-wide policies for the discovery, review, and sharing of security vulnerabilities. They suggest creating bug bounty programs and formalizing the Vulnerabilities Equities Process (VEP) – the government’s process for reviewing and coordinating the disclosure of vulnerabilities that it learns about or creates.

“The recent intrusions into United States networks and the controversy surrounding the Federal Bureau of Investigation’s efforts to access the iPhone used in the San Bernardino attacks have underscored for us the need to establish more robust and accountable policies regarding security vulnerabilities,” Senators King and Heinrich wrote in their letter.

Mozilla prioritizes the privacy and security of users and we work to find and fix vulnerabilities in Firefox as quickly as possible. We created one of the first bug bounty programs more than 10 years ago to encourage security researchers to report security vulnerabilities.

Mozilla has also called for five specific, important reforms to the VEP:

  • All security vulnerabilities should go through the VEP and there should be public timelines for reviewing decisions to delay disclosure.
  • All relevant federal agencies involved in the VEP must work together to evaluate a standard set of criteria to ensure all relevant risks and interests are considered.
  • Independent oversight and transparency into the processes and procedures of the VEP must be created.
  • The VEP Executive Secretariat should live within the Department of Homeland Security because they have built up significant expertise, infrastructure, and trust through existing coordinated vulnerability disclosure programs (for example, US CERT).
  • The VEP should be codified in law to ensure compliance and permanence.

These changes to the discovery, review, and sharing of security vulnerabilities would be a great start to strengthening the shared responsibility of cybersecurity and reducing the countless cyber attacks we see today.

Matjaž Horvat: Connect your Pontoon profile with a Firefox Account

We are switching the sign in provider in Pontoon from Persona to Firefox Accounts. This means you will have to connect your existing Pontoon profile with a Firefox Account before continuing to use Pontoon. You need to do this before November 1, 2016 by following these steps:

1. Go to Pontoon and sign in with Persona as usual.

2. After you’re redirected to the Firefox Accounts migration page, click Sign in with Firefox Account and follow the instructions.

3. And that’s it! From this point on, you can log in with your Firefox Account.

Note that the email address of your Firefox Account and Pontoon account do not need to match. And if you don’t have a Firefox Account yet, you will be able to create it during the sign in process.

November 1, 2016 will be the last day for you to sign in to Pontoon using Persona and connect your existing Pontoon profile with a Firefox Account. We recognize this is an inconvenience, and we apologize for it. Unfortunately, it is out of our control.

Huge thanks to Jarek for making the migration process so simple!

Ben Hearsum: What's New with Balrog - October 25th, 2016

The past month has seen some significant and exciting improvements to Balrog. We've had the usual flow of feature work, but also a lot of improvements to the infrastructure and Docker image structure. Let's have a look at all the great work that's been done!

Core Features

Most recently, two big changes have landed that allow us to use multifile updates for SystemAddons. This type of update configuration lets us simplify the configuration of Rules and Releases, which is one of the main design goals of Balrog. Part of this project involved implementing "fallback" Releases, which are used if an incoming request fails a throttle dice roll (sketched below). This benefits Firefox updates as well, because it will allow us to continue serving updates to older users while we're in the middle of a throttled rollout.
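
For a sense of how the fallback fits into throttling, the dice roll boils down to something like this sketch (the field names are simplified assumptions, not Balrog's exact schema):

```typescript
// Simplified sketch of a throttled rollout with a fallback Release; not Balrog's exact schema.
interface UpdateRule {
  mapping: string;            // Release served when the dice roll passes
  fallbackMapping?: string;   // Release that older users keep getting otherwise
  backgroundRate: number;     // 0-100: percentage of background checks that get `mapping`
}

function pickRelease(rule: UpdateRule, userInitiated: boolean): string | undefined {
  // User-initiated ("forced") checks bypass throttling entirely.
  if (userInitiated || Math.random() * 100 < rule.backgroundRate) {
    return rule.mapping;
  }
  return rule.fallbackMapping; // undefined means no update is served during the rollout
}
```

Before fallbacks existed, the second branch simply served nothing, which is why older users wouldn't receive any update while a rollout was still throttled.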

Some house cleaning work has been done to remove attributes that Firefox no longer supports, and add support for the new "backgroundInterval" attribute. The latter will give us server-side control of how fast Firefox downloads updates, which we hope will help speed up uptake of new Releases.

There's been some significant refactoring of our Domain Whitelist system. This doesn't change the way it works at all, but cleaning up the implementation has paved the way for some future enhancements.

General Improvements

E-mails are now sent to a mailing list for some types of changes to Balrog's database. This serves as an alert system that ensures unexpected changes don't go unnoticed. In a similar vein, we also started aggregating exceptions to CloudOps' Sentry instance, which has already uncovered numerous production-only errors that had gone unnoticed for months.

Significant improvements have been made to the way our Docker images are structured. Instead of sharing one single Dockerfile for production and local dev, we've split them out. This has allowed the production image to get a lot smaller (mostly thanks to Benson's changes). On the dev side, it has let us improve the local development workflow - all code (including frontend) is now automatically rebuilt and reloaded when changed on the host machine. And thanks to Stefan, we even support development on Windows now!

We now have a script that extracts the "active data" from the production database. When imported into a fresh database (ie: local database, or stage), it will serve exactly the same updates as production without all of the unnecessary history. This should make it much easier to reproduce issues locally, and to verify that stage is functioning correctly.

Finally, something that's been on my wishlist for a long time finally happened as well: the Balrog Admin API Client code has been moved into the Balrog repo! Because it is so closely linked with the server side API, integrating them makes it much easier to keep them in sync.

The People

The work above was only possible because of all the great contributors to Balrog. A big thanks goes to Ninad, Johan, Varun, Stefan, Simon, Meet, Benson, and Njira for all their time and hard work on Balrog!

Christian Heilmann: Decoded Chats – third edition featuring Chris Wilson on JavaScript and Web Standards

At the Microsoft/Mozilla Progressive Web Apps workshop in Seattle I ran into Chris Wilson and took the opportunity to interview him on Web Standards, JavaScript dependency and development complexity.

In this first interview we covered the need for JavaScript in today’s web and how old-school web standards stand up to today’s needs.

You can see the video and get the audio recording of our chat over at the Decoded blog:


Chris has been around the web block several times and knows a lot about standards and how developers make them applicable to various environments. He has worked on various browsers and has a great passion for the open web and for empowering developers with standards and great browsers.

Here are the questions we covered:

  • A current hot topic that seems to come up every few years is the dependency of web products on JavaScript, and if we could do without it. What is the current state there?
  • Didn’t the confusion start when we invented the DOM and allowed for declarative and programmatic access to the document? JavaScript can create HTML and CSS and give us much more control over the outcome.
  • One of the worries with Web Components was that it would allow developers to hide a lot of complexity in custom elements. Do we have a problem understanding that modules are meant to be simple?
  • Isn’t part of the issue that the web was built on the premise of documents, and that the nature of modules needs to be forced into it? CSS has cascade in its name, yet modules shouldn’t inherit styles from the document.
  • Some functionality needed for modern interfaces seem to be achievable with competing standards. You can animate in CSS, JavaScript and in SVG. Do different standard working groups not talk to each other?
  • Declarative functionality in CSS and HTML can be optimised by browser makers. When you – for example – create animations in JavaScript, we can’t do that for you. Is that a danger?
  • A lot of JavaScript enhancements we see in browsers now is enhancing existing APIs instead of inventing new ones. Passive Event listeners is a great example. Is this something that will be the way forward?
  • One thing that seems to be wasteful is that a lot of research that went into helper libraries in the past dies with them. YUI had a lot of great information about animation and interaction. Can we prevent this somehow?
  • Do you feel that hacks die faster these days? Is a faster release schedule of browsers the solution to keeping short-term solutions from clogging up the web?
  • It amazes me what browsers allow me to do these days and create working layouts and readable fonts for me. Do you think developers don’t appreciate the complexity of standards and CSS enough?

Air Mozilla: Martes Mozilleros, 25 Oct 2016

Bi-weekly meeting (held in Spanish) to talk about the state of Mozilla, its community, and its projects.

Wladimir Palant: Implementing efficient PBKDF2 for the browser

As I mentioned previously, an efficient PBKDF2 implementation is absolutely essential for Easy Passwords in order to generate passwords securely. So when I looked into Microsoft Edge and discovered that it chose to implement WebCrypto API but not the PBKDF2 algorithm this was quite a show-stopper. I still decided to investigate the alternatives, out of interest.

First of all I realized that Edge’s implementation of the WebCrypto API provides the HMAC algorithm which is a basic building block for PBKDF2. With PBKDF2 being a relatively simple algorithm on top of HMAC-SHA1, why not try to implement it this way? And so I implemented a fallback that would use HMAC if PBKDF2 wasn’t supported natively. It worked but there was a “tiny” drawback: generating a single password took 15 seconds on Edge, with Firefox not being significantly faster either.
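
Conceptually the fallback is tiny: PBKDF2 just chains HMAC invocations and XORs their outputs. Below is a minimal sketch of that idea on top of WebCrypto's HMAC (an illustration, not the actual Easy Passwords code); note that every iteration awaits a crypto.subtle call, which is exactly where the trouble described next comes from.

```typescript
// Minimal PBKDF2-HMAC-SHA1 built on WebCrypto's HMAC (illustrative sketch only).
async function pbkdf2Fallback(password: Uint8Array, salt: Uint8Array,
                              iterations: number, length: number): Promise<Uint8Array> {
  const key = await crypto.subtle.importKey(
    "raw", password, { name: "HMAC", hash: "SHA-1" }, false, ["sign"]);
  const hmac = async (data: Uint8Array) =>
    new Uint8Array(await crypto.subtle.sign("HMAC", key, data));

  const result = new Uint8Array(length);
  const blocks = Math.ceil(length / 20);                // SHA-1 digests are 20 bytes
  for (let i = 1; i <= blocks; i++) {
    // U_1 = HMAC(password, salt || INT(i)), with INT(i) a 4-byte big-endian block index.
    const seed = new Uint8Array(salt.length + 4);
    seed.set(salt);
    new DataView(seed.buffer).setUint32(salt.length, i);
    let u = await hmac(seed);
    const t = u.slice();
    // U_j = HMAC(password, U_{j-1}); T_i = U_1 xor U_2 xor ... xor U_c.
    for (let j = 1; j < iterations; j++) {
      u = await hmac(u);                                // one asynchronous round-trip per iteration!
      for (let k = 0; k < t.length; k++) t[k] ^= u[k];
    }
    result.set(t.subarray(0, Math.min(20, length - (i - 1) * 20)), (i - 1) * 20);
  }
  return result;
}
```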

I figured that the culprit was WebCrypto being asynchronous. This isn’t normally an issue, unless you have to do around 250 thousand HMAC operations — sending input data to another thread and receiving the result for each single one of them. There is no way to call the WebCrypto API in a synchronous fashion, not even from a web worker. Which means that using the WebCrypto API in this way makes no sense here and pure JavaScript implementations would be faster.

So I went one step further. HMAC again is fairly simple, and the only non-trivial part here is SHA1. The Rusha project offers a very fast SHA1 implementation, so I went with that one. And I realized that my 250 thousand HMAC operations were mostly hashing the same data: the first block of each hashing operation was derived from the secret key, always the same data. So I hacked Rusha so that it would allow me to pre-process this block and save the resulting state. HMAC would then restore the state and only hash the variable part of the input on top of it.
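
To illustrate the trick with a hypothetical incremental SHA-1 core (this is not Rusha's real API, just a sketch of the idea): the two constant 64-byte key blocks are hashed once and their states saved, so each subsequent HMAC call only hashes the short variable part.

```typescript
// Hypothetical incremental SHA-1 core with a savable state; Rusha's real internals differ.
interface Sha1Core {
  reset(): void;
  processBlocks(data: Uint8Array): void;                      // length must be a multiple of 64
  saveState(): Int32Array;                                    // copy of the five 32-bit state words
  restoreState(state: Int32Array): void;
  finalize(tail: Uint8Array, totalBytes: number): Uint8Array; // pads, finishes, returns 20-byte digest
}

// HMAC(K, m) = SHA1((K ^ opad) || SHA1((K ^ ipad) || m)).
// Both 64-byte first blocks depend only on the key, so hash them once up front.
function precomputeHmacStates(core: Sha1Core, key: Uint8Array) {
  const ipad = new Uint8Array(64).fill(0x36);
  const opad = new Uint8Array(64).fill(0x5c);
  key.forEach((b, i) => { ipad[i] ^= b; opad[i] ^= b; });     // assumes key.length <= 64
  core.reset(); core.processBlocks(ipad); const inner = core.saveState();
  core.reset(); core.processBlocks(opad); const outer = core.saveState();
  return { inner, outer };
}

function hmacSha1(core: Sha1Core, states: { inner: Int32Array; outer: Int32Array },
                  message: Uint8Array): Uint8Array {
  core.restoreState(states.inner);                            // skip re-hashing (K ^ ipad)
  const innerDigest = core.finalize(message, 64 + message.length);
  core.restoreState(states.outer);                            // skip re-hashing (K ^ opad)
  return core.finalize(innerDigest, 64 + innerDigest.length);
}
```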

This implementation got the time down to 1.5 seconds on Edge — still not great, but almost an acceptable delay for the user interface. And profiling made it obvious that the performance could still be improved: most time was being spent converting between big-endian and little-endian numbers. The issue is that SHA1 input is treated as a sequence of big-endian numbers. With the calculation using little-endian numbers internally on almost every platform, a fairly expensive conversion is necessary. And then the hashing result would be available as five little-endian numbers, yet for a proper byte representation of the SHA1 hash these need to be converted to big-endian again.

With PBKDF2 you are usually hashing the result of the previous hashing operation, however. Converting that result from little-endian to big-endian only to perform the reverse conversion on the input is a complete waste. So I implemented a shortcut that allows hashing the result of the previous hashing operation efficiently, without any conversion. While at it, I threw away most of the Rusha code since I was only using its high-performance core at that point anyway. And that change brought the password derivation time down to below 0.5 seconds on Edge. On Firefox the calculation even completes in 0.3 seconds, which is very close to the 0.2 seconds needed by the native implementations in Firefox and Chrome.

Was it worth it? This implementation makes Edge a platform that can be realistically supported by Easy Passwords — but there are still lots of other issues to fix for that. At least I updated the online version of the password generator, which no longer relies on WebCrypto, meaning that it will work in any browser with typed array support — I even tested it in Internet Explorer 11 and it was still very fast.

I got into this one step at a time, so I only thought about existing implementations when I was already done. It turns out their performance isn’t too impressive, for whatever reason. I found four existing pure-JavaScript implementations of the PBKDF2-HMAC-SHA1 algorithm: SJCL, forge, crypto-browserify and crypto-js. Out of those, crypto-js turned out to be so slow that I couldn’t even measure its performance. As this paper indicates, its PBKDF2 implementation slows down massively as the number of iterations increases; the slowdown is worse than linear.

As to the others, you can test them yourself if you like (on browsers without native PBKDF2 support the first test will error out so you need to click the tests individually there). The following graph sums up my results:

Performance of PBKDF2 libraries in different browsers

The implementation I added as a fallback in Easy Passwords now performs consistently well in all browsers, never needing more than twice the time of the native implementations. The other pure-JavaScript implementations are at least a factor of three slower, always taking more than a second to derive a single password. For some reason they seem particularly slow in Edge, with crypto-browserify even taking around a minute for one operation.

Edit: Shortly after writing all this I found asmCrypto, which uses asm.js consistently. Guess what — its performance is on par with the native implementations, and in Firefox it’s even significantly faster than them (around 0.1 seconds)! Thanks for reading up on my wasted effort.

This Week In RustThis Week in Rust 153

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

Blog Posts

News & Project Updates

Other Weeklies from Rust Community

New Crates

  • EdgeDNS. A high performance DNS cache designed for Content Delivery Networks, with built-in security mechanisms to protect origins, clients and itself.
  • Pinky. An NES emulator written in Rust.
  • combine. A parser combinator library for Rust.
  • TensorFlow Rust. Rust language bindings for TensorFlow.

Crate of the Week

No crate was selected for CotW.

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

126 pull requests were merged in the last week.

New Contributors

  • Duncan
  • loggerhead
  • Ryan Senior
  • Vangelis Katsikaros
  • Артём Павлов [Artyom Pavlov]

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Style RFCs

Style RFCs are part of the process for deciding on style guidelines for the Rust community and defaults for Rustfmt. The process is similar to the RFC process, but we try to reach rough consensus on issues (including a final comment period) before progressing to PRs. Just like the RFC process, all users are welcome to comment and submit RFCs. If you want to help decide what Rust code should look like, come get involved!


Issue 17 (comments) is ready for a PR — we'd love someone to help out with that; if you're interested, ping someone in #rust-style.

No FCP issues this week.

New P-high issues:

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email the Rust Community Team for access.

fn work(on: RustProject) -> Money

No jobs listed for this week.

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

No quote was selected for QotW.

Submit your quotes for next week!

This Week in Rust is edited by: nasa42, llogiq, and brson.

Mozilla Security BlogDistrusting New WoSign and StartCom Certificates

Mozilla has discovered that a Certificate Authority (CA) called WoSign has had a number of technical and management failures. Most seriously, we discovered they were backdating SSL certificates in order to get around the deadline that CAs stop issuing SHA-1 SSL certificates by January 1, 2016. Additionally, Mozilla discovered that WoSign had acquired full ownership of another CA called StartCom and failed to disclose this, as required by Mozilla policy. The representatives of WoSign and StartCom denied and continued to deny both of these allegations until sufficient data was collected to demonstrate that both allegations were correct. The levels of deception demonstrated by representatives of the combined company have led to Mozilla’s decision to distrust future certificates chaining up to the currently-included WoSign and StartCom root certificates.

Specifically, Mozilla is taking the following actions:

  1. Distrust certificates with a notBefore date after October 21, 2016 which chain up to the following affected roots. If additional back-dating is discovered (by any means) to circumvent this control, then Mozilla will immediately and permanently revoke trust in the affected roots.
    • This change will go into the Firefox 51 release train.
    • The code will use the following Subject Distinguished Names to identify the root certificates, so that the control will also apply to cross-certificates of these roots.
      • CN=CA 沃通根证书, OU=null, O=WoSign CA Limited, C=CN
      • CN=Certification Authority of WoSign, OU=null, O=WoSign CA Limited, C=CN
      • CN=Certification Authority of WoSign G2, OU=null, O=WoSign CA Limited, C=CN
      • CN=CA WoSign ECC Root, OU=null, O=WoSign CA Limited, C=CN
      • CN=StartCom Certification Authority, OU=Secure Digital Certificate Signing, O=StartCom Ltd., C=IL
      • CN=StartCom Certification Authority G2, OU=null, O=StartCom Ltd., C=IL
  2. Add the previously identified backdated SHA-1 certificates chaining up to these affected roots to OneCRL.
  3. No longer accept audits carried out by Ernst & Young Hong Kong.
  4. Remove these affected root certificates from Mozilla’s root store at some point in the future. If the CA’s new root certificates are accepted for inclusion, then Mozilla may coordinate the removal date with the CA’s plans to migrate their customers to the new root certificates. Otherwise, Mozilla may choose to remove them at any point after March 2017.
  5. Mozilla reserves the right to take further or alternative action.

If you receive a certificate from one of these two CAs after October 21, 2016, your certificate will not validate in Mozilla products such as Firefox 51 and later, until these CAs provide new root certificates with different Subject Distinguished Names, and you manually import the root certificate that your certificate chains up to. Consumers of your website will also have to manually import the new root certificate until it is included by default in Mozilla’s root store.

Each of these CAs may re-apply for inclusion of new (replacement) root certificates as described in Bug #1311824 for WoSign, and Bug #1311832 for StartCom.

We believe that this response is consistent with Mozilla policy and is one which we could apply to any other CA that demonstrated similar levels of deception to circumvent Mozilla’s CA Certificate Policy, the CA/Browser Forum’s Baseline Requirements, and direct inquiries from Mozilla representatives.

Mozilla Security Team

Julia ValleraAda Lovelace Day Curriculum Design Workshop at Libre Learn Lab

This blog post was co-authored by Zannah Marsh and Julia Vallera

October 11 was Ada Lovelace Day, an annual celebration of the contributions of women to the fields of Science, Technology, Engineering, and Mathematics (also known as STEM). Born in 1815, Lovelace was given a rigorous education by her mathematician mother, and went on to devise a method for programming the Analytical Engine, a conceptual model for the first-ever general purpose computer. Lovelace (pictured here in full Victorian splendor) is known as the first computer programmer. This year, Ada Lovelace Day presented the perfect opportunity for Mozilla to engage community members in something cool and inspiring around the contributions (past and future) of women and girls in STEM. Zannah from the Mozilla Science Lab (MSL) and Julia from the Mozilla Clubs program decided to team up to run a women-in-STEM-themed session at Libre Learn Lab, a two-day summit for people who create, use and implement freely licensed resources for K-12 education. We jumped at this chance to collaborate to make something fun and new that would be useful for both of our programs, and the broader Mozilla community.

At MSL and Mozilla Clubs, we’ve been experimenting with creating “train-the-trainer” materials, resources that are packed with all the info needed to run a workshop on a given topic (for example, this resource on Open Data for academic researchers). There are 200+ clubs around the world meeting, making, and learning together… and many are eager for new curriculum and activities. In both programs and across Mozilla, we’re committed to bringing learning around the open web and all the amazing work it enables (from mathematics to advocacy) to as wide an audience as possible, especially to populations that have traditionally been excluded, like women and girls. Mozilla Learning has been running online, hour-long curriculum workshops on a monthly basis, in which users discuss a topic and get to hack on curriculum together, and had planned a special Ada Lovelace Day edition. We resolved to make an Ada Lovelace Day in-person event that would link together our “train-the-trainer” model and online curriculum creation initiatives, and help meet the need for new material for Clubs… all while highlighting the issue of inclusion on the open web.

Developing the workshop

After kicking around a few ideas for our Libre Learn Lab session, we settled on an intensive collaborative curriculum development workshop to guide participants to create their own materials inspired by Ada Lovelace Day and the contributions of women and girls to STEM. We drafted the workshop plan, tested it by working through each step, and then used insights from prototyping to make tweaks. After incorporating suggestions from key stakeholders we arrived at the final product.

What we came up with is a workshop experience that gets participants from zero to a draft by prototyping curriculum in about one and a half hours. In this workshop, we made a particular effort to:

  • Encourage good, intentional collaboration by getting participants to brainstorm and agree on guidelines for working together
  • Get users to work creatively right away, and encourage them to work on a topic they find fascinating and exciting
  • Introduce the idea of design for a specific audience (AKA user-centered design) early on, and keep returning to that audience (their needs, motivations, challenges) throughout the design process
  • Create a well-structured process of idea generation, sharing, and refining (along with a matrix to organize content) to get participants past decision making on process that can often hinder creative collaboration

If you’d like to know more you can take a look at the workshop plan, carefully documented on GitHub in a way that should make it easily reusable and remixable by anyone.

Running the workshop


On October 8 we put our plans into action at Libre Learn Lab. The conference (only in its second year) had a small turnout of highly qualified participants with valuable experience in the field of education and open practices. Everyone who came to our workshop was connected to curriculum development in some way — teachers, program managers, and directors of educational organizations. After introducing the workshop theme and agenda with a short slide deck, we brainstormed new ideas and worked in groups to refine or expand our ideas and prototype new curriculum. At the end of the session, we asked users to fill out a short survey on their experience.

The workshop development and implementation process so far has resulted in new lessons on understanding how climate affects living things and on women inventors throughout history. These are available in the GitHub repository for public use — and keep an eye on this, as we’ll be adding more lessons soon. Every workshop participant was eager to develop their materials further and use them with audiences ASAP. Thanks to Megan Black, Felix Alvarado, Victor Zuniga, and Don Davis for creating curriculum!

Wrap up and learnings

We got useful feedback from participants that will help make future evolutions of this workshop stronger. From our survey results we learned that participants loved the opportunity to collaborate, get hands-on experience and connect with Mozilla. They also liked having the matrix and sample cards as prompts. Suggested improvements included a desire for more curriculum examples, and the need for more time for prototyping. As facilitators, we’ll look for ways to encourage participants to move around the room and mix with other groups. We will look at improving our slides as an activity guide with clearer instructions. We’d like to find better ways for latecomers to jump in and find more ways to engage participants with different learning styles (for example more visual learners). We also learned that with ten or more participants it is best to have three or more facilitators in this type of intensive workshop.

We hope to find a time to run another session of the workshop in the Open Learning Circle in our Demystify the Web Space at this weekend’s Mozfest — keep an eye on the #mozfest hashtag on twitter for an announcement or reach out to us if you’d like to join.

Niko MatsakisSupporting blanket impls in specialization

In my previous post, I talked about how we can separate out specialization into two distinct concepts: reuse and override. Doing so makes sense because the conditions that make reuse possible are more stringent than those that make override possible. In this post, I want to extend this idea to talk about a new rule for specialization that allows overriding in more cases. These rules are a big enabler for specialization, allowing it to accommodate many use cases that we couldn’t handle before. In particular, they enable us to add blanket impls like impl<T: Copy> Clone for T in a backwards compatible fashion, though only under certain conditions.

Revised algorithm

The key idea in this blog post is to change the rules for when some impl I specializes another impl J. Instead of basing the rules on subsets of types, I propose a two-tiered rule. Let me outline it first and then I will go into more detail afterwards.

  1. First, impls with more specific types specialize other impls (ignoring where clauses altogether).
    • So, for example, if impl I is impl<T: Clone> Clone for Option<T>, and impl J is impl<U: Copy> Clone for U, then I will be used in preference to J, at least for those types where they intersect (e.g., Option<i32>). This is because Option<T> is more specific than U.
    • For types where they do not intersect (e.g., i32 or Option<String>), only one impl applies anyway.
    • Note that the where clauses like T: Clone and U: Copy don’t matter at all for this test.
  2. However, reuse is only allowed if the full subset conditions are met.
    • So, in our example, impl I is not a full subset of impl J, because of types like Option<String>. This means that impl I could not reuse items from impl J (and hence that all items in impl J must be declared default).
  3. If the impls’ types are equally generic, then impls with more specific where clauses specialize other impls.
    • So, for example, if impl I is impl<T: Debug> Parse for T and impl J is impl<T> Parse for T, then impl I is used in preference to impl J where possible. In particular, types that implement Debug will prefer impl I.

Another way to express the rule is to say that impls can specialize one another in two ways:

  • if the types matched by one impl are a subset of the other, ignoring where clauses altogether;
  • otherwise, if the types matched by the two impls are the same, then one impl specializes the other if its where clauses are more selective.

Interestingly, and I’ll go into this a bit more later, this rule is not necessarily an alternative to the intersection impls I discussed at first. In fact, the two can be used together, and complement each other quite well.

Some examples

Let’s revisit some of the examples we’ve been working through and see how the rule would apply. The first three examples illustrate the first three clauses. Then I’ll show some other interesting examples that highlight various other facets and interactions of the rules.

Blanket impl of Clone for Copy types

First, we started out considering the case of trying to add a blanket impl of Clone for all Copy types:

impl<T: Copy> Clone for T {
  default fn clone(&self) -> Self { *self }
}

We were concerned before that there are existing impls of Clone that will partially overlap with this new blanket impl, but which will not be full subsets of it, and which would therefore not be considered specializations. For example, an impl for the Option type:

impl<T: Clone> Clone for Option<T> {
  fn clone(&self) -> Self {
    self.as_ref().map(|c| c.clone())
  }
}

Under these rules, this is no problem: the Option impl will take precedence over the blanket impl, because its types are more specific.

Note the interesting tie-in with the orphan rules here. When we add blanket impls, we have to worry about backwards compatibility in one of two ways:

  • existing impls will now fail coherence checks that used to pass;
  • some code that used to use an existing impl will silently change to using the blanket impl instead.

Naturally, the biggest concern is about impls in other crates, since those impls are not visible to us. Interestingly, the orphan rules require that those impls in other crates must be using some local type in their signature. Thus I believe the orphan rules ensure that existing impls in other crates will take precedence over our new blanket impl – that is, we are guaranteed that they are considered legal specializations, and hence will pass coherence, and moreover that the existing impl is used in preference over the blanket one.
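As a hedged illustration of that argument (the type name is made up, and this is not code from the RFC): a downstream crate's impl must mention one of its own types, so its types are strictly more specific than the blanket impl's, and under the proposed rule it keeps winning for that type.

// Hypothetical downstream crate. `MyType` is local, so the orphan rules force
// any pre-existing Clone impl here to name it, making its types more specific
// than the upstream blanket `impl<T: Copy> Clone for T`.
#[derive(Copy)]
struct MyType(u32);

impl Clone for MyType {
    fn clone(&self) -> Self {
        MyType(self.0)
    }
}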

Dump trait: Reuse requires full subset

In a previous blog post I gave an example of a Dump trait that had a blanket impl for Debug things:

trait Dump {
    fn display(&self);
    fn debug(&self);
}

impl<T> Dump for T // impl A
    where T: Debug,
{
    default fn display(&self) { ... }

    default fn debug(&self) {
        println!("{:?}", self);
    }
}

The idea was that some other crate might want to specialize Dump just to change how display works, perhaps trying something like this:

struct Widget<T> { ... }

impl<T: Debug> Debug for Widget<T> {...}

// impl B (note that it is defined for all `T`, not `T: Debug`):
impl<T> Dump for Widget<T> {
    fn display(&self) { ... }
}

Here, impl B only defines the display() item from the trait because it intends to reuse the existing debug() method from impl A. However, this poses a problem: impl A only applies when Widget<T>: Debug, which may be true but is not always true. In particular, impl B is defined for any Widget<T>.

Under the rules I gave, this is an error. Here we have a scenario where impl B does specialize impl A (because its types are more specific), but impl B is not a full subset of impl A, and therefore it cannot reuse items from impl A. It must provide a full definition for all items in the trait (this also implies that every item in impl A must be declared as default, as is the case here).

Note that either of these two alternatives for impl B would be fine:

// Alternative impl B.1: provides all items
impl<T> Dump for Widget<T> {
    fn display(&self) {...}
    fn debug(&self) {...}
}

// Alternative impl B.2: full subset
impl<T: Debug> Dump for Widget<T> {
    fn display(&self) {...}
}

There is some interaction with backwards compatibility here. If the impl of Dump for Widget were added before impl A, then it necessarily would have defined all items (as in impl B.1), and hence there would be no error when impl A is added later.

Using where clauses to detect Debug

You may have noticed that if you index into a map and the key is not found, the error message is kind of lackluster:

use std::collections::HashMap;

fn main() {
    let mut map = HashMap::new();
    map.insert("a", "b");
    map["c"];
    // Error: thread 'main' panicked at 'no entry found for key', ../src/libcore/
}

In particular, it doesn’t tell you what key you were looking for! I would have liked to see ‘no entry found for c’. Well, the reason for this is that the map code doesn’t require that the key type K have a Debug impl. That’s good, but it’d be nice if we could get a better error if a debug impl happens to exist.

We might do so by using specialization. Let’s imagine defining a trait that can be used to panic when a key is not found. Thus when a map fails to find a key, it invokes key.not_found():

trait KeyNotFound {
    fn not_found(&self) -> !;
}

impl<T> KeyNotFound for T { // impl A
    default fn not_found(&self) -> ! {
        panic!("no entry found for key")
    }
}

Now we could provide a specialized impl that kicks in when Debug is available:

impl<T: Debug> KeyNotFound for T { // impl B
    fn not_found(&self) -> ! {
        panic!("no entry found for key `{:?}`", self)
    }
}

Note that the types for impl B are not more specific than impl A, unless you consider the where clauses. That is, they are both defined for any type T. It is only when we consider the where clauses that we see that impl B can in fact be judged more specific than A. This is the third clause in my rules (it also works with specialization today).

Fourth example: AsRef

One longstanding ergonomic problem in the standard library has been that we could not add all of the impls of the AsRef trait that we wanted. T: AsRef<U> is a trait that says an `&T` reference can be converted into an `&U` reference. It is particularly useful for types that support slicing, like String: AsRef<str> – this states that an &String can be sliced into an &str reference.

There are a number of blanket impls for AsRef that one might expect:

  • Naturally one might expect that T: AsRef<T> would always hold. That just says that an &T reference can be converted into another &T reference (duh) – which is sometimes called being reflexive.
  • One might also expect that AsRef would be compatible with deref coercions. That is, if I can convert an &U reference to an &V reference, then I can also convert an &&U reference to an &V reference.

Unfortunately, if you try to combine both of those two cases, the current coherence rules reject it (I’m going to ignore lifetime parameters here for simplicity):

impl<T> AsRef<T> for T { } // impl A

impl<U, V> AsRef<V> for &U
    where U: AsRef<V> { }  // impl B

It’s clear that these two impls, at least potentially, overlap. In particular, a trait reference like &Foo: AsRef<&Foo> could be satisfied by either one (assuming that Foo: AsRef<&Foo>, which is probably not true in practice, but could be implemented by some type Foo in theory).

At the same time, it’s clear that neither represents a subset of the other, even if we ignore where clauses. Just consider these examples:

  • String: AsRef<String> (matches impl A, but not impl B)
  • &String: AsRef<String> (matches impl B, but not impl A)

However, we’ll see that we can satisfy this example if we incorporate intersection impls; we’ll cover this later.

Detailed explanation: drilling into subset of types

OK, that was the high-level summary, let’s start getting a bit more into the details. In this section, I want to discuss how to implement this new rule. I’m going to assume you’ve read and understood the Algorithmic formulation section of the specialization RFC, which describes how to implement the subset check (if not, go ahead and do so, it’s quite readable – nice job aturon!).

Implementing the rules today basically consists of two distinct tests, applied in succession. RFC 1210 describes how, given two impls I and J, we can define an ordering Subset(I, J) that indicates I matches a subset of the types of J (the RFC calls it I <= J). The current rules then say that I specializes J if Subset(I, J) holds but Subset(J, I) does not.

To decide if Subset(I, J) holds, we apply two tests (both of which must pass):

  • Type(I, J): For any way of instantiating I.vars, there is some way of instantiating J.vars such that the Self type and trait type parameters match up.
    • Here I.vars refers to the generic parameters of impl I
    • The actual technique here is to skolemize I.vars and then attempt unification. If unification succeeds, then Type(I, J) holds.
  • WhereClause(I, J): For the instantiation of I.vars used in Type(I, J), if you assume I.wc holds, you can prove J.wc.
    • Here I.wc refers to the where clauses of impl I.
    • The actual technique here is to consider I.wc as true, and attempt to prove J.wc using the standard trait machinery.

The algorithm to test whether an impl I can specialize an impl J is this:

  • Specializes(I, J):
    • If Type(I, J) holds:
      • If Type(J, I) does not hold:
        • true
      • Otherwise, if WhereClause(I, J) holds:
        • If WhereClause(J, I) does not hold:
          • true
        • else:
          • false
    • false

You could also write this as Specializes(I, J) is:

Type(I, J) && (!Type(J, I) || WhereClause(I, J) && !WhereClause(J, I))

Unlike before, we also need a separate test to check whether reuse is legal. Reuse is legal if Subset(I, J) holds.
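As a hedged sketch (not actual compiler code), the two checks can be written directly over the results of the four tests, using the Option<i32>/Clone/Copy example from earlier as a sanity check:

// Type(I, J) and WhereClause(I, J) are the tests described above; here they are
// just boolean inputs rather than real trait-system queries.
fn subset(type_ij: bool, wc_ij: bool) -> bool {
    type_ij && wc_ij
}

fn specializes(type_ij: bool, type_ji: bool, wc_ij: bool, wc_ji: bool) -> bool {
    type_ij && (!type_ji || (wc_ij && !wc_ji))
}

fn main() {
    // impl I = `impl<T: Clone> Clone for Option<T>`, impl J = `impl<U: Copy> Clone for U`:
    // Type(I, J) holds and Type(J, I) does not, so I specializes J regardless of where
    // clauses, but WhereClause(I, J) fails (T: Clone does not prove Option<T>: Copy),
    // so reuse of J's default items is not allowed.
    assert!(specializes(true, false, false, false));
    assert!(!subset(true, false));
}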

You can view the Specializes(I, J) test as being based on a partial order, where the <= predicate is the lexicographic combination of two other partial orders, Type(I, J) and WhereClause(I, J). This implies that it is transitive.

Combining with intersection impls

It’s interesting to note that this rule can also be combined with the rule for intersection impls. The idea of intersection impls is really somewhat orthogonal to what exact test is being used to decide which impl specializes another. Essentially, whereas without intersection impls we say: two impls can overlap so long as one of them specializes the other, we would now add the additional possibility that two impls can overlap so long as some other impl specializes both of them.

This is helpful for realizing some other patterns that we wanted to get out of specialization but which, until now, we could not.

Example: AsRef

We saw earlier that this new rule doesn’t allow us to add the reflexive AsRef impl that we wanted to add. However, using an intersection impl, we can make progress. We can basically add a third impl:

impl<T> AsRef<T> for T { } // impl A

impl<U, V> AsRef<V> for &U
    where U: AsRef<V> { }  // impl B

impl<W> AsRef<&W> for &W { ... } // impl C

Impl C is a specialization of both of the others, since every type it can match can also be matched by the others. So this would be accepted, since impl A and B overlap but have a common specializer.

(As an aside, you might also expect a generic transitivity impl, like impl<T,U,V> AsRef<V> for T where T: AsRef<U>. I haven’t thought much about whether such an impl would work with the specialization rules; I’m pretty sure that we’d have to improve the trait matcher implementation in any case to make it work, as I think right now it would quickly overflow.)

Example: Overlapping blanket impls for Dump

Let’s see another, more conventional example where an intersection impl might be useful. We’ll return to our Dump trait. If you recall, it had a blanket impl that implemented Dump for any type T where T: Debug:

trait Dump {
    fn display(&self);
    fn debug(&self);
}

impl<T> Dump for T // impl A
    where T: Debug,
{
    default fn display(&self) { ... }

    default fn debug(&self) {
        println!("{:?}", self);
    }
}

But we might also want another blanket impl for types where T: Display:

impl<T> Dump for T // impl B
    where T: Display,
{
    default fn display(&self) {
        println!("{}", self);
    }

    default fn debug(&self) { ... }
}

Now we have a problem. Impl A and B clearly potentially overlap, but (a) neither is more specific in terms of its types (both apply to any type T, so Type(A, B) and Type(B, A) will both hold) and (b) neither is more specific in terms of its where-clauses: one applies to types that implement Debug, and one applies to types that implement Display, but clearly types can implement both.

With intersection impls we could resolve this error by providing a third impl for types T where T: Debug + Display:

impl<T> Dump for T // impl C
    where T: Debug + Display,
{
    default fn display(&self) {
        println!("{}", self);
    }

    default fn debug(&self) {
        println!("{:?}", self);
    }
}

Orphan rules, blanket impls, and negative reasoning

Traditionally, we have said that it is considered backwards compatible (in terms of semver) to add impls for traits, with the exception of blanket impls that apply to all T, even if T is guarded by some traits (like the impls we saw for Dump in the previous section). This is because if I add an impl like impl<T: Debug> Dump for T where none existed before, some other crate may already have an impl like impl Dump for MyType, and then if MyType: Debug, we would have an overlap conflict, and hence that downstream crate will not compile (see RFC 1023 for more information on these rules).

This new proposed specialization rule has the potential to change that balance. In fact, at first you might think that adding a blanket impl would always be legal, as long as all of its members are declared default. After all, any pre-existing impl from another crate must, because of the orphan rules, have more specific types, and will thus take precedence over the default impl (moreover, since there was nothing for this impl to inherit from before, it must already define all of its items). So something like impl Dump for MyType would still be legal, right?

But there is actually still a risk from blanket impls around negative reasoning. To see what I mean, let’s continue with a simplified variant of the Dump example from the previous section which doesn’t use intersection impls. So imagine that we have the Dump trait and the following impls:

// crate `dump`
trait Dump { }
impl<T: Display> Dump for T { .. }
impl<T: Debug + Display> Dump for T { .. }

So, these are pre-existing impls. Now, imagine that in the standard library, we decided to add a kind of fallback impl of Debug that says any type which implements `Display` automatically implements `Debug`:

impl<T: Display> Debug for T {
  fn fmt(&self, fmt: &mut Formatter) -> Result<(), Error> {
    Display::fmt(self, fmt)
  }
}

Interestingly, this impl creates a problem for the crate dump! Before, its two impls were well-ordered; one applied to types that implement Display, and one applied to types that implement both Debug and Display. But with this new impl, all types that implement Display also implement Debug, so this distinction is meaningless.

But wait, you cry! That impl looks awfully similar to our motivating example from the very first post! Remember that this all started because we wanted to implement Clone for all Copy types:

impl<T: Copy> Clone for T { .. }

So is that actually illegal?

It turns out that there is a crucial difference between these two. It does not lie in the impls, but rather in the traits. In particular, the Copy trait is a subtrait of Clone – that is, anything which is copyable must also be cloneable. But Display and Debug have no relationship; in fact, the blanket impl interconverting between them is effectively imposing an undeclared subtrait relationship Display: Debug. After all, if some type T implements Display, we are now guaranteed that it also implements Debug.

So this suggests that the new rule for semver compatibility is that one can add blanket impls after the fact, but only if a subtrait relationship already existed.

As an aside, this – along with the similar example raised by withoutboats and reddit user oconnor663 – strongly suggests to me that traits need to predeclare strong relationships, like subtraits but also mutual exclusion if we ever support that, at the point when they are created. I know withoutboats has some interesting thoughts in this direction. =)

However, another possibility that aturon raised is to use a more syntactic criterion for when something is more specialized – in that case, Debug+Display would be considered more specialized than Display, even if in reality they are equivalent. This may wind up being easier to understand – and more flexible – even if it is less smart.


This post lays out an alternative specialization predicate that I believe helps to overcome a lot of the shortcomings of the current subset rule. The rule is fairly simple to describe: impls with more specific types get precedence. If the types of two impls are equally generic, then the impl with more specific where-clauses gets precedence. I claim this rule is intuitive in practice; perhaps more intuitive than the current rule.

This predicate allows for a number of scenarios that the current specialization rule excludes, but which we wanted initially. The ones I have considered mostly fall into the category of adding an impl of a supertrait in terms of a subtrait backwards compatibly:

  • impl<T: Copy> Clone for T { ... }
  • impl<T: Eq> PartialEq for T { ... }
  • impl<T: Ord> PartialOrd for T { ... }

If we combine with intersection impls, we can also accommodate the AsRef impl, and also get better support for having overlapping blanket impls. I’d be interested to hear about other cases where the coherence rules were limiting that may be affected by specialization, so we can see how they fare.

One sour note has to do with negative reasoning. Specialization based on where clauses (orthogonal to the changes proposed in this post, in fact) introduces a kind of negative reasoning that is not currently subject to the rules in RFC 1023. This implies that crates cannot add blanket impls with impunity. In particular, introducing subtrait relationships can still cause problems, which affects a number of suggested bridge cases:

  • impl<R, T: Add<R> + Clone> AddAssign<R> for T
    • anything that has Add and Clone is now AddAssign
  • impl<T: Display> Debug for T
    • anything that is Display is now Debug

There may be some room to revise the specialization rules to address this, by tweaking the WhereClause(I, J) test to be more conservative, or to be more syntactical in nature. This will require some further experimentation and tinkering.


Please leave comments in this internals thread.

Hannes VerschoreRequest for hardware

Update: I want to thank everybody that has offered a way to access such a netbook. We now have access to the needed hardware and are trying to fix the bug as soon as possible! Much appreciated.

Do you have a netbook (from around 2011) with an AMD processor? Please check whether it has a Bobcat processor (C-30, C-50, C-60, C-70, E-240, E-300, E-350, E-450). If you have one and are willing to help us by giving VPN/SSH access, please contact me (hverschore [at]

Improving stability and decreasing the crash rate is an ongoing effort for all our teams at Mozilla. That is also true for the JS team. We have fuzzers abusing our JS engine, we review each other's code in order to find bugs, we have static analyzers looking at our code, we have best practices, we look at crash-stats trying to fix the underlying bug … Lately we have identified a source of crashes in our JIT engine on specific hardware. But we haven’t been able to find a solution yet.

Our understanding of the bug is quite limited, but we know it is related to the generated code. We have tried to introduce some work-arounds to fix this issue, but none have worked yet, and the turn-around is quite slow: we have to find a possible work-around, release it to Nightly, and wait for crash-stats to see if the issue is fixed.

That is the reason for our call for hardware. We don’t have the hardware ourselves, and having access to the right hardware would let us test possible fixes much more quickly until we find a solution. It would help us a lot.

This is the first time our team has tried to leverage our community in order to find specific hardware, and I hope it works out. We have a backup plan, but we are hoping that somebody reading this can make our lives a little bit easier. We would appreciate it a lot if everybody could check whether they still have a laptop/netbook with a Bobcat AMD processor (C-30, C-50, C-60, C-70, E-240, E-300, E-350, E-450). E.g. this processor was used in the Asus Eee variant with AMD. If you do, please contact me at (hverschore [at] in order to discuss a way to access the laptop for a limited time.

François MarierTweaking Referrers For Privacy in Firefox

The Referer header has been a part of the web for a long time. Websites rely on it for a few different purposes (e.g. analytics, ads, CSRF protection) but it can be quite problematic from a privacy perspective.

Thankfully, there are now tools in Firefox to help users and developers mitigate some of these problems.


In a nutshell, the browser adds a Referer header to all outgoing HTTP requests, revealing to the server on the other end the URL of the page you were on when you placed the request. For example, it tells the server where you were when you followed a link to that site, or what page you were on when you requested an image or a script. There are, however, a few limitations to this simplified explanation.

First of all, by default, browsers won't send a referrer if you place a request from an HTTPS page to an HTTP page. This would reveal potentially confidential information (such as the URL path and query string which could contain session tokens or other secret identifiers) from a secure page over an insecure HTTP channel. Firefox will however include a Referer header in HTTPS to HTTPS transitions unless network.http.sendSecureXSiteReferrer (removed in Firefox 52) is set to false in about:config.

Secondly, using the new Referrer Policy specification web developers can override the default behaviour for their pages, including on a per-element basis. This can be used both to increase or reduce the amount of information present in the referrer.

Legitimate Uses

Because the Referer header has been around for so long, a number of techniques rely on it.

Armed with the Referer information, analytics tools can figure out:

  • where website traffic comes from, and
  • how users are navigating the site.

Another place where the Referer is useful is as a mitigation against cross-site request forgeries. In that case, a website receiving a form submission can reject that form submission if the request originated from a different website.

It's worth pointing out that this CSRF mitigation might be better implemented via a separate header that could be restricted to particularly dangerous requests (i.e. POST and DELETE requests) and only include the information required for that security check (i.e. the origin).
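As a rough sketch of that idea (framework-agnostic Rust; the function and its parameters are hypothetical, not an existing API), such a check only needs the request method and its origin, not the full referring URL:

// Hedged sketch: reject state-changing requests whose Origin does not match the
// site's own origin. `method` and `origin_header` are whatever your server
// framework exposes for the incoming request.
fn csrf_check(method: &str, origin_header: Option<&str>, allowed_origin: &str) -> bool {
    // Only guard methods that change state (the particularly dangerous requests).
    let dangerous = matches!(method, "POST" | "PUT" | "DELETE" | "PATCH");
    if !dangerous {
        return true;
    }
    match origin_header {
        Some(origin) => origin == allowed_origin,
        None => false, // be conservative when no origin information is sent
    }
}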

Problems with the Referrer

Unfortunately, this header also creates significant privacy and security concerns.

The most obvious one is that it leaks part of your browsing history to sites you visit as well as all of the resources they pull in (e.g. ads and third-party scripts). It can be quite complicated to fix these leaks in a cross-browser way.

These leaks can also lead to exposing private personally-identifiable information when it is part of the query string. One of the most high-profile examples is the accidental leakage of user searches by

Solutions for Firefox Users

While web developers can use the new mechanisms exposed through the Referrer Policy, Firefox users can also take steps to limit the amount of information they send to websites, advertisers and trackers.

In addition to enabling Firefox's built-in tracking protection by setting privacy.trackingprotection.enabled to true in about:config, which will prevent all network connections to known trackers, users can control when the Referer header is sent by setting network.http.sendRefererHeader to:

  • 0 to never send the header
  • 1 to send the header only when clicking on links and similar elements
  • 2 (default) to send the header on all requests (e.g. images, links, etc.)

It's also possible to put a limit on the maximum amount of information that the header will contain by setting the network.http.referer.trimmingPolicy to:

  • 0 (default) to send the full URL
  • 1 to send the URL without its query string
  • 2 to only send the scheme, host and port

or using the network.http.referer.XOriginTrimmingPolicy option (added in Firefox 52) to only restrict the contents of referrers attached to cross-origin requests.

Site owners can opt to share less information with other sites, but they can't share any more than what the user trimming policies allow.

Another approach is to disable the Referer when doing cross-origin requests (from one site to another). The network.http.referer.XOriginPolicy preference can be set to:

  • 0 (default) to send the referrer in all cases
  • 1 to send a referrer only when the base domains are the same
  • 2 to send a referrer only when the full hostnames match


If you try to remove all referrers (i.e. network.http.sendRefererHeader = 0), you will most likely run into problems on a number of sites, for example:

The first two have been worked around successfully by setting network.http.referer.spoofSource to true, an advanced setting which always sends the destination URL as the referrer, thereby not leaking anything about the original page.

Unfortunately, the last two are examples of the kind of breakage that can only be fixed through a whitelist (an approach supported by the smart referer add-on) or by temporarily using a different browser profile.

My Recommended Settings

As with my cookie recommendations, I recommend strengthening your referrer settings but not disabling (or spoofing) it entirely.

While spoofing does solve many of the breakage problems mentioned above, it also effectively disables the anti-CSRF protections that some sites may rely on and that have tangible user benefits. A better approach is to limit the amount of information that leaks through cross-origin requests.

If you are willing to live with some amount of breakage, you can simply restrict referrers to the same site by setting:

network.http.referer.XOriginPolicy = 2

or to sites which belong to the same organization (i.e. same ETLD/public suffix) using:

network.http.referer.XOriginPolicy = 1

This prevents leaks to third parties while giving websites all of the information that they can already see in their own server logs.

On the other hand, if you prefer a weaker but more compatible solution, you can trim cross-origin referrers down to just the scheme, hostname and port:

network.http.referer.XOriginTrimmingPolicy = 2

I have not yet found user-visible breakage using this last configuration. Let me know if you find any!

Smokey ArdissonThoughts on the Mac OS X upgrade cycle

Michael Tsai recently linked to Ricardo Mori’s lament on the unfashionable state of the Mac, quoting the following passage:

Having a mandatory new version of Mac OS X every year is not necessarily the best way to show you’re still caring, Apple. This self-imposed yearly update cycle makes less and less sense as time goes by. Mac OS X is a mature operating system and should be treated as such. The focus should be on making Mac OS X even more robust and reliable, so that Mac users can update to the next version with the same relative peace of mind as when a new iOS version comes out.

I wonder how much the mandatory yearly version cycle is due to the various iOS integration features—which, other than the assorted “bugs introduced by rewriting stuff that ‘just worked,’” seem to be the main changes in every Mac OS X (er, macOS, previously OS X) version of late.

Are these integration features so wide-ranging that they touch every part of the OS and really need an entire new version to ship safely, or are they localized enough that they could safely be released in a point update? Of course, even if they are safe to release in an update, it’s still probably easier on Apple’s part to state “To use this feature, your Mac must be running macOS 10.18 or newer, and your iOS device must be running iOS 16 or newer” instead of “To use this feature, your Mac must be running macOS 10.15.5 or newer, and your iOS device must be running iOS 16 or newer” when advising users on the availability of the feature.

At this point, as Mori mentioned, Mac OS X is a mature, stable product, and Apple doesn’t even have to sell it per se anymore (although for various reasons, they certainly want people to continue to upgrade). So even if we do have to be subjected to yearly Mac OS X releases to keep iOS integration features coming/working, it seems like the best strategy is to keep the scope of those OS releases small (iOS integration, new Safari/WebKit, a few smaller things here and there) and rock-solid (don’t rewrite stuff that works fine, fix lots of bugs that persist). I think a smaller, more scoped release also lessens the “upgrade burnout” effect—there’s less fear and teeth-gnashing over things that will be broken and never fixed each year, but there’s still room for surprise and delight in small areas, including fixing persistent bugs that people have lived with for upgrade after upgrade. (Regressions suck. Regressions that are not fixed, release after release, are an indication that your development/release process sucks or your attention to your users’ needs sucks. Neither is a very good omen.) And when there is something else new and big, perhaps it has been in development and QA for a couple of cycles so that it ships to the user solid and fully-baked.

I think the need not to have to “sell” the OS presents Apple a really unique opportunity that I can imagine some vendors would kill to have—the ability to improve the quality of the software—and thus the user experience—by focusing on the areas that need attention (whatever they may be, new features, improvements, old bugs) without having to cram in a bunch of new tentpole items to entice users to purchase the new version. Even in terms of driving adoption, lots of people will upgrade for the various iOS integration features alone, and with a few features and improved quality overall, the adoption rate could end up being very similar. Though there’s the myth that developers are only happy when they get to write new code and new features (thus the plague of rewrite-itis), I know from working on Camino that I—and, more importantly, most of our actual developers1—got enormous pleasure and satisfaction from fixing bugs in our features, especially thorny and persistent bugs. I would find it difficult to believe that Apple doesn’t have a lot of similar-tempered developers working for it, so keeping them happy without cranking out tons of brand-new code shouldn’t be overly difficult.

I just wish Apple would seize this opportunity. If we are going to continue to be saddled with yearly Mac OS X releases (for whatever reason), please, Apple, make them smaller, tighter, more solid releases that delight us in how pain-free and bug-free they are.


1 Whenever anyone would confuse me for a real developer after I’d answered some questions, my reply was “I’m not a developer; I only play one on IRC.”2 ↩︎
2 A play on the famous television commercial disclaimer, “I’m not a doctor; I only play one on TV,” attributed variously, perhaps first to Robert Young, television’s Marcus Welby, M.D. from 1969-1976.3 ↩︎
3 The nested footnotes are a tribute to former Mozilla build/release engineer J. Paul Reed (“preed” on IRC), who was quite fond of them. ↩︎

Daniel StenbergAnother wget reference was Bourne

Back in 2013, it came to light that Wget was used to copy the files Private Manning was convicted of having leaked. Around that time, EFF made and distributed stickers saying wget is not a crime.

Weirdly enough, it was hard to find a high resolution version of that image today but I’m showing you a version of it on the right side here.

In the 2016 movie Jason Bourne, Swedish actress Alicia Vikander is seen working on her laptop at around 1:16:30 into the movie and there’s a single visible sticker on that laptop. Yeps, it is for sure the same EFF sticker. There’s even a very brief glimpse of the top of the red EFF dot below the “crime” word.


Also recall the wget occurrence in The Social Network.

Yunier José Sosa VázquezUpdate for Firefox 49

Today Mozilla published a new update for its browser, this time version 49.0.2.

This release fixes small issues that some users have been running into, so we recommend updating.

You can get it from our Downloads area for Linux, Mac, Windows and Android, in Spanish and English.

Air MozillaWebdev Beer and Tell: October 2016

Once a month web developers across the Mozilla community get together (in person and virtually) to share what cool stuff we've been working on in...

QMOFirefox 51.0a2 Aurora Testday, October 28th

Hello Mozillians,

We are happy to let you know that on Friday, October 28th, we are organizing the Firefox 51.0 Aurora Testday. We’ll be focusing our testing on the following features: Zoom indicator and Downloads dropmarker.

Check out the detailed instructions via this etherpad.

No previous testing experience is required, so feel free to join us on #qa IRC channel where our moderators will offer you guidance and answer your questions.

Join us and help us make Firefox better!

See you on Friday!

Hal WineUsing Auto Increment Fields to Your Advantage


I just found, and read, Clément Delafargue’s post “Why Auto Increment Is A Terrible Idea” (via @CoreRamiro). I agree that an opaque primary key is very nice and clean from an information architecture viewpoint.

However, in practice, a serial (or monotonically increasing) key can be handy to have around. I was reminded of this during a recent situation where we (app developers & ops) needed to be highly confident that a replica was consistent before performing a failover. (None of us had access to the back end to see what the DB thought the replication lag was.)


Christian HeilmannDecoded Chats – second edition featuring Monica Dinculescu on Web Components

At SmashingConf Freiburg this year I was lucky enough to find some time to sit down with Monica Dinculescu (@notwaldorf) and chat with her about Web Components, extending the web, JavaScript dependency and how to be a lazy but dedicated developer. I’m sorry about the sound of the recording and some of the harsher cuts, but we were interrupted by tourists trying to see the great building we were in who couldn’t read the signs saying it was closed for the day.

You can see the video and get the audio recording of our chat over at the Decoded blog:

Monica saying hi

I played a bit of devil’s advocate interviewing Monica, as she has a lot of great opinions and the information to back up her point of view. It was very enjoyable seeing the current state of the web through the eyes of someone talented who just joined the party. It is far too easy for those who have been around for a long time to get stuck in a rut of trying not to break with the past, or to consider everything broken because we’ve seen too much damage over the years. Not so Monica. She is very much of the opinion that we can trust developers to do the right thing and that by giving them tools to analyse their work the web of tomorrow will be great.

I’m happy that there are people like her in our market. It is good to pass the torch to those with a lot of dedication rather than those who are happy to use whatever works.

Support.Mozilla.OrgWhat’s Up with SUMO – 20th October

Hello, SUMO Nation!

We had a bit of a break, but we’re back! First, there was the meeting in Toronto with the Lithium team about the migration (which is coming along nicely), and then I took a short holiday. I missed you all, it’s great to be back, time to see what’s up in the world of SUMO!

Welcome, new contributors!

If you just joined us, don’t hesitate – come over and say “hi” in the forums!

Contributors of the week

Don’t forget that if you are new to SUMO and someone helped you get started in a nice way you can nominate them for the Buddy of the Month!

SUMO Community meetings

  • LATEST ONE: 19th of October – you can read the notes here and see the video at AirMozilla.
  • NEXT ONE: happening on the 26th of October!
  • If you want to add a discussion topic to the upcoming meeting agenda:
    • Start a thread in the Community Forums, so that everyone in the community can see what will be discussed and voice their opinion here before Wednesday (this will make it easier to have an efficient meeting).
    • Please do so as soon as you can before the meeting, so that people have time to read, think, and reply (and also add it to the agenda).
    • If you can, please attend the meeting in person (or via IRC), so we can follow up on your discussion topic during the meeting with your feedback.




Support Forum

Knowledge Base & L10n

  • We are 3 weeks before the next release / 1 week after the current release. What does that mean? (Reminder: we are following the process/schedule outlined here.)
    • Joni will finalize next release content by the end of this week; no work for localizers for the next release yet
    • All existing content is open for editing and localization as usual; please focus on localizing the most recent / popular content
  • Migration: please check this spreadsheet to see which locales are going to be migrated in the first wave
    • Locale packages that will be migrated are marked as “match” and “needed” in the spreadsheet
    • Other locales will be stored as an archive at – and will be added whenever there are contributors ready to keep working on them
    • We are also waiting for confirmation about the mechanics of l10n, we may be launching the first version without an l10n system built in – but all the localized content and UI will be there in all the locales listed in the spreadsheet above
  • Remember the MozPizza L10n Hackathon in Brazil? Take a look here!


  • Firefox for iOS
    • No news, keep biting the apple ;-)

…Whew, that’s it for now, then! I hope you could catch up with everything… I’m still digging through my post-holiday inbox ;-) Take care, stay safe, and keep rocking the helpful web! WE <3 YOU ALL!

Cameron KaiserWe need more desktop processor branches

Ars Technica is reporting an interesting attack that uses a side-channel exploit in the Intel Haswell branch target buffer, or BTB (kindly ignore all the political crap Ars has been posting lately; I'll probably not read any more articles of theirs until after the election). The idea is to break through ASLR, or address space layout randomization, to find pieces of code one can string together or directly attack for nefarious purposes. ASLR defeats a certain class of attacks that rely on the exact address of code in memory. With ASLR, an attacker can no longer count on code being in a constant location.

Intel processors since at least the Pentium use a relatively simple BTB to aid these computations when finding the target of a branch instruction. The buffer is essentially a dictionary with virtual addresses of recent branch instructions mapping to their predicted target: if the branch is taken, the chip has the new actual address right away, and time is saved. To save space and complexity, most processors that implement a BTB only do so for part of the address (or they hash the address), which reduces the overhead of maintaining the BTB but also means some addresses will map to the same index into the BTB and cause a collision. If the addresses collide, the processor will recover, but it will take more cycles to do so. This is the key to the side-channel attack.

(For the record, the G3 and the G4 use a BTIC instead, or a branch target instruction cache, where the table actually keeps two of the target instructions so it can be executing them while the rest of the branch target loads. The G4/7450 ("G4e") extends the BTIC to four instructions. This scheme is highly beneficial because these cached instructions essentially extend the processor's general purpose caches with needed instructions that are less likely to be evicted, but is more complex to manage. It is probably for this reason the BTIC was dropped in the G5 since the idea doesn't work well with the G5's instruction dispatch groups; the G5 uses a three-level hybrid predictor which is unlike either of these schemes. Most PowerPC implementations also have a return address stack for optimizing the blr instruction. With all of these unusual features Power ISA processors may be vulnerable to a similar timing attack but certainly not in the same way and probably not as predictably, especially on the G5 and later designs.)

To get around ASLR, an attacker needs to find out where the code block of interest actually got moved to in memory. Certain attributes make kernel ASLR (KASLR) an easier nut to crack. For performance reasons usually only part of the kernel address is randomized, in open-source operating systems this randomization scheme is often known, and the kernel is always loaded fully into physical memory and doesn't get swapped out. While the location it is loaded to is also randomized, the kernel is mapped into the address space of all processes, so if you can find its address in any process you've also found it in every process. Haswell makes this even easier because all of the bits the Linux kernel randomizes are covered by the low 30 bits of the virtual address Haswell uses in the BTB index, which covers the entire kernel address range and means any kernel branch address can be determined exactly. The attacker finds branch instructions in the kernel code that service a particular system call (such as by disassembling the kernel), computes (this is feasible due to the smaller search space) all the possible locations that branch could be at, creates a "spy" function with a branch instruction positioned to try to force a BTB collision by computing to the same BTB index, executes the system call, and then executes the spy function. If the spy process (which times itself) determines its branch took longer than an average branch, it logs a hit, and the delta between ordinary execution and a BTB collision is unambiguously high (see Figure 7 in the paper). Now that you have the address of that code block branch, you can deduce the address of the entire kernel code block (because it's generally in the same page of memory due to the typical granularity of the randomization scheme), and try to get at it or abuse it. The entire process can take just milliseconds on a current CPU.
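As a hedged sketch of the collision property being exploited (the real BTB index is an undocumented subset or hash of these bits, so this only models the "low 30 bits" observation):

// If only the low 30 bits of the virtual address feed the BTB index, two branch
// addresses that agree in those bits can collide in the BTB and produce the
// measurable timing difference described above.
fn may_collide_in_btb(branch_a: u64, branch_b: u64) -> bool {
    const LOW_BITS: u64 = (1 << 30) - 1;
    (branch_a & LOW_BITS) == (branch_b & LOW_BITS)
}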

The kernel is often specifically hardened against such attacks, however, and there are more tempting targets, though they need more work. If you want to attack a user process (particularly one running as root, since that will have privileges you can subvert), you have to get your "spy" onto the same virtual core as the victim process, otherwise they won't share a BTB -- in the case of the kernel, the system call always executes on the same virtual core via context switch, but that's not the case here. This requires manipulating the OS' process scheduler or running lots of spy processes, which slows the attack but is still feasible. Also, since you won't have a kernel system call to execute, you have to get the victim to perform a particular task containing a branch instruction, and that task needs to be repeatable. Once this is done, however, the basic notion is the same. Even though only a limited number of ASLR bits can be recovered this way (remember that in Haswell's case, bits 30 and above are not used in the BTB, and full Linux ASLR uses bits 12 to 40, unlike the kernel), you can dramatically narrow the search space to the point where brute-force guessing may be possible. The whole process is certainly much more streamlined than earlier ASLR attacks, which relied on fragile things like cache timing.

As it happens, software mitigations can blunt or possibly even completely eradicate this exploit. Brute-force guessing addresses in the kernel usually leads to a crash, so anything that forces the attacker to guess the address of a victim routine in the kernel will likely cause the exploit to fail catastrophically. Get a couple of those random address bits outside the low 30 bits Haswell uses in the BTB index and bingo, a relatively simple fix. One could also make ASLR more granular, randomizing at the function, basic block or even single instruction level rather than merely randomizing the starting address of segments within the address space, though this is much more complicated. However, hardware is needed to close the gap completely. A proper hardware solution would be to use most or all of the virtual address in the BTB to reduce the possibility of a collision, and/or to add a random salt to whatever indexing or hashing function is used for BTB entries, varying from process to process, so a collision becomes less predictable. Either needs a change from Intel.

This little fable should serve to remind us that monocultures are bad. The exploit in question is viable and potentially ugly, but it can be mitigated. That's not the point: the point is that the attack, particularly upon the kernel, is made more feasible by particular details of how Haswell chips handle branching. When everything gets funneled through the same design and engineering optics and ends up with the same implementation, if someone comes up with a simple, weapons-grade exploit for a flaw in that implementation that software can't mask, we're all hosed. This is another reason why we need an auditable, powerful alternative to x86/x86_64 on the desktop. And there's only one system in that class right now.

Okay, okay, I'll stop banging you over the head with this stuff. I've got a couple more bugs under investigation that will be fixed in 45.5.0, and if you're having the issue where TenFourFox is not remembering your search engine of choice, please post your country and operating system here.

Air MozillaConnected Devices Weekly Program Update, 20 Oct 2016

Connected Devices Weekly Program Update Weekly project updates from the Mozilla Connected Devices team.

Air MozillaReps Weekly Meeting Oct. 20, 2016

Reps Weekly Meeting Oct. 20, 2016 This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

Mozilla Reps CommunityRep of the Month – September 2016

Please join us in congratulating Mijanur Rahman Rayhan, Rep of the Month for September 2016!

Mijanur is a Mozilla Rep and Tech Speaker from Sylhet, Bangladesh. With his diverse knowledge he has organized hackathons around Connected Devices and held a Web Compatibility event to find differences between browsers.


Mijanur has proved himself a very active Mozillian through his many activities and his work with different communities. With his patience and consistency in reaching his goals, he is always ready and prepared. He showed his commitment to the Reps program and his proactive spirit in the last elections by running as a nominee for the Cohort position on the Reps Council.

Be sure to follow his activities as he continues the Activate series with a Rust workshop, Dive Into Rust events, Firefox Test Pilot MozCoffees, a Web Compatibility Sprint, and a Privacy and Security seminar with the Bangladesh Police!

Gervase MarkhamNo Default Passwords

One of the big problems with IoT devices is default passwords – here’s the list coded into the malware that attacked Brian Krebs. But without a default password, you have to make each device unique and then give the randomly-generated password to the user, perhaps by putting it on a sticky label. Again, my IoT vision post suggests a better solution. If the device’s public key and a password are in an RFID tag on it, and you just swipe that over your hub, the hub can find and connect securely to the device over SSL, and then authenticate itself to the device (using the password) as the user’s real hub, with zero configuration on the part of the user. And all of this works without the need for any UI or printed label which needs to be localized. Better usability, better security, better for the internet.
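A toy Python sketch of that pairing flow, modelling only the decisions (no real RFID, TLS or cryptography; every name and value here is invented):

# Data the hub reads from the device's RFID tag when you swipe it.
tag = {"device_id": "bulb-42", "pubkey_fingerprint": "ab:cd:ef", "password": "s3cret"}

def pair(tag, tls_peer_fingerprint):
    # 1. The hub checks it is really talking to the device from the tag:
    #    the key presented over SSL must match the fingerprint on the tag.
    if tls_peer_fingerprint != tag["pubkey_fingerprint"]:
        return "abort: not the device that was swiped"
    # 2. The hub then authenticates itself to the device with the tag's password
    #    (in reality this would be sent inside the SSL session).
    return {"device": tag["device_id"], "authenticate_with": tag["password"]}

print(pair(tag, "ab:cd:ef"))        # paired, zero user configuration
print(pair(tag, "de:ad:be:ef"))     # rejected: wrong device or an impostor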

Gervase MarkhamSomeone Thought This Was A Good Idea

You know that problem where you want to label a coffee pot, but you just don’t have the right label? Technology to the rescue!


Of course, new technology does come with some disadvantages compared to the old, as well as its many advantages:


And pinch-to-zoom on the picture viewer (because that’s what it uses) does mean you can play some slightly mean tricks on people looking for their caffeine fix:


And how do you define what label the tablet displays? Easy:


Seriously, can any reader give me one single advantage this system has over a paper label?

Daniel PocockChoosing smartcards, readers and hardware for the Outreachy project

One of the projects proposed for this round of Outreachy is the PGP / PKI Clean Room live image.

Interns, and anybody who decides to start using the project (it is already functional for command line users) need to decide about purchasing various pieces of hardware, including a smart card, a smart card reader and a suitably secure computer to run the clean room image. It may also be desirable to purchase some additional accessories, such as a hardware random number generator.

If you have any specific suggestions for hardware or can help arrange any donations of hardware for Outreachy interns, please come and join us in the pki-clean-room mailing list or consider adding ideas on the PGP / PKI clean room wiki.

Choice of smart card

For standard PGP use, the OpenPGP card provides a good choice.

For X.509 use cases, such as VPN access, there are a range of choices. I recently obtained one of the SmartCard HSM cards; Card Contact were kind enough to provide me with a free sample. An interesting feature of this card is Elliptic Curve (ECC) support. More potential cards are listed on the OpenSC page here.

Choice of card reader

The technical factors to consider are most easily explained by comparing the three options – keys stored on disk, a smartcard reader without a PIN-pad, and a smartcard reader with a PIN-pad:

  • Software: free/open on disk; mostly free/open for card readers, although readers contain proprietary firmware
  • Key extraction: possible on disk; not generally possible from a smartcard
  • Passphrase compromise attack vectors: hardware or software keyloggers, phishing and user error (unsophisticated attackers) when the passphrase is entered on the computer; exploiting firmware bugs over USB (only sophisticated attackers) when a PIN-pad reader is used
  • Other factors: no extra hardware needed on disk; readers without a PIN-pad come in a small, USB-key form factor; PIN-pad readers have the largest form factor

Some are shortlisted on the GnuPG wiki and there has been recent discussion of that list on the GnuPG-users mailing list.

Choice of computer to run the clean room environment

There are a wide array of devices to choose from. Here are some principles that come to mind:

  • Prefer devices without any built-in wireless communications interfaces, or where those interfaces can be removed
  • Even better if there is no wired networking either
  • Particularly concerned users may also want to avoid devices with opaque micro-code/firmware
  • Small devices (laptops) that can be stored away easily in a locked cabinet or safe to prevent tampering
  • No hard disks required
  • Having built-in SD card readers or the ability to add them easily

SD cards and SD card readers

The SD cards are used to store the master private key, used to sign the certificates/keys on the smart cards. Multiple copies are kept.

It is a good idea to use SD cards from different vendors, preferably not manufactured in the same batch, to minimize the risk that they all fail at the same time.
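As an aside, a few lines of Python can confirm that every copy of an exported key file is byte-for-byte identical before the cards go into storage; the file paths below are invented for illustration and this is not part of the clean room image:

import hashlib

def sha256_of(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda:, 1 << 20), b""):
    return h.hexdigest()

copies = ["/media/sd-a/master-key.gpg", "/media/sd-b/master-key.gpg", "/media/sd-c/master-key.gpg"]
digests = {path: sha256_of(path) for path in copies}
print("all copies match" if len(set(digests.values())) == 1 else digests)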

For convenience, it would be desirable to use a multi-card reader:

although the software experience will be much the same if lots of individual card readers or USB flash drives are used.

Other devices

One additional idea that comes to mind is a hardware random number generator (TRNG), such as the FST-01.

Can you help with ideas or donations?

If you have any specific suggestions for hardware or can help arrange any donations of hardware for Outreachy interns, please come and join us in the pki-clean-room mailing list or consider adding ideas on the PGP / PKI clean room wiki.

Mozilla Open Design BlogNearly there

We’ve spent the past two weeks asking people around the world to think about our four refined design directions for the Mozilla brand identity. The results are in and the data may surprise you.

If you’re just joining this process, you can get oriented here and here. Our objective is to refresh our Mozilla logo and related visual assets that support our mission and make it easier for people who don’t know us to get to know us.

A reminder of the factors we’re taking into account in this phase. Data is our friend, but it is only one of several aspects to consider. In addition to the three quantitative surveys—of Mozillians, developers, and our target consumer audience—qualitative and strategic factors play an equal role. These include comments on this blog, constructive conversations with Mozillians, our 5-year strategic plan for Mozilla, and principles of good brand design.

Here is what we showed, along with a motion study, for each direction:






We asked survey respondents to rate these design directions against seven brand attributes. Five of them—Innovative, Activist, Trustworthy, Inclusive/Welcoming, Opinionated—are qualities we’d like Mozilla to be known for in the future. The other two—Unique, Appealing—are qualities required for any new brand identity to be successful.

Mozillians and developers meld minds.

Members of our Mozilla community and the developers surveyed through MDN (the Mozilla Developer Network) overwhelmingly ranked Protocol 2.0 as the best match to our brand attributes. For over 700 developers and 450 Mozillians, Protocol scored highest across 6 of 7 measures. People with a solid understanding of Mozilla feel that a design embedded with the language of the internet reinforces our history and legacy as an Internet pioneer. The link’s role in connecting people to online know-how, opportunity and knowledge is worth preserving and fighting for.


But consumers think differently.

We surveyed people making up our target audience, 400 each in the U.S., U.K., Germany, France, India, Brazil, and Mexico. They are 18- to 34-year-old active citizens who make brand choices based on values, are more tech-savvy than average, and do first-hand research before making decisions (among other factors).

We asked them first to rank order the brand attributes most important for a non-profit organization “focused on empowering people and building technology products to keep the internet healthy, open and accessible for everyone.” They selected Trustworthy and Welcoming as their top attributes. And then we also asked them to evaluate each of the four brand identity design systems against each of the seven brand attributes. For this audience, the design system that best fit these attributes was Burst.


Why would this consumer audience choose Burst? Since this wasn’t a qualitative survey, we don’t know for sure, but we surmise that the colorful design, rounded forms, and suggestion of interconnectedness felt appropriate for an unfamiliar nonprofit. It looks like a logo.


Also of note, Burst’s strategic narrative focused on what an open, healthy Internet feels and acts like, while the strategic narratives for the other design systems led with Mozilla’s role in the world. This is a signal that our targeted consumer audience, while they might not be familiar with Mozilla, may share our vision of what the Internet could and should be.

Why didn’t they rank Protocol more highly across the chosen attributes? We can make an educated guess that these consumers found it one dimensional by comparison, and they may have missed the meaning of the :// embedded in the wordmark.


Although Dino 2.0 and Flame had their fans, neither of these design directions sufficiently communicated our desired brand attributes, as proven by the quantitative survey results as well as through conversations with Mozillians and others in the design community. By exploring them, we learned a lot about how to describe and show certain facets of what Mozilla offers to the world. But we will not be pursuing either direction.

Where we go from here.

Both Protocol and Burst have merits and challenges. Protocol is distinctly Mozilla, clearly about the Internet, and it reinforces our mission that the web stay healthy, accessible, and open. But as consumer testing confirmed, it lacks warmth, humor, and humanity. From a design perspective, the visual system surrounding it is too limited.

By comparison, Burst feels fresh, modern, and colorful, and it has great potential in its 3D digital expression. As a result, it represents the Internet as a place of endless, exciting connections and possibilities, an idea reinforced by the strategic narrative. Remove the word “Mozilla,” though, and are there enough cues to suggest that it belongs to us?

Our path forward is to take the strongest aspects of Burst—its greater warmth and dimensionality, its modern feel—and apply them to Protocol. Not to Frankenstein the two together, but to design a new, final direction that builds from both. We believe we can make Protocol more relatable to a non-technical audience, and build out the visual language surrounding it to make it both harder working and more multidimensional.

Long live the link.

What do we say to Protocol’s critics who have voiced concern that Mozilla is hitching itself to an Internet language in decline? We’re doubling down on our belief in the original intent of the Internet—that people should have the ability to explore, discover and connect in an unfiltered, unfettered, unbiased environment. Our mission is dedicated to keeping that possibility alive and well.

For those who are familiar with the Protocol prompt, using the language of the Internet in our brand identity signals our resolve. For the unfamiliar, Protocol will offer an opportunity to start a conversation about who we are and what we believe. The language of the Internet will continue to be as important to building its future as it was in establishing its origin.

We’ll have initial concepts for a new, dare-we-say final design within a few weeks. To move forward, first we’ll be taking a step back. We’ll explore different graphic styles, fonts, colors, motion, and surrounding elements, making use of the design network established by our agency partner johnson banks. In the meantime, tell us what you think.

The Rust Programming Language BlogAnnouncing Rust 1.12.1

The Rust team is happy to announce the latest version of Rust, 1.12.1. Rust is a systems programming language with a focus on reliability, performance, and concurrency.

As always, you can install Rust 1.12.1 from the appropriate page on our website, or install via rustup with rustup update stable.

What’s in 1.12.1 stable

Wait… one-point-twelve-point… one?

In the release announcement for 1.12 a few weeks ago, we said:

The release of 1.12 might be one of the most significant Rust releases since 1.0.

It was true. One of the biggest changes was turning on a large compiler refactoring, MIR, which re-architects the internals of the compiler. The overall process went like this:

  • Initial MIR support landed in nightlies back in Rust 1.6.
  • While work was being done, a flag, --enable-orbit, was added so that people working on the compiler could try it out.
  • Back in October, we would always attempt to build MIR, even though it was not being used.
  • A flag was added, -Z orbit, to allow users on nightly to try and use MIR rather than the traditional compilation step (‘trans’).
  • After substantial testing over months and months, for Rust 1.12, we enabled MIR by default.
  • In Rust 1.13, MIR will be the only option.

A change of this magnitude is huge, and important. So it’s also important to do it right, and do it carefully. This is why this process took so long; we regularly tested the compiler against every crate on, we asked people to try out -Z orbit on their private code, and after six weeks of beta, no significant problems appeared. So we made the decision to keep it on by default in 1.12.

But large changes still have an element of risk, even though we tried to reduce that risk as much as possible. And so, after release, 1.12 saw a fair number of regressions that we hadn’t detected in our testing. Not all of them are directly MIR related, but when you change the compiler internals so much, it’s bound to ripple outward through everything.

Why make a point release?

Now, given that we have a six-week release cycle, and we’re halfway towards Rust 1.13, you may wonder why we’re choosing to cut a patch version of Rust 1.12 rather than telling users to just wait for the next release. We have previously said something like “point releases should only happen in extreme situations, such as a security vulnerability in the standard library.”

The Rust team cares deeply about the stability of Rust, and about our users’ experience with it. We could have told you all to wait, but we want you to know how seriously we take this stuff. We think it’s worth it to demonstrate our commitment to you by putting in the work of making a point release in this situation.

Furthermore, given that this is not security related, it’s a good time to practice actually cutting a point release. We’ve never done it before, and the release process is semi-automated but still not completely so. Having a point release in the world will also shake out any bugs in dealing with point releases in other tooling as well, like rustup. Making sure that this all goes smoothly and getting some practice going through the motions will be useful if we ever need to cut some sort of emergency point release due to a security advisory or anything else.

This is the first Rust point release since Rust 0.3.1, all the way back in 2012, and marks 72 weeks since Rust 1.0, when we established our six week release cadence along with a commitment to aggressive stability guarantees. While we’re disappointed that 1.12 had these regressions, we’re really proud of Rust’s stability and will continue to expand our efforts to ensure that it’s a platform you can rely on. We want Rust to be the most reliable programming platform in the world.

A note about testing on beta

One thing that you, as a user of Rust, can do to help us fix these issues sooner: test your code against the beta channel! Every beta release is a release candidate for the next stable release, so for the cost of an extra build in CI, you can help us know if there’s going to be some sort of problem before it hits a stable release! It’s really easy. For example, on Travis, you can use this as your .travis.yml:

language: rust
rust:
  - stable
  - beta

And you’ll test against both. Furthermore, if you’d like to make it so that any beta failure doesn’t fail your own build, do this:

matrix:
  allow_failures:
    - rust: beta

The beta build may go red, but your build will stay green.

Most other CI systems, such as AppVeyor, should support something similar. Check the documentation for your specific continuous integration product for full details.

Full details

There were nine issues fixed in 1.12.1, and all of those fixes have been backported to 1.13 beta as well.

In addition, there were four more regressions that we decided not to include in 1.12.1 for various reasons, but we’ll be working on fixing those as soon as possible as well.

You can see the full diff from 1.12.0 to 1.12.1 here.

Support.Mozilla.OrgFirefox 49 Support Release Report

This report aims to capture and explain what happened during and after the launch of Firefox 49 on multiple support fronts: Knowledge Base and localization, 1:1 social and forum support, and trending issues and reported bugs, as well as to celebrate and recognize the tremendous work the SUMO community is putting in to make sure our users experience a happy release. We have lots of ways to contribute, from Support to Social to PR; the ways you can help shape our communications program and tell the world about Mozilla are endless.

Knowledge Base and Localization

Helpfulness votes (English/US only), global views, and comments from dissatisfied users, per article:

Desktop articles (Sept. 20 – Oct. 12):
  • 76-80% helpful, 93,871 views. Comment: “No explanation of why it was removed.”
  • 61-76% helpful, 8,625 views. No comments.
  • 36-71% helpful, 11,756 views. Comment: “Didn’t address Firefox not playing YouTube tutorials”
  • 70-75% helpful, 5,147 views. Comments: “Please continue to support Firefox for Pentium III. It is not that hard to do.” / “What about those who can’t afford to upgrade their processors?”

Android article (Sept. 20 – Oct. 12): 68% helpful, 292 views. No comments.

Locale coverage per article (top 10 locales / top 20 locales):

Desktop articles (Sept. 20 – Oct. 12): 100% / 86%, 100% / 81%, 100% / 81%, 100% / 81%
Android article (Sept. 20 – Oct. 12): 100% / 71%


Support Forum Threads


Great teamwork between some top contributors


Bugs Created from Forum threads – SUMO Community
  • [Bug 1305436] Firefox 49 won’t start after installation
  • [Bug 1304848] Users report Firefox is no longer launching after the 49 update with a mozglue.dll missing error instead
  • (Contributed to) [Bug 1304360] Firefox 49 showing graphics artifacts with HWA enabled

Army Of Awesome

(by Stefan Costen -Costenslayer)

My thanks goes out to all contributors for their help in supporting everyone, from users with crashes (which can be difficult and annoying) to people thanking us. All of your hard work has been noticed and is much appreciated.

Along with Amit Roy (twitter: amitroy2779) for helping users every day.

Social Support Highlights

Brought to you by Sprinklr

Total active contributors in program ~16

Top 12 Contributors
Name Engagements
Noah 103
Magdno 69
Daniela 28
Andrew 25
Geraldo 10
Cynthia 10
Marcelo 4
Jhonatas 2
Thiago 2
Joa Paulo 1

Number of Replies:


Trending issues

Inbound, what people are clicking and asking about:


Outbound top engagement:


Thank yous from users who received SUMO help

Support Forums:

Thanks to jscher for determining how Windows and Firefox handle different video file types: Thank you post

A thank you for Noah from a user on Social: link here

Tune in next time in three weeks for Firefox 50!

Air MozillaSingularity University

Singularity University Mozilla Executive Chair Mitchell Baker's address at Singularity University's 2016 Closing Ceremony.

Air MozillaIEEE Global Connect

IEEE Global Connect Mozilla Executive Chair Mitchell Baker's address at IEEE Global Connect

Eric ShepherdFinding stuff: My favorite Firefox search keywords

One of the most underappreciated features of Firefox’s URL bar and its bookmark system is its support for custom keyword searches. These let you create special bookmarks so that you can type a keyword followed by other text, and have that text inserted into a URL identified uniquely by the keyword; that URL then gets loaded. This lets you type, for example, “quote aapl” to get a stock quote on Apple Inc.
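The mechanism is simple substitution; here is a rough Python imitation of what the browser does (the keyword and URL below are made-up placeholders, not entries from the tables further down):

from urllib.parse import quote_plus

# In a Firefox keyword bookmark the URL contains %s; {} plays that role here.
keyword_bookmarks = {"quote": "{}"}

def expand(typed):
    keyword, _, terms = typed.partition(" ")
    return keyword_bookmarks[keyword].format(quote_plus(terms))

print(expand("quote aapl"))   #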

You can check out the article I linked to previously (and here, as well, for good measure) for details on how to actually create and use keyword searches. I’m not going to go into details on that here. What I am going to do is share a few keyword searches I’ve configured that I find incredibly useful as a programmer and as a writer on MDN.

For web development

Here are the search keywords I use the most as a web developer.

Keyword Description URL
if Opens an API reference page on MDN given an interface name.
elem Opens an HTML element’s reference page on MDN.
css Opens a CSS reference page on MDN.
fx Opens the release notes for a given version of Firefox, given its version number.
mdn Searches MDN for the given term(s) using the default filters, which generally limit the search to include only pages most useful to Web developers.
mdnall Searches MDN for the given term(s) with no filters in place.

For documentation work

When I’m writing docs, I actually use the above keywords a lot, too. But I have a few more that I get a lot of use out of as well.

Keyword Description URL
bug Opens the specified bug in Mozilla’s Bugzilla instance, given a bug number.
bs Searches Bugzilla for the specified term(s).
dxr Searches the Mozilla source code on DXR for the given term(s).
file Looks for files whose name contains the specified text in the Mozilla source tree on DXR.
ident Looks for definitions of the specified identifier (such as a method or class name) in the Mozilla code on DXR.
func Searches for the definition of function(s)/method(s) with the specified name, using DXR.
t Opens the specified MDN KumaScript macro page, given the template/macro name.
wikimo Searches for the specified term(s).

Obviously, DXR is a font of fantastic information, and I suggest clicking the “Operators” button at the right end of the search bar there to see a list of the available filters; building search keywords for many of these filters can make your life vastly easier, depending on your specific needs and work habits!

Air MozillaMozFest Volunteer Health & Safety Briefing

MozFest Volunteer Health & Safety Briefing Excerpt from 2016 MozFest Volunteer Briefing on 19th October for Health and Safety

Air MozillaThe Joy of Coding - Episode 76

The Joy of Coding - Episode 76 mconley livehacks on real Firefox bugs while thinking aloud.

Yunier José Sosa VázquezNew Firefox version arrives with improvements to video playback and much more

Last Tuesday, September 19, Mozilla released a new version of its browser, and we immediately shared its new features and download links with you. We apologize to everyone for any inconvenience this may have caused.

What’s new

The password manager has been updated to allow HTTPS pages to use stored HTTP credentials. This is one more way to support Let’s Encrypt and help users transition to a more secure web.

Reader Mode has gained several features that improve reading and listening: controls to adjust the width and line spacing of the text, and narration, where the browser reads the page content aloud. These features will undoubtedly improve the experience for people with visual impairments.

Reader Mode now includes additional controls and read-aloud narration

The HTML5 audio and video player can now play files at different speeds (0.5x, Normal, 1.25x, 1.5x, 2x) and loop them indefinitely. Along the same lines, video playback performance has been improved for users on systems that support SSSE3 instructions without hardware acceleration.

Firefox Hello, the video-call and chat communication system, has been removed due to low usage. Mozilla will nevertheless continue to develop and improve WebRTC.

Support has ended for OS X 10.6, 10.7 and 10.8, and for Windows systems running SSE-only processors.

For developers

  • Added the Cause column to the Network Monitor, showing what triggered each network request.
  • Introduced the Web Speech synthesis API.

For Android

  • Added an offline page view mode, so you can view some pages even without Internet access.
  • Added a tour of fundamental features such as Reader Mode and Sync to the First Run page.
  • Introduced the Spanish (Chile) (es-CL) and Norwegian (nn-NO) localizations.
  • The look and behavior of tabs has been updated, and now:
    • Old tabs are hidden when the restore tabs option is set to “Always restore”.
    • Scroll position and zoom level are remembered for open tabs.
    • Media controls have been updated to avoid sound coming from multiple tabs at the same time.
    • Visual improvements to how favicons are displayed.

Other news

  • Improvements to the about:memory page to report memory used by fonts.
  • Re-enabled the default for font shaping via Graphite2.
  • Improved performance on Windows and OS X systems without hardware acceleration.
  • Several security fixes.

If you prefer to see the full list of changes, you can head over to the release notes (in English).

You can get this version from our Downloads section, in Spanish and English, for Android, Linux, Mac and Windows. If you liked it, please share this news with your friends on social networks. Don’t hesitate to leave us a comment.

Gervase MarkhamSecurity Updates Not Needed

As Brian Krebs is discovering, a large number of internet-connected devices with bad security can really ruin your day. Therefore, a lot of energy is being spent thinking about how to solve the security problems of the Internet of Things. Most of it is focussed on how we can make sure that these devices get regular security updates, and how to align the incentives to achieve that. And it’s difficult, because cheap IoT devices are cheap, and manufacturers make more money building the next thing than fixing the previous one.

Perhaps, instead of trying to make water flow uphill, we should be taking a different approach. How can we design these devices such that they don’t need any security updates for their lifetime?

One option would be to make them perfect first time. Yeah, right.

Another option would be the one from my blog post, An IoT Vision. In that post, I outlined a world where IoT devices’ access to the Internet is always mediated through a hub. This has several advantages, including the ability to inspect all traffic and the ability to write open source drivers to control the hardware. But one additional outworking of this design decision is that the devices are not Internet-addressable, and cannot send packets directly to the Internet on their own account. If that’s so, it’s much harder to compromise them and much harder to do anything evil with them if you do. At least, evil things affecting the rest of the net. And if that’s not sufficient, the hub itself can be patched to forbid patterns of access necessary for attacks.
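A toy sketch of the kind of egress policy such a hub could enforce (plain Python, invented device names and destinations, no real networking):

# Per-device allow-lists: the only destinations each device may ever reach.
POLICY = {
    "thermostat-1": {("", 443)},
    "camera-2": {("", 443)},

def hub_forwards(device, dest_host, dest_port):
    # Devices are not Internet-addressable; every outbound packet passes this check.
    return (dest_host, dest_port) in POLICY.get(device, set())

print(hub_forwards("thermostat-1", "", 443))   # True
print(hub_forwards("camera-2", "", 6667))         # False: not on the allow-list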

Can we fix IoT security not by making devices secure, but by hiding them from attacks?

Gervase MarkhamWoSign and StartCom

One of my roles at Mozilla is that I’m part of the Root Program team, which manages the list of trusted Certificate Authorities (CAs) in Firefox and Thunderbird. And, because we run our program in an open and transparent manner, other entities often adopt our trusted list.

In that connection, I’ve recently been the lead investigator into the activities of a Certificate Authority (CA) called WoSign, and a connected CA called StartCom, who have been acting in ways contrary to those expected of a trusted CA. The whole experience has been really interesting, but I’ve not seen a good moment to blog about it. Now that a decision has been taken on how to move forward, it seems like a good time.

The story started in late August, when Google notified Mozilla about some issues with how WoSign was conducting its operations, including various forms of what seemed to be certificate misissuance. We wrote up the three most serious of those for public discussion. WoSign issued a response to that document.

Further issues were pointed out in discussion, and via the private investigations of various people. That led to a longer, curated issues list and much more public discussion. WoSign, in turn, produced a more comprehensive response document, and a “final statement” later.

One or two of the issues on the list turned out to be not their fault, a few more were minor, but several were major – and their attempts to explain them often only led to more issues, or to a clearer understanding of quite how wrong things had gone. On at least one particular issue, the question of whether they were deliberately back-dating certificates using an obsolete cryptographic algorithm (called “SHA-1”) to get around browser blocks on it, we were pretty sure that WoSign was lying.

Around that time, we privately discovered a couple of certificates which had been mis-issued by the CA StartCom, but with WoSign fingerprints all over the “style”. Up to this point, the focus had been on WoSign, and StartCom was only involved because WoSign had bought them and didn’t disclose it as they should have done. I started putting together the narrative. The result of those further investigations was a 13-page report which conclusively proved that WoSign had been intentionally back-dating certificates to avoid browser-based restrictions on SHA-1 cert issuance.

The report proposed a course of action including a year’s dis-trust for both CAs. At that point, Qihoo 360 (the Chinese megacorporation which is the parent of WoSign and StartCom) requested a meeting with Mozilla, which was held in Mozilla’s London office, and attended by two representatives of Qihoo, and one each from StartCom and WoSign. At that meeting, WoSign’s CEO admitted to intentionally back-dating SHA-1 certificates, as our investigation had discovered. The representatives of Qihoo 360 wanted to know whether it would be possible to disentangle StartCom from WoSign and then treat it separately. Mozilla representatives gave advice on the route which might most likely achieve this, but said that any plan would be subject to public discussion.

WoSign then produced another updated report which included their admissions, and which outlined a plan to split StartCom out from under WoSign and change the management, which was then repeated by StartCom in their remediation plan. However, based on the public discussion, the Mozilla CA Certificates module owner Kathleen Wilson decided that it was appropriate to mostly treat StartCom and WoSign together, although StartCom has an opportunity for quicker restitution than WoSign.

And that’s where we are now :-) StartCom and WoSign will no longer be trusted in Mozilla’s root store for certs issued after 21st October (although it may take some time to implement that decision).

Christian HeilmannDecoded Chats – first edition live on the Decoded Blog

Over the last few weeks I was busy recording interviews with different exciting people of the web. Now I am happy to announce that the first edition of Decoded Chats is live on the new Decoded Blog.

Decoded Chats - Chris interviewing Rob Conery

In this first edition, I’m interviewing Rob Conery about his “Imposter Handbook”. We cover the issues of teaching development, how to deal with a constantly changing work environment and how to tackle diversity and integration.

We’ve got eight more interviews ready and more lined up. Amongst the people I talked to are Sarah Drasner, Monica Dinculescu, Ada-Rose Edwards, Una Kravets and Chris Wilson. The format of Decoded Chats is pretty open: interviews ranging from 15 minutes to 50 minutes about current topics on the web, trends and ideas with the people who came up with them.

Some are recorded in a studio (when I am in Seattle), others are Skype calls and yet others are off-the-cuff recordings at conferences.

Do you know anyone you’d like me to interview? Drop me a line on Twitter @codepo8 and I see what I can do :)

Aki Sasakiscriptworker 0.8.1 and 0.7.1

Tl;dr: I just shipped scriptworker 0.8.1 (changelog) (github) (pypi) and scriptworker 0.7.1 (changelog) (github) (pypi)
These are patch releases, and are currently the only versions of scriptworker that work.

scriptworker 0.8.1

The json, embedded in the Azure XML, now contains a new property, hintId. Ideally this wouldn't have broken anything, but I was using that json dict as kwargs, rather than explicitly passing taskId and runId. This means that older versions of scriptworker no longer successfully poll for tasks.
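To make the failure mode concrete, here is a minimal Python sketch; claim_task and its message fields are invented stand-ins for illustration, not scriptworker's real API:

def claim_task(taskId, runId):
    print("claiming", taskId, "run", runId)

# The decoded json from the Azure message, which now carries the extra hintId.
message = {"taskId": "abc123", "runId": 0, "hintId": "new-field"}

# Old approach: splat the whole dict as keyword arguments. Any new property breaks it:
#   claim_task(**message)  ->  TypeError: unexpected keyword argument 'hintId'

# Fixed approach: pass only the fields that are actually needed.
claim_task(message["taskId"], message["runId"])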

This is now fixed in scriptworker 0.8.1.

scriptworker 0.7.1

Scriptworker 0.8.0 made some non-backwards-compatible changes to its config format, and there may be more such changes in the near future. To simplify things for other people working on scriptworker, I suggested they stay on 0.7.0 for the time being if they wanted to avoid the churn.

To allow for this, I created a 0.7.x branch and released 0.7.1 off of it. Currently, 0.8.1 and 0.7.1 are the only two versions of scriptworker that will successfully poll Azure for tasks.


Mike RatcliffeRunning ESLint in Atom for Mozilla Development

Due to some recent changes in the way that we use ESLint to check our coding style, linting Mozilla source code in Atom has been broken for a month or two.

I have recently spent some time working on Atom's linter-eslint plugin making it possible to bring all of that linting goodness back to life!

From the root of the project type:

./mach eslint --setup

Install the linter-eslint package v8.0.0 or above. Then go to the package settings and enable the following options:

Eslint Settings

Once done, you should see errors and warnings as shown in the screenshot below:

Eslint in the Atom Editor

Air MozillaMozFest 2016 Brown Bag

MozFest 2016 Brown Bag MozFest 2016 Brown Bag - October 18th, 2016 - 16:00 London

Mozilla Security BlogPhasing Out SHA-1 on the Public Web

An algorithm we’ve depended on for most of the life of the Internet — SHA-1 — is aging, due to both mathematical and technological advances. Digital signatures incorporating the SHA-1 algorithm may soon be forgeable by sufficiently-motivated and resourceful entities.

Via our and others’ work in the CA/Browser Forum, following our deprecation plan announced last year and per recommendations by NIST, issuance of SHA-1 certificates mostly halted for the web last January, with new certificates moving to more secure algorithms. Since May 2016, the use of SHA-1 on the web fell from 3.5% to 0.8% as measured by Firefox Telemetry.

In early 2017, Firefox will show an overridable “Untrusted Connection” error whenever a SHA-1 certificate is encountered that chains up to a root certificate included in Mozilla’s CA Certificate Program. SHA-1 certificates that chain up to a manually-imported root certificate, as specified by the user, will continue to be supported by default; this will continue allowing certain enterprise root use cases, though we strongly encourage everyone to migrate away from SHA-1 as quickly as possible.

This policy has been included as an option in Firefox 51, and we plan to gradually ramp up its usage. Firefox 51 is currently in Developer Edition, and is scheduled for release in January 2017. We intend to enable this deprecation of SHA-1 SSL certificates for a subset of Beta users during the beta phase for 51 (beginning November 7) to evaluate the impact of the policy on real-world usage. As we gain confidence, we’ll increase the number of participating Beta users. Once Firefox 51 is released in January, we plan to proceed the same way, starting with a subset of users and eventually disabling support for SHA-1 certificates from publicly-trusted certificate authorities in early 2017.
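For site operators wondering whether they are affected, here is a rough, unofficial Python sketch that inspects a server's leaf certificate; it uses the third-party cryptography package, checks only the leaf rather than the full chain, and the host name is a placeholder:

import ssl
from cryptography import x509
from cryptography.hazmat.backends import default_backend

host = ""  # placeholder: replace with the site you want to check

# Fetch the server's leaf certificate as PEM and parse it.
pem = ssl.get_server_certificate((host, 443))
cert = x509.load_pem_x509_certificate(pem.encode("ascii"), default_backend())

# Prints e.g. "sha256" for a modern certificate, "sha1" for an affected one.
print(host, "signature hash:",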

Questions about SHA-1 based certificates should be directed to the forum.

Christian Heilmanncrossfit.js

Also on Medium, in case you want to comment.

Rey Bango telling you to do it

When I first heard about Crossfit, I thought it to be an excellent idea. I still do, to be fair:

  • Short, very focused and intense workouts instead of time consuming exercise schedules
  • No need for expensive and complex equipment; it is basically running and lifting heavy things
  • A lot of the workouts use your own body weight instead of extra equipment
  • A strong focus on good nutrition. Remove the stuff that is fattening and concentrate on what’s good for you

In essence, it sounded like the counterpoint to overly complex and expensive workouts we did before. You didn’t need expensive equipment. Some bars, ropes and tyres will do. There was also no need for a personal trainer, tailor-made outfits and queuing up for machines to be ready for you at the gym.

Fast forward a few years and you’ll see that we made Crossfit almost a running joke. You have overly loud Crossfit bros crashing weights in the gym, grunting and shouting and telling each other to “feel the burn” and “when you haven’t thrown up you haven’t worked out hard enough”. You have all kind of products branded Crossfit and even special food to aid your Crossfit workouts.

Thanks, commercialism and marketing. You made something simple and easy annoying and elitist again. There was no need for that.

One thing about Crossfit is that it can be dangerous. Without good supervision by friends it is pretty easy to seriously injure yourself. It is about moderation, not about competition.

I feel the same thing happened to JavaScript and it annoys me. JavaScript used to be an add-on to what we did on the web. It gave extra functionality and made it easier for our end users to finish the tasks they came for. It was a language to learn, not a lifestyle to subscribe to.

Nowadays JavaScript is everything. Client side use is only a small part of it. We use it to power servers, run tasks, define build processes and create fat client software. And everybody has an opinionated way to use it and is quick to tell others off for “not being professional” if they don’t subscribe to it. The brogrammer way of life rears its ugly head.

Let’s think of JavaScript like Crossfit was meant to be. Lean, healthy exercise going back to what’s good for you:

  • Use your body weight – on the client, if something can be done with HTML, let’s do it with HTML. When we create HTML with JavaScript, let’s create what makes sense, not lots of DIVs.
  • Do the heavy lifting – JavaScript is great to make complex tasks easier. Use it to create simpler interfaces with fewer reloads. Change user input that was valid but not in the right format. Use task runners to automate annoying work. However, if you realise that the task is a nice to have and not a need, remove it instead. Use worker threads to do heavy computation without clobbering the main UI.
  • Watch what you consume – keep dependencies to a minimum and make sure that what you depend on is reliable, safe to use and update-able.
  • Run a lot – performance is the most important part. Keep your solutions fast and lean.
  • Stick to simple equipment – it is not about how many “professional tools” we use. It is about keeping it easy for people to start working out.
  • Watch your calories – we have a tendency to use too much on the web. Libraries, polyfills, frameworks. Many of these make our lives easier but weigh heavy on our end users. It’s important to understand that our end users don’t have our equipment. Test your products on a cheap Android on a flaky connection, remove what isn’t needed and make it better for everyone.
  • Eat good things – browsers are evergreen and upgrade continuously these days. There are a lot of great features to use to make your products better. Visit “Can I use” early and often and play with new things that replace old cruft.
  • Don’t be a code bro – nobody is impressed with louts that constantly tell people off for not being as fit as they are. Be a code health advocate and help people get into shape instead.

JavaScript is much bigger these days than a language to learn in a day. That doesn’t mean, however, that every new developer needs to know the whole stack to be a useful contributor. Let’s keep it simple and fun.

QMOFirefox 50 Beta 7 Testday Results

Hello Mozillians!

As you may already know, last Friday – October 14th – we held a new Testday event, for Firefox 50 Beta 7.

Thank you all for helping us make Mozilla a better place – Onek Jude, Sadamu Samuel, Moin Shaikh, Suramya, ss22ever22 and Ilse Macías.

From Bangladesh: Maruf Rahman, Md.Rahimul Islam, Sayed Ibn Masud, Abdullah Al Jaber Hridoy, Zayed News, Md Arafatul Islam, Raihan Ali, Md.Majedul islam, Tariqul Islam Chowdhury, Shahrin Firdaus, Md. Nafis Fuad, Sayed Mahmud, Maruf Hasan Hridoy, Md. Almas Hossain, Anmona Mamun Monisha, Aminul Islam Alvi, Rezwana Islam Ria, Niaz Bhuiyan Asif, Nazmul Hassan, Roy Ayers, Farhadur Raja Fahim, Sauradeep Dutta, Sajedul Islam, মাহফুজা হুমায়রা মোহনা.

A big thank you goes out to all our active moderators too!


  • there were 4 verified bugs
  • all the tests performed on Flash 23 were marked as PASS, and 1 new possible issue was found on the New Awesome Bar feature that needs to be investigated.

Keep an eye on QMO for upcoming events!

Nicholas NethercoteHow to speed up the Rust compiler

Rust is a great language, and Mozilla plans to use it extensively in Firefox. However, the Rust compiler (rustc) is quite slow and compile times are a pain point for many Rust users. Recently I’ve been working on improving that. This post covers how I’ve done this, and should be of interest to anybody else who wants to help speed up the Rust compiler. Although I’ve done all this work on Linux it should be mostly applicable to other platforms as well.

Getting the code

The first step is to get the rustc code. First, I fork the main Rust repository on GitHub. Then I make two local clones: a base clone that I won’t modify, which serves as a stable comparison point (rust0), and a second clone where I make my modifications (rust1). I use commands something like this:

for r in rust0 rust1 ; do
  cd ~/moz
  git clone$user/rust $r
  cd $r
  git remote add upstream
  git remote set-url origin$user/rust  # assumption: switch origin to the SSH push URL
done

Building the Rust compiler

Within the two repositories, I first configure:

./configure --enable-optimize --enable-debuginfo

I configure with optimizations enabled because that matches release versions of rustc. And I configure with debug info enabled so that I get good information from profilers.

Then I build:

RUSTFLAGS='' make -j8

[Update: I previously had -Ccodegen-units=8 in RUSTFLAGS because it speeds up compile times. But Lars Bergstrom informed me that it can slow down the resulting program significantly. I measured and he was right — the resulting rustc was about 5–10% slower. So I’ve stopped using it now.]

That does a full build, which does the following:

  • Downloads a stage0 compiler, which will be used to build the stage1 local compiler.
  • Builds LLVM, which will become part of the local compilers.
  • Builds the stage1 compiler with the stage0 compiler.
  • Builds the stage2 compiler with the stage1 compiler.

It can be mind-bending to grok all the stages, especially with regards to how libraries work. (One notable example: the stage1 compiler uses the system allocator, but the stage2 compiler uses jemalloc.) I’ve found that the stage1 and stage2 compilers have similar performance. Therefore, I mostly measure the stage1 compiler because it’s much faster to just build the stage1 compiler, which I do with the following command.

RUSTFLAGS='-Ccodegen-units=8' make -j8 rustc-stage1

Building the compiler takes a while, which isn’t surprising. What is more surprising is that rebuilding the compiler after a small change also takes a while. That’s because a lot of code gets recompiled after any change. There are two reasons for this.

  • Rust’s unit of compilation is the crate. Each crate can consist of multiple files. If you modify a crate, the whole crate must be rebuilt. This isn’t surprising.
  • rustc’s dependency checking is very coarse. If you modify a crate, every other crate that depends on it will also be rebuilt, no matter how trivial the modification. This surprised me greatly. For example, any modification to the parser (which is in a crate called libsyntax) causes multiple other crates to be recompiled, a process which takes 6 minutes on my fast desktop machine. Almost any change to the compiler will result in a rebuild that takes at least 2 or 3 minutes.

Incremental compilation should greatly improve the dependency situation, but it’s still in an experimental state and I haven’t tried it yet.

To run all the tests I do this (after a full build):

ulimit -c 0 && make check

The checking aborts if you don’t do the ulimit, because the tests produce lots of core files and you don’t want them to swamp your disk.

The build system is complex, with lots of options. This command gives a nice overview of some common invocations:

make tips

Basic profiling

The next step is to do some basic profiling. I like to be careful about which rustc I am invoking at any time, especially if there’s a system-wide version installed, so I avoid relying on PATH and instead define some environment variables like this:

export RUSTC01="$HOME/moz/rust0/x86_64-unknown-linux-gnu/stage1/bin/rustc"
export RUSTC02="$HOME/moz/rust0/x86_64-unknown-linux-gnu/stage2/bin/rustc"
export RUSTC11="$HOME/moz/rust1/x86_64-unknown-linux-gnu/stage1/bin/rustc"
export RUSTC12="$HOME/moz/rust1/x86_64-unknown-linux-gnu/stage2/bin/rustc"

In the examples that follow I will use $RUSTC01 as the version of rustc that I invoke.

rustc has the ability to produce some basic stats about the time and memory used by each compiler pass. It is enabled with the -Ztime-passes flag. If you are invoking rustc directly you’d do it like this:

$RUSTC01 -Ztime-passes

If you are building with Cargo you can instead do this:

RUSTC=$RUSTC01 cargo rustc -- -Ztime-passes

The RUSTC= part tells Cargo you want to use a non-default rustc, and the part after the -- is flags that will be passed to rustc when it builds the final crate. (A bit weird, but useful.)

Here is some sample output from -Ztime-passes:

time: 0.056; rss: 49MB parsing
time: 0.000; rss: 49MB recursion limit
time: 0.000; rss: 49MB crate injection
time: 0.000; rss: 49MB plugin loading
time: 0.000; rss: 49MB plugin registration
time: 0.103; rss: 87MB expansion
time: 0.000; rss: 87MB maybe building test harness
time: 0.002; rss: 87MB maybe creating a macro crate
time: 0.000; rss: 87MB checking for inline asm in case the target doesn't support it
time: 0.005; rss: 87MB complete gated feature checking
time: 0.008; rss: 87MB early lint checks
time: 0.003; rss: 87MB AST validation
time: 0.026; rss: 90MB name resolution
time: 0.019; rss: 103MB lowering ast -> hir
time: 0.004; rss: 105MB indexing hir
time: 0.003; rss: 105MB attribute checking
time: 0.003; rss: 105MB language item collection
time: 0.004; rss: 105MB lifetime resolution
time: 0.000; rss: 105MB looking for entry point
time: 0.000; rss: 105MB looking for plugin registrar
time: 0.015; rss: 109MB region resolution
time: 0.002; rss: 109MB loop checking
time: 0.002; rss: 109MB static item recursion checking
time: 0.060; rss: 109MB compute_incremental_hashes_map
time: 0.000; rss: 109MB load_dep_graph
time: 0.021; rss: 109MB type collecting
time: 0.000; rss: 109MB variance inference
time: 0.038; rss: 113MB coherence checking
time: 0.126; rss: 114MB wf checking
time: 0.219; rss: 118MB item-types checking
time: 1.158; rss: 125MB item-bodies checking
time: 0.000; rss: 125MB drop-impl checking
time: 0.092; rss: 127MB const checking
time: 0.015; rss: 127MB privacy checking
time: 0.002; rss: 127MB stability index
time: 0.011; rss: 127MB intrinsic checking
time: 0.007; rss: 127MB effect checking
time: 0.027; rss: 127MB match checking
time: 0.014; rss: 127MB liveness checking
time: 0.082; rss: 127MB rvalue checking
time: 0.145; rss: 161MB MIR dump
 time: 0.015; rss: 161MB SimplifyCfg
 time: 0.033; rss: 161MB QualifyAndPromoteConstants
 time: 0.034; rss: 161MB TypeckMir
 time: 0.001; rss: 161MB SimplifyBranches
 time: 0.006; rss: 161MB SimplifyCfg
time: 0.089; rss: 161MB MIR passes
time: 0.202; rss: 161MB borrow checking
time: 0.005; rss: 161MB reachability checking
time: 0.012; rss: 161MB death checking
time: 0.014; rss: 162MB stability checking
time: 0.000; rss: 162MB unused lib feature checking
time: 0.101; rss: 162MB lint checking
time: 0.000; rss: 162MB resolving dependency formats
 time: 0.001; rss: 162MB NoLandingPads
 time: 0.007; rss: 162MB SimplifyCfg
 time: 0.017; rss: 162MB EraseRegions
 time: 0.004; rss: 162MB AddCallGuards
 time: 0.126; rss: 164MB ElaborateDrops
 time: 0.001; rss: 164MB NoLandingPads
 time: 0.012; rss: 164MB SimplifyCfg
 time: 0.008; rss: 164MB InstCombine
 time: 0.003; rss: 164MB Deaggregator
 time: 0.001; rss: 164MB CopyPropagation
 time: 0.003; rss: 164MB AddCallGuards
 time: 0.001; rss: 164MB PreTrans
time: 0.182; rss: 164MB Prepare MIR codegen passes
 time: 0.081; rss: 167MB write metadata
 time: 0.590; rss: 177MB translation item collection
 time: 0.034; rss: 180MB codegen unit partitioning
 time: 0.032; rss: 300MB internalize symbols
time: 3.491; rss: 300MB translation
time: 0.000; rss: 300MB assert dep graph
time: 0.000; rss: 300MB serialize dep graph
 time: 0.216; rss: 292MB llvm function passes [0]
 time: 0.103; rss: 292MB llvm module passes [0]
 time: 4.497; rss: 308MB codegen passes [0]
 time: 0.004; rss: 308MB codegen passes [0]
time: 5.185; rss: 308MB LLVM passes
time: 0.000; rss: 308MB serialize work products
time: 0.257; rss: 297MB linking

As far as I can tell, the indented passes are sub-passes, and the parent pass is the first non-indented pass afterwards.

More serious profiling

The -Ztime-passes flag gives a good overview, but you really need a profiling tool that gives finer-grained information to get far. I’ve done most of my profiling with two Valgrind tools, Cachegrind and DHAT. I invoke Cachegrind like this:

valgrind \
 --tool=cachegrind --cache-sim=no --branch-sim=yes \
 --cachegrind-out-file=$OUTFILE $RUSTC01 ...

where $OUTFILE specifies an output filename. I find the instruction counts measured by Cachegrind to be highly useful; the branch simulation results are occasionally useful, and the cache simulation results are almost never useful.

The Cachegrind output looks like this:

22,153,170,953 PROGRAM TOTALS

         Ir file:function
923,519,467 /build/glibc-GKVZIf/glibc-2.23/malloc/malloc.c:_int_malloc
879,700,120 /home/njn/moz/rust0/src/rt/miniz.c:tdefl_compress
629,196,933 /build/glibc-GKVZIf/glibc-2.23/malloc/malloc.c:_int_free
394,687,991 ???:???
379,869,259 /home/njn/moz/rust0/src/libserialize/
376,921,973 /build/glibc-GKVZIf/glibc-2.23/malloc/malloc.c:malloc
263,083,755 /build/glibc-GKVZIf/glibc-2.23/string/::/sysdeps/x86_64/multiarch/memcpy-avx-unaligned.S:__memcpy_avx_unaligned
257,219,281 /home/njn/moz/rust0/src/libserialize/<serialize::opaque::Decoder<'a> as serialize::serialize::Decoder>::read_usize
217,838,379 /build/glibc-GKVZIf/glibc-2.23/malloc/malloc.c:free
217,006,132 /home/njn/moz/rust0/src/librustc_back/
211,098,567 ???:llvm::SelectionDAG::Combine(llvm::CombineLevel, llvm::AAResults&, llvm::CodeGenOpt::Level)
185,630,213 /home/njn/moz/rust0/src/libcore/hash/<rustc_incremental::calculate_svh::hasher::IchHasher as core::hash::Hasher>::write
171,360,754 /home/njn/moz/rust0/src/librustc_data_structures/<rustc::ty::subst::Substs<'tcx> as core::hash::Hash>::hash
150,026,054 ???:llvm::SelectionDAGISel::SelectCodeCommon(llvm::SDNode*, unsigned char const*, unsigned int)

Here “Ir” is short for “I-cache reads”, which corresponds to the number of instructions executed. Cachegrind also gives line-by-line annotations of the source code.
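To compare runs of the two compilers, a small, hypothetical helper can diff the PROGRAM TOTALS lines of two annotated outputs like the one above (the file names are made up, and the script only looks at the first column, Ir):

import re

def program_totals(path):
    # Return the first column (Ir) of the "PROGRAM TOTALS" line.
    with open(path) as f:
        for line in f:
            if "PROGRAM TOTALS" in line:
                return int(re.match(r"\s*([\d,]+)", line).group(1).replace(",", ""))
    raise ValueError("no PROGRAM TOTALS line in " + path)

before = program_totals("cgann-rust0.txt")   # hypothetical: annotated output for $RUSTC01
after  = program_totals("cgann-rust1.txt")   # hypothetical: annotated output for $RUSTC11
print("Ir: {:,} -> {:,} ({:+.2f}%)".format(before, after, 100.0 * (after - before) / before))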

The Cachegrind results indicate that malloc and free are usually the two hottest functions in the compiler. So I also use DHAT, which is a malloc profiler that tells you exactly where all your malloc calls are coming from.  I invoke DHAT like this:

/home/njn/grind/ws3/vg-in-place \
 --tool=exp-dhat --show-top-n=1000 --num-callers=4 \
 --sort-by=tot-blocks-allocd $RUSTC01 ... 2> $OUTFILE

I sometimes also use --sort-by=tot-bytes-allocd. DHAT’s output looks like this:

==16425== -------------------- 1 of 1000 --------------------
==16425== max-live: 30,240 in 378 blocks
==16425== tot-alloc: 20,866,160 in 260,827 blocks (avg size 80.00)
==16425== deaths: 260,827, at avg age 113,438 (0.00% of prog lifetime)
==16425== acc-ratios: 0.74 rd, 1.00 wr (15,498,021 b-read, 20,866,160 b-written)
==16425== at 0x4C2BFA6: malloc (vg_replace_malloc.c:299)
==16425== by 0x5AD392B: <syntax::ptr::P<T> as serialize::serialize::Decodable>::decode (
==16425== by 0x5AD4456: <core::iter::Map<I, F> as core::iter::iterator::Iterator>::next (
==16425== by 0x5AE2A52: rustc_metadata::decoder::<impl rustc_metadata::cstore::CrateMetadata>::get_attributes (
==16425== -------------------- 2 of 1000 --------------------
==16425== max-live: 1,360 in 17 blocks
==16425== tot-alloc: 10,378,160 in 129,727 blocks (avg size 80.00)
==16425== deaths: 129,727, at avg age 11,622 (0.00% of prog lifetime)
==16425== acc-ratios: 0.47 rd, 0.92 wr (4,929,626 b-read, 9,599,798 b-written)
==16425== at 0x4C2BFA6: malloc (vg_replace_malloc.c:299)
==16425== by 0x881136A: <syntax::ptr::P<T> as core::clone::Clone>::clone (
==16425== by 0x88233A7: syntax::ext::tt::macro_parser::parse (
==16425== by 0x8812E66: syntax::tokenstream::TokenTree::parse (

The “deaths” value here indicates the total number of calls to malloc for each call stack, which is usually the metric of most interest. The “acc-ratios” value can also be interesting, especially if the “rd” value is 0.00, because that indicates the allocated blocks are never read. (See below for examples of problems that I found this way.)

For both profilers I also pipe $OUTFILE through eddyb’s script which demangles ugly Rust symbols like this:


to something much nicer, like this:

<serialize::opaque::Decoder<'a> as serialize::serialize::Decoder>::read_usize

For programs that use Cargo, sometimes it’s useful to know the exact rustc invocations that Cargo uses. Find out with either of these commands:

RUSTC=$RUSTC01 cargo build -v
RUSTC=$RUSTC01 cargo rustc -v

I also have done a decent amount of ad hoc println profiling, where I insert println! calls in hot parts of the code and then I use a script to post-process them. This can be very useful when I want to know exactly how many times particular code paths are hit.
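
To make that concrete, here is a minimal sketch of the ad hoc println! approach (the function name and the PROFILE tag are invented for this illustration, not taken from rustc): each hit of the hot path prints a tagged line, and a post-processing step counts the tags.

// Hypothetical example, not rustc code: tag the output of a hot code path
// so a script (or something like `grep PROFILE | sort | uniq -c`) can
// count the hits afterwards.
fn process_item(name: &str) {
    println!("PROFILE process_item {}", name);
    // ... the real work of the hot path would go here ...
}

fn main() {
    for name in &["foo", "bar", "foo"] {
        process_item(name);
    }
}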

I’ve also tried perf. It works, but I’ve never established much of a rapport with it. YMMV. In general, any profiler that works with C or C++ code should also work with Rust code.

Finding suitable benchmarks

Once you know how you’re going to profile you need some good workloads. You could use the compiler itself, but it’s big and complicated and reasoning about the various stages can be confusing, so I have avoided that myself.

Instead, I have focused entirely on rustc-benchmarks, a pre-existing rustc benchmark suite. It contains 13 benchmarks of various sizes. It has been used to track rustc’s performance for some time, but it wasn’t easy to use locally until I wrote a script for that purpose. I invoke it something like this:

./ \
  /home/njn/moz/rust0/x86_64-unknown-linux-gnu/stage1/bin/rustc \

It compares the two given compilers, doing debug builds, on the benchmarks. See the next section for example output. If you want to run a subset of the benchmarks, you can specify them as additional arguments.

Each benchmark in rustc-benchmarks has a makefile with three targets. See the README for details on these targets, which can be helpful.


Here are the results when I compare the following two versions of rustc with the comparison script:

  • The commit just before my first commit (on September 12).
  • A commit from October 13.
futures-rs-test  5.028s vs  4.433s --> 1.134x faster (variance: 1.020x, 1.030x)
helloworld       0.283s vs  0.235s --> 1.202x faster (variance: 1.012x, 1.025x)
html5ever-2016-  6.293s vs  5.652s --> 1.113x faster (variance: 1.011x, 1.008x)
hyper.0.5.0      6.182s vs  5.039s --> 1.227x faster (variance: 1.002x, 1.018x)
inflate-0.1.0    5.168s vs  4.935s --> 1.047x faster (variance: 1.001x, 1.002x)
issue-32062-equ  0.457s vs  0.347s --> 1.316x faster (variance: 1.010x, 1.007x)
issue-32278-big  2.046s vs  1.706s --> 1.199x faster (variance: 1.003x, 1.007x)
jld-day15-parse  1.793s vs  1.538s --> 1.166x faster (variance: 1.059x, 1.020x)
piston-image-0. 13.871s vs 11.885s --> 1.167x faster (variance: 1.005x, 1.005x)
regex.0.1.30     2.937s vs  2.516s --> 1.167x faster (variance: 1.010x, 1.002x)
rust-encoding-0  2.414s vs  2.078s --> 1.162x faster (variance: 1.006x, 1.005x)
syntex-0.42.2   36.526s vs 32.373s --> 1.128x faster (variance: 1.003x, 1.004x)
syntex-0.42.2-i 21.500s vs 17.916s --> 1.200x faster (variance: 1.007x, 1.013x)

Not all of the improvement is due to my changes, but I have managed a few nice wins, including the following.

#36592: There is an arena allocator called TypedArena. rustc creates many of these, mostly short-lived. On creation, each arena would allocate a 4096 byte chunk, in preparation for the first arena allocation request. But DHAT’s output showed me that the vast majority of arenas never received such a request! So I made TypedArena lazy — the first chunk is now only allocated when necessary. This reduced the number of calls to malloc greatly, which sped up compilation of several rustc-benchmarks by 2–6%.
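
Here is a minimal sketch of the lazy-first-chunk idea (illustrative code only, not the actual TypedArena implementation): creating the arena allocates nothing, and the chunk only appears on the first allocation request.

// Hypothetical sketch of lazy chunk allocation, not rustc's TypedArena.
struct LazyArena {
    chunk: Option<Vec<u8>>, // stand-in for a real typed chunk
}

impl LazyArena {
    fn new() -> LazyArena {
        // No allocation here, so arenas that are never used cost nothing.
        LazyArena { chunk: None }
    }

    fn alloc(&mut self, bytes: usize) -> &mut [u8] {
        // The 4096-byte chunk is only allocated on the first request.
        let chunk = self.chunk.get_or_insert_with(|| Vec::with_capacity(4096));
        let start = chunk.len();
        chunk.resize(start + bytes, 0);
        &mut chunk[start..]
    }
}

fn main() {
    let mut arena = LazyArena::new(); // free if never used
    let slice = arena.alloc(16);
    slice[0] = 42;
    println!("allocated {} bytes lazily", slice.len());
}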

#36734: This one was similar. Rust’s HashMap implementation is lazy — it doesn’t allocate any memory for elements until the first one is inserted. This is a good thing because it’s surprisingly common in large programs to create HashMaps that are never used. However, Rust’s HashSet implementation (which is just a layer on top of the HashMap) didn’t have this property, and guess what? rustc also creates large numbers of HashSets that are never used. (Again, DHAT’s output made this obvious.) So I fixed that, which sped up compilation of several rustc-benchmarks by 1–4%. Even better, because this change is to Rust’s stdlib, rather than rustc itself, it will speed up any program that creates HashSets without using them.
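
A tiny demonstration of why this matters (just an illustration of the std behaviour, not rustc code): the set below costs no heap allocation unless the insert actually happens.

use std::collections::HashSet;

fn main() {
    // HashSet::new(), like HashMap::new(), allocates nothing up front...
    let mut maybe_used: HashSet<u32> = HashSet::new();

    // ...so a set created "just in case" but never populated is free.
    if std::env::args().count() > 1 {
        maybe_used.insert(1); // the backing table is allocated only here
    }
    println!("len = {}", maybe_used.len());
}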

#36917: This one involved avoiding some useless data structure manipulation when a particular table was empty. Again, DHAT pointed out a table that was created but never read, which was the clue I needed to identify this improvement. This sped up two benchmarks by 16% and a couple of others by 3–5%.

#37064: This one changed a hot function in serialization code to return a Cow<str> instead of a String, which avoided a lot of allocations.
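
The general shape of that change looks roughly like this (a sketch of the Cow<str> pattern, not the actual serialization function): borrow the input in the common case and only allocate a new String when something really has to change.

use std::borrow::Cow;

// Hypothetical function name; only the return-type pattern matters here.
fn normalize(s: &str) -> Cow<str> {
    if s.contains('\\') {
        // Rare case: build a fresh String.
        Cow::Owned(s.replace('\\', "/"))
    } else {
        // Common case: hand the input back without allocating.
        Cow::Borrowed(s)
    }
}

fn main() {
    println!("{}", normalize("already/clean"));
    println!("{}", normalize("needs\\fixing"));
}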

Future work

Profiles indicate that the following parts of the compiler account for a lot of its runtime.

  • malloc and free are still the two hottest functions in most benchmarks. Avoiding heap allocations can be a win.
  • Compression is used for crate metadata and LLVM bitcode. (This shows up in profiles under a function called tdefl_compress.)  There is an issue open about this.
  • Hash table operations are hot. A lot of this comes from the interning of various values during type checking; see the CtxtInterners type for details.
  • Crate metadata decoding is also costly.
  • LLVM execution is a big chunk, especially when doing optimized builds. So far I have treated LLVM as a black box and haven’t tried to change it, at least partly because I don’t know how to build it with debug info, which is necessary to get source files and line numbers in profiles.

A lot of programs have broadly similar profiles, but occasionally you get an odd one that stresses a different part of the compiler. For example, in rustc-benchmarks, inflate-0.1.0 is dominated by operations involving the (delightfully named) ObligationsForest (see #36993), and html5ever-2016-08-25 is dominated by what I think is macro processing. So it’s worth profiling the compiler on new codebases.

Caveat lector

I’m still a newcomer to Rust development. Although I’ve had lots of help on the #rustc IRC channel — big thanks to eddyb and simulacrum in particular — there may be things I am doing wrong or sub-optimally. Nonetheless, I hope this is a useful starting point for newcomers who want to speed up the Rust compiler.

This Week In RustThis Week in Rust 152

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

Blog Posts

News & Project Updates

Other Weeklies from Rust Community

Crate of the Week

This week's Crate of the Week is xargo - for effortless cross compilation of Rust programs to custom bare-metal targets like ARM Cortex-M. It recently reached version 0.2.0 and you can read the announcement here.

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

106 pull requests were merged in the last week.

New Contributors

  • Danny Hua
  • Fabian Frei
  • Mikko Rantanen
  • Nabeel Omer

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Style RFCs

Style RFCs are part of the process for deciding on style guidelines for the Rust community and defaults for Rustfmt. The process is similar to the RFC process, but we try to reach rough consensus on issues (including a final comment period) before progressing to PRs. Just like the RFC process, all users are welcome to comment and submit RFCs. If you want to help decide what Rust code should look like, come get involved!

FCP issues:

Other issues getting a lot of discussion:

No PRs this week.

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email the Rust Community Team for access.

fn work(on: RustProject) -> Money

No jobs listed for this week.

Tweet us at @ThisWeekInRust to get your job offers listed here!

Friends of the Forest

Our community likes to recognize people who have made outstanding contributions to the Rust Project, its ecosystem, and its community. These people are 'friends of the forest'.

This week's friends of the forest are:

I'd like to nominate bluss for his work on scientific programming in Rust. ndarray is a monumental project but in addition to that he has worked (really) hard to share that knowledge among others and provided easy-to-use libraries like matrixmultiply. Without bluss' assistance rulinalg would be in a far worse state.

I'd like to nominate Yehuda Katz, the lord of package managers.

Submit your Friends-of-the-Forest nominations for next week!

Quote of the Week

<dRk> that gives a new array of errors, guess that's a good thing
<misdreavus> you passed one layer of tests, and hit the next layer :P
<misdreavus> rustc is like onions
<dRk> it makes you cry?

— From #rust-beginners.

Thanks to Quiet Misdreavus for the suggestion.

Submit your quotes for next week!

This Week in Rust is edited by: nasa42, llogiq, and brson.

Daniel Stenbergcurl up in Nuremberg!

I’m very happy to announce that the curl project is about to run our first ever curl meeting and developers conference.

March 18-19, Nuremberg Germany

Everyone interested in curl, libcurl and related matters is invited to participate. We only ask that you register and pay the small fee. The fee will be used for food and more at the event.

You’ll find the full and detailed description of the event and the specific location in the curl wiki.

The agenda for the weekend is purposely kept loose to allow for flexibility and unconference-style adding things and topics while there. You will thus have the chance to present what you like and affect what others present. Do tell us what you’d like to talk about or hear others talk about! The sign-up for the event isn’t open yet, as we first need to work out some more details.

We have a dedicated mailing list for discussing the meeting, called curl-meet, so please consider yourself invited to join in there as well!

Thanks a lot to SUSE for hosting!

Feel free to help us make a cool logo for the event!


(The 19th birthday of curl is suitably enough the day after, on March 20.)

Firefox NightlyThese Weeks in Firefox: Issue 3

The Firefox Desktop team met yet again last Tuesday to share updates. Here are some fresh updates that we think you might find interesting:


Contributor(s) of the Week

Project Updates

Context Graph

Electrolysis (e10s)

Platform UI

Privacy / Security


Here are the raw meeting notes that were used to derive this list.

Want to help us build Firefox? Get started here!

Here’s a tool to find some mentored, good first bugs to hack on.

Firefox NightlyBetter default bookmarks for Nightly

Because software defaults matter, we have just changed the default bookmarks for the Nightly channel to be more useful to power-users deeply interested in day to day progress of Firefox and potentially willing to help Mozilla improve their browser through bug and crash reports, shared telemetry data and technical feedback.

Users on the Nightly channel had the same bookmarks as users on the release channel. These bookmarks target end users with limited technical knowledge: they link to Mozilla sites providing end-user support and add-ons, or propose a tour of Firefox features. Not very compelling for a tech-savvy audience that installed pre-alpha software!

As of last week, new Nightly users or existing Nightly users creating a new profile have a different set of bookmarks that are more likely to meet their interest in the technical side of Mozilla and contributing to Firefox as an alpha tester. Here is what the default bookmarks are:

New Nightly Bookmarks

There are links to this blog of course, to Planet Mozilla, to the Mozilla Developer Network, to the Nightly Testers Tools add-on, to about:crashes, and to the IRC #nightly channel in case you find a bug and would like to talk to other Nightly users about it, and of course a link to Bugzilla. The Firefox tour link was also replaced by a link to the contribute page.

It’s a minor change to the profile data, as we don’t want to make Nightly a different product from Firefox, but I hope it is another small step in the direction of empowering our more technical user base to help Mozilla build the most stable and reliable browser for hundreds of millions of people!

Giorgos LogiotatidisSystemd Unit to activate loopback devices before LVM

In a Debian server I'm using LVM to create a single logical volume from multiple different volumes. One of the volumes is a loop-back device which refers to a file in another filesystem.

The loop-back device needs to be activated before the LVM service starts, or the latter will fail due to missing volumes. To do so, a special systemd unit needs to be created that does not have the default unit dependencies and gets executed before the lvm2-activation-early service.

Systemd sets a number of dependencies on all units by default to bring the system into a usable state before starting most of them. This behavior is controlled by the DefaultDependencies flag. Leaving DefaultDependencies at its default value of true creates a dependency loop here, which systemd will forcefully break in order to finish booting the system. Obviously this non-deterministic flow can result in a different execution order than desired, which in turn will fail the LVM volume activation.

The first step, then, is to set DefaultDependencies to false, which disables all but the essential dependencies and allows our unit to execute in time. The systemd manual confirms that we can set the option to false:

Generally, only services involved with early boot or late shutdown should set this option to false.

The second step is to execute before lvm2-activation-early. This is simply achieved by setting Before=lvm2-activation-early.

The third and last step is to set the command to execute. In my case it's /sbin/losetup /dev/loop0 /volume.img, as I want to create /dev/loop0 from the file /volume.img. Set the process type to oneshot so systemd waits for the process to exit before starting follow-up units. Again from the systemd manual:

Behavior of oneshot is similar to simple; however, it is expected that the process has to exit before systemd starts follow-up units.

Place the unit file in /etc/systemd/system and on the next reboot the loop-back device should be available to LVM.

Here's the final unit file:

[Unit]
Description=Activate loop device
# Settings described above: skip the default dependencies and run before LVM activation.
DefaultDependencies=no
Before=lvm2-activation-early.service

[Service]
Type=oneshot
ExecStart=/sbin/losetup /dev/loop0 /volume.img

[Install]
# Assumed install target so the unit is pulled in at boot; adjust as needed.
WantedBy=local-fs.target


See also: Anthony's excellent LVM Loopback How-To

Firefox NightlyDevTools now display white space text nodes in the DOM inspector

Web developers don’t write all their code in just one line of text. They use white space between their HTML elements because it makes markup more readable: spaces, returns, tabs.

In most instances, this white space seems to have no effect and no visual output, but the truth is that when a browser parses HTML it will automatically generate anonymous text nodes for text that is not contained in an element. This includes white space (which is, after all, a type of text).

If these auto-generated text nodes are inline-level, browsers will give them a non-zero width and height, and you will find strange gaps between the elements in the content, even if you haven’t set any margin or padding on nearby elements.

This behaviour can be hard to debug, but Firefox DevTools are now able to display these whitespace nodes, so you can quickly spot where the gaps in your markup come from, and fix the issues.


Whitespace debugging in DevTools in action

The demo shows two examples with slightly different markup to highlight the differences both in browser rendering and what DevTools are showing.

The first example has one img per line, so the markup is readable, but the browser renders gaps between the images:

<img src="..." />
<img src="..." />

The second example has all the img tags in one line, which makes the markup unreadable, but it also doesn’t have gaps in the output:

<img src="..." /><img src="..." />

If you inspect the nodes in the first example, you’ll find a new whitespace indicator that denotes the text nodes created by the browser for the whitespace in the code. No more guessing! You can even delete the node from the inspector and see if that removes mysterious gaps you might have in your website.

The Servo BlogThese Weeks In Servo 81

In the last two weeks, we landed 171 PRs in the Servo organization’s repositories.

Planning and Status

Our overall roadmap is available online and now includes the Q4 plans and tentative outline of some ideas for 2017. Please check it out and provide feedback!

This week’s status updates are here.

Notable Additions

  • bholley added benchmark support to mach’s ability to run unit tests
  • frewsxcv implemented the value property on <select>
  • pcwalton improved the rendering of by fixing percentages in top and bottom
  • joewalker added support for font-kerning in Stylo
  • ms2ger implemented blob URL support in the fetch stack
  • scottrinh hid some canvas-related interfaces from workers
  • pcwalton improved by avoiding vertical alignment of absolutely positioned children in table rows
  • namsoocho added font-variant-position for Stylo
  • mmatyas fixed Android and ARM compilation issues in WebRender
  • pcwalton improved by avoiding incorrect block element position modifications
  • heycam factored out a UrlOrNone type to avoid some duplication in property bindings code
  • manishearth vendored bindings for Gecko’s nsString
  • awesomeannirudh implemented the -moz-text-align-last property
  • mrobinson added a custom debug formatter for ClippingRegion
  • manishearth implemented column-count for Stylo
  • anholt added the WebGL uniformMatrix*fv methods
  • UK992 made our build environment warn if it finds the MinGW Python, which breaks Windows MinGW builds
  • nox updated Rust
  • waffles added image-rendering support for Stylo
  • glennw fixed routing of touch events to the correct iframe
  • jdub added some bindings generation builder functions
  • larsberg picked up the last fix to get Servo on MSVC working
  • glennw added fine-grained GPU profiling to WebRender
  • canaltinova implemented some missing gradient types for Stylo
  • pcwalton implemented vertical-align: middle and fixed some vertical-align issues
  • splav added initial support for the root SVG element
  • glennw added transform support for text runs in WebRender
  • nox switched many crates to serde_derive, avoiding a fragile nightly dependency in our ecosystem
  • wafflespeanut added font-stretch support to Stylo
  • aneeshusa fixed the working directory for CI steps auto-populated from the in-tree rules
  • dati91 added mock WebBluetooth device support, in order to implement the WebBluetooth Test API
  • aneeshusa fixed a potential GitHub token leak in our documentation build
  • pcwalton fixed placement of inline hypothetical boxes for absolutely positioned elements, which fixes the Rust docs site
  • SimonSapin changed PropertyDeclarationBlock to use parking_lot::RwLock
  • shinglyu restored the layout trace viewer to aid in debugging layout
  • KiChjang implemented CSS transition DOM events
  • nox added intermediary, Rust-only WebIDL interfaces that replaced lots of unnecessary code duplication
  • mathieuh improved web compatibility by matching the new specification changes related to XMLHttpRequest events
  • emilio improved web compatibility by adding more conformance checks to various WebGL APIs
  • mortimergoro implemented several missing WebGL APIs
  • g-k created tests verifying the behaviour of browser cookie implementations

New Contributors

Interested in helping build a web browser? Take a look at our curated list of issues that are good for new contributors!


Canaltinova implemented parsing for many gradients so that they can be used in Firefox via Stylo and also provided comparisons:

Radial gradient support in Stylo

Robert O'CallahanIronic World Standards Day

Apparently World Standards Day is on October 14. Except in the USA it's celebrated on October 27 and in Canada on October 5.

Are they trying to be ironic?