Rubén Martín: Moving forward with Mozilla Participation Leaders

Core mozillians are more important than ever for Mozilla to succeed in 2016, and the Participation team keeps working to provide leadership opportunities and guidance on impactful initiatives. We are centralizing communications in this Discourse category.


Leadership cohort in Singapore – Photo by Christos Bacharakis (by-nc-sa)

This is a post originally published on Mozilla Discourse; please add your comments there.

Over the last few months, the Participation team, together with a big group of core mozillians, organized and delivered a set of initiatives and gatherings to support Mozilla’s mission and goals.

Mozfest, the Orlando All Hands, the Leadership Summit and the London All Hands were some of the key events where we worked together, grew as contributors and evolved our leadership and mobilization path inside the community.

Currently we are working on a set of initiatives to keep this work going:

  • RepsNext: The Reps program is evolving to become a volunteer participation platform, really focused on leadership and community mobilization. Being more inclusive, so that any core mozillian interested in this growth path feels welcome, is going to be a key focus.
  • We are developing a leadership toolkit to provide concrete resources for improving the skills mozillians will need to support Mozilla in the coming years.
  • In order to deliver these skills, we are creating a coaching team with the help of new Reps mentors, so we can train mozillians to pass this knowledge on to other volunteers and their local communities.
  • Refreshing the participation buffet, in order to provide clear guidance on the focus initiatives that are most relevant to supporting Mozilla this year and on how to get involved. This will also be provided through coaching.
  • Systematizing our community gatherings strategy, in order to provide skills and action-focused guidance to regional and functional communities around the world, as well as adapting our strategy to take local needs and ideas into consideration.

What’s next?

Beginning in July, we are going to start seeing some of these ideas implemented and communicated, and we want core mozillians to be part of it. A lot of mozillians have demonstrated great leadership inside Mozilla, and there is no way Mozilla can succeed without their help, ideas and support, so we are inviting them to join the group. We are centralizing communications in this Discourse category.

Keep it rocking the free web!


Air Mozilla: ONOS Project Presentation

ONOS Project Presentation The event aims to educate the general public about the ONOS project and give them the opportunity to meet members of ON.Lab, the non profit...

Cameron Kaiser: Progress to TenFourFox 45: milestone 2 (plus: get your TALOS on or else, and Let's Engulf Comodo)

After a highly prolonged porting phase, TenFourFox 43 finally starts up, does browsery things and doesn't suck. Changesets are available on SourceForge for your amusement. I am not entirely happy with this release, though; our PowerPC JavaScript JIT required substantial revision, uncovering another bug which will be fixed in 38.10 (this one is an edge case but it's still wrong), and there is some glitch in libnestegg that keeps returning the audio sampling rate on many WebM videos as 0.000. I don't think this is an endian problem because some videos do play and I can't figure out if this is a legitimate bug or a compiler freakout, so right now there is a kludge to assume 22.050kHz when that happens. The video is otherwise parseable and that gets it to play, but I find this solution technically disgusting, so I'm going to ponder it some more in the meantime. We'll see if it persists in 45. On the other hand, we're also able to load the JavaScript Internationalization API now instead of using our compat shim, which should fix several bugs and add-on compatibility issues.

Anyway, the next step is to port 45 using the 43 sets, and that's what I'll be working on over the next several weeks. I'm aiming for the first beta in mid-July, so stay tuned.

For those of you who have been following the Talos POWER8 workstation project (the most powerful and open workstation-class Power Architecture system to date; more info here and here), my contacts inform me that the fish-or-cut-bait deadline is approaching where Raptor needs to determine if the project is financially viable with the interest level so far received. Do not deny me my chance to give them my money for the two machines I am budgeting (a kidneystone) for. Do not forsake me, O my audience. I will find thee and smite thee. Sign up, thou cowards, and make this project a reality. Let's get that Intel crap you don't actually control off thy desks. You can also check out using the Talos to run x86 applications through QEMU, making it the best of both worlds, as demonstrated by a video on their Talos pre-release page.

Last but not least, increasingly sketchy certificate authority and issuer Comodo, already somewhat of a pariah for previously dropping their shorts, has decided to go full scumbag and is trying to trademark "Let's Encrypt." Does that phrase seem familiar to you? It should, because "Let's Encrypt" is (and has been for some time) a Mozilla-sponsored free and automated certificate authority trying to get certificates in the hands of more people so that more websites can be protected by strong encryption. As their FAQ says, "Anyone who owns a domain name can use Let's Encrypt to obtain a trusted certificate at zero cost."

Methinks Comodo is hoping to lawyer Let's Encrypt out of existence because they believe a free certificate issuer will have a huge impact on their business model. Well, yes, that's probably true, which makes me wonder what would happen if Mozilla threatened to pull the Comodo CA root out of Firefox in response. Besides, based on this petulant and almost certainly specious legal action and their previous poor security history, the certificate authority pool could definitely use a little chlorine anyhow.

Gervase Markham: Project Fear

I’ve been campaigning a bit on the EU Referendum. (If you want to know why I think the UK should leave, here are my thoughts.) Here’s the leaflet my wife and I have been stuffing into letterboxes in our spare moments for the past two weeks:


And here’s the leaflet in our area being distributed today by one of the Labour local councillors and the Remain campaign:


Says it all.

Support.Mozilla.Org: What’s Up with SUMO – 23rd June

Hello, SUMO Nation!

Did you miss us? WE MISSED YOU! It’s good to be back, even if we had quite a fateful day today… The football fever in Europe reaching new heights, some countries wondering aloud if they want to keep being a part of the EU, and a Platform meeting to inform you about the current state of our explorations. Busy times – let’s dive straight into some updates:

Welcome, new contributors!

If you just joined us, don’t hesitate – come over and say “hi” in the forums!

Contributors of the week

We salute you!

Don’t forget that if you are new to SUMO and someone helped you get started in a nice way you can nominate them for the Buddy of the Month!

Most recent SUMO Community meeting

The next SUMO Community meeting

  • …is happening on the 29th of June!
  • If you want to add a discussion topic to the upcoming meeting agenda:
    • Start a thread in the Community Forums, so that everyone in the community can see what will be discussed and voice their opinion here before Wednesday (this will make it easier to have an efficient meeting).
    • Please do so as soon as you can before the meeting, so that people have time to read, think, and reply (and also add it to the agenda).
    • If you can, please attend the meeting in person (or via IRC), so we can follow up on your discussion topic during the meeting with your feedback.



Support Forum

Knowledge Base & L10n


  • for Android
    • No big news for now.

Once again – it’s good to be back, and we’re looking forward to a great end of June and kick-off in July with you all. Keep rocking the helpful web!

PS. If you’re a football fan, let’s talk about it in our forums!

Air Mozilla: Web QA Weekly Team Meeting, 23 Jun 2016

Web QA Weekly Team Meeting They say a Mozilla Web QA team member is the most fearless creature in the world. They say their jaws are powerful enough to crush...

Air Mozilla: Reps weekly, 23 Jun 2016

Reps weekly This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

Daniel Glazman: Implementing Media Queries in an editor

You're happy as a programmer? You think you know the Web? You're a browser implementor? You think dealing with Media Queries is easy? Do the following first:

Given a totally arbitrary html document with arbitrary stylesheets and arbitrary media constraints, write an algo that gives all h1 a red foreground color when the viewport size is between min and max, where min is a value in pixels (0 indicating no min-width in the media query...), and max is a value in pixels or infinite (indicating no max-width in the media query). You can't use inline styles, of course. !important can be used ONLY if it's the only way of adding that style to the document and it's impossible otherwise. Oh, and you have to handle the case where some stylesheets are remote so you're not allowed to modify them because you could not reserialize them :-)

What's hard? Eh:

  • media attributes on stylesheet owner node
  • badly ordered MQs
  • intersecting (but not contained) media queries
  • default styles between MQs
  • remote stylesheets
  • so funny to deal with the CSS OM where you can't insert a rule before or after another rule but have to deal with rule indices...
  • etc.
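To make the "intersecting (but not contained) media queries" bullet concrete: before any rule can be applied per viewport range, overlapping (min, max) ranges have to be normalized into disjoint segments. Here is a rough illustration of that one sub-problem in Python — a sketch of the idea only, not the BlueGriffon algorithm:

```python
import math

def disjoint_segments(ranges):
    """Split overlapping (min_px, max_px) viewport ranges into disjoint
    segments covering the same breakpoints.  max_px may be math.inf,
    meaning the media query has no max-width constraint."""
    # Collect every breakpoint where some query starts or stops matching.
    points = set()
    for lo, hi in ranges:
        points.add(lo)
        if hi != math.inf:
            points.add(hi + 1)  # first pixel past the range
    cuts = sorted(points)
    # Pair consecutive breakpoints into segments; the last is open-ended.
    segments = []
    for i, start in enumerate(cuts):
        end = cuts[i + 1] - 1 if i + 1 < len(cuts) else math.inf
        segments.append((start, end))
    return segments

# Two intersecting-but-not-contained queries, the hard case above.
print(disjoint_segments([(0, 600), (400, math.inf)]))
# -> [(0, 399), (400, 600), (601, inf)]
```

The real editor-side problem is harder still, because each resulting segment then has to be written back through the CSS OM, rule indices and all.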

My implementation in BlueGriffon is almost ready. Have fun...

Emily Dunham: CFPs Made Easier

CFPs Made Easier

Check out this post by Lucy Bain about how to come up with an idea for what to talk about at a conference. I blogged last year about how I turn abstracts into talks, as well. Now that the SeaGL CFP is open, it’s time to look in a bit more detail about the process of going from a talk idea to a compelling abstract.

In this post, I’ll walk you through some exercises to clarify your understanding of your talk idea and find its audience, then help you use that information to outline the 7 essential parts of a complete abstract.

Getting ready to write your abstract

Your abstract is a promise about what your talk will deliver. Have you ever gotten your hopes up for a talk based on its abstract, then attended only to hear something totally unrelated? You can save your audience from that disappointment by making sure that you present what your abstract says you will.

For both you and your audience to get the most out of your talk, the following questions can help you refine your talk idea before you even start to write its abstract.

Why do you love this idea?

Start working on your abstract by taking some quick notes on why you’re excited about speaking on this topic. There are no wrong answers! Your reasons might include:

  • Document a topic you care about in a format that works well for those who learn by listening and watching
  • Impress a potential employer with your knowledge and skills
  • Meet others in the community who’ve solved similar problems before, to advise you
  • Recruit contributors to a project
  • Force yourself to finish a project or learn more detail about a tool
  • Save novices from a pitfall that you encountered
  • Travel to a conference location that you’ve always wanted to visit
  • Build your resume
  • Or something else entirely!

Starting out by identifying what you personally hope to gain from giving the talk will help ensure that you make the right promises in your abstract, and get the right people into the room.

What’s your idea’s scope?

Make 2 quick little lists:

  • Topics I really want this presentation to cover
  • Topics I do not want this presentation to cover

Once you think that you have your abstract all sorted out, come back to these lists and make sure that you included enough topics from the first list, and excluded those from the second.

Who’s the conference’s target audience?

Keynotes and single-track conferences are special, but generally your talk does not have to appeal to every single person at the conference.

Write down all the major facts you know about the people who attend the conference to which you’re applying. How young or old might they be? How technically expert or inexperienced? What are their interests? Why are they there?

For example, here are some statements that I can make about the audience at SeaGL:

  • Expertise varies from university students and random community members to long-time contributors who’ve run multiple FOSS projects.
  • Age varies from a few school-aged kids (usually brought by speakers and attendees) to retirees.
  • The audience will contain some long-term FOSS contributors who don’t program, and some relatively expert programmers who might have minimal involvement in their FOSS communities
  • Most attendees will be from the vicinity of Seattle. It will be some attendees’ first tech conference. A handful of speakers are from other parts of the US and Canada; international attendees are a tiny minority.
  • The audience comes from a mix of socioeconomic backgrounds, and many attendees have day jobs in fields other than tech.
  • Attendees typically come to SeaGL because they’re interested in FOSS community and software.

Where’s your niche?

Now that you’ve taken some guesses about who will be reading your abstract, think about which subset of the conference’s attendees would get the most benefit out of the topic that you’re planning to talk about.

Write down which parts of the audience will get the most from your talk – novices to open source? Community leaders who’ve found themselves in charge of an IRC channel but aren’t sure how to administer it? Intermediate Bash users looking to learn some new tricks?

If your talk will appeal to multiple segments of the community (developers interested in moving into DevOps, and managers wondering what their operations people do all day?), write one question that your talk will answer for each segment.

You’ll use this list to customize your abstract and help get the right people into the room for your talk.

Still need an idea?

Conferences with as broad an audience as SeaGL often offer an introductory track to help enthusiastic newcomers get up to speed. If you have intermediate skills in a technology like Bash, Git, LaTeX, or IRC, offer an introductory talk to help newbies get started with it! Can you teach a topic that you learned recently in a way that’s useful to newbies?

If you’re an expert in a field that’s foreign to most attendees (psychology? beekeeping? Cray Supercomputer assembly language?), consider an intersection talk: “What you can learn from X about Y”. Can you combine your hobby, background, or day job with a theme from the conference to come up with something unique?

The Anatomy of an Abstract

There are many ways to structure a good abstract. Here’s how I like to structure them:

  • Set the scene with an introductory sentence that reminds your target audience of your topic’s relevance to them. Some of mine have included:

    • “Rust is a systems programming language that runs blazingly fast, prevents segfaults, and guarantees thread safety.”
    • “Git is the most popular source code management and version control system in the open source community.”
    • “When you’re new to programming, or self-taught with an emphasis on those topics that are directly relevant to your current project, it’s easy to skip learning about analyzing the complexity of algorithms.”
  • Ask some questions, which the talk promises to answer. These questions should be asked from the perspective of your target audience, which you identified earlier. This is the least essential piece of an abstract, and can be skipped if you make sure your exposition clearly shows that you understand your target audience in some other way. Here are a couple of questions I’ve used in abstracts that were accepted to conferences:

    • “Do you know how to control what information people can discover about you on an IRC network?”
    • “Is the project of your dreams ignoring your pull requests?”
  • Drop some hints about the format that the talk will take. This shows the selection committee that you’ve planned ahead, and helps audience members select sessions that are a good fit for their learning styles. Useful words here include:

    • “Overview of”
    • “Case study”
    • “Demonstrations of”
    • “Deep dive into”
    • “Outline X principles for”
    • “Live coding”
  • Identify what knowledge the audience will need to get the talk’s benefit, if applicable. Being specific about this helps welcome audience members who’re undecided about whether the talk is applicable to them. Useful phrases include:

    • “This talk will assume no background knowledge of...”
    • “If you’ve used ____ to ____, ...”
    • “If you’ve completed the ____ tutorial...”
  • State a specific benefit that audience members will get from having attended the talk. Benefits can include:

    • “Halve your Django website’s page load times”
    • “Get help on IRC”
    • “Learn from ____‘s mistakes”
    • “Ask the right questions about ____”
  • Reinforce and quantify your credibility. If you’re presenting a case study into how your company deployed a specific tool, be sure to mention your role on the team! For instance, you might say:

    • “Presented by [the original author | a developer | a maintainer | a long-term user] of [the project], this talk will...”
  • End with a recap of the talk’s basic promise, and welcome audience members to attend.

These pieces of information don’t have to each be in their own sentence – for instance, the credibility reinforcement and talk format hint often fit together nicely.

Once you’ve got all of the essential pieces of an abstract, munge them around until it sounds like concise, fluent English. Get some feedback if you’d like assistance!

Give it a title

Naming things is hard. Here are some assorted tips:

  • Keep it under about 50 characters, or it might not fit on the program
  • Be polite. Rude puns or metaphors might be eye-catching, but probably violate your conference or community’s code of conduct, and will definitely alienate part of your prospective audience.
  • For general talks, it’s hard to go wrong with “Intro to ___” or “___ for ___ users”.
  • The form “[topic]: A [history|overview|melodrama|case study|love story]” is generally reliable. Well, I’m kidding about “melodrama” and “love story”... Mostly.
  • Clickbait is underhanded, but it works. “___ things I wish I’d known about ___”, anyone?

Good luck, and happy conferencing!

Mozilla Addons Blog: Friend of Add-ons: Yuki Hiroshi

Please meet our newest Friend of Add-ons: Yuki “Piro” Hiroshi. A longtime add-on developer with 37 extensions and counting (he’s most proud of Tree Style Tab and Second Search), Hiroshi also recently filed more than two dozen high-impact WebExtensions bugs.

Hiroshi recently recounted his experience porting one of his XUL add-ons to WebExtensions in the hopes that he could help support fellow add-on developers through the transition. He likens XUL to an “experimental laboratory” that over the past decade allowed us to explore the possibilities of a customized web browser. But now, Hiroshi says, we need to “go for better security and stability” and embrace forward-thinking APIs that will cater to building richer user experiences.

While add-ons technology is evolving, Hiroshi’s motivation to create remains the same. “It’s an emotional reason,” he says, which took root when he first discovered the power of a Gecko engine that allowed him to transform himself from being a mere hobbyist to a true developer. “Mozilla is a symbol of liberty for me,” Hiroshi explains. “It’s one of the legends of the early days of the web.”

When he’s not authoring add-ons, Hiroshi enjoys reading science fiction and manga. A recent favorite is The Hyakumanjo Labyrinth, a “bizarre adventure story” that takes place on an infinity field beyond space and time within an old Japanese apartment building.

Do you contribute to AMO in some fashion? If so, don’t forget to add your contributions to our Recognition page!

Robert O'Callahan: PlayCanvas Is Impressive

I've been experimenting on my children with different ways to introduce them to programming. We've tried Stencyl, Scratch, JS/HTML, Python, and Codecademy with varying degrees of success. It's difficult because, unlike when I learned to program 30 years ago, it's hard to quickly get results that compare favourably with a vast universe of apps and content they've already been exposed to. Frameworks and engines face a tradeoff between power, flexibility and ease-of-use; if it's too simple then it's impossible to do what you want to do and you may not learn "real programming", but if it's too complex then it may just be too hard to do what you want to do or you won't get results quickly.

Recently I discovered PlayCanvas and so far it looks like the best approach I've seen. It's a Web-based 3D engine containing the ammo.js (Bullet) physics engine, a WebGL renderer, a WYSIWYG editor, and a lot more. It does a lot of things right:

  • Building in a physics engine, renderer and visual editor gives a very high level of abstraction that lets people get impressive results quickly while still being easy to understand (unlike, say, providing an API with 5000 interfaces, one of which does what you want). Stencyl does this, but the other environments I mentioned don't. But Stencyl is only 2D; supporting 3D adds significant power without, apparently, increasing the user burden all that much.
  • Being Web-based is great. There's no installation step, it works on all platforms (I guess), and the docs, tutorials, assets, forkable projects, editor and deployed content are all together on the Web. I suspect having the development platform be the same as the deployment platform helps. (The Stencyl editor is Java but its deployed games are not, so WYS is not always WYG.)
  • Performance is good. The development environment works well on a mid-range Chromebook. Deployed games work on a new-ish Android phone.
  • So far the implementation seems robust. This is really important; system quirks and bugs make learning a lot harder, because novices can't distinguish their own bugs from system bugs.
  • The edit-compile-run cycle is reasonably quick, at least for small projects. Slow edit-compile-run cycles are especially bad for novices who'll be making a lot of mistakes.
  • PlayCanvas is programmable via a JS component model. You write JS components that get imported into the editor and are then attached to scene-graph entities. Components can have typed parameters that appear in the editor, so it's pretty easy to create components reusable by non-programmers. However, for many behaviors (e.g. autonomously-moving objects) you probably need to write code --- which is a good thing. It's a bit harder than Scratch/Stencyl but since you're using JS you have more power and develop more reusable skills, and cargo-culting and tweaking scripts works well. You actually have access to the DOM if you want although mostly you'd stick to the PlayCanvas APIs. It looks like you could ultimately do almost anything you want, e.g. add multiplayer support and voice chat via WebRTC.
  • PlayCanvas has WebVR support though I haven't tried it.
  • It's developed on github and MIT licensed so if something's broken or missing, someone can step in and fix it.

So far I'm very impressed and my child is getting into it.

Mozilla Addons Blog: Add-ons Update – Week of 2016/06/22

I post these updates every 3 weeks to inform add-on developers about the status of the review queues, add-on compatibility, and other happenings in the add-ons world.

The Review Queues

In the past 3 weeks, 1432 listed add-ons were reviewed:

  • 1354 (95%) were reviewed in fewer than 5 days.
  • 45 (3%) were reviewed between 5 and 10 days.
  • 33 (2%) were reviewed after more than 10 days.
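(The percentages above are rounded from the raw counts; a throwaway Python snippet shows they check out:)

```python
# Counts taken from the list above; the post's percentages are rounded.
counts = {"under 5 days": 1354, "5 to 10 days": 45, "over 10 days": 33}
total = sum(counts.values())
print(total)  # 1432
for label, n in counts.items():
    print(f"{label}: {round(100 * n / total)}%")
# under 5 days: 95%, 5 to 10 days: 3%, over 10 days: 2%
```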

There are 61 listed add-ons awaiting review.

You can read about the recent improvements in the review queues here.

If you’re an add-on developer and are looking for contribution opportunities, please consider joining us. Add-on reviewers get invited to Mozilla events and earn cool gear with their work. Visit our wiki page for more information.

Compatibility Communications

Most of you should have received an email from us about the future compatibility of your add-ons. You can use the compatibility tool to enter your add-on ID and get some info on what we think is the best path forward for your add-on. This tool only works for listed add-ons.

To ensure long-term compatibility, we suggest you start looking into WebExtensions, or use the Add-ons SDK and try to stick to the high-level APIs. There are many XUL add-ons that require APIs that aren’t available in either of these options, which is why we ran a survey so we know which APIs we should look into adding to WebExtensions. You can read about the survey results here.

We’re holding regular office hours for Multiprocess Firefox compatibility, to help you work on your add-ons, so please drop in on Tuesdays and chat with us!

Firefox 48 Compatibility

The compatibility blog post for Firefox 48 is up, and the bulk validation will be run shortly.

As always, we recommend that you test your add-ons on Beta and Firefox Developer Edition to make sure that they continue to work correctly. End users can install the Add-on Compatibility Reporter to identify and report any add-ons that aren’t working anymore.

Extension Signing

The wiki page on Extension Signing has information about the timeline, as well as responses to some frequently asked questions. The current plan is to remove the signing override preference in Firefox 48.

The Mozilla Blog: Mozilla Awards $385,000 to Open Source Projects as part of MOSS “Mission Partners” Program


For many years people with visual impairments and the legally blind have paid a steep price to access the Web on Windows-based computers. The market-leading software for screen readers costs well over $1,000. The high price is a considerable obstacle to keeping the Web open and accessible to all. The NVDA Project has developed an open source screen reader that is free to download and to use, and which works well with Firefox. NVDA aligns with one of the Mozilla Manifesto’s principles: “The Internet is a global public resource that must remain open and accessible.”

That’s why, at Mozilla, we have elected to give the project $15,000 in the inaugural round of our Mozilla Open Source Support (MOSS) “Mission Partners” awards. The award will help NVDA stay compatible with the Firefox browser and support a long-term relationship between our two organizations. NVDA is just one of eight grantees in a wide range of key disciplines and technology areas that we have chosen to support as part of the MOSS Mission Partners track. This track financially supports open source software projects doing work that meaningfully advances Mozilla’s mission and priorities.

Giving Money for Open Source Accessibility, Privacy, Security and More

Aside from accessibility, security and privacy are common themes in this set of awards. We are supporting several secure communications tools, a web server which only works in secure mode, and a distributed, client-side, privacy-respecting search engine. The set is rounded out with awards to support the growing Rust ecosystem and promote open source options for the building of compelling games on the Web. (Yes, games. We consider games to be a key art-form in this modern era, which is why we are investing in the future of Web games with WebAssembly and Open Web Games.)

MOSS is a continuing program. The Mission Partners track has a budget for 2016 of around US$1.25 million. The first set of awards listed below total US$385,000 and we look forward to supporting more projects in the coming months. Applications remain open both for Mission Partners and for the Foundational Technology track (for projects creating software that Mozilla already uses or deploys) on an ongoing basis.

We are greatly helped in evaluating applications and making awards by the MOSS Committee. Many thanks again to them.

And The Winners Are….

The first eight awardees are:

Tor: $152,500. Tor is a system for using a distributed network to communicate anonymously and without being tracked. This award will be used to significantly enhance the Tor network’s metrics infrastructure so that the performance and stability of the network can be monitored and improvements made as appropriate.

Tails: $77,000. Tails is a secure-by-default live operating system that aims at preserving the user’s privacy and anonymity. This award will be used to implement reproducible builds, making it possible for third parties to independently verify that a Tails ISO image was built from the corresponding Tails source code.


Caddy: $50,000. Caddy is an HTTP/2 web server that uses HTTPS automatically and by default via Let’s Encrypt. This award will be used to add a REST API, web UI, and new documentation, all of which make it easier to deploy more services with TLS.

Mio: $30,000. Mio is an asynchronous I/O library written in Rust. This award will be used to make ergonomic improvements to the API and thereby make it easier to build high performance applications with Mio in Rust.


DNSSEC/DANE Chain Stapling: $25,000. This project is standardizing and implementing a new TLS extension for transport of a serialized DNSSEC record set, to reduce the latency associated with DANE and DNSSEC validation. This award will be used to complete the standard in the IETF and build both a client-side and a server-side implementation.


Godot Engine: $20,000. Godot is a high-performance multi-platform game engine which can deploy to HTML5. This award will be used to add support for Web Sockets, WebAssembly and WebGL 2.0.


PeARS: $15,500. PeARS (Peer-to-peer Agent for Reciprocated Search) is a lightweight, distributed web search engine which runs in an individual’s browser and indexes the pages they visit in a privacy-respecting way. This award will permit face-to-face collaboration among the remote team and bring the software to beta status.


NVDA: $15,000. NonVisual Desktop Access (NVDA) is a free, open source screen reader for Microsoft Windows. This award will be used to make sure NVDA and Firefox continue to work well together as Firefox moves to a multi-process architecture.
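For the record, the eight awards above do add up to the headline figure; a quick check in Python (amounts in USD, taken from the list above):

```python
# The eight MOSS "Mission Partners" awards announced in this post.
awards = {
    "Tor": 152_500,
    "Tails": 77_000,
    "Caddy": 50_000,
    "Mio": 30_000,
    "DNSSEC/DANE Chain Stapling": 25_000,
    "Godot Engine": 20_000,
    "PeARS": 15_500,
    "NVDA": 15_000,
}
print(sum(awards.values()))  # 385000
```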

This is only the beginning. Stay tuned for more award announcements as we allocate funds. Open Source is a movement that is only growing, both in numbers and in importance. Operating in the open makes for better security, better accessibility, better policy, better code and, ultimately, a better world. So if you know any projects whose work furthers the Mozilla Mission, send them our way and encourage them to apply.

David Lawrence: Happy BMO Push Day!

the following changes have been pushed to

  • [1281189] Modal View Date field is Glitched after Setting Needinfo

discuss these changes on

Firefox UX: What We Learned from the Government Digital Service Design Team

Toward the end of Mozilla’s All-Hands Meeting in London last week, about a dozen members of the Firefox UX team paid a visit to the main…

Matjaž Horvat: Triage translations by author in Pontoon

A few months ago we rolled out bulk actions in Pontoon, allowing you to perform various operations on multiple strings at the same time. Today we’re introducing a new string filter, bringing mass operations a level further.

From now on you can filter translations by author, which simplifies tasks like triaging suggestions from a particular translator. The new filter is especially useful in combination with bulk actions.

For example, you can delete all suggestions submitted by Prince of Nigeria, because they are spam. Or approve all suggestions from Mia Müller, who was just granted Translator permission and was previously unable to submit approved translations.

See how to filter by translation author in the video.

P.S.: Gašper, don’t freak out. I didn’t actually remove your translations.

Daniel Stenbergsyscast discussion on curl and life

I sat down and talked curl, HTTP, HTTP/2, IETF, the web, Firefox and various internet subjects with Mattias Geniar on his podcast the syscast the other day.

Air MozillaMozilla H1 2016

Mozilla H1 2016 What did Mozilla accomplish in the first half of 2016? Here's the mind-boggling list in rapid-fire review.

Air MozillaMartes mozilleros, 21 Jun 2016

Martes mozilleros Reunión bi-semanal para hablar sobre el estado de Mozilla, la comunidad y sus proyectos. Bi-weekly meeting to talk (in Spanish) about Mozilla status, community and...

David LawrenceHappy BMO Push Day!

the following changes have been pushed to

  • [1277600] Custom field “Crash Signature” link is deprecated
  • [1274757] hourly whine being run every 15 minutes
  • [1278592] “status” label disappears when changing the resolution in view mode
  • [1209219] Add an option to disable the “X months ago” format for dates

discuss these changes on

John O'DuinnJoining the U.S. Digital Service

I’ve never worked in government before – or even considered it until I read Dan Portillo’s blog post when he joined the U.S. Digital Service. Mixing the technical skills and business tactics honed in Silicon Valley with the domain-specific skills of career government employees is a brilliant way to solve long-standing complex problems in the internal mechanics of government infrastructure. Since their initial work on, they’ve helped out at the Veterans Administration, Dept of Education and IRS, to mention just a few public examples. Each of these solutions has material impact on real humans, every single day.

Building Release Engineering infrastructure at scale, in all sorts of different environments, has always been interesting to me. The more unique the situation, the more interesting. The possibility of doing this work, at scale, while also making a difference to the lives of many real people made me stop, ask a bunch of questions and then apply.

The interviews were the most thorough and detailed of my career so far, and the consequence of this became clear once I started working with other USDS folks – they are all super smart, great at their specific field, unflappable when suddenly faced with unimaginable projects, and downright nice, friendly people. These are not just “nice to have” attributes – they’re essential for the role, and you can instantly see why once you start.

The range of skills needed is staggering. In the few weeks since I started, the projects I’ve worked on have involved some combination of: Ansible, AWS, Cobol, GitHub, NewRelic, Oracle PL/SQL, nginx, node.js, PowerBuilder, Python, Ruby, REST and SAML. All while setting up fault-tolerant and secure hybrid physical-colo-to-AWS production environments. All while meeting with various domain experts to understand the technical and legal constraints behind why things were done in a certain way, and to figure out practical ideas of how to help in an immediate and sustainable way. All on short timelines – measured in days/weeks instead of years. In any one day, it is not unusual to jump from VPN configurations to legal policy to branch merging to debugging intermittent production alerts to personnel discussions.

Being able to communicate effectively up-and-down the technical stack and also the human stack is tricky, complicated and also very very important to succeed in this role. When you see just how much the new systems improve people’s lives, the rewards are self-evident, invigorating and humbling – kinda like the view walking home from the office – and I find myself jumping back in to fix something else. This is very real “make a difference” stuff and is well worth the intense long days.

Over the coming months, please be patient with me if I contact you looking for help/advice – I may very well be fixing something crucial for you, or someone you know!

If you are curious to find out more about USDS, feel free to ask me. There is a lot of work to do (before starting, I was advised to get sleep!) and yes, we are hiring (for details, see here!). I suspect you’ll find it is the hardest, most rewarding job you’ve ever had!


Tantek Çelikmicroformats.org at 11

ERMERGERD!!! (excited girl holding a large microformats logo) HERPER BERTHDER!!! Thanks to Julie Anne Noying for the meme birthday card.

10,000s of microformats2 sites and now 10 microformats2 parsers

The past year saw a huge leap in the number of sites publishing microformats2, from 1000s to now 10s of thousands of sites, primarily by adoption in the IndieWebCamp community, and especially the excellent Known publishing system and continually improving WordPress plugins & themes.

New modern microformats2 parsers continue to be developed in various languages. This past year, four new parsing libraries (in three different languages) were added, almost doubling our previous set of six (in five different languages) and bringing our year-11 total to 10 microformats2 parsing libraries available in 8 different programming languages.

microformats2 parsing spec updates

The microformats2 parsing specification has made significant progress in the past year, all of it incremental iteration based on real world publishing and parsing experience, each improvement discussed openly, and tested with real world implementations. The microformats2 parsing spec is the core of what has enabled even simpler publishing and processing of microformats.

The specification has reached a level of stability and interoperability where fewer issues are being filed, and those that are being filed are in general more and more minor, although once in a while we find some more interesting opportunities for improvement.

We reached a milestone two weeks ago of resolving all outstanding microformats2 parsing issues thanks to Will Norris leading the charge with a developer spec hacking session at the recent IndieWeb Summit where he gathered parser implementers and myself (as editor) and walked us through issue by issue discussions and consensus resolutions. Some of those still require minor edits to the specification, which we expect to complete in the next few days.

One of the meta-lessons we learned in that process is that the wiki really is less suitable for collaborative issue filing and resolving, and as of today we are switching to using a GitHub repo for filing any new microformats2 parsing issues.

more microformats2 parsers

The number of microformats2 parsers in different languages continues to grow, most of them with deployed live-input textareas so you can try them on the web without touching a line of parsing code or a command line! All of these are open source (repos linked from their sections), unless otherwise noted. These are the new ones:

The Java parsers are a particularly interesting development as one is part of the upgrade to Apache Any23 to support microformats2 (thanks to Lewis John McGibbney). Any23 is a library used for analysis of various web crawl samples to measure representative use of various forms of semantic markup.

The other Java parser is mf2j, an early-stage Java microformats2 parser, created by Kyle Mahan.

The Elixir, Haskell, and Java parsers add to our existing in-development parser libraries in Go and Ruby. The Go parser in particular has recently seen a resurgence in interest and improvement thanks to Will Norris.

These in-development parsers add to existing production parsers, that is, those being used live on websites to parse and consume microformats for various purposes:

As with any open source projects, tests, feedback, and contributions are very much welcome! Try building the production parsers into your projects and sites and see how they work for you.
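To make the markup side concrete, here is a minimal, stdlib-only Python sketch that pulls the `p-name` and `u-url` properties out of a hypothetical h-entry. It is deliberately not spec-compliant (the `HEntrySketch` class and the sample HTML are illustrative assumptions, not any of the parsers above); for real work, use one of the production parsers.

```python
from html.parser import HTMLParser

# Minimal illustration of consuming microformats2 h-entry markup.
# NOT spec-compliant -- it only handles the happy path below;
# a real project should use a production microformats2 parser.
class HEntrySketch(HTMLParser):
    def __init__(self):
        super().__init__()
        self._capture = None   # property name currently being captured
        self.properties = {}   # extracted property -> value

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        classes = attrs.get("class", "").split()
        if "p-name" in classes:
            # p-* properties take their value from the element's text
            self._capture = "name"
        if "u-url" in classes:
            # u-* properties take their value from the href attribute
            self.properties["url"] = attrs.get("href", "")

    def handle_data(self, data):
        if self._capture:
            self.properties[self._capture] = data.strip()
            self._capture = None

html = """
<article class="h-entry">
  <h1 class="p-name">Hello microformats2 world</h1>
  <a class="u-url" href="https://example.com/2016/hello">permalink</a>
</article>
"""

parser = HEntrySketch()
parser.feed(html)
print(parser.properties)
# → {'name': 'Hello microformats2 world', 'url': 'https://example.com/2016/hello'}
```

Note how the class names alone carry the semantics: no separate metadata block, no duplicated content, which is much of why microformats stay small on the wire.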

Still simpler, easier, and smaller after all these years

Usually technologies (especially standards) get increasingly complex and more difficult to use over time. With microformats we have been able to maintain (and in some cases improve) their simplicity and ease of use, and continue to this day to get testimonials saying as much, especially in comparison to other efforts:

…hmm, looks like I should use a separate meta element: .
Man, Schema is verbose. @microformats FTW!

On the broader problem of verbosity (no matter the syntax), Kevin Marks wrote a very thorough blog post early in the past year:

More testimonials:

I still prefer @microformats over microdata
* * *
@microformats are easier to write, easier to maintain and the code is so much smaller than microdata.
* * *
I am not a big fan of RDF, semanticweb, or predefined ontologies. We need something lightweight and emergent like the microformats

This last testimonial really gets at the heart of one of the deliberate improvements we have made to iterating on microformats vocabularies in particular.

evolving h-entry

We have had an implementation-driven and implementation-tested practice for the microformats2 parsing specification for quite some time.

More and more we are adopting a similar approach to growing and evolving microformats vocabularies like h-entry.

We have learned to start vocabularies as minimal as possible, rather than starting with everything you might want to do. That “start with everything you might want” is a common theory-first approach taken by a-priori vocabularies or entire "predefined ontologies" like's 150+ objects at launch, very few of which (single digits?) Google or anyone else bothers to do anything with – a classic example of premature overdesign, of YAGNI.

With h-entry in particular, we started with an implementation-filtered subset of hAtom, and since then have started documenting new properties through a few deliberate phases (which helps communicate to implementers which properties are more experimental or more stable):

  1. Proposed Additions – when someone proposes a property, gets some sort of consensus among their community peers, and perhaps one more person implementing it in the wild beyond themselves (e.g. as the IndieWebCamp community does), it's worth capturing it as a proposed property to communicate that this work is happening between multiple people, and that feedback, experimentation, and iteration are desired.
  2. Draft Properties – when implementations begin to consume proposed properties and do something explicit with them, a positive reinforcement feedback loop has started and it makes sense to indicate that such a phase change has occurred by moving those properties to "draft". There is growing activity around those properties, and thus this should be considered a last call of sorts for any non-trivial changes, which get harder to make with each new implementation.
  3. Core Properties – these properties have gained so much publishing and consuming support that they are for all intents and purposes stable. Another phase change has occurred: it would be much harder to change them (too many implementations to coordinate) than to keep them the same, and thus their stability has been determined by real-world market adoption.

The three levels here, proposed, draft, and core, are merely "working" names, that is, if you have a better idea what to call these three phases by all means propose it.

In h-entry in particular, it's likely that some of the draft properties are now mature (implemented) enough to move them to core, and some of the proposed properties have gained enough support to move to draft. The key to making this happen is finding and citing documentation of such implementation and support. Anyone can speak up in the IRC channel etc. and point out such properties that they think are ready for advancement.

How we improve moving forward

We have made a lot of progress and have much better processes than we did even a year ago, however I think there’s still room for improvement in how we evolve both microformats technical specifications like the microformats2 parsing spec, and in how we create and improve vocabularies.

It’s pretty clear that to enable innovation we need ways of encouraging constructive experimentation, and yet we also need a way of indicating what is stable vs in-progress. For both of those we have found that real-world implementations provide both a good focusing mechanism and a good way to test experiments.

In the coming year I expect we will find even better ways to explain these methods, in the hopes that others can use them in their efforts, whether related to microformats or in completely different standards efforts. For now, let’s appreciate the progress we’ve made in the past year from publishing sites, to parsing implementations, from process improvements, to continuously improving living specifications. Here's to year 12.

Also published on:

Daniel Stenbergcurl user survey results 2016

The annual curl user poll was up for 12 days, from May 16 to and including May 27th, and it has taken me a while to summarize and put together everything into a single 21-page document with all the numbers and plenty of graphs.

Full 2016 survey analysis document

The conclusion I’ve drawn from it: “We’re not done yet”.

Here’s a bonus graph from the report, showing what TLS backends people are using with curl in 2016 and 2015:


Christian HeilmannMy closing keynote at Awwwards NYC 2016: A New Hope – the web strikes back

Last week I was lucky enough to give the closing keynote at the Awwwards Conference in New York.

serviceworker beats appcache

Following my current fascination, I wanted to cover the topic of Progressive Web Apps for an audience that is not too technical, and also very focused on delivering high-fidelity, exciting and bleeding edge experiences on the web.

Getting slightly too excited about my Star Wars based title, I went a bit overboard bastardising Star Wars quotes in the slides, but I managed to cover a lot of the why of Progressive Web Apps and why they are a great opportunity right now.

I covered:

  • The web as an idea and its inception: independent, distributed and based on open protocols
  • The power of links
  • The horrible environment that was the first browser wars
  • The rise of standards as a means to build predictable, future-proof products
  • How we became too dogmatic about standards
  • How this led to rebelling developers using JavaScript to build everything
  • Why this is a brittle environment and a massive bet on things working flawlessly on our users’ computers
  • How we never experience this as our environments are high-end and we’re well connected
  • How we defined best practices for JavaScript, like Unobtrusive JavaScript and defensive coding
  • How libraries and frameworks promise to fix all our issues and we’ve become dependent on them
  • How a whole new generation of developers learned development by copying and pasting library-dependent code on Stackoverflow
  • How this, among other factors, led to a terribly bloated web full of multi-megabyte web sites littered with third-party JavaScript and library code
  • How the rise of mobile, with its limitations, is very much a terrible environment for those sites to run in
  • How native apps were heralded as the solution to that
  • How we retaliated by constantly repeating that the web will win out in the end
  • How we failed to retaliate by building web-standard based apps that played by the rules of native – an environment where the deck was stacked against browsers
  • How right now our predictions have partly come true – the native environments and closed marketplaces are failing to deliver. Users on mobile use on average five apps and don’t download a single new one per month
  • How users are sick of having to jump through hoops to try out some new app and having to lock themselves in a certain environment
  • How the current state of supporting mobile hardware access in browsers is a great opportunity to build immersive experiences with web technology
  • How ServiceWorker is a great opportunity to offer offline capable solutions and have notifications to re-engage users and allow solutions to hibernate
  • How Progressive Web Apps are a massive opportunity to show native how software distribution should happen in 2016

Yes, I got all that in. See for yourself :).

The slides are available on SlideShare

You can watch the screencast of the video on YouTube.

Daniel PocockWebRTC and communications projects in GSoC 2016

This year a significant number of students are working on RTC-related projects as part of Google Summer of Code, under the umbrella of the Debian Project. You may have already encountered some of them blogging on Planet or participating in mailing lists and IRC.

WebRTC plugins for popular CMS and web frameworks

There is already a range of pseudo-WebRTC plugins available for CMS and blogging platforms like WordPress. Unfortunately, many of them either don't release all their source code, lock users into the developers' own servers, or require users to download potentially untrustworthy browser plugins (also without any source code) to use them.

Mesut is making plugins for genuinely free WebRTC with open standards like SIP. He has recently created the WPCall plugin for WordPress, based on the highly successful DruCall plugin for WebRTC in Drupal.

Keerthana has started creating a similar plugin for MediaWiki.

What is great about these plugins is that they don't require any browser plugins and they work with any server-side SIP infrastructure that you choose. Whether you are routing calls into a call center or simply using them on a personal blog, they are quick and convenient to install. Hopefully they will be made available as packages, like the DruCall packages for Debian and Ubuntu, enabling even faster installation with all dependencies.

Would you like to try running these plugins yourself and provide feedback to the students? Would you like to help deploy them for online communities using Drupal, WordPress or MediaWiki to power their web sites? Please come and discuss them with us in the Free-RTC mailing list.

You can read more about how to run your own SIP proxy for WebRTC in the RTC Quick Start Guide.

Finding all the phone numbers and ham radio callsigns in old emails

Do you have phone numbers and other contact details such as ham radio callsigns in old emails? Would you like a quick way to data-mine your inbox to find them and help migrate them to your address book?

Jaminy is working on Python scripts to do just that. Her project takes some inspiration from the Telify plugin for Firefox, which detects phone numbers in web pages and converts them to hyperlinks for click-to-dial. The popular libphonenumber from Google, used to format numbers on Android phones, is being used to help normalize any numbers found. If you would like to test the code against your own mailbox and address book, please make contact in the #debian-data channel on IRC.
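As a rough illustration of the mining step (the actual project uses Google's libphonenumber for robust parsing and formatting; the regex and the `find_phone_numbers` helper below are simplified assumptions, not Jaminy's code), a stdlib-only sketch might look like:

```python
import re

# Simplified sketch of mining phone numbers from email text.
# A naive pattern: optional '+', then digits possibly separated by
# spaces, parentheses, dots, or hyphens. libphonenumber does this
# far more robustly, with per-country validation.
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{6,}\d")

def find_phone_numbers(text):
    """Return candidate phone numbers, stripped down to digits (+ kept)."""
    results = []
    for match in PHONE_RE.findall(text):
        digits = re.sub(r"[^\d+]", "", match)
        if len(digits.lstrip("+")) >= 7:   # drop short false positives
            results.append(digits)
    return results

email_body = """Hi, call me at +1 (415) 555-0100 or the office,
extension 23, landline 020 7946 0958. Thanks!"""

print(find_phone_numbers(email_body))
# → ['+14155550100', '02079460958']
```

Note that "extension 23" is correctly skipped by the minimum-length check; deciding what is and isn't a phone number is exactly the hard part that libphonenumber solves.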

A truly peer-to-peer alternative to SIP, XMPP and WebRTC

The team at Savoir Faire Linux has been busy building the Ring softphone, a truly peer-to-peer solution based on the OpenDHT distributed hash table technology.

Several students (Simon, Olivier, Nicolas and Alok) are actively collaborating on this project, some of them have been fortunate enough to participate at SFL's offices in Montreal, Canada. These GSoC projects have also provided a great opportunity to raise Debian's profile in Montreal ahead of DebConf17 next year.

Linux Desktop Telepathy framework and reSIProcate

Another group of students, Mateus, Udit and Balram have been busy working on C++ projects involving the Telepathy framework and the reSIProcate SIP stack. Telepathy is the framework behind popular softphones such as GNOME Empathy that are installed by default on the GNU/Linux desktop.

I previously wrote about starting a new SIP-based connection manager for Telepathy based on reSIProcate. Using reSIProcate means more comprehensive support for all the features of SIP, better NAT traversal, IPv6 support, NAPTR support and TLS support. The combined impact of all these features is much greater connectivity and much greater convenience.

The students are extending that work, completing the buddy list functionality, improving error handling and looking at interaction with XMPP.

Streamlining provisioning of SIP accounts

Currently there is some manual effort for each user to take the SIP account settings from their Internet Telephony Service Provider (ITSP) and transpose these into the account settings required by their softphone.

Pranav has been working to close that gap, creating a JAR that can be embedded in Java softphones such as Jitsi, Lumicall and CSipSimple to automate as much of the provisioning process as possible. ITSPs are encouraged to test this client against their services and will be able to add details specific to their service through Github pull requests.

The project also hopes to provide streamlined provisioning mechanisms for privately operated SIP PBXes, such as the Asterisk and FreeSWITCH servers used in small businesses.

Improving SIP support in Apache Camel and the Jitsi softphone

Apache Camel's SIP component and the widely known Jitsi softphone both use the JAIN SIP library for Java.

Nik has been looking at issues faced by SIP users in both projects, adding support for the MESSAGE method in camel-sip and looking at why users sometimes see multiple password prompts for SIP accounts in Jitsi.

If you are trying either of these projects, you are very welcome to come and discuss them on the mailing lists, Camel users and Jitsi users.

GSoC students at DebConf16 and DebConf17 and other events

Many of us have been lucky to meet GSoC students attending DebConf, FOSDEM and other events in the past. From this year, Google expects the students to complete GSoC before they become eligible for any travel assistance. Some of the students will still be at DebConf16 next month, assisted by the regular travel budget and the diversity funding initiative. Nik and Mesut were already able to travel to Vienna for the recent MiniDebConf.

As mentioned earlier, several of the students and the mentors at Savoir Faire Linux are based in Montreal, Canada, the destination for DebConf17 next year and it is great to see the momentum already building for an event that promises to be very big.

Explore the world of Free Real-Time Communications (RTC)

If you are interested in knowing more about the Free RTC topic, you may find the following resources helpful:

RTC mentoring team 2016

We have been very fortunate to build a large team of mentors around the RTC-themed projects for 2016. Many of them are first time GSoC mentors and/or new to the Debian community. Some have successfully completed GSoC as students in the past. Each of them brings unique experience and leadership in their domain.

Helping GSoC projects in 2016 and beyond

Not everybody wants to commit to being a dedicated mentor for a GSoC student. In fact, there are many ways to help without being a mentor and many benefits of doing so.

Simply looking out for potential applicants for future rounds of GSoC and referring them to the debian-outreach mailing list or an existing mentor helps ensure we can identify talented students early and design projects around their capabilities and interests.

Testing the projects on an ad-hoc basis, greeting the students at DebConf and reading over the student wikis to find out where they are and introduce them to other developers in their area are all possible ways to help the projects succeed and foster long term engagement.

Google gives Debian a USD $500 grant for each student who completes a project successfully this year. If all 2016 students pass, that is over $10,000 to support Debian's mission.

Shing LyuShow Firefox Bookmark Toolbar in Fullscreen Mode

By default, the bookmark toolbar is hidden when Firefox goes into fullscreen mode. It’s quite annoying because I use the bookmark toolbar a lot. And since I use the i3 window manager, I also use fullscreen mode very often to avoid resizing windows. After some googling I found this quick solution on SUMO (the Firefox community is awesome!).


The idea is that the Firefox chrome (not to be confused with the Google Chrome browser) is defined using XUL. You can adjust its styling using CSS. The user defined chrome CSS is located in your Firefox profile. Here is how you do it:

  • Open your Firefox profile folder, which is ~/.mozilla/firefox/<hash>.<profile_name> on Linux. If you can’t find it, open about:support in Firefox and click the “Open Directory” button in the “Profile Directory” field.


  • Create a folder named chrome if it doesn’t exist yet.
  • Create a file called userChrome.css in the chrome folder, copy the following content into it and save.

@namespace url("http://www.mozilla.org/keymaster/gatekeeper/there.is.only.xul"); /* only needed once */

/* full screen toolbars: keep them visible in fullscreen mode */
#navigator-toolbox[inFullscreen] toolbar:not([collapsed="true"]) {
  display: -moz-box !important;
}
  • Restart your Firefox and voilà!


Mozilla Open Policy & Advocacy BlogEU Internet Users Can Stand Up For Net Neutrality

Over the past 18 months, the debate around the free and open Internet has taken hold in countries around the world, and we’ve been encouraged to see governments take steps to secure net neutrality. A key component of these movements has been strong public support from users upholding the internet as a global public resource. From the U.S. to India, public opinion has helped to positively influence internet regulators and shape internet policy.

Now, it’s time for internet users in the EU to speak out and stand up for net neutrality.

The Body of European Regulators of Electronic Communications (BEREC) is currently finalising implementation guidelines for the net neutrality legislation passed by EU Parliament last year. This is an important moment — how the legislation is interpreted will have a major impact on the health of the internet in the EU. A clear, strong interpretation can uphold the internet as a free and open platform. But a different interpretation can grant big telecom companies considerable influence and the ability to implement fast lanes, slow lanes, and zero-rating. It would make the internet more closed and more exclusive.

At Mozilla, we believe the internet is at its best as a free, open, and decentralised platform. This is the kind of internet that enables creativity and collaboration; that grants everyone equal opportunity; and that benefits competition and innovation online.

Everyday internet users in the EU have the opportunity to stand up for this type of internet. From now through July 18, BEREC is accepting public comments on the draft guidelines. It’s a small window — and BEREC is simultaneously experiencing pressure from telecom companies and other net neutrality foes to undermine the guidelines. That’s why it’s so important to sound off. When more and more citizens stand up for net neutrality, we’re empowering BEREC to stand their ground and interpret net neutrality legislation in a positive way.

Mozilla is proud to support, an initiative by several NGOs — like European Digital Rights (EDRi) and Access Now — to uphold strong net neutrality in the EU. makes it simple to submit a public comment to BEREC and stand up for an open internet in the EU. BEREC’s draft guidelines already address many of the ambiguities in the Regulation; your input and support can bring needed clarity and strength to the rules. We hope you’ll join us: visit and write BEREC before the July 18 deadline.

This Week In RustThis Week in Rust 135

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us an email! Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

This week's edition was edited by: nasa42 and llogiq.

Updates from Rust Community

News & Blog Posts

New Crates & Project Updates

Crate of the Week

This week's Crate of the Week is error-chain which feels like the missing piece in Rust's Result-based error-handling puzzle. Thanks to KodrAus for the suggestion.

Submit your suggestions for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

73 pull requests were merged in the last two weeks.

New Contributors

  • Esteban Küber
  • marudor

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Upcoming Events

  • 6/22. Rust Community Team Meeting at #rust-community on
  • 6/23. Rust release triage at #rust-triage on
  • 6/29. Rust Community Team Meeting at #rust-community on
  • 6/30. Zurich, Switzerland - Introduction to Rust.

If you are running a Rust event please add it to the calendar to get it mentioned here. Email Erick Tryzelaar or Brian Anderson for access.

fn work(on: RustProject) -> Money

No jobs listed for this week.

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

The Rust standard libs aren't quite batteries included, but they come with a pile of adaptor cables and an optional chemistry lab.

Gankro on Twitter

Thanks to llogiq for the suggestion.

Submit your quotes for next week!

The Servo BlogThis Week In Servo 68

In the last week, we landed 55 PRs in the Servo organization’s repositories.

The entire Servo team and several of our contributors spent last week in London for the Mozilla All Hands meeting. While this meeting resulted in fewer PRs than normal, there were many great meetings that resulted in both figuring out some hard problems and introducing more people to Servo’s systems. These meetings included:

Planning and Status

Our overall roadmap and quarterly goals are available online.

This week’s status updates are here.

Notable Additions

  • connorgbrewster added tests for the history interface
  • ms2ger moved ServoLayoutNode to script, as part of the effort to untangle our build dependencies
  • nox is working to reduce our shameless dependencies on Nightly Rust across our dependencies
  • aneeshusa added better support for installing the Android build tools for cross-compilation on our builders
  • jdm avoided a poor developer experience when debugging on OS X
  • darinm223 fixed the layout of images with percentage dimensions
  • izgzhen implemented several APIs related to Blob URLs
  • srm912 and jdm added support for private mozbrowser iframes (ie. incognito mode)
  • nox improved the performance of several 2d canvas APIs
  • jmr0 implemented throttling for mozbrowser iframes that are explicitly marked as invisible
  • notriddle fixed the positioning of the cursor in empty input fields

New Contributors

Interested in helping build a web browser? Take a look at our curated list of issues that are good for new contributors!


No screenshots this week.

Anjana VakilMozilla London All Hands 2016

Last week, all of Mozilla met in London for a whirlwind tour from TARDIS to TaskCluster, from BBC1 to e10s, from Regent Park to the release train, from Paddington to Positron. As an Outreachy intern, I felt incredibly lucky to be part of this event, which gave me a chance to get to know Mozilla, my team, and the other interns much better. It was a jam-packed work week of talks, meetings, team events, pubs, and parties, and it would be impossible to blog about all of the fun, fascinating, and foxy things I learned and did. But I can at least give you some of the highlights! Or, should I say, Who-lights? (Be warned, that is not the last pun you will encounter here today.)

Role models

While watching the plenary session that kicked off the week, it felt great to realize that of the 4 executives emerging from the TARDIS in the corner to take the stage (3 Mozillians and 1 guest star), a full 50% were women. As I had shared with my mentor (also a woman) before arriving in London, one of my goals for the week was to get inspired by finding some new role moz-els (ha!): Mozillians who I could aspire to be like one day, especially those of the female variety.

Why a female role model, specifically? What does gender have to do with it?

Well, to be a good role model for you, a person needs to not only have a life/career/lego-dragon you aspire to have one day, but also be someone who you can already identify with, and see yourself in, today. A role model serves as a bridge between the two. As I am a woman, and that is a fundamental part of my experience, a role model who shares that experience is that much easier for me to relate to. I wouldn’t turn down a half-Irish-half-Indian American living in Germany, either.

At any rate, in London I found no shortage of talented, experienced, and - perhaps most importantly - valued women at Mozilla. I don’t want to single anyone out here, but I can tell you that I met women at all levels of the organization, from intern to executive, who have done and are doing really exciting things to advance both the technology and culture of Mozilla and the web. Knowing that those people exist, and that what they do is possible, might be the most valuable thing I took home with me from London.

Aw, what about the Whomsycorn?

No offense, Whomsycorn, you’re cool too.

Electrolysis (e10s)

Electrolysis, or “e10s” for those who prefer integers to morphemes, is a massive and long-running initiative to separate the work Firefox does into multiple processes.

At the moment, the Firefox desktop program that the average user downloads and uses to explore the web runs in a single process. That means that one process has to do all the work of loading & displaying web pages (the browser “content”), as well as the work of displaying the user interface and its various tabs, search bars, sidebars, etc. (the browser “chrome”). So if something goes wrong with, say, the execution of a poorly-written script on a particular page, instead of only that page refusing to load, or its tab perhaps crashing, the entire browser itself may hang or crash.

That’s not cool. Especially if you often have lots of tabs open. Not that I ever do.

Of course not. Anyway, even less cool is the possibility that some jerk (not that there are any of those on the internet, though, right?) could make a page with a script that hijacks the entire browser process, and does super uncool stuff.

It would be much cooler if, instead of a single massive process, Firefox could use separate processes for content and chrome. Then, if a page crashes, at least the UI still works. And if we assign the content process(es) reduced permissions, we can keep possibly-jerkish content in a nice, safe sandbox so that it can’t do uncool things with our browser or computer.

Better performance, more security? Super cool.

Indeed. Separation is cool. Electrolysis is separation. Ergo, Electrolysis is cool.

It’s not perfect yet - for example, compatibility with right-to-left languages, accessibility (or “a11y”, if “e10s” needs a buddy), and add-ons is still an issue - but it’s getting there, and it’s rolling out real soon! Given that the project has been underway since 2008, that’s pretty exciting.

Rust, Servo, & Oxidation

I first heard about the increasingly popular language Rust when I was at the Recurse Center last fall, and all I knew about it was that it was being used at Mozilla to develop a new browser engine called Servo.

More recently, I heard talks from Mozillians like E. Dunham that revealed a bit more about why people are so excited about Rust: it’s a new language for low-level programming, and compared with the current mainstay C, it guarantees memory safety. As in, “No more segfaults, no more NULLs, no more dangling pointers’ dirty looks”. It’s also been designed with concurrency and thread safety in mind, so that programs can take better advantage of e.g. multi-core processors. (Do not ask me to get into details on this; the lowest level I have programmed at is probably sitting in a beanbag chair. But I believe them when they say that Rust does those things, and that those things are good.)
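To make the memory-safety claim a bit more concrete, here is a minimal sketch of my own (not an example from those talks): in Rust, every value has a single owner, and the compiler rejects any use of a value after ownership has moved, which is how whole classes of dangling-pointer bugs are ruled out at compile time.

```rust
// Borrowing: `sum` only borrows the slice, so the caller keeps ownership.
fn sum(values: &[i32]) -> i32 {
    values.iter().sum()
}

fn main() {
    let v = vec![1, 2, 3];
    let total = sum(&v); // borrow: `v` is still usable afterwards

    let w = v; // move: ownership transfers to `w`, `v` is now invalid
    // println!("{:?}", v); // would not compile: use of moved value `v`

    assert_eq!(total, 6);
    println!("{:?} sums to {}", w, total);
}
```

The commented-out line is the interesting part: in C this kind of stale access compiles fine and fails at runtime (if you're lucky); in Rust it is a compile error.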

Finally, it has the advantage of having a tight-knit, active, and dedicated community “populated entirely by human beings”, which is due in no small part to the folks at Mozilla and beyond who’ve been working to keep the community that way.

OK OK OK, so Rust is a super cool new language. What can you do with it?

Well, lots of stuff. For example, you could write a totally new browser engine, and call it Servo.

Wait, what’s a browser engine?

A browser engine (aka layout or rendering engine) is basically the part of a browser that allows it to show you the web pages you navigate to. That is, it takes the raw HTML and CSS content of the page, figures out what it means, and turns it into a pretty picture for you to look at.

Uh, I’m pretty sure I can see web pages in Firefox right now. Doesn’t it already have an engine?

Indeed it does. It’s called Gecko, and it’s written in C++. It lets Firefox make the web beautiful every day.

So why Servo, then? Is it going to replace Gecko?

No. Servo is an experimental engine developed by Mozilla Research; it’s just intended to serve(-o!) as a playground for new ideas that could improve a browser’s performance and security.

The beauty of having a research project like Servo and a real-world project like Gecko under the same roof at Mozilla is that when the Servo team’s research unveils some new and clever way of doing something faster or more awesomely than Gecko does, everybody wins! That’s thanks to the Oxidation project, which aims to integrate clever Rust components cooked up in the Servo lab into Gecko. Apparently, Firefox 45 already got (somewhat unexpectedly) an MP4 metadata parser in Rust, which has been running just fine so far. It’s just the tip of the iceberg, but the potential for cool ideas from Servo to make their way into Gecko via Oxidation is pretty exciting.

The Janitor

Another really exciting thing I heard about during the week is The Janitor, a tool that lets you contribute to FOSS projects like Firefox straight from your browser.

For me, one of the biggest hurdles to contributing to a new open-source project is getting the development environment all set up.

Ugh I hate that. I just want to change one line of code, do I really need to spend two days grappling with installation and configuration?!?

Exactly. But the Janitor to the rescue!

Cowboy Bebop via GIPHY

Powered by the very cool Cloud9 IDE, the Janitor gives you one-click access to a ready-to-go, cloud-based development environment for a given project. At the moment there are a handful of projects supported (including Firefox, Servo, and Google Chrome), and new ones can be added by simply writing a Dockerfile. I’m not sure that an easier point of entry for new FOSS contributors is physically possible. The ease of start-up is perfect for short-term contribution efforts like hackathons or workshops, and thanks to the collaborative features of Cloud9 it’s also perfect for remote pairing.

Awesome, I’m sold. How do I use it?

Unfortunately, the Janitor is still in alpha and invite-only, but you can sign up to get on the waitlist. I’m still waiting to get my invite, but if it’s half as fantastic as it seems, it will be a huge step forward in making it easier for new contributors to get involved with FOSS projects. If it starts supporting offline work (apparently the Cloud9 editor is somewhat functional offline already, once you’ve loaded the page initially, but the terminal and VNC always need a connection to function), I think it’ll be unstoppable.


L20n

The last cool thing I heard about (literally, it was the last session on Friday) at this work week was L20n.

Wait, I thought “localization” was abbreviated “L10n”?

Yeah, um, that’s the whole pun. Way to be sharp, exhausted-from-a-week-of-talks-Anjana.

See, L20n is a next-generation framework for web and browser localization (l10n) and internationalization (i18n). It’s apparently a long-running project too, born out of the frustrations of the l10n status quo.

According to the L20n team, at the moment the localization system for Firefox is spread over multiple files with multiple syntaxes, which is no fun for localizers, and multiple APIs, which is no fun for developers. What we end up with is program logic intermingling with l10n/i18n decisions (say, determining the correct format for a date) such that developers, who probably aren’t also localizers, end up making decisions about language that should really be in the hands of the localizers. And if a localizer makes a syntax error when editing a certain localization file, the entire browser refuses to run. Not cool.

Pop quiz: what’s cool?


C’mon, we just went over this. Go on and scroll up.


Yeah, that’s cool, but thinking more generally…


That’s right! Separation is super cool! And that’s what L20n does: separate l10n code from program source code. This way, developers aren’t pretending to be localizers, and localizers aren’t crashing browsers. Instead, developers are merely getting localized strings by calling a single L20n API, and localizers are providing localized strings in a single file format & syntax.

Wait but, isn’t unifying everything into a single API/file format the opposite of separation? Does that mean it’s not cool?

Shhh. Meaningful separation of concerns is cool. Arbitrary separation of a single concern (l10n) is not cool. L20n knows the difference.

OK, fine. But first “e10s” and “a11y”, now “l10n”/“l20n” and “i18n”… why does everything need a numbreviation?

You’ve got me there.

Nick DesaulniersSetting up mutt with gmail on Ubuntu

I was looking to set up the mutt email client on my Ubuntu box to go through my gmail account. Since it took me a couple of hours to figure out, and I’ll probably forget by the time I need to know again, I figured I’d post my steps here.

I’m on Ubuntu 16.04 LTS (lsb_release -a)

Install mutt:

$ sudo apt-get install mutt

In Gmail, allow other apps to access your account:

Follow Google’s “Allowing less secure apps to access your account” help article and turn on access for less secure apps.

Create the mail spool file

$ sudo touch $MAIL
$ sudo chmod 660 $MAIL
$ sudo chown `whoami`:mail $MAIL

where $MAIL for me was /var/mail/nick.

Create the ~/.muttrc file

set realname = "<first and last name>"
set from = "<gmail username>"
set use_from = yes
set envelope_from = yes

set smtp_url = "smtps://<gmail username>"
set smtp_pass = "<gmail password>"
set imap_user = "<gmail username>"
set imap_pass = "<gmail password>"
set folder = "imaps://"
set spoolfile = "+INBOX"
set ssl_force_tls = yes

# G to get mail
bind index G imap-fetch-mail
set editor = "vim"
unset record
set move = no
set charset = "utf-8"

I’m sure there are better/more config options. Feel free to go wild; this is by no means a comprehensive setup.

Run mutt:

$ mutt

We should see it connect and download our messages. Press m to compose a new message and G to fetch new messages.

Daniel GlazmanWhy he shouldn't have stopped using CSS

This article is a response to this one. I found it via a tweet from Korben, of course. It made my hackles rise, of course, so I need to reply to its author via this blog.

Selectors

The author of the article makes the following three complaints about CSS selectors:

  1. The style definition associated with a selector can be redefined elsewhere
  2. If several styles are associated with a selector, the last ones defined in the CSS always take priority
  3. Someone can break a component's styles simply because they don't know a selector is used elsewhere

The least I can say is that I was stunned reading this. A few lines earlier, the author was drawing a comparison with JavaScript. So let's take his three grievances in turn...

For number 1, he complains that if (foo) { a = 1; } ... if (foo) { a = 2; } is simply possible. Bwarf.

In case number 2, he complains that in if (foo) { a = 1; a = 2; ... } the ... will see the variable a hold the value 2.

In case number 3, well, sure: I know of no language in which someone who modifies code without understanding the context is guaranteed to break nothing...

Specificity

The "sheer madness" of !important (and I am the first to admit it did not make implementing BlueGriffon any easier) is nevertheless what has allowed zillions of bookmarklets and scripts injected into completely arbitrary pages to have a guaranteed visual result. The author rails not only against specificity and how it is computed, but also against the very possibility of a contextual basis for applying a rule. He is partly right, and long ago I myself proposed a CSS Editing Profile limiting the selectors used in such a profile, for easier manipulation. But where he is wrong is that zillions of professional sites built on components absolutely need complex selectors and that specificity computation...

Regressions

Reading this section, I let out a loud "now he's really exaggerating"... Yes, in any interpreted or compiled language, changing something somewhere without taking the rest into account can have negative side effects. His example is exactly analogous to deriving a class: an addition to the base class shows up in the derived class. Oh, how terrible... Let's be serious for a second, please.

How styles are prioritized

Here the author clearly has not understood something fundamental about Web browsers, the DOM, and how CSS is applied. There are only two possible choices: either you use document tree order, or you use the cascade rules of the stylesheets. From the DOM's point of view, class="red blue" and class="blue red" are strictly equivalent, and there is no guarantee, I repeat none, that browsers preserve that order in their DOMTokenList.
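To make that point concrete, here is a tiny example of my own (not taken from either article): with equal specificity, the order of the rules in the stylesheet decides, and the order of the class names in the attribute plays no role at all.

```css
.red  { color: red; }
.blue { color: blue; }

/* Both <p class="red blue"> and <p class="blue red"> render blue:
   the two selectors have equal specificity (0,1,0), so the rule
   that appears later in the stylesheet (.blue) wins the cascade. */
```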

The future of CSS

Back to the author's JS comparison. Basically, if line 1 is var a = 1 and line 2 is alert(a), the author complains that inserting var a = 2 between the two lines displays 2 instead of 1... As arguments go, that is plainly inadmissible (in the sense of: not acceptable).

The BEM methodology

A band-aid on a wooden leg... Nothing actually changes, but you pile on indentation that bloats the file, hinders editing and manipulation, and is in no way machine-readable, since none of it survives into the CSS Object Model.

His proposed alternative

I coughed my lungs out of my body reading this section. It is an unmaintainable, verbose, error-prone horror.

In conclusion...

Yes, CSS has birth defects. I readily admit it. It even has adult defects, when I see some of the nastiness Shadow DOM wants to push into CSS Selectors. But his alternative is a steamroller to crush a fly, an over-engineered contraption of rarely matched magnitude.

Overall I disagree completely with bloodyowl, who too easily forgets the immense benefits we have drawn from everything he decries in his article. Hundreds of things would be impossible without all of it. So yes, granted, the cascade is a bit contrived. But you don't catch flies with vinegar, and if the whole world has adopted CSS (including the publishing world, which came from solutions radically different from the Web's), it is precisely because it is good and because it works.

In short, no, CSS is not "a horribly dangerous language". But yes, if you let just anyone make additions just any way to an existing corpus, it can end in disaster. The same is true of a programming language, a book, a thesis, mechanical engineering, anywhere. So there.

Soledad PenadesPost #mozlondon

Writing this from the comfort of my flat, in London, just as many people are tweeting about their upcoming flight from “#mozlondon”—such a blissful post-all Hands travel experience for once!

Note: #mozlondon was a Mozilla all hands which was held in London last week. And since everything is quite social networked nowadays, the “#mozlondon” tag was chosen. Previous incarnations: mozlando (for Orlando), mozwww (for Vancouver’s “Whistler Work Week” which made for a very nice mountainous jagged tag), and mozlandia (because it was held in Portland, and well, Portlandia. Obviously!)

I always left previous all hands feeling tired and unwell to varying degrees. There’s so much going on, in different places, and there’s almost no time to let things sink into your brain (let alone into your stomach as you briskly dash from location to location). The structure of previous editions also didn’t really lend itself very well to collaboration between teams—too many, too long plenaries, very little time to grab other people’s already exhausted attention.

This time, the plenaries were shortened and reduced in number. No long and windy “inspirational” keynotes, and way more room for arranging our own meetings with other teams, and for arranging open sessions to talk about your work to anyone interested. More BarCamp style than big and flashy, plus optional elective training sessions in which we could learn new skills, related or not to our area of expertise.

I’m glad to say that this new format has worked so much better for me. I actually was quite surprised that it was going really well for me half-way during the week, and being the cynic that I sometimes am, was expecting a terrible blow to be delivered before the end of the event. But… no.

We have got better at meetings. Our team meeting wasn’t a bunch of people interrupting each other. That was a marvel! I loved that we got things done and agreements and disagreements settled in a civilised manner. The recipe for this successful meeting: have an agenda, a set time, and a moderator, and demand one or more “conclusions” or “action items” after the meeting (otherwise why did you meet?), and make everyone aware that time is precious and running out, to avoid derailments.

We also met with the Servo team. With almost literally all of them. This was quite funny: we had set up a meeting with two or three of them, and other members of the team saw it in somebody else’s calendar and figured a meeting to discuss Servo+DevRel sounded interesting, so they all came, out of their own volition! It was quite unexpected, but welcome, and that way we could meet everyone and put faces to IRC nicknames in just one hour. Needless to say, it’s a great caring team and I’m really pleased that we’re going to work together during the upcoming months.

I also enjoyed the elective training sessions.

I went to two training sessions on Rust; it reminded me how much fun “systems programming” can be, and made me excited about the idea of safe parallelism (among other cool stuff). I also re-realised how hard programming and teaching programming can be as I confronted my total inexperience in Rust and increasing frustration at the amount of new concepts thrown at me in such a short interval—every expert on any field should regularly try learning something new every now and then to bring some ‘humility’ back and replenish the empathy stores.

The people sessions were quite long and exhausting and had a ton of content in 3 hours each, and after them I was just an empty hungry shell. But a shell that had learned good stuff!

One was about having difficult conversations, navigating conflict, etc. I quickly saw how many of my ways had been wrong in the past (e.g. replying to a hurt person with self-defense instead of trying to find why they were hurt). Hopefully I can avoid falling in the same traps in the future! This is essential for so many aspects in life, not only open source or software development; I don’t know why this is not taught to everyone by default.

The second session was about doing good interviews. In this respect, I was a bit relieved to see that my way of interviewing was quite close to the recommendations, but it was good to learn additional techniques, like the STAR interview technique. Which surfaces an irony: even “non-technical” skills have a technique to them.

A note to self (that I’m also sharing with you): always make an effort to find good adjectives that aren’t a negation, but a description. E.g. in this context “people sessions” or “interpersonal skills sessions” work so much better and are more descriptive and specific than “non-technical” while also not disrespecting those skills because they’re “just not technical”.

A thing I really liked from these two sessions is that I had the chance to meet people from areas I would not have ever met otherwise, as they work on something totally different from what I work on.

The session on becoming a more senior engineer was full of good inspiration and advice. Some of the ideas I liked the most:

  • as soon as you get into a new position, start thinking of who should replace you so you can move on to something else in the future (so you set more people in a path of success). You either find that person or make it possible for others to become that person…
  • helping people be successful as a better indicator of your progress to seniority than being very good at coding
  • being a good generalist is as good as being a good specialist—different people work differently and add different sets of skills to an organisation
  • but being a good specialist is “only good” if your special skill is something the organisation needs
  • changing projects and working on different areas as an antidote to burn out
  • don’t be afraid to occasionally jump into something even if you’re not sure you can do it; it will probably grow you!
  • canned projects are not your personal failure, it’s simply a signal to move on and make something new and great again, using what you learned. Most of the people on the panel had had projects canned, and survived, and got better
  • if a project gets cancelled there’s a very high chance that you are not going to be “fired”, as there are always tons of problems to be fixed. Maybe you were trying to fix the wrong problem. Maybe it wasn’t even a problem!
  • as you get more senior you speak less to machines and more to people: you develop less, and help more people develop
  • you also get less strict about things that used to worry you a lot and turn out to be… not so important! you also delegate more and freak out less. Tolerance.
  • I was also happy to hear a very clear “NO” to programming during every single moment of your spare time to prove you’re a good developer, as that only leads to burn out and being a mediocre engineer.

Deliberate strategies

I designed this week with the full intent of making the most of it while still keeping healthy. These are my strategies for future reference:

  • A week before: I spent time going through the schedule and choosing the sessions I wanted to attend.
  • I left plenty of space between meetings in order to have some “buffer” time to process information and walk between venues (the time pedestrians spend in traffic lights is significantly higher than you would expect). Even then, I had to rush between venues more than once!
  • I would not go to events outside of my timetable – no late minute stressing over going to an unexpected session!
  • If a day was going to be super busy on the afternoon, I took it easier on the morning
  • Drank lots of water. I kept track of how much; I never met my target, but I felt much better on the days I drank more water.
  • Avoided the terrible coffee at the venues, and also caffeine as much as possible. Also avoided the very-nice-looking desserts, and snacks in general, and didn’t eat a lot because why, if we are just essentially sitting down all day?
  • Allowed myself a good coffee a day–going to the nice coffee places I compiled, which made for a nice walk
  • Brought layers of clothes (for the venues were either scorching hot and humid or plainly freezing) and comfy running trainers (to walk 8 km a day between venues and rooms without developing sore feet)
  • Saying no to big dinners. Actively seeking out smaller gatherings of 2-4 people so we all hear each other and also have more personal conversations.
  • Saying no to dinner with people when I wasn’t feeling great.

The last points were super essential to being socially functional: by having enough time to ‘recharge’, I felt energised to talk to random people I encountered in the “Hallway track”, and had a number of fruitful conversations over lunch, drinks or dinner which would otherwise not have happened because I would have felt aloof.

I’m now tired anyway, because there is no way to not get tired after so many interactions and information absorbing, but I am not feeling sick and depressed! Instead I’m just thinking about what I learnt during the last week, so I will call this all hands a success! 🎉

flattr this!

Giorgos LogiotatidisBuild and Test against Docker Images in Travis

The road towards the absolute CI/CD pipeline goes through building Docker images and deploying them to production. The code included in the images gets unit tested, both locally during development and, after merging into the master branch, using Travis.

But Travis builds its own environment to run the tests on, which could be different from the environment of the docker image. For example, Travis may be running tests in a Debian-based VM with libjpeg version X while our to-be-deployed docker image runs code on top of Alpine with libjpeg version Y.

To ensure that the image to be deployed to production is OK, we need to run the tests inside that Docker image. That's still possible with Travis, with only a few changes to .travis.yml:

Sudo is required

sudo: required

Start by requesting to run tests in a VM instead of in a container.

Request Docker service:

services:
  - docker

The VM must run the Docker daemon.

Add TRAVIS_COMMIT to Dockerfile (Optional)

  - docker --version
  - echo "ENV GIT_SHA ${TRAVIS_COMMIT}" >> Dockerfile

It's very useful to export the git SHA of HEAD as a Docker environment variable. This way you can always identify the code included, even if you have the .git directory in .dockerignore to reduce the size of the image.

The resulting Docker image also gets tagged with the same SHA for easier identification.

Build the image

  - docker pull ${DOCKER_REPOSITORY}:last_successful_build || true
  - docker pull ${DOCKER_REPOSITORY}:${TRAVIS_COMMIT} || true
  - docker build -t ${DOCKER_REPOSITORY}:${TRAVIS_COMMIT} --pull=true .

Instead of pip installing packages, override the install step to build the Docker image.

Start by pulling previously built images from the Docker Hub. Remember that Travis runs each job in an isolated VM, so there is no Docker cache. Pulling previously built images seeds the cache.

Travis' built-in cache functionality can also be used, but I find it more convenient to push to the Hub. Production will later pull from there, and if debugging is needed I can pull the same image locally too.

Each docker pull is followed by a || true which translates to "If the Docker Hub doesn't have this repository or tag it's OK, don't stop the build".

Finally, trigger a docker build. The --pull=true flag forces downloading the latest versions of the base images, i.e. the ones named in FROM instructions. For example, if an image is based on Debian, this flag forces Docker to download the latest version of the Debian image. Since the Docker cache has already been populated, this is not superfluous: if skipped, the new build could use an outdated base image which could have security vulnerabilities.

Run the tests

  - docker run -d --name mariadb -e MYSQL_ALLOW_EMPTY_PASSWORD=yes -e MYSQL_DATABASE=foo mariadb:10.0
  - docker run ${DOCKER_REPOSITORY}:${TRAVIS_COMMIT} flake8 foo
  - docker run --link mariadb:db -e CHECK_PORT=3306 -e CHECK_HOST=db giorgos/takis
  - docker run --env-file .env --link mariadb:db ${DOCKER_REPOSITORY}:${TRAVIS_COMMIT} coverage run ./ test

First start mariadb, which is needed for the Django tests to run. Fork it to the background with the -d flag. The --name flag makes linking with other containers easier.

Then run the flake8 linter. This is run after mariadb - although it doesn't depend on it - to allow some time for the database to download and initialize before it gets hit with tests.

Travis needs about 12 seconds to get MariaDB ready which is usually more than the time the linter runs. To wait for the database to become ready before running the tests, run Takis. Takis waits for the container named mariadb to open port 3306. I blogged in detail about Takis before.

Finally run the tests making sure that the database is linked using --link and that environment variables needed for the application to initialize are set using --env-file.

Upload built images to the Hub

deploy:
  - provider: script
    script: bin/
    on:
      branch: master
      repo: foo/bar


docker tag -f ${DOCKER_REPOSITORY}:${TRAVIS_COMMIT} ${DOCKER_REPOSITORY}:last_successful_build
docker push ${DOCKER_REPOSITORY}:last_successful_build

The deploy step is used to run a script to tag images, login to Docker Hub and finally push those tags. This step is run only on branch: master and not on pull requests.

Pull requests will not be able to push to Docker Hub anyway, because Travis does not expose encrypted environment variables to pull requests and therefore there will be no $DOCKER_PASSWORD. At the end of the day this is not a problem, because you don't want pull requests with arbitrary code to end up in your Docker image repository.

Set the environment variables

Set the environment variables needed to build, test and deploy in env section:

    # Docker
    - DOCKER_REPOSITORY=example/foo
    - DOCKER_USERNAME="example"
    # Django
    - DEBUG=False
    - DISABLE_SSL=True
    - SECRET_KEY=foo
    - DATABASE_URL=mysql://root@db/foo
    - SITE_URL=http://localhost:8000
    - CACHE_URL=dummy://

and save them to an .env file for docker run to read via --env-file:

  - env > .env

Variables with private data like DOCKER_PASSWORD can be added through Travis' web interface.

That's all!

Pull requests and merges to master are both tested against Docker images, and successful builds of master are pushed to the Hub and can be used directly in production.
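Stitched together, the pieces above give roughly the following .travis.yml. This is a sketch only: the repository name, the env values, and the bin/ deploy script path are placeholders, and the real-life example in the Snippets project is the authoritative reference.

```yaml
sudo: required

services:
  - docker

env:
  global:
    - DOCKER_REPOSITORY=example/foo   # placeholder repository

install:
  - echo "ENV GIT_SHA ${TRAVIS_COMMIT}" >> Dockerfile
  - docker pull ${DOCKER_REPOSITORY}:last_successful_build || true
  - docker build -t ${DOCKER_REPOSITORY}:${TRAVIS_COMMIT} --pull=true .

script:
  - env > .env
  - docker run --env-file .env ${DOCKER_REPOSITORY}:${TRAVIS_COMMIT} coverage run ./ test

deploy:
  - provider: script
    script: bin/   # placeholder: tags, logs in, and pushes to the Hub
    on:
      branch: master
```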

You can find a real life example of .travis.yml in the Snippets project.

Mozilla Addons BlogMulti-process Firefox and AMO

In Firefox 48, which reaches the release channel on August 1, 2016, multi-process support (code name “Electrolysis”, or “e10s”) will begin rolling out to Firefox users without any add-ons installed.

In preparation for the wider roll-out to users with add-ons installed, we have implemented compatibility checks on all add-ons uploaded to (AMO).

There are currently three possible states:

  1. The add-on is a WebExtension and hence compatible.
  2. The add-on has marked itself in the install.rdf as multi-process compatible.
  3. The add-on has not marked itself compatible, so the state is currently unknown.
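For the second state, the flag lives in the add-on's install.rdf manifest; a minimal sketch (surrounding manifest entries elided) might look like:

```xml
<RDF xmlns="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
     xmlns:em="http://www.mozilla.org/2004/em-rdf#">
  <Description about="urn:mozilla:install-manifest">
    <!-- Declares the add-on compatible with multi-process Firefox (e10s) -->
    <em:multiprocessCompatible>true</em:multiprocessCompatible>
    <!-- ...id, version, and the rest of the manifest as usual... -->
  </Description>
</RDF>
```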

If a new add-on or a new version of an old add-on is not multi-process compatible, a warning will be shown in the validation step. Here is an example:


In future releases, this warning might become more severe as the feature nears full deployment.

For add-ons that fall into the third category, we might implement a more detailed check in a future release, to provide developers with more insight into the “unknown” state.

After an add-on is uploaded, the state is shown in the Developer Hub. Here is an example:


Once you verify that your add-on is compatible, be sure to mark it as such and upload a new version to AMO. There is documentation on MDN on how to test and mark your add-on.

If your add-on is not compatible, please head to our resource center where you will find information on how to update it and where to get help. We’re here to support you!

John O'DuinnRelEng Conf 2016: Call for papers

(Suddenly, it's June! How did that happen? Where did the year go already?!? Despite my recent public silence, there's been a lot of work going on behind the scenes. Let me catch up on some overdue blog posts – starting with RelEngConf 2016!)

We’ve got a venue and a date for this conference sorted out, so now it’s time to start gathering presentations and speakers and figuring out all the other “little details” that go into making a great, memorable conference. This means two things:

1) RelEngCon 2016 is now accepting proposals for talks/sessions. If you have a good industry-related or academic-focused topic in the area of Release Engineering, please have a look at the Release Engineering conference guidelines, and submit your proposal before the deadline of 01-jul-2016.

2) Like all previous RelEng Conferences, the mixture of attendees and speakers, from academia and battle-hardened industry, makes for some riveting topics and side discussions. Come talk with others of your tribe, swap tips-and-gotchas with others who do understand what you are talking about and enjoy brainstorming with people with very different perspectives.

For further details about the conference, or about submitting proposals, see the conference guidelines above. If you build software delivery pipelines for your company, or if you work in a software company that has software delivery needs, I recommend you follow @relengcon, block off November 18th, 2016 on your calendar, and book your travel to Seattle now. It will be well worth your time.

I’ll be there – and look forward to seeing you there!

Michael ComellaEnhancing Articles Through Hyperlinks

When reading an article, I often run into a problem: there are links I want to open but now is not a good time to open them. Why is now not a good time?

  • If I open them and read them now, I’ll lose the context of the article I’m currently reading.
  • If I open them in the background now and come back to them later, I won’t remember the context that this page was opened from and may not remember why it was relevant to the original article.

I prototyped a solution – at the end of an article, I append all of the links that appear in the article, each with some additional context. For example, from my Android thread annotations post:

links with context at the end of an article

To remember why I wanted to open the link, I provide the sentence the link appeared in.

To see if the page is worth reading, I access the page the link points to and include some of its data: the title, the host name, and a snippet.

There is more information we can add here as well, e.g. a “trending” rating (a fake implementation is pictured), a favicon, or a descriptive photo.
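The extraction step could be sketched as follows (a toy illustration, not the actual prototype code; the class name and the crude 60-character "context window" heuristic are mine):

```python
from html.parser import HTMLParser

class LinkContextParser(HTMLParser):
    """Collect each hyperlink with a crude window of surrounding text."""
    def __init__(self):
        super().__init__()
        self.links = []      # (href, context) pairs
        self._text = []      # running document text
        self._open_href = None
        self._open_start = 0

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._open_href = dict(attrs).get("href")
            self._open_start = len("".join(self._text))

    def handle_data(self, data):
        self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._open_href:
            full = "".join(self._text)
            start = max(0, self._open_start - 60)  # ~60 chars of leading context
            self.links.append((self._open_href, full[start:].strip()))
            self._open_href = None

parser = LinkContextParser()
parser.feed('<p>See <a href="https://example.com/threads">my earlier post</a>'
            ' for details.</p>')
print(parser.links)  # → [('https://example.com/threads', 'See my earlier post')]
```

A real implementation would also fetch each target page for its title, host name, and snippet, as described above.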

And vice versa

You can also provide the original article’s context on a new page after you click a link:

context from where this page was opened

This context can be added for more than just articles.

Shout-out to Chenxia & Stefan for independently discovering this idea and a context graph brainstorming group for further fleshing this out.

Note: this is just a mock-up – I don’t have a prototype for this.


The web is a graph. In a graph, we can access new nodes, and their content, by traversing backwards or forwards. Can we take advantage of this relationship?

Alan Kay once said, people largely use computers “as a convenient way of getting at old media…” This is prevalent on the web today – many web pages fill fullscreen browser windows that allow you to read the news, create a calendar, or take notes, much like we can with paper media. How can we better take advantage of the dynamic nature of computers? Of the web?

Can we mix and match live content from different pages? Can we find clever ways to traverse & access the web graph?

This blog & prototype provide two simple examples of traversing the graph and being (ever so slightly) more dynamic: 1) showing the context of where the user is going to go and 2) showing the context of where they came from. Wikipedia (with a certain login-needed feature enabled) has another interesting example when mousing over a link:

wikipedia link mouse-over shows next page pop-up

They provide a summary and an image of the page the hyperlink will open. Can we use this technique, and others, to provide more context for hyperlinks on every page on the web?

To summarize, perhaps we can solve user problems by considering:

  • The web as a graph – accessing content backwards & forwards from the current page
  • Computers & the web as a truly dynamic medium, with different capabilities than their print predecessors

For the source and a chance to run it for yourself, check out the repository on github.

Mozilla Reps CommunityRepsNext – Introduction Video

At the 2015 Reps Leadership Meeting in Paris it became clear that the program was ready for “a version 2”. As the Reps Council had recently become a formal part of Mozilla Leadership, it was time to bring the program to the next level. Literally building on that idea, the RepsNext initiative was born.

Since then several working groups were formed to condense reflections on the past and visions for the future into new program proposals.

At our last Council meetup from 14-17 April 2016 in Berlin we recorded interviews with Council and Peers explaining RepsNext and summarizing our current status.

You can find a full transcript at the end of this blog post. Thanks to Yofie for editing the video!

Please share this video broadly, creating awareness for the exciting future of the Reps program.


Getting involved

We will use the London All Hands from June 12th to June 17th to work on open questions around the working groups. We will share our outcomes and open up for discussions after that. For now, there are several discussions to jump in and shape the future of the Reps program:

Additionally, you can help out and track our Council efforts on the Reps GitHub repository.


Moving beyond RepsNext

It took us a little more than a year to come up with this “new release” of the Reps program. For the future we plan to take smaller steps improving the program beyond RepsNext. So expect experiments and tweaks arriving in smaller bits and with a higher clockspeed (think Firefox Rapid Release Model).


Video transcript

Question: What is RepsNext?

[Arturo] I think we have reached a point of maturity in the program that we need to reinvent ourselves to be adaptors of Mozilla’s will and to the modern times.

Question: How will the Reps program change?

[Pierros] What we’re really interested in and picking up as a highlight are the changes on the governance level. There are a couple of things that are coming. The Council has done really fantastic work on bringing up and framing really interesting conversations around what RepsNext is, and PeersNext as a subset of that, and how do we change and adapt the leadership structure of Mozilla Reps to be more representative of the program that we would like to see.

[Brian] The program will still remain a grassroots program, run by volunteers for volunteers.

[Henrik] We’ve been working heavily on it in various working groups over the last year, developed a very clear understanding of the areas that need work and actually got a lot of stuff done.

[Konstantina] I think that the program has a great future ahead of it. We’re moving to a leadership body where our role is gonna be to empower the rest of the volunteer community and we’re gonna try to minimize the bureaucracy that we already have. So the Reps are gonna have the same resources that they had but they are gonna have tracks where they can evolve their leadership skills and with that empower the volunteer communities. Reps is gonna be the leadership body for the volunteer community and I think that’s great. We’re not only about events but we’re something more and we’re something the rest of Mozilla is gonna rely on when we’re talking about volunteers.

Question: What’s important about this change?

[Michael] We will have the Participation team’s support to have meetings together, to figure out the strategy together.

[Konstantina] We are bringing the tracks where we specialize the Reps based on their interest.

Question: Why do we need changes?

[Christos] There is the need of that. There is the need to reconsider the mentoring process, reconsidering budgets, interest groups inside of Reps. There is a need to evolve Reps and be more impactful in our regions.

Question: Is this important for Mozilla?

[Arturo] We’re going to have mentors and Reps specialized in their different contribution areas.

Question: How is RepsNext helping local communities?

[Guillermo] Our idea, what we’re planning with the changes on RepsNext is to bring more people to the program. More people is more diversity, so we’re trying to find new people, more people with new interests.

Question: What excites you about RepsNext?

[Faisal] We have resources for different types of community, for example if somebody needs hardware or somebody training material, a variety of things not just what we used to have. So it will open up more ways on how we can support Reps for more impactful events and making events more productive.

QMOFirefox 48 Beta 3 Testday, June 24th

Hello Mozillians,

We are happy to announce that next Friday, June 24th, we are organizing Firefox 48 Beta 3 Testday. We’ll be focusing our testing on the New Awesomebar feature, bug verifications and bug triage. Check out the detailed instructions via this etherpad.

No previous testing experience is required, so feel free to join us on #qa IRC channel where our moderators will offer you guidance and answer your questions.

Join us and help us make Firefox better! See you on Friday!

Tanvi VyasContextual Identities on the Web

The Containers Feature in Firefox Nightly enables users to login to multiple accounts on the same site simultaneously and gives users the ability to segregate site data for improved privacy and security.

We all portray different characteristics of ourselves in different situations. The way I speak with my son is much different than the way I communicate with my coworkers. The things I tell my friends are different than what I tell my parents. I’m much more guarded when withdrawing money from the bank than I am when shopping at the grocery store. I have the ability to use multiple identities in multiple contexts. But when I use the web, I can’t do that very well. There is no easy way to segregate my identities such that my browsing behavior while shopping for toddler clothes doesn’t cross over to my browsing behavior while working. The Containers feature I’m about to describe attempts to solve this problem: empowering Firefox to help segregate my online identities in the same way I can segregate my real life identities.

With Containers, users can open tabs in multiple different contexts – Personal, Work, Banking, and Shopping.  Each context has a fully segregated cookie jar, meaning that the cookies, indexedDB, localStorage, and cache that sites have access to in the Work Container are completely separate from what they have access to in the Personal Container. That means that the user can log into their work Twitter account in their Work Container and also log into their personal Twitter account in their Personal Container, and use both accounts in side-by-side tabs simultaneously. The user won’t need to use multiple browsers, an account switcher[1], or constantly log in and out to switch between accounts on the same domain.

User logged into work twitter account in Work Container and personal twitter account in Personal Container, simultaneously in side-by-side tabs

Simultaneously logged into Personal Twitter and Work Twitter accounts.

Note that the inability to efficiently use “Contextual Identities” on the web has been discussed for many years[2]. The hard part about this problem is figuring out the right User Experience and answering questions like:

  • How will users know what context they are operating in?
  • What if the user makes a mistake and uses the wrong context; can the user recover?
  • Can the browser assist by automatically assigning websites to Containers so that users don’t have to manage their identities by themselves?
  • What heuristics would the browser use for such assignments?

We don’t have the answers to all of these questions yet, but hope to start uncovering some of them with user research and feedback. The Containers implementation in Nightly Firefox is a basic implementation that allows the user to manage identities with a minimal user interface.

We hope to gather feedback on this basic experience to see how we can iterate on the design to make it more convenient, elegant, and usable for our users. Try it out and share your feedback by filling out this quick form or writing to


How do I use Containers?

You can start using Containers in Nightly Firefox 50 by opening a New Container Tab. Go to the File Menu and select the “New Container Tab” option. (Note that on Windows you need to hit the alt key to access the File Menu.) Choose between Personal, Work, Shopping, and Banking.

Use the File Menu to access New Container Tab, then choose between Personal, Work, Banking, and Shopping.

Notice that the tab is decorated to help you remember which context you are browsing in. The right side of the url bar specifies the name of the Container you are in along with an icon. The very top of the tab has a slight border that uses the same color as the icon and Container name. The border lets you know what container a tab is open in, even when it is not the active tab.

User interface for the 4 different types of Container tabs

You can open multiple tabs in a specific container at the same time. You can also open multiple tabs in different containers at the same time:

User Interface when multiple container tabs are open side-by-side

2 Work Containers tabs, 2 Shopping Container tabs, 1 Banking Container tab

Your regular browsing context (your “default container”) will not have any tab decoration and will be in a normal tab. See the next section to learn more about the “default container”.

Containers are also accessible via the hamburger menu. Customize your hamburger menu by adding in the File Cabinet icon. From there you can select a container tab to open. We are working on adding more access points for container tabs; particularly on long-press of the plus button.

User Interface for Containers Option in Hamburger Menu

How does this change affect normal tabs and the site data already stored in my browser?

The containers feature doesn’t change the normal browsing experience you get when using New Tab or New Window. The normal tab will continue to access the site data the browser has already stored in the past. The normal tab’s user interface will not change. When browsing in the normal context, any site data read or written will be put in what we call the “default container”.

If you use the containers feature, the different container tabs will not have access to site data in the default container. And when using a normal tab, the tab won’t have access to site data that was stored for a different container tab. You can use normal tabs alongside other container tabs:

User Interface when 2 normal tabs are open, next to 2 Work Container tabs and 1 Banking Container tab

2 normal tabs (“Default Container tabs”), 2 Work Container tabs, 1 Banking Container tab

What browser data is segregated by containers?

In principle, any data that a site has read or write access to should be segregated.

Assume a user logs into a site in their Personal Container, and then loads the same site in their Work Container. Since these loads are in different containers, there should be no way for the server to tie the two loads together. Hence, each container has its own separate cookies, indexedDB, localStorage, and cache.

Assume the user then opens a Shopping Container and opens the History menu option to look for a recently visited site. A site visited in a different container will still appear in the user’s history, even though they did not visit it in the Shopping Container. This is because the site doesn’t have access to the user’s locally stored History. We only segregate data that a site has access to, not data that the user has access to. The Containers feature was designed for a single user who has the need to portray themselves to the web in different ways depending on the context in which they are operating.

By separating the data that a site has access to, rather than the data that a user has access to, Containers is able to offer a better experience than some of the alternatives users may be currently using to manage their identities.

Is this feature going to be in Firefox Release?

This is an experimental feature in Nightly only. We would like to collect feedback and iterate on the design before the containers concept goes beyond Nightly. Moreover, we would like to get this in the hands of Nightly users so they can help validate the OriginAttribute architecture we have implemented for this feature and other features. We have also planned a Test Pilot study for the Fall.

To be clear, this means that when Nightly 50 moves to Aurora/DevEdition 50, containers will not be enabled.

How do users manage different identities on the web today?

What do users do if they have two twitter accounts and want to login to them at the same time? Currently, users may login to one twitter account using their main browser, and another using a secondary browser. This is not ideal, since then the user is running two browsers in order to accomplish their tasks.

Alternatively, users may open a Private Browsing Window to login to the second twitter account. The problem with this is that all data associated with Private Browsing Windows is deleted when they are closed. The next time the user wants to use their secondary twitter account, they have to login again. Moreover, if the account requires two factor authentication, the user will always be asked for the second factor token, since the browser shouldn’t remember that they had logged in before when using Private Browsing.

Users may also use a second browser if they are worried about tracking. They may use a secondary browser for Shopping, so that the trackers that are set while Shopping can’t be associated with the tasks on their primary browser.

Can I disable containers on Nightly?

Yes, by following these steps:

  1. Open a new window or tab in Firefox.
  2. Type about:config and press enter.
  3. You will get to a page that asks you to promise to be careful. Promise you will be.
  4. Set the privacy.userContext.enabled preference to false.
Can I enable containers on a version of Firefox that is not Nightly?

Although the privacy.userContext.enabled preference described above may be present in other versions of Firefox, the feature may be incomplete, outdated, or buggy. We currently only recommend enabling the feature in Nightly, where you’ll have access to the newest and most complete version.

How is Firefox able to Compartmentalize Containers?

An origin is defined as a combination of a scheme, host, and port. Browsers make numerous security decisions based on the origin of a resource using the same-origin-policy. Various features require additional keys to be added to the origin combination. Examples include the Tor Browser’s work on First Party Isolation, Private Browsing Mode, the SubOrigin Proposal, and Containers.

Hence, Gecko has added additional attributes to the origin called OriginAttributes. When trying to determine if two origins are same-origin, Gecko will not only check if they have matching schemes, hosts, and ports, but now also check if all their OriginAttributes match.

Containers adds an OriginAttribute called userContextId. Each container has a unique userContextId. Stored site data (i.e. cookies) is now keyed on scheme, host, port, and userContextId. If a user has cookies stored with the userContextId of the Shopping Container, those cookies will not be accessible to the same site in the Banking Container.
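Conceptually, the extended same-origin check behaves like the sketch below (an illustration in Python, not Gecko's actual implementation; the class and the example userContextId values are made up):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class OriginKey:
    scheme: str
    host: str
    port: int
    user_context_id: int = 0  # OriginAttribute; 0 = default container

def same_origin(a: OriginKey, b: OriginKey) -> bool:
    # Two origins match only if scheme, host, and port AND all
    # OriginAttributes (here just userContextId) are equal.
    return a == b

shopping = OriginKey("https", "example.com", 443, user_context_id=4)
banking = OriginKey("https", "example.com", 443, user_context_id=3)
print(same_origin(shopping, banking))  # → False: the cookie jars stay separate
```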

Note that one of the motivations in enabling this feature in Nightly is to help ensure that we iron out any bugs that may exist in our OriginAttribute implementation before features that depend on it are rolled out to users.

How does Containers improve user privacy and security?

The Containers feature offers users some control over the techniques websites can use to track them. Tracking cookies set while shopping in the Shopping Container won’t be accessible to sites in the Personal Container. So although a tracker can easily track a user within their Shopping Container, they would have to use device fingerprinting techniques to link that tracking information with tracking information from the user’s Personal Container.

Containers also offers the user a way to compartmentalize sensitive information. For example, users could be careful to only use their Banking Container to log into banking sites, protecting themselves from potential XSS and CSRF attacks on those sites. Assume a user visits a malicious site in a non-banking container. The malicious site may try to use a vulnerability in a banking site to obtain the user’s financial data, but it wouldn’t be able to, since the user’s bank’s authentication cookies are shielded off in a separate container that the malicious site can’t touch.

Is there any chance that a tracker will be able to track me across containers?

There are some caveats to data separation with Containers.

The first is that all requests by your browser still have the same IP address, user agent, OS, etc. Hence, fingerprinting is still a concern. Containers are meant to help you separate your identities and reduce naive tracking by things like cookies. But more sophisticated trackers can still use your fingerprint to identify your device. The Containers feature is not meant to replace the Tor Browser, which tries to minimize your fingerprint as much as possible, sometimes at the expense of site functionality. With Containers, we attempt to improve privacy while still minimizing breakage.

There are also some bugs still open related to OriginAttribute separation. Namely, the following areas are not fully separated in Containers yet:

  • Some favicon requests use the default container cookies even when you are in a different container – Bug 1277803
  • The about:newtab page makes network requests to recently visited sites using the default container’s cookies even when you are in a different container – Bug 1279568
  • Awesome Bar search requests use the default container cookies even when you are in a different container – Bug 1244340
  • The Forget About Site button doesn’t forget about site data from Container tabs – Bug 1238183
  • The image cache is shared across all containers – Bug 1270680

We are working on fixing these last remaining bugs and hope to do so during this Nightly 50 cycle.

How can I provide feedback?

I encourage you to try out the feature and provide your feedback via:

Thank you

Thanks to everyone who has worked to make this feature a reality! Special call outs to the containers team:

Andrea Marchesini
Kamil Jozwiak
David Huseby
Bram Pitoyo
Yoshi Huang
Tim Huang
Jonathan Hao
Jonathan Kingston
Steven Englehardt
Ethan Tseng
Paul Theriault


[1] Some websites provide account switchers in their products. For websites that don’t support switching, users may install addons to help them switch between accounts.
[3] Containers Slide Deck

David BurnsThe final major player is set to ship WebDriver

It was nearly a year ago that Microsoft shipped their first implementation of WebDriver. I remember being so excited as I wrote a blog post about it.

This week, Apple have said that they are going to ship a version of WebDriver that will allow people to drive Safari 10 on macOS. According to the release notes, they have created a safaridriver that will ship with the OS.

If you have ever wondered why this is important, have a read of my last blog post. In Firefox 47, Selenium caused Firefox to crash on startup. The Mozilla implementation of WebDriver, called Marionette and GeckoDriver, would never have hit this problem, because test failures and crashes like this would lead to patches being reverted and never shipped to end users.

Many congratulations to the Apple team for making this happen!

Yunier José Sosa VázquezMeet the featured add-ons for June

As we do every month, we bring you first-hand the add-ons recommended for Firefox users.

See the post on the Add-ons blog »

Anjana VakilI want to mock with you

This post brought to you from Mozilla’s London All Hands meeting - cheers!

When writing Python unit tests, sometimes you want to just test one specific aspect of a piece of code that does multiple things.

For example, maybe you’re wondering:

  • Does object X get created here?
  • Does method X get called here?
  • Assuming method X returns Y, does the right thing happen after that?

Finding the answers to such questions is super simple if you use mock: a library which “allows you to replace parts of your system under test with mock objects and make assertions about how they have been used.” Since Python 3.3 it’s available simply as unittest.mock, but if you’re using an earlier Python you can get it from PyPI with pip install mock.

So, what are mocks? How do you use them?

Well, in short I could tell you that a Mock is a sort of magical object that’s intended to be a doppelgänger for some object in your code that you want to test. Mocks have special attributes and methods you can use to find out how your test is using the object you’re mocking. For example, you can use Mock.called and .call_count to find out if and how many times a method has been called. You can also manipulate Mocks to simulate functionality that you’re not directly testing, but is necessary for the code you’re testing. For example, you can set Mock.return_value to pretend that a function gave you some particular output, and make sure that the right thing happens in your program.
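For instance (a toy sketch of mine, not from the mock docs; check_service and the URL are made up for illustration):

```python
from unittest.mock import Mock  # pre-3.3: from mock import Mock

# Stand-in for an expensive network call we don't want to really make.
fetch = Mock(return_value={"status": "ok"})

def check_service(fetcher):
    return fetcher("https://example.com/health")["status"] == "ok"

print(check_service(fetch))  # → True: the mocked return value was used
print(fetch.called)          # → True
print(fetch.call_count)      # → 1
fetch.assert_called_with("https://example.com/health")
```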

But honestly, I don’t think I could give a better or more succinct overview of mocks than the Quick Guide, so for a real intro you should go read that. While you’re doing that, I’m going to watch this fantastic Michael Jackson video:

Oh, you’re back? Hi! So, now that you have a basic idea of what makes Mocks super cool, let me share with you some of the tips/tricks/trials/tribulations I discovered when starting to use them.

Patches and namespaces

tl;dr: Learn where to patch if you don’t want to be sad!

When you import a helper module into a module you’re testing, the tested module gets its own namespace for the helper module. So if you want to mock a class from the helper module, you need to mock it within the tested module’s namespace.

For example, let’s say I have a Super Useful helper module, which defines a class HelperClass that is So Very Helpful:


class HelperClass():
    def __init__(self):
        self.name = "helper"
    def help(self):
        helpful = True
        return helpful

And in the module I want to test, tested, I instantiate the Incredibly Helpful HelperClass, which I imported from


from helper import HelperClass

def fn():
    h = HelperClass() # using tested.HelperClass

Now, let’s say that it is Incredibly Important that I make sure that a HelperClass object is actually getting created in tested, i.e. that HelperClass() is being called. I can write a test module that patches HelperClass, and check the resulting Mock object’s called property. But I have to be careful that I patch the right HelperClass! Consider


import tested

from mock import patch

# This is not what you want:
@patch('helper.HelperClass')
def test_helper_wrong(mock_HelperClass):
    tested.fn()
    assert mock_HelperClass.called # Fails! I mocked the wrong class, am sad :(

# This is what you want:
@patch('tested.HelperClass')
def test_helper_right(mock_HelperClass):
    tested.fn()
    assert mock_HelperClass.called # Passes! I am not sad :)

OK great! If I patch tested.HelperClass, I get what I want.

But what if the module I want to test uses import helper and helper.HelperClass(), instead of from helper import HelperClass and HelperClass()? As in


import helper

def fn():
    h = helper.HelperClass()

In this case, in my test for tested2 I need to patch the class with patch('helper.HelperClass') instead of patch('tested.HelperClass'). Consider


import tested2
from mock import patch

# This time, this IS what I want:
@patch('helper.HelperClass')
def test_helper_2_right(mock_HelperClass):
    tested2.fn()
    assert mock_HelperClass.called # Passes! I am not sad :)

# And this is NOT what I want!
# Mock will complain: "module 'tested2' does not have the attribute 'HelperClass'"
@patch('tested2.HelperClass')
def test_helper_2_wrong(mock_HelperClass):
    tested2.fn()
    assert mock_HelperClass.called


In short: be careful of which namespace you’re patching in. If you patch whatever object you’re testing in the wrong namespace, the object that’s created will be the real object, not the mocked version. And that will make you confused and sad.

I was confused and sad when I was trying to mock the TestManifest.active_tests() function to test BaseMarionetteTestRunner.add_test, and I was trying to mock it in the place it was defined, i.e. patch('manifestparser.manifestparser.TestManifest.active_tests').

Instead, I had to patch TestManifest within the runner.base module, i.e. the place where it was actually being called by the add_test function, i.e. patch('marionette.runner.base.TestManifest.active_tests').

So don’t be confused or sad, mock the thing where it is used, not where it was defined!

Pretending to read files with mock_open

One thing I find particularly annoying is writing tests for modules that have to interact with files. Well, I guess I could, like, write code in my tests that creates dummy files and then deletes them, or (even worse) just put some dummy files next to my test module for it to use. But wouldn’t it be better if I could just skip all that and pretend the files exist, and have whatever content I need them to have?

It sure would! And that’s exactly the type of thing mock is really helpful with. In fact, there’s even a helper called mock_open that makes it super simple to pretend to read a file. All you have to do is patch the builtin open function, and pass in mock_open(read_data="my data") to the patch to make the open in the code you’re testing only pretend to open a file with that content, instead of actually doing it.

To see it in action, you can take a look at a (not necessarily great) little test I wrote that pretends to open a file and read some data from it:

def test_nonJSON_file_throws_error(runner):
    with patch('os.path.exists') as exists:
        exists.return_value = True
        with patch('__builtin__.open', mock_open(read_data='[not {valid JSON]')):
            with pytest.raises(Exception) as json_exc:
                runner._load_testvars() # This is the code I want to test, specifically to be sure it throws an exception
    assert 'not properly formatted' in json_exc.value.message

Gotchya: Mocking and debugging at the same time

See that patch('os.path.exists') in the test I just mentioned? Yeah, that’s probably not a great idea. At least, I found it problematic.

I was having some difficulty with a similar test, in which I was also patching os.path.exists to fake a file (though that wasn’t the part I was having problems with), so I decided to set a breakpoint with pytest.set_trace() to drop into the Python debugger and try to understand the problem. The debugger I use is pdb++, which just adds some helpful little features to the default pdb, like colors and sticky mode.

So there I am, merrily debugging away at my (Pdb++) prompt. But as soon as I entered the patch('os.path.exists') context, I started getting weird behavior in the debugger console: complaints about some ~/ file and certain commands not working properly.

It turns out that at least one module pdb++ was using (e.g. fancycompleter) was getting confused about file(s) it needs to function, because of checks for os.path.exists that were now all messed up thanks to my ill-advised patch. This had me scratching my head for longer than I’d like to admit.

What I still don’t understand (explanations welcome!) is why I still got this weird behavior when I tried to change the test to patch 'mymodule.os.path.exists' (where mymodule contains import os) instead of just 'os.path.exists'. Based on what we saw about namespaces, I figured this would restrict the mock to only mymodule, so that pdb++ and related modules would be safe - but it didn’t seem to have any effect whatsoever. But I’ll have to save that mystery for another day (and another post).

Still, lesson learned: if you’re patching a commonly used function, like, say, os.path.exists, don’t forget that once you’re inside that mocked context, you no longer have access to the real function at all! So keep an eye out, and mock responsibly!
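Here's a minimal sketch of why this bites (the helper function is hypothetical, standing in for code you don't control, like a debugger module):

```python
import os.path
from unittest.mock import patch

def unrelated_code():
    # Hypothetical stand-in for third-party code (a debugger, a
    # tab-completion library) that quietly relies on os.path.exists.
    return os.path.exists('/no/such/file/anywhere')

before = unrelated_code()              # the real check runs: False

with patch('os.path.exists', return_value=True):
    # Inside the patch, EVERY caller that looks the function up via the
    # os.path module sees the mock -- including unrelated_code(), which
    # has nothing to do with your test.
    inside = unrelated_code()          # True, even though the path is fake

after = unrelated_code()               # real function restored: False
```

Anything that happens to run inside that `with` block — including an interactive debugger you drop into — gets the mocked version, which is exactly how pdb++ got confused above.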

Mock the night away

Those are just a few of the things I’ve learned in my first few weeks of mocking. If you need some bedtime reading, check out these resources that I found helpful:

I’m sure mock has all kinds of secrets, magic, and superpowers I’ve yet to discover, but that gives me something to look forward to! If you have mock-foo tips to share, just give me a shout on Twitter!

Chris IliasWho uses voice control?

Voice control (Siri, Google Now, Amazon Echo, etc.) is not a very useful feature to me, and I wonder if I’m in the minority.

Why it is not useful:

  • I live with other people.
    • Sometimes one of the people I live with or myself may be sleeping. If someone speaks out loud to the TV or phone, that might wake the other up.
    • Even when everyone is awake, that doesn’t mean we are together. It annoys me when someone talks to the TV while watching basketball. I don’t want to find out how annoying it would be to listen to someone in another room tell the TV or their phone what to do.
  • I work with other people.
    • If I’m having lunch, and a co-worker wants to look something up on his/her phone, I don’t want to hear them speak their queries out loud. I actually have coworkers that use their phones as boomboxes to listen to music while eating lunch, as if no-one else can hear it, or everyone has the same taste in music, or everyone wants to listen to music at all during lunch.

The only times I use Siri are:

  • When I am in the car.
  • When I am speaking with others in a social setting, like a pub, and we want to look something up pertaining to the conversation.
  • When I’m alone

When I saw Apple introduce tvOS, the dependence on Siri turned me off from upgrading my Apple TV.

Am I in the minority here?

I get the feeling I’m not. I cannot recall anyone I know using Siri for anything other than entertainment with friends. Controlling devices with your voice in public must be Larry David’s worst nightmare.

Jen Kaganday 18: the case of the missing vine api and the add-on sdk

i’m trying to add vine support to min-vid and realizing that i’m still having a hard time wrapping my head around the min-vid program structure.

what does adding vine support even mean? it means that when you’re on, i want you to be able to right click on any video on the website, send it to the corner of your browser, and watch vines in the corner while you continue your browsing.

i’m running into a few obstacles. one obstacle is that i can’t find an official vine api. what’s an api? it stands for “application programming interface” (maybe? i think? no, i’m not googling) and i don’t know the official definition, but my unofficial definition is that the api is documentation i need from vine about how to access and manipulate content they have on their website. i need to be able to know the pattern for structuring video URLs. i need to know what functions to call in order to autoplay, loop, pause, and mute their videos. since this doesn’t exist in an official, well-documented way, i made a gist of their embed.js file, which i think/hope maybe controls their embedded videos, and which i want to eventually make sense of by adding inline comments.

another obstacle is that mozilla’s add-on sdk is really weirdly structured. i wrote about this earlier and am still sketching it out. here’s what i’ve gathered so far:

  • the page you navigate to in your browser is a page. with its own DOM. this is the page DOM.
  • the firefox add-on is made of content scripts (CS) and page scripts (PS).
  • the CS talks to the page DOM and the PS talks to the page DOM, but the CS and the PS don’t talk to each other.
  • with min-vid, the CS is index.js. this controls the context menu that comes up when you right click, and it’s the thing that tells the panel to show itself in the corner.
  • the two PS’s in min-vid are default.html and controls.js. the default.html PS loads the stuff inside the panel. the controls.js PS lets you control the stuff that’s in the panel.

so far, i can get the vine video to show up in the panel, but only after i’ve sent a youtube video to the panel. i can’t get the vine video to show up on its own, and i’m not sure why. this makes me sad. here is a sketch:


Doug BelshawWhy we need 'view source' for digital literacy frameworks

Apologies if this post comes across as a little jaded, but as someone who wrote their doctoral thesis on this topic, I had to stifle a yawn when I saw that the World Economic Forum have defined 8 digital skills we must teach our children.

World Economic Forum - digital skills

In a move so unsurprising that it’s beyond pastiche, they’ve also coined a new term:

Digital intelligence or “DQ” is the set of social, emotional and cognitive abilities that enable individuals to face the challenges and adapt to the demands of digital life.

I don’t mean to demean what is obviously thoughtful and important work, but I do wonder how (and by whom!) this was put together. They’ve got an online platform which helps develop the skills they’ve identified as important, but it’s difficult to fathom why some things were included and others left out.

An audit-trail of decision-making is important, as it reveals both the explicit and implicit biases of those involved in the work, as well as lazy shortcuts they may have taken. I attempted to do this in my work as lead of Mozilla’s Web Literacy Map project through the use of a wiki, but even that could have been clearer.

What we need is the equivalent of ‘view source’ for digital literacy frameworks. Specifically, I’m interested in answers to the following 10 questions:

  1. Who worked on this?
  2. What’s the goal of the organisation(s) behind this?
  3. How long did you spend researching this area?
  4. What are you trying to achieve through the creation of this framework?
  5. Why do you need to invent a new term? Why do very similar (and established) words and phrases not work well in this context?
  6. How long is this project going to be supported for?
  7. Is your digital literacy framework versioned?
  8. If you’ve included skills, literacies, and habits of mind that aren’t obviously ‘digital’, why is that?
  9. What were the tough decisions that you made? Why did you come down on the side you did?
  10. What further work do you need to do to improve this framework?

I’d be interested in your thoughts and feedback around this post. Have you seen a digital literacy framework that does this well? What other questions would you add?

Note: I haven’t dived into the visual representation of digital literacy frameworks. That’s a whole other can of worms…

Get in touch! I’m @dajbelshaw and you can email me:

Mozilla Open Policy & Advocacy BlogA Step Forward for Net Neutrality in the U.S.

We’re thrilled to see the D.C. Circuit Court upholding the FCC’s historic net neutrality order, and the agency’s authority to continue to protect Internet users and businesses from throttling and blocking. Protecting openness and innovation is at the core of Mozilla’s mission. Net neutrality supports a level playing field, critical to ensuring a healthy, innovative, and open Web.

Leading up to this ruling Mozilla filed a joint amicus brief with CCIA supporting the order, and engaged extensively in the FCC proceedings. We filed a written petition, provided formal comments along the way, and engaged our community with a petition to Congress. Mozilla also organized global teach-ins and a day of action, and co-authored a letter to the President.

We’re glad to see this development and we remain steadfast in our position that net neutrality is a critical factor to ensuring the Internet is open and accessible. Mozilla is committed to continuing to advocate for net neutrality principles around the world.

Daniel StenbergNo websockets over HTTP/2

There is no websockets for HTTP/2.

By this, I mean that there’s no way to negotiate or upgrade a connection to websockets over HTTP/2 like there is for HTTP/1.1 as expressed by RFC 6455. That spec details how a client can use Upgrade: in a HTTP/1.1 request to switch that connection into a websockets connection.

Note that websockets is not part of the HTTP/1 spec, it just uses a HTTP/1 protocol detail to switch an HTTP connection into a websockets connection. Websockets over HTTP/2 would similarly not be a part of the HTTP/2 specification but would be separate.

(As a side-note, that Upgrade: mechanism is the same mechanism a HTTP/1.1 connection can get upgraded to HTTP/2 if the server supports it – when not using HTTPS.)



There was once a draft submitted that describes how websockets over HTTP/2 could’ve been done. It didn’t get any particular interest in the IETF HTTP working group back then and as far as I’ve seen, there has been very little general interest in any group to pick up this dropped ball and continue running. It just didn’t go any further.

This is important: the lack of websockets over HTTP/2 is because nobody has produced a spec (and implementations) to do websockets over HTTP/2. Those things don’t happen by themselves, they actually require a bunch of people and implementers to believe in the cause and work for it.

Websockets over HTTP/2 could of course have the benefit that it would only be one stream over the connection that could serve regular non-websockets traffic at the same time in many other streams, while websockets upgraded on a HTTP/1 connection uses the entire connection exclusively.


So what do users do instead of using websockets over HTTP/2? Well, there are several options. You probably either stick to HTTP/2, upgrade from HTTP/1, use Web push or go the WebRTC route!

If you really need to stick to websockets, then you simply have to upgrade to that from a HTTP/1 connection – just like before. Most people I’ve talked to that are stuck really hard on using websockets are app developers that basically only use a single connection anyway so doing that HTTP/1 or HTTP/2 makes no meaningful difference.

Sticking to HTTP/2 pretty much allows you to go back and use the long-polling tricks of the past before websockets was created. They were once rather bad since they would waste a connection and be error-prone since you’d have a connection that would sit idle most of the time. Doing this over HTTP/2 is much less of a problem since it’ll just be a single stream that won’t be used that much so it isn’t that much of a waste. Plus, the connection may very well be used by other streams so it will be less of a problem with idle connections getting killed by NATs or firewalls.
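The long-polling loop described above can be sketched generically; the `fetch` callable here is a hypothetical stand-in for one blocking HTTP request (e.g. one HTTP/2 stream) that returns an event or `None` on a server-side timeout:

```python
def long_poll(fetch, handle, max_rounds=None):
    """Repeatedly issue a blocking request and dispatch any event received.

    fetch() stands in for one HTTP request that blocks until the server
    has data or times out (returning None); handle() consumes events.
    Over HTTP/2 each round is just another stream on a shared connection,
    so an idle round costs little.
    """
    rounds = 0
    while max_rounds is None or rounds < max_rounds:
        event = fetch()        # blocks until data or timeout
        if event is not None:
            handle(event)
        rounds += 1            # on timeout, simply poll again

# Example with a canned "server" that times out twice along the way:
responses = iter([None, 'hello', None, 'world'])
received = []
long_poll(lambda: next(responses), received.append, max_rounds=4)
# received == ['hello', 'world']
```

The loop itself is protocol-agnostic; the point in the text is that the cost of each idle round is what HTTP/2 multiplexing makes tolerable.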

The Web Push API was brought by W3C during 2015 and is in many ways a more “webby” way of doing push than the much more manual and “raw” method that websockets is. If you use websockets mostly for push notifications, then this might be a more convenient choice.

Also introduced after websockets, is WebRTC. This is a technique introduced for communication between browsers, but it certainly provides an alternative to some of the things websockets were once used for.


Websockets over HTTP/2 could still be done. The fact that it isn’t done just shows that there isn’t enough interest.


Recall how browsers only speak HTTP/2 over TLS, while websockets can also be done over plain TCP. In fact, the only way to upgrade a HTTP connection to websockets is using the HTTP/1 Upgrade: header trick, and not the ALPN method for TLS that HTTP/2 uses to reduce the number of round-trips required.

If anyone would introduce websockets over HTTP/2, they would then probably only be possible to be made over TLS from within browsers.

David BurnsSelenium WebDriver and Firefox 47

With the release of Firefox 47, the extension-based FirefoxDriver is no longer working. A change in Firefox caused the browser to crash when Selenium started it. The fix has landed, but there is a release process, which is slow (to make sure we don't break anything else), so hopefully a fixed version is due for release next week or so.

This does not mean that your tests need to stop working entirely as there are options to keep them working.


Firstly, you can use Marionette, the Mozilla version of FirefoxDriver, to drive Firefox. This has been in Firefox since about version 24, and we have been slowly bringing it up to Selenium's level while working around other Mozilla priorities. Currently Marionette is passing ~85% of the Selenium test suite.

I have written up some documentation on how to use Marionette on MDN

I am not expecting everything to work but below is a quick list that I know doesn't work.

  • No support for self-signed certificates
  • No support for actions
  • No support for the logging endpoint
  • I am sure there are other things we don't remember

It would be great if you could raise bugs for anything else you find.

Firefox 45 ESR or Firefox 46

If you don't want to worry about Marionette, the other option is to downgrade to Firefox 45, preferably the ESR as it won't update to 47 and will update in about 6-9 months time to Firefox 52 when you will need to use Marionette.

Marionette will be turned on by default from Selenium 3, which is currently being worked on by the Selenium community. Ideally when Firefox 52 comes around you will just update to Selenium 3 and, fingers crossed, all works as planned.

Robert O'CallahanNastiness Works

One thing I experienced many times at Mozilla was users pressuring developers with nastiness --- ranging from subtle digs to vitriolic abuse, trying to make you feel guilty and/or trigger an "I'll show you! (by fixing the bug)" response. I know it happens in most open-source projects; I've been guilty of using it myself.

I particularly dislike this tactic because it works on me. It really makes me want to fix bugs. But I also know one shouldn't reward bad behavior, so I feel bad fixing those bugs. Maybe the best I can do is call out the bad behavior, fix the bug, and avoid letting that same person use that tactic again.

Perhaps you're wondering "what's wrong with that tactic if it gets bugs fixed?" Development resources are finite so every bug or feature is competing with others. When you use nastiness to manipulate developers into favouring your bug, you're not improving quality generally, you're stealing attention away from other issues whose proponents didn't stoop to that tactic and making developers a little bit miserable in the process. In fact by undermining rational triage you're probably making quality worse overall.

Mitchell BakerExpanding Mozilla’s Boards

This post was originally published on the Mozilla Blog.

In a post earlier this month, I mentioned the importance of building a network of people who can help us identify and recruit potential Board level contributors and senior advisors. We are also currently working to expand both the Mozilla Foundation and Mozilla Corporation Boards.

The role of a Mozilla Board member

I’ve written a few posts about the role of the Board of Directors at Mozilla.

At Mozilla, we invite our Board members to be more involved with management, employees and volunteers than is generally the case. It’s not that common for Board members to have unstructured contacts with individuals or even sometimes the management team. The conventional thinking is that these types of relationships make it hard for the CEO to do his or her job. We feel differently. We have open flows of information in multiple channels. Part of building the world we want is to have built transparency and shared understandings.

We also prefer a reasonably extended “get to know each other” period for our Board members. Sometimes I hear people speak poorly of extended process, but I feel it’s very important for Mozilla.  Mozilla is an unusual organization. We’re a technology powerhouse with a broad Internet openness and empowerment mission at its core. We feel like a product organization to those from the nonprofit world; we feel like a non-profit organization to those from the Internet industry.

It’s important that our Board members understand the full breadth of Mozilla’s mission. It’s important that Mozilla Foundation Board members understand why we build consumer products, why it happens in the subsidiary and why they cannot micro-manage this work. It is equally important that Mozilla Corporation Board members understand why we engage in the open Internet activities of the Mozilla Foundation and why we seek to develop complementary programs and shared goals.

I want all our Board members to understand that “empowering people” encompasses “user communities” but is much broader for Mozilla. Mozilla should be a resource for the set of people who care about the open Internet. We want people to look to Mozilla because we are such an excellent resource for openness online, not because we hope to “leverage our community” to do something that benefits us.

These sort of distinctions can be rather abstract in practice. So knowing someone well enough to be comfortable about these takes a while. We have a couple of ways of doing this. First, we have extensive discussions with a wide range of people. Board candidates will meet the existing Board members, members of the management team, individual contributors and volunteers. We’ve been piloting ways to work with potential Board candidates in some way. We’ve done that with Cathy Davidson, Ronaldo Lemos, Katharina Borchert and Karim Lakhani. We’re not sure we’ll be able to do it with everyone, and we don’t see it as a requirement. We do see this as a good way to get to know how someone thinks and works within the framework of the Mozilla mission. It helps us feel comfortable including someone at this senior level of stewardship.

What does a Mozilla Board member look like

Job descriptions often get long and wordy. We have those too but, for the search of new Board members, we’ve tried something else this time: a visual role description.

Board member job description for Mozilla Corporation

Board member job description for Mozilla Foundation

Here is a short explanation of how to read these visuals:

  • The horizontal lines speak to things that every Board member should have. For instance, to be a Board member, you have to care about the mission and you have to have some cultural sense of Mozilla, etc. They are a set of things that are important for each and every candidate. In addition, there is a set of things that are important for the Board as a whole. For instance, we could put international experience in there or whether the candidate is a public spokesperson. We want some of that but it is not necessary that every Board member has that.
  • In the vertical green columns, we have the particular skills and expertise that we are looking for at this point.
  • We would expect the horizontal lines not to change too much over time and the vertical lines to change depending on who joins the Board and who leaves.

I invite you to look at these documents and provide input on them. If you have candidates that you believe would be good Board members, send them to the mailing list. We will use real discretion with the names you send us.

We’ll also be designing a process for how to broaden participation in the process beyond other Board members. We want to take advantage of the awareness and the cluefulness of the organization. That will be part of a future update.


Robert O'Callahan"Safe C++ Subset" Is Vapourware

In almost every discussion of Rust vs C++, someone makes a comment like:

the subset of C++14 that most people will want to use and the guidelines for its safe use are already well on their way to being defined ... By following the guidelines, which can be verified statically at compile time, the same kind of safeties provided by Rust can be had from C++, and with less annotation effort.
This promise is vapourware. In fact, it's classic vapourware in the sense of "wildly optimistic claim about a future product made to counter a competitor". (Herb Sutter says in comments that this wasn't designed with the goal of "countering a competitor" so I'll take him at his word (though it's used that way by others). Sorry Herb!)

(FWIW the claim quoted above is actually an overstatement of the goals of the C++ Core Guidelines to which it refers, which say "our design is a simpler feature focused on eliminating leaks and dangling only"; Rust provides important additional safety properties such as data-race freedom. But even just the memory safety claim is vapourware.)

To satisfy this claim, we need to see a complete set of statically checkable rules and a plausible argument that a program adhering to these rules cannot exhibit memory safety bugs. Notably, languages that offer memory safety are not just claiming you can write safe programs in the language, nor that there is a static checker that finds most memory safety bugs; they are claiming that code written in that language (or the safe subset thereof) cannot exhibit memory safety bugs.

AFAIK the closest to this C++ gets is the Core Guidelines Lifetimes I and II document, last updated December 2015. It contains only an "informal overview and rationale"; it refers to "Section III, analysis rules (forthcoming this winter)", which apparently has not yet come forth. (I'm pretty sure they didn't mean the New Zealand winter.) The informal overview shows a heavy dependence on alias analysis, which does not inspire confidence because alias analysis is always fragile. The overview leaves open critical questions about even trivial examples. Consider:

unique_ptr<int> p;
void foo(const int& v) {
  p = nullptr;
  cout << v;
}
void bar() {
  p = make_unique<int>(7);
  foo(*p);
}
Obviously this program is unsafe and must be forbidden, but what rule would reject it? The document says
  • In the function body, by default a Pointer parameter param is assumed to be valid for the duration of the function call and not depend on any other parameter, so at the start of the function lset(param) = param (its own lifetime) only.
  • At a call site, by default passing a Pointer to a function requires that the argument’s lset not include anything that could be invalidated by the function.
Clearly the body of foo is OK by those rules. For the call to foo from bar, it depends on what is meant by "anything that could be invalidated by the function". Does that include anything reachable via global variables? Because if it does, then you can't pass anything reachable from a global variable to any function by reference, which is crippling. But if it doesn't, then what rejects this code?

Update Herb points out that example 7.1 covers a similar situation with raw pointers. That example indicates that anything reachable through a global variable cannot be passed to a function by raw pointer or reference. That still seems like a crippling limitation to me. You can't, for example, copy-construct anything (indirectly) reachable through a global variable:

unique_ptr<Foo> p;
void bar() {
  p = make_unique<Foo>(...);
  Foo xyz(*p); // Forbidden!
}

This is not one rogue example that is easily addressed. This example cuts to the heart of the problem, which is that understanding aliasing in the face of functions with potentially unbounded side effects is notoriously difficult. I myself wrote a PhD thesis on the subject, one among hundreds, if not thousands. Designing your language and its libraries from the ground up to deal with these issues has been shown to work, in Rust at least, but I'm deeply skeptical it can be bolted onto C++.


Aren't clang and MSVC already shipping previews of this safe subset? They're implementing static checking rules that no doubt will catch many bugs, which is great. They're nowhere near demonstrating they can catch every memory safety bug.

Aren't you always vulnerable to bugs in the compiler, foreign code, or mistakes in the safety proofs, so you can never reach 100% safety anyway? Yes, but it is important to reduce the amount of trusted code to the minimum. There are ways to use machine-checked proofs to verify that compilation and proof steps do not introduce safety bugs.

Won't you look stupid when Section III is released? Occupational hazard, but that leads me to one more point: even if and when a statically checked, plausibly safe subset is produced, it will take significant experience working with that subset to determine whether it's viable. A subset that rejects core C++ features such as references, or otherwise excludes most existing C++ code, will not be very compelling (as acknowledged in the Lifetimes document: "Our goal is that the false positive rate should be kept at under 10% on average over a large body of code").

Jen Kaganday 16: helpful git things

it’s been important for me to get comfortable-ish with git. i’m slowly learning about best practices on a big open source project that’s managed through github.

one example: creating a separate branch for each feature i work on. in the case of min-vid, this means i created one branch to add support, a different branch to add to the project’s README, a different branch to work on vine support, etc. that way, if my changes aren’t merged into the main project’s master, i don’t have to re-clone the project. i just keep working on the branch or delete it or whatever. this also lets me bounce between different features if i get stuck on one and need to take a break by working on another one. i keep the workflow on a post-it on my desktop so i don’t have to think about it (a la atul gawande’s so good checklist manifesto):

git checkout master

git pull upstream master
(to get new changes from the main project’s master branch)

git push origin master
(to push new changes up to my own master branch)

git checkout -b [new branch]
(to work on a feature)

npm run package
(to package the add-on before submitting the PR)

git add .

git commit -m '[commit message]'

git push origin [new branch]
(to push my changes to my feature branch; from here, i can submit a PR)

git checkout master

another important git practice: squashing commits so my pull request doesn’t include 1000 commits that muddy the project history with my teensy changes. this is the most annoying thing ever and i always mess it up and i can’t even bear to explain it because this person has done a pretty good job already. just don’t ever ever forget to REBASE ON TOP OF MASTER, people!
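for reference, here's one non-interactive way to do the squash (a sketch, not the only way — `git rebase -i` works too). this builds a throwaway repo so it's safe to run as-is; on a real feature branch you'd rebase on top of master first, then `git push --force-with-lease origin [branch]`:

```shell
# sketch: squash the last two "wip" commits into one, without interactive
# rebase, demonstrated in a throwaway repo
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
echo one   > file.txt; git add file.txt; git commit -qm 'wip 1'
echo two   > file.txt; git commit -qam 'wip 2'
echo three > file.txt; git commit -qam 'wip 3'
# rewind the branch pointer two commits, keeping all the changes staged
git reset --soft HEAD~2
# commit everything again as one clean commit
git commit -qm 'one clean commit'
git log --oneline
```

the nice part about `reset --soft` is that nothing in your working tree changes — only the branch pointer moves, so the final commit contains exactly what the three messy ones did.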

last thing, which has been more important on my side project that i’m hosting on gh-pages: updating my gh-pages branch with changes from my master branch. this is crucial because the gh-pages branch, which displays my website, doesn’t automatically incorporate changes i make to my index.html file on my master branch. so here’s the workflow:

git checkout master
(work on stuff on the master branch)

git add .

git commit -m '[commit message]'

git push origin master
(the previous commands push your changes to your master branch. now, to update your gh-pages branch:)

git checkout gh-pages

git merge master

git push origin gh-pages

yes, that’s it, the end, congrats!

p.s. that all assumes that you already created a gh-pages branch to host your website. if you haven’t and want to, here’s how you do it:

git checkout master
(work on stuff on the master branch)

git add .

git commit -m '[message]'

git push origin master
(same as before. this is just normal, updating-your-master-branch stuff. so, next:)

git checkout -b gh-pages
(-b creates a new branch, gh-pages names the new branch “gh-pages”)

git push origin gh-pages
(this pushes your new local gh-pages branch, which contains everything from master, up to origin)

yes, that’s it, the end, congrats!

This Week In RustThis Week in Rust 134

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us an email! Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

This week's edition was edited by: nasa42 and llogiq.

Updates from Rust Community

News & Blog Posts

New Crates & Project Updates

Crate of the Week

This week's Crate of the Week is petgraph, which provides graph structures and algorithms. Thanks to /u/diwic for the suggestion!

Submit your suggestions for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

110 pull requests were merged in the last two weeks.

New Contributors

  • Andrew Brinker
  • Chris Tomlinson
  • Hendrik Sollich
  • Horace Abenga
  • Jacob Clark
  • Jakob Demler
  • James Alan Preiss
  • James Lucas
  • Joachim Viide
  • Mark Côté
  • Mathieu De Coster
  • Michael Necio
  • Morten H. Solvang
  • Wojciech Nawrocki

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email Erick Tryzelaar or Brian Anderson for access.

fn work(on: RustProject) -> Money

No jobs listed for this week.

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

Isn’t rust too difficult to be widely adopted?

I believe in people.

Steve Klabnik on TRPLF

Thanks to Steven Allen for the suggestion.

Submit your quotes for next week!

The Servo BlogThis Week In Servo 67

In the last week, we landed 85 PRs in the Servo organization’s repositories.

That number is a bit low this week, due to some issues with our CI machines (especially the OSX boxes) that have hurt our landing speed. Most of the staff are in London this week for the Mozilla All Hands meeting, but we’ll try to look into the CI issues.

Planning and Status

Our overall roadmap and quarterly goals are available online.

This week’s status updates are here.

Notable Additions

  • glennw upgraded our GL API usage to rely on more GLES3 features
  • ms2ger removed some usage of transmute
  • nox removed some of the dependencies on crates that are very fragile to rust nightly changes
  • nox reduced the number of fonts that we load unconditionally
  • larsberg added the ability to open web pages in Servo on Android
  • anderco fixed some box shadow issues
  • ajeffrey implemented the beginnings of the top level browsing context
  • izgzhen improved the implementation and tests for the file manager thread
  • edunham expanded the ./mach package command to handle desktop platforms
  • daoshengmu implemented TexSubImage2d for WebGL
  • pcwalton fixed an issue with receiving mouse events while scrolling in certain situations
  • danlrobertson continued the quest to build Servo on FreeBSD
  • manishearth reimplemented XMLHttpRequest in terms of the Fetch specification
  • kevgs corrected a spec-incompatibility in Document.defaultView
  • fduraffourg added a mechanism to update the list of public suffixes
  • farodin91 enabled using WindowProxy types in WebIDL
  • bobthekingofegypt prevented some unnecessary echoes of websocket quit messages

New Contributors

There were no new contributors this week.

Interested in helping build a web browser? Take a look at our curated list of issues that are good for new contributors!


No screenshots this week.

Air MozillaHackathon Open Democracy Now Day 2

Hackathon Open Democracy Now Day 2 Hackathon d'ouverture du festival Futur en Seine 2016 sur le thème de la Civic Tech.

Robert O'CallahanSome Dynamic Measurements Of Firefox On x86-64

This follows up on my previous measurements of static properties of Firefox code on x86-64 with some measurements of dynamic properties obtained by instrumenting code. These are mostly for my own amusement but intuitions about how programs behave at the machine level, grounded in data, have sometimes been unexpectedly useful.

Dynamic properties are highly workload-dependent. Media codecs are more SSE/AVX intensive than regular code so if you do nothing but watch videos you'd expect qualitatively different results than if you just load Web pages. I used a mixed workload that starts Firefox (multi-process enabled, optimized build), loads the NZ Herald, scrolls to the bottom, loads an article with a video, plays the video for several seconds, then quits. It ran for about 30 seconds under rr and executed about 60 billion instructions.

I repeated my register usage analysis, this time weighted by dynamic execution count and taking into account implicit register usage such as push using rsp. The results differ significantly depending on whether you count the consecutive iterations of a repeated string instruction (e.g. rep movsb) as a single instruction execution or as one execution per iteration, so I show both. Unlike the static graphs, these results are for all instructions executed anywhere in the process(es), including JITted code, not just libxul.

  • As expected, registers involved in string instructions get a big boost when you count string instruction repetitions individually. About 7 billion of the 64 billion instruction executions "with string repetitions" are iterations of string instructions. (In practice Intel CPUs can optimize these to execute 64 iterations at a time, under favourable conditions.)
  • As expected, sp is very frequently used once you consider its implicit uses.
  • String instructions aside, the dynamic results don't look very different from the static results. Registers R8 to R11 look a bit more used in this graph, which may be because they tend to be allocated in highly optimized leaf functions, which are more likely to be hot code.

  • The surprising thing about the results for SSE/AVX registers is that they still don't look very different to the static results. Even the bottom 8 registers still aren't frequently used compared to most general-purpose registers, even though I deliberately tried to exercise codec code.
  • I wonder why R5 is the least used bottom-8 register by a significant margin. Maybe these results are dominated by a few hot loops that by chance don't use that register much.

I was also interested in exploring the distribution of instruction execution frequencies:

A dot at position x, y on this graph means that fraction y of all instructions executed at least once is executed at most x times. So, we can see that about 19% of all instructions executed are executed only once. About 42% of instructions are executed at most 10 times. About 85% of instructions are executed at most 1000 times. These results treat consecutive iterations of a string instruction as a single execution. (It's hard to precisely define what it means for an instruction to "be the same" in the presence of dynamic loading and JITted code. I'm assuming that every execution of an instruction at a particular address in a particular address space is an execution of "the same instruction".)
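The curve can be reproduced with a few lines (a sketch with invented counts, not the actual data; real input would be a mapping from instruction address to execution count):

```python
# Sketch with made-up data: given per-instruction execution counts, compute
# the fraction of distinct executed instructions run at most x times.
counts = {0x1000: 1, 0x1005: 1, 0x100a: 10, 0x100f: 1000, 0x1014: 160_000_000}

def fraction_executed_at_most(x, counts=counts):
    values = list(counts.values())
    return sum(1 for c in values if c <= x) / len(values)

print(fraction_executed_at_most(1))  # fraction of instructions run exactly once
```

Sweeping x over the observed counts gives exactly the dots plotted above.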

Interestingly, the five most frequently executed instructions are executed about 160M times. Those instructions are for this line, which is simply filling a large buffer with 0xff000000. gcc is generating quite slow code:

132e7b2: cmp    %rax,%rdx
132e7b5: je     132e7d1
132e7b7: movl   $0xff000000,(%r9,%rax,4)
132e7bf: inc    %rax
132e7c2: jmp    132e7b2

That's five instructions executed for every four bytes written. This could be done a lot faster in a variety of different ways --- rep stosd or rep stosq would probably get the fast-string optimization, but SSE/AVX might be faster.

Anthony HughesLondon Calling

I’m happy to share that I will be hosting my first ever All-hands session, Graphics Stability – Tools and Measurements (12:30pm on Tuesday in the Hilton Metropole), at the upcoming Mozilla All-hands in London. The session is intended to be a conversation about taking a more collaborative approach to data-driven decision-making as it pertains to improving Graphics stability.

I will begin by presenting how the Graphics team is using data to tackle the graphics stability problem, reflecting on the problem at hand and the various approaches we’ve taken to date. My hope is this serves as a catalyst to lively discussion for the remainder of the session, resulting in a plan for more effective data-driven decision-making in the future through collaboration.

I am extending an invitation to those outside the Graphics team, to draw on a diverse range of backgrounds and expertise. As someone with a background in QA, data analysis is an interesting diversion (some may call it an evolution) in my career — it’s something I just fell into after a fairly lengthy and difficult transitional period. While I’ve learned a lot recently, I am an amateur data scientist at best and could certainly benefit from more developed expertise.

I hope you’ll consider being a part of this conversation with the Graphics team. It should prove to be both educational and insightful. If you cannot make it, not to worry, I will be blogging more on this subject after I return from London.

Feel free to reach out to me if you have questions.

Karl Dubost[worklog] Edition 025. Forest and bugs

In France, in the forest, listening to the sound of leaves on oak trees, in between bugs and preparing for London work week. Tune of the week: Working Class Hero.

Webcompat Life

Progress this week:

Today: 2016-06-14T06:14:05.461238
338 open issues
needsinfo       4
needsdiagnosis  92
needscontact    23
contactready    46
sitewait        161

You are welcome to participate

London agenda.

Webcompat issues

(a selection of some of the bugs worked on this week). dev

  • I'm wondering if we are not missing a step once we have contacted a Web site.

Reading List

Follow Your Nose


  • Document how to write tests using test fixtures.
  • ToWrite: Amazon prefetching resources with <object> for Firefox only.


Air MozillaHackathon Open Democracy Now

Hackathon Open Democracy Now Hackathon d'ouverture du festival Futur en Seine 2016 sur le thème de la Civic Tech.

Robert O'CallahanAre Dynamic Control-Flow Integrity Schemes Worth Deploying?

Most exploits against C/C++ code today rely on hijacking CPU-level control flow to execute the attacker's code. Researchers have developed schemes to defeat such attacks based on the idea of control flow integrity: characterize a program's "valid control flow", and prevent deviations from valid control flow at run time. There are lots of CFI schemes, employing combinations of static and dynamic techniques. Some of them don't even call themselves CFI, but I don't have a better term for the general definition I'm using here. Phrased in this general way, it includes control-transfer instrumentation (CCFIR etc), pointer obfuscation, shadow stacks, and even DEP and ASLR.

Vendors of C/C++ software need to consider whether to deploy CFI (and if so, which scheme). It's a cost/benefit analysis. The possible benefit is that many bugs may become significantly more difficult --- or even impossible --- to exploit. The costs are complexity and run-time overhead.

A key question when evaluating the benefit is, how difficult will it be for CFI-aware attackers to craft exploits that bypass CFI? That has two sub-questions: how often is it possible to weaponize a memory-safety bug that's exploited via control-flow hijacking today, with an exploit that is permitted by the CFI scheme? And, crucially, will it be possible to package such exploitation techniques so that weaponizing common C/C++ bugs into CFI-proof exploits becomes cheap? A very interesting paper at Oakland this year, and related work by other authors, suggests that the answer to the first sub-question is "very often" and the answer to the second sub-question is "don't bet against it".

Coincidentally, Intel has just unveiled a proposal to add some CFI features to their CPUs. It's a combination of shadow stacks with dynamic checking that the targets of indirect jumps/calls are explicitly marked as valid indirect destinations. Unlike some more precise CFI schemes, you only get one-bit target identification; a given program point is a valid destination for all indirect transfers or none.

So will CFI be worth deploying? It's hard to say. If you're offered a turnkey solution that "just works" with negligible cost, there may be no reason not to use it. However, complexity has a cost, and we've seen that sometimes complex security measures can even backfire. The tail end of Intel's document is rather terrifying; it tries to enumerate the interactions of their CFI feature with all the various execution modes that Intel currently supports, and leaves me with the impression that they're generally heading over the complexity event horizon.

Personally I'm skeptical that CFI will retain value over the long term. The Oakland DOP paper is compelling, and I think we generally have lots of evidence that once an attacker has a memory safety bug to work on, betting against the attacker's ingenuity is a loser's game. In an arms race between dynamic CFI (and its logical extension to dynamic data-flow integrity) and attackers, attackers will probably win, not least because every time you raise the CFI bar you'll pay with increased complexity and overhead. I suggest that if you do deploy CFI, you should do so in a way that lets you pull it out if the cost-benefit equation changes. Baking it into the CPU does not have that property...

One solution, of course, is to reduce the usage of C/C++ by writing code in a language whose static invariants are strong enough to give you CFI, and much stronger forms of integrity, "for free". Thanks to Rust, the old objections that memory-safe languages were slow, tied to run-time support and cost you control over resources don't apply anymore. Let's do it.

Mozilla Addons BlogWebExtensions for Firefox 49

Firefox 49 landed in Developer Edition this week, so we have another update on WebExtensions for you!

Since the release of Firefox 48, we feel WebExtensions are in a stable state. We recommend developers start to use the WebExtensions API for their add-on development. Since the last release, more than 35 bugs were closed on WebExtensions alone.

If you have authored an add-on in the past and are curious about how it’s affected by the upcoming changes, please use this lookup tool. There is also a wiki page and MDN articles filled with resources to support you through the changes.

APIs Implemented

The history API allows you to interact with the browser history. In Firefox 49 the APIs to add, query, and delete browser history have been merged. This is ideal for developers who want to build add-ons to manage user privacy.

In Firefox 48, Android support was merged, and in Firefox 49 support for some of the native elements has started to land. Firefox 49 on Android supports some of the pageAction APIs. This work lays the foundation for new APIs such as tabs, windows, and browserAction on Android.

The WebNavigation API now supports the manual_subframe transitionType and keeps track of user interaction with the url bar appropriately. The downloads API now lets you download a blob created in a background script.

For a full list of bugs, please check out Bugzilla.

In progress

Things are a little bit quieter recently because there are things in progress that have absorbed a lot of developer time. They won’t land in the tree for Firefox 49, but we’ll keep you updated on their progress in later releases.


This API allows you to store some data for an add-on in Firefox and have it synced to another Firefox browser. It is intended for storing add-on preferences; it is not designed to be a robust, general-purpose data storage and syncing tool. For sync, we will use Firefox Accounts to authenticate users and enforce quota limits.

Whilst the API contains the word “sync” and it uses Firefox Accounts, it should be noted that it is different from Firefox Sync. In Firefox Sync there is an attempt to merge data and resolve conflicts. There is no similar logic in this API.

You can track the progress of storage.sync in Bugzilla.


This API allows you to communicate with other processes on the host’s operating system. It’s a commonly used API for password managers and security software, which need to communicate with external processes.

Communicating with a native process takes two steps. First, your installer needs to install a JSON manifest file at an appropriate location on the target computer. That JSON manifest provides the link between Firefox and the process. Secondly, the user installs the add-on. Then the add-on can call connectNative, sendNativeMessage and other APIs:

chrome.runtime.sendNativeMessage("my_application", // placeholder: the name from the JSON manifest
  { text: "Hello" },
  function(response) {
    console.log("Received " + response);
  });

Firefox will start the process if it hasn’t started already, and pipe commands through to the process. Follow along with the progress of runtime.connectNative on Bugzilla.
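The manifest the installer drops might look roughly like this (a hedged sketch: the field names follow the native messaging manifest format, but the application name, path, and add-on ID are placeholders):

```json
{
  "name": "my_application",
  "description": "Example native helper process",
  "path": "/usr/lib/mozilla/native-messaging-hosts/my_application",
  "type": "stdio",
  "allowed_extensions": ["my-addon@example.org"]
}
```

The "name" field is what the add-on passes to connectNative or sendNativeMessage, and "allowed_extensions" restricts which add-ons may talk to the process.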

WebExtensions transition

With these ongoing improvements, we realise there are lots of add-ons that might want to start moving towards WebExtensions and utilise the new APIs.

To allow this, you will soon be able to embed a WebExtension inside an add-on. This allows you to message the WebExtension add-on.

The following example works with SDK add-ons, but this should work with any bootstrapped add-on. Inside your SDK add-on you’d have a directory called webextensions containing a full-fledged WebExtension. In the background page of that WebExtension will be the following:

chrome.runtime.sendMessage("test message", (reply) => {
  console.log("embedded webext got a reply", reply);
});

Then you’d be able to reply in the SDK add-on:

var { api } = require('sdk/webextension');
api.onMessage.addListener((msg, sender, sendReply) => {
  console.log("SDK add-on got a message", {msg, sender});
});

This demonstrates sending a message from the WebExtension to the SDK add-on. Persistent bi-directional ports will also be available.

Using this technique, we hope add-on developers can leverage WebExtensions APIs as they start migrating their add-ons over to WebExtensions. Follow along with the progress of communication on Bugzilla.

There are also lots of other ways to get involved with WebExtensions, so please check them out!

Mitchell BakerJoi Ito changes role and starts new “Practicing Open” project with Mozilla Foundation

Since the Mozilla Foundation was founded in 2003, we’ve grown remarkably – from impact to the size of our staff and global community. We’re indebted to the people whose passion and creativity made this possible, people like Joi Ito.

Joi is a long-time friend of Mozilla. He’s a technologist, a thinker, an activist and an entrepreneur. He’s been a Mozilla Foundation board member for many years. He’s also Director of the MIT Media Lab and was very recently appointed Professor of the Practice by MIT.

As Joi has become more deeply involved with the Media Lab over the past few years, we’ve come to understand that his most important future contributions will come not as a Board member, but from spurring innovative activities that advance the goals of both the Mozilla Foundation and the Media Lab.

The first such project and collaboration between Mozilla and the Media Lab, is an “Open Leadership Camp” for senior executives in the nonprofit and public sectors.

The seeds of this idea have been germinating for a while. Joi and I have had an ongoing discussion about how people build open, participatory, web-like organizations for a year or so now. The NetGain consortium led by Ford, Mozilla and a number of foundations, has shown the pressing need for deeper Internet knowledge in the nonprofit and public sectors. Also, Mozilla’s nascent Leadership Network has been working on how to provide innovative ways for leaders in the more publicly-minded tech space to learn new skills. All these things felt like the perfect storm for a collaborative project on open leadership and to work with other groups already active in this area.

The project we have in mind is simple:

  1. Bring together a set of experienced leaders from ‘open organizations’ and major non-profit and public sector organizations.
  2. Get them working on practical projects that involve weaving open techniques into their organizations.
  3. Document and share the learning as we go.

Topics we’ll cover include everything from design thinking (think: sticky notes) to working in the open (think: github) to the future of open technologies (think: blockchain). The initial camp will run at MIT in early 2017, with Joi and me as the hosts. Our hope is that a curriculum and method can grow from there to seed similar camps within public-interest leadership programs in many other places.

I’m intensely grateful for Joi’s impact. We’ve been lucky to have him involved with Mozilla and the open Internet. We’re lucky to have him at the Media Lab and I’m looking forward to our upcoming work together.

Chris AtLeePyCon 2016 report

I had the opportunity to spend last week in Portland for PyCon 2016. I'd like to share some of my thoughts and some pointers to good talks I was able to attend. The full schedule can be found here and all the videos are here.


Brandon Rhodes' Welcome to PyCon was one of the best introductions to a conference I've ever seen. Unfortunately I can't find a link to a recording... What I liked about it was that he made everyone feel very welcome to PyCon and to Portland. He explained some of the simple (but important!) practical details like where to find the conference rooms, how to take transit, etc. He noted that for the first time, they have live transcriptions of the talks being done and put up on screens beside the speaker slides for the hearing impaired.

He also emphasized the importance of keeping questions short during Q&A after the regular sessions. "Please form your question in the form of a question." I've been to way too many Q&A sessions where the person asking the question took the opportunity to go off on a long, unrelated tangent. For the most part, this advice was followed at PyCon: I didn't see very many long winded questions or statements during Q&A sessions.

Machete-mode Debugging

(abstract; video)

Ned Batchelder gave this great talk about using python's language features to debug problematic code. He ran through several examples of tricky problems that could come up, and how to use things like monkey patching and the debug trace hook to find out where the problem is. One piece of advice I liked was when he said that it doesn't matter how ugly the code is, since it's only going to last 10 minutes. The point is to get the information you need out of the system the easiest way possible, and then you can undo your changes.

Refactoring Python

(abstract; video)

I found this session pretty interesting. We certainly have lots of code that needs refactoring!

Security with object-capabilities

(abstract; video; slides)

I found this interesting, but a little too theoretical. Object capabilities are completely orthogonal to access control lists as a way to model security and permissions. It was hard for me to see how we could apply this to the systems we're building.

Awaken your home

(abstract; video)

A really cool intro to the Home Assistant project, which integrates all kinds of IoT type things in your home. E.g. Nest, Sonos, IFTTT, OpenWrt, light bulbs, switches, automatic sprinkler systems. I'm definitely going to give this a try once I free up my raspberry pi.

Finding closure with closures

(abstract; video)

A very entertaining session about closures in Python. Does Python even have closures? (yes!)

Life cycle of a Python class

(abstract; video)

Lots of good information about how classes work in Python, including some details about meta-classes. I think I understand meta-classes better after having attended this session. I still don't get descriptors though!

(I hope Mike learns soon that __new__ is pronounced "dunder new" and not "under under new"!)

Deep learning

(abstract; video)

Very good presentation about getting started with deep learning. There are lots of great libraries and pre-trained neural networks out there to get started with!

Building protocol libraries the right way

(abstract; video)

I really enjoyed this talk. Cory Benfield describes the importance of keeping a clean separation between your protocol parsing code, and your IO. It not only makes things more testable, but makes code more reusable. Nearly every HTTP library in the Python ecosystem needs to re-implement its own HTTP parsing code, since all the existing code is tightly coupled to the network IO calls.


Guido's Keynote


Some interesting notes in here about the history of Python, and a look at what's coming in 3.6.


(abstract; video)

An intro to the click module for creating beautiful command line interfaces.

I like that click helps you to build testable CLIs.

HTTP/2 and asynchronous APIs

(abstract; video)

A good introduction to what HTTP/2 can do, and why it's such an improvement over HTTP/1.x.

Remote calls != local calls

(abstract; video)

Really good talk about failing gracefully. He covered some familiar topics like adding timeouts and retries to things that can fail, but also introduced to me the concept of circuit breakers. The idea with a circuit breaker is to prevent talking to services you know are down. For example, if you have failed to get a response from service X the past 5 times due to timeouts or errors, then open the circuit breaker for a set amount of time. Future calls to service X from your application will be intercepted, and will fail early. This can avoid hammering a service while it's in an error state, and works well in combination with timeouts and retries of course.
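A minimal version of the idea looks something like this (my own sketch, not code from the talk; the class name and thresholds are invented):

```python
import time

class CircuitBreaker:
    """Toy circuit breaker: after max_failures consecutive errors, calls
    fail fast for reset_after seconds instead of hammering the broken
    service."""

    def __init__(self, max_failures=5, reset_after=30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def call(self, func, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                # Circuit is open: fail early, don't touch the service.
                raise RuntimeError("circuit open: failing fast")
            # Cool-down elapsed: close the circuit and try again.
            self.opened_at = None
            self.failures = 0
        try:
            result = func(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0  # any success resets the failure count
        return result
```

In a real system you'd wrap each remote call in `breaker.call(...)`, typically alongside a timeout and a retry policy.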

I was thinking quite a bit about Ben's redo module during this talk. It's a great module for handling retries!

Diving into the wreck

(abstract; video)

A look into diagnosing performance problems in applications. Some neat tools and techniques introduced here, but I felt he blamed the DB a little too much :)


Magic Wormhole

(abstract; video; slides)

I didn't end up going to this talk, but I did have a chance to chat with Brian before. magic-wormhole is a tool to safely transfer files from one computer to another. Think scp, but without needing ssh keys set up already, or direct network flows. Very neat tool!

Computational Physics

(abstract; video)

How to do planetary orbit simulations in Python. Pretty interesting talk; he introduced me to Feynman, and to some of the important characteristics of the simulation methods presented.

Small batch artisinal bots

(abstract; video)

Hilarious talk about building bots with Python. Definitely worth watching, although unfortunately it's only a partial recording.


(abstract; video)

The infamous GIL is gone! And your Python programs only run 25x slower!

Larry describes why the GIL was introduced, what it does, and what's involved with removing it. He's actually got a fork of Python with the GIL removed, but performance suffers quite a bit when run without the GIL.

Lars' Keynote


If you watch only one video from PyCon, watch this. It's just incredible.

Support.Mozilla.OrgWhat’s Up with SUMO – 9th June

Hello, SUMO Nation!

I wonder how many football fans we have among you… The Euro’s coming! Some of us will definitely be watching (and getting emotional about) the games played out in the next few weeks. If you’re a football fan, let’s talk about it in our forums!

Welcome, new contributors!

If you just joined us, don’t hesitate – come over and say “hi” in the forums!

Contributors of the week

Don’t forget that if you are new to SUMO and someone helped you get started in a nice way you can nominate them for the Buddy of the Month!

Most recent SUMO Community meeting

The next SUMO Community meeting

  • …is most likely happening after the London Work Week (which is happening next week)
  • Reminder: if you want to add a discussion topic to the upcoming meeting agenda:
    • Start a thread in the Community Forums, so that everyone in the community can see what will be discussed and voice their opinion here before Wednesday (this will make it easier to have an efficient meeting).
    • Please do so as soon as you can before the meeting, so that people have time to read, think, and reply (and also add it to the agenda).
    • If you can, please attend the meeting in person (or via IRC), so we can follow up on your discussion topic during the meeting with your feedback.



Support Forum

Knowledge Base & L10n


  • for Android
    • Version 47 launched – woohoo!
      • You can now show or hide web fonts in advanced settings, to save your data and increase page loading speeds.
    • Final reminder: Android 2.3 is no longer a supported platform after the recent release.
    • Version 48 articles will be coming after June 18, courtesy of Joni!

That’s it for this week – next week the blog post may not be here… but if you keep an eye open for our Twitter updates, you may see a lot of smiling faces.

Mark SurmanMaking the open internet a mainstream issue

The Internet as a global public resource is at risk. How do we grow the movement to protect it? Thoughts from PDF

Today I’m in New York City at the 13th-annual Personal Democracy Forum, where the theme is “The Tech We Need.” A lot of bright minds are here tackling big issues, like civic tech, data privacy, Internet policy and the sharing economy. PDF is one of the world’s best spaces for exploring the intersection of the Internet and society — and we need events like this now more than ever.

This afternoon I’ll be speaking about the open Internet movement: its genesis, its ebb and why it needs a renaissance. I’ll discuss how the open Internet is much like the environment: a resource that’s delicate and finite. And a resource that, without a strong movement, is spoiled by bad laws and consolidation of power by a few companies.

At its core, the open Internet movement is about more than just technology. It’s about free expression and democracy. That’s why members of the movement are so diverse: Activists and academics. Journalists and hackers.

photo via Flickr / Stacie Isabella Turk / Ribbonhead

Today, this movement is at an inflection point. The open Internet is increasingly at risk. Openness and freedom online are being eroded by governments creating bad or uninformed policy, and by tech companies that are creating monopolies and walled gardens. This is all compounded by a second problem: Many people still don’t perceive the health of the Internet as a mainstream issue.

In order to really demonstrate the importance of the open Internet movement, I like to use an analogue: The environmental movement. The two have a lot in common. Environmentalists are all about preserving the health of the planet. Forests, not clearcutting. Habitats, not smokestacks. Open Internet activists are all about preserving the health of the Internet. Open source code, not proprietary software. Hyperlinks, not walled gardens.

The open Internet is also like the environmental movement in that it has rhythm. Public support ebbs and flows — there are crescendos and diminuendos. Look at the cadence of the environmental movement: it became mainstream a number of times in a number of places. For example, an early crescendo in the US came in the late 19th century. On the heels of the Industrial Revolution, there’s resistance. Think of Thoreau, of “Walden.” Soon after, Theodore Roosevelt and John Muir emerge as champions of the environment, creating the Sierra Club and the first national parks. National parks, and a conservation movement filled with the hikers who use them, both become mainstream: it’s a major victory.

But movements ebb. In the mid-20th century, environmental destruction continues. We build nuclear and chemical plants. We pollute rivers and air space. We coat our food and children with DDT. It’s ugly — and we did irreparable damage while most people just went about their lives. In many ways, this is where we’re at with the Internet today. There is reason to worry that we’re doing damage, and that we might lose what we built without even knowing it.

In reaction, the US environmental movement experiences a second mainstream moment. It starts in the 60s: Rachel Carson releases “Silent Spring,” exposing the dangers of DDT and other pesticides. This is a big deal: Citizens start becoming suspicious of big companies and their impact on the environment. Governments begin appointing environmental ministers. Organizations like Greenpeace emerge and flourish.

For a second time, the environment becomes an issue worthy of policy and public debate. Resting on the foundations built by 1960s environmentalism, things like recycling are a civic duty today. And green business practices are the expectation, not the exception.

The open Internet movement has had a similar tempo. Its first crescendo — its “Walden” moment — was in the 90s. Users carved out and shaped their own spaces online — digital homesteading. No two web pages were the same, and open was the standard. A rough analogue to Thoreau’s “Walden” is John Perry Barlow’s manifesto “A Declaration of the Independence of Cyberspace.” Barlow boldly wrote that governments and centralized power have no place in the digital world.

It’s during this time that the open Internet faces its first major threat: centralization at the hands of Internet Explorer. Suddenly, it seems the whole Web may fall into the hands of Microsoft technology. But there was also a pushback, a crescendo: hackers and users rallied to create open alternatives like Firefox. Quickly, non-proprietary web standards re-emerge. Interoperability and accessibility become driving principles behind building the Web. The Browser Wars are won: Microsoft’s monopoly over web technology is thwarted.

But then comes inertia. We could be in the open Internet movement’s DDT moment. Increasingly, the Internet is becoming a place of centralization. The Internet is increasingly shaped by a tiny handful of companies, not individuals. Users are transforming from creators into consumers. In the global south, millions of users equate the Internet with Facebook. These developments crystallize as a handful of threats: Centralization. Loss of privacy. Digital exclusion.


It’s a bit scary: Like the environment, the open Internet is fragile. There may be a point of no return. What we want to do — what we need to do — is make the health of the open Internet a mainstream issue. We need to make the health of the Internet an indelible issue, something that spurs on better policy and better products. And we need a movement to make this happen.

This is on us: everyone who uses the internet needs to take notice. Not just the technologists — also the activists, academics, journalists and everyday Internet users who treasure freedom of expression and inclusivity online.

There’s good news: This is already happening. Starting with SOPA and ACTA, a citizen movement for an open Internet started accelerating. We got organized, we rallied citizens and we took stands on issues that mattered. Think of the recent headlines. When Edward Snowden revealed the extent of mass surveillance, people listened. Privacy and freedom from surveillance online were quickly enshrined as rights worth fighting for. The issue gained momentum among policymakers — and in 2015, the USA Freedom Act was passed.

Then there is 2015’s net neutrality victory: Over 3 million comments flooded the FCC protesting fast lanes and slow lanes. Most recently, Apple and the FBI clashed fiercely over encryption. Apple refused to concede, standing up for users’ privacy and security. Tim Cook was applauded, and encryption became a word spoken at kitchen tables and coffee shops.

Of course, this is just the beginning. These victories are heartening, for sure. But even as this new wave of internet activism builds, the threats are becoming worse, more widespread. We need to fuel the movement with concrete action — if we don’t, we may lose the open Web for good. Today, upholding the health of the planet is an urgent and enduring enterprise. So too should upholding the health of the Internet.

A small PS: I also gave a talk on this topic at re:publica in Berlin last month. If you want to watch that talk, the video is on the re:publica site.

The post Making the open internet a mainstream issue appeared first on Mark Surman.

Air MozillaMapathon Missing Maps #4

Mapathon Missing Maps #4 Collaborative mapping workshops on OpenStreetMap, covering regions of the world that are sparsely mapped or not yet mapped at all. Organized by Missing Maps and MSF.

Air MozillaWeb QA team meeting

Web QA team meeting They say a Mozilla Web QA team member is the most fearless creature in the world. They say their jaws are powerful enough to crush...

Air MozillaReps weekly, 09 Jun 2016

Reps weekly This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

The Mozilla BlogHelp Make Open Source Secure

Major security bugs in core pieces of open source software – such as Heartbleed and Shellshock – have elevated highly technical security vulnerabilities into national news headlines. Despite these sobering incidents, adequate support for securing open source software remains an unsolved problem, as a panel of 32 security professionals confirmed in 2015. We want to change that, starting today with the creation of the Secure Open Source (“SOS”) Fund aimed at precisely this need.

Open source software is used by millions of businesses and thousands of educational and government institutions for critical applications and services. From Google and Microsoft to the United Nations, open source code is now tightly woven into the fabric of the software that powers the world. Indeed, much of the Internet – including the network infrastructure that supports it – runs using open source technologies. As the Internet moves from connecting browsers to connecting devices (cars and medical equipment), software security becomes a life and death consideration.

The SOS Fund will provide security auditing, remediation, and verification for key open source software projects. The Fund is part of the Mozilla Open Source Support program (MOSS) and has been allocated $500,000 in initial funding, which will cover audits of some widely-used open source libraries and programs. But we hope this is only the beginning. We want to see the numerous companies and governments that use open source join us and provide additional financial support. We challenge these beneficiaries of open source to pay it forward and help secure the Internet.

Security is a process. To have substantial and lasting benefit, we need to invest in education, best practices, and a host of other areas. Yet we hope that this fund will provide needed short-term benefits and industry momentum to help strengthen open source projects.

Mozilla is committed to tackling the need for more security in the open source ecosystem through three steps:

  • Mozilla will contract with and pay professional security firms to audit other projects’ code;
  • Mozilla will work with the project maintainer(s) to support and implement fixes, and to manage disclosure; and
  • Mozilla will pay for the remediation work to be verified, to ensure any identified bugs have been fixed.

We have already tested this process with audits of three pieces of open source software. In those audits we uncovered and addressed a total of 43 bugs, including one critical vulnerability and two issues with a widely-used image file format. These initial results confirm our investment hypothesis, and we’re excited to learn more as we open for applications.

We all rely on open source software. We invite other companies and funders to join us in securing the open source ecosystem. If you’re a developer, apply for support! And if you’re a funder, join us. Together, we can have a greater impact for the security of open source systems and the Internet as a whole.

More information:




Daniel Stenbergcurl on windows versions

I had to ask. Just to get a notion of which Windows versions our users are actually running, so that we could get an indication of which versions we should still make an effort to keep working on. As people download and run libcurl on their own, we have no other way to figure this out.

As always when asking our audience a question, we can’t really know which part of our user base responded, and it is probably safer to assume that it is not a representative distribution of our actual user base. But it is as good as it gets: a hint.

I posted about this poll on the libcurl mailing list and over twitter. I had it open for about 48 hours. We received 86 responses. Click the image below for the full res version:

So, Windows 10, 8 and 7 are very well used, and even Vista and XP clocked in fairly high at 14% and 23%. Clearly those are Windows versions we should strive to keep supported.

For Windows versions older than XP I was sort of hoping we’d get a zero, but as you can see in the graph we have users claiming to run curl on versions as old as Windows NT 4. I even checked, and it wasn’t the same two users who ticked all three of those oldest versions.

The “Other” marks were for Windows 2008 and 2012, and bonus points for the user who added “Other: debian 7”. It is interesting that I specifically asked for users running curl on Windows to answer this survey, and yet 26% responded that they don’t use Windows at all.

Matěj Ceplvim-sticky-notes — vim support for

I have started to work on updating pastebin.vim so that it works with fpaste. The result is available in my GitLab project vim-sticky-notes. Any feedback, issue reports, and (of course) merge requests are very welcome.