Robert O'Callahan: BlinkOn 4

Last week I went to BlinkOn 4 in Sydney, having been invited by a Google developer. It was a lot of fun and I'm glad I was able to go. A few impressions:

It was good to hear talk about acting responsibly for the Web platform. My views about Google are a matter of public record, but the Blink developers I talked to have good intentions.

The talks were generally good, but there wasn't as much audience interaction as I'd expected. In my experience interaction makes most talks a lot better, and the BlinkOn environment is well-suited to interaction, so I'd encourage BlinkOn speakers and audiences to be a bit more interactive next time. I admit I didn't ask as many questions during talks as I usually do, because I felt the time belonged to actual Blink developers.

Blink project leaders felt that there wasn't enough long-term code ownership, so they formed subteams to own specific areas. It's a tricky balance between strong ownership, agile migration to areas of need, and giving people the flexibility to work on what excites them. I think Mozilla has a good balance right now.

The Blink event scheduling work is probably the only engine work I saw at BlinkOn that I thought was really important and that we're not currently working on in Gecko. We need to get going on that.

Another nice thing that Blink has that Gecko needs is the ability to do A/B performance testing on users in the field, i.e. switch on a new code path for N% of users and see how that affects performance telemetry.
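A common way to implement this kind of field experiment is deterministic bucketing: hash a stable client identifier together with the experiment name, and enable the new code path when the hash falls below the target fraction. A minimal Python sketch of the idea (the function and experiment names are illustrative, not Telemetry APIs):

```python
import hashlib

def in_experiment(client_id: str, experiment: str, percent: float) -> bool:
    """Deterministically assign a client to an experiment bucket.

    Hashing the (experiment, client_id) pair yields a stable value that
    is roughly uniform in [0, 1), so about `percent`% of clients enable
    the new code path without any server-side coordination.
    """
    digest = hashlib.sha256(f"{experiment}:{client_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0x100000000  # 8 hex digits -> [0, 1)
    return bucket < percent / 100.0

# About 10% of a large population lands in the experiment.
enabled = sum(in_experiment(str(i), "new-codepath", 10) for i in range(10000))
```

Because the assignment is a pure function of the client ID, the same client sees the same code path across sessions, which keeps the performance telemetry for the two groups comparable.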

On the other hand, we're doing some cool stuff that Blink doesn't have people working on --- e.g. image downscaling during decode, and compositor-driven video frame selection.

I spent a lot of time talking to Google staff working on the Blink "slimming paint" project. Their design is similar to some of what Gecko does, so I had information for them, but I also learned a fair bit by talking to their people. I think their design can be improved on, but we'll have to see about that.

Perhaps the best part of the conference was swapping war stories, realizing that we all struggle with basically the same set of problems, and remembering that the grass is definitely not all green on anyone's side of the fence. For example, Blink struggles with flaky tests just as we do, and deals with them the same way (by disabling them!).

It would be cool to have a browser implementors' workshop after some TPAC; a venue to swap war stories and share knowledge about how to implement all the specs we agreed on at TPAC :-).

Monica Chew: Tracking Protection for Firefox at Web 2.0 Security and Privacy 2015

My paper with Georgios Kontaxis got the best paper award at the Web 2.0 Security and Privacy workshop today! Georgios re-ran the performance evaluations on top news sites, and the decrease in page load time with tracking protection enabled is even higher (44%!) than in our Air Mozilla talk last August, due to the prevalence of embedded third-party content on news sites. You can read the paper here.

This paper is the last artifact of my work at Mozilla, since I left employment there at the beginning of April. I believe that Mozilla can make progress in privacy, but leadership needs to recognize that current advertising practices that enable "free" content are in direct conflict with security, privacy, stability, and performance concerns -- and that Firefox is first and foremost a user-agent, not an industry-agent.

Advertising does not make content free. It merely externalizes the costs in a way that incentivizes malicious or incompetent players to build things like Superfish, infect 1 in 20 machines with ad injection malware, and create sites that require unsafe plugins and take twice as many resources to load, quite expensive in terms of bandwidth, power, and stability.

It will take a major force to disrupt this ecosystem and motivate alternative revenue models. I hope that Mozilla can be that force.

Monica Chew: Firefox 32 supports Public Key Pinning

Public Key Pinning helps ensure that people are connecting to the sites they intend. Pinning allows site operators to specify which certificate authorities (CAs) issue valid certificates for them, rather than accepting any one of the hundreds of built-in root certificates that ship with Firefox. If any certificate in the verified certificate chain corresponds to one of the known good certificates, Firefox displays the lock icon as normal.

Pinning helps protect users from man-in-the-middle attacks and rogue certificate authorities. When the root cert for a pinned site does not match one of the known good CAs, Firefox will reject the connection with a pinning error. This type of error can also occur if a CA mis-issues a certificate.
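Conceptually, the check compares hashes of public keys in the verified chain against a site's pin set. A simplified Python sketch of that rule, using made-up key material rather than real DER-encoded certificates:

```python
import hashlib

def spki_hash(spki_der: bytes) -> str:
    """SHA-256 digest of a certificate's SubjectPublicKeyInfo (DER bytes)."""
    return hashlib.sha256(spki_der).hexdigest()

def chain_is_pinned(chain_spkis, pinned_hashes) -> bool:
    # One match anywhere in the verified chain is enough for the
    # lock icon to appear as normal.
    return any(spki_hash(spki) in pinned_hashes for spki in chain_spkis)

# Toy example with made-up key material (not real certificates):
pins = {spki_hash(b"known-good-CA-key")}
assert chain_is_pinned([b"site-key", b"known-good-CA-key"], pins)  # lock icon
assert not chain_is_pinned([b"site-key", b"rogue-CA-key"], pins)   # pinning error
```

Note the check runs only on an already-verified chain; pinning narrows which of the trusted roots are acceptable for a given site, it does not replace normal certificate validation.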

Pinning errors can be transient. For example, if a person is signing into WiFi, they may see an error like the one below when visiting a pinned site. The error should disappear if the person reloads after the WiFi access is set up.

Firefox 32 and above support built-in pins, which means that the list of acceptable certificate authorities must be set at build time for each pinned domain. Pinning is enforced by default. Sites may advertise their support for pinning with the Public Key Pinning Extension for HTTP, which we hope to implement soon. Pinned domains include addons.mozilla.org and Twitter in Firefox 32, and Google domains in Firefox 33, with more domains to come. That means that Firefox users can visit Mozilla, Twitter and Google domains more safely. For the full list of pinned domains and rollout status, please see the Public Key Pinning wiki.

Thanks to Camilo Viecco for the initial implementation and David Keeler for many reviews!

Air Mozilla: German speaking community bi-weekly meeting

Bi-weekly meeting of the German-speaking community (Zweiwöchentliches Meeting der deutschsprachigen Community).

Mozilla WebDev Community: Beer and Tell – May 2015

Once a month, web developers from across the Mozilla Project get together to organize our political lobbying group, Web Developers Against Reality. In between sessions with titles like “Three Dimensions: The Last Great Lie” and “You Aren’t Real, Start Acting Like It”, we find time to talk about our side projects and drink, an occurrence we like to call “Beer and Tell”.

There’s a wiki page available with a list of the presenters, as well as links to their presentation materials. There’s also a recording available courtesy of Air Mozilla.

Groovecoder: WellHub

Groovecoder stopped by to share WellHub, a site for storing and visualizing log data from wells. The site was created for StartupWeekend Tulsa, and uses WebGL (via ThreeJS) + WebVR to allow for visualization of the wells based on their longitude/latitude and altitude using an Oculus Rift or similar virtual reality headset.

Osmose: Refract

Next up was Osmose (that’s me!), who shared some updates to Refract, a webpage previously shown at Beer and Tell that turns any webpage into an installable application. The main change this month was the addition of support for generating Chrome Apps in addition to the Open Web Apps it already supported.


This month’s session was a productive one, up until a pro-reality plant asked why we were having a real-life meetup for an anti-reality group, at which point most of the people in attendance began to scream uncontrollably.

If you’re interested in attending the next Beer and Tell, sign up for the dev-webdev@lists.mozilla.org mailing list. An email is sent out a week beforehand with connection details. You could even add yourself to the wiki and show off your side-project!

See you next month!

Aaron Thornburgh: Why a Triceratops?

How to represent everyone without representing anyone.

Main image - Detail of infographic

Illustrating something highly-technical is more about storytelling than it is about design. My personal process often starts with a deluge of diagrams, wiki pages, stakeholder meetings, and follow-up discussions with engineers. Once I finally understand the details myself, it’s then my job to distill all that raw information into a single, coherent story.

That’s where the plot usually takes an interesting detour.

+++++

The Content Services team recently asked me to develop an infographic depicting “How user data is protected on Firefox New Tab” (PDF – 633 kB). The narrative itself was easy to illustrate because I had tremendous help from my teammates. But regardless of the refinements I continued making to the design, a crucial element always remained conspicuously absent:

The main character.

In this case, the main character was a Firefox User. My principal challenge, of course, was representing a person of any age, gender, ethnicity or language from around the globe. Secondarily, I wanted readers to feel something – maybe even smile. But most importantly, I wanted readers to clearly identify the User as the star of the infographic.

In other words, I needed a good mascot.

Folks don’t generally connect with the generic on an emotional level, so I instinctively knew that flat, vaguely male or female silhouettes would be overly general for a global audience.

Maybe an animal? The Firefox mascot is a fox, after all, and small furry creatures are inherently disarming. I quickly discovered, though, that many animals could be interpreted as personality types or even specific nations. Every option seemed close to the mark, but fell short upon further reflection.

Then the obvious roared in my face.

Historically, Mozilla has been represented by a dinosaur. And not the dead-fossil kind, either, but a living, breathing carnivore. I’ve always liked that image. The Mozilla T-rex, however, wasn’t the star of the story (and Mozillians aren’t all that carnivorous, anyway). Still, I could easily build upon this imagery without fear of alienating any particular person or group.

In the end, the species I chose to represent Users is one of the most recognizable. Besides being herbivores (which somehow seemed more appropriate), Triceratops command attention and demand respect. They’re creatures who appeal to our cooperative, yet intensely protective, instincts. They’re  important, impossible to ignore.

And when they’re smiling, it’s hard not to love them.

Done and done.


Air Mozilla: May Brantina: Onboarding and the Cost of Team Debt with Kate Heddleston

At our May Brantina (Breakfast + Cantina), we'll be joined by Kate Heddleston, a software engineer in San Francisco. Kate will share how effective onboarding...

Mozilla Privacy Blog: Putting Our Data Privacy Principles Into Action

In November, we told you about Mozilla’s updated Data Privacy Principles, which inform how we build products, manage user data, and select and interact with partners. Today, Mozilla’s Content Services team is announcing its latest innovation in Web advertising – …

Advancing Content: Providing a Valuable Platform for Advertisers, Content Publishers, and Users

Mozilla has a long history of innovating with how users interact with content: tabs, add-ons, live bookmarks, the Awesome bar – these and many more innovations  have helped the Web to dominate desktop computing for the last decade. Six months ago we launched Directory Tiles in Firefox, and have had great success with commercial partnerships and in aiding awareness for content important to the project, including Mozilla advocacy campaigns in support of net neutrality and the Mozilla Manifesto.

Today, I’m pleased to announce Suggested Tiles – our latest innovation and complement to Directory Tiles, as we work to create a more powerful and personalized Web experience for our users.  I discussed the Mozilla mission in the context of digital advertising earlier this year.  Suggested Tiles represents an important step for us to improve the state of digital advertising for the Web, and to deliver greater user agency.

Much of today’s digital advertising utilizes data harvested from a user’s browsing habits to target ads. However, many consumers are increasingly wary of how their data is collected and shared in the advertising ecosystem without transparency and consent – and complex opt-outs and unreadable privacy policies exacerbate this. Many users even block advertisements altogether. This situation is bad for users, bad for advertisers and bad for the Web.

With Suggested Tiles, we want to show the world that it is possible to do relevant advertising and content recommendations while still respecting users’ privacy and giving them control over their data.  And to bring influence to bear on the whole industry, we know we will need to deliver a highly effective advertising product.

A Suggested Tile In Context


We believe users should be able to easily understand what content is promoted, who it is from, and why they are seeing it. It is the user who owns the profile: only a Firefox user can edit their own browsing history. And for users who do not want to see Suggested Tiles, opting out takes only two clicks from the New Tab page, without having to read a lot of instructions. To deliver Suggested Tiles we do not retain or share personal data, nor do we use cookies. If you want to learn more about how Suggested Tiles protects a user’s data, we produced this infographic, and the Mozilla policy team has described how our data principles translate into the data policy for Suggested Tiles.

New Tab Controls


Suggested Tiles are controlled by the user, respect their privacy and are not directed towards a captive audience.  As different as this sounds, we believe that this makes Tiles a better experience for users and for advertisers.

Suggested Tiles will help advertisers and content owners connect with millions of Firefox users, and do so at a time when the user is receptive to hearing from them, making it a much more valuable connection. By delivering content experiences based on the user’s recent and most frequent browsing, we know when content will have high relevance.  And because we are delivering this content early in a browsing session – rather than mixed in with the user’s activity – we know they are more likely to engage with it.  We already have some very satisfied partners for Directory Tiles, and I am confident that Suggested Tiles will deliver even higher levels of engagement.

For partners who are interested in getting involved with the Suggested Tiles initiative, we have a site where you can learn more and register your interest: http://content.mozilla.org.

So what happens next? Suggested Tiles will go to Beta soon and then live later in the summer. Users will first see “Affiliate” Tiles advertisements for other Mozilla causes and Firefox products before Suggested Tiles from our content partners appear. Note that we’ll be rolling out the product in phases, starting with Firefox users in the US.

If you have any questions about how Suggested Tiles will work, need more information or want to explore a potential partnership with us, please visit content.mozilla.org.

This is still one of our early steps towards our goal of improving the state of digital advertising for the Web – delivering greater transparency for advertisers, better, more relevant content experiences and, above all, greater control for Firefox users.

Daniel Stenberg: status update: http2 multiplexed uploads

I wrote a previous update about my work on multiplexing in curl. This is a follow-up to describe the status as of today.

I’ve successfully used the http2-upload.c code to upload 600 parallel streams to the test server and they were all sent off fine and the responses received were stored fine. MAX_CONCURRENT_STREAMS on the server was set to 100.

This is using curl git master as of right now (thus scheduled for inclusion in the pending curl 7.43.0 release).  I’m not celebrating just yet, but it is looking pretty good. I’ll continue testing.

Commit b0143a2a3 was crucial for this, as I realized we stored and used the read callback not in the easy handle but in the connection struct – which is completely wrong when many easy handles are using the same connection! I don’t recall the exact reason why I put the data in that struct (I went back and read the commit messages etc), but I think this setup is correct conceptually and code-wise, so if it leads to some side-effects I think we need to just fix it.
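The fix is easy to illustrate in miniature: per-transfer state (like the read callback) belongs on the transfer object, while the shared connection only maps HTTP/2 stream IDs to transfers. A toy Python sketch of that separation (not libcurl's actual structures):

```python
class EasyHandle:
    """Stands in for a curl easy handle: one transfer, one read callback."""
    def __init__(self, payload: bytes):
        self._payload = payload

    def read_callback(self) -> bytes:
        return self._payload  # per-transfer upload source

class Http2Connection:
    """Stands in for the shared connection: it only maps streams to transfers."""
    def __init__(self):
        self._streams = {}  # stream id -> EasyHandle

    def add_transfer(self, stream_id: int, handle: EasyHandle) -> None:
        self._streams[stream_id] = handle

    def send_stream(self, stream_id: int) -> bytes:
        # Look the callback up on the transfer, not on the connection;
        # a single callback stored here would serve the wrong data to
        # every stream but the last one added.
        return self._streams[stream_id].read_callback()

conn = Http2Connection()
conn.add_transfer(1, EasyHandle(b"payload A"))
conn.add_transfer(3, EasyHandle(b"payload B"))
```

With the callback on the connection instead, both streams would have uploaded the same payload, which is exactly the class of bug the commit fixed.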

Next up: more testing, and then taking on the concept of server push to make libcurl able to support it. It will certainly be a subject for future blog posts…

cURL

Mozilla Security Blog: MozDef: The Mozilla Defense Platform v1.9

At Mozilla we’ve been using The Mozilla Defense Platform (lovingly referred to as MozDef) for almost two years now and we are happy to release v1.9. If you are unfamiliar, MozDef is a Security Information and Event Management (SIEM) overlay for ElasticSearch.

MozDef aims to bring real-time incident response and investigation to the defensive tool kits of security operations groups in the same way that Metasploit, LAIR and Armitage have revolutionized the capabilities of attackers.

We use MozDef to ingest security events, alert us to security issues, investigate suspicious activities, handle security incidents and to visualize and categorize threat actors. The real-time capabilities allow our security personnel all over the world to work collaboratively even though we may not sit in the same room together and see changes as they occur. The integration plugins allow us to have the system automatically respond to attacks in a preplanned fashion to mitigate threats as they occur.

We’ve been on a monthly release cycle since the launch, adding features and squashing bugs as we find them. You can find the release notes for this version here.

Notable changes include:

  •  Support for Google API logs (login/logout/suspicious activity for Google Drive/Docs)
  •  http://cymon.io API integration
  •  Myo armband integration

Using the Myo armband in a TLS environment may require some tweaking to allow the browser to connect to the local Myo agent. Look for a how-to in the docs section soon.

Feel free to take it for a spin on the demo site. You can login by creating any test email/password combination you like. The demo site is rebuilt occasionally so don’t expect anything you put there to live for more than a couple days but feel free to test it out.

Development for the project takes place at mozdef.com, and you can report any issues using the GitHub issue tracker.

Air Mozilla: Kids' Vision - Mentorship Series

Mozilla hosts the Kids' Vision Bay Area Mentor Series.

Mozilla Addons Blog: Add-ons Update – Week of 2015/05/20

I post these updates every 3 weeks to inform add-on developers about the status of the review queues, add-on compatibility, and other happenings in the add-ons world.

The Review Queues

  • Most nominations for full review are taking less than 10 weeks to review.
  • 194 nominations in the queue awaiting review.
  • Most updates are being reviewed within 6 weeks.
  • 112 updates in the queue awaiting review.
  • Most preliminary reviews are being reviewed within 9 weeks.
  • 222 preliminary review submissions in the queue awaiting review.

If you’re an add-on developer and would like to see add-ons reviewed faster, please consider joining us. Add-on reviewers get invited to Mozilla events and earn cool gear with their work. Visit our wiki page for more information.

Firefox 38 Compatibility

The Firefox 38 compatibility blog post is up. The automatic AMO validation was already run. There’s a second blog post covering the upcoming 38.0.5 release and in-content preferences, which were an oversight in the first post.

Firefox 39 Compatibility

The Firefox 39 compatibility blog post is up. I don’t know when the compatibility validation will be run yet.

As always, we recommend that you test your add-ons on Beta and Firefox Developer Edition (formerly known as Aurora) to make sure that they continue to work correctly. End users can install the Add-on Compatibility Reporter to identify and report any add-ons that aren’t working anymore.

Extension Signing

We announced that we will require extensions to be signed in order for them to continue to work in release and beta versions of Firefox. A followup post was published recently, addressing some of the reasons behind this initiative.

A couple notable things are happening related to signing:

  • Signing will be enabled for AMO-listed add-ons. This means that new versions will be automatically signed, and the latest versions of all listed add-ons will also be signed. Expect this to happen within a week or so (developers will be emailed when this happens). Signing for unlisted (non-AMO) add-ons is still not enabled.
  • The signature verification code is now active on Developer Edition, in case you want to try it out with unsigned extensions. The preference is set to warn about unsigned extensions, but still accept and install them. You can use Developer Edition to test your extensions after we let you know they’ve been signed.
  • A new Developer Agreement will be published on AMO. This is a significant update over the current years-old agreement, covering signing, listed and unlisted add-ons, themes, and other developments that have happened since. Developers will be notified when the new agreement is up.

Electrolysis

Electrolysis, also known as e10s, is the next major compatibility change coming to Firefox. In a nutshell, Firefox will now use multiple processes, running each content tab in its own process. This should improve responsiveness and overall stability, but it also means many add-ons will need to be updated to support it.

We will be talking more about these changes in this blog in the future. For now we recommend you start looking at the available documentation.

Jim Chen: Post Fennec logs to Pastebin with LogView add-on

The LogView add-on for Fennec now lets you copy the logcat to clipboard or post the logcat to pastebin.mozilla.org. Simply go to the about:logs page from Menu → Tools → Logs and tap on “Copy” or “Pastebin”. This feature is very useful if you encounter a bug and need the logs, but you are not next to a computer or don't have the Android SDK installed.

(Screenshots: copy to clipboard; posting to Pastebin; posted to Pastebin.)


Last modified: 2015/05/20 15:49

Air Mozilla: Product Coordination Meeting

Duration: 10 minutes. This is a weekly status meeting, every Wednesday, that helps coordinate the shipping of our products (across 4 release channels) in order...

Joel Maher: re-triggering for a [root] cause – version 1.57

Last week I wrote some notes about re-triggering jobs to find a root cause.  This week I decided to look at the orange factor email of the top 10 bugs and see how I could help.  Looking at each of the 10 bugs, I had 3 worth investigating and 7 I ignored.

Investigate:

  • Bug 1163911 test_viewport_resize.html – new test which was added 15 revisions back from the first instance in the bug.  The sheriffs had already worked to get this test disabled prior to my results coming in!
  • Bug 1081925 browser_popup_blocker.js – previous test in the directory was modified to work in e10s 4 revisions back from the first instance reported in the bug, causing this to fail
  • Bug 1118277 browser_popup_blocker.js (different symptom, same test pattern and root cause as bug 1081925)

Ignore:

  • Bug 1073442 – Intermittent command timed out; might not be code related and >30 days of history.
  • Bug 1096302 test_collapse.html | Test timed out. >30 days of history.
  • Bug 1151786 testOfflinePage. >30 days of history. (and a patch exists).
  • Bug 1145199 browser_referrer_open_link_in_private.js. >30 days of history.
  • Bug 1073761 test_value_storage.html. >30 days of history.
  • Bug 1161537 test_dev_mode_activity.html. resolved (a result from the previous bisection experiment).
  • Bug 1153454 browser_tabfocus.js. >30 days of history.

Looking at the bugs of interest, I jumped right into retriggering.  This time around I did 20 retriggers for the original changeset, then went back 30 revisions (every 5th) doing the same thing.  Effectively this was doing 20 retriggers for each of the 0th, 5th, 10th, 15th, 20th, 25th, and 30th revisions in the history list (140 retriggers).
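That schedule is simple enough to express in code; a small Python sketch of the plan described above:

```python
def retrigger_plan(depth: int = 30, step: int = 5, per_rev: int = 20):
    """Revisions (counting back from the failing push) to retrigger:
    every `step`-th revision out to `depth`, `per_rev` retriggers each."""
    revisions = list(range(0, depth + 1, step))
    return revisions, len(revisions) * per_rev

revs, total = retrigger_plan()
# revs covers the 0th through 30th revisions in steps of 5;
# total is the 140 retriggers mentioned above.
```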

I ran into issues doing this, specifically on Bug 1073761, because for about 7 revisions in history the Windows 8 builds failed!  Luckily the builds finished enough to produce a binary+tests package so we could run tests, but mozci didn’t understand that the build was available, which required some manual retriggering.  In a few cases on both bugs the retriggers hit actual build failures, which meant manually picking a different revision to retrigger on.  It was then fairly easy to run my tool again and fill in the 4 missing revisions using slightly different mozci parameters.

This was a bit frustrating as there was a lot of manual digging and retriggering due to build failures.  Luckily 2 of the top 10 bugs are the same root cause and we figured it out.  Including irc chatter and this blog post, I have roughly 3 hours invested into this experiment.


Mozilla Release Management Team: Firefox 38.0.5 beta 2 to 38.0.5 beta 3

As with all the releases of this special 38.0.5 cycle, we mostly took patches for the Pocket integration, some reader view improvements, and stability fixes.

The next version should be 38.0.5 rc1 (go to build Thursday, go live Friday).

  • 21 changesets
  • 38 files changed
  • 393 insertions
  • 132 deletions

Extension (occurrences):

  • js: 6
  • jsm: 5
  • cpp: 5
  • properties: 4
  • ini: 3
  • html: 3
  • txt: 2
  • inc: 2
  • h: 2
  • xul: 1
  • sh: 1
  • nsi: 1
  • mk: 1
  • dtd: 1
  • css: 1

Module (occurrences):

  • browser: 21
  • dom: 7
  • testing: 3
  • widget: 2
  • mobile: 2
  • toolkit: 1
  • editor: 1
  • config: 1

List of changesets:

Gijs Kruitbosch: Bug 1160775 - fix reader mode detection to force 1 flush so we don't think the entire page is invisible, r=margaret a=lmandel - 93b96d846d47
Margaret Leibovic: Bug 1152412 - Handle errors downloading and parsing documents for reader view. r=bnicholson a=lmandel - 964442785c00
Gijs Kruitbosch: Bug 1134501 - add way for UITour'd page to force-show the reader mode button, r=margaret a=lmandel - 5741ccc7bb74
Kartikaya Gupta: Bug 1163640 - Fix the test for Bug 417418 to not leave the widget in a drag session. r=ehsan, a=test-only - df02fefaa438
Mike Shal: Bug 1122746 - Ignore *.pyc in zip instead of removing them. r=ted, a=test-only - 06cc113b476f
James Graham: Bug 1135515 - Fix relevant mutations tests to avoid intermittent issues. a=test-only - 82e59df1da4e
Martin Thomson: Bug 1158296 - Allow ECDSA key export in WebCrypto. r=rbarnes, a=sledru - 1a8cd9f5bdad
Karl Tomlinson: Bug 1159456 - Finish and exit from Flush() even if MFTManager rejects sample. r=cpearce, a=sledru - 825e8ac4ab29
Jan-Ivar Bruaroey: Bug 1150539 - getUserMedia: default to aPrefs.mFPS, not aPrefs.mMinFPS. r=jesup, a=lizzard - cfa10b9f0f9d
Jan-Ivar Bruaroey: Bug 1162412 - Part 1: Don't treat plain facingMode constraint as required. r=jesup, a=lmandel - c14434ed2197
Jan-Ivar Bruaroey: Bug 1162412 - Part 2: Order devices by shortest fitness distance. r=jesup, a=lmandel - 9c1d3c0257ec
Jan-Ivar Bruaroey: Bug 1162412 - Part 3: Treat plain values as exact in advanced. r=jesup, a=lmandel - e3045256cb27
Jeff Muizelaar: Bug 1157784 - Avoid compositing at the same time as WM_SETTEXT. r=bas, f=jimm, a=sledru - ecbce7532a0a
Jared Wein: Bug 1162713 - Implement "Save Link to Pocket" context menu item. r+a=dolske, l10n=dolske - e13a7a312aa5
Gijs Kruitbosch: Bug 1164940 - Lazily create iframe. r=jaws, a=sledru - 24524667128b
Gijs Kruitbosch: Bug 1164426 - Build reader mode blocklist. r=margaret, a=sledru - 6ec85b777880
Robert Strong: Bug 1165135 - Distribution directory not removed on pave over install. r=spohl, a=sledru - 4bfd19d00ed4
Jared Wein: Bug 1163917 - Remove the widget from its area if the conditionalDestroy promise is resolved truthy. r=gijs, a=sledru - f9328a6ea6bd
Nate Weiner: Bug 1165416 - Update Pocket code to latest version (May 15th code drop). r=dolske, r=jaws, a=sledru - e6f89a184268
Jared Wein: Bug 1160407 - Redirect links within the Pocket panel to open in non-private windows when temporary Private Browsing is used. r=dolske, a=sledru - f5828f333524
Gijs Kruitbosch: Bug 1147487 - Don't try to reader-ize non-HTML documents. r=margaret, r=jaws, a=lmandel - f44dff585598

Mike Conley: Lost in Data!

Keeping Firefox zippy involves running performance tests on each push to make sure we’re not making Firefox slower.

How does that even work? This used to be a mystery. NO LONGER. jmaher lets you peek behind the curtain here in the first episode of Lost in Data!

Christie Koehler: The Recompiler: Now with more podcast!

If you’ve been watching my tweetstream recently, you know that The Recompiler (@recompilermag), a magazine about technology, is in the final hours of its inaugural subscription drive.

Yesterday, Audrey announced that we’re going to create a podcast version of The Recompiler!

Some of you may have listened to In Beta, which I co-hosted last year. Doing that podcast was great fun and I’m so looking forward to hosting this supplement to The Recompiler. The podcast will enhance the written version of the magazine with tech news, criticism & commentary plus interviews with our authors.

If you’re craving awesome, insightful conversation on technical topics from fresh, less-heard-from voices, then The Recompiler podcast is for you!

Get involved and support The Recompiler today by purchasing a subscription and look for the first written issue and episode this summer!

Gavin Sharp: leaving mozilla

I started contributing to Mozilla nearly 11 years ago, in 2004, and joined as a full-time employee over 8 years ago. Suffice it to say, Mozilla has been a big part of my life in a lot of ways. I’ve dedicated essentially my entire career so far to Mozilla, and it introduced me to my wife, just to name a couple.

Mozilla’s in a great place now – still a tough challenge ahead, but plenty of great people willing and able to tackle it. Firefox has new leadership, the team is being re-organized and groups merged together in what I think are the right ways, and I’m optimistic about their prospects. But it’s time for me to move on and find The Next Thing – the fact that I have no idea what that is yet is both exciting and a little bit terrifying, but I feel good about it.

It will probably take me a while to figure out what exactly a post-Mozilla Gavin-life looks like, given the various ways Mozilla’s been injected into my life. I won’t disappear entirely, certainly, and I know the many relationships I’ve built through Mozilla will continue to be a huge part of my life.

I’m looking forward to what’s next!

Air Mozilla: Lost in Data - Episode 1

Join Joel Maher in a live hacking session where he is triaging and investigating Firefox performance alerts.

Mozilla Open Policy & Advocacy Blog: Mozilla Advocacy – 2015 Plan

Mozilla Advocacy — Our 2015 Plan for Protecting and Advancing the Open Web

Advocacy is a relatively new area of focus for Mozilla. Our increased emphasis on advocacy is born out of the recognition that, like code, public policy has an impact on the shape and health of the open web — and that a vital force protecting the web will be the millions of people who consider themselves to be citizens of the web.

Over the next few weeks, the Mozilla Advocacy team — including Andrea Wood, Director of Digital Advocacy and Fundraising; Melissa Romaine, Advocacy Manager; Chris Riley, Head of Public Policy; Stacy Martin, Senior Manager of Privacy and Engagement; Jochai Ben-Avie, Internet Policy Manager; and Alina Hua, Senior Data Privacy Manager — will lay out our latest thinking about how we’re developing public policy and creating advocacy initiatives.

Our goal with Mozilla Advocacy is to advance the Mozilla mission by empowering people to create measurable changes in public policy to protect the Internet as a global public resource, open and accessible to all. Our three strategies to achieve this goal are:

  1. Leadership Development — Grow a global cadre of leaders — activists, technologists, policy experts — who advance the free and open web.

  2. Community — Assist, grow, and enable the wider policy & advocacy community.

  3. Grassroots Advocacy — Run issue-based campaigns to grow mainstream engagement with Mozilla and open web issues.

Each of these strategies ties directly to the goal of empowering people. Yet, as we execute, there are still open questions that need input and more thought from the community. For instance, how can we create better scale and participation, recognizing that real impact happens when the community is empowered to take action on policy and advocacy initiatives? A key to this is making our own policy positions and advocacy efforts easier for people to understand and engage with.

We need you to play an active role. Because the web is growing in markets where we are not experts, the Mozilla community will play a central role in scaling efforts to protect the open web throughout the world. We invite you to help shape our thinking by reading the 2015 Policy & Advocacy Plan and offering input through this thread in the Mozilla Advocacy Community.

–Dave Steer, Director of Advocacy, Mozilla

Mozilla Addons BlogWhich T-shirt Design is Your Favorite?

The judges have selected their three favorite designs for a new AMO t-shirt, and now it’s your turn to tell us which one you’d like to see printed. The shirts will be sent to add-on developers as a thank-you gift, so choose wisely!

The deadline to vote is Tuesday, June 2.

Michael Kaplydistribution/bundles Directory Gone in Firefox 40

Bug 1144127 was checked in. This means that starting in Firefox 40, placing add-ons in the distribution/bundles directory will no longer work.

For many years I recommended distribution/bundles as the best place for enterprises to deploy non-bootstrapped extensions. It allowed them to make their extensions part of core Firefox and prevent users from removing them. Unfortunately, adware/spyware folks started using this method as well, so we lost this ability. (This is why we can't have nice things.)

So what does this mean going forward?

  • You will no longer be able to disable safe mode. You can set the environment variable MOZ_DISABLE_SAFE_MODE_KEY to disable the startup shortcut, or set MOZ_DISABLE_AUTO_SAFE_MODE to prevent crashes from triggering safe mode, but a user will always be able to start Firefox in safe mode from the command line.
  • It's much more difficult to prevent a user from disabling any extensions you need to add for your company. You'll probably need to do something evil like hiding them inside the add-ons manager. You can contact me if you need code to do that.
  • AutoConfig now becomes the preferred method for doing pretty much any Firefox configuration (since you can't place a custom extension into the distribution/bundles directory).
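For readers unfamiliar with AutoConfig, a minimal sketch looks like the following. The file names follow the standard AutoConfig convention; the specific prefs locked here are only illustrative examples, not recommendations:

```javascript
// defaults/pref/autoconfig.js (in the Firefox install directory):
// points Firefox at the config file below.
pref("general.config.filename", "mozilla.cfg");
pref("general.config.obscure_value", 0); // plain text, not byte-shifted

// mozilla.cfg (in the top of the Firefox install directory).
// Note: the first line of this file must be a comment.
lockPref("app.update.enabled", false);  // lock a pref so users can't change it
defaultPref("browser.startup.homepage", "https://intranet.example.com");
```

lockPref() greys the pref out in about:config, while defaultPref() only changes the default and still lets users override it.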

I'm actively working on making CCK2 work without the distribution directory. The latest beta is here. Obviously some features will be lost at first; I hope to bring as many back as I can. I hope it will be ready by the end of the week.

As a side note, this means that many of my blog posts will have incorrect information. I'm still trying to figure out how to solve that going forward.

Adam LoftingThe importance of retention rates, explained by @bbalfour

In my last post I shared a tool for playing with the numbers that matter for growing a product or service (i.e. conversion, retention, and referral rates).

This video of a talk by Brian Balfour is a perfect introduction / guide to watch if you’re also playing with that tool. In particular, the graphs from 1:46 onwards.
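To see why retention dominates these calculations, here is a toy growth model in the same spirit (the function, parameter names, and numbers are my own illustration, not taken from the talk or the tool):

```python
def simulate_growth(months, new_per_month, retention_rate, referral_rate):
    """Toy user-growth model: each month we keep a fraction of existing
    users (retention), add a fixed number of new sign-ups, and gain
    referrals proportional to the current user base."""
    users = 0.0
    for _ in range(months):
        referrals = users * referral_rate
        users = users * retention_rate + new_per_month + referrals
    return users

# A modest improvement in retention compounds into a much larger user base:
low = simulate_growth(24, 1000, 0.70, 0.05)   # 70% monthly retention
high = simulate_growth(24, 1000, 0.80, 0.05)  # 80% monthly retention
```

With the numbers above, the 80%-retention curve ends up well ahead of the 70% one after two years, which is exactly the effect the talk's graphs illustrate.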

Mozilla Release Management TeamFirefox 38.0.5b1 to 38.0.5b2

This beta release mostly contains Pocket changes.

  • 23 changesets
  • 45 files changed
  • 549 insertions
  • 136 deletions

Extension    Occurrences
js           13
jsm          5
css          4
mn           3
cpp          3
ini          2
handlebars   2
h            2
xul          1
txt          1
svg          1
java         1
inc          1

Module       Occurrences
browser      27
toolkit      5
image        4
mobile       1
gfx          1
config       1
browser      1

List of changesets:

tbirdbldAutomated checkin: version bump for thunderbird 38.0b5 release. DONTBUILD CLOSED TREE a=release - 88aaccce3910
Nick ThomasBacked out changeset 88aaccce3910, a thunderbird specific version change on default DONTBUILD CLOSED TREE, a=release - 983ca4a03205
David MajorBug 1154703 - Fix typo in nvdxgiwrap filename. r=jrmuizel, a=lmandel - fff54632eedd
Matthew NoorenbergheBug 1162205 - Don't import encrypted cookies from Chrome. r=mak a=lmandel - c5a80a2102b6
Seth FowlerBug 1161859 - Compute the size of animated image frames correctly in the SurfaceCache. r=dholbert, a=lmandel - aa3a683fd335
Garvan KeeleyBug 1164468 - Boolean got incorrectly flipped and stumbling uploads stopped. r=rnewman, a=lmandel - 4aac185d033d
Blake WintonBug 1158289 - Use ems to keep the Reader View's line length between 45 and 75 characters. ui-r=mmaslaney, r=margaret, a=lmandel - 5fff1e20ed9c
Seth FowlerBug 1161859 (Followup) - Correct nsIntSize / IntSize mismatch in Decoder.cpp on a CLOSED TREE. a=KWierso - 5da39cd23ade
Jared WeinBug 1162316 - Update the Pocket Toolbar @2x asset on OSX with the correct aspect ratios. r=dolske a=dolske - 21c86665a21d
Jared WeinBug 1155517 - Change Reader View to have a "Save Page to Pocket" button instead of "Add To Reader List". r=dolske a=dolske - 921eb304600e
Nate WeinerBug 1163576 - Pages that were only added to Pocket by one user failed to get removed. r=jaws a=dolske - f7f9fc975cdc
Jared WeinBug 1163651 - [Windows]View Pocket List icon from Bookmarks menu is missing. r=dolske a=dolske - 01c7b55e4a28
Nate WeinerBug 1164161 - Panel dictionary file missing entries for some languages. r=jaws a=dolske - 98b2f2b5af65
Justin DolskeBug 1164253 - Save request is sent twice for every button press. r=jaws a=dolske - 89ef57a1733a
Justin DolskeBug 1164208 - Update Pocket code to latest version (May 11th code drop) r=jaws a=dolske - 55c04a549775
Nate WeinerBug 1163411 - Update View Pocket Menu Link. r=jaws a=dolske - 9a7a198e1b06
Gijs KruitboschBug 1164302 - pocket button gets lost after a restart, r=jaws a=dolske - d5ba1bc97911
Matthew NoorenbergheBug 1161810 - UITour: Allow opening the Pocket panel via showMenu("pocket"). r=jaws a=dolske - 06499c7a81a9
Gijs KruitboschBug 1164410 - fix l10n use in pocket, r=jaws a=dolske - 48eaac80d6b5
Justin DolskeBug 1164407 - Pocket not enabled on ja builds under Mac OS X. r=adw a=dolske - 99ea3c3c13f6
Nate WeinerBug 1164419 - [OSX] Pocket panel for ru locale build has misaligned elements. r=dolske a=dolske - f724af08988f
Nate WeinerBug 1164698 - Update Pocket code to latest version (May 13th code drop). r=dolske a=dolske - 11c4678a21bb
Jared WeinBug 1163519 - Add in missing CustomizableUI getter to ReaderParent.jsm. r=gijs, a=dolske - 195e873a8ab1

Mozilla Release Management TeamFirefox 38.0.1 to 38.0.5b1

For this first beta of this special cycle, we took two kinds of changes: the Pocket feature and stability fixes.

  • 59 changesets
  • 226 files changed
  • 12063 insertions
  • 919 deletions

Extension    Occurrences
java         36
js           29
css          16
mn           8
jsm          8
xul          5
properties   5
html         5
handlebars   5
gradle       5
cpp          5
inc          4
h            4
txt          3
ini          3
in           3
build        3
xml          2
sh           2
rst          2
cfg          2
py           1
json         1
hgtags       1
dtd          1
cc           1

Module       Occurrences
mobile       69
browser      38
browser      30
toolkit      11
layout       7
browser      6
testing      3
media        3
dom          3
config       1

List of changesets:

Shane CaraveoBug 936426 - Fix intermittent error, reduce testing to what we actually need here. r=markh, a=test-only - f33925faccee
Ryan VanderMeulenMerge release to beta. CLOSED TREE - f84585d763a5
Rail AliievBug 1158760 - Wrong branding on the 38 Beta 8, backout d27c9211ebb3. IGNORE BROKEN CHANGESETS CLOSED TREE a=release ba=release - b91226cec861
Ryan VanderMeulenBacked out changeset b1bfde2ccb22 to revert back to beta branding while Fx 38.0.5 is still shipping betas. - 27bacb9dff64
Ed LeeBug 1161245 - Backout Suggested Tiles (Bug 1120311) from 38.0.5 [a=sylvestre, a=lmandel] - 9a494b64194e
Margaret LeibovicBug 1144822 - Hide elements with common hidden class names in reader content. r=Gijs, a=sledru - e4a70d181871
Margaret LeibovicBug 1154028 - Move reader content styles to scoped style sheet. r=Gijs, a=sledru - 80a9584ac5e4
Margaret LeibovicBug 1154028 - Move reader controls styles to scoped style sheet. r=Gijs, a=sledru - c64ca42b7490
Blake WintonBug 1158302 - Increase the Font Size of Reader's H1 and H2 Headers. ui-r=mmaslaney, r=Gijs, a=lizzard - 3058929d4335
Blake WintonBug 1158294 - Increase Reader Views Default Type Size. ui-r=mmaslaney, r=margaret, a=lizzard - 8cba8416a229
Matthew NoorenbergheBug 1134507 - Implement infopanel to promote Reader View when first available. r=Gijs, a=sledru - f53c601dafa3
Blake WintonBug 1158281 - Match Pocket's Reader View Sepia Theme. ui-r=mmaslaney, r=margaret, a=sledru - 810e81a9bced
Gijs KruitboschBug 1154063 - Fix CSS issue in aboutReader.css. r=bwinton, a=sledru - cc2718d0f570
Gijs KruitboschBug 1158322 - force-display-none the toolbar and footer when printing. r=margaret, a=sledru - 16cdaa6a3712
Ryan VanderMeulenBug 1131931 - Skip various tests on OSX and Windows debug for intermittent crashes. a=test-only - 010ace914d50
Morris TsengBug 1151111 - Append iframe2 after iframe1 has loaded. r=kchen, a=test-only - e4e557754405
Maire ReavyBug 1159659 - Allow tab sharing on XP and OSX 10.6. r=pkerr, a=lizzard - db14fef19c05
Margaret LeibovicBug 1158228 - Merge github's readability code into m-c. a=sledru - 503f9aa61c25
Margaret LeibovicBug 1158228 - Disable visibility check helper function to avoid test bustage. a=sledru - 46b968653f4d
Jared WeinBug 1155523 - Implement Pocket toolbarbutton and subview. r=gijs - 3e9805c11aa3
Florian QuèzeBug 1156878 - Send a request to the server when clicking the Pocket toolbar button, r=jaws. - 16e406d46c18
Jared WeinBug 1159744 - Use the panel implementations from the Pocket add-on for the Pocket feature. r=dolske - 1c86609b511c
Florian QuèzeBug 1155518 - Implement "Save to Pocket" context menu item, r=jaws. - 0a18ef5ab9b7
Florian QuèzeBug 1155519 - Add "View Pocket Items" menuitem to the bookmarks menu, r=dolske. - a1b09394f8c5
Jared WeinBug 1161654 - Import latest Pocket code. r=dolske - 3d9d572c9ec4
Jared WeinBug 1160578 - Disable the Pocket button for logged-in users on internal Firefox pages. r=dolske - 77ec9aee0263
Jared WeinBug 1161654 - Remove some dead code in Pocket.jsm and use pktApi for checking if the user is logged in. r=dolske - 125c7dbe7528
Jared WeinBug 1160678 - Pocket door hangers arent automatically closed. r=dolske a=sledru - 53b766c68811
Gavin SharpBug 1138079 - Fix focus issue that sometimes affects browser-chrome test runs. r=enndeakin, a=test-only - 96da8302e8a2
Justin DolskeBug 1162198 - [EME] Doorhanger that notifies user of DRM usage should include a Learn More link. r=gijs, a=sledru - 121ed6b9b6dd
David MajorBug 1155836: Template on aComputeData in the DoGetStyle* helpers. r=dbaron f=bz a=sylvestre - 7e44bac27dd6
Randell JesupBug 1162251: Fix WebRTC jitter buffer ignoring partial frames if the packet holds a complete NAL r=ehugg a=sylvestre - 124857c54a1b
Byron Campen [:bwc]Bug 1161317: Fix bug where sendonly video RTCP would be treated as outgoing RTP r=jesup a=sylvestre - 62ee103ccbbe
Gijs KruitboschBug 1158884 - hide pocket on android, fix AboutReader.jsm on android, r=margaret,jaws a=dolske - 20872d739a18
Jared WeinBug 1158960 - Reader view is broken in e10s mode. r=Gijs a=dolske - 92c7576dce37
Jared WeinBug 1159410 - Update the Pocket toolbar icon highlight to coral. r=dolske a=dolske - 8c8f410e61e8
Justin DolskeBug 1161796 - Remove unused strings from Pocket. r=jaws a=dolske: - 52bc3790d7b0
Justin DolskeBug 1160663 - Allow hilighting the Pocket button via UITour. r=MattN a=sledru - 1701e22c91f6
Gijs KruitboschBug 1155521 - Migrate Pocket add-on and social provider users to the new Pocket button (part 1, CustomizableUI changes). r=jaws, a=dolske - 6be4fccbdfa3
Drew WillcoxonBug 1155521 - Migrate Pocket add-on and social provider users to the new Pocket button (part 2, migration). r=jaws, a=dolske - 257c096c7673
Gijs KruitboschBug 1161838 - fix positioning of newly added widgets, r=jaws a=dolske - 2eeb61f35995
Jared WeinBug 1162735 - Re-add code that got removed accidentally to fix context menus. r=florian a=dolske - ccec3836123c
Jared WeinBug 1161793 - Wait to run the Pocket popupshowing code until the popupshowing event is dispatched, same for the popupshown code. r=dolske a=dolske - 18bf7b4baaac
Justin DolskeBug 1161881 - Enable Pocket by default (in supported locales), r=gavin a=sledru - 067c9c7a5e75
Justin DolskeBug 1162253 - Update the Pocket Menu Icon with the correct aspect ratios. r=jaws, a=dolske - 3f2619b0d039
Justin DolskeBug 1162147 - "View Pocket List" menuitem should be at top of bookmarks menu. r=jaws, a=dolske - 740f3d68a0f6
Justin DolskeBug 1163349 - "View Pocket List" menuitem not working. r=gavin, a=dolske - 83c0c74947a3
Jared WeinBug 1163111 - Update Pocket code to latest version (May 7th code drop). r=dolske a=dolske - a1c5d7a6a784
Drew WillcoxonBug 1162283 - Add support for limited hard-coded localizations to Pocket. r=dolske, a=dolske - e7c47480555d
Justin DolskeBug 1163265 - Update Pocket code to latest version (May 8th code drop) r=jaws, a=dolske - 86e98ffc152b
Justin DolskeBug 1163360 - Update Pocket code to latest version (May 9th code drop) r=jaws, a=dolske - f4179577249b
Justin DolskeBug 1163319 - Pocket button in hamburger menu breaks layout. r=jaws, a=dolske - 32b69592b334
Shane CaraveoBug 1024253 - Fix chat tests on ubuntu. r=markh, a=test-only - 5081fb1d38f0
Tim TaubertBug 961215 - Fix intermittent browser_tabview_bug625269.js failures by taking into account that window.resizeTo() can fail to change the window size sometimes. r=MattN, a=test-only - 97b29f79be5c
Margaret LeibovicBug 1160577 - Set styles on #reader-message div instead of wrapper div. r=MattN a=sledru - ad9164105253
Florian QuèzeBug 1160076 - Hide the in-content preferences Search pane when browser.search.showOneOffButtons is false. r=Gijs, a=sledru - 855c88138927
Gijs KruitboschBug 1162917 - Update readability from github repo. a=sledru - 5fc66f6dd277
Margaret LeibovicBug 1129029 - Telemetry probes for reader mode performance. r=Gijs, a=sledru - 85229fbaf017
Justin DolskeBug 1163645 - Pocket only enabled on en-US, hard-coded locales aren't picked up. r=adw, a=dolske - fff143cacb66

Mozilla Release Management TeamFirefox 38.0 to 38.0.1

This dot release contains some important fixes impacting a large number of users. For more information, see the release notes.

  • 6 changesets
  • 14 files changed
  • 136 insertions
  • 18 deletions

Extension    Occurrences
js           3
cpp          3
txt          2
h            2
java         1
ini          1

Module       Occurrences
browser      6
image        4
mobile       1
gfx          1
config       1

List of changesets:

Matthew NoorenbergheBug 1162205 - Don't import encrypted cookies from Chrome. r=mak a=lmandel - 7bf6c9a78588
David MajorBug 1154703 - Fix typo in nvdxgiwrap filename. r=jrmuizel, a=lmandel - d204dd3fd48b
Matthew NoorenbergheBustage fix for 7bf6c9a78588 due to lack of Bug 982852. a=bustage - f0fbb7ca3977
Seth FowlerBug 1161859 - Compute the size of animated image frames correctly in the SurfaceCache. r=dholbert, a=lmandel - 570b63d791b9
Garvan KeeleyBug 1164468 - Boolean got incorrectly flipped and stumbling uploads stopped. r=rnewman, a=lmandel - 273d39c4aa20
Seth FowlerBug 1161859 (Followup) - Correct nsIntSize / IntSize mismatch in Decoder.cpp on a CLOSED TREE. a=KWierso - bb7af314a8ac

Air MozillaMartes Mozilleros

Martes Mozilleros Reunión bi-semanal para hablar sobre el estado de Mozilla, la comunidad y sus proyectos. Bi-weekly meeting to talk (in Spanish) about Mozilla status, community and...

Christian Heilmann</may-tour> – I did it!

Sitting in the lovely conference hotel Estherea in Amsterdam, I am ready to go to Schiphol to fly back home to London. This marks the end of the massive conference tour at the beginning of May. I can’t believe it all worked out, although I had to re-book one flight, and I stayed for two days at each location.

Chris pumping up a boat called internet others punch holes in
(sketch from my Beyond Tellerand keynote by Manuel Ortiz)

Here’s what happened:

Now it is time to wash my clothes, send all the emails I stacked up during bad connectivity times and clean out my flat to move to another one. Oh yeah, and two more conferences this month :)

Marco ZeheAccessibility in 64-bit versions of Firefox for Windows

Over the past two weeks, Trevor, Alex and I worked on 64-bit support for Firefox on Windows. I am pleased to announce that we were successful, and that Win-64 versions of Firefox Nightly builds should now work with screen readers. So if you have a 64-bit edition of Windows 7, 8.x or 10 Preview, and run NVDA, JAWS, Window-Eyes or other screen readers that support Firefox, you should be able to uninstall the 32-bit version of Firefox Nightly if you have it installed, and download and run the 64-bit installer from the above linked page.

It is expected that everything works as in the 32-bit version. If, for some reason, you find oddities, we do want to know about them ASAP. We plan to backport support to Firefox Dev Edition (currently at version 40), and maybe 39 beta, but the latter is not certain yet.

So if you use Nightly anyway, and would like to help, we definitely appreciate you switching over to the 64-bit edition and giving us feedback! Thanks!

Air MozillaMaintaining & growing a technical community

Maintaining & growing a technical community How do you support a diverse community, acknowledge many different voices and perspectives, be open and inclusive, and still get things done (especially when you...

Air MozillaWhat's new in Firefox?

What's new in Firefox? Let's review together what happened with Firefox in 2014 and where we are headed in 2015.

Wil ClouserEnabling Encryption in Weechat

Here are some step by step instructions for enabling encryption using crypt.py in weechat.

First, ensure the crypt.py script is installed. The easiest way is from within weechat itself:

/plugin load script
/script install crypt.py
/script autoload crypt.py

You should see some simple messages saying crypt.py was installed and enabled for automatic loading.

Next you need to generate an encryption key. This just needs to be named with the channel you want to use the key with (I use #channel below). For example:

cd .weechat
openssl genrsa -out cryptkey.#channel 4096

We might as well make sure only we can read it:

chmod 600 cryptkey.#channel

At this point typing any text in #channel will automatically be encrypted (use a second client if you'd like to verify it). For example, I typed:

1811 clouserw │ testo

And the other clients in #channel see:

1811 clouserw | +qRy3GsV2sPRlJSdP1IqqV|

The next step is to distribute the key to the other people who will need to decrypt the chat. Take a minute to consider the best way to do this as the chat will only be as secure as this key.

Lastly, you can optionally add an indicator to the status bar by adding 'encryption' to weechat.bar.status.items. This command will tell you your current value:

/set weechat.bar.status.items

Copy that value and add 'encryption' where you'd like it to show up. Mine is:

/set weechat.bar.status.items "[time],[buffer_count],[buffer_plugin],buffer_number+:+buffer_name+{buffer_nicklist_count}+buffer_filter,encryption,[lag],[hotlist],completion,scroll"

which looks like this:

[18:16] [32] [irc/freenode] 30:#channel{3}⚑ (encrypted)  [H: 3, 4]

That's all there is to it!

Byron Joneshappy bmo push day!

the following changes have been pushed to bugzilla.mozilla.org:

  • [1146770] implement comment preview
  • [1116118] 003safesys.t shouldn’t compile all files by default
  • [1163326] implement dirty select tracking in bug-modal to address firefox refresh issue
  • [1160430] Add the ability to deactivate keywords
  • [1164863] checksetup.pl is unable to run if File::Slurp is missing
  • [908387] product/component searching should sort hits on product/component name before hits on descriptions
  • [1165741] query.cgi’s Component list should be sorted case-independent
  • [1165464] Incorrect link used for firefox help
  • [1162334] email_enabled value inverted in User.update RPC call
  • [1165917] support tbplbot@gmail.com and treeherder@bots.tld as the tbpl/treeheder bot name

discuss these changes on mozilla.tools.bmo.


Filed under: bmo, mozilla

The Mozilla BlogOpen Web Device Compliance Review Board Certifies First Handsets

Announcement Marks Key Point in Development of Open Source Mobile Ecosystem

San Francisco, Calif. – May 18, 2015: – The Open Web Device Compliance Review Board (CRB), in conjunction with its members ALCATEL ONE TOUCH, Deutsche Telekom, Mozilla, Qualcomm Technologies, Inc., and Telefónica, has announced the first handsets to be certified by the CRB. The CRB is an independently operated organization designed to promote the success of the open Web device ecosystem by encouraging API compliance as well as ensuring competitive performance.

The two devices are the Alcatel ONETOUCH Fire C and the Alcatel ONETOUCH Fire E. ALCATEL ONETOUCH has also authorized a CRB lab.

The certification process involves OEMs applying to the CRB for their device to be certified. CRB’s authorized labs test the device for open web APIs and key performance benchmarks. CRB’s subject matter experts review the results and validate against CRB stipulated benchmarks with a reference device to ensure compatibility and performance across key use cases. The two ALCATEL ONETOUCH devices passed the CRB authorized test lab procedure and met all CRB certification requirements.

The process is open to all device vendors whether they are a member of CRB or not. The CRB website www.openwebdevice.org will publish the process for applying for certification.

CRB certification testing is conducted by industry labs authorized by the CRB, with each submission expected to be completed within approximately three days. The CRB offers a platform for the rest of the industry to request certification.

“As an initial founding member of the CRB, we are pleased to know that the Board has achieved one of its major objectives in certifying Firefox OS devices on a standard set of Web APIs and performance metrics,” said Jason Bremner, Senior Vice President of Product Management, Qualcomm Technologies, Inc. “We expect other companies will also certify, improving their product development cycle time while ensuring a compelling user experience and compliance to standard Web APIs.”

“As one of the partners of the CRB and owners of these certified devices, ALCATEL ONETOUCH is excited to witness the solid progress and achievements made by all members,” said Alain Lejeune, Senior Vice President, ALCATEL ONETOUCH. “In the coming year, ALCATEL ONETOUCH will continue to contribute to the CRB and establishment of the Firefox OS ecosystem. This news is not only an honor for us but will inspire more Firefox OS partners to strive for certification.”

“In the last three years Mozilla has proven with Firefox OS that open Web technology is a strong, viable platform for mobile,” said Andreas Gal, Chief Technology Officer, Mozilla. “Certification by the CRB provides a launch pad for those who complete it to prove that their device offers a consistent and excellent experience for users, reducing time and cost to qualify across operators and markets. Today’s announcement paves the way for other device makers to reach this milestone.”

“TELEFÓNICA supports the opportunities that an open Web ecosystem delivers to mobile consumers,” said Francisco Montalvo, Head of Group Devices Unit at TELEFÓNICA S.A.. “Having CRB as a product certification scheme helps all the partners guarantee that rich Web content is delivered to certified devices with the right level of quality. We are glad to collaborate on this effort.”

“Deutsche Telekom is pleased to be a close partner with Mozilla, Qualcomm, Telefonica, and ALCATEL ONETOUCH among others in the development of the Firefox OS,” said Louis Schreier, Vice President of Telekom Innovation Laboratories’ Silicon Valley Innovation Center. “As one of the founding members of the CRB, our goal in focusing on API compliance and performance is to establish a uniform set of requirements, test and acceptance criteria, enabling uniform and independent testing by accredited labs.”

For more information about the Open Web Device Compliance Review Board, please visit https://openwebdevice.org.

About the CRB
The Open Web Device Compliance Review Board (CRB) is an independently operated organization designed to promote the success of the open Web device ecosystem. It is a partnership between operators, device OEMs, silicon vendors and test solution providers to define and evolve a process to encourage API compatibility and competitive performance for devices. Standards are based on Mozilla’s principles of user privacy and control.

Media Contact: press@mozilla.com

Joel MaherA-Team contribution opportunity – Dashboard Hacker

I am excited to announce a new focused project for contribution – Dashboard Hacker.  Last week we previewed that today we would be announcing two contribution projects.  This is an unpaid program where we are looking for 1-2 contributors who will dedicate between 5-10 hours/week for at least 8 weeks.  More time is welcome, but not required.

What is a dashboard hacker?

When a developer is ready to land code, they want to test it. Getting and understanding the results is made a lot easier by good dashboards and tools. For this project, we have a starting point with our performance data view: fix up a series of nice-to-have polish features, then ensure that it is easy to use within a normal developer workflow. Part of the developer workflow is the regular job view. If time permits, there are some fun experiments we would like to implement in the job view.  These bugs, features, and projects are all smaller and self-contained, which makes them great projects for someone looking to contribute.

What is required of you to participate?

  • A willingness to learn and ask questions
  • A general knowledge of programming (most of this will be in JavaScript, Django, and AngularJS; some work will be in Python)
  • A promise to show up regularly and take ownership of the issues you are working on
  • Good at solving problems and thinking out of the box
  • Comfortable with (or willing to try) working with a variety of people

What we will guarantee from our end:

  • A dedicated mentor for the project whom you will work with regularly throughout the project
  • A single area of work to reduce the need to get up to speed over and over again.
    • This project will cover many tools, but the general problem space will be the same
  • The opportunity to work with many people (different bugs could have a specific mentor) while retaining a single mentor to guide you through the process
  • The ability to be part of the team: you will be welcome in meetings, we will value your input on solving problems, brainstorming, and figuring out new problems to tackle.

How do you apply?

Get in touch with us by replying to the post, commenting in the bug, or contacting us on IRC (I am :jmaher in #ateam on irc.mozilla.org; wlach on IRC will be the primary mentor).  We will point you at a starter bug and introduce you to the bugs and problems to solve.  If you have prior work (links to Bugzilla, GitHub, blogs, etc.) that would help us learn more about you, that would be a plus.

How will you select the candidates?

There are no strict criteria here.  One factor will be whether you meet the requirements outlined above and how well you pick up the problem space.  Ultimately it will be up to the mentor (for this project, :wlach).  If you apply and we have already picked a candidate, or we don’t choose you for other reasons, we do plan to repeat this every few months.

Looking forward to building great things!


Joel Maher: A-Team contribution opportunity – DX (Developer Ergonomics)

I am excited to announce a new focused project for contribution – Developer Ergonomics/Experience, otherwise known as DX.  Last week we previewed that we would be announcing two contribution projects today.  This is an unpaid program in which we are looking for 1-2 contributors who will each dedicate 5-10 hours/week for at least 8 weeks.  More time is welcome, but not required.

What does DX mean?

We chose this project because we continue to experience frustration while fixing bugs and debugging test failures.  Many people suggest great ideas; in this case we have set aside a few of them (look at the dependent bugs: cleaning up argument parsers, helping our tests run in smarter chunks, making it easier to run tests locally or on a server, etc.) that would clean things up and be harder than a good first bug, yet each issue by itself would be too easy for an internship.  Our goal is to clean up our test harnesses and tools and, if time permits, add things to the workflow that make it easier for developers to do their job!

What is required of you to participate?

  • A willingness to learn and ask questions
  • A general knowledge of programming (this will be mostly in Python, with some JavaScript as well)
  • A promise to show up regularly and take ownership of the issues you are working on
  • Good at solving problems and thinking out of the box
  • Comfortable with (or willing to try) working with a variety of people

What we will guarantee from our end:

  • A dedicated mentor for the project whom you will work with regularly throughout the project
  • A single area of work to reduce the need to get up to speed over and over again.
    • This project will cover many tools, but the general problem space will be the same
  • The opportunity to work with many people (different bugs could have a specific mentor) while retaining a single mentor to guide you through the process
  • The ability to be part of the team: you will be welcome in meetings, and we will value your input on solving problems, brainstorming, and figuring out new problems to tackle.

How do you apply?

Get in touch with us by replying to the post, commenting in the bug, or contacting us on IRC (I am :jmaher in #ateam on irc.mozilla.org).  We will point you at a starter bug and introduce you to the bugs and problems to solve.  If you have prior work (links to Bugzilla, GitHub, blogs, etc.) that would help us learn more about you, that would be a plus.

How will you select the candidates?

There are no strict criteria here.  One factor will be whether you meet the requirements outlined above and how well you pick up the problem space.  Ultimately it will be up to the mentor (for this project, me).  If you apply and we have already picked a candidate, or we don’t choose you for other reasons, we do plan to repeat this every few months.

Looking forward to building great things!


Daniel Pocock: Free and open WebRTC for the Fedora Community

In January 2014, we launched the rtc.debian.org service for the Debian community. An equivalent service has been in testing for the Fedora community at FedRTC.org.

Some key points about the Fedora service:

  • The web front-end is just HTML, CSS, and JavaScript. PHP is only used for account creation; the actual WebRTC experience requires no server-side web framework, just a SIP proxy.
  • The web code is all available in a GitHub repository so people can extend it.
  • Anybody who can authenticate against the FedOAuth OpenID is able to get a fedrtc.org test account immediately.
  • The server is built entirely with packages from CentOS 7 + EPEL 7, except for the SIP proxy itself. The SIP proxy is reSIProcate, which is available as a Fedora package and builds easily on RHEL / CentOS.

Testing it with WebRTC

Create an RTC password and then log in. Other users can call you. It is federated, so people can also call from rtc.debian.org or from freephonebox.net.

Testing it with other SIP softphones

You can use the RTC password to connect to the SIP proxy from many softphones, including Jitsi or Lumicall on Android.

Copy it

The process to replicate the server for another domain is entirely described in the Real-Time Communications Quick Start Guide.

Discuss it

The FreeRTC mailing list is a great place to discuss any issues involving this site or free RTC in general.

WebRTC opportunities expanding

Just this week, the first batch of Firefox OS televisions are hitting the market. Every one of these is a potential WebRTC client that can interact with free communications platforms.

Mozilla Reps Community: New council members – Spring 2015

We are happy to announce that three new members of the Council have been elected.

Welcome Michael, Shahid and Christos! They bring with them skills they have picked up as Reps mentors and as community leaders, both inside Mozilla and in other fields. A HUGE thank you to the outgoing council members – Arturo, Emma and Raj. We hope you will continue to use your talents and experience in a leadership role in Reps and Mozilla.

The new members will be onboarding gradually over the following three weeks.

The Mozilla Reps Council is the governing body of the Mozilla Reps Program. It provides the general vision of the program and oversees day-to-day operations globally. Currently, 7 volunteers and 2 paid staff sit on the council. Find out more on the ReMo wiki.

Congratulate new Council members on this Discourse topic!

Air Mozilla: Firefox OS Tricoder

Firefox OS Tricoder: reading device sensor data in JavaScript.

Tim Taubert: Implementing a PBKDF2-based Password Storage Scheme for Firefox OS

My esteemed colleague Frederik Braun recently took on rewriting the module responsible for storing and checking the passcodes that unlock Firefox OS phones. While we are still working on actually landing it in Gaia, I wanted to seize the chance to talk about this great use case of the WebCrypto API in the wild and highlight a few important points about using password-based key derivation (PBKDF2) to store passwords.

The Passcode Module

Let us take a closer look, not at the verbatim implementation, but at a slightly simplified version. The API offers only the two operations such a module needs to support: setting a new passcode and verifying that a given passcode matches the stored one.

let Passcode = {
  store(code) {
    // ...
  },

  verify(code) {
    // ...
  }
};

When setting up the phone for the first time - or when changing the passcode later - we call Passcode.store() to write a new code to disk. Passcode.verify() will help us determine whether we should unlock the phone. Both methods return a Promise, as all operations exposed by the WebCrypto API are asynchronous.

Passcode.store("1234").then(() => {
  return Passcode.verify("1234");
}).then(valid => {
  console.log(valid);
});

// Output: true

Make the passcode look “random”

The module should absolutely not store passcodes in the clear. We will use PBKDF2 as a pseudorandom function (PRF) to retrieve a result that looks random. An attacker with read access to the part of the disk storing the user’s passcode should not be able to recover the original input, assuming limited computational resources.

The function deriveBits() is a PRF that takes a passcode and returns a Promise resolving to a random-looking sequence of bytes. To be a little more specific, it uses PBKDF2 to derive pseudorandom bits.

function deriveBits(code) {
  // Convert string to a TypedArray.
  let bytes = new TextEncoder("utf-8").encode(code);

  // Create the base key to derive from.
  let importedKey = crypto.subtle.importKey(
    "raw", bytes, "PBKDF2", false, ["deriveBits"]);

  return importedKey.then(key => {
    // Salt should be at least 64 bits.
    let salt = crypto.getRandomValues(new Uint8Array(8));

    // All required PBKDF2 parameters.
    let params = {name: "PBKDF2", hash: "SHA-1", salt, iterations: 5000};

    // Derive 160 bits using PBKDF2.
    return crypto.subtle.deriveBits(params, key, 160);
  });
}

Choosing PBKDF2 parameters

As you can see above, PBKDF2 takes a whole bunch of parameters. Choosing good values is crucial for the security of our passcode module, so it is best to take a detailed look at every single one of them.

Select a cryptographic hash function

PBKDF2 is a big PRF that iterates a small PRF. The small PRF, iterated multiple times (more on why this is done later), is fixed to be an HMAC construction; you are however allowed to specify the cryptographic hash function used inside HMAC itself. To understand why you need to select a hash function it helps to take a look at HMAC’s definition, here with SHA-1 at its core:

HMAC-SHA-1(k, m) = SHA-1((k ⊕ opad) + SHA-1((k ⊕ ipad) + m))

The outer and inner padding opad and ipad are static values that can be ignored for our purpose; the important takeaway is that the given hash function will be called twice, combining the message m and the key k. Whereas HMAC is usually used for authentication, PBKDF2 makes use of its PRF properties, meaning its output is computationally indistinguishable from random.

deriveBits() as defined above uses SHA-1 as well, and although it is considered broken as a collision-resistant hash function, it is still a safe building block in the HMAC-SHA-1 construction. HMAC only relies on a hash function’s PRF properties, and while finding SHA-1 collisions is considered feasible, it is still believed to be a secure PRF.

That said, it would not hurt to switch to a secure cryptographic hash function like SHA-256. Chrome supports other hash functions for PBKDF2 today; Firefox unfortunately has to wait for an NSS fix before those can be unlocked for the WebCrypto API.

Pass a random salt

The salt is a random component that PBKDF2 feeds into the HMAC function along with the passcode. This prevents an attacker from simply computing the hashes of, for example, all 8-character alphanumeric combinations (~5.4 petabytes of storage for SHA-1) and using a huge lookup table to quickly reverse a given password hash. Specify 8 random bytes as the salt, and the poor attacker will suddenly have to compute (and store!) 2^64 of those lookup tables and face 8 additional random characters in the input. Even without the salt, the effort to create even one lookup table would be hard to justify, because chances are high you could not reuse it to attack another target; they might be using a different hash function or combine two or more of them.

The same goes for Rainbow Tables. A random salt included with the password would have to be incorporated when precomputing the hash chains, and the attacker is back to square one where she has to compute a Rainbow Table for every possible salt value. That certainly works ad hoc for a single salt value, but preparing and storing 2^64 of those tables is impossible.

The salt is public and will be stored in the clear along with the derived bits. We need the exact same salt to arrive at the exact same derived bits later again. We thus have to modify deriveBits() to accept the salt as an argument so that we can either generate a random one or read it from disk.

function deriveBits(code, salt) {
  // Convert string to a TypedArray.
  let bytes = new TextEncoder("utf-8").encode(code);

  // Create the base key to derive from.
  let importedKey = crypto.subtle.importKey(
    "raw", bytes, "PBKDF2", false, ["deriveBits"]);

  return importedKey.then(key => {
    // All required PBKDF2 parameters.
    let params = {name: "PBKDF2", hash: "SHA-1", salt, iterations: 5000};

    // Derive 160 bits using PBKDF2.
    return crypto.subtle.deriveBits(params, key, 160);
  });
}

Keep in mind though that Rainbow Tables today are mainly a thing of the past, from when password hashes were smaller and shittier. Salts are the bare minimum a good password storage scheme needs, but they merely protect against a threat that is largely irrelevant today.

Specify a number of iterations

As computers became faster and Rainbow Table attacks became infeasible due to the prevalent use of salts, people started attacking password hashes with dictionaries, simply taking the public salt value and passing it, combined with an educated guess, to the hash function until a match was found. Modern password schemes thus employ a “work factor” to make hashing millions of password guesses unbearably slow.

By specifying a sufficiently high number of iterations we can slow down PBKDF2’s inner computation so that an attacker will have to face a massive performance decrease and be able to only try a few thousand passwords per second instead of millions.

For single-user disk or file encryption, it might be acceptable if computing the password hash takes a few seconds; for a lock screen, 300-500ms might be the upper limit to avoid interfering with the user experience. Take a look at this great StackExchange post for more advice on choosing the right number of iterations for your application and environment.

A much more secure version of a lock screen would allow not just four digits but any number of characters. An additional delay of a few seconds after a small number of wrong guesses might increase security even more, assuming the attacker cannot access the PRF output stored on disk.

Determine the number of bits to derive

PBKDF2 can output an almost arbitrary amount of pseudorandom data. A single execution yields a number of bits equal to the chosen hash function’s output size. If the desired number of bits exceeds the hash function’s output size, PBKDF2 will be executed repeatedly until enough bits have been derived.

function getHashOutputLength(hash) {
  switch (hash) {
    case "SHA-1":   return 160;
    case "SHA-256": return 256;
    case "SHA-384": return 384;
    case "SHA-512": return 512;
  }

  throw new Error("Unsupported hash function");
}

Choose 160 bits for SHA-1, 256 bits for SHA-256, and so on. Slowing down the key derivation even further by requiring more than one round of PBKDF2 will not increase the security of the password storage.

Do not hard-code parameters

Hard-coding PBKDF2 parameters - the name of the hash function to use in the HMAC construction and the number of HMAC iterations - is tempting at first. We do, however, need to stay flexible in case, for example, SHA-1 can no longer be considered a secure PRF, or we need to increase the number of iterations to keep up with faster hardware.

To ensure future code can verify old passwords we store the parameters that were passed to PBKDF2 at the time, including the salt. When verifying the passcode we will read the hash function name, the number of iterations, and the salt from disk and pass those to deriveBits() along with the passcode itself. The number of bits to derive will be the hash function’s output size.

function deriveBits(code, salt, hash, iterations) {
  // Convert string to a TypedArray.
  let bytes = new TextEncoder("utf-8").encode(code);

  // Create the base key to derive from.
  let importedKey = crypto.subtle.importKey(
    "raw", bytes, "PBKDF2", false, ["deriveBits"]);

  return importedKey.then(key => {
    // Output length in bits for the given hash function.
    let hlen = getHashOutputLength(hash);

    // All required PBKDF2 parameters.
    let params = {name: "PBKDF2", hash, salt, iterations};

    // Derive |hlen| bits using PBKDF2.
    return crypto.subtle.deriveBits(params, key, hlen);
  });
}

Storing a new passcode

Now that we are done implementing deriveBits(), the heart of the Passcode module, completing the API is basically a walk in the park. For the sake of simplicity we will use localforage as the storage backend. It provides a simple, asynchronous, and Promise-based key-value store.

// <script src="localforage.min.js"/>

const HASH = "SHA-1";
const ITERATIONS = 4096;

Passcode.store = function (code) {
  // Generate a new random salt for every new passcode.
  let salt = crypto.getRandomValues(new Uint8Array(8));

  return deriveBits(code, salt, HASH, ITERATIONS).then(bits => {
    return Promise.all([
      localforage.setItem("digest", bits),
      localforage.setItem("salt", salt),
      localforage.setItem("hash", HASH),
      localforage.setItem("iterations", ITERATIONS)
    ]);
  });
};

We generate a new random salt for every new passcode. The derived bits are stored along with the salt, the hash function name, and the number of iterations. HASH and ITERATIONS are constants that provide default values for our PBKDF2 parameters and can be updated whenever desired. The Promise returned by Passcode.store() will resolve when all values have been successfully stored in the backend.

Verifying a given passcode

To verify a passcode, all values and parameters stored by Passcode.store() have to be read from disk and passed to deriveBits(). Comparing the derived bits with the value stored on disk tells us whether the passcode is valid.

Passcode.verify = function (code) {
  let loadValues = Promise.all([
    localforage.getItem("digest"),
    localforage.getItem("salt"),
    localforage.getItem("hash"),
    localforage.getItem("iterations")
  ]);

  return loadValues.then(([digest, salt, hash, iterations]) => {
    return deriveBits(code, salt, hash, iterations).then(bits => {
      return compare(bits, digest);
    });
  });
};

Should compare() be a constant-time operation?

compare() does not have to be constant-time. Even if the attacker learns the first byte of the final digest stored on disk, she cannot easily produce inputs to guess the second byte - the opposite would imply knowing the pre-images of all those two-byte values. She cannot do better than submitting simple guesses, which become harder the more bytes are known. For a successful attack all bytes have to be recovered, which in turn means a valid pre-image for the full final digest needs to be found.

If it makes you feel any better, you can of course implement compare() as a constant-time operation. This might be tricky, though, given that all modern JavaScript engines optimize code heavily.
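Should you want one anyway, a constant-time comparison can be sketched as follows (an illustration, not the module's actual compare()): XOR-accumulate every byte pair so that the running time does not depend on the position of the first mismatch.

```javascript
// Compare two byte sequences without short-circuiting on the first
// mismatch; only the length check returns early.
function constantTimeEqual(a, b) {
  if (a.length !== b.length) {
    return false;
  }

  let diff = 0;
  for (let i = 0; i < a.length; i++) {
    diff |= a[i] ^ b[i];
  }
  return diff === 0;
}
```

Even then, as noted above, a JIT may transform the code in ways that reintroduce timing differences.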

What about bcrypt or scrypt?

Both bcrypt and scrypt are probably better alternatives to PBKDF2. Bcrypt automatically embeds the salt and cost factor into its output; most APIs are clever enough to parse and use those parameters when verifying a given password.

Scrypt implementations can usually generate a random salt securely; that is one less thing for you to worry about. The most important aspect of scrypt, though, is that it allows requiring a lot of memory when computing the password hash, which makes cracking passwords using ASICs or FPGAs close to impossible.

Unfortunately, the Web Cryptography API supports neither of the two algorithms, and currently there are no proposals to add them. In the case of scrypt, it might also be somewhat controversial to allow a website to consume arbitrary amounts of memory.

Gregory Szorc: Firefox Mercurial Repository with CVS History

When Firefox made the switch from CVS to Mercurial in March 2007, the CVS history wasn't imported into Mercurial. There were good reasons for this at the time. But it's a decision that continues to have side-effects. I am surprised how often I hear of engineers wanting to access blame and commit info from commits now more than 9 years old!

When individuals created a Git mirror of the Firefox repository a few years ago, they correctly decided that importing CVS history would be a good idea. They also correctly decided to combine the logically same but physically separate release and integration repositories into a unified Git repository. These are things we can't easily do to the canonical Mercurial repository because it would break SHA-1 hashes, breaking many systems, and it would require significant changes in process, among other reasons.

While Firefox developers do have access to a single Firefox repository with full CVS history (the Git mirror), they still aren't satisfied.

Running git blame (or hg blame for that matter) can be very expensive. For this reason, the blame interface is disabled on many web-based source viewers by default. On GitHub, some blame URLs for the Firefox repository time out and cause GitHub to display an error message. No matter how hard you try, you can't easily get blame results (running a local Git HTTP/HTML interface is still difficult compared to hg serve).

Another reason developers aren't satisfied with the Git mirror is that Git's querying tools pale in comparison to Mercurial's. I've said it before and I'll say it again: Mercurial's revision sets and templates are incredibly useful features that enable advanced repository querying and reporting. Git's offerings come nowhere close. (I really wish Git would steal these awesome features from Mercurial.)

Anyway, enough people were complaining about the lack of a Mercurial Firefox repository with full CVS history that I decided to create one. If you point your browsers or Mercurial clients to https://hg.mozilla.org/users/gszorc_mozilla.com/gecko-full, you'll be able to access it.

The process used for the conversion was the simplest possible: I used hg-git to convert the Git mirror back to Mercurial.

Unlike the Git mirror, I didn't include all heads in this new repository. Instead, there is only mozilla-central's head (the current development tip). If I were doing this properly, I'd include all heads, like gecko-aggregate.

I'm well aware there are oddities in the Git mirror and they now exist in this new repository as well. My goal for this conversion was to deliver something: it wasn't a goal to deliver the most correct result possible.

At this time, this repository should be considered an unstable science experiment. By no means should you rely on this repository. But if you find it useful, I'd appreciate hearing about it. If enough people ask, we could probably make this more official.

Gervase Markham: Eurovision Bingo

Some people say that all Eurovision songs are the same. That’s probably not quite true, but there is perhaps a hint of truth in the suggestion that some themes tend to recur from year to year. Hence, I thought, Eurovision Bingo.

I wrote some code to analyse a directory full of lyrics, normally those from the previous year of the competition, and work out the frequency of occurrence of each word. It will then generate Bingo cards with sets of words of different levels of commonness. You can then use them to play Bingo while watching this year’s competition (which is on Saturday).
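The counting step could be sketched roughly like this (my own illustration in JavaScript; the actual repository's code may look quite different):

```javascript
// Count how often each word occurs across a set of lyric strings.
function wordFrequencies(lyrics) {
  const counts = new Map();
  for (const text of lyrics) {
    // Lowercase and split on anything that is not a letter or apostrophe.
    for (const word of text.toLowerCase().match(/[a-z']+/g) || []) {
      counts.set(word, (counts.get(word) || 0) + 1);
    }
  }
  return counts;
}
```

Cards could then be filled by sampling words from different frequency bands.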

There’s a Github repo, or if you want to go straight to pre-generated cards for this year, they are here.

Here’s a sample card from the 2014 lyrics:

fell cause rising gonna rain
world believe dancing hold once
every mean LOVE something chance
hey show or passed say
because light hard home heart

Have fun :-)

Air Mozilla: OuiShare Labs Camp #3

OuiShare Labs Camp #3 is a participative conference dedicated to decentralization, IndieWeb, the semantic web, and open source community tools.

This Week In Rust: This Week in Rust 81

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Send me an email! Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors or omissions in this week's issue, please submit a PR.

What's cooking on master?

273 pull requests were merged in the last two weeks, and 4 RFC PRs.

Now you can follow breaking changes as they happen!

Breaking Changes

Other Changes

New Contributors

  • らいどっと
  • Aaron Gallagher
  • Alexander Polakov
  • Alex Burka
  • Andrei Oprea
  • Andrew Kensler
  • Andrew Straw
  • Ben Gesoff
  • Chris Hellmuth
  • Cole Reynolds
  • Colin Walters
  • David Reid
  • Don Petersen
  • Emilio Cobos Álvarez
  • Franziska Hinkelmann
  • Garming Sam
  • Hika Hibariya
  • Isaac Ge
  • Jan Andersson
  • Jan-Erik Rediger
  • Jannis Redmann
  • Jason Yeo
  • Jeremy Schlatter
  • Johann
  • Johann Hofmann
  • Lee Jeffery
  • leunggamciu
  • Marin Atanasov Nikolov
  • Mário Feroldi
  • Mathieu Rochette
  • Michael Park
  • Michael Wu
  • Michał Czardybon
  • Mike Sampson
  • Nick Platt
  • parir
  • Paul Banks
  • Paul Faria
  • Paul Quint
  • peferron
  • Pete Hunt
  • robertfoss
  • Rob Young
  • Russell Johnston
  • Shmuale Mark
  • Simon Kern
  • Sindre Johansen
  • sumito3478
  • Swaroop C H
  • Tincan
  • Wei-Ming Yang
  • Wilfred Hughes
  • Will Engler
  • Wojciech Ogrodowczyk
  • XuefengWu
  • Z1

Approved RFCs

New RFCs

Betawatch!

The current beta is 1.1.0-beta (cd7d89af9 2015-05-16) (built 2015-05-16).

Notable Links

Project Updates

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email Erick Tryzelaar or Brian Anderson for access.

Quote of the Week

"Yes, because laundry eating has evolved to be a specific design goal now; and the initial portions of the planned laundry eating API have been landed behind the #![feature(no_laundry)] gate. no_laundry should become stable in 6-8 weeks, though the more complicated portions, including DRY cleaning, Higher Kinded T-shirts, Opt-in Builtin Detergent, and Rinse Time Optimization will not be stabilized until much later."

"We hope this benefits the Laundry as a Service community immensely."

Manish explains Rust's roadmap for laundry-eating.

Thanks to filsmick for the tip.

And since there were so many quotables in the last two weeks, here's one from Evan Miller's evaluation of Rust:

"Rust is a systems language. I’m not sure what that term means, but it seems to imply some combination of native code compilation, not being Fortran, and making no mention of category theory in the documentation."

Thanks to ruudva for the tip. Submit your quotes for next week!

Mark Côté: Integration

The other day I read about another new Mozilla project that decided to go with GitHub issues instead of our Bugzilla installation (BMO). The author’s arguments make a lot of sense: GitHub issues are much simpler and faster, and if you keep your code in GitHub, you get tighter integration. The author notes that a downside is the inability to file security or confidential bugs, for which Bugzilla has a fine-grained permission system, and that he’d just put those (rare) issues on BMO.

The one downside he doesn’t mention is interdependencies with other Mozilla projects, e.g. the Depends On/Blocks fields. This is where Bugzilla gets into project, product, and perhaps even program management by allowing people to easily track dependency chains, which is invaluable in planning. Many people actually file bugs solely as trackers for a particular feature or project, hanging all the work items and bugs off of it, and sometimes that work crosses product boundaries. There are also a number of tracking flags and fields that managers use to prioritize work and decide which releases to target.

If I had to rebut my own point, I would argue that the projects that use GitHub issues are relatively isolated, and so dependency tracking is not particularly important. Why clutter up and slow down the UI with lots of features that I don’t need for my project? In particular, most of the tracking features are currently used only by, and thus designed for, the Firefox products (aside: this is one reason the new modal UI hides most of these fields by default if they have never been set).

This seems hard to refute, and I certainly wouldn’t want to force an admittedly complex tool on anyone who had much simpler needs. But something still wasn’t sitting right with me, and it took a while to figure out what it was. As usual, it was that a different question was going unasked, leading to unspoken assumptions: why do we have so many isolated projects, and what are we giving up by having such loose (or even no) integration amongst all our work?

Working on projects in isolation is comforting because you don’t have to think about all the other things going on in your organization—in other words, you don’t have to communicate with very many people. A lack of communication, however, leads to several problems:

  • low visibility: what is everyone working on?
  • redundancy: how many times are we solving the same problem?
  • barriers to coordination: how can we become greater than the sum of our parts by delivering inter-related features and products?

By working in isolation, we can’t leverage each other’s strengths and accomplishments. We waste effort and lose great opportunities to deliver amazing things. We know that places like Twitter use monorepos to get some of these benefits, like a single build/test/deploy toolchain and coordination of breaking changes. This is what facilitates architectures like microservices and SOAs. Even if we don’t want to go down those paths, there is still a clear benefit to program management by at least integrating the tracking and planning of all of our various endeavours and directions. We need better organization-wide coordination.

We’re already taking some steps in this direction, like moving Firefox and Cloud Services to one division. But there are many other teams that could benefit from better integration, many teams that are duplicating effort and missing out on chances to work together. It’s a huge effort, but maybe we need to form a team to define a strategy and process—a Strategic Integration Team perhaps?


Mike ConleyThe Joy of Coding (Ep. 14): More OS X Printing

In this episode, I kept working on the same bug as last week – proxying the print dialog from the content process on OS X. We actually finished the serialization bit, and started doing deserialization!

Hopefully, next episode we can polish off the deserialization and we’ll be done. Fingers crossed!

Note that this episode was about 2 hours and 10 minutes, but the standard-definition recording up on Air Mozilla only plays for about 13 minutes and 5 seconds. Not too sure what’s going on there – we’ve filed a bug with the people who’ve encoded it. Hopefully, we’ll have the full episode up for standard-definition soon.

In the meantime, if you’d like to watch the whole episode, you can go to the Air Mozilla page and watch it in HD, or you can go to the YouTube mirror.

Episode Agenda

References

Bug 1091112 – Print dialog doesn’t get focus automatically, if e10s is enabled – Notes

Manish GoregaokarThe Problem With Single-threaded Shared Mutability

This is a post that I’ve been meaning to write for a while now; and the release of Rust 1.0 gives me the perfect impetus to go ahead and do it.

While this post discusses a choice made in the design of Rust, and uses examples in Rust, the principles discussed here apply to other languages for the most part. I’ll also try to make the post easy to understand for those without a Rust background; please let me know if some code or terminology needs to be explained.

What I’m going to discuss here is the choice made in Rust to disallow having multiple mutable aliases to the same data (or a mutable alias when there are active immutable aliases), even from the same thread. In essence, it disallows one from doing things like:

let mut x = Vec::new();
{
    let ptr = &mut x; // Take a mutable reference to `x`
    ptr.push(1); // Allowed
    let y = x[0]; // Not allowed (will not compile): as long as `ptr` is active,
                  // x cannot be read from ...
    x.push(1);    // .. or written to
}


// alternatively,

let mut x = Vec::new();
x.push(1); // Allowed
{
    let ptr = &x; // Create an immutable reference
    let y = ptr[0]; // Allowed, nobody can mutate
    let y = x[0]; // Similarly allowed
    x.push(1); // Not allowed (will not compile): as long as `ptr` is active,
               // `x` is frozen for mutation
}

This is essentially the “Read-Write lock” (RWLock) pattern, except it’s not being used in a threaded context, and the “locks” are done via static analysis (compile time “borrow checking”).
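The analogy is easy to make concrete with Rust’s standard `std::sync::RwLock`, which enforces the same rule (any number of readers, or exactly one writer) at runtime in a threaded context. A minimal sketch:

```rust
use std::sync::RwLock;

fn main() {
    let data = RwLock::new(vec![1, 2, 3]);
    {
        // Any number of read guards may be held at once...
        let r1 = data.read().unwrap();
        let r2 = data.read().unwrap();
        assert_eq!(r1.len() + r2.len(), 6);
    } // ...but they must be dropped before a write guard can be taken.
    data.write().unwrap().push(4); // exactly one writer, no readers
    assert_eq!(data.read().unwrap().len(), 4);
}
```

The borrow checker enforces exactly this discipline statically, with `&T` playing the role of the read guards and `&mut T` the write guard.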

Newcomers to the language have the recurring question as to why this exists. Ownership semantics and immutable borrows can be grasped because there are concrete examples from languages like C++ of problems that these concepts prevent. It makes sense that having only one “owner” and then multiple “borrowers” who are statically guaranteed to not stick around longer than the owner will prevent things like use-after-free.

But what could possibly be wrong with having multiple handles for mutating an object? Why do we need an RWLock pattern? 1

It causes memory unsafety

This issue is specific to Rust, and I promise that this will be the only Rust-specific answer.

Rust enums provide a form of algebraic data types. A Rust enum is allowed to “contain” data, for example you can have the enum

enum StringOrInt {
    Str(String),
    Int(i64)
}

which gives us a type that can either be a variant Str, with an associated string, or a variant Int2, with an associated integer.

With such an enum, we could cause a segfault like so:

let mut x = Str("Hi!".to_string()); // Create an instance of the `Str` variant with associated string "Hi!"
let y = &mut x; // Create a mutable alias to x

if let Str(ref insides) = x { // If x is a `Str`, assign its inner data to the variable `insides`
    *y = Int(1); // Set `*y` to `Int(1)`, therefore setting `x` to `Int(1)` too
    println!("x says: {}", insides); // Uh oh!
}

Here, we invalidated the insides reference because setting x to Int(1) meant that there is no longer a string inside it. However, insides is still a reference to a String, and the generated assembly would try to dereference the memory location where the pointer to the allocated string was, and probably end up trying to dereference 1 or some nearby data instead, and cause a segfault.

Okay, so far so good. We know that for Rust-style enums to work safely in Rust, we need the RWLock pattern. But are there any other reasons we need the RWLock pattern? Not many languages have such enums, so this shouldn’t really be a problem for them.

Iterator invalidation

Ah, the example that is brought up almost every time the question above is asked. While I’ve been quite guilty of using this example often myself (and feel that it is a very appropriate example that can be quickly explained), I also find it to be a bit of a cop-out, for reasons which I will explain below. This is partly why I’m writing this post in the first place; a better idea of the answer to The Question should be available for those who want to dig deeper.

Iterator invalidation involves using tools like iterators whilst modifying the underlying dataset somehow.

For example,


let buf = vec![1,2,3,4];

for i in &buf {
    buf.push(*i);
}

Firstly, this will loop infinitely (if it compiled, which it doesn’t, because Rust prevents this). The equivalent C++ example would be this one, which I use at every opportunity.

What’s happening in both code snippets is that the iterator is really just a pointer to the vector and an index. It doesn’t contain a snapshot of the original vector; so pushing to the original vector will make the iterator iterate for longer. Pushing once per iteration will obviously make it iterate forever.

The infinite loop isn’t even the real problem here. The real problem is that after a while, we could get a segmentation fault. Internally, vectors have a certain amount of allocated space to work with. If the vector is grown past this space, a new, larger allocation may need to be done (freeing the old one), since vectors must use contiguous memory.

This means that when the vector overflows its capacity, it will reallocate, invalidating the reference stored in the iterator, and causing use-after-free.

Of course, there is a trivial solution in this case — store a reference to the Vec/vector object inside the iterator instead of just the pointer to the vector on the heap. This leads to some extra indirection or a larger stack size for the iterator (depending on how you implement it), but overall will prevent the memory unsafety.
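That fix can be sketched in Rust directly (the `SafeIter` name and layout are hypothetical, not a real library type): the iterator borrows the vector itself instead of holding a raw pointer into its heap buffer, so a reallocation can never leave it dangling — and, as a bonus, the borrow checker then statically forbids pushing to the vector while such an iterator is alive.

```rust
// Hypothetical iterator that borrows the whole Vec rather than
// caching a raw pointer to its (reallocatable) heap buffer.
struct SafeIter<'a, T> {
    vec: &'a Vec<T>, // one extra indirection, but the buffer can't move under us
    idx: usize,
}

impl<'a, T> Iterator for SafeIter<'a, T> {
    type Item = &'a T;
    fn next(&mut self) -> Option<&'a T> {
        let item = self.vec.get(self.idx); // bounds-checked lookup
        self.idx += 1;
        item
    }
}

fn main() {
    let v = vec![10, 20, 30];
    let it = SafeIter { vec: &v, idx: 0 };
    let sum: i32 = it.sum();
    assert_eq!(sum, 60);
}
```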

This would still cause problems with more complex situations involving multidimensional vectors, however.

“It’s effectively threaded”

Aliasing with mutability in a sufficiently complex, single-threaded program is effectively the same thing as accessing data shared across multiple threads without a lock

(The above is my paraphrase of someone else’s quote; I can’t find the original or remember who said it.)

Let’s step back a bit and figure out why we need locks in multithreaded programs. Given the way caches and memory work, we never need to worry about two threads writing to the same memory location simultaneously and coming up with a hybrid value, or a read happening halfway through a write.

What we do need to worry about is the rug being pulled out underneath our feet. A bunch of related reads/writes would have been written with some invariants in mind, and arbitrary reads/writes possibly happening between them would invalidate those invariants. For example, a bit of code might first read the length of a vector, and then go ahead and iterate through it with a regular for loop bounded on the length. The invariant assumed here is the length of the vector. If pop() was called on the vector in some other thread, this invariant could be invalidated after the read to length but before the reads elsewhere, possibly causing a segfault or use-after-free in the last iteration.

However, we can have a situation similar to this (in spirit) in single threaded code. Consider the following:

let x = some_big_thing();
let len = x.some_vec.len();
for i in 0..len {
    x.do_something_complicated(x.some_vec[i]);
}

We have the same invariant here; but can we be sure that x.do_something_complicated() doesn’t modify x.some_vec for some reason? In a complicated codebase, where do_something_complicated() itself calls a lot of other functions which may also modify x, this can be hard to audit.

Of course, the above example is a simplification and contrived; but it doesn’t seem unreasonable to assume that such bugs can happen in large codebases — where many methods being called have side effects which may not always be evident.

This means that large codebases have almost the same problem as threaded ones. It’s very hard to maintain invariants when one is not completely sure of what each line of code is doing. It’s possible to become sure of this by reading through the code (which takes a while), but further modifications may also have to do the same. It’s impractical to do this all the time, and eventually bugs will start cropping up.

On the other hand, having a static guarantee that this can’t happen is great. And when the code is too convoluted for a static guarantee (or you just want to avoid the borrow checker), a single-threaded RWlock-esque type called RefCell is available in Rust. It’s a type providing interior mutability and behaves like a runtime version of the borrow checker. Similar wrappers can be written in other languages.
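A minimal sketch of `RefCell`’s runtime borrow checking (note that `try_borrow_mut` is a later addition to std than Rust 1.0):

```rust
use std::cell::RefCell;

fn main() {
    let shared = RefCell::new(vec![1, 2, 3]);
    shared.borrow_mut().push(4); // runtime-checked "write lock"
    {
        let r = shared.borrow(); // runtime-checked "read lock"
        assert_eq!(r.len(), 4);
        // Taking a mutable borrow while `r` is alive is refused at runtime:
        assert!(shared.try_borrow_mut().is_err());
    }
    shared.borrow_mut().push(5); // `r` is gone, mutation is allowed again
    assert_eq!(shared.borrow().len(), 5);
}
```

A plain `borrow_mut()` in place of the `try_borrow_mut()` above would panic — the runtime analogue of the borrow checker’s compile error.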

Edit: For many primitives like simple integers, the problems with shared mutability turn out not to be a major issue. For these, we have a type called Cell which lets them be mutated and shared simultaneously. This works on all Copy types, i.e. types which only need to be copied on the stack to be copied (unlike types involving pointers or other indirection).
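A minimal sketch of `Cell` in action — shared aliases, mutation through any of them, no borrow conflicts:

```rust
use std::cell::Cell;

fn main() {
    let counter = Cell::new(0u32);
    let alias = &counter;           // freely shared; no &mut required
    counter.set(counter.get() + 1); // mutate through the original...
    alias.set(alias.get() + 1);     // ...and through the alias
    assert_eq!(counter.get(), 2);
}
```

This is safe because `get` copies the value out and `set` copies a whole new value in; no reference into the cell’s interior ever escapes.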

This sort of bug is a good source of reentrancy problems too.

Safe abstractions

In particular, the issue in the previous section makes it hard to write safe abstractions, especially with generic code. While this problem is clearer in the case of Rust (where abstractions are expected to be safe and preferably low-cost), this isn’t unique to any language.

Every method you expose has a contract that is expected to be followed. Many times, a contract is handled by type safety itself, or you may have some error-based model to throw out uncontractual data (for example, division by zero).

But, as an API (can be either internal or exposed) gets more complicated, so does the contract. It’s not always possible to verify that the contract is being violated at runtime either, for example many cases of iterator invalidation are hard to prevent in nontrivial code even with asserts.

It’s easy to create a method and add documentation “the first two arguments should not point to the same memory”. But if this method is used by other methods, the contract can change to much more complicated things that are harder to express or check. When generics get involved, it only gets worse; you sometimes have no way of forcing that there are no shared mutable aliases, or of expressing what isn’t allowed in the documentation. Nor will it be easy for an API consumer to enforce this.

This makes it harder and harder to write safe, generic abstractions. Such abstractions rely on invariants, and these invariants can often be broken by the problems in the previous section. It’s not always easy to enforce these invariants, and such abstractions will either be misused or not written in the first place, opting for a heavier option. Generally one sees that such abstractions or patterns are avoided altogether, even though they may provide a performance boost, because they are risky and hard to maintain. Even if the present version of the code is correct, someone may change something in the future breaking the invariants again.

My previous post outlines a situation where Rust was able to choose the lighter path in a situation where getting the same guarantees would be hard in C++.

Note that this is a wider problem than just mutable aliasing. Rust has this problem too, just not for mutable aliasing. Mutable aliasing is important to fix, however, because when there are no mutable aliases we can make a lot of assumptions about our program. Namely, by looking at a line of code we can know what happened with respect to the locals. If there is the possibility of mutable aliasing out there, there’s the possibility that other locals were modified too. A very simple example is:

fn look_ma_no_temp_var_l33t_interview_swap(x: &mut u32, y: &mut u32) {
    *x = *x + *y;
    *y = *x - *y;
    *x = *x - *y;
}
// or
fn look_ma_no_temp_var_rockstar_interview_swap(x: &mut u32, y: &mut u32) {
    *x = *x ^ *y;
    *y = *x ^ *y;
    *x = *x ^ *y;
}

In both cases, when the two references are the same3, instead of swapping, the two variables get set to zero. A user (internal to your library, or an API consumer) would expect swap() to not change anything when fed equal references, but this is doing something totally different. This assumption could get used in a program; for example instead of skipping the passes in an array sort where the slot is being compared with itself, one might just go ahead with it because swap() won’t change anything there anyway; but it does, and suddenly your sort function fills everything with zeroes. This could be solved by documenting the precondition and using asserts, but the documentation gets harder and harder as swap() is used in the guts of other methods.
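Safe Rust can’t actually produce two `&mut` references to the same slot, but we can simulate the aliased call with `Cell` (a hypothetical rewrite of the XOR version above) and watch the zeroing happen:

```rust
use std::cell::Cell;

// XOR swap rewritten over Cell so the two handles may legally alias.
fn xor_swap(x: &Cell<u32>, y: &Cell<u32>) {
    x.set(x.get() ^ y.get());
    y.set(x.get() ^ y.get());
    x.set(x.get() ^ y.get());
}

fn main() {
    let (a, b) = (Cell::new(5u32), Cell::new(7u32));
    xor_swap(&a, &b);
    assert_eq!((a.get(), b.get()), (7, 5)); // distinct slots: swapped

    let same = Cell::new(42u32);
    xor_swap(&same, &same);
    assert_eq!(same.get(), 0); // aliased slots: zeroed (42 ^ 42 == 0)
}
```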

Of course, the example above was contrived. It’s well known that those swap() implementations have that precondition, and shouldn’t be used in such cases. Also, in most swap algorithms it’s trivial to ignore cases when you’re comparing an element with itself, generally done by bounds checking.

But the example is a simplified sketch of the problem at hand.

In Rust, since this is statically checked, one doesn’t worry much about these problems, and robust APIs can be designed since knowing when something won’t be mutated can help simplify invariants.

Wrapping up

Aliasing that doesn’t fit the RWLock pattern is dangerous. If you’re using a language like Rust, you don’t need to worry. If you’re using a language like C++, it can cause memory unsafety, so be very careful. If you’re using a language like Java or Go, while it can’t cause memory unsafety, it will cause problems in complex bits of code.

This doesn’t mean that this problem should force you to switch to Rust, either. If you feel that you can avoid writing APIs where this happens, that is a valid way to go around it. This problem is much rarer in languages with a GC, so you might be able to avoid it altogether without much effort. It’s also okay to use runtime checks and asserts to maintain your invariants; performance isn’t everything.

But this is a real issue in programming, so make sure you think of it when designing your code.

Discuss: HN, Reddit


  1. Hereafter referred to as “The Question”

  2. Note: Str and Int are variant names which I chose; they are not keywords. Additionally, I’m using “associated foo” loosely here; Rust does have a distinct concept of “associated data” but it’s not relevant to this post.

  3. Note that this isn’t possible in Rust due to the borrow checker.

Emma IrwinTowards a Participation Standard

Participation at Mozilla is a personal journey: no story the same, no path identical, and while motivations may be similar at times, what sustains and rewards our participation is unique. Knowing this, it feels slightly ridiculous to use the visual of a ladder to model the richness of opportunity and the value/risk of ‘every step’. The impression that there is a single starting point, a single end and a predictable series of rigid steps between seems contrary to the journey.

Yet… the ‘ladder’ has always seemed to me like the perfect way to visualize the potential of ‘starting’. Even more importantly, I think ladders help people visualize how finishing a single step leads to greater things: greater impact, depth of learning and personal growth, among other things.

After numerous conversations (inside and outside Mozilla) on this topic, I’ve come to realize that the focus should be more on the rung or ‘step’, and not on building a rigid project-focused connection between them. In the spirit of our virtuous circle, I believe that being thoughtful and deliberate about step design lends itself to the emergence of personalized learning and participation pathways. “Cowpaths of participation”.


In designing steps, we also need to consider that not everyone needs to jump to a next thing, and that specializations and ‘depth’ exist in opportunities as well. Here’s the template I’m using to build participation steps right now:


* Realize I need to add ‘mentorship available’ as well.

This model (or an evolution of it), if adopted, could provide a way for contributors to traverse between projects and grow valuable skillsets and experience, with increasing impact on Mozilla’s mission. For example, if as a result of participating in the Marketpulse project I find my ‘place’ in User Research, I can also look for steps across the project in need of that skill, or offering ways to specialize even further. A Python developer, perhaps, can look for QA ‘steps’ after realizing the most enjoyable part of one project ladder was actually the QA process.

I created a set of Participation Personas to help me visualize the people we’re engaging, and what their unique perspectives, opportunities and risks are. I’m building these on the ‘side of my desk’, so only Lurking Lucinda has a full bio at the moment, but you can see all profiles in this document (feel free to add comments).

I believe all of this thinking and design has helped me build a compelling and engaging ladder for Marketpulse, where one of our goals is to sustain project-connection through learning opportunities.


In reality, though, while this can help us design really well for single projects, to actually support personalized ladders we need adoption across projects. At some point we just need to get together on standards that help us scale participation – a “Participation Standard”.

Last year I spent a lot of time working with a number of other open projects, trying to solve many of these same participation challenges present in Mozilla. And so I also dream that something like this can empower other projects in a similar way: where personalized learning and participation pathways can extend between Mozilla and other projects with missions people care about. Perhaps this is something Mark can consider in his thinking for ‘Building a Mozilla Academy‘.

Air MozillaWebdev Beer and Tell: May 2015

Webdev Beer and Tell: May 2015 Once a month web developers across the Mozilla community get together (in person and virtually) to share what cool stuff we've been working on in...

Armen Zambranomozci 0.6.0 - Trigger based on Treeherder filters, Windows support, flexible and encrypted password management

In this release of mozci we have a lot of developer facing improvements like Windows support or flexibility on password management.
We also have our latest experimental script mozci-triggerbyfilters (http://mozilla-ci-tools.readthedocs.org/en/latest/scripts.html#triggerbyfilters-py).

How to update

Run "pip install -U mozci" to update.

Notice

We have moved all scripts from scripts/ to mozci/scripts/.
Note that you can now use "pip install" and have all scripts available as mozci-name_of_script_here in your PATH.

Contributions

We want to welcome @KWierso as our latest contributor!
Our gratitude to @Gijs for reporting the Windows issues and for all his feedback.
Congratulations to @parkouss for making https://github.com/parkouss/mozbattue the first project using mozci as its dependency.
In this release we had @adusca and @vaibhavmagarwal as our main and very active contributors.

Major highlights

  • Added script to trigger jobs based on Treeherder filters
    • This allows using filters like --include "web-platform-tests" and that will trigger all matching builders
    • You can also use --exclude to exclude builders you don't want
  • With the new trigger by filters script you can preview what will be triggered:
233 jobs will be triggered, do you wish to continue? y/n/d (d=show details) d
05/15/2015 02:58:17 INFO: The following jobs will be triggered:
Android 4.0 armv7 API 11+ try opt test mochitest-1
Android 4.0 armv7 API 11+ try opt test mochitest-2
  • Remove storing passwords in plain-text (Sorry!)
    • We now prompt the user to choose whether to store their password encrypted
  • When you use "pip install" we will also install the main scripts as mozci-name_of_script_here binaries
    • This makes it easier to use the binaries in any location
  • Windows issues
    • The python module gzip.py is incapable of decompressing large binaries
    • Do not store buildjson on a temp file and then move

Minor improvements

  • Updated docs
  • Improve wording when triggering a build instead of a test job
  • Loosened up the python requirements from == to >=
  • Added filters to alltalos.py

All changes

You can see all changes in here:
0.5.0...0.6.0

Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

Joel MaherWatching the watcher – Some data on the Talos alerts we generate

What are the performance regressions at Mozilla, who monitors them, and what kind of regressions do we see? I want to answer these questions with a few peeks at the data. There are plenty of previous blog posts I have done outlining stats, trends, and the process. Let’s recap briefly what we do, then look at the breakdown of alerts (not necessarily bugs).

When Talos uploads numbers to graph server, they get stored and eventually run through a calculation loop to find regressions and improvements. As of Jan 1, 2015, we upload these to mozilla.dev.tree-alerts as well as email the offending patch author (if they can easily be identified). There are a couple of folks (performance sheriffs) who look at the alerts and triage them. If necessary, a bug is filed for further investigation. Reading this brief recap of what happens to our performance numbers probably doesn’t inspire folks; what is interesting is looking at the actual data we have.

Let’s start with some basic facts about alerts in the last 12 months:

  • We have collected 8232 alerts!
  • 4213 of those alerts are regressions (the rest are improvements)
  • 3780 of those above alerts have a manually marked status
    • the rest have been programmatically marked as merged and associated with the original
  • 278 bugs have been filed (or 17 alerts/bug)
    • 89 fixed!
    • 61 open!
    • 128 (5 invalid, 8 duplicate, 115 wontfix/worksforme)

As you can see, this is not a casual hobby; it is a real system helping us fix and understand hundreds of performance issues.

We generate alerts on a variety of branches; here is the breakdown of alerts per branch:

number of regression alerts we have received per branch


There are a few things to keep in mind here, mobile/mozilla-central/Firefox are the same branch, and for non-pgo branches that is only linux/windows/android, not osx. 

Looking at that graph is sort of uninspiring: most of the alerts land on fx-team and mozilla-inbound, then show up on the other branches as we merge code. We run more tests/platforms and land/back out changes more frequently on mozilla-inbound and fx-team, which is why they have a larger number of alerts.

Given that we have so many alerts and have manually triaged them, what state do the alerts end up in?

Current state of alerts


The interesting data point here is that 43% of our alerts are duplicates.  A few reasons for this:

  • we see an alert on non-pgo, then on pgo (we usually mark the pgo ones as duplicates)
  • we see an alert on mozilla-inbound, then the same alert shows up on fx-team,b2g-inbound,firefox (due to merging)
    • and then later we see the pgo versions on the merged branches
  • sometimes we retrigger or backfill to find the root cause, this generates a new alert many times
  • in a few cases we have landed/backed out/landed a patch and we end up with duplicate sets of alerts

The last piece of information that I would like to share is the break down of alerts per test:

Alerts per test

number of alerts per test (some excluded)

There are a few outliers, but we need to keep in mind that active work was being done in certain areas which would explain a lot of alerts for a given test.  There are 35 different test types which wouldn’t look good in an image, so I have excluded retired tests, counters, startup tests, and android tests.

Personally, I am looking forward to the next year as we transition some tools and do some hacking on the reporting, alert generation and overall process.  Thanks for reading!


Mozilla Reps CommunityReps Weekly Call – May 14th 2015

Last Thursday we had our weekly call about the Reps program, where we talk about what’s going on in the program and what Reps have been doing during the last week.

fox-elevator

Summary

  • Advocacy Task force
  • Featured Events
  • Council elections results
  • Help me with my project

AirMozilla video

Detailed notes

Advocacy Task force

Stacy and Jochai joined the call to hear Reps feedback on two new initiatives: Request for Policy Support & Advocacy Task Forces.

Request for Policy Support

The goal is to enable Mozillians to request support for policy in their local countries. Mozillians will be able to collaborate with a Mozilla Rep to submit an issue, which is reviewed and acted upon by the public policy team.

Prior to rollout, they will develop training for Mozilla Reps.

Advocacy task force

These task forces will be self-organized local groups focused on educating people about open Web issues and organizing action on regional political issues.

The members will partner with a Mozilla Rep and communicate using the Advocacy Discourse.

You can check the full presentation and send any feedback to smartin@mozilla.com and jochai@mozilla.com.

Featured Events of the week

In this new section we want to talk about some of the events happening this week.

Council elections results

Council elections are over and we have just received the results.

Three seats had to be renewed so the next new council members will be:

The on-boarding process will start now, and they should be fully integrated into the Council in the coming weeks. More announcements about this will be made in all Reps’ channels.

Help me with my project!

In this new section, the floor is yours to present in 1 minute a project you are working on and ask other Reps for help and support.

If you can’t make the call, you can add your project and a link with more information and we’ll read it for you during the call.

On this occasion we talked about:

  • FSA WoMoz recruitment campaign – Manuela
    • She needs feedback on the project.
  • Mozilla Festival East Africa – Lawrence
    • They need help promoting and getting partners.
  • Marketpulse – Emma
    • She needs help with outreach in Brazil, Mexico, India, Bangladesh, Philippines, Russia, Colombia

More details in the pad.

For next week

Next week Greg will join this call to talk about the Shape of the Web project. Please check the presentation on Air Mozilla (starting at min. 15) and the site, and gather all the questions you might have.

Amira and the Webmaker team will also be on next week’s call; check her email on reps-general and gather questions too.

Full raw notes.

Don’t forget to comment about this call on Discourse and we hope to see you next week!

Robert O'CallahanUsing rr To Debug Dropped Video Frames In Gecko

Lately I've been working on a project to make video rendering smoother in Firefox, by sending our entire queue of decoded timestamped frames to the compositor and making the compositor responsible for choosing the correct video frame to display every time we composite the window. I've been running a testcase with a 60fps video, which should draw each video frame exactly once, with logging to show when that isn't the case (i.e. we dropped or duplicated a frame). rr is excellent for debugging such problems! It's low overhead so it doesn't change the results much. After recording a run, I examine the log to identify dropped or dup'ed frames, and then it's easy to use rr to replay the execution and figure out exactly why each frame was dropped or dup'ed. Using a regular interactive debugger to debug such issues is nigh-impossible since stopping the application in the debugger totally messes up the timing --- and you don't know which frame is going to have a problem, so you don't know when to stop anyway.

I've been using rr on optimized Firefox builds because debug builds are too slow for this work, and it turns out rr with reverse execution really helps debugging optimized code. One of the downsides of debugging optimized code is the dreaded "Value optimized out" errors you often get trying to print values in gdb. When that happens under rr, you can nearly always find the correct value by doing "reverse-step" or "reverse-next" until you reach a program point where the variable wasn't optimized out.

I've found it's taking me some time to learn to use reverse execution effectively. Finding the fastest way to debug a problem is a challenge, because reverse execution makes new and much more effective strategies available that I'm not used to having. For example, several times I've found myself exploring the control flow of a function invocation by running (forwards or backwards) to the start of a function and then stepping forwards from there, because that's how I'm used to working, when it would be more effective to set a breakpoint inside the function and then reverse-next a few times to see how we got there.

But even though I'm still learning, debugging is much more fun now!

Pascal FinetteAn Open Letter to Fast Company: Drop the Sexism

Dear Chris Gayomali, Dear Fast Company-Team,

I am extremely disappointed to see you, a publication that claims to "inspire a new breed of innovative and creative thought leaders who are actively inventing the future of business", perpetuate a view of the world and of fellow entrepreneurs that is sexist and one-dimensional.

Your recent article on Birchbox, the phenomenally successful ecommerce startup founded by Katia Beauchamp and Hayley Barna, starts out with the following sentence:

"Birchbox’s co–CEO, wearing a dark monochrome dress that provides an understated canvas for her impeccable jewelry game [...]"

Reducing Katia to her "impeccable jewelry game" is offensive and sexist. I am sure you would never start out an article about a male founder with a statement about his "impeccable tie game".

I write this as a white male: it offends me that you (and many of your colleagues) reduce my women entrepreneur colleagues to their choice of fashion instead of their incredible achievements.

Please live up to your motto and see people for what they do - not their gender, ethnicity or any other superficial distinction.

The Mozilla BlogFirst Panasonic Smart TVs powered by Firefox OS Debut Worldwide

The first Panasonic VIERA Smart TVs powered by Firefox OS are now available in Europe and will be available worldwide in the coming months.

“Through our partnership with Mozilla and the openness and flexibility of Firefox OS, we have been able to create a more user friendly and customizable TV UI. This allows us to provide a better user experience for our consumers, providing a differentiator in the Smart TV market,” said Masahiro Shinada, Director of the TV Business Division at Panasonic Corporation.

The Panasonic 2015 Smart TV lineup includes these models powered by Firefox OS: CR850, CR730, CX800, CX750, CX700 and CX680 (models vary by country).

“We’re happy to partner with Panasonic to bring the first Smart TVs powered by Firefox OS to the world,” said Andreas Gal, Mozilla CTO. “With Firefox and Firefox OS powered devices, users can enjoy a custom and connected Web experience and take their favorite content (apps, videos, photos, websites) across devices without being locked into one proprietary ecosystem or brand.”

Panasonic Smart TVs powered by Firefox OS are optimized for HTML5 to provide strong performance of Web apps and come with a new intuitive and customizable user interface which allows quick access to favorite channels, apps, websites and content on other devices. Through Mozilla-pioneered WebAPIs, developers can leverage the flexibility of the Web to create customized and innovative apps and experiences across connected devices.

Firefox OS is the first truly open mobile platform built entirely on Web technologies, bringing more choice and control to users, developers, operators and hardware manufacturers.

The Servo BlogThis Week In Servo 32

In the past three weeks, we merged 141 pull requests.

Samsung OSG published another blog post by Lars and Mike. This one focuses on Servo’s support for embedding via the CEF API.

The Rust upgrade of doom is finally over. This brings us up to a Rust version from late April. We’ve now cleared all of the pre-1.0 breaking changes!

Firefox Nightly now has experimental support for components written in Rust. There’s a patch up to use Servo’s URL parser, and another team is working on media libraries.

Notable additions

New contributors

  • Emilio Cobos Álvarez
  • Allen Chen
  • Andrew Foote
  • William Galliher
  • Jinank Jain
  • Rucha Jogaikar
  • Cyryl Płotnicki-Chudyk
  • Jinwoo Song
  • Jacob Taylor-Hindle
  • Shivaji Vidhale

Screenshots

Having previously conquered rectangles, Servo’s WebGL engine is now capable of drawing a triangle inside a rectangle:

Meetings

We’ve switched from Critic to Reviewable and it’s working pretty well.

Mozillians will be gathering in Whistler, BC next month, and we’ve started planning out how the Servo team will participate. We’re going to run Rust and Servo training sessions, as well as meetings with other teams to plan for the shared future of Gecko and Servo.

Aside from those ongoing topics, here’s the breakdown by date of what we’ve discussed:

April 27

  • Intermittent test failures on the builders
  • We talked about what it would take to use Bugzilla instead of GitHub Issues.
  • We discussed what to blog about next; suggestions are welcome!

May 4

  • The Rust upgrade of doom

May 11

  • Discussion with Brian Birtles about the emerging Web Animations API
  • We’re going to start assigning PRs to their reviewers on GitHub.
  • Status update on Rust in Gecko. The Gecko teams are doing most of the work :D
  • We talked about issues with the switch to Piston’s image library.

The Rust Programming Language BlogAnnouncing Rust 1.0

Today we are very proud to announce the 1.0 release of Rust, a new programming language aiming to make it easier to build reliable, efficient systems. Rust combines low-level control over performance with high-level convenience and safety guarantees. Better yet, it achieves these goals without requiring a garbage collector or runtime, making it possible to use Rust libraries as a “drop-in replacement” for C. If you’d like to experiment with Rust, the “Getting Started” section of the Rust book is your best bet (if you prefer to use an e-reader, Pascal Hertleif maintains unofficial e-book versions as well).

What makes Rust different from other languages is its type system, which represents a refinement and codification of “best practices” that have been hammered out by generations of C and C++ programmers. As such, Rust has something to offer for both experienced systems programmers and newcomers alike: experienced programmers will find they save time they would have spent debugging, whereas newcomers can write low-level code without worrying about minor mistakes leading to mysterious crashes.
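As a toy illustration of that codification (my example, not from the announcement): the ownership and borrowing rules let a function read a `String` without taking ownership of it, so a whole class of use-after-move and use-after-free mistakes becomes a compile-time error rather than a mysterious crash.

```rust
// Borrowing: `measure` takes a shared reference, so `s` is not moved
// and remains usable afterwards. If `measure` took `String` by value,
// the final `println!` would be a compile-time error, not a crash.
fn measure(text: &str) -> usize {
    text.len()
}

fn main() {
    let s = String::from("hello");
    let len = measure(&s); // borrow, don't move
    assert_eq!(len, 5);
    println!("{} is {} bytes long", s, len); // `s` still valid here
}
```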

What does it mean for Rust to be 1.0?

The current Rust language is the result of a lot of iteration and experimentation. The process has worked out well for us: Rust today is both simpler and more powerful than we originally thought would be possible. But all that experimentation also made it difficult to maintain projects written in Rust, since the language and standard library were constantly changing.

The 1.0 release marks the end of that churn. This release is the official beginning of our commitment to stability, and as such it offers a firm foundation for building applications and libraries. From this point forward, breaking changes are largely out of scope (some minor caveats apply, such as compiler bugs).

That said, releasing 1.0 doesn’t mean that the Rust language is “done”. We have many improvements in store. In fact, the Nightly builds of Rust already demonstrate improvements to compile times (with more to come) and include work on new APIs and language features, like std::fs and associated constants.

To help ensure that compiler and language improvements make their way out into the ecosystem at large as quickly as possible, we’ve adopted a train-based release model. This means that we’ll be issuing regular releases every six weeks, just like the Firefox and Chrome web browsers. To kick off that process, we are also releasing Rust 1.1 beta today, simultaneously with Rust 1.0.

Cargo and crates.io

Building a real project is about more than just writing code – it’s also about managing dependencies. Cargo, the Rust package manager and build system, is designed to make this easy. Using Cargo, downloading and installing new libraries is as simple as adding one line to your manifest.
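For example, a minimal manifest might look like this (the crate name and the `rand` version shown are illustrative); the single line under `[dependencies]` is all it takes to pull in a library from crates.io:

```toml
[package]
name = "hello"
version = "0.1.0"

[dependencies]
rand = "0.3"
```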

Of course, to use a dependency, you first have to find it. This is where crates.io comes in – crates.io is a central package repository for Rust code. It makes it easy to search for other people’s packages or to publish your own.

Since we announced cargo and crates.io approximately six months ago, the number of packages has been growing steadily. Nonetheless, it’s still early days, and there are still lots of great packages yet to be written. If you’re interested in building a library that will take the Rust world by storm, there’s no time like the present!

Open Source and Open Governance

Rust has been an open-source project from the start. Over the last few years, we’ve been constantly looking for ways to make our governance more open and community driven. Since we introduced the RFC process a little over a year ago, all major decisions about Rust are written up and discussed in the open in the form of an RFC. Recently, we adopted a new governance model, which establishes a set of subteams, each responsible for RFCs in one particular area. If you’d like to help shape the future of Rust, we encourage you to get involved, either by uploading libraries to crates.io, commenting on RFCs, or writing code for Rust itself.

We’d like to give a special thank you to the following people, each of whom contributed changes since our previous release (the complete list of contributors is here):

  • Aaron Gallagher <_@habnab.it>
  • Aaron Turon <aturon@mozilla.com>
  • Abhishek Chanda <abhishek@cloudscaling.com>
  • Adolfo Ochagavía <aochagavia92@gmail.com>
  • Alex Burka <durka42+github@gmail.com>
  • Alex Crichton <alex@alexcrichton.com>
  • Alex Quach <alex@clinkle.com>
  • Alexander Polakov <plhk@sdf.org>
  • Andrea Canciani <ranma42@gmail.com>
  • Andreas Martens <andreasm@fastmail.fm>
  • Andreas Tolfsen <ato@mozilla.com>
  • Andrei Oprea <andrei.br92@gmail.com>
  • Andrew Paseltiner <apaseltiner@gmail.com>
  • Andrew Seidl <dev@aas.io>
  • Andrew Straw <strawman@astraw.com>
  • Andrzej Janik <vosen@vosen.pl>
  • Aram Visser <aramvisser@gmail.com>
  • Ariel Ben-Yehuda <arielb1@mail.tau.ac.il>
  • Augusto Hack <hack.augusto@gmail.com>
  • Avdi Grimm <avdi@avdi.org>
  • Barosl Lee <vcs@barosl.com>
  • Ben Ashford <ben@bcash.org>
  • Ben Gesoff <ben.gesoff@gmail.com>
  • Björn Steinbrink <bsteinbr@gmail.com>
  • Brad King <brad.king@kitware.com>
  • Brendan Graetz <github@bguiz.com>
  • Brett Cannon <brettcannon@users.noreply.github.com>
  • Brian Anderson <banderson@mozilla.com>
  • Brian Campbell <lambda@continuation.org>
  • Carlos Galarza <carloslfu@gmail.com>
  • Carol (Nichols || Goulding) <carol.nichols@gmail.com>
  • Carol Nichols <carol.nichols@gmail.com>
  • Chris Morgan <me@chrismorgan.info>
  • Chris Wong <lambda.fairy@gmail.com>
  • Christopher Chambers <chris.chambers@peanutcode.com>
  • Clark Gaebel <cg.wowus.cg@gmail.com>
  • Cole Reynolds <cpjreynolds@gmail.com>
  • Colin Walters <walters@verbum.org>
  • Conrad Kleinespel <conradk@conradk.com>
  • Corey Farwell <coreyf@rwell.org>
  • Dan Callahan <dan.callahan@gmail.com>
  • Dave Huseby <dhuseby@mozilla.com>
  • David Reid <dreid@dreid.org>
  • Diggory Hardy <github@dhardy.name>
  • Dominic van Berkel <dominic@baudvine.net>
  • Dominick Allen <dominick.allen1989@gmail.com>
  • Don Petersen <don@donpetersen.net>
  • Dzmitry Malyshau <kvarkus@gmail.com>
  • Earl St Sauver <estsauver@gmail.com>
  • Eduard Burtescu <edy.burt@gmail.com>
  • Erick Tryzelaar <erick.tryzelaar@gmail.com>
  • Felix S. Klock II <pnkfelix@pnkfx.org>
  • Florian Hahn <flo@fhahn.com>
  • Florian Hartwig <florian.j.hartwig@gmail.com>
  • Franziska Hinkelmann <franziska.hinkelmann@gmail.com>
  • FuGangqiang <fu_gangqiang@163.com>
  • Garming Sam <garming_sam@outlook.com>
  • Geoffrey Thomas <geofft@ldpreload.com>
  • Geoffry Song <goffrie@gmail.com>
  • Gleb Kozyrev <gleb@gkoz.com>
  • Graydon Hoare <graydon@mozilla.com>
  • Guillaume Gomez <guillaume1.gomez@gmail.com>
  • Hajime Morrita <omo@dodgson.org>
  • Hech <tryctor@gmail.com>
  • Heejong Ahn <heejongahn@gmail.com>
  • Hika Hibariya <hibariya@gmail.com>
  • Huon Wilson <dbau.pp+github@gmail.com>
  • Igor Strebezhev <xamgore@ya.ru>
  • Isaac Ge <acgtyrant@gmail.com>
  • J Bailey <jj2baile@uwaterloo.ca>
  • Jake Goulding <jake.goulding@gmail.com>
  • James Miller <bladeon@gmail.com>
  • James Perry <james.austin.perry@gmail.com>
  • Jan Andersson <jan.andersson@gmail.com>
  • Jan Bujak <j@exia.io>
  • Jan-Erik Rediger <janerik@fnordig.de>
  • Jannis Redmann <mail@jannisr.de>
  • Jason Yeo <jasonyeo88@gmail.com>
  • Johann <git@johann-hofmann.com>
  • Johann Hofmann <git@johann-hofmann.com>
  • Johannes Oertel <johannes.oertel@uni-due.de>
  • John Gallagher <jgallagher@bignerdranch.com>
  • John Van Enk <vanenkj@gmail.com>
  • Jonathan S <gereeter+code@gmail.com>
  • Jordan Humphreys <mrsweaters@users.noreply.github.com>
  • Joseph Crail <jbcrail@gmail.com>
  • Josh Triplett <josh@joshtriplett.org>
  • Kang Seonghoon <kang.seonghoon@mearie.org>
  • Keegan McAllister <kmcallister@mozilla.com>
  • Kelvin Ly <kelvin.ly1618@gmail.com>
  • Kevin Ballard <kevin@sb.org>
  • Kevin Butler <haqkrs@gmail.com>
  • Kevin Mehall <km@kevinmehall.net>
  • Krzysztof Drewniak <krzysdrewniak@gmail.com>
  • Lee Aronson <lee@libertad.ucsd.edu>
  • Lee Jeffery <leejeffery@gmail.com>
  • Liam Monahan <liam@monahan.io>
  • Liigo Zhuang <com.liigo@gmail.com>
  • Luke Gallagher <luke@hypergeometric.net>
  • Luqman Aden <me@luqman.ca>
  • Manish Goregaokar <manishsmail@gmail.com>
  • Manuel Hoffmann <manuel@polythematik.de>
  • Marin Atanasov Nikolov <dnaeon@gmail.com>
  • Mark Mossberg <mark.mossberg@gmail.com>
  • Marvin Löbel <loebel.marvin@gmail.com>
  • Mathieu Rochette <mathieu@rochette.cc>
  • Mathijs van de Nes <git@mathijs.vd-nes.nl>
  • Matt Brubeck <mbrubeck@limpet.net>
  • Michael Alexander <beefsack@gmail.com>
  • Michael Macias <zaeleus@gmail.com>
  • Michael Park <mcypark@gmail.com>
  • Michael Rosenberg <42micro@gmail.com>
  • Michael Sproul <micsproul@gmail.com>
  • Michael Woerister <michaelwoerister@gmail>
  • Michael Wu <mwu@mozilla.com>
  • Michał Czardybon <mczard@poczta.onet.pl>
  • Mickaël Salaün <mic@digikod.net>
  • Mike Boutin <mike.boutin@gmail.com>
  • Mike Sampson <mike@sambodata.com>
  • Ms2ger <ms2ger@gmail.com>
  • Nelo Onyiah <nelo.onyiah@gmail.com>
  • Nicholas <npmazzuca@gmail.com>
  • Nicholas Mazzuca <npmazzuca@gmail.com>
  • Nick Cameron <ncameron@mozilla.com>
  • Nick Hamann <nick@wabbo.org>
  • Nick Platt <platt.nicholas@gmail.com>
  • Niko Matsakis <niko@alum.mit.edu>
  • Oak <White-Oak@users.noreply.github.com>
  • Oliver Schneider <github6541940@oli-obk.de>
  • P1start <rewi-github@whanau.org>
  • Pascal Hertleif <killercup@gmail.com>
  • Paul Banks <banks@banksdesigns.co.uk>
  • Paul Faria <Nashenas88@users.noreply.github.com>
  • Paul Quint <DrKwint@gmail.com>
  • Pete Hunt <petehunt@users.noreply.github.com>
  • Peter Marheine <peter@taricorp.net>
  • Phil Dawes <phil@phildawes.net>
  • Philip Munksgaard <pmunksgaard@gmail.com>
  • Piotr Czarnecki <pioczarn@gmail.com>
  • Piotr Szotkowski <chastell@chastell.net>
  • Poga Po <poga.bahamut@gmail.com>
  • Przemysław Wesołek <jest@go.art.pl>
  • Ralph Giles <giles@mozilla.com>
  • Raphael Speyer <rspeyer@gmail.com>
  • Remi Rampin <remirampin@gmail.com>
  • Ricardo Martins <ricardo@scarybox.net>
  • Richo Healey <richo@psych0tik.net>
  • Rob Young <rob.young@digital.cabinet-office.gov.uk>
  • Robin Kruppe <robin.kruppe@gmail.com>
  • Robin Stocker <robin@nibor.org>
  • Rory O’Kane <rory@roryokane.com>
  • Ruud van Asseldonk <dev@veniogames.com>
  • Ryan Prichard <ryan.prichard@gmail.com>
  • Scott Olson <scott@scott-olson.org>
  • Sean Bowe <ewillbefull@gmail.com>
  • Sean McArthur <sean.monstar@gmail.com>
  • Sean Patrick Santos <SeanPatrickSantos@gmail.com>
  • Seo Sanghyeon <sanxiyn@gmail.com>
  • Shmuale Mark <shm.mark@gmail.com>
  • Simon Kern <simon.kern@rwth-aachen.de>
  • Simon Sapin <simon@exyr.org>
  • Simonas Kazlauskas <git@kazlauskas.me>
  • Sindre Johansen <sindre@sindrejohansen.no>
  • Skyler <skyler.lipthay@gmail.com>
  • Steve Klabnik <steve@steveklabnik.com>
  • Steven Allen <steven@stebalien.com>
  • Swaroop C H <swaroop@swaroopch.com>
  • Sébastien Marie <semarie@users.noreply.github.com>
  • Tamir Duberstein <tamird@gmail.com>
  • Tero Hänninen <tejohann@kapsi.fi>
  • Theo Belaire <theo.belaire@gmail.com>
  • Theo Belaire <tyr.god.of.war.42@gmail.com>
  • Thiago Carvalho <thiago.carvalho@westwing.de>
  • Thomas Jespersen <laumann.thomas@gmail.com>
  • Tibor Benke <ihrwein@gmail.com>
  • Tim Cuthbertson <tim@gfxmonk.net>
  • Tincan <tincann@users.noreply.github.com>
  • Ting-Yu Lin <aethanyc@gmail.com>
  • Tobias Bucher <tobiasbucher5991@gmail.com>
  • Toni Cárdenas <toni@tcardenas.me>
  • Tshepang Lekhonkhobe <tshepang@gmail.com>
  • Ulrik Sverdrup <root@localhost>
  • Vadim Chugunov <vadimcn@gmail.com>
  • Vadim Petrochenkov <vadim.petrochenkov@gmail.com>
  • Valerii Hiora <valerii.hiora@gmail.com>
  • Wangshan Lu <wisagan@gmail.com>
  • Wei-Ming Yang <rick68@users.noreply.github.com>
  • Will <will@glozer.net>
  • Will Hipschman <whipsch@gmail.com>
  • Wojciech Ogrodowczyk <github@haikuco.de>
  • Xue Fuqiao <xfq.free@gmail.com>
  • Xuefeng Wu <xfwu@thoughtworks.com>
  • York Xiang <bombless@126.com>
  • Young Wu <doomsplayer@gmail.com>
  • bcoopers <coopersmithbrian@gmail.com>
  • critiqjo <john.ch.fr@gmail.com>
  • diwic <diwic@users.noreply.github.com>
  • fenduru <fenduru@users.noreply.github.com>
  • gareins <ozbolt.menegatti@gmail.com>
  • github-monoculture <eocene@gmx.com>
  • inrustwetrust <inrustwetrust@users.noreply.github.com>
  • jooert <jooert@users.noreply.github.com>
  • kgv <mail@kgv.name>
  • klutzy <klutzytheklutzy@gmail.com>
  • kwantam <kwantam@gmail.com>
  • leunggamciu <gamciuleung@gmail.com>
  • mdinger <mdinger.bugzilla@gmail.com>
  • nwin <nwin@users.noreply.github.com>
  • pez <james.austin.perry@gmail.com>
  • robertfoss <dev@robertfoss.se>
  • rundrop1 <rundrop1@zoho.com>
  • sinkuu <sinkuupump@gmail.com>
  • tynopex <tynopex@users.noreply.github.com>
  • Łukasz Niemier <lukasz@niemier.pl>
  • らいどっと <ryogo.yoshimura@gmail.com>

Daniel StenbergRFC 7540 is HTTP/2

HTTP/2 is the new protocol for the web, as I trust everyone reading my blog is fully aware by now. (If you’re not, read http2 explained.)

Today RFC 7540 was published, the final outcome of the years of work put into this by the tireless heroes in the HTTPbis working group of the IETF. Closely related to the main RFC is the one detailing HPACK, which is the header compression algorithm used by HTTP/2 and that is now known as RFC 7541.

The IETF part of this journey started pretty much with Mike Belshe’s posting of draft-mbelshe-httpbis-spdy-00 in February 2012. Google’s SPDY effort had been going on for a while before it was taken to the httpbis working group in the IETF, where a few different proposals on how to kick off the HTTP/2 work were debated.

HTTP team working in London

The first “httpbis’ified” version of that document (draft-ietf-httpbis-http2-00) was then published on November 28 2012, and the standardization work began for real. HTTP/2 was of course discussed a lot on the mailing list from the start, at the IETF meetings, and also in interim meetings around the world.

In Zurich, in January 2014 there was one that I only attended remotely. We had the design team meeting in London immediately after IETF89 (March 2014) in the Mozilla offices just next to Piccadilly Circus (where I took the photos that are shown in this posting). We had our final in-person meetup with the HTTP team at Google’s offices in NYC in June 2014 where we ironed out most of the remaining issues.

In between those two last meetings I published my first version of http2 explained: my attempt at a lengthy and very detailed description of HTTP/2, including the problems with HTTP/1.1 and the motivations for HTTP/2. I’ve since published eleven updates.

HTTP team in London, debating protocol details

The last draft update of HTTP/2 that contained actual changes to the binary format was draft-14, published in July 2014. After that, the updates were about language and clarifications on what to do when. There are some functional changes (added in -16, I believe), such as when particular sorts of frames are accepted, that affect what a state machine should do, but they don’t change how the protocol looks on the wire.

RFC 7540 was published on May 15th, 2015

I’ve truly enjoyed having had the chance to be a part of this. There are a bunch of good people who made this happen and while I am most certainly forgetting key persons, some of the peeps that have truly stood out are: Mark, Julian, Roberto, Roy, Will, Tatsuhiro, Patrick, Martin, Mike, Nicolas, Mike, Jeff, Hasan, Herve and Willy.


Mozilla Science LabGet involved: Contributorship Badges project call – May 20

Since our launch two years ago, we’ve been working on prototypes to not only address meaty problems in the research space, but also show how the web and open technology can further scientific tools. We believe there’s a tremendous amount one can learn about reuse and working collaboratively through open source projects, and want to (more explicitly) test that out as we develop our next prototype. On May 20th (next Wednesday) at 11am ET, I’ll be hosting a call for anyone interested in contributing to the Contributorship Badges project. We hope you’ll join us.

WHAT: Call for anyone interested in helping build Contributorship Badges for Science
WHEN: May 20, 11am ET
PAD: https://etherpad.mozilla.org/sciencelab-project-call-may20

What we’re trying: learn by building

We want to help researchers learn how to leverage the web by building within the open source community. By encouraging participation in an ongoing project, more researchers will experience best practices in open source and learn how to run their own projects in a way that engages the community and builds skills. This builds on a core belief of our pedagogy here at Mozilla: that the most transformative moments often come from working on things that matter, building them together, and empowering others to get involved.

By learning best practices in open source, we can foster a community in research that shares skills, increases code quality and encourages discoverability and use through collaboration.

What are Contributorship Badges?

The Contributorship Badges project was announced late last year as a collaboration testing out the use of badges as a standardized digital credential for the work done by each author on an academic article. Since then, we’ve started building a prototype that uses Mozilla’s Open Badges infrastructure to store and issue badges to an author’s ORCID (a unique researcher identifier) based on their contribution to a paper published by BioMed Central (BMC) or Public Library of Science (PLoS). The badges awarded are based on the 14-role author contribution ‘taxonomy’ being developed by the Wellcome Trust, Digital Science and others.

I’ve worked on setting up the bare-bones skeleton and general plan of this project, including getting started and contributing guidelines. Next wednesday, I’ll be hosting a call for anyone who wants to get involved where I’ll go over what’s been done and where people can jump in. I plan on spending majority of my time on badges mentoring new contributors and reviewing patches.

The Mozilla Science Lab will still be leading development on the prototype. I’d like to see more researchers contributing code, design, and other suggestions and experiencing being a part of an open source project.

Join us May 20

This call is open to designers, developers, researchers, publishers — pretty much anyone interested in participating and learning more about the project. We could use all sorts of input on this (beyond just code), so if you’re new to open source, come join us – we’d love to have you involved. We want to give researchers a painless way to experience open, iterative workflow and learn how open source can help their community.

Building Contributorship Badges for Science has the potential to be a great learning experience for us. I hope you’ll join us as we learn and build together!

Further reading

Michael KaplyFirst Beta of CCK2 2.1 Available

The first beta of the next CCK2 is available here.

This upgrade has three main areas of focus:

  1. Support for the new in-content preferences
  2. Remove the need for the distribution directory (except in the case of disabling safe mode)
  3. Support for new Firefox 38 features (not done yet)

Removing support for the distribution directory was a major internal change, so I would appreciate any testing you can do.

My plan is to finish support for a few Firefox 38 specific features and then release next week.

Soledad Penades“The disconnected ensemble”, at JSConf.Budapest

Here I am in Budapest (for the first time ever 😮)! I’m back in the hotel after having a quick dinner on my own. I didn’t join the party because I had a massive headache and also I was getting so sleepy, no coffee could fight that (also probably the two things were related). But once I started wandering towards my hotel I found myself feeling so much better, and stumbled upon a cosy nice place and ended up stopping there for some food.

When I came back from the speakers’ dinner yesterday, I practiced setting up all my stuff and going through the demos again, which are in fact run on real, physical devices, i.e. phones.

Because these are early days, things are sometimes a bit flimsy. Although the setup is generally quite robust, we shouldn’t forget that I’m running Nightly builds and using pre-production APIs which are barely documented. I’m probably one of the few developers using those APIs without also having written their implementations, so I’m surely doing things wrong.

And yesterday night was one of those “everything is crashing and I hate my life” moments:

  • The “master device” i.e. the Nexus 4 that acts as provider of musical toys got confused and wouldn’t stop serving me Animated GIFs of cats. Which is good but not what I wanted.
  • So I uninstalled the catserver app, but then the app with the musical toys disappeared as well (wut!?)
  • I connected it to my laptop to push the app again. The laptop would not find the device. Turns out it was the USB cable, so replacing it with another one was enough. Still, panic: what if adb has stopped working completely and I can’t push the app to the phone!!!???
  • Then my two Flames refused to connect to the WiFi. No, make it that they refused to even enable the WiFi. Thankfully, I fixed that by restarting (like so many things in life).
  • But then one of the Flames would report NFC available and up, but would not respond to NFC bumps. I somehow fixed that by removing the cover and putting it back again after gently caressing the NFC antenna contacts (wut?!)
  • Then I found that bumping the Nexus 4 against my Nexus 5 would end in either no result, or the Nexus 4 restarting Gaia (i.e. not a hard restart, just the UI)

There were more things but those were the most terrifying. I fixed everything and went to sleep, but I kept waking up every 2 hours with my brain going all…

  • this is going to crash on stage
  • the Nexus 4 will run out of battery and you’ll be done
  • the Nexus 5 will give up being a hotspot in your face
  • the WiFi will be jammed by other people in the conference
  • your slides will crash
  • this will teach you not to commit yourself to live demos next time
  • why hardware demos?
  • etc etc etc

And I kept telling my brain:

CHILL
DOWN.

But it didn’t really work because it kept waking me up. Urgh.

Turns out that everything has gone quite smoothly, nothing has crashed, and also the organisers of the conference have been super great, because they didn’t forget my request for a table with a camera pointing to it, in which I could install all the equipment and people in the audience could see what was going on, so it’s been super coooool. Thanks, organisers! 😀

I also want to thank Sufian for being willing to join this pop up band and play JS-Theremin without any hesitation! Yay! 😎

If you’re intrigued about this live act, I’m doing this talk next week in Manchester for UpFrontConf and in Melbourne for CampJS. Also, yes: I’m going to spend an incredible amount of time travelling, but I’m also super excited because I haven’t been to Manchester in almost 20 years and I’ve never been to Australia sooooo I’m all pumped. WEEEEEEEEEEEEE! 🎉🎆

