Henrik Skupin: Firefox Automation report – week 51/52 2014

In this post you can find an overview of the work that happened in the Firefox Automation team during weeks 51 and 52 of 2014. I’m sorry for this very late post, but changes to our team, which I will get to in my next post, left me with a lot of extra work and no time for writing status reports.

Highlights

Henrik started work towards a Mozmill 2.1 release. To get the latest Mozmill code on master working again, he first had to upgrade a couple of mozbase packages. Once that was done, the patch for handling parent sections in manifest files finally landed; it was originally written by Andrei Eftimie and had been sitting around for a while. That addition allows us to use mozhttpd for serving test data via a local HTTP server. Last but not least, another important feature went in that lets us better handle application disconnects. There are still some more bugs to fix before we can actually release version 2.1 of Mozmill.
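As a rough illustration of the mozhttpd part (a sketch only; the directory name and arguments are illustrative and not taken from the Mozmill patch), serving local test data is just a few lines of Python:

import mozhttpd

# Serve the files under ./test-data on a free local port for the test run
httpd = mozhttpd.MozHttpd(port=0, docroot="test-data")
httpd.start(block=False)
print("test data served at", httpd.get_url())
# ... run tests against httpd.get_url() ...
httpd.stop()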

Given that we only have the capacity to fix the most important issues for the Mozmill test framework, Henrik started to mass-close existing Mozmill bugs, so only a handful of bugs will remain open. If there is something important you want to see fixed, we encourage you to start working on the appropriate bug.

For Mozmill CI we got the new Ubuntu 14.10 boxes up and running in our staging environment. Once we can be sure they are stable enough, they will also be enabled in production.

Individual Updates

For more granular updates of each individual team member please visit our weekly team etherpad for week 51 and week 52.

Meeting Details

If you are interested in further details and discussions you might also want to have a look at the meeting agenda, the video recording, and notes from the Firefox Automation meeting of week 51 and week 52.

Mozilla Open Policy & Advocacy Blog: CISA threatens Internet security and undermines user trust

Protecting the privacy of users and the information collected about them online is crucial to maintaining and growing a healthy and open Web. Unfortunately, there have been massive threats that weaken our ability to create the Web that we want to see. The most notable and recent example of this is the expansive surveillance practices of the U.S. government that were revealed by Edward Snowden. Even though it has been nearly two years since these revelations began, the U.S. Congress has failed to pass any meaningful surveillance reform, and is about to consider creating new surveillance authorities in the form of the Cybersecurity Information Sharing Act of 2015.

We opposed the Cyber Intelligence Sharing and Protection Act in 2012 – as did a chorus of privacy advocates, information security professionals, entrepreneurs, and leading academics, with the President ultimately issuing a veto threat. We believe the newest version of CISA is worse in many respects, and that the bill fundamentally undermines Internet security and user trust.

CISA is promoted as facilitating the sharing of cyber threat information, but:

  • is overbroad in scope, allowing virtually any type of information to be shared and to be used, retained, or further shared not just for cybersecurity purposes, but for a wide range of other offenses including arson and carjacking;
  • allows information to be shared automatically between civilian and military agencies including the NSA regardless of the intended purpose of sharing, which limits the capacity of civilian agencies to conduct and oversee the exchange of cybersecurity information between the private sector and sector-specific Federal agencies;
  • authorizes dangerous countermeasures that could seriously damage the Internet; and
  • provides blanket immunity from liability with shockingly insufficient privacy safeguards.

The lack of meaningful provisions requiring companies to strip out personal information before sharing with the government, problematic on its own, is made more egregious by the realtime sharing, data retention, lack of limitations, and sweeping permitted uses envisioned in the bill.

Unnecessary and harmful sharing of personal information is a very real and avoidable consequence of this bill. Even in those instances where sharing information for cybersecurity purposes is necessary, there is no reason to include users’ personal information. Threat indicators rarely encompass such details. Furthermore, it’s not a difficult or onerous process to strip out personal information before sharing. In the exceptional cases where personal information is relevant to the threat indicator, those details would be so relevant to mitigating the threat at hand that blanket immunity from liability for sharing would not be necessary.

We believe Congress should focus on reining in the NSA’s sweeping surveillance authority and practices. Concerns around information sharing are at best a small part of the problem that needs to be solved in order to secure the Internet and its users.

Daniel Stenberg: More HTTP framing attempts

Previously, in my exciting series “improving the HTTP framing checks in Firefox” we learned that I landed a patch, got it backed out, struggled to improve the checks and finally landed the fixed version only to eventually get that one backed out as well.

And now I’ve landed my third version. The amendment I made this time:

When receiving HTTP content that is content-encoded and compressed, I learned that with deflate compression there is basically no good way for us to know if the content gets prematurely cut off: such streams lack a footer too often for a check to make any sense. gzip streams, however, end with a footer, so it is easier to reliably detect when they are incomplete. (As was discovered before, the Content-Length: header is far too often not updated by the server, so it instead wrongly shows the uncompressed size.)
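To make the gzip half concrete, here is a small Python sketch (mine, not Daniel's code) showing how the gzip trailer lets you detect a truncated download, while a raw deflate stream offers nothing comparable:

import zlib

def gzip_is_complete(data: bytes) -> bool:
    # Inflate with the gzip wrapper; zlib verifies the CRC32/ISIZE
    # trailer for us and only reaches end-of-stream if it is present.
    d = zlib.decompressobj(16 + zlib.MAX_WBITS)
    try:
        d.decompress(data)
    except zlib.error:
        return False          # corrupt data or bad trailer
    return d.eof              # False => the stream was cut off early

# A raw deflate stream (wbits=-zlib.MAX_WBITS) carries no such trailer,
# so a truncated deflate download is much harder to detect reliably.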

This (deflate vs gzip) knowledge is now used by the patch, meaning that deflate compressed downloads can be cut off without the browser noticing…

Will this version of the fix actually stick? I don’t know. There’s lots of bad voodoo out there in the HTTP world and I’m putting my finger right in the middle of some of it with this change. I’m pretty sure I’ve not written my last blog post on this topic just yet… If it sticks this time, it should show up in Firefox 39.


This Week In Rust: This Week in Rust 72

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Send me an email! Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors or omissions in this week's issue, please submit a PR.

What's cooking on master?

135 pull requests were merged in the last week, and 1 RFC PR.

Now you can follow breaking changes as they happen!

Breaking Changes

Other Changes

New Contributors

  • defuz
  • FuGangqiang
  • JP-Ellis
  • lummax
  • Michał Krasnoborski
  • nwin
  • Raphael Nestler
  • Ryan Prichard
  • Scott Olson

Approved RFCs

Mysteriously, during the week of February 23 to March 1, no RFCs were approved for the Rust language.

New RFCs

Quote of the Week

"I must kindly ask that you please not go around telling people to disregard the rules of our community. Violations of Rule #6 will absolutely not be tolerated."

kibwen is serious about upholding community standards.

Notable Links

Project Updates

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email Erick Tryzelaar or Brian Anderson for access.

The Mozilla Blog: Firefox OS Proves Flexibility of Web: Ecosystem Expands with More Partners, Device Categories and Regions in 2015

Orange to bring Firefox OS to 13 new markets in Africa and Middle East; Mozilla, KDDI, LG U+, Telefónica and Verizon collaborate on new category of phones based on Firefox OS

Barcelona, Spain – Mobile World Congress – March 1st, 2015 – Mozilla, the mission-based organization dedicated to keeping the power of the Web in people’s hands, welcomed new partners and devices to the Firefox OS ecosystem at an event in Barcelona, leading into Mobile World Congress.

Mozilla President Li Gong summarized the status of Firefox OS, which currently scales across devices ranging from the world’s most affordable smartphone to 4K Ultra HD TVs. “Two years ago Firefox OS was a promise. At MWC 2014, we were able to show that Firefox OS scales across price ranges and form factors. Today, at MWC 2015, we celebrate dozens of successful device launches across continents, adoption of Firefox OS beyond mobile, as well as growing interest and innovation around the only truly open mobile platform. Also, we are proud to report that three major chip vendors contribute to the Firefox OS ecosystem.”

Firefox OS MWC 2015 News in Detail:

•    Mozilla, KDDI, LG U+, Telefonica and Verizon Wireless collaborate to create a new category of intuitive and easy to use Firefox OS phones: The companies are collaborating to contribute to the Mozilla community and create a new range of Firefox OS phones for a 2016 launch in various form factors – flips, sliders and slates – that balance the simplicity of a basic phone (calls, texts) with the more advanced features of a smartphone such as fun applications, content, navigation, music players, camera, video, LTE, VoLTE, email and Web browsing. For more details and supporting quotes see blog.mozilla.org

•    Orange announces bringing Firefox OS to 13 markets as part of a new digital offer: Today, Orange puts the mobile Internet within reach of millions more people not previously addressed, with the launch of a new breakthrough digital offer across its significant African and Middle Eastern footprint. The Orange Klif digital offer starts from under US$40 (€35), inclusive of a data, voice and text bundle, and sets a new benchmark in price that will act as a major catalyst for smartphone and data adoption across the region. The 3G Firefox OS smartphone is exclusive to Orange and will be available from Q2 in 13 of Orange’s markets in the region, including, but not limited to, Egypt, Senegal, Tunisia, Cameroon, Botswana, Madagascar, Mali, The Ivory Coast, Jordan, Niger, Kenya, Mauritius and Vanuatu.
ALCATEL ONETOUCH is collaborating with Orange and announced more details on the new phone today:

•    ALCATEL ONETOUCH expands mobile internet access with the newest Firefox OS phone, the Orange Klif. The Orange Klif offers connectivity speeds of up to 21 Mbps, is dual SIM, and includes a two-megapixel camera and micro-SD slot. The addition of the highly optimised Firefox OS meanwhile allows for truly seamless Web browsing experiences, creating a powerful Internet-ready package.
The Orange Klif is the first Firefox OS phone powered by a MediaTek processor.

•    Mozilla revealed further details about upcoming versions of Firefox OS, among them: Improved performance and support of multi-core processors, enhanced privacy features, additional support for WebRTC, right to left language support and an NFC payments infrastructure.

Runcible by Monohm

•    Earlier this week, KDDI Corporation announced an investment in Monohm, a US based provider of innovative IoT devices based on Firefox OS. Monohm’s first product “Runcible” will be showcased at the Mozilla booth at MWC 2015.

Panasonic VIERA TX-CR730

The Firefox OS ecosystem continues to expand with new partners and devices ranging from the line of Panasonic 4K Ultra HD TVs to the world’s most affordable smartphone:

“Just months ago, Cherry Mobile introduced the ACE, the first Firefox OS smartphone in the Philippines, which is also the most affordable smartphone in the world. We are excited that the ACE, which keeps gaining positive feedback in the market, is helping lots of consumers move from feature phones to smartphones. Through the close partnership with Mozilla Firefox OS, we will continue to bring more affordable quality mobile devices to consumers,” said Maynard Ngu, Cherry Mobile CEO.

With today’s announcements, Firefox OS will be available from leading operator partners in more than 40 markets in the next year on a total of 17 smartphones.

Firefox OS unlocks the power of the Web as the platform and will continue to expand across markets and device categories as we move toward the Internet of Things (IoT), using open Web technology to enable operators, hardware manufacturers and developers to create innovative and customized applications and products for consumers to use across these connected devices.

Creating Content for Mobile, on Mobile Devices
Mozilla today unveiled the beta version of Webmaker, a free and open source mobile content creation app, which strips away the complexity of traditional Web creation. Webmaker will be available for Android, Firefox OS, and via a modern mobile browser on other devices in over 20 languages later this year. For more info, please visit webmaker.org/localweb

The Mozilla Blog: Mozilla, KDDI, LG U+, Telefónica and Verizon Wireless Collaborate to Create New Category of Firefox OS Phones

New range of intuitive and easy-to-use phones to be powered by Firefox OS

Barcelona, Spain – Mobile World Congress – March 1, 2015
Mozilla, the mission based organization dedicated to keeping the power of the Web in people’s hands, together with KDDI, LG U+, Telefónica and Verizon Wireless, today announced at Mobile World Congress a new initiative to create devices based on Firefox OS.

The goal of this initiative is to create a more intuitive and easy-to-use experience (powered by Firefox OS) for consumers around the world. The companies are collaborating to contribute to the Mozilla community and create a new range of Firefox OS phones for a 2016 launch in various form factors – flips, sliders and slates – that balance the simplicity of a basic phone (calls, texts) with the more advanced features of a smartphone such as fun applications, content, navigation, music players, camera, video, LTE, VoLTE, email and Web browsing.

Firefox OS was chosen as the platform for this initiative because it unlocks the mobile ecosystem and enables independence and innovation. This results in more flexibility for network operators and hardware manufacturers to provide a differentiated experience and explore new business ventures, while users get the performance, personalization and affordability they want packaged in a beautiful, clean and easy-to-use experience.

“By leveraging Firefox OS and the power of the Web, we are re-imagining and providing a modern platform for entry-level phones,” said Li Gong, President of Mozilla. “We’re excited to work with operator partners like KDDI, LG U+, Telefonica and Verizon Wireless to reach new audiences in both emerging and developed markets and offer customers differentiated services.”

Yasuhide Yamamoto, Vice President, Product Sector at KDDI said “We have been gaining high attention from the market with Fx0, a high tier LTE based Firefox OS smartphone launched last December, and we have faith in the unlimited potential of Firefox OS. KDDI has been very competitive in the Japanese mature mobile phone market for decades, so we are confident that we can contribute to the Mozilla community in developing this new concept product.”

“Telefónica is actively supporting Firefox OS, aligned with our strategy of bringing more options and more openness to our customers. Firefox OS smartphones are currently offered in 14 markets across our footprint and are helping to bring connectivity to more people who are looking for a reliable and simple user experience at affordable prices,” said Francisco Montalvo, Director, Telefónica Group Devices Unit.

Rosemary McNally, Vice President, Device Technology at Verizon said “Verizon aims to deliver innovative new products to its customers, and this initiative is about creating a modern, simple and smart platform for basic phones. We’re looking forward to continuing to work with Mozilla and other service providers to leverage the power of Firefox OS and the Web community.”
###

About Mozilla
Mozilla has been a pioneer and advocate for the Web for more than 15 years. We create and promote open standards that enable innovation and advance the Web as a platform for all. Today, hundreds of millions of people worldwide use Mozilla Firefox to experience the Web on computers, tablets and mobile devices. With Firefox OS and Firefox Marketplace, Mozilla is driving a mobile ecosystem that is built entirely on open Web standards, freeing mobile providers, developers and end users from the limitations and restrictions imposed by proprietary platforms. For more information, visit www.mozilla.org.

About KDDI Corporation
KDDI, a comprehensive communications company offering fixed-line and mobile communications services, strives to be a leading company for changing times. For individual customers, KDDI offers its mobile communications (mobile phone) and fixed-line communications (broadband Internet/telephone) services under the brand name au, helping to realize Fixed Mobile and Broadcasting Convergence (FMBC). For business clients, KDDI provides comprehensive Information and Communications services, from Fixed Mobile Convergence (FMC) networks to data centers, applications, and security strategies, which helps clients strengthen their businesses. For more information please visit http://www.kddi.com/english.

About Telefónica
Telefónica is one of the largest telecommunications companies in the world in terms of market capitalisation and number of customers. With its best in class mobile, fixed and broadband networks, and innovative portfolio of digital solutions, Telefónica is transforming itself into a ‘Digital Telco’, a company that will be even better placed to meet the needs of its customers and capture new revenue growth. The company has a significant presence in 21 countries and a customer base of 341 million accesses around the world. Telefónica has a strong presence in Spain, Europe and Latin America, where the company focuses an important part of its growth strategy. Telefónica is a 100% listed company, with more than 1.5 million direct shareholders. Its share capital currently comprises 4,657,204,330 ordinary shares traded on the Spanish Stock Market  and on those in London, New York, Lima, and Buenos Aires.

About Verizon Wireless
Verizon Wireless operates the nation’s largest and most reliable 4G LTE network.  As the largest wireless company in the U.S., Verizon Wireless serves 108.2 million retail customers, including 102.1 million retail postpaid customers.  Verizon Wireless is wholly owned by Verizon Communications Inc. (NYSE, Nasdaq: VZ).  For more information, visit www.verizonwireless.com.  For the latest news and updates about Verizon Wireless, visit our News Center at http://www.verizonwireless.com/news or follow us on Twitter at http://twitter.com/VZWNews.

Pascal Finette: Link Pack (March 1st)

What I was reading this week:

The Mozilla Blog: Webmaker App Takes Fresh Approach to Digital Literacy

Tomorrow at Mobile World Congress in Barcelona, Mozilla will release an open beta of the Webmaker app: a free, independent web publishing tool. This is an important next step in Mozilla’s effort to dramatically increase digital literacy around the world.

The Webmaker app emerged from a year of research in Bangladesh, India and Kenya. The research pointed to two things: new smartphone users face a steep learning curve, often limiting themselves to basic apps like Facebook and not even knowing they are on the Internet; and users yearn for — and can benefit greatly from — the ability to create local, relevant content.

Webmaker app is designed to address these needs by making it possible for anyone to quickly publish a website or an app from the moment they turn on their first smartphone. Students can build a digital bulletin board for their peers, teachers can create and distribute lesson plans, and merchants can produce websites to promote their products.

The idea is to get new smartphone users making things quickly when they get online — and then to help them do more sophisticated things over time. This ‘make first’ approach to digital literacy encourages people to see themselves as active creators rather than passive consumers. This mindset will be critical as billions of people grapple with the question ‘how and why should I use the internet?’ for the first time over the next few years.

Webmaker app is free, open source and available in over 20 languages. Users can share their creations using a simple URL via SMS, Facebook, WhatsApp and more. Content created in Webmaker will load in any mobile web browser. The current open beta version is available for Android, Firefox OS and modern mobile browsers. A full release is planned for later this year.

Complementing the Webmaker app are Mozilla’s far-reaching, face-to-face learning programs. Our network of volunteer makers, mentors and educators operates in more than 80 countries. These volunteers — equipped with the app and other tools — run informal workshops in schools, libraries and other public places to help people understand how the Web works and create content relevant to their everyday lives. Last year alone, Mozilla volunteers ran 2,513 workshops across 450 cities.

All of these digital literacy activities are driven by partnerships. Mozilla partners with NGOs, mobile carriers and other global organizations to ensure our digital literacy programs reach individuals who need it most. We’re joining forces with influential partners who share our passion for an open Web, local content creation and empowered users.

When billions of first-time Web users come online, they will find a platform they can build, mold and use every day to better their lives, businesses and education. It’s an ambitious order, but Mozilla is prepared. To participate, or learn more about our digital literacy initiatives, visit webmaker.org/localweb.

Gervase Markham: Top 50 DOS Problems Solved: Doubling Disk Capacity

Q: I have been told that it is possible to convert 720K 3.5-inch floppy disks into 1.44Mb versions by drilling a hole in the casing. Is this true? How is it done? Is it safe?

A: It is true for the majority of disks. A few fail immediately, but the only way to tell is to try it. The size and placement of the hole is, near enough, a duplicate of the write-protect hole.

If the write-protect hole is in the bottom left of the disk, the extra hole goes in a similar position in the bottom right. Whatever you do, make sure that all traces of plastic swarf are cleared away. As to whether this technique is safe, it is a point of disagreement. In theory, you could find converted disks less reliable. My own experience over several years has been 100 per cent problem free other than those disks which have refused to format to 1.44Mb in the first place.

You can perform a similar trick with 360K and 1.2Mb 5.25-inch disks.

Hands up who remembers doing this. I certainly do…

Doug Belshaw: Weeknotes 08/2015 and 09/2015

Last week I was in Dubai on holiday with my family thanks to the generosity of my Dad. Here’s a couple of photos from that trip. Scroll down for this week’s updates!

Dubai Marina

Giraffes feeding at Al Ain Zoo

Doug

This (four-day) work week I’ve been:

Mozilla

 Dynamic Skillset

Other

Digital Maker/Citizen badges

Next week I’ll be at home working more on the Learning Pathways whitepaper and Web Literacy Map v1.5. I’ll also be helping out with the Clubs curriculum work where necessary.

Finally, I’m considering doing more of the work I originally envisaged this year with Dynamic Skillset, so email hello@nulldynamicskillset.com if you think I can help you or your organisation!

All images by me, except header image CC BY-NC NASA’s Marshall Space Flight Center

Cameron Kaiser: The oldest computer running TenFourFox

In the "this makes me happy" department, Miles Raymond posted his Power Macintosh 9500 (with a 700MHz G4 and 1.5GB of RAM) running TenFourFox in Tiger. I'm pretty sure there is no older system that can boot and run 10.4, but I'd be delighted to see if anyone can beat this. FWIW, the 9500 was released May 1995, making it 20 years old this year and our very own "Twentieth Anniversary" Macintosh.

And Mozilla says you need an Intel Mac to run Firefox! Oh, those kidders! They're a laugh a minute!

Yunier José Sosa Vázquez: Firefox 38 will bring support for 64-bit Windows

Firefox 38 – currently in the Developer Edition channel – will be the first version of Firefox with support for 64-bit Windows. With this release Mozilla completes its support for that architecture, since 64-bit versions already existed for Linux and Mac.

Previously, 64-bit Firefox for Windows was only available in the Nightly channel, still in a testing and bug-fixing stage; now Windows users will be able to use a version of Firefox optimized for that platform. Firefox 38 also brings other interesting new features, and we will talk about them later.

The final release of Firefox 38 is planned for May 12, and there is still some time left before it is confirmed whether this version will ultimately ship – we hope it will. You can download this edition from the Aurora section of our Downloads area, for Linux and Windows, in Spanish.

Mozilla Reps Community: Rep of the month: February 2015

Stefania Ioana Chiorean is one of the most humble, inspiring and hard-working contributors in the Reps community.

She has always been an inspiration and a source of enthusiasm for the Mozilla community worldwide. Her proactive, get-things-done nature has motivated Reps throughout. As part of the Mozilla Romania community, Ioana helps out anyone and everyone who wants to learn and make the web better. She spreads Mozillian news through the Mozilla Romania community’s social media accounts and enjoys helping the SUMO community. An emboldening presence in WoMoz, Ioana encourages women’s participation in tech.


During the last few months, Ioana has been organizing and participating in several events to promote Mozilla, such as FOSDEM and OSOM, and to involve more women in Free/Open Source communities and Mozilla through the WoMoz initiative. She is also highly involved in Mozilla QA, helping to smash as many bugs as possible in several Mozilla products.

Ioana is now driving the Buddy Up QA Pilot program, which aims to recruit and train community members to actively own testing of this project.

We also welcome Ioana as a Peer of the Reps Module and congratulate her on being the Rep of the Month!

Thanks Ioana for all you do for the Reps, Mozilla and the Open Web.

Cheers, little Romanian vampire!

Don’t forget to congratulate her on Discourse!

Adrian Gaudebert: Spectateur, custom reports for crash-stats

The users of Socorro at Mozilla, the Stability team, have very specific needs that vary over time. They need specific reports for the data we have, new aggregations or views with some special set of parameters. What we developers of Socorro used to do was to build those reports for them. It's a long process that usually requires adding something to our database's schema, adding a middleware endpoint and creating a new page in our webapp. All those steps take a long time, and sometimes we understand the needs incorrectly, so it takes even longer. Not the best way to invest our time.

Nowadays, we have Super Search, a flexible interface to our data, that allows users to do a lot of those specific things they need. As it is highly configurable, it's easy to keep the pace of new additions to the crash reports and to evolve the capabilities of this tool. Couple that with our public API and we can say that our users have pretty good tools to solve most of their problems. If Super Search's UI is not good enough, they can write a script that they run locally, hitting our API, and they can do pretty much anything we can do.
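For instance, such a local script against the public API might look roughly like this (the endpoint and parameter names here are my assumptions based on the public Super Search API, not code taken from the post):

import requests

# Hypothetical example: top crash signatures for Firefox over the last week
params = {
    "product": "Firefox",
    "date": ">=2015-02-22",
    "_facets": "signature",
    "_results_number": 0,     # only the aggregation, no raw crash reports
}
response = requests.get("https://crash-stats.mozilla.com/api/SuperSearch/", params=params)
response.raise_for_status()
for facet in response.json()["facets"]["signature"][:10]:
    print(facet["count"], facet["term"])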

But that still has problems. Local scripts are not ideal: it's inconvenient to share them or to expose their results, it's hard to work on them collaboratively, it requires working on some rendering and querying the API where one could just focus on processing the data, and it doesn't integrate with our Web site. I think we can do better. And to demonstrate that, I built a prototype. Introducing...

Spectateur

Spectateur is a service that takes care of querying the API and rendering the data for you. All you need to do is work on the data, make it what you want it to be, and share your custom report with the rest of the world. It uses a language commonly known, JavaScript, so that most people (at least at Mozilla) can understand and hack what you have done. It lets you easily save your report and gives you a URL to bookmark and to share. And that's about it, because it's just a prototype, but it's still pretty cool, isn't it?

To explain it a little more: Spectateur contains three parts. The Model lets you choose what data you want. It uses Super Search and gives you about the same capabilities that Socorro's UI has. Once you have set your filters and chosen the aggregations you need, we move to the Controller. That's a simple JavaScript editor (using Ace) and you can type almost anything in there. Just keep the function transform, the callback and the last lines that set the interface, otherwise it won't work at all. There are also some limitations for security: the code is executed in a Web Worker in an iframe, so you have no access to the main page's scope. Network requests are blocked, among other things. I'm using a wonderful library called jailed, if you want to know more, please read its documentation.

Once you are done writing your controller, and you have exposed your data, you can click the Run button to create the View. It will fetch the data, run your processor on that data and then render the results following the rules you have exposed. The data can currently be displayed as a table (using jsGrid) or as a chart (using Chart.js). For details, please read the documentation of Spectateur (there's a link at the top). When you are satisfied with your custom report, click the Save button. That will save the Model and the Controller and give you a URL (by updating the URL bar). Come back to that URL to reload your report. Note that if you make a change to your report and click Save again, a new URL will be generated; the previous report won't be overwritten.

As an example, here is a report that shows, for our B2G product, a graph of the top versions, a chart of the top signatures and a list of crash reports, all of that based on data from the last 7 days: https://spectateur.mozilla.io/#58a036ec-c5bf-469a-9b23-d0431b67f436

I hope this tool will be useful to our users. As usual, if you have comments, feedback, criticisms, if you feel this is a waste of time and we should not invest any more time in it, or on the contrary you think this is what you needed this whole time, please please please let us know!

Robert O'Callahan: Great Barrier Island

Last weekend a couple of Mozillians --- David Baron and Jean-Yves Avenard --- plus myself and my children flew to Great Barrier Island for the weekend. Great Barrier is in the outer Hauraki Gulf, not far from Auckland; it takes about 30 minutes to fly there from Auckland Airport in a very small plane. The kids and I camped on Friday night at "The Green" campsite at Whangaparapara, while David and Jean-Yves stayed at Great Barrier Lodge nearby. On Saturday we did the Aotea Track in clockwise direction, heading up the west side of the hills past Maungapiko, then turning east along the South Fork Track to Mt Heale Hut for the night. (The usual continuation past Kaiaraara Hut along the Kaiaraara track had been washed out by storms last year, and we saw evidence of storm damage in the form of slips almost everywhere we went.) Even the South Fork Track had been partially rerouted along the bed of the Kaiaraara Stream. We were the only people at Mt Heale Hut and had a good rest after a reasonably taxing walk. But inspired by Jean-Yves, we found the energy to do a side trip to Mt Hobson --- the highest point on the island --- before sunset.

On Sunday we walked south out of the hills to the Kaitoke hot springs and had a dip in the hot, sulphurous water --- very soothing. Then along the road to Claris and a well-earned lunch at the "Claris, Texas" cafe. We still had lots of time to kill before our flight so we dropped our bags at the airport (I use the term loosely) and walked out to Kaitoke Beach. A few of us swam there, carefully, since the surf felt very treacherous.

I'd never been tramping overnight at the Barrier before and really enjoyed this trip. There aren't many weekend-sized hut tramps near Auckland, so this is a great option if you don't mind paying to fly out there. The flight itself is a lot of fun.

Mozilla Thunderbird: Thunderbird Usage Continues to Grow

We’re happy to report that Thunderbird usage continues to expand.

Mozilla measures program usage by Active Daily Installations (ADI), which is the number of pings that Mozilla servers receive as installations do their daily plugin block-list update. This is not the same as the number of active users, since some users don’t access their program each day, and some installations are behind firewalls. An estimate of active monthly users is typically done by multiplying the ADI by a factor of 3.
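As a quick illustration of that rule of thumb, using the worldwide total from the table further down:

peak_adi = 9255280                      # total ADI on 2015-02-24 (see table below)
estimated_monthly_users = peak_adi * 3  # the factor-of-3 estimate described above
print(estimated_monthly_users)          # roughly 27.8 million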

To plot changes in Thunderbird usage over time, I’ve picked the peak ADI for each month for the last few years. Here’s the result:

Thunderbird Active Daily Installations, peak value per month.

Germany has long been our #1 country for usage, but in 4th quarter 2014, Japan exceeded US as the #2 country. Here’s the top 10 countries, taken from the ADI count of February 24, 2015:

Rank  Country              ADI 2015-02-24
   1  Germany                   1,711,834
   2  Japan                     1,002,877
   3  United States               927,477
   4  France                      777,478
   5  Italy                       514,771
   6  Russian Federation          494,645
   7  Poland                      480,496
   8  Spain                       282,008
   9  Brazil                      265,820
  10  United Kingdom              254,381
      All Others                2,543,493
      Total                     9,255,280

Country Rankings for Thunderbird Usage, February 24, 2015

The Thunderbird team is now working hard preparing our next major release, which will be Thunderbird 38 in May 2015. We’ll be blogging more about that release in the next few weeks, including reporting on the many new features that we have added.

Liz Henry: A useful Bugzilla trick

At the beginning of February I changed teams within Mozilla and am now working as a release manager. It follows naturally from a lot of the work I’ve already been doing at Mozilla and I’m excited to join the team working with Lukas, Lawrence, and Sylvestre!

I just learned a cool trick for dealing with several bugzilla.mozilla.org bugs at once, on MacOS X.

1) Install Bugzilla Services.

2) Add a keyboard shortcut as Alex Keybl describes in the blog post above. (I am using Control-Command-B)

3) Install the BugzillaJS (Tweaks for Bugzilla) addon.

4) Install the Tree Style Tab addon.

Now, from any text, whether in email, a desktop text file, or anywhere in the browser, I can highlight a bunch of text and bug numbers will be parsed out of the text. For example, from an email this morning:

Bug 1137050 - Startup up Crash - patch should land soon, potentially risky
David Major seems to think it is risky for the release.

Besides that, we are going to take:
Bug 1137469 - Loop exception - patch waiting for review
Bug 1136855 - print preferences - patch approved
Bug 1137141 - Fx account + hello - patch waiting for review
Bug 1136300 - Hello + share buttons - Mike  De Boer will work on a patch today

And maybe a fix for the ANY query (bug 1093983) if we have one...

I highlighted the entire email and hit the “open in bugzilla” keystroke. This resulted in a Bugzilla list view for the 6 bugs mentioned in the email.
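For anyone not on a Mac, the same trick is easy to approximate with a few lines of Python (a rough sketch of what the Service does, not its actual code):

import re
import webbrowser

def open_bug_list(text):
    # Pull every "bug NNNNNN" reference out of the text...
    bug_ids = sorted(set(re.findall(r"[Bb]ug\s+(\d+)", text)))
    if bug_ids:
        # ...and open a single Bugzilla list view for all of them at once
        url = "https://bugzilla.mozilla.org/buglist.cgi?bug_id=" + ",".join(bug_ids)
        webbrowser.open(url)

open_bug_list("Bug 1137469 - Loop exception\nBug 1136855 - print preferences")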

Bugzilla list view example

With BugzillaJS installed, I have an extra option at the bottom of the page, “Open All in Tabs”, so if I wanted to triage these bugs, I can open them all at once. The tabs show up in my sidebar, indented from their parent tab. This is handy if I want to collapse this group of tabs, or close the parent tab and all its children at once (The original list view of these 6 bugs, and each of its individual tabs.) Tree Style Tab is my new favorite thing!

Tree style tabs bugzilla

In this case, after I had read each bug from this morning and closed the tabs, my coworker Sylvestre asked me to make sure I cc-ed myself into all of them to keep an eye on them later today and over the weekend so that when fixes are checked in, I can approve them for release.

Here I did not want to open up every bug in its own tab but instead went for “Change Several Bugs at Once” which is also at the bottom of the page.

Bugzilla batch edit

This batch edit view of bugs is a bit scarily powerful since it will result in bugmail to many people for each bug’s changes. When you need it, it’s a great feature. I added myself to the cc: field all in one swoop instead of having to click each tab open, click around several times in each bug to add myself and save and close the tab again.

It was a busy day yesterday at work but I had a nice time working from the office rather than at home. Here is the view from the SF Mozilla office 7th floor deck where I was working and eating cake in the sun. Cannot complain about life, really.
Mozilla bridge view


Chris AtLee: Diving into python logging

Python has a very rich logging system. It's very easy to add structured or unstructured log output to your python code, and have it written to a file, or output to the console, or sent to syslog, or to customize the output format.

We're in the middle of re-examining how logging works in mozharness to make it easier to factor-out code and have fewer mixins.

Here are a few tips and tricks that have really helped me with python logging:

There can be only more than one

Well, there can be only one logger with a given name. There is a special "root" logger with no name. Multiple getLogger(name) calls with the same name will return the same logger object. This is an important property because it means you don't need to explicitly pass logger objects around in your code. You can retrieve them by name if you wish. The logging module is maintaining a global registry of logging objects.

You can have multiple loggers active, each specific to its own module or even class or instance.

Each logger has a name, typically the name of the module it's being used from. A common pattern you see in python modules is this:

# in module foo.py
import logging
log = logging.getLogger(__name__)

This works because inside foo.py, __name__ is equal to "foo". So inside this module the log object is specific to this module.

Loggers are hierarchical

The names of the loggers form their own namespace, with "." separating levels. This means that if you have loggers called foo.bar and foo.baz, you can do things on logger foo that will impact both of the children. In particular, you can set the logging level of foo to show or ignore debug messages for both submodules.

# Let's enable all the debug logging for all the foo modules
import logging
logging.getLogger('foo').setLevel(logging.DEBUG)

Log messages are like events that flow up through the hierarchy

Let's say we have a module foo.bar:

import logging
log = logging.getLogger(__name__)  # __name__ is "foo.bar" here

def make_widget():
    log.debug("made a widget!")

When we call make_widget(), the code generates a debug log message. Each logger in the hierarchy has a chance to output something for the message, ignore it, or pass the message along to its parent.

The default configuration for loggers is to have their levels unset (or set to NOTSET). This means the logger will just pass the message on up to its parent. Rinse & repeat until you get up to the root logger.

So if the foo.bar logger hasn't specified a level, the message will continue up to the foo logger. If the foo logger hasn't specified a level, the message will continue up to the root logger.

This is why you typically configure the logging output on the root logger; it typically gets ALL THE MESSAGES!!! Because this is so common, there's a dedicated method for configuring the root logger: logging.basicConfig()

This also allows us to use mixed levels of log output depending on where the messages are coming from:

import logging

# Enable debug logging for all the foo modules
logging.getLogger("foo").setLevel(logging.DEBUG)

# Configure the root logger to log only INFO calls, and output to the console
# (the default)
logging.basicConfig(level=logging.INFO)

# This will output the debug message
logging.getLogger("foo.bar").debug("ohai!")

If you comment out the setLevel(logging.DEBUG) call, you won't see the message at all.

exc_info is teh awesome

All the built-in logging calls support a keyword called exc_info, which, if it isn't false, causes the current exception information to be logged in addition to the log message. e.g.:

import logging
logging.basicConfig(level=logging.INFO)

log = logging.getLogger(__name__)

try:
    assert False
except AssertionError:
    log.info("surprise! got an exception!", exc_info=True)

There's a special case for this, log.exception(), which is equivalent to log.error(..., exc_info=True)
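A minimal illustration of that shortcut:

import logging
logging.basicConfig(level=logging.INFO)
log = logging.getLogger(__name__)

try:
    1 / 0
except ZeroDivisionError:
    # Logs at ERROR level and appends the current traceback,
    # just like log.error("...", exc_info=True)
    log.exception("division went badly")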

Python 3.2 introduced a new keyword, stack_info, which will output the current stack at the point of the logging call. Very handy to figure out how you got to a certain point in the code, even if no exceptions have occurred!
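For example (Python 3.2 or later):

import logging
logging.basicConfig(level=logging.INFO)
log = logging.getLogger(__name__)

def inner():
    # stack_info=True appends "Stack (most recent call last):" showing
    # how execution reached this call -- no exception required
    log.info("checkpoint reached", stack_info=True)

def outer():
    inner()

outer()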

"No handlers found..."

You've probably come across this message, especially when working with 3rd party modules. What this means is that you don't have any logging handlers configured, and something is trying to log a message. The message has gone all the way up the logging hierarchy and fallen off the...top of the chain (maybe I need a better metaphor).

import logging
log = logging.getLogger()
log.error("no log for you!")

outputs:

No handlers could be found for logger "root"

There are two things that can be done here:

  1. Configure logging in your module with basicConfig() or similar

  2. Library authors should add a NullHandler at the root of their module to prevent this; a minimal version is sketched below. See the cookbook and this blog post for more details.
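The library-author fix is a one-liner at the top of the package (a standard pattern from the logging docs):

import logging

# In your library's top-level module (e.g. mylib/__init__.py):
# a NullHandler silently swallows records when the application using
# the library hasn't configured logging, so the warning never appears.
logging.getLogger(__name__).addHandler(logging.NullHandler())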

Want more?

I really recommend that you read the logging documentation and cookbook which have a lot more great information (and are also very well written!) There's a lot more you can do, with custom log handlers, different output formats, outputting to many locations at once, etc. Have fun!

Blake Winton: A long time ago, on a computer far far away…

Six years ago, I started contributing to Mozilla.


Mike Conley: The Joy of Coding (Episode 3)

The third episode is up! My machine was a little sluggish this time, since I had OBS chugging in the background attempting to do a hi-res screen recording simultaneously.

Richard Milewski and I are going to try an experiment where I try to stream with OBS next week, which should result in a much higher-resolution stream. We’re also thinking about having recording occur on a separate machine, so that it doesn’t bog me down while I’m working. Hopefully we’ll have that set up for next week.

So this third episode was pretty interesting. Probably the most interesting part was when I discovered in the last quarter that I’d accidentally shipped a regression in Firefox 36. Luckily, I’ve got a patch that fixes the problem that has been approved for uplift to Aurora and Beta. A point release is also planned for 36, so I’ve got approval to get the fix in there too. \o/

Here are the notes for the bug I was working on. The review feedback from karlt is in this bug, since I kinda screwed up where I posted the review request with MozReview.

Doug Belshaw: The final push for Web Literacy Map v1.5 (and how you can get involved!)

By the end of March 2015 we should have a new, localised iteration of Mozilla’s Web Literacy Map. We’re calling this ‘version 1.5’ and it’s important to note that this is a point release rather than a major new version.

Cat with skills

Right now we’re at the point where we’ve locked down the competencies and are now diving into the skills underpinning those competencies. To help, we’ve got an epic spreadsheet with a couple of tabs:

Tabs

The REVIEW tab contains lots of comments about the suitability of the skills for v1.5. On this week’s community call we copied those skills that had no comments about them to the REFINE tab:

REFINE tab

This is where we need your help. We’ve got skills in the REVIEW tab that, with some tweaking, can help round out those skills we’ve already transferred. It would be great if you could help us discuss and debate those. There are also some new competencies that have no skills defined at present.

We’ve got weekly community calls where we work on this stuff, but not everyone can make these. That’s why we’re using GitHub issues to discuss and debate the skills asynchronously.

Here’s how to get involved:

  1. Make sure you’ve got a (free) GitHub account
  2. Head to the meta-issue for all of the work we’re doing around v1.5 skills
  3. Have a look through the skills under the various competencies (e.g. Remixing)
  4. Suggest an addition, add a question, or point out overlaps
  5. Get email updates when people reply - and then continue the conversation!

We really do need as many eyes as possible on this. No matter whether you’re an old hand or a complete n00b, you’re welcome. The community is very inviting and tolerant, so please dive in!


Comments? Questions? These would be better in GitHub, but if you want to get in touch directly I’m @dajbelshaw on Twitter or you can email me: doug@mozillafoundation.org

Mozilla Reps Community: Reps Weekly Call – February 26th 2015

Last Thursday we had our weekly call about the Reps program, where we talk about what’s going on in the program and what Reps have been doing during the last week.


Summary

  • RepsIDMeetup
  • Alumni status and leaving SOP
  • New mentors coming soon
  • GMRT event Pune
  • Teach The Web Talks
  • FOSS Asia
  • BuddyUp
  • Say Hello Day

Detailed notes

AirMozilla video

Don’t forget to comment about this call on Discourse and we hope to see you next week!

Kaustav Das Modak: Firefox OS for everyone

While a good number of events around Firefox OS have focused on helping developers understand the technology behind it, there has been very little effort in helping people understand Firefox OS from a non-technical and non-developer perspective. I’ve started a series of events titled “Firefox OS for everyone” to solve this gap. The content has […]

Pierros Papadeas: FOSDEM 2015 Bug sprint tool

“Reclama” Experiment report

Intro

FOSDEM 2015 is a premier open source developers’ event in Europe. Mozilla has been heavily participating in the event for over 10 years now, with booths and dev-rooms. This year the Community Development team decided to run an experimental approach to recruiting new contributors by promoting good first bugs.

Scope

Given the highly technical nature of the audience at FOSDEM 2015, the approach decided on was a straightforward promotion of bugs, and prizes for fixing them, for people to track and get involved with.

Following a sign-off meeting with the stakeholders (William Quiviger, Francisco Piccolini and Brian King), the specifications agreed for the first iteration of the experiment were as follows:

  1. A straight-forward interface displaying hand-selected good first bugs, ready to be worked on
  2. Next to bugs, prizes should be displayed.
  3. Viewing from mobile devices should be also accounted for.
  4. Public access to all content. Ability to edit and add events on selected accounts (hardcoded for v1)
  5. The interface has to be properly Mozilla branded (high visibility and promotion in an event)
  6. It needs to be future-proof for events to come (easily add new events and/or bugs after deployment)
  7. Solution URL should be short and easily memorable and digestible.
  8. It needs to be delivered before the start of FOSDEM 2015 ;)
  9. A report should be compiled with usage statistics and impact analysis for the experiment

Development

Given the extremely short timeframe (less than 3 days), ready-made, quick, or off-the-shelf solutions were evaluated, such as:

  • An online spreadsheet (gdocs)
    • Not meeting requirements #3, #5, #6, #7
  • A bugzilla query
    • Not meeting requirements #2, #7, #6
  • A scrappy HTML plain coded solution
    • Not meeting requirements #4, #6

Thus, given the expertise of the group, it was decided to create a Django application to meet all requirements in time.

Following 2 days of non-stop development (fantastic work from Nikos and Nemo!), testing and iteration, we met all requirements with the app, codenamed “reclama” (roughly Italian for “claim”).

Code can be found here: https://github.com/mozilla/reclama

Deployment

In order to meet requirement #7 (short and memorable URL) and given the timeframe, we decided to acquire a URL quickly ( mozbugsprints.org ) and deploy the application.

For usage statistics, awstats was deployed on top of the app to track incoming traffic.

Usage statistics

During the weekend of FOSDEM 2015, 500 people visited the website, generating almost 5000 hits. That’s almost 10% of the event participants.

Saturday 31-Jan-2015 traffic analysis

Booths and promotion of the experiment started at 9:00 as expected, with a mid-day (noon) peak, which is consistent with increased traffic in the booths area of the event.

Traffic continued to flow in steadily even after the end of the event, which indicates that people keep the URL and interact with our experiment a substantial time after the face-to-face interaction at our booth. Browsing continued through the night, and help might be needed (on-call people/mentors?) during that period too.

Sunday 1-Feb-2015 traffic analysis

The second day at FOSDEM 2015 included a dedicated Mozilla dev-room. The assumption that promotion through the dev-room would increase the traffic to our experiment proved to be false, as traffic continued at the same levels on day 2.
As expected there was a sharp cut-off after 16:00 (when the final keynote starts), and people also do not seem to come back after the event. Thus the importance of a hyper-specific, event-focused (and branded) challenge seems to be high, as people relate to it and understand its one-off nature.

Impact

32 coding bugs from different product areas were presented to people. 9 of them were edited (assigned, commented on, worked on) during or immediately (within one day) after FOSDEM 2015. Out of those 9, 4 ended up with a first patch submission (new contributors) and 3 received no response from Mozilla staff or core contributors (blocked contributors).

Recommendations

  • Re-do experiment in a different cultural environment, but still with related audience, so we can cross-compare.
  • Continue the experiment by implementing new functionality as suggested by the stakeholders (notes, descriptions etc).
  • Experiment with random sorting of bugs, as the current order seemed to affect what was worked on.
  • Submission of bugs to be featured on an event should be coordinated by event owner and related to event theme and topics.
  • A/B test how prize allocation affects what bugs are worked on.
  • Expand promotional opportunities throughout the event. (booth ideas?)
  • On-call developers for promoted bugs would eliminate the unanswered portion of our bugs

PS. Special thanks to Elio for crafting an excellent visual identity and logo once again!

Gervase Markham: An Encounter with Ransomware

An organization which I am associated with (not Mozilla) recently had its network infected with the CryptoWall 3.0 ransomware, and I thought people might be interested in my experience with it.

The vector of infection is unknown but once the software ran, it encrypted most data files (chosen by extension) on the local hard drive and all accessible shares, left little notes everywhere explaining how to get the private key, and deleted itself. The notes were placed in each directory where files were encrypted, as HTML, TXT, PNG and as a URL file which takes you directly to their website.

Their website is accessible as either a TOR hidden service or over plain HTTP – both options are given. Presumably plain HTTP is for ease for less technical victims; Tor is for if their DNS registrations get attacked. However, as of today, that hasn’t happened – the site is still accessible either way (although it was down for a while earlier in the week). Access is protected by a CAPTCHA, presumably to prevent people writing automated tools that work against it. It’s even localised into 5 languages.

CryptoWall website CAPTCHA

The price for the private key was US$500. (I wonder if they set that based on GeoIP?) However, as soon as I accessed the custom URL, it started a 7-day clock, after which the price doubled to US$1000. Just like parking tickets, they incentivise you to pay up quickly, because argument and delay will just make it cost more. If you haven’t paid after a month, they delete your secret key and personal page.

While what these thieves do is illegal, immoral and sinful, they do run a very professional operation. The website had the following features:

  • A “decrypt one file” button, which allows them to prove they have the private key and re-establish trust. It is, of course, also protected by a CAPTCHA. (I didn’t investigate to see whether it was also protected by numerical limits.)
  • A “support” button, which allows you to send a message to the thieves in case you are having technical difficulties with payment or decryption.

The organization’s last backup was a point-in-time snapshot from July 2014. “Better backups” had been on the ToDo list for a while, but never made it to the top. After discussion with the organization, we decided that recreating the data would have cost far more in time than the value of the ransom, and so we were going to pay. I tried out the “Decrypt One File” function and it worked, so I had some confidence that they were able to provide what they said they were.

I created a wallet at blockchain.info, and used an exchange to buy exactly the right amount of Bitcoin. (The first exchange I tried had a ‘no ransomware’ policy, so I had to go elsewhere.) However, when I then went to pay, I discovered that there was a 0.0001BTC transaction fee, so I didn’t have enough to pay them the full amount! I was concerned that they had automated validation and might not release the key if the amount was even a tiny bit short. So, I had to go on IRC and talk to friends to blag a tiny fraction of Bitcoin in order to afford the transfer fee.

I made the payment, and pasted the transaction ID into the form on the ransomware site. It registered the ID and set status to “pending”. Ten or twenty minutes later, once the blockchain had moved on, it accepted the transaction and gave me a download link.

While others had suggested that there was no guarantee that we’d actually get the private key, it made sense to me. After all, word gets around – if they don’t provide the keys, people will stop paying. They have a strong incentive to provide good ‘customer’ service.

The download was a ZIP file containing a simple Windows GUI app which was a recursive decryptor, plus text files containing the public key and the private key. The app worked exactly as advertised and, after some time, we were able to decrypt all of the encrypted files. We are now putting in place a better backup solution, and better network security.

A friend who is a Bitcoin expert did do a little “following the money”, although we think it went into a mixer fairly quickly. However, before it did so, it was aggregated into an account with $80,000+ in it, so it seems that this little enterprise is fairly lucrative.

So, 10/10 for customer service, 0/10 for morality.

The last thing I did was send them a little message via the “Support” function of their website, in both English and Russian:

Such are the ways of everyone who is greedy for unjust gain; it takes away the life of its possessors.

Таковы пути всех, кто жаждет преступной добычи; она отнимает жизнь у завладевших ею.

‘The time has come,’ Jesus said. ‘The kingdom of God has come near. Repent and believe the good news!’

– Пришло время, – говорил Он, – Божье Царство уже близко! Покайтесь и верьте в Радостную Весть!

Mozilla Reps Community: Impact teams: a new approach for functional impact at Reps

When the new participation plan was forming, one of the first questions was: how can the Reps program enable more and deeper participation in Mozilla? We know that Reps are empowering local and regional communities and have been playing an important role in various projects like Firefox OS launches, but there wasn’t an organized and, more importantly, scalable way to provide support to functional teams at Mozilla. The early attempts of the program to help connect volunteers with functional areas were the Special Interest Groups (SIGs). Although in some cases and for some periods of time the SIGs worked very well and were impactful, they weren’t sustainable in the long run. We couldn’t provide a structure that ensured mutual benefit and commitment.

With the renewed focus on participation we’re trying to think differently about the way that Reps can connect to functional teams, align with their goals and participate in every part of Mozilla. And this is where the “Impact teams” come in. Instead of forming loose interest groups, we want to form teams that work well together and are defined by the impact they are having, as well as excited by the future opportunity for not only deeper participation but also personal growth as part of a dedicated team whose colleagues include project staff.

The idea of these new impact teams is to make sure that a virtuous circle of mutual benefit is created. This means that we will work with functional teams to ensure that we find participation opportunities for volunteers that have direct impact on project goals, while at the same time making sure that the volunteers benefit from participating, widening their skills and learning new ones.

These teams will crystallize through the work on concrete projects, generating immediate impact for the team, but also furthering the skills of volunteers. That will allow the impact team to take on bigger challenges with time: both volunteers and functional teams will learn to collaborate and volunteers with new skills will be able to take the lead and mentor others.

We’re of course at the beginning and many questions are still open. How can we organize this in an agile way? How can we make this scalable? Will the scope of the role of Reps change if they are more integrated in functional activities? How can we make sure that all Mozillians, Reps and non-Reps, are part of the teams? Will we have functional mentors? We think the only way to answer those questions is to start trying. That’s why we’re talking to different functional areas, trying to find new participation opportunities that provide value for volunteers. We want to learn by doing, being agile and adjusting as we learn.

The impact teams are therefore not set in stone; we’re working with different teams, trying loose structures and especially putting our energy into making this really beneficial for both functional teams and volunteers. Currently we are working with the Marketplace team, the Firefox OS Market research team and the developer relations team. And we’ll soon be reaching out to Mozillians and Reps who have a track record in those areas to ask them to help us build these impact teams.

We’re just at the beginning of a lot of pilots, tests, prototypes. But we’re excited to start moving fast and learning! We have plenty of work to do and many questions to answer, so join us in shaping these new impact teams. Especially, help us know how your participation at Mozilla can benefit your life, make you grow, learn, develop yourself. Emma Irwin is working on making education a centerpiece of participation, but do you have any other ideas? Share them with us!

Tantek Çelik#IndieWeb: Homebrew Website Club 2015-02-25 Summary

2015-02-25 Homebrew Website Club participants, seven of them, sit in two rows for a photograph

At last night's Homebrew Website Club we discussed, shared experiences, and how-tos about realtime indie readers, changing/choosing your webhost, indie RSVPs, moving from Blogger/Tumblr to your own site, new IndieWebCamp Slack channel, and ifthisthen.cat.

See kevinmarks.com/hwc2015-02-25.html for the writeup.

Tantek ÇelikDisappointed in @W3C for Recommending Longdesc

W3C has advanced the longdesc attribute to a Recommendation, overruling objections from browser makers.

Not a single browser vendor supported advancing this specification to recommendation.

Apple formally objected when it was a Candidate Recommendation and provided lengthy research and documentation (better than anyone has before or since) on why longdesc is bad technology (in practice it has not and does not solve the problems it claims to).

Mozilla formally objected when it was a Proposed Recommendation, agreeing with Apple’s research and reasoning.

Both formal objections were overruled.

For all the detailed reasons noted in Apple’s formal objection, I also recommend avoiding longdesc, and instead:

  • Always provide good alt (text alternative) attributes for images that read well inline if and when the image does not load. Or, if there’s no semantic loss without the image, use an empty alt="".
  • For particularly rich or complex images, either provide longer descriptions in normal visible markup, or link to them from an image caption or other visible affordance. See accessibility expert James Craig’s excellent Longdesc alternatives in HTML5 resource for even more and better techniques.

Perhaps the real tragedy is that many years have been wasted on a broken technology that could have been spent on actually improving accessibility of open web technologies. Not to mention the harassment that’s occurred in the name of longdesc.

Sometimes web standards go wrong. This is one of those times.

Planet Mozilla InternsMichael Sullivan: MicroKanren (μKanren) in Haskell

Our PL reading group read the paper “μKanren: A Minimal Functional Core for Relational Programming” this week. It presents a minimalist logic programming language in Scheme in 39 lines of code. Since none of us are really Schemers, a bunch of us quickly set about porting the code to our personal pet languages. Chris Martens produced this SML version. I hacked up a version in Haskell.

The most interesting part about this was the mistake I made in the initial version. To deal with recursion and potentially infinite search trees, the Scheme version allows some laziness: streams of results can be functions that delay search until forced. When a Scheme μKanren program wants to create a recursive relation it needs to wrap the recursive call in a dummy function (and plumb through the input state); the Scheme version wraps this in a macro called Zzz to make doing it more palatable. I originally thought that all of this could be dispensed with in Haskell; since Haskell is lazy, no special work needs to be done to prevent self reference from causing an infinite loop. It served an important secondary purpose, though: providing a way to detect recursion so that we can switch which branch of the tree we are exploring. Without this, although the fives test below works, the fivesRev test loops forever without producing anything.

The initial version was also more generalized. The type signatures allowed for operating over any MonadPlus, thus allowing pluggable search strategies. KList was just a newtype wrapper around lists. When I had to add delay I could have defined a new MonadPlusDelay typeclass and parametrized over that, but it didn’t seem worthwhile.

A mildly golfed version that drops blank lines, type annotations, comments, aliases, and test code clocks in at 33 lines.

View the code on Gist: https://gist.github.com/msullivan/4223fd47991acbe045ec

Doug BelshawAn important day for the Internet

As I’ve just been explaining to my son, when he’s my age and looks back at the history of the Internet, 26 February 2015 will be seen as a very important day.

Why? The Mozilla blog summarises it well:

We just accomplished something very important together. Today, the U.S. Federal Communications Commission voted for strong net neutrality protections. This happened because millions of people — including many hundreds of thousands in Mozilla’s community — joined together as citizens of the Web to demand those strong protections.

Net Neutrality can be a difficult thing to understand and, especially if you’re not based in the USA, it can feel like something that doesn’t affect you. However, it is extremely important, and it impacts everyone.

Last year we put together a Webmaker training module on Net Neutrality. I’d like to think it helped towards what was achieved today. As Mitchell Baker stated, this victory was far from inevitable, and the success will benefit all of humankind.

It’s worth finding out more about Net Neutrality, for the next time it’s threatened. Forewarned is forearmed.

Image CC BY Kendrick Erickson

Air MozillaBrown Bag, "Analysing Gaia with Semmle"

Title: Analysing Gaia with Semmle. Abstract: Semmle has recently added support for JavaScript to its analysis platform. As one of our first major JavaScript analysis...

The Mozilla BlogA Major Victory for the Open Web

We just accomplished something very important together. Today, the U.S. Federal Communications Commission voted for strong net neutrality protections. This happened because millions of people — including many hundreds of thousands in Mozilla’s community — joined together as citizens of the Web to demand those strong protections.

This is an important victory for the world’s largest public resource, the open Web. Net neutrality is a key aspect of enabling innovation from everywhere, and especially from new players and unexpected places. Net neutrality allows citizens and consumers to access new innovations and judge the merit for themselves. It allows individual citizens to make decisions, without gate-keepers who decide which possibilities can become real. Today’s net neutrality rules help us protect this open and innovative potential of the Internet.

Mozilla builds our products to put this openness and opportunity into the hands of individuals. We are organized as a non-profit so that the assets we create benefit everyone. Our products go hand-in-hand with net neutrality; they need net neutrality to bring the full potential of the Internet to all of us.

Today’s net neutrality rules are an important step in protecting opportunity for all. This victory was not inevitable. It occurred because so many people took action, so many people put their voice into the process. To each of you we say “Thank you.” Thank you for taking the time to understand the issue, for recognizing it’s important, and for taking action. Thank you for helping us build openness and opportunity into the very fabric of the Internet.

Video message from Mitchell Baker, Executive Chairwoman, Mozilla Foundation

Pascal FinetteWhat Every Entrepreneur Should Know About Pitching

The following post is a summary of a series of earlier Heretic posts on the subject, compiled into one comprehensive list by the wonderful folks at Unreasonable.is.

Your pitch deck MUST start with a description of what it is that you’re doing. Your second slide (after the cover slide) is titled “What is NAME-OF-YOUR-COMPANY” (e.g. “What is eBay”). Explain in simple English what you’re doing. This is not the place to be clever, show off your extraordinary grasp of the English language or think that your pitch deck is a novel where you build tension and excitement in the first half and surprise the reader in the end.

If I (or any investor for that matter) don’t understand what you are doing in the first 10-15 seconds you already lost me. I know investors who don’t go past slide two if they don’t grasp what the company does.

Simple and obvious, eh? The crazy thing is, I get tons of decks which make me literally go “WTF?!” Let me illustrate the point. Here are two pitches which get it right (names and some product details have been changed):

What is ACME CORP?

  • A unique coffee subscription business…
  • 1) Rare, unique, single-origin coffee
  • 2) Freshly roasted to order each month
  • 3) A different coffee every month, curated by experts
  • delivered monthly through the letterbox.
  • A subscription-only business—sign up online—available for self-purchase or as a special gift

Moving on — pitch #2:

ACME CORP is an E-FASHION club that provides affordable trendy shoes, accessories and PERSONALIZED style recommendations to European Women!

Clear. They are a shopping club focussed on shoes for European women. Crisp and clear. No fancy language. Just the facts.

And now for something different—pitch #3:

ACME CORP is an online collaboration hub and development environment for makers, hobbyist and engineers.

I have mostly no clue what they are doing. Worse — as I actually know the team and their product — this isn’t even an accurate statement of what they are up to. What they build is Github plus a deployment system for Arduinos. Their statement is overly broad and unspecific.

So your first slide (not the cover — your first actual content slide) is the most important slide in your deck. It’s the slide where I decide if I want to see and hear more. It’s the slide which sets the tone for the rest of your interaction. And it’s the slide which forms my first image of you and your company. Spend time on it. Make it perfect. Pitch it to strangers who know nothing about your company and see if they get it. Show them just this one slide and then ask them to describe back to you what your company does. If they don’t get it, or it’s inaccurate, go back and revise the slide until you get it right.

The Team Slide

We all know that investors put their money into teams, not ideas. Ideas are cheap and plentiful. Incredible execution is rare. Ideas evolve (yeah — sometimes they “pivot”). Great teams excel at this, mediocre teams fall apart. So you invest in people.

Which means your team slide better be good.

I can’t tell you too much about the composition of your team — as this is highly dependent on your idea and the industry you’re in.

Teams of one are usually a bad sign. If you can’t bring a team to the table when you ask for funding it just doesn’t reflect well on your ability to recruit. Teams that have a bunch of people listed as “will come on board once we can pay her/him a salary” don’t work. People who are not willing to take the risk are employees, not cofounders.

Don’t bullshit when you talk about your team. Sentences such as “X has 10 years of Internet experience” make me cringe, then laugh and then delete your email. Every man and his dog has ten years of “Internet experience” by now. Be honest, tell me what your team did. If your team hasn’t done anything interesting, well, that’s something you should think about. You won’t be able to hide it anyway. “Y is a Ruby ninja?” I let your teammate speak with one of our portfolio companies for three minutes and I know if he’s a ninja or a backwater coder who still designs MySpace pages for his school chorus. Oh, and by the way: Nobody is a fucking ninja, superstar or what-have-you. Cut the lingo.

Lastly—and this shows your attention to detail — make sure the pictures you use have a common look and feel and don’t look like a collection of randomly selected vacation snapshots. Unless you’re all totally wasted in the shots. That might be funny.

Let’s Talk About Advisors

Typically you add them to your team slide. And most of the time you see something along the lines of a list of names a la “John Doe, Founder ACME Corp.,” sometimes with a picture.

Here’s the deal—advisors provide two signals for an investor:

  1. Instant credibility if you have “brand name” advisors
  2. A strong support network

The first one only works if you have truly recognizable names which are relevant to your field. Sergey Brin works; a random director at a large company doesn’t. The second one is trickier. In pitch presentations I often wonder what those people actually do for you — as often the entrepreneurs either just rattle off the names of the advisors on their slide or even glance over them and say something to the tune of, “and we have a bunch of awesome advisors.”

If you want to make your pitch stronger I recommend you make sure that your advisors are relevant (no, your dad’s buddy from the plumbing shop down the road most likely doesn’t count) and that they are actual advisors and not only people with whom you emailed once. You can spend 15 seconds in your pitch describing the relationship you have with your advisors. (e.g. “We assembled a team of relevant advisors with deep expertise in X, Y and Z. To leverage their expertise and network we meet with them every month for a one-hour session and can also ask for advice via phone and email anytime in between.”)

By the way, there is something to be said about the celebrity advisor. As awesome as it might be that you got Tim Cook from Apple to be an advisor I instantly ask myself how much time you actually get out of him—he’s busy as hell. So you might want to anticipate that (silent) question and address it head on in your pitch.

The Dreaded Finance Slide

The one slide that is made up of one hundred percent pure fantasy. And yet it seems (and it is) so important. Everybody knows that whatever you write down on your finance slide is not reality but (in the best case) wishful thinking. That includes your investor.

Why bother? Simple. The finance slide shows your investor that you understand the fundamentals of your business. That you have a clue. That he can trust you with his money.

So what do you need to know about your finance slide? As so often in life the answer is: it depends. Here’s my personal take. For starters you want to show a one-year plan which covers month-by-month revenue and costs. Lump costs into broader buckets and don’t fall into the trap of false precision by showing off overly exact numbers. Nobody will believe that you really know that your marketing costs will be precisely $6,786 in month eight. You’re much better off eyeballing these numbers. Also don’t present endless lists of minutiae such as your telecommunication costs per month. Show your business literacy by presenting ballpark numbers that make sense (e.g. salaries should come as fully loaded head counts—not doing this is a strong indicator that you don’t know the fundamentals of running a business).

On the revenue side you want to do a couple of things. First of all explain your business model (this is something you might want to pull out on its own slide). Then give me your assumptions—sometimes it makes sense to work with scenarios (best, base and worst case). And then use this model to validate your revenue assumptions bottom-up. If you say you will have 5,000 customers in month three, what does that mean in terms of customer acquisition per day, how does that influence your cost model, etc.

This is probably the most useful part of putting together your financials. It allows you to make an educated guess about your model and will, if done right, feed tons of information back into your own thinking. Weirdly, a lot of entrepreneurs don’t do this and then fall on their faces when, in a pitch meeting, their savvy investor picks the numbers apart and points out the gaping holes (something I like to do—it really shows if someone thought hard about their business or if they are only wanna-be entrepreneurs).

And then you do the same for year two and three — this time on a quarterly basis.

Above all, do this for yourself, not because you need to do this for your investors. Use this exercise to validate your assumptions, to get a deep understanding of the underlying logic of your business. Your startup will be better off for it.

The Business Model Slide

You have a business model, right? Or at least you pretend to have one? If you don’t, and you believe you can get by using a variant of the “we’ll figure it out” phrase, you’d better have stratospheric growth and millions of users.

Here’s the thing about your business model slide in your pitch deck: If you spend time to make it clear, concise, easy to understand and grasp, I am so much more likely to believe that you have at least a foggy clue about what you’re doing. If you, in contrast, do what I see so many startups do and give me a single bullet, hidden on one of your other slides which reads something like “Freemium Model” and that’s it… well, that’s a strong indicator that you haven’t thought about this a whole lot, that you are essentially clueless about your business and that I really shouldn’t trust you with my money.

With that being said, what does a great business model slide look like? It really depends on what your business model is (or what you believe it is, to be precise—these things tend to change). What I look for is a clear expression of your model, the underlying assumptions and the way the model works out. Often this can be neatly expressed in an infographic—showing where and how the money comes in, what the value chain looks like and what the margins are along the chain. Here’s an example: it’s not perfect yet much better than simply expressing your model as “we take a 25% margin.”

Spend some time on your business model slide. Make it clear and concise. The litmus test is: Show someone who doesn’t know anything about your company just this one slide and ask them to explain back to you your business model. If they get it and they get it in its entirety you are off to the races.

The Ask

You know that you always have to have an ask in your pitch, right? It might be for money, it might be for partnerships or just feedback—but never ever leave an audience without asking for something.

There’s a lot of art and science to asking. Here’s my view on the science piece: Let’s assume you pitch for money. Don’t start your pitch with your ask. By putting your ask first you A) rob yourself of the wonderful poetry of a well delivered pitch (as everyone only thinks about the dollars), B) might lose a good chunk of your audience who might not fall into your price bracket (believe me, more often than not they gladly invest if they just hear you out and get excited by your company) and C) will have every investor judge every single slide against the dollar amount you put up.

That said, don’t just close your pitch with a “We’re raising $500k. Thank you very much.” but give me a bit more detail on what you need the money for and how long it is projected to last (pro tip: make sure you budget for enough runway—raising for too short an amount of time is a classic beginner’s mistake). You want to say something like: “We’re raising $500k which will go towards building out our engineering team, building our Android app and getting the first wave of marketing out, which should get us to 500k users. The round is projected to last 12 months and will validate our milestones.”

A Word About Design

One question I get quite often about pitch decks is: How much text should be or need to be on the slides? This can be a tricky question—you will both present the deck to an audience (in which case you tend to want to have less text and more emphasis on your delivery) and you’ll send the deck to investors via email (in which case a slide with just an image on it won’t work—the recipient doesn’t have the context of your verbal presentation).

Guy Kawasaki famously formulated the 10/20/30 rule: 10 Slides, 20 Minutes, 30 Point Minimal Font Size. This is a great starting point—and what I would recommend for your in-person investor presentation. But it might not work for the deck you want to email.

Here’s what I would do (and have done): Start with a slide deck where each and every slide can stand on its own. Assume you give the slide deck to someone who has no context about you and your startup. And at the same time—treat your deck like a deck and not a word document. Keep text short, reduce it to the bare necessity, cut out unnecessary language. Keep the whole deck short—one should be able to flip through your deck in a few minutes and digest the whole thing in 10-15 minutes.

Once you have this, which will be the deck you send out to investors, you take the deck and cut out all the words which are not necessary for an in-person presentation. This will give you the deck that you present. Keeping the two decks in sync with regards to slides, order and design will make it easier for someone who saw your deck to recognize it in your pitch.

Flow

The best pitch deck fails to deliver if your overall flow isn’t right. Flow has as much to do with the logical order of your slides as it has with a certain level of theatrical drama (tension/relief) and your personal delivery.

Guy recommends ten slides in the following order:

  • Problem
  • Your solution
  • Business model
  • Underlying magic/technology
  • Marketing and sales
  • Competition
  • Team
  • Projections and milestones
  • Status and timeline
  • Summary and call to action

Personally I think this is as good an order as most. Some people like to talk about the team earlier (as investors invest into people first and foremost), others have specific slides talking about some intricate issues specific to the business they are pitching.

For me, it comes down to a logical order: talk about the problem you’re solving first, then present the solution (the tension and relief arc), and do what feels right for you. I prefer a slide deck that is a bit off but comes with amazing in-person presence over a great slide deck and an uninspired presentation any day.

Note that you want to have a bit of drama in your deck—yet it’s not your school play where you try to outcompete Shakespeare. Don’t spend half an hour on building tension and then, TADA!, present your solution. In the end it’s all about balance.

And hey, send me your deck and I’ll provide honest, direct feedback. I won’t bite. Promise.

Cameron KaiserIonPower passes V8!

At least in Baseline-only mode, but check it out!

Starting program: /Volumes/BruceDeuce/src/mozilla-36t/obj-ff-dbg/dist/bin/js --no-ion --baseline-eager -f run.js
warning: Could not find malloc init callback function.
Make sure malloc is initialized before calling functions.
Reading symbols for shared libraries ....................................................................+++......... done
Richards: 144
DeltaBlue: 137
Crypto: 215
RayTrace: 230
EarleyBoyer: 193
RegExp: 157
Splay: 140
NavierStokes: 268
----
Score (version 7): 180

Program exited normally.

Please keep in mind this is a debugging version and performance is impaired relative to PPCBC (and if I had to ship a Baseline-only compiler in TenFourFox 38, it would still be PPCBC because it has the best track record). However, all of the code cleanup for IonPower and its enhanced debugging capabilities paid off: with one exception, all of the bugs I had to fix to get it passing V8 were immediately flagged by sanity checks during code generation, saving much laborious single-stepping through generated assembly to find problems.

I have a Master's program final I have to study for, so I'll be putting this aside for a few days, but after I thoroughly bomb it the next step is to mount phase 4, where IonPower can pass the test suite in Baseline mode. Then the real fun will begin -- true Ion-level compilation on big-endian PowerPC. We are definitely on target for 38, assuming all goes well.

I forgot to mention one other advance in IonPower, which Ben will particularly appreciate if he still follows this blog: full support for all eight bitfields of the condition register. Unfortunately, it's mostly irrelevant to generated code because Ion assumes, much to my disappointment, that the processor possesses only a single set of flags. However, some sections of code that we fully control can now do multiple comparisons in parallel over several condition registers, reducing our heavy dependence upon (and usually hopeless serialization of) cr0, and certain FPU operations that emit to cr1 (or require the FPSCR to dump to it) can now branch directly upon that bitfield instead of having to copy it. Also, emulation of mcrxr on G5/POWER4+ no longer has a hard-coded dependency upon cr7, simplifying much conditional branching code. It's a seemingly minor change that nevertheless greatly helps to further unlock the untapped Power in PowerPC.

Karl DubostBugs and spring cleaning

We have a load of bugs in Tech Evangelism which are not taken care of. Usually the pattern is:

  1. UNCONFIRMED: bug reported
  2. NEW: analysis done ➜ [contactready]
  3. ASSIGNED: attempt at a contact ➜ [sitewait]
  4. 🌱, ☀️, 🍂, ❄️… and 🌱
  5. one year later nothing has happened. The bug is all dusty.

A couple of scenarios might have happened during this time:

  • The bug has been silently fixed (Web dev didn't tell, the site changed, the site disappeared, etc.)
  • We didn't get a good contact
  • We got a good contact, but the person has forgotten to push internally
  • The contact/company decided it would be a WONTFIX (Insert here 👺 or 👹)

I set up for myself a whining email, available from the Bugzilla administration panel. The feature is described as follows:

Whining: Set queries which will be run at some specified date and time, and get the result of these queries directly per email. This is a good way to create reminders and to keep track of the activity in your installation.

An event is defined by associating a saved search with a recurring schedule.

Admin panel for setting the whining mail

Every Monday (Japanese time), I receive an email listing the bugs which have not received comments in the last 30 weeks (6+ months). It's usually between 2 and 10 bugs to review every Monday. Not a high workload.

Email sent by my whining schedule

Still, that doesn't seem very satisfying, and it relies specifically on my own setup. How do we help contributors, volunteers or even other employees at Mozilla?

All doomed?

There might be ways to improve. Maybe some low-hanging fruit could help us.

  • To switch from [sitewait] to [needscontact] when there has been no comment on the bug for 𝓃 weeks. The rationale here is that the contact was probably not good, and we need to make the bug again something that someone can feel free to take action on.
  • To improve the automated testing of Web sites (bottom of the page) done by Hallvord. The Web site tests are collected on Github. And you can contribute new tests. For example, when the test reports that the site has been FIXED, a comment could be automatically posted on the bug itself with a needinfo on the ASSIGNED person. Though that might create an additional issue related to false positives.
  • To have a script for easily setting up the whining email that I set up for myself to help contributors have this kind of reminder if they wish so.

Other suggestions? This is important because, beyond Mozilla, we will slowly have the same issue on the 💡 webcompat site for the already-contacted Web sites. Please discuss on your own blog and/or send an email to the compatibility list on this thread.

Otsukare!

PS: No emojis were harmed in this blog post. All emojis appearing in this work are fictitious. Any resemblance to real persons, living or dead, is purely coincidental.

Planet Mozilla InternsMichael Sullivan: Parallelizing compiles without parallelizing linking – using make

I have to build LLVM and Clang a lot for my research. Clang/LLVM is quite large and takes a long time to build if I don’t use -j8 or so to parallelize the build; but I also quickly discovered that parallelizing the build didn’t work either! I work on a laptop with 8gb of RAM and while this can easily handle 8 parallel compiles, 8 parallel links plus Firefox and Emacs and everything else is a one way ticket to swap town.

So I set about finding a way to parallelize the compiles but not the links. Here I am focusing on building an existing project. There are probably nicer ways that someone writing the Makefile could use to make this easier for people, or even the default, but I haven’t really thought about that.

My first attempt was the hacky (while ! pgrep ld.bfd.real; do sleep 1; done; killall make ld.bfd.real) & make -j8; sleep 2; make. Here we wait until a linker has run, kill make, then rerun make without parallel execution. I expanded this into a more general script:

View the code on Gist: https://gist.github.com/3290e688670c54f8d1a2

This approach is kind of terrible. It’s really hacky, it has a concurrency bug (that I would fix if the whole thing wasn’t already so bad), and it slows things down way more than necessary; as soon as one link has started, nothing more is done in parallel.

A better approach is to use locking to make sure only one link command can run at a time. There is a handy command, flock, that does just that: it uses a file lock to serialize execution of a command. We can just replace the Makefile’s linker command with a command that calls flock and everything will sort itself out. Unfortunately there is no totally standard way for Makefiles to represent how they do linking, so some Makefile source diving becomes necessary. (Many use $(LD); LLVM does not.) With LLVM, the following works: make -j8 'Link=flock /tmp/llvm-build $(Compile.Wrapper) $(CXX) $(CXXFLAGS) $(LD.Flags) $(LDFLAGS) $(TargetCommonOpts) $(Strip)'

That’s kind of nasty, and we can do a bit better. Many projects use $(CC) and/or $(CXX) as their underlying linking command; if we override that with something that uses flock then we’ll wind up serializing compiles as well as links. My hacky solution was to write a wrapper script that scans its arguments for “-c”; if it finds a “-c” it assumes it is a compile, otherwise it assumes it is a link and uses locking. We can then build LLVM with: make -j8 'CXX=lock-linking /tmp/llvm-build-lock clang++'.

View the code on Gist: https://gist.github.com/d33029fcda6889b7d097
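
The Gist above holds the author's actual shell wrapper. Purely as an illustration, here is a rough sketch of the same idea in C (a hypothetical lock-linking program, invoked as lock-linking <lockfile> <compiler> <args...>); it is not the author's script, just one way the trick could look:

/* Hypothetical sketch of a lock-linking wrapper: if the command line
 * contains "-c" it is a compile and runs directly; otherwise it is a
 * link, so take an exclusive flock() on a lock file first, which
 * serializes all links across concurrent make jobs. */
#include <fcntl.h>
#include <string.h>
#include <sys/file.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    int is_compile = 0;

    if (argc < 3)
        return 2;                        /* need: lockfile compiler args... */

    for (int i = 3; i < argc; i++)
        if (strcmp(argv[i], "-c") == 0)
            is_compile = 1;

    if (!is_compile) {
        int fd = open(argv[1], O_CREAT | O_RDWR, 0666);
        if (fd < 0 || flock(fd, LOCK_EX) < 0)
            return 2;
        /* The fd (and thus the lock) survives exec and is only released
         * when the linker process exits. */
    }

    execvp(argv[2], argv + 2);           /* run the real compiler/linker */
    return 127;                          /* exec failed */
}

Pointing the Makefile's CXX at a wrapper like this serializes links while leaving compiles fully parallel, which is exactly what the make invocation above does with the shell version.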

Is there a better way to do this sort of thing?

Air MozillaProduct Coordination Meeting

Weekly coordination meeting for Firefox Desktop & Android product planning between Marketing/PR, Engineering, Release Scheduling, and Support.

Kim MoirRelease Engineering special issue now available

The release engineering special issue of IEEE Software was published yesterday. This issue focuses on the current state of release engineering, from both an industry and research perspective. Lots of exciting work happening in this field!

I'm interviewed in the roundtable article on the future of release engineering, along with Chuck Rossi of Facebook and Boris Debic of Google. There are interesting discussions on the current state of release engineering at organizations that run large numbers of builds and tests, and release frequently. The challenges of mobile releases versus web deployments are discussed as well. And finally, there's a discussion of how to find good release engineers, and what the future may hold.

Thanks to the other guest editors on this issue - Stephany Bellomo, Tamara Marshall-Klein, Bram Adams, Foutse Khomh and Christian Bird - for all their hard work that made this happen!


As an aside, when I opened the issue, the image on the front cover made me laugh.  It's reminiscent of the cover on a mid-century science fiction anthology.  I showed Mr. Releng and he said "Robot birds? That is EXACTLY how I pictured working in releng."  Maybe it's meant to represent that we let software fly free.  In any case, I must go back to tending the flock of robotic avian overlords.

David HumphreyRepeating old mistakes

This morning I've been reading various posts about the difficulty ahead for the Pointer Events spec, namely, Apple's (and by extension Google's) unwillingness to implement it. I'd encourage you to read both pieces, and whatever else gets written on this in the coming days.

I want to comment not on the spec as such, but on the process on display here, and the effect it has on the growth and development of the web. There was a time when the web's course was plotted by a single vendor (at the time, Microsoft), and decisions about what was and wasn't headed for the web got made by employees of that corporation. This story is so often retold that I won't pretend you need to read it again.

And yet here we are in 2015, where the web on mobile, and therefore the web in general, is effectively in the control of one vendor; a vendor who, despite its unmatched leadership and excellence in creating beautiful hardware, has shown none of the same abilities in its stewardship and development of the web.

If the only way to improve and innovate the web is to become an employee of Apple Inc., the web is in real danger.

I think that the larger web development community has become lax in its care for the platform on which it relies. While I agree with the trend toward writing JS to supplement and extend the browser, I think that it also tends to lull developers into thinking that their job is done. We can't simply write code on the platform, and neglect writing code for the platform. To ignore the latter, to neglect the very foundations of our work, is to set ourselves up for a time when everything collapses into the sand.

We need more of our developer talent and resources going into the web platform. We need more of our web developers to drop down a level in the stack and put their attention on the platform. We need more people in our community with the skills and resources to design, implement, and test new specs. We need to ensure that the web isn't something we simply use, but something we are active in maintaining and building.

Many web developers that I talk to don't think about the issues of the "web as platform" as being their problem. "If only Vendor X would fix their bugs or implement Spec Y--we'll have to wait and see." There is too often a view that the web is the problem of a small number of vendors, and that we're simply powerless to do anything other than complain.

In actual fact there is a lot that one can do even without the blessing or permission of the browser vendors. Because so much of the web is still open, and the code freely available, we can and should be experimenting and innovating as much as possible. While there's no guarantee that code you write will be landed and shipped, there is still a great deal of value in writing patches instead of just angry tweets: it is necessary to change people's minds about what the web is and what it can do, and there is no better way than with working code.

I would encourage the millions of web developers who are putting time into technologies and products on top of the web to also consider investing some time in the web itself. Write a patch, make some builds, post them somewhere, and blog about the results. Let data be the lever you use to shift the conversation. People will tell you that something isn't possible, or that one spec is better than another. Nothing will do more to convince people than a working build that proves the opposite.

There's no question that working on the web platform is harder than writing things for the web. The code is bigger, older, more complicated, and requires different tooling and knowledge. However, it's not impossible. I've been doing it for years with undergraduate students at Seneca, and if 3rd and 4th years can tackle it, so too can the millions of web developers who are betting their careers and companies on the web.

Having lived through and participated in every stage of the web's development, it's frustrating to see that we're repeating mistakes of the past, and allowing large vendors to control too great a stake of the web. The web is too important for that, and it needs the attention and care of a global community. There's something you can do, something I can do, and we need to get busy doing it.

Air MozillaBugzilla Development Meeting

Help define, plan, design, and implement Bugzilla's future!

Christian HeilmannSimple things: Storing a list of booleans as a single number

This blog started as a scratch pad of simple solutions to problems I encountered. So why not go back to basics?

Yesterday I was asked by someone if there is a possibility to store the state of a collection of checkboxes in a single value. The simplest way I could think of doing this is by using binary conversion.

Tron binary

You can see the result of my approach in this JSBin:

Storing a list of booleans as a single number

What’s going on here? The state of a checkbox is a Boolean, meaning it is checked or unchecked. This could be true or false, or, as JavaScript is a lenient language, 1 or 0. That’s why we can loop over all the checkboxes and assemble a string of their states that consists of zeros and ones:

var inputs = document.querySelectorAll('input');
var all = inputs.length;
var state = ''; // collect one '1' or '0' per checkbox
for (var i = 0; i < all; i++){
  state += inputs[i].checked ? '1' : '0';
}

This results in a string like 1001010101. This could be our value to store, but looks pretty silly and with 50 or more checkboxes becomes very long to store, for example, as a URL parameter string.

That’s why we can use parseInt() to convert this binary number into a decimal one. That’s what the second parameter of parseInt() is for – it isn’t there only to please Douglas Crockford and JSLint (as it defaults to decimal – 10 – people keep omitting it), it specifies the radix of the number you are parsing. The counterpart of parseInt() in this case is toString(), which also takes an optional parameter: the radix of the number system you convert to. That way you can convert this state back and forth:

x = parseInt('1001010101',2);
// x -> 597
x.toString(2);
// "1001010101"

Once converted, you turn it back into a string and loop over the values to set the checked state of the checkboxes accordingly.

A small niggle: leading zeroes don’t work

One little problem here is that if the state results in a string with leading zeroes, you get a wrong result back as toString() doesn’t create them (there is no knowing how long the string needs to be, all it does is convert the number).

x = parseInt('00001010101',2);
x.toString(2);
"1010101"

You can avoid this in two ways: either pad the string by always starting it with a 1, or reverse the string and loop over the checkboxes in reverse. In the earlier example I did the padding; in this JSBin you can see the reversing trick:

Storing a list of booleans as a single number (reverse)

Personally, I like the reversing method better; it just feels cleaner. It does rely a lot on falsy/truthy though, as the size of the resulting arrays differs.

Limitation

In any case, this only works when the amount of checkboxes doesn’t change in between the storing and re-storing, but that’s another issue.

As pointed out by Matthias Reuter on Twitter this is also limited to 52 checkboxes, so if you need more, this is not the solution.

Karl DubostWeb Compatibility Summit 2015

The Web Compatibility Summit was organized in Mountain View (USA) on February 18, 2015. Below I summarize the talks that were given during the day. I encourage you to continue the discussion on compatibility@lists.mozilla.org.

If you want to look at the talks:

Intro to Web Compatibility

Mike Taylor (Mozilla) introduced the Web Compatibility topic. The Web is a giant set of new and old things, and we need to care for a lot of different, sometimes incompatible, things. Features will disappear, new features emerge, but you still need to make the Web usable for all users whatever their devices, countries, browsers.

Mike reminded us of the evolution of User Agent strings and how they grew with more and more terms. The reason is that the User Agent string became an ID for having the content rendered. So any new User Agent is trying to get access to the Web site content by not being filtered out.

Then he went through some traditional bugs (horrors) in JavaScript, CSS, etc.

WebCompat.com has been created to give users a space to report Web Compatibility issues they have on a Web site. The space is also useful for browser vendors: it is relatively easy to tie a browser's bug reporting directly to webcompat.com.

Discussions

Beyond Vendor Prefixes

Jacob Rossi (Microsoft) introduced the purpose and the caveats of vendor prefixes. Vendor prefixes were created to help people test new APIs safely without breaking other browsers. They shield against collisions with other implementations, but they also create Web Compatibility issues. The main issue is that Web developers use these on production sites and in articles, examples and documentation.

Microsoft tried to contact Web sites with bogus code examples and articles. They also created new content with the correct way of writing things. The results were promising for the FullScreen API but the experiment was less successful for other properties. Basically, fix the problem before it happens.

So how do we keep the possibility of a large surface for feedback while at the same time limiting the usage so that it doesn't become a requirement to use a specific browser? The first idea is to put new features behind flags. Then the issue becomes that the feedback is shallow. So Jacob is floating the idea of an API trial, where someone would register to get a key for enabling a feature. It would help developers test and, at the same time, make it possible to set deadlines for the test.

It would probably require a community effort to set up. It has a cost. Where should this discussion happen? It could be a community group at W3C. Not all APIs need to be used at scale to get a real sense of feedback. IndexedDB and appcache would have been good candidates for this. If there was a community site, it would be yet another good way to build awareness about the features.

A good discussion has been recorded on the video.

How CSS is being used in the wild

Alex McPherson (QuickLeft) introduced the wonderful work he did on CSS properties on the Web as they are currently (2014) used by Web devs. The report was initially done to understand what QuickLeft should recommend in terms of technology when they tackle a new project. For this report, they scraped the CSS of the top 15,000 Web sites, checking frequencies and values and plotting distributions. The purpose was not to be exact academic research, so there are definitely caveats in the study, but it gives a rough overview of what is done. The data were collected through a series of HTTP requests. One of the consequences is that we probably miss everything which is set through JavaScript. A better study would include a browser crawler handling the DOM. There are probably variations with regards to the user agent too.

It would be good to have a temporal view of these data and repeat the study continuously, so we can identify how the Web is evolving. Browser vendors seem to have more resources to do this type of study than a single person in an agency.

Engaging with Web Developers for Site Outreach

Colleen Williams (Microsoft) talked about what it takes to do daily Web Compatibility work. Contacting Web developers is really about trying to convince people that there could be a benefit for them in supporting a wider range of platforms.

Social networking and LinkedIn are very useful for contacting the right people in companies. It's very important to be upfront and to tell developers:

  • Who we are
  • What we are doing
  • Why we are contacting them

Microsoft's IE team has a list of top sites and systematically tests every new version of IE against these sites. It's an opportunity for contacting Web sites which are broken. When contacting Web sites, it's important to understand that you are building a long-term relationship. You might have to recontact the people working for a company and/or Web site in a couple of months. It's important to nurture a respectful and interesting relationship with the Web developers. You can't say "your code sucks". It's important to talk to technical people directly.

Do not share your contact list with business departments. We need a level of privacy with the persons we are contacting. They keep in contact with you because they are eager to help solve technical issues. Empathy in the relationship goes a long way in terms of creating mutual trust. The relationship should always go both ways.

Having part of, or the full, solution for the issue will help you a lot in getting it fixed. It's better to show code and help the developer demonstrate what could work.

Companies' user support channels are usually not the best tool, unfortunately, for reaching the company. There's a difference between having a contact and having the right contact.

Web Compatibility Data: A Shared Resource

Finally, Justin Crawford (Mozilla) introduced the project about having a better set of site compatibility data. But I encourage you to read his own summary on his blog.

Unconference

At the end of the day we held a discussion using an unconference format. Alexa Roman moderated the session. We discussed User Agent sniffing and the format of the UA string, console.log for reporting Web Compatibility issues, API trials, documentation, etc.

More information

You can contact and interact with us:

  • IRC: irc.mozilla.org #webcompat
  • Discuss issues on compatibility@lists.mozilla.org
  • Twitter: @webcompat
  • Report bugs: http://webcompat.com/

Planet Mozilla InternsMichael Sullivan: Forcing memory barriers on other CPUs with mprotect(2)

I have something of an unfortunate fondness for indefensible hacks.

Like I discussed in my last post, RCU is a synchronization mechanism that excels at protecting read-mostly data. It is a particularly useful technique in operating system kernels because full control of the scheduler permits many fairly simple and very efficient implementations of RCU.

In userspace, the situation is trickier, but still manageable. Mathieu Desnoyers and Paul E. McKenney have built a Userspace RCU library that contains a number of different implementations of userspace RCU. For reasons I won’t get into, efficient read side performance in userspace seems to depend on having a way for a writer to force all of the reader threads to issue a memory barrier. The URCU library has one version that does this using standard primitives: it sends signals to all other threads; in their signal handlers the other threads issue barriers and indicate so; the caller waits until every thread has done so. This is very heavyweight and inefficient because it requires running all of the threads in the process, even those that aren’t currently executing! Any thread that isn’t scheduled now has no reason to execute a barrier: it will execute one as part of getting rescheduled. Mathieu Desnoyers attempted to address this by adding a membarrier() system call to Linux that would force barriers in all other running threads in the process; after more than a dozen posted patches to LKML and a lot of back and forth, it got silently dropped.
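
As a rough illustration of that signal-based scheme (a hedged sketch only, not the URCU library's actual code; thread registration is assumed to be tracked elsewhere, and the helper names are made up):

/* Sketch of the signal-based barrier-forcing scheme described above. */
#include <pthread.h>
#include <signal.h>
#include <stdatomic.h>
#include <string.h>

#define SIG_FORCE_MB SIGUSR1

static atomic_int acks;

/* Runs in each signaled thread: issue the barrier, then acknowledge. */
static void force_mb_handler(int sig)
{
    (void)sig;
    atomic_thread_fence(memory_order_seq_cst);
    atomic_fetch_add(&acks, 1);
}

static void force_mb_setup(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof(sa));
    sa.sa_handler = force_mb_handler;
    sigemptyset(&sa.sa_mask);
    sigaction(SIG_FORCE_MB, &sa, NULL);
}

/* Writer side: signal every thread, running or not, and wait until each
 * one has run the handler.  This is the expensive part: threads that are
 * not even scheduled must be woken up just to execute a barrier. */
static void force_mb_all_threads(pthread_t *threads, int nthreads)
{
    atomic_store(&acks, 0);
    for (int i = 0; i < nthreads; i++)
        pthread_kill(threads[i], SIG_FORCE_MB);
    while (atomic_load(&acks) < nthreads)
        ;  /* spin; a real implementation would wait more politely */
}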

While pondering this dilemma I thought of another way to force other threads to issue a barrier: by modifying the page table in a way that would force an invalidation of the Translation Lookaside Buffer (TLB) that caches page table entries! This can be done pretty easily with mprotect or munmap.

Full details in the patch commit message.

Planet Mozilla InternsMichael Sullivan: Why We Fight

Why We Fight, or

Why Your Language Needs A (Good) Memory Model, or

The Tragedy Of memory_order_consume’s Unimplementability

This, one of the most terrifying technical documents I’ve ever read, is why we fight: https://www.kernel.org/doc/Documentation/RCU/rcu_dereference.txt.

Background

For background, RCU is a mechanism used heavily in the Linux kernel for locking around read-mostly data structures; that is, data structures that are read frequently but fairly infrequently modified. It is a scheme that allows for blazingly fast read-side critical sections (no atomic operations, no memory barriers, not even any writing to cache lines that other CPUs may write to) at the expense of write-side critical sections being quite expensive.

The catch is that writers might be modifying the data structure as readers access it: writers are allowed to modify the data structure (often a linked list) as long as they do not free any memory removed until it is “safe”. Since writers can be modifying data structures as readers are reading from it, without any synchronization between them, we are now in danger of running afoul of memory reordering. In particular, if a writer initializes some structure (say, a routing table entry) and adds it to an RCU protected linked list, it is important that any reader that sees that the entry has been added to the list also sees the writes that initialized the entry! While this will always be the case on the well-behaved x86 processor, architectures like ARM and POWER don’t provide this guarantee.

The simple solution to make the memory order work out is to add barriers on both sides on platforms where it is needed: after initializing the object but before adding it to the list, and after reading a pointer from the list but before accessing its members (including the next pointer). This cost is totally acceptable on the write-side, but is probably more than we are willing to pay on the read-side. Fortunately, we have an out: essentially all architectures (except for the notoriously poorly behaved Alpha) will not reorder instructions that have a data dependency between them. This means that we can get away with only issuing a barrier on the write-side and taking advantage of the data dependency on the read-side (between loading a pointer to an entry and reading fields out of that entry). In Linux this is implemented with the macros “rcu_assign_pointer” (which issues a barrier if necessary, and then writes the pointer) on the write-side and “rcu_dereference” (which reads the value and then issues a barrier on Alpha) on the read-side.
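
As an illustration (my own sketch, not code from the kernel), here is roughly what that pattern looks like for a writer publishing an entry onto a shared list and a reader traversing it. It is written with C11 atomics so that it is self-contained: the release store plays the role of rcu_assign_pointer and the consume load plays the role of rcu_dereference.

#include <stdatomic.h>
#include <stddef.h>

struct entry {
    int dest;
    int gateway;
    struct entry *next;
};

/* Head of an RCU-style list; readers traverse it without locks. */
static _Atomic(struct entry *) head;

/* Writer: fully initialize the entry, then publish it.  The release store
 * keeps the initializing writes from being reordered after the pointer
 * store (this is the rcu_assign_pointer side). */
void publish(struct entry *e, int dest, int gateway)
{
    e->dest = dest;
    e->gateway = gateway;
    e->next = atomic_load_explicit(&head, memory_order_relaxed);
    atomic_store_explicit(&head, e, memory_order_release);
}

/* Reader: load the pointer, then rely on the data dependency to order the
 * reads of the fields (this is the rcu_dereference side).  In practice,
 * current compilers treat consume as acquire. */
int lookup(int dest)
{
    struct entry *e = atomic_load_explicit(&head, memory_order_consume);
    for (; e != NULL; e = e->next)
        if (e->dest == dest)
            return e->gateway;
    return -1;
}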

There is a catch, though: the compiler. There is no guarantee that something that looks like a data dependency in your C source code will be compiled as a data dependency. The most obvious way to me that this could happen is by optimizing “r[i ^ i]” or the like into “r[0]”, but there are many other ways, some quite subtle. This document, linked above, is the Linux kernel team’s effort to list all of the ways a compiler might screw you when you are using rcu_dereference, so that you can avoid them.

This is no way to run a railway.

Language Memory Models

Programming by attempting to quantify over all possible optimizations a compiler might perform and avoiding them is a dangerous way to live. It’s easy to mess up, hard to educate people about, and fragile: compiler writers are feverishly working to invent new optimizations that will violate the blithe assumptions of kernel writers! The solution to this sort of problem is that the language needs to provide the set of concurrency primitives that are used as building blocks (so that the compiler can constrain its code transformations as needed) and a memory model describing how they work and how they interact with regular memory accesses (so that programmers can reason about their code). Hans Boehm makes this argument in the well-known paper Threads Cannot be Implemented as a Library.

One of the big new features of C++11 and C11 is a memory model which attempts to make precise what values can be read by threads in concurrent programs and to provide useful tools to programmers at various levels of abstraction and simplicity. It is complicated, and has a lot of moving parts, but overall it is definitely a step forward.

One place it falls short, however, is in its handling of “rcu_dereference” style code, as described above. One of the possible memory orders in C11 is “memory_order_consume”, which establishes an ordering relationship with all operations after it that are data dependent on it. There are two problems here: first, these operations deeply complicate the semantics; the C11 memory model relies heavily on a relation called “happens before” to determine what writes are visible to reads; with consume, this relation is no longer transitive. Yuck! Second, it seems to be nearly unimplementable; tracking down all the dependencies and maintaining them is difficult, and no compiler yet does it; clang and gcc both just emit barriers. So now we have a nasty semantics for our memory model and we’re still stuck trying to reason about all possible optimizations. (There is work being done to try to repair this situation; we will see how it turns out.)

Shameless Plug

My advisor, Karl Crary, and I are working on designing an alternate memory model (called RMC) for C and C++ based on explicitly specifying the execution and visibility constraints that you depend on. We have a paper on it and I gave a talk about it at POPL this year. The paper is mostly about the theory, but the talk tried to be more practical, and I’ll be posting more about RMC shortly. RMC is quite flexible. All of the C++11 model apart from consume can be implemented in terms of RMC (although that’s probably not the best way to use it) and consume style operations are done in a more explicit and more implementable (and implemented!) way.

Planet Mozilla InternsMichael Sullivan: The x86 Memory Model

Often I’ve found myself wanting to point someone to a description of the x86’s memory model, but there wasn’t any that quite laid it out the way I wanted. So this is my take on how shared memory works on multiprocessor x86 systems. The guts of this description is adapted/copied from “A Better x86 Memory Model: x86-TSO” by Scott Owens, Susmit Sarkar, and Peter Sewell; this presentation strips away most of the math and presents it in a more operational style. Any mistakes are almost certainly mine and not theirs.

Components of the System:

There is a memory subsystem that supports the following operations: store, load, fence, lock, unlock. The memory subsystem contains the following:

  1. Memory: A map from addresses to values
  2. Write buffers: Per-processor lists of (address, value) pairs; these are pending writes, waiting to be sent to memory
  3. “The Lock”: Which processor holds the lock, or None, if it is not held. Roughly speaking, while the lock is held, only the processor that holds it can perform memory operations.

There is a set of processors that execute instructions in program order, dispatching commands to the memory subsystem when they need to do memory operations. Atomic instructions are implemented by taking “the lock”, doing whatever reads and writes are necessary, and then dropping “the lock”. We abstract away from this.

Definitions

A processor is “not blocked” if either the lock is unheld or it holds the lock.

Memory System Operation

Processors issue commands to the memory subsystem. The subsystem loops, processing commands; each iteration it can pick the command issued by any of the processors to execute. (Each will only have one.) Some of the commands issued by processors may not be eligible to execute because their preconditions do not hold.

  1. If a processor p wants to read from address a and p is not blocked:
    a. If there are no pending writes to a in p’s write buffer, return the value from memory
    b. If there is a pending write to a in p’s write buffer, return the most recent value in the write buffer
  2. If a processor p wants to write value v to address a, add (a, v) to the back of p’s write buffer
  3. At any time, if a processor p is not blocked, the memory subsystem can remove the oldest entry (a, v) from p’s write buffer and update memory so that a maps to v
  4. If a processor p wants to issue a barrier
    a. If the barrier is an MFENCE, p’s write buffer must be empty
    b. If the barrier is an LFENCE/SFENCE, there are no preconditions; these are no-ops **
  5. If a processor p wants to lock the lock, the lock must not be held and p’s write buffer must be empty; the lock is set to be p
  6. If a processor p wants to unlock the lock, the lock must be held by p and p’s write buffer must be empty; the lock is set to be None

Remarks

So, the only funny business that can happen is that a load can happen before a prior store to a different location has been flushed from the write buffer into memory. This means that if CPU0 executes “x = 1; r0 = y” and CPU1 executes “y = 1; r1 = x”, with x and y both initially zero, we can get “r0 == r1 == 0”.
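
Here is that store-buffering experiment as a runnable C11 sketch (my own illustration). With relaxed accesses both loads may observe 0; upgrading the stores and loads to memory_order_seq_cst (which emits an MFENCE or a locked instruction on x86) rules the outcome out. Thread start-up costs make the window small, so the reordering may be rare in practice:

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

/* Store-buffering litmus test: each thread stores to one variable and then
 * loads the other.  On x86, each load may execute before the other CPU's
 * store has left its write buffer, so r0 == r1 == 0 is allowed. */
static atomic_int x, y;
static int r0, r1;

static void *t0(void *arg)
{
    atomic_store_explicit(&x, 1, memory_order_relaxed);
    r0 = atomic_load_explicit(&y, memory_order_relaxed);
    return NULL;
}

static void *t1(void *arg)
{
    atomic_store_explicit(&y, 1, memory_order_relaxed);
    r1 = atomic_load_explicit(&x, memory_order_relaxed);
    return NULL;
}

int main(void)
{
    for (int i = 0; i < 1000000; i++) {
        atomic_store(&x, 0);
        atomic_store(&y, 0);
        pthread_t a, b;
        pthread_create(&a, NULL, t0, NULL);
        pthread_create(&b, NULL, t1, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        if (r0 == 0 && r1 == 0)
            printf("both loads saw 0 on iteration %d\n", i);
    }
    return 0;
}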

The common intuition that atomic instructions act like there is an MFENCE before and after them is basically right; MFENCE requires the write buffer to empty before it can execute and so do lock and unlock.

x86 is a pleasure to compile atomics code for. The “release” and “acquire” operations in the C++11 memory model don’t require any fencing to work. Neither do the notions of “execution order” and “visibility order” in my advisor's and my RMC memory model.

** The story about LFENCE/SFENCE is a little complicated. Some sources insist that they actually do things. The Cambridge model models them as no-ops. The guarantees that they are documented to provide are just true all the time, though. I think they are useful when using non-temporal memory accesses (which I’ve never done), but not in general.

 

Brad LasseyDetecting slow add-ons

As we have been progressing towards a multi-process Firefox, we have seen many bugs filed about slowness that, after investigation, turned out to be related to add-ons interacting poorly with the way multi-process Firefox now operates. This doesn’t necessarily mean that the add-on author has done anything wrong. Instead, code that worked fine with single-process Firefox now behaves differently, as seemingly innocuous calls can cause synchronous calls from one process to the other. See my previous post on unsafe CPOW usage as an example of how this can occur.

This motivated us to start trying to measure add-ons and detect when they may be slowing down a user’s browser, so that we can notify the user. As luck would have it, we recently made a key change that makes all of this possible. In bug 990729 we introduced “compartment-per-add-on” (though there may be multiple compartments per add-on), such that add-on code and Firefox’s js code run in separate compartments. This allowed us to then track the amount of time we spend running a particular add-on by measuring the time spent in the compartments associated with that add-on. We did that in bug 1096666.

That has allowed us to start notifying users when we notice add-ons running slowly on their system. The first cut of that recently landed from bug 1071880, but a more polished UX is being tracked by bug 1124073. If you’re using Nightly and testing e10s (multi-process Firefox) and are notified of a slow add-on, please try disabling it to see if it improves how Firefox is running and report your results to AreWeE10SYet.com. You can see the raw data of what we are measuring by having a look at about:compartments. Also note that all of this is experimental. In fact, we’re already rewriting how we measure the time we spend in compartments in bug 674779.

Rizky AriestiyansyahUndo git rm -r command in github

Well, when I tried to delete a remote folder in my GitHub repository, the command I used was 100% wrong. I used the git rm -r folder/ command ;( and it also deleted the folder in my local repo.

Armen Zambranomozci 0.2.4 released - Support for intermittent oranges + major bug fixes

Big thanks to vaibhav1994 for his intermittent orange script contribution.

Also thanks to adusca and valeriat for their many contributions in this release.

Release notes

The major feature is being able to analyze data about reported intermittent oranges on bugzilla and give the user the ability to trigger jobs to spot where the regression started (generate_triggercli.py).

A lot of bugs fixed and optimizations. I'm only highlighting some from this list: 0.2.3...0.2.4

Highlighted fixes:
  • Fixed/improved issues with jobs completed today
  • Added builds-4hr support
  • allthethings.json gets clobbered after 24 hours
    • This prevents relying on an old file
  • Drop the need to use --repo-name for all scripts
    • This prevents the user from having to add a redundant option
Release notes: https://github.com/armenzg/mozilla_ci_tools/releases/tag/0.2.4
PyPi package: https://pypi.python.org/pypi/mozci/0.2.4
Changes: https://github.com/armenzg/mozilla_ci_tools/compare/0.2.3...0.2.4


Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

Justin CrawfordReport: Web Compatibility Summit

Last week I spoke at Mozilla’s Compatibility Summit. I spoke about compatibility data. Here are my slides.

Compatibility data…

  • is information curated by web developers about the cross-browser compatibility of the web’s features.
  • documents browser implementation of standards, browser anomalies (both bugs and features), and workarounds for common cross-browser bugs. It theoretically covers hundreds of browser/version combinations, but naturally tends to focus on browsers/versions with the most use.
  • is partly structured (numbers and booleans can answer the question, “does Internet Explorer 10 support input type=email?”) and partly unstructured (numbers and booleans cannot say, “Safari Mobile for iOS applies a default style of opacity: 0.4 to disabled textual <input> elements.”).

Compatibility data changes all the time. As a colleague pointed out last week, “Every six weeks we have all new browsers” — each with an imperfect implementation of web standards, each introducing new incompatibilities into the web.

Web developers are often the first to discover and document cross-browser incompatibilities since their work product is immediately impacted. And they do a pretty good job of sharing compatibility data. But as I said in my talk: Compatibility data is an oral tradition. Web developers gather at the town pump and share secrets for making boxes with borders float in Browser X. Web developers sit at the feet of venerated elders and hear how to make cross-browser compatible CSS transitions. We post our discoveries and solutions in blogs; in answers on StackOverflow; in GitHub repositories. We find answers on old blogs; in countless abandoned PHP forums; on the third page of search results.

There are a small number of truly canonical sources for compatibility data. We surveyed MDN visitors in January and learned that they refer to two sources far more than any others: MDN and caniuse.com. MDN has more comprehensive information — more detailed, for more browser versions, accompanied by encyclopedic reference materials. caniuse has a much better interface and integrates with browser market share data. Both have good communities of contributors. Together, they represent the canon of compatibility data.

Respondents to our recent survey said they use the two sites differently: They use caniuse for planning new sites and debugging existing issues, and use MDN for exploring new features and when answering questions that come up when writing code.

On MDN, we’re working on a new database of compatibility data with a read/write API. This project solves some maintenance issues for us, and promises to create more opportunities for web developers to build automation around web compatibility data (for an example of this, see doiuse.com, which scans your CSS for features covered by caniuse and returns a browser coverage report).

Of course the information in MDN and caniuse is only one tool for improving web compatibility, which is why the different perspectives at the Compat Summit were so valuable.

(Illustration: web compatibility as a set of concentric circles, from browser vendors to web developers to deployed sites)

If we think of web compatibility as a set of concentric circles, MDN and caniuse (and the entire, sprawling web compatibility data corpus) occupy a middle ring. In the illustration above, the rings get more diffuse as they get larger, representing the increasing challenge of finding and solving incompatibility as it moves from vendor to web developer to site.

  • By the time most developers encounter cross-browser compatibility issues, those issues have been deployed in browser versions. So browser vendors have a lot of responsibility to make web standards possible; to deploy as few standards-breaking features and bugs as possible. Jacob Rossi from Microsoft invited Compat Summit participants to collaborate on a framework that would allow browser vendors to innovate and push the web forward without creating durable incompatibility issues in deployed websites.
  • When incompatibilities land in browser releases, web developers find them, blog about them, and build their websites around them. At the Compat Summit, Alex McPherson from Quickleft presented his clever work quantifying some of these issues, and I invited all present to start treating compatibility data like an important public resource (as described above).
  • Once cross-browser incompatibilities are discussed in blog posts and deployed on the web, the only way to address incompatibilities is to politely ask web developers to fix them. Mike Taylor and Colleen Williams talked about Mozilla’s and Microsoft’s developer outreach activities — efforts like Webcompat.com, “bug reporting for the internet”.

At the end of the Compat Summit, Mike Taylor asked the participants whether we should have another. My answer takes the form of two questions:

  1. Should someone work to keep the importance of cross-browser compatibility visible among browser vendors and web developers?
  2. If we do not, who will?

I think the answer is clear.

Special thanks to Jérémie Patonnier for helping me get up to speed on web compatibility data.

Armen ZambranoListing builder differences for a buildbot-configs patch improved

Up until now, we updated the buildbot-configs repository to the "default" branch instead of "production" since we normally write patches against that branch.

However, there is a problem with this: buildbot-configs always has to be on the same branch as buildbotcustom. Otherwise, we can have changes land in one repository which require changes in the other one.

The fix was to simply make sure that both repositories are either on default or their associated production branches.

Besides this fix, I have landed two more changes:

  1. Use the production branches instead of 'default'
    • Use -p
  2. Clobber our whole set up (e.g. ~/.mozilla/releng)
    • Use -c

Here are the two changes:
https://hg.mozilla.org/build/braindump/rev/7b93c7b7c46a
https://hg.mozilla.org/build/braindump/rev/bbb5c54a7d42


Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

Nathan Froydmeasuring power usage with power gadget and joulemeter

In the continuing evaluation of how Firefox’s energy usage might be measured and improved, I looked at two programs, Microsoft Research’s Joulemeter and Intel’s Power Gadget.

As you might expect, Joulemeter only works on Windows. Joulemeter is advertised as “a software tool that estimates the power consumption of your computer.” Estimates for the power usage of individual components (CPU/monitor/disk/”base”) are provided while you’re running the tool. (No, I’m not sure what “base” is, either. Perhaps things like wifi?) A calibration step is required for trying to measure anything. I’m not entirely sure what the calibration step does, but since you’re required to be running on battery, I presume that it somehow obtains statistics about how your battery drains, and then apportions power drain between the individual components. Desktop computers can use a WattsUp power meter in lieu of running off battery. Statistics about individual apps are also obtainable, though only power draw based on CPU usage is measured (estimated). CSV logfiles can be captured for later analysis, taking samples every second or so.

Power Gadget is cross-platform, and despite some dire comments on the download page above, I’ve had no trouble running it on Windows 7 and OS X Yosemite (though I do have older CPUs in both of those machines). It works by sampling energy counters maintained by the processor itself to estimate energy usage. As a side benefit, it also keeps track of the frequency and temperature of your CPU. While the default mode of operation is to draw pretty graphs detailing this information in a window, Power Gadget can also log detailed statistics to a CSV file of your choice, taking samples every ~100ms. The CSV file also logs the power consumption of all “packages” (i.e. CPU sockets) on your system.

I like Power Gadget more than Joulemeter: Power Gadget is cross-platform, captures more detailed statistics, and seems a little more straightforward in explaining how power usage is measured.

Roberto Vitillo and Joel Maher wrote a tool called energia that compares energy usage between different browsers on pre-selected sets of pages; Power Gadget is one of the tools that can be used for gathering energy statistics. I think this sort of tool is the primary use case of Power Gadget in diagnosing power problems: it helps you see whether you might be using too much power, but it doesn’t provide insight into why you’re using that power. Taking logs along with running a sampling-based stack profiler and then attempting to correlate the two might assist in providing insight, but it’s not obvious to me that stacks of where you’re spending CPU time are necessarily correlated with power usage. One might have turned on discrete graphics in a laptop, or high-resolution timers on Windows, for instance, but that wouldn’t necessarily be reflected in a CPU profile. Perhaps sampling something different (if that’s possible) would correlate better.

The Servo BlogThis Week In Servo 25

This week, we merged 60 pull requests.

Selector matching has been extracted into an independent library! You can see it here.

We now have automation set up for our Gonk (Firefox OS) port, and gate on it.

Notable additions

Screenshots

Parallel painting, visualized:

New contributors

Meeting

Minutes

  • Rust in Gecko: We’re slowly figuring out what we need to do (and have a tracking bug). Some more candidates for a Rust component in Gecko are an MP4 demultiplexer, and a replacement of safe browsing code.
  • Selector matching is now in a new library
  • ./mach build-gonk works. At the moment it needs a build of B2G, which is huge (and requires a device), but we’re working on packaging up the necessary bits which are relatively small.

Cameron KaiserTwo victories

Busted through my bug with stubs late last night (now that I've found the bug I am chagrined at how I could have been so dense) and today IonPower's Baseline implementation successfully computed π to an arbitrary number of iterations using the nice algorithm by Daniel Pepin:

% ../../../obj-ff-dbg/dist/bin/js --no-ion --baseline-eager -e 'var pi=4,top=4,bot=3,minus = true;next(pi,top,bot,minus,30);function next(pi,top,bot,minus,num){for(var i=0;i<num;i++){pi += (minus == true)?-(top/bot):(top/bot);minus = \!minus;bot+=2;}print(pi);}'
3.1738423371907505
% ../../../obj-ff-dbg/dist/bin/js --no-ion --baseline-eager -e 'var pi=4,top=4,bot=3,minus = true;next(pi,top,bot,minus,30000);function next(pi,top,bot,minus,num){for(var i=0;i<num;i++){pi += (minus == true)?-(top/bot):(top/bot);minus = \!minus;bot+=2;}print(pi);}'
3.141625985812036

Still work to be done on the rudiments before attacking the test suite, but code of this complexity running correctly so far is a victory. And, in a metaphysical sense, speaking from my perspective as a Christian (and a physician aware of the nature of his illness), here is another victory: a Mozillian's last post from the end stages of his affliction. Even for those who do not share that religious perspective, it is a truly brave final statement and one I have not seen promoted enough.

Daniel Stenbergcurl, smiley-URLs and libc

Some interesting Unicode URLs have recently been seen used in the wild – like in this billboard ad campaign from Coca Cola, and a friend of mine asked me about curl in reference to these and how it deals with such URLs.


(Picture by stevencoleuk)

I ran some tests and decided to blog my observations since they are a bit curious. The exact URL I tried was ‘www.😃.ws’ (not the same smiley as shown on this billboard: 😂) – it is really hard to enter by hand so now is the time to appreciate your ability to cut and paste! It appears they registered several domains for a set of different smileys.

These smileys are not really allowed IDN (where IDN means International Domain Names) symbols, which makes these domains a bit different. They should not (see below for details) be converted to punycode before getting resolved; instead, I assume that the pure UTF-8 sequence should, or at least will, be fed into the name resolver function. Well, either way, it should pass in either punycode or the UTF-8 string.

If curl was built to use libidn, it still won’t convert this to punycode, and the verbose output says “Failed to convert www.😃.ws to ACE; String preparation failed”.

curl (exact version doesn’t matter) using the stock threaded resolver

  • Debian Linux (glibc 2.19) – FAIL
  • Windows 7 - FAIL
  • Mac OS X 10.9 – SUCCESS

But then also perhaps to no surprise, the exact same results are shown if I try to ping those host names on these systems. It works on the mac, it fails on Linux and Windows. Wget 1.16 also fails on my Debian systems (just as a reference and I didn’t try it on any of the other platforms).

My curl build on Linux that uses c-ares for name resolving instead of glibc succeeds perfectly. host, nslookup and dig all work fine with it on Linux too (as well as nslookup on Windows):

$ host www.😃.ws
www.\240\159\152\131.ws has address 64.70.19.202
$ ping www.😃.ws
ping: unknown host www.😃.ws

While the same command sequence on the mac shows:

$ host www.😃.ws
www.\240\159\152\131.ws has address 64.70.19.202
$ ping www.😃.ws
PING www.😃.ws (64.70.19.202): 56 data bytes
64 bytes from 64.70.19.202: icmp_seq=0 ttl=44 time=191.689 ms
64 bytes from 64.70.19.202: icmp_seq=1 ttl=44 time=191.124 ms

Slightly interesting additional tidbit: if I rebuild curl to use gethostbyname_r() instead of getaddrinfo() it works just like on the mac, so clearly this is glibc having an opinion on how this should work when given this UTF-8 hostname.
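
For anyone who wants to poke at this themselves, here is a small test program (my own sketch, not anything from the curl sources) that feeds the raw UTF-8 host name to both resolver entry points; which one fails will depend on your libc:

#include <netdb.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>

int main(void)
{
    /* www.😃.ws spelled out as UTF-8 bytes */
    const char *host = "www.\xF0\x9F\x98\x83.ws";

    struct addrinfo hints, *res;
    memset(&hints, 0, sizeof(hints));
    hints.ai_socktype = SOCK_STREAM;

    int rc = getaddrinfo(host, NULL, &hints, &res);
    if (rc != 0) {
        printf("getaddrinfo: %s\n", gai_strerror(rc));
    } else {
        printf("getaddrinfo: resolved\n");
        freeaddrinfo(res);
    }

    struct hostent *he = gethostbyname(host);
    printf("gethostbyname: %s\n", he ? "resolved" : "failed");
    return 0;
}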

Pasting in the URL into Firefox and Chrome works just fine. They both convert the name to punycode and use “www.xn--h28h.ws” which then resolves to the same IPv4 address.

Update: as was pointed out in a comment below, the “64.70.19.202” IP address is not the correct IP for the site. It is just the registrar’s landing page so it sends back that response to any host or domain name in the .ws domain that doesn’t exist!

What do the IDN specs say?

The U+263A smiley

This is not my area of expertise. I had to consult Patrik Fältström here to get this straightened out (but please, if I got something wrong here, the mistake is still all mine). Apparently this smiley is allowed in RFC 3490 (IDNA2003), but that has been replaced by RFC 5890-5892 (IDNA2008) where it is DISALLOWED. If you read the spec, this is 263A.

So, depending on which spec you follow it was a valid IDN character or it isn’t anymore.

What does the libc docs say?

The POSIX docs for getaddrinfo don’t contain enough info to tell who’s right, but they don’t forbid UTF-8 encoded strings. The regular glibc docs for getaddrinfo also don’t say anything and, interestingly, the Apple Mac OS X version of the docs says just as little.

With this complete lack of guidance, it is hardly any additional surprise that the glibc gethostbyname docs also don’t mention what it does in this case, but clearly it doesn’t do the same as getaddrinfo, in the glibc case at least.

What’s on the actual site?

A redirect to www.emoticoke.com which shows a rather boring page.


Who’s right?

I don’t know. What do you think?

Michael KaplyWhat I Do

One of the trickier things about being self-employed, especially around an open source project like Firefox, is knowing where to draw the line between giving out free advice and asking people to pay for my services. I'm always hopeful that answering a question here and there will one day lead to folks hiring me or purchasing CCK2 support. That's why I try to be as helpful as I can.

That being said, I'm still surprised at the number of times a month I get an email from someone requesting that I completely customize Firefox to their requirements and deliver it to them for free. Or write an add-on for them. Or diagnose the problem they are having with Firefox.

While I appreciate their faith in me, somehow I think it's gotten lost somewhere that this is what I do for a living.

So I just wanted to take a moment and let people know that you can hire me. If you need help with customizing and deploying Firefox, or building Firefox add-ons or building a site-specific browser, that's what I do. And I'd be more than happy to help you do that.

But if all you have is a question, that's great too. The best place to ask is on my support site at cck2.freshdesk.com.

Yunier José Sosa VázquezA new interface for Firefox on tablets

It's Firefox update Tuesday once again, and I have the pleasure of sharing with you the new features you will find in the new version of the red panda, both for desktop and for mobile devices. Standing out in this release are the new interface for tablets and the introduction of the first changes towards a multi-process architecture.

In order to improve the user experience, the new Firefox strikes a good balance between simplicity and power by offering a horizontal tab bar and a full-screen tab panel. Designed for both landscape and portrait use, the new interface takes advantage of the full screen size offered by the tablets available on the market.

firefox-tablets

The introduction of the first changes towards a multi-process architecture will have Firefox handle tabs, add-ons and plugins outside the main process. Electrolysis (e10s) is the name of the project that aims to add this capability to the browser's core; once finished, it will provide greater security and stability, and, if something fails, Firefox will not close unexpectedly.

The modern search design now debuts in more localizations of the world's most translated browser. When you visit a web page that suggests adding a search engine to Firefox, a + icon will be shown in the search bar.

sugerencias-busquedas-firefox36

The sites "pinned" on the new tab page can now be synchronized through Firefox Accounts across all your devices. Full support has been added for version 2 of the HTTP protocol, which enables a faster, more scalable and more responsive web. The red panda has also been localized into Uzbek thanks to the Uzbekistan community.

For Android Lollipop, the Maithili (mai) language has been added.

Other news

If you want to know more, you can read the release notes (in English).

You can get this version from our Downloads area, in Spanish and English, for Linux, Mac, Windows and Android. Remember that to browse through proxy servers you must set the preference network.negotiate-auth.allow-insecure-ntlm-v1 to true in about:config.

The Mozilla BlogFirefox for Android Gets a Simpler, Sleeker New Look for Tablets

We’re constantly working on ways to give you the best Firefox experience, everywhere you are. That’s why our goal with the new Firefox for Android look on tablets was to simplify the interaction with tabs and allow users to create, remove and switch tabs with a single tap.

We’ve also revamped the toolbar and added a new, full screen tab panel for a sleek, visual overview of tabs.

The Mozilla BlogUnreal Engine 4.7 Binary Release Includes HTML5 Export

With the launch of Unreal Engine 4.7, Epic Games has added the ability to export to HTML5 as a regular part of their Windows binary engine releases. One of the world’s most advanced game engines is one step closer to delivering truly amazing content right in your browser. With this addition, developers will be able to create content in Unreal Engine 4.7, immediately compile it to the Web, and launch it in their browser of choice with the click of a button.

Epic Games’ work with WebGL and HTML5 has been part of the Unreal Engine 4 source code distribution for many months and has been maturing over the past year. While still a pre-release version, the HTML5 output is robust enough for developers to use it with their content and give feedback on the technology. Mozilla is excited to support Epic Games in their continuing effort to bring this amazing engine to the Web.


Screenshot of output from UE4

In the leadup to GDC, Mozilla will be publishing a series of articles about different aspects of the technology that makes it possible to bring native engines to the Web. Mozilla will also be showcasing several next generation Web technologies at our booth including WebVR demos built in Unreal Engine 4. Output from the engine will also be used to showcase Firefox Developer Tools and demonstrate how they can be leveraged with this type of content.

Mozilla will be taking part in Epic’s HTML5 export presentation, which will be broadcast live on Twitch at 2pm PT on Thursday, March 5th, and can be viewed at www.twitch.tv/unrealengine.

For more information on this news from Epic Games, visit their blog.

Come take a look at where the Web is heading at the Firefox Booth (South Hall Booth #2110) or learn more about Unreal Engine 4.7 at Epic Games’ Booth (South Hall Booth #1024).

Christian HeilmannMaking distributed team meetings work

Being in a distributed team can be tough. Here are a few tricks I learned over the years to make distributed meetings easier.
This is cross-posted on Medium, you can comment there.

Man on phone with 'shut up' on whiteboard behind him. Photo Credit: Tim Caynes

Working as distributed teams is a red flag for a lot of companies. Finding a job that allows you to work remotely is becoming a rare opportunity. As someone who has worked remotely for the last few years, though, I can say that it is a much better way of working. And it results in more effective and, above all, happier teams.

What it needs is effort by all involved to succeed and often a few simple mistakes can derail the happy train.

The big issue, of course, is time difference. It is also a massive opportunity. A well organised team can cover a product in half the time. To achieve that, you need well organised and honest communication. Any promise you can’t keep means you lost a whole day instead of being able to deliver quicker.

There are many parts to this, but today, I want to talk about meetings with distributed teams. In the worst case, these are phone conferences. In the best case, you have a fancy video conferencing system all people in your team can use.

Quick aside: a lot of people will use this as an opportunity to tell me about their amazing software solutions for this problem. I am sure you can tell me a lot about your multi-media communication system with supercharged servers and affordable rates. Please, don’t. I don’t buy these things and I also have seen far too many fail. Connectivity and travel don’t go too well together. The crystal clear, uninterrupted, distraction-free meetings we see in advertisements are a product of fiction. You might as well add unicorns, at least that would make them fabulous. I’ve seen too many bad connections and terrible surroundings. What they taught me is that planning for failure beats any product you can buy. Hence these tips here.

Synchronous Asynchronicity?


Meetings are synchronous – you expect all people to be in one place at the same time. Distributed teams work asynchronously. While one part is doing the work, the others are sleeping or having a social life until it is time to switch over.
Thus, if you have a meeting at 10 in the morning California time and you talk to people in Europe, you have two kinds of people in the virtual room:

  • Those who already had a whole day of work with all the joys and annoyances it brings
  • Those who just got up, far too early for their own liking and got stuck in transit. Either on a packed motorway or involuntarily nestling in the armpit of a total stranger in an overcrowded train

In any case, both are kind of weary and annoyed. Without proper planning, this isn’t an opportunity for knowledge sharing. It is a recipe for disaster as both groups have wildly diverging agendas about this meeting.

  • One group wants to give an update what they did today, hand over and call it a day and
  • The other group wants to know what is important today and get on with it to fight off the cobwebs of commuting

As an expert remote worker, you start to defuse this issue a bit by shifting your daily agenda around. You allow for a few hours of overlap, either by staying longer on the job and starting later, in the early afternoon, or by getting up much earlier and communicating with your colleagues at home before leaving for the office. This works well as long as you don’t have a partner who works normal hours or you live in a country where shops close early.

In any case, you need to find a way to make both groups more efficient in this meeting, so here is the first trick:

Separate the meeting into remote updates and social interactions


Photo by Ron Dolette

This may sound weird, but you can’t have both. Having a meeting together in a room in the office is great for the locals:

  • You can brainstorm some ideas in an animated discussion where everyone talks
  • You can cover local happenings (“did you see the game last night? What a ludicrous display”)
  • You can have a chat about what’s bothering you (“damn, the office is cold today”) and
  • talk about location-specific issues and socialise.

This doesn’t work when the topic of discussion is current and about the location you are in. Telling someone eight hours ahead and far away from you that you will enjoy company-provided food right after the event is not helping their morale — on the contrary. It alienates people outside the group even more than they already feel alienated. It is great for the team in the room, though. Don’t let one group’s perk become the other’s reason for jealousy.

By separating your meeting into four parts, you can focus better on giving the right people what they came for and get their updates. Make it a meeting sandwich:

  • Meet in the room, have a quick social chat,
  • Dial in the remote participants, ask them about a quick social update,
  • Have a focused info session with both groups,
  • Let the remote people disconnect and phase out the meeting with another local, social update.

This way everybody uses their time efficiently, feels listened to and not bothered by updates or benefits that don’t apply to them. I have seen this work well in the past. It also resulted in much shorter meetings and we all love those.

Have a clear agenda and turn it into live meeting notes


Sending out an agenda before the meeting makes things crystal clear. People know if it is worth joining and can choose not to. Make sure that all the resources you cover in the meeting are linked in the agenda. Make sure that these resources are publicly available to all people in the meeting and not on someone’s internal IP or file share that only a few have access to. Having to ask “what is Margaret talking about, where can I find this?” is distracting and frustrating.

During the meeting you can add notes and findings to the agenda items. This has a lot of benefits:

  • People who can not attend the meeting or drop off half way through can look up what happened later.
  • You have an archive of your meetings without having the extra work of publishing meeting notes.
  • People who missed the meeting can scan the meeting results. This is much easier than listening to an hour long recording or watching a video of people in a room talking to a microphone. As beneficial as a video call is when you are all live, it gets tedious and hard to navigate to the items you care about when it is a recording.

Be aware of sound distractions


In essence, what you are dealing with here is a many-to-one conversation. If you have several people talking to you and you can see them, this is not an issue. If you can’t see them and don’t even have a spatial sense of where the sound comes from, it all blurs together. That’s why it is important to have only one person speak at any time and for the others to be aware that any noise they make is distracting. This means:

  • As someone remote, mute your microphone. There is nothing more annoying than the clatter of a keyboard magnified by the microphone just above it
  • As someone in the room with others, lean away from the microphone. Don’t cough into it, don’t shift things around on the table the mic is standing on. Coffee mugs, spoons and pens can be incredibly loud on those.
  • As the speaker, lean into the microphone and speak clearly – always assume there is static and sound loss. A mumbled remark in the back of the room followed by laughter by all could come across to a remote listener as an inside joke or even an insult. No need to risk such misunderstandings.
  • If you switch the speaker, tell them to introduce themselves. This may feel silly in the room, but it helps avoiding confusion on the other side.

Use a chat system as the lifeline of the meeting


Video and Audio chat will always go wrong in one way or another. The problem is that when you are presenting the system won’t tell you that. You are not aware that the crystal clear image of yourself with perfect sound is a pixelated mess with a robot voice that makes Daft Punk jealous on the other end.

Having a chat running at the same time covers a few scenarios:

  • People can tell you when something is wrong on their end or when you stopped being comprehensible
  • People can comment without interrupting your audio flow. Many systems switch the presenter as soon as someone speaks – which is unnecessary in this case.
  • People can post resources without having to interrupt you. “In case you missed it, Jessica is talking about http://frolickinghedgehogs.com”

Have a consumable backup for each live update


Photo by John Trainor

If you are presenting things in the meeting, send out screencasts or slide decks of what you present beforehand. Far too many times, as a remote participant, you cannot see anything when someone shares their screen. Switching from presenter to presenter always ends up wasting a lot of time waiting for the computer to do the thing we want it to.

For the presenter this allows for better explanations of what is happening. It is distracting when you present and press a button and nothing happens. The remote attendees might also have some lag and get a delayed display. This means you talk about something different than they are currently seeing on their screen.

Plan for travel and meeting as a whole team


Photo by Joris Louwes

Last, but not least, plan to have visits where you meet as a whole team. Do social things together, give people a chance to get to know one another. Once you connected a real human with the flat, blurry face on the screen, things get much easier. Build up a cadence of this. Meet every quarter or twice a year. This might look expensive at first sight, but pays out in team efficiency. My US colleagues had an “a-hah” moment when they were on a laptop late in the evening waiting for others to stop talking about local things they couldn’t care less about. I felt much more empathy for my colleagues once I had to get up much earlier than I wanted to be in the office in time to meet my European colleagues. Let the team walk in each other’s shoes for a while and future meetings will be much smoother.

Mike TaylorWebKitCSSMatrix and ehow.com and web compatibility stuff

Wow, late February 2015, time to blog about more exciting web compatibility bugs.

Lately we've been receiving a number of reports, both in Bugzilla and on webcompat.com, that ehow.com isn't working in Firefox mobile browsers. With compelling articles like How to blog for cash and How to do your own SEO for your blog or website, this is not a great situation for my fellow Firefox Mobile-using-"how-to-get-rich-by-web-logging" friends.

And now for a brief message from our sponsor, http://vart.institute/: go learn about Art and programming and Mary Cassatt. // TODO(mike): ask jenn for money or how to do SEO.

The biggest obstacle to getting the site to behave is: ReferenceError: WebKitCSSMatrix is not defined. There are a few minor CSS issues, but nothing that sprinkling a few unprefixed properties won't solve.

If you're not familiar with WebKitCSSMatrix, Apple has some docs. It's pretty cool (and much nicer than using regular expressions to dice up serialized transform matrices and then doing math and string wrangling yourself). Microsoft even has an equivalent called MSCSSMatrix (which WebKitCSSMatrix is mapped to for compat reasons).

Once upon a time, this thing was specced as CSSMatrix in the CSS3 2D Transforms spec, but eventually was removed (because I forget and am too lazy to search through www-style archives). It returned as a superset of CSSMatrix and SVGMatrix and got the sweet new name of DOMMatrix—which now allows it to be mutable or immutable, depending on your needs, i.e., do I want a billion new object instances or can I just modify this thing in place.

There's a handful of polyshims available on GitHub if you need them, but the simplest is to just map WebKitCSSMatrix to DOMMatrix, which is supported in Firefox since Gecko 33.

Impress your friends like so:

if (!('WebKitCSSMatrix' in window) && window.DOMMatrix) {
  window.WebKitCSSMatrix = window.DOMMatrix;
}

In theory that's cool and useful, but if you do a little bit more digging you'll find that it only gets you so far on the web. Back to ehow.com: here's the code that powers the PageSlider class:

function r() {
  var e = document.defaultView.getComputedStyle(E[0], null),
  t = new WebKitCSSMatrix(e.webkitTransform);
  return t.m41
}

// some method does W.At = r()
// then some other method calls d(W.At + o, 'none')

function d(e, t) {
  var n = 'none' != t ? 'all ' + t + 's ease-out' : 'none';
  E[0].style.webkitTransition = n,
  E[0].style.webkitTransform = 'translate3d(' + e + 'px,0,0)'
}

Unfortunately, WebKitCSSMatrix and webkitTransform aren't really possible to tease apart if you're trying to be compatible with the deployed web. So a straightforward WebKitCSSMatrix to DOMMatrix mapping won't get you very far.

Byron Joneshappy bmo push day!

the following changes have been pushed to bugzilla.mozilla.org:

  • [1088086] Possible duplicate search doesn’t return any results if you input “a->b” (for any a/b)
  • [1102364] Add microdata to HTML bugmail so GMail can display a “View bug” button
  • [1130721] Implement support for the attachment author for pronoun substitution in Custom Search
  • [1123275] Changes to form.reps.mentorship

discuss these changes on mozilla.tools.bmo.


Filed under: bmo, mozilla

Nicholas NethercoteFix your damned data races

Nathan Froyd recently wrote about how he has been using ThreadSanitizer to find data races in Firefox, and how a number of Firefox developers — particularly in the networking and JS GC teams — have been fixing these.

This is great news. I want to re-emphasise and re-state one of the points from Nathan’s post, which is that data races are a class of bug that almost everybody underestimates. Unless you have, say, authored a specification of the memory model for a systems programming language, your intuition about the potential impact of many data races is probably wrong. And I’m going to give you three links to explain why.

Hans Boehm’s paper How to miscompile programs with “benign” data races explains very clearly that it’s possible to correctly conclude that a data race is benign at the level of machine code, but it’s almost impossible at the level of C or C++ code. And if you try to do the latter by inspecting the code generated by a C or C++ compiler, you are not allowing for the fact that other compilers (including future versions of the compiler you used) can and will generate different code, and so your conclusion is both incomplete and temporary.

Dmitri Vyukov’s blog post Benign data races: what could possibly go wrong? covers similar ground, giving more examples of how compilers can legitimately compile things in surprising ways. For example, at any point the storage used by a local variable can be temporarily used to hold a different variable’s value (think register spilling). If another thread reads this storage in a racy fashion, it could read the value of an unrelated variable.
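
To make this less abstract, here is a sketch (my own, not an example taken verbatim from Firefox or from the papers above) of the kind of innocent-looking race they discuss, where a perfectly legal compiler transformation changes the program's behaviour:

extern void do_some_work(void);

/* 'stop' is written by one thread and polled by another without any
 * synchronization -- a data race, however "benign" it looks. */
int stop;

void worker(void)
{
    /* Because the access is racy, the compiler may legally hoist the load
     * of 'stop' out of the loop, turning this into an infinite loop if
     * 'stop' was 0 on entry.  Declaring it _Atomic (even with relaxed
     * loads) forbids that transformation. */
    while (!stop)
        do_some_work();
}

void stop_worker(void)
{
    stop = 1;  /* racy write */
}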

Finally, John Regehr’s blog has many posts that show how C and C++ compilers take advantage of undefined behaviour to do surprising (even shocking) program transformations, and how the extent of these transformations has steadily increased over time. Compilers genuinely are getting smarter, and are increasingly pushing the envelope of what a language will let them get away with. And the behaviour of a C or C++ programs is undefined in the presence of data races. (This is sometimes called “catch-fire” semantics, for reasons you should be able to guess.)

So, in summary: if you write C or C++ code that deals with threads in Firefox — and that probably includes everybody who writes C or C++ code in Firefox — you should have read at least the first two links I’ve given above. If you haven’t, please do so now. If you have read them and they haven’t made you at least slightly terrified, you should read them again. And if TSan identifies a data race in code that you are familiar with, please take it seriously, and fix it. Thank you.

Mozilla WebDev CommunityBeer and Tell – February 2015

Once a month, web developers from across the Mozilla Project get together to speedrun classic video games. Between runs, we find time to talk about our side projects and drink, an occurrence we like to call “Beer and Tell”.

There’s a wiki page available with a list of the presenters, as well as links to their presentation materials. There’s also a recording available courtesy of Air Mozilla.

Michael Kelly: Refract

Osmose (that’s me!) started off with Refract, a website that can turn any website into an installable application. It does this by generating an Open Web App on the fly that does nothing but redirect to the specified site as soon as it is opened. The name and icon of the generated app are auto-detected from the site, or they can be customized by the user.

Michael Kelly: Sphere Online Judge Utility

Next, Osmose shared spoj, a Python-based command line tool for working on problems from the Sphere Online Judge. The tool lets you list and read problems, as well as create solutions and test them against the expected input and output.

Adrian Gaudebert: Spectateur

Next up was adrian, who shared Spectateur, a tool to run reports against the Crash-Stats API. The webapp lets you set up a data model using attributes available from the API, and then process that data via JavaScript that the user provides. The JavaScript is executed in a sandbox, and the resulting view is displayed at the bottom of the page. Reports can also be saved and shared with others.

Peter Bengtsson: Autocompeter

Peterbe stopped by to share Autocompeter, which is a service for very fast auto-completion. Autocompeter builds upon peterbe’s previous work with fast autocomplete backed by Redis. The site is still not production-ready, but soon users will be able to request an API key to send data to the service for indexing, and Air Mozilla will be one of the first sites using it.

Pomax: inkdb

The ever-productive Pomax returns with inkdb.org, a combination of the many color- and ink-related tools he’s been sharing recently. Among other things, inkdb lets you browse fountain pen inks, map them on a graph based on similarity, and find inks that match the colors in an image. The website is also a useful example of the Mozilla Foundation Client-side Prototype in action.

Matthew Claypotch: rockbot

Lastly, potch shared a web interface for suggesting songs to a Rockbot station. Rockbot currently only has Android and iOS apps, and potch decided to create a web interface to allow people without Rockbot accounts or phones to suggest songs.


No one could’ve anticipated willkg’s incredible speedrun of Mario Paint. When interviewed after his blistering 15 hour and 24 minute run, he refused to answer any questions and instead handed out fliers for the grand opening of his cousin’s Inkjet Cartridge and Unlicensed Toilet Tissue Outlet opening next Tuesday at Shopper’s World on Worcester Road.

If you’re interested in attending the next Beer and Tell, sign up for the dev-webdev@lists.mozilla.org mailing list. An email is sent out a week beforehand with connection details. You could even add yourself to the wiki and show off your side-project!

See you next month!

Nick CameronCreating a drop-in replacement for the Rust compiler

Many tools benefit from being a drop-in replacement for a compiler. By this, I mean that any user of the tool can use `mytool` in all the ways they would normally use `rustc` - whether manually compiling a single file or as part of a complex make project or Cargo build, etc. That could be a lot of work; rustc, like most compilers, takes a large number of command line arguments which can affect compilation in complex and interacting ways. Emulating all of this behaviour in your tool is annoying at best, especially if you are making many of the same calls into librustc that the compiler is.

The kind of things I have in mind are tools like rustdoc or a future rustfmt. These want to operate as closely as possible to real compilation, but have totally different outputs (documentation and formatted source code, respectively). Another use case is a customised compiler. Say you want to add a custom code generation phase after macro expansion, then creating a new tool should be easier than forking the compiler (and keeping it up to date as the compiler evolves).

I have gradually been trying to improve the API of librustc to make creating a drop-in tool easier (many others have also helped improve these interfaces over the same time frame). It is now pretty simple to make a tool which is as close to rustc as you want it to be. In this tutorial I'll show how.

Note/warning, everything I talk about in this tutorial is internal API for rustc. It is all extremely unstable and likely to change often and in unpredictable ways. Maintaining a tool which uses these APIs will be non-trivial, although hopefully easier than maintaining one that does similar things without using them.

This tutorial starts with a very high level view of the rustc compilation process and of some of the code that drives compilation. Then I'll describe how that process can be customised. In the final section of the tutorial, I'll go through an example - stupid-stats - which shows how to build a drop-in tool.

Continue reading on GitHub...

Andreas GalSearch works differently than you may think

Search is the main way we all navigate the Web, but it works very differently than you may think. In this blog post I will try to explain how it worked in the past, why it works differently today and what role you play in the process.

The services you use for searching, like Google, Yahoo and Bing, are called search engines. The very name suggests that they go through a huge index of Web pages to find every one that contains the words you are searching for. 20 years ago, search engines indeed worked this way. They would “crawl” the Web and index it, making the content available for text searches.

As the Web grew larger, searches would often find the same word or phrase on more and more pages. This was starting to make search results less and less useful because humans don’t like to read through huge lists to manually find the page that best matches their search. A search for the word “door” on Google, for example, gives you more than 1.9 billion results. It’s impractical — even impossible — for anyone to look through all of them to find the most relevant page.


Google finds about 1.9 billion results for the search query “door”.

To help navigate the ever growing Web, search engines introduced algorithms to rank results by their relevance. In 1996, two Stanford graduate students, Larry Page and Sergey Brin, discovered a way to use the information available on the Web itself to rank results. They called it PageRank.

Pages on the Web are connected by links. Each link contains anchor text that explains to readers why they should follow the link. The link itself points to another page that the author of the source page felt was relevant to the anchor text. Page and Brin discovered that they could rank results by analyzing the incoming links to a page and treating each one as a vote for its quality. A result is more likely to be relevant if many links point to it using anchor text that is similar to the search terms. Page and Brin founded a search engine company in 1998 to commercialize the idea: Google.
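To make the idea of link “votes” concrete, here is a minimal sketch of PageRank-style scoring in Rust. It is purely illustrative: the tiny hand-made link graph, the damping factor of 0.85, and the fixed iteration count are assumptions for the example, not a description of Google’s actual system, and real PageRank also weighs anchor text and handles dangling pages more carefully.

```rust
// Toy PageRank: each page's score is shared evenly across its outgoing
// links, so every incoming link acts as a weighted "vote" for its target.
fn pagerank(links: &[Vec<usize>], iterations: usize, damping: f64) -> Vec<f64> {
    let n = links.len();
    let mut rank = vec![1.0 / n as f64; n];
    for _ in 0..iterations {
        // Every page gets a small base score, then collects votes.
        let mut next = vec![(1.0 - damping) / n as f64; n];
        for (page, outgoing) in links.iter().enumerate() {
            if outgoing.is_empty() {
                continue; // dangling page: ignored in this toy version
            }
            let share = damping * rank[page] / outgoing.len() as f64;
            for &target in outgoing {
                next[target] += share;
            }
        }
        rank = next;
    }
    rank
}

fn main() {
    // Pages 0 and 1 both link to page 2; page 2 links back to page 0.
    let links = vec![vec![2], vec![2], vec![0]];
    let ranks = pagerank(&links, 20, 0.85);
    println!("{:?}", ranks); // page 2 ends up with the highest score
}
```

The repeated redistribution (power iteration) converges because the damping factor hands a little score back to every page each round, and the page everyone links to accumulates the most votes.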

PageRank worked so well that it completely changed the way people interact with search results. Because PageRank correctly offered the most relevant results at the top of the page, users started to pay less attention to anything below that. This also meant that pages that didn’t appear on top of the results page essentially started to become “invisible”: users stopped finding and visiting them.

To experience the “invisible Web” for yourself, head over to Google and try to look through more than just the first page of results. So few users ever wander beyond the first page that Google doesn’t even bother displaying all the 1.9 billion search results it claims to have found for “door.” Instead, the list just stops at page 63, about 100 million pages short of what you would have expected.

Despite reporting over 1.9 billion results, in reality Google’s search results for “door” are quite finite and end at page 63.

With publishers and online commerce sites competing for that small number of top search results, a new business was born: search engine optimization (or SEO). There are many different methods of SEO, but the principal goal is to game the PageRank algorithm in your favor by increasing the number of incoming links to your own page and tuning the anchor text. With sites competing for visitors — and billions in online revenue at stake — PageRank eventually lost this arms race. Today, links and anchor text are no longer useful to determine the most relevant results and, as a result, the importance of PageRank has dramatically decreased.

Search engines have since evolved to use machine learning to rank results. People perform 1.2 trillion searches a year on Google alone — that’s about 3 billion a day and 40,000 a second. Each search becomes part of this massive query stream as the search engine simultaneously “sees” what billions of people are searching for all over the world. For each search, it offers a range of results and remembers which one you considered most relevant. It then uses these past searches to learn what the average user finds most relevant and applies that knowledge to future searches.

Machine learning has made text search all but obsolete. Search engines can answer 90% or so of searches by looking at previous search terms and results. They no longer search the Web in most cases — they instead search past searches and respond based on the preferred result of previous users.
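As a toy illustration of “searching past searches,” here is a hedged sketch in Rust: a query log that maps each query to the results past users clicked, where answering a query is just a lookup for the most-clicked result. The data structure, the click counts, and the example URLs are invented for the example; real systems use far richer signals and learned models.

```rust
use std::collections::HashMap;

// Return the result that previous users picked most often for this query,
// without ever scanning the text of the Web pages themselves.
fn best_result<'a>(
    click_log: &'a HashMap<String, HashMap<String, u64>>,
    query: &str,
) -> Option<&'a str> {
    click_log.get(query).and_then(|results| {
        results
            .iter()
            .max_by_key(|&(_url, clicks)| *clicks)
            .map(|(url, _clicks)| url.as_str())
    })
}

fn main() {
    // Invented click counts for the query "door".
    let mut door_results = HashMap::new();
    door_results.insert("https://en.wikipedia.org/wiki/Door".to_string(), 9_500u64);
    door_results.insert("https://example.com/shop/doors".to_string(), 1_200u64);

    let mut click_log = HashMap::new();
    click_log.insert("door".to_string(), door_results);

    // Prints the URL past users clicked most often for this query.
    println!("{:?}", best_result(&click_log, "door"));
}
```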

This shift from PageRank to machine learning also changed your role in the process. Without your searches — and your choice of results — a search engine couldn’t learn and provide future answers to others. Every time you use a search engine, the search engine uses you to rank its results on a massive scale. That makes you its most important asset.



David Tenser: User Success – We’re hiring!

Just a quick +1 to Roland’s plug for the Senior Firefox Community Support Lead:

  • Ever loved a piece of software so much that you learned everything you
    could about it and helped others with it?
  • Ever coordinated an online community? Especially one around supporting users?
  • Ever measured and tweaked a website’s content so that more folks could find it and learn from it?

Got 2 out of 3 of the above?

Then work with me (since Firefox works closely with my area: Firefox for Android and, in the future, iOS via cloud services like Sync) and the rest of my colleagues on the fab Mozilla User Success team (especially my fantastic Firefox-savvy colleagues over at User Advocacy).

And as a super extra bonus: like all Mozilla employees, you’ll also work with our fantastic community, AND with Firefox product management, marketing, and engineering.

Take a brief detour and head over to Roland’s blog to get a sense of one of the awesome people you’d get to work closely with in this exciting role (trust me, you’ll want to work with Roland!). After that, I hope you know what to do! :)


Ben Kelly: That Event Is So Fetch

The Service Workers builds have been updated as of yesterday, February 22:

Firefox Service Worker Builds

Notable contributions this week were:

  • Josh Matthews landed Fetch Event support in Nightly. This is important, of course, because without the Fetch Event you cannot actually intercept any network requests with your Service Worker. | bug 1065216
  • Catalin Badea landed more of the Service Worker API in Nightly, including the ability to communicate with the Service Worker using postMessage(). | bug 982726
  • Nikhil Marathe landed some more of his spec implementations to handle unloading documents correctly and to treat activations atomically. | bug 1041340 | bug 1130065
  • Andrea Marchesini landed fixes for Firefox OS issues discovered by the team in Paris. | bug 1133242
  • Jose Antonio Olivera Ortega contributed a work-in-progress patch to force Service Worker scripts to update when dom.serviceWorkers.test.enabled is set. | bug 1134329
  • I landed my implementation of the Fetch Request and Response clone() methods. | bug 1073231

As always, please let us know if you run into any problems. Thank you for testing!