Nicholas Nethercote: Firefox 64-bit for Windows can take advantage of more memory

By default, on Windows, Firefox is a 32-bit application. This means that it is limited to using at most 4 GiB of memory, even on machines that have more than 4 GiB of physical memory (RAM). In fact, depending on the OS configuration, the limit may be as low as 2 GiB.

Now, 2–4 GiB might sound like a lot of memory, but it’s not that unusual for power users to use that much. This includes:

  • users with many (dozens or even hundreds) of tabs open;
  • users with many (dozens) of extensions;
  • users of memory-hungry web sites and web apps; and
  • users who do all of the above!

Furthermore, in practice it’s not possible to totally fill up this available space because fragmentation inevitably occurs. For example, Firefox might need to make a 10 MiB allocation and there might be more than 10 MiB of unused memory, but if that available memory is divided into many pieces all of which are smaller than 10 MiB, then the allocation will fail.

When an allocation does fail, Firefox can sometimes handle it gracefully. But often this isn’t possible, in which case Firefox will abort. Although this is a controlled abort, the effect for the user is basically identical to an uncontrolled crash, and they’ll have to restart Firefox. A significant fraction of Firefox crashes/aborts are due to this problem, known as address space exhaustion.

Fortunately, there is a solution to this problem available to anyone using a 64-bit version of Windows: use a 64-bit version of Firefox. Now, 64-bit applications typically use more memory than 32-bit applications. This is because pointers, a common data type, are twice as big; a rough estimate for 64-bit Firefox is that it might use 25% more memory. However, 64-bit applications also have a much larger address space, which means they can access vast amounts of physical memory, and address space exhaustion is all but impossible. (In this way, switching from a 32-bit version of an application to a 64-bit version is the closest you can get to downloading more RAM!)
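
To see the pointer-size difference concretely, here is a trivial check (an illustrative aside, not from the original post; any C++ compiler will do):

#include <cstdio>

int main() {
  // Prints 4 when compiled as a 32-bit program and 8 as a 64-bit one;
  // pointer-heavy data structures grow accordingly.
  std::printf("sizeof(void*) = %zu\n", sizeof(void*));
  return 0;
}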

Therefore, if you have a machine with 4 GiB or less of RAM, switching to 64-bit Firefox probably won’t help. But if you have 8 GiB or more, switching to 64-bit Firefox probably will help the memory usage situation.

Official 64-bit versions of Firefox have been available since December 2015. If the above discussion has interested you, please try them out. But note the following caveats.

  • Flash and Silverlight are the only supported 64-bit plugins.
  • There are some Flash content regressions due to our NPAPI sandbox (for content that uses advanced features like GPU acceleration or microphone APIs).

On the flip side, as well as avoiding address space exhaustion problems, a security feature known as ASLR works much better in 64-bit applications than in 32-bit applications, so 64-bit Firefox will be slightly more secure.

Work is ongoing to fix or minimize the above caveats, and it is expected that 64-bit Firefox will be rolled out to increasing numbers of users in the not-too-distant future.

UPDATE: Chris Peterson gave me the following measurements about daily active users on Windows.

  • 66.0% are running 32-bit Firefox on 64-bit Windows. These users could switch to a 64-bit Firefox.
  • 32.3% are running 32-bit Firefox on 32-bit Windows. These users cannot switch to a 64-bit Firefox.
  • 1.7% are running 64-bit Firefox already.

UPDATE 2: Also from Chris Peterson, here are links to 64-bit builds for all the channels:

Daniel Stenberg: A workshop Monday

I decided I’d show up a little early at the Sheraton, as I’ve been handling the interactions with the hotel locally here in Stockholm, where the workshop will run for the coming three days. Things were on track, if we ignore how they got the name of the workshop wrong on the info screens in the lobby, which instead said “Haxx Ab”…

Mark welcomed us with a quick overview of what we’re here for and a quick run-through of the rough planning for the days. Our schedule is deliberately loose and open to allow for changes and adaptations as we go along.

Patrick talked about the 1 1/2 years of HTTP/2 work in Firefox so far, and we discussed a lot around the numbers and telemetry: what do they mean, why do they look like this, etc. HTTP/2 is now at 44% of all HTTPS requests, and connections using HTTP/2 are used for more than 8 requests at the median (compared to slightly over 1 in the HTTP/1 case). What’s almost not used at all? HTTP/2 server push, Alt-Svc and HTTP 308 responses. Patrick’s presentation triggered a lot of good discussions. His slides are here.

RTT distribution for Firefox running on desktop and mobile, from Patrick’s slide set.

The lunch was lovely.

Vlad then continued with a talk about experiences from implementing and providing server push at Cloudflare. The talk and the associated discussions helped emphasize that we need better guidance for users on how to use server push, and that there might be reasons for browsers to change how pushed resources are stored in the current “secondary cache”. Discussions around how to access pushed resources and get information about pushes from JavaScript were also briefly touched on.

After a break with some sweets and coffee, Kazuho continued by describing cache digests and how this concept can help servers do better or more accurate server pushes. Then it was back to more discussions around push: what it actually solves, how much complexity it is worth, and so on. I thought I could sense hesitation in the room on whether this is really something to proceed with.

We intend to have a set of lightning talks after lunch each day, and we already have twelve such suggested talks listed in the workshop wiki, but the discussions were so lively and extensive that we missed them today and even had to postpone the last talk of today until tomorrow. I can already sense how these three days will not be enough for us to cover everything we have listed and planned…

We ended the evening with a great dinner sponsored by Mozilla. I’d say it was a great first day. I’m looking forward to day 2!

Air Mozilla: Mozilla Weekly Project Meeting, 25 Jul 2016

The Monday Project Meeting.

The Mozilla Blog: Susan Chen Promoted to Vice President of Business Development

I’m excited to announce that Susan Chen has been appointed Vice President of Business Development at Mozilla, a new role we are creating to recognize her achievements.

Susan joined Mozilla in 2011 as Head of Strategic Development. During her five years at Mozilla, Susan has worked with the Mozilla team to conceive and execute multiple complex negotiations, and has concluded hundreds of millions of dollars in revenue and partnership deals for Mozilla products and services.

As Vice President of Business Development, Susan is now responsible for planning and executing major business deals and partnerships for Mozilla across its product lines including search, commerce, content, communications, mobile and connected devices. She is also in charge of managing the business development team working across the globe.

We are pleased to recognize Susan’s achievements and expanded scope with the title of Vice President. Please join me in welcoming Susan to the leadership team at Mozilla!

Background:

Susan’s bio & Mozillians profile

LinkedIn profile

High-resolution photo

Mitchell Baker: Update on the United Nations High Level Panel on Women’s Economic Empowerment

It is critical to ensure that women are active participants in digital life. Without this we won’t reach full economic empowerment. This is the perspective and focus I bring to the UN High Level Panel for Women’s Economic Empowerment (HLP), which met last week in Costa Rica, hosted by President Luis Guillermo Solis.

(Here is the previous blog post on this topic.)

Many thanks to President Solis, who led with both commitment and authenticity. Here he shows his prowess with selfie-taking:


Members of the High Level Panel – From Left to Right: Tina Fordham, Citi Research; Laura Tyson, UC Berkeley; Alejandra Mora, Government of Costa Rica; Ahmadou Ba, AllAfrica Global Media; Renana Jhabvala, WIEGO; Elizabeth Vazquez, WeConnect; Jeni Klugman, Harvard Business School; Mitchell Baker, Mozilla; Gwen Hines, DFID-UK; Phumzile Mlambo, UN Women; José Manuel Salazar Xirinachs, International Labour Organization; Simona Scarpaleggia, Ikea; Winnie Byanyima, Oxfam; Fiza Farhan, Buksh Foundation; Karen Grown, World Bank; Margo Thomas, HLP Secretariat.

Photo Credit: Luis Guillermo Solis, President, Costa Rica

In the meeting we learned about actions the Panel members have initiated, and provided feedback and guidelines on the first draft of the HLP report. The goal for the report is to be as concrete as possible in describing actions in women’s economic empowerment which have shown positive results so that interested parties could adopt these successful practices. An initial version of the report will be released in September, with the final report in 2017.  In the meantime, Panel members are also initiating, piloting and sometimes scaling activities that improve women’s economic empowerment.

As Phumzile Mlambo-Ngcuka, the Executive Director of UN Women often says, the best report will be one that points to projects that are known to work. One such example is a set of new initiatives, interventions and commitments to be undertaken in the Punjab, announced by the Panel Member and Deputy from Pakistan, Fiza Farhan and Mahwish Javaid.

Mozilla, too, is engaged in a set of new initiatives. We’ve been tuning our Mozilla Clubs program (ongoing events to teach Web Literacy) to be more interesting and accessible to women and girls. We’ve entered into a partnership with UN Women to deepen this work, and the pilots are underway. If you’d like to participate, consider applying your organizational, educational, or web skills to start a Mozilla Club for women and girls in your area. Here are examples of existing clubs for women in Nairobi and Cape Town.

Mozilla is also involved in the theme of digital inclusion as a cross-cutting, overarching theme of the HLP report. This is where Anar Simpson, my official Deputy for the Panel, focuses her work. We are liaising with companies in Silicon Valley who are working in the fields of connectivity and distribution of access, to explore if, when, and how their projects can empower women economically. We’re looking to gather everything they have learned about what has been effective. In addition to this information/content gathering task, Mozilla is working with the Panel on the advocacy and publicity efforts of the report.

I joined the Panel because I see it as a valuable mechanism for driving both visibility and action on this topic. Women’s economic empowerment combines social justice, economic growth benefits and the chance for more stability in a fragile world. I look forward to meeting with the UN Panel again in September and reporting back on practical and research-driven initiatives.

QMO: Firefox 49.0 Aurora Testday Results

Hello mozillians!

Last week on Friday (July 22nd), we held another successful event – Firefox 49.0 Aurora Testday.

Thank you all for helping us make Mozilla a better place – Moin Shaikh, Georgiu Ciprian, Marko Andrejić, Dineesh Mv, Iryna Thompson.

From Bangladesh: Rezaul Huque Nayeem, Nazir Ahmed Sabbir, Hossain Al Ikram, Azmina Akter Papeya, Md. Rahimul Islam, Forhad Hossain, Akash, Roman Syed, Niaz Bhuiyan Asif, Saddam Hossain, Sajedul Islam, Md.Majedul islam, Fahim, Abdullah Al Jaber Hridoy, Raihan Ali, Md.Ehsanul Hassan, Sauradeep Dutta, Mohammad Maruf Islam, Kazi Nuzhat Tasnem, Maruf Rahman, Fatin Shahazad, Tanvir Rahman, Rakib Rahman, Tazin Ahmed, Shanjida Tahura Himi, Anika Nawar and Md. Nazmus Shakib (Robin).

From India: Nilima, Paarttipaabhalaji, Ashly Rose Mathew M, Selva Makilan R, Prasanth P, Md Shahbaz Alam and Bhuvana Meenakshi.K

A big thank you goes out to all our active moderators too!

Results:

I strongly advise every one of you to reach out to us, the moderators, via #qa during the events whenever you encounter any kind of failure. Keep up the great work!

Keep an eye on QMO for upcoming events! 😉

Mike Hommey: Announcing git-cinnabar 0.4.0 beta 2

Git-cinnabar is a git remote helper to interact with Mercurial repositories. It allows you to clone, pull and push from/to Mercurial remote repositories, using git.

Get it on github.

These release notes are also available on the git-cinnabar wiki.

What’s new since 0.4.0b1?

  • Some more bug fixes.
  • Updated git to 2.9.2 for cinnabar-helper.
  • Now supports `git push --dry-run`.
  • Added a new `git cinnabar fetch` command to fetch a specific revision that is not necessarily a head.
  • Some improvements to the experimental native wire protocol support.

The Servo Blog: This Week In Servo 72

In the last week, we landed 79 PRs in the Servo organization’s repositories.

The team working on WebBluetooth in Servo has launched a new site! It has two demo videos and very detailed instructions and examples on how to use standards-based Web Platform APIs to connect to Bluetooth devices.

We’d like to especially thank UK992 this week for their AMAZING work helping us out with Windows support! We are really eager to get the Windows development experience from Servo up to par with that of other platforms, and UK992’s work has been essential.

Connor Brewster (cbrewster) has also been on an incredible tear, working with Alan Jeffrey, on figuring out how session history is supposed to work, clarifying the standard and landing some great fixes into Servo.

Planning and Status

Our overall roadmap is available online and now includes the initial Q3 plans. From now on, we plan to include the quarterly plan with a high-level breakdown in the roadmap page.

This week’s status updates are here.

Notable Additions

  • UK992 added support for tinyfiledialogs on Windows, so that we can prompt there, too!
  • UK992 uncovered the MINGW magic to get AppVeyor building again after the GCC 6 bustage
  • jdm made it possible to generate the DOM bindings in parallel, speeding up some incremental builds by nearly a minute!
  • aneesh restored better error logging to our BuildBot configuration and provisioning steps
  • canaltinova fixed the reference test for text alignment in input elements
  • larsberg fixed up some issues preventing the Windows builder from publishing nightlies
  • upsuper added support for generating bindings for MSVC
  • heycam added FFI glue for 1-arg CSS supports() in Stylo
  • manish added Stylo bindings for calc()
  • johannhof ensured we only expose Worker interfaces to workers
  • cbrewster implemented joint session history
  • shinglyu optimized dirty flags for viewport percentage units based on viewport changes
  • stshine blockified some children of flex containers, continuing the work to flesh out flexbox support
  • creativcoder integrated a service worker manager thread
  • emilio added support for testing animations programmatically
  • izgzhen fixed Blob type-strings
  • ajeffrey integrated logging with crash reporting.
  • malisas allowed using ByteString types in WebIDL unions
  • emilio ensured that transitions and animations can be tested programmatically

New Contributors

Interested in helping build a web browser? Take a look at our curated list of issues that are good for new contributors!

Screenshot

See the aforementioned demos from the team at the University of Szeged.

The Rust Programming Language Blog: The 2016 Rust Conference Lineup

The Rust Community is holding three major conferences in the near future, and we wanted to give a shout-out to each, now that all of the lineups are fully announced.

Sept 9-10: RustConf

RustConf is a two-day event held in Portland, OR, USA on September 9-10. The first day offers tutorials on Rust given directly by members of the Rust core team, ranging from absolute basics to advanced ownership techniques. The second day is the main event, with talks at every level of expertise, covering core Rust concepts and design patterns, production use of Rust, reflections on the RFC process, and systems programming in general. We offer scholarships for those who would otherwise find it difficult to attend. Join us in lovely Portland and hear about the latest developments in the Rust world!

Follow us on Twitter @rustconf.

Sept 17-18: RustFest

Join us at RustFest, Europe’s first conference dedicated to the Rust programming language. Over the weekend of 17-18th September we’ll gather in Berlin to talk Rust, its ecosystem and community. All day Saturday will have talks, with topics ranging from hardware and testing, through concurrency and disassemblers, all the way to important topics like community, learning and empathy. Sunday has a focus on learning and connecting, either at one of the many workshops we are hosting or in the central meet-n-greet-n-hack area provided.

Thanks to the many awesome sponsors, we are able to offer affordable tickets, which go on sale this week, with an optional combo including both ViewSource and RustFest. Keep an eye on http://www.rustfest.eu/, get all the updates on the blog, and don’t forget to follow us on Twitter @rustfest.

Oct 27-28: Rust Belt Rust

Rust Belt Rust is a two-day conference in Pittsburgh, PA, USA on October 27 and 28, 2016, and people with any level of Rust experience are encouraged to attend. The first day of the conference has a wide variety of interactive workshops to choose from, covering topics like an introduction to Rust, testing, code design, and implementing operating systems in Rust. The second day is a single track of talks covering topics like documentation, using Rust with other languages, and efficient data structures. Both days are included in the $150 ticket! Come learn Rust in the Rust Belt, and see how we’ve been transforming the region from an economy built on manufacturing to an economy built on technology.

Follow us on Twitter @rustbeltrust.

Karl Dubost: [worklog] Edition 028. appearance on Enoshima

Each time I "set up my office" (moved to a new place for the next 3 months, construction work on the main home), I'm mesmerized by how easy it is to set up a work environment. Laptop, wifi and electricity are the main things needed to start. A table and a chair are useful but non essential. And eventually an additional screen to have more working surface to be comfortable. Basically in 5 minutes we are ready to work. And that's one chance of our area of work. How long does it take before you can start working?

Working with a view of Enoshima for the next 3 months. Tune of the week: Omoide no Enoshima.

Webcompat Life

Progress this week:

Today: 2016-07-25T06:21:33.702789
296 open issues
----------------------
needsinfo       5
needsdiagnosis  71
needscontact    14
contactready    34
sitewait        164
----------------------

You are welcome to participate

Webcompat issues

(a selection of some of the bugs worked on this week).

  • From time to time, people report usability issues which are more or less cross-browser. They basically hinder every browser. It's out of scope for the Web Compatibility project, but it hints at something interesting about browsers and users' perception. Often, I wonder if browsers should do more than just support the legacy Web site (aka it displays), and also adjust the content to a more palatable experience, somewhat the way Reader mode does on user request: a beautify button for legacy content.
  • Google Image Search and black arrow. A kind of cubist arrow for Firefox. Modern design?
  • I opened an issue on Tracking Protection and Webcompat. Adam pointed me this morning to a project on moving tracking protection to a Web extension.
  • Because we have more issues on Firefox Desktop and Firefox Android, we focus our energy there, so we need someone in the community to focus on Firefox OS issues.
  • When I test Web sites on Firefox Android, I usually do it through remote debugging in WebIDE, and instead of typing a long URI on the device itself, I go to the console and paste the address I want: window.location = 'http://example.com/long/path/with/strange/356374389dgjlkj36s'.
  • Starting to test a bit more in depth what appearance means in different browsers. Specifically to determine what is needed for Web compatibility and/or Web standards.
  • A WONTFIX which is good news: Bug 1231829 - Implement -webkit-border-image quirks for compatibility. It means it has been fixed by the site owners.
  • On this Find my phone issue on Google search, the wrong order of CSS properties creates a layout issue where the XUL -moz-box was finally interpreted, but it triggered a good question from Xidorn. Should we expose the XUL display values to Web content? Add to that that some properties in the CSS never existed.
  • Hangouts doesn't work the same way in Chrome and Firefox. There's something happening, either on the Chrome side or on the servers, which creates the right path of actions. I haven't determined what yet.

WebCompat.com dev

Reading List

Follow Your Nose

TODO

  • Document how to write tests on webcompat.com using test fixtures.
  • ToWrite: Amazon prefetching resources with <object> for Firefox only.

Otsukare!

Patrick Cloke: Windows Mobile (or Windows Phone) and FastMail

I’ve been a big fan of Windows Phone (now Windows Mobile) for a while and have had a few phones across versions 7, 8, and now 10. A while ago I switched to FastMail as my e-mail provider [1], but I was still stuck using Google as my calendar provider (and my contacts were on my Windows Live account). I wanted to move all of these onto a single account, but Windows 10 Mobile only officially supports e-mail from arbitrary providers. Calendar and contacts are limited to a few special providers.

Below I’ve outlined how I’ve gotten all three services (email, contacts, and calendar) from my FastMail account onto my Windows Mobile device.

Email

Email is the easy one; FastMail even has a guide to setting up email on Windows Phone. That guide does not handle sending emails with a custom domain name; if you don’t have that situation, probably just use the FastMail guide.

  1. Add a new account, choose “other account”.
  2. Type in your email address (e.g. you@yourcustomdomain.com) and password.
  3. It will complain about being unable to find proper account settings. Click “try again”.
  4. It will complain again, but not give you an option for “advanced”, click it.
  5. Choose “Internet email account”.
  6. Enter any “Account name” and “Your name” that you want.
  7. Choose “IMAP4” as the “Account type”.
  8. Change the incoming mail server to mail.messagingengine.com.
  9. Change the username to your FastMail username (e.g. you@fastmail.com).
  10. Change the outgoing mailserver to mail.messagingengine.com.

Now when you send email it should show up properly as you@yourcustomdomain.com, but be sent via FastMail’s servers!

Contacts

FastMail added support for CardDAV last year and Windows Phone added support back in 2013, so why is this hard? Well…turns out that there isn’t a way to make a CardDAV account on Windows Mobile, it’s just used for certain account types. Luckily, there is a forum post about hooking up CardDAV via a hack. Steps are reproduced below:

  1. Add a new account, choose “iCloud”.
  2. Type in your FastMail username, but add +Default before the @ (e.g. you+Default@fastmail.com), note that this isn’t anything special, just the scheme FastMail uses for CardDAV usernames.
  3. Put in your password. [2]
  4. Click “sign in”, it will fail.
  5. Go back into the account settings (click “Manage”) and modify the advanced settings (“Change mailbox sync settings”). Choose manually for when to download new email. Disable syncing of email and calendar.
  6. Go to “Advanced account settings”. Change the “Incoming email server”, “Outgoing (SMTP) email server” and “Calendar server (CalDAV)” to localhost. [3]
  7. Change “Contacts server (CardDAV)” to carddav.messagingengine.com:443/dav/addressbooks/user/you@fastmail.com/Default, changing you@fastmail.com to your FastMail username.
  8. Click “Done”!

Your contacts should eventually appear in your address book! I couldn’t figure out a way to force my phone to sync contacts, but they appeared fairly quickly.

Calendar

FastMail added support for CalDAV back in the beginning of 2014 [4]. These steps are almost identical to the Contacts section above, but using information from the guide for setting up Calendar.app.

  1. Add a new account, choose “iCloud”.
  2. Type in your FastMail username (e.g. you@fastmail.com).
  3. Put in your password.
  4. Click “sign in”, it will fail.
  5. Go back into the account settings (click “Manage”) and modify the advanced settings (“Change mailbox sync settings”). Choose manually for when to download new email. Disable syncing of email and contacts.
  6. Go to “Advanced account settings”. Change the “Incoming email server”, “Outgoing (SMTP) email server” and “Contacts server (CardDAV)” to localhost.
  7. Change “Calendar server (CalDAV)” to caldav.messagingengine.com/dav/principals/user/you@fastmail.com/, changing you@fastmail.com to your FastMail username.
  8. Click “Done”!

My default calendar appeared very quickly, but additional calendars took a bit to sync onto my phone.

Good luck and let me know if there are any errors, easier ways, or other tricks to getting the most of FastMail on a Windows Mobile device!

[1]There are a variety of reasons why I switched: I had recently bought a domain name to get better control over my online presence (email, website, etc.). I was also tired of my email being used to serve me advertisements, and of various other issues with free webmail. I highly recommend FastMail; they have awesome security and privacy policies. They also have amazing support, give back (a lot) to open source, and a whole slew of other things.
[2]I put a dummy one in and then changed it after I updated the servers in step 6. This was to not send my password to iCloud servers. The password is hopefully encrypted and hashed, but I don’t know for sure.
[3]We’re just ensuring that our credentials for these other services will not hit Apple servers for any reason.
[4]That article talks about beta.fastmail.fm, but this is now available on the production FastMail servers too!

Daniel Stenberg: HTTP Workshop 2016, day -1

The HTTP Workshop 2016 will take place in Stockholm starting tomorrow, Monday, as I’ve mentioned before. Today we’ll start off slowly by having a few pre-workshop drinks and saying hello to old and new friends.

I did a casual count, and out of the 40 attendees coming, I believe slightly less than half are newcomers that didn’t attend the workshop last year. We’ll see browser people, more independent HTTP implementers, CDN representatives, server and intermediary developers, as well as some friends from large HTTP operators/sites. I personally view my attendance as being primarily with my curl hat on rather than my Firefox one. Firmly standing in the client side trenches anyway.

Visitors to Stockholm these days are also lucky enough to arrive during possibly the best weather we get here: the warmest period of the summer so far, with lots of sun and really long, bright summer days.

News this year includes the @http_workshop twitter account. If you have questions or concerns for HTTP workshoppers, do send them that way and they might get addressed or at least noticed.

I’ll try to take notes and post summaries of each workshop day here. Of course I will fully respect our conference rules about what to reveal or not.


Cameron Kaiser: TenFourFox 45 is more of a thing

Since the initial liftoff of TenFourFox 45 earlier this week, much progress has been made and this blog post, ceremonially, is being typed in it. I ticked off most of the basic tests including printing, YouTube, social media will eat itself, webcam support, HTML5 audio/video, canvas animations, font support, forums, maps, Gmail, blogging and the major UI components and fixed a number of critical bugs and assertions, and now the browser is basically usable and able to function usefully. Still left to do is collecting the TenFourFox-specific strings into their own DTD for the localizers to translate (which will include the future features I intend to add during the feature parity phase) and porting our MP3 audio support forward, and then once that's working compiling some opt builds and testing the G5 JavaScript JIT pathways and the AltiVec acceleration code. After that it'll finally be time for the first beta once I'm confident enough to start dogfooding it myself. We're a little behind on the beta cycle, but I'm hoping to have 45 beta 1 ready shortly after the release of 38.10 on August 2nd (the final 38 release, barring a serious showstopper with 45), a second beta around the three week mark, and 45 final ready for general use by the next scheduled release on September 13th.

A couple folks have asked if there will still be a G3 version and I am pleased to announce the answer will very likely be yes; the JavaScript JIT in 45 does not mandate SIMD features in the host CPU, so I don't see any technical reason why not (for that matter, the debug build I'm typing this on isn't AltiVec accelerated either). Still, if you're bravely rocking a Yosemite in 2016 you might want to think about a G4 for that ZIF socket.

I've been slack on some other general interest posts such as the Power Mac security rollup and the state of the user base, but I intend to write them when 45 gets a little more stabilized since there have been some recurring requests from a few of you. Watch for those soon also.

Support.Mozilla.Org: SUMO Show & Tell: How I Got Involved With Mozilla

Hey SUMO Nation!

During the Work Week in London we had the utmost pleasure of hanging out with some of you (we’re still a bit sad about not everyone making it… and that we couldn’t organize a meetup for everyone contributing to everything around Mozilla).

Among the numerous sessions, working groups, presentations, and demos we also had a SUMO Show & Tell – a story-telling session where everyone could showcase one cool thing they think everyone should know about.

I have asked those who presented to help me share their awesome stories with everyone else – and here you go, with the second one presented by Andrew, a jack-of-all-trades and Bugzilla tamer.

Take a look below and relive the origin story of a great Mozillian – someone just like you!

It all started… with an issue that I had with Firefox on my desktop computer running Windows XP, back in 2011. Firefox wouldn’t stop crashing! I then discovered the support site for Firefox. There I found help with my issue through support articles, and at the same time, I was also intrigued by the ability to help other users through the very same site as well.

As I looked into the available opportunities to contribute to the support team, I landed upon live chat. Live chat was a 1-on-1 chat to help out users with the issues they had. Unfortunately, after I joined the team, live chat was placed on hiatus. It was recommended that I move on to the forums and knowledge base, because rather than just helping one user, with only them benefiting, on the forums I could help many more people through a single suggestion. For some this sat well, and with others it didn’t, because we weren’t taking care of the user personally (like on the chat).

It definitely took some time for me to adjust to this new setting, as things were (and are) handled differently on the forum and on the knowledge base. Users on the forum sometimes do respond immediately, but most of the time they respond later, and some actually don’t respond at all. This is one of the differences between helping out through live chat and through the forums.

The knowledge base on the other hand, can be really complex. There is markup being used to present text in a different way to different users. We must be as clear and precise as possible when writing the article, since although we may know really well what we are talking about, the article reader (usually a user in need of helpful information) may not. It is definitely challenging for some Mozillians to get involved with writing, but once you do, you get the hang of it and truly enjoy it.

From there on, I kept contributing to the forum and knowledge base, but I also went to find out how I could contribute to other areas of Mozilla. I landed upon triaging bugs within Mozilla sites thanks to the help of Liz Henry and Tyler Downer. Furthermore, as Firefox OS rolled out, I started to provide support to the users, write more articles and file bugs in regards to the OS.

As things moved forward so did life – at the moment I am contributing through the Social Support team. Contributing through Social helps our users on social media realise that we are listening to them and that their comments and woes are not falling on deaf ears. We respond to all types of concerns, be they praises or complaints. Helping users on Twitter while being restricted to 140 characters is difficult, whereas on Facebook we can provide a more detailed explanation and response. With Social Support, a single response from us sometimes reaches only a single person – other times it can reach thousands through re-sharing.

Social media makes it easy to identify issues, crises, and hot topics – it is where people nowadays go to seek assistance, rant, and share their experiences. Also, as posts and tweets can spread easily on social media, it is a double-edged sword: if something positive is spreading, we hope it spreads more. However, if something negative is spreading, we must contain it, and identify and address the root cause of the issue. The bottom line is: we must help our users while keeping everything in balance and being constantly vigilant.

In 2013, I was very thankful that I was able to attend the Summit that was held in 3 places across the world. I was invited to Toronto, where I held a session called “What does ‘Mozillian’ mean?” In that session, we defined what the term “Mozillian” meant, who was included, not included, and what roles and capabilities were necessary to classify an individual as a Mozillian. At the end of the session, we touched base via email to finalize our thoughts and gather the necessary information to pass along to others. Although we made some progress, defining who a Mozillian is, who can (or can’t) be one, and setting specific criteria is somewhat impossible. We must be accepting of those who come and go, those with different backgrounds, personal preferences regarding getting things done, and (sometimes highly) different opinions. All that said, we are a huge family – a huge Mozilla family.

Thank you Andrew for sharing your story with us. I personally appreciate your relaxed and flexible perspective on (sometimes inevitable) changes and challenges we all face when trying to make Mozilla work for the users of the web.

Here’s to many more great chances for you to rock the (helpful, but not only) web with Mozilla and others!

David Burns: WebDriver F2F - July 2016

Last week saw the latest WebDriver F2F to work on the specification. We held the meeting at the Microsoft campus in Redmond, Washington.

The agenda for the meeting was placed, as usual, on the W3 Wiki. We had quite a lot to discuss and, as always, it was a very productive meeting.

The meeting notes are available for Wednesday and Thursday. The most notable items are:

  • Finalising Actions in the specification
  • newSession
  • Certificate handling on navigation
  • Specification tests

We also welcomed Apple to their first WG meeting. You may have missed it, but there is going to be a Safari Driver built into macOS.

Honza Bambas: Illusion of atomic reference counting

Most people believe that having an atomic reference counter makes it safe to use RefPtr on multiple threads without any further synchronization.  The opposite may be true, though!

Imagine some simple code using our commonly used helper classes: RefPtr<>, and an object Type with a ThreadSafeAutoRefCnt reference counter and standard AddRef and Release implementations.

Sounds safe, but there is a glitch most people may not realize.  See an example where one piece of code is doing this, no additional locks involved:

RefPtr<Type> local = mMember; // mMember is RefPtr<Type>, holding an object

And another piece of code then, on a different thread presumably:

mMember = new Type(); // mMember's value is rewritten with a new object

Usually, people believe this is perfectly safe.  But it’s far from it.

Just break this down to the actual atomic operations and put the two threads side by side:

Thread 1:

local.value = mMember.value;
/* context switch to Thread 2 */

Thread 2:

Type* temporary = new Type();
temporary->AddRef();
Type* old = mMember.value;
mMember.value = temporary;
old->Release();
/* context switch back to Thread 1 */

Thread 1:

local.value->AddRef();

The same applies to clearing a member (or a global, while we are at it) while some other thread may try to grab a reference to it:

RefPtr<Type> service = sService;
if (!service) {
  return; // service being null is our 'after shutdown' flag
}

And another thread doing, usually during shutdown:

sService = nullptr; // while sService was holding an object

And here is what actually happens:

Thread 1:

local.value = sService.value;
/* context switch to Thread 2 */

Thread 2:

Type* old = sService.value;
sService.value = nullptr;
old->Release();
/* context switch back to Thread 1 */

Thread 1:

local.value->AddRef();

And where is the problem?  Clearly, if the Release() call on the second thread is the last one on the object, the AddRef() on the first thread will do its job on a dying or already dead object.  The only correct way is to have both the in and out assignments protected by a mutex, or to ensure that nobody can be trying to grab a reference from a globally accessed RefPtr when it’s being finally released or re-assigned. The latter may not always be easy or even possible.
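
For illustration, here is a minimal sketch of the mutex-protected approach, assuming Mozilla’s Mutex/MutexAutoLock helpers and the usual RefPtr/already_AddRefed machinery (an outline, not drop-in code):

// Sketch: a shared RefPtr where both the read-and-AddRef and the
// swap-and-Release happen under one lock.
class Holder {
  Mutex mLock{"Holder::mLock"};
  RefPtr<Type> mMember;

public:
  already_AddRefed<Type> Get() {
    MutexAutoLock lock(mLock);
    RefPtr<Type> local = mMember;  // AddRef() runs while locked
    return local.forget();
  }

  void Set(already_AddRefed<Type> aNew) {
    RefPtr<Type> old;
    {
      MutexAutoLock lock(mLock);
      old = mMember.forget();
      mMember = aNew;
    }
    // |old| is released here, outside the lock, so a potentially
    // expensive destructor never runs while the mutex is held.
  }
};

Because both the read-plus-AddRef and the swap-plus-Release happen under the same lock, no thread can observe the raw pointer in the window between the store and the Release().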

Anyway, if somebody has a suggestion for how to solve this universally without using an additional lock, I would be really interested!


Gervase Markham: Samsung’s L-ish Model Numbers

A slow hand clap for Samsung, who have managed to create versions of the S4 Mini phone with model numbers (among others):

  • GT-i9195
  • GT-i9195L (big-ell)
  • GT-i9195i (small-eye)
  • GT-i9195l (small-ell)

And of course, the small-ell variant, as well as being case-confusable with the big-ell variant and visually confusable with the small-eye variant if it’s written with a capital I as, say, here, is in fact an entirely different phone with a different CPU and doesn’t support the same aftermarket firmware images that all of the other variants do.

See this post for the terrible details.

Cameron Kaiser: TenFourFox 45 is a thing

The browser starts. Lots of problems but it boots. More later.

Armen Zambrano: Mozci and pulse_actions contribution opportunities

We've recently finished a season of feature development adding TaskCluster support to add new jobs to Treeherder on pulse_actions.

I'm now looking at what optimizations or features are left to complete. If you would like to contribute feel free to let me know.

Here's some highlighted work (based on pulse_actions issues and bugs):

  • This will help us save money in Heroku, since using Buildapi + buildjson files is memory hungry and requires us to use bigger Heroku nodes.
  • This is important to help us change the behaviour of the Heroku app without having to commit any code. I've used this in the past to modify the logging level when debugging an issue. It is also useful if we want to have different pipelines in Heroku.
  • Having Heroku pipelines helps us test different versions of the software. This is useful if we want to have a version running from 'master' against the staging version of Treeherder. It would also help contributors to have a version of their pull requests running live.
  • We don't have any tests running. We need to determine how to run a minimum set of tests to have some confidence in the product. This needs integration tests of Pulse messages.
  • The comment in the bug is rather accurate, and it shows that there are many small things that need fixing.
  • Manual backfilling uses Buildapi to schedule jobs. If we switched to scheduling via TaskCluster/Buildbot-bridge we would get better results, since we can guarantee proper scheduling of a build + associated dependent jobs. Buildapi does not give us this guarantee. This is mainly useful when backfilling PGO test and talos jobs.

If instead you're interested in contributing to mozci, you can have a look at the issues.


This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

Support.Mozilla.Org: What’s Up with SUMO – 21st July

Hello, SUMO Nation!

Chances are you have noticed that we had some weird temporal issues, possibly caused by a glitch in the spacetime continuum. I don’t think we can pin the blame on the latest incarnation of Dr Who, but you never know… Let’s see what the past of the future brings then, shall we?

Welcome, new contributors!

If you just joined us, don’t hesitate – come over and say “hi” in the forums!

Contributors of the week

Don’t forget that if you are new to SUMO and someone helped you get started in a nice way you can nominate them for the Buddy of the Month!

Most recent SUMO Community meeting

The next SUMO Community meeting

  • …is happening on the 27th of July!
  • If you want to add a discussion topic to the upcoming meeting agenda:
    • Start a thread in the Community Forums, so that everyone in the community can see what will be discussed and voice their opinion here before Wednesday (this will make it easier to have an efficient meeting).
    • Please do so as soon as you can before the meeting, so that people have time to read, think, and reply (and also add it to the agenda).
    • If you can, please attend the meeting in person (or via IRC), so we can follow up on your discussion topic during the meeting with your feedback.

Community

Social

Support Forum

Knowledge Base & L10n

  • If you’re an active localizer in one of the top 20+ locales, expect a list of high priority articles coming your way within the next 24 hours. Please make sure that they are localized as soon as possible – our users rely on your awesomeness!
  • Final reminder: remember the discussion about the frequency & necessity of KB updates and l10n notifications? We’re trying to address this for KB editors and localizers alike. Give us your feedback!
  • Reminder: L10n hackathons everywhere! Find your people and get organized! If you have questions about joining, contact your global locale team.

Firefox

  • for Android
    • Version 48 is still on track – release in early August.
  • for Desktop
    • Version 48 is still on track – release in early August.

Now that we’re safely out of the dangerous vortex of a spacetime continuum loop, I can only wish you a great weekend. Take it easy and keep rocking the helpful web!

Mozilla Addons Blog: New WebExtensions Guides and How-tos on MDN

The official launch of WebExtensions is happening in Firefox 48, but much of what you need is already supported in Firefox and AMO (addons.mozilla.org). The best place to get started with WebExtensions is MDN, where you can find a trove of helpful information. I’d like to highlight a couple of recent additions that you might find useful:

Thank you to Will Bamberg for doing the bulk of this work. Remember that MDN is a community wiki, so anyone can help!

Air Mozilla: Web QA Team Meeting, 21 Jul 2016

They say a Mozilla Web QA team member is the most fearless creature in the world. They say their jaws are powerful enough to crush...

Air Mozilla: Reps weekly, 21 Jul 2016

This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

Mozilla Reps Community: Rep of the Month – June 2016

Please join us in congratulating Alex Lakatos as Rep of the Month for June 2016!

Alex is a Mozilla Rep based in London, Great Britain, originally from Romania. He is also a Mozilla TechSpeaker, giving talks all around Europe.

In the last 2 months Alex held several technical talks all over Europe (CodeCamp Cluj, OSCAL in Albania, DevTalks in Bucharest and DevSum in Sweden, just to name a few) to promote Mozilla’s mission and the Open Web. With his enthusiasm for tech he is a crucial force in promoting our mission and educating developers all around Europe about new Web technologies. He covered both the transition we are making from Firefox OS to a more innovative area with Connected Devices, and also changes in Firefox and why you should consider the improvements made on the DevTools side.

Please don’t forget to congratulate him on Discourse!

Adam Stevenson: Compatibility Screenshots

I’ve been trying to learn more about how screenshots can help us identify compatibility issues in Firefox. It started with the question:

How does Firefox compare to Chrome in the top 100 websites?

Pretty good, it turns out, on the front pages at least; you can view them yourself [some images are offensive and NSFW]. You can also check out the same list of sites, but comparing Firefox to Firefox with tracking protection. I made some scripts to capture the screens in OSX. They make use of the screencapture utility and another cool little utility called GetWindowID. GetWindowID determines which window ID is associated with a program on the screen, Firefox or Chrome in this case.

Let’s look at how these utilities work together.

Running the GetWindowID command requires that we specify which program we are looking for and which tab is active as well. I’ve made sure that my version of Firefox starts up with the Mozilla Firefox Start Page. If we execute this command:

./GetWindowID "Firefox" "Mozilla Firefox Start Page";

It returns a numeric value like:
1072

This is great because the screencapture utility needs to know which window ID to look at.
So let’s take that same GetWindowID command from earlier and store the result into a variable called ‘gcwindow’.

gcwindow=$(./GetWindowID "Firefox" "Mozilla Firefox Start Page");

Now gcwindow has the value 1072 from before. Let’s feed that into the screencapture utility:

screencapture -t jpg -T 40 -l $gcwindow -x ~/Desktop/screens/firefoxtest/$site.jpg;

When this runs, the program will wait 40 seconds (per the "-T 40" parameter), then take a screenshot of window ID 1072, which is our Firefox instance. The JPG file will be stored in a folder on my desktop under screens/firefoxtest. The rest of the script loops through each website name that we’ve entered: opening a new browser window, opening the website we want to capture, killing the browser process after each screenshot, with some sleep commands in between to give the computer time to execute each step.
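
To make that flow concrete, here is a rough sketch of what such a loop could look like (the site list, window-title handling and timings are illustrative, not the exact contents of the linked scripts):

#!/bin/bash
# Illustrative capture loop: launch Firefox, find its window,
# load the site, screenshot it, then kill the browser.
for site in google.com youtube.com facebook.com; do
  open -a Firefox;                  # starts on the known start page
  sleep 5;                          # give the window time to appear
  gcwindow=$(./GetWindowID "Firefox" "Mozilla Firefox Start Page");
  open -a Firefox "http://$site";   # navigate the window to the site
  # -T 40 gives the page 40 seconds to load before the capture fires
  screencapture -t jpg -T 40 -l $gcwindow -x ~/Desktop/screens/firefoxtest/$site.jpg;
  killall firefox;                  # kill the browser for a clean slate
  sleep 2;
done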

There are some browser preferences and considerations that you will want to be aware of before running these scripts.

Why do all this in OSX? Cause I like to work on a mac, I guess. OK I don’t have a good reason but if you want to make it work on a Linux docker or something cool that’d be super sweet. The other thing to keep in mind is I’m looking at viewport screenshots right now, full page would be nice, but we’ll get there.

So the side-by-side comparison of popular sites is pretty useful, but looking at things is a lot of work. It would be cool if we could automate some or all of that looking, right? Luckily there are image comparison tools that can help with this. I decided to try out Yahoo’s blink-diff tool, which is built using Node.js.

First off, only PNGs are supported with this tool, but that’s easy to change using the screencapture command line tool.

So we use 'screencapture -t png’ instead of 'screencapture -t jpg’.

Let’s go through setting this up for a single test. You’ll need to have Node.js installed first.
We need to create a new folder; the name isn’t important.

mkdir onetime-diff

Then download this JavaScript file from GitHub and put it in that folder. Now let’s initialize our project:

npm init

And just accept all the defaults. Next, let’s install the dependencies:

npm install blink-diff
npm install pngjs-image

Great, it’s ready to run now. The index.js file we downloaded looks for two files in the same folder called firefox.png and chrome.png and will generate a file called output.png. If you need a couple files to test with:

firefox.png
chrome.png

Note that if you provide your own PNG files, you may need to adjust the cropping parameters. I’ve configured the script to work best for Firefox and Chrome screenshots captured on a retina display; if you aren’t using a retina display, divide those numbers by 2. You can see here y:160 and y:144; this is cropping out the top portion of the screenshot, where the browser’s “chrome” is.

cropImageA: { x:0, y:160, width:0, height:0 }, // Firefox
cropImageB: { x:0, y:144, width:0, height:0 }, // Chrome
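
For reference, the index.js presumably looks something like the example in the blink-diff README, with the crop values from above plugged in (a sketch; the actual file may differ):

var BlinkDiff = require('blink-diff');

var diff = new BlinkDiff({
  imageAPath: 'firefox.png',       // first image to compare
  imageBPath: 'chrome.png',        // second image to compare
  imageOutputPath: 'output.png',   // visual diff written here
  cropImageA: { x:0, y:160, width:0, height:0 }, // Firefox
  cropImageB: { x:0, y:144, width:0, height:0 }, // Chrome
  thresholdType: BlinkDiff.THRESHOLD_PERCENT,
  threshold: 0.05                  // illustrative tolerance
});

diff.run(function (error, result) {
  if (error) throw error;
  console.log(diff.hasPassed(result.code) ? 'Passed' : 'Failed');
  console.log('Found ' + result.differences + ' differences.');
});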

Once you’re ready to run the test, execute:

node index.js

After a minute, it should generate an output.png file that looks like this and the script will return a result to the command prompt:

Passed
Found 1116908 differences.

So this is a good start: we have an image comparison program and an automated screenshot utility. To make it more useful I created another script that combines these together. At a high level it works like this:

First site > Screenshot Firefox > Screenshot Chrome > Compare images in background process > Next Site...

It has the same dependencies as before, but now we run it like this:

./start.sh

After giving this a few runs and playing with the settings, I started to see some issues.

  • Advertisements placed in different positions, sizes, style or even amount
  • Regional site redirects
  • Different home page, providing a ‘fresh look’ or they are A/B testing
  • Site surveys or other pop ups
  • Large image sliders
  • Random overlay pop up ads
  • Rotating background images
  • Very slow process when using one computer

We want each site to have a decent amount of time to load; I normally use between 30 and 40 seconds. But that adds up over 1000 or more sites. I decided to hack something basic together to allow multiple computers in my house to split the load. It helps, but it would be much better to have this running on Linux virtual machines or dockers.

So what’s next?

  • More sample runs to find a decent set of parameters for the baseline
  • Identifying in the top 1000 sites, which ones will continue to fail
  • Can we set higher thresholds and still detect when something breaks?
  • Can the tool ignore areas that are constantly changing?
  • Get the results out in the open for others to look at

If any of this interests you and you want to get involved, I’d love to hear from you. Or if you have advice on how to make this better, please reach out as well.

Matjaž Horvat: Improving in-page localization in Pontoon

We’re improving the way in-page localization works in Pontoon by dropping a feature instead of introducing a new one. Translating text on the web page itself using the contentEditable attribute has been turned off.

That means the actual translation (typing) always takes place in the translation editor, which gives you access to all the relevant information you need for the translation.

The sidebar is always visible, allowing you to select strings from the list and then translate them. Additionally, you can still use the Inspector-like tool to select any localizable string on the page, which will then open in the translation editor in the sidebar to be translated.

Translation within the web page has turned out to be suboptimal for various reasons:

  • The original string is not always presented unambiguously, e.g. if it contains markup,
  • Additional string details like comments and file paths are not displayed,
  • Suggestions from history, machinery and other locales are not available,
  • Only the first plural form can be translated,
  • It’s hard to control markup or new lines on various sites if they’re part of the string.

Mozilla Addons Blog: Completing Firefox Accounts on AMO

In February we rolled out Firefox Accounts on addons.mozilla.org (AMO). That first phase created a migration flow from old AMO accounts over to Firefox Accounts. Since then, 84% of developers who have logged in have transitioned over to a Firefox Account.

The next step is to remove the ability to log in using an old AMO account. Once this is complete, the only way to log in to AMO is by using Firefox Accounts.

If you have an old account on AMO and have not gone through the migration flow, you can still access your account if the email you use to log in through Firefox Accounts is the same as the one previously registered on AMO.

We expect that the removal of old logins will be completed in a couple of weeks, unless any unforeseen problems occur.

Frequently asked questions

What happens to the add-ons I develop when I convert to a new Firefox Account?

All the add-ons are accessible to the new Firefox Account.

Why do I want a Firefox Account?

Firefox Accounts is the identity system that is used to synchronize Firefox across multiple devices. Many Firefox products and services will soon begin migrating over, simplifying your sign-in process and making it easier for you to manage all your accounts.

Where do I change my password?

Once you have a Firefox Account, you can go to accounts.firefox.com, sign in, and click on Password.

If you have forgotten your current password:

  1. Go to the AMO login page
  2. Click on I forgot my password
  3. Proceed to reset the password

QMO: Firefox 49.0 Aurora Testday, July 22nd

Hello Mozillians,

Good news! We are having another testday for you 😀 This time we will take a swing at Firefox 49.0 Aurora, this Friday, 22nd of July.  The main focus during the testing will be around Context Menu, PDF Viewer and Browser Customization. Check out the detailed instructions via this etherpad.

No previous testing experience is required, so feel free to join us on #qa IRC channel where our moderators will offer you guidance and answer your questions.

I know this is short notice but we hope you will join us in the process of making Firefox a better browser. See you on Friday!

Dustin J. Mitchell: Recovering from TaskWarrior Corruption

I use TaskWarrior along with TaskWarrior for Android to organize my life. I use FreeCinc to synchronize all of my desktops, VPS, and phone, using a crontask. Most of the time, it works pretty well.

FreeCinc Fail

However, yesterday, all of FreeCinc’s keys expired. There’s a big red warning on the home page instructing users to download new keys. Since my syncs operate on a crontask, I didn’t notice this until I discovered that tasks I remembered modifying in one place did not appear in another. By that time, I had modified tasks everywhere – a few things to buy on my phone, some work stuff on the laptop, some more work stuff on the VPS, and some personal stuff on the desktop.

So, downloading new keys is easy. However, TaskWarrior doesn’t magically take four different sets of tasks and combine them into a single coherent set of tasks, just by syncing to a server. No, in fact, since there are no changes to sync, it does nothing. Just leaves the different sets of tasks in place on different machines. So basically everything I modified in 24 hours, across four machines, was now unsynchronized. And I use this to run my life, so it was probably 100 or so changes.

What Was I Doing Again?

Here’s how I fixed this:

I copied pending.data and completed.data from all four hosts onto a single host. These files are in a pretty simple one-task-per-line format, with a uuid and modification timestamp embedded in each line. The rough approach was to take all of the tasks in all of these files and select the most recent instance of each uuid. There’s a little bit of extra complication to handle whether a task is completed or not. I used the following script to do this calculation:

import re

uuid_re = re.compile(r'uuid:"([^"]*)"')
modified_re = re.compile(r'modified:"([0-9]*)"')
def read(filename):
	with open(filename) as f:
		for l in f:
			uuid = uuid_re.search(l).group(1)
			try:
				modified = modified_re.search(l).group(1)
			except AttributeError:
				modified = 0
			yield uuid, int(modified), l

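# keep only the newest instance of each uuid; never let a pending copy
# replace a completed one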
def add_to(uuid, modified, completed, line, coll):
	if uuid in coll:
		ex_modified, ex_completed, _ = coll[uuid]
		if ex_modified >= modified:
			return
		if ex_completed and not completed:
			return
	coll[uuid] = (modified, completed, line)

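# merge tasks from every host's data files; the newest copy of each uuid wins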
by_uuid = {}
for c, fn in [
	(True, "rama-completed.data"),
	(True, "hopper-completed.data"),
	(True, "dorp-completed.data"),
	(True, "android-completed.data"),
	(False, "rama-pending.data"),
	(False, "hopper-pending.data"),
	(False, "android-pending.data"),
]:
	for uuid, modified, line in read(fn):
		add_to(uuid, modified, c, line, by_uuid)



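# write the merged tasks back out, split into completed and pending files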
with open("completed-result.data", "w") as f:
	for _, completed, line in by_uuid.itervalues():
		if completed:
			f.write(line)

with open("pending-result.data", "w") as f:
	for _, completed, line in by_uuid.itervalues():
		if not completed:
			f.write(line)

As it turns out, I might have simplified this a little by looking at the status field: completed and deleted tasks are in completed.data, and the rest are in pending.data.
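
For the record, a minimal sketch of that simpler check (the status field uses the same quoting convention as the uuid and modified fields in the script above):

status_re = re.compile(r'status:"([^"]*)"')

def is_completed(line):
	# "completed" and "deleted" tasks live in completed.data;
	# every other status belongs in pending.data
	match = status_re.search(line)
	return match is not None and match.group(1) in ("completed", "deleted")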

Once I was happy with the results (approximately the right number of pending tasks, basically), I copied them into ~/.task on one machine and ran some task queries to check that everything looked good (looking for tasks I recalled adding on various machines). Satisfied with this, I downloaded yet another set of keys from FreeCinc and installed them on that same machine. I deleted ~/.task/backlog.data on that machine (just in case) and ran task sync init, which appeared to upload all pending tasks. Great!

Next, I deleted ~/.task/*.data on all of the other machines, installed the new FreeCinc keys, and ran task sync. On these machines, it happily downloaded the pending tasks. And we’re back in business!

I chose not to just copy ~/.task/*.data between systems because I run slightly different versions of TaskWarrior on different systems, so the data format might be different. I might have used task export and task import with some success, but I didn’t think of it in time.

Julia ValleraMozilla Clubs end of year goals

Mozilla Clubs are excited to share our goals for the rest of 2016. We’ve come a long way since the program’s launch in 2015. What lies ahead for us is exciting and challenging. Below is what we will be working on and information about how you can join in the fun.

Curious to learn more about Mozilla Clubs? Check out our website, facebook page, event gallery, and discussion forum.

Mozilla Club leaders came together in June 2016 at Mozilla all-hands. Photo by Randy Macdonald

Our process

In June 2016, eight Mozilla Club leaders came together in London, UK for Mozilla’s bi-annual All Hands gathering. They participated in many conversations, one of which was a 90-minute deep-dive session to identify objectives for clubs over the next six months. During the session we brainstormed topics, ideated in pairs and had a group share-out. In addition to informing our goals for the rest of 2016, this session gave club leaders the opportunity to learn more about each other’s work and regional challenges.

In July, we shared the results of our deep-dive session more broadly during our monthly call for club leaders and an internal clubs info session. This allowed us to gather more feedback and, ultimately, votes on which goals we should focus on for Mozilla Clubs between now and January 2017.

Here is the list of goals that resulted, why they are important to our work, and how we plan to approach them.

Six Month Goals

Curate and/or create new resources for running clubs offline
  • Why: We want to build and curate more web literacy curriculum that can be used without internet access so that club participants can learn offline.
  • How: We will make our current offline activities and curriculum easier to locate, curate new resources and build new ones.
Connect the community through a global gathering
  • Why: Club participants learn from each other and feel connected to a global community when they have the opportunity to see each other face-to-face.
  • How: We will draw from event models across Mozilla like global sprints, state of the Hive and Mozilla Festival to connect club participants (virtually and/or in person) to work on challenges, share experiences and exchange knowledge.
Continue to localize content and resources
  • Why: As we translate more curriculum, activities and club guides into languages other than English, more people can access and learn from them.
  • How: We will work with Mozilla volunteers, staff and partners to build localization into the process of content creation and start with translating current activities and creating new location-specific resources.
Reward and recognize club leaders
  • Why: Club leaders need rewards and recognition for their work so that they feel empowered to grow and spread web literacy in their communities.
  • How: We will recognize club leaders for their work through a formal rewards process and develop an agreement policy to create more clarity around the responsibilities of being a club leader.
Strengthen clubs as an organizing model for Mozilla campaigns
  • Why: Mozilla Club participants should continue to have an active role in Mozilla campaigns like Maker Party, Copyright, Take back the Web, Encryption, etc.
  • How: We will leverage club calls, office hours, the discussion forum, etc. to get input from club participants as campaigns take shape and will share campaign related activities that can be incorporated into their offerings.
Connect club participants across Mozilla
  • Why: Mozilla program participants have a lot of expertise to share and they should be able to connect with each other easily and frequently.
  • How: Create opportunities for community members in Clubs, Hives, Open Science and Advocacy to share work with each other, get feedback, build networks and more.
Assess club activity
  • Why: It is important that we maintain an accurate and up-to-date list of active clubs so that we can provide support where it is needed most.
  • How: We will identify which clubs are active by holding individual meetings, checking in via email and reviewing the club event reporter.

Join in the fun!

Here are some ways you can contribute to our work over the next six months and beyond.

  1. Connect with a Mozilla Club in your area. Don’t see any clubs in your area? Apply to start your own!
  2. Help us translate one of our web literacy activities into your preferred language.
  3. Use our offline activities, tell us what you think and suggest new ones.
  4. Join our facebook group to get updates about upcoming events and campaigns.

Jen Kagandraggable min-vid, part 1

since merging john’s and my css PR, i’ve been digging into min-vid again. lots has changed! dave rewrote min-vid in react.js to make it easier for contributors to plug in.

why react.js? because we won’t have to write a thousand different platform checks anymore. for example, we used to have to trigger one set of behaviors if the platform was youtube.com and another set if the platform was vimeo.com. that wasn’t scalable, and it wasn’t very contributor-friendly. now, to add support for an additional video-streaming platform, contributors just have to construct the URL for accessing the platform’s video files (hopefully via a well-documented API) and add the new URL-constructing code to min-vid’s /lib folder in a file called get-[platform]-url.js.

so that’s awesome!

right now, i’m working on how to make the video panel draggable within the browser window so you’re not just limited to watching yr vids in the lower left-hand corner:

[screenshot: min-vid playing in the lower left-hand corner of the browser window]

john came up with a hacky idea for draggability where, on mouseDown, we’ll:

  1. create an invisible container the size of the entire browser window
  2. as long as mouseDown is true, drag the panel wherever we want within the invisible container
  3. onMouseUp, snap the container to be the size of the panel again.

the idea is to make dragging less glitchy by changing our dragging process so we’re no longer sending data back and forth between react, the add-on, and the window.

how to get started? jared broke down the task into smaller pieces for me. here’s the first piece:

[screenshot: jared’s breakdown of the first piece of the task]

the function for setting up the panel size is in the index.js file. we determine how and when to panel.show() and panel.hide() based on the block of code below. the code tells the panel to listen for

  1. a message being emitted, and
  2. the content of that message, in this case from the controls.js file:

// require the Panel element from the Mozilla SDK
var panel = require('sdk/panel').Panel({
// set the panel content using the /default.html file
  contentURL: './default.html',
// set the panel functionality using the /controls.js file
  contentScriptFile: './controls.js',
// set the panel dimensions and position
  width: 320,
  height: 180,
  position: {
    bottom: 10,
    left: 10
  }
});

then, do different stuff based on what the message said.

// turn the panel port on to listen for a 'message' being emitted
panel.port.on('message', opts => {
// assign title to the 'action' field of the emitted opts
  var title = opts.action;

  if (title === 'send-to-tab') {
    const pageUrl = getPageUrl(opts.domain, opts.id);
    if (pageUrl) require('sdk/tabs').open(pageUrl);
    else console.error('could not parse page url for ', opts); // eslint-disable-line no-console
    panel.hide();
  } else if (title === 'close') {
    panel.hide();
  } else if (title === 'minimize') {
    panel.hide();
    panel.show({
      height: 40,
      position: {
        bottom: 0,
        left: 10
      }
    });
  } else if (title === 'maximize') {
    panel.hide();
    panel.show({
      height: 180,
      position: {
        bottom: 10,
        left: 10
      }
    });
  }
});

i added another little chunk in there which says: if the title is drag, hide the panel and then show it again with these new dimensions. the whole new block of code looks like this:

panel.port.on('message', opts => {
  var title = opts.action;

  if (title === 'send-to-tab') {
    const pageUrl = getPageUrl(opts.domain, opts.id);
    if (pageUrl) require('sdk/tabs').open(pageUrl);
    else console.error('could not parse page url for ', opts); // eslint-disable-line no-console
    panel.hide();
  } else if (title === 'close') {
    panel.hide();
  } else if (title === 'minimize') {
    panel.hide();
    panel.show({
      height: 40,
      position: {
        bottom: 0,
        left: 10
      }
    });
  } else if (title === 'maximize') {
    panel.hide();
    panel.show({
      height: 180,
      position: {
        bottom: 10,
        left: 10
      }
    });
  }
  else if (title === 'drag') {
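    // on 'drag', re-show the panel at a larger size so the video has room to move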
    panel.hide();
    panel.show({
      height: 360,
      width: 640,
      position: {
        bottom: 0,
        left: 0
      }
    });
  }
});

so we have some new instructions for the panel. but how do we trigger them? we trigger them by creating the drag function within the PlayerView component and then rendering it. this code says: on a new custom event, send a message. the content of the message is an object with the format {detail: obj}, in this case {action: 'drag'}. then, render the trigger as an <a> tag inside a <div>.

function sendToAddon(obj) {
  window.dispatchEvent(new CustomEvent('message', {detail: obj}));
}

const PlayerView = React.createClass({
  getInitialState: function() {
    return {showVolume: false, hovered: false};
  },
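  // clicking the drag handle sends a 'drag' action message to the add-on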
  drag: function() {
    sendToAddon({action: 'drag'});
  },
...
  render: function() {
    return (
     <div className={'right'}>
       <a onClick={this.drag} className={'drag'} />
     </div>
    );
  }
...

and we style the class in our css file:

.drag {
    background: red;
}

so we get something like this, before clicking the red square:

[screenshot: the panel before clicking the red square]

and after clicking the red square:

[screenshot: the panel after clicking the red square]

next, i have to see if i can make the panel fill the page, then only drag the video element inside the panel, then snap the panel to its position on the window and put it back to its original size, 320 x 180.

Mozilla WebDev CommunityBeer and Tell – July 2016

Once a month, web developers from across the Mozilla Project get together to talk about our side projects and drink, an occurrence we like to call “Beer and Tell”.

There’s a wiki page available with a list of the presenters, as well as links to their presentation materials. There’s also a recording available courtesy of Air Mozilla.

Moby von Briesen: Jam Circle

This week’s only presenter was mobyvb, who shared Jam Circle, a webapp that lets users play music together. Users who connect join a shared room and see each other as circles connected to a central node. Using the keyboard (or, in browsers that support it, any MIDI-capable device), users can play notes that all other users in the channel hear and see as colored lines on each circle’s connection to the center.

The webapp also includes the beginnings of an editor that will allow users to write chord progressions and play them alongside live playback.

An instance of the site is up and running at jam-circle.herokuapp.com. Check it out!


If you’re interested in attending the next Beer and Tell, sign up for the dev-webdev@lists.mozilla.org mailing list. An email is sent out a week beforehand with connection details. You could even add yourself to the wiki and show off your side-project!

See you next month!

Daniel PocockHow many mobile phone accounts will be hijacked this summer?

Summer vacations have been getting tougher in recent years. Airlines cut into your precious vacation time with their online check-in procedures and a dozen reminder messages; there is growing concern about airport security; and Brexit has already put one large travel firm into liquidation, leaving holidaymakers in limbo.

If all that wasn't bad enough, now there is a new threat: while you are relaxing in the sun, scammers fool your phone company into issuing a replacement SIM card or transferring your mobile number to a new provider, and then proceed to use it to take over all your email, social media, Paypal and bank accounts. The same scam has been appearing around the globe, from Britain to Australia and everywhere in between. Many of these scams were predicted in my earlier blog SMS logins: an illusion of security (April 2014), but they are only starting to get publicity now, as more aspects of our lives are at risk, scammers are ramping up their exploits and phone companies are floundering under the onslaught.

With the vast majority of Internet users struggling to keep their passwords out of the wrong hands, many organizations have started offering their customers the option of receiving two-factor authentication codes on their mobile phone during login. Rather than making people safer, this has simply given scammers an incentive to seize control of telephones, usually by tricking the phone company to issue a replacement SIM or port the number. It also provides a fresh incentive for criminals to steal phones while cybercriminals have been embedding code into many "free" apps to surreptitiously re-route the text messages and gather other data they need for an identity theft sting.

Sadly, telephone networks were never designed for secure transactions. Telecoms experts have made this clear numerous times. Some of the largest scams in the history of financial services exploited phone verification protocols as the weakest link in the chain, including a $150 million heist reminiscent of Ocean's 11.

For phone companies, SMS messaging came as a side-effect of digital communications for mobile handsets. It is less than one percent of their business. SMS authentication is less than one percent of that. Phone companies lose little or nothing when SMS messages are hijacked so there is little incentive for them to secure it. Nonetheless, like insects riding on an elephant, numerous companies have popped up with a business model that involves linking websites to the wholesale telephone network and dressing it up as a "security" solution. These companies are able to make eye-watering profits by "purchasing" text messages for $0.01 and selling them for $0.02 (one hundred percent gross profit), but they also have nothing to lose when SIM cards are hijacked and therefore minimal incentive to take any responsibility.

Companies like Google, Facebook and Twitter have thrown more fuel on the fire by encouraging and sometimes even demanding users provide mobile phone numbers to "prove they are human" or "protect" their accounts. Through these antics, these high profile companies have given a vast percentage of the population a false sense of confidence in codes delivered by mobile phone, yet the real motivation for these companies does not appear to be security at all: they have worked out that the mobile phone number is the holy grail in cross-referencing vast databases of users and customers from different sources for all sorts of creepy purposes. As most of their services don't involve any financial activity, they have little to lose if accounts are compromised and everything to gain by accurately gathering mobile phone numbers from as many users as possible.


Can you escape your mobile phone while on vacation?

Just how hard is it to get a replacement SIM card or transfer/port a user's phone number while they are on vacation? Many phone companies will accept instructions through a web form or a phone call. Scammers need little more than a user's full name, home address and date of birth: vast lists of these private details are circulating on the black market, sourced from social media, data breaches (99% of which are never detected or made public), marketing companies and even the web sites that encourage your friends to send you free online birthday cards.

Every time a company has asked me to use mobile phone authentication so far, I've opted out, and I'll continue to do so. Even if somebody does hijack my phone account while I'm on vacation, the consequences for me are minimal, as it will not give them access to any other account or service. Can you and your family members say the same?

What can be done?

  • Opt-out of mobile phone authentication schemes.
  • Never give the mobile phone number to web sites unless there is a real and pressing need for them to call you.
  • Tell firms you don't have a mobile phone or that you share your phone with your family and can't use it for private authentication.
  • If you need to use two-factor authentication, only use technical solutions such as smart cards or security tokens that have been engineered exclusively for computer security. Leave them in a locked drawer or safe while on vacation. Be wary of anybody who insists on SMS and doesn't offer these other options.
  • Rather than seeking to "protect" accounts, simply close some or all social media accounts to reduce your exposure and eliminate the effort of keeping them "secure" and updating "privacy" settings.
  • If your bank provides a relationship manager or other personal contact, this can also provide a higher level of security as they get to know you.

See my previous blogs on SMS messaging, security and two-factor authentication, including my earlier blog SMS Logins: an illusion of security.

Air MozillaThe Joy of Coding - Episode 64

The Joy of Coding - Episode 64 mconley livehacks on real Firefox bugs while thinking aloud.

Air MozillaThe Invention Cycle: Going From Inspiration to Implementation with Tina Seelig

The Invention Cycle: Going From Inspiration to Implementation with Tina Seelig Bringing fresh ideas to life and ultimately to market is not a well charted course. In July, our guest Tina Seelig will share a new...

Daniel Stenbergcurl wants to QUIC

The interesting Google transfer protocol known as QUIC is being passed through the IETF grinding machines, hopefully to end up as a proper “spec” that has been reviewed and agreed to by many peers: a protocol thoroughly documented and backed by broad consensus among protocol people. Follow the IETF QUIC mailing list for all the action.

I’d like us to join the fun

Similarly to how we implemented HTTP/2 support early on for curl, I would like us to get “on the bandwagon” early for QUIC: both to aid the protocol development and to serve as a testing tool for the protocol and the server implementations, but also, of course, to end up with a solid implementation for users who’d like a proper QUIC-capable client for data transfers.

implementations

The current version of the QUIC protocol (made entirely by Google, and not the output of the work they’re now doing on it within the IETF) is already widely used, as Chrome speaks it with Google’s services in preference to HTTP/2 and other protocol options. There exist only a few other implementations of QUIC outside the official ones Google offers as open source. Caddy, for example, offers a separate server implementation.

the Google code base

For curl’s sake, we can’t use the Google code as the basis for a QUIC implementation, since it is C++, and code used within the Chrome browser is really too entangled with the browser and its particular environment to become very good when converted into a library. (There’s a libquic project attempting exactly this.)

for curl and others

The ideal way to implement QUIC for curl would be to create an “nghttp2” alternative that does QUIC. An ngquic, if you will! A library that handles the low-level protocol fiddling, the binary framing, etc. Done that way, a QUIC library could be used by other projects who’d like QUIC support, and all the people who’d like to see this protocol supported in those tools and libraries could join in and make it happen. Such a library would need to be written in plain C and be suitably licensed for it to be really interesting for curl use.

a needed QUIC library

I’m hoping my post here will inspire someone to get such a project going. I will not hesitate to join in and help it get somewhere! I haven’t started such a project myself because I think I already have enough projects on my plate so I fear I wouldn’t be a good leader or maintainer of a project like this. But of course, if nobody else will do it I will do it myself eventually. If I can think of a good name for it.

some wishes for such a library

  • Written in C, to offer the same level of portability as curl itself and to allow it to be used from extensions in other languages, etc.
  • FOSS-licensed suitably
  • It should preferably not “own” the socket, but should also work in-memory, to allow applications to do many parallel connections, etc.
  • Non-blocking. It shouldn’t wait for things on its own but let the application do that.
  • Should probably offer both client and server functionality for maximum use.
  • What else?

Air MozillaConnected Devices Weekly Program Update, 19 Jul 2016

Connected Devices Weekly Program Update Weekly project updates from the Mozilla Connected Devices team.

Mozilla Localization (L10N)Localization Hackathon in Berlin

After many delays, we collectively picked the balmy first weekend of June, and Berlin as our host city, for a localization hackathon. We had four representatives each from the Dutch/Frisian and Ukrainian communities, three from the German community, and one from South African English. Most of them had not been to an l10n hackathon before; many had never met in person within their community even though they had been collaborating for years.

Group shot

As with the other hackathons this year we allowed each team to plan how they spent their time together, and set team goals on what they wanted to accomplish over the weekend. The localization drivers would lead some group discussions. As a group, we split the weekend covering the following topics:

A series of spectrograms, where attendees answer yes/no, agree/disagree questions by physically standing on a line from one side of the room to the other. We learned a lot about our group on recognition, about the web in their language, and about participation patterns. As we’re thinking about how to improve localization of Firefox, gaining insights into localizers’ hearts and lives is always helpful.

Axel shared some organizational updates from the Orlando All-Hands: we recapped the status of Firefox OS and the new focus on Connected Devices. We also covered the release schedule of Firefox for iOS and Android.

We spent a bit more time talking about the upcoming changes to the localization of Firefox, with L20n and repository changes coming up. In the meantime, we have a dedicated blog post on l20n for localizers, so read up on l20n there. Alongside that, we’ll stop using individual repositories and workflows for localizing Firefox Nightly, Developer Edition, Beta, and release; instead, the strings needed for all of them will be in a single place. That’s obviously quite a few changes coming up, and we got quite a few questions in the conversations. At least Axel enjoys answering them.

Lunch

Our renewed focus on translation quality resulted in the development of the style guide template as a guideline for localization communities to emulate. We went through all the categories and sub-categories and explained what was expected: to elaborate on them and provide locale-specific examples. We stressed the importance of having a style guide, as it helps with consistency between multiple contributors to a single product, and across all products and projects. This exercise encouraged the communities who thought they had a guide to review and update it, and those who didn’t have one to create one. The Ukrainian community created a draft version soon after they returned home. Having an established style guide also helps with training and onboarding new contributors.
We also went over the categories and definitions specified in MQM. We immediately used that knowledge to review, through a live demo in a Pontoon-like tool, some inconsistencies in the strings extracted from projects in Ukrainian. To me, that was one of the highlights of the weekend: 1) how to give constructive feedback using one of the defined categories; 2) recurring types of mistakes, whether by a particular contributor or across a locale; 3) terminology consistency within a project, product or group of products, especially with multiple contributors; and 4) the importance of peer review.

For the rest of the weekend, each of the community had their own breakout sessions, reviewed their own to-do list, fixed bugs, completed some projects, and spent one on one time with the l10n drivers.

Brandenburg Gate and the team

We were incredibly blessed with great weather. The unusually heavy rain that flooded many parts of Germany stopped during our visit. A meetup like this would not be complete without experiencing some local culture. Axel, a Berlin native, was tasked with showing us around. We walked, walked and walked, with occasional public transportation in between. We covered several landmarks such as the Berlin Wall, the Brandenburg Gate, several memorials and the landmark Gedächtniskirche, as well as parks and streets crowded with locals. Of course we sampled cuisines that reflect Berlin’s diverse culture: we had great kebabs and the best kebabs, Chinese fusion, the seasonal asparagus and, of course, German beer. For some of us, this was not the first visit to Berlin, but as a group activity, with Axel as our guide, the visit was much more memorable. Before we said goodbye, the thought of next year’s hackathon came to mind. Our Ukrainian community volunteered to host it in Lviv, a beautiful city in the western part of the country. We shall see.

Air MozillaMartes mozilleros, 19 Jul 2016

Martes mozilleros Reunión bi-semanal para hablar sobre el estado de Mozilla, la comunidad y sus proyectos. Bi-weekly meeting to talk (in Spanish) about Mozilla status, community and...

David LawrenceHappy BMO Push Day!

the following changes have been pushed to bugzilla.mozilla.org:

  • [1283323] Rename “Triage Report” link on Reports page.
  • [1286650] Allow explicit specification of an API key in scripts/issue-api-key.pl
  • [1287039] Please add Katharina Borchert and CIO to recruiting lists
  • [1286960] certain github commit messages are not being auto-linkified properly
  • [1254882] develop a nightly script to revoke access to legal bugs from ex-employees

discuss these changes on mozilla.tools.bmo.


Armen ZambranoUsability improvements for Firefox automation initiative - Status update #1

The developer survey conducted by Engineering Productivity last fall indicated that debugging test failures reported by automation is a significant frustration for many developers. In fact, it was the biggest deficit identified by the survey. As a result, the Engineering Productivity team (aka A-Team) is working on improving the user experience for debugging test failures in our continuous integration and on speeding up the turnaround for Try server jobs.

This quarter’s main focus is on:
  • Debugging tests on interactive workers (only Linux on TaskCluster)
  • Improve end to end times on Try (Thunder Try project)

For all bugs and priorities you can check out the project management page for it:

In this email you will find the progress we’ve made recently. In future updates you will see a delta from this email.

PS: These status updates will be fortnightly.


Debugging tests on interactive workers
Accomplished recently:
  • Landed support for running reftest and xpcshell via tests.zip
  • Many UX improvements to the interactive loaner workflow

Upcoming:
  • Make sure Xvfb is running so you can actually run the tests!
  • Mochitest support + all other harnesses


Thunder Try - Improve end to end times on try

Project #1 - Artifact builds on automation

Accomplished recently:
  • Landed prerequisites for Windows and OS X artifact builds on try.
  • Identified which tests should be skipped with artifact builds

Upcoming:
  • Provide a try syntax flag to trigger only artifact builds instead of full builds; starting with opt Linux 64.


Project #2 - S3 Cloud Compiler Cache

Accomplished recently:
  • The Rust rewrite of sccache has reached feature parity with the Python version
  • Now testing sccache2 on Try

Upcoming:
  • We want to roll out a two-tier sccache for Try, which will enable it to benefit from cache objects from integration branches


Project #3 - Metrics

Accomplished recently:

Upcoming:
  • Putting Mozharness steps’ data inside Treeherder’s database for aggregate analysis

Other
Upcoming:
  • TaskCluster Linux builds are currently built using a mix of m3/r3/c3 2xlarge AWS instances, depending on pricing and availability. We’re going to assess the effects on build speeds of using more powerful AWS instance types, as one potential way of reducing e2e Try times.


Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

This Week In RustThis Week in Rust 139

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us an email! Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

New Crates & Project Updates

Crate of the Week

This week has a belated Crate of the Week with Vincent Esche's self-submitted cargo-modules, which gives us the cargo modules subcommand that shows the module structure of our crates in a tree view, optionally warning of orphans. Thanks, Vincent!

Submit your suggestions for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

105 pull requests were merged in the last two weeks.

New Contributors

  • abhi
  • Aravind Gollakota
  • Ben Boeckel
  • Ben Stern
  • David
  • Dridi Boukelmoune
  • Isaac Andrade
  • Zhen Zhang

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Upcoming Events

  • 7/20. Rust Community Team Meeting at #rust-community on irc.mozilla.org.
  • 7/21. Rust Hack & Learn Karlsruhe.
  • 7/27. Rust Community Team Meeting at #rust-community on irc.mozilla.org.

If you are running a Rust event please add it to the calendar to get it mentioned here. Email Erick Tryzelaar or Brian Anderson for access.

fn work(on: RustProject) -> Money

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

fzammetti: Am I the only one that finds highly ironic the naming of something that's supposed to be new and cutting-edge after a substance universally synonymous with old, dilapidated and broken down?

paperelectron: Rust is as close to the bare metal as you can get.

On /r/programming.

Submit your quotes for next week!

This Week in Rust is edited by: nasa42, llogiq, and brson.

Karl Dubost[worklog] Edition 027. Tracking protection and a week of boxes.

Tracking protection is an interesting beast: a feature meant to help users, but one that leaves users thinking the site is broken. I guess it's something similar to habit. If you put a mask on your face and forget about it, you may be surprised that people do not want to talk to you.

Webcompat Life

Progress this week:

Today: 2016-07-19T11:32:54.030052
316 open issues
----------------------
needsinfo       5
needsdiagnosis  76
needscontact    20
contactready    41
sitewait        168
----------------------

You are welcome to participate

Webcompat issues

(a selection of some of the bugs worked on this week).

  • An issue with the MLB site displaying the plays.
  • An interesting CSS issue about display:table and max-height behaving differently in Chrome and Firefox, maybe related to a known issue. To be confirmed.
  • Enabling Tracking Protection in Firefox creates a lot of issues, which are not completely understood by users. We are starting to receive a set of Web Compatibility reports because a site breaks or crashes when tracking protection is enabled. Usually, the JavaScript code of the site didn't take into account that some people might want to block some of the page assets, and this creates unintended consequences. There is probably something around UX to improve here, so users really understand their choices.

WebCompat.com dev

Reading List

  • CSS Containment.

    the contain property, which indicates that the element’s subtree is independent of the rest of the page.

    If I understand correctly, this seems like something which would answer many of the complaints we hear from Web developers about CSS isolation. Specifically the layout value: contain: layout.

    This value turns on layout containment for the element. This ensures that the containing element is totally opaque for layout purposes; nothing outside can affect its internal layout, and vice versa.

    Implemented in Blink. I didn't find an issue on the WebKit project (Safari), and I didn't find a bug in Mozilla's Bugzilla either. Can I use? Probably not.

Follow Your Nose

TODO

  • Document how to write tests on webcompat.com using test fixtures.
  • ToWrite: Amazon prefetching resources with <object> for Firefox only.

Otsukare!

Roberto A. VitilloData Analysis Review Checklist

Writing good code is hard; writing a good analysis is harder. Peer review is an essential tool to fight repetitive errors and omissions, and more generally to spread knowledge. I found the use of a checklist invaluable to help me remember the most important things I should watch out for during a review. It’s far too easy to focus on a few details and ignore others, which might (or might not) be caught in a later round.

I don’t religiously apply every bullet point of the following checklist to every analysis, nor is this list complete; more items would have to be added depending on the language, framework, libraries, models, etc. used.

  • Is the question the analysis should answer clearly stated?
  • Is the best/fastest dataset that can answer the question being used?
  • Do the variables used measure the right thing (e.g. submission date vs activity date)?
  • Is a representative sample being used?
  • Are all data inputs checked (for the correct type, length, format, and range) and encoded?
  • Do outliers need to be filtered or treated differently?
  • Is seasonality being accounted for?
  • Is sufficient data being used to answer the question?
  • Are comparisons performed with hypotheses tests?
  • Are estimates bounded with confidence intervals?
  • Should the results be normalized?
  • If any statistical method is being used, are the assumptions of the model met?
  • Is correlation confused with causation?
  • Does each plot communicate an important piece of information or address a question of interest?
  • Are legends and axes labelled, and do they start from 0?
  • Is the analysis easily reproducible?
  • Does the code work, i.e. does it perform its intended function?
  • Is there a more efficient way to solve the problem, assuming performance matters?
  • Does the code read like prose?
  • Does the code conform to the agreed coding conventions?
  • Is there any redundant or duplicate code?
  • Is the code as modular as possible?
  • Can any global variables be replaced?
  • Is there any commented out code and can it be removed?
  • Is logging missing?
  • Can any of the code be replaced with library functions?
  • Can any debugging code be removed?
  • Where third-party utilities are used, are returning errors being caught?
  • Is any public API commented?
  • Is any unusual behavior or edge-case handling described?
  • Is there any incomplete code? If so, should it be removed or flagged with a suitable marker like ‘TODO’?
  • Is the code easily testable?
  • Do tests exist and do they actually test that the code is performing the intended functionality?
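
To make one of these checks concrete, here is a minimal, illustrative sketch (invented data, simple percentile bootstrap) of bounding an estimate with a confidence interval:

import random

def bootstrap_ci(data, stat=lambda xs: sum(xs) / len(xs),
                 n_boot=10000, alpha=0.05):
    # percentile bootstrap: resample with replacement, collect the statistic,
    # and read the interval off the sorted estimates
    estimates = sorted(
        stat([random.choice(data) for _ in data]) for _ in range(n_boot)
    )
    return (estimates[int(n_boot * alpha / 2)],
            estimates[int(n_boot * (1 - alpha / 2))])

# hypothetical page load times in milliseconds
samples = [312, 290, 305, 451, 322, 298, 330, 287, 401, 295]
print(bootstrap_ci(samples))  # roughly (300, 365); varies by run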

Christian HeilmannA great time and place to ask about diversity and inclusion

whiteboard code

There isn’t a single day going by right now where you can’t read a post or see a talk about diversity and inclusiveness in our market. And that’s a great thing. Most complain about the lack of both. And that’s a very bad thing.

It has been proven over and over that diverse teams create better products. Our users are all different and have different needs. If your product team’s structure reflects that, you’re already one up on the competition. You’re also much less likely to build a product for yourself – and we are not our end users.

Let’s assume we are pro-diversity and pro-inclusiveness. And it should be simple for us – we come from a position of strength:

  • We’re expert workers and we get paid well.
  • We are educated and we have companies courting us and looking after our needs once we have been hired.
  • We’re not worried about being able to pay our bills or random people taking our jobs away.

I should say “yet”, because automation is on the rise and even our jobs can be optimised away sooner or later. Some of us are even working on that.

For now, though, we are in a unique position of power. There are not enough expert workers to fill the jobs. We have job offers thrown at us, and our hiring bonuses, perks and extra offers are reaching ridiculous levels. When you tell someone outside our world about them, you get shocked looks. We’re like the investment bankers and traders of the eighties, and we should help ensure that our image doesn’t turn into the one they have now.

If we really want to change our little world and become a shining beacon of inclusion, we need to not only talk about it – we should demand it. A large part of the lack of diversity in our market is that it is not part of our hiring practices. The demands we make of new hires make it very hard for someone not from a privileged background, or without a degree from a university of standing, to get into our market. And that makes no sense. The people who can change that are us – the people in the market who tick all the boxes.

To help the cause and make the things we demand in blog posts and keynotes happen, we should bring our demands to the table when and where they matter: in job interviews and application processes.

Instead of asking about our hardware, share options and perks like free food and dry cleaning, we should ask for the things that really matter:

  • What is the maternity leave process in the company? Can paternity leave be matched? We need to make it impossible for an employer to pick a man over a woman because of this biological reason.
  • Why is a degree part of the job? I have none and had lots of jobs that required one. This seems like an old requirement that just got copied and pasted because of outdated reasons.
  • What is the long-term plan the company has for me? We keep getting asked where we see ourselves in five years; that question has become a cliché by now. Showing that the company knows what to do with you in the long term shows commitment, and means you are not just a young and gifted person to be burned out and expected to leave in a year.
  • Is there a chance of a 4-day week or flexible work hours? For a young person it is no problem doing an 18-hour shift in an office where everything is provided for you. As soon as you have children, all kinds of other things get added to your calendar that can’t be moved.
  • What does this company do to ensure diversity? This might be a bit direct, but it is easy to weed out those that pay lip service.
  • What is the process to move in between departments in this company? As you get older and you stay around for longer, you might want to change career. A change in your life might make that necessary. Is the company supporting this?
  • Is there a way to contribute to hiring and resourcing even when you are not in HR? This could give you the chance to ask the right questions to weed out applicants that are technically impressive but immature or terrible human beings.
  • What is done about accessibility in the internal company systems? I worked for a few companies where internal systems were inaccessible to visually impaired people. Instead of giving them extra materials we should strive for making internal systems available out-of-the-box.
  • What is the policy on moving to other countries or working remotely? Many talented people can not move or don’t want to start a new life somewhere else. And they shouldn’t have to. This is the internet we work on.
  • What do you do to prevent ageism in the company? A lot of companies have an environment that is catering to young developers. Is the beer-pong table really a good message to give?

I’ve added these questions to a repo on GitHub, please feel free to add more questions if you find them.

FWIW, I started where I am working right now because I got good answers to questions like these. My interviews were conversations with mixed groups of people telling me their findings as teams, not one very aggressive person asking me to out-code them. It was such a great experience that I started here, and it wasn’t a fleeting impression: the year I’ve now worked here has proved that, even in interviewing, diversity very much matters.

Photo Credit: shawncplus

Mozilla WebDev CommunityExtravaganza – July 2016

Once a month, web developers from across Mozilla get together to talk about the work that we’ve shipped, share the libraries we’re working on, meet new folks, and talk about whatever else is on our minds. It’s the Webdev Extravaganza! The meeting is open to the public; you should stop by!

You can check out the wiki page that we use to organize the meeting, or view a recording of the meeting in Air Mozilla. Or just read on for a summary!

Shipping Celebration

The shipping celebration is for anything we finished and deployed in the past month, whether it be a brand new site, an upgrade to an existing one, or even a release of a library.

Basket switch to Salesforce

First up was pmac, who shared the news that Basket, the email newsletter subscription service, has switched to using Salesforce as the backend for storing newsletter subscriptions. In addition, the service now has a nifty public DataDog metrics dashboard showing off statistics about how the service is performing.

Engagement Engineering Status Board

Next was giorgos, who shared status.mozmar.org, a status page listing the current status of all the services that Engagement Engineering maintains. The status board pulls monitoring information from Dead Man’s Snitch as well as New Relic‘s application and Synthetics monitoring. The app runs a worker using AWS Lambda that pulls the information and writes it to a YAML file in the repo‘s gh-pages branch, and the status page itself reads the YAML file via JavaScript to build the display.

DXR

ErikRose stopped by to share more cool things that shipped in DXR this month:

  • Indexing for XBL and JavaScript.
  • Indexing of 32+ new projects.
  • Added a third build server.
  • Several performance optimizations that cut build times by roughly 25%.
  • C++ macro definitions, method overrides, pure virtuals, substructs, and more are all now indexed. In addition, you can now easily jump between header files and their implementations.
  • UI improvements, including contrast improvements, a new filename filter, and jumping directly to files that are the only result of a query.

Special thanks to intern new_one and contributors twointofive and abbeyj. Also special thanks to MXR for being shut down due to security bugs and allowing DXR to flourish in its wake.

Fathom 1.0 and 1.1

Erik also brought up Fathom, an experimental framework for extracting meaning from webpages. Fathom allows you to write declarative rules that score and classify DOM nodes, and then extract those nodes from a DOM that it analyzes.

This month we shipped the 1.0 version of Fathom, as well as a 1.1 release with a bug fix for Firefox support and an optimization fix. It’s available as an NPM module for use as a library.

Roundtable

The Roundtable is the home for discussions that don’t fit anywhere else.

Engagement Engineering Hiring – Senior Webdev and Site Reliability Engineer

Last up was pmac again, who wanted to mention that the Mozilla Engagement Engineering team is hiring a Senior Web Developer and a Site Reliability Engineer. If you’re interested in working at Mozilla, click those links to apply on our careers site!


If you’re interested in web development at Mozilla, or want to attend next month’s Extravaganza, subscribe to the dev-webdev@lists.mozilla.org mailing list to be notified of the next meeting, and maybe send a message introducing yourself. We’d love to meet you!

See you next month!

Chris H-CUnits and Data Follow-Up: Pokémon GO in the United Kingdom

Hit augmented-reality mobile gaming sensation Pokémon GO is now available in the UK, so it’s time to test my hypothesis about searches for 5km converted to miles in that second bastion of “Let’s use miles as distance units in defiance of basically every other country”:

[graph: UK search interest in converting 5 km to miles]

Results are consistent with hypothesis.

(( Now if only I could get around how the Google Play Store is identifying my Z10 as incompatible with the game… ))

:chutten


Mozilla Addons BlogA Better Add-on Discovery Experience

People who personalize Firefox like their Firefox better. However, many people don’t know that they can, and for those who do know, it isn’t particularly easy. So a few months ago, we began rethinking our entire add-on discovery experience: from helping people understand the benefits of personalization, to making it easier to install an add-on, to putting the right add-on in front of people at the right time.

The first step we’ve taken towards a better discovery experience is in the redesign of our Add-on Discovery Pane. This is typically the first page users see when they launch the Add-on Manager at about:addons.

Add-on Discovery Pane before Firefox 48

We updated this page to target people who are just getting started with add-ons, by simplifying add-on installation to just one click and using clean images and text to quickly orient a new user.

Disco Pane One Click Install

It features a tightly curated list of add-ons that provide customizations that are easy for new users to understand.

Add-on Discovery Pane starting with Firefox 48

We started with a small list and collaborated with the developers of those add-ons to ensure the best possible experience for users. For future releases, we will refresh the featured content more frequently and open up the nomination process for inclusion.

Our community of developers creates awesome add-ons, and we want to help users discover them and love their Firefox even more. In the coming months, we are going to continue improving the experience by making recommendations that are as uniquely helpful to users as possible.

In the meantime, this first step toward improving the Firefox personalization experience will land in Firefox 48 on August 1, and is available in Firefox Beta now. So download Firefox Beta, go to about:addons and give it a try! (You can also reach this page by going to the Tools menu and choosing “Add-ons”). We would love to hear your feedback in the forums.

Hal WineLegacy vcs-sync is dead! Long live vcs-sync!

tl;dr: No need to panic - modern vcs-sync will continue to support the gecko-dev & gecko-projects repositories.

Today’s the day to celebrate! No more bash scripts running in screen sessions providing dvcs conversion experiences. Woot!!!

I’ll do a historical retrospective in a bit. Right now, it’s time to PARTY!!!!!

Kartikaya GuptaBitcoin mining as an ad replacement?

The web as we know it basically runs on advertising. Which is not really great, for a variety of reasons. But charging people outright for content doesn't work that great either. How about bitcoin mining instead?

Webpages can already run arbitrary computation on your computer, so instead of funding themselves through ads, they could include a script that does some mining client-side and submits the results back to their server. Instead of paying with dollars and cents, you're effectively paying with electricity and compute cycles. That seems a lot more palatable to me. What do you think?

Shing LyuIdentify Performance Regression in Servo

Performance has always been a key focus of the Servo browser engine project. But just measuring performance through profilers and benchmarks is not enough. The first impression for a real user is the page load time. Although many internal, non-visible optimizations are important, we still want to make sure our page load time is doing well.

Back in April, I opened this bug #10452 to start planning the page load test. With kind advice from the Servo community and the Treeherder people, we finally settled on a test design similar to the Talos test suite, and decided to use Perfherder for visualization.

Test Design

Talos is a performance test suite designed for Gecko, the browser engine behind Firefox. It has many different kinds of tests, covering user-level UI testing and benchmarking. But what we really care about is the TP5 page load test suite. As the wiki says, TP5 uses Firefox to load 51 scraped websites selected from the Alexa Top 500 sites of its time. Those sites are hand-picked, then downloaded and cleaned to remove all external web resources. The pages are then hosted on a local server to reduce the impact of network latency.

Each page is tested three times for Servo, and we take the median of the three. (We should test more times, but that would take too long.) All the medians are then averaged using the geometric mean. The geometric mean has a great property: even if two test results are of different scales (e.g. 500 ms vs. 10000 ms), if either one of them changes by 10%, they have an equal impact on the average.
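
A quick way to convince yourself of that property of the geometric mean (illustrative numbers only):

from math import exp, log

def geometric_mean(values):
    # exp of the mean of logs; equivalent to the nth root of the product
    return exp(sum(log(v) for v in values) / len(values))

print(geometric_mean([500, 10000]))   # ~2236.1
print(geometric_mean([550, 10000]))   # +10% on the fast page: ~2345.2
print(geometric_mean([500, 11000]))   # +10% on the slow page: ~2345.2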

Visualization

Talos test results for Gecko have been displayed on Treeherder and Perfherder for a while. The former is a dashboard for per-commit test results; the latter is a line-plot visualization of the Talos results. With help from the Treeherder team, we were able to push Servo performance test results to the Perfherder dashboard. I wrote a blog post on how to do this. You’ll see screenshots of Treeherder and Perfherder in the following sections.

Implementation

We created a Python test runner to execute the test. To minimize the effect of hardware differences, we run the test in the Vagrant (VirtualBox backend) virtual machine used in Servo’s CI infrastructure. (You can find the Vagrantfile here.) The test is scheduled by buildbot and runs every midnight.

The test results are collected into a JSON file, then consumed by the test result uploader script. The uploader script formats the test results, calculates the average and pushes the data to Treeherder/Perfherder through the Python client.

The 25% Speedup!

A week before the Mozilla London workweek, we found a big drop in the Perfherder graph: the average page load time fell from about 2000 ms to 1500 ms on June 10th.

Improvement graph

We were very excited about the significant improvement. Perfherder conveniently links to the commits in that build, but there are 26 commits in between.

Link to commits

GitHub commits

You should notice that there are many commits by the “bors-servo” bot, our automatic CI bot that does the pre-commit testing and auto-merging. Those commits are the merge commits generated when a pull request is merged. The other commits come from contributors’ branches, so they may appear earlier than the corresponding merge commit. Since we only care about when a commit gets merged to the master branch, not when the contributor commits to their own branch, we only bisect the merge commits by bors-servo.
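
For bisecting, one way to list just those bors-servo merge commits is a small git query; a minimal sketch, where the revision arguments and repository path are placeholders:

import subprocess

def bors_merge_commits(first, last, repo="."):
    # list bors-servo merge commits in first..last, oldest first
    out = subprocess.check_output(
        ["git", "log", "--merges", "--author=bors-servo",
         "--reverse", "--format=%H", "{}..{}".format(first, last)],
        cwd=repo)
    return out.decode().split()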

Buildbot provides a convenient web interface for forcing a build on certain commits.

Buildbot force build

You can simply type the commit hash in the “Revision” field, and buildbot will check out that commit, build it and run all the tests.

Buildbot force build zoom in

You can track the progress on the Buildbot waterfall dashboard.

Buildbot waterfall

Finally, you’ll be able to see the test result on Treeherder and Perfherder.

Treeherder

Perfherder with bisects

The performance improvement turned out to be the result of this patch by Florian Duraffourg, who used a hashmap to replace a slow list search.

Looking Forward

In the near future, we’ll focus on improving the framework’s stability to support Servo’s performance optimization endeavor. We’ll also work closely with the Treeherder team to expand the flexibility of Treeherder and Perfherder to support more performance frameworks.

If you are interested in the framework, you can find open bugs here, or join the discussion in the tracking bug.

Thanks to William Lachance for his help with the Treeherder and Perfherder work, and for helping me a lot in setting it up on the Treeherder staging server. Thanks to Lars Bergstrom and Jack Moffit for their advice throughout the planning process. And thanks to Adrian Utrilla for contributing many good features to this project.

Air MozillaIntroducing Mozilla Tech Speakers

Introducing Mozilla Tech Speakers Havi Hoffman introduces the Mozilla Tech Speakers Series.

The Servo BlogThese Weeks In Servo 71

In the last two weeks, we landed 173 PRs in the Servo organization’s repositories.

We have gotten great feedback and many new contributors from the release of initial Servo Nightly builds. Hopefully we can continue that as we launch Windows builds this week!

In addition to the list of CSS properties that Servo supports, we now also automatically generate a list of DOM APIs that are implemented.

Planning and Status

Our overall roadmap is available online and now includes the initial Q3 plans. From now on, we plan to include the quarterly plan with a high-level breakdown in the roadmap page.

This week’s status updates are here.

Notable Additions

  • xidorn ensured our Rust bindings generator works better with MSVC
  • Andrew Mackenzie added keyboard shortcuts to quit
  • jdm improved our inlining on some DOM bindings - twice!
  • shinglyu added page-break-before/after for Stylo
  • stshine fixed the treatment of flex-flow during float calculation
  • emilio got our Rust bindings generator building with stable Rust
  • emilio also implemented dirtiness tracking for Stylo
  • SimonSapin got geckolib building with stable Rust
  • aneesh added tests of the download code on our arm32 builder
  • cbrewster made network listener runnables cancellable
  • imperio added the final infrastructure bits required for video tag metadata support
  • izgzhen implemented FileID validity checking for blob URLs
  • Steve Melia added basic support for the :active selector
  • Aravind Gollakota added origin and same-origin referrer policies, as well as the Referrer-Policy header.
  • johannhof switched Servo to use the faster Brotli crate
  • manish ensured we don’t panic when <img> fails to parse its src
  • cbrewster made Servo on macOS properly handle case-sensitive file systems
  • canaltinova implemented the referrer property on the Document object
  • Ms2ger implemented the missing Exposed WebIDL annotation
  • jdm fixed keyboard input for non-QWERTY layouts
  • emilio implemented basic CSS keyframe animation
  • notriddle added support for CSS animations using rotation

New Contributors

Interested in helping build a web browser? Take a look at our curated list of issues that are good for new contributors!

Screenshot

Servo now supports CSS keyframe animations:

Dustin J. MitchellChapter in 500 Lines or Less

I wrote a chapter in the latest book in the “Architecture of Open Source Applications” series, “500 Lines or Less”.

The theme of the book is to look in detail at the decisions software engineers make “in the small”. This isn’t about large-scale system design, community management, or working on huge codebases (like, say, Firefox). Nor is it about the design and implementation of “classic” computer science algorithms that a student might learn in school. The focus is in the middle ground: given a single real-world problem, how do we approach solving it and implementing the solution in an elegant, instructive form?

My chapter is on distributed consensus. I chose the topic because I was not already familiar with it, and I felt that I might produce a more instructive result if I experienced the struggles of solving a novel problem first-hand. Indeed, distributed consensus delivered! Building on some basic, published algorithms, I worked to build a practical library to provide distributed state to an application. In the process, I ran into issues from data structure aliasing to livelock (Paxos promises not to reach more than one distinct decision; it does not promise to reach more than zero!).

The line limit (500 lines) was an interesting constraint. Many of my attempts to work around fundamental issues in distributed consensus, such as detecting failed nodes, quickly ran hundreds of lines too long. Where in a professional setting I might have produced a library of many thousands of lines and great complexity, here I produced a simple, instructive implementation with some well-understood limitations.

The entire book is available for reading online, or you can buy a PDF or printed copy. All proceeds go to charity, so please do consider buying, but at any rate please have a look and let me know what you think!

I haven’t yet read the other chapters, aside from a few early drafts. My copy is being printed, and once it arrives I’ll enjoy reading the remainder of the book.

Mozilla Localization (L10N)Localization Hackathon in Ljubljana

Earlier this week I came back from the Ljubljana Localization Hackathon which took place over the weekend. It was an inspiring meetup focused on translating Mozilla projects. I left full of energy and ideas and happy to have met many amazing people contributing to Mozilla.

Group photo

Almost thirty participants from Armenia, Bulgaria, Greece, Hungary, Macedonia, Romania, Serbia and Slovenia gathered in Ljubljana for three days. We discussed the current state of localization, the future of the localization process and technology at Mozilla. There was time for each of the communities to work on their goals as well as time for everyone to talk to each other and have fun.

Updates

The morning of the first day was dedicated to a series of updates from the Localization Drivers team. I started out by announcing the team’s updated mission statement, which is all about becoming an efficient localization provider for Mozilla. I also summarized the recent changes in the team, which is now divided into two working groups: the Technical Project Management group (Delphine, Francesco, Jeff and Peiying) and the Technical group (Axel, Matjaž, Staś and Zibi).

Francesco (flod) explained the thinking behind the new release process and the plan to use a single repository per locale for all release channels. Right now there are five different versions of Firefox (Nightly, Dev Edition Aurora, Beta, Release, and ESR) and each localization exists across four different repositories. After the migration there will only be one canonical repository for each locale. This will greatly simplify the setup for the localization teams.

Delphine then took the stage to introduce MQM which was well received. MQM is a framework for evaluating translation issues. It provides structure to the process of reviewing localizations and makes it easier to give constructive feedback to the localizers as well as track the quality of the localization over time. A central piece of MQM is a good and up-to-date style guide. Many localization teams spent the Saturday and Sunday afternoon working on their style guides. Delphine also mentioned new Transvision features available to the localizers: the Translation Consistency view and the Unchanged Strings view.

The main theme of the morning updates was simplicity and quality. We’re making a lot of effort to reduce the complexity of the localization process at Mozilla and to create approachable quality benchmarks. We want to close the feedback loop between the current localizers and new contributors; help them connect, discuss and encourage participation. Reviewers should be able to explain why a suggestion was rejected, and there needs to be an easy and contextual communication layer between new localizers and the reviewers.

An important part of this strategy is careful planning of the work load for the localization teams. In the past we ran an experiment involving Firefox for iOS: the localizers weren’t required to manually sign off on any particular changeset. It turned out to be a big success. Delphine announced that going forward there will be no more sign-off requests done by localizers themselves. Instead the Localization Drivers will verify that the changesets work and sign off on them, thus reducing the overhead for the localizers.

Without signoffs we’ll need a better way of tracking progress and understanding the state of completion for each locale. Francesco briefed everyone on the plan to group strings into so-called buckets. Buckets will be based on how visible strings are or who their target audience is. The likely candidates for individual buckets include: the main browser UI, Developer Tools, DOM/CSS Parser errors etc. Buckets will have priorities and will allow us to track progress separately per bucket.

Spectrograms

Saturday morning is also when we took it outside to do an exercise called a spectrogram. If you’ve followed the blog posts about the previous hackathons or have been to one, you’re likely familiar with the concept. Spectrograms allow us to have laid-back conversations on different topics. We start off by asking a question. Depending on how strongly they feel about the answer, the participants then move along an imaginary line. The line allows us to capture the subtleties in opinions and feelings; that is, the whole spectrum of responses.

Spectrograms in Ljubljana

A number of responses stood out to me during the exercise. We talked about the communication inside the localization teams and between the localization teams and the Drivers team. People were generally happy with how quickly they got answers on IRC and the mailing list. However, a few contributors expressed concern that it is not clear to newcomers how to contact their locale’s team.

We asked how people felt about the amount of work they did and the number of projects there were to localize. Everyone ended up in the middle of the spectrum, and Stoyan from Bulgaria offered an interesting explanation: it’s because localizers like what they do and they choose to do it themselves. Related to this was a question about deadlines: are they too short? A sentiment that resonated with a lot of participants was that the deadlines themselves weren’t the problem; rather, it was the amount of time it takes for a translation to reach the users. This turned out to be an important insight, closely related to the Live Updates to Localizations that we’re working on as part of the effort to port Firefox to L20n (more on that later).

Goce from Macedonia recalled an experience which I feel we should all remember and try not to repeat in the future. Back in the Firefox OS days there was a big rush to get a lot of content localized before the launch. In the end the release was delayed and canceled. It’s important for all involved stakeholders to have good visibility into release planning. Precise time estimates help prioritize community work and make sure the efforts beyond the call of duty that we often see from the community aren’t in vain.

The question about mentoring new contributors vs. localizing alone spurred a long conversation with many take-aways. Fredy from Greece admitted that reviewing someone’s translations can sometimes be as much work as doing the whole translation from scratch himself. He wondered if translations could have their own Discussion pages, similar to articles on Wikipedia, which would then be publicly accessible and easy to reference in the future. Matjaž noted that reviewing might in fact be rather easy; it’s giving meaningful feedback to the translators that is hard and time-consuming. MQM will definitely help in this respect. We’re working on prototyping the UI for Pontoon to make it easy to use for everyone.

Balázs from Hungary then quipped that there are two kinds of people: those who are willing to contribute and those who are able to do it, and usually the two don’t overlap. The role of authoring tools like Pontoon and Pootle is to help those who are willing bridge that gap.

The conversation about mentoring and evaluating new contributors ended on a high note with a great story from Slovenia’s very own Lan. Lan is a SUMO localizer and he was able to attract a new contributor to the Slovenian localization a year ago. When he started reviewing their contributions he realized the translations weren’t as good as he had hoped. Lan didn’t get discouraged though. He took his time to review all translations, corrected them and then asked the new contributor to go through the fixes to get an idea of how to improve. This turned out to be a good investment of Lan’s time; today the new contributor is still active and their translations are much better!

(Lan’s story is even more impressive if you consider that he is still in high school. In fact two very active members of the Slovenian community, Lan and Amadej, are both very young. I can’t wait to see what Mozilla will have become when they’re my age!)

While it’s clear that chaperoning new contributors could help foster the community growth, the group was divided with respect to how to actually do it. Some prefer to give out small independent assignments to localize real existing projects and nurture the sense of ownership. Others would rather create a single testing project with a known good translation and evaluate new contributions in reference to it.


The question about the preferred frequency of contribution always leads to interesting findings. The two extremes of the spectrum are “I want to localize small numbers of strings daily” and “I want to localize a big number of strings once a year”. In Ljubljana most of the participants chose to stand somewhere in the middle. A smaller group preferred daily assignments that could be completed during a coffee break. Another group would rather see the frequency of localization aligned with the frequency of releases of the software. One person in the middle was Marko from Serbia who summarized his choice by saying that he liked seeing the results of his work on a regular basis.

The last question that I would like to highlight here was about testing. We wanted to know how localizers test their localizations. The responses covered the entire spectrum. Some localization communities have dedicated QA teams, while other localizers dogfood their own work by using the software themselves. Another group wasn’t sure how to test or where to find the nightly builds. This suggests that we can do better at documenting the best practices for testing. Matjaž also suggested that Pontoon could display this kind of information for each project.

The difficulties in testing often stem from the fact that some translations are hard to come by in regular usage and only show up in rare situations. The experiences from localizing Firefox for iOS prove that automated screenshots are an invaluable tool in such situations, in addition to being useful overall.

Pontoon and L20n

On Sunday morning Matjaž and I took the stage to represent the Technical group of the Localization Drivers team. Matjaž presented Pontoon which had already been used by some of the localization teams.  There have been many improvements to performance and usability of Pontoon as of late: faster sync, bulk actions, more readable diffs, better filtering, suggestions from other languages, support for L20n’s syntax and more!  Looking into the future, one of the most important tasks ahead of us is the merger of Pontoon and l10n.mozilla.org. We want to make sure all information relevant to localization is in one place.

The L20n presentation was divided into two parts. I introduced L20n’s new syntax, FTL, and recommended L20n by Example and the FTL Tinker as learning resources. I then showed a few use-cases where L20n really shines. The first one was making past participles agree with the gender of the subject. The second one was about particles governing the grammatical case of nouns. My audience could easily relate: their native languages are among the ones with the most complex grammars in the world. In fact, it was an incredibly diverse gathering of languages! We had a strong group of Slavic languages (Bulgarian, Macedonian, Serbian, Slovenian), followed by a Romance language (Romanian), as well as two of the oldest languages in the world: Greek and Armenian. And don’t forget Hungarian, which is one of the few European languages that isn’t part of the Indo-European family of languages.

In the second part I showed a build of Firefox ported to L20n and demoed Live Updates to Localization. It seems like the ability to push translation updates and fixes almost live without having to wait for the next software update really is a game changer. The feedback I got afterwards was very positive.

Working in Groups

Both afternoons on Saturday and Sunday were dedicated to working in groups. All the participating localization teams had set goals leading up to the hackathon. The goals ranged from catching up with localizations to reviewing suggestions to discussing the health of the community to creating style guides.  The list of goals is available in the wiki.

Working in groups in Ljubljana

The Bulgarian community deserves a special mention here.  During the hackathon Ognyan transferred the leadership of the localization team to Stoyan (first from the left in the picture below).  Congratulations, Stoyan!  Ognyan and Stoyan then quickly proceeded to update the Bulgarian localization of Firefox Desktop to 100% complete.

Stoyan and Ognyan

Notice the Mozilla l10n photo frame above?  Make sure to check out more pictures with it taken by Nino. We also had custom-made name badges and other visual accessories.  All design materials were created by Mozilla Slovenija community designer Rok.  They are available on GitHub for other teams and hackathons to reuse.

Fun and Rest

Ljubljana is a beautiful city. It’s very friendly to pedestrians; almost everything was in walking distance from the venue. It’s also very green and inviting when it comes to spending time outdoors. After a period of focused effort it was great to take a short stroll across the city.

Thanks to the amazing organizer and host, Gašper, each evening was full of activities and opportunities to get to know each other. We tasted traditional dishes from Prekmurje, a region of Slovenia close to the Hungarian border. We visited the Ljubljana castle, which offers fantastic views of the mountains surrounding the city. We even competed at a kart racing circuit!


Helping Gašper was Nino, also from Slovenia, who managed the hackathon’s presence on social media. He took over the Mozillagram Instagram account for the weekend which resulted in a 10% increase in followers. Nino’s work was highlighted during the Weekly Meeting on Monday. I would also like to give a special shout-out to Jobava from Romania who did a great job taking notes during the whole weekend.  Having a dedicated person in charge of the note taking was instrumental to making everyone’s time productive. Jobava managed to capture a lot of details and insights. I often looked into his notes when writing this blog post. Hvala, Nino and Jobava!

The hackathon was an astounding success. It was diverse, educational and inspiring. A huge Thank you! to everyone who participated and helped organize it. I couldn’t be more excited for the future of localization at Mozilla!

Mozilla WebDev CommunityAuto-generating a changelog from git history

At Mozilla we share a lot of open source libraries. When you’re using someone else’s library, you might find that an upgrade breaks something in your application — but why? Glancing at the library’s changelog can help. However, manually maintaining a changelog when building features for a library can be a challenge.

We’ve been experimenting with auto-generating a changelog from commit history itself and so far it makes changelogs easy and painless. Here’s how to set it up. These tools require NodeJS, so this approach is best suited to JavaScript libraries.

First, you need to write commit messages in a way that allows you to extract metadata for a changelog. We use the Angular conventions which specify simple prefixes like feat: for new features and fix: for bug fixes. Here’s an example of a commit message that adds a new feature:

feat: Added a `--timeout` option to the `run` command

Here’s an example of a bug fix:

fix: Fixed `TypeError: runner is undefined` in the `run` command

The nice thing about this convention is that tools such as Greenkeeper, which sends pull requests for dependency updates, already support it.

The first problem with this is a social one; all your contributors need to follow the convention. We chose to solve this with automation by making the tests fail if they don’t follow the conventions 🙂 It’s also documented in our CONTRIBUTING.md file. We use the conventional-changelog-lint command as part of our continuous integration to trigger a test failure:

conventional-changelog-lint --from master

There was one gotcha: TravisCI only does a shallow clone, which doesn’t create a master branch. This will probably be fixed in the linter soon, but until then we had to add a workaround to our .travis.yml, along these lines:
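
# Assumed shape of the workaround (the post's exact snippet isn't reproduced here):
# fetch master explicitly so the linter has a branch to diff against.
before_script:
  - git fetch origin master:master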

Alright, everybody is writing semantic commits now! We can generate a changelog before each release using the conventional-changelog tool. Since we adopted the Angular conventions, we run it like this before tagging to get the unreleased changes:

conventional-changelog -p angular -u

This scrapes our commit log, ignores merges, ignores chores (such as dependency updates), ignores documentation updates, and makes a Markdown list of features and fixes linked to their git commit. Example:

### Bug fixes
* Fixed `TypeError: runner is undefined` in the `run` command ([abc1abcd](https://github.com/.../))

### Features
* Added a `--timeout` option to the `run` command ([abc1abcd](https://github.com/.../))

As you can see, we also make sure to write commit messages in past tense so that it reads more naturally as a historic changelog. You can always edit the auto-generated changelog to make it more readable though.

The conventional-changelog tool can update a CHANGELOG.md file directly but, for us, we just paste the Markdown into our GitHub releases so that it shows up next to each release tag.

That’s it! There are a lot of options in the tools to customize linting commits or changelog generation.

Air MozillaWebdev Beer and Tell: July 2016

Webdev Beer and Tell: July 2016 Once a month web developers across the Mozilla community get together (in person and virtually) to share what cool stuff we've been working on in...

Mozilla Addons BlogWriting an opt-in UI for an extension

Our review policies for addons.mozilla.org (AMO) require transparency when it comes to an add-on’s features and how they might affect a user’s privacy and security.

These policies are particularly relevant in cases where an add-on includes monetization features that are unrelated to its main function. For example, a video downloader add-on could include a shopping recommendation feature that injects offers from commercial websites. In cases like this, to be listed on AMO the feature would need to be opt-in. Opt-in means the add-on needs to present to the user the option to enable the feature, with a default action of keeping it disabled.

We’re often asked for examples of add-ons that do this, so I decided to create a sample WebExtension for this purpose. You can find the code on GitHub, and the built and signed example can be downloaded here.

The extension implements two opt-in prompts:

  1. A new tab. (Screenshot: tab opt-in.)
  2. A panel in the main add-on button. (Screenshot: panel opt-in.)

(If you don’t recognize the dark theme in the screenshots, it’s because I’m using Firefox Developer Edition—currently Firefox 49—for testing.)

An add-on would normally implement one prompt or the other. Opening a new tab has the advantage of getting the user’s attention right away, and has more room for content. The main disadvantage is that it can annoy users and feel too pushy. The pop-up approach is more user-friendly because it appears when the user is ready to engage with the add-on and is better integrated with the add-on UI. However, it wouldn’t work if the add-on doesn’t include buttons.

This example is completely minimal, hence the almost complete lack of styling. However, it includes the elements that AMO policies deem necessary:

  • Text explaining clearly to the user what the opt-in feature does and why it’s being offered. In this case, the extra feature is a variation of the borderify example on GitHub.
  • A link to a privacy policy and/or more information about the feature. That page should spell out its privacy and security implications.
  • A clear choice between enabling the feature and keeping it disabled, defaulting to disabled. I set autofocus="true" on the Cancel button, as can be clearly seen in the popup screenshot.

Hitting the Return key in either case, or closing the opt-in tab, should be assumed to mean that the user is choosing not to accept the feature (the tab-closing case isn’t implemented in this example, to make it easier to test). The example uses the storage API to keep track of two flags: one indicating that the user has clicked either button, and one indicating whether the user has enabled the feature. Once the opt-in choice is registered, the tab won’t show up again, and the content changes to something else.
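
As a rough sketch of that bookkeeping (the flag names here are illustrative, not the ones used in the example on GitHub):

//opt-in state sketch; illustrative, not the actual example code
const DEFAULTS = { userResponded: false, featureEnabled: false };

//record the user's choice when either button is clicked
function recordChoice(enabled) {
  return browser.storage.local.set({ userResponded: true, featureEnabled: enabled });
}

//decide whether the opt-in UI still needs to be shown
function shouldShowOptIn() {
  return browser.storage.local.get(DEFAULTS)
    .then(flags => !flags.userResponded);
}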

Note: I couldn’t find a way to look at the extension’s storage in the developer tools (I suppose it’s not implemented yet). You can clear the storage to reset the state of the extension by deleting the browser-extension-data folder in the profile.

Remember that sneaking in unwanted features is bad for your users and your add-on’s reputation, so make sure you give your users control and give them clear information to make a decision. Happy coding!

Mozilla Reps CommunityRep of the Month – May 2016

Please join us in congratulating Konstantina Papadea as Rep of the Month for May.

Konstantina is a long-time Mozilla Rep from Greece. She is also responsible for the budget and swag requests in the Reps program.

In the past months Konstantina has helped organize and chair the Reps weekly call together with Ioana. That means they are in weekly contact with many Mozillians to find new interesting topics, prepare the agenda and send out the notifications. She is also helping the Council with the formation of the Review Team we are implementing, which was already announced here and will give the Council more time to spend on mission and strategy. She became a mentor and will help inspire the new people applying to the program.

Please don’t forget to congratulate her on Discourse!

Anthony HughesReducing the NVIDIA Blacklist

We recently relaxed our graphics blocklist for users with NVIDIA hardware using certain older graphics drivers. The original blocklist entry blocked all versions less than v8.17.11.8265 due to stability issues with NVIDIA 182.65 and earlier. We recently learned, however, that the first two numbers in the version string indicate the platform version, while the latter numbers refer to the actual driver version. As a result, we were inadvertently blocking newer drivers on older platforms.

We have since opened up the blacklist for versions newer than 8.15.11.8265 (Win XP) and 8.16.11.8265 (Vista/Win7) via bug 1284322, effectively drivers released after mid-2009. This change currently exists only on Nightly, but we expect it to ride the trains unless some critical regression is discovered.
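
To make the version-string structure concrete, here is a sketch (my illustration in JavaScript, not the actual blocklist code) of why comparing full version strings went wrong:

//illustrative only, not Gecko's blocklist logic
//in "8.17.11.8265" the first two fields ("8.17") identify the platform,
//and the last two ("11.8265") the actual driver, i.e. driver 182.65
function driverPart(version) {
  return version.split(".").map(Number).slice(2);
}

function driverOlderOrEqual(a, b) {
  const [a1, a2] = driverPart(a);
  const [b1, b2] = driverPart(b);
  return a1 < b1 || (a1 === b1 && a2 <= b2);
}

//the old entry compared full version strings, so this newer Windows XP
//driver ("8.15." prefix) was blocked even though its driver part is newer:
driverOlderOrEqual("8.15.11.9745", "8.17.11.8265"); // false: the driver is newer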

If you are triaging bugs and user feedback, or are engaging with users on social media, please keep an eye out for users with NVIDIA hardware. If the user does have NVIDIA hardware, please have them check the Graphics section of about:support to confirm whether they are using a driver version that was previously blocked. If they are, try to help them update to the most recent driver version. If the issue persists, have them disable hardware acceleration to see if the issue goes away.

The same goes if you are a user experiencing quality issues (crashes, hangs, black screens, checkerboarding, etc.) on NVIDIA hardware with these drivers. Please make sure your drivers are up to date.

In either case, if the issue persists please file a bug so we can investigate what is happening.

Feel free to email me if you have any questions.

Thank you for your help!

Air MozillaRelease Promotion: Keeping Tabs on Automation

Release Promotion: Keeping Tabs on Automation Reviewing Release Engineering, Release Promotion, and the Pulse-Notify microservice developed by Connor Sheehan during his internship at Mozilla.

Air MozillaRamping Up the Web

Ramping Up the Web Nancy Pang has not yet given us a description of this presentation, nor any keyword tags to make searching for this event easier.

Air MozillaIntern Presentations 2016

Intern Presentations 2016 Group 1 of our interns are going to be presenting on what they worked on this summer.

Air MozillaGod Bless FxA

God Bless FxA Sai Chandramouli talks about what was accomplished in a summer internship at Mozilla: 1. Password Hints for weak passwords 2. Adding Geolocation data to emails...

Air MozillaEverything on the Side…

Everything on the Side… Erica Wright has not yet given us a description of this presentation, nor any keyword tags to make searching for this event easier.

Air MozillaDXR: all the things

DXR: all the things Peter Elmers has not yet given us a description of this presentation, nor any keyword tags to make searching for this event easier.

Air MozillaDiving into the Unknown

Diving into the Unknown Francis Kang has not yet given us a description of this presentation, nor any keyword tags to make searching for this event easier.

Air MozillaA Swifty Internship

A Swifty Internship Tyler Lacroix has not yet given us a description of this presentation, nor any keyword tags to make searching for this event easier.

Support.Mozilla.OrgWhat’s Up with SUMO – 14th July

Hello, SUMO Nation!

How have you been doing in July? Hopefully you’re getting some well deserved holidaying/AFK-ing done. I keep hearing chasing imaginary monsters around is fashionable nowadays… Any trainers out there?

Welcome, new contributors!

If you just joined us, don’t hesitate – come over and say “hi” in the forums!

Contributors of the week

Don’t forget that if you are new to SUMO and someone helped you get started in a nice way you can nominate them for the Buddy of the Month!

Most recent SUMO Community meeting

The next SUMO Community meeting

  • …is happening on the 20th of July!
  • If you want to add a discussion topic to the upcoming meeting agenda:
    • Start a thread in the Community Forums, so that everyone in the community can see what will be discussed and voice their opinion here before Wednesday (this will make it easier to have an efficient meeting).
    • Please do so as soon as you can before the meeting, so that people have time to read, think, and reply (and also add it to the agenda).
    • If you can, please attend the meeting in person (or via IRC), so we can follow up on your discussion topic during the meeting with your feedback.

Community

Social

Support Forum

Knowledge Base & L10n

Firefox

  • for Android
    • Version 48 is still on track – release in early August.
  • for Desktop
    • Version 48 is still on track – release in early August.

Ah, yes… nearly forgot about this – congratulations to Portugal and their supporters for winning the Euro this year. So, what will keep you busy and excited in the coming week(s)? Let us know in the comments!

Air MozillaSemaphor Overview and Demo - Zero Knowledge Collaboration Tool

Semaphor Overview and Demo - Zero Knowledge Collaboration Tool SpiderOak has been described as "Snowden's favorite cloud service". They've now added group chat and will provide an overview and demo of their new Semaphor...

Mozilla Addons BlogWebExtensions support on AMO

It’s been possible to submit WebExtensions add-ons to addons.mozilla.org (AMO) since February, but I wanted to take a moment to highlight some of the improvements we’ve made since then that make it much easier to get your Chrome or Opera add-on into AMO.

These features have all been deployed to AMO over the last couple of months and are available for use.

New linter

We are now using the add-ons linter for WebExtensions which means that when we parse a WebExtensions add-on, we are running through a new JavaScript-powered linter, instead of the Python one. This parsing occurs each time you upload a new add-on or a new version of an add-on.

It uses eslint to parse the JavaScript and a schema to verify that the WebExtensions manifest is correct. Because we’ve started to implement functionality in the new add-ons linter that didn’t exist in the old version, more things will be showing up as warnings.

As an example, if you request a permission in the manifest that Firefox doesn’t support, you’ll get a warning:

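For instance, a manifest that requests a Chrome-only permission would be flagged (gcm here is a plausible example of mine; the exact warning text may differ):

//in manifest.json: "gcm" is supported by Chrome but not Firefox,
//so the linter reports it as an unsupported permission
"permissions": [
  "storage",
  "gcm"
]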

Relaxed upload criteria

If you’ve developed an add-on for Chrome or Opera, we’d like you to upload it to AMO. For this reason, we’ve relaxed a few criteria on AMO. Once you’ve verified that an add-on works correctly using about:debugging, you should then be able to upload it to AMO quite easily. We’ve hopefully made this easier through the following steps:

Optional add-on id

If your WebExtension does not have an id specified in the manifest.json, then AMO will generate an id for your add-on and add it to the add-on signing process. You can then get the add-on id from AMO and re-use that in later API calls, if needed.

The command line tool web-ext also supports optional ids.
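
For reference, the id lives under the applications key in manifest.json; a minimal manifest can simply leave the whole block out. The values below are illustrative:

//in manifest.json, pinning an explicit id:
"applications": {
  "gecko": {
    "id": "my-addon@example.com"
  }
}

//omit the applications block entirely and AMO will generate an id for you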

Uploads

AMO will now accept an add-on as a zip file or a crx file. It will still only emit xpi files that Firefox and other Mozilla projects can consume. In the case of a crx file, the extra data added by the Chrome store will be stripped out of the file.

We recommend using a tool like web-ext to generate the appropriate file for your add-on.

Comments in the manifest file

Although comments are not valid in a JSON file, Chrome supports comments in the manifest.json file. To keep compatibility with Chrome, the addons-linter and hence AMO now support comments in the manifest file, but only comments starting with a double slash (//).

Translations

Finally, AMO now supports translations in the manifest. This again follows the Chrome standard, so that:

//in manifest.json:
"default_locale": "en",
"name": "__MSG_appName__",

//in _locales/en/messages.json:
{
  "appName": {
    "message": "My Add-on",
    "description": "The name of the add-on."
  }
}

//in _locales/de/messages.json:
{
  "appName": {
    "message": "Mein Add-on"
  }
}

The manifest will be parsed and the translations automatically entered into AMO. You need to ensure that default_locale is set in the manifest for the localisations to be activated; AMO will process the translations on the first upload.

If you’d like to get involved then join our mailing list or check out our repositories on Github.

Mozilla Localization (L10N)Localization Hackathon in Riga

Stas and I met in Riga with the Baltic and Polish l10n communities for a hackathon. We gathered Latvian, Lithuanian, and Polish localizers. Sadly, nobody from the Estonian community could attend due to conflicting schedules, but Merike Sell joined us on Saturday morning via Skype. We also had Dainis Šantars and Rūdolfs Mazurs join for a few hours each to work on Latgalian.

On Friday evening, we kicked things off warm and dry. That’s worth noting, as there wasn’t any weather like that for the rest of the weekend. Anyway, we grabbed some dinner, and ventured a few places to get a first feeling for Riga.

University of Latvia, Riga
Saturday morning we met at the University of Latvia and started the actual hackathon with the obligatory introductions and some “spectrograms”. We got some insights on testing and on which versions of Firefox our community uses. Reminder: Developer Edition or Nightly are the right versions, so use the version you work on. We also talked about the web and software in local languages. It seems that using localized versions is becoming normal. We also got more insights into the contribution patterns that people like.

I really like the conversations about contribution patterns. We start the discussion by asking how often people would like to localize. The answers quickly lead to discussions about localization quality and the context in which localizers prefer to receive that feedback, but also about the role that Mozilla plays in people’s lives. Getting a better understanding of both is critical as we try to make localization at Mozilla better.

Afterwards, stas and I gave some general project updates. The Mozilla firehose is overwhelming, so it’s good to give an update on the things that matter to the people in the room. In particular, stas covered L20n, which was well received. I talked about how we want to make Firefox l10n less bound to the aurora and beta release channels, which people also liked.

We also talked about MQM, a standard for classifying issues found in localization. Once again we learned that it’s hard to explain.

As usual, we let people work within their teams for half the time. We helped out with accounts and persuaded folks to review suggestions. We heard they also used the time to set up their teams strategically and to learn from each other.

Last but not least, huge thanks to Raivis Dejus for organizing this event. He made us all feel welcome, and as warm as it gets, and always had an anecdote about Riga to share. He turned out to be my inspiration for the Berlin hackathon.