SUMO Blog: What’s Up with SUMO – 26th May

Hello, SUMO Nation!

We’ve been through a few holidays here and there, so there’s not a lot to report this week. We hope you’ll enjoy this much lighter-than-usual set of updates :-)

Welcome, new contributors!

If you just joined us, don’t hesitate – come over and say “hi” in the forums!

Contributors of the week

Don’t forget that if you are new to SUMO and someone helped you get started in a nice way you can nominate them for the Buddy of the Month!

Most recent SUMO Community meeting

The next SUMO Community meeting

  • …is happening on WEDNESDAY the 1st of June – join us!
  • Reminder: if you want to add a discussion topic to the upcoming meeting agenda:
    • Start a thread in the Community Forums, so that everyone in the community can see what will be discussed and voice their opinion here before Wednesday (this will make it easier to have an efficient meeting).
    • Please do so as soon as you can before the meeting, so that people have time to read, think, and reply (and also add it to the agenda).
    • If you can, please attend the meeting in person (or via IRC), so we can follow up on your discussion topic during the meeting with your feedback.

Community

Social

Support Forum

Knowledge Base & L10n

Firefox

This is it for the diet version of WUWS. We’re going “lite” for this week, to keep your minds slim for the summer ;-) See you around SUMO!

Air Mozilla: Twelve Technology Forces Shaping the Next 30 Years: Interview with Kevin Kelly

Twelve Technology Forces Shaping the Next 30 Years: Interview with Kevin Kelly Much of what will happen in the next thirty years is inevitable, driven by technological trends that are already in motion. Wired founder Kevin Kelly has...

Air Mozilla: The Joy of Coding - Episode 58

The Joy of Coding - Episode 58 mconley livehacks on real Firefox bugs while thinking aloud.

Open Policy & Advocacy: The countdown is on: 24 months to GDPR compliance

Twenty-four months from now, a new piece of legislation will apply throughout Europe: the General Data Protection Regulation (GDPR). Broadly speaking, we see the GDPR as advantageous for both users and companies, with trust and security being key components of a successful business in today’s digital age. We’re glad to see an update to European data protection law – the GDPR is replacing the earlier data protection “directive”, 95/46/EC, which was drafted over 20 years ago when only 1% of Europeans had access to the Internet. With the GDPR’s formal adoption as of 14th April 2016, the countdown to compliance has begun. Businesses operating in all 28 European Union (EU) member states have until 25th May 2018 to get ready for compliance, or face fines of up to 4% of their worldwide turnover.

The GDPR aims to modernise data protection rules for today’s digital challenges, increase harmonisation within the EU, strengthen enforcement powers, and increase user control over personal data. The Regulation moves these goals forward, although it is not without its flaws. For some elements of it, the devil will be in the details, and it remains to be seen what the impact will be in practice.

That aside, there are many good pieces of the Regulation which stand out. We want to call out five:

  1. Less is more: we welcome the reaffirmation of core privacy principles requiring that businesses limit the amount of data they collect and justify the purposes for which they collect it. At Mozilla, we put these principles into action and advocate for businesses to adopt lean data practices.
  2. Greater transparency equals smarter individual choice: we applaud the Regulation’s endorsement of transparency and user education as key assets.
  3. Privacy as the default setting: businesses managing data will have to consider privacy throughout the entire lifecycle of products and services. That means that from the day teams start designing a product, privacy must be top of mind. It also means that strong privacy should always be the “by-default setting”.
  4. Privacy and competition are mutually reinforcing: with added controls for users like the ability to port their personal data, users remain the owner of their data, even when they leave a service. Because this increases the ability to move to another provider, this creates competition and prevents user lock-in within one online platform.
  5. What’s good for the user is good for business: strengthened data and security practices also decrease the risks associated with personal data collection and processing, for both users and businesses. This is not negligible: in 2015, data breaches cost an average of USD 3.79 million per impacted company, not to mention the customer trust lost.

Above and beyond the direct impact of the GDPR, its standard-setting potential is substantial. It is more than a purely regional regulation, as it will have global impact. Any business that markets goods or services to users in the EU will be subject to compliance, regardless of whether their business is located in the EU.

We will continue to track the implications of the GDPR over the next 24 months as it comes into force, and will stay engaged with any opportunities to work out the final details. We encourage European Internet users and businesses everywhere to join us – stay tuned as we continue to share thoughts and updates here.

Open Policy & Advocacy: Mozilla’s Transparency Report

Today, Mozilla released our transparency report. Transparency and openness are among Mozilla’s founding principles and a key part of who we are and how we operate: from our open, auditable codebase to our open development work in Bugzilla and Github. The report is another example of our commitment to these principles.

hacks.mozilla.org: Exporting An Indie Unity Game to WebVR

WebVR holds the key to the future of VR content access – instant gratification without any downloads or installs. Or, at least we think so! We’re building a multi-platform digital game subscription service called Jump that delivers native web games to desktop, mobile, console, and VR devices, and we’ve bet our entire business on native web technologies – HTML5, WebGL, JS, and soon WebAssembly. We set out to demonstrate how powerful the web will be for virtual reality, by building an Oculus Rift WebVR game for Jump. We built SECVRITY in a month. With such a short window, we didn’t have time to dive into the WebVR API to build it natively on the web. So, we built the game in our engine of choice – Unity 5.

SECVRITY is probably best described as “Whac-A-Mole for viruses”. You play as a computer security specialist trying to thwart a barrage of incoming viruses on your panoramic monitor setup. To disable viruses, you have to – you guessed it – look at the screen currently being attacked and click on it. While the potential for whiplash is fairly high, the potential for fun ended up being even higher, as evidenced by the barrage of people playing it in Mozilla’s Booth at GDC!

SECVRITY at GDC

Back to the technology though – Unity supports both WebGL and VR, but we quickly discovered that they were mutually exclusive and Unity did not have WebVR on their immediate roadmap. We started searching for ways to bridge this gap. Since Unity’s WebGL export spit the game out in website form, there had to be a way to connect the WebVR API to our Unity game to pipe the WebVR input into the engine. We were really hoping to not have to try and write it in that one-month window.

Luckily, one brave soul who goes by gtk2k on GitHub decided to build this bridge for everyone, almost a full year ago. His method is straightforward: he built a Unity WebGL template which includes JS files to handle WebVR input via the API, then he piped that code into Unity through one simple script. To implement the script properly in Unity, he created a camera prefab that houses 3 different cameras – a standard view camera, which is a normal Unity camera; and two stereo cameras that display side-by-side with slightly adjusted x-positions and viewport rects. The developer simply has to replace the main camera in their scene with this prefab, attach StereoCamera.cs to it, and watch the magic work. gtk2k’s bridge very cleverly makes the switch from standard camera to stereo cameras when the user hits the “Enter VR” button in the customized WebVR Unity template.

Download a Sample Unity WebVR Project or grab the UnityPackage to import the necessary files into your own project.

To try out the template yourself, here’s what you’ll need to do:

  • Get your hands on an Oculus Rift. Ensure that you enable running apps from external sources.
  • Download and install Firefox Nightly.
  • Install the WebVR enabler.
  • Grab either the entire sample project or just the UnityPackage above.
  • Install Unity 5 and be sure to enable both “WebGL Build Support” and “Windows Build Support” when prompted to select export tools.
  • Open Unity (either the sample project or your own project with the UnityPackage added) and replace your MainCamera with the WebVRCameraSet prefab.
  • Make sure StereoCamera.cs is attached to the parent node of the prefab.
  • From File > Build Settings, select WebGL as the platform (but leave Development Build unchecked).
  • Open Edit > Project Settings > Player to access the Player settings; under Resolution and Presentation, select WebVR as your WebGL Template.
  • In the same project settings, under the Publishing Settings section, ensure your WebGL Memory Size is set to a minimum of 512 MB to avoid out-of-memory errors. (For SECVRITY, we set it to 768 MB.)
  • Build to WebGL, and give it a shot in Firefox Nightly!
    • You can test local builds or upload the build to your favorite web host.

Hopefully that will get you up and running with your first Unity WebVR build! To test in the editor, you’ll need to enable the standard features for desktop VR builds. Go back to Edit > Project Settings > Player, select the Standalone tab (indicated by down arrow icon) above the Resolution and Presentation section, navigate to Other Settings, and check the boxes for both Stereoscopic Rendering and Virtual Reality Supported. These aren’t necessary for the WebVR build itself, but you’ll need them to test in the editor.

To supplement the template from a design perspective, we added explicit instructions to properly get the user into VR mode in their browser window. We also decided to give the user a choice whether to play in VR or with a mouse. This is where things got tricky.

We wanted non-VR users on desktop to be able to play SECVRITY since, well, it’s in a browser! We supported mouse controls before we supported VR, so mouse control in itself was simple. However, mouse control while VR input was being detected caused some incredibly wonky results. Essentially, the mouse movement would throw off the viewport of the VR headset, causing the user to: A. get completely lost, and B. get super sick. We had to detect whether or not the user was in VR and then disable mouse control to solve this.

Our solution is to completely disable mouse control, whether the player is using VR input or not, until they explicitly select “mouse” control from the main menu. The user must now select their input method of choice via arrow keys or controller joystick before playing. (Quick aside: WebGL/WebVR supports the Gamepad API, so integrating a controller took 0 extra work beyond what you’d do for a standalone build.) If the user chooses “mouse” while in a VR headset, then the sickness-inducing issues begin. Caveat player: we built this game in one month! Auto-detection will resolve this in future iterations.

We learned some valuable lessons in building for WebVR via Unity, mainly in designing for hybrid VR/non-VR experiences. A lot of our troubles should be solved in an official WebVR export from the engine. But even when that comes, it’s still important to understand what your user may or may not do to break your game, especially when the control inputs are so drastically different. We had to make a few tweaks to gtk2k’s code for the enter VR flows to work with recent changes to Firefox Nightly, but his codebase largely worked as advertised with very little effort on our end. That man is our hero.

The web is the future of gaming, and Jump, armed with games like SECVRITY, will prove it to the world. Web gaming provides almost instant access to games on desktop, mobile, console, VR headsets, and other devices, with no permanent downloads or installs required for users. The web can already deliver near-native speeds and, with WebGL 2.0 and WebAssembly on the horizon, we’ll start seeing near-current-generation graphics as well. Jump hopes to help drive the web revolution forward and make the web the ultimate home for games on all devices. If you want to follow Jump’s progress, you can sign up for our newsletter and for a beta key at www.jump.game. And if you want to play SECVRITY right now, you can find it as a demo on mozvr.com! Take it from us: the web will revolutionize the gaming industry. And WebVR will play an important role in showcasing the web’s power to both developers and users by providing instant access to beautiful virtual reality right from a browser.

about:community: A New Firefox Development Forum

We’ve been looking for the right home for Firefox browser development Q&A for a while now. It’s taken longer than it should have, but after a lot of discussion and experimentation with different tools and forums, we’ve finally come to a conclusion.

In retrospect the decision was obvious; hindsight is like that. But here it is; if we want everyone in the community to be a part of making Firefox great, then we should be where the community is: part of the Mozilla Community Discourse forum.

Things are a bit thin on the ground there now; I’ll be migrating over some questions and answers from other forums to stock that pond shortly. In the meantime if you’re new to Discourse it’s a very civilized piece of forum software. You can keep track of discussions happening there by logging in and taking a look in the upper right-hand corner, where you’ll see “Watching”, “Tracking”, “Normal” and “Muted”. Set that to “Watching”, and you’ll get a notification when a new topic comes up for discussion. Set it to “Tracking”, and you’ll also get a note when you’re called out by name. You can also watch or track individual threads, which is a nice touch.

Alternatively, if you’re a fan of syndicated feeds you can grab an Atom feed as follows:

https://discourse.mozilla-community.org/c/firefox-development.rss

I hope you’ll join us in helping build Firefox into everything it can be, the best browser in the world and the cornerstone of a free, open and participatory Web. And as always, if you’ve got questions about that, please email me directly.

Thank you,

– mhoye

Air Mozilla: Connected Devices Weekly Program Update, 24 May 2016

Connected Devices Weekly Program Update Weekly project updates from the Mozilla Connected Devices team.

Air Mozilla: Martes mozilleros, 24 May 2016

Martes mozilleros Reunión bi-semanal para hablar sobre el estado de Mozilla, la comunidad y sus proyectos. Bi-weekly meeting to talk (in Spanish) about Mozilla status, community and...

Air Mozilla: Bringing the Next Billion Online

Bringing the Next Billion Online Nearly 4 billion people around the world don't use the Internet. Bringing developing countries into the global digital community should be a priority for the...

Calendar: GSoC 2016: Getting Oriented

Today is the first day of the “coding period” for Google Summer of Code 2016 and I’m excited to be working on the “Event in a Tab” project for Mozilla Calendar. The past month of the “community bonding period” has flown by as I made various preparations for the summer ahead. This post covers what I’ve been up to and my experience so far.

After the exciting news of my acceptance for GSoC, I knew it was time to retire my venerable 2008 Apple laptop, which had gotten somewhat slow and “long in the tooth.” Soon, with a newly refurbished 2014 laptop via eBay in hand, I made the switch to GNU/Linux, dual-booting the latest Ubuntu 16.04. Having contributed to LilyPond before, it felt familiar to fire up a terminal, follow the instructions for setting up my development environment, and build Thunderbird/Lightning. (I was even able to make a few improvements to the documentation – removed some obsolete info, fixed a typo, etc.) One difference from what I’m used to is using Mercurial instead of Git, although the two seem fairly similar. When I was preparing my application for GSoC, my build succeeded but I only got a blank white window when opening Thunderbird. This time, thanks to some guidance from my mentor Philipp about selecting the revision to build, everything worked without any problems.

One of the highlights of the bonding period was meeting my mentors Philipp Kewisch (primary mentor) and MakeMyDay (secondary mentor). We had a video chat meeting to discuss the project and get me up to speed. They have been really supportive and helpful and I feel confident about the months ahead knowing that they “have my back.” That same day I also listened in on the Thunderbird meeting with Simon Phipps answering questions about his report on potential future legal homes for Thunderbird, which was an interesting discussion.

At this point I am feeling pretty well integrated into the Mozilla infrastructure after setting up a number of accounts – for Bugzilla, MDN, the Mozilla wiki, an LDAP account for making blog posts and later for commit access, etc. I got my feet wet with IRC (nick: pmorris), introduced myself on the Calendar dev team’s mailing list, and created a tracker bug and a wiki page for the project.

Following the Mozilla way of working in the open, the wiki page provides a public place to document the high-level details related to design, implementation, and the overall project plan. If you want to learn more about this “Event in a Tab” project, check out the wiki page.  It contains the mockup design that I made when applying for GSoC and my notes on the thinking behind it. I shared these with Richard Marti who is the resident expert on UI/UX for Thunderbird/Calendar and he gave me some good feedback and suggestions. I made a number of additional mockups for another round of feedback as we iterate towards the final design. One thing I have learned is that this kind of UI/UX design work is harder than it looks!

Additionally, I have been getting oriented with the code base and figuring out the first steps for the coding period, reading through XUL documentation and learning about Web Components and React, which are two options for an HTML implementation. It turns out there is a student team working on a new version of Thunderbird’s address book and they are also interested in using React, so there will be a larger conversation with the Thunderbird and Calendar dev teams about this. (Apparently React is already being used by the Developer Tools team and the Firefox Hello team.)

I think that about covers it for now. I’m excited for the coding period to get underway and grateful for the opportunity to work on this project. I’ll be posting updates to this blog under the “gsoc” tag, so you can follow my progress here.

— Paul Morris

QMO: Firefox 47 beta 7 Testday Results

Howdy mozillians!

Last week on Friday (May 20th), we held another successful event – the Firefox 47 beta 7 Testday.

Thank you all – Ilse Macías, Stelian Ionce, Iryna Thompson, Nazir Ahmed Sabbir, Rezaul Huque Nayeem, Tanvir Rahman, Zayed News, Azmina Akter Papeya, Roman Syed, Raihan Ali, Sayed Ibn Masudn, Samad Talukdar, John Sujoy, Nafis Ahmed Muhit, Sajedul Islam, Asiful Kabir Heemel, Sunny, Maruf Rahman, Md. Tanvir Ahmed, Saddam Hossain, Wahiduzzaman Hridoy, Ishak Herock, Md.Tarikul Islam Oashi, Md Rakibul Islam, Niaz Bhuiyan Asif, MD. Nnazmus Shakib (Robin), Akash, Towkir Ahmed, Saheda Reza Antora, Md. Almas Hossain, Hasibul Hasan Shanto, Tazin Ahmed, Badiuzzaman Pranto, Md.Majedul islam, Aminul Islam Alvi, Toufiqul Haque Mamun, Fahim, Zubayer Alam, Forhad Hossain, Mahfuza Humayra Mohona – for the participation!

A big thank you goes out to all our active moderators too!

Results:

  • no bugs were verified or triaged
  • some failures were mentioned for the APZ feature in the etherpads (link 1 and link 2); therefore, please add the requested details in the etherpads or, even better, join us on the #qa IRC channel and let’s figure them out 😉

I strongly advise every one of you to reach out to us, the moderators, via #qa during the events whenever you encounter any kind of failure. Keep up the great work! \o/

And keep an eye on QMO for upcoming events! 😉

Air Mozilla: Webdev Beer and Tell: May 2016

Webdev Beer and Tell: May 2016 Once a month web developers across the Mozilla community get together (in person and virtually) to share what cool stuff we've been working on in...

SUMO Blog: Event Report: Mozilla Ivory Coast SUMO Sprint

We’re back, SUMO Nation! This time with a great event report from Abbackar Diomande, our awesome community spirit in Ivory Coast! Grab a cup of something nice to drink and enjoy his report from the Mozilla Ivory Coast SUMO Sprint.

The Mozilla Ivory Coast community is not yet ready to forget Saturday, May 15. It was then that the first SUMO Sprint in Ivory Coast took place, lasting six hours!
For this occasion, we were welcomed and hosted by the Abobo Adjame University, the second largest university in the country.
Many students, some members of the Mozilla local community, and other members of the free software community gathered on this day.

The event began with a Mozilla manifesto presentation by Kouadio – a young member of our local SUMO team and the Lead of the Firefox Club at the university.

After that, I introduced everyone to SUMO, the areas of SUMO contribution, our Nouchi translation project, and Locamotion (the tool we use to localize).
During my presentation I learned that all the guests were really surprised and happy to learn of the existence of support.mozilla.org and a translation project for Nouchi.
They were very happy and excited to participate in this sprint, and you can see that in the photos – in their smiles and the joy you can read on their faces.

After all the presentations and introductions, the really serious things could begin. Everyone spent two hours answering questions from French users on Twitter – the session passed very quickly in the friendly atmosphere.

We couldn’t reach the goal of answering all the Army of Awesome posts in French, but everyone appreciated what we achieved, providing answers to over half the posts – we were (and still are) very proud of our job!

After the Army of Awesome session, our SUMO warriors turned to Locamotion for Nouchi localization. It was at once serious and fun. Originally planned for three hours, we localized for four – because it was so interesting :-)

Mozilla and I received congratulations from all participants for this initiative, which promotes the Ivorian language and Ivory Coast as a digital country present on the internet.

Even though we were not able to reach all our objectives, we are still very proud of what we have done. We contributed very intensely, both to help people who needed it and to improve the scale and quality of Nouchi translations in open source, with the help of new and dynamic contributors.

The sprint ended with a group tasting of garba (a traditional local dish) and a beautiful family picture.

Thank you, Abbackar! It’s always great to see happy people contributing their skills and time to open source initiatives like this. SUMO is proud to be included in Ivory Coast’s open source movement! We hope to see more awesomeness coming from the local community in the future – in the meantime, I think it’s time to cook some garba! ;-)

Mozilla L10N: Localization Hackathon in Stockholm

For the second year in a row the l10n-drivers team – represented by Jeff Beatty and me – met in Stockholm with several members of Mozilla’s Nordic communities, guests of the local Wikimedia offices, for the Nordic Viking Mozilla l10n Hackathon. The group of languages represented at the event included Danish, Finnish, Icelandic, Norwegian (both Bokmål and Nynorsk), and Swedish.

Nordic Hackathon - Stockholm

Unlike last year, when the topics and schedule of the event were largely set by l10n-drivers, this time each localization team was involved in the planning phase, and in charge of setting individual and group goals for this hackathon.

We started our Saturday morning in a sunny but cold Stockholm with some organizational updates, including topics like:

  • Change in focus from Firefox OS to Connected Devices, and how that affects localization priorities.
  • Updates about release and development cycles for iOS and desktop products, and our plans to increase participation and productivity by removing some of the existing technical barriers.
  • Mozilla’s renewed focus on quality, and how that applies to localization.
  • Our goal to improve communication channels with localizers, and to help make their training and mentoring process more streamlined through better documentation and tools.
  • How to use the newly released features in Transvision focusing on quality.

We also talked with localizers about the recent organizational changes inside l10n-drivers. In fact, Stockholm also hosted a short but intense work week for the entire team right after the hackathon.

The rest of Saturday and Sunday were reserved for each team to work on achieving their goals, and our role was mainly to help them with some targeted training, and facilitate some discussions.

Several teams started putting our focus on quality into practice by writing a style guide for their language, a step that we consider fundamental to ensure consistency and to help with onboarding new volunteers. They also worked on improving quality across projects, using Transvision’s Consistency View as a reference.

For some languages these events represent a critical moment to meet face to face and work together on their translations, since contributors might live hundreds of kilometers from each other and can only communicate online. That’s the perfect time to figure out the division of tasks inside the team, find ways to attract new contributors, rethink tools and workflows, and create onboarding documentation. And that’s exactly what happened for several of the teams at the hackathon.

The weekend ended without a Kubb tournament because of the inclement weather – even if someone insisted we should go outside in the snow and play anyway, as a Viking would – and with plans for next year’s hackathon. Italy was suggested as one of the potential venues for the event, but I’m quite positive that doesn’t count as a Nordic country 🙂

Air Mozilla: Bay Area Accessibility and Inclusive Design meetup: Fifth Annual Global Accessibility Awareness Day

Bay Area Accessibility and Inclusive Design meetup: Fifth Annual Global Accessibility Awareness Day Digital Accessibility meetup with speakers for Global Accessibility Awareness Day. #a11ybay. 6pm Welcome with 6:30pm Start Time.

SUMO Blog: What’s Up with SUMO – 19th May

Hello, SUMO Nation!

Glad to see all of you on this side of spring… How are you doing? Have you missed us as much as we missed you? Here we go yet again, another small collection of updates for your reading pleasure :-)

Welcome, new contributors!

If you just joined us, don’t hesitate – come over and say “hi” in the forums!

Contributors of the week

Don’t forget that if you are new to SUMO and someone helped you get started in a nice way you can nominate them for the Buddy of the Month!

Most recent SUMO Community meeting

The next SUMO Community meeting

  • …is happening on WEDNESDAY the 25th of May – join us!
  • Reminder: if you want to add a discussion topic to the upcoming meeting agenda:
    • Start a thread in the Community Forums, so that everyone in the community can see what will be discussed and voice their opinion here before Wednesday (this will make it easier to have an efficient meeting).
    • Please do so as soon as you can before the meeting, so that people have time to read, think, and reply (and also add it to the agenda).
    • If you can, please attend the meeting in person (or via IRC), so we can follow up on your discussion topic during the meeting with your feedback.

Community

Social

Support Forum

Knowledge Base & L10n

  • The Polish team have reached their monthly milestone – congratulations!
  • Final reminder: if you want to participate in the ongoing discussion about source material quality and frequency, take a look at this thread. We are going to propose a potential way of addressing your issues once we collate enough feedback.
  • Reminder: L10n hackathons everywhere! Find your people and get organized!

Firefox

  • for Android
    • Version 46 support discussion thread.
    • Reminder: version 47 will stop supporting Gingerbread. High time to update your Android installations!
      • Other than that, it should be a minor release. Documentation in progress!

And that’s it! We hope you are looking forward to the end of this week and the beginning of the next one… We surely are! Don’t forget to follow us on Twitter!

The Bugzilla Update: Release of Bugzilla 4.4.12, 5.0.3, and 5.1.1

Today we have several new releases for you!

All of today’s releases contain security fixes. We recommend that all Bugzilla administrators read the Security Advisory that was published along with these releases.

Bugzilla 5.0.3 is our latest stable release. It contains various useful bug fixes and security improvements:

Bugzilla 4.4.12 is a security update for the 4.4 branch:

Bugzilla 5.1.1 is an unstable development release. This release has not received QA testing from the Bugzilla Project, and should not be used in production environments. Development releases exist as previews of the features that the next major release of Bugzilla will contain. They also exist for testing purposes, to collect bug reports and feedback, so if you find a bug in this development release (or you don’t like how some feature works) please tell us.

Note: Make sure ImageMagick is up-to-date if the BmpConvert extension is enabled. If no updated ImageMagick version is available for your OS, we recommend disabling the BmpConvert extension for now (bug 1269793).


Air Mozilla: Web QA Team Meeting, 19 May 2016

Web QA Team Meeting Weekly Web QA team meeting - please feel free and encouraged to join us for status updates, interesting testing challenges, cool technologies, and perhaps a...

Air Mozilla: Reps weekly, 19 May 2016

Reps weekly This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

about:community: Jakarta Community Space Launch

This post was written by Fauzan Alfi.

It was not an ordinary Friday the 13th for Mozilla Indonesia: May 13th, 2016 was a very big day for us. After months of planning and preparation, the Mozilla Community Space Jakarta finally launched and opened for the community. It’s the fourth volunteer-run physical community space, after Bangalore (now closed), Manila and Taipei, with another one opening soon in Berlin. Strategically located in Cikini – Central Jakarta, the Space will become a place for Mozillians from Greater Jakarta and Bandung to do many activities, especially developer-focused events, and to build relationships with other tech communities in the city.

The Space

The Space. Photo by Yofie Setiawan

Invited to the event were many open source and other communities from around the city. Mozilla Reps, FSAs and Mozillians also joined to celebrate the Space opening. In his presentation, Yofie Setiawan (Mozilla Rep, Jakarta Space Manager) said he hopes that the Jakarta Community Space can be useful for many people and communities, especially by educating anyone who comes and joins the events that take place in the space.

Opening Event

Dian Ina and Rara talk to guests. Photo by Yofie Setiawan

Ceremonial first piece

Brian gets the ceremonial first bite. Photo by Yofie Setiawan

Also joining the event was Brian King from the Participation Team at Mozilla. During his remarks, Brian said that the reason behind the Jakarta Community Space is that “the Mozilla community here is one of the most active globally, with deep roots and a strong network in tech scene”. He also added that “Indonesia is an important country with a very dynamic Web presence, and we’d like to engage with more people to make the online experience better for everyone.”

The Jakarta Community Space is around 40 square meters in area and fits 20-30 people. On the front side, it has a glass wall covered by a frosted sticker with some Mozilla project wording printed on it. Inside, we have some chairs, tables, a home theater set, food & drink supplies and a coffee machine. Most of the items were donated by Mozillians in Jakarta.

The tour

The tour. Photo by Yofie Setiawan

One area where the Jakarta Community excelled was the planning and design. All the processes were done by the community itself. One of the Reps from Indonesia, Fauzan Alfi – who has a background in architecture – helped design the space and kept the process transparent on the Community Design GitHub. The purpose is to ignite collaborative design, not only from the Indonesian community but also from other parts of the globe. More creativity was shown in the mural drawings of landmarks in selected cities around the world – including Monas in Jakarta.

Jakarta Community Space means a lot for Mozilla community in Greater Jakarta and Indonesia, in general. Having a physical place means the Indonesian community will have their own home to spread the mission and collaborate with more communities that are aligned with Mozilla, especially developer communities. Hopefully, the Space will bring more and more people to contribute to Mozilla and help shape the future of the Web.

Mozilla L10N: Localization Hackathon in the Czech Republic

When I learned that I was going to co-organize the Prague L10n Hackathon, I was very excited. This would be my second visit. I wanted to see what had changed since the last time I was there. Upon arriving at the airport, I saw multilingual signs in Czech, English, Russian and Korean. When I got to the hotel, I saw lots of businessmen from the far East and a Chinese restaurant right across the street. Prague is far more globalized than it was a decade ago.

Our Prague L10n Hackathon on April 30 – May 1 united seven localizers representing these languages: Czech, Russian, Slovak, and Upper and Lower Sorbian. Three localizers had not been to any Mozilla event before, and this was the first time all of them were meeting one another. I personally had worked with a few of them through email and Bugzilla, so it was great to match the names with the faces.

Spring in Prague: localization communities of Czech, Russian, Slovak, Upper and Lower Sorbians.

My colleague Matjaž started the two-day event by sharing the latest info on organizational changes, including the most recent change in the localization team. He also updated the attendees on the latest product roadmap, including the end of Firefox OS as a mobile phone product and the renewed focus on Firefox for iOS and Android. He covered the overall team goals of streamlining localization tools and repositories. We also touched on the importance of translation quality and ways to drive it, such as establishing locale-specific style guides and terminology lists. Having these should help with recruiting and onboarding new localizers, as well as keeping the great work done by multiple contributors consistent.

The Czech team used the opportunity to brainstorm about wireframes and discuss ideas around redesigning the translation interface of Pontoon. We took notes, filed bugs and fixed some of them. Good news: just a few days after the hackathon, Michal Vašíček and Victor Bychek of the Russian community both submitted their first patches to Pontoon!

In addition to working on team goals separately, everyone sat together sharing stories about how they got involved, what they had tried for community outreach, and the challenges they faced in recruiting and retaining localization contributors. No wonder that, during the Spectrogram, most agreed that it was easier to train someone to be technical than to become a better translator. Some of them had been long-time Mozillians, working tirelessly behind the scenes. More than half were fairly new. The majority of the attendees were students or just fresh out of college. Michal Vašíček was the youngest of all, only 14 years old, with lots of great ideas. Michael Wolf single-handedly covered two locales and finished a few projects over the weekend. Tomáš Zelina was supposed to spend the weekend studying for his high school exams; he came anyway. He wouldn’t pass up a chance to meet and collaborate with his community, in addition to practicing some English. Juraj Cigáň was the sole representative for Slovak, but he found common interests and challenges with the Czech community, not just linguistically. Alexander Slovesnik, a long-time Mozillian, and Victor Bychek, a college student from Russia, live more than 1,000 km from one another; this event made their first face-to-face meeting possible.

Our local Rep, Michal Stanke, was new in this role, though a veteran as a localizer. He helped us identify and secure the venue, provided info on transportation, and arranged dining options that highlighted authentic Czech cuisine and the famous beers the country is known for. All the contributors stayed in an Airbnb within walking distance of the venue. We wrapped up our event by taking an evening walk from the Old Town to the Prague Castle and beyond. We wondered which community would host the event next year, and we all looked forward to a different city and perhaps a different country, with more new contributors attending.

The Mozilla Blog: Welcome Alex Salkever, Vice President of Marketing Communications

I’m excited to announce that Alex Salkever joins the Mozilla leadership team today as the Vice President of Marketing Communications.

In this role, Alex Salkever will be responsible for driving strategic positioning and marketing communications campaigns. Alex will oversee the global communications, social media, user support and content marketing teams and work across the organization to develop impactful outbound communications for Mozilla and Firefox products.

Alex Salkever, Mozilla

Alex was most recently Chief Marketing Officer of Silk.co, a data publishing and visualization startup, where he led efforts focused on user growth and platform partnerships. Alex has held a variety of senior marketing, marketing communications and product marketing roles working on products in the fields of scientific instruments, cloud computing, telecommunications and the Internet of Things. In these various capacities, Alex has managed campaigns across all aspects of marketing and product marketing including PR, content marketing, user acquisition, developer marketing and marketing analytics.

Alex also brings to Mozilla his experience as a former Technology Editor for BusinessWeek.com. Among his many accomplishments, Alex is the co-author of “The Immigrant Exodus”, a book named to The Economist Book of the Year List in the Business Books category in 2012.

Welcome Alex!

Background:

Alex’s bio & Mozillians profile

LinkedIn profile

High-resolution photo

Air Mozilla: The Joy of Coding - Episode 57

The Joy of Coding - Episode 57 mconley livehacks on real Firefox bugs while thinking aloud.

hacks.mozilla.org: CSS coding techniques

Lately, we have seen a lot of people struggling with CSS, from beginners to seasoned developers. Some of them don’t like the way it works, and wonder if replacing CSS with a different language would be better—CSS processors emerged from this thinking. Some use CSS frameworks in the hopes that they will have to write less code (we have seen in a previous article why this is usually not the case). Some are starting to ditch CSS altogether and use JavaScript to apply styles.

But you don’t always need to include a CSS processor in your work pipeline. You don’t need to include a bloated framework as the default starting point for any project. And using JavaScript to do the things CSS is meant for is just a terrible idea.

In this article we will see some tips and recommendations for writing better, easier-to-maintain CSS code, so your stylesheets are shorter and have fewer rules. That way, CSS can feel like a handy tool instead of a burden.

The “minimum viable selector”

CSS is a declarative language, in which you specify rules that will style elements in the DOM. In this language, some rules take precedence over others in the order they are applied, like inline styles overriding some previous rules.

For instance, if we have this HTML and CSS code:

<button class="button-warning">
.button-warning {
  background: red;
}

button, input[type=submit] {
  background: gray;
}

Despite .button-warning being defined before the button, input[type=submit] rule, it will override the latter’s background property. Why? What are the criteria for deciding which rule overrides the styles of another?

Specificity.

Some selectors are considered to be more specific than others: for instance an #id selector will override a .class selector.
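
As a minimal sketch (the selector names here are made up for illustration), an ID selector beats a class selector even when the class rule appears later in the stylesheet:

/* specificity (1,0,0) – wins */
#login-button {
  background: blue;
}

/* specificity (0,1,0) – declared later, but still loses */
.button-primary {
  background: green;
}

An element matching both selectors would get the blue background, regardless of source order.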

What happens if we use a selector that is more specific than it really needs to be? If we later want to override those styles, we need an even more specific selector. And if we later need to override this more specific selector, we will need… yes, it’s a snowball growing larger and larger and will eventually become very difficult to maintain.

So, whenever you are writing your selectors, ask yourself: is this the least specific selector that can do the job here?

All of the specificity rules are officially defined in the W3C CSS Selectors specification, which is the way to find out every single detail about CSS Selectors. For something easier to understand, read this article on CSS specificity.

Don’t throw new rules at bugs

Let’s imagine this typical situation: there is a bug in your CSS and you locate which DOM element has the wrong style. And you realise it’s somehow inheriting a property that it shouldn’t have.

Don’t just throw more CSS at it. If you do, your code base will grow a bit larger, and locating future bugs will be a bit harder.

Instead, stop, step back, and use the developer tools in your browser to inspect the element and see the whole cascade. Identify exactly which rule is applying the style you don’t want. And modify that existing rule so that it doesn’t have the unintended consequence.

In Firefox you can debug the cascade by right-clicking on an element in a page and selecting Inspect element.

Look at that cascade in all its glory. Here you can see all the rules applied to an element, in the order in which they are applied. The top entries are the ones with more specificity and can override previous styles. You can see that some rules have some properties struck out: that means that a more specific rule is overriding that property.

And you can not only see the rules, but you can actually switch them on and off, or change them on the fly and observe the results. It’s very useful for bug fixing!

The needed fix may be a change to an existing rule, a change at a different point in the cascade, or even a brand new rule. At least you will know it was the right call and something that your code base needed.

This is also a good time to look for refactoring opportunities. Although CSS is not a programming language, it is source code and you should give it the same consideration that you give to your JavaScript or Python: it should be clean, readable and be refactored when needed.

Don’t !important things

This is implied in the previous recommendations, but since it’s crucial I want to stress it: Don’t use !important in your code.

!important is a feature in CSS that allows you to break the cascade. CSS stands for “Cascading Style Sheets”; that is a hint.

!important is often used when you are rushing to fix a bug and you don’t have the time or the will to fix your cascade. It is also used a lot when you are including a CSS framework with very specific rules and it’s just too hard to override them.

When you add !important to a property, the browser will ignore other rules with higher specificity. You know you are really in trouble when you !important a rule to override another rule that was marked as !important as well.
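
For illustration only (the selectors below are hypothetical), this is the kind of stand-off the paragraph above describes – a low-specificity rule forcing its way over a more specific one:

#sidebar .cta {
  background: orange;
}

.cta {
  /* wins despite the lower specificity, because it breaks the cascade */
  background: purple !important;
}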

There is one legitimate use of !important – and it’s while using the developer tools to debug something. Sometimes you need to find which values for a property will fix your bug. Using !important in the developer tools and modifying a CSS rule on the fly lets you find these values while you ignore the cascade.

Once you know which bits of CSS will work, you can go back to your code, and look at which point of the cascade you want to include those bits of CSS.

There’s life beyond px and %

Working with px (pixels) and % (percentages) units is quite intuitive, so we will focus here on less-known or less intuitive units.

Em and rem

The most well-known relative unit is em. 1em is equivalent to the font size of that element.

Let’s imagine the following HTML bit:

<article>
  <h1>Title</h1>
  <p>One Ring to bring them all and in the darkness bind them.</p>
</article>

And a stylesheet with just this rule:

article {
  font-size: 1.25em;
}

Most browsers apply a base font size of 16 pixels to the root element by default (by the way, this is overridable—and a nice accessibility feature—by the user). So the paragraph text of that article element will probably get rendered with a font-size of 20 pixels (16 * 1.25).

What about the h1? To understand better what will happen, let’s add this other CSS rule to the stylesheet:

h1 {
  font-size: 1.25em;
}

Even though it’s also 1.25em, the same as article, we have to take into account that em units compound. Meaning that an h1 that is a direct child of the body, for instance, would have a font-size of 20 pixels (16 * 1.25). However, our h1 is inside an element with a font-size that is different from the root (our article). In this case, the 1.25 refers to the font-size we are given by the cascade, so the h1 will be rendered with a font-size of 25 pixels (16 * 1.25 * 1.25).

By the way, instead of doing all of these multiplication chains in your head, you can just use the Computed tab in the Inspector, which displays the actual, final value in pixels:

em units are really versatile and make it really easy to change—even dynamically—all the sizes of a page (not just font-size, but other properties like line-height or width).
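
As a small sketch (the class name is hypothetical), sizing several properties in em means a single font-size change rescales the whole component:

.card {
  font-size: 1em;     /* change only this value… */
  padding: 1.5em;     /* …and the padding, */
  line-height: 1.4em; /* the line height, */
  max-width: 30em;    /* and the width all scale along with it */
}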

If you like the “relative to base size” part of em but don’t like the compounding part, you can use rem units. rem units are like em‘s that ignore compounding and just take the root element size.

So if we take our previous CSS and change em units for rem in the h1:

article { font-size: 1.25em; }
h1 { font-size: 1.25rem; }

All h1 elements would have a computed font-size of 20 pixels (assuming a 16px base size), regardless of them being inside an article or not.

vw and vh

vw and vh are viewport units. 1vw is 1% of the viewport width, whereas 1vh is 1% of the viewport height.

They’re incredibly useful if you need a UI element that needs to occupy the whole screen (like the typical semi-transparent dark background of a modal), which is not always related to the actual body size.
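
Here is a minimal sketch of that kind of full-screen backdrop (the class name is made up for the example):

.modal-backdrop {
  position: fixed;
  top: 0;
  left: 0;
  width: 100vw;  /* always the full viewport width… */
  height: 100vh; /* …and the full viewport height, regardless of the body size */
  background: rgba(0, 0, 0, 0.6);
}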

Other units

There are other units that might not be as common or versatile, but you will inevitably stumble upon them. You can learn more about them on MDN.

Use flexbox

We have talked about this in a previous article about CSS frameworks, but the flexbox module simplifies the task of crafting layouts and/or aligning things. If you are new to flexbox, check out this introductory guide.

And yes, you can use flexbox today. Unless you really need to support ancient browsers for business reasons. The current support for flexbox in browsers is above 94%. So you can stop writing all of those floating divs, which are hard to debug and maintain.
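
As a quick sketch (hypothetical class name), a layout chore that used to require floats and clearfix hacks becomes a few declarative lines:

.toolbar {
  display: flex;
  align-items: center;            /* vertically center the items */
  justify-content: space-between; /* spread them along the main axis */
}

No floats, no clearfix, and the markup stays a plain list of children.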

Also, keep an eye open for the upcoming Grid module, which will make implementing layouts a breeze.

When using a CSS processor…

CSS compilers like Sass or Less are very popular in the front-end development world. They are powerful tools, and—when put to good use—can allow us to work more efficiently with CSS.

Don’t abuse selector nesting

A common feature in these processors or “compilers” is selector nesting. So, for instance, this Less code:

a {
  text-decoration: none;
  color: blue;

  &.important {
    font-weight: bold;
  }
}

Would get translated to the following CSS rules:

a {
  text-decoration: none;
  color: blue;
}

a.important {
  font-weight: bold;
}

This feature allows us to write less code and to group rules that affect elements which are usually together in the DOM tree. This is handy for debugging.

However, it is also common to abuse this feature and end up replicating the whole DOM in the CSS selectors. So, if we have the following HTML:

<article class="post">
  <header>
    <!-- … -->
    <p>Tags: <a href="..." class="tag">irrelevant</a></p>
  </header>
  <!-- … -->
</article>

We might find this in the CSS stylesheet:

article.post {
  // ... other styling here
  header {
    // ...
    p {
      // ...
      a.tag {
        background: #ff0;
      }
    }
  }
}

The main drawback is that these CSS rules have extremely specific selectors. We have already seen that this is something we should avoid. There are other disadvantages as well to over-nesting, which I have talked about in another article.

In short: do not let nesting generate CSS rules you wouldn’t type yourself.
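
To make that rule of thumb concrete, the nested Less above compiles to the first selector below; the flat version underneath is what most of us would probably type by hand:

/* generated by the over-nested Less – very specific, hard to override */
article.post header p a.tag {
  background: #ff0;
}

/* what you would likely write yourself */
.tag {
  background: #ff0;
}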

Include vs extend

Another useful feature of CSS processors is mixins, which are re-usable chunks of CSS. For instance, let’s say that we want to style buttons, and most of them have some basic CSS properties. We could create a mixin like this one in Less:

.button-base() {
  padding: 1em;
  border: 0;
}

And then create a rule like this:

.button-primary {
  .button-base();
  background: blue;
}

This would generate the following CSS:

.button-primary {
  padding: 1em;
  border: 0;
  background: blue;
}

As you can see, very handy to refactor common code!

Besides “including” a mixin, there is also the option of “extending” or “inheriting” it (the exact terminology differs from tool to tool). What this does is to combine multiple selectors in the same rule.

Let’s see an example using the previous .button-base mixin:

.button-primary {
  &:extend(.button-base)
  background: blue;
}

.button-danger {
  &:extend(.button-base)
  background: red;
}

That would be translated to:

.button-primary, .button-danger {
  padding: 1em;
  border: 0;
}

.button-primary { background: blue; }
.button-danger { background: red; }

Some articles online tell us to only use “include”, whereas others tell us to only use “extend”. The fact is that they produce different CSS, neither of them is inherently wrong, and depending on your actual scenario it may be better to use one or the other.

How to choose between them? Again, the “would I write this by hand?” rule of thumb applies.


I hope this can help you to reflect on your CSS code and write better rules. Remember what we said before: CSS is code, and as such, worthy of the same level of attention and care as the rest of your code base. If you give it some love, you will reap the rewards.

Calendar: Google Summer of Code 2016

It is about time for a new blog post. I know it has been a while and there are certainly some notable events I could have blogged about, but in today’s fast-paced world I have preferred quick Twitter messages.

The exciting news I would like to spread today is that we have a new Google Summer of Code student for this summer! May I introduce to you Paul Morris, who I believe is an awesome candidate. Here is a little information about Paul:

I am currently finishing my graduate degree and in my spare time I like to play music and work on alternative music notation systems (see Clairnote). I have written a few Firefox add-ons and I was interested in the “Event in a Tab” project because I wanted to contribute to Mozilla and to Thunderbird/Calendar which is used by millions of people and fills an important niche. It was also a good fit for my skills and an opportunity to learn more about using html/css/javascript for user interfaces.

Paul will be working on the Event in a Tab project, which aims to allow opening a calendar event or task in a tab, instead of in the current event dialog. Just imagine the endless possibilities we’d have with so much space! In the end you will be able to view events and tasks both in the traditional dialog and in a tab, depending on your preference and the situation you are in.

The project will have two phases: the first takes the current event dialog code and UI as-is and makes it possible to open it in a tab. The textboxes will inevitably be fairly wide, but I believe this is an important first step and gives users a workable result early on.

Once this is done, the second step is to re-implement the dialog using HTML instead of XUL, with a new layout made for the extra space we have in a tab. The layout should be adaptable, so that when the window is resized or the event is opened in a narrow dialog, the elements fall into place, just like you’d experience on a responsively designed website. You can read more about the project on the wiki.

Paul has already made some great UI mock-ups in his proposal, we will be going through these with the Thunderbird UI experts to make sure we can provide you with the best experience possible. I am sure we will share some screenshots on the blog once the re-implementation phase comes closer.

Paul will be using this blog to give updates about his progress. The coding phase is about to start on May 22nd after which posts will become more frequent. Please join me in welcoming Paul and wishing him all the best for the summer!

Air Mozilla: Connected Devices Weekly Program Update, 17 May 2016

Connected Devices Weekly Program Update Weekly project updates from the Mozilla Connected Devices team.

Mozilla Add-ons Blog: Add-on Compatibility for Firefox 48

Firefox 48 will be released on August 2nd. Here’s the list of changes that went into this version that can affect add-on compatibility. There is more information available in Firefox 48 for Developers, so you should also give it a look.

General

XPCOM and Modules

New

Let me know in the comments if there’s anything missing or incorrect on these lists. If your add-on breaks on Firefox 48, I’d like to know.

The automatic compatibility validation and upgrade for add-ons on AMO will happen in a few weeks, so keep an eye on your email if you have an add-on listed on our site with its compatibility set to Firefox 47.

about:community: Reinventing Mozilla on Campus

Re-post from George Roter’s blog, “Reinventing Mozilla on Campus”.

Throughout history, University students, staff and professors have often shaped the leading edge of change and innovation. The history of the web is no different: the student-built Lynx browser was one of the first, and Mosaic (Firefox’s distant ancestor!), pioneered by students and staff, opened the graphical web to millions.

I saw the impact that students and professors can make through my own experience at Engineers Without Borders Canada. Engineering students and professors on campuses across Canada and in Africa built remarkable ventures, reshaped curriculum, changed on-campus and government policy, and taught hundreds of thousands of young people about global development.

I fully believe in the potential of students, staff and professors on campuses around the world to have massive impact on Mozilla’s mission. As innovators, contributors and open web advocates. Engineers, scientists, lawyers, social scientists, economists and designers.

From my own past experience, and from what I have heard in the past year working for Mozilla, I know that our mission resonates tremendously with students and professors. The range of impact and involvement is considerable. Until now, we’ve only just scraped the surface of this potential.

We need to reinvent Mozilla on campus.

Our existing engagement on University campuses around the world is an assortment of largely disconnected programs and people. Firefox Student Ambassadors and Firefox Clubs. Mozilla Clubs. Code contribution by individual contributors. Maker Party. Mozilla Science Lab. Various professor and lab partnerships. Employee recruitment. Many of these are successful in their own right; there’s an opportunity to learn from each of them, find connections, and imagine opportunities to scale their impact with a more coordinated approach.

Photo credit: Tanha Islam and Trisa Islam

The largest of these by student involvement, Firefox Student Ambassadors (FSAs) and Firefox Clubs, has been constrained by limited and variable employee support and a focus on marketing. Our student leaders have already been “hacking” this program to introduce advocacy, code contribution, support, localization, teaching and many other activities; official support for this has lagged.

Our team came into this year with a key hypothesis as part of our strategy: That we can supercharge participation with a reinvented campus program.

The Take Back the Web campus campaign focused on privacy and security has been our first effort to test this hypothesis. Already it’s showing great promise, with over 600 campus teams signed up (including hundreds of FSAs) to have impact in 3 areas. We’re focused on learning as much as we can from this campaign.

The campus campaign is a step toward reinvention. But I think it’s now time to take a step back to ask: What impact can we imagine with a coordinated effort on campuses around the world? What do students, staff and professors want and need to be involved with Mozilla’s mission? How might we evolve our existing programs? What programs and structures would we design, and how do they relate to one another? How can we invite people on campus to innovate with Mozilla?

These are the broad questions that will guide a process over the next 9 weeks. By July 15th we aim to have a clear articulation of the impact we can have, the programs we’ll invest in and how they relate to one another, and the opportunities for students, staff and professors to participate.

We’re hoping that this process of reinventing Mozilla on campus will be participatory, and we’re inviting many voices to contribute. Lucy Harris on the Participation Team will be stewarding this process and shaping the final options. Mark Surman, Mitchell Baker, Chris Lawrence, Katharina Borchert and I will be involved in making a final decision on the direction we take.

You can read more about the details of the process in this post, but let me summarize it and the opportunities you have to be involved:

Phase 1: Listening (May 16-27)

→ provide thoughts on existing programs and opportunities you see

Phase 2: Synthesis and options (May 27-June 10)

→ we’ll frame some tensions for you to weigh in on

→ we’ll shape a set of options for conversation during the London All Hands

Phase 3: Final input (June 10-24)

→ we’ll articulate a set of options for you to consider as we move forward, and will be diving deep into these and key questions during the Mozilla All Hands in London

Phase 4: Final Decision and Dissemination (June 24-July 15)

→ we’ll take all the input and decide on a direction for moving forward

Let me finish by reiterating the opportunity. University campuses are a hotbed of innovation and a locus for creating change. Mozilla can tap into this energy and catalyze involvement in unleashing the next wave of openness and opportunity in online life. Finally, our team is excited about helping to shape a direction we can take, and investing in a robust program of participation moving forward.

I’m excited for this journey of reinventing Mozilla on campus.

The Mozilla BlogMozilla Expands Its National Gigabit Project to Austin, TX

Mozilla will provide $150,000 in funding, and also grow the local maker community, to spur gigabit innovation in Texas’ capital

When you couple lightning-fast Internet with innovative projects in the realms of education and workforce development, amazing things can happen.

That’s the philosophy behind the Mozilla Gigabit Community Fund, our joint initiative with the National Science Foundation and US Ignite. The Mozilla Gigabit Community Fund brings funding and staffing to U.S. cities equipped with gigabit connectivity, the next-generation Internet that’s 250 times faster than most other connections. Our goal: Spark the creation of groundbreaking, gigabit-enabled educational technologies so that more people of all ages and backgrounds can read, write, and participate on this next-generation Web.

As we just announced at the Gigabit City Summit in Kansas City, we’re expanding our gigabit work to the city of Austin, TX in August 2016. Selected from a list of contenders from across the country, Austin stood out due to its existing city-wide digital inclusion plan, active developer community, and growing informal education landscape. Beginning this fall, Mozilla will provide $150,000 in grant funding to innovative and local projects and tools that leverage Austin’s Google Fiber network. Think: 4K streaming in classrooms, immersive virtual reality, and more.

(In the existing Mozilla Gigabit cities of Chattanooga, TN and Kansas City, projects include real-time water monitoring systems, 3D learning tools for classrooms, and specialized technology for first responder training. Read more about those projects here.)

Individuals from the Chattanooga gigabit project Hyperaudio participate in a New York City Maker Party.

Mozilla is also investing in the makers and educators who make Austin great. We’ll help create Gigabit Hive Austin — a network of individuals, schools, nonprofits, museums, and other local organizations passionate about teaching and learning the Web. Hive Austin will be one of 14 Mozilla Hive networks and communities across four continents that teach web literacy and 21st-century skills.

Mozilla will open the first round of grant applications in Austin this August, and accept applications through October 18, 2016. Applicants and projects don’t have to be from Austin originally, but must be piloted locally. Click here to learn about the RFP process.

This spring, Mozilla is also providing $134,000 in new gigabit funding in Chattanooga and Kansas City. Funds will support projects that explore gigabit and robotics, big data, the Internet of Things, and more. Learn more.

Over the next two years, Mozilla will be expanding its Gigabit work to two additional cities. Interested in becoming a future Gigabit Hive city? We will reopen the city application process in late 2016.

QMOFirefox 47 Beta 7 Testday, May 20th

Hey y’all!

I am writing to let you know that next week on Friday (May 20th) we are organizing the Firefox 47 Beta 7 Testday. The main focus will be on the APZ feature and plugin compatibility. Check out all the details via this etherpad.

No previous testing experience is needed, so feel free to join us in the #qa IRC channel, where our moderators will offer you guidance and answer your questions.

Join us and help us make Firefox better! 😉

See you on Friday! \o/

Mozilla Add-ons BlogAMO technical architecture

addons.mozilla.org (AMO) has been around for more than 12 years, making it one of the oldest websites at Mozilla. It celebrated its 10th anniversary a couple of years ago, as Wil blogged about.

AMO started as a PHP site that grew and grew as new pieces of functionality were bolted on. In October 2009 the rewrite from PHP to Python began. New features were added, the site grew ever larger, and now a few cracks are starting to appear. These are merely the result of a site that has lots of features and functionality and has been around for a long time.

The site architecture is currently something like below, but please note this simplifies the site and ignores the complexities of AWS, the CDN and other parts of the site.

Basically, all the code is in one repository and the main application (a Django app) is responsible for generating everything: HTML, emails, APIs, and it all gets deployed at the same time. There are a few problems with this:

  • The amount of functionality in the site has caused such a growth in interactions between the features that it is harder and harder to test.
  • Large JavaScript parts of the site have no automated testing.
  • The JavaScript and CSS spill over between different parts of the site, so changes in one regularly break other parts of the site.
  • Not all parts of the site have the same expectation of uptime but are all deployed at the same time.
  • Not all parts of the site have the same requirements for code contributions.

We are moving towards a new model similar to the one used for Firefox Marketplace. Whereas Marketplace built its own front-end framework, we are going to be using React on the front end.

The end result will start to look something like this:

[Diagram: the proposed architecture, with separate front ends calling Python REST APIs]

A separate version of the site is rendered for each of the different use cases, for example developers or users. A request comes in and hits the appropriate front-end stack, which renders the site on the server using universal React in Node.js. It accesses the data store by calling the appropriate Python REST APIs.

In this scenario, the legacy Python code will migrate to being a REST API that manages storage, transactions, workflow, permissions and the like. All the front-facing user interface work will be done in React, with each front end kept as independent from the others as possible.
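
A hedged sketch of that pattern (this is not the actual AMO code; the endpoint URL, component name and port below are made up, and it assumes a recent Node.js with a global fetch):

// server.js (sketch): a Node.js front end that renders React on the server
// and gets its data from the Python REST API.
const express = require("express");
const React = require("react");
const ReactDOMServer = require("react-dom/server");

// A toy view; each real front end (developers, users, ...) would be a full React app.
function AddonList(props) {
  return React.createElement(
    "ul", null,
    props.addons.map(a => React.createElement("li", { key: a.id }, a.name))
  );
}

const app = express();

app.get("/discovery", function (req, res) {
  // The legacy Django code base is reduced to a REST API that the front end consumes.
  fetch("https://addons.example.org/api/v3/addons/")
    .then(r => r.json())
    .then(data => {
      // "Universal" rendering: the same component can also run in the browser.
      const html = ReactDOMServer.renderToString(
        React.createElement(AddonList, { addons: data.results })
      );
      res.send("<!doctype html><div id='root'>" + html + "</div>");
    });
});

app.listen(3000);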

It’s not quite microservices, but it is the breaking up of a larger site into smaller, independent pieces. The first part of this is happening with the “discovery pane” (accessible at about:addons). This is our first project using this infrastructure, and it features a new, streamlined way to install add-ons, served by the new technical architecture.

As we roll out this new architecture we’ll be doing more blog posts, so if you’d like to get involved then join our mailing list or check out our repositories on GitHub.

SUMO BlogWhat’s Up with SUMO – 12th May

Hello, SUMO Nation!

Yes, we know, Friday the 13th is upon us… Fear not, in good company even the most unlucky days can turn into something special ;-) Pet a black cat, find a four leaf clover, smile and enjoy what the weekend brings!

As for SUMO, we have a few updates coming your way. Here they are!

Welcome, new contributors!

If you just joined us, don’t hesitate – come over and say “hi” in the forums!

Contributors of the week

We salute you!

Don’t forget that if you are new to SUMO and someone helped you get started in a nice way you can nominate them for the Buddy of the Month!

Most recent SUMO Community meeting

The next SUMO Community meeting

  • …is happening on WEDNESDAY the 18th of May – join us!
  • Reminder: if you want to add a discussion topic to the upcoming meeting agenda:
    • Start a thread in the Community Forums, so that everyone in the community can see what will be discussed and voice their opinion here before Wednesday (this will make it easier to have an efficient meeting).
    • Please do so as soon as you can before the meeting, so that people have time to read, think, and reply (and also add it to the agenda).
    • If you can, please attend the meeting in person (or via IRC), so we can follow up on your discussion topic during the meeting with your feedback.

Community

Social

Support Forum

Knowledge Base & L10n

Firefox

  • for iOS
    • Firefox for iOS 4.0 IS HERE! The highlights are:
      • Firefox is now present on the Today screen.
      • You can access your bookmarks in the search bar.
      • You can override the certificate warning on sites that present one (but be careful!).
      • You can print webpages.
      • Users on iOS 8 or lower will not be able to add the Firefox widget. See Common Response Available.
    • Start your countdown clocks ;-) Firefox for iOS 5.0 should be with us in approximately 6 weeks!

Thanks for your attention and see you around SUMO, soon!

WebmakerMozilla Badges Update: Backpack to the Future

This post was written by Tim Riches of Digitalme and originally appeared on Medium.com

In 2011, Mozilla’s bold Open Badges project changed the digital credentialing landscape as we knew it. With support from the MacArthur Foundation the project brought partners together worldwide, empowered by the new opportunity to recognise learning wherever it happens, and created a disruptive global movement and the potential for a new skills currency.

Backpack brainstorming session MozFest

Five years on, thousands of organisations are now issuing badges, from non-profits to major employers and universities. These organisations have embraced the Open Badge standard and philosophy, and have developed badge programs to support their learners to achieve their aspirations.

At Digitalme, we are proud to have been there from the start and witness firsthand the transformative potential of Open Badges. Whether it’s providing people who have fallen out of education with a way to articulate their skills and passions, or providing employers with an agile tool to recruit, retain and progress talent, it is clear that Open Badges are delivering real value and helping respond to the global education challenges we are all facing.

However, in order for this value proposition to be fully realised, Open Badges needs to be underpinned by an active community of committed players who support the evolution of the technical infrastructure, compelling educational content and advocacy work (Mark Surman, Jan 2016).

Over the past four years, Digitalme has worked closely with Mozilla to advocate for the Open Badge standard, to implement badge projects in the UK and Europe, and, more recently, provide code contributions to improve the Backpack. As a non-profit, we are proud to have had support from Mark Surman as a member of the Digitalme board of Directors, to help steer and influence this work. We also value the support from Nominet Trust for the Badge the UK project and, most recently, Erasmus+, enabling us to co-found The Open Badge Network, which is amplifying the work of educators and employers across Europe, providing guidance & information for new badge issuers and supporting the development of the Open Badge technical infrastructure.

Badge The World — Highlighting Open Badges activity worldwide — a collaboration between Mozilla & DigitalMe

We are delighted that Mozilla is now deepening its relationship with Digitalme to support our ongoing role within the Open Badge community. DigitalMe will now take on direct leadership of Open Badges on behalf of Mozilla, working with the community to:

  • shape the next version of the open badges technical infrastructure
  • develop strategic partnerships to further support this work
  • enable Mozilla to realize its own ambition to credential its learning programs.

Mozilla community at MozFest

The technical infrastructure is a cornerstone of this work: the community needs the right technical infrastructure to support its growth, while at the same time empowering users to share their achievements and maintain ownership of their learning data. Many platforms have integrated open badges into their offerings, providing badge discovery, evidencing and display functionality. However, the Open Badges technical infrastructure needs continued development to keep pace with these services, as the needs of badge earners, issuers and service providers evolve.

Open Badge workshop at MozFest

To improve the user experience across web and mobile devices, our first action will be to replace Persona with Passport.js. This will also give us the flexibility to let users log in with other identity providers in the future, such as Twitter, LinkedIn and Facebook. We will also be improving stability and updating the code base.
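
As a rough illustration of the kind of wiring involved (a minimal sketch, not the Backpack’s actual implementation; the keys, callback URL and routes below are placeholders):

const express = require("express");
const session = require("express-session");
const passport = require("passport");
const TwitterStrategy = require("passport-twitter").Strategy;

// Configure one identity provider; further strategies (LinkedIn, Facebook, ...)
// would be registered the same way.
passport.use(new TwitterStrategy(
  {
    consumerKey: process.env.TWITTER_CONSUMER_KEY,
    consumerSecret: process.env.TWITTER_CONSUMER_SECRET,
    callbackURL: "https://backpack.example.org/auth/twitter/callback"
  },
  // Called once Twitter has authenticated the user; map the profile to a local account.
  function (token, tokenSecret, profile, done) {
    done(null, { id: profile.id, name: profile.displayName });
  }
));

// Store the whole user object in the session for simplicity in this sketch.
passport.serializeUser(function (user, done) { done(null, user); });
passport.deserializeUser(function (user, done) { done(null, user); });

const app = express();
app.use(session({ secret: "use a real secret", resave: false, saveUninitialized: false }));
app.use(passport.initialize());
app.use(passport.session());

// Start the OAuth flow, then handle the provider's callback.
app.get("/auth/twitter", passport.authenticate("twitter"));
app.get("/auth/twitter/callback",
  passport.authenticate("twitter", { failureRedirect: "/login" }),
  function (req, res) { res.redirect("/backpack"); });

app.listen(3000);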

We will be reviewing additional requirements for the Backpack and technical infrastructure, gathered from user research at MozFest supported by The Nominet Trust in the UK, to create a roadmap for further development, working closely with colleagues from the Badge Alliance.

Backpack working group MozFest

We have already been approached by members of the badging community who want to assist with coding, which is very encouraging. We will be releasing a plan for code contribution and partnership shortly, and we plan to share regular updates about this work here and via the existing Open Badge community channels. If you would like to get in touch, please contact backpack@digitalme.co.uk or follow the progress @digitalme_

Air MozillaWeb QA Team Meeting, 12 May 2016

Web QA Team Meeting Weekly Web QA team meeting - please feel free and encouraged to join us for status updates, interesting testing challenges, cool technologies, and perhaps a...

Air MozillaReps weekly, 12 May 2016

Reps weekly This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

The Mozilla BlogAdvance Disclosure Needed to Keep Users Secure

User security is paramount. Vulnerabilities can weaken security and ultimately harm users. We want people who identify security vulnerabilities in our products to disclose them to us so we can fix them as soon as possible. That’s why we were one of the first companies to create a bug bounty program and that’s why we are taking action again – to get information that would allow us to fix a potential vulnerability before it is more widely disclosed.

Today, we filed a brief in an ongoing criminal case asking the court to ensure that, if our code is implicated in a security vulnerability, the government must disclose the vulnerability to us before it is disclosed to any other party. We aren’t taking sides in the case, but we are on the side of the hundreds of millions of users who could benefit from timely disclosure.

The relevant issue in this case relates to a vulnerability allegedly exploited by the government in the Tor Browser. The Tor Browser is partially based on our Firefox browser code. Some have speculated, including members of the defense team, that the vulnerability might exist in the portion of the Firefox browser code relied on by the Tor Browser. At this point, no one (including us) outside the government knows what vulnerability was exploited and whether it resides in any of our code base. The judge in this case ordered the government to disclose the vulnerability to the defense team but not to any of the entities that could actually fix the vulnerability. We don’t believe that this makes sense because it doesn’t allow the vulnerability to be fixed before it is more widely disclosed.

Court-ordered disclosure of vulnerabilities should follow the best practice of advance disclosure that is standard in the security research community. In this instance, the judge should require the government to disclose the vulnerability to the affected technology companies first, so it can be patched quickly.

Governments and technology companies both have a role to play in ensuring people’s security online. Disclosing vulnerabilities to technology companies first allows us to do our job to prevent users from being harmed and to make the Web more secure.

Mozilla Add-ons BlogAdd-ons Update – Week of 2016/05/11

I post these updates every 3 weeks to inform add-on developers about the status of the review queues, add-on compatibility, and other happenings in the add-ons world.

The Review Queues

In the past 3 weeks, 1387 listed add-ons were reviewed:

  • 1314 (95%) were reviewed in fewer than 5 days.
  • 40 (3%) were reviewed between 5 and 10 days.
  • 33 (2%) were reviewed after more than 10 days.

There are 67 listed add-ons awaiting review.

You can read about the recent improvements in the review queues here.

If you’re an add-on developer and are looking for contribution opportunities, please consider joining us. Add-on reviewers get invited to Mozilla events and earn cool gear with their work. Visit our wiki page for more information.

Compatibility Communications

Most of you should have received an email from us about the future compatibility of your add-ons. You can use the compatibility tool to enter your add-on ID and get some info on what we think is the best path forward for your add-on. This tool only works for listed add-ons.

To ensure long-term compatibility, we suggest you start looking into WebExtensions, or use the Add-ons SDK and try to stick to the high-level APIs. There are many XUL add-ons that require APIs that aren’t available in either of these options, which is why we ran a survey so we know which APIs we should look into adding to WebExtensions. You can read about the survey results here.

We’re holding regular office hours for Multiprocess Firefox compatibility, to help you work on your add-ons, so please drop in on Tuesdays and chat with us!

Firefox 47 Compatibility

The compatibility blog post for 47 is up and the bulk validation has been run.

Firefox 48 Compatibility

The compatibility blog post for Firefox 48 is coming up soon.

As always, we recommend that you test your add-ons on Beta and Firefox Developer Edition to make sure that they continue to work correctly. End users can install the Add-on Compatibility Reporter to identify and report any add-ons that aren’t working anymore.

Extension Signing

The wiki page on Extension Signing has information about the timeline, as well as responses to some frequently asked questions. The current plan is to remove the signing override preference in Firefox 47.

The preference was actually removed recently in the beta channel (future Firefox 47), though this was done before the unbranded builds were available for testing. We’re trying to get those builds out as soon as possible to avoid more disruption. For now I suggest you use Developer Edition for testing or, if your add-on is restartless, you can also use the temporary load option.

Air MozillaThe Joy of Coding - Episode 56

The Joy of Coding - Episode 56 mconley livehacks on real Firefox bugs while thinking aloud.

The Mozilla BlogMozilla Open Source Support (MOSS): Now Open To All Projects

Last year, we launched the Mozilla Open Source Support Program (MOSS) – an award program specifically focused on supporting open source and free software. The first track within MOSS (“Foundational Technology”) provides support for open source and free software projects that Mozilla uses or relies on. We are now adding a second track. “Mission Partners” is open to any open source project in the world which is undertaking an activity that meaningfully furthers Mozilla’s mission.

Our mission, as embodied in our Manifesto, is to ensure the Internet is a global public resource, open and accessible to all. An Internet that truly puts people first, where individuals can shape their own experience and are empowered, safe and independent. We know that many other software projects around the world share these goals with us, and we want to use our resources to help and encourage others to work towards them.

So if you think your project qualifies, we encourage you to apply. Applications for the Mission Partners track are open as of today. (Applications for Foundational Technology also remain open.) You can read more about our selection criteria and committee on the wiki. The budget for this track for 2016 is approximately US$1.25 million.

We are keen to enable applications from groups not currently connected with Mozilla and from communities outside the English-speaking free software world. Therefore, applications for Mission Partners do not require a Mozillian to support them. Instead, they must be endorsed by a well-known and respected figure from the wider software community of which the project is a part.

The deadline for applications for the initial batch of Mission Partners awards is Tuesday, May 31 at 11:59pm Pacific Time. The first awardees will be announced at the Mozilla All Hands in London in the middle of June. After that time, applications will continue to be accepted and will be considered on an ongoing basis.

If you want to be kept informed of updates to the MOSS program, please join our discussion forum and read our updates on the Mozilla blog.

We look forward to considering the applications.

WebmakerEmpowering Youth Around the World

How can today’s youth get started in civic engagement at home and in the world?

After we discussed that question during our latest Mozilla Curriculum Workshop, a youth-facing civic engagement guide was born. You can watch the discussion and the idea for the guide unfold below.

Our distinguished guests included:

  • Rafranz Davis, Executive Director of Professional & Digital Learning for Lufkin ISD and speaker on STEM education, teacher voice, digital equity and diversity in edtech.
  • DC Vito, Executive Director of The LAMP (Learning About Media Project), a media literacy organization dedicated to fostering critical, positive, and thriving citizenship.
  • Jeremy Dean, Director of Education at Hypothes.is, an open platform for annotating the web for collaboration, discussion, organization, and research.

A few highlights from the discussion included:

Mozilla Curriculum Workshop Etherpad

The conversation took shape as an idea for a youth-facing civic engagement guide that equips youth with…

  • privacy and safety best practices
  • a list of youth civic engagement organizations
  • code of conduct tools
  • a collection of fair-use, social media, and user rights resources
  • how-to guides for convening, facilitating, and organizing

You can find the civic engagement guide prototype on the etherpad or in our episode’s GitHub repo. Please feel free to comment, ask questions and to use the materials in your own work, as well. We’d love to see this guide develop further to include even more resources, so please contribute! Let us know how to improve the guide for youth in your local community.

Our next Mozilla Curriculum Workshop is tentatively scheduled for Tuesday June 14th at 10am PT, 1pm ET, 5pm UTC (Subject to change). Join co-hosts Amira Dhalla and Chad Sansing and invited guests and drop-in Mozillians as we talk shop about summer learning and curriculum development for the web. We hope you’ll join us for this special episode!

Are you on the go or unable to tune in at our normal broadcast time? Is audio better for you than video? Listen to our March, April, and May episodes as podcasts! Download the links for .mp3 versions of each Mozilla Curriculum Workshop.

The Mozilla BlogFirefox for iOS Makes it Faster and Easier to Use the Mobile Web the Way You Want

We’re always focused on making the best Firefox experience we can offer. We want to give you complete control over your web experience, while also making sure to protect your privacy and security the best we can. Today, we’re pleased to share an update to Firefox for iOS that gives you a more streamlined experience and that allows for more control over your mobile browsing experience.

What’s New in Firefox for iOS?

iOS Today Widget: We know that getting to what you need on the Web fast is important, especially on your mobile, so you can access Firefox through the iOS Today widget to quickly open a new tab, a new private tab or a recently copied URL.

iOS Today Widget in Firefox for iOS

Awesomebar: Firefox for iOS allows you to search your bookmarks and history within the smart URL bar, making it easier to quickly access your favorite websites.

Search bookmarks in Firefox for iOS

Manage Security: By default, Firefox helps to ensure your security by warning you when a website’s connection is not secure. When you attempt to access an unsafe website, you’ll see an error message stating that the connection is untrusted, and you are prevented from accessing that site. With iOS, you can now temporarily ignore these error messages for websites you have deemed safe but that Firefox might register as potentially unsafe.

Override certificate errors in Firefox for iOS

To experience the newest features and use the latest version of Firefox for iOS, download the update and let us know what you think.

Air MozillaFirefox Test Pilot: Suit up and take experimental features for a test flight

Firefox Test Pilot: Suit up and take experimental features for a test flight Be the first to try experimental Firefox features. Join Test Pilot to unlock access to our rainbow launchers, teleportation devices, security sphinxes, invisibility cloaks –...

Mozilla Web DevelopmentExtravaganza – May 2016

Once a month, web developers from across Mozilla get together to talk about the work that we’ve shipped, share the libraries we’re working on, meet new folks, and talk about whatever else is on our minds. It’s the Webdev Extravaganza! The meeting is open to the public; you should stop by!

You can check out the wiki page that we use to organize the meeting, or view a recording of the meeting in Air Mozilla. Or just read on for a summary!

Shipping Celebration

The shipping celebration is for anything we finished and deployed in the past month, whether it be a brand new site, an upgrade to an existing one, or even a release of a library.

Normandy, the Recipe Server

First up was Osmose (that’s me!), sharing the news that Normandy has shipped! Normandy is a service that will eventually power several Firefox features that involve interacting with users and testing changes to Firefox quickly and safely, such as recommending features that may be useful to users or offering opportunities to try out changes. Right now the service is powering Heartbeat surveys being sent to release users.

Big thanks to the User Advocacy and Web Engineering teams for working on the project!

MDN Save Draft Feature

Next was shobson, who talked about MDN‘s Save Draft feature. When editing an MDN article, the site autosaves your edits to localStorage (if it’s available). Then, when you revisit the editing interface later, the site offers to let you restore or discard the draft, disabling autosave until a decision is made. Future improvements may include previewing drafts and notifying users when an article has changed since their draft was saved.
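
The general pattern might look something like this (a rough sketch under our own assumptions, not MDN’s actual implementation; the storage key and the #wiki-editor selector are made up):

function initDraftSupport(editor) {
  if (!window.localStorage) return;                  // localStorage may be unavailable
  const DRAFT_KEY = "draft:" + location.pathname;
  const draft = localStorage.getItem(DRAFT_KEY);

  if (draft !== null) {
    // An old draft exists: ask the user what to do before autosaving again,
    // so the stored draft is not silently overwritten.
    if (confirm("Restore your unsaved draft?")) {
      editor.value = draft;
    }
    localStorage.removeItem(DRAFT_KEY);
  }

  // Autosave every edit once any previous draft has been dealt with.
  editor.addEventListener("input", function () {
    localStorage.setItem(DRAFT_KEY, editor.value);
  });
}

initDraftSupport(document.querySelector("#wiki-editor"));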

Air Mozilla Thumbnails

peterbe stopped by to talk about Air Mozilla‘s chapters feature, which allows users to mark and link to segments in a video. The site now auto-generates thumbnails for chapters to help preview what the chapter is about.

Roundtable

The Roundtable is the home for discussions that don’t fit anywhere else.

Docker Development Environments

Last up was jgmize, who asked about use of Docker for easy development environments. The general consensus was that most of the developers present had tried using Dockerized development environments, but tended towards using it only for deployed services or not at all.

Some of the interesting projects brought up for using Docker for development or deployment were:

Check ’em out!


If you’re interested in web development at Mozilla, or want to attend next month’s Extravaganza, subscribe to the dev-webdev@lists.mozilla.org mailing list to be notified of the next meeting, and maybe send a message introducing yourself. We’d love to meet you!

See you next month!

Air MozillaConnected Devices Weekly Program Update, 10 May 2016

Connected Devices Weekly Program Update Weekly project updates from the Mozilla Connected Devices team.

Air MozillaMartes mozilleros, 10 May 2016

Martes mozilleros Reunión bi-semanal para hablar sobre el estado de Mozilla, la comunidad y sus proyectos. Bi-weekly meeting to talk (in Spanish) about Mozilla status, community and...

The Mozilla BlogYou Can Help Build the Future of Firefox with the New Test Pilot Program

When building features for hundreds of millions of Firefox users worldwide, it’s important to get them right. To help figure out which features should ship and how they should work, we created the new Test Pilot program. Test Pilot is a way for you to try out experimental features and let us know what you think. You can turn them on and off at any time, and you’ll always know what information you’re sharing to help us understand how these features are used. Of course, you can also use Test Pilot to provide feedback and suggestions to the teams behind each new feature.

As you’re experimenting with new features, you might experience some bugs or lose some of the polish from the general Firefox release, so Test Pilot allows you to easily enable or disable features at any time.

Feedback and data from Test Pilot will help determine which features ultimately end up in a Firefox release for all to enjoy.

What New Experimental Features Can You Test?

Activity Stream: This experiment will make it easier to navigate through browsing history to find important websites and content faster. Activity stream helps you rediscover the things you love the most on the web. Each time you open a new tab, you’ll see your top sites along with highlights from your bookmarks and history. Simply browse the visual timeline to find what you want.

Tab Center: Display tabs vertically along the side of the screen instead of horizontally along the top of the browser to give you a new way to experience tabbed browsing.

Universal search: Combines the Awesome Bar history with the Firefox Search drop down menu to give you the best recommendations so you can spend less time sifting through search results and more time enjoying the web. You’ll notice that search suggestions look different. If you have been to a site before, you will see it clearly highlighted as a search suggestion. Recommended results will include more information about the site suggestion, like top stories on the news page or featured content.

How do I get started?

Test Pilot experiments are currently available in English only and we will add more languages later this year. To download Test Pilot and help us build the future of Firefox, visit https://testpilot.firefox.com/

Mozilla Add-ons BlogResults of the WebExtensions API Survey

In March, we released a survey asking add-on developers which APIs they need to transition successfully to WebExtensions. So far, 235 people have responded, and we’ve summarized some of the findings in these slides.

Developers with the most add-ons responded at a disproportionate rate. Those with 7 or more add-ons represent 2% of the add-on developer community, and those with 4-6 add-ons represent 3%, but they comprised 36.2% of survey respondents. This didn’t come as a surprise, since the most active developers are also the most engaged and have the most to gain by migrating to WebExtensions.

How many add-ons have you worked on?

Nearly half of respondents have tried implementing their add-ons in Chrome, and the most cited limitation is that it’s restrictive. Developers could not do much with the UI other than add buttons or content tabs. We intend to offer APIs that give developers more freedom to customize their add-ons, and these results tell us we’re on the right track.

In the coming months, we’ll draw on these results to inform our decisions and priorities, to ensure WebExtensions lives up to its promise…and goes beyond.

QMOFirefox 47 Beta 3 Testday Results

Hey everyone!

Last Friday, May 6th, we held Firefox 47 beta 3 Testday, and, of course, it was another outstanding event!

A big THANK YOU goes out to Comorasu Cristian-Iulian, Luna Jernberg, Vuyisile Ndlovu, Iryna Thompson, Moin Shaikh, Rezaul Huque Nayeem, Nazir Ahmed Sabbir, Hossain Al Ikram, Azmina Akter Papeya, Saddam Hossain, Majedul islam, Tarikul Islam Oashi, Jobayer Ahmed Mickey, Kazi Nuzhat Tasnem, Syed Nayem Roman, Sayed Ibn Masud, Tanvir Rahman, Tazin Ahmed, Md Rakibul Islam, Mohammad Maruf Islam, Almas Hossain, Maruf Rahman, Sajedul Islam, Forhad Hossain, Md. Raihan Ali, Wahiduzzaman Hridoy, Mahfuza Humayra Mohona, Fahim, Asif Mahmud Shuvo, Mohammed Jawad Ibne Ishaque, Zayed News, Md. Rahimul islam and Akash.

Also, thanks to all our active moderators too!

Results:

  • some potential issues (currently under investigation) were noticed while testing the Synced Tabs Sidebar, and none for the YouTube Embedded Rewrite feature.
  • 2 bugs were verified: 1227477 and 1240729

I strongly advise every one of you to reach out to us, the moderators, via #qa during the events whenever you encounter any kind of failure. Keep up the great work!

And keep an eye on QMO for upcoming events! \o/

Mozilla L10NLocalization Hackathon in Mexico

From April 9-10th 2016 we held a localization Hackathon in Oaxaca, Mexico. A total of 21 people gathered in this beautiful city for two days of work and fun. Eight locales were represented there, most of which were indigenous languages:

  • Spanish from Mexico
  • Triqui
  • Purépecha
  • Mozilla Nativo
  • Mixteco de suroeste
  • Mixteco de oeste central
  • Maya kaqchiquel
  • Zapoteco

As Jeff has already explained in a previous blog post this year’s l10n hackathons have a slightly different format than last year’s. Communities are more in control of their own agenda and need to determine specific and detailed goals beforehand. They are then expected to tackle those, mostly on their own, during the two days. L10n-drivers present (Jeff and I) gave summaries and presentations concerning the current status of Mozilla projects on Saturday morning – but the rest of the time we played mostly the role of observers and facilitators while the localizers took control of the event.
Exciting, right? 🙂
Here’s a recap of what happened:

SATURDAY

The morning was dedicated to updates of the current active Mozilla projects relevant to localizers, which were mostly part of the Mozlando All-Hands discussions we had in November. Jeff and I covered topics such as FirefoxOS changes, updating our communication channels, Translation Quality & Style Guides, the future of l10n hackathons, changes in the way we handle repositories, and much more.

Each community learned the importance of testing their work with Transvision, specifically using the new “unlocalized”, “consistency” and “unchanged” views. These are great steps in our path to continuously improve the quality of the localizations and ensure they are state-of-the-art!
We then had a quick and fun spectrogram session with all the participants. It’s always a great way to learn where we stand and how localizers handle their work. I won’t talk too much about this session since it has to remain somewhat of a surprise for the upcoming hackathons 😉 So, suspense!

Spectrogram Session

After a delicious typical Oaxacan lunch (thanks again to Surco Oaxaca for hosting us and providing lunch!) we went back to work and it was now time for communities to drive the event forwards.

Chapulines! Yummy!

Each team introduced themselves one by one, and presented their:

  • Active projects
  • Work flows
  • Successes since last year’s hackathon
  • Challenges since last year’s hackathon

Team presentations

It was interesting to hear how much progress teams had made and how the previous year’s hackathons had helped them grow. Also, some of the presentations helped other teams gather knowledge and insight into how they might work around their own challenges and find solutions to issues they were encountering.
Once this was done, the localizers split up into their teams and started working on the goals they had set for themselves beforehand. Some of those goals were: catching up with pending l10n work, coming up with recruiting strategies, reviewing current tooling needs, testing their work, and much, much more! More details on goals can be found here: https://wiki.mozilla.org/L10n:Meetings/2016_Oaxaca_hackathon

SUNDAY

Sunday was mostly community-driven, and probably one of the most productive days as it was fully dedicated to break-out sessions and getting caught up with tasks and goals.
To start the day, Rodrigo from the Zapoteco community gave an excellent presentation on his guide to localization for under-resourced languages. After that, all participants got their hands dirty with localization tasks 😛
We also went over style guides with each and every community, gathering feedback on the English Style Guide template that the “Translation Quality Team” has recently created. Some communities started writing their own style guides, which is an important step towards ensuring consistency and quality in their translations.
At the end, we talked about the future of hackathons and what next year might look like. Some people have volunteered to lead that discussion and organization (thanks!). Things to take into account when planning these include visa needs, the general cost of the host city, whether a Rep is present and can help in that city, flight costs, etc.
In all, this event was full of work and fun, which we believe are the necessary ingredients to creating the best localized products in the world.

Thank you to all that participated! As chofmann once put it, “I love this community!” (and if you haven’t seen this video before, you MUST watch it NOW:)

"Oaxacathon" Participants

Air MozillaFirefox Design Workshop Presentation (Summer Semester 2016)

Firefox Design Workshop Presentation (Summer Semester 2016) For three days, students from the university of design in Schwäbisch Gmünd have worked on concepts to improve Firefox. These are their final presentations. http://bit.ly/27b9ET0

Mozilla L10NNew directions for the Mozilla l10n-drivers

Many teams change and evolve over time. These changes involve organization changes as well as reorienting the team’s focus, mission, and function. I’m excited to announce that we’ve been experiencing some changes within the l10n-drivers team at Mozilla.

Team Organization

The biggest of these changes is that Chris Hofmann has stepped away from Mozilla after more than a decade of service. During his time, Chris ran a number of projects including starting mobile engineering. More recently, he’s been running l10n and our bug bounty program. We wish him the very best. I’ve been asked to lead the l10n-drivers team in Chris’s place as Head of Localization.

Another change is that Pascal and Arky are moving on to other projects. We’re grateful for their years of service to l10n and wish them the best in their future efforts.

Finally, we’ve made some specific decisions on team structure and responsibilities. I’m very happy to introduce you to the new l10n-drivers team organization:

Technical Group

The technical group’s focus is on simplifying the l10n process through tooling and automation for product development teams and the localization community. Additionally, they are dedicated to experimenting with state-of-the-art solutions from the localization industry and aligning our technology within the industry standards to ensure a higher degree of interoperability. The members of this group are:

  • Axel Hecht — Technical Lead
  • Staś Małolepszy — L10n/I18n engineer (L20n, mozIntl, ECMA 402)
  • Zibi Braniecki (aka gandalf) — L10n/I18n engineer (L20n, mozIntl, ECMA 402)
  • Matjaž Horvat — L10n engineer (Pontoon)

Technical Project Management (TPM) Group

The TPM group’s role is to be the intermediary between Mozilla dev teams and the l10n community to make sure we’re able to ship fully localizable products with the highest possible localization coverage from release to release. They seek to represent the community in decisions involving l10n and work to empower and support l10n communities. They also manage project communications for their corresponding projects and perform the practical technical tasks associated with delivering localizations for their projects. The members of this group are:

  • Jeff Beatty — TPM Lead
  • Francesco Lodolo (aka flod) — Technical Project Manager (Firefox)
  • Delphine Lebédel — Technical Project Manager (Android & iOS product & app stores)
  • Peiying Mo — Technical Project Manager (mozilla.org, marketing, legal, etc.)

Updated L10n Mission

One of the most unique things about Mozilla is you, the dedicated community of volunteers. Your dedication and passion for breaking down language barriers on the Web are unmatched. Let’s be frank: localization at Mozilla is complex. In many areas, it’s unnecessarily complex. Our updated mission needs to be based on simplifying localization for you (the community) and for internal Mozilla teams. As a secondary mission, we must develop tools and practices that exemplify and promote multilingualism, open & free language data exchange, and interoperable, open source community localization platforms on the Web.

The new l10n-driver motto is “simplicity and opportunity.” Simplified and open localization for internal teams and localization communities makes for a better user experience across our localized products. It helps your contributions to go farther than before and allows you to spend time on the tasks you find most valuable. Through this focus, we aim to empower global communities to advance the Mozilla manifesto principles in their language by localizing Mozilla projects.

With these principles in mind, we’re evaluating our tools vision, our processes, and practices to learn where we can simplify, automate, and improve the localization experience for everyone. We’ll be more open to your feedback on how to accomplish this, while asking for your patience and feedback when we experiment with new ideas. We’re also committed to being more data-driven in our decision making. I’m sure there will come a point in time when we make both popular and unpopular decisions. I hope that we can all assume good will and trust that both the popular and unpopular decisions were made with the intent to further our team mission for localization.

I think that I can speak for the l10n-drivers when I say that these are exciting times for localization at Mozilla. Thank you in advance for being a part of them!

WebmakerMark Your Calendars for May Mozilla Learning Events

This month, Mozilla Learning will be exploring topics including youth civic engagement, and how emerging gigabit technology can lead to educational advancement. Details are below.

Mozilla Curriculum Workshop – Youth Civic Engagement
Tuesday, May 10 at 7am PT/ 10am ET/ 2pm UTC

We’ll discuss compelling examples of youth leadership before prototyping resources that might help foster both. Join us and help build something useful for youth near you!

Our distinguished guests include:

  • Rafranz Davis, Executive Director of Professional & Digital Learning for Lufkin ISD and speaker on STEM education, teacher voice, digital equity and diversity in edtech.
  • DC Vito, Executive Director of The LAMP (Learning About Media Project), a media literacy organization dedicated to fostering critical, positive, and thriving citizenship.
  • Jeremy Dean, Director of Education at Hypothes.is, an open platform for annotating the web for collaboration, discussion, organization, and research.

Mozilla Learning Community Call – Gigabit Technology
Wednesday, May 25: 1pm PT/ 4pm ET/ 8pm UTC

High-speed, low-latency gigabit networks are allowing educational technology to advance rapidly.

On this month’s community call, we’ll explore how gigabit technology is transforming today’s classroom. We’ll discuss how emerging technologies like virtual reality, artificial intelligence, and 4K video streaming are being deployed to engage students, address learning needs, and create more immersive learning experiences.

Featured speakers include:

#TTWchat via @MozTeach
Tuesday, May 31- All Day

Continuing the discussions from the curriculum workshop and community call, we’ll dive deeper into the conversation and leverage teaching opportunities across the globe. Follow @MozTeach for details.

SUMO BlogWhat’s Up with SUMO – 5th May

Hello, SUMO Nation!

Did you have a good post-post release week? We sure did :-) Can you still remember Firefox 1.0? We’re getting to 50.0, soon! I wonder if there will be cake… Mmm, cake.

…anyway, here are the latest and greatest updates from the world of SUMO!

Welcome, new contributors!

If you just joined us, don’t hesitate – come over and say “hi” in the forums!

Contributors of the week

We salute you!

Don’t forget that if you are new to SUMO and someone helped you get started in a nice way you can nominate them for the Buddy of the Month!

Most recent SUMO Community meeting

The next SUMO Community meeting

  • …is happening on WEDNESDAY the 11th of May – join us!
  • Reminder: if you want to add a discussion topic to the upcoming meeting agenda:
    • Start a thread in the Community Forums, so that everyone in the community can see what will be discussed and voice their opinion here before Wednesday (this will make it easier to have an efficient meeting).
    • Please do so as soon as you can before the meeting, so that people have time to read, think, and reply (and also add it to the agenda).
    • If you can, please attend the meeting in person (or via IRC), so we can follow up on your discussion topic during the meeting with your feedback.

Community

Social

Support Forum

Firefox

  • for iOS
    • The latest version updates can be found here.
      • Firefox for iOS 4.0 should be going into review by Apple today and launching on May 10th (depends on a lot of factors)
    • Firefox for iOS 5.0 scheduled for approximately 6 weeks after 4.0 hits the interwebs!

…and that’s it for now. Keep rocking the helpful web, while spring rolls over our heads and hearts – and don’t forget to share your favourite music with us!

Air MozillaWeb QA Team Meeting, 05 May 2016

Web QA Team Meeting Weekly Web QA team meeting - we'll share updates on what we're working on, need help with, are excited by, and perhaps a demo of...

Air MozillaReps weekly, 05 May 2016

Reps weekly This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

hacks.mozilla.orgA Taste of JavaScript’s New Parallel Primitives

TL;DR – We’re extending JavaScript with a primitive API that lets programmers use multiple workers and shared memory to implement true parallel algorithms in JavaScript.

Multicore computation

JavaScript (JS) has grown up, and it works so well that virtually every modern web page contains large amounts of JS code that we don’t ever worry about — it just runs as a matter of course. JS is also being used for more demanding tasks: Client-side image processing (in Facebook and Lightroom) is written in JS; in-browser office packages such as Google Docs are written in JS; and components of Firefox, such as the built-in PDF viewer, pdf.js, and the language classifier, are written in JS. In fact, some of these applications are in the form of asm.js, a simple JS subset, that is a popular target language for C++ compilers; game engines originally written in C++ are being recompiled to JS to run on the web as asm.js programs.

The routine use of JS for these and many other tasks has been made possible by the spectacular performance improvements resulting from the use of Just-in-Time (JIT) compilers in JS engines, and by ever faster CPUs.

But JS JITs are now improving more slowly, and CPU performance improvement has mostly stalled. Instead of faster CPUs, all consumer devices — from desktop systems to smartphones — now have multiple CPUs (really CPU cores), and except at the low end they usually have more than two. A programmer who wants better performance for her program has to start using multiple cores in parallel. That is not a problem for “native” applications, which are all written in multi-threaded programming languages (Java, Swift, C#, and C++), but it is a problem for JS, which has very limited facilities for running on multiple CPUs (web workers, slow message passing, and few ways to avoid data copying).

Hence JS has a problem: if we want JS applications on the web to continue to be viable alternatives to native applications on each platform, we have to give JS the ability to run well on multiple CPUs.

Building Blocks: Shared Memory, Atomics, and Web Workers

Over the last year or so, Mozilla’s JS team has been leading a standards initiative to add building blocks for multicore computation to JS. Other browser vendors have been collaborating with us on this work, and our proposal is going through the stages of the JS standardization process. Our prototype implementation in Mozilla’s JS engine has helped inform the design, and is available in some versions of Firefox as explained below.

In the spirit of the Extensible Web we have chosen to facilitate multicore computation by exposing low-level building blocks that restrict programs as little as possible. The building blocks are a new shared-memory type, atomic operations on shared-memory objects, and a way of distributing shared-memory objects to standard web workers. These ideas are not new; for the high-level background and some history, see Dave Herman’s blog post on the subject.

The new shared memory type, called SharedArrayBuffer, is very similar to the existing ArrayBuffer type; the main difference is that the memory represented by a SharedArrayBuffer can be referenced from multiple agents at the same time. (An agent is either the web page’s main program or one of its web workers.) The sharing is created by transferring the SharedArrayBuffer from one agent to another using postMessage:

let sab = new SharedArrayBuffer(1024)
let w = new Worker("...")
w.postMessage(sab, [sab])   // Transfer the buffer

The worker receives the SharedArrayBuffer in a message:

let mem;
onmessage = function (ev) { mem = ev.data; }

This leads to the following situation where the main program and the worker both reference the same memory, which doesn’t belong to either of them:

[Diagram: the main program and the worker both referencing the same SharedArrayBuffer memory]

Once a SharedArrayBuffer is shared, every agent that shares it can read and write its memory by creating TypedArray views on the buffer and using standard array access operations on the view. Suppose the worker does this:

let ia = new Int32Array(mem);
ia[0] = 37;

Then the main program can read the cell that was written by the worker, and if it waits until after the worker has written it, it will see the value “37”.
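
Concretely, the main program (which still holds sab from the first snippet) could create its own view and read that cell:

let ia = new Int32Array(sab);  // a view on the same shared memory the worker wrote to
console.log(ia[0]);            // prints 37, but only if the worker has already performed its write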

It’s actually tricky for the main program to “wait until after the worker has written the data”. If multiple agents read and write the same locations without coordinating access, then the result will be garbage. New atomic operations, which guarantee that program operations happen in a predictable order and without interruption, make such coordination possible. The atomic operations are present as static methods on a new top-level Atomics object.
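
As a rough sketch of how that coordination might look (an illustration, not code from the proposal; it reuses the mem and sab variables from the snippets above, and treats element 0 of the shared array as a “done” flag):

// In the worker:
let results = new Int32Array(mem);
results[1] = 37;                   // write the result first...
Atomics.store(results, 0, 1);      // ...then atomically flip the "done" flag

// In the main program:
let view = new Int32Array(sab);
(function poll() {
  if (Atomics.load(view, 0) === 1)
    console.log(Atomics.load(view, 1));    // reliably reads 37
  else
    setTimeout(poll, 10);   // the main program polls here; a worker could block with Atomics.wait instead
})();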

Speed and responsiveness

The two performance aspects we can address with multicore computation on the web are speed, i.e., how much work we can get done per unit of time, and responsiveness, i.e., the extent to which the user can interact with the browser while it’s computing.

We improve speed by distributing work onto multiple workers that can run in parallel: If we can divide a computation into four and run it on four workers that each get a dedicated core, we can sometimes quadruple the speed of the computation. We improve responsiveness by moving work out of the main program and into a worker, so that the main program is responsive to UI events even if a computation is ongoing.

Shared memory turns out to be an important building block for two reasons. First, it removes the cost of copying data. For example, if we render a scene on many workers but have to display it from the main program, the rendered scene must be copied to the main program, adding to rendering time and reducing the responsiveness of the main program. Second, shared memory makes coordination among the agents very cheap, much cheaper than postMessage, and that reduces the time that agents sit idle while they are waiting for communication.

No free lunch

It is not always easy to make use of multiple CPU cores. Programs written for a single core must often be significantly restructured and it is often hard to establish the correctness of the restructured program. It can also be hard to get a speedup from multiple cores if the workers need to coordinate their actions frequently. Not all programs will benefit from parallelism.

In addition, there are entirely new types of bugs to deal with in parallel programs. If two workers end up waiting for each other by mistake the program will no longer make progress: the program deadlocks. If workers read and write to the same memory cells without coordinating access, the result is sometimes (and unpredictably, and silently) garbage: the program has data races. Programs with data races are almost always incorrect and unreliable.

An example

NOTE: To run the demos in this post you’ll need Firefox 46 or later. You must also set the preference javascript.options.shared_memory to true in about:config unless you are running Firefox Nightly.

Let’s look at how a program can be parallelized across multiple cores to get a nice speedup. We’ll look at a simple Mandelbrot set animation that computes pixel values into a grid and displays that grid in a canvas, at increasing zoom levels. (Mandelbrot computation is what’s known as “embarrassingly parallel”: it is very easy to get a speedup. Things are usually not this easy.) We’re not going to do a technical deep dive here; see the end for pointers to deeper material.

The reason the shared memory feature is not enabled in Firefox by default is that it is still being considered by the JS standards body. The standardization process must run its course, and the feature may change along the way; we don’t want code on the web to depend on the API yet.

Serial Mandelbrot

Let’s first look briefly at the Mandelbrot program without any kind of parallelism: the computation is part of the main program of the document and renders directly into a canvas. (When you run the demo below you can stop it early, but later frames are slower to render so you only get a reliable frame rate if you let it run to the end.)

If you’re curious, here’s the source code:
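
The embedded demo and its source aren’t reproduced here. Purely as an illustration of the kind of kernel involved (this is a sketch, not the demo’s actual code), the per-pixel work iterates z = z² + c until the point escapes or an iteration limit is reached:

// Illustrative escape-time kernel for one point c = (cx, cy) of the plane.
// MAXIT is an arbitrary iteration limit chosen for this sketch.
const MAXIT = 200;

function iterations(cx, cy) {
  let zx = 0, zy = 0, i = 0;
  while (i < MAXIT && zx * zx + zy * zy < 4) {
    const nx = zx * zx - zy * zy + cx;   // Re(z^2 + c)
    const ny = 2 * zx * zy + cy;         // Im(z^2 + c)
    zx = nx;
    zy = ny;
    i++;
  }
  return i;   // a renderer would map this count to a pixel colour
}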

Parallel Mandelbrot

Parallel versions of the Mandelbrot program will compute the pixels in parallel into a shared memory grid using multiple workers. The adaptation from the original program is conceptually simple: the mandelbrot function is moved into a web worker program, and we run multiple web workers, each of which computes a horizontal strip of the output. The main program will still be responsible for displaying the grid in the canvas.

We can plot the frame rate (Frames per Second, FPS) for this program against the number of cores used, to get the plot below. The computer used in the measurements is a late-2013 MacBook Pro, with four hyperthreaded cores; I tested with Firefox 46.0.

[Figure: frame rate (FPS) plotted against the number of cores used]

The program speeds up almost linearly as we go from one to four cores, increasing from 6.9 FPS to 25.4 FPS. After that, the increases are more modest as the program starts running not on new cores but on the hyperthreads on the cores that are already in use. (The hyperthreads on the same core share some of the resources on the core, and there will be some contention for those resources.) But even so the program speeds up by three to four FPS for each hyperthread we add, and with 8 workers the program computes 39.3 FPS, a speedup of 5.7 over running on a single core.

This kind of speedup is very nice, obviously. However, the parallel version is significantly more complicated than the serial version. The complexity has several sources:

  • For the parallel version to work properly it needs to synchronize the workers and the main program: the main program must tell the workers when (and what) to compute, and the workers must tell the main program when to display the result. Data can be passed both ways using postMessage, but it is often better (i.e., faster) to pass data through shared memory, and doing that correctly and efficiently is quite complicated.
  • Good performance requires a strategy for how to divide the computation among the workers, to make the best use of the workers through load balancing. In the example program, the output image is therefore divided into many more strips than there are workers (a minimal sketch of this appears after the list).
  • Finally, there is clutter that stems from shared memory being a flat array of integer values; more complicated data structures in shared memory must be managed manually.
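
To illustrate the strip-based load balancing mentioned above, here is a minimal sketch (the sizes, strip height, and worker script name are assumptions, and this is not the demo’s actual code): the main program shares the pixel grid once, hands each worker a strip, and gives an idle worker the next unclaimed strip whenever it reports back.

// Sketch only: divide the image into row strips and hand them to workers.
const height = 480, width = 640, STRIP = 8, NUM_WORKERS = 4;
const grid = new SharedArrayBuffer(height * width * 4);   // one Int32 per pixel

// Many more strips than workers, so faster workers simply pick up more work.
const strips = [];
for (let y = 0; y < height; y += STRIP)
  strips.push([y, Math.min(y + STRIP, height)]);

for (let i = 0; i < NUM_WORKERS; i++) {
  const w = new Worker("mandel-worker.js");     // hypothetical worker script
  w.onmessage = function () {                   // the worker finished a strip
    const next = strips.pop();
    if (next)
      w.postMessage({ strip: next });           // keep the worker busy
  };
  w.postMessage({ mem: grid, strip: strips.pop() }, [grid]);  // share the memory once
}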

Consider synchronization: The new Atomics object has two methods, wait and wake, which can be used to send a signal from one worker to another: one worker waits for a signal by calling Atomics.wait, and the other worker sends that signal using Atomics.wake. However, these are flexible low-level building blocks; to implement synchronization, the program will additionally have to use atomic operations such as Atomics.load, Atomics.store, and Atomics.compareExchange to read and write state values in shared memory.

Adding further to that complexity, the main thread of a web page is not allowed to call Atomics.wait because it is not good for the main thread to block. So while workers can communicate among themselves using Atomics.wait and Atomics.wake, the main thread must instead listen for an event when it is waiting, and a worker that wants to wake the main thread must post that event with postMessage.
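
As a sketch of the worker-to-worker case (the cell index and values are arbitrary, and this is not code from the demo): one worker can block until another worker stores a new value into the agreed-upon cell and wakes it.

// In the waiting worker: block while cell 0 still holds 0.
// Atomics.wait returns "ok" if woken, "not-equal" if the value had already
// changed, or "timed-out" if an optional timeout expires.
let ia = new Int32Array(mem);
Atomics.wait(ia, 0, 0);
console.log(ia[0]);          // woken: read the value the other worker stored

// In the signalling worker, on its own view of the same memory:
Atomics.store(ia, 0, 37);    // publish the value
Atomics.wake(ia, 0, 1);      // wake one agent waiting on cell 0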

(Those rushing out to test that should know that wait and wake are called futexWait and futexWake in Firefox 46 and Firefox 47. See the MDN page for Atomics for more information.)

It is possible to build good libraries to hide much of the complexity, and if a program — or usually, an important part of a program — can perform significantly better when running on multiple cores rather than on one, then the complexity can really be worth it. However, parallelizing a program is not a quick fix for poor performance.

With the disclaimers above, here is the code for the parallel version:

Further information

For reference material on the available APIs, read the proposed specification, which is largely stable now. The Github repository for the proposal also has some discussion documents that might be helpful.

Additionally, the Mozilla Developer Network (MDN) has documentation for SharedArrayBuffer and Atomics.

Mozilla Add-ons BlogHow an Add-on Played Hero During an Industrial Dilemma

noitA few months ago Noit Saab’s boss at a nanotech firm came to him with a desperate situation. They had just discovered nearly 900 industrial containers held corrupted semiconductor wafers.

This was problematic for a number of reasons. These containers were scattered across various stages of production, and Noit had to figure out precisely where each container was in the process. If he couldn’t, certain departments would be wrongly penalized for this very expensive mishap.

It was as much an accounting mess as it was a product catastrophe. To top it off, Noit had three days to sort it all out. In 72 hours the fiscal quarter would end, and well, you know how finance departments and quarterly books go.

Fortunately for Noit—and probably a lot of very nervous production managers—he used a nifty little add-on called iMacros to help with all his web-based automation and sorting tasks. “Without iMacros this would have been impossible,” says Noit. “With the automation, I ran it overnight and the next morning it was all done.”

Nice work, Noit and iMacros! The day—and perhaps a few jobs—were saved.

“I use add-ons daily for everything I do,” says Noit. “I couldn’t live without them.” In addition to authoring a few add-ons himself, like NativeShot (screenshot add-on with an intriguing UI twist), MouseControl (really nice suite of mouse gestures), MailtoWebmails (tool for customizing the default actions of a “mailto:” link), and Profilist (a way to manage multiple profiles on the same computer, though still in beta), here are some of his favorites…

“I use Telegram for all my chatting,” says Noit. “I’m not a big mobile fan so it’s great to see a desktop service for this.”

Media Keys, because “I always have music playing from a YouTube list, and sometimes I need to pause it, so rather than searching for the right tab, I use a global hotkey.”

“And of course, AdBlock Plus,” concludes Noit.

If you, dear friends, use add-ons in interesting ways and want to share your experience, please email us at editor@mozilla.com with “my story” in the subject line.

Air MozillaWeekly SUMO Community Meeting May 4, 2016

Weekly SUMO Community Meeting May 4, 2016 This is the SUMO weekly call. We meet as a community every Wednesday, 17:00 - 17:30 UTC. The etherpad is here: https://public.etherpad-mozilla.org/p/sumo-2016-05-04

WebmakerMay Community Spotlight: Keri Randolph

We continue to celebrate the great people and work across the globe this month with Keri Randolph, Assistant Superintendent for Innovation for the Hamilton County Department of Education in Chattanooga, TN.  She began a STEM fellows program for educators while at the Public Education Foundation as the head of Southeast Tennessee STEM Hub. She spearheaded the fascinating 4K microscopy project, with support from the Enterprise Center, and has been at the forefront of creating innovative learning opportunities that leverage gigabit technology.

Photo provided by Keri Randolph

We asked Keri to tell us more about her gigabit journey. Here’s what she had to say:

What is your background with gigabit-enabled educational technologies and with Mozilla?

I was a science teacher for 10 years before moving to Chattanooga.  I worked for the Public Education Foundation as the Vice President of Learning and Director of the Southeast Tennessee STEM Innovation Hub before joining the Hamilton County school district as Assistant Superintendent of Innovation in October.

I became fascinated with the Gig around 2012.  To be honest, I didn’t really know or understand what it meant to be a GigCity until I saw LOLA (LOw LAtency audio visual streaming system) in action. Here is an example. As an educator, I saw the potential for so many amazing learning experiences, from connecting and communicating in robust and authentic ways with people a world away, to having access to resources and tools not physically present in our community. In 2013, we began brainstorming applications of the Gig to education, which was truly innovative because no one was really applying the Gig to K-12.  We received funding from NSF through an Early-concept Grants for Exploratory Research (EAGER) grant to partner with the University of Southern California (USC) on a gigabit project which allowed STEM School Chattanooga students to design and conduct experiments in microbial ecology under the guidance of research scientists at USC.  The students collected data by controlling a 4K video microscope that was 1800 miles away, using a GENI connection. You can watch a short video about it below. I first became connected with Mozilla around the same time as part of the Hive Learning Network and the Mozilla Gigabit Community Fund.

What is your favorite thing about gigabit technology? Why?

I like that it is a disruptive space in K-12 where there aren’t many examples to copy, or even from which to learn.  This allows for true innovation and dreaming of what’s possible.  The blended learning models that are and will be possible with gigabit technology are changing education, maybe as much as or more than the printing press did.  The fact that we are on the ground floor of the technology means that we get to dream and think deeply about what really is best for kids and imagine learning environments that will be – not the ones that are.  That’s exciting.

Tell us about the gigabit project that you are most proud to have contributed to.

The 4K video microscope project is my proudest gigabit moment thus far.  The chance to provide a learning experience for students that wouldn’t otherwise have been possible – and inspire interest in science – that’s the ultimate as an educator.

How did the 4K microscopy project get started? How did you help? Why was gigabit technology essential for this work?

The 4K microscope project started through conversations with the Annenberg Innovation Lab at EPB, PEF and USC.  We needed ultra-high speed, low latency through the GENI for students to be able to remotely control the 4K video microscope and stream the 4K video in real time.

Keri Randolph and Maria Jefferson, student at STEM School Chattanooga, demoing the 4K video microscope at the US Ignite/GENI Summit in DC last year. Photo provided by Keri Randolph.

How are you inspiring others in STEM? Tell us more about your fellows program.

The STEM Teaching Fellows is a year-long professional learning experience for K-12 public school teachers from around the region.  We focus on project-based learning, community partnerships and best practices in STEM.  Teachers complete a job shadow, and we meet in workplaces around the region to tie our experiences to the real world and to workplace skills.  As part of the experience, we also focus on innovation in education, including makerspaces, technology, computer science, etc.  We also do leadership training and provide support for teachers to become STEM leaders in their classrooms, schools, districts and community.  They are required to complete a community partner project, with many hosting STEM nights or developing project-based learning experiences by partnering with a local business.  Now in its fourth year, the program includes almost 120 STEM Fellows.

We’d love to hear about Teacherpreneur Incubator, a new support system for teachers.

Through my work at the STEM Innovation Hub and collaboration with partners such as Hive, I became fascinated with the growing tech entrepreneurial community in Chattanooga.  The dynamic ecosystem forming around the Gig provided both support and spark for innovation.  I felt we needed this same support and community for our teachers.  After all, the best teachers are entrepreneurial in that they are constantly thinking of ways to improve and innovate to better serve their students.  In 2014, we launched the Teacherpreneur Incubator, which offers teachers the time, space and support to launch big ideas in the best interests of their students, community and the teaching profession.  It brings together educators, community members and business to launch and support transformative ideas.  Modeled after Co.Lab’s 48Hour Launch, teachers receive support to develop their ideas into a pitch, which they give to a team of judges at the culmination of the Teacherpreneur 48Hour Launch weekend.  Now in its third year, the Teacherpreneur Incubator has elevated the teaching profession by creating and supporting an entrepreneurial ecosystem for our teachers and creating a public forum for their work.  Through the support of Mozilla’s Hive Network and other partners, we’ve been able to connect teachers like Cristol Kapp with technologists and makers and to turn dreams into reality, like Cristol’s dream of turning her elementary school library into a makerspace where her kids could create and collaborate. You can find more information on the Teacherpreneur Incubator here.

To learn more about Keri, follow her on Twitter.

Do you know someone that has made tremendous strides towards spreading global web literacy or has made an impact through a gigabit community, Mozilla Club, classroom, or the #teachtheweb community at large? Share their story with us.

WebmakerHTTPS, Mixed Content, and the real web… oh my!

This post was written by Pomax, a software engineer at the Mozilla Foundation.
We recently fixed a long-running problem around Mozilla’s X-Ray Goggles: one that caused people headaches and the feeling of lost work, even though, from a technical perspective, nothing was doing anything “wrong”. This is going to be a story about how modern browsers work, how people use the web, and how those two things… don’t always align.

X-Ray Goggles by Mozilla

So let’s start with X-Ray Goggles: the X-Ray Goggles are a tool made by Mozilla that lets you “remix” web pages after loading them in your browser. You can go to your favourite place on the web, fire up the goggles (similar to how a professional web developer would open up their dev tools), and change text, styling, images, and whatever else you might want to change, for as long as you like. When you’re happy with the result and want to show your remix to your friends, you can publish it so that it has its own URL you can share.

However, the X-Ray Goggles use a publishing service that hosts all its content over https, because we care about secure communication at Mozilla, and using https is best practice. But in this particular case, it’s also kind of bad: large parts of the web still use http, and even if a website has an https equivalent, people usually visit the http version anyway. Unless those websites force users to the https version of the site (using a redirect), the site they’ll be on, and the site they’ll be remixing, will use HTTP. So the moment the user publishes their remix with X-Ray Goggles, gets an https URL back, and opens that URL in their browser…

well, let’s just say “everything looks broken” is not wrong.

But the reason for this is not that Goggles, or even the browser, is doing something wrong – ironically, it’s that they’re doing something right, and in doing so, what the user wants to do turns out to be incompatible with what the technology wants them to do. So let’s look at what’s going on here.

HTTP, the basis upon which browsing is built

If you’re a user of the web, no doubt you’ll have heard about http and https, even if you can’t really say what they technically-precisely mean. In simple terms (but without dumbing it down), HTTP is the language that servers and browsers use to negotiate data transfers. The original intention was for those two to talk about HTML code, so that’s where the h in http comes from (it stands for “hypertext” in both http and html), but we’re mostly ignoring that these days, and HTTP is used by browsers and servers to negotiate transmission of all sorts of files – web pages, stylesheets, javascript source code, raw data, music, video, images, you name it.

However, HTTP is a bit like regular English: you can listen in on it. If you go to a bar and sit down near a group of people, you can listen to their conversations. The same goes for HTTP: in order for your browser and the server to talk, they rely on a chain of other computers connected to the internet to get messages relayed from one to the other, and any of those computers can listen in on what the browser and server are saying to each other. In an HTTP setting it gets even a little stranger, because any of those computers could look at what the browser or server are saying, replace what is being said with something else, and then forward that on. And you’ll have no way of knowing whether that’s what happened. It’s literally as if the postal service took a letter you sent, opened it, rewrote it, resealed it, and then sent that on. We trust that they won’t, and computers connected to the internet trust that other computers don’t mess with the communication, but… they can. And sometimes they do.

And that’s pretty scary, actually. You don’t want to have to “trust” that your communication isn’t read or tampered with, you want to know that’s the case.

What can we do to fix that?

Well, we can use HTTPS, or “secure HTTP”, instead. Now, I need to be very clear here: the term “secure” in “secure HTTP” refers to secure communication. Rather than talking “in English”, the browser and server agree on a secret language that you could listen to, but you won’t know what’s being said, and so you can’t intercept-and-modify the communication willy-nilly without both parties knowing that their communications are being tampered with. However it does not mean that the data the browser and server agree to receive or send is “safe data”. It only means that both parties can be sure that what one of them receives is what the other intended to send. All we can be sure of is that no one will have been able to see what got sent, and that no one modified it somewhere along the way without us knowing.

However, those are big certainties, so for this reason the internet’s been moving more and more towards preferring HTTPS for everything. But not everyone’s using HTTPS yet, and so we run into something called the “Mixed Content” issue.

Let’s look at an example.

Imagine I run a web page, much like this one, and I run it on HTTP because I am not aware of the security issues, and my page relies on some external images, and some JavaScript for easy navigation, and maybe an embedded podcast audio file. All of those things are linked as http://......, and everything works fine.

But then I hear about the problems with HTTP and the privacy and security implications sound horrible! So, to make sure my visitors don’t have to worry about whether the page they get from my server is my page, or a modified version of my page, I spring into action, I switch my page over to HTTPS; I get a security certificate, I set everything on my own server up so that it can “talk” in HTTPS, and done!

Except immediately after switching, my web page is completely broken! The page itself loads, but none of the images show up, and the JavaScript doesn’t seem to be working, and that podcast embed is gone! What happened??

This is a classic case of mixed-content blocking. My web page is being served on HTTPS, so it’s indicating that it wants to make sure everything is secure, but the resources I rely on still use HTTP, and now the browser has a problem: it can’t trust those resources, because it can’t trust that they won’t have been inspected or even modified when it requests them. And because the web page that’s asking for them has declared that it cares a great deal about secure communication, the browser can’t just fetch those insecure elements: things might go wrong, and there would be no way to tell!

So it does the only thing it knows is safe: better safe than sorry, and it flat out refuses to even request them, giving you a warning about “mixed content”.

Normally, that’s great. It lets people who run websites know that they’re relying on potentially insecure third party content in an undeniably clear way, but it gets a bit tricky in two situations:

  1. third party resources that themselves require other third party resources, and
  2. embedding and rehosting

The first is things like your web page using a comment thread service: your web page includes a bit of JavaScript from something like www.WeDoCommentsForYou.com and then that JavaScript then loads content from that site’s comment database, for instance comments.WeDoCommentsForYou.com. If we have a page that uses HTTPS, running on https://ourpage.org then we can certainly make sure that we load the comment system from https://www.WeDoCommentsForYou.com, but we don’t control the protocol for the URL that the JavaScript we got back uses. If “WeDoCommentsForYou” wrote their script poorly, and they try to load their comments over http://, then too bad, the browser will block that. Sure, it’s a thing that “WeDoCommentsForYou” should fix, but until they do your users can’t comment, and that’s super annoying.

The second issue is kind of like the first, but is about entire web pages. Say you want to embed a page; for instance, you’re transcluding an entire wiki page into another wiki page. If the page you’re embedding is http and the page it’s embedded on is https, too bad, that’s not going to work. Or, and that brings us to what I really want to talk about, if you remix a page on http, with http resources, and host that remix on a site that uses https, then that’s not going to work either…

Back to the X-Ray Goggles

And that’s the problem we were hitting with X-Ray Goggles, too.

While the browser is applying the same kind of user protection here that it applies to any other website, in this particular case it’s actually a big problem: if a user remixed an HTTP website, then, knowing what we know now, that’s obviously not going to work if we try to view it over HTTPS. But that also means that instead of a cool tool people can use to start learning about how web pages work “on the inside” and then share the result with their friends, they have a tool that lets them look at the insides of a web page but breaks everything the moment they try to share their learning.

That’s not cool.

And so the solution to this problem is based on first meeting the expectations of people, and then educating them on what those expectations actually mean.

Give me HTTPS, unless I started on HTTP

There are quite a few solutions to the mixed-content problem, and some are better than others. There are some that are downright not nice to other people on the web (like making a full copy of someone’s website and then hosting that on Mozilla’s servers. That’s not okay), or may open people up to exploits (like running a proxy server, which runs on HTTPS and can fetch HTTP resources, then send them on as if they were on HTTPS, effectively lying about the security of the communication), so the solution we settled on is, really, the simplest one:

If you remix an http://... website, we will give you a URL that starts with http://, and if you remix an https:// website, we will give you a URL that starts with https://.... However, we also want you to understand what’s going on with the whole “http vs https” thing, so when you visit a remix that starts with http://, the remix notice bar at the top of the page also contains a link to the https:// version (same page, just served using HTTPS instead of HTTP) so that you can see exactly how bad things get if you can’t control which protocol gets used for resources on a page.
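
A minimal sketch of that decision (the function name and host are made up for illustration; this is not the actual Goggles publishing code):

// Publish the remix over the same protocol the remixed page was loaded over.
function remixUrlFor(remixedPageUrl, remixId) {
  const protocol = new URL(remixedPageUrl).protocol;   // "http:" or "https:"
  return protocol + "//goggles.example.org/remix/" + remixId;
}

remixUrlFor("http://example.com/news", 42);    // "http://goggles.example.org/remix/42"
remixUrlFor("https://example.com/news", 42);   // "https://goggles.example.org/remix/42"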

Security vs Usability

Security is everybody’s responsibility, and explaining the risks on the web that are inherent to the technology we use every day is always worth doing. But that doesn’t mean we need to lock everything down so “you can’t use it, the end, go home, stop using HTTP”. That’s not how the real world works.

So we want you to be able to remix your favourite sites, even if they’re HTTP, and have a learning/teaching opportunity there around security. Yes, things will look bad when you try to load an HTTP site on HTTPS, but there’s a reason for that, and it’s important to talk about it.

And it’s equally important to talk about it without making you lose an hour or more of working on your awesome remix.

Air MozillaCloud Services QA Team Sync, 03 May 2016

Cloud Services QA Team Sync Weekly sync-up, volunteer, round-robin style, on what folks are working on, having challenges with, etc.

Air MozillaWebdev Extravaganza: May 2016

Webdev Extravaganza: May 2016 Once a month web developers from across Mozilla get together to share news about the things we've shipped, news about open source libraries we maintain...

Air MozillaConnected Devices Weekly Program Update, 03 May 2016

Connected Devices Weekly Program Update Weekly project updates from the Mozilla Connected Devices team.

Open Policy & AdvocacyThis is what a rightsholder looks like in 2016

In today’s policy discussions around intellectual property, the term ‘rightsholder’ is often misconstrued as someone who supports maximalist protection and enforcement of intellectual property, instead of someone who simply holds the rights to intellectual property. This false assumption can at times create a kind of myopia, in which the breadth and variety of actors, interests, and viewpoints in the internet ecosystem – all of whom are rightsholders to one degree or another – are lost.

This is not merely a process issue – it undermines constructive dialogues aimed at achieving a balanced policy. Copyright law is, ostensibly, designed and intended to advance a range of beneficial goals, such as promoting the arts, growing the economy, and making progress in scientific endeavour. But maximalist protection policies and draconian enforcement benefit the few and not the many, hindering rather than helping these policy goals. For copyright law to enhance creativity, innovation, and competition, and ultimately to benefit the public good, we must all recognise the plurality and complexity of actors in the digital ecosystem, who can be at once IP rightsholders, creators, and consumers.

Mozilla is an example of this complex rightsholder stakeholder. As a technology company, a non-profit foundation, and a global community, we hold copyrights, trademarks, and other exclusive rights. Yet, in the pursuit of our mission, we’ve also championed open licenses to share our works with others. Through this, we see an opportunity to harness intellectual property to promote openness, competition and participation in the internet economy.

We are a rightsholder, but we are far from maximalists. Much of the code produced by Mozilla, including much of Firefox, is licensed using a free and open source software licence called the Mozilla Public License (MPL), developed and maintained by the Mozilla Foundation. We developed the MPL to strike a real balance between the interests of proprietary and open source developers in an effort to promote innovation, creativity and economic growth to benefit the public good.

Similarly, in recognition of the challenges the patent system raises for open source software development, we’re pioneering an innovative approach to patent licensing with our Mozilla Open Software Patent License (MOSPL). Today, the patent system can be used to hinder innovation by other creators. Our solution is to create patents that expressly permit everyone to innovate openly. You can read more in our terms of license here.

While these are just two initiatives from Mozilla amongst many more in the open source community, we need more innovative ideas in order to fully harness intellectual property rights to foster innovation, creation and competition. And we need policy makers to be open (pun intended) to such ideas, and to understand the place they have in the intellectual property ecosystem.

More than just our world of software development, the concept of a rightsholder is in reality broad and nuanced. In practice, we’re all rightsholders – we become rightsholders by creating for ourselves, whether we’re writing, singing, playing, drawing, or coding. And as rightsholders, we all have a stake in this rich and diverse ecosystem, and in the future of intellectual property law and policy that shapes it.

Here is some of our most recent work on IP reform: