Aaron Klotz: Attached Input Queues on Firefox for Windows

I’ve previously blogged indirectly about attached input queues, but today I want to address the issue directly. What once was a nuisance in the realm of plugin hangs has grown into a more serious problem in the land of OMTC and e10s.

As a brief recap for those who are not very familiar with this problem: imagine two windows, each on its own thread, forming a parent-child relationship with each other. When this situation arises, Windows implicitly attaches the two threads’ input queues together and synchronizes them, putting each thread at the mercy of the other attached threads’ ability to pump messages. If one thread does something bad in its message pump, any other threads that are attached to it are likely to be affected as well.

One of the biggest annoyances is that, when it comes to knowing which threads are affected, we are essentially flying blind: there is no way to query Windows for information about attached input queues. This is unfortunate, as that knowledge would allow us to analyze the state of Firefox threads’ input queues and mitigate the problem.

I had previously been working on a personal side project to make this possible, but in light of recent developments (and a tweet from bsmedberg), I decided to bring this investigation under the umbrella of my full-time job. I’m pleased to announce that I’ve finished the first cut of a utility that I call the Input Queue Visualizer, or iqvis.

iqvis consists of two components, one of which is a kernel-mode driver. This driver exposes input queue attachment data to user mode. The iqvis user-mode executable is the client that queries the driver and outputs the results. In the next section I’m going to discuss the inner workings of iqvis. Following that, I’ll discuss the results of running iqvis on an instance of Firefox.

Input Queue Visualizer Internals

First of all, let’s start off with this caveat: Nearly everything that this driver does involves undocumented APIs and data structures. Because of this, iqvis does some things that you should never do in production software.

One of the big consequences of using undocumented information is that iqvis requires pointers to very specific locations in kernel memory to accomplish things. These pointers change every time Windows is updated. To mitigate this, I kind of cheated: it turns out that debugging symbols exist for all of the locations that iqvis needs to access! I wrote the iqvis client to invoke the dbghelp engine to extract the pointers that I need from the Windows symbols, and to send those values as the input to the DeviceIoControl call that triggers the data collection. Passing pointers from user mode to be accessed in kernel mode is a very dangerous thing to do (and again, I would never do it in production software), but it is damn convenient for iqvis!
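
For a sense of what that client-side plumbing looks like, here is a minimal Python/ctypes sketch of the flow. Everything iqvis-specific in it (the device name, the IOCTL code, the record layout, and the symbol names) is invented for illustration, and the dbghelp lookup is stubbed out; only CreateFileW and DeviceIoControl are real Win32 calls.

    import ctypes
    from ctypes import wintypes

    GENERIC_READ, GENERIC_WRITE = 0x80000000, 0x40000000
    OPEN_EXISTING = 3
    INVALID_HANDLE_VALUE = wintypes.HANDLE(-1).value
    IOCTL_IQVIS_QUERY = 0x222004  # hypothetical control code

    kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
    kernel32.CreateFileW.restype = wintypes.HANDLE  # avoid 64-bit handle truncation

    def resolve_kernel_symbol(name):
        # Stub: iqvis does this with the dbghelp engine (SymInitialize,
        # SymLoadModuleEx, SymFromName) against the win32k.sys PDB fetched
        # from the Microsoft symbol server.
        raise NotImplementedError(name)

    def query_attachments():
        device = kernel32.CreateFileW(r"\\.\iqvis", GENERIC_READ | GENERIC_WRITE,
                                      0, None, OPEN_EXISTING, 0, None)
        if device == INVALID_HANDLE_VALUE:
            raise ctypes.WinError(ctypes.get_last_error())
        # The input buffer carries the kernel addresses the driver should use;
        # the symbol names here are placeholders, not the real win32k globals.
        ptrs = (ctypes.c_uint64 * 2)(resolve_kernel_symbol("win32k!gAttachList"),
                                     resolve_kernel_symbol("win32k!gAttachLock"))
        out = ctypes.create_string_buffer(64 * 1024)
        returned = wintypes.DWORD(0)
        ok = kernel32.DeviceIoControl(device, IOCTL_IQVIS_QUERY,
                                      ptrs, ctypes.sizeof(ptrs),
                                      out, ctypes.sizeof(out),
                                      ctypes.byref(returned), None)
        kernel32.CloseHandle(device)
        if not ok:
            raise ctypes.WinError(ctypes.get_last_error())
        return out.raw[:returned.value]  # raw attachment records, driver-defined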

Another issue is that these undocumented details change between Windows versions. The initial version of iqvis works on 64-bit Windows 8.1; different code is required for other major releases, such as Windows 7. The iqvis driver should theoretically work on Windows 7, but I need to make a few bug fixes for that case.

With those details out of the way, we can address the crux of the problem: we need to query input queue attachment information from win32k.sys, the driver that implements USER and GDI system calls on Windows NT systems.

In particular, the window manager maintains a linked list that describes thread attachment info as a triple that points to the “from” thread, the “to” thread, and a count. The count is necessary because the same two threads may be attached to each other multiple times. The iqvis driver walks this linked list in a thread-safe way to obtain the attachment data, and then copies it to the output buffer for the DeviceIoControl request.
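
Each record is effectively a (from, to, count) tuple. A small Python model of that data (the field names are mine, not win32k’s) also makes the transitive effect easy to see, which matters for the Flash results below:

    from collections import namedtuple

    Attachment = namedtuple("Attachment", "from_tid to_tid count")

    def attachment_groups(records):
        # Union-find over the (from, to) pairs: threads end up in the same
        # group if they are attached directly or transitively.
        parent = {}
        def find(tid):
            parent.setdefault(tid, tid)
            if parent[tid] != tid:
                parent[tid] = find(parent[tid])
            return parent[tid]
        for rec in records:
            parent[find(rec.from_tid)] = find(rec.to_tid)
        groups = {}
        for tid in parent:
            groups.setdefault(find(tid), set()).add(tid)
        return list(groups.values())

    # The "Protected Mode Enabled" data from later in this post:
    records = [Attachment(0xf8c, 0xdbc, 1), Attachment(0x794, 0xf8c, 3)]
    print(attachment_groups(records))  # one group containing all three TIDs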

Since iqvis involves a device driver, and since I have not digitally signed that device driver, one can’t just run iqvis and call it a day. The program won’t work unless the computer was booted either with kernel debugging enabled or with driver signing temporarily disabled.

Running iqvis against Firefox

I ran iqvis against today’s Nightly 39 as well as the latest release of Flash, with Flash Protected Mode both disabled and enabled. (Note that these examples used an older version of iqvis that outputs thread IDs in hexadecimal. The current version uses decimal for its output.)

Protected Mode Disabled

FromTID ToTID Count
ac8 df4 1

Looking up the thread IDs:

  • df4 is the Firefox main thread;
  • ac8 is the plugin-container main thread.

I think that the output from this case is pretty much what I was expecting to see. The protected mode case, however, is more interesting.

Protected Mode Enabled

FromTID ToTID Count
f8c dbc 1
794 f8c 3

Looking up the thread IDs:

  • dbc is the Firefox main thread;
  • f8c is the plugin-container main thread;
  • 794 is the Flash sandbox main thread.

Notice how Flash is attached to plugin-container, which is in turn attached to Firefox. Transitively, then, the Flash sandbox is effectively attached to Firefox, confirming hypotheses that I’ve discussed with colleagues in the past.

Also notice how the Flash sandbox attachment to plugin-container has a count of 3!

In Conclusion

In my opinion, my Input Queue Visualizer has already yielded some very interesting data. Hopefully this will help us to troubleshoot our issues in the future. Oh, and the code is up on GitHub! It’s poorly documented at the moment, but just remember to only try running it on 64-bit Windows 8.1 for the time being!

Cameron Kaiser: SuperFREAK! SuperFREAK! Temptations, sing!

Remember those heady days back when people spelled out W-W-W and H-T-T-P in all their links, and we had to contend with "export only" versions of cryptography because if you let those 1024-bit crypto keys fall into the wrong hands, the terrorists would win? My favourite remnant of those days is this incredibly snide Apple ad where tanks surround the new Power Mac G4, the first personal computer classified by the Pentagon as a munition ("As for Pentium PCs, well ... they're harmless").

Unfortunately, a less amusing remnant of that bygone era has also surfaced in the form of the FREAK attack (see also POODLE, CRIME and BEAST). The idea with export-grade ciphers is that, at the time, those naughty foreign governments would have to make do with encrypting their network traffic using short keylengths that the heroic, not-at-all-dystopian denizens of the NSA could trivially break (which you can translate to mean they're basically broken by design). As a result, virtually no browser today will advertise its support for export-grade ciphers because we're not supposed to be using them anymore after the Feds realized the obvious policy flaw in this approach.

But that doesn't mean they can't use them. And to prove it, the researchers behind FREAK came up with a man-in-the-middle tool that gets into the secure connection negotiation (which must happen in the clear, in order to negotiate the secure link) and forces the connection to downgrade. Ordinarily you'd realize that something was in the middle because completing the handshake to get the shared secret between server and client requires a private key, which the malicious intruder doesn't have. But now it has another option: the defective client will accept the downgraded connection with only a 512-bit export-compliant RSA key from the server, trivial to crack with sufficient hardware in this day and age, and visible to the intruder in the middle as well. The intruder factors the RSA modulus to recover the decryption key, uses that to decrypt the pre-master secret the client sends back, and, now in possession of the shared secret, can snoop on the connection all it wants (or decrypt stored data it already has). Worse, if the intruder has encrypted data from before and the server never regenerated the RSA key, it can decrypt that previous data as well!

There are two faults here: the server for allowing such a request to downgrade the connection, and the client for accepting deficient keys. One would think that most current servers would not allow this to occur, stopping the attack in practice, and one would be wrong. On the FREAK Attack site (which doubles as a test page), it looks like over a quarter of sites across the IPv4 address space are vulnerable ... including nsa.gov!
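
The server half of this is easy to probe yourself. Here's a rough sketch with Python's ssl module, assuming an OpenSSL build old enough to still include export-grade suites (modern builds have removed them, in which case set_ciphers simply fails and you learn nothing about the server):

    import socket
    import ssl

    def accepts_export_rsa(host, port=443):
        ctx = ssl.SSLContext(ssl.PROTOCOL_SSLv23)
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE
        try:
            ctx.set_ciphers("EXPORT")  # offer nothing but export-grade suites
            with socket.create_connection((host, port), timeout=10) as sock:
                with ctx.wrap_socket(sock, server_hostname=host) as tls:
                    return tls.cipher()  # handshake succeeded: server accepted export RSA
        except (ssl.SSLError, OSError):
            return None  # refused (or no export ciphers available locally)

    print(accepts_export_rsa("example.com"))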

What about the client side? Well, that's even worse: currently every Android phone (except if you use Firefox for Android, which I recently switched to because I got tired of Android Chrome crashing all the damn time), every iOS device, and every Mac running Safari or Chrome is vulnerable, along with anything else that ships with a vulnerable version of OpenSSL. Guess what's not? Firefox. Guess what's also not? TenFourFox. NSS does not advertise nor accept export-only keys or ciphers. TenFourFox is not vulnerable to FREAK, nor any current version of Firefox on any platform. Test it yourself.

Classilla is vulnerable in its current configuration. If you go into the settings for security, however, you can disable export-only support and I suggest you do that immediately if you're using Classilla on secure sites. I already intended to disable this for 9.3.4 and now it is guaranteed I will do so.

What about Safari or OmniWeb on Power Macs? I would be interested to hear from 10.5 users, but the test site doesn't work correctly in either browser on 10.4. Unfortunately, because all Macs (including 10.6 through 10.10) are known to be vulnerable, I must assume that both Tiger and Leopard are also vulnerable because they ship a known-defective version of OpenSSL. Installing Leopard WebKit fixes many issues and security problems but does not fix this problem, because it deals with site display and not secure connections: the browser still relies on NSURL and other components which use the compromised SSL library. I would strongly recommend against using a non-Mozilla browser on 10.7 and earlier for secure sites in the future for this reason. If you use Android as I do, it's a great time to move to Firefox for Android. Choice isn't just a "nice thing to have" sometimes.

Armen Zambrano: mozci 0.2.5 released - major bug fixes + many improvements

Big thanks again to vaibhav1994, adusca, and valeriat for their many contributions in this release.

Release notes

Major bug fixes:
  • Bug fix: Sort pushid_range numerically rather than alphabetically
  • Calculation of hours_ago would not take days into consideration
Others:
  • Added coveralls/coverage support
  • Added "make livehtml" for live documentation changes
  • Improved FAQ
  • Updated roadmap
  • Large documentation refactoring
  • Automatically document scripts
  • Added partial testing of mozci.mozci
  • Streamed fetching of allthethings.json and verify integrity
  • Clickable treeherder links
  • Added support for zest.releaser
    Release notes: https://github.com/armenzg/mozilla_ci_tools/releases/tag/0.2.5
    PyPi package: https://pypi.python.org/pypi/mozci/0.2.5
    Changes: https://github.com/armenzg/mozilla_ci_tools/compare/0.2.4...0.2.5


    This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

    Mozilla Privacy Blog: Trust in an Increasingly Connected World

    This year at Mobile World Congress, I participated in two formal discussions. I spoke alongside a panel of experts on mobile and data hosted by the GSMA. I was also invited to a Fireside Chat hosted by …

    Anthony Hughes: New Beginnings

    …or trying to adapt to the inevitability of change.


    Change is a reality of life common to all things; we all must adapt to change or risk obsolescence. I try to look at change as a defining moment, an opportunity to reflect, to learn, and to make an impact. It is in these moments that I reflect on the road I’ve traveled and attempt to gain clarity of the road ahead. This is where I find myself today.

    How Did I Get Here?

    In my younger days I wasted years in college jumping from program to program before eventually dropping out. I obviously did not know what I wanted to do with my life and I wasn’t going to spend thousands of dollars while I figured it out. This led to a frank and difficult discussion with my parents about my future which resulted in me enlisting in the Canadian military. As it happens, this provided me the space I needed to think about what I wanted to do going forward, who I wanted to be.

    I served for three years before moving back to Ontario to pursue a degree in software development at the college I left previously. I had chosen a path toward working in the software industry. I had come to terms with a reality that I would likely end up working on some proprietary code that I didn’t entirely care for, but that would pay the bills and I would be happier than I was as a soldier.

    After a couple of years following this path I met David Humphrey, a man who would change my life by introducing me to the world of open source software development. On a whim, I attended his crash-course, sacrificing my mid-semester week off. It was here that I discovered a passion for contributing to an open source project.

    Up until this point I was pretty ignorant about open source. I had been using Linux for a couple of years but I didn’t identify it as “open source”; it was merely a free-as-in-beer alternative to Windows. At this point I hadn’t even heard of Mozilla Firefox. It was David who opened my eyes to this world; a world of continuous learning and collaboration, contributing to a freer and more open web. I quickly realized that choosing this path was about more than a job opportunity, more than a career; I was committing myself to a world view and to my part in shaping it.

    Over the last eight years I have continued to follow this path, from volunteering nights at school, through internships, a contract position, and finally full-time employment in 2010.

    Change is a way of life at Mozilla

    Since I began my days at Mozilla I have always been part of the same team. Over the years I have seen my team change dramatically but it has always felt like home.

    We started as a small team of specialists working as a cohesive unit on a single product. Over time Mozilla’s product offering grew and so did the team, eventually leading to multiple sub-teams being formed. As time moved on and demands grew, we were segmented into specialized teams embedded on different products. We were becoming more siloed but it still felt like we were all part of the QA machine.

    This carried on for a couple of years but I began to feel my connection to people I no longer worked with weaken. As this feeling of disconnectedness grew, my passion for what I was working on decreased. Eventually I felt like I was just going through the motions. I was demoralized and drifting.

    This all changed for me again last year when Clint Talbert, our newly appointed Director and a mentor of mine since the beginning, developed a vision for tearing down those silos. It appeared as though we were going to get back to what made us great: a connected group of specialists. I felt nostalgic for a brief moment. Unfortunately this would not come to pass.

    Moving into 2015 our team began to change again. After “losing” the B2G QA folks to the B2G team in 2014, we “lost” the Web and Services QA folks to the Cloud Services team. Sure the people were still here but it felt like my connection to those people was severed. It then became a waiting game, an inevitability that this trend would continue, as it did this week.

    The Road Ahead

    Recently I’ve had to come to terms with the reality of some departures from Mozilla. People I’ve held dear, and sought mentorship from, for many years have decided to move on as they open new chapters in their lives. I have seen many people come and go over the years, but these recent departures have been difficult to swallow. I know they are moving on to do great things and I’m extremely happy for them, but I’ll also miss them intensely.

    Over the years I’ve gone from reviewing add-ons to testing features to driving releases to leading the quality program for the launch of Firefox Hello. I’ve grown a lot over the years and the close relationships I’ve held with my peers are the reason for my success.

    Starting this week I am no longer part of a centralized QA team; I am now the sole QA member of the DOM engineering team. While this is likely one of the more disruptive and challenging changes I’ve ever experienced, it’s also exciting to me.

    Overcoming the Challenge

    As I reflect on this entire experience I become more aware of my growth and the opportunity that has been presented. It is an opportunity to learn, to develop new bonds, to impact Mozilla’s mission in new and exciting ways. I will remain passionate and engaged as long as this opportunity exists. However, this change does not come without risk.

    The greatest risk to Mozilla is if we are unable to maintain our camaraderie, to share our experiences, to openly discuss our challenges, to engage participation, and to visualize the broader quality picture. We need to strengthen our bonds, even as we go our separate ways. The QA team meeting will become ever more important as we become more decentralized, and I hope that it continues.

    Looking Back, Looking Forward

    I’ve experienced a lot of change in my life and it never gets any less scary. I can’t help but fear reaching another “drifting point”. However, I’ve also learned that change is inevitable and that I reach my greatest potential by adapting to it, not fighting it.

    I’m entering a new chapter in my life as a Mozillian and I’m excited for the road ahead.

    Paul McLanahan: The State of Mozilla.org - February 2015

    Hello all! It's already been a month since last we spoke and much has happened! Let's get to it.

    Note: It appears that monthly will be a better schedule for these than every 2 weeks, at least for me. I'll try to keep to that. Please call me out if I fail.

    Theme for February: ZOMG BUSY!

    February is always a busy month for us developers in the Engagement team, at least since Mozilla broke into the mobile world with Fennec and Firefox OS. This is because the first of March is Mobile World Congress (MWC) time, and it's always a scramble to get things done in time. I concentrate mostly on the server side of the Web, but my colleagues in Web Prod who deal in HTML, CSS, and JS were more than a little busy. They launched a new page for MWC, a new Firefox OS main page, a new consolidated nav for all of that, and updates to various other FxOS-related pages to support announcements. It was a herculean effort, the results are amazing, and I'm more than a little proud to work with them all. Special thanks to our new staff-member teammate Schalk Neethling for going way above and beyond to get it all done.

    Also, long time friend of mozilla.org Craig Cook knocked out a refresh of the Mozilla Leadership page. Nice work Craig!

    Not only all of that, but February saw a major overhaul of how bedrock handles static assets (CSS, LESS, JS, Fonts, Images, etc.). It's all part of the plan. There's more to come.

    1. We have finally moved into the modern world (speaking in Django terms) and are using the staticfiles system. We are now free to do things like handle user-uploaded media, use new and cool tools, and not feel bad about ourselves.
    2. We've switched from jingo-minify to django-pipeline. Pipeline hooks into Django's static media system and is therefore easier to integrate with other parts of the Django ecosystem as well as more customizable. It is also a much more active project and supports a lot of fun new things (e.g. Babel-JS for sweet sweet ES6 goodness in our ES5 world).
    3. Good Olde Apache™ used to be how we served our static assets, but we're now doing that from bedrock itself using Whitenoise. Since we have a CDN, the traffic for these files on the server is quite low, so getting cache headers, CORS, and gzipping right is the most important thing. Whitenoise handles all of this efficiently and automatically. I highly recommend it. (A minimal configuration is sketched just after this list.)
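
    Here is a hedged sketch of roughly how those pieces snap together in a Django project of that era. The storage and wrapper names come from the django-pipeline and Whitenoise docs as I recall them, not from bedrock itself:

        # settings.py -- bundle assets via django-pipeline on top of staticfiles
        STATIC_URL = "/static/"
        STATIC_ROOT = "static_final"
        STATICFILES_STORAGE = "pipeline.storage.PipelineCachedStorage"
        PIPELINE_JS = {
            "site": {
                "source_filenames": ("js/site.js",),
                "output_filename": "js/site.min.js",
            },
        }

        # wsgi.py -- let Whitenoise serve STATIC_ROOT itself, with proper
        # cache headers, CORS, and gzip, instead of fronting with Apache
        from django.core.wsgi import get_wsgi_application
        from whitenoise.django import DjangoWhiteNoise

        application = DjangoWhiteNoise(get_wsgi_application())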

    With these new toys came the ability to generate what are called "Immutable Files". The system now copies every static file to a new name that includes the md5 hash of the file's contents, so a given file name (e.g. site.4d72c30b1a11.js) will always refer to the same contents. The advantage of this is that we can set the cache headers to basically never expire. Any time the file content changes, the generated file name will be different and cached separately.
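
    The renaming is nothing magic, just a digest of the file contents spliced into the name. Conceptually:

        import hashlib

        def immutable_name(path):
            # site.js -> site.4d72c30b1a11.js; the name changes iff the bytes do
            with open(path, "rb") as f:
                digest = hashlib.md5(f.read()).hexdigest()[:12]
            stem, _, ext = path.rpartition(".")
            return "{}.{}.{}".format(stem, digest, ext)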

    We're also generating gzipped versions of all files at deploy time. Whitenoise will see that these files exist (e.g. site.4d72c30b1a11.js.gz) and serve up the compressed version when the browser says it can handle it (and nearly all can these days). This is good because this is no longer happening at request time in Apache, thus reducing load, and we can use better and slower compression since it's happening outside of the request process.
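
    Pre-generating those compressed siblings is only a few lines (conceptually; Whitenoise's own helper does more):

        import gzip
        import shutil

        def gzip_sibling(path):
            # Writes site.4d72c30b1a11.js.gz next to the original at deploy time
            with open(path, "rb") as src, \
                 gzip.open(path + ".gz", "wb", compresslevel=9) as dst:
                shutil.copyfileobj(src, dst)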

    Much more happened, but I'm loath to make this much longer. Skim the git log below for the full list.

    Contributors

    Even more new contributors! HOORAY!

    • blisman started contributing to bedrock this month and has already fixed 3 bugs!
    • The aforementioned Schalk Neethling is new to the team, but not to Mozilla, to FLOSS contribution, nor even to bedrock, as he's maintained the Plugin Check page for quite some time. He did a wonderful job on the new Firefox OS page.
    • Kohei Yoshino continues dominating all the things and even got yet another Friend of The Tree (Friends of Mozilla) mention.
    • Stephanie Hobson (on loan from MDN) stepped up to help us with some changes in preparation for the new Firefox for iOS (coming soon to an iDevice near you).

    Thank you all for your contributions to bedrock and the Open Web \o/

    Git Log for February

    • 8547262 (Pascal Chevrel) Bug 1128957 - Fix block parsing errors in jinja templates
    • bb8aa7a (Kohei Yoshino) Fix Bug 1129130 - Hyperlinking bios to Steering Committee page
    • f67af74 (Kohei Yoshino) Fix Bug 1129214 - Please add Rust and Cargo to our trademark list
    • 80c4b28 (Kohei Yoshino) Fix Bug 1128885 - Plugincheck-site considers 31.4.0ESR out of date
    • 7664dc5 (Alex Gibson) Update UITour documentation
    • d0c20a4 (Tim) updated sumo link to be locale-neutral
    • f588ff8 (Craig Cook) Fix bug 1124826 - Net Neutrality home promo
    • 5475a14 (Paul McLanahan) Add new contributors to humans.txt
    • 68a927c (Kohei Yoshino) Fix Bug 1124724 - Tiles product page update copy mozilla.org/en-US/firefox/tiles/
    • 64c4f17 (Paul McLanahan) Fix bug 1130285: Treat hsb/dsb locales as de for number formatting.
    • ea43c4e (Kohei Yoshino) Fix Bug 1129911 - Text error in https://www.mozilla.org/en-US/about/governance/policies/commit/
    • e5496d6 (Francesco Lodolo (:flod)) Bug 1115066 - Add 'si' to Thunderbird start page redirect
    • 776cff3 (Alex Gibson) Add test suite for browser-tour.js
    • 73295a6 (Tin Aung Lin) Updated with new Facebook Page Link
    • 43909f2 (Kohei Yoshino) Fix Bug 1131142 - Update Firefox Refresh SUMO article link on /firefox/new/
    • 9a4de71 (Alex Gibson) [fix bug 1131680] Stop redirecting Firefox Geolocation page to Mozilla Taiwan website
    • a502fe1 (Kohei Yoshino) Fix Bug 1130160 - Extra '#' in section headers for roll up pages
    • 6aaee9f (Paul McLanahan) Fix bug 1131738: Mark advisory reporter as safe.
    • 90aabcb (Paul McLanahan) Bug 906176: Move to using Django staticfiles for media.
    • 66541a9 (Paul McLanahan) Bug 906176: Enable caching static storage and remove cachebusts.
    • 63da90f (Paul McLanahan) Update references to "media()" in docs.
    • 286c35c (Paul McLanahan) Bug 906176: Move to django-pipeline from jingo-minify.
    • 6fdbc53 (Paul McLanahan) Add futures, a dependency of pipeline.
    • 3a200c0 (Paul McLanahan) Add node dependencies less and yuglify.
    • a3b0895 (Paul McLanahan) Reorder deployment to keep the git repo clean.
    • 56b89a1 (Paul McLanahan) Serve static files with Whitenoise.
    • c96ba80 (Paul McLanahan) No longer test Python 2.6 in Travis.
    • ae040d4 (Paul McLanahan) Fix unicode issue with image helpers.
    • f9849c7 (Kohei Yoshino) Fix Bug 1131111 - PN Changes (Snippets/SMS Campaign, default Search provider, and SSL Error reporting)
    • a61fd11 (Paul McLanahan) Disable locale sync from crons temporarily.
    • c5cbfd7 (Paul McLanahan) Enable locale update cron jobs; they are now fixed.
    • f8ced59 (Paul McLanahan) Fix missing image referenced in thunderbird base template.
    • 1592de3 (Paul McLanahan) Fix bug 1132317: Fix gigabit pages errors.
    • b8e2afd (Paul McLanahan) Remove remaining date-based cache busting query params.
    • 6eb5107 (Logan Rosen) fix Bug 1132323: change Tabzilla heading ID
    • cdadc8a (Logan Rosen) fix Bug 1108278: congstar link is incorrect
    • 453ae78 (Paul McLanahan) Encourage use of humans.txt
    • d3c553d (Alex Gibson) [fix bug 1132289] Plugin check minify JS error
    • 375b3b3 (schalkneethling) Syncing content with Google doc, part of the l10n hand-over
    • b4c217e (Jon Petto) Bug 1128726. Add 2 new firstrun tests, each with 2 variants.
    • 8ccad60 (Alex Gibson) [fix bug 1132313] Venezuela community page references missing images
    • f74b2a7 (Paul McLanahan) Fix bug 1132454: Update platform_img helper for new static files.
    • fd4215b (Alex Gibson) [bug 1132454] Add missing high-res ios platform image to firefox/new
    • 5184b79 (Alex Gibson) Update Mozilla.ImageHelper JS tests
    • 03d7b14 (Kohei Yoshino) Fix Bug 1132835 - 404 linking to /contribute/local from /about/governance/organizations
    • ec262fe (Paul McLanahan) Fix bug 1132961: Add cache to twitter feeds.
    • a420c03 (Kohei Yoshino) Fix Bug 1132956 - Legal-docs pages for hu and hr throwing errors.
    • 61dea65 (Kohei Yoshino) Fix pep8 errors: W503 line break before binary operator
    • ad3f3c8 (Francesco Lodolo (:flod)) Bug 1124894 - Add Swahili (sw) to PROD_LOCALES
    • 6193a4f (Jon Petto) Bug 1130565. Add more localized videos to Hello page.
    • e41a55f (blisman) fix bug 1132942, removed url for missing html template (/bedrock/mozorg/about/governance/policies/commit/faq.html)
    • 89ad7c1 (blisman) Bug 1134492 - move assets from assets.mozilla.org to assets.mozillalabs.com
    • 43ac8cd (Alex Gibson) [fix bug 1053214] Missing Mozilla Estonia from Contact Pages
    • a3cb7e5 (Stephanie Hobson) Fix Bug 1134058: Show .form-details when form has focus
    • 0307a5b (Paul McLanahan) Only build master in Travis.
    • 4990787 (Cory Price) [fix bug 1130198] Update Hello FTU for GA36 * Send Custom Variable to GA containing the referral * Add referral to localStorage on copy/email link * Retreve referral from localStorage on tour connect and send to GA * Hide info panels when Contacts tab is clicked (it's okay that they don't see it if they switch back to Rooms) * Update docs * Add Test
    • 4f7a542 (Paul McLanahan) Add author link tag to base templates for humans.txt
    • c452ac0 (Kohei Yoshino) Fix Bug 1134936 - Firefox download pages: filter localized builds as you type
    • d681e78 (Josh Mize) Add backend for fxos feed links: bug 1128587
    • 4a81001 (Josh Mize) Restore dev update crons: fix bug 1133942
    • f0ab7b5 (Steven Garrity) Bug 1120689 MWC Preview page for 2015
    • e6d9ada (blisman) fix bug 1129961: reps ical feed update fail silently
    • 54b27ed (Paul McLanahan) Remove accidentially committed print statement.
    • d406fb0 (Steven Garrity) Bug 1120689 Update MWC map reference
    • 2592f4f (Alex Gibson) [fix bug 1135496] Missing Firefox OS wordmark on devices page
    • 259f1d2 (Alex Gibson) [fix bug 1099471 1084200] Implement Firefox Hello tours GA 36
    • eb6471a (Jon Petto) Add firstrun and whatsnew pages. Bug 1099471.
    • e797b42 (Alex Gibson) Update Hello fx36 tour logic and add tests
    • 93a6f2e (Alex Gibson) Add Fx36 Hello tour GA tracking events
    • 2500616 (Jon Petto) Hello tour updates:
    • 7b7329e (Steven Garrity) Bug 1120689 Last minute MWC preview text tweaks
    • 8859e9c (Alex Gibson) Fx36 Hello tour template updates
    • 5dde0d1 (Kohei Yoshino) Improve the Share widget, part of Bug 1131309
    • d6d4ab6 (Kohei Yoshino) Fix Bug 1131309 - Add share buttons to 'Check your plugins' page
    • 0a3dea6d (Steven Garrity) Bug 1120689 Update map for MWC 2015 Removed the link to the PDF and used a single PNG for mobile/desktop
    • b0eced3 (Paul McLanahan) Bug 1116511: Add script to sync data from Tableau.
    • acca704 (Paul McLanahan) Bug 1116511: Add view for serving JSON contributor data.
    • b807b1a (Paul McLanahan) Bug 1116511: Add cron jobs for stage and prod tableau data.
    • e27fa38 (Kohei Yoshino) Fix Bug 1128579 - Finish moving certs/included and certs/pending web pages to wiki pages
    • 34f032d (Paul McLanahan) Fix a potential error in the TwitterCacheManager.
    • 5fa4fb3 (Steven Garrity) Bug 1120667 Remove "over" from MWC preview page
    • d627564 (Francesco Lodolo (:flod)) Bug 1111597 - Set up Santali (sat) for production
    • 20afbf3 (Craig Cook) Update home page promos
    • f727d1a (Cory Price) [fix bug 1130194] Add FTU tracking to Hello product page
    • 3ec1f54 (Kohei Yoshino) Fix Bug 1136224 - firefox hello privacy policy link to tokbox privacy policy broken
    • 59b7b09 (Paul McLanahan) Fix bug 1136307: Catch all errors and report exceptions for MFSA import.
    • c4d1534 (schalkneethling) Fix Bug 1132298 Moves mustache script above the share script
    • 41c8b12 (Craig Cook) Bug 1132231 - fix copy for Webmaker and Hello promos
    • 0963b2a (Kohei Yoshino) Standardize the header share button
    • 4fdd9f0 (Kohei Yoshino) Fix Bug 1131304 - Add share buttons to 'Download Firefox in your language' page
    • f53b05f (Kohei Yoshino) Fix Bug 1131299 - Add share buttons to Firefox Developer Edition page
    • 2c8a27f (Steven Garrity) Bug 1120689 Update title on MWC preview for 2015
    • 9ab3132 (Paul McLanahan) Fix bug 1136559: Add dev deploy cron scripts to repo.
    • 4b36ba5 (Stephanie Hobson) Fix Bug 1126578: iOS CTA updates and newsletter
    • c8b7a5f (Stephanie Hobson) Bug 1126578: iOS CTA updates and newsletter
    • d53c578 (Kohei Yoshino) Fix Bug 1126837 - Make Fx38 Win64 build of Dev Edition Available on moz.org
    • acf34be (Kohei Yoshino) Fix Bug 1137213 - Sky theme is not applied to Firefox channel page if Developer Edition is selected first
    • 934fe4c (Jon Petto) Bug 1135092. Fx family nav V1.
    • 5e8d55d (Paul McLanahan) Get current hash from local file and run dev autodeploy every 20min.
    • 6984385 (Paul McLanahan) No output for dev autoupdate unless deploying.
    • f6a6fad (Kohei Yoshino) Fix Bug 1137061 - Firefox Release Notes list shows unsorted sub-versions
    • f8a5358 (Kohei Yoshino) Fix Bug 1137604 - /security/advisories: abbreviation mismatch: MSFA vs. MFSA
    • d8572fe (Jon Petto) Bug 1135092. Add small IE fixes to fx family nav v1.
    • 8847d9f (Jon Petto) Bug 1137260. Add GA to fx family nav.
    • 4f16a2b (Josh Mize) Update firefox os feeds on dev deploy
    • 05bc712 (Craig Cook) Fix bug 1134522 - New leadership page
    • b966d62 (Paul McLanahan) Remove locale update from deployment.
    • 9ed05bc (Steven Garrity) Bug 1120686 Update Fx Partners page for MWC 2015
    • d0bf649 (Jon Petto) Bug 1137904. Add headlines to MWC page.
    • 97685f5 (Steven Garrity) Bug 1137347 Add temporary links to static logos
    • b03989e (Steven Garrity) Add All press link
    • b0440e9 (schalkneethling) Fix Bug 1120700, implement new design for firefox/os
    • 5ac9fe3 (Steven Garrity) Bug 1120686 Fix overlaping menus Mobile partners nav was overlapping family nav submneu due to excessive z-index
    • 1de1bfc (Steven Garrity) Bug 1137347 Use https for static images
    • 431aea6 (Paul McLanahan) Update static files, product-details, and external files in SRC dir.
    • ebfd67a (Craig Cook) Bug 1120700 - Misc tweaks and fixes for new FxOS page
    • b6225ee (Paul McLanahan) Update revision.txt before collectstatic.
    • e8c9f28 (Craig Cook) Bug 1124734 - remove Net Neutrality promo after Feb 26
    • c0066c1 (Craig Cook) Fix bug 1138169 - MWC partner logo updates
    • 90bec7a (Francesco Lodolo (:flod)) Bug 1120700 - Fx OS consumer page: restore page title on old template
    • c0dfdee (Steven Garrity) Bug 1137347 Replace temporary MWC logos

    Air Mozilla: Webdev Extravaganza: March 2015

    Web developers across the Mozilla community get together (in person and virtually) to share what cool stuff we've been working on.

    Armen Zambrano: How to generate allthethings.json

    It's this easy!
        hg clone https://hg.mozilla.org/build/braindump
        cd braindump/community
        ./generate_allthethings_json.sh

    allthethings.json is generated based on data from buildbot-configs.
    It contains data about builders, schedulers, masters and slavepools.

    If you want to extract information from allthethings.json feel free to use mozci to help you!
    https://mozilla-ci-tools.readthedocs.org/en/latest/allthethings.html
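
    If you would rather poke at the file directly, it is plain JSON; the top-level key names in this sketch are assumed from the description above:

        import json

        with open("allthethings.json") as f:
            data = json.load(f)

        print(sorted(data))  # expect keys like builders, schedulers, masters, slavepools
        print(len(data.get("builders", {})), "builders")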


    This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

    Air Mozilla: Martes mozilleros

    Bi-weekly meeting to discuss the state of Mozilla, the community, and its projects.

    The Mozilla Blog: Unity 5 Ships and Brings One Click WebGL Export to Legions of Game Developers

    Mozilla’s goal of high quality plugin-free gaming on the Web is taking a giant leap forward today with the release of Unity 5. This new version of the world’s most popular game development tool includes a preview of their amazing WebGL exporter. Unity 5 developers are one click away from publishing their games to the Web in a whole new way, by taking advantage of WebGL and asm.js. The result is native-like performance in desktop browsers without the need for plugins.

    Unity is a very popular game development tool; in fact, the company says just under half of all game developers report using it. The engine is well suited to mobile development, and as such has been used to produce a wealth of content that is a natural fit for Web export: small download size, low memory usage, and rendering-pipeline similarities make this content straightforward to port to the Web. Unity has a long history of providing their developers the ability to ship online via a Web plugin, but in recent years browser vendors have moved to reduce their dependency on plugins for content delivery.

    A new cross browser approach was needed and it has arrived

    Mozilla and Unity worked together to find a way to bring content developed in Unity 5 to the Web using only standards-compliant APIs and JavaScript. Unity’s new approach to Web delivery is made possible by using a combination of IL2CPP and a cross-compiler named Emscripten to port its content. IL2CPP was developed at Unity Technologies and converts all in-game scripts to C++. This approach has performance benefits when porting to multiple platforms, including the Web. Unity then uses Emscripten to convert the resulting C++ to asm.js, a subset of JavaScript that can be optimized to run at near-native speeds in the browser. asm.js was pioneered by Mozilla. The code then executes in the browser as any other Web content, accessing hardware via standards-compliant APIs such as WebGL, IndexedDB, and Web Audio. The results of this collaboration have now reached the point where it’s time to get them into the hands of developers.

    “Unity has always been a strong supporter of Web gaming,” said Andreas Gal, CTO of Mozilla. “With the ability to do plugin-free WebGL export with Unity 5, Mozilla is excited to see Unity promoting the Web as a first-class platform for their developers. One-click export to WebGL will give Unity’s developers the ability to share their content with a new class of user.”

    Dead Trigger 2 | Angry Bots | AaaaaAAaaaAAAaaAAAAaAAAAA for The Awesome!

    Clicking on the images above will take you to live examples of Unity 5 exports.

    At GDC, Mozilla will be providing a first look at WebGL 2. Unity has redeveloped their Teleporter demo to showcase the technology in action. This is the next generation of 3D on the Web, seen for the first time at the show. Mozilla’s showcase will also include titles developed in Unity and exported using the WebGL export, including Nival’s Prime World Defender and AaaaaAAaaaAAAaaAAAAaAAAAA! for Awesome by Dejobaan Games, which can be played right on their website. You can also try Dead Trigger 2 and Angry Bots, available via Unity Technologies’ website.

    For more information on Unity’s news please see their blog post.

    For more information on Mozilla’s news at GDC see this post.

    The Mozilla Blog: Bringing Native Games to the Web is About to get a Whole Lot Easier

    GDC 2015 is a major milestone in a long term collaboration between Mozilla and the world’s biggest game engine makers. We set out to bring high performance games to the Web without plugins, and that goal is now being realized. Unity Technologies is including the WebGL export preview as part of their Unity 5 release, available today. Epic Games has added a beta HTML5 exporter as part of their regular binary engine releases. This means plugin-free Web deployment is now in the hands of game developers working with these popular tools. They select the Web as their target platform and, with one click, they can build to it. Now developers can unlock the world’s biggest open distribution platform leveraging two Mozilla-pioneered technologies, asm.js and WebGL.

    What has changed?

    Browser vendors are moving to reduce their dependency on plugins for content delivery, with Chrome planning to drop support for NPAPI entirely. Developers such as King, Humble Bundle, Game Insight, and Zynga are using Emscripten to bring their C and C++ based games to the Web. Disney has shipped Where's My Water on Firefox OS, which was ported using the same technology. Emscripten allows developers to cross-compile their native games to asm.js, a subset of JavaScript that can be optimized to run at near-native speeds. However, this approach to Web delivery can be challenging to use, and most of these companies have been working with in-house engines to achieve their goals. This has put some of the most advanced Web deployment techniques out of reach of the majority of developers, until now.

    The technology is spreading

    Browser support for the underlying Web standards is growing. WebGL has now spread to all modern browsers, both desktop and mobile. We are seeing all browsers optimize for asm.js-style code, with Firefox and Internet Explorer committed to advanced optimizations.

    “With the ability to reach hundreds of millions of users with just a click, the Web is a fantastic place to publish games,” said Andreas Gal, CTO of Mozilla. “We’ve been working hard at making the platform ready for high performance games to rival what’s possible on other platforms, and the success of our partnerships with top-end engine and game developers shows that the industry is taking notice.”

    Handwritten JavaScript games: can you spot the difference?

    At GDC, Mozilla will be showcasing a few amazing examples of HTML5 using handwritten JavaScript. The Firefox booth will include a demonstration of a truly ubiquitous product called Tanx, developed by PlayCanvas. It runs on multiple desktop and mobile platforms; it can even be played inside an iOS WebView launched within Twitter. Gamepad and multiplayer support are also part of the experience. Mozilla will also be featuring The Marvelous Miss Take by Wonderstruck and Turbulenz. This title will ship soon on Firefox Marketplace and is available on Steam today. For Steam distribution, the HTML5 application is packaged as a native application, but you would be hard pressed to know it.

    Not done yet

    Mozilla is committed to advancing what is possible on the Web. While already capable of running great game experiences, there is plenty of potential still to be unlocked. This year’s booth showcase will include some bleeding edge technologies such as WebGL 2 and WebVR, as well as updated developer tools aimed at game and Web developers alike. These tools will be demonstrated in our recently released 64-bit version of Firefox Developer Edition. Mozilla will also be providing developers access to SIMD and experimental threading support. Developers are invited to start experimenting with these technologies, now available in Firefox Nightly Edition. Visit the booth to learn more about Firefox Marketplace, now available in our Desktop, Android, and Firefox OS offerings as a distribution opportunity for developers.

    To learn more about Mozilla’s presence at GDC, read articles from the developers on the latest topics, or learn how to get involved, visit games.mozilla.org or come see us at South Hall Booth #2110 till March 6th. For press inquiries please email press@mozilla.com.

    Florian Quèze: Mozilla not accepted for Google Summer of Code 2015

    As you may have already seen, Mozilla is not in the list of organizations accepted for Google Summer of Code 2015.

    People who have observed the list carefully may have noticed that there are fewer accepted organizations this year: 137 (down from 190 in 2014 and 177 in 2013). Other organizations that have participated successfully several times are also not in the 2015 list (eg. Linux Foundation, Tor, ...).

    After a quick email exchange with Google last night, here is the additional information I have:
    • not accepting Mozilla was a difficult decision for them. It is not the result of a mistake on our part or an accident on their side.
    • there's an assumption that not participating for one year would not be as damaging for us as it would be for some other organizations, due to us having already participated many times.
    • this event doesn't negatively affect our chances of being selected next year, and we are encouraged to apply again.

    This news has been a surprise for me. I am disappointed, and I'm sure lots of people reading this are disappointed too. I would like to thank all the people who considered participating this year with Mozilla, and especially all the Mozillians who volunteered to mentor and contributed great project ideas. I would also like to remind students that while Summer of Code is a great opportunity to contribute to Mozilla, it's not the only one. Feel free to contact mentors if you would like to work on some of the suggested ideas anyway.

    Let's try again next year!

    Daniel Stenberg: curl: embracing github more

    Pull requests and issues filed on github are most welcome!

    The curl project has been around for a long time by now and we’ve been through several different version control systems. The most recent switch was when we switched to git from CVS back in 2010. We were late switchers but then we’re conservative in several regards.

    When we switched to git we also switched to github for the hosting, after having been self-hosted for many years before that. By using github we got a lot of services, goodies and reliable hosting at no cost. We’ve been enjoying that ever since.

    However, as we have been a traditional mailing-list-driven project for a long time, I have previously not properly embraced and appreciated pull requests and issues filed at github, since they don’t really follow the old model very well.

    Just very recently I decided to stop fighting those methods and instead go with them. A quick poll among my fellow teammates showed no strong opposition, and we are now going full force ahead in a more github-embracing style. I hope that this will lower the barrier and remove friction for newcomers, allowing more people to contribute more easily.

    As an effect of this, I would also like to encourage everyone who is interested in this project, as a user of libcurl or as a contributor to and hacker of libcurl, to skip over to the curl github home and press the ‘watch’ button to get notified of future pull requests and issues as they appear.

    We also offer this helpful guide on how to contribute to the curl project!

    Mike Taylor: WAP Telemetry in Firefox Mobile browsers, Part 1

    Telemetry in Firefox is how we measure stuff in the browser—anything from how fast GIFs are decoded, to how many people opened the Dev Tools animation inspector. You can check out the collection of gathered results at http://telemetry.mozilla.org or see what your browser is sending (or disable it, if that's your thing) in about:telemetry.

    One question the Web Compat team at Mozilla is interested in is whether Firefox for Android and Firefox OS users are being sent more than their fair share of WAP content (typically WML or XHTMLMP sites).

    (Personally, I missed out on WAP because I was too afraid to open the browser on my Nokia and have to pay for data in the early 2000s. (Also I didn't live in Japan.))

    Here's the kind of amazing content that Firefox Mobile users are served and are unable to see:

    image I stole (without permission) from http://www.petecampbell.com

    Since Gecko doesn't know how to decode WAP, the browser calls it a day and treats it as application/octet-stream, which results in a prompt for the user to download the page. Check out the dependencies of these bugs for some more of the gritty details.

    As to why we're sent WAP stuff in the first place, this is likely due to old UA detection libraries that don't recognize the User-Agent string. The logical assumption, therefore, is that this unknown browser is some kind of ancient proto-HTML-capable graphing calculator. Naturally you would want to serve that kind of user agent WAP, rather than a Plain-Old HTTP Site (POHS).
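
    A toy version of such a legacy sniffer (entirely invented, not any real library) shows the failure mode: anything missing from the lookup table falls through to WAP.

        # Hypothetical server-side UA sniffing from the WAP era.
        KNOWN_GOOD_TOKENS = ("iPhone", "Chrome", "Safari", "Opera")

        def pick_content_type(user_agent):
            if any(token in user_agent for token in KNOWN_GOOD_TOKENS):
                return "text/html"
            # Unknown UA? Must be an ancient feature phone: serve WAP.
            return "application/vnd.wap.xhtml+xml"

        fx_android = "Mozilla/5.0 (Android; Mobile; rv:39.0) Gecko/39.0 Firefox/39.0"
        print(pick_content_type(fx_android))  # -> the WAP type Gecko can't decode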

    So this seems like a pretty good opportunity to use Telemetry to measure how commonly this happens. If it's happening all the time, we can push for some form of actual support in Gecko itself. But if it's exceedingly rare we can all move on with our lives, etc.

    To measure this we landed a histogram named HTTP_WAP_CONTENT_TYPE_RECEIVED. I made a not-very-useful and mostly buggy visualization of the data we've gathered so far (from Nightly 39 users) using the Telemetry.js API here: http://miketaylr.github.io/compat-telemetry-dashboards/wap.html. This updates every night and will need a few months of measuring before we can make any real decisions so I won't bother publishing any results just yet.

    I will note that these patches haven't been taken up yet by Mozilla China's version of Firefox for Android which is one region we suspect receives more WAP than the West.

    OK, so that's Part 1 of this exciting 2-part WAP Telemetry series. In Part 2 (which might actually get written tomorrow, depending on a number of factors, all of which are probably just laziness) I'll write out the more mundane technical details of landing a Telemetry patch in Gecko.

    Byron Jones: happy bmo push day!

    the following changes have been pushed to bugzilla.mozilla.org:

    • [1134392] Need edits to Recruiting Component
    • [1136222] Adding “Rank” to Product:Core Component: webRTC, webRTC: Audio/Video, webRTC: Signaling, webRTC: Networking
    • [1136687] form.reps.mentorship calls an invalid method (Can’t locate object method “realname” via package “Bugzilla::User”)
    • [1108823] removing the privacy review bug
    • [1136979] Minor Brand Initiation Form Updates
    • [880552] Add links to socorro from the crash signatures in show_bug.cgi

    discuss these changes on mozilla.tools.bmo.



    Raniere Silva: Call for MathML May Meeting

    Last month we didn’t have our monthly MathML meeting, but this month it will happen. Please reserve March 11th at 8pm UTC (check the time at your location here) and add topics to the PAD.

    Read more...

    Chris Double: Firefox Media Source Extensions Update

    This is an update on some recent work on the Media Source Extensions API in Firefox. There has been a lot of work done on MSE and the underlying media framework by Gecko developers and this update just covers some of the telemetry and exposed debug data that I’ve been involved with implementing.

    Telemetry

    Mozilla has a telemetry system to get data on how Firefox behaves in the real world. We’ve added some MSE video stats to telemetry to help identify usage patterns and possible issues.

    Bug 1119947 added information on what state an MSE video is in when the video is unloaded. The intent of this is to find out if users are exiting videos due to slow buffering or seeking. The data is available on telemetry.mozilla.org under the VIDEO_MSE_UNLOAD_STATE category. This has five states:

    0 = ended, 1 = paused, 2 = stalled, 3 = seeking, 4 = other

    The data provides a count of the number of times a video was unloaded for each state. If a large number of users were exiting during the stalled state then we might have an issue with videos stalling too often. Looking at current stats on beta 37 we see about 3% unloading on stall with 14% on ended and 57% on other. The ‘other’ represents unloading during normal playback.

    Bug 1127646 will add additional data to get:

    • Join Latency - time between video load and video playback for autoplay videos
    • Mean Time Between Rebuffering - play time between rebuffering hiccups

    This will be useful for determining performance of MSE for sites like YouTube. The bug is going through the review/comment stage and when landed the data will be viewable at telemetry.mozilla.org.

    about:media plugin

    While developing the Media Source Extensions support in Firefox we found it useful to have a page displaying internal debug data about active MSE videos.

    In particular it was good to be able to get a view of what buffered data the MSE JavaScript API had and what our internal Media Source C++ code stored. This helped track down issues involving switching buffers, memory size of resources, and other similar things.

    The internal data is displayed in an about:media page. Originally the page was hard coded in the browser but :gavin suggested moving it to an addon. The addon is now located at https://github.com/doublec/aboutmedia. That repository includes the aboutmedia.xpi which can be installed directly in Firefox. Once installed you can go to about:media to view data on any MSE videos.

    To test this, visit a video that has MSE support in a nightly build with the about:config preferences media.mediasource.enabled and media.mediasource.mp4.enabled set to true. Let the video play for a short time then visit about:media in another tab. You should see something like:

    https://www.youtube.com/watch?v=3V7wWemZ_cs
      mediasource:https://www.youtube.com/6b23ac42-19ff-4165-8c04-422970b3d0fb
        currentTime: 101.40625
        SourceBuffer 0
          start=0 end=14.93043
        SourceBuffer 1
          start=0 end=15
    
        Internal Data:
          Dumping data for reader 7f9d85ef1800:
            Dumping Audio Track Decoders: - mLastAudioTime: 7.732243
              Reader 1: 7f9d75cba800 ranges=[(10.007800, 14.930430)] active=false size=79880
              Reader 0: 7f9d85e88000 ranges=[(0.000000, 10.007800)] active=false size=160246
            Dumping Video Track Decoders - mLastVideoTime: 7.000000
              Reader 1: 7f9d75cbd800 ranges=[(10.000000, 15.000000)] active=false size=184613
              Reader 0: 7f9d85985000 ranges=[(0.000000, 10.000000)] active=false size=1281914

    The first portion of the displayed data shows the JS API’s view of the buffered data:

    currentTime: 101.40625
      SourceBuffer 0
        start=0 end=14.93043
      SourceBuffer 1
        start=0 end=15

    This shows two SourceBuffer objects, one containing data from 0-14.9 seconds and the other from 0-15 seconds. One of these will be video data and the other audio. The currentTime attribute of the video is 101.4 seconds. Since there is no buffered data for this range, the video is likely buffering. I captured this data just after seeking, while it was waiting for data from the seeked point.

    The second portion of the displayed data shows information on the C++ objects implementing media source:

    Dumping data for reader 7f9d85ef1800:
      Dumping Audio Track Decoders: - mLastAudioTime: 7.732243
        Reader 1: 7f9d75cba800 ranges=[(10.007800, 14.930430)] active=false size=79880
        Reader 0: 7f9d85e88000 ranges=[(0.000000, 10.007800)] active=false size=160246
      Dumping Video Track Decoders - mLastVideoTime: 7.000000
        Reader 1: 7f9d75cbd800 ranges=[(10.000000, 15.000000)] active=false size=184613
        Reader 0: 7f9d85985000 ranges=[(0.000000, 10.000000)] active=false size=1281914

    A reader is an instance of the MediaSourceReader C++ class. That reader holds two SourceBufferDecoder C++ instances, one for audio and the other for video. Looking at the video decoder, it has two readers associated with it. These readers are instances of a derived class of MediaDecoderReader, which is tasked with the job of reading frames from a particular video format (WebM, MP4, etc).

    The two readers have buffered data ranging from 0-10 seconds and from 10-15 seconds respectively. Neither is ‘active’, meaning neither is currently the video stream used for playback; this will be because we just started a seek. You can view how buffer switching works by watching which of these becomes active as the video plays. The size is the amount of data in bytes that the reader is holding in memory. mLastVideoTime is the presentation time of the last processed video frame.

    MSE videos will have data evicted as they are played. The size threshold for eviction defaults to 75MB and can be changed with the media.mediasource.eviction_threshold preference in about:config. When data is appended via the appendBuffer method on a SourceBuffer, an eviction routine is run. If more data than the threshold is held, then we start removing portions of the data held in the readers. This shows up in about:media as start and end ranges being trimmed, or readers being removed entirely.
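
    As a rough conceptual model of that policy (simplified to evicting whole readers; Gecko actually trims ranges, and this is not its real algorithm):

        EVICTION_THRESHOLD = 75 * 1024 * 1024  # media.mediasource.eviction_threshold

        def evict_if_needed(readers):
            # readers: dicts shaped like the about:media dump above
            held = sum(r["size"] for r in readers)
            # Prefer evicting inactive readers (not the current playback stream).
            for r in sorted(readers, key=lambda r: r["active"]):
                if held <= EVICTION_THRESHOLD:
                    break
                held -= r["size"]
                r["size"], r["ranges"] = 0, []  # shows up as trimmed/removed readers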

    This internal data is most useful for Firefox media developers. If you encounter stalls playing videos or unusual buffer switching behaviour then copy/pasting the data from about:media in a bug report can help with tracking the problem down. If you are developing an MSE player then the information may also be useful to find out why the Firefox implementation may not be behaving how you expect.

    The source of the addon is on github and relies on a chrome-only debug method, mozDebugReaderData on MediaSource. Patches to improve the data and functionality are welcome.

    Status

    Media Source Extensions is still in progress in Firefox and can be tested on Nightly, Aurora and Beta builds. The current plan is to enable support limited to YouTube only in Firefox 37 on Windows and Mac OS X for MP4 videos. Other platforms, video formats and wider site usage will be enabled in future versions as the implementation improves.

    To track work on the API you can follow the MSE bug in Bugzilla.

    Geoffrey MacDougall: Infographic: Contribution & Fundraising in 2014

    2013 was an amazing year. Which is why I’m especially proud of what we accomplished in 2014.

    We doubled our small dollar performance. We tripled our donor base. We met our target of 10,000 volunteer contributors. And we matched our exceptional grant performance.

    We also launched our first, large-scale advocacy campaign, playing a key role in the Net Neutrality victory.

    But best of all is that close to 100 Mozillians share the credit for pulling this off.

    Here’s to 2015 and to Mozilla continuing to find its voice and identity as a dynamic non-profit.

    A big thank you to everyone who volunteered, gave, and made it happen.

    [Image: fundraising-infographic-2014 (click through for the full-size version)]



    Rizky Ariestiyansyah: March Application Curation Board Task

Have you visited the Marketplace lately to see the app nominations in the spotlight? We just refreshed the ever-present “Mozilla Communities” apps collection, and a lot of apps now populate the recent “Cats” and “Outer Space” collections. This time we are moving on to prepare next month’s featured applications on the Firefox Marketplace.

    Emma IrwinWebmaker Exploratory

    Two years ago I proposed a Webmaker Club at my daughter’s school, and it was turned down in an email:

 Because it involves students putting (possibly) personal info/images on-line, we are not able to do the club at this time. They did say that they may have to reconsider in the future because more and more of life is happening on-line.

One year later – because our principal is amazing and sponsored it – I had a ‘lunch time’ Webmaker Club at my daughter’s elementary school (grades 4 & 5). It was great fun, and as always I learned a lot thanks to the challenges: handling the diversity of attendance and interests, and the limited time. I never get tired of helping kids ‘make the thing they are imagining’.

This year, I was excited to be invited to lead a Webmaker ‘Exploratory’ in our town’s middle school (grades 6-8). Exciting on so many levels, but two primarily:

    1) Teachers and schools are recognizing the need for web literacy (and its absence), and that it should be offered as part of primary education.

2) Schools are putting faith in community partnerships to teach. At least this is what it feels like to me – pairing a technically-strong teacher with a community expert in coding/web (whatever) is a winning situation.

My exploratory ran for 7 weeks – we started with 28 kids, and lost a few to other exploratories as they realized that HTML (for example) wasn’t something they wanted to learn. Of those 28 kids, only 3 were girls, which made me sad; I really have to figure out better messaging. We covered the basics of HTML, CSS and then JavaScript, and slowly built a Memory Card game. Each week I started the class off with a Thimble template representing a stage in the ‘building’.

    Week3, Week4, Week5, Week6, Week7

I wrote specific instructions for each week that we tracked on a wiki; we used Creative Commons Image Search and talked about our digital footprint.

    What worked

Having an ‘example make’ of the milestone for the class, so that each week kids could see in advance what they were making.

Having a ‘starting template’ for the lesson helped those kids who missed a class catch up quickly.

Being flexible about that template meant those kids who preferred to work on their own single ‘make’ could still challenge themselves a bit more.

Baked-in Web Literacy: CC image search brought up conversations about ownership and sharing on the web, and using a wiki led to discussion about how Wikimedia editors build content, and about participating in open communities.

    Sending my teacher-helper the curriculum a few days before, so she could prepare as a mentor.

Having some ‘other activities’ in my back pocket for kids who got bored or finished early. These were just things like ‘check out this Hour of Code tutorial’.

    What didn’t work

We were sharing a space with the ‘year book’ team, who also used the internet, and sometimes our internet was moving slower than a West Coast banana slug. In our class ‘X-Ray Goggles’ challenge, kids sat for long periods of time before being able to do much. Some also had challenges saving/publishing their X-Ray Goggles make.

Week 2: to get around the slow internet, I brought everyone USB sticks and taught them to work locally. This was also a bit of a fail, as I realized many in the group didn’t know simple terms like ‘directory’ and ‘folder’ – I made a wrong assumption that they had this basic knowledge. Also, I should have collected the USB sticks after class, because most were lost or damaged in the care of students. We went back to slow internet – although it was never as bad as that first day.

Having only myself and one teacher with that many kids meant we were running between kids, which was also slightly unfair to the teacher, who was learning along with the group. It also sometimes meant kids waited too long for help.

Not all kids liked the game we were making.

So overall I think it went well – we had some wonderful kids, and I was proud of all of them. The final outcome, the sponsoring teacher and I realized, was that many of the lessons (coding, Wikipedia, CC) could easily fit into any class project, rather than having Webmaking as its ‘own class’.

So in future, that may be the next way I participate: as someone who comes into, say, a social studies class or history class and helps students put together a project on the web. Perhaps that’s how the community can offer its help to teachers in schools – as a way to limit large commitments like running an entire program, but to have a longer-lasting and embedded impact in schools.

For the remainder of the year, and next, my goal seems to be as a ‘Webmaker Plugin’, helping integrate web literacy into existing class projects :)

    Jared WeinAn update on my mentoring program

    Today is the start of the third week of the mentoring program.

    Since the start of the program, four bugs have been marked fixed:

    1. Bug 951695 – Consider renaming “Character Encoding” to “Text Encoding”
    2. Bug 782623 – Name field in Meta tags often empty
    3. Bug 1124271 – Clicking the reader mode button in an app tab opens reader mode in a new tab
    4. Bug 1113761 – Devtools rounds sizes up way too aggressively (and not reflecting actual layout). e.g. rounding 100.01px up to 101px

    Also, the following bugs are in progress and look like they should be ready for review soon:

    1. Bug 1054276 – In the “media” view, the “save as” button saves images with the wrong extension
    2. Bug 732688 – No Help button in the Page Info window

    The bugs currently being worked on are:

    1. Bug 1136526 – Move silhouetted versions of Firefox logo into browser/branding
    2. Bug 736572 – pageinfo columns should have arrows showing which column is sorted and sort direction
    3. Bug 418517 – Add “Select All” button to Page Info “Media” tab
    4. Bug 967319 – Show a nodesList result with natural order

    I was hoping to have 8-9 bugs fixed by this time, but I’m happy with four bugs fixed and two bugs being pretty close. Bug 967319 in the “being worked on” section is also close, but still needs work with tests before it can be ready for review.


    Tagged: firefox, mentoring, mozilla, planet-mozilla

    Air MozillaMozilla Weekly Project Meeting

Mozilla Weekly Project Meeting: the Monday project meeting.

    Anthony HughesImproving Recognition

    I’ve been hearing lately that Mozilla QA’s recognition story kind of sucks with some people going completely unrecognized for their efforts. Frankly, this is embarrassing!

    Some groups have had mild success attempting to rectify this problem but not all groups share in this success. Some of us are still struggling to retain contributors due to lack of recognition; a problem which becomes harder to solve as QA becomes more decentralized.

    As much as it pains me to admit it, the Testdays program is one of these areas. I’ve blogged, emailed, and tweeted about this but despite my complaining, things really haven’t improved. It’s time for me to take some meaningful action.

    We need to get a better understanding of our recognition story if we’re ever to improve it. We need to understand what we’re doing well (or not) and what people value so that we can try to bridge the gaps. I have some general ideas but I’d like to get feedback from as many voices as possible and not move forward based on personal assumptions.

I want to hear from you, whether you currently contribute or have in the past; whether you’ve written code, run some tests, filed some bugs, or are still learning. I want to hear from everyone.

Look, I’m here admitting we can do better, but I can’t do that without your help. So please, help me.

    Daniel GlazmanAdobe Edge Reflow anyone?

I received this morning a message from the Adobe Edge Reflow prerelease forum that triggered my interest. I must admit I did not really follow what happened there during the last twelve months, for various reasons... But this morning it was different. In short, the author had questions about the fate of Edge Reflow, in particular because of the deep silence on that forum...

Adobe announced Edge Reflow in Q3 2012, I think. It followed the announcement of Edge Code a while earlier. Reflow was aimed at visual responsive design in a new, cool, interactive desktop application with mobile and Photoshop links. The first public preview was announced in February 2013, and a small community of testers and contributors gathered around the Adobe prerelease fora. Between January 2013 and now, roughly 1300 messages were sent there.

Reflow is an html5/JS app turned into a desktop application through the magic of CEF. It has a very cool and powerful UI, superior management of simple Media Queries, excellent management of colors, backgrounds, layers, magnetic grids and more. All in all, a very promising application for Web authoring.

But the last available build of Reflow, again through the prerelease web site, is only a 0.57.17154, and it is now 8 months old. After two and a half years, Reflow is still not here and there are reasons to worry.

    First, the team (the About dialog lists more than 20 names...) seems to have vanished and almost nothing new has been contributed/posted to Reflow in the last six to eight months.

Second, the application still suffers from things I identified as rather severe issues early on: the whole box model of the application is based on CSS floats, and is thus not in line with what modern web designers are looking for. Eh, it's not even using absolute positioning... It also means it's going to be rather complicated to adapt it to grids and flexbox, not even mentioning Regions...

Reflow also made the choice to generate Web pages instead of editing Web pages... It means projects are saved in a proprietary format and only exported to html and CSS. It's impossible to take an existing Web page and open it in Reflow to edit it. In a world of Web design where authors use heterogeneous environments, I considered that a fatal mistake. I know - trust me, I perfectly know - that making html the pivot format of Reflow would have implied some major love and a lot, really a lot, of work. But not doing it meant that Edge Reflow had to be at the very beginning of the editorial chain, and that seemed to me an unbearable market restriction.

    Then there was the backwards compatibility issue. Simply put, how does one migrate Dreamweaver templates to Reflow? Short answer, you can't...

I suspect Edge Reflow is now at least on hold, more probably stopped. More than two years and still no 1.0, on an application that should have seen a 1.0beta after six to eight months, is not a good sign anyway. After Edge Code became Brackets in November 2014, this raises a lot of questions about the Edge concept and product line. Edge Animate seems to still be maintained at Adobe (there's our old Netscape friend Kin Blas in the list of credits), but I would not be surprised if the name changed in the future.

    Too bad. I was, in the beginning, really excited by Edge Reflow. I suspect we won't hear about it again.

    Henrik SkupinFirefox Automation report – week 51/52 2014

In this post you can find an overview of the work that happened in the Firefox Automation team during weeks 51 and 52 of 2014. I’m sorry for this very late post, but changes to our team – which I will get to in my next post – loaded me up with lots more work and didn’t leave me the time for writing status reports.

    Highlights

Henrik started work towards a Mozmill 2.1 release. For that he first had to upgrade a couple of mozbase packages to get the latest Mozmill code on master working again. Once that was done, the patch for handling parent sections in manifest files finally landed; it was originally written by Andrei Eftimie and had been sitting around for a while. That addition allows us to use mozhttpd to serve test data via a local HTTP server. Last but not least, another important feature went in which lets us better handle application disconnects. There are still some more bugs to fix before we can actually release version 2.1 of Mozmill.

Given that we only have the capacity to fix the most important issues in the Mozmill test framework, Henrik started to mass-close existing Mozmill bugs, so only a handful of bugs will remain open. If there is something important you want to see fixed, we encourage you to start working on the appropriate bug.

    For Mozmill CI we got the new Ubuntu 14.10 boxes up and running in our staging environment. Once we can be sure they are stable enough, they will also be enabled in production.

    Individual Updates

    For more granular updates of each individual team member please visit our weekly team etherpad for week 51 and week 52.

    Meeting Details

    If you are interested in further details and discussions you might also want to have a look at the meeting agenda, the video recording, and notes from the Firefox Automation meeting of week 51 and week 52.

    Mozilla Open Policy & Advocacy BlogCISA threatens Internet security and undermines user trust

    Protecting the privacy of users and the information collected about them online is crucial to maintaining and growing a healthy and open Web. Unfortunately, there have been massive threats that weaken our ability to create the Web that we want to see. The most notable and recent example of this is the expansive surveillance practices of the U.S. government that were revealed by Edward Snowden. Even though it has been nearly two years since these revelations began, the U.S. Congress has failed to pass any meaningful surveillance reform, and is about to consider creating new surveillance authorities in the form of the Cybersecurity Information Sharing Act of 2015.

    We opposed the Cyber Intelligence Sharing and Protection Act in 2012 – as did a chorus of privacy advocates, information security professionals, entrepreneurs, and leading academics, with the President ultimately issuing a veto threat. We believe the newest version of CISA is worse in many respects, and that the bill fundamentally undermines Internet security and user trust.

    CISA is promoted as facilitating the sharing of cyber threat information, but:

• is overbroad in scope, allowing virtually any type of information to be shared and to be used, retained, or further shared not just for cybersecurity purposes, but for a wide range of other offences including arson and carjacking;
    • allows information to be shared automatically between civilian and military agencies including the NSA regardless of the intended purpose of sharing, which limits the capacity of civilian agencies to conduct and oversee the exchange of cybersecurity information between the private sector and sector-specific Federal agencies;
    • authorizes dangerous countermeasures that could seriously damage the Internet; and
    • provides blanket immunity from liability with shockingly insufficient privacy safeguards.

    The lack of meaningful provisions requiring companies to strip out personal information before sharing with the government, problematic on its own, is made more egregious by the realtime sharing, data retention, lack of limitations, and sweeping permitted uses envisioned in the bill.

    Unnecessary and harmful sharing of personal information is a very real and avoidable consequence of this bill. Even in those instances where sharing information for cybersecurity purposes is necessary, there is no reason to include users’ personal information. Threat indicators rarely encompass such details. Furthermore, it’s not a difficult or onerous process to strip out personal information before sharing. In the exceptional cases where personal information is relevant to the threat indicator, those details would be so relevant to mitigating the threat at hand that blanket immunity from liability for sharing would not be necessary.

    We believe Congress should focus on reining in the NSA’s sweeping surveillance authority and practices. Concerns around information sharing are at best a small part of the problem that needs to be solved in order to secure the Internet and its users.

    Daniel StenbergMore HTTP framing attempts

    Previously, in my exciting series “improving the HTTP framing checks in Firefox” we learned that I landed a patch, got it backed out, struggled to improve the checks and finally landed the fixed version only to eventually get that one backed out as well.

    And now I’ve landed my third version. The amendment I did this time:

When receiving HTTP content that is content-encoded and compressed, I learned that with deflate compression there is basically no good way for us to know if the content gets prematurely cut off: deflate streams lack a footer too often for checking for one to make any sense. gzip streams, however, end with a footer, so it is easier to reliably detect when they are incomplete. (As was discovered before, the Content-Length: header is far too often not updated by the server, so it instead wrongly shows the uncompressed size.)

This (deflate vs gzip) knowledge is now used by the patch, meaning that deflate-compressed downloads can still be cut off without the browser noticing…
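
To illustrate why the gzip case is the tractable one, here is a minimal Python sketch of the check (my illustration, not the actual Necko C++ code): a gzip stream only reports end-of-stream once its 8-byte footer (CRC32 plus length) has been seen and verified, so truncation is detectable.

import gzip
import zlib

def gzip_complete(data):
    """Return True only if data is a whole gzip stream, footer included."""
    d = zlib.decompressobj(wbits=16 + zlib.MAX_WBITS)  # expect a gzip wrapper
    try:
        d.decompress(data)
    except zlib.error:
        return False
    # .eof (Python 3.3+) flips to True only after the end-of-stream marker
    # *and* the 8-byte trailer have been consumed and verified.
    return d.eof

body = gzip.compress(b"hello world" * 1000)
print(gzip_complete(body))        # True
print(gzip_complete(body[:-8]))   # False: the footer was cut off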

    Will this version of the fix actually stick? I don’t know. There’s lots of bad voodoo out there in the HTTP world and I’m putting my finger right in the middle of some of it with this change. I’m pretty sure I’ve not written my last blog post on this topic just yet… If it sticks this time, it should show up in Firefox 39.

[Image: bolt-cutter]

    This Week In RustThis Week in Rust 72

    Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Send me an email! Want to get involved? We love contributions.

    This Week in Rust is openly developed on GitHub. If you find any errors or omissions in this week's issue, please submit a PR.

    What's cooking on master?

    135 pull requests were merged in the last week, and 1 RFC PR.

    Now you can follow breaking changes as they happen!

    Breaking Changes

    Other Changes

    New Contributors

    • defuz
    • FuGangqiang
    • JP-Ellis
    • lummax
    • Michał Krasnoborski
    • nwin
    • Raphael Nestler
    • Ryan Prichard
    • Scott Olson

    Approved RFCs

    Mysteriously, during the week of February 23 to March 1 there were no RFCs approved to The Rust Language.

    New RFCs

    Quote of the Week

    "I must kindly ask that you please not go around telling people to disregard the rules of our community. Violations of Rule #6 will absolutely not be tolerated."

    kibwen is serious about upholding community standards.

    Notable Links

    Project Updates

    Upcoming Events

    If you are running a Rust event please add it to the calendar to get it mentioned here. Email Erick Tryzelaar or Brian Anderson for access.

    The Mozilla BlogFirefox OS Proves Flexibility of Web: Ecosystem Expands with More Partners, Device Categories and Regions in 2015

    Orange to bring Firefox OS to 13 new markets in Africa and Middle East; Mozilla, KDDI, LG U+, Telefónica and Verizon collaborate on new category of phones based on Firefox OS

    Barcelona, Spain – Mobile World Congress – March 1st, 2015 – Mozilla, the mission-based organization dedicated to keeping the power of the Web in people’s hands, welcomed new partners and devices to the Firefox OS ecosystem at an event in Barcelona, leading into Mobile World Congress.

    Mozilla President Li Gong summarized the status of Firefox OS, which currently scales across devices ranging from the world’s most affordable smartphone to 4K Ultra HD TVs. “Two years ago Firefox OS was a promise. At MWC 2014, we were able to show that Firefox OS scales across price ranges and form factors. Today, at MWC 2015, we celebrate dozens of successful device launches across continents, adoption of Firefox OS beyond mobile, as well as growing interest and innovation around the only truly open mobile platform. Also, we are proud to report that three major chip vendors contribute to the Firefox OS ecosystem.”

    Firefox OS MWC 2015 News in Detail:

    •    Mozilla, KDDI, LG U+, Telefonica and Verizon Wireless collaborate to create a new category of intuitive and easy to use Firefox OS phones: The companies are collaborating to contribute to the Mozilla community and create a new range of Firefox OS phones for a 2016 launch in various form factors – flips, sliders and slates – that balance the simplicity of a basic phone (calls, texts) with the more advanced features of a smartphone such as fun applications, content, navigation, music players, camera, video, LTE, VoLTE, email and Web browsing. For more details and supporting quotes see blog.mozilla.org.

•    Orange announces bringing Firefox OS to 13 markets as part of a new digital offer: Today, Orange puts the mobile Internet within reach of millions more people, previously not addressed, with the launch of a new breakthrough digital offer across its significant African and Middle Eastern footprint. The Orange Klif digital offer starts from under US$40 (€35), inclusive of a data, voice and text bundle, and sets a new benchmark in price that will act as a major catalyst for smartphone and data adoption across the region. The 3G Firefox OS smartphone is exclusive to Orange and will be available from Q2 in 13 of Orange’s markets in the region, including Egypt, Senegal, Tunisia, Cameroon, Botswana, Madagascar, Mali, Ivory Coast, Jordan, Niger, Kenya, Mauritius and Vanuatu.
    ALCATEL ONETOUCH collaborates with Orange and announced more details on the new phone today:

•    ALCATEL ONETOUCH expands mobile internet access with the newest Firefox OS phone, the Orange Klif. The Orange Klif offers connectivity speeds of up to 21 Mbps, is dual SIM, and includes a two-megapixel camera and micro-SD slot. The addition of the highly optimised Firefox OS allows for truly seamless Web browsing experiences, creating a powerful Internet-ready package.
    The Orange Klif is the first Firefox OS phone powered by a MediaTek processor.

    •    Mozilla revealed further details about upcoming versions of Firefox OS, among them: Improved performance and support of multi-core processors, enhanced privacy features, additional support for WebRTC, right to left language support and an NFC payments infrastructure.

•    Earlier this week, KDDI Corporation announced an investment in Monohm, a US-based provider of innovative IoT devices based on Firefox OS. Monohm’s first product, “Runcible”, will be showcased at the Mozilla booth at MWC 2015.

    Panasonic VIERA TX-CR730

    The Firefox OS ecosystem continues to expand with new partners and devices ranging from the line of Panasonic 4K Ultra HD TVs to the world’s most affordable smartphone:

    “Just months ago, Cherry Mobile introduced the ACE, the first Firefox OS smartphone in the Philippines, which is also the most affordable smartphone in the world. We are excited that the ACE, which keeps gaining positive feedback in the market, is helping lots of consumers move from feature phones to smartphones. Through the close partnership with Mozilla Firefox OS, we will continue to bring more affordable quality mobile devices to consumers,” said Maynard Ngu, Cherry Mobile CEO.

    With today’s announcements, Firefox OS will be available from leading operator partners in more than 40 markets in the next year on a total of 17 smartphones.

Firefox OS unlocks the power of the Web as the platform, and will continue to expand across markets and device categories as we move toward the Internet of Things (IoT), using open Web technology to enable operators, hardware manufacturers and developers to create innovative and customized applications and products for consumers to use across these connected devices.

    Creating Content for Mobile, on Mobile Devices
    Mozilla today unveiled the beta version of Webmaker, a free and open source mobile content creation app, which strips away the complexity of traditional Web creation. Webmaker will be available for Android, Firefox OS, and via a modern mobile browser on other devices in over 20 languages later this year. For more info, please visit webmaker.org/localweb

    The Mozilla BlogMozilla, KDDI, LG U+, Telefónica and Verizon Wireless Collaborate to Create New Category of Firefox OS Phones

    New range of intuitive and easy-to-use phones to be powered by Firefox OS

    Barcelona, Spain – Mobile World Congress – March 1, 2015
Mozilla, the mission-based organization dedicated to keeping the power of the Web in people’s hands, together with KDDI, LG U+, Telefónica and Verizon Wireless, today announced at Mobile World Congress a new initiative to create devices based on Firefox OS.

    The goal of this initiative is to create a more intuitive and easy-to-use experience (powered by Firefox OS) for consumers around the world. The companies are collaborating to contribute to the Mozilla community and create a new range of Firefox OS phones for a 2016 launch in various form factors – flips, sliders and slates – that balance the simplicity of a basic phone (calls, texts) with the more advanced features of a smartphone such as fun applications, content, navigation, music players, camera, video, LTE, VoLTE, email and Web browsing.

    Firefox OS was chosen as the platform for this initiative because it unlocks the mobile ecosystem and enables independence and innovation. This results in more flexibility for network operators and hardware manufacturers to provide a differentiated experience and explore new business ventures, while users get the performance, personalization and affordability they want packaged in a beautiful, clean and easy-to-use experience.

“By leveraging Firefox OS and the power of the Web, we are re-imagining and providing a modern platform for entry-level phones,” said Li Gong, President of Mozilla. “We’re excited to work with operator partners like KDDI, LG U+, Telefonica and Verizon Wireless to reach new audiences in both emerging and developed markets and offer customers differentiated services.”

    Yasuhide Yamamoto, Vice President, Product Sector at KDDI said “We have been gaining high attention from the market with Fx0, a high tier LTE based Firefox OS smartphone launched last December, and we have faith in the unlimited potential of Firefox OS. KDDI has been very competitive in the Japanese mature mobile phone market for decades, so we are confident that we can contribute to the Mozilla community in developing this new concept product.”

    “Telefónica is actively supporting Firefox OS, aligned with our strategy of bringing more options and more openness to our customers. Firefox OS smartphones are currently offered in 14 markets across our footprint and are helping to bring connectivity to more people who are looking for a reliable and simple user experience at affordable prices,” said Francisco Montalvo, Director, Telefónica Group Devices Unit.

    Rosemary McNally, Vice President, Device Technology at Verizon said “Verizon aims to deliver innovative new products to its customers, and this initiative is about creating a modern, simple and smart platform for basic phones. We’re looking forward to continuing to work with Mozilla and other service providers to leverage the power of Firefox OS and the Web community.”
    ###

    About Mozilla
    Mozilla has been a pioneer and advocate for the Web for more than 15 years. We create and promote open standards that enable innovation and advance the Web as a platform for all. Today, hundreds of millions of people worldwide use Mozilla Firefox to experience the Web on computers, tablets and mobile devices. With Firefox OS and Firefox Marketplace, Mozilla is driving a mobile ecosystem that is built entirely on open Web standards, freeing mobile providers, developers and end users from the limitations and restrictions imposed by proprietary platforms. For more information, visit www.mozilla.org.

    About KDDI Corporation
    KDDI, a comprehensive communications company offering fixed-line and mobile communications services, strives to be a leading company for changing times. For individual customers, KDDI offers its mobile communications (mobile phone) and fixed-line communications (broadband Internet/telephone) services under the brand name au, helping to realize Fixed Mobile and Broadcasting Convergence (FMBC). For business clients, KDDI provides comprehensive Information and Communications services, from Fixed Mobile Convergence (FMC) networks to data centers, applications, and security strategies, which helps clients strengthen their businesses. For more information please visit http://www.kddi.com/english.

    About Telefónica
    Telefónica is one of the largest telecommunications companies in the world in terms of market capitalisation and number of customers. With its best in class mobile, fixed and broadband networks, and innovative portfolio of digital solutions, Telefónica is transforming itself into a ‘Digital Telco’, a company that will be even better placed to meet the needs of its customers and capture new revenue growth. The company has a significant presence in 21 countries and a customer base of 341 million accesses around the world. Telefónica has a strong presence in Spain, Europe and Latin America, where the company focuses an important part of its growth strategy. Telefónica is a 100% listed company, with more than 1.5 million direct shareholders. Its share capital currently comprises 4,657,204,330 ordinary shares traded on the Spanish Stock Market  and on those in London, New York, Lima, and Buenos Aires.

    About Verizon Wireless
    Verizon Wireless operates the nation’s largest and most reliable 4G LTE network.  As the largest wireless company in the U.S., Verizon Wireless serves 108.2 million retail customers, including 102.1 million retail postpaid customers.  Verizon Wireless is wholly owned by Verizon Communications Inc. (NYSE, Nasdaq: VZ).  For more information, visit www.verizonwireless.com.  For the latest news and updates about Verizon Wireless, visit our News Center at http://www.verizonwireless.com/news or follow us on Twitter at http://twitter.com/VZWNews.

    Pascal FinetteLink Pack (March 1st)

    What I was reading this week:

    The Mozilla BlogWebmaker App Takes Fresh Approach to Digital Literacy

    Tomorrow at Mobile World Congress in Barcelona, Mozilla will release an open beta of the Webmaker app: a free, independent web publishing tool. This is an important next step in Mozilla’s effort to dramatically increase digital literacy around the world.

    The Webmaker app emerged from a year of research in Bangladesh, India and Kenya. The research pointed to two things: new smartphone users face a steep learning curve, often limiting themselves to basic apps like Facebook and not even knowing they are on the Internet; and users yearn for — and can benefit greatly from — the ability to create local, relevant content.

    Webmaker app is designed to address these needs by making it possible for anyone to quickly publish a website or an app from the moment they turn on their first smartphone. Students can build a digital bulletin board for their peers, teachers can create and distribute lesson plans, and merchants can produce websites to promote their products.

The idea is to get new smartphone users making things quickly when they get online — and then to help them do more sophisticated things over time. This ‘make first’ approach to digital literacy encourages people to see themselves as active creators rather than passive consumers. This mindset will be critical as billions of people grapple with the question ‘how and why should I use the internet?’ for the first time over the next few years.

    Webmaker app is free, open source and available in over 20 languages. Users can share their creations using a simple URL via SMS, Facebook, WhatsApp and more. Content created in Webmaker will load in any mobile web browser. The current open beta version is available for Android, Firefox OS and modern mobile browsers. A full release is planned for later this year.

    Complementing the Webmaker app are Mozilla’s far-reaching, face-to-face learning programs. Our network of volunteer makers, mentors and educators operate in more than 80 countries. These volunteers — equipped with the app and other tools — run informal workshops in  schools, libraries and other public places to help people understand how the Web works and create content relevant to their everyday lives.  Last year alone, Mozilla volunteers ran 2,513 workshops across 450 cities.

    All of these digital literacy activities are driven by partnerships. Mozilla partners with NGOs, mobile carriers and other global organizations to ensure our digital literacy programs reach individuals who need it most. We’re joining forces with influential partners who share our passion for an open Web, local content creation and empowered users.

    When billions of first-time Web users come online, they will find a platform they can build, mold and use everyday to better their lives, businesses and education. It’s an ambitious order, but Mozilla is prepared. To participate, or learn more about our digital literacy initiatives, visit webmaker.org/localweb.

    Gervase MarkhamTop 50 DOS Problems Solved: Doubling Disk Capacity

    Q: I have been told that it is possible to convert 720K 3.5-inch floppy disks into 1.44Mb versions by drilling a hole in the casing. Is this true? How is it done? Is it safe?

    A: It is true for the majority of disks. A few fail immediately, but the only way to tell is to try it. The size and placement of the hole is, near enough, a duplicate of the write-protect hole.

    If the write-protect hole is in the bottom left of the disk, the extra hole goes in a similar position in the bottom right. Whatever you do, make sure that all traces of plastic swarf are cleared away. As to whether this technique is safe, it is a point of disagreement. In theory, you could find converted disks less reliable. My own experience over several years has been 100 per cent problem free other than those disks which have refused to format to 1.44Mb in the first place.

    You can perform a similar trick with 360K and 1.2Mb 5.25-inch disks.

    Hands up who remembers doing this. I certainly do…

    Doug BelshawWeeknotes 08/2015 and 09/2015

    Last week I was in Dubai on holiday with my family thanks to the generosity of my Dad. Here’s a couple of photos from that trip. Scroll down for this week’s updates!

    Dubai Marina

    Giraffes feeding at Al Ain Zoo

    Doug

    This (four-day) work week I’ve been:

    Mozilla

     Dynamic Skillset

    Other

    Digital Maker/Citizen badges

    Next week I’ll be at home working more on the Learning Pathways whitepaper and Web Literacy Map v1.5. I’ll also be helping out with the Clubs curriculum work where necessary.

Finally, I’m considering doing more of the work I originally envisaged this year with Dynamic Skillset, so email hello@dynamicskillset.com if you think I can help you or your organisation!

    All images by me, except header image CC BY-NC NASA’s Marshall Space Flight Center

    Cameron KaiserThe oldest computer running TenFourFox

    In the "this makes me happy" department, Miles Raymond posted his Power Macintosh 9500 (with a 700MHz G4 and 1.5GB of RAM) running TenFourFox in Tiger. I'm pretty sure there is no older system that can boot and run 10.4, but I'd be delighted to see if anyone can beat this. FWIW, the 9500 was released May 1995, making it 20 years old this year and our very own "Twentieth Anniversary" Macintosh.

    And Mozilla says you need an Intel Mac to run Firefox! Oh, those kidders! They're a laugh a minute!

Yunier José Sosa VázquezFirefox 38 will bring support for 64-bit Windows

Firefox 38 – currently in the Developer Edition channel – will be the first version of Firefox with support for 64-bit Windows. With this release, Mozilla completes its support for that architecture, since 64-bit builds already existed for Linux and Mac.

Previously, 64-bit Firefox for Windows was only available in the Nightly channel, in a testing and bug-fixing stage; now Windows users will be able to use a version of Firefox optimized for that platform. Firefox 38 brings other interesting new features, and we will talk about them later on.

The final release of Firefox 38 is planned for May 12, and it remains to be confirmed whether this version will ultimately ship – we hope it will. You can download this edition from the Aurora section of our Downloads area, for Linux and Windows, in Spanish.

    Mozilla Reps CommunityRep of the month: February 2015

Stefania Ioana Chiorean is one of the most humble, inspiring and hard-working contributors of the Reps community.

She has always been an inspiring source of enthusiasm for the Mozilla community worldwide, and her proactive way of getting things done has motivated Reps throughout. As part of the Mozilla Romania community, Ioana helps out anyone and everyone who wants to learn and make the web better. Besides spreading Mozillian news through the Mozilla Romania community’s social media accounts, she enjoys helping the SUMO community. An emboldening persona in WoMoz, Ioana encourages women’s participation in tech.

    ioana

During the last few months, Ioana has been organizing and participating in several events to promote Mozilla, like FOSDEM and OSOM, and to involve more women in Free/Open Source communities and Mozilla through the WoMoz initiative. She is also highly involved in Mozilla QA, helping to smash as many bugs as possible in several Mozilla products.

    Ioana is now driving the Buddy Up QA Pilot program, which aims to recruit and train community members to actively own testing of this project.

We also welcome Ioana as a Peer of the Reps Module and congratulate her for being the Rep of the Month!

Thanks, Ioana, for all you do for the Reps, Mozilla and the Open Web.

Cheers, little Romanian vampire!

    Don’t forget to congratulate her on Discourse!

    Adrian GaudebertSpectateur, custom reports for crash-stats

    The users of Socorro at Mozilla, the Stability team, have very specific needs that vary over time. They need specific reports for the data we have, new aggregations or views with some special set of parameters. What we developers of Socorro used to do was to build those reports for them. It's a long process that usually requires adding something to our database's schema, adding a middleware endpoint and creating a new page in our webapp. All those steps take a long time, and sometimes we understand the needs incorrectly, so it takes even longer. Not the best way to invest our time.

Nowadays, we have Super Search, a flexible interface to our data that allows users to do a lot of those specific things they need. As it is highly configurable, it's easy to keep pace with new additions to the crash reports and to evolve the capabilities of this tool. Couple that with our public API, and we can say that our users have pretty good tools to solve most of their problems. If Super Search's UI is not good enough, they can write a script that they run locally, hitting our API, and they can do pretty much anything we can do.
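
Such a local script can be only a few lines. As a rough sketch, the endpoint and parameter names below (_facets, _results_number) reflect my understanding of the public Super Search API; treat it as an illustration rather than a reference:

import requests

API = "https://crash-stats.mozilla.com/api/SuperSearch/"  # assumed endpoint

def top_signatures(product="Firefox"):
    # Ask the server for an aggregation on signature rather than raw crashes.
    params = {"product": product, "_facets": "signature", "_results_number": 0}
    resp = requests.get(API, params=params)
    resp.raise_for_status()
    return resp.json()["facets"]["signature"]

for entry in top_signatures()[:10]:
    print(entry["term"], entry["count"])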

    But that still has problems. Local scripts are not ideal: it's inconvenient to share them or to expose their results, it's hard to work on them collaboratively, it requires working on some rendering and querying the API where one could just focus on processing the data, and it doesn't integrate with our Web site. I think we can do better. And to demonstrate that, I built a prototype. Introducing...

    Spectateur

Spectateur is a service that takes care of querying the API and rendering the data for you. All you need to do is work on the data, make it what you want it to be, and share your custom report with the rest of the world. It uses a language commonly known, JavaScript, so that most people (at least at Mozilla) can understand and hack what you have done. It lets you easily save your report and gives you a URL to bookmark and to share. And that's about it, because it's just a prototype, but it's still pretty cool, isn't it?

To explain it a little more: Spectateur contains three parts. The Model lets you choose what data you want; it uses Super Search and gives you about the same capabilities as Socorro's UI. Once you have set your filters and chosen the aggregations you need, we move to the Controller. That's a simple JavaScript editor (using Ace), and you can type almost anything in there – just keep the transform function, the callback, and the last lines that set the interface, otherwise it won't work at all. There are also some limitations for security: the code is executed in a Web Worker in an iframe, so you have no access to the main page's scope, and network requests are blocked, among other things. I'm using a wonderful library called jailed; if you want to know more, please read its documentation.

Once you are done writing your controller, and you have exposed your data, you can click the Run button to create the View. It will fetch the data, run your processor on that data and then render the results following the rules you have exposed. The data can currently be displayed as a table (using jsGrid) or as a chart (using Chart.js). For details, please read the documentation of Spectateur (there's a link at the top). When you are satisfied with your custom report, click the Save button. That will save the Model and the Controller and give you a URL (by updating the URL bar). Come back to that URL to reload your report. Note that if you make a change to your report and click Save again, a new URL will be generated; the previous report won't be overwritten.

    As an example, here is a report that shows, for our B2G product, a graph of the top versions, a chart of the top signatures and a list of crash reports, all of that based on data from the last 7 days: https://spectateur.mozilla.io/#58a036ec-c5bf-469a-9b23-d0431b67f436

    I hope this tool will be useful to our users. As usual, if you have comments, feedback, criticisms, if you feel this is a waste of time and we should not invest any more time in it, or on the contrary you think this is what you needed this whole time, please please please let us know!

    Robert O'CallahanGreat Barrier Island

    Last weekend a couple of Mozillians --- David Baron and Jean-Yves Avenard --- plus myself and my children flew to Great Barrier Island for the weekend. Great Barrier is in the outer Hauraki Gulf, not far from Auckland; it takes about 30 minutes to fly there from Auckland Airport in a very small plane. The kids and I camped on Friday night at "The Green" campsite at Whangaparapara, while David and Jean-Yves stayed at Great Barrier Lodge nearby. On Saturday we did the Aotea Track in clockwise direction, heading up the west side of the hills past Maungapiko, then turning east along the South Fork Track to Mt Heale Hut for the night. (The usual continuation past Kaiaraara Hut along the Kaiaraara track had been washed out by storms last year, and we saw evidence of storm damage in the form of slips almost everywhere we went.) Even the South Fork Track had been partially rerouted along the bed of the Kaiaraara Stream. We were the only people at Mt Heale Hut and had a good rest after a reasonably taxing walk. But inspired by Jean-Yves, we found the energy to do a side trip to Mt Hobson --- the highest point on the island --- before sunset.

On Sunday we walked south out of the hills to the Kaitoke hot springs and had a dip in the hot, sulphurous water --- very soothing. Then along the road to Claris and a well-earned lunch at the "Claris, Texas" cafe. We still had lots of time to kill before our flight so we dropped our bags at the airport (I use the term loosely) and walked out to Kaitoke Beach. A few of us swam there, carefully, since the surf felt very treacherous.

    I'd never been tramping overnight at the Barrier before and really enjoyed this trip. There aren't many weekend-sized hut tramps near Auckland, so this is a great option if you don't mind paying to fly out there. The flight itself is a lot of fun.

    Rizky AriestiyansyahMozKopDarJKT February 2015 Photo

    Gallery of #MozKopDarJKT

    Mozilla ThunderbirdThunderbird Usage Continues to Grow

    We’re happy to report that Thunderbird usage continues to expand.

Mozilla measures program usage by Active Daily Installations (ADI), which is the number of pings that Mozilla servers receive as installations do their daily plugin block-list update. This is not the same as the number of active users, since some users don’t access their program each day, and some installations are behind firewalls. An estimate of active monthly users is typically done by multiplying the ADI by a factor of 3; for example, the total ADI of 9,255,280 in the table below suggests roughly 28 million active monthly users.

    To plot changes in Thunderbird usage over time, I’ve picked the peak ADI for each month for the last few years. Here’s the result:

    Thunderbird Active Daily Installations, peak value per month.

    Germany has long been our #1 country for usage, but in 4th quarter 2014, Japan exceeded US as the #2 country. Here’s the top 10 countries, taken from the ADI count of February 24, 2015:

Rank  Country             ADI (2015-02-24)
   1  Germany                    1,711,834
   2  Japan                      1,002,877
   3  United States                927,477
   4  France                       777,478
   5  Italy                        514,771
   6  Russian Federation           494,645
   7  Poland                       480,496
   8  Spain                        282,008
   9  Brazil                       265,820
  10  United Kingdom               254,381
      All Others                 2,543,493
      Total                      9,255,280

    Country Rankings for Thunderbird Usage, February 24, 2015

    The Thunderbird team is now working hard preparing our next major release, which will be Thunderbird 38 in May 2015. We’ll be blogging more about that release in the next few weeks, including reporting on the many new features that we have added.

    Liz HenryA useful Bugzilla trick

    At the beginning of February I changed teams within Mozilla and am now working as a release manager. It follows naturally from a lot of the work I’ve already been doing at Mozilla and I’m excited to join the team working with Lukas, Lawrence, and Sylvestre!

    I just learned a cool trick for dealing with several bugzilla.mozilla.org bugs at once, on MacOS X.

    1) Install Bugzilla Services.

    2) Add a keyboard shortcut as Alex Keybl describes in the blog post above. (I am using Control-Command-B)

    3) Install the BugzillaJS (Tweaks for Bugzilla) addon.

    4) Install the Tree Style Tab addon.

Now, from any text, whether in email, a desktop text file, or anywhere in the browser, I can highlight a bunch of text and the bug numbers will be parsed out of it. For example, from an email this morning:

    Bug 1137050 - Startup up Crash - patch should land soon, potentially risky
    David Major seems to think it is risky for the release.
    
    Besides that, we are going to take:
    Bug 1137469 - Loop exception - patch waiting for review
    Bug 1136855 - print preferences - patch approved
    Bug 1137141 - Fx account + hello - patch waiting for review
    Bug 1136300 - Hello + share buttons - Mike  De Boer will work on a patch today
    
    And maybe a fix for the ANY query (bug 1093983) if we have one...

    I highlighted the entire email and hit the “open in bugzilla” keystroke. This resulted in a Bugzilla list view for the 6 bugs mentioned in the email.
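
The parsing idea itself is simple. Here is a toy Python sketch of it (not the actual code behind the Bugzilla Services workflow): pull out everything that looks like a bug reference and build one list-view URL.

import re

BUG_RE = re.compile(r"\bbug\s+(\d+)", re.IGNORECASE)

def buglist_url(text):
    """Build a bugzilla.mozilla.org list view for every bug mentioned in text."""
    ids = sorted(set(BUG_RE.findall(text)), key=int)
    return "https://bugzilla.mozilla.org/buglist.cgi?bug_id=" + ",".join(ids)

email = """Bug 1137050 - Startup crash - patch should land soon
And maybe a fix for the ANY query (bug 1093983) if we have one..."""
print(buglist_url(email))
# https://bugzilla.mozilla.org/buglist.cgi?bug_id=1093983,1137050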

    Bugzilla list view example

With BugzillaJS installed, I have an extra option at the bottom of the page, “Open All in Tabs”, so if I want to triage these bugs, I can open them all at once. The tabs show up in my sidebar, indented from their parent tab. This is handy if I want to collapse this group of tabs, or close the parent tab and all its children at once (the original list view of these 6 bugs, and each of its individual tabs). Tree Style Tab is my new favorite thing!

    Tree style tabs bugzilla

    In this case, after I had read each bug from this morning and closed the tabs, my coworker Sylvestre asked me to make sure I cc-ed myself into all of them to keep an eye on them later today and over the weekend so that when fixes are checked in, I can approve them for release.

    Here I did not want to open up every bug in its own tab but instead went for “Change Several Bugs at Once” which is also at the bottom of the page.

    Bugzilla batch edit

    This batch edit view of bugs is a bit scarily powerful since it will result in bugmail to many people for each bug’s changes. When you need it, it’s a great feature. I added myself to the cc: field all in one swoop instead of having to click each tab open, click around several times in each bug to add myself and save and close the tab again.

    It was a busy day yesterday at work but I had a nice time working from the office rather than at home. Here is the view from the SF Mozilla office 7th floor deck where I was working and eating cake in the sun. Cannot complain about life, really.
    Mozilla bridge view


    Chris AtLeeDiving into python logging

    Python has a very rich logging system. It's very easy to add structured or unstructured log output to your python code, and have it written to a file, or output to the console, or sent to syslog, or to customize the output format.

    We're in the middle of re-examining how logging works in mozharness to make it easier to factor-out code and have fewer mixins.

    Here are a few tips and tricks that have really helped me with python logging:

    There can be only more than one

Well, there can be only one logger with a given name. There is a special "root" logger with no name. Multiple getLogger(name) calls with the same name will return the same logger object. This is an important property because it means you don't need to explicitly pass logger objects around in your code; you can retrieve them by name if you wish. The logging module maintains a global registry of logging objects.

    You can have multiple loggers active, each specific to its own module or even class or instance.

    Each logger has a name, typically the name of the module it's being used from. A common pattern you see in python modules is this:

    # in module foo.py
    import logging
    log = logging.getLogger(__name__)
    

    This works because inside foo.py, __name__ is equal to "foo". So inside this module the log object is specific to this module.
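
You can see the registry behaviour for yourself with a quick snippet:

import logging

a = logging.getLogger("foo")
b = logging.getLogger("foo")
assert a is b  # same object: the module keeps a global registry by name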

    Loggers are hierarchical

The names of the loggers form their own namespace, with "." separating levels. This means that if you have loggers called foo.bar and foo.baz, you can do things on logger foo that will impact both of the children. In particular, you can set the logging level of foo to show or ignore debug messages for both submodules.

    # Let's enable all the debug logging for all the foo modules
    import logging
    logging.getLogger('foo').setLevel(logging.DEBUG)
    

    Log messages are like events that flow up through the hierarchy

    Let's say we have a module foo.bar:

    import logging
    log = logging.getLogger(__name__)  # __name__ is "foo.bar" here
    
    def make_widget():
        log.debug("made a widget!")
    

    When we call make_widget(), the code generates a debug log message. Each logger in the hierarchy has a chance to output something for the message, ignore it, or pass the message along to its parent.

    The default configuration for loggers is to have their levels unset (or set to NOTSET). This means the logger will just pass the message on up to its parent. Rinse & repeat until you get up to the root logger.

    So if the foo.bar logger hasn't specified a level, the message will continue up to the foo logger. If the foo logger hasn't specified a level, the message will continue up to the root logger.

    This is why you typically configure the logging output on the root logger; it typically gets ALL THE MESSAGES!!! Because this is so common, there's a dedicated method for configuring the root logger: logging.basicConfig()

This also allows us to use mixed levels of log output depending on where the messages are coming from:

    import logging
    
    # Enable debug logging for all the foo modules
    logging.getLogger("foo").setLevel(logging.DEBUG)
    
    # Configure the root logger to log only INFO calls, and output to the console
    # (the default)
    logging.basicConfig(level=logging.INFO)
    
    # This will output the debug message
    logging.getLogger("foo.bar").debug("ohai!")
    

    If you comment out the setLevel(logging.DEBUG) call, you won't see the message at all.

    exc_info is teh awesome

All the built-in logging calls support a keyword called exc_info which, if it isn't false, causes the current exception information to be logged in addition to the log message. e.g.:

    import logging
    logging.basicConfig(level=logging.INFO)
    
    log = logging.getLogger(__name__)
    
    try:
        assert False
    except AssertionError:
        log.info("surprise! got an exception!", exc_info=True)
    

    There's a special case for this, log.exception(), which is equivalent to log.error(..., exc_info=True)

Python 3.2 introduced a new keyword, stack_info, which will output the current stack up to the logging call. Very handy to figure out how you got to a certain point in the code, even if no exceptions have occurred!
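
For example (requires Python 3.2+):

import logging
logging.basicConfig(level=logging.INFO)
log = logging.getLogger(__name__)

def inner():
    # No exception here: stack_info shows the call path that reached this line.
    log.info("how did we get here?", stack_info=True)

def outer():
    inner()

outer()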

    "No handlers found..."

    You've probably come across this message, especially when working with 3rd party modules. What this means is that you don't have any logging handlers configured, and something is trying to log a message. The message has gone all the way up the logging hierarchy and fallen off the...top of the chain (maybe I need a better metaphor).

    import logging
    log = logging.getLogger()
    log.error("no log for you!")
    

    outputs:

    No handlers could be found for logger "root"
    

    There are two things that can be done here:

    1. Configure logging in your module with basicConfig() or similar

2. Library authors should add a NullHandler at the root of their module to prevent this. See the cookbook and this blog for more details.
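
The library-author fix is a one-liner, placed in your package's top-level __init__.py:

import logging

# Swallow log records unless the application configures real handlers.
logging.getLogger(__name__).addHandler(logging.NullHandler())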

    Want more?

    I really recommend that you read the logging documentation and cookbook which have a lot more great information (and are also very well written!) There's a lot more you can do, with custom log handlers, different output formats, outputting to many locations at once, etc. Have fun!

    Blake WintonA long time ago, on a computer far far away…

    Six years ago, I started contributing to Mozilla.


    Mike ConleyThe Joy of Coding (Episode 3)

    The third episode is up! My machine was a little sluggish this time, since I had OBS chugging in the background attempting to do a hi-res screen recording simultaneously.

    Richard Milewski and I are going to try an experiment where I try to stream with OBS next week, which should result in a much higher-resolution stream. We’re also thinking about having recording occur on a separate machine, so that it doesn’t bog me down while I’m working. Hopefully we’ll have that set up for next week.

    So this third episode was pretty interesting. Probably the most interesting part was when I discovered in the last quarter that I’d accidentally shipped a regression in Firefox 36. Luckily, I’ve got a patch that fixes the problem that has been approved for uplift to Aurora and Beta. A point release is also planned for 36, so I’ve got approval to get the fix in there too. \o/

    Here are the notes for the bug I was working on. The review feedback from karlt is in this bug, since I kinda screwed up where I posted the review request with MozReview.

    Doug BelshawThe final push for Web Literacy Map v1.5 (and how you can get involved!)

    By the end of March 2015 we should have a new, localised iteration of Mozilla’s Web Literacy Map. We’re calling this ‘version 1.5’ and it’s important to note that this is a point release rather than a major new version.

    Cat with skills

    Right now we’re at the point where we’ve locked down the competencies and are now diving into the skills underpinning those competencies. To help, we’ve got an epic spreadsheet with a couple of tabs:

    Tabs

    The REVIEW tab contains lots of comments about the suitability of the skills for v1.5. On this week’s community call we copied those skills that had no comments about them to the REFINE tab:

    REFINE tab

    This is where we need your help. We’ve got skills in the REVIEW tab that, with some tweaking, can help round out those skills we’ve already transferred. It would be great if you could help us discuss and debate those. There are also some new competencies that have no skills defined at present.

    We’ve got weekly community calls where we work on this stuff, but not everyone can make these. That’s why we’re using GitHub issues to discuss and debate the skills asynchronously.

    Here’s how to get involved:

    1. Make sure you’ve got a (free) GitHub account
    2. Head to the meta-issue for all of the work we’re doing around v1.5 skills
    3. Have a look through the skills under the various competencies (e.g. Remixing)
    4. Suggest an addition, add a question, or point out overlaps
    5. Get email updates when people reply - and then continue the conversation!

    We really do need as many eyes as possible on this. No matter whether you’re an old hand or a complete n00b, you’re welcome. The community is very inviting and tolerant, so please dive in!


    Comments? Questions? These would be better in GitHub, but if you want to get in touch directly I’m @dajbelshaw on Twitter or you can email me: doug@mozillafoundation.org

    Mozilla Reps CommunityReps Weekly Call – February 26th 2015

    Last Thursday we had our weekly call about the Reps program, where we talk about what’s going on in the program and what Reps have been doing during the last week.

    Summary

    • RepsIDMeetup
    • Alumni status and leaving SOP
    • New mentors coming soon
    • GMRT event Pune
    • Teach The Web Talks
    • FOSS Asia
    • BuddyUp
    • Say Hello Day

    Detailed notes

    AirMozilla video

    Don’t forget to comment about this call on Discourse and we hope to see you next week!

    Kaustav Das ModakFirefox OS for everyone

    While a good number of events around Firefox OS have focused on helping developers understand the technology behind it, there has been very little effort in helping people understand Firefox OS from a non-technical and non-developer perspective. I’ve started a series of events titled “Firefox OS for everyone” to solve this gap. The content has […]

    Pierros PapadeasFOSDEM 2015 Bug sprint tool

    “Reclama” Experiment report

    Intro

    FOSDEM 2015 is a premier open source developers’ event in Europe. Mozilla has been participating heavily in the event for over 10 years now, with booths and dev-rooms. This year the Community Development team decided to run an experimental approach to recruiting new contributors by promoting good first bugs.

    Scope

    Given the highly technical nature of the audience at FOSDEM 2015, the approach decided on was a straightforward promotion of bugs, with prizes attached, for people to track and get involved with.

    Following a sign-off meeting with the stakeholders (William Quiviger, Francisco Piccolini and Brian King), the specifications agreed for the first iteration of the experiment were as follows:

    1. A straight-forward interface displaying hand-selected good first bugs, ready to be worked on
    2. Next to bugs, prizes should be displayed.
    3. Viewing from mobile devices should also be accounted for.
    4. Public access to all content. Ability to edit and add events on selected accounts (hardcoded for v1)
    5. The interface has to be properly Mozilla branded (high visibility and promotion in an event)
    6. It needs to be future-proof for events to come (easily add new events and/or bugs after deployment)
    7. Solution URL should be short and easily memorable and digestible.
    8. It needs to be delivered before the start of FOSDEM 2015 ;)
    9. A report should be compiled with usage statistics and impact analysis for the experiment

    Development

    Given the extremely short timeframe (less than 3 days), quick, off-the-shelf solutions were evaluated, such as:

    • An online spreadsheet (gdocs)
      • Not meeting requirements #3, #5, #6, #7
    • A bugzilla query
      • Not meeting requirements #2, #7, #6
    • A scrappy hand-coded plain HTML solution
      • Not meeting requirements #4, #6

    Thus, given the expertise of the group, it was decided to create a Django application to meet all requirements in time.

    Following 2 days of non-stop development (fantastic work from Nikos and Nemo!), testing and iteration, we met all requirements with an app codenamed “reclama” (roughly Italian for “claim”).

    Code can be found here: https://github.com/mozilla/reclama

    Deployment

    In order to meet requirement #7 (short and memorable URL) within the timeframe, we decided to quickly acquire a URL (mozbugsprints.org) and deploy the application.

    For usage statistics, awstats was deployed on top of the app to track incoming traffic.

    Usage statistics

    During the weekend of FOSDEM 2015, 500 people visited the website, generating almost 5000 hits. That’s almost 10% of the event participants.

    Saturday 31-Jan-2015 traffic analysis

    Booths and promotion of the experiment started at 9:00 as expected, with a mid-day (noon) peak, which is consistent with increased traffic in the booths area of the event.

    Traffic continued to flow in steadily even after the end of the day, which indicates that people keep the URL and interact with our experiment a substantial time after the face-to-face interaction at our booth. Browsing continued through the night, so on-call people or mentors might be needed during those hours too.

    Sunday 1-Feb-2015 traffic analysis

    The second day of FOSDEM 2015 included a dedicated Mozilla dev-room. The assumption that promotion through the dev-room would increase traffic to our experiment proved to be false, as traffic continued at the same levels for day 2.
    As expected there was a sharp cut-off after 16:00 (when the final keynote starts), and people did not seem to come back after the event. Thus a hyper-specific, event-focused (and branded) challenge seems to matter a great deal, as people relate to it and understand its one-off nature.

    Impact

    32 coding bugs from different product areas were presented to people. 9 of them were edited (assigned, commented, worked on) during or immediately (one day) after FOSDEM 2015. Out of those 9, 4 ended up with a first patch submission (new contributors) and 3 received no response from Mozilla staff or core contributors (blocked contributors).

    Recommendations

    • Re-do the experiment in a different cultural environment, but still with a related audience, so we can cross-compare.
    • Continue the experiment by implementing new functionality suggested by the stakeholders (notes, descriptions etc.).
    • Experiment with random sorting of bugs, as the current order seemed to affect what was worked on.
    • Submission of bugs to be featured at an event should be coordinated by the event owner and related to the event theme and topics.
    • A/B test how prize allocation affects which bugs are worked on.
    • Expand promotional opportunities throughout the event (booth ideas?).
    • On-call developers for promoted bugs would eliminate the unanswered section of our bugs.

    PS. Special thanks to Elio for crafting an excellent visual identity and logo once again!

    Gervase MarkhamAn Encounter with Ransomware

    An organization which I am associated with (not Mozilla) recently had its network infected with the CryptoWall 3.0 ransomware, and I thought people might be interested in my experience with it.

    The vector of infection is unknown but once the software ran, it encrypted most data files (chosen by extension) on the local hard drive and all accessible shares, left little notes everywhere explaining how to get the private key, and deleted itself. The notes were placed in each directory where files were encrypted, as HTML, TXT, PNG and as a URL file which takes you directly to their website.

    Their website is accessible as either a Tor hidden service or over plain HTTP – both options are given. Presumably plain HTTP is for the ease of less technical victims; Tor is for if their DNS registrations get attacked. However, as of today, that hasn’t happened – the site is still accessible either way (although it was down for a while earlier in the week). Access is protected by a CAPTCHA, presumably to prevent people writing automated tools that work against it. It’s even localised into 5 languages.

    CryptoWall website CAPTCHA

    The price for the private key was US$500. (I wonder if they set that based on GeoIP?) However, as soon as I accessed the custom URL, it started a 7-day clock, after which the price doubled to US$1000. Just like parking tickets, they incentivise you to pay up quickly, because argument and delay will just make it cost more. If you haven’t paid after a month, they delete your secret key and personal page.

    While what these thieves do is illegal, immoral and sinful, they do run a very professional operation. The website had the following features:

    • A “decrypt one file” button, which allows them to prove they have the private key and re-establish trust. It is, of course, also protected by a CAPTCHA. (I didn’t investigate to see whether it was also protected by numerical limits.)
    • A “support” button, which allows you to send a message to the thieves in case you are having technical difficulties with payment or decryption.

    The organization’s last backup was a point-in-time snapshot from July 2014. “Better backups” had been on the ToDo list for a while, but never made it to the top. After discussion with the organization, we decided that recreating the data would have taken much more time than the value of the ransom, and so we were going to pay. I tried out the “Decrypt One File” function and it worked, so I had some confidence that they were able to provide what they said they could.

    I created a wallet at blockchain.info, and used an exchange to buy exactly the right amount of Bitcoin. (The first exchange I tried had a ‘no ransomware’ policy, so I had to go elsewhere.) However, when I then went to pay, I discovered that there was a 0.0001BTC transaction fee, so I didn’t have enough to pay them the full amount! I was concerned that they had automated validation and might not release the key if the amount was even a tiny bit short. So, I had to go on IRC and talk to friends to blag a tiny fraction of Bitcoin in order to afford the transfer fee.

    I made the payment, and pasted the transaction ID into the form on the ransomware site. It registered the ID and set status to “pending”. Ten or twenty minutes later, once the blockchain had moved on, it accepted the transaction and gave me a download link.

    While others had suggested that there was no guarantee that we’d actually get the private key, it made sense to me. After all, word gets around – if they don’t provide the keys, people will stop paying. They have a strong incentive to provide good ‘customer’ service.

    The download was a ZIP file containing a simple Windows GUI app which was a recursive decryptor, plus text files containing the public key and the private key. The app worked exactly as advertised and, after some time, we were able to decrypt all of the encrypted files. We are now putting in place a better backup solution, and better network security.

    A friend who is a Bitcoin expert did do a little “following the money”, although we think it went into a mixer fairly quickly. However, before it did so, it was aggregated into an account with $80,000+ in it, so it seems that this little enterprise is fairly lucrative.

    So, 10/10 for customer service, 0/10 for morality.

    The last thing I did was send them a little message via the “Support” function of their website, in both English and Russian:

    Such are the ways of everyone who is greedy for unjust gain; it takes away the life of its possessors.

    Таковы пути всех, кто жаждет преступной добычи; она отнимает жизнь у завладевших ею.

    ‘The time has come,’ Jesus said. ‘The kingdom of God has come near. Repent and believe the good news!’

    – Пришло время, – говорил Он, – Божье Царство уже близко! Покайтесь и верьте в Радостную Весть!

    Mozilla Reps CommunityImpact teams: a new approach for functional impact at Reps

    When the new participation plan was forming, one of the first questions was: how can the Reps program enable more and deeper participation in Mozilla? We know that Reps are empowering local and regional communities and have been playing an important role in various projects like Firefox OS launches, but there wasn’t an organized and, more importantly, scalable way to provide support to functional teams at Mozilla. The early attempts of the program to help connect volunteers with functional areas were the Special Interest Groups (SIGs). Although in some cases and for some periods of time the SIGs worked very well and were impactful, they weren’t sustainable in the long run. We couldn’t provide a structure that ensured mutual benefit and commitment.

    With the renewed focus on participation we’re trying to think differently about the way that Reps can connect to functional teams, align with their goals and participate in every part of Mozilla. And this is where the “Impact teams” come in. Instead of forming loose interest groups, we want to form teams that work well together and are defined by the impact they are having, as well as excited by the future opportunity for not only deeper participation but also personal growth as part of a dedicated team whose colleagues include project staff.

    The idea of these new impact teams is to make sure that the virtuous circle of mutual benefit is created. This means that we will work with functional teams to ensure that we find participation opportunities for volunteers that have direct impact on project goals, but at the same time we make sure that the volunteers will benefit from participating, widening their skills, learning new ones.

    These teams will crystallize through the work on concrete projects, generating immediate impact for the team, but also furthering the skills of volunteers. That will allow the impact team to take on bigger challenges with time: both volunteers and functional teams will learn to collaborate and volunteers with new skills will be able to take the lead and mentor others.

    We’re of course at the beginning and many questions are still open. How can we organize this in an agile way? How can we make this scalable? Will the scope of the role of Reps change if they are more integrated in functional activities? How can we make sure that all Mozillians, Reps and non Reps are part of the teams? Will we have functional mentors? And we think the only way to answer those questions is to start trying. That’s why we’re talking to different functional areas, trying to find new participation opportunities that provide value for volunteers. We want to learn by doing, being agile and adjusting as we learn.

    The impact teams are therefore not set in stone; we’re working with different teams, trying loose structures and especially putting our energy into making this really beneficial for both functional teams and volunteers. Currently we are working with the Marketplace team, the Firefox OS market research team and the developer relations team. And we’ll soon be reaching out to Mozillians and Reps who have a track record in those areas to ask them to help us build these impact teams.

    We’re just at the beginning of a lot of pilots, tests and prototypes. But we’re excited to start moving fast and learning! We have plenty of work to do and many questions to answer; join us in shaping these new impact teams. In particular, let us know how your participation at Mozilla can benefit your life and help you grow, learn and develop yourself. Emma Irwin is working on making education a centerpiece of participation, but do you have any other ideas? Share them with us!

    Tantek Çelik#IndieWeb: Homebrew Website Club 2015-02-25 Summary

    2015-02-25 Homebrew Website Club participants, seven of them, sit in two rows for a photograph

    At last night's Homebrew Website Club we discussed, shared experiences, and how-tos about realtime indie readers, changing/choosing your webhost, indie RSVPs, moving from Blogger/Tumblr to your own site, new IndieWebCamp Slack channel, and ifthisthen.cat.

    See kevinmarks.com/hwc2015-02-25.html for the writeup.

    Tantek ÇelikDisappointed in @W3C for Recommending Longdesc

    W3C has advanced the longdesc attribute to a Recommendation, overruling objections from browser makers.

    Not a single browser vendor supported advancing this specification to recommendation.

    Apple formally objected when it was a Candidate Recommendation and provided lengthy research and documentation (better than anyone has before or since) on why longdesc is bad technology (in practice it has not solved, and does not solve, the problems it claims to).

    Mozilla formally objected when it was a Proposed Recommendation, agreeing with Apple’s research and reasoning.

    Both formal objections were overruled.

    For all the detailed reasons noted in Apple’s formal objection, I also recommend avoiding longdesc, and instead:

    • Always provide good alt (text alternative) attributes for images that read well inline if and when the image does not load. Or, if there’s no semantic loss without the image, use an empty alt="".
    • For particularly rich or complex images, either provide longer descriptions in normal visible markup, or link to them from an image caption or other visible affordance. See accessibility expert James Craig’s excellent Longdesc alternatives in HTML5 resource for even more and better techniques.

    Perhaps the real tragedy is that many years have been wasted on a broken technology that could have been spent on actually improving the accessibility of open web technologies. Not to mention the harassment that’s occurred in the name of longdesc.

    Sometimes web standards go wrong. This is one of those times.

    Planet Mozilla InternsMichael Sullivan: MicroKanren (μKanren) in Haskell

    Our PL reading group read the paper “μKanren: A Minimal Functional Core for Relational Programming” this week. It presents a minimalist logic programming language in Scheme in 39 lines of code. Since none of us are really Schemers, a bunch of us quickly set about porting the code to our personal pet languages. Chris Martens produced this SML version. I hacked up a version in Haskell.

    The most interesting part was the mistake I made in the initial version. To deal with recursion and potentially infinite search trees, the Scheme version allows some laziness: streams of results can be functions that delay search until forced. When a Scheme μKanren program wants to create a recursive relation, it needs to wrap the recursive call in a dummy function (and plumb through the input state); the Scheme version wraps this in a macro called Zzz to make it more palatable. I originally thought that all of this could be dispensed with in Haskell: since Haskell is lazy, no special work needs to be done to prevent self-reference from causing an infinite loop. The delay served an important secondary purpose, though: providing a way to detect recursion so that we can switch which branch of the tree we are exploring. Without it, although the fives test below works, the fivesRev test loops forever without producing anything.
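    To make that concrete, here is a minimal sketch of the idea in Python rather than Haskell (my own illustration, not code from the paper or either port): generators already give us delayed streams, and a fair interleave keeps an infinite branch from starving a finite one.

    import itertools

    def interleave(xs, ys):
        # Alternate between two result streams; when one runs dry, drain the other.
        while True:
            try:
                x = next(xs)
            except StopIteration:
                yield from ys
                return
            yield x
            xs, ys = ys, xs

    fives = (5 for _ in itertools.count())   # an infinite branch of results
    sixes = iter([6, 6, 6])                  # a finite branch
    print(list(itertools.islice(interleave(fives, sixes), 8)))
    # [5, 6, 5, 6, 5, 6, 5, 5] -- both branches make progress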

    The initial version was also more general: the type signatures allowed operating over any MonadPlus, thus allowing pluggable search strategies. KList was just a newtype wrapper around lists. When I had to add delay I could have defined a new MonadPlusDelay typeclass and parametrized over that, but it didn’t seem worthwhile.

    A mildly golfed version that drops blank lines, type annotations, comments, aliases, and test code clocks in at 33 lines.

    View the code on Gist: https://gist.github.com/msullivan/4223fd47991acbe045ec

    Doug BelshawAn important day for the Internet

    As I’ve just been explaining to my son, when he’s my age and looks back at the history of the Internet, 26 February 2015 will be seen as a very important day.

    Why? The Mozilla blog summarises it well:

    We just accomplished something very important together. Today, the U.S. Federal Communications Commission voted for strong net neutrality protections. This happened because millions of people — including many hundreds of thousands in Mozilla’s community — joined together as citizens of the Web to demand those strong protections.

    Net Neutrality can be a difficult thing to understand and, especially if you’re not based in the USA, it can feel like something that doesn’t affect you. However, it is extremely important, and it impacts everyone.

    Last year we put together a Webmaker training module on Net Neutrality. I’d like to think it helped towards what was achieved today. As Mitchell Baker stated, this victory was far from inevitable, and the success will benefit all of humankind.

    It’s worth finding out more about Net Neutrality, for the next time it’s threatened. Forewarned is forearmed.

    Image CC BY Kendrick Erickson

    Air MozillaBrown Bag, "Analysing Gaia with Semmle"

    Title: Analysing Gaia with Semmle. Abstract: Semmle has recently added support for JavaScript to its analysis platform. As one of our first major JavaScript analysis...

    The Mozilla BlogA Major Victory for the Open Web

    We just accomplished something very important together. Today, the U.S. Federal Communications Commission voted for strong net neutrality protections. This happened because millions of people — including many hundreds of thousands in Mozilla’s community — joined together as citizens of the Web to demand those strong protections.

    This is an important victory for the world’s largest public resource, the open Web. Net neutrality is a key aspect of enabling innovation from everywhere, and especially from new players and unexpected places. Net neutrality allows citizens and consumers to access new innovations and judge the merit for themselves. It allows individual citizens to make decisions, without gate-keepers who decide which possibilities can become real. Today’s net neutrality rules help us protect this open and innovative potential of the Internet.

    Mozilla builds our products to put this openness and opportunity into the hands of individuals. We are organized as a non-profit so that the assets we create benefit everyone. Our products go hand-in-hand with net neutrality; they need net neutrality to bring the full potential of the Internet to all of us.

    Today’s net neutrality rules are an important step in protecting opportunity for all. This victory was not inevitable. It occurred because so many people took action, so many people put their voice into the process. To each of you we say “Thank you.” Thank you for taking the time to understand the issue, for recognizing it’s important, and for taking action. Thank you for helping us build openness and opportunity into the very fabric of the Internet.

    Video message from Mitchell Baker, Executive Chairwoman, Mozilla Foundation

    Pascal FinetteWhat Every Entrepreneur Should Know About Pitching

    The following post is a summary of a series of earlier Heretic posts on the subject, compiled into one comprehensive list by the wonderful folks at Unreasonable.is.

    Your pitch deck MUST start with a description of what it is that you’re doing. Your second slide (after the cover slide) is titled “What is NAME-OF-YOUR-COMPANY” (e.g. “What is eBay”). Explain in simple English what you’re doing. This is not the place to be clever, show off your extraordinary grasp of the English language or think that your pitch deck is a novel where you build tension and excitement in the first half and surprise the reader in the end.

    If I (or any investor for that matter) don’t understand what you are doing in the first 10-15 seconds, you’ve already lost me. I know investors who don’t go past slide two if they don’t grasp what the company does.

    Simple and obvious, eh? The crazy thing is, I get tons of decks which make me literally go “WTF?!” Let me illustrate the point. Here are two pitches which get it right (names and some product details have been changed):

    What is ACME CORP?

    • A unique coffee subscription business…
    • 1) Rare, unique, single-origin coffee
    • 2) Freshly roasted to order each month
    • 3) A different coffee every month, curated by experts
    • delivered monthly through the letterbox.
    • A subscription-only business—sign up online—available for self-purchase or as a special gift

    Moving on — pitch #2:

    ACME CORP is an E-FASHION club that provides affordable trendy shoes, accessories and PERSONALIZED style recommendations to European Women!

    Clear. They are a shopping club focussed on shoes for European women. Crisp and clear. No fancy language. Just the facts.

    And now for something different—pitch #3:

    ACME CORP is an online collaboration hub and development environment for makers, hobbyist and engineers.

    I mostly have no clue what they are doing. Worse — as I actually know the team and their product — this isn’t even an accurate statement of what they are up to. What they build is GitHub plus a deployment system for Arduinos. Their statement is overly broad and unspecific.

    So your first slide (not the cover — your first actual content slide) is the most important slide in your deck. It’s the slide where I decide if I want to see and hear more. It’s the slide which sets the tone for the rest of your interaction. And it’s the slide which forms my first image of you and your company. Spend time on it. Make it perfect. Pitch it to strangers who know nothing about your company and see if they get it. Show them just this one slide and then ask them to describe back to you what your company does. If they don’t get it, or it’s inaccurate, go back and revise the slide until you get it right.

    The Team Slide

    We all know that investors put their money into teams, not ideas. Ideas are cheap and plentiful. Incredible execution is rare. Ideas evolve (yeah — sometimes they “pivot”). Great teams excel at this, mediocre teams fall apart. So you invest in people.

    Which means your team slide better be good.

    I can’t tell you too much about the composition of your team — as this is highly dependent on your idea and the industry you’re in.

    Teams of one are usually a bad sign. If you can’t bring a team to the table when you ask for funding it just doesn’t reflect well on your ability to recruit. Teams that have a bunch of people listed as “will come on board once we can pay her/him a salary” don’t work. People who are not willing to take the risk are employees, not cofounders.

    Don’t bullshit when you talk about your team. Sentences such as “X has 10 years of Internet experience” make me cringe, then laugh and then delete your email. Every man and his dog has ten years of “Internet experience” by now. Be honest, tell me what your team did. If your team hasn’t done anything interesting, well, that’s something you should think about. You won’t be able to hide it anyway. “Y is a Ruby ninja?” I let your teammate speak with one of our portfolio companies for three minutes and I know if he’s a ninja or a backwater coder who still designs MySpace pages for his school chorus. Oh, and by the way: nobody is a fucking ninja, superstar or what-have-you. Cut the lingo.

    Lastly—and this shows your attention to detail — make sure the pictures you use have a common look and feel and don’t look like a collection of randomly selected vacation snapshots. Unless you’re all totally wasted in the shots. That might be funny.

    Let’s Talk About Advisors

    Typically you add them to your team slide. And most of the time you see something along the lines of a list of names a la “John Doe, Founder ACME Corp.,” sometimes with a picture.

    Here’s the deal—advisors provide two signals for an investor:

    1. Instant credibility if you have “brand name” advisors
    2. A strong support network

    The first one only works if you have truly recognizable names which are relevant to your field. Sergey Brin works; a random director at a large company doesn’t. The second one is trickier. In pitch presentations I often wonder what those people actually do for you — as often the entrepreneurs either just rattle off the names of the advisors on their slide or even glance over them and say something to the tune of, “and we have a bunch of awesome advisors.”

    If you want to make your pitch stronger I recommend you make sure that your advisors are relevant (no, your dad’s buddy from the plumbing shop down the road most likely doesn’t count) and that they are actual advisors and not only people with whom you emailed once. You can spend 15 seconds in your pitch describing the relationship you have with your advisors. (e.g. “We assembled a team of relevant advisors with deep expertise in X, Y and Z. To leverage their expertise and network we meet with them every month for a one-hour session and can also ask for advice via phone and email anytime in between.”)

    By the way, there is something to be said about the celebrity advisor. As awesome as it might be that you got Tim Cook from Apple to be an advisor I instantly ask myself how much time you actually get out of him—he’s busy as hell. So you might want to anticipate that (silent) question and address it head on in your pitch.

    The Dreaded Finance Slide

    The one slide that is made up of one hundred percent pure fantasy. And yet it seems (and is) so important. Everybody knows that whatever you write down on your finance slide is not reality but (in the best case) wishful thinking. That includes your investor.

    Why bother? Simple. The finance slide shows your investor that you understand the fundamentals of your business. That you have a clue. That he can trust you with his money.

    So what do you need to know about your finance slide? As so often in life the answer is: it depends. Here’s my personal take. For starters you want to show a one-year plan which covers month-by-month revenue and costs. Lump costs into broader buckets and don’t fall into the trap of feigned accuracy by showing off precise numbers. Nobody will believe that you really know that your marketing costs will be precisely $6,786 in month eight. You’re much better off eyeballing these numbers. Also don’t present endless lists of minutiae such as your telecommunication costs per month. Show your business literacy by presenting ballpark numbers that make sense (e.g. salaries should come as fully loaded head counts — not doing this is a strong indicator that you don’t know the fundamentals of running a business).

    On the revenue side you want to do a couple of things. First of all explain your business model (this is something you might want to pull out onto its own slide). Then give me your assumptions — sometimes it makes sense to work with scenarios (best, base and worst case). And then use this model to validate your revenue assumptions bottom-up. If you say you will have 5,000 customers in month three, what does that mean in terms of customer acquisition per day (roughly 55 sign-ups every single day from launch), how does that influence your cost model, etc.

    This is probably the most useful part of putting together your financials. It allows you to make an educated guess about your model and will, if done right, feed tons of information back into your own thinking. Weirdly, a lot of entrepreneurs don’t do this and then fall flat on their faces when, in a pitch meeting, their savvy investor picks the numbers apart and points out the gaping holes (something I like to do — it really shows if someone thought hard about their business or if they are only wanna-be entrepreneurs).

    And then you do the same for year two and three — this time on a quarterly basis.

    Above all, do this for yourself, not because you need to do this for your investors. Use this exercise to validate your assumptions, to get a deep understanding of the underlying logic of your business. Your startup will be better off for it.

    The Business Model Slide

    You have a business model, right? Or at least you pretend to have one? If you don’t, and you believe you can get by using a variant of the “we’ll figure it out” phrase, you had better have stratospheric growth and millions of users.

    Here’s the thing about your business model slide in your pitch deck: If you spend time to make it clear, concise, easy to understand and grasp, I am so much more likely to believe that you have at least a foggy clue about what you’re doing. If you, in contrast, do what I see so many startups do and give me a single bullet, hidden on one of your other slides which reads something like “Freemium Model” and that’s it… well, that’s a strong indicator that you haven’t thought about this a whole lot, that you are essentially clueless about your business and that I really shouldn’t trust you with my money.

    With that being said, what does a great business model slide look like? It really depends on what your business model is (or what you believe it to be, to be precise — these things tend to change). What I look for is a clear expression of your model, the underlying assumptions and the way the model works out. Often this can be neatly expressed in an infographic — showing where and how the money comes in, what the value chain looks like and what the margins are along the chain. Here’s an example: it’s not perfect, yet much better than simply expressing your model as “we take a 25% margin.”

    Spend some time on your business model slide. Make it clear and concise. The litmus test is: Show someone who doesn’t know anything about your company just this one slide and ask them to explain back to you your business model. If they get it and they get it in its entirety you are off to the races.

    The Ask

    You know that you always have to have an ask in your pitch, right? It might be for money, it might be for partnerships or just feedback—but never ever leave an audience without asking for something.

    There’s a lot of art and science to asking. Here’s my view on the science piece: let’s assume you pitch for money. Don’t start your pitch with your ask. By putting your ask first you A) rob yourself of the wonderful poetry of a well-delivered pitch (as everyone only thinks about the dollars), B) might lose a good chunk of your audience who might not fall into your price bracket (believe me, more often than not they gladly invest if they just hear you out and get excited by your company) and C) will have every investor judge every single slide against the dollar amount you put up.

    That said, don’t just close your pitch with a “We’re raising $500k. Thank you very much.” but give me a bit more detail on what you need the money for and how long it is projected to last (pro tip: make sure you budget for enough runway — raising for too short an amount of time is a classic beginner’s mistake). You want to say something like: “We’re raising $500k which will go towards building out our engineering team, building our Android app and getting the first wave of marketing out, which should get us to 500k users. The round is projected to last 12 months and will validate our milestones.”

    A Word About Design

    One question I get quite often about pitch decks is: How much text should be or need to be on the slides? This can be a tricky question—you will both present the deck to an audience (in which case you tend to want to have less text and more emphasis on your delivery) and you’ll send the deck to investors via email (in which case a slide with just an image on it won’t work—the recipient doesn’t have the context of your verbal presentation).

    Guy Kawasaki famously formulated the 10/20/30 rule: 10 Slides, 20 Minutes, 30 Point Minimal Font Size. This is a great starting point—and what I would recommend for your in-person investor presentation. But it might not work for the deck you want to email.

    Here’s what I would do (and have done): Start with a slide deck where each and every slide can stand on its own. Assume you give the slide deck to someone who has no context about you and your startup. And at the same time—treat your deck like a deck and not a word document. Keep text short, reduce it to the bare necessity, cut out unnecessary language. Keep the whole deck short—one should be able to flip through your deck in a few minutes and digest the whole thing in 10-15 minutes.

    Once you have this, which will be the deck you send out to investors, you take the deck and cut out all the words which are not necessary for an in-person presentation. This will give you the deck that you present. Keeping the two decks in sync with regard to slides, order and design will make it easier for someone who saw your deck to recognize it in your pitch.

    Flow

    The best pitch deck fails to deliver if your overall flow isn’t right. Flow has as much to do with the logical order of your slides as it has with a certain level of theatrical drama (tension/release) and your personal delivery.

    Guy recommends ten slides in the following order:

    • Problem
    • Your solution
    • Business model
    • Underlying magic/technology
    • Marketing and sales
    • Competition
    • Team
    • Projections and milestones
    • Status and timeline
    • Summary and call to action

    Personally I think this is as good an order as most. Some people like to talk about the team earlier (as investors invest into people first and foremost), others have specific slides talking about some intricate issues specific to the business they are pitching.

    For me, it comes down to a logical order: talk about the problem you’re solving first, then present the solution (the tension-and-release arc), and do what feels right for you. I prefer a slide deck that is a bit off but comes with amazing in-person presence over a great slide deck and an uninspired presentation any day.

    Note that you want to have a bit of drama in your deck—yet it’s not your school play where you try to outcompete Shakespeare. Don’t spend half an hour on building tension and then, TADA!, present your solution. In the end it’s all about balance.

    And hey, send me your deck and I’ll provide honest, direct feedback. I won’t bite. Promise.

    Cameron KaiserIonPower passes V8!

    At least in Baseline-only mode, but check it out!

    Starting program: /Volumes/BruceDeuce/src/mozilla-36t/obj-ff-dbg/dist/bin/js --no-ion --baseline-eager -f run.js
    warning: Could not find malloc init callback function.
    Make sure malloc is initialized before calling functions.
    Reading symbols for shared libraries ....................................................................+++......... done
    Richards: 144
    DeltaBlue: 137
    Crypto: 215
    RayTrace: 230
    EarleyBoyer: 193
    RegExp: 157
    Splay: 140
    NavierStokes: 268
    ----
    Score (version 7): 180

    Program exited normally.

    Please keep in mind this is a debugging version and performance is impaired relative to PPCBC (and if I had to ship a Baseline-only compiler in TenFourFox 38, it would still be PPCBC because it has the best track record). However, all of the code cleanup for IonPower and its enhanced debugging capabilities paid off: with one exception, all of the bugs I had to fix to get it passing V8 were immediately flagged by sanity checks during code generation, saving much laborious single-stepping through generated assembly to find problems.

    I have a Master's program final I have to study for, so I'll be putting this aside for a few days, but after I thoroughly bomb it the next step is to mount phase 4, where IonPower can pass the test suite in Baseline mode. Then the real fun will begin -- true Ion-level compilation on big-endian PowerPC. We are definitely on target for 38, assuming all goes well.

    I forgot to mention one other advance in IonPower, which Ben will particularly appreciate if he still follows this blog: full support for all eight bitfields of the condition register. Unfortunately, it's mostly irrelevant to generated code because Ion assumes, much to my disappointment, that the processor possesses only a single set of flags. However, some sections of code that we fully control can now do multiple comparisons in parallel over several condition registers, reducing our heavy dependence upon (and usually hopeless serialization of) cr0, and certain FPU operations that emit to cr1 (or require the FPSCR to dump to it) can now branch directly upon that bitfield instead of having to copy it. Also, emulation of mcrxr on G5/POWER4+ no longer has a hard-coded dependency upon cr7, simplifying much conditional branching code. It's a seemingly minor change that nevertheless greatly helps to further unlock the untapped Power in PowerPC.

    Karl DubostBugs and spring cleaning

    We have a load of bugs in Tech Evangelism which are not taken care of. Usually the pattern is:

    1. UNCONFIRMED: bug reported
    2. NEW: analysis done ➜ [contactready]
    3. ASSIGNED: attempt at a contact ➜ [sitewait]
    4. 🌱, ☀️, 🍂, ❄️… and 🌱
    5. one year later nothing has happened. The bug is all dusty.

    A couple of scenarios might have happened during this time:

    • The bug has been silently fixed (Web dev didn't tell, the site changed, the site disappeared, etc.)
    • We didn't get a good contact
    • We got a good contact, but the person has forgotten to push internally
    • The contact/company decided it would be a WONTFIX (Insert here 👺 or 👹)

    I set up for myself a whining email, available from the Bugzilla administration panel. The feature is described as:

    Whining: Set queries which will be run at some specified date and time, and get the result of these queries directly per email. This is a good way to create reminders and to keep track of the activity in your installation.

    An event is defined by taking a saved search and associating it with a recurring schedule.

    Admin panel for setting the whining mail

    Every Monday (Japanese time), I receive an email listing the bugs which have not received comments for the last 30 weeks (6+ months). It's usually between 2 and 10 bugs to review. Not a high load of work.

    Email sent by my whining schedule

    Still, that doesn't seem very satisfying, and it relies specifically on my own setup. How do we help contributors, volunteers or even other employees at Mozilla?

    All doomed?

    There might be actions we can take to improve. Maybe some low-hanging fruit could help us.

    • To switch from [sitewait] to [needscontact] when there has been no comment on the bug for 𝓃 weeks. The rationale here is that the contact was probably not good, and we need to flag the bug again as something that someone can feel free to take action on.
    • To improve the automated testing of Web sites (bottom of the page) done by Hallvord. The Web site tests are collected on GitHub, and you can contribute new tests. For example, when a test reports that the site has been FIXED, a comment could be automatically posted on the bug itself with a needinfo on the ASSIGNED person, though that might create an additional issue with false positives.
    • To have a script for easily setting up the whining email I set up for myself, so contributors can have this kind of reminder if they wish (a rough sketch of the underlying query follows this list).
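    The stale-bug query itself is easy to approximate outside of the whining feature. Here is a rough sketch using the Bugzilla REST API (my own assumption of a reasonable query, not part of the setup described above):

    from datetime import datetime, timedelta

    import requests

    CUTOFF = datetime.utcnow() - timedelta(weeks=30)

    # Fetch Tech Evangelism bugs whose status whiteboard contains "sitewait"
    resp = requests.get(
        "https://bugzilla.mozilla.org/rest/bug",
        params={
            "product": "Tech Evangelism",
            "whiteboard": "sitewait",
            "include_fields": "id,summary,last_change_time",
        },
    )

    # Keep only the bugs nobody has touched for the last 30 weeks
    for bug in resp.json()["bugs"]:
        changed = datetime.strptime(bug["last_change_time"], "%Y-%m-%dT%H:%M:%SZ")
        if changed < CUTOFF:
            print(bug["id"], bug["summary"])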

    Other suggestions? This is important because, apart from Mozilla, we will slowly have the same issue on the 💡 webcompat site for the already-contacted Web sites. Please discuss on your own blog and/or send an email to the compatibility list on this thread.

    Otsukare!

    PS: No emojis were harmed in this blog post. All emojis appearing in this work are fictitious. Any resemblance to real persons, living or dead, is purely coincidental.

    Planet Mozilla InternsMichael Sullivan: Parallelizing compiles without parallelizing linking – using make

    I have to build LLVM and Clang a lot for my research. Clang/LLVM is quite large and takes a long time to build if I don’t use -j8 or so to parallelize the build; but I also quickly discovered that parallelizing the build didn’t work either! I work on a laptop with 8GB of RAM and while this can easily handle 8 parallel compiles, 8 parallel links plus Firefox and Emacs and everything else is a one-way ticket to swap town.

    So I set about finding a way to parallelize the compiles but not the links. Here I am focusing on building an existing project. There are probably nicer ways that someone writing the Makefile could make this easier, or even the default, but I haven’t really thought about that.

    My first attempt was the hacky (while ! pgrep ld.bfd.real; do sleep 1; done; killall make ld.bfd.real) & make -j8; sleep 2; make. Here we wait until a linker has run, kill make, then rerun make without parallel execution. I expanded this into a more general script:

    View the code on Gist: https://gist.github.com/3290e688670c54f8d1a2

    This approach is kind of terrible. It’s really hacky, it has a concurrency bug (that I would fix if the whole thing wasn’t already so bad), and it slows things down way more than necessary; as soon as one link has started, nothing more is done in parallel.

    A better approach is by using locking to make sure only one link command can run at a time. There is a handy command, flock, that does just that: it uses a file lock to serialize execution of a command. We can just replace the Makefile’s linker command with a command that calls flock and everything will sort itself out. Unfortunately there is no totally standard way for Makefiles to represent how they do linking, so some Makefile source diving becomes necessary. (Many use $(LD); LLVM does not.) With LLVM, the following works: make -j8 'Link=flock /tmp/llvm-build $(Compile.Wrapper) $(CXX) $(CXXFLAGS) $(LD.Flags) $(LDFLAGS) $(TargetCommonOpts) $(Strip)'

    That’s kind of nasty, and we can do a bit better. Many projects use $(CC) and/or $(CXX) as their underlying linking command; if we override that with something that uses flock then we’ll wind up serializing compiles as well as links. My hacky solution was to write a wrapper script that scans its arguments for “-c”; if it finds a “-c” it assumes it is a compile, otherwise it assumes it is a link and uses locking. We can then build LLVM with: make -j8 'CXX=lock-linking /tmp/llvm-build-lock clang++'.
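    As a minimal sketch, here is what such a wrapper could look like in Python (my own reconstruction of the idea; the original script is in the Gist below):

    #!/usr/bin/env python3
    # lock-linking: run as `lock-linking LOCKFILE REAL-COMPILER ARGS...`
    # Compiles (a "-c" among the arguments) run freely; links serialize on LOCKFILE.
    import fcntl
    import subprocess
    import sys

    lockfile, cmd = sys.argv[1], sys.argv[2:]

    if "-c" in cmd:
        # Looks like a compile: just run the real compiler unserialized.
        sys.exit(subprocess.call(cmd))

    with open(lockfile, "w") as f:
        fcntl.flock(f, fcntl.LOCK_EX)   # blocks while another link holds the lock
        sys.exit(subprocess.call(cmd))  # lock is released when we exit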

    View the code on Gist: https://gist.github.com/d33029fcda6889b7d097

    Is there a better way to do this sort of thing?

    Air MozillaProduct Coordination Meeting

    Weekly coordination meeting for Firefox Desktop & Android product planning between Marketing/PR, Engineering, Release Scheduling, and Support.

    Kim MoirRelease Engineering special issue now available

    The release engineering special issue of IEEE software was published yesterday. (Download pdf here).  This issue focuses on the current state of release engineering, from both an industry and research perspective. Lots of exciting work happening in this field!

    I'm interviewed in the roundtable article on the future of release engineering, along with Chuck Rossi of Facebook and Boris Debic of Google. There are interesting discussions on the current state of release engineering at organizations that run large numbers of builds and tests, and release frequently. As well, the challenges of mobile releases versus web deployments are discussed. And finally, a discussion of how to find good release engineers, and what the future may hold.

    Thanks to the other guest editors on this issue - Stephany Bellomo, Tamara Marshall-Klein, Bram Adams, Foutse Khomh and Christian Bird - for all their hard work that made this happen!


    As an aside, when I opened the issue, the image on the front cover made me laugh.  It's reminiscent of the cover on a mid-century science fiction anthology.  I showed Mr. Releng and he said "Robot birds? That is EXACTLY how I pictured working in releng."  Maybe it's meant to represent that we let software fly free.  In any case, I must go back to tending the flock of robotic avian overlords.

    David HumphreyRepeating old mistakes

    This morning I've been reading various posts about the difficulty ahead for the Pointer Events spec, namely, Apple's (and by extension Google's) unwillingness to implement it. I'd encourage you to read both pieces, and whatever else gets written on this in the coming days.

    I want to comment not on the spec as such, but on the process on display here, and the effect it has on the growth and development of the web. There was a time when the web's course was plotted by a single vendor (at the time, Microsoft), and decisions about what was and wasn't headed for the web got made by employees of that corporation. This story is so often retold that I won't pretend you need to read it again.

    And yet here we are in 2015, where the web on mobile, and therefore the web in general, is effectively in the control of one vendor; a vendor who, despite its unmatched leadership and excellence in creating beautiful hardware, has shown none of the same abilities in its stewardship and development of the web.

    If the only way to improve and innovate the web is to become an employee of Apple Inc., the web is in real danger.

    I think that the larger web development community has become lax in its care for the platform on which it relies. While I agree with the trend toward writing JS to supplement and extend the browser, I think that it also tends to lull developers into thinking that their job is done. We can't simply write code on the platform, and neglect writing code for the platform. To ignore the latter, to neglect the very foundations of our work, is to set ourselves up for a time when everything collapses into the sand.

    We need more of our developer talent and resources going into the web platform. We need more of our web developers to drop down a level in the stack and put their attention on the platform. We need more people in our community with the skills and resources to design, implement, and test new specs. We need to ensure that the web isn't something we simply use, but something we are active in maintaining and building.

    Many web developers that I talk to don't think about the issues of the "web as platform" as being their problem. "If only Vendor X would fix their bugs or implement Spec Y--we'll have to wait and see." There is too often a view that the web is the problem of a small number of vendors, and that we're simply powerless to do anything other than complain.

In actual fact there is a lot that one can do even without the blessing or permission of the browser vendors. Because so much of the web is still open, and the code freely available, we can and should be experimenting and innovating as much as possible. While there's no guarantee that code you write will be landed and shipped, there is still a great deal of value in writing patches instead of just angry tweets: it is necessary to change people's minds about what the web is and what it can do, and there is no better way than with working code.

    I would encourage the millions of web developers who are putting time into technologies and products on top of the web to also consider investing some time in the web itself. Write a patch, make some builds, post them somewhere, and blog about the results. Let data be the lever you use to shift the conversation. People will tell you that something isn't possible, or that one spec is better than another. Nothing will do more to convince people than a working build that proves the opposite.

    There's no question that working on the web platform is harder than writing things for the web. The code is bigger, older, more complicated, and requires different tooling and knowledge. However, it's not impossible. I've been doing it for years with undergraduate students at Seneca, and if 3rd and 4th years can tackle it, so too can the millions of web developers who are betting their careers and companies on the web.

    Having lived through and participated in every stage of the web's development, it's frustrating to see that we're repeating mistakes of the past, and allowing large vendors to control too great a stake of the web. The web is too important for that, and it needs the attention and care of a global community. There's something you can do, something I can do, and we need to get busy doing it.

Air Mozilla: Bugzilla Development Meeting

Help define, plan, design, and implement Bugzilla's future!

Christian Heilmann: Simple things: Storing a list of booleans as a single number

    This blog started as a scratch pad of simple solutions to problems I encountered. So why not go back to basics?

Yesterday someone asked me whether there is a way to store the state of a collection of checkboxes in a single value. The simplest approach I could think of is binary conversion.


    You can see the result of my approach in this JSBin:

    Storing a list of booleans as a single number

What’s going on here? The state of a checkbox is a Boolean: it is either checked or unchecked. This could be true or false or, as JavaScript is a lenient language, 1 or 0. That’s why we can loop over all the checkboxes and assemble a string of their states consisting of ones and zeros:

var inputs = document.querySelectorAll('input');
var all = inputs.length;
var state = ''; // accumulator for the ones and zeros
for (var i = 0; i < all; i++){
  state += inputs[i].checked ? '1' : '0';
}

This results in a string like 1001010101. That could already be our value to store, but it looks pretty silly, and with 50 or more checkboxes it becomes very long to store, for example as a URL parameter.

That’s why we can use parseInt() to convert this binary number into a decimal one. That is what the second parameter of parseInt() is for – not only to please Douglas Crockford and JSLint (as it defaults to decimal – 10 – people keep omitting it). The counterpart of parseInt() here is toString(), which also takes an optional parameter: the radix of the number system you convert to. That way you can convert the state back and forth:

    x = parseInt('1001010101',2);
// x -> 597
    x.toString(2);
    // "1001010101"

    Once converted, you turn it back into a string and loop over the values to set the checked state of the checkboxes accordingly.
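The same packing idea works in any language; here is a minimal C sketch (the function names pack/unpack are my own, and it stores checkbox i in bit i, which corresponds to the reversing variant discussed below):

#include <stdint.h>
#include <stdio.h>

/* Pack n booleans (n <= 64) into a single integer: bit i holds state i. */
uint64_t pack(const int *states, int n) {
    uint64_t packed = 0;
    for (int i = 0; i < n; i++)
        if (states[i]) packed |= (uint64_t)1 << i;
    return packed;
}

/* Unpack bit i of the stored value back into states[i]. */
void unpack(uint64_t packed, int *states, int n) {
    for (int i = 0; i < n; i++)
        states[i] = (int)((packed >> i) & 1);
}

int main(void) {
    int boxes[10] = {1, 0, 0, 1, 0, 1, 0, 1, 0, 1};
    uint64_t v = pack(boxes, 10);
    printf("stored as %llu\n", (unsigned long long)v);
    int restored[10];
    unpack(v, restored, 10);
    printf("first checkbox: %d\n", restored[0]);
    return 0;
}

A 64-bit integer also sidesteps the precision limit mentioned at the end of this post.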

    A small niggle: leading zeroes don’t work

One little problem here is that if the state results in a string with leading zeroes, you get a wrong result back, as toString() doesn’t recreate them (it has no way of knowing how long the string needs to be; all it does is convert the number).

    x = parseInt('00001010101',2);
    x.toString(2);
    "1010101"

You can avoid this in two ways: either pad the string by always starting it with a 1, or reverse the string and loop over the checkboxes in reverse order. In the earlier example I did the padding; in this JSBin you can see the reversing trick:

Storing a list of booleans as a single number (reverse)

Personally, I like the reversing method better; it just feels cleaner. It does rely a lot on truthy/falsy values, though, as the sizes of the resulting arrays differ.

    Limitation

In any case, this only works when the number of checkboxes doesn’t change between storing and restoring, but that’s another issue.

As Matthias Reuter pointed out on Twitter, this is also limited to 52 checkboxes (JavaScript numbers are IEEE 754 doubles, so very large integers lose precision), so if you need more, this is not the solution.

Karl Dubost: Web Compatibility Summit 2015

The Web Compatibility Summit was held in Mountain View (USA) on February 18, 2015. Below I summarize the talks given during the day. I encourage you to continue the discussion on compatibility@lists.mozilla.org.

    If you want to look at the talks:

    Intro to Web Compatibility

Mike Taylor (Mozilla) introduced the topic of Web compatibility. The Web is a giant set of new and old things, and we need to care for many different, sometimes incompatible, pieces. Features disappear and new features emerge, but we still need to keep the Web usable for all users, whatever their devices, countries, and browsers.

Mike reminded us of the evolution of User Agent strings and how they grew with more and more terms. The reason is that the User Agent string became an ID used to decide which content gets served, so every new user agent tries to gain access to Web site content by avoiding being filtered out.

Then he went through some traditional bugs (horrors) in JavaScript, CSS, etc.

WebCompat.com was created to give users a space to report Web compatibility issues they encounter on a Web site. The space is also useful for browser vendors: it is relatively easy to tie a browser's bug reporting directly to webcompat.com.

    Discussions

    Beyond Vendor Prefixes

Jacob Rossi (Microsoft) introduced the purpose and the caveats of vendor prefixes. Vendor prefixes were created to let people test new APIs safely without breaking other browsers: they shield against collisions with other implementations, but they also create Web compatibility issues. The main issue is that Web developers use prefixed features on production sites and in articles, examples, and documentation.

Microsoft tried contacting Web sites that had bogus code examples and articles, and also created new content showing the correct way of writing things. The results were promising for the FullScreen API, but the experiment was less successful for other properties. Basically: fix the problem before it happens.

So how do we keep a large surface for feedback while limiting usage, so that a feature doesn't become a requirement to use a specific browser? The first idea is to put new features behind flags, but then the feedback is shallow. So Jacob floated the idea of an API trial, where a developer would register for a key that enables a feature. It would help developers test, and at the same time make it possible to set deadlines for the trial.

It would probably require a community effort to set up, and it has a cost. Where should this discussion happen? It could be a community group at W3C. Not all APIs need to be used at scale to get a real sense of the feedback; IndexedDB and appcache would have been good candidates for this. If there were a community site, it would also be another good way to build awareness about new features.

A good discussion was recorded on the video.

    How CSS is being used in the wild

Alex McPherson (QuickLeft) presented the wonderful work he did on how CSS properties are currently (2014) used by Web developers in the wild. The report was initially done to understand what QuickLeft should recommend in terms of technology when tackling a new project. For the report, they scraped the CSS of the top 15,000 Web sites, checking the frequency of properties and values and plotting their distributions. The purpose was not exact academic research, so there are definitely caveats in the study, but it gives a rough overview of what is done. The data were collected through a series of HTTP requests, so one consequence is that we probably miss everything set through JavaScript; a better study would use a browser-based crawler that handles the DOM. There are probably variations with regard to the user agent, too.

It would be good to have a temporal view of these data and to repeat the study continuously, so we can see how the Web is evolving. Browser vendors seem to have more resources for this type of study than a single person in an agency.

    Engaging with Web Developers for Site Outreach

Colleen Williams (Microsoft) talked about what it takes to do daily Web compatibility work. Contacting Web developers is really about trying to convince people that there is a benefit for them in supporting a wider range of platforms.

Social networking, and LinkedIn in particular, is very useful for reaching the right people in companies. It's very important to be upfront and to tell developers:

• Who we are
• What we are doing
• Why we are contacting them

The Microsoft IE team has a list of top sites and systematically tests every new version of IE against these sites. It's an opportunity to contact Web sites that are broken. When contacting Web sites, it's important to understand that you are building a long-term relationship. You might have to recontact the people working for a company and/or Web site in a couple of months, so it's important to nurture a respectful and interesting relationship with the Web developers. You can't say "your code sucks". It's important to talk to the technical people directly.

Do not share your contact list with business departments; we need to maintain a level of privacy with the people we are contacting. They keep in contact with you because they are eager to help solve technical issues. Empathy in the relationship goes a long way toward creating mutual trust, and the relationship should always go both ways.

Having part or all of a solution to the issue will help you a lot in getting it fixed. It's better to show code and help the developer demonstrate what could work.

Companies' user-support channels are unfortunately usually not the best tool for reaching the company. There's a difference between having a contact and having the right contact.

    Web Compatibility Data: A Shared Resource

Finally, Justin Crawford (Mozilla) introduced a project to build a better, shared set of site compatibility data. I encourage you to read his own summary on his blog.

    Unconference

We ended the day with discussions in an unconference format, moderated by Alexa Roman. We discussed User Agent sniffing and the format of the UA string, console.log for reporting Web compatibility issues, API trials, documentation, etc.

    More information

You can contact and interact with us:

• IRC: irc.mozilla.org #webcompat
• Mailing list: compatibility@lists.mozilla.org
• Twitter: @webcompat
• Bug reports: http://webcompat.com/

Planet Mozilla Interns: Michael Sullivan: Forcing memory barriers on other CPUs with mprotect(2)

    I have something of an unfortunate fondness for indefensible hacks.

As I discussed in my last post, RCU is a synchronization mechanism that excels at protecting read-mostly data. It is a particularly useful technique in operating system kernels, because full control of the scheduler permits many fairly simple and very efficient implementations of RCU.

    In userspace, the situation is trickier, but still manageable. Mathieu Desnoyers and Paul E. McKenney have built a Userspace RCU library that contains a number of different implementations of userspace RCU. For reasons I won’t get into, efficient read side performance in userspace seems to depend on having a way for a writer to force all of the reader threads to issue a memory barrier. The URCU library has one version that does this using standard primitives: it sends signals to all other threads; in their signal handlers the other threads issue barriers and indicate so; the caller waits until every thread has done so. This is very heavyweight and inefficient because it requires running all of the threads in the process, even those that aren’t currently executing! Any thread that isn’t scheduled now has no reason to execute a barrier: it will execute one as part of getting rescheduled. Mathieu Desnoyers attempted to address this by adding a membarrier() system call to Linux that would force barriers in all other running threads in the process; after more than a dozen posted patches to LKML and a lot of back and forth, it got silently dropped.

    While pondering this dilemma I thought of another way to force other threads to issue a barrier: by modifying the page table in a way that would force an invalidation of the Translation Lookaside Buffer (TLB) that caches page table entries! This can be done pretty easily with mprotect or munmap.
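As a sketch of the idea (illustrative only; the function names here are mine and the real implementation is in the patch linked below), the core might look like this: fault in a dummy page, then downgrade and restore its protection, forcing the kernel to shoot down the TLB entry (and hence send an IPI, implying a barrier) on every CPU currently running a thread of this process:

#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

static void *barrier_page;  /* dummy page used only for shootdowns */
static long page_size;

void force_barrier_init(void) {
    page_size = sysconf(_SC_PAGESIZE);
    barrier_page = mmap(NULL, page_size, PROT_READ | PROT_WRITE,
                        MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (barrier_page == MAP_FAILED) abort();
    *(volatile char *)barrier_page = 0;  /* fault the page in */
}

/* Removing write permission forces a TLB invalidation on every CPU
 * that may have the entry cached, i.e. every CPU currently running a
 * thread of this process; restoring it re-arms the trick. */
void force_barrier_all(void) {
    if (mprotect(barrier_page, page_size, PROT_READ)) abort();
    if (mprotect(barrier_page, page_size, PROT_READ | PROT_WRITE)) abort();
    *(volatile char *)barrier_page = 0;  /* keep the entry hot */
}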

    Full details in the patch commit message.

Planet Mozilla Interns: Michael Sullivan: Why We Fight

    Why We Fight, or

    Why Your Language Needs A (Good) Memory Model, or

    The Tragedy Of memory_order_consume’s Unimplementability

    This, one of the most terrifying technical documents I’ve ever read, is why we fight: https://www.kernel.org/doc/Documentation/RCU/rcu_dereference.txt.

    Background

    For background, RCU is a mechanism used heavily in the Linux kernel for locking around read-mostly data structures; that is, data structures that are read frequently but fairly infrequently modified. It is a scheme that allows for blazingly fast read-side critical sections (no atomic operations, no memory barriers, not even any writing to cache lines that other CPUs may write to) at the expense of write-side critical sections being quite expensive.

    The catch is that writers might be modifying the data structure as readers access it: writers are allowed to modify the data structure (often a linked list) as long as they do not free any memory removed until it is “safe”. Since writers can be modifying data structures as readers are reading from it, without any synchronization between them, we are now in danger of running afoul of memory reordering. In particular, if a writer initializes some structure (say, a routing table entry) and adds it to an RCU protected linked list, it is important that any reader that sees that the entry has been added to the list also sees the writes that initialized the entry! While this will always be the case on the well-behaved x86 processor, architectures like ARM and POWER don’t provide this guarantee.

The simple solution to make the memory order work out is to add barriers on both sides on platforms where it is needed: after initializing the object but before adding it to the list, and after reading a pointer from the list but before accessing its members (including the next pointer). This cost is totally acceptable on the write-side, but is probably more than we are willing to pay on the read-side. Fortunately, we have an out: essentially all architectures (except for the notoriously poorly behaved Alpha) will not reorder instructions that have a data dependency between them. This means that we can get away with only issuing a barrier on the write-side and taking advantage of the data dependency on the read-side (between loading a pointer to an entry and reading fields out of that entry). In Linux this is implemented with the macros “rcu_assign_pointer” (which issues a barrier if necessary, and then writes the pointer) on the write-side and “rcu_dereference” (which reads the value and then issues a barrier on Alpha) on the read-side.
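To make the idiom concrete, here is a rough userland sketch of the pattern (my own stand-in code, not the kernel macros: the release store plays the role of rcu_assign_pointer, and the relaxed load plus data dependency plays the role of rcu_dereference):

#include <stdatomic.h>
#include <stdlib.h>

struct entry { int key; int value; };
static _Atomic(struct entry *) list_head;  /* RCU-protected pointer */

/* Writer: fully initialize the entry, then publish it with a release
 * store -- the barrier-then-write that rcu_assign_pointer() performs
 * on architectures that need it. */
void add_entry(int key, int value) {
    struct entry *e = malloc(sizeof *e);
    e->key = key;
    e->value = value;
    atomic_store_explicit(&list_head, e, memory_order_release);
}

/* Reader: load the pointer cheaply and rely on the data dependency
 * from e to e->value, as rcu_dereference() does (on Alpha it would
 * also need a read barrier here). */
int read_value(void) {
    struct entry *e = atomic_load_explicit(&list_head, memory_order_relaxed);
    return e ? e->value : -1;
}

Note that the reader above leans on exactly the dependency-preservation assumption that the next paragraphs show the compiler is free to break.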

    There is a catch, though: the compiler. There is no guarantee that something that looks like a data dependency in your C source code will be compiled as a data dependency. The most obvious way to me that this could happen is by optimizing “r[i ^ i]” or the like into “r[0]”, but there are many other ways, some quite subtle. This document, linked above, is the Linux kernel team’s effort to list all of the ways a compiler might screw you when you are using rcu_dereference, so that you can avoid them.

    This is no way to run a railway.

    Language Memory Models

    Programming by attempting to quantify over all possible optimizations a compiler might perform and avoiding them is a dangerous way to live. It’s easy to mess up, hard to educate people about, and fragile: compiler writers are feverishly working to invent new optimizations that will violate the blithe assumptions of kernel writers! The solution to this sort of problem is that the language needs to provide the set of concurrency primitives that are used as building blocks (so that the compiler can constrain its code transformations as needed) and a memory model describing how they work and how they interact with regular memory accesses (so that programmers can reason about their code). Hans Boehm makes this argument in the well-known paper Threads Cannot be Implemented as a Library.

    One of the big new features of C++11 and C11 is a memory model which attempts to make precise what values can be read by threads in concurrent programs and to provide useful tools to programmers at various levels of abstraction and simplicity. It is complicated, and has a lot of moving parts, but overall it is definitely a step forward.

    One place it falls short, however, is in its handling of “rcu_dereference” style code, as described above. One of the possible memory orders in C11 is “memory_order_consume”, which establishes an ordering relationship with all operations after it that are data dependent on it. There are two problems here: first, these operations deeply complicate the semantics; the C11 memory model relies heavily on a relation called “happens before” to determine what writes are visible to reads; with consume, this relation is no longer transitive. Yuck! Second, it seems to be nearly unimplementable; tracking down all the dependencies and maintaining them is difficult, and no compiler yet does it; clang and gcc both just emit barriers. So now we have a nasty semantics for our memory model and we’re still stuck trying to reason about all possible optimizations. (There is work being done to try to repair this situation; we will see how it turns out.)
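For reference, here is what consume looks like in C11 source (a minimal sketch; as noted, clang and gcc currently just compile the consume load as an acquire, emitting a barrier wherever acquire needs one):

#include <stdatomic.h>

struct node { int payload; struct node *next; };
static _Atomic(struct node *) head;

/* The consume load is supposed to order only the accesses that carry
 * a data dependency from n, such as n->payload below; in practice,
 * today's compilers promote it to acquire. */
int first_payload(void) {
    struct node *n = atomic_load_explicit(&head, memory_order_consume);
    return n ? n->payload : -1;
}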

    Shameless Plug

My advisor, Karl Crary, and I are working on designing an alternate memory model (called RMC) for C and C++ based on explicitly specifying the execution and visibility constraints that you depend on. We have a paper on it, and I gave a talk about it at POPL this year. The paper is mostly about the theory, but the talk tried to be more practical, and I’ll be posting more about RMC shortly. RMC is quite flexible. All of the C++11 model apart from consume can be implemented in terms of RMC (although that’s probably not the best way to use it), and consume-style operations are done in a more explicit and more implementable (and implemented!) way.

Planet Mozilla Interns: Michael Sullivan: The x86 Memory Model

Often I’ve found myself wanting to point someone to a description of the x86’s memory model, but there wasn’t any that quite laid it out the way I wanted. So this is my take on how shared memory works on multiprocessor x86 systems. The guts of this description are adapted/copied from “A Better x86 Memory Model: x86-TSO” by Scott Owens, Susmit Sarkar, and Peter Sewell; this presentation strips away most of the math and presents it in a more operational style. Any mistakes are almost certainly mine and not theirs.

    Components of the System:

    There is a memory subsystem that supports the following operations: store, load, fence, lock, unlock. The memory subsystem contains the following:

    1. Memory: A map from addresses to values
    2. Write buffers: Per-processor lists of (address, value) pairs; these are pending writes, waiting to be sent to memory
    3. “The Lock”: Which processor holds the lock, or None, if it is not held. Roughly speaking, while the lock is held, only the processor that holds it can perform memory operations.

    There is a set of processors that execute instructions in program order, dispatching commands to the memory subsystem when they need to do memory operations. Atomic instructions are implemented by taking “the lock”, doing whatever reads and writes are necessary, and then dropping “the lock”. We abstract away from this.

    Definitions

    A processor is “not blocked” if either the lock is unheld or it holds the lock.

    Memory System Operation

    Processors issue commands to the memory subsystem. The subsystem loops, processing commands; each iteration it can pick the command issued by any of the processors to execute. (Each will only have one.) Some of the commands issued by processors may not be eligible to execute because their preconditions do not hold.

    1. If a processor p wants to read from address a and p is not blocked:
      a. If there are no pending writes to a in p’s write buffer, return the value from memory
      b. If there is a pending write to a in p’s write buffer, return the most recent value in the write buffer
    2. If a processor p wants to write value v to address a, add (a, v) to the back of p’s write buffer
    3. At any time, if a processor p is not blocked, the memory subsystem can remove the oldest entry (a, v) from p’s write buffer and update memory so that a maps to v
    4. If a processor p wants to issue a barrier
      a. If the barrier is an MFENCE, p’s write buffer must be empty
      b. If the barrier is an LFENCE/SFENCE, there are no preconditions; these are no-ops **
5. If a processor p wants to lock the lock, the lock must not be held and p’s write buffer must be empty; the lock is set to be p
6. If a processor p wants to unlock the lock, the lock must be held by p and p’s write buffer must be empty; the lock is set to be None
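These rules are simple enough to execute directly; below is a toy, single-threaded simulator of the memory subsystem (my own illustration, ignoring the lock and the no-op fences) that shows the write-buffer rules in action:

#include <assert.h>
#include <stdio.h>

#define NPROC 2
#define NADDR 4
#define BUFSZ 64

static int memory[NADDR];
static struct { int addr, val; } wbuf[NPROC][BUFSZ];  /* FIFO buffers */
static int wb_len[NPROC];

/* Rule 2: a write is appended to the back of p's write buffer. */
void do_write(int p, int a, int v) {
    wbuf[p][wb_len[p]].addr = a;
    wbuf[p][wb_len[p]].val = v;
    wb_len[p]++;
}

/* Rule 1: a read returns the newest buffered value for a, if any,
 * and otherwise the value in memory. */
int do_read(int p, int a) {
    for (int i = wb_len[p] - 1; i >= 0; i--)
        if (wbuf[p][i].addr == a) return wbuf[p][i].val;
    return memory[a];
}

/* Rule 3: the memory subsystem flushes p's oldest buffered write. */
void flush_one(int p) {
    assert(wb_len[p] > 0);
    memory[wbuf[p][0].addr] = wbuf[p][0].val;
    for (int i = 1; i < wb_len[p]; i++) wbuf[p][i - 1] = wbuf[p][i];
    wb_len[p]--;
}

/* Rule 4a: MFENCE cannot execute until p's write buffer is empty. */
void do_mfence(int p) {
    while (wb_len[p] > 0) flush_one(p);
}

int main(void) {
    do_write(0, 0, 1);                /* CPU0 buffers x = 1           */
    printf("%d\n", do_read(0, 0));    /* CPU0 sees its own write: 1   */
    printf("%d\n", do_read(1, 0));    /* CPU1 still reads memory: 0   */
    do_mfence(0);                     /* drain CPU0's buffer          */
    printf("%d\n", do_read(1, 0));    /* now CPU1 reads 1             */
    return 0;
}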

    Remarks

So, the only funny business that can happen is that a load can happen before a prior store to a different location has been flushed from the write buffer into memory. This means that if CPU0 executes “x = 1; r0 = y” and CPU1 executes “y = 1; r1 = x”, with x and y both initially zero, we can get “r0 == r1 == 0”.
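You can watch this happen on real hardware; here is a small litmus-test harness (my own, not from the paper) using C11 relaxed atomics, which compile to plain loads and stores on x86, so any reordering you observe comes from the store buffer itself:

/* Compile with: cc -O2 -pthread sb.c */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static atomic_int x, y;
static int r0, r1;

static void *cpu0(void *arg) {
    atomic_store_explicit(&x, 1, memory_order_relaxed);
    r0 = atomic_load_explicit(&y, memory_order_relaxed);
    return NULL;
}

static void *cpu1(void *arg) {
    atomic_store_explicit(&y, 1, memory_order_relaxed);
    r1 = atomic_load_explicit(&x, memory_order_relaxed);
    return NULL;
}

int main(void) {
    int seen = 0;
    for (int i = 0; i < 100000; i++) {
        atomic_store(&x, 0);
        atomic_store(&y, 0);
        pthread_t a, b;
        pthread_create(&a, NULL, cpu0, NULL);
        pthread_create(&b, NULL, cpu1, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        if (r0 == 0 && r1 == 0) seen++;
    }
    printf("r0 == r1 == 0 observed %d times\n", seen);
    return 0;
}

On an x86 machine this will typically report a nonzero count, an outcome a sequentially consistent model could never produce.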

    The common intuition that atomic instructions act like there is an MFENCE before and after them is basically right; MFENCE requires the write buffer to empty before it can execute and so do lock and unlock.

x86 is a pleasure to compile atomics code for. The “release” and “acquire” operations in the C++11 memory model don’t require any fencing to work. Neither do the notions of “execution order” and “visibility order” in my advisor’s and my RMC memory model.

    ** The story about LFENCE/SFENCE is a little complicated. Some sources insist that they actually do things. The Cambridge model models them as no-ops. The guarantees that they are documented to provide are just true all the time, though. I think they are useful when using non-temporal memory accesses (which I’ve never done), but not in general.