Jess Klein: Hive Labs at the Mozilla Festival: Building an Ecosystem for Innovation



 


This weekend marks the fifth anniversary of the Mozilla Festival - and Hive Labs has a ton of fun design-oriented, hands-on activities to get messy with in person or remotely. We are using the event to explore design questions that are relevant to local communities and Hives and to dabble in building out a community-driven ecosystem for innovation. Here are a few highlights:

Challenges to Enacting and Scaling Connected Learning

This year, the Hive track at MozFest (http://2014.mozillafestival.org/tracks/) is bringing together Hive and "Hive curious" travelers from around the world to incubate solutions to shared challenges in enacting and scaling connected learning. We're working together over the course of the MozFest weekend to collaboratively answer questions that come up again and again in our networks across the globe. One question that Hive Labs is focusing on is: How do we build a community that supports innovation in the education space? 



Action Incubator

We will be hosting a series of activities embedded within the Hive track to think through problems in your Hives and local communities and to brainstorm solutions collectively. We will be leveraging three teaching kits that were made specifically to facilitate this kind of design-thinking activity:

  • Firestarter: In this activity, participants will identify opportunities and then brainstorm potential design solutions.
  • User Testing at an Event: Events are a great time to leverage all of the different kinds of voices in a room to get feedback on an in-progress project or half-baked idea. This activity will help you to test ideas and projects and get constructive and actionable feedback.
  • Giving + Getting Actionable Feedback: Getting effective and actionable feedback on your half-baked ideas and projects can be challenging. This activity explores some ways to structure your feedback session.

Art of the Web 

This entire track is dedicated to showcasing and making art using the Web as your medium. Follow the #artoftheweb hashtag on Twitter.



MozFest Remotee Challenge

Want to join in on all of the Mozilla Festival action even though you aren't physically at the event? This challenge is for you! We have compiled a handful of activities focused on Web Literacy, supporting community-based learning and making so that you can take part in the conversation and brainstorming at the Mozilla Festival. Go here to start the challenge.

You can follow along all weekend using the #mozfest or #hivebuzz hashtags on Twitter.

Mozilla Reps Community: Reps Weekly Call – October 23rd 2014

Last Thursday we had our regular weekly call about the Reps program, where we talk about what’s going on in the program and what Reps have been doing during the last week.


Summary

  • The Open Standard.
  • Firefox 10th anniversary.
  • New council members.
  • Mozfest.
  • FOSDEM.
  • New reporting activities.

Detailed notes

AirMozilla video

https://air.mozilla.org/reps-weekly-20141023/

Don’t forget to comment about this call on Discourse and we hope to see you next week!

Frédéric Wang: A quick note for Mozillians regarding MathML on Wikipedia

As mentioned some time ago and as recently announced on the MathML and MediaWiki mailing lists, a MathML mode with SVG/PNG fallback is now available on Wikipedia. In order to test it, you need to log in with a Wikipedia account and select the mode in the "Math" section of your preferences.

(Screenshot: MathML rendered with the Latin Modern Math font in Firefox 32 on Windows 8, zoomed.)

Some quick notes for Mozillians:

  • Although Mozilla intern Jonathan Wei has done some work on MathML accessibility and there are reports of work in progress to make Firefox work with NVDA / Orca / VoiceOver, we unfortunately still don't have something ready for Gecko browsers. You can instead try the existing solutions for Safari or Internet Explorer (ChromeVox and JAWS 16 beta are supposed to be MathML-aware but fail to read the MathML on Wikipedia at the moment).

  • By default, the following MATH fonts are tried: Cambria Math, Latin Modern Math, STIX Math, Latin Modern Math (Web font). In my opinion, our support for Cambria Math (installed by default on Windows) is still not very good, so I'd recommend using Latin Modern Math instead, which has the same "Computer Modern" style as the current PNG mode. To do that, go to the "Skin" section of your preferences and just add the rule math { font-family: Latin Modern Math; } to your "Custom CSS". Latin Modern Math is installed with most LaTeX distributions, available from the GUST website and provided by the MathML font add-on.

  • You can actually install various fonts and try to make the size and style of the math font consistent with the surrounding text. Here are some examples:

    /* Asana Math (Palatino style) */
    .mw-body, mtext {
        font-family: Palatino Linotype, URW Palladio L, Asana Math;
    }
    math {
        font-family: Asana Math;
    }
    
    
    /* Cambria (Microsoft Office style) */
    .mw-body, mtext {
        font-family: Cambria;
    }
    math {
        font-family: Cambria Math;
    }
    
    
    /* Latin Modern (Computer Modern style) */
    .mw-body, mtext {
        font-family: Latin Modern Roman;
    }
    math {
        font-family: Latin Modern Math;
    }
    
    
    /* STIX/XITS (Times New Roman style) */
    .mw-body, mtext {
        font-family: XITS, STIX;
    }
    math {
        font-family: XITS Math, STIX Math;
    }
    
    
    /* TeX Gyre Bonum (Bookman style) */
    .mw-body, mtext {
        font-family: TeX Gyre Bonum;
    }
    math {
        font-family: TeX Gyre Bonum Math;
    }
    
    
    /* TeX Gyre Pagella (Palatino style) */
    .mw-body, mtext {
        font-family: TeX Gyre Pagella;
    }
    math {
        font-family: TeX Gyre Pagella Math;
    }
    
    
    /* TeX Gyre Schola (Century Schoolbook style) */
    .mw-body, mtext {
        font-family: TeX Gyre Schola;
    }
    math {
        font-family: TeX Gyre Schola Math;
    }
    
    
    /* TeX Gyre Termes (Times New Roman style) */
    .mw-body, mtext {
        font-family: TeX Gyre Termes;
    }
    math {
        font-family: TeX Gyre Termes Math;
    }
    
  • We still have bugs with missing fonts and font inflation on mobile devices. If you are affected by these bugs, you can force the SVG fallback instead:

    span.mwe-math-mathml-inline, div.mwe-math-mathml-display {
      display: none !important;
    }
    span.mwe-math-mathml-inline + .mwe-math-fallback-image-inline {
      display: inline !important;
    }
    div.mwe-math-mathml-display + .mwe-math-fallback-image-display {
      display: block !important;
    }
    
  • You might want to install some Firefox add-ons for copying MathML/LaTeX, zooming formulas or configuring the math font.

  • Finally, don't forget to report bugs to Bugzilla so that volunteers can continue to improve our MathML support. Thank you!

Rob Campbell: Thanks, Mozilla

This’ll be my last post to planet.mozilla.org.

After 8 years and thousands of airmiles, I’ve decided to leave Mozilla. It’s been an incredible ride and I’ve met a lot of great people. I am incredibly grateful for the adventure.

Thanks for the memories.

Benoit Girard: GPU Profiling has landed

A quick reminder that one of the biggest benefits of having our own built-in profiler is that individual teams and projects can add their own performance-reporting features. The graphics team just landed a feature to measure how much GPU time is consumed when compositing.

I already started using this in bug 1087530 where I used it to measure the improvement from recycling our temporary intermediate surfaces.

Here we can see that the frame had two rendering phases (group opacity test case) totaling 7.42ms of GPU time. After applying the patch from the bug and measuring again I get:

Now, with the surface retained, the rendering drops to 5.7ms of GPU time. Measuring GPU time is important because timing these operations on the CPU is not accurate.

Currently we still haven't completed the D3D implementation or hooked it up to WebGL; we will do that as the need arises. To implement this, when profiling, we insert a query object into the GPU pipeline for each rendering phase (framebuffer switches).
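
As an aside, the same timer-query technique is exposed to web content through WebGL's EXT_disjoint_timer_query extension. The sketch below is only an illustration of the general idea, not the Gecko compositor code: wrap one rendering phase in a query, then poll until the elapsed GPU time becomes available.

    // Illustrative sketch: GPU timing of one rendering phase in WebGL 1,
    // assuming the EXT_disjoint_timer_query extension is available.
    function measureGpuTime(gl, drawPhase, onResult) {
      var ext = gl.getExtension('EXT_disjoint_timer_query');
      if (!ext) {
        onResult(null);                              // GPU timing not supported here
        return;
      }
      var query = ext.createQueryEXT();
      ext.beginQueryEXT(ext.TIME_ELAPSED_EXT, query);
      drawPhase();                                   // e.g. all draw calls for one framebuffer
      ext.endQueryEXT(ext.TIME_ELAPSED_EXT);

      (function poll() {
        var available = ext.getQueryObjectEXT(query, ext.QUERY_RESULT_AVAILABLE_EXT);
        var disjoint = gl.getParameter(ext.GPU_DISJOINT_EXT);
        if (available && !disjoint) {
          var ns = ext.getQueryObjectEXT(query, ext.QUERY_RESULT_EXT);
          onResult(ns / 1e6);                        // nanoseconds -> milliseconds
        } else if (!disjoint) {
          requestAnimationFrame(poll);               // result not ready yet; check next frame
        } else {
          onResult(null);                            // disjoint event: discard this measurement
        }
      })();
    }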


Gavin Sharp: r=gfritzsche

I’m happy to announce that Georg Fritzsche is now officially a Firefox reviewer.

Georg has been contributing to Firefox for a while now – his contributions started with some great work on Firefox’s Telemetry system, as well as investigating stability issues in plugins and Firefox itself. He played a crucial role in building the telemetry experiments system, and more recently has become familiar with a few key parts of the Firefox front-end code, including our click-to-play plugin UI and the upcoming “FHR self support” feature. He’s a thorough reviewer with lots of experience with Mozilla code, so don’t hesitate to ask Georg for review if you’re patching Firefox!

Thanks Georg!

Matt Brubeck: A little randomness for Hacker News

In systems that rely heavily on “most popular” lists, like Reddit or Hacker News, the rich get richer while the poor stay poor. Since most people only look at the top of the charts, anything that’s not already listed has a much harder time being seen. You need visibility to get ratings, and you need ratings to get visibility.

Aggregators try to address this problem by promoting new items as well as popular ones. But this is hard to do effectively. For example, the “new” page at Hacker News gets only a fraction of the front page’s traffic. Most users want to see the best content, not wade through an unfiltered stream of links. Thus, very little input is available to decide which links get promoted to the front page.

As an experiment, I wrote a userscript that uses the Hacker News API to search for new or low-ranked links and randomly insert just one or two of them into the front page. It’s also available as a bookmarklet for those who can’t or don’t want to install the user script.

Install user script (may require a browser extension)

Randomize HN (drag to bookmark bar, or right-click to bookmark)

This gives readers a chance to see and vote on links that they otherwise wouldn’t, without altering their habits or wading through a ton of unfiltered content. Each user will see just one or two links per visit, but thanks to randomness a much larger number of links will be seen by the overall user population. My belief, though I can’t prove it, is that widespread use of this feature would improve the quality of the selection process.

The script isn’t perfect (search for FIXME in the source code for some known issues), but it works well enough to try out the idea. Unfortunately, the HN API doesn’t give access to all the data I’d like, and sometimes the script won’t find any suitable links to insert. (You can look at your browser’s console to see which items were randomly inserted.) Ideally, this feature would be built into Hacker News – and any other service that recommends “popular” items.
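
For the curious, the core of the approach can be sketched roughly as follows. This is not the actual userscript: it uses the official Hacker News Firebase API, and the selectors for the front-page rows (tr.athing, td.title) are assumptions about HN's markup.

    // Rough sketch (not the real userscript): pick a random story from HN's
    // "new" list via the Firebase API and splice it into the front page.
    function fetchJSON(url, callback) {
      var xhr = new XMLHttpRequest();
      xhr.open('GET', url);
      xhr.onload = function () { callback(JSON.parse(xhr.responseText)); };
      xhr.send();
    }

    fetchJSON('https://hacker-news.firebaseio.com/v0/newstories.json', function (ids) {
      var id = ids[Math.floor(Math.random() * ids.length)];
      fetchJSON('https://hacker-news.firebaseio.com/v0/item/' + id + '.json', function (item) {
        if (!item || !item.url) return;                        // skip deleted items and self posts
        var rows = document.querySelectorAll('tr.athing');     // assumed front-page row markup
        if (!rows.length) return;
        var target = rows[Math.floor(Math.random() * rows.length)];
        var row = target.cloneNode(true);                      // reuse an existing row as a template
        var cell = row.querySelector('td.title');              // assumed title cell
        if (cell) {
          var link = document.createElement('a');
          link.href = item.url;
          link.textContent = item.title + ' [randomly inserted]';
          cell.innerHTML = '';
          cell.appendChild(link);
        }
        target.parentNode.insertBefore(row, target);
      });
    });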

Peter Bengtsson: localForage vs. XHR

tl;dr; Fetching from IndexedDB is about 5-15 times faster than fetching from AJAX.

localForage is a wrapper that makes it easy to work with local storage in the browser; different browsers have different implementations. By default, when you use localForage in Firefox it uses IndexedDB, which is asynchronous, meaning your script doesn't get blocked whilst waiting for data to be retrieved.

A good pattern for a "fat client" (lots of JavaScript, server primarily speaks JSON) is to download some data by AJAX as JSON and then store it in the browser. Next time you load the page, you first read from the local storage in the browser whilst you wait for fresh JSON from the server. That way you can present data to the screen sooner. (This is how Buggy works; I blogged about it here.)

Another, similar pattern is to load everything by AJAX from the server, present it and store it in the local storage. Then, periodically (or just on load), you send the most recent timestamp from the data you've received and the server gives you back everything new and everything that has changed since that timestamp. The advantage of this is that the payload stays small, but the server has to build a custom response for each client, whereas a big fat blob of JSON can be cached more easily. However, oftentimes the data depends on your credentials/cookie anyway, so most likely you can't do much caching.

Anyway, whichever pattern you attempt, I thought it'd be interesting to get a feel for how much faster it is to retrieve from the browser's storage compared to doing a plain old AJAX GET request. After all, browsers have seriously optimized AJAX requests these days, so basically the only thing standing in your way is network latency.

So I wrote a little comparison script that tests this. It's here: http://www.peterbe.com/localforage-vs-xhr/index.html

It retrieves a 225KB JSON blob from the server and measures how long that takes to become an object. It then does the same with localforage.getItem, runs both 10 times, and compares the times. It's obviously no surprise that the local storage retrieval is faster; what's interesting is how big the difference is.
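
The core of such a comparison can be as small as the sketch below. This is a minimal version under a few assumptions: localforage is loaded on the page, the blob was saved with localforage.setItem under the (hypothetical) key 'blob' on an earlier visit, and '/some-blob.json' stands in for the real endpoint.

    // Time an AJAX GET plus JSON.parse, so both paths end with a JS object.
    function timeXHR(url, done) {
      var t0 = performance.now();
      var xhr = new XMLHttpRequest();
      xhr.open('GET', url);
      xhr.onload = function () {
        var obj = JSON.parse(xhr.responseText);
        done(performance.now() - t0, obj);
      };
      xhr.send();
    }

    // Time reading the same blob back from localForage (IndexedDB in Firefox).
    function timeLocalForage(key, done) {
      var t0 = performance.now();
      localforage.getItem(key, function (err, obj) {
        done(performance.now() - t0, obj);
      });
    }

    timeXHR('/some-blob.json', function (ms) { console.log('XHR:', ms.toFixed(1), 'ms'); });
    timeLocalForage('blob', function (ms) { console.log('localForage:', ms.toFixed(1), 'ms'); });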

What do you think? I'm sure both sides can be optimized, but at this level it feels like quite a realistic scenario.

Doug Belshaw: Interim results of the Web Literacy Map 2.0 community survey

Thanks to my colleague Adam Lofting, I’ve been able to crunch some of the numbers from the Web Literacy Map v2.0 community survey. This will remain open until the end of the month, but I thought I’d share some of the results.

Web Literacy Map v2.0 community survey: overview

This is the high-level overview. Respondents are able to indicate the extent to which they agree or disagree with each proposal on a five-point scale. The image above shows the average score as well as the standard deviation. Basically, for the top row the higher the number the better. For the bottom row, low is good.

Web Literacy Map v2.0 community survey: by location

Breaking it down a bit further, there’s some interesting things you can pull out of this. Note that the top-most row represents people who completed the survey, but chose not to disclose their location. All of the questions are optional.

Things that stand out:

  • There’s strong support for Proposal 4: I believe that concepts such as ‘Mobile’, 'Identity’, and 'Protecting’ should be represented as cross-cutting themes in the Web Literacy Map.
  • There’s also support for Proposal 1: I believe the Web Literacy Map should explicitly reference the Mozilla manifesto and Proposal 5: I believe a 'remix’ button should allow me to remix the Web Literacy Map for my community and context.
  • People aren’t so keen on Proposal 2: I believe the three strands should be renamed 'Reading’, 'Writing’ and 'Participating’. (although, interestingly, in the comments people say they like the term 'Participating’, just not 'Reading’ and 'Writing’ which many said reminded them too much of school!)

This leaves Proposal 3: I believe the Web Literacy Map should look more like a 'map’. This could have been better phrased, as the assumption in the comments seems to be that we can present it either as a grid or as a map. In fact, there’s no reason why we can’t do both. Indeed, I’d like to see us produce:

Finally, a word on Proposal 5 and remixing. In the comments there’s support for this - but also a hesitation lest it 'dilutes’ the impact of the Web Literacy Map. A number of people suggested using a GitHub-like model where people can 'fork’ the map if necessary. In fact, this is already possible as v1.1 is listed as a repository under the Mozilla account.

I’m looking forward to doing some more analysis of the community survey after it closes!


Comments? Questions? Send them this way: @dajbelshaw or doug@mozillafoundation.org

Benjamin Smedberg: How I Do Code Reviews at Mozilla

Since I received some good feedback about my prior post, How I Hire at Mozilla, I thought I’d try to continue this as a mini-series about how I do other things at Mozilla. Next up is code review.

Even though I have found new module owners for some of the code I own, I still end up doing 8-12 review/feedback cycles per week. Reviews are only as good as the time you spend on them: I approach reviews in a fairly systematic way.

When I load a patch for review, I don’t read it top-to-bottom. I also try to avoid reading the bug report: a code change should be able to explain itself either directly in the code or in the code commit message which is part of the patch. If bugzilla comments are required to understand a patch, those comments should probably be part of the commit message itself. Instead, I try to understand the patch by unwrapping it from the big picture into the small details:

The Commit Message

  • Does the commit message describe accurately what the patch does?
  • Am I the right person to make a decision about this change?
  • Is this the right change to be making?
  • Are there any external specifications for this change? This could include a UX design, or a DOM/HTML specification, or something else.
  • Should there be an external specification for this change?
  • Are there other experts who should be involved in this change? Does this change require a UX design/review, or a security, privacy, legal, localization, accessibility, or addon-compatibility review or notification?

Read the Specification

If there is an external specification that this change should conform to, I will read it or the appropriate sections of it. In the following steps of the review, I try to relate the changes to the specification.

Documentation

If there is in-tree documentation for a feature, it should be kept up to date by patches. Some changes, such as Firefox data collection, must be documented. I encourage anyone writing Mozilla-specific features and APIs to document them primarily with in-tree docs, and not on developer.mozilla.org. In-tree docs are much more likely to remain correct and be updated over time.

API Review

APIs define the interaction between units of Mozilla code. A well-designed API that strikes the right balance between simplicity and power is a key component of software engineering.

In Mozilla code, APIs can come in many forms: IDL, IPDL, .webidl, C++ headers, XBL bindings, and JS can all contain APIs. Sometimes even C++ files can contain an API; for example Mozilla has a mostly-unfortunate pattern of using the global observer service as an API surface between disconnected code.

In the first pass I try to avoid reviewing the implementation of an API. I’m focused on the API itself and its associated doccomments. The design of the system and the interaction between systems should be clear from the API docs. Error handling should be clear. If it’s not perfectly obvious, the threading, asynchronous behavior, or other state-machine aspects of an API should be carefully documented.

During this phase, it is often necessary to read the surrounding code to understand the system. None of our existing tools are very good at this, so I often have several MXR tabs open while reading a patch. Hopefully future review-board integration will make this better!

Brainstorm Design Issues

In my experience, the design review is the hardest phase of a review, the part which requires the most experience and creativity, and provides the most value.

  • How will this change interact with other code in the tree?
  • Are there edge cases or failure cases that need to be addressed in the design?
  • Is it likely that this change will affect performance or security in unexpected ways?

Testing Review

I try to review the tests before I review the implementation.

  • Are there automated unit tests?
  • Are there automated performance tests?
  • Is there appropriate telemetry/field measurement, especially for error conditions?
  • Do the tests cover the specification, if there is one?
  • Do the tests cover error conditions?
  • Do the tests strike the right balance between “unit” testing of small pieces of code versus “integration” tests that ensure a feature works properly end-to-end?

Code Review

The code review is the least interesting part of the review. At this point I’m going through the patch line by line.

  • Make sure that the code uses standard Mozilla coding style. I really desperately want somebody to automate lots of this as part of the patch-submission process. It’s tedious both for me and for the patch author if there are style issues that delay a patch and require another review cycle.
  • Make sure that the number of comments is appropriate for the code in question. New coders/contributors sometimes have a tendency to over-comment things that are obvious just by reading the code. Experienced contributors sometimes make assumptions that are correct based on experience but should be called out more explicitly in the code.
  • Look for appropriate assertions. Assertions are a great form of documentation and runtime checking.
  • Look for appropriate logging. In Mozilla we tend to under-log, and I’d like to push especially new code toward more aggressive logging.
  • Mostly this is scanning the implementation. If there is complexity (such as threading!), I’ll have to slow down a lot and make sure that each access and state change is properly synchronized.

Re-read the Specification

If there is a specification, I’ll briefly re-read it to make sure that it was covered by the code I just finished reading.

Mechanics

Currently, I primarily do reviews in the bugzilla “edit” interface, with the “edit attachment as comment” option. Splinter is confusing and useless to me, and review-board doesn’t seem to be ready for prime-time.

For long or complex reviews, I will sometimes copy and quote the patch in emacs and paste or attach it to bugzilla when I’m finished.

In some cases I will cut off a review after one of the earlier phases: if I have questions about the general approach, the design, or the API surface, I will often try to clarify those questions before proceeding with the rest of the review.

There’s an interesting thread in mozilla.dev.planning about whether it is discouraging to new contributors to mark “review-” on a patch, and whether there are less-painful ways of indicating that a patch needs work without making them feel discouraged. My current practice is to mark r- in all cases where a patch needs to be revised, but to thank contributors for their effort so that they are still appreciated and to be as specific as possible about required changes while avoiding any words that could be perceived as an insult.

If I haven’t worked with a coder (paid or volunteer) in the past, I will typically always ask them to submit an updated patch with any changes for re-review. This allows me to make sure that the changes were completed properly and didn’t introduce any new problems. After I gain some experience, I will often trust people to make necessary changes and simply mark “r+ with review comments fixed”.

Michael Kaply: Disabling Buttons In Preferences

I get asked a lot how to disable certain buttons in preferences like Make Firefox the default browser or the various buttons in the Startup groupbox. Firefox does have a way to disable these buttons, but it's not very obvious. This post will attempt to remedy that.

These buttons are controlled through preferences that have the text "disable_button" in them. Just changing the preference to true isn't enough, though. The preference has to be locked, either via the CCK2 or AutoConfig. What follows is a mapping of all the preferences to their corresponding buttons.

  • pref.general.disable_button.default_browser: Advanced->General->Make Firefox the default browser
  • pref.browser.homepage.disable_button.current_page: General->Use Current Pages
  • pref.browser.homepage.disable_button.bookmark_page: General->Use Bookmark
  • pref.browser.homepage.disable_button.restore_default: General->Restore to Default
  • security.disable_button.openCertManager: Advanced->Certificates->View Certificates
  • security.disable_button.openDeviceManager: Advanced->Certificates->Security Devices
  • app.update.disable_button.showUpdateHistory: Advanced->Update->Show Update History
  • pref.privacy.disable_button.cookie_exceptions: Privacy->History->Exceptions
  • pref.privacy.disable_button.view_cookies: Privacy->History->Show Cookies
  • pref.privacy.disable_button.view_passwords: Security->Passwords->Saved Passwords
  • pref.privacy.disable_button.view_passwords_exceptions: Security->Passwords->Exceptions

As a bonus, there's one more preference you can set and lock - pref.downloads.disable_button.edit_actions. It prevents the changing of any actions on the Applications page in preferences.
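
For example, with AutoConfig already set up (a general.config.filename preference pointing at your autoconfig.cfg), locking a preference is just a lockPref() call; the snippet below hides the default-browser and homepage buttons, and you can substitute any preference from the mapping above. Note that the first line of an AutoConfig file must be a comment.

    // autoconfig.cfg - first line must be a comment
    lockPref("pref.general.disable_button.default_browser", true);
    lockPref("pref.browser.homepage.disable_button.current_page", true);
    lockPref("pref.browser.homepage.disable_button.bookmark_page", true);
    lockPref("pref.browser.homepage.disable_button.restore_default", true);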

Mozilla Release Management Team: Firefox 34 beta1 to beta2

  • 60 changesets
  • 135 files changed
  • 2926 insertions
  • 673 deletions

Extension: Occurrences

  • js: 32
  • cpp: 23
  • cc: 17
  • jsm: 9
  • html: 9
  • css: 9
  • jsx: 7
  • h: 7
  • ini: 3
  • sh: 2
  • patch: 2
  • mm: 2
  • manifest: 2
  • list: 2
  • xul: 1
  • xml: 1
  • txt: 1
  • py: 1
  • mozilla: 1
  • mk: 1
  • json: 1
  • build: 1

Module: Occurrences

  • browser: 60
  • gfx: 21
  • security: 12
  • content: 11
  • toolkit: 5
  • layout: 4
  • widget: 3
  • testing: 3
  • media: 3
  • gfx: 3
  • js: 2
  • xulrunner: 1
  • xpcom: 1
  • netwerk: 1
  • mobile: 1
  • +gfx: 1
  • dom: 1

List of changesets:

Justin Dolske: Bug 1068290 - UI Tour: Add ability to highlight New Private Window icon in chrome. r=MattN a=dolske - aa77bc7b59e3
Justin Dolske: Bug 1072036 - UI Tour: Add ability to highlight new privacy button. r=mattn a=dolske - d37b92959827
Justin Dolske: Bug 1071238 - UI Tour: add ability to put a widget in the toolbar. r=mattn a=dolske - 1c96180e6a5b
Blair McBride: Bug 1068284 - UI Tour: Add ability to highlight search provider in search menu. r=MattN a=dolske - 9d4b08eecd9a
Joel Maher: Bug 1083369 - update talos.json to include fixes for mainthreadio whitelist and other goodness. r=dminor a=test-only - 184bc1bea651
Bas Schouten: Bug 1074272 - Use exception mode 0 for our D3D11 devices. r=jrmuizel, a=sledru - 46d2991042df
Gijs Kruitbosch: Bug 1079869 - Fix closing forget panel by adding a closemenu=none attribute. r=jaws, a=sledru - e6441f98f159
Patrick Brosset: Bug 1020038 - Disable test browser/devtools/layoutview/test/browser_layoutview_update-in-iframes.js. a=test-only - da7c401c5aa7
Jeff Gilbert: Bug 1079848 - Large allocs should be infallible and handled. r=kamidphish, a=sledru - 03d4ab96c271
Mike Hommey: Bug 1081031 - Unbust xulrunner mac builds by not exporting all JS symbols (Bug 920731). r=bsmedberg, a=npotb - 5967c4a96835
Mark Banner: Bug 1081906 - Fix unable to start Firefox due to 'Couldn't load XPCOM'. r=bsmedberg, a=sledru - c212fd07fd32
Georg Fritzsche: Bug 1079312 - Fix invalid log.warning() to log.warn(). r=irving, a=sledru - ae6317e02f72
Mike de Boer: Bug 1081130 - Fix importing contacts with only a phone number and fetch the correct format. r=abr, a=sledru - 1f7f807b6362
Simon Montagu: Test for Bug 1067268. r=jfkthame, a=lmandel - 4f904d9bcff2
Simon Montagu: Bug 1067268 - Don't mix physical and logical coordinates when calculating width to clear past floats. r=jfkthame, a=lmandel - 29dd7b8ee41f
JW Wang: Bug 1069289 - Take |mAudioEndTime| into account when updating playback position at the end of playback. r=kinetik, a=lmandel - 31fc68be9136
David Keeler: Bug 1058812 - mozilla::pkix: Add SignatureAlgorithm::unsupported_algorithm to better handle e.g. roots signed with RSA/MD5. r=briansmith, a=lmandel - 2535e75ff9c6
David Keeler: Bug 1058812 - mozilla::pkix: Test handling unsupported signature algorithms. r=briansmith, a=lmandel - a7b8a4567262
Jonathan Kew: Bug 1074223 - Update OTS to pick up fixes for upstream issues 35, 37. Current rev: c24a839b1c66c4de09e58fabaacb82bf3bd692a4. r=jdaggett, a=lmandel - 6524ec11ce53
Gijs Kruitbosch: Bug 1050638 - Should be able to close tab with onbeforeunload warning if clicking close a second time. r=ttaubert, a=lmandel - 98fc091c4706
JW Wang: Bug 760770 - Allow 'progress' and 'suspend' events after 'ended'. r=roc, a=test-only - 915073abfd8b
Benjamin Chen: Bug 1041362 - Modify testcases because during the oncanplaythrough callback function, the element might not ended but the mediastream is ended. r=roc, a=test-only - c3fa7201e034
Jean-Yves Avenard: Bug 1079621 - Return error instead of asserting. r=kinetik, a=lmandel - 9be2b1620955
Andrei Oprea: Bug 1020449 Loop should show caller information on incoming calls. Patch originally by Andrei, updated and polished by Standard8. r=nperriault a=lmandel - 742beda04394
Mark Banner: Bug 1020449: Fix typo in addressing review comments in Bug 1020449 that caused broken jsx. rs=NiKo a=lmandel - 0033bca3ce22
Mark Banner: Bug 1029433 When in a Loop call, the title bar should display the remote party's information. r=nperriault a=lmandel - 530ec559a14c
Simone Bruno: Bug 1058286 - Add in-tree manifests needed for tests. DONTBUILD a=NPOTB - 0b7106ef79d2
Jonathan Kew: Bug 1074809 - For OTS warning (rather than failure) messages, only log the first occurrence of any given message per font. r=jdaggett, a=lmandel - 1875f4aff106
Ed Lee: Bug 1081157 - "What is this page" link appears on "blank" version of about:newtab. r=ttaubert, a=sledru - c00a4cfe83e9
Matthew Gregan: Bug 1080986 - Check list chunk is large enough to read list ID before reading. r=giles, a=sledru - bb851de524c2
Matthew Gregan: Bug 1079747 - Follow WhatWG's MIMESniff spec for MP4 more closely. r=cpearce, a=sledru - f752e25f4c42
Steven Michaud: Bug 1084589 - Fix a Yosemite topcrasher. r=gijskruitbosch a=gavin - 3a24d0c65745
Mike de Boer: Bug 1081061: switch to a different database if a userProfile is active during the first mozLoop.contacts access to always be in sync with the correct state. r=MattN. a=lmandel - 6b4c22bfe385
Mark Banner: Bug 1081066 Incoming call window stays open forever if the caller closes the window/tab or crashes. r=nperriault a=lmandel - 8c329499cf7d
Matthew Noorenberghe: Bug 1079656 - Make the Loop Account menu item work after a restart. r=jaws a=lmandel - ada526904539
Mike de Boer: Bug 1076967: fix Error object data propagation to Loop content pages. r=bholley a=lmandel - 880cfb4ef6f8
Mark Banner: Bug 1078226 Unexpected Audio Level indicator on audio-only calls for Loop, also disable broken low-quality video warning indicator. r=nperriault a=lmandel - 5ad9f4e96214
Nicolas Perriault: Bug 1048162 Part 1 - Add an 'Email Link' button to Loop desktop failed call view. r=Standard8 a=lmandel - f705ffd06218
Mark Banner: Bug 1081154 - Loop direct calls should attempt to call phone numbers as well as email addresses. r=mikedeboer a=lmandel - 191b3ce44bea
Nicolas Perriault: Bug 1048162 Part 2 - Display an error message if fetching an email link fails r=standard8,darrin a=lmandel - 3fc523fcc7da
Mike de Boer: Bug 1013989: change the label of the Loop button to Hello. r=MattN a=lmandel - 2edc9ed56fa4
Romain Gauthier: Bug 1079811 - A new call won't start if the outgoing call window is opened (showing feedback or retry/cancel). r=Standard8 a=lmandel - a4e22c4da890
Mike de Boer: Bug 1079941: implement LoopContacts.search to allow searching for contacts by query and use that to find out if a contact who's trying to call you is blocked. r=abr a=lmandel - bda95894a692
Randell Jesup: Bug 1084384: support importing contacts with phone numbers in a different format r=abr a=lmandel - 8c42ccaf8aa1
Mike de Boer: Bug 1084097: make sure that the Loop button only shows up in the palette when unthrottled. r=Unfocused a=lmandel - 5840764c4312
Randell Jesup: Bug 1070457: downgrade assertion about cubeb audiostreams to a warning r=roc a=lmandel - 9e420243b962
Randell Jesup: Bug 1075640: Don't return 0-length frames for decoding; add comments about loss handling r=ehugg a=lmandel - bb26f4854630
Ethan Hugg: Bug 1075640 - Check for zero length frames in GMP H264 decode r=jesup a=lmandel - 452bc7db811e
Luke Wagner: Bug 1064668 - OdinMonkey: Only add AsmJSActivation to profiling stack after it is fully initialized. r=djvj, a=lmandel - eb43a2c05eb3
Bas Schouten: Bug 1060588 - Use PushClipRect when possible in ClipToRegionInternal. r=jrmuizel, a=lmandel - 6609fd74488b
Gijs Kruitbosch: Bug 1077404 - subviewradio elements in panic button panel are elliptical and labels get borders. r=jaws, a=lmandel - 2418d9e17dd0
Markus Stange: Bug 1081160 - Update window shadows for Yosemite. r=smichaud, a=lmandel - fc032520a26e
Chenxia Liu: Bug 1079761 - Add 'stop tab mirroring' one level higher in the menu. r=rbarker, a=lmandel - 4018d170ab06
Mike Hommey: Bug 1082323 - Reject pymake in client.mk. r=mshal, a=sledru - f19a52b7e6ec
Tomasz Kołodziejski: Bug 1074817 - Allow downgrade of content-prefs.sqlite. r=MattN, a=lmandel - 2235079fe205
Gijs Kruitbosch: Bug 1083895 - Favicon should not change if link element isn't in DOM. r=bz, a=lmandel - 73bc0bc9343b
Dragana Damjanovic: Bug 1081794 - Fixing a test for dns request cancel. On e10s, the dns resolver is sometimes faster than a cancel request. Use a random string to be resolved instead of a fix one. r=sworkman, a=test-only - 6d2e5afd8b75
Nicolas Silva: Bug 1083071 - Add some old intel drivers to the blocklist. r=Bas, a=sledru - 53b97e435f10
Nicolas Silva: Bug 1083071 - Blacklist device family IntelGMAX4500HD drivers older than 7-19-2011 because of OMTC issues on Windows. r=Bas, a=sledru - b5d97c1c71b7
Ryan VanderMeulen: Bug 1083071 - Change accidentally-used periods to commas. rs=nical, a=bustage - 8cc403ad710b

Robert Nyman: The editors I’ve been using – which one is your favorite?

The other day when I wrote about Vim and how to get started with it, I got a bit nostalgic with the editors I’ve been using over the years.

Therefore, I thought I’d list the editors I’ve been using over the years. I remember dabbling around with a few and trying to understand them, but this list is made up of editors that I’ve been using extensively:

    Allaire HomeSite
    Ah, good ol’ HomeSite. You never forget your first real editor that you used for your creations. It was later bought by MacroMedia and then, in 2009, it was retired. Its creator, Nick Bradbury, wrote a bit about that in HomeSite Discontinued. I also sometimes used TopStyle, also created by Nick, as a complement to HomeSite – and that one is actually still alive!
    Visual Studio.NET
    I was young and I needed the money.
    TextMate
    After my switch to Mac OS X, I quickly started using TextMate and it was my main editor for a good number of years.
    MacVim
    When I had used TextMate for a long time, a number of developers told me I should really get into Vim, where MacVim seemed like the most suitable alternative. I tried, really hard, with it for about 6 months; learned a lot, but eventually went back to TextMate.
    Sublime Text
    Later, along came Sublime Text, which seemed to have a lot of nice features and active development, while TextMate had been pretty stale for a long time.
    MacVim (again)
    And now, as explained in my recent blog post on Vim, I’m back there again. :-)

I also do like to dabble around with various editors, to see what I like, get another perspective on workflow and general inspiration. One thing I’m toying around with there is Atom from GitHub, and I look forward to testing it more as well.

Which editor are you using?

It would be very interesting and great if you’d like to share in the comments which editor you are using, and why you prefer it! Or with which editor you started your developer career!

Nick Cameron: Thoughts on numeric types

Rust has fixed width integer and floating point types (`u8`, `i32`, `f64`, etc.). It also has pointer width types (`int` and `uint` for signed and unsigned integers, respectively). I want to talk a bit about when to use which type, and comment a little on the ongoing debate around the naming and use of these types.

Choosing a type


Hopefully you know whether you want an integer or a floating point number. From here on in I'll assume you want an integer, since they are the more interesting case. Hopefully you know if you need your integer to be signed or not. Then it gets interesting.

All other things being equal, you want to use the smallest integer you can for performance reasons. Programs run as fast or faster on smaller integers (especially if the larger integer has to be emulated in software, e.g., `u64` on a 32 bit machine). Furthermore, smaller integers will give smaller code.

At this point you need to think about overflow. If you pick a type which is too small for your data, then you will get overflow and usually bugs. You very rarely want overflow. Sometimes you do - if you want arithmetic modulo 2^n where n is 8, 16, 32, or 64, you can use the fixed width type and rely on overflow behaviour. Sometimes you might also want signed overflow for some bit twiddling tricks. But usually you don't want overflow.

If your data could grow to any size, you should use a type which will never overflow, such as Rust's `num::bigint::BigInt`. You might be able to do better performance-wise if you can prove that values might only overflow in certain places and/or you can cope with overflow without 'upgrading' to a wider integer type.

If, on the other hand, you choose a fixed width integer, you are asserting that the value will never exceed that size. For example, if you have an ascii character, you know it won't exceed 8 bits, so you can use `u8` (assuming you're not going to do any arithmetic which might cause overflow).

So far, so obvious. But, what are `int` and `uint` for? These types are pointer width; that means they are the same size as a pointer on the system you are compiling for. When using these types, you are asserting that a value will never grow larger than a pointer (taking into account details about the sign, etc.). This is actually quite a rare situation; the usual case is when indexing into an array, which is itself quite rare in Rust (since we prefer using an iterator).

What you should never do is think "this number is an integer, I'll use `int`". You must always consider the maximum size of the integer and thus the width of the type you'll use.

Language design issues


There are a few questions that keep coming up around numeric types - how to name the types? Which type to use as a default? What should `int`/`uint` mean?

It should be clear from the above that there are only very few situations when using `int`/`uint` is the right choice. So, it is a terrible choice for any kind of default. But what is a good choice? Well first of all, there are two meanings for 'default': the 'go to' type to represent an integer when programming (especially in tutorials and documentation), and the default when a numeric literal does not have a suffix and type inference can't infer a type. The first is a matter of recommendation and style, and the second is built-in to the compiler and language.

In general programming, you should use the right width type, as discussed above. For tutorials and documentation, it is often less clear which width is needed. We have had an unfortunate tendency to reach for `int` here because it is the most familiar and the least scary looking. I think this is wrong. We should probably use a variety of sized types so that newcomers to Rust get acquainted with the fixed width integer types and don't perpetuate the habit of using `int`.

For a long time, Rust had `int` as the default type when the compiler couldn't decide on something better. Then we got rid of the default entirely and made it a type error if no precise numeric type could be inferred. Now we have decided we should have a default again. The problem is that there is no good default. If you aren't being explicit about the width of the value, you are basically waving your hands about overflow and taking an "it'll be fine, whatever" approach, which is bad programming. There is no default choice of integer which is appropriate for that situation (except a growable integer like BigInt, but that is not an appropriate default for a systems language). We could go with `i64` since that is the least worst option we have in terms of overflow (and thus safety). Or we could go with `i32` since that is probably the most performant on current processors, but neither of these options are future-proof. We could use `int`, but this is wrong since it is so rare to be able to reason that you won't overflow when you have an essentially unknown width. Also, on processors with less than 32 bit pointers, it is far too easy to overflow. I suspect there is no good answer. Perhaps the best thing is to pick `i64` because "safety first".

Which brings us to naming. Perhaps `int`/`uint` are not the best names, since they suggest they should be the default types to use when they are not. Names such as `index`/`uindex`, `int_ptr`/`u_ptr`, and `size`/`usize` have been suggested. All of these are better in that they suggest the proper use for these types. They are not such nice names though, but perhaps that is OK, since we should (mostly) discourage their use. I'm not really sure where I stand on this; again, I don't really like any of the options, and at the end of the day, naming is hard.

Raniere Silva: The Status of Math in Open Access


This week, October 20–26, 2014, is Open Access Week (see the announcement), and it will end with the Mozilla Festival, which has an awesome science track. This post has some thoughts about math, open access and these two events.

Read more...

Curtis Koenig: Thanks for all the Fish

I’ve always loved that book, or in fact any of Douglas Adams’ books, as they made me laugh when reading them for the first time. And like the ending of that series, it always seemed a good way to start an ending. This is only the 3rd real job I’ve ever had and they’ve all ended with that as the subject line, so by now you all know where this is going.

The last 3 years 9 months and 21 days have been the best of my adult working life. Mozilla has been more than a job, more than a career. It was a home. The opportunity to apply ones talent in conjunction with values and mission is a gift. It’s a dream state, even on bad days, that I gladly would have remained a slumberer in. The Community of Mozilla is a powerful and wonderful uniqueness that embodies the core of what it means to be Open, and if we ever lose that we’ve lost a precious gem.

I hope to work with many of you again at some future time. If we cross paths somewhere I’d happily lift a libation in remembrance.

With that I shall end with one of my favorite bits of poetry:

Two roads diverged in a yellow wood,
And sorry I could not travel both
And be one traveler, long I stood
And looked down one as far as I could
To where it bent in the undergrowth;

Then took the other, as just as fair,
And having perhaps the better claim
Because it was grassy and wanted wear,
Though as for that the passing there
Had worn them really about the same,

And both that morning equally lay
In leaves no step had trodden black.
Oh, I kept the first for another day!
Yet knowing how way leads on to way
I doubted if I should ever come back.

I shall be telling this with a sigh
Somewhere ages and ages hence:
Two roads diverged in a wood, and I,
I took the one less traveled by,
And that has made all the difference.

 

/Curtis


Ben Kero: Missoula visit, day 1

NOTE: This is a personal post, so if these sort of things do not interest you please feel free to disregard.

Yesterday I drove from Seattle to Missoula to visit my mother and help her sort out her health issues. I left later than usual, but with the drive time reduced by an hour compared to Portland combined with the relatively straight and boring I-90 I made it to Missoula with energy to spare (which also might have been why I was up far later than my arrival time).

The purpose of my visit is to visit my mother, show that I still care about my family, and help her sort out her medical issues (she’s currently going through her third round of cancer). When I arrived last night around 2300 I didn’t get a very good look at her. She had stayed awake past her normal 2000 time to await my arrival and greet me. Last night was relatively uneventful besides my restful sleep. She showed me to the apartment’s single bedroom. Each surface was meticulously cleaned, although none of the multitude of tchotchkes or personal accessories were organized or put away. While using the bathroom I noticed there was a small mop and bucket. Normally I wouldn’t think much of it, but in previous weeks my sister told me that my mother had given herself a panic attack making sure the apartment was spotless for my arrival. Unfortunately my trip over was waylaid for a few weeks, so I hope that she hasn’t been in such a state the whole time.

Last year I sent her an air purifier to remove all the pet dander from the air, and although she couldn’t figure out what it was or its purpose, thankfully she had finally figured it out and was using it. As a result the air was much cleaner and her phlegm-laden smoker’s cough was slightly better than last time I visited.

This morning I woke up late because I fell asleep quite late into the morning, combined with the timezone change. She has a few friends here in Missoula who (as far as I can tell) provide her with some company and a source of gossip and usefulness. When I emerged from my room this morning I found her on the phone with one such friend. I didn’t get much from the conversation, but apparently the daily phone calls are a routine for her. That’s good.

What I really didn’t like though was the television. During the day it was a large part of her unchanging environment. Equipped with a “digital cable” box, this tube never showed anything besides Informative Murder Porn. Likewise, the bookshelves were chock full of James Patterson books, promising more tripe romance and novelized informative murder porn. Although I fear that it’s rotting her brain, I refrained from commenting about it.

After my emergence this morning we finally got a good look at each other. She finally noticed my attempt at growing a beard and I finally noticed her emaciation. Her weight is below 100lb now, and she looks positively skeletal. It’s not a pretty sight. It appears she attempted to dye her hair recently, but even with that effort it’s resisted, instead opting for a grey and wispy appearance. I offered to pick her up a meal with my lunch, but she refused saying that she already had breakfast.

We talked for a while about how she’s been. Besides the new medical situation nothing ever seems to change with her. She remains cloistered in her dark, smoke-smelling apartment with nothing to keep her company save her pets, her informative murder porn, and trashy novels. She hasn’t expressed any dissatisfaction with the situation, so perhaps she enjoys it. I haven’t asked, and haven’t decided if it’s appropriate to ask yet. Perhaps it’s imposing my values on her to think that she would be unhappy with this bountiful life she’s living.

This morning we talked about her medical situation. The gist of the situation is that she believes that she’s been caught up in a catch-22 with a set of doctors, each waiting to hear from another before proceeding to make a diagnosis and start a treatment. I asked her about what she knows (red blood cell count is down), and what kind of treatment she was currently undergoing. The answer was no treatment, so I continued by asking which doctors she’s seen and what their next steps would be (or what they’re waiting on). I got quite a few answers about quite a few doctors, and was unfortunately unable to follow most of it. I asked her to write a list of doctors down, along with what she thinks they’re waiting on and the last time they’ve been in contact. Hopefully when I get home this evening I can help untangle this and get her the treatment she needs. She didn’t seem particularly worried about things, which frustrated me. Her off-the-cuff attitude about it, combined with the television showing an emotionally charged lady with a bloody knife crumpled into a heap on the ground while the police intervened to save the day and arrest the paedophile, made me want to ragequit and give her some time to compile the list of doctors. I do not have much hope of returning home this evening to any list.

This afternoon, as usual for my Missoula visits, I’ve holed up at The Break coffee shop. It’s an old standby for spending uninterrupted time on my laptop to get a little work done. It’s also conveniently located far away from that apartment, and near the other goodness of downtown Missoula. The espresso might have been burnt, but the well worn tables, music reminiscent of my high school days in Missoula, and mixed clientèle lead to a very pleasing atmosphere. As I sit at my stained and worn coffee table typing away at my laptop I can see a young man in a cowboy hat and vest conversing with a friend in a denim jacket over a cup of joe, a dreadlocked young lady enjoying an iced coffee alone, equipped with Macbook and textbooks. Various others including a suited businessman, and a few white collar workers taking a lunch break are around. The bright but persistently drizzling conditions outside add to the hearth-like atmosphere of the shop. This place is going to get sick of me before the week is out.

This afternoon I’m going to attempt to do a bit of work and get in contact with my father and sister for their obligatory visits. I hope that I can return to my mother’s place and provide some sort of assistance besides moral support.

I’ll be in Missoula until Thursday evening.

Mic Berman: What Does Your Life Design Look Like?

I would guess many people would react to this title like what on earth is she talking about? Design my life - what does that even mean? Many people I coach and work with have not thought about their life’s design - meaning the focused work you do, is it in support of your values and vision? the people you surround yourself with - do they inspire and motivate you? the physical space you live in - does it give you energy?  

 

Your life design includes more than your career. I invite my clients to think beyond what's on paper. Your career is your role, salary, title, industry, company. I’m asking you to also think of who you will be surrounded with, where it might take you beyond the role right now, how it will honour or not honour your values, and how it will fit with your vision for your life.

 

If we jump mindlessly from one opportunity to the next - even if they are great opportunities on the surface (e.g. more money, a bigger title, etc.) - it can cost us in our personal lives in ways we may not consider. Arianna Huffington recently wrote about this in her book Thrive, “Our relentless pursuit of the two traditional metrics of success - money and power - has led to an epidemic of burnout and stress-related illnesses, and an erosion in the quality of our relationships, family life, and, ironically, our careers”. In addition to this sentiment, everyone’s heard repeated stories of those on their deathbed who never regret not having worked more or harder - what they always say is they wished they'd spent more time with people they loved or doing things that inspire and nurture. These are our regrets. So how do you want to design your life with no regrets in mind?

 

Here are my 3 tips on designing which I hope inspires you to re-create your designed life:

  1. Know your values and how you want to honour them

  2. Define your vision if not for 5 years at least for the next year - what are the big impressions you want to leave

  3. Who are your stakeholders (the people that matter to you) and how will you honour them

Doug Belshaw: A 10-point #MozFest survival guide

It’s the Mozilla Festival in London this weekend. It’s sold out, so you’ll have to beg, borrow or steal a ticket! This will be my fourth, and my third as a paid contributor (i.e. Mozilla employee).

Here are my tips for getting the most out of it.

1. Attend the whole thing

There’s always the temptation with multi-day events not to go to each of the days. It’s easy to slip off into the city – especially if it’s one you haven’t been to before. However, that would be a real shame as there’s so much to do and see at MozFest. Plus, you really should have booked a few days either side to chill out.

2. Sample everything

Some tracks will grab you more than others. However, with nine floors and multiple sessions happening at the same time, there’s always going to be something to keep you entertained. Feel free to vote with your feet if you’re not getting maximum value from a session – and drop into something you don’t necessarily know a lot about!

3. Drink lots

Not alcohol or coffee – although there’ll be plenty of that on offer! I mean fluids that will rehydrate you. At the Mozilla Summit at the end of last year we were all given rehydration powder along with a Camelbak refillable bottle. This was the perfect combination and I urge you to bring something similar. Pro tip: if you can’t find the powder (it’s harder to come by in the UK) just put a slice of lemon in the bottom of the bottle to keep it tasting fresh all day!

4. Introduce yourself to people

The chances are that you don’t know all 1,600 people who have tickets for MozFest. I know I don’t! You should feel encouraged to go up and introduce yourself to people who look lost, bewildered, or at a loose end. Sample phrases that seem to work well:

  • “Wow, it’s pretty crazy, eh?”
  • “Hi! Which session have you just been to?”
  • “Is this your first MozFest?”

5. Take time out

It’s easy to feel overwhelmed, so feel free to find a corner, put your headphones on and zone out for a while. You’ll see plenty of people doing this – on all floors! Pace yourself – it’s a marathon, not a sprint.

6. Wear comfortable shoes/clothing

There are lifts at the venue but, as you can imagine, with so many people there they get full quickly. As a result there’ll be plenty of walking up and down stairs. Wear your most comfortable pair of shoes and clothing that’ll still look good when you’re a sticky mess. ;-)

7. Expect tech fails

I’ve been to a lot of events and at every single one, whether because of a technical problem or human error, there’s been a tech fail. Expect it! Embrace it. The wifi is pretty good, but mobile phone coverage is poor. Plan accordingly and have a backup option.

8. Ask questions

With so many people coming from so many backgrounds and disciplines, it’s difficult to know the terminology involved. If someone ‘drops a jargon bomb’ then you should call them out on it. If you don’t know what they mean, then the chances are others won’t know either. And if you’re the one doing the explaining, be aware that others may not share your context.

9. Come equipped

Your mileage may vary, but I’d suggest the following:

  • Bag
  • Laptop
  • Mobile phone and/or tablet
  • Pen
  • Notepad
  • Multi-gang extension lead
  • Charging cables
  • Headphones
  • Snacks (e.g. granola bars)
  • Refillable water bottle

I’d suggest a backpack as something over one shoulder might eventually cause pain. You might also want to put a cloth bag inside the bag you’re carrying in case you pick up extra stuff.

10. Build (and network!)

MozFest is a huge opportunity to meet and co-create stuff with exceptionally talented and enthusiastic people. So get involved! Bring your skills and lend a hand in whatever’s being built. If nothing else, you can take photos and help document the festival.

The strapline of MozFest is ‘arrive with an idea, leave with a community’. Unlike some conferences that have subtitles that, frankly, bear no relation to what actually happens, this one is dead on. You’ll want to keep in touch with people, so in addition to the stuff listed above you might want to bring business cards. Far from being a 20th century thing, I’ve found them much more useful than just writing on a scrap of paper or exchanging Twitter usernames.


This isn’t meant to be comprehensive, just my top tips. But I’d be very interested to hear your advice to newbies if you’re a MozFest veteran! Leave a comment below. :-)

Image CC BY-SA Alan Levine

Update: my colleague Kay Thaney has a great list of blighty sights that you should check out too!

Mic Berman: What do you want for your life? Knowing oneself

There are many wiser than me who have offered knowing yourself as a valuable pursuit that brings great rewards.

 

Here are a few of my favourite quotes on why to do this:

 

This is how I invest in knowing myself - I hope it inspires you to create your own practice

  1. I spend time understanding my motivations, values in action or inaction, and my triggers. I leverage my coach to deconstruct situations that were particularly difficult or rewarding, where I’m overwhelmed by emotion and don’t feel I can think rationally - I check in to get crystal clear on what is going on, how am I feeling, what was the trigger(s) and how will I be at choice in action going forward.
  2. I challenge myself in areas where I want to understand more about myself, by reading, going to lectures, and sharing honestly with learned or experienced friends.
  3. I keep a daily journal - particularly around areas in my life I want to change or improve, like being on time and creating sufficient time in my day for reflection. I’ve long run my calendar to ‘maximize time and service’, i.e. every available minute is in meetings, working on a project, etc. This is not only unsustainable for me, it doesn’t leave me any room for the unexpected and, more importantly, no opportunity to reflect on what may have just happened or prepare for what or whom I may be seeing next. This is not fair to me nor to the people I work with.

Mic BermanHow are you taking care of yourself?

The leaders I coach drive themselves and their teams to great achievements, are engaged in what they do, love their work and have passion and compassion in how they work for their teams and customers. They face tough situations - impossible-seeming deadlines or goals, difficult conversations, constant re-balancing of work-life priorities, and crazy business scenarios we’ve never faced before.

Their days can be both energizing and completely draining. And each day they face those choices and predicaments at times with full grace and others with total foolishness.

Along the way I hear and offer the questions: how are you taking care of yourself? How will you rejuvenate? How will you maintain balance? I ask these questions of the leaders I work with so that they can keep driving their goals, over-achieving each day and showing up for the important people in their lives :)

 

I focus on three ways to do this myself.

  • Knowing myself - spending time to understand and check in with my values, triggers, and motivations.

  • Doing a daily practice - I’ve created a daily and weekly practice that touches on my mind, body and spirit. This discipline and evolving practice keeps me learning, present and ‘in balance’.

  • Being discerning about my influences - choosing the people, experiences and beauty that influence my life, and what’s important about that today, this week, this month or this year.

Jen Fong-AdwentDistribute everything

I've been thinking about how our online communication and interactions have come at a cost to our personal relationships while helping others profit.

Priyanka NagMozilla and WeTech Women's Maker Party, Delhi

Well, I love the name Larissa came up with for today's event. It is a little long, but it defines the event best - "Mozilla and WeTech Women’s Maker Party".

We had landed in Delhi on the 22nd of July 2014 and as Larissa defines it, Delhi was indeed a 'steam sauna'. We did spend most of that day going around and visiting a few famous places like the Red Fort, the India Gate, Parliament house etc. In the evening, we did meet the local Mozillians in Delhi. Well, it was an informal meeting of all Mozillians, talking all 'sh!t mozillians say' ;)

23rd morning began with all excitement. It was a small crowd, but a really awesome crowd in that conference room. Right from the introduction session, we could feel how intellectually capable these young ladies were. After a small game of spectrogram, we immediately moved on to introducing Mozilla as an organization as well as all the Mozilla projects. To my surprise, most of the participants already knew about Open Source and had a fair idea about Mozilla. To my greater surprise, all of our participants had used Firefox at some point (even if it was not their default/regular browser). It was thus easy to introduce the different Mozilla projects and contribution pathways to them.

Serious hacking in progress...
The confidence these dynamic ladies showcased was beyond appreciation.
One thing each person in the room agreed to was - "being a woman in technology is indeed tough". But these girls were ready to face the tough world and fight it out for themselves!

Post lunch, we got to some webmaking. So much hacking, so much remixing...it was tough to believe that many of these people were "not from a technical background".
Some of the awesome makes can be found listed on this spreadsheet.

Well, it goes without saying that these superstars definitely deserved some awards for their awesomeness, and thus we did give them some Webmaker badges.

Very few events have given me the happiness of being able to convert almost all participants into Mozillians and this was one of those rare ones.

The awesome woman Webmakers of Delhi :)


Priyanka NagMaker Party Bhubaneshwar

Last weekend I had a blast in Bhubaneshwar. Over two days, I was there at two different colleges for two Maker parties.

Saturday (23rd August 2014), we were at the Center of IT & Management Education (CIME), where we were asked to address a crowd of 100 participants whom we were supposed to teach webmaking. Trust me, very rarely do we get such a crowd at events, where we get the opportunity to be less of a teacher and more of a learner. We taught them webmaking, true, but in return we learnt a lot from them.

Maker Party at Center of IT & Management Education (CIME)

On Sunday, things were even more fabulous at Institute of Technical Education & Research(ITER), Siksha 'O' Anusandhan University college, where we were welcomed by around 400 participants, all filled with energy, enthusiasm and the willingness to learn.

Maker Party at Institute of Technical Education & Research(ITER)

Our agenda for both days was simple... to have loads and loads of fun! We kept the tracks interactive and very open ended. On both days, we covered the following topics:
  • Introduction to Mozilla
  • Mozilla Products and projects
  • Ways of contributing to Mozilla
  • Intro to Webmaker tools
  • Hands-on session on Thimble, Popcorn and X-ray goggles and Appmaker
On both days, we concluded our sessions by giving away some small tokens of appreciation, like T-shirts, badges, stickers etc., to the people who had been extra awesome in the group. We concluded the awesomeness of the two days by cutting a very delicious cake and fighting over it till its last piece.
Cake.....
Bidding goodbye after two days was tough, but after witnessing the enthusiasm of everyone we met during these two events, I am very sure we are going to return soon to Bhubaneshwar for even more awesomeness.
A few people who are to be thanked for making these events successful and very memorable are:
  1. Sayak Sarkar, the co-organizer for this event.
  2. Sumantro, Umesh and Sukanta for travelling all the way from Kolkata and helping us out with the sessions.
  3. Rish and Prasanna for organizing these events.
  4. Most importantly, the entire team of volunteers from both colleges, without whom we wouldn't have been able to even move a desk.
P.S. - Not to forget, we did manage to grab the media's attention as well. The event was covered by a local newspaper.
The article in the newspaper next morning

Christian HeilmannRemoving private metadata (geolocation, time, date) from photos the simple way: removephotodata.com

When you take photos with your smartphone or camera it adds much more to the image than meets the eye. This EXIF data contains all kinds of interesting information: type of device, flash on or off, time, date and, most worryingly, geographical location. Services like Flickr or Google Plus use this data to show your photos on a map, which is nice, but you may find yourself in situations where you share images without wanting to tell the recipient in detail where and when they were taken.

For example the photo of me here:

christian heilmann, not sure about the shirt

Doesn’t only tell you that I am not sure about this shirt, but also the following information:

  • GPSInfoIFDPointer: 462
  • Model: Nexus 5
  • ExifIFDPointer: 134
  • YCbCrPositioning: 1
  • YResolution: 72
  • ResolutionUnit: 2
  • XResolution: 72
  • Make: LGE
  • ApertureValue: 3.07
  • InteroperabilityIFDPointer: 432
  • DateTimeDigitized: 2014:10:19 16:06:20
  • ShutterSpeedValue: 5.321
  • ColorSpace: 1
  • DateTimeOriginal: 2014:10:19 16:06:20
  • FlashpixVersion: 0100
  • ExposureBias: 0
  • PixelYDimension: 960
  • ExifVersion: 0220
  • PixelXDimension: 1280
  • FocalLength: 1.23
  • Flash: Flash did not fire
  • ExposureTime: 0.025
  • ISOSpeedRatings: 102
  • ComponentsConfiguration: YCbCr
  • FNumber: 2.9
  • GPSImgDirection: 105
  • GPSImgDirectionRef: M
  • GPSLatitudeRef: N
  • GPSLatitude: 59,19,6.7941
  • GPSLongitudeRef: E
  • GPSLongitude: 18,3,35.5311
  • GPSAltitudeRef: 0
  • GPSAltitude: 0
  • GPSTimeStamp: 14,6,10
  • GPSProcessingMethod: ASCIIFUSED
  • GPSDateStamp: 2014:10:19

I explained in my TEDx talk on making social media social again that this might be an issue in the case of nude photos people put online, and showed that there is a command line tool called ExifTool that allows for stripping out this extra data. This article describes other tools that do the same. ExifTool is the 800 pound gorilla of this task, as it also allows you to edit EXIF data.

As a lot of people asked me for a tool to do this, I wanted to make it easier for you without having to resort to an installable tool. Enter removephotodata.com

Remove photo data in action

This is a simple web page that allows you to pick an image from your hard drive, see the data and save an image with all the data stripped by clicking a button. You can see it in action in this screencast.

Under the hood, all I do is use Jacob Seidelin’s EXIF.js and copy the photo onto a CANVAS element to read out the raw pixel data without any of the extra information. The source code is on GitHub.
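For illustration only – this is not the site's JavaScript, but a minimal Python sketch of the same idea using Pillow: copy just the pixel data into a fresh image so the EXIF block gets left behind (the file names are placeholders).

from PIL import Image

def strip_exif(src_path, dst_path):
    img = Image.open(src_path)
    # a new image receives only the raw pixels, none of the metadata
    clean = Image.new(img.mode, img.size)
    clean.putdata(list(img.getdata()))
    clean.save(dst_path)

strip_exif('shirt.jpg', 'shirt-clean.jpg')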

The tool does not store any image data and all the calculations and information gathering happens on your computer. Nothing gets into the cloud or onto my server.

So go and drag and drop your images there before uploading them. Be safe® out there.

Karl DubostMonitoring Web page differences on a long term with screenshots

Hallvord has created a very nice system for testing regressions automatically. The results are displayed on Are We Compatible Yet?. I had explained in a blog post how to add or fix tests there.

I was reading yesterday about another type of large scale testing, for link rot on the UK Web. They share some of the challenges we meet in terms of guessing whether a Web page has changed meaningfully or not. It's a very interesting read. At one point they try to determine if something is similar or dissimilar. Here is what they say.

The easy case is when the content is exactly the same – we can just record that the resources are identical at the binary level. If not, we extract whatever text we can from the archived and live URLs, and compare them to see how much the text has changed. To do this, we compute a fingerprint from the text contained in each resource, and then compare those to determine how similar the resources are. This technique has been used for many years in computer forensics applications, such as helping to identify ‘bad’ software, and here we adapt the approach in order to find similar web pages.

Specifically, we generate ssdeep ‘fuzzy hash’ fingerprints, and compare them in order to determine the degree of overlap in the textual content of the items. If the algorithm is able to find any similarity at all, we record the result as ‘SIMILAR’. Otherwise, we record that the items are ‘DISSIMILAR’.
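Assuming the ssdeep Python bindings are installed, a small sketch of that comparison could look like this (the zero threshold mirrors their SIMILAR/DISSIMILAR split):

import ssdeep

def textual_similarity(text_a, text_b):
    # ssdeep.compare returns a 0-100 match score between two fuzzy hashes
    score = ssdeep.compare(ssdeep.hash(text_a), ssdeep.hash(text_b))
    return 'SIMILAR' if score > 0 else 'DISSIMILAR'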

I remember a talk I gave in the past about quality, on how to use Selenium to take screenshots at different stages of development and check whether the rendering stayed the same. One way to evaluate the differences is to compare the images, starting by taking a screenshot:

from selenium import webdriver
browser = webdriver.Firefox()
browser.get('http://www.mozilla.org/')
browser.save_screenshot('moz-home.png')
browser.close()

And then, once you have multiple screenshots of the same page, to compare two images, such as the reference image and the new screenshot:

import difflib

def diff_ratio(screen_ref, screen_new):
    with open(screen_ref, 'rb') as ref, open(screen_new, 'rb') as new:
        s = difflib.SequenceMatcher(None, ref.read(), new.read())
    return s.quick_ratio()

SequenceMatcher is a tool for comparing pairs of sequences of any type, and quick_ratio() returns an upper bound on .ratio() relatively quickly, which is a measure of the sequences' similarity (a float in [0, 1]). Just to give an example of the type of results:

import difflib
a = 'Life is a long slow river.'
b = 'Life is among slow rivers.'
s = difflib.SequenceMatcher(None, a, b)
s.quick_ratio()
# returns 0.9230769230769231
s2 = difflib.SequenceMatcher(None, a, a)
s2.quick_ratio()
# returns 1.0

So if the images are quite similar it will return a number close to 1.

And For Web Compatibility Issues?

Most of our use cases are Web sites not sending the right version of the Web site, such as desktop content instead of the mobile version. So I was wondering if we could have a very quick check which would involve less human checking during our surveys of sites for certain countries.

One possible challenge (among maybe many) is Web sites relying heavily on advertisements and serving different images on each load. In this case, the screenshots would differ even if the site sends the same version.

Another one is press Web sites, which change the content of their home pages quite often.

Maybe it's worth testing. Maybe we would get an acceptable ratio.
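As a rough sketch of such a check, reusing diff_ratio() from above – the file names and the 0.9 threshold are only illustrative:

if diff_ratio('site-desktop.png', 'site-mobile-ua.png') > 0.9:
    # the page served to the mobile UA renders just like the desktop site
    print('Probably the same version: worth a human check')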

Addendum

I had forgotten but Hallvord did that already for quickly selecting the screenshots which were worth testing. He added:

I suppose we could explore stuff like hashing all the CSS code included in a page and compare hashes to find different styling. Although my next project is going to be using Compatipede 2, finding all the elements in a DOM that are styled with -webkit- properties or values, then generate XPath identifiers or JS to locate the same element and see if it has equivalent styles when the page loads in a different rendering engine.

Otsukare.

Nicholas NethercoteFirefox OS phones on sale in Australia

Firefox OS phones are now on sale in Australia! You can buy a ZTE Open C with Firefox OS 1.3 installed for $99 (AUD) at JB Hi-Fi. (For non-Australian readers: JB Hi-Fi is probably the biggest electronics and home entertainment retailer in Australia.)

Australia’s not the ideal market for the current versions of Firefox OS, being a country where a large fraction of people already use high-end phones. But it’s nice that they’re easily available :)

Mozilla Reps CommunityNew council members – Fall 2014

We are happy to announce that four new members of the Council have been elected.

Welcome San James, Ankit, Luis and Bob! They bring with them skills they have picked up as Reps mentors, and as community builders both inside Mozilla and in other fields. A HUGE thank you to the outgoing council members – Guillermo Movia, Sayak Sarkar, Nikos Roussos and Majda Nafissa. We hope you will continue to use your talents and experience in a leadership role in Reps and Mozilla.

The Mozilla Reps Council is the governing body of the Mozilla Reps Program. It provides the general vision of the program and oversees day-to-day operations globally. Currently, 7 volunteers and 2 paid staff sit on the council. Find out more on the ReMo wiki.

new-council

Congratulate new Council members on this Discourse topic!

Aaron TrainAndroid MediaCodec use in Firefox for Android

James 'snorp' Willcox has landed support for hardware decoding via the public MediaCodec Java class in Nightly for Android for devices running Android 4.1+ (Jellybean). This is replacing OMXCodec & the Stagefright library which was introduced as a replacement for OpenCore for media decoding. This relatively new public Java class is used for decoding H.264/AAC in MP4 for playback in the browser with the benefit of allowing for direct access to the media codecs on the device through a "raw" interface.

This should correct a number of playback issues which have been reported to us regarding problems on Android 4.1+ devices.

Victory!

How to Help

  • Install Nightly (available on 10/21/2014's build)
  • Test video playback on your Android 4.1+ device (e.g, test page)
  • Talk to us on IRC about your experience or problems

Curtis KoenigCurtis Report 2014-10-17

This week was a real grab bag of stuff starting and stopping, with interruptions that kept me all over the place.

I’m particularly happy to see the Python learning group starting the week of 2014-10-20, and I feel like the work for my local Mozillians and KitHerder is getting closer to done. I spent all day Friday working out install issues with KitHerder with the developer (thanks WiredCrow for all the help).

  • local Mozillians working
  • Announced Python learning group
  • Working on more details and bits for next weeks launch
  • SecChamps Report
  • vendor review
  • SeaSponge video feedback
  • EME research
  • EME PTR setup
  • KitHerder wrangling / install

Meetings

Mon

  • Weekly meeting

Tue

  • SecAutomation
  • Cloud Security Team Meeting

Wed

  • MWoS team meeting
  • Web Bug Triage

Thu

  • Sec Open Mic
  • Community Building Team
  • 1:1

Jet VillegasThe Graphical Web (conference video)

I occasionally accept invitations to speak at conferences and events. Here’s the video from my recent talk at The Graphical Web in Winchester, England. I discuss how and why I now work on Web Platform Rendering, and how disruptive innovations are enabled by seemingly mundane key technologies that bridge gaps for developers and audiences.

Soledad PenadesA VERY BELATED Mozilla Festival 2013 post

Note: I started writing this last year, right after the festival finished, and then I went heads down into a spiral of web audio hacking and conferencing and what not, so I didn’t finish it.

But with the festival starting this Friday, it’s NOW OR NEVER!

Ahead with the PUBLISH button!

~~~

(AKA #MozFest everywhere else)

MozFest finished almost a year ago already, but I’m still feeling its effects on my brain: tons of new ideas, and a pervasive feeling of not having enough time to develop them all. I guess it’s good (if I manage it properly).

I came to the Festival without knowing what it would be about. The Mozilla London office had been pretty much taken over by the Mozilla Foundation people from all over the world who were doing their last preparations in there. Meeting rooms were a scarce resource, and one of them was even renamed as “MF WAR ROOM”, until someone came next day and re-renamed it as “MW PEACE ROOM”. So, it was all “a madhouse”, in Potch’s words, but amicable, friendly chaos after all. Hard to gather what the festival would consist of, though. So I just waited until Friday…

Friday

Well, saying that I waited wouldn’t be true. I wasn’t sitting, arms crossed. I was furiously stealing sleep hours to finish a hack that Dethe Elza (from Mozilla Vancouver, and curator for the Make the web physical track) had asked me to bring and present at the Science Fair on Friday.

My hack, HUMACCHINA, briefly put, consisted of using my QuNeo to control an instrument running in the browser, with Web Audio. I will talk about more technicalities in a future post, but what interests me here is the experience of presenting my creation to people at a fair. It was quite enlightening to observe how people react to the unknown and how they interpret what is in front of them according to their existing knowledge. Granted, my experiment was a little bit cryptic, especially if you were not already a musician (which would give you some hints), and it was hard to even listen to the music because of the noise in the environment, but still most people seemed to have fun and spent a while playing with the pads, others were puzzled by it (“but… why did you do this?”) and finally others were able to take the QuNeo out of its current preset and into another (wrong) one by just pressing all the buttons randomly at the right times (!!!). I’m glad I noticed, and I’m even gladder that I had programmed a test pattern to ensure everything was properly set up, so I could reset it and ensure all was OK before the next person came to the booth.

The sad part of this is that… I couldn’t get to see any other of the booths, so I missed a great lot. You can’t have it all, I guess.

At some point I was super tired, first because it was the end of a long day (and week!) and second because explaining the same thing over and over again to different people is not something I do every day, so I was exhausted. I decided to call it a day and we went for dinner to a nearby place… where we happened to find a sizeable number of Mozillians having dinner there too. So we all gathered together for a final drink and then rushed off before the last tube left.

Saturday

I couldn’t be on time for the opening, but as soon as I arrived at Ravensbourne College I dashed through to the “Pass the App” session that Bobby Richter was running and had asked me to join. I, again, had not much of a clue how it would develop. He paired me with a startup that’s trying to crowdsource custom built prosthetic parts for children in need, and we set out to prototype ideas for an app that could help them reach their goal. I think I should have drunk a couple of litres of coffee before joining this session, but although I wasn’t in my best shape, I think we did well enough. We came up with a mockup for an app that would use a futuristic hypothetical AppMaker to start with some sort of template app that parents could customise to describe their children’s needs, and then generate an app that they could upload to a Marketplace and use that Marketplace’s payment features to fund the goal. It was fun to draw the mockup at giant scale and just discuss ideas without going technical for a change!

Some people stayed to try to build a demo for Sunday, but I was honest with myself and declined building any hack during the event. I know that after a few hours of hacking while many other activities were happening, I would be hating myself, at the end of the week-end I would hate everyone else, and on Monday I would hate the whole universe. Or worse.

I ended up chatting with Kumar, who’s actually worked on the payments system in the Firefox Marketplace, and then Piotr (of JSFiddle fame) showed up. He had brought his daughter–she had been translating WebMaker into Polish first, and now she was happily designing a voxel based pig using Voxel painter under the careful supervision of Max Ogden. Behind us, a whole group of tables were covered with the most varied stuff: plasticine, a water-colour machine, lots of Arduinos, sensors and wires, Makey makey pseudo joysticks, and whatnot.

It was also time for lunch, so we grabbed some sandwiches carefully arranged on a nearby table. They were yummy too! But I was totally yearning for a coffee, and a social break, so I popped out of the building and into the O2 for some sugary coffee-based ice cream. Back in the College, I got a tweet suggesting I visit the Makers Academy booth, which I did. It was interesting to know about their existence, because I get many questions about where to learn programming in a practical way, and I never know what to answer. Now I think I’d recommend Makers Academy, as their approach seems quite sensible!

Then I decided that since I was on the ground floor, I could just as well try to visit all the booths starting from that floor and work my way upwards. So I went to the Mozilla Japan booth, where I had quite a lot of fun playing with their Parapara animation tool. Basically you draw some frame-based animation on a tablet, which gets saved as an SVG image. This is then played on several devices, moving along a certain path, and it seemed as if the character I had drawn was travelling around the world. Here it went crossing Tower Bridge, then on the next device he would be crossing Westminster Bridge… all the way until it reached Mount Fuji. (Here’s a better explanation of how Parapara works, with pictures)

I was also really honoured to spend some time speaking with Satoko Takita, better known as “Chibi”. She worked for Netscape before, of all places! She’s a survivor! But today, she humbly insisted, “she’s mostly retired”. She was also super kind and helped me de-Mac-ify my laptop with a couple of vinyl stickers in vivid orange. Now when the lid is open, an orange dinosaur glows inside the apple. Gecko inside!

After saying “arigato” many times (the only word I can say in Japanese… but probably a very useful one), I tried to continue my building tour. I tried to enter the first huge room, which resembled a coffee place, but it was so thriving with activity that it was impossible to get past the first few metres. Also, I found Kate Hudson too, whom I hadn’t had time to speak to during the Science Fair. She had to buy a SIM card, so we ventured out to the O2 shop. Something funny happened there. She was wearing a “Firefox” hoodie, and the guy in the shop asked her if she worked for Firefox. I was looking at the whole scene, partly amused because of my anonymous condition (I wasn’t wearing any branded apparel), and partly intrigued as to how the thing would end.

She started explaining that she actually worked for Mozilla… but then the guy interrupted her, and said that Internet Explorer was the best browser. He was a real-life troll! But Kate wouldn’t shut up–oh no! Actually it was good that she was the one wearing the hoodie, because she was taller and way more imposing than the troll (and than me! ha!). So she entered Evangelist Mode™ and calmly explained the facts while the other guy lost steam and… he finally left.

After the incident, we went back to the college, and up to the Plenary area, where the keynotes would be held. But it was still early, so I used that time to make a few commits (I have a goal of making at least one small fix every day), since I hadn’t made anything productive yet.

The keynote speakers were sort of walking/rocking back and forth behind the stage, rehearsing their lines, which was a curious insight, since that is not something you normally get to see. It was also quite humanising–it made them approachable. In the meantime, we logged into chat.meatspac.es and said “hi” to my team mate and former MozToronto office resident, Jen Fong (she’s now in Portland!).

Finally the keynotes started, live broadcasted by Air Mozilla. Mitchell Baker’s keynote was quite similar to her summit’s keynote. Camille’s speech had a few memorable quotes, including the “some people have never done homework without the web”, and “people often say that democracies are like plumbing, because you only care about plumbing when there are really bad smells… We are the plumbers of the Information Society”.

After her, Dethe himself got up on stage to show Lightbeam, a project that displays in a graphical way the huge amount of information that is “leaked” when you visit any given website. There was a most unsettling moment, when somebody sitting behind me said “Oh wow I never realised this was happening while I browsed!”. That was a moment of tension, and of revelation–people really need to understand how the web works; hopefully Lightbeam and similar tools will help them.

The co-founder from Technologywillsaveus gave us a tour about their products, and what they had learnt while building and marketing them, and although it seemed interesting, my brain was just refusing to accept any more information :-(

After that–MozParty! We went to a pub in (guess where…) the O2, where the party would happen. Somebody was livecoding visuals and music with livecodelab, but I couldn’t see who or where he was. At some point we went for dinner, and although the initial intention was to go back, we ended up retreating home as the first day had been quite exhausting!

Sunday

I think my brain was still fried when I woke up. Also, I was super hungry, almost to the point of being “hangry” (an invented word I had learnt about on Saturday), so –unsurprisingly– my feet brought me to the usual breakfast place. After a flat white and an unfinished “French Savoury Toast” (because it was massive) I was so high on sugar that I could say I was even levitating some centimeters over the floor. I took the tube in Victoria–gross error. The platform was crowded with people dressed as comic characters and tourists dressed as English souvenirs (basically: Union Jack-themed apparel), and I mildly cursed myself for taking the tube in Victoria instead of walking to any of the other nearby stations. Only mildly, because I was under the effects of a sugar kick, and couldn’t really get angry. At least, not for a few more hours.

When I arrived, the opening keynote was finishing. Way to start the day! I had decided to “go analogic” and left my computer home, so I wanted to attend sessions where computers were not required.

I finally attended the “Games on the urban space” session by Sebastian Quack, which was quite funny (and definitely didn’t require us using computers!). This got me thinking about the urban environment and the activities that can take place in it–can everything be converted into a game? when’s the best moment to play a game, or to involve passing pedestrians in your “gamified” activity? and do you tell them, or do you involve them without letting them know they are part of a game?

I had lunch with a few participants of that session –it was funny that Myrian Schwitzner from Apps4Good was there too. We had met already at a ladieswhocode meetup in February, but I couldn’t quite pinpoint it. We were like: “your face… looks familiar!” And this was something that happened frequently during the week-end: there was plenty of acquaintances to say hi to! It certainly slowed down the movement from one place to another, because you couldn’t be rude and ignore people.

Downstairs on the first floor, some people from the Webmaker team were hacking on something-something-audio for Appmaker. Meanwhile, Kumar was learning how to program his QuNeo. Turns out the star (*) “trick” I found out of pure chance can be extended to more uses, so we tried finding the limits of the trick, sending different combinations to the device and seeing what would happen. I also explained how the LED control for the sliders worked (you control the brightness of each LED in the slider separately).

After we ran out of ideas to send to the QuNeo, I browsed the nearby tables. There was a woman with a bunch of planets modelled in plasticine (quite convincingly, I must say). She invited me to build something, but I was afraid I would miss another session I was interested in–the “debate” with some journalists who had unveiled the NSA scandal. Still, I asked if I could smell the plasticine. If you ever used plasticine in the 80s/90s you know what it smelled like, right? Well, it doesn’t smell like that anymore. I wouldn’t get hooked on it nowadays…

A quick escape for some coffee and back from the stormy, inclement weather outside, I was all set for the session. It ended up feeling a bit too long, and at times quite hard to follow because they weren’t using any microphone and relied entirely on their lungs to get the message across. I’m glad we all were super quiet, but the noise around the area and the speaker announcements coming from the Plenary were quite disruptive. Staying so focused for so long left me quite tired and I’m afraid to confess–I don’t remember anything about the closing keynote. I know it happened, but that’s it.

After it, the “Demo Fest” was set up, and similarly to the Science Fair, people set up booths and tables to show what they had been working on during the week-end. For once, I didn’t have anything to show… which meant I could wander around looking at other people’s work!

I stayed for a little more, then we asked some people whether they’d like to join us in a quest to find a French steak restaurant in Marylebone, but they wouldn’t, so we went there anyway. It was pouring with rain and that was the day in which I decided to wear canvas shoes. My feet stayed wet until 1 AM. Awful.

We then went back to the official Moz-Hotel, where the Mozilla people were staying. There was no sign of an after party at first, then some people showed up with bags from the off-licence (too telling), and the hotel people weren’t happy about that, so they asked them to consume whatever was in the bags in their rooms. I decided to discreetly head back home before St. Jude’s storm got stronger. It was certainly an “atmospheric walk”, with rain and wind blasting either way, which made it quite difficult to hold the umbrella still. I ended up running as much as I could, to shorten the misery. My recent running exercises proved their worth!

A few minutes after arriving home, Rehan told me that everyone had gone downstairs again and they were partying. But I had already changed into dry clothes and wasn’t venturing out into the wild again, so that was it for me.

In short: quite a good event. It was refreshing to do something not purely technical for a change, although I have this cunning feeling that I missed many sessions because there were so many of them. It was also good that kids were not only allowed but indeed encouraged into the festival, as they got to be involved in “grown up” activities such as translating, or for example designing things. I like how they question things and assumptions we take for granted—makes for refreshing points of view!

~~~

And written in October 2014: MozFest 2014 is coming! Here are some details of what I’ll be doing there. See you!


Peter Bengtssondjango-html-validator

In action
A couple of weeks ago we accidentally broke our production server (for a particular report) because of broken HTML. It was an unclosed tag which rendered everything after that tag as just plain white. Our comprehensive test suite failed to notice it because it didn't look at details like that. And when it was tested manually we simply missed the conditional situation in which it was triggered. Neither is a good excuse. So it got me thinking: how can we incorporate HTML (HTML5 in particular) validation into our test suite?

So I wrote a little gist and used it a bit on a couple of projects and was quite pleased with the results. But I thought this might be something worthwhile to keep around for future projects or for other people who can't just copy-n-paste a gist.

With that in mind I put together a little package with a README and a setup.py and now you can use it too.

There are however some caveats. Especially if you intend to run it as part of your test suite.

Caveat number 1

You can't flood htmlvalidator.nu. Well, you can I guess. It would be really evil of you and kittens will die. If you have a test suite that does things like response = self.client.get(reverse('myapp:myview')) and there are many tests you might be causing an obscene amount of HTTP traffic to them. Which brings us on to...

Caveat number 2

The htmlvalidator.nu site is written in Java and it's open source. You can basically download their validator and point django-html-validator to it locally. Basically the way it works is java -jar vnu.jar myfile.html. However, it's slow. Like really slow. It takes about 2 seconds to run just one modest HTML file. So, you need to be patient.
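If you go down the local-jar route, one way to wire it into a test could look like the sketch below. This is not django-html-validator's actual API; the vnu.jar path and the view name are placeholders, and it assumes Java is available on the test machine.

import subprocess
import tempfile

from django.core.urlresolvers import reverse
from django.test import TestCase


class MyViewValidationTest(TestCase):

    def test_html_is_valid(self):
        response = self.client.get(reverse('myapp:myview'))
        with tempfile.NamedTemporaryFile(suffix='.html') as f:
            f.write(response.content)
            f.flush()
            # vnu.jar prints the problems and exits non-zero when it finds errors
            returncode = subprocess.call(['java', '-jar', 'vnu.jar', f.name])
        self.assertEqual(returncode, 0)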

Alina MierlusBack to idiotism and a few other changes…

This blog has been quiet for a while now. I do hope to start writing soon (although I think I’ll do most of my next writing in Catalan – and the focus will not be only on technology).
A few people asked me what I’m doing now, since I’m not so active in projects I used to contribute (e.g. Mozilla) or doing things I used to do locally (e.g. participating in tech. events and open source groups).

Well, part of the answer is that during the last year I tried to find some time for myself. I started to feel kind of overwhelmed by the amount and quality of the information around me (yes, yes, I mean that kind of BuzzMachine that most of Open Source/Tech. “community” seems to be nowadays).

And what is better than becoming an idiot? Experiencing that state of being useless to the world, of being less intelligent than most of the “smart citizens”, and starting to question again why and how things happen.

Professor Han, a contemporary essayist and cultural theorist from Germany, puts it very well:

It’s a function of philosophy to represent the role of the idiot. From the beginning, philosophy has gone along with idiotism. Any philosopher who generates a new language, a new style, a new way of thinking, needs to have been an idiot first.

The history of philosophy is a history of idiotisms.
Socrates, who only knows that he knows nothing, is an idiot. Descartes is also an idiot, because he doubts everything. Cogito ergo sum is an idiotism.
An internal contraction of thinking makes a new beginning possible. Descartes thinks about thinking. Thinking recovers its virginal state when it connects with itself. Deleuze opposes another idiot to the Cartesian idiot [...]

Today, it looks like the somewhat marginalised, the crazy and the idiot have basically disappeared from society. Network connectedness and digital communications considerably increase the coercion towards conformity. The violence of consensus represses idiotisms.

(from the book “Psychopolitik: Neoliberalismus und die neuen Machttechniken”)

Yes, part of my time right now is devoted to studying philosophy at the University (offline and back in the system!). And I really like it very much. Just being there with a group of people who disagree with you, who appreciate your critique, your skepticism and your negativity… makes me feel much more part of a community.

However, I’m not out of the technology world, as the other part of my time is devoted to work, which means application development, deployments and even training.
But indeed, I feel that I have to step back from the BuzzMachine that OpenSource/OpenWeb/OpenWhatever has become. I’ve also developed more of a critical view on what a “connected society” is, including the “sharing economy” – a deviation of the open source concept (which in its turn is a mutation of the Free Software social movement).

Soledad PenadesThis week… and beyond

  • Monday: shyly open my inbox after a week of holidays, and probably duck to avoid the rolling ball of stale mail coming my way.
  • Wednesday: maybe meet Karolina who’s in London for a conference!
  • Thursday: my talk is closing a conference O_O — when the organiser mentioned “closing” the day I thought he meant closing the first day, not the second. NO PRESSURE. The conf is held at Shoreditch Village Hall, though, which is a venue where I feel at home, so I’ll probably be OK. There’s a meatspace meatup afterwards, and I’m glad it’s around Shoreditch too, or I’d be dropping out of that.
  • Friday: MozFest facilitators meeting, and also the Science Fair during the evening (if it is still called Science Fair)
  • Saturday and Sunday: MozFest, MozFest, MozFest! Paul Rouget asked me to show WebIDE there, and then Bobby (aka SecretRobotron and your best friend) came up with this idea of a MEGABOOTH where people can go and learn something about app-making in sessions of 5-20 minutes. Of course I can’t be there all week-end or I’ll basically die of social exhaustion, so I asked some friends and together we’ll be helping spread the word about Firefox OS development in its various facets: Gaia/Gonk/the operating system itself, Gaia apps, DevTools and WebIDE. Come to the MEGABOOTH and hang with Nicola, Wilson, Francisco, Potch and me! (linking to myself and wondering if the Internet will break with so much recursion, teehee)

That’ll be for this week. Beyond, there’s a few more conferences—some I can announce and some I can’t:

  • dotJS — Paris, France 17th November, which is pretty exciting to be in because the venue and the looks of everything are so sophisticated…!
  • OSOM — Cluj-Napoca, Romania, 22nd November, which I’m moderately nervous about because I’m keynoting (!!!), but I’m also excited because it’s in TRANSYLVANIA!!! :-[

We’ll also be hosting a Firefox OS Bug Squash Party at the Mozilla London office the week-end after Halloween. Expect weirdnesses. There are only 5 spots left!

Add to this the new thing I’m working on (details to be revealed as soon as there’s something to show) and it makes for a very busy Autumn!

I’m glad I took those holidays last week. I went to Tenerife, which makes it the furthest south I’ve ever been, and then the highest I’ve ever been in Spain, since I climbed El Teide! Also, my hotel was an hour away from the airport, so I rented a car and drove myself around the island. After many years of not driving, that was mega-awesome, and a tad terrifying as well (I’ll detail in a future post).

This is going down the hill, with the cable car:

The whole scenery in the National Park is super incredible–it definitely looks like something out of this world. Since the rocks are of volcanic origin they have very interesting textures, and are super lightweight, so it was funny to go picking random stones up and realising how little they weighed.

Also the vegetation and fauna were unlike most of what I’d seen before. In particular, the lizards were ENORMOUS. I would be walking and hearing huge noises on the dry leaves, turn around expecting to find a dog or a cat, only to find a huge lizard looking at me. What do they feed them? Maybe it’s better not to know… I’ll leave you with this not-so-little fella:


Mozilla Reps CommunityReps Weekly Call – October 16th 2014

Last Thursday we had our regular weekly call about the Reps program; this time we moved it one hour later to avoid some conflicts and allow Reps on the West Coast to join us early in the morning.

reps-polo

Summary

  • Council elections this weekend.
  • Tech 4 Africa.
  • AdaCamp – Post event.
  • Mozfest – updates.
  • Firefox OS Bus.
  • Get Involved Re-design.

Detailed notes

AirMozilla video

https://air.mozilla.org/reps-weekly-20141016/

Don’t forget to comment about this call on Discourse and we hope to see you next week!

Christian HeilmannWhy Microsoft matters more than we think

I’m guilty of it myself, and I see it a lot: making fun of Microsoft in a presentation. Sure, it is easy to do, gets a laugh every time but it is also a cheap shot and – maybe – more destructive to our goals than we think.

is it HTML5? if it doesn't work in IE, it is joke

Let’s recap a bit. Traditionally Microsoft has not played nice. It destroyed other companies, it kept things closed that open source could have benefited from and it tried to force a monoculture onto something that was born open and free: the web.

As standards-conscious web developers, we always found IE, with its much slower adoption rate for newer versions, to be the bane of our existence. It just is not a simple thing to upgrade a browser when it is an integral part of the operating system. This is exacerbated by the fact that newer versions of Windows just weren’t exciting, or meant that a company would have to spend a lot of money buying new software and hardware and re-educate a lot of people. A massive investment that wasn’t worth it to a company just to stop the web design department from whining.

Let’s replace IE then!

Replacing IE also turned out to be less easy than we thought, as the “this browser is better” argument just didn’t work when the internal tools you use are broken in it. Chrome Frame was an incredible feat of engineering and – despite being possible to roll out even at server level – had the adoption rate of Halal Kebabs at a Vegan festival.

Marketing is marketing. Don’t try to understand it

It seems also fair to poke fun at Microsoft when you see that some of their marketing at times is painful. Bashing your competition is to me never a clever idea and neither is building shops that look almost exactly the same as your main competitor next to theirs. You either appear desperate or grumpy.

Other things they do

The thing though is that if you look closely and you admit to yourself that what we call our community is a tiny part of the overall market, then Microsoft has a massive part to play to do good in our world. And they are not cocky any longer, they are repentant. Not all departments, not all people, and it will be easy to find examples, but as a whole I get a good vibe from them, without being all marketing driven.

Take a look at the great tools provided at Modern.ie to allow you to test across browsers. Take a look at status.modern.ie which – finally – gives you a clear insight as to what new technology IE is supporting or the team is working on. Notice especially that this is not only for Explorer – if you expand the sections you get an up-to-date cross-browser support chart linked to the bugs in their trackers.

status of different web technologies provided by Microsoft

This is a lot of effort, and together with caniuse.com it makes it easier for people to decide whether looking into a technology is already worthwhile or not.

Reaching inside corporations

And this to me is the main point why Microsoft matters. They are the only ones that really reach the “dark matter” developers they created in the past. The ones that don’t read hacker news every morning and jump on every new experimental technology. The ones that are afraid of using newer features of the web as it might break their products. The ones that have a job to do and don’t see the web as a passion and a place to discuss, discard, hype and promote and troll about new technologies. And also the ones who build the products millions of people use every day to do their non-technology related jobs. The booking systems, the CRM systems, the fiscal data tools, all the “boring” things that really run our lives.

We can moan and complain about all our great new innovations taking too long to be adopted. Or we could be open to feeding the people who talk to those who are afraid to try new things with the information they need.

Let’s send some love and data

I see Microsoft not as the evil empire any longer. I see them as a clearing house to graduate experimental cool web technology into something that is used in the whole market. Chances are that people who use Microsoft technologies are also audited and have to adhere to standard procedures. There is no space for wild technology goose chases there. Of course, you could see this as fundamentally broken – and I do to a degree as well – but you can’t deny that these practices exist. And that they are not going to go away any time soon.

With this in mind, I’d rather have Microsoft as a partner in crime with an open sympathetic ear than someone who doesn’t bother playing with experimental open technology of competitors because these don’t show any respect to begin with.

If we want IT to innovate and embrace new technologies and make them industrial strength we need an ally on the inside. That can be Microsoft.

Kartikaya GuptaGoogle-free android usage

When I switched from using a BlackBerry to an Android phone a few years ago it really irked me that the only way to keep my contacts info on the phone was to also let Google sync it into their cloud. This may not be true universally (I think some Samsung phones will let you store contacts to the SD card) but it was true for the phone I was using then and is true on the Nexus 4 I'm using now. It took a lot of painful digging through Android source and googling, but I successfully ended up writing a bunch of code to get around this.

I've been meaning to put up the code and post this for a while, but kept procrastinating because the code wasn't generic/pretty enough to publish. It still isn't but it's better to post it anyway in case somebody finds it useful, so that's what I'm doing.

In a nutshell, what I wrote is an Android app that includes (a) an account authenticator, (b) a contacts sync adapter and (c) a calendar sync adapter. On a stock Android phone this will allow you to create an "account" on the device and add contacts/calendar entries to it.

Note that I wrote this to interface with the way I already have my data stored, so the account creation process actually tries to validate the entered credentials against a webhost, and the contacts sync adapter is actually a working one-way sync adapter that will download contact info from a remote server in vCard format and update the local database. The calendar sync adapter, though, is just a dummy. You're encouraged to rip out the parts that you don't want and use the rest as you see fit. It's mostly meant to be a working example of how this can be accomplished.

The net effect is that you can store contacts and calendar entries on the device so they don't get synced to Google, but you can still use the built-in contacts and calendar apps to manipulate them. This benefits from much better integration with the rest of the OS than if you were to use a third-party contacts or calendar app.

Source code is on Github: staktrace/pimple-android.

Kalpa WelivitigodaLight Level Meter | Firefox OS App

Light Level Meter [1] is a Firefox OS app I developed to demonstrate the use of the Mozilla WebAPI [2]. The app measures the ambient light level in lux [3] and presents it in real time. It records the max and min values and plots the variation of the light level over time.

I've made use of DeviceLightEvent [4] to get the current ambient light level from the light level detector in the device (I have tested it with Keon [5]). The real time chart is implemented using Smoothie Charts [6] which is a simple, easy to use javascript charting library for streaming data.

Measuring the ambient light level has many uses. One is that it could be used to adjust the brightness of the electronic visual displays found in many of the devices we use today, such as mobile phones and tablets. By making such adjustments based on the ambient light level, we could save energy while delivering a comfortable reading experience to the user.

Another use of measuring ambient light level is in electrical lighting design. For example, the light level recommended for reading is different from that is recommended for hand tailoring. Recommended light levels in building designing in Sri Lanka can be found in page 38 of "Code of Practice for Energy Efficient Buildings in Sri Lanka" [7].

Source code of Light Level Meter [8].

[1] https://marketplace.firefox.com/app/light-level-meter
[2] https://wiki.mozilla.org/WebAPI
[3] http://en.wikipedia.org/wiki/Lux
[4] https://developer.mozilla.org/en-US/docs/Web/API/DeviceLightEvent
[5] http://en.wikipedia.org/wiki/GeeksPhone_Keon
[6] http://smoothiecharts.org/
[7] http://www.energy.gov.lk/pdf/Building%20CODE.pdf
[8] https://github.com/callkalpa/callkalpa.github.io/tree/master/LightLevelMeter

Erik VoldJetpack Pro Tip - Using JPM on Travis CI

First, enable Travis on your repo.

Then, add the following .travis.yml file to the repo:
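A rough sketch of what such a file can look like – the Nightly download URL and the binary path below are placeholders, so check the example repositories in the Examples section below for working versions:

language: node_js
node_js:
  - "0.10"
env:
  # tell jpm which Firefox binary to run the tests against
  - JPM_FIREFOX_BINARY=$PWD/firefox/firefox
before_install:
  # download and unpack a Firefox Nightly build (placeholder URL)
  - wget -O firefox.tar.bz2 "https://example.invalid/firefox-nightly-linux64.tar.bz2"
  - tar xjf firefox.tar.bz2
install:
  - npm install -g jpm
script:
  - jpm test -v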

This will download Firefox nightly, install jpm, and run jpm test -v on your JPM based Firefox add-on.

Examples

Add-ons

Third Party NPM Modules

Asa DotzlerPrivate Browsing Coming Soon to Firefox OS

This week, the team landed code changes for Bug 832700 – Add private browsing to Firefox OS. This was the back end implementation in Gecko and we still have to determine how this will surface in the front end. That work is tracked at Bug 1081731 - Add private browsing to Firefox OS in Gaia.

We also got a couple of nice fixes to one of my favorite new features, the still experimental “app grouping” feature for the Firefox OS home screen. The fixes for Bug 1082627 and Bug 1082629 ensure that the groups align properly and have the right sizes. You can enable this experimental feature in settings -> developer -> homescreen -> app grouping.

There’s lots going on every day in Firefox OS development. I’ll be keeping y’all up to date here and on Twitter.

 

 

Frédéric HarperI’m leaving Mozilla, looking for a new challenge


Copyright: Eva Blue https://flic.kr/p/nDrPAL

January 1st (or before, if needed) will be my last day as a Senior Technical Evangelist at Mozilla. I truly believe in Mozilla’s mission, and I’ll continue to share my passion for the open web, but this time as a volunteer. From now on, I’ll be on the lookout for a new challenge.

I want to thank my rock star team for everything: Havi Hoffman, Jason Weathersby, Robert Nyman, and Christian Heilmann. I also want to thank Mark Coggins for his strong leadership as my manager. It was a real pleasure to work with you all! Last, but not least, thanks to all Mozillians, and continue the good work: let’s keep in touch!

What’s next

I’m now reflecting on what will be next for me, and I’m open to discussing all opportunities. With ten years as a software developer and four years as a technical evangelist in my backpack, here are some ideas I have in mind (I’m not limited to these), in no particular order:

  • Principal Technical Evangelist for a product/service/technology I believe in;
  • General manager of a startup accelerator program;
  • CTO of a startup.

I have no issue with travelling extensively: I was on the road one-third of last year, speaking in more than twelve countries. I may also be open to moving, depending on the offer and the country. I like to share my passion on stage – more than 100 talks in the last three years. Also, my book on personal branding for developers will be published by Apress before the end of the year.

I like technology, but I’m not a developer anymore, and I’m not looking to go back to a developer role. I may also be open to a non-technical role, but it needs to target another of my passions, like startups. For the last five years, I’ve been working from home, with no set schedule, just end goals to reach. I can’t deal with micro-management, so I need some freedom to be effective. No matter what comes next, it needs to be an interesting challenge, as I have a serial entrepreneur profile: I like to take ideas and make them a reality.

You can find more about my experience on my LinkedIn profile. If you want to grab a coffee or discuss any opportunities, send me an email.

P.S.: I see no value in highlighting the reasons for my departure, but I’m sad to leave, and keep in mind I’m not the only one on my team who resigned. If you have concerns, please send me an email.


--
I’m leaving Mozilla, looking for a new challenge is a post on Out of Comfort Zone from Frédéric Harper

David BoswellMozillians of the world, unite!

When I got involved with Mozilla in 1999, it was clear that something big was going on. The mozilla.org site had a distinctly “Workers of the world, unite!” feel to it. It caught my attention and made me interested to find out more.

600px-1998_site2_cropped

The language on the site had the same revolutionary feel as the design. One of the pages talked about Why Mozilla Matters and it was an impassioned rallying cry for people to get involved with the audacious thing Mozilla was trying to do.

“The mozilla.org project is terribly important for the state of open-source software. [...] And it’s going to be an uphill battle. [...] A successful mozilla.org project could be the lever that moves a dozen previously immobile stones. [...] Maximize the opportunity here or you’ll be kicking yourself for years to come.”

With some minor tweaks, these words are still true today. One change: we call the project just Mozilla now instead of mozilla.org. Our mission today is also broader than creating software, we also educate people about the web, advocate to keep the Internet open and more.

Another change is that our competition has adopted many of the tactics of working in the open that we pioneered. Google, Apple and Microsoft all have their own open source communities today. So how can we compete with companies that are bigger than us and are borrowing our playbook?

We do something radical and audacious. We build a new playbook. We become pioneers for 21st century participation. We tap into the passion, skills and expertise of people around the world better than anyone else. We build the community that will give Mozilla the long-term impact that Mitchell spoke about at the Summit.

mitchell_summit

Mozilla just launched the Open Standard site and one of the first articles posted is “Struggle For An Open Internet Grows“. This shows how the challenges of today are not the same challenges we faced 16 years ago, so we need to do new things in new ways to advance our mission.

If the open Internet is blocked or shut down in places, let’s build communities on the ground that turn it back on. If laws threaten the web, let’s make that a public conversation. If we need to innovate to be relevant in the coming Internet of Things, let’s do that.

Building the community that can do this is work we need to start on. What doesn’t serve our community any more? What do we need to do that we aren’t? What works that needs to get scaled up? Mozillians of the world, unite and help answer these questions.


Daniel Stenbergcurl is no POODLE

Once again the internet flooded over with reports and alerts about a vulnerability with a funny name: POODLE. If you have even the slightest interest in this sort of stuff, you’ve already grown tired and bored of everything that’s been written about it, so why on earth do I have to pile on and add to the pain?

This is my way of explaining how POODLE affects or doesn’t affect curl, libcurl and the huge amount of existing applications using libcurl.

Is my application using HTTPS with libcurl or curl vulnerable to POODLE?

No. POODLE really is a browser-attack.

Motivation

The POODLE attack is a combination of several separate pieces that when combined allow attackers to exploit it. The individual pieces are not enough stand-alone.

SSLv3 is getting a lot of heat now since POODLE must be able to downgrade a connection from TLS to SSLv3 to work. The downgrade is done in a fairly crude way – and in libcurl, only builds that use NSS as the TLS backend support this way of downgrading the protocol level.

Then, if an attacker manages to downgrade to SSLv3 (both the client and server must thus allow this) and gets to use the sensitive block cipher of that protocol, it must maintain a connection to the server and then retry many similar requests to the server in order to try to work out details of the request – to figure out secrets it shouldn’t be able to. This would typically be done using javascript in a browser, and really only HTTPS allows this, so no other SSL-using protocol can be exploited like this.

For the typical curl user or a libcurl user, there’s A) no javascript and B) the application already knows the request it is doing and normally doesn’t inject random stuff from 3rd party sources that could be allowed to steal secrets. There’s really no room for any outsider here to steal secrets or cookies or whatever.

How will curl change

There’s no immediate need to do anything as curl and libcurl are not vulnerable to POODLE.

Still, SSLv3 is long overdue for retirement and is not really a modern protocol (TLS 1.0, its successor, had its RFC published in 1999), so in order to really avoid the risk that it will be possible to exploit this protocol one way or another, now or later, using curl/libcurl, we will disable SSLv3 by default in the next curl release. For all TLS backends.

Why? Just to be extra super cautious, and because this attack helped us remember that SSLv3 is old and should be left to die.

Explicitly requesting SSLv3 should still be possible, so that users can keep working with legacy systems that are in dire need of an upgrade but sit in corners of the world that every sensible human has long since forgotten or just ignored.
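
(Not from Daniel’s post, just an illustration of that last point.) Assuming your pycurl build exposes the SSLVERSION_SSLv3 constant, explicitly forcing SSLv3 from a libcurl application could look roughly like this; the URL is made up:

import pycurl

c = pycurl.Curl()
c.setopt(pycurl.URL, "https://legacy.example.com/")
# Force SSLv3 explicitly; the curl command line equivalent is -3 / --sslv3.
c.setopt(pycurl.SSLVERSION, pycurl.SSLVERSION_SSLv3)
c.perform()
c.close()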

In-depth explanations of POODLE

I especially like the ones provided by PolarSSL and GnuTLS, possibly due to their clear “distance” from browsers.


Justin DolskeSans Flash

I upgraded to a new MacBook about a week ago, and thought I’d use the opportunity to try living without Flash for a while. I had previously done this two years ago (for my last laptop upgrade), and I lasted about a week before breaking down and installing it. In part because I ran into too many sites that needed Flash, but the main reason was that the adoption and experience of HTML5 video wasn’t great. In particular, the HTML5 mode on YouTube was awful — videos often stalled or froze. (I suspect that was an issue on YouTube’s end, but the exact cause didn’t really matter.) So now that the Web has had a few additional years to shift away from Flash, I wanted to see if the experience was any better.

The short answer is that I’m pleased (with a few caveats). The most common Flash usage for me had been the major video sites (YouTube and Vimeo), and they now have HTML5 video support that’s good. YouTube previously had issues where they still required the use of Flash for some popular videos (for ads?), but either they stopped or AdBlock avoids the problem.

I was previously using Flash in click-to-play mode, which I found tedious. On the whole, the experience is better now — instead of clicking a permission prompt, I find myself just happy to not be bothered at all. Most of the random Flash-only videos I encountered (generally news sites) were not worth the time anyway, and on the rare occasion I do want to see one it’s easy to find an equivalent on YouTube. I’m also pleased to have run across very few Flash-only sites this time around. I suspect we can thank the rise of mobile (thanks iPad!) for helping push that shift.

There are a few problem sites, though, which so far I’m just living with.

Ironically, the first site I needed Flash for was our own Air Mozilla. We originally tried HTML5, but streaming at scale is (was?) a hard problem, so we needed a solution that worked. Which meant Flash. It’s unfortunate, but that’s Mozilla pragmatism. In the meantime, I just cheat and use Chrome (!) which comes with a private copy of Flash. Facebook (and specifically the videos people post/share) were the next big thing I noticed, but… I can honestly live without that too. Sorry if I didn’t watch your recent funny video.

I will readily admit that my Web usage is probably atypical. I rarely play online Flash games, which are probably close to video in terms of Flash usage. And I’m willing to put up with at least a little bit of pain to avoid Flash, which isn’t fair to expect of most users.

But so far, so good!

[Coincidental aside: Last month I removed the Plugin Finder Service from Firefox. So now Firefox won't even offer to install a plugin (like Flash) that you don't have installed.]


Asa DotzlerFirefox OS 2.0 Pre-release for Flame

About 4,000 of y’all have a Flame Firefox OS reference phone. This is the developer phone for Firefox OS. If you’re writing apps or contributing directly to the open source Firefox OS project, Flame is the device you should have.

The Flame shipped with Firefox OS 1.3 and we’re getting close to the first major update for the device, Firefox OS 2.0. This will be a significant update with lots of new features and APIs for app developers and for Firefox OS developers. I don’t have a date to share with y’all yet, but it should be days and not weeks.

If you’re like me, you cannot wait to see the new stuff. With the Flame reference phone, you don’t have to wait. You can head over to MDN today and get a 2.0 pre-release base image, give that a whirl, and report any problems to Bugzilla. You can even flash the latest 2.1 and 2.2 nightly builds to see even further into the future.

If you don’t have a Flame yet, and you’re planning on contributing testing or coding to Firefox OS or to write apps for Firefox OS, I encourage you to get one soon. We’re going to be wrapping up sales in about 6 weeks.

Asa DotzlerI’m Back!

PROTIP: Don’t erase the Android phone with your blog’s two-factor authentication setup to see if you can get Firefox OS running on it unless you are *sure* you have printed out your two-factor back-up codes. Sort of thinking you probably printed them out is not the same thing as being sure :-)

Thank you to fellow Tennessean, long-time Mozillian, and WordPress employee Daryl Houston for helping me get my blog back.

Daniel StenbergFOSS them students

On October 16th, I visited DSV at Stockholm University where I had the pleasure of holding a talk and discussion with students (and a few teachers) under the topic Contribute to Open Source. Around 30 persons attended.

Here are the slides I used; as usual they may not tell the whole story stand-alone without the talk, but no recording was made and I talked in Swedish anyway…

Julien VehentMitigating Poodle SSLv3 vulnerability on a Go server

If you run a Go server that supports SSL/TLS, you should update your configuration to disable SSLv3 today. The sample code below sets the minimum accepted version to TLSv1 and reorders the ciphersuites to match Mozilla's Server Side TLS guidelines.

Thank you to @jrconlin for the code cleanup!

package main

import (
    "crypto/rand"
    "crypto/tls"
    "fmt"
)

func main() {
    certificate, err := tls.LoadX509KeyPair("server.pem", "server.key")
    if err != nil {
        panic(err)
    }
    config := tls.Config{
        Certificates: []tls.Certificate{certificate},
        // Refuse anything older than TLSv1.0, which disables SSLv3.
        MinVersion: tls.VersionTLS10,
        // Enforce the server-side ordering of the ciphersuites below.
        PreferServerCipherSuites: true,
        CipherSuites: []uint16{
            tls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
            tls.TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
            tls.TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,
            tls.TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,
            tls.TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,
            tls.TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,
            tls.TLS_RSA_WITH_AES_128_CBC_SHA,
            tls.TLS_RSA_WITH_AES_256_CBC_SHA,
            tls.TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,
            tls.TLS_RSA_WITH_3DES_EDE_CBC_SHA},
    }
    config.Rand = rand.Reader

    // tls.Listen already returns a TLS listener, so there is no need to
    // wrap it a second time with tls.NewListener.
    listener, err := tls.Listen("tcp", "127.0.0.1:50443", &config)
    if err != nil {
        panic(err)
    }
    fmt.Println("I am listening...")
    for {
        conn, err := listener.Accept()
        if err != nil {
            fmt.Println(err)
            continue
        }
        fmt.Printf("Got a new connection from %s. Say Hi!\n", conn.RemoteAddr())
        conn.Write([]byte("ohai"))
        conn.Close()
    }
}

Run the server above with $ go run tls_server.go and test the output with cipherscan:

$ ./cipherscan 127.0.0.1:50443
........
Target: 127.0.0.1:50443

prio  ciphersuite                  protocols              pfs_keysize
1     ECDHE-RSA-AES128-GCM-SHA256  TLSv1.2                ECDH,P-256,256bits
2     ECDHE-RSA-AES128-SHA         TLSv1,TLSv1.1,TLSv1.2  ECDH,P-256,256bits
3     ECDHE-RSA-AES256-SHA         TLSv1,TLSv1.1,TLSv1.2  ECDH,P-256,256bits
4     AES128-SHA                   TLSv1,TLSv1.1,TLSv1.2
5     AES256-SHA                   TLSv1,TLSv1.1,TLSv1.2
6     ECDHE-RSA-DES-CBC3-SHA       TLSv1,TLSv1.1,TLSv1.2  ECDH,P-256,256bits
7     DES-CBC3-SHA                 TLSv1,TLSv1.1,TLSv1.2

Certificate: UNTRUSTED, 2048 bit, sha1WithRSAEncryption signature
TLS ticket lifetime hint: None
OCSP stapling: not supported
Server side cipher ordering
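
As an extra sanity check (my own addition, not from the original write-up), you can confirm that SSLv3 handshakes are refused with a few lines of Python, assuming your Python build still exposes ssl.PROTOCOL_SSLv3:

import socket
import ssl

# Attempt an SSLv3-only handshake against the server above; it should fail.
sock = socket.create_connection(("127.0.0.1", 50443))
try:
    ssl.wrap_socket(sock, ssl_version=ssl.PROTOCOL_SSLv3)
    print("SSLv3 handshake unexpectedly succeeded")
except ssl.SSLError as exc:
    print("SSLv3 rejected as expected: %s" % exc)
finally:
    sock.close()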

Gregory SzorcThe Rabbit Hole of Using Docker in Automated Tests

Warning: This post is long and rambling. There is marginal value in reading beyond the first few paragraphs unless you care about Docker.

I recently wrote about how Mozilla tests version control. In this post, I want to talk about the part of that effort that consumed the most time: adding Docker support to the test harness.

Introducing the Problem and Desired End State

Running Docker containers inside tests just seems like an obvious thing you'd want to do. I mean, wouldn't it be cool if your tests could spin up MySQL, Redis, Cassandra, Nginx, etc inside Docker containers and test things against actual instances of the things running in your data centers? Of course it would! If you ask me, this approach beats mocking because many questions around accuracy of the mocked interface are removed. Furthermore, you can run all tests locally, while on a plane: no data center or staging environment required. How cool is that! And, containers are all isolated so there's no need to pollute your system with extra packages and system services. Seems like wins all around.

When Mozilla started adding customizations to the Review Board code review software in preparation for deploying it at Mozilla as a replacement for Bugzilla's Splinter, it quickly became apparent that we had a significant testing challenge ahead of us. We weren't just standing up Review Board and telling people to use it: we were integrating user authentication with Bugzilla, having Review Board update Bugzilla after key events, and were driving the initiation of code review in Review Board by pushing code to a Mercurial server. That's 3 user-visible services all communicating with each other to expose a unified workflow. It's the kind of thing testing nightmares are made of.

During my early involvement with the project, I recognized the challenge ahead and was quick to insist that we write automated tests for as much as possible. I insisted that all the code (there are multiple extensions to ReviewBoard, a Mercurial hook, and a Mercurial extension) live under one common repository and share testing. That way we could tinker with all the parts easily and test them in concert without having to worry about version sync. We moved all the code to the version-control-tools repository and Review Board was the driving force behind improvements to the test harness in that repository. We had Mercurial .t tests starting Django dev servers hosting Review Board running from per-test SQLite databases and all was nice. Pretty much every scenario involving the interaction between Mercurial and ReviewBoard was tested. If you cared about just these components, life was happy.

A large piece of the integration story was lacking in this testing world: Bugzilla. We had somewhat complex code for having Review Board and Bugzilla talk to each other but no tests for it because nobody had yet hooked Bugzilla up to the tests. As my responsibilities in the project expanded from covering just the Mercurial and Review Board interaction to Bugzilla as well, I again looked at the situation and thought there's a lot of complex interaction here and alpha testing has revealed the presence of many bugs: we need a better testing story. So, I set out to integrate Bugzilla into the test harness.

My goals were for Review Board tests to be able to make requests against a Bugzilla instance configured just like bugzilla.mozilla.org, to allow tests to execute concurrently (don't make developers wait on machines), for tests to run as quickly as possible, to run tests in an environment as similar to production as possible, and to be able to run tests from a plane or train or anywhere without internet connectivity. I was unwilling to budge on these core testing requirements because they represent what's best from test accuracy and developer productivity standpoints: you want your tests to be representative of the real world and you want to enable people to hack on this service anywhere, anytime, and not be held back by tests that take too long to execute. Move fast and don't break things.

Before I go on, it's time for a quick aside on tolerable waiting times. Throughout this article I talk about minimizing the run time of tests. This may sound like premature optimization. I argue it isn't, at least not if you are optimizing for developer productivity. There is a fair bit of academic research in this area. A study on tolerable waiting time: how long are Web users willing to wait gets cited a lot. It says 2 seconds for web content. If you read a few paragraphs in, it references other literature. They disagree on specific thresholds, but one thing is common: the thresholds are typically low - just a few seconds. The latencies I deal with are all longer than what research says leads to badness. When given a choice, I want to optimize workflows for what humans are conditioned to tolerate. If I can't do that, I've failed and the software will be ineffective.

The architecture of Bugzilla created some challenges and eliminated some implementation possibilities. First, I wasn't using just any Bugzilla: I was using Mozilla's branch of Bugzilla that powers bugzilla.mozilla.org. Let's call it BMO. I could try hosting it from local SQLite files and running a local, Perl-based HTTP server (Bugzilla is written in Perl). But my experience with Perl and takeaways from talking to the BMO admins were that pain would likely be involved. Plus, this would be a departure from test accuracy. So, I would be using MySQL, Apache HTTPd, and mod_perl, just like BMO uses them in production.

Running Apache and MySQL is always a... fun endeavor. It wasn't a strict requirement, but I also highly preferred that the tests didn't pollute the system they ran on. In other words, having tests connect to an already-running MySQL or Apache server felt like the wrong solution. That's just one more thing people must set up and run locally to run the tests. That's just one more thing that could differ from production and cause bad testing results. It felt like a dangerous approach. Plus, there's the requirement to run things concurrently. Could I have multiple tests talking to the same MySQL server concurrently? They'd have to use separate databases so they don't conflict. That's a possibility. Honestly, I didn't entertain the thought of running Apache and MySQL manually for too long. I knew about this thing called Docker and that it theoretically fit my use case perfectly: construct building blocks for your application and then dynamically hook things up. Perfect. I could build Docker containers for all the required services and have each test start a new, independent set of containers for just that test.

So, I set out integrating Docker into the version-control-tools test harness. Specifically, my goal was to enable the running of independent BMO instances during individual tests. It sounded simple enough.

What I didn't know was that integrating a Dockerized BMO into the test harness would take the better part of 2 weeks. And it's still not up to my standards. This post is the story about the trials and tribulations I encountered along the way. I hope it serves as a warning and potentially a guide for others attempting similar feats. If any Docker developers are reading, I hope it gives you ideas on how to improve Docker.

Running Bugzilla inside Docker

First things first: to run BMO inside Docker I needed to make Docker containers for BMO. Fortunately, David Lawrence has prior art here. I really just wanted to take that code, dump it into version-control-tools and call it a day. In hindsight, I probably should have done that. Instead, armed with the knowledge of the Docker best practice of one container per service and David Lawrence's similar wishes to make his code conform to that ideal, I decided to spend some time to fix David's code so that MySQL and Apache were in separate containers, not part of a single container running supervisord. Easy enough, right?

It was relatively easy extracting the MySQL and Apache parts of BMO into separate containers. For MySQL, I started with the official MySQL container from the Docker library and added a custom my.cnf. Simple enough. For Apache, I just copied everything from David's code that wasn't MySQL. I was able to manually hook the containers together using the Docker CLI. It sort of just worked. I was optimistic this project would only take a few hours.

A garbage collection bug in Docker

My first speed bump came as I was iterating on Dockerfiles. All of a sudden I get an error from Docker that it is out of space. Wat? I look at docker images and don't see anything too obvious eating up space. What could be going on? At this point, I'm using boot2docker to host Docker. boot2docker is this nifty tool that allows Windows and OS X users to easily run Docker (Docker requires a Linux host). boot2docker spins up a Linux virtual machine running Docker and tells you how to point your local docker CLI interface at that VM. So, when Docker complains it is out of space, I knew immediately that the VM must be low on space. I SSH into it, run df, and sure enough, the VM is nearly out of space. But I looked at docker images -a and confirmed there's not enough data to fill the disk. What's going on? I can't find the issue right now, but it turns out there is a bug in Docker! When running Docker on aufs filesystems (like boot2docker does), Docker does not always remove data volume containers when deleting a container. It turns out that the MySQL containers from the official Docker library were creating a data-only container to hold persistent MySQL data that outlives the container itself. These containers are apparently light magic. They are containers that are attached to other containers, but they don't really show up in the Docker interfaces. When you delete the host container, these containers are supposed to be garbage collected. Except on aufs, they aren't. My MySQL containers were creating 1+ GB InnoDB data files on start and the associated data containers were sitting around after container deletion, effectively leaking 1+ GB every time I created a MySQL container, quickly filling the boot2docker disk. Derp.

I worked around this problem by forking the official MySQL container. I didn't need persistent MySQL data (the containers only need to live for one invocation - for the lifetime of a single test), so I couldn't care less about persisted data volumes. So, I changed the MySQL container to hold its data locally, not in a data volume container. The solution was simple enough. But it took me a while to identify the problem. Here I was seeing Docker do something extremely stupid. Surely my understanding of Docker was wrong and I was doing something stupid to cause it to leak data. I spent hours digging through the documentation to make sure I was doing things exactly as recommended. It wasn't until I started an Ubuntu VM and tried the same thing there that I realized this looked like a bug in boot2docker. A few Google searches later led me to a comment hiding at the bottom of an existing GitHub issue that pins aufs as the culprit. And here I thought Docker reached 1.0 and wouldn't have bad bugs like this. I certainly wouldn't expect boot2docker to be shipping a VM with a sub-par storage driver (shouldn't it be using devicemapper or btrfs instead?). Whatever.

Wrangling with Mozilla's Branch of Bugzilla

At this point, I've got basic Docker containers for MySQL and Apache+mod_perl+Bugzilla being created. Now, I needed to convert from vanilla Bugzilla to BMO. Should be straightforward. Just change the Git remote URL and branch to check out. I did this and all-of-a-sudden my image started encountering errors building! It turns out that the BMO code base doesn't work on a fresh database! Fortunately, this is a known issue and I've worked around it previously. When I tackled it a few months ago, I spent a handful of hours dissecting this problem. It wasn't pretty. But this time I knew what to do. I even had a Puppet manifest for installing BMO on a fresh machine. So, I just needed to translate that Puppet config into Dockerfile commands. No big deal, right? Well, when I did that Puppet config a few months ago, I based it on Ubuntu because I'm more familiar with Debian-based distros and figured Ubuntu would be the easiest since it tends to have the largest package diversity. Unfortunately, David's Docker work is based on Fedora. So, I spent some time converting the Dockerfile to Ubuntu rather than trying to port things to Fedora. Arguably the wrong decision since Mozilla operates the RedHat flavor of Linux distributions in production. But I was willing to trade accuracy for time here, having lost time dealing with the aufs bug.

Unfortunately, I under-estimated how long it would take to port the image to Ubuntu. It didn't take so long from a code change perspective. Instead, most of the time was spent waiting for Docker to run the commands to build the image. In the final version, Apt is downloading and installing over 250 packages. And Bugzilla's bootstrap process installs dozens of packages from CPAN. Every time I made a small change, I invalidated Docker's image building cache, causing extreme delays while waiting for Apt and CPAN to do their thing. This experience partially contributed to my displeasure with how Docker currently handles image creation. If Docker images were composed of pre-built pieces instead of stacked commands, my cache hit rate would have been much higher and I would have converted the image in no time. But no, that's not how things work. So I lost numerous hours through this 2 week process waiting for Docker images to perform operations I've already done elsewhere dozens of times before.

Docker Container Orchestration

After porting the Bugzilla image to Ubuntu and getting BMO to bootstrap in a manually managed container (up to this point I'm using the docker CLI to create images, start containers, etc), it was time to automate the process so that tests could run the containers. At this time, I started looking for tools that performed multiple container orchestration. I had multiple containers that needed to be treated as a single logical unit, so I figured I'd use an existing tool to solve this problem for me. Don't reinvent the wheel unless you have to, right? I discovered Fig, which seemed to fit the bill. I read that it is being integrated into Docker itself, so it must be best of breed. Even if it weren't, its future seemed more certain than that of other tools. So, I stopped my tools search and used Fig without much consideration for other tools.

Lack of a useful feature in Fig

I quickly whipped up a fig.yml and figured it would just work. Nope! Starting the containers from scratch using Fig resulted in an error. I wasn't sure what the error was at first because Fig didn't tell me. After some investigation, I realized that my bmoweb container (the container holding Apache + BMO code) was failing in its entrypoint command (that's a command that runs when the container starts up, but not the primary command a container runs - that's a bit confusing I know - read the docs). I added some debug statements and quickly realized that Bugzilla was erroring connecting to MySQL. Strange, I thought. Fig is essentially a DSL around manual docker commands, so I checked everything by typing everything into the shell. No error. Again on a new set of containers. No error. I thought maybe my environment variable handling was wrong - that the dynamically allocated IP address and port number of the linked MySQL container being passed to the bmoweb container weren't getting honored. I added some logging to disprove that theory. The wheels inside my brain spun for a little bit. And, aided by some real-time logging, I realized I was dealing with a race condition: Fig was starting the MySQL and bmoweb containers concurrently and bmoweb was attempting to access the MySQL server before MySQL had fully initialized and started listening on its TCP port! That made sense. And I think it's a reasonable optimization for Fig to start containers concurrently to speed up start time. But surely a tool that orchestrates different containers has considered the problem of dependencies and has a mechanism to declare them to prevent these race conditions. I check the online docs and there's nothing to be found. A red panda weeps. So, I change the bmoweb entrypoint script to wait until it can open a TCP socket to MySQL before actually using MySQL and sure enough, the race condition goes away and the bmoweb container starts just fine!
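
The wait-for-MySQL workaround is conceptually just a poll loop on the TCP port. Here's a rough sketch of the idea (host, port, and timeout are illustrative and the real entrypoint code may differ; a later section explains why a plain TCP check eventually proved insufficient for the HTTP server):

import socket
import time

def wait_for_tcp(host, port, timeout=60):
    """Block until host:port accepts TCP connections or the timeout expires."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            socket.create_connection((host, port), timeout=1).close()
            return
        except socket.error:
            time.sleep(0.1)
    raise Exception('gave up waiting for %s:%d' % (host, port))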

OK, I'm real close now. I can feel it.

Bootstrapping Bugzilla

I start playing around with manually starting and stopping containers as part of a toy test. The good news is things appear to work. The bad news is it is extremely slow. It didn't take long for me to realize that the reason for the slowness is Bugzilla's bootstrap on first run. Bugzilla, like many complex applications, has a first run step that sets up database schema, writes out some files on the filesystem, inserts some skeleton data in the database, creates an admin user, etc. Much to my dismay this was taking a long time. Something on the order of 25 to 30 seconds. And that's on a Haswell with plenty of RAM and an SSD. Oy. The way things are currently implemented would result in a 25 to 30 second delay when running every test. Change 1 line and wait say 25s for any kind of output. Are you kidding me?! Unacceptable. It violated my core goal of having tests that are quick to run. Again, humans should not have to wait on machines.

I think about this problem for like half a second and the solution is obvious: take a snapshot of the bootstrapped images and start instances of that snapshot from tests. In other words, you perform the common bootstrap operations once and only once. And, you can probably do that outside the scope of running tests so that the same snapshot can be used across multiple invocations of the test harness. Sounds simple! To the Docker uninitiated, it sounds like the solution would be to move the BMO bootstrapping into the Dockerfile code so it gets executed at image creation time. Yes, that would be ideal. Unfortunately, when building images via Dockerfile, you can't tell Docker to link that image to another container. Without container linking, you can't have MySQL. Without MySQL, you can't do BMO bootstrap. So, BMO bootstrap must be done during container startup. And in Docker land, that means putting it as part of your entrypoint script (where it was conveniently already located for this reason).

Talking Directly to the Docker API

Of course, the tools that I found that help with Docker image building and container orchestration don't seem to have an answer for this "create a snapshot of a bootstrapped container" problem. I'm sure someone has solved this problem. But in my limited searching, I couldn't find anything. And, I figured the problem would be easy enough to solve manually, so I set about creating a script to do it. I'm not a huge fan of shell script for automation. It's hard to debug and simple things can be hard and hard things can be harder. Plus, why resort to hacks such as parsing output for relevant data when you can talk to an API directly and get native types? Since the existing test harness automation in version-control-tools was written in Python, I naturally decided to write some Python to create the bootstrapped images. So, I do a PyPI search and discover docker-py, a Python client library to the Docker Remote API, an HTTP API that the Docker daemon runs and is what the docker CLI tool itself uses to interface with Docker. Good, now I have access to the full power of Docker and am not limited by what the docker CLI may not expose. So, I spent some time looking at source and the Docker Remote API documentation to get an understanding of my new abilities and what I'd need to do. I was pleasantly surprised to learn that the docker CLI is pretty similar to the Remote API and the Python API was similar as well, so the learning curve was pretty shallow. Yay for catching a break!
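
For flavor, here is a minimal sketch of what talking to Docker through docker-py looks like. This is not the actual version-control-tools code; the image name and path are made up and the calls assume a 2014-era docker-py API:

from docker import Client

client = Client(base_url='unix://var/run/docker.sock')

# Build an image from a directory containing a Dockerfile and stream the output.
for line in client.build(path='testing/docker/bmoweb', tag='bmoweb', stream=True):
    print(line)

# Create and start a container from the freshly built image.
container = client.create_container('bmoweb')
client.start(container['Id'])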

Confusion Over Container Stopping

I wrote some Python for building the BMO images, launching the containers, committing the result, and saving state to disk (so it could be consulted later - preventing a bootstrap by subsequent consumers). This was pleasantly smooth at first, but I encountered some bumps along the way. First, I didn't have a complete grasp on the differences between stop and kill. I was seeing some weird behavior by MySQL on startup and didn't know why. Turns out I was forcefully killing the container after bootstrap via the kill API and this was sending a SIGKILL to MySQL, effectively causing unclean shutdown. After some documentation reading, I realized stop is the better API - it issues SIGTERM, waits for a grace period, then issues SIGKILL. Issuing SIGTERM made MySQL shut down gracefully and this issue stemming from my ignorance was resolved. (If anyone from Docker is reading, I think the help output for docker kill should mention the forcefulness of the command versus stop. Not all of us remember the relative forcefulness of the POSIX signals and having documentation reinforce their cryptic meaning could help people select the proper command.) A few lines of Python later and I was talking directly to the Docker Remote API, doing everything I needed to do to save (commit in Docker parlance) a bootstrapped BMO environment for re-use among multiple tests.
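
In docker-py terms, the graceful-stop-then-snapshot flow boils down to something like the following sketch (the repository name is invented and client is a docker-py Client as above):

def snapshot_container(client, container_id, repository='bmoweb-bootstrapped'):
    # stop() sends SIGTERM, waits for the grace period, then sends SIGKILL;
    # kill() goes straight to SIGKILL, which is what was breaking MySQL.
    client.stop(container_id, timeout=30)
    # Commit the cleanly stopped container into a reusable bootstrapped image.
    image = client.commit(container_id, repository=repository)
    return image['Id']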

It was pretty easy to hook the bootstrapped images up to a single test. Just load the bootstrapped image IDs from the config file and start new containers based on them. That's Docker 101 (except I was using Python to do everything).

Concurrent Execution Confuses Bugzilla

Now that I could start Dockerized BMO from a single test, it was time to make things work concurrently. I hooked Docker up to a few tests and launched them in parallel to see what would happen. The containers appeared to start just fine! Great anticipation on my part to design for concurrency from the beginning, I thought. It appeared I was nearly done. Victory was near. So, I changed some tests to actually interact with BMO running from Docker. (Up until this point I was merely starting containers, not doing anything with them.) Immediately I see errors. Cannot connect to Bugzilla http://... connection refused. Huh? It took a few moments, but I realized the experience I had with MySQL starting and this error were very similar. I changed my start BMO containers code to wait for the HTTP server's TCP socket to start accepting connections before returning control and sure enough, I was able to make HTTP requests against Bugzilla running in Docker! Woo!

Next step, make an authenticated query against Bugzilla running in Docker. HTTP request completes... with an internal server error. What?! I successfully browsed BMO from containers days before and was able to log in just fine - this shouldn't be happening. This problem took me ages to diagnose. I traced every step of provisioning and couldn't figure out what was going on. After resorting to print debugging in nearly every component, including Bugzilla's Perl code itself, I found the culprit: Bugzilla wasn't liking the dynamic nature of the MySQL and HTTP endpoints. You see, when you start Docker containers, network addresses change. The IP address assigned to the container is whatever is available to Docker at the time the container was started. Likewise the IP address and port number of linked services can change. So, your container entrypoint has to deal with this dynamic nature of addresses. For example, if you have a configuration file, you need to update that configuration file on every run with the proper network address info. My Bugzilla entrypoint script was doing this. Or so I thought. It turns out that Bugzilla's bootstrap process has multiple config files. There's an answers file that provides static answers to questions asked when running the bootstrap script (checksetup.pl). checksetup.pl will produce a localconfig file (actually a Perl script) containing all that data. There's also a data/params file containing yet more configuration options. And, the way I was running bootstrap, checksetup.pl refused to update files with new values. I initially had the entrypoint script updating only the answers file and running checksetup.pl, thinking checksetup.pl would update localconfig if the answers change. Nope! checksetup.pl only appears to update localconfig if localconfig is missing a value. So, here my entrypoint script was, successfully calling checksetup.pl with the proper network values, which checksetup.pl was more than happy to use. But when I started the web application, it used the old values from localconfig and data/params and blew up. Derp. So, to have dynamic MySQL hosts and ports and a dynamic self-referential HTTP URL, I needed to manually update localconfig and data/params during the entrypoint script. The entrypoint script now rewrites Perl scripts during container load to reflect appropriate variables. Oy.

Resource Constraints

At some point I got working BMO containers running concurrently from multiple tests. This was a huge milestone. But it only revealed my next problem: resource constraints. The running containers were consuming gobs of memory and I couldn't run more than 2 or 3 tests concurrently before running out of memory. Before, I was able to run 8 tests concurrently no problem. Well crap, I just slowed down the test harness significantly by reducing concurrency. No bueno.

Some quick investigation revealed the culprit was MySQL and Apache being greedier than they needed to be. MySQL was consuming 1GB RSS on start. Apache was something like 350 MB. It had been a while since I ran a MySQL server, so I had to scour the net for settings to put MySQL on a diet. The results were not promising. I knew enough about MySQL to know that the answers I found had similar quality to comments on the php.net function documentation circa 2004 (it was not uncommon to see things like SQL injection in the MySQL pages back then - who knows, maybe that's still the case). Anyway, a little tuning later and I was able to get MySQL using a few hundred MB RAM and I reduced the Apache worker pool to something reasonable (maybe 2) to free up enough memory to be able to run tests with the desired concurrency again. If using Docker as part of testing ever takes off, I imagine there will be two flavors of every container: low memory and regular. I'm not running a production service here: I'll happily trade memory for high-end performance as long as it doesn't impact my tests too much.

Caching, Invalidating, and Garbage Collecting Bootstrapped Images

As part of iterating on making BMO bootstrap work, I encountered another problem: knowing when to perform a bootstrap. As mentioned earlier, bootstrap was slow: 25 to 30 seconds. While I had reduced the cost of bootstrap to at most once per test suite execution (as opposed to once per test), there was still the potential for a painful 25-30s delay when running tests. Unacceptable! Furthermore, when I changed how bootstrap worked, I needed a way to invalidate the previous bootstrapped image. Otherwise, we may use an outdated bootstrapped image that doesn't represent the environment it needs to and test execution would fail. How should I do this?

Docker has considered this problem and they have a solution: build context. When you do a docker build, Docker takes all the files from the directory containing the Dockerfile and makes them available to the environment doing the building. If you ADD one of these files in your Dockerfile, the image ID will change if the file changes, invalidating the cache used by Docker to build images. So, if I ADDed the scripts that perform BMO bootstrap to my Docker images, Docker would automagically invalidate the built images and force a bootstrap for me. Nice! Unfortunately, docker build doesn't allow you to add files outside of the current directory to the build context. Internet sleuthing reveals the solution here is to copy things to a temporary directory and run docker build from that. Seriously? Fortunately, I was using the Docker API directly via Python. And that API simply takes an archive of files. And since you can create archives dynamically inside Python using e.g. tarfile, it wasn't too difficult to build proper custom context archives that contained my extra data that could be used to invalidate bootstrapped images. I threw some simple ADD directives into my Dockerfiles and now I got bootstrapped image invalidation!
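
The context-archive trick looks roughly like this. It's a sketch under the assumption that your docker-py version's build() accepts a tar fileobj together with custom_context; the paths are hypothetical:

import io
import tarfile

def build_with_extra_context(client, docker_dir, extra_files, tag):
    """Build an image whose context includes files outside docker_dir."""
    buf = io.BytesIO()
    tar = tarfile.open(mode='w', fileobj=buf)
    # The Dockerfile and its sibling files go in as usual...
    tar.add(docker_dir, arcname='.')
    # ...plus the bootstrap scripts living elsewhere, so ADDing them in the
    # Dockerfile invalidates Docker's cache whenever they change.
    for source, arcname in extra_files:
        tar.add(source, arcname=arcname)
    tar.close()
    buf.seek(0)
    return client.build(fileobj=buf, custom_context=True, tag=tag, stream=True)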

To avoid having to perform bootstrap on every test run, I needed a mapping between the base images and the bootstrapped result. I ended up storing this in a simple JSON file. I realize now I could have queried Docker for images having the base image as its parent since there is supposed to be a 1:1 relationship between them. I may do this as a follow-up.
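
Conceptually the look-up table is nothing more than a JSON file mapping base image IDs to bootstrapped image IDs; a trivial sketch (the file name and layout are made up):

import json

def load_state(path):
    try:
        with open(path, 'rb') as fh:
            return json.load(fh)
    except IOError:
        return {}

def save_state(path, mapping):
    # mapping is e.g. {base_image_id: bootstrapped_image_id}
    with open(path, 'wb') as fh:
        json.dump(mapping, fh, indent=2)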

With the look-up table in place, ensuring bootstrapped images were current involved doing a couple docker builds, finding the bootstrapped images from those base images, and doing the bootstrap if necessary. If everything is up-to-date, docker build finishes quickly and we have less than 1s of delay. Very acceptable. If things aren't current, well, there's not much you can do there if accuracy is important. I was happy with where I was.

Once I started producing bootstrapped images every time the code impacting the generation of that image changed, I ran into a new problem: garbage collection. All those old bootstrapped images were piling up inside of Docker! I needed a way to prune them. Docker has support for associating a repository and a tag with images. Great, I thought, I'll just associate all images with a well-defined repository, leave the tag blank (because it isn't really relevant), and garbage collection will iterate over all images in to-be-pruned repositories and delete all but the most recent one. Of course, this simple solution did not work. As far as I can tell, Docker doesn't really let you have multiple untagged images. You can set a repository with no tag and Docker will automagically assign the latest tag to that image. But the second you create a new image in that repository, the original image loses that repository association. I'm not sure if this is by design or a bug, but it feels wrong to me. I want the ability to associate tags with images (and containers) so I can easily find all entities in a logical set. It seemed to me that repository facilitated that need (albeit with the restriction of only associating 1 identifier per image). My solution here was to assign type 1 UUIDs to the tag field for each image. This forced Docker to retain the repository association when new images were created. I chose type 1 UUIDs so I can later extract the time component embedded within and do time-based garbage collection e.g. delete all images created more than a week ago.
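
For reference, the tagging and the later time extraction are straightforward with the standard library; a sketch with an invented repository name, where client is a docker-py Client:

import datetime
import uuid

def tag_bootstrapped_image(client, image_id, repository='bmoweb-bootstrapped'):
    # A type 1 UUID keeps the repository association stable and embeds a
    # timestamp we can use for time-based garbage collection later.
    tag = str(uuid.uuid1())
    client.tag(image_id, repository, tag=tag)
    return tag

def uuid1_created_at(tag):
    # uuid1 timestamps count 100ns intervals since 1582-10-15.
    u = uuid.UUID(tag)
    return (datetime.datetime(1582, 10, 15) +
            datetime.timedelta(microseconds=u.time // 10))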

Making Things Work in Jenkins/Ubuntu

At about this point, I figured things were working well enough on my boot2docker machine that it was time to update the Jenkins virtual machine / Vagrant configuration to run Docker. So, I hacked up the provisioner to install the docker.io package and tried to run things. First, I had to update code that talks to Docker to know where Docker is in an Ubuntu VM. Before, I was keying things off DOCKER_HOST, which I guess is used by the docker CLI and boot2docker reminds you to set. Easy enough. When I finally got things talking to Docker, my scripts threw a cryptic error when talking to Docker. Huh? This worked in boot2docker! When in doubt, always check your package versions. Sure enough, Ubuntu was installing an old Docker version. I added the Docker Apt repo to the Vagrant provisioner and tried again. Bingo - working Docker in an Ubuntu VM!

Choice of storage engines

I started building the BMO Docker images and quickly noticed something: building images was horribly slow. Specifically, the part where new images are committed was taking seemingly forever. 5 to 8 seconds or something. Ugh. This wouldn't really bother me except due to subsequent issues, I found myself changing images enough as part of debugging that image building latency became a huge time sink. I felt I was spending more time waiting for layers to commit than making progress. So, I decided to do something about it. I remembered glancing at an overview of storage options in Docker the week or two prior. I instinctively pinned the difference on different storage drivers between boot2docker and Ubuntu. Sure enough, boot2docker was using aufs and Ubuntu was using devicemapper. OK, now I identified a potential culprit. Time to test the theory. A few paragraphs into that blog post, I see a sorted list of storage driver priorities. I see aufs first, btrfs second, and devicemapper third. I know aufs has kernel inclusion issues (plus a nasty data leakage bug). I don't want that. devicemapper is slow. I figured the list is ordered for a reason and just attempted to use btrfs without really reading the article. Sure enough, btrfs is much faster at committing images than devicemapper. And, it isn't aufs. While images inside btrfs are building, I glance over the article and come to the conclusion that btrfs is in fact good enough for me.

So now I'm running Docker on btrfs on Ubuntu and Docker on aufs in boot2docker. Hopefully that will be the last noticeable difference between host environments. After all, Docker is supposed to abstract all this away, right? I wish.

The Mystery of Inconsistent State

It was here that I experienced the most baffling, mind bending puzzle yet. As I was trying to get things working on the Jenkins/Ubuntu VM - things that had already been proved out in boot2docker - I was running into inexplicable issues during creation of the bootstrapped BMO containers. It seemed that my bootstrapped containers were somehow missing data. It appeared as if bootstrap had completed but data written during bootstrap failed to write. You start the committed/bootstrapped image and bootstrap had obviously completed partially, but it appeared to have never finished. Same Docker version. Same images. Same build scripts. Only the host environment was different. Ummmm, Bueller?

This problem had me totally and completely flabbergasted. My brain turned to mush exhausting possibilities. My initial instinct was this was a filesystem buffering problem. Different storage driver (btrfs vs aufs) means different semantics in how data is flushed, right? I once again started littering code with print statements to record the presence or non-presence of files and content therein. MySQL wasn't seeing all its data, so I double and triple check I'm shutting down MySQL correctly. Who knows, maybe one of the options I used to trim the fat from MySQL removed some of the safety from writing data and unclean shutdown is causing MySQL to lose data?

While I was investigating this problem, I noticed an additional oddity: I was having trouble getting reliable debug output from running containers (docker logs -f). It seemed as if I was losing log events. I could tell from the state of a container that something happened, but I was seeing no evidence from docker logs -f that that thing actually happened. Weird! On a hunch, I threw some sys.stdout.flush() calls in my Python scripts, and sure enough my missing output started arriving! Pipe buffering strikes again. So, now we have dirty hacks in all the Python scripts related to Docker to unbuffer stdout to prevent data loss. Don't ask how much time was wasted tracking down bad theories due to stdout output being buffered.

Getting back to the problem at hand, I still had Docker containers seemingly losing data. And it was only happening when Ubuntu/btrfs was the host environment for Docker. I eventually exhausted all leads in my filesystem wasn't flushed theory. At some point, I compared the output of docker logs -f between boot2docker and Ubuntu and eventually noticed that the bmoweb container in Ubuntu wasn't printing as much. This wasn't obvious at first because the output from bootstrap on Ubuntu looked fine. Besides, the script that waits for bootstrap to complete waits for the Apache HTTP TCP socket to come alive before it gracefully stops the container and snapshots the bootstrapped result: bootstrap must be completing, ignore what docker logs -f says.

Eventually I hit an impasse and resorted to context dumping everything on IRC. Ben Kero is around and he picks up on something almost immediately. He simply says ... systemd?. I knew almost instantly what he was referring to and knew the theory fit the facts. Do you know what I was doing wrong?

I still don't know what and quite frankly I don't care, but something in my Ubuntu host environment had a trigger on the TCP port the HTTP server would be listening on. Remember, I was detecting bootstrap completion by waiting until a TCP socket could be opened to the HTTP server. As soon as that connection was established, we stopped the containers gracefully and took a snapshot of the bootstrapped result. Except on Ubuntu something was accepting that socket open, giving a false positive to my wait code, and triggering early shutdown. Docker issued the signal to stop the container gracefully, but it wasn't finished bootstrapping yet, so it forcefully killed the container, resulting in bootstrap being in a remarkably-consistent-across-runs inconsistent state. Changing the code from wait on TCP socket to wait for valid HTTP response fixed the problem. And just for good measure, I changed the code waiting on the MySQL server to also try to establish an actual connection to the MySQL application layer, not merely a TCP socket.
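
The fixed readiness check is simple enough. A rough Python 2 sketch of the idea (the URL and timeouts are illustrative, not the exact production values):

import time
import urllib2

def wait_for_http(url, timeout=60):
    """Wait for a real HTTP response, not merely an open TCP socket."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            urllib2.urlopen(url, timeout=1)
            return
        except urllib2.HTTPError:
            # Any HTTP status code means the application layer is alive.
            return
        except Exception:
            time.sleep(0.1)
    raise Exception('timed out waiting for %s' % url)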

After solving this mystery, I thought there's no way I could be so blind as to not see the container receiving the stop signal during bootstrap. So, I changed things back to prove to myself I wasn't on crack. No matter how hard I tried, I could not get the logs to show that the signal was received. I think what was happening was that my script was starting the container and issuing the graceful stop so quickly that it wasn't captured by log clients. Sure enough, adding some sleeps in the proper places made it possible to catch the events in action. In hindsight, I suppose I could have used docker events to shed some light on this as well. If Docker persisted logs/output from containers and allowed me to scroll back in time, I think this would have saved me. Although, there's a chance my entrypoint script wouldn't have informed me about the received signal. Perhaps checksetup.pl was ignoring it? What I really need is a unified event + log stream from Docker containers so I can debug exactly what's going on.

Everything is Working, Right?

After solving the inconsistent bootstrap state problem, things were looking pretty good. I had BMO bootstrapping and running from tests on both boot2docker and Ubuntu hosts. Tests were seeing completely independent environments and there were no race conditions. I was nearly done.

So, I started porting more and more tests to Docker. I started running tests more and more. Things worked. Most of the time. But I'm still frustrated by periodic apparent bugs in Docker. For example, our containers periodically fail to shut down. Our images periodically fail to delete.

During container shutdown and delete at the end of tests, we periodically see error messages like the following:

docker.errors.APIError: 500 Server Error: Internal Server Error ("Cannot destroy container f13828df94c9d295bfe24b69ac02377a757edcf948a3355cf7bc16ff2de84255: Driver aufs failed to remove root filesystem f13828df94c9d295bfe24b69ac02377a757edcf948a3355cf7bc16ff2de84255: rename /mnt/sda1/var/lib/docker/aufs/mnt/f13828df94c9d295bfe24b69ac02377a757edcf948a3355cf7bc16ff2de84255 /mnt/sda1/var/lib/docker/aufs/mnt/f13828df94c9d295bfe24b69ac02377a757edcf948a3355cf7bc16ff2de84255-removing: device or resource busy")
500 Server Error: Internal Server Error ("Cannot destroy container 7e87e5950501734b2a1c02705e9c19f65357a15bad605d8168452aa564d63786: Unable to remove filesystem for 7e87e5950501734b2a1c02705e9c19f65357a15bad605d8168452aa564d63786: remove /mnt/sda1/var/lib/docker/containers/7e87e5950501734b2a1c02705e9c19f65357a15bad605d8168452aa564d63786: directory not empty")

Due to the way we're executing tests (Mercurial's .t test format), this causes the test's output to change and the test to fail. Sadness.

I think these errors are limited to boot2docker/aufs. But we haven't executed enough test runs in the Jenkins/Ubuntu/btrfs VM yet to be sure. This definitely smells like a bug in Docker and it is very annoying.

Conclusion

After much wrangling and going deeper in a rabbit hole than I ever felt was possible, I finally managed to get BMO running inside Docker as part of our test infrastructure. We're now building tests for complicated components that touch Mercurial, Review Board, and Bugzilla and people are generally happy with how things work.

There are still a handful of bugs, workarounds, and components that aren't as optimal as I would like them to be. But you can't always have perfection.

My takeaway from this ordeal is that Docker still has a number of bugs and user experience issues to resolve. I really want to support Docker and to see it succeed. But every time I try doing something non-trivial with Docker, I get bit hard. Yes, some of the issues I experienced were due to my own ignorance. But at the same time, if one of Docker's mantras is about simplicity and usability, then should there be such gaping cracks for people like me to fall through?

In the end, the promise of Docker fits my use case almost perfectly. I know the architecture is a good fit for testing. We will likely stick with Docker, especially now that I've spent the time to make it work. I really wish this project would have taken a few days, not a few weeks.

Robert NymanGetting started with & understanding the power of Vim

Being a developer and having used a lot of code editors over the years, I think it’s a very interesting area, both when it comes to efficiency and when it comes to the program we spend many, many hours in. At the moment, I’m back with Vim (more specifically, MacVim).

The last years I’ve been using Sublime Text extensively, and before that, TextMate. I’ve really liked Sublime Text, it supports most of what I want to do and I’m happy with it.

At the same time, my belief is that you need to keep on challenging yourself. Try and learn new things, get another perspective, learn about needs and possibilities you didn’t even know you had. Or, at the very least, go back to what you used before, now being more aware of how much you like and appreciate it.

Vim redux

A few years ago I tried out Vim (MacVim) to see what it was like. A lot of great developers use it, and a few friends swore by how amazing it was. So, naturally I had to try it.

I tried it for a while, using the Janus distribution. I ended up in a situation where I didn’t have enough control; or rather, I didn’t understand how it all worked and didn’t take the time to learn. So I tried Vim for a while, got fed up and aggravated that I couldn’t get things done quickly. While I learned a lot about Vim along the way, at that time and under those circumstances, the cost was too big to continue.

But now I’m back again, and so far I’m happy about it. :-)

Let’s be completely honest, though: the learning curve is fairly steep and there are a lot of annoying moments in the beginning, in particular since it is very different from what most people have used before.

Getting started

My recommendation to get started, and really grasp Vim, is to download a clean version, and probably something with a graphical user interface/application wrapper for your operating system. As mainly a Mac OS X user, my choice has been MacVim.

In your home folder, you will get (or create) a folder and a file (there could be more, but this is the start):

.vim folder
Contains your plugins and more
.vimrc file
A file with all kinds of configurations, presets and customizations. For a Vim user, the .vimrc file is the key to success (for my version, see below)

Editing Modes

One of the key things is that Vim offers a number of different modes, depending on what you want to do. The core ones are:

normal
This is the default mode in Vim, for navigating and manipulating text. Pressing <Esc> at any time takes you back to this mode
insert
Inserting and writing text and code
visual
Any kinds of text selections
command-line
Pressing : takes you to the command line in Vim, from which you can call a plethora of commands

Once you’ve gotten used to switching between these modes, you will realize how extremely powerful they are and, once you’ve gained control, how dramatically they improve efficiency. Search/substitute is also very powerful in Vim, but I really do recommend checking out vimregex.com for the low-down on commands and escaping.

Keyboard shortcuts

With the different Modes, there’s an abundance of keyboard shortcuts, some of them for one mode, some of them spanning across modes (and all this customizable as well through your .vimrc file).

Also, Vim is a lot about intent. Not just what you want to do now, but thinking 2, 3 or 4 steps ahead. Where are you going with this entire flow, not just action by action without connections.

For instance, let’s say I have a <h2> element with text in it that I want to replace, like this:

<h2>I am a heading</h2>

My options are (going from most complicated to most efficient):

  • Press v to go into Visual mode, then use the w (jump by start of words) or e (jump to end of words) to select the text and then delete it (with the delete key or pressing d), press i to go into Insert mode, then enter the new text
  • Press v to go into Visual mode, then use the w (jump by start of words) or e (jump to end of words) to select the text, then press c to go into Insert mode with a change action, i.e. all selected text will be gone and what you type is the new value
  • Press dit in Normal mode, which means “delete in tag”, then press i or c to go into Insert mode and write the new text
  • Press ct< in Normal mode, which means "change to [character]", then just write the new text
  • Press cit in Normal mode, which means “change in tag”, then just write the new text

Using ct[character] or dt[character], e.g. ct<, will apply the first action ("change" or "delete") to everything up to the specified character ("<" in this case). Other quick ways of changing or deleting things on a line are pressing C or D, which will automatically apply that action from the cursor to the end of the current line.

There is a ton of options and combinations, and I’ve listed the most common ones below (taken from http://worldtimzone.com/res/vi.html):

Cursor movement
h - move left
j - move down
k - move up
l - move right
w - jump by start of words (punctuation considered words)
W - jump by words (spaces separate words)
e - jump to end of words (punctuation considered words)
E - jump to end of words (no punctuation)
b - jump backward by words (punctuation considered words)
B - jump backward by words (no punctuation)
0 - (zero) start of line
^ - first non-blank character of line
$ - end of line
G - Go To command (prefix with number - 5G goes to line 5)

Note: Prefix a cursor movement command with a number to repeat it. For example, 4j moves down 4 lines.

Insert Mode – Inserting/Appending text

i - start insert mode at cursor
I - insert at the beginning of the line
a - append after the cursor
A - append at the end of the line
o - open (append) blank line below current line (no need to press return)
O - open blank line above current line
ea - append at end of word
Esc - exit insert mode

Editing
r - replace a single character (does not use insert mode)
J - join line below to the current one
cc - change (replace) an entire line
cw - change (replace) to the end of word
c$ - change (replace) to the end of line
s - delete character at cursor and substitute text
S - delete line at cursor and substitute text (same as cc)
xp - transpose two letters (delete and paste, technically)
u - undo
. - repeat last command

Marking text (visual mode)

v - start visual mode, mark lines, then do command (such as y-yank)
V - start Linewise visual mode
o - move to other end of marked area
Ctrl+v - start visual block mode
O - move to Other corner of block
aw - mark a word
ab - a () block (with parentheses)
aB - a {} block (with braces)
ib - inner () block
iB - inner {} block
Esc - exit visual mode

Visual commands
> - shift right
< - shift left
y - yank (copy) marked text
d - delete marked text
~ - switch case

Cut and Paste
yy - yank (copy) a line
2yy - yank 2 lines
yw - yank word
y$ - yank to end of line
p - put (paste) the clipboard after cursor
P - put (paste) before cursor
dd - delete (cut) a line
dw - delete (cut) the current word
x - delete (cut) current character

Exiting
:w - write (save) the file, but don't exit
:wq - write (save) and quit
:q - quit (fails if anything has changed)
:q! - quit and throw away changes

Search/Replace
/pattern - search for pattern
?pattern - search backward for pattern
n - repeat search in same direction
N - repeat search in opposite direction
:%s/old/new/g - replace all old with new throughout file
:%s/old/new/gc - replace all old with new throughout file with confirmations

Working with multiple files
:e filename - Edit a file in a new buffer
:bnext (or :bn) - go to next buffer
:bprev (of :bp) - go to previous buffer
:bd - delete a buffer (close a file)
:sp filename - Open a file in a new buffer and split window
ctrl+ws - Split windows
ctrl+ww - switch between windows
ctrl+wq - Quit a window
ctrl+wv - Split windows vertically

Plugins

There are a number of different ways of approaching plugins with Vim, but the simplest and clearest one that I've found, in the form of a plugin itself, is using pathogen.vim. You then place all other plugins you install in .vim/bundle.

These are the plugins I currently use:

command-t
Mimicking the Command + T functionality in TextMate/Sublime Text, to open any file in the current project. I press , + f to use it (where , is my Leader key)
vim-snipmate
To import snippet support in Vim. For instance, in a JavaScript file, type for then tab to have it completed into a full code snippet. As part of this, some other plugins were needed:

vim-multiple-cursors
I love the multiple selection feature in Sublime Text; Command + D to select the next match(es) in the document that are the same as what is currently selected.
vim-multiple-cursors is a version of that feature for Vim, and it works very well. Use Ctrl + n to select any matches, and then act on them with all the powerful commands available in Vim. For instance, after you are done selecting, the simplest thing is to press c to change all those occurrences to what you want.
vim-sensible
A basic plugin to help out with some of the key handling.
vim-surround
surround is a great plugin for surrounding text with anything you wish. Commands start with pressing ys, which stands for "you surround", and then you enter the selection criteria and finally what to surround it with.
Examples:

  • ysiw" – "You surround in word", which wraps the current word in double quotes
  • ysip<C-t> – "You surround in paragraph", which then asks which tag to surround it with
nerdtree
This offers a fairly rudimentary file tree navigation for Vim. I don't use it much at the moment, though; I rather prefer pressing : to go to the command line in Vim and then just typing e. to open a file tree.

My .vimrc file

Here is my .vimrc file, which is vital for me in adapting Vim to all my needs – keyboard shortcuts, customizations, efficiency flows:
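A rough sketch of the kind of settings it contains – the lines below are illustrative examples of such a setup rather than the actual file:

" Load everything in .vim/bundle via pathogen
execute pathogen#infect()
syntax on
filetype plugin indent on

" General editing preferences
set number
set tabstop=4 shiftwidth=4 expandtab
set incsearch hlsearch ignorecase smartcase

" Use , as the Leader key
let mapleader = ","

" ,f opens Command-T's fuzzy file finder
nnoremap <Leader>f :CommandT<CR>

" ,n toggles the NERDTree file tree
nnoremap <Leader>n :NERDTreeToggle<CR>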

HyperLinkHelper in Vim

Another thing I really like in TextMate and Sublime Text is the HyperlinkHelper, basically wrapping the current selection as a link with what’s in the clipboard set as the href value. So I created this command for Vim, to add in your .vimrc file:

vmap <Space>l c<a href="<C-r>+"><C-r>"</a>
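" In the mapping above, c deletes the Visual selection and enters Insert mode; <C-r>+ then inserts the clipboard (+ register) as the href value, and <C-r>" inserts the just-deleted selection back as the link text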

In Visual mode, select text and then press space bar + l to trigger this action.

Scratching the surface

This has only scratched the surface of all the power in Vim, but I hope it has been inspiring and understandable, and that it has either motivated you to give Vim a go or taught you something you didn't know.

Any input, thoughts and suggestions are more than welcome!

Doug Belshaw: 99% finished: Badge Alliance Digital & Web Literacies working group's Privacy badge pathway

I'm the co-chair of the Badge Alliance's working group on Digital & Web Literacies. We've just finished our first cycle of meetings and have almost finished the deliverable. Taking the Web Literacy Map (v1.1) as a starting point, we created a document outlining considerations for creating a badged pathway around the Privacy competency.


The document is currently on Google Docs and open for commenting. After the Mozilla Festival next week the plan is to finalise any edits and then use the template we used for the Webmaker whitepaper.

Click here to access the document: http://goo.gl/40byub


Comments? Questions? Get in touch: @dajbelshaw / doug@mozillafoundation.org

Byron Jones: happy bmo push day!

the following changes have been pushed to bugzilla.mozilla.org:

  • [1079476] Allow getting and updating groups via the web services
  • [1079463] Bugzilla::WebService::User missing update method
  • [1080600] CVE ID format change: CVE-\d{4}-\d{4} becomes CVE-\d{4}-\d{4,7} this year
  • [1080554] Create custom entry form for submissions to Mozilla Communities newsletter
  • [1074586] Add “Bugs of Interest” to the dashboard
  • [1062775] Create a form to create/update bounty tracking attachments
  • [1074350] “new to bugzilla” indicator should be removed when a user is added to ‘editbugs’, not ‘canconfirm’
  • [1082887] comments made when setting a flag from the attachment details page are not included in the “flag updated” email

discuss these changes on mozilla.tools.bmo.


Filed under: bmo, mozilla

Jennie Rose Halperin: New /contribute page

In an uncharacteristically short post, I want to let folks know that we just launched our new /contribute page.

I am so proud of our team! Thank you to Jess, Ben, Larissa, Jen, Rebecca, Mike, Pascal, Flod, Holly, Sean, David, Maryellen, Craig, PMac, Matej, and everyone else who had a hand. You all are the absolute most wonderful people to work with and I look forward to seeing what comes next!

I’ll be posting intermittently about new features and challenges on the site, but I first want to give a big virtual hug to all of you who made it happen and all of you who contribute to Mozilla in the future.

David Boswell: Investing more in community building

I’m very excited to see the new version of Mozilla’s Get Involved page go live. Hundreds of people each week come to this page to learn about how they can volunteer. Improvements to this page will lead to more people making more of an impact on Mozilla’s projects.


This page has a long history: it existed on www.mozilla.org when Mozilla launched in 1998 and it has been redesigned a few times before. There is something different about the effort this time though.

We’ve spent far more time researching, prototyping, designing, testing, upgrading and building than ever before. This reflects Mozilla’s focus this year of enabling communities that have impact and that goal has mobilized experts from many teams who have made the experience for new volunteers who use this page much better.

Mozilla’s investment in community in 2014 is showing up in other ways too, including a brand new contribution dashboard, a relaunched contributor newsletter, a pilot onboarding program, the first contributor research effort in three years and much more.

All of these pieces are coming together and will give us a number of options for how we can continue and increase the investment in community in 2015. Look for more thoughts soon on why that is important, what that could look like and how you could help shape it.


Mozilla Open Policy & Advocacy Blog: Spotlight on Amnesty International: A Ford-Mozilla Open Web Fellows Host

{This is the second installment in our series highlighting the 2015 Host Organizations for the Ford-Mozilla Open Web Fellows program. We are now accepting applications to be a 2015 fellow. Amnesty International is a great addition to the program, especially as new technologies have such a profound impact – both positive and negative – on human rights. With its tremendous grassroots advocacy network and decades of experience advocating for fundamental human rights, Amnesty International, its global community and its Ford-Mozilla Fellow are poised to continue having impact on shaping the digital world for good.}

Spotlight on Amnesty International: A Ford-Mozilla Open Web Fellow Host
By Tanya O’Carroll, Project Officer, Technology and Human Rights, Amnesty International

For more than fifty years Amnesty International has campaigned for human rights globally: exposing information that governments will go to extreme measures to hide; connecting individuals who are under attack with solidarity networks that span the globe; fighting for policy changes that often seem impossible at first.

We’ve developed many tools and tactics to help us achieve change.

But the world we operate in is also changing.

Momentous developments in information and communications networks have introduced new opportunities and threats to the very rights we defend.

The Internet has paved the way for unprecedented numbers of people to exercise their rights online, crucially freedom of expression and assembly.

The ability for individuals to publish information and content in real-time has created a new world of possibilities for human rights investigations globally. Today, we all have the potential to act as witnesses to human rights violations that once took place in the dark.

Yet large shadows loom over the free and open Web. Governments are innovating and seeking to exploit new tools to tighten their control, with daunting implications for human rights.

This new environment requires specialist skills to respond. When we challenge the laws and practices that allow governments to censor individuals online or unlawfully interfere with their privacy, it is vital that we understand the mechanics of the Internet itself–and integrate this understanding in our analysis of the problem and solutions.

That’s why we’re so excited to be an official host for the Ford-Mozilla Open Web Fellowship.

We are seeking someone with the expert skill set to help shape our global response to human rights threats in the digital age.

Amnesty International’s work in this area builds on our decades of experience campaigning for fundamental human rights.

Our focus is on the new tools of control – that is the technical and legislative tools that governments are using to clamp down on opposition, restrict lawful expression and the free flow of information and unlawfully spy on private communications on a massive scale.

In 2015 we will be actively campaigning for an end to unlawful digital surveillance and for the protection of freedom of expression online in countries across the world.

Amnesty International has had many successes in tackling entrenched human rights violations. We know that as a global movement of more than 3 million members, supporters and activists in more than 150 countries and territories we can also help to protect the ideal of a free and open web. Our success will depend on building the technical skills and capacities that will keep us ahead of government efforts to do just the opposite.

Demonstrating expert leadership, the fellow will contribute their technical skills and experience to high-quality research reports and other public documents, as well as international advocacy and public campaigns.

If you are passionate about stopping the Internet from becoming a weapon that is used for state control at the expense of freedom, apply now to become a Ford-Mozilla Open Web Fellow and join Amnesty International in the fight to take back control.


Apply to be a 2015 Ford-Mozilla Open Web Fellow. Visit www.mozilla.org/advocacy.

Doug Belshaw: Web Literacy Map 2.0 community calls

To support the development of Web Literacy Map v2.0, we’re going to host some calls with the Mozilla community.


There is significant overlap between the sub-section of the community interested in the Web Literacy Map and the sub-section involved in the Badge Alliance working group on Digital/Web Literacies. It makes sense, therefore, to use the time between cycles of the Badge Alliance working group to focus on developing the Web Literacy Map.


Calls

We’ll have a series of seven community calls on the following dates. The links take you to the etherpad for that call.


Discussion

You can subscribe to a calendar for these calls at the link below:

Calendar: http://bit.ly/weblitmap2-calls

We’ll be using the #TeachTheWeb forum for asynchronous discussion. I do hope you’ll be able to join us!


Questions? Comments? Direct them to @dajbelshaw / doug@mozillafoundation.org

Robert Nyman: How to become efficient at e-mail

For many years I've constantly been receiving hundreds of e-mails every day. A lot of them work-related, a number of them personal. And I've never seen this as an issue, since I have an approach that works for me.

Many people complain that e-mail is broken, but I think it's a great communication form. I can deal with it when I feel I have the time and can get back to people when it suits me. If I need to concentrate on something else, I won't let it interrupt my flow – just have notifications off, keep your e-mail closed or don't read it, and then get to it when you can.

Your mileage might, and will, vary, of course, but here are the main things that have proven to work very well for me.

Deal with it

When you open up your Inbox with all new e-mail, deal with it. There and then. Because having seen the e-mail, maybe even glanced at some of the contents beyond the subjects as well, I believe it has already reserved a part of your brain. You'll keep on thinking about it till you actually deal with it.

In some cases, naturally it’s good to ponder your reply, but mostly, just go with your knowledge and act on it. Some things are easiest to deal with directly, some need a follow-up later on (more on that in Flags and Filters below).

Flags

Utilize different flags for various actions you want. Go through your Inbox directly, reply to the e-mails or flag them accordingly. It doesn’t have to be Inbox Zero or similar, but just that you know and are on top of each and every e-mail.

These are the flags/labels I use:

This needs action
Meaning, I need to act on this: that could be replying, checking something out, contacting someone else before I know more, etc.
Watch this
No need for an immediate action, but watch and follow up on this and see what happens. Good for things where you never got a reply from people and need to remind them
Reference
No need to act, no need to watch it. But it is plausible that this topic and discussion might come up in the future, so file it just for reference.

The rest of it is Throw away. No need to act, watch or file it? Get rid of it.

Filters

Getting e-mails from the same sender/on the same topic on a regular basis? Set up a filter. This way you can have the vast majority of e-mail already sorted for you, bypassing the Inbox directly.

Make them go into predefined folders (or Gmail labels) per sender/topic. That way you can see in the structure that you have unread e-mails from, say, LinkedIn, Mozilla, Netflix, Facebook, British Airways etc. Or e-mails from your manager or the CEO. Or e-mail sent to the team mailing list, company branch or all of the company. And then deal with it when you have the time.

Gmail also has this nice feature of choosing to only show labels in the left hand navigation if they have unread e-mails in them, making it even easier to see where you’ve got new e-mails.

Let me stress that this is immensely useful, drastically reducing which e-mails you need to manually filter and decide an action for.

Acknowledge people

If you have a busy period when replying properly is hard, still make sure to take the time to acknowledge people. Reply, say that you’ve seen their e-mail and that you will get back to them as soon as you have a chance.

They took the time to write to you, and they deserve the common decency of a reply.

Unsubscribe

How many newsletters or information e-mails are you getting that you don't really care about? Maybe on a monthly basis, so it's annoying, but not annoying enough? Apply the filter suggestion above to them or, even better, start unsubscribing from crap you don't want.

Get to know your e-mail client

Whether you use an e-mail client, Gmail or similar, make sure to learn its features. Keyboard shortcuts, filters and any way you can customize it to make you more efficient.

For instance, I’ve set up keyboard shortcuts for the above mentioned flags and for moving e-mails into pre-defined folders. Makes the manual part of dealing with e-mail really fast.

Summing up

E-mail doesn’t have to be bad. On the contrary, it can be extremely powerful and efficient if you just make the effort to streamline the process and use it for you, not against you.

E-mails aren’t a problem, they’re an opportunity.

Kent James: Thunderbird Summit in Toronto to Plan a Viable Future

On Wednesday, October 15 through Saturday, October 19, 2014, the Thunderbird core contributors (about 20 people in total) are gathering at the Mozilla offices in Toronto, Ontario for a key summit to plan a viable future for Thunderbird. The first two days are project work days, but on Friday, October 18 we will be meeting all day as a group to discuss how we can overcome various obstacles that threaten the continuing viability of Thunderbird as a project. This is an open Summit for all interested parties. Remote participation or viewing of Friday group sessions is possible, beginning at 9:30 AM EDT (6:30 AM Pacific Daylight Time)  using the same channels as the regular weekly Thunderbird status meetings.

Video Instructions: See https://wiki.mozilla.org/Thunderbird/StatusMeetings for details.

Overall Summit Description and Agenda: See https://wiki.mozilla.org/Thunderbird:Summit_2014

Feel free to join in if you are interested in the future of Thunderbird.

J. Ryan Stinnett: DevTools for Firefox OS browser tabs

We've had various tools for inspecting apps on remote devices for some time now, but for a long time we've not had the same support for remote browser tabs.

To remedy this, WebIDE now supports inspecting browser tabs running on Firefox OS devices.

Inspecting a tab in WebIDE

A few weeks back, WebIDE gained support for inspecting tabs on the remote device, but many of the likely suspects to connect to weren't quite ready for various reasons.

We've just landed the necessary server-side bits for Firefox OS, so you should be able to try this out by updating your device to the next nightly build after 2014-10-14.

How to Use

After connecting to your device in WebIDE, any open browser tabs will appear at the bottom of WebIDE's project list.

Browser tab list in WebIDE

The toolbox should open automatically after choosing a tab. You can also toggle the toolbox via the "Pause" icon in the top toolbar.

What's Next

We're planning to make this work for Firefox for Android as well. Much of that work is already done, so I am hopeful that it will be available soon.

If there are features you'd like to see added, file bugs or contact the team via various channels.

Andreas Gal: OpenH264 Now in Firefox

The Web is an open ecosystem, generally free of proprietary control and technologies—except for video.

Today in collaboration with Cisco we are shipping support for H.264 in our WebRTC implementation. Mozilla has always been an advocate for an open Web without proprietary controls and technologies. Unfortunately, no royalty-free codec has managed to get enough adoption to become a serious competitor to H.264. Mozilla continues to support the VP8 video format, but we feel that VP8 has failed to gain sufficient adoption to replace H.264. Firefox users are best served if we offer a video codec in WebRTC that maximises interoperability, and since much existing telecommunication infrastructure uses H.264 we think this step makes sense.

The way we have structured support for H.264 with Cisco is quite interesting and noteworthy. Because H.264 implementations are subject to a royalty bearing patent license and Mozilla is an open source project, we are unable to ship H.264 in Firefox directly. We want anyone to be able to distribute Firefox without paying the MPEG LA.

Instead, Cisco has agreed to distribute OpenH264, a free H.264 codec plugin that Firefox downloads directly from Cisco. Cisco has published the source code of OpenH264 on Github and Mozilla and Cisco have established a process by which the binary is verified as having been built from the publicly available source, thereby enhancing the transparency and trustworthiness of the system.

OpenH264

OpenH264 is not limited to Firefox. Other Internet-connected applications can rely on it as well.

Here is how Jonathan Rosenberg, Cisco’s Chief Technology Officer for Collaboration, described today’s milestone: “Cisco is excited to see OpenH264 become available to Firefox users, who will then benefit from interoperability with the millions of video communications devices in production that support H.264”.

We will continue to work on fully open codecs and alternatives to H.264 (such as Daala), but for the time being we think that OpenH264 is a significant victory for the open Web because it allows any Internet-connected application to use the most popular video format. And while OpenH264 is not truly open, at least it is the most open widely used video codec.

Note: Firefox currently uses OpenH264 only for WebRTC and not for the <video> tag, because OpenH264 does not yet support the high profile format frequently used for streaming video. We will reconsider this once support has been added.


Filed under: Mozilla

Doug Belshaw: Some interesting feedback from the Web Literacy Map 2.0 community survey

Last week we at Mozilla launched a community survey containing five proposals for Web Literacy Map v2.0. I don’t want to share how positively or negatively the overall sentiment is for each proposal as the survey is still open. However, I do want to pull out some interesting comments we’ve seen so far.


There are really good points to be made for and against each of the proposals - as the following (anonymized) examples demonstrate. While I'd like to share the whole spreadsheet, there are people's contact details on there, and I haven't asked them if I can share their feedback with names attached.

What I’ve done here - and I guess you’ll have to trust me on this - is to try and give examples that show the range of feedback we’re getting.

 

1. I believe the Web Literacy Map should explicitly reference the Mozilla manifesto.

The map can be about putting our manifesto and principles into practice. It’s a way to teach Mozilla’s beliefs.
I think this would put some people off using the map as an educational resource as they will think it has some political angle to it.
100% yes. The manifesto needs to be much more broadly spread - it is an inviting and inclusive document by nature and it is important that people engaging in our vision of web literacy understand the context from which we speak.
I often present the main ideas from the Manifesto when introducing the map and Webmaker. This aligns with my teaching practice.
While I like the manifesto, I don’t think the Web Literacy Map should be tied to it. I think that might decrease the likelihood of partners feeling ownership over it.
I think it is important for Mozilla to embrace its output – we shouldn’t shy away from taking credit for the things we work so hard to produce. But I do not believe Mozilla should try to achieve mission alignment as part of the literacy map: Literacy is a tool that helps people decide for themselves what to believe, and disagreement with Mozilla’s manifesto is a valid result of that.
Not sure if it needs to reference the manifesto, if the principles are followed when needed they would be implicit?

 

2. I believe the three strands should be renamed ‘Reading’, 'Writing’ and 'Participating’.

Definitely easier to understand off the bat.
No. Exploring, Building and Connecting are better descriptions.
Reading is not navigating. Writing is not building (necessarily). Communicating is more then participating.
Kinda torn on this. A lot of the time when literacy people from schools of education take over, they come up with weaker definitions of reading and writing, and I like the existing descriptions. But at the same time, R/W/P might make it more appealing for those folks, and trojan-horse them into using stronger standards.
Reading, writing, participating sounds more like school which is a turn off for many.
There’s a lot more than reading and writing going on.
I think reading and writing are too limited in their understood meanings and prefer the existing terms exploring and building. I prefer participating as a term over connecting.

 

3. I believe the Web Literacy Map should look more like a 'map’.

Naw. As I said before, while it might help visualize the connections, it could quickly become a plate of spaghetti that’s not approachable or user friendly. But – there’s no reason there couldn’t be a map tool for exploring the things on the side.
I think it would seem more accessible as a map and people will stay connected/interested for longer.
There should be an easy to read way to see all of the map. It’s not so important what that looks like, although having some more map-like versions of it is interesting.
A list is not good enough and it’s necessary to show off relation between the various skills and competencies. But a true interactive map is maybe a bit to much.
It should look like whatever it needs to look like for people to understand it. If “Map” is causing confusion, rename it rather than change the form to match the name.
I like this idea a lot. It could even have “routes” like the pathways tool.
But you should provide both options - we all interpret data differently, have preferred means of reading information, so leave the list style for those who think better that way, and the map for those who take a more graphic-based approach.

 

4. I believe that concepts such as 'Mobile’, 'Identity’, and 'Protecting’ should be represented as cross-cutting themes in the Web Literacy Map.

Even if they’re included solely as reference points or suggested teaching angles, having them in there strengthens the entire project.
I think adding cross-cutting themes (like the vertical themes at Mozfest) will be quite confusing for some people.
Yeah, I think that’s a good way to deal with those. They’re useful as themes, and this keeps them from overpowering the track structure in the way I was worried they would. Good work!
Well if you introduce the readers to many *new* terms (that may be new to them) you risk to confuse them and they end up missing the content in the map.
An idea I discussed with Doug was concepts that could act as lens through which to view the web literacy map (i.e., mobile or digital citizenship). I support the idea of demonstrating how remix the map with cross-cutting ideas but I think the ideas should be provided as examples and users should also bring their own concepts forward.
Agreed. There are these larger themes or elements across the map. I’d be interested to see how these are represented. Perhaps this is the focus between cycles of the working group.
The problem here is that there are many other themes that could be added. Perhaps these are better emphasised in resources and activities at the point of learning rather than in the map itself?

 

5. I believe a 'remix’ button should allow me to remix the Web Literacy Map for my community and context.

I’d love to see a remix button, but one that integrated with GitHub to include proper historical attribution and version control. Simply spinning up multiple HTML pages without knowing who changed what when would cause more confusion I think.

Only a basic level of skill is needed to fork a repo on GitHub and edit text files directly on the website. We could provide guidelines on how to do that for people who want to contribute.
Definitely! Some of the things on the map now is strictly no-go in my context, I would love to have the ability to Remix to better match my needs.
In some contexts remixing it would be good to relate to situations better.
Perhaps if the name is required to be changed. But much better to get these people to help make the core map work for them.
Agree in principle but not many people will do this. I wouldn’t make it a high priority. Those who like to remix will always find a way.
Completely torn on this one. On the one hand it would embody the open principles on which both the map and Mozilla is built. It is also useful to be able to adapt tools for contexts. However, it could also potentially lead to mixed messages and dilution of the core 'literacy’ principles that are in the map.
意見該被傾聽,讓此份文件更加完善 *(“The views to be heard, so that this document be more perfect.” - according to Google Translate…)*

Many thanks to those who have completed the survey so far. Please do so if you haven’t yet! https://goo.gl/forms/LKNSNrXCnu

If you’ve got meta-level questions and feedback, please send it to @dajbelshaw / doug@mozillafoundation.org

Gregory Szorc: Robustly Testing Version Control at Mozilla

Version control services and interaction with them play an important role at any company. Despite version control being a critical part of your infrastructure, my experience from working at a few companies and talking with others is that version control often doesn't get the testing love that other services do. Hooks get written, spot-tested by the author, and deployed. Tools that interact with version control often rely on behavior that may or may not change over time, especially when the version of your version control software is upgraded.

We've seen this pattern at Mozilla. Mercurial hooks and extensions were written and deployed to the server without test coverage. As a result, things break when we try to upgrade the server. This happens a few times and you naturally develop an attitude of fear, uncertainty, and doubt around touching anything on the server (or the clients for that matter). "If it isn't broken, why fix it" prevails for months or years. Then an enthusiastic individual comes around wanting to deploy some hot new functionality. You tell them the path is arduous because the server is running antiquated versions of software and nothing is tested. The individual realizes the amazing change isn't worth the effort and justifiably throws up their hands and gives up. This is almost a textbook definition of how not having test coverage can result in technical debt. This is the position Mozilla is trying to recover from.

One of the biggest impacts I've had since joining the Developer Services Team at Mozilla a little over a month ago has been changing the story about how we test version control at Mozilla.

I'm proud to say that Mozilla now has a robust enough testing infrastructure in place around our Mercurial server that we're feeling pretty good about silencing the doubters when it comes to changing server behavior. Here's how we did it.

The genesis of this project was likely me getting involved with the hg-git and Mercurial projects. For hg-git, I learned a bit about Mercurial internals and how extensions work. When I looked at Mercurial extensions and hooks used by Mozilla, I started to realize what parts were good and what parts were bad. I realized what parts would likely break after upgrades. When I started contributing patches to Mercurial itself, I took notice of how Mercurial is tested. When I discovered T Tests, I thought, wow, that's pretty cool: we should use them to test Mozilla's Mercurial customizations!

After some frustrations with Mercurial extensions breaking after Mercurial upgrades, I wanted to do something about it to prevent this from happening again. I'm a huge fan of unified repositories. So earlier this year, I reached out to the various parties who maintain all the different components and convinced nearly everyone that establishing a single repository for all the version control code was a good idea. The version-control-tools repository was born. Things were slow at first. It was initially pretty much my playground for hosting Mercurial extensions that I authored. Fast forward a few months, and the version-control-tools repository now contains full history imports of our Mercurial hooks that are deployed on hg.mozilla.org, the templates used to render HTML on hg.mozilla.org, and pretty much every Mercurial extension authored by Mozillians, including pushlog. Having all the code in one repository has been very useful. It has simplified server deployments: we now pull 1 repository instead of 3. If there is a dependency between different components, we can do the update atomically. These are all benefits of using a single repository instead of N>1.

While version-control-tools was still pretty much my personal playground, I introduced a short script for running tests. It was pretty basic: just find test files and invoke them with Mercurial's test harness. It served my needs pretty well. Over time, as more and more functionality was rolled into version-control-tools, we expanded the scope of the test harness.

We can now run Python unit tests (in addition to Mercurial .t tests). Test all of the things!

We set up continuous integration with Jenkins so tests run after check-in and alert us when things fail.

We added code coverage so we can see what is and isn't being tested. Using code coverage data, we've identified a server upgrade bug before it happens. We're also using the data to ensure that code is tested as thoroughly as it needs to be. The code coverage data has been invaluable at assessing the quality of our tests. I'm still shocked that Firefox developers tolerate not having JavaScript code coverage when developing Firefox features. (I'm not saying code coverage is perfect, merely that it is a valuable tool in your arsenal.)

We added support for running tests against multiple versions of Mercurial. We even test the bleeding edge of Mercurial so we know when an upstream Mercurial change breaks our code. So, no more surprises on Mercurial release day. I can tell you today that we have a handful of extensions that are broken in Mercurial 3.2, due for release around November 1. (Hopefully we'll fix them before release.)

We have Vagrant configurations so you can start a virtual machine that runs the tests the same way Jenkins does.

The latest addition to the test harness is the ability to spin up Docker containers as part of tests. Right now, this is limited to running Bugzilla during tests. But I imagine the scope will only increase over time.

Before I go on, I want to quickly explain how amazing Mercurial's .t tests are. These are a flavor of tests used by Mercurial and the dominant form of new tests added to the version-control-tools repository. These tests are glorified shell scripts annotated with expected command output and other metadata. It might be easier to explain by showing. Take bzpost's tests as an example. The bzpost extension automatically posts commit URLs to Bugzilla during push. Read more if you are interested.

What I like so much about .t tests is that they are actually testing the user experience. The test actually runs hg push and verifies the output is exactly what is intended. Furthermore, since we're running a Dockerized Bugzilla server during the test, we're able to verify that the bzpost extension actually resulted in Bugzilla comments being added to the appropriate bug(s). Contrast this with unit tests that only test a subset of functionality. Or, contrast with writing a lot of boilerplate and often hard-to-read code that invokes processes and uses regular expressions, etc to compare output.

I find .t tests are more concise and they do a better job of testing user experience. More than once I've written a .t test and thought this user experience doesn't feel right, I should change the behavior to be more user friendly. This happened because I was writing actual end-user commands as part of writing tests and seeing the exact output the user would see. It is much harder to attain this sense of understanding when writing unit tests. I can name a few projects with poor command line interfaces that could benefit from this approach...

I'm not saying .t tests are perfect or that they should replace other testing methodologies such as unit tests. I just think they are very useful for accurately testing higher-level functionality and for assessing user experience. I really wish we had these tests for mach commands...

Anyway, with a proper testing harness in place for our version control code, we've been pretty good about ensuring new code is properly tested. When people submit new hooks or patches to existing hooks, we can push back and refuse to grant review unless tests are included. When someone requests a new deployment to the server, we can look at what changed, cross-reference to test coverage, and assess the riskiness of the deployment. We're getting to the point where we just trust our tests and server deployments are minor events. Concerns over accidental regressions due to server changes are waning. We can tell people if you really care about this not breaking, you need a test and if you add a test, we'll support it for you. People are often more than happy to write tests to ensure them peace of mind, especially when that test's presence shifts maintenance responsibility away from them. We're happy because we don't have many surprises (and fire drills) at deployment time. It's a win-win!

So, what's next? Good question! We still have a number of large gaps in our test coverage. Our code to synchronize repositories from the master server to read-only slaves is likely the most critical omission. We also don't yet have a good way of reproducing our server environment. Ideally, we'd run the continuous integration in an environment that's very similar to production. Same package versions and everything. This would also allow us to simulate the actual hg.mozilla.org server topology during tests. Currently, our tests are more unit-style than integration-style. We rely on the consistent behavior of Mercurial and other tools as sufficient proxies for test accuracy and we back those up with running the tests on the staging server before production deployment. But these aren't a substitute for an accurate reproduction of the production servers, especially when it comes to things like the replication tests. We'll get there some day. I also have plans to improve Mercurial's test harness to better facilitate some of our advanced use cases. I would absolutely love to make Mercurial's .t test harness more consumable outside the context of Mercurial. (cram is one such attempt at this.) We also need to incorporate the Git server code into this repository. Currently, I'm pretty sure everything Git at Mozilla is untested. Challenge accepted!

In summary, our story for testing version control at Mozilla has gone from a cobbled together mess to something cohesive and comprehensive. This has given us confidence to move fast without breaking things. I think the weeks of people time invested into improving the state of testing was well spent and will pay enormous dividends going forward. Looking back, the mountain of technical debt now looks like a mole hill. I feel good knowing that I played a part in making this change.

Jordan Lund: This week in Releng - Oct 5th, 2014

Major highlights:

  • kmoir ended our official tegra support. All code referencing them has been deleted in bug 1016453
  • kmoir is preparing material teaching a releng class next week http://polymorse.polymtl.ca/plow/
  • bhearsum added signing support for firefox 64bit windows builds in bug 711210

Completed work (resolution is 'FIXED'):


In progress work (unresolved and not assigned to nobody):

Allison Naaktgeboren: Applying Privacy Series: Introduction

Introduction

In January, I laid out information in a presentation & blog post for a discussion about applying Mozilla's privacy principles in practice to engineering. Several fellow engineers wanted to see it applied in a concrete example, complaining that the material presented was too abstract to be actionable. This is a fictional series of conversations around the concrete development of a fictional mobile app feature. Designing and building software is a process of evolving and refining ideas, and this example is designed to help engineers understand that actionable privacy and data safety concerns can and should be a part of the development process.

Disclaimer

The example is fictional. Any resemblance to any real or imagined feature, product, service, or person is purely accidental. Some technical statements are included to flesh out the fictional dialogues. They are assumed to only apply to this fictional feature of a fictional mobile application. The architecture might not be production-quality. Don't get too hung up on it, it's a fictional teaching example.

Thank You!

    Before I begin, a big thank you to Stacy Martin, Alina Hua, Dietrich Ayala, Matt Brubeck, Mark Finkle, Joe Stevenson, and Sheeri Cabral for their input on this series of posts.

The Cast of Characters

so fictional they don’t even get real names

  1. Engineer
  2. Engineering Manager
  3. Service Operations Engineer
  4. Database Administrator (DBA)
  5. Project Manager
  6. Product Manager
  7. Privacy Officer, Legal's Privacy Auditor, Privacy & Security – there are many names & different positions here
  8. UX Designer

Fictional Problem Setup

Imagine that the EU provides a free service to all residents that will translate English text to one of the EU's supported languages. The service requires the target language and the device Id. It is, however, rather slow.

For the purposes of this fictional example, the device id is a hard coded number on each computer, tablet, or phone. It is globally unique and unchangeable, and so highly identifiable.

A mobile application team wants to use this service to offer in-page translation (a much desired feature for non-English speakers) to EU residents using their mobile app. For non-English readers, the ability to read the app's content in their own language is a highly desired feature.

After some prototyping & investigation, they determine that the very slow speed of the translation service adversely affects usability. They’d still like to use it, so they decide to evolve the feature. They’d also like to translate open content while the device is offline so the translated content comes up quicker when the user reopens the app.

Every New Feature Starts Somewhere

Engineer sees an announcement in the tech press about the EU's new service and its noble goal of overcoming language barriers on the web for its citizens. She sends an email to her team's public mailing list: "wouldn't it be cool to apply this to our content for users instead of them having to copy/paste blocks of text into an edit box? We have access to those values on the phone already"

Engineering Team, Engineering Manager & Product Manager on the thread are enthusiastic about the idea.  Engineering Manager assigns Engineer to make it happen.

 

She schedules the initial meeting to figure out what the heck that actually means and nail down a specification.

Robert O'Callahan: Back In New Zealand

I just finished a three-week stint in North America, mostly a family holiday but some work too. Some highlights:

  • Visited friends in Vancouver. Did the Grouse Grind in just over an hour. Lovely mountain.
  • Work week in Toronto. Felt productive. Ran barefoot from downtown to Humber River and back a couple of times. Lovely.
  • Rendezvoused with my family in New York. Spent a day in White Plains where we used to live, and at Trinity Presbyterian Church where we used to be members. Good sermon on the subject of "do not worry", and very interesting autobiographical talk by a Jewish Christian. Great time.
  • Visited the 9/11 Museum. Very good, though perhaps a shade overstressing the gravity of 3000 lives lost. One wonders what kind of memorial there will be if a nuke kills 100x that many.
  • Our favourite restaurant in Chinatown, Singapore Cafe, is gone :-(.
  • Had some great Persian food :-).
  • The amazingness of New York is still amazing.
  • Train to Boston. Gave a talk about rr at MIT, hosted by my former supervisor. Celebrated 20-year anniversary of me starting as his first (equal) grad student. Had my family watch Dad at work.
  • Spent time with wonderful friends.
  • Flew to Pittsburgh. More wonderful friends. Showed up at our old church with no prior warning to anyone. Enjoyed reactions. God continues to do great things there.
  • La Feria and Fuel-and-Fuddle still great. Still like Pittsburgh a lot.
  • Flew to San Francisco. Late arrival due to flight being rerouted through Dallas, but did not catch Ebola.
  • Saw numerous seals and dolphins from the Golden Gate Bridge.
  • Showed my family a real Mozilla office.
  • Two days in Mountain View for Gecko planning meetings. Hilarious dinner incident. Failed to win at Settlers.
  • Took family to Big Basin Redwoods State Park; saw pelicans, deer, a dead snake, a banana slug, and a bobcat.
  • Ever since we made liquid nitrogen ice cream for my bachelor party, I've thought it would make a great franchise; Smitten delivers.
  • Kiwi friends in town for Salesforce conference; took them to Land's End for a walk. Saw a coyote.
  • Watched Fleet Week Blue Angels display from Twin Peaks. Excellent.
  • Played disc golf; absolutely hopeless.
  • Went to church at Home of Christ #5 with friends. Excellent sermon about the necessity of the cross.
  • Flew home on Air NZ's new 777. Upgraded entertainment system is great; more stuff than you could ever hope to watch.

Movie picoreviews:

    Edge Of Tomorrow: Groundhog Day meets Starship Troopers. Not as good as Groundhog Day but pretty good.

    X-Men: Days Of Future Past: OK.

    Godzilla: OK if your expectations are set appropriately.

    Dawn Of The Planet Of The Apes: watched without sound, which more or less worked. OK.

    Amazing Spider-Man 2: Bad.

    Se7en: Good.

Gregory Szorc: Deterministic and Minimal Docker Images

Docker is a really nifty tool. It vastly lowers the barrier to distributing and executing applications. It forces people to think about building server side code as a collection of discrete applications and services. When it was released, I instantly realized its potential, including for uses it wasn't primarily intended for, such as applications in automated build and test environments.

Over the months, Docker's feature set has grown and many of its shortcomings have been addressed. It's more usable than ever. Most of my early complaints and concerns have been addressed or are actively being addressed.

But one supposedly solved part of Docker still bothers me: image creation.

One of the properties that gets people excited about Docker is the ability to ship execution environments around as data. Simply produce an image once, transfer it to a central server, pull it down from anywhere, and execute. That's pretty damn elegant. I dare say Docker has solved the image distribution problem. (Ignore for a minute that the implementation detail of how images map to filesystems still has a few quirks to work out. But they'll solve that.)

The ease at which Docker manages images is brilliant. I, like many, was overcome with joy and marvelled at how amazing it was. But as I started producing more and more images, my initial excitement turned to frustration.

The thing that bothers me most about images is that the de facto and recommended method for producing images is neither deterministic nor results in minimal images. I strongly believe that the current recommended and applied approach is far from optimal and has too many drawbacks. Let me explain.

If you look at the Dockerfiles from the official Docker library (examples: Node, MySQL), you notice something in common: they tend to use apt-get update as one of their first steps. For those not familiar with Apt, that command will synchronize the package repository indexes with a remote server. In other words, depending on when you run the command, different versions of packages will be pulled down and the result of image creation will differ. The same thing happens when you clone a Git repository. Depending on when you run the command - when you create the image - you may get different output. If you create an image from scratch today, it could have a different version of say Python than it did the day before. This can be a big deal, especially if you are trying to use Docker to accurately reproduce environments.

This non-determinism of building Docker images really bothers me. It seems to run counter to Docker's goal of facilitating reliable environments for running applications. Sure, one person can produce an image once, upload it to a Docker Registry server, and have others pull it. But there are applications where independent production of the same base image is important.

One area is the security arena. There are many people who are justifiably paranoid about running binaries produced by others and pre-built Docker images set off all kinds of alarms. So, these people would rather build an image from source, from a Dockerfile, than pull binaries. Except then they build the image from a Dockerfile and the application doesn't run because of an incompatibility with a new version of some random package whose version wasn't pinned. Of course, you probably lost numerous hours tracing down this obscure reason. How frustrating! Determinism and verifiability as part of Docker image creation help solve this problem.

Deterministic image building is also important for disaster recovery. What happens if your Docker Registry and all hosts with copies of its images go down? If you go to build the images from scratch again, what guarantee do you have that things will behave the same? Without determinism, you are taking a risk that things will be different and your images won't work as intended. That's scary. (Yes, Docker is no different here from existing tools that attempt to solve this problem.)

What if your open source product relies on a proprietary component that can't be legally distributed? So much for Docker image distribution. The best you can do is provide a base image and instructions for completing the process. But if that doesn't work deterministically, your users now have varying Docker images, again undermining Docker's goal of increasing consistency.

My other main concern about Docker images is that they tend to be large, both in size and in scope. Many Docker images use a full Linux install as their base. A lot of people start with a base e.g. Ubuntu or Debian install, apt-get install the required packages, do some extra configuration, and call it a day. Simple and straightforward, yes. But this practice makes me more than a bit uneasy.

One of the themes surrounding Docker is minimalism. Containers are lighter than VMs; just ship your containers around; deploy dozens or hundreds of containers simultaneously; compose your applications of many, smaller containers instead of larger, monolithic ones. I get it and am totally on board. So why are Docker images built on top of the bloaty excess of a full operating system (modulo the kernel)? Do I really need a package manager in my Docker image? Do I need a compiler or header files so I can e.g. build binary Python extensions? No, I don't, thank you.

As a security-minded person, I want my Docker images to consist of only the files they need, especially binary files. By leaving out non-critical elements from your image and your run-time environment, you are reducing the surface area to attack. If your application doesn't need a shell, don't include a shell and don't leave yourself potentially vulnerable to shellshock. I want the attacker who inevitably breaks out of my application into the outer container to get nothing, not something that looks like an operating system and has access to tools like curl and wget that could potentially be used to craft a more advanced attack (which might even be able to exploit a kernel vulnerability to break out of the container). Of course, you can and should pursue additional security protections in addition to attack surface reduction to secure your execution environment. Defense in depth. But that doesn't give Docker images a free pass on being bloated.

Another reason I want smaller containers is... because they are smaller. People tend to have relatively slow upload bandwidth. Pushing Docker images that can be hundreds of megabytes clogs my tubes. However, I'll gladly push 10, 20, or even 50 megabytes of only the necessary data. When you factor in that Docker image creation isn't deterministic, you also realize that different people are producing different versions of images from the same Dockerfiles and that you have to spend extra bandwidth transferring the different versions around. This bites me all the time when I'm creating new images and am experimenting with the creation steps. I tend to bypass the fake caching mechanism (fake because the output isn't deterministic) and this really results in data explosion.

I understand why Docker images are neither deterministic nor minimal: making them so is a hard problem. I think Docker was right to prioritize solving distribution (it opens up many new possibilities). But I really wish some effort could be put into making images deterministic (and thus verifiable) and more minimal. I think it would make Docker an even more appealing platform, especially for the security conscious. (As an aside, I would absolutely love if we could ship a verifiable Firefox build, for example.)

These are hard problems. But they are solvable. Here's how I would do it.

First, let's tackle deterministic image creation. Despite computers and software being ideally deterministic, building software tends not to be, so deterministic image creation is a hard problem. Even tools like Puppet and Chef which claim to solve aspects of this problem don't do a very good job with determinism. Read my post on The Importance of Time on Machine Provisioning for more on the topic.

But there are solutions. NixOS and the Nix package manager have the potential to be used as the basis of a deterministic image building platform. The high-level overview of Nix is that the inputs and contents of a package determine the package ID. If you know how Git or Mercurial get their commit SHA-1's, it's pretty much the same concept. In theory, two people on different machines start with the same environment and bootstrap the exact same packages, all from source. Gitian is a similar solution. Although I prefer Nix's content-based approach and how it goes about managing packages and environments. Nix feels so right as a base for deterministically building software.

Anyway, yes, fully verifiable build environments are turtles all the way down (I recommend reading Tor's overview of the problem and their approach). However, Nix's approach addresses many of the turtles and silences most of the critics.

I would absolutely love if more and more Docker images were the result of a deterministic build process like Nix. Perhaps you could define the full set of packages (with versions) that would be used. Let's call this the package manifest. You would then PGP sign and distribute your manifest. You could then have Nix step through all the dependencies, compiling everything from source. If PGP verification fails, compilation output changes, or extra files are needed, the build aborts or issues a warning. I have a feeling the security-minded community would go crazy over this. I know I would.

OK, so now you can use Nix to produce packages (and thus images) (more) deterministically. How do you make them minimal? Well, instead of just packaging the entire environment, I'd employ tools like makejail. The purpose of makejail is to create minimal chroot jail environments. These are very similar to Docker/LXC containers. In fact, you can often take a tarball of a chroot directory tree and convert it into a Docker container! With makejail, you define a configuration file saying among other things what binaries to run inside the jail. makejail will trace file I/O of that binary and copy over accessed files. The result is an execution environment that (hopefully) contains only what you need. Then, create an archive of that environment and pipe it into docker build to create a minimal Docker image.

In summary, Nix provides you with a reliable and verifiable build environment. Tools like makejail pare down the produced packages into something minimal, which you then turn into your Docker image. Regular people can still pull binary images, but they are much smaller and more in tune with Docker's principles of minimalism. The paranoid among us can produce the same bits from source (after verifying the inputs look credible and waiting through a few hours of compiling). Or, perhaps the individual files in the image could be signed and thus verified via trust somehow? The company deploying Docker can have peace of mind that disaster scenarios resulting in Docker image loss should not result in total loss of the image (just rebuild it exactly as it was before).

You'll note that my proposed solution does not involve Dockerfiles as they exist today. I just don't think the Dockerfile design of stackable layers of commands is the right model, at least for people who care about determinism and minimalism. You really want a recipe that knows how to create a set of relevant files, plus some metadata like what ports to expose and what command to run on container start, and turn that into your Docker image. I suppose you could accomplish all of this inside Dockerfiles. But that's a pretty radical departure from how Dockerfiles work today. I'm not sure the two solutions are compatible. Something to think about.
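Purely as an illustration, here is one hypothetical shape such a recipe could take. The field names are invented; nothing like this exists in Docker today.

var imageRecipe = {
  // Packages resolved and built deterministically (e.g. via a Nix-style manifest).
  packages: ['nginx-1.6.2', 'openssl-1.0.1j'],
  // Extra files to copy into the image.
  files: ['/etc/nginx/nginx.conf'],
  // Metadata: ports to expose and the command to run on container start.
  expose: [80, 443],
  cmd: ['nginx', '-g', 'daemon off;']
};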

I'm pretty sure of what it would take to add deterministic and verifiable building of minimal and more secure Docker images. And, if someone solved this problem, it could be applicable outside of Docker (again, Docker images are essentially chroot environments plus metadata). As I was putting the finishing touches on this article, I discovered nix-docker. It looks very promising! I hope the Docker community latches on to these ideas and makes deterministic, verifiable, and minimal images the default, not the exception.

Mozilla Release Management TeamFirefox 33 rc1 to rc2

An important late change forced us to generate a build 2 of Firefox 33. We took this opportunity to back out an OMTC-related regression and to take two startup fixes for Fennec.

  • 9 changesets
  • 19 files changed
  • 426 insertions
  • 60 deletions

Extension / Occurrences:
  • cpp: 10
  • java: 5
  • h: 3
  • build: 1

Module / Occurrences:
  • security: 12
  • mobile: 5
  • widget: 1
  • gfx: 1

List of changesets:

Ryan VanderMeulen: Backed out changeset 9bf2a5b5162d (Bug 1044975) - 1dd4fb21d976
Ryan VanderMeulen: Backed out changeset d89ec5b69c01 (Bug 1076825) - 1233c159ab6d
Ryan VanderMeulen: Backed out changeset bbc35ec2c90e (Bug 1061214) - 6b3eed217425
Jon Coppeard: Bug 1061214. r=terrence, a=sledru - a485602f5cb1
Ryan VanderMeulen: Backed out changeset e8360a0c7d74 (Bug 1074378) - 7683a98b0400
Richard Newman: Bug 1077645 - Be paranoid when parsing external intent extras. r=snorp, a=sylvestre - 628f8f6c6f72
Richard Newman: Bug 1079876 - Handle unexpected exceptions when reading external extras. r=mfinkle, a=sylvestre - 96bcea5ee703
David Keeler: Bug 1058812 - mozilla::pkix: Add SignatureAlgorithm::unsupported_algorithm to better handle e.g. roots signed with RSA/MD5. r=briansmith, a=sledru - 4c62d5e8d5fc
David Keeler: Bug 1058812 - mozilla::pkix: Test handling unsupported signature algorithms. r=briansmith, a=sledru - fe4f4c9342b1

Daniel GlazmanHappy birthday Disruptive Innovations!

Daniel StenbergWhat a removed search from Google looks like

Back in the days when I participated in the starting of the Subversion project, I found the mailing list archive we had really dysfunctional and hard to use, so I set up a separate archive for the benefit of everyone who wanted an alternative way to find Subversion related posts.

This archive is still alive and it recently surpassed 370,000 archived emails, all related to Subversion, for seven different mailing lists.

Today I received a notice from Google (shown in its entirety below) that one of the mails received in 2009 has apparently been removed from searches for a certain name, at least when the search is done within the European Union. It is hard to take this seriously when you look at the page in question: there aren't very many names involved on that page, so the possibilities for which name it is are few. As there are several different mail archives for Subversion mails, I can only assume that the search results for the alternative archives have been removed as well.

This is the first removal I’ve got for any of the sites and contents I host.


Notice of removal from Google Search

Hello,

Due to a request under data protection law in Europe, we are no longer able to show one or more pages from your site in our search results in response to some search queries for names or other personal identifiers. Only results on European versions of Google are affected. No action is required from you.

These pages have not been blocked entirely from our search results, and will continue to appear for queries other than those specified by individuals in the European data protection law requests we have honored. Unfortunately, due to individual privacy concerns, we are not able to disclose which queries have been affected.

Please note that in many cases, the affected queries do not relate to the name of any person mentioned prominently on the page. For example, in some cases, the name may appear only in a comment section.

If you believe Google should be aware of additional information regarding this content that might result in a reversal or other change to this removal action, you can use our form at https://www.google.com/webmasters/tools/eu-privacy-webmaster. Please note that we can’t guarantee responses to submissions to that form.

The following URLs have been affected by this action:

http://svn.haxx.se/users/archive-2009-08/0808.shtml

Regards,

The Google Team

Christian HeilmannEvangelism conundrum: Don’t mention the product

Being a public figure for a company is tough. It is not only about what you do wrong or right – although this is a big part. It is also about fighting conditioning and bad experiences of the people you are trying to reach. Many a time you will be accused of doing something badly because of people’s preconceptions. Inside and outside the company.

The outside view: oh god, just another sales pitch!

One of these conditionings is the painful memory of the boring sales pitch we all had to endure sooner or later in our lives. We are at an event we went through a lot of hassle to get tickets for. And then we get a presenter on stage who is “excited” about a product. It is also obvious that he or she never used the product in earnest. Or it is a product that you could not care less about and yet here is an hour of it shoved in your face.

Many a time these are "paid for" speaking slots. Conferences offer companies a chance to go on stage in exchange for sponsorship. These companies don't send their best speakers, but those who are most experienced in delivering "the cool sales pitch": a pitch the marketing department worked hard on to make it not look like an obvious advertisement. In most cases these turn out worse than an – at least honest – straight-up sales pitch would have.

I think my favourite nonsense moment is "the timelapse excitement". That is when a presenter is "excited" about a new feature of a product and claims to have used it "for weeks now with all my friends", all whilst the feature is not yet available. Sadly, it is often just too obvious that you are being fed a make-believe usefulness of the product.

This is why when you go on stage and you show a product people will almost immediately switch into “oh god, here comes the sale” mode. And they complain about this on Twitter as soon as you mention a product for the first time.

This is unfair to the presenter. Of course he or she would speak about the products they are most familiar with. It should become obvious whether the person knows the product or is just trying to sell it, but it is easier to be snarky than to wait and find out.

The inside view: why don’t you promote our product more?

From your company you get pressure to talk more about your products. You are also asked to show proof that what you did on stage made a difference and got people excited. Often this means showing the Twitter timeline during your talk, which is when a snarky comment can be disastrous.

Many people in the company will see evangelists as “sales people” and “show men”. Your job is to get people excited about the products they create. It is a job filled with fancy hotels, a great flight status and a general rockstar life. They either don’t understand what you do or they just don’t respect you as an equal. After all, you don’t spend a lot of time coding and working on the product. You only need to present the work of others. Simple, isn’t it? Our lives can look fancy to the outside and jealousy runs deep.

This can lead to a terrible gap. You end up as a promoter of a product while lacking the knowledge that would make you confident enough to talk about it on stage. You're seen as a sales guy by the audience and taken for granted by your peers. And it may not be your fault at all, as your attempts to reach out to people in the company for information don't yield any answers. Often it is considered fine to be "too busy" to tell you about a new feature, and it is left up to you to find it, as "the documentation is in the bug reports".

Often your peers like to point out how great other companies are at presenting their products, whilst dismissing or not even looking at what you do. That's because it is important for them to check what the competition does; it is less exciting to see how your own products "are being sold".

How to escape this conundrum?

Frustration is the worst thing you can experience as an evangelist.

Your job is to get people excited and talking to one another. To get company information out to the world and get feedback from the outside world to your peers. This is a kind of translator role, but if you look deep inside and shine a hard light on it, you are also selling things.

Bruce Lawson covered that in his talk about how he presents. You are a sales person. What you do though is sell excitement and knowledge, not a packaged product. You bring the angle people did not expect. You bring the inside knowledge that the packaging of the product doesn't talk about. You show the insider channels to get more information and talk to the people who work on the product. That can only work when these people are also open to this: when they understand that any delay in feedback is not only a disappointment for the person who asked the question, but also diminishes your trustworthiness and your reputation, and without those you are dead on stage.

In essence, do not mention the product without context. Don’t show the overview slides and the numbers the press and marketing team uses. Show how the product solves issues, show how the product fits into a workflow. Show your product in comparison with competitive products, praising the benefits of either.

And grow a thick skin. Our jobs are tiring, they are busy and it is damn hard to keep up a normal social life when you are on the road. Each sting from your peers hurts, each “oh crap, now the sales pitch starts” can frustrate you. You’re a person who hates sales pitches and tries very hard to be different. Being thrown in the same group feels terribly hurtful.

It is up to you whether you let that get you down. You could instead concentrate on the good: revel in the excitement you see in people's faces when you show them a trick they didn't know, and in seeing people grow in their careers when they repeat what they learned from you to their bosses.

If you aren’t excited about the product, stop talking about it. Instead work with the product team to make it exciting first. Or move on. There are many great products out there.

Rob HawkesLeaving Pusher to work on ViziCities full time

On the 7th of November I'll be leaving my day job heading up developer relations at Pusher. Why? To devote all my time and effort toward ensuring ViziCities gets the chance it very much deserves. I'm looking to fund the next 6–12 months of development and, if the opportunity is right, to build out a team to accelerate the development of the wider vision for ViziCities (beyond 3D visualisation of cities).

I'm no startup guru (I often feel like I'm making this up as I go); all I know is that I have a vision for ViziCities and, as a result of a year talking with governments and organisations, I'm beyond confident that there's demand for what ViziCities offers.

Want to chat? Send me an email at rob@vizicities.com. I'd love to talk about potential options and business models, or simply to get advice. I'm not ruling anything out.

Leaving your day job. Are you crazy?

Probably. I certainly don't do things by halves and I definitely thrive under immense pressure with the distinct possibility of failure. I've learnt that life isn't fulfilling for me unless I'm taking a risk with something unknown. I'm obsessed with learning something new, whether in programming, business or something else entirely. The process of learning and experimentation is my lifeblood, the end result of that is a bonus.

I think quitting your day job without having the funding in place to secure the next 6 to 12 months counts as immense pressure, some may even call it stupid. To me it wasn't even a choice; I knew I had to work on ViziCities so my time at Pusher had to end, simple. I'm sure I'll work the rest out.

Let me be clear. I thoroughly enjoyed my time at Pusher; they are the nicest bunch of people and I'm going to miss them dearly. My favourite thing about working at Pusher was being around the team every single day. Their support and advice around my decision with ViziCities has really helped over the past few weeks. I wish them all the best for the future.

As for my future, I'm absolutely terrified about it. That's a good thing, it keeps me focused and sharp.

So what's the plan with ViziCities?

Over the past 18 months ViziCities has evolved from a disparate set of exciting experiments into a concise and deliberate offering that solves real problems for people. What has been learnt most over that time is that visualising cities in 3D isn't what makes ViziCities so special (though it's really pretty), rather it's the problems it can solve and the ways it can help governments, organisations and citizens. That's where ViziCities will make its mark.

After numerous discussions with government departments and large organisations worldwide it's clear that not only can ViziCities solve their problems, it's also financially viable as a long-term business. The beauty of what ViziCities offers is that people will always need tools to help turn geographic data into actionable results and insight. Nothing else provides this in the same way ViziCities can, both as a result of the approach and of the people working on it.

ViziCities now needs your help. I need your help. For this to happen it needs funding, and not necessarily that much to start with. There are multiple viable business models and avenues to explore, all of which are flexible and complementary, none of which compromise the open-source heart.

I'm looking to fund the next 6–12 months of development, and if the opportunity is right, to build out a team to accelerate the development of the wider vision for ViziCities (beyond 3D visualisation of cities).

I'll be writing about the quest for funding in much more detail.

You can help ViziCities succeed

This is the part where you can help. I can't magic funds out of nowhere, though I'm trying my best. I'd love to talk about potential options and business models, or simply to get advice. I'm not ruling anything out.

Want to chat? Send me an email at rob@vizicities.com.

James LongTransducers.js Round 2 with Benchmarks

A few weeks ago I released my transducers library and explained the algorithm behind it. It's a wonderfully simple technique for high-performance transformations like map and filter, and it comes from Clojure (mostly Rich Hickey, I think).

Over the past week I've been hard at work polishing and benchmarking it. Today I published version 0.2.0 with a new API and completely refactored internals that make it easy to use and get performance that beats other popular utility libraries. (This is a different library from the recently released one from Cognitect.)

A Few Benchmarks

Benchmarking is hard, but I think it's worthwhile to post a few benchmarks that back up these claims. All of these were run on the latest version of node (0.10.32). First I wanted to show how transducers devastate many other libraries for large arrays (update: lodash + laziness comes the closest, see more in the next section). The test performs two maps and two filters. Here is the transducer code:

into([],
     compose(
       map(function(x) { return x + 10; }),
       map(function(x) { return x * 2; }),
       filter(function(x) { return x % 5 === 0; }),
       filter(function(x) { return x % 2 === 0; })
     ),
     arr);

The same transformations were implemented in lodash and underscore, and benchmarked with an arr of various sizes. The graph below shows the time it took to run versus the size of arr, which starts at 500 and goes up to around 300,000. Here's the full benchmark (it outputs Hz so the y-axis is 1/Hz).
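For reference, the lodash and underscore versions boil down to a chain like the following (a sketch of the shape, not the exact benchmark code):

_.chain(arr)
  .map(function(x) { return x + 10; })
  .map(function(x) { return x * 2; })
  .filter(function(x) { return x % 5 === 0; })
  .filter(function(x) { return x % 2 === 0; })
  .value();

Each map and filter step in that chain allocates an intermediate array, which is exactly the cost discussed below.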

Once the array reaches a size of around 90,000, transducers completely blow the competition away. This should be obvious; we never need to allocate anything between transformations, while underscore and lodash always have to allocate an intermediate array.

Laziness would not help here, since we are eagerly evaluating the whole array.

Update: More Detailed Benchmark

This section was added after requests for a more thorough benchmark, particularly one including lodash's new lazy behavior.

The master branch of lodash supports laziness, which should provide performance gains. Let's include that in the benchmark to see how well it helps. Laziness is a technique where a chain doesn't evaluate the transformations until a final value method is called, and it attempts to reduce intermediate allocations. Here's the full benchmark that generated the following graph.

We also added comparisons with native map and filter, and a baseline that manually performs the same operations in a for loop (thanks @stefanpenner for that).

First, as expected, the baseline performs the best. But the cost of transducers isn't too bad, and you get a far better and easier-to-use abstraction than hand-coded for loops. Unfortunately, native is slowest for various reasons.

The really interesting thing is that lodash's laziness does help it out a lot. For some reason there's still a jump, but it's at a much higher point, around 280,000 items. In general, though, transducers take about two thirds of the time, and the performance is more consistent. Note that there's actually a perf hit for lodash laziness with smaller arrays under 90,000 items.

This benchmark was run with node 0.10.32, and it most likely looks different on various engines. Transducers don't beat a lazy lodash as much (for some array sizes not at all) in Firefox, but I think that's more due to poor optimization in Firefox. The algorithm is inherently open to great optimizations as the process is only a few function calls per item, so I think it will only get better across each engine. My guess is that Firefox needs to do a better job inlining functions, but I still need to look into it.

Small Arrays

While it's not as dramatic, even with arrays as small as 1000 you will see performance wins. Here is the same benchmarks but only running it twice with a size of 1000 and 10,000:

_.map/filter (1000) x 22,302 ops/sec ±0.90% (100 runs sampled)
u.map/filter (1000) x 21,290 ops/sec ±0.65% (96 runs sampled)
t.map/filter+transduce (1000) x 26,638 ops/sec ±0.77% (98 runs sampled)

_.map/filter (10000) x 2,277 ops/sec ±0.49% (101 runs sampled)
u.map/filter (10000) x 2,155 ops/sec ±0.77% (99 runs sampled)
t.map/filter+transduce (10000) x 2,832 ops/sec ±0.44% (99 runs sampled)

Take

If you use the take operation to only take, say, 10 items, transducers will only send 10 items through the transformation pipeline. Obviously if I ran benchmarks we would also blow away lodash and underscore here, because they do not lazily optimize for take (they transform the whole array first and then run take). You can get this in some of the other libraries, like lodash, by explicitly marking a chain as lazy and then requesting the value at the end. We get this for free though, and still beat it in this scenario because we don't have any laziness machinery.

I ran a benchmark here but I don't have it anymore, but it's worth noting that we don't need to be explicitly lazy to optimize for take.
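To make that concrete, here is a minimal sketch using the same into and compose helpers as above plus the library's take transform; no matter how large arr is, the pipeline stops after ten items have passed through:

into([],
     compose(
       map(function(x) { return x * 2; }),
       take(10)
     ),
     arr);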

immutable-js

The immutable-js library is a fantastic collection of immutable data structures. They implement lazy transformations, so you get a lot of perf wins from that. Even so, there is a cost to the laziness machinery. I implemented the same map->map->filter->filter transformation above in another benchmark which compares it with their transformations. Here is the output with arr sizes of 1000 and 100,000:

Immutable map/filter (1000) x 6,414 ops/sec ±0.95% (99 runs sampled)
transducer map/filter (1000) x 7,119 ops/sec ±1.58% (96 runs sampled)

Immutable map/filter (100000) x 67.77 ops/sec ±0.95% (72 runs sampled)
transducer map/filter (100000) x 79.23 ops/sec ±0.47% (69 runs sampled)

This kind of perf win isn't a huge deal, and their transformations perform well. But we can apply this to any data structure. Did you notice how easy it was to use our library with immutable-js? View the full benchmark here.
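For reference, the immutable-js side of that benchmark is roughly the chain below (a sketch based on the Immutable.Vector API of the time; the exact benchmark code may differ):

Immutable.Vector(1, 2, 3, 4, 5)
  .map(function(x) { return x + 10; })
  .map(function(x) { return x * 2; })
  .filter(function(x) { return x % 5 === 0; })
  .filter(function(x) { return x % 2 === 0; })
  .toArray();
// -> [ 30 ]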

Transducers.js Refactored

I just pushed v0.2.0 to npm with all the new APIs and performance improvements. Read more in the new docs.

You may have noticed that Cognitect, where Rich Hickey and other core maintainers of Clojure(Script) work, released their own JavaScript transducers library on Friday. I was a little bummed because I had just spent a lot of time refactoring mine, but I think I offer a few improvements. Internally, we basically converged on the exact same technique for implementing transducers, so you should find the same performance characteristics above with their library.

All of the following features are things you can find in my library transducers.js.

My library now offers several integration points for using transducers:

  • seq takes a collection and a transformer and returns a collection of the same type. If you pass it an array, you will get back an array. An iterator will give you back an iterator. For example:
// Filter an array
seq([1, 2, 3], filter(x => x > 1));
// -> [ 2, 3 ]

// Map an object
seq({ foo: 1, bar: 2 }, map(kv => [kv[0], kv[1] + 1]));
// -> { foo: 2, bar: 3 }

// Lazily transform an iterable
function* nums() {
  var i = 1;
  while(true) {
    yield i++;
  }
}

var iter = seq(nums(), compose(map(x => x * 2),
                               filter(x => x > 4)));
iter.next().value; // -> 6
iter.next().value; // -> 8
iter.next().value; // -> 10
  • toArray, toObject, and toIter will take any iterable type and force them into the type that you requested. Each of these can optionally take a transform as the second argument.
// Make an array from an object
toArray({ foo: 1, bar: 2 });
// -> [ [ 'foo', 1 ], [ 'bar', 2 ] ]

// Make an array from an iterable
toArray(nums(), take(3));
// -> [ 1, 2, 3 ]

That's a very quick overview, and you can read more about these in the docs.

Collections as Arguments

All the transformations in transducers.js optionally take a collection as the first argument, so the familiar pattern of map(coll, function(x) { return x + 1; }) still works fine. This is an extremely common use case so this will be very helpful if you are transitioning from another library. You can also pass a context as the third argument to specify what this should be bound to.
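For example (a small sketch; it assumes, as with the familiar pattern, that passing an array returns an array):

// Collection-first, familiar style:
map([1, 2, 3], function(x) { return x + 1; });
// -> [ 2, 3, 4 ]

// Binding `this` via the optional third argument:
var opts = { amount: 10 };
map([1, 2, 3], function(x) { return x + this.amount; }, opts);
// -> [ 11, 12, 13 ]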

Read more about the various ways to use transformations.

Laziness

Transducers remove the requirement of being lazy to optimize for things like take(10). However, it can still be useful to "bind" a collection to a set of transformations and pass it around, without actually evaluating the transformations. It's also useful if you want to apply transformations to a custom data type, get an iterator back, and rebuild another custom data type from it (there is still no intermediate array).

Whenever you apply transformations to an iterator it does so lazily. It's easy to convert array transformations into a lazy operation, just use the utility function iterator to grab an iterator of the array instead:

seq(iterator([1, 2, 3]),
    compose(
      map(x => x + 1),
      filter(x => x % 2 === 0)))
// -> <Iterator>

Our transformations are completely blind to whether they are being applied lazily or eagerly.

The transformer Protocol

Lastly, transducers.js supports a new protocol that I call the transformer protocol. If a custom data structure implements this, not only can we iterate over it in functions like seq, but we can also build up a new instance. That means seq won't return an iterator, but it will return an actual instance.

For example, here's how you would implement it in Immutable.Vector:

var t = require('./transducers');
Immutable.Vector.prototype[t.protocols.transformer] = {
  init: function() {
    return Immutable.Vector().asMutable();
  },
  result: function(vec) {
    return vec.asImmutable();
  },
  step: function(vec, x) {
    return vec.push(x);
  }
};

If you implement the transformer protocol, now your data structure will work with all of the builtin functions. You can just use seq like normal and you get back an immutable vector!

t.seq(Immutable.Vector(1, 2, 3, 4, 5),
      t.compose(
        t.map(function(x) { return x + 10; }),
        t.map(function(x) { return x * 2; }),
        t.filter(function(x) { return x % 5 === 0; }),
        t.filter(function(x) { return x % 2 === 0; })));
// -> Vector [ 30 ]

I hope you give transducers a try, they are really fun! And unlike Cognitect's project, mine is happy to receive pull requests. :)

Brett GaylorFrom Mozilla to new making

Yesterday was my last day as an employee of the Mozilla Foundation. I’m leaving my position as VP, Webmaker to create an interactive web series about privacy and the economy of the web.

I’ve had the privilege of being a “crazy Mofo” for nearly five years. Starting in early 2010, I worked with David Humphrey and researchers at the Center for Development of Open Technology to create Popcorn.js. Having just completed “Rip!”, I was really interested in mashups - and Popcorn was a mashup of open web technology questions (how can we make video as elemental an element of the web as images or links?) and formal questions about documentary (what would a “web native” documentary look like? what can video do on the web that it can’t do on TV?). That mashup is one of the most exciting creative projects I’ve ever been involved with, and lead to a wonderful amount of unexpected innovation and opportunity. An award winning 3D documentary by a pioneer of web documentaries, the technological basis of a cohort of innovative(and fun) startups, and a kick ass video creation tool that was part of the DNA of Webmaker.org - which this year reached 200,000 users and facilitated the learning experience of over 127,200 learners face to face at our annual Maker Party.

Thinking about video and the web, and making things that aim to get the best of both mediums, is what brought me to Mozilla - and it’s what’s taking me to my next adventure.

I’m joining my friends at Upian in Paris (remotely, natch) to direct a multi-part web series around privacy, surveillance and the economy of the web. The project is called Do Not Track and it’s supported by the National Film Board of Canada, Arte, Bayerischer Rundfunk (BR), the Tribeca Film Institute and the Centre National du Cinéma. I’m thrilled by the creative challenge and humbled by the company I’ll be keeping - I’ve wanted to work with Upian since their seminal web documentary Gaza/Sderot and have been thrilled to watch from the sidelines as they’ve made Prison Valley, Alma, MIT’s Moments of Innovation project, and the impressive amount of work they do for clients in France and around the world. These are some crazy mofos, and they know how to ship.

Fake it Till You Make it

Mozilla gave me a wonderful gift: to innovate on the web, to dream big, without asking permission to do so. To in fact internalize innovation as a personal responsibility. To hammer into me every day the belief that for the web to remain a public resource, the creativity of everyone needs to be brought to the effort. That those of us in positions of privilege have a responsibility to wake up every day trying to improve the network. It's a calling that tends to attract really bright people, and it can elicit strong feelings of impostor syndrome for a clueless filmmaker. The gift Mozilla gave me is to witness first hand that even the most brilliant people, or especially the most brilliant people, are making it up every single day. That's why the web remains as much an inspiration to me today as when I first touched it as a teenager. Even though smart people criticize Silicon Valley's hypercapitalism, or while governments are breeding cynics and mistrust by using the network for surveillance, I still believe the web remains the best place to invent your future.

I’m very excited, and naturally a bit scared, to be making something new again. Prepare yourself - I’m going to make shit up. I’ll need your help.

Working With


“Where some people choose software projects in order to solve problems, I have taken to choosing projects that allow me to work with various people. I have given up the comfort of being an expert , and replaced it with a desire to be alongside my friends, or those with whom I would like to be friends, no matter where I find them. My history among this crowd begins with friendships, many of which continue to this day.

This way of working, where collegiality subsumes technology or tools, is central to my personal and professional work. Even looking back over the past two years, most of the work I’ve done is influenced by a deep desire to work with rather than on. ” - On Working With Instead of On

David Humphrey, who wrote that, is who I want to be when I grow up. I will miss daily interactions with him, and many others who know who they are, very much. "In the context of working with, technology once again becomes the craft I both teach and am taught, it is what we share with one another, the occasion for our time together, the introduction, but not the reason, for our friendship.”

Thank you, Mozilla, for a wonderful introduction. Till the next thing we make!