Mike Conley: Things I’ve Learned This Week (April 13 – April 17, 2015)

When you send a sync message from a frame script to the parent, the return value is always an array

Example:

// Some contrived code in the browser
let browser = gBrowser.selectedBrowser;
browser.messageManager.addMessageListener("GIMMEFUE,GIMMEFAI", function onMessage(message) {
  return "GIMMEDABAJABAZA";
});

// Frame script that runs in the browser
let result = sendSyncMessage("GIMMEFUE,GIMMEFAI");
console.log(result[0]);
// Writes to the console: GIMMEDABAJABAZA

From the documentation:

Because a single message can be received by more than one listener, the return value of sendSyncMessage() is an array of all the values returned from every listener, even if it only contains a single value.

I don’t use sync messages from frame scripts a lot, so this was news to me.

You can use [cocoaEvent hasPreciseScrollingDeltas] to differentiate between scrollWheel events from a mouse and a trackpad

scrollWheel events can come from a standard mouse or a trackpad1. According to this Stack Overflow post, one potential way of differentiating between scrollWheel events coming from a mouse and those coming from a trackpad is to call:

BOOL isTrackpad = [theEvent hasPreciseScrollingDeltas];

since mouse scrollWheel is usually line-scroll, whereas trackpads (and Magic Mouse) are pixel scroll.

The srcdoc attribute for iframes lets you easily load content into an iframe via a string

It’s been a while since I’ve done web development, so I hadn’t heard of srcdoc before. It was introduced as part of the HTML5 standard, and is defined as:

The content of the page that the embedded context is to contain. This attribute
is expected to be used together with the sandbox and seamless attributes. If a
browser supports the srcdoc attribute, it will override the content specified in
the src attribute (if present). If a browser does NOT support the srcdoc
attribute, it will show the file specified in the src attribute instead (if
present).

So that’s an easy way to inject some string-ified HTML content into an iframe.
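For instance, building such an iframe from script could look like this (a minimal sketch; the markup is made up for illustration):

// Minimal sketch: inject string-ified HTML into an iframe via srcdoc.
// The markup here is made up for illustration.
let iframe = document.createElement("iframe");
iframe.srcdoc = "<h1>Hello from srcdoc!</h1><p>No separate file needed.</p>";
document.body.appendChild(iframe);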

Primitives on IPDL structs are not initialized automatically

I believe this is true for structs in C and C++ (and probably some other languages) in general, but primitives on IPDL structs do not get initialized automatically when the struct is instantiated. That means that things like booleans carry random memory values in them until they’re set. Having spent most of my time in JavaScript, I found that a bit surprising, but I’ve gotten used to it. I’m slowly getting more comfortable working at a lower level.
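Here is the same pitfall in plain C++, for illustration (a hypothetical struct; IPDL-generated structs behave the same way for primitive members):

#include <cstdint>

// Hypothetical struct for illustration; not an actual IPDL definition.
struct ExampleSettings {
  bool printToFile;  // NOT initialized automatically
  int32_t numCopies; // NOT initialized automatically
};

int main() {
  ExampleSettings s;     // members hold whatever garbage was in memory
  s.printToFile = false; // must be set explicitly before use
  s.numCopies = 1;
  return 0;
}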

This was the ultimate cause of this crasher bug that dbaron was running into while exercising the e10s printing code on a debug Nightly build on Linux.

This bug was opened to investigate initializing the primitives on IPDL structs automatically.

Networking is ultimately done in the parent process in multi-process Firefox

All network requests are proxied to the parent, which serializes the results back down to the child. Here’s the IPDL protocol for the proxy.

On bi-directional text and RTL

gw280 and I noticed that in single-process Firefox, a <select> dropdown set with dir=”rtl”, containing an <option> with the value “A)” would render the option as “(A”.

If the value was “A) Something else”, the string would come out unchanged.

We were curious to know why this flipping around was happening. It turned out that this is called “BiDi”, and some documentation for it is here.

If you want to see an interesting demonstration of BiDi, click this link, and then resize the browser window to reflow the text. Interesting to see where the period on that last line goes, no?

It might look strange to someone coming from a LTR language, but apparently it makes sense if you’re used to RTL.

I had not known that.

Some terminal spew

[Screenshot: a terminal window full of unexpected output]

Now what’s all this?

My friend and colleague Mike Hoye showed me the above screenshot upon coming into work earlier this week. He had apparently launched Nightly from the terminal, and at some point, all that stuff just showed up.

“What is all of that?”, he had asked me.

I hadn’t the foggiest idea – but a quick DXR search turned up basic_code_modules.cc inside Breakpad, the tool used to generate crash reports when things go wrong.

I referred him to bsmedberg, since that fellow knows tons about crash reporting.

Later that day, mhoye got back to me, and told me that apparently this was output spew from Firefox’s plugin hang detection code. Mystery solved!

So if you’re running Firefox from the terminal, and suddenly see some basic_code_modules.cc stuff show up… a plugin you’re running probably locked up, and Firefox shanked it.


  1. And probably a bunch of other peripherals as well 

Mike Conley: The Joy of Coding (Ep. 10): The Mystery of the Cache Key

In this episode, I kept my camera off, since I was having some audio-sync issues1.

I was also under some time-pressure, because I had a meeting scheduled for 2:30 ET2, giving me exactly 1.5 hours to do what I needed to do.

And what did I need to do?

I needed to figure out why an nsISHEntry, when passed to nsIWebPageDescriptor’s loadPage, was not enough to get the document out from the HTTP cache in some cases. 1.5 hours to figure it out – the pressure was on!

I don’t recall writing a single line of code. Instead, I spent most of my time inside Xcode, walking through various scenarios in the debugger, trying to figure out what was going on. And I eventually figured it out! Read this footnote for the TL;DR:3

Episode Agenda

References

Bug 1025146 – [e10s] Never load the source off of the network when viewing source

Notes


  1. I should have those resolved for Episode 11! 

  2. And when the stream finished, I found out the meeting had been postponed to next week, meaning that next week will also be a short episode. :( 

  3. Basically, the nsIChannel used to retrieve data over the network is implemented by HttpChannelChild in the content process. HttpChannelChild is really just a proxy to a proper nsIChannel on the parent side. On the child side, HttpChannelChild does not implement nsICachingChannel, which means we cannot get a cache key from it when creating a session history entry. With no cache key comes no ability to retrieve the document from the network cache via nsIWebPageDescriptor’s loadPage.

Alex Gibson: My second year working at Mozilla

This week marked my second year Mozillaversary. I had planned to write this blog post on the 15th of April, which would have marked the day I started, but this week flew by so quickly I almost completely missed it!

Carrying on from last year’s blog post, much of my second year at Mozilla has been spent working on various parts of mozilla.org, to which I made a total of 196 commits this year.

Much of my time has been spent working on Firefox on-boarding. Following the success of the on-boarding flow we built for the Firefox 29 Australis redesign last year, I went on to work on several more on-boarding flows to help introduce new features in Firefox. These included introducing the Firefox 33.1 privacy features, Developer Edition firstrun experience, 34.1 search engine changes, and 36.0 for Firefox Hello. I also got to work on the first time user experience for when a user makes their first Hello video call, which initially launched in 35.0. It was all a crazy amount of work from a lot of different people, but something I really enjoyed getting to work on alongside various other teams at Mozilla.

In between all that I also got to work on some other cool things, including the 2015 mozilla.org homepage redesign. Something I consider quite a privilege!

On the travel front, I got to visit both San Francisco and Santa Clara a bunch more times (I’m kind of losing count now). I also got to visit Portland for the first time when Mozilla had their all-hands week last December. What a great city!

I’m looking forward to whatever year three has in store!

Air Mozilla: Webdev Beer and Tell: April 2015

Once a month web developers across the Mozilla community get together (in person and virtually) to share what cool stuff we've been working on in...

Air Mozilla: Webmaker Demos April 17 2015

Webmaker Demos April 17 2015

Gregory Szorc: My Current Thoughts on System Administration

I attended PyCon last week. It's a great conference. You should attend. While I should write up a detailed trip report, I wanted to quickly share one of my takeaways.

Ansible was talked about a lot at PyCon. Sitting through a few presentations and talking with others helped me articulate why I've been drawn to Ansible (over, say, Puppet, Chef, Salt, etc.) lately.

First, Ansible doesn't require a central server. Administration is done remotely: Ansible establishes an SSH connection to a remote machine and does stuff. Having Ruby, Python, support libraries, etc. installed on production systems just for system administration never really jibed with me. I love Ansible's default hands-off approach. (Yes, you can use a central server for Ansible, but that's not the default behavior. While tools like Puppet could be used without a central server, it felt like they were optimized for central server use, and thus local mode felt awkward.)

Related to central servers, I never liked how that model has clients periodically polling for and applying updates. I like the idea of immutable server images, and periodic updates work against that goal. The central model also has a major bazooka pointed at you: at any time, you are only one mistake away from completely hosing every machine doing continuous polling. E.g. if you accidentally update firewall configs and lock out the central server and SSH connectivity, every machine will pick up these changes during periodic polling, and by the time anyone realizes what's happened, your machines are all effectively bricked. (Yes, I've seen this happen.) I like having humans control exactly when my systems apply changes, thank you. I concede periodic updates and central control have some benefits.

Choosing not to use a central server by default means that hosts are modeled as a set of applied Ansible playbooks, not necessarily as a host with a set of Ansible playbooks attached. (Ansible does support both models, though.) I can easily apply a playbook to a host in a one-off manner. This means I can have playbooks represent common, one-off tasks and I can easily run these tasks without having to muck around with the host-to-playbook configuration. More on this later.

I love the simplicity of Ansible's configuration. It is just YAML files. Not some Ruby-inspired DSL that takes hours to learn. With Ansible, I'm learning what modules are available and how they work, not complicated syntax. Yes, there is complexity in Ansible's configuration. But at least I'm not trying to figure out the file syntax as part of learning it.

Along that vein, I appreciate the readability of Ansible playbooks. They are simple, linear lists of tasks. Conceptually, I love the promise of full dependency graphs and concurrent execution. But I've spent enough hours debugging race conditions and cyclic dependencies in Puppet that I'm left unconvinced the complexity and power are worth it. I do wish Ansible could run faster by running things concurrently. But I think they made the right decision by following KISS.
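To make that concrete, here is roughly what a small playbook looks like (an illustrative sketch; the host group and package are made up, not taken from any real deployment):

---
# A playbook is a linear list of tasks applied to a group of hosts.
# "webservers" and the nginx package are illustrative.
- hosts: webservers
  tasks:
    - name: Ensure nginx is installed
      apt: name=nginx state=present

    - name: Ensure nginx is running
      service: name=nginx state=started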

I enjoy how Ansible playbooks are effectively high-level scripts. If I have a shell script or block of code, I can usually port it to Ansible pretty easily. One pass to do the conversion 1:1. Another pass to Ansibilize it. Simple.

I love how Ansible playbooks can be checked in to source control and live next to the code and applications they manage. I frequently see people maintain separate source control repositories for configuration management from the code it is managing. This always bothered me. When I write a service, I want the code for deploying and managing that service to live next to it in version control. That way, I get the configuration management and the code versioned in the same timeline. If I check out a release from 2 years ago, I should still be able to use its exact configuration management code. This becomes difficult to impossible when your organization is maintaining configuration management code in a separate repository where a central server is required to do deployments (see Puppet).

Before PyCon, I was having an internal monologue about adopting the policy that all changes to remote servers be implemented with Ansible playbooks. I'm pleased to report that a fellow contributor to the Mercurial project has adopted this workflow himself and has only great things to say! So, starting today, I'm going to try to enforce that every change I make to a remote server is performed via Ansible and that the Ansible playbooks are checked into version control. The Ansible playbooks will become implicit documentation of every process involved with maintaining a server.

I've already applied this principle to deploying MozReview. Before, there was some internal Mozilla wiki documenting commands to execute in a terminal to deploy MozReview. I have replaced that documentation with a one-liner that invokes Ansible. And, the Ansible files are now in a public repository.

If you poke around that repository, you'll see that I have Ansible playbooks referencing Docker. I have Ansible provisioning Docker images used by the test and development environment. That same Ansible code is used to configure our production systems (or is at least in the process of being used in that way). Having dev, test, and prod using the same configuration management has been a pipe dream of mine and I finally achieved it! I attempted this before with Puppet but was unable to make it work just right. The flexibility that Ansible's design decisions have enabled has made this finally possible.

Ansible is my go-to system management tool right now. And I still feel like I have a lot to learn about its hidden powers.

If you are still using Puppet, Chef, or other tools invented in previous generations, I urge you to check out Ansible. I think you'll be pleasantly surprised.

Mozilla Release Management Team: Firefox 38 beta4 to beta5

In this beta, we disabled the define EARLY_BETA_OR_EARLIER (used by some features to get testing during the first half of the beta cycle).

In this release, we took some changes related to Reading List, polish for the in-content preferences, and various minor crash fixes.

We also landed the stability fixes which should ship with the release 37.0.2.

  • 52 changesets
  • 86 files changed
  • 3766 insertions
  • 2141 deletions

Extension   Occurrences
cpp         25
js          14
h           9
jsm         6
java        6
css         4
list        3
ini         3
xml         2
idl         2
html        2
sh          1
py          1
MOZILLA     1
mn          1
mk          1
json        1
ipdl        1
in          1
common      1
c           1

Module      Occurrences
dom         25
mobile      12
browser     12
media       7
layout      6
toolkit     5
gfx         3
db          3
testing     2
services    2
netwerk     2
build       2
widget      1
security    1
js          1
config      1

List of changesets:

Jon Coppeard: Bug 1149526 - Check HeapPtrs have GC lifetime r=terrence a=sylvestre - 7ca7e178de40
Sylvestre Ledru: Post Beta 4: disable EARLY_BETA_OR_EARLIER a=me - 4c2454564144
Bill McCloskey: Back out Bug 1083897 a=backout - 56f805ac34ce
Bill McCloskey: Back out Bug 1103036 to resolve shutdown hangs a=backout - 8a5486269821
JW Wang: Bug 1153739 - Make Log() usable outside EME test cases. r=edwin, a=test-only - bf3ca76f10c3
JW Wang: Bug 1080685 - Add more debug aids and longer timeout. r=edwin, a=test-only - b2d1be38dab1
Sami Jaktholm: Bug 1150005 - Don't wait for "editor-selected" event in browser_styleeditor_fetch-from-cache.js as it may have already been emitted. r=bgrins, a=test-only - d1e3ce033c7a
Mark Hammond: Bug 1151666 - Fix intermittent orange by reducing verified timer intervals and always using mock storage. r=zaach, a=test-only - 87f3453f6cc0
Shu-yu Guo: Bug 996982 - Fix Debugger script delazification logic to account for relazified clones. r=bz, a=sledru - 5ca4e237b259
Brian Grinstead: Bug 1151259 - Switch <toolbar> to <box> to get rid of -moz-appearance styles for devtools sidebar. r=jryans, a=sledru - 7af104b169fa
Jared Wein: Bug 1152327 - ReadingListUI.init() should be called from delayedStartup, not onLoad. r=gavin, a=sledru - 9e1bf10888cd
Tim Nguyen: Bug 1013714 - Remove old OSX focusring from links in in-content prefs. r=Gijs, a=sledru - 48976876cdb9
Chris Pearce: Bug 1143278 - Make gmp-clearkey not require a Win8 only DLL to decode audio on Win7. r=edwin, a=sledru - f9f96ba1dbdb
Chris Pearce: Bug 1143278 - Add more null checks in gmp-clearkey's decoders. r=edwin, a=sledru - 5779893b39a5
Chris Pearce: Bug 1143278 - Use a different CLSID to instantiate the H264 decoder MFT in gmp-clearkey, as Win 7 Enterprise N requires that. r=edwin, a=sledru - dfce472edd1e
Chris Pearce: Bug 1143278 - Support IYUV and I420 in gmp-clearkey on Windows, as Win 7 Enterprise N's H.264 decoder doesn't output I420. r=edwin, a=sledru - 3beb9cbddb3f
Cameron McCormack: Bug 1153693 - Only call ReleaseRef on nsStyle{ClipPath,Filter} once when setting a new value. r=dbaron, a=sledru - f5d0342230c0
Milan Sreckovic: Bug 1152331 - If we do not delete indices array, it gets picked up down the line and breaks some assumptions in aboutSupport.js. r=dvander, a=sledru - 4cc36a9a958b
Richard Newman: Bug 1153358 - Client mitigation: don't upload stored_on. r=nalexander, a=sledru - 1412c445ff0d
Mark Hammond: Bug 1148701 - React to Backoff and Retry-After headers from Reading List server. r=adw, a=sledru - 91df81e2edac
Ryan VanderMeulen: Backed out changeset d1e3ce033c7a (Bug 1150005) for leaks - 4f36d5aff5cf
Cameron McCormack: Bug 1146101 - Call ClearCachedInheritedStyleDataOnDescendants on more style contexts that had structs swapped out from them. r=dbaron, a=sledru - baa8222aaafd
Reed Loden: Bug 1152939 - Upgrade to SQLite 3.8.9. r=mak77, a=sledru - 01e0d4e09b6d
Mike Hommey: Bug 1146738 - Fix race condition between js/src/target and js/src/host. r=mshal, a=NPOTB - 7496d2eea111
Ben Turner: Bug 1114788 - Disable failing test on workers. r=mrbkap, a=test-only - c82fcbeb7194
Matthew Gregan: Bug 1144199 - Require multiple consecutive timeouts to accumulate before triggering timeout error handling in libcubeb's WASAPI backend; this avoids spurious timeout errors triggered by system sleep/wake cycles. r=padenot, a=sledru - ea342656f3cb
Xidorn Quan: Bug 1145448 - Avoid painting native frame on fullscreen window when activate/inactivate. r=jimm, a=sledru - a27fb9b83867
Bas Schouten: Bug 1151361 - Wrap WARP D3D11 creation in a try catch block like done with regular D3D11. r=jrmuizel, a=sledru - 4954faa47dd0
Jan-Ivar Bruaroey: Bug 1153056 - Fix about:webrtc to not blank on zero allocated PeerConnections. r=jesup, a=sledru - e487ace8d7f9
Ryan VanderMeulen: Bug 1154434 - Bump mozharness.json to revision 4567c42063b7. a=test-only - 97856a6ac44d
Richard Newman: Bug 1153357 - Don't set SYNC_STATUS_MODIFIED unless an update touches fields that we sync. r=nalexander, a=sledru - 199b60ec60dc
vivek: Bug 1145567 - Display toolbar only after Domcontentloaded is triggered. r=margaret, a=sledru - df47a99c442f
Mark Goodwin: Bug 1153090 - Unaligned access in cert bock list. r=keeler, a=sledru - 58f203b17be2
Michael Comella: Bug 1148390 - Dynamically add padding to share icon on GB devices. r=wesj, a=sledru - e10ddd2bc05f
Ben Turner: Bug 1154599 - Revert unintentional change to crash reporting infra in changeset ce2692d64bcf. a=sledru - 7b296a71b115
Edwin Flores: Bug 1148071 - Fix CDM update behaviour. r=cpearce, a=sledru, ba=jorgev - 6c7e8d9f955c
Gijs Kruitbosch: Bug 1154447 - add aero asset for update badge, r=me, a=sylvestre - 98703ce041e2
Gijs Kruitbosch: Bug 1150703, allow about: pages to be unlinkable even if "safe for content", r=mcmanus, IGNORE IDL, ba=sylvestre - 5c9df6adebed
Gijs Kruitbosch: Bug 1150862, make about:reader unlinkable from content on mobile, r=margaret, a=sylvestre - a5203cabcc04
Gijs Kruitbosch: Bug 1150862, make about:reader unlinkable from content on desktop, r=gavin, a=sylvestre - 062e49bcb2da
Ryan VanderMeulen: Bug 1092202 - Skip testGetUserMedia for frequent failures. a=test-only - 85106e95bcb8
Ryan VanderMeulen: Bug 1123563 - Annotate test-animated-image-layers.html and test-animated-image-layers-background.html as random on Android and Linux. a=test-only - fe141895d7ab
Ryan VanderMeulen: Bug 1097721 - Skip test_mozaudiochannel.html on OSX 10.6 due to intermittent crashes. a=test-only - 86b6cb966d95
Ryan VanderMeulen: Bug 1021174 - Skip test_bug495145.html on OSX 10.6 due to intermittent crashes. a=test-only - 34331bbc9575
Mark Goodwin: Bug 1120748 - Resolve intermittent failure of browser_ssl_error_reports.js. r=ttaubert, a=test-only - 22eb12ac64e9
Ryan VanderMeulen: Bug 847903 - Skip 691096-1.html on OSX 10.6 due to intermittent crashes. a=test-only - 348cc6be3ba0
Nicolas Silva: Bug 1145981 - Do not crash when a DIB texture is updated without a compositor. r=jrmuizel, a=sledru - 16d7e20d9565
Xidorn Quan: Bug 1141931 - Part 0: Fix unicode-bidi value of ruby elements in html.css. a=sledru - ccb54262291d
Margaret Leibovic: Bug 1152121 - Factor out logic to get original URL from reader URL into shared place, and handle malformed URI excpetions. r=Gijs, r=mcomella, a=sledru - 7a10ff7fd9e4
Richard Newman: Bug 1153973 - Don't blindly apply deletions as insertions. r=nalexander, a=sledru - f9d36adcdf51
Xidorn Quan: Bug 1154814 - Move font rules from 'rt' to 'rtc, rt' and make text-emphasis conditional. r=heycam, a=sledru - 8fd05ce16a5f
Gijs Kruitbosch: Bug 1148923 - min-width the font menulists. r=jaws, a=sledru - 45a5eaa7813b

Matjaž Horvat: Offline localization by Sandra

Pontoon is a web application, which is great. You can run it on almost any device with any operating system. You can be sure you always have the latest version, so you don’t need to worry about updates. You don’t even need to download or install anything. There’s just one particular occasion when web applications aren’t so great.

When you’re offline.

Mostly that means the game is over. But it doesn’t need to be so. Application caching together with web storage has made offline web applications a reality. In its latest edition released yesterday, Pontoon now allows translating even when you’re offline. See full changelog for details.

There are many scenarios where offline localization is the only option our localizers have. A decent internet connection simply cannot be taken for granted in many parts of the world. If it’s hard for you to believe that, visit any local tech conference. :-) Or, if you started localizing at home, you can now continue localizing on your daily commute to work. And vice versa.

The way it works is very simple. After Pontoon detects you no longer have a connection, it saves translations to localStorage instead of the server. Once you get online again, the stored translations are sent to the server. In the meantime, connection-dependent functionality like History and Machinery is of course unavailable.
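The pattern is roughly this (a simplified sketch for illustration, not Pontoon’s actual code; sendToServer() and the storage key are made up):

// Simplified sketch of the offline pattern; not Pontoon's actual code.
// sendToServer() and the 'translationQueue' key are made up.
function saveTranslation(translation) {
  if (navigator.onLine) {
    sendToServer(translation);
  } else {
    // Queue the translation in localStorage until we're back online.
    let queue = JSON.parse(localStorage.getItem('translationQueue') || '[]');
    queue.push(translation);
    localStorage.setItem('translationQueue', JSON.stringify(queue));
  }
}

// When the connection comes back, flush the queue to the server.
window.addEventListener('online', function () {
  let queue = JSON.parse(localStorage.getItem('translationQueue') || '[]');
  queue.forEach(sendToServer);
  localStorage.removeItem('translationQueue');
});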

Offline mode was single-handedly developed by our new contributor, Sandra Shklyaeva. She just joined the Mozilla community and has already fixed one of our oldest bugs. She’s attacking the bugs everybody else was pushing away. I can’t wait to see what the future holds (shhhhh)!

Sandra has an interesting story on what got her attracted to Mozilla:

I was exploring some JS API on the developer.mozilla.org when I noticed pretty tabzilla on the top. I clicked it and my chrome became unresponsive completely XD. Maybe it was just a coincidence… Anyway, the tabzilla has caught my attention and that’s how I found out about Get Involved stuff in Mozilla.

If you also want to get involved, now you know where you can find us!

Karl Dubost: Web Compatibility in Japan

I'm living in Japan. And from time to time, I'm confronted with issues on the Japanese market when using Firefox on mobile (Firefox OS and Firefox for Android). The situation in Japan has similarities with the Chinese market. For example, many sites have been designed with old WebKit CSS properties only, such as flexbox. The sites have not been updated to the new set of properties.

We started our testing with a list of around 100 Japanese Web sites. This list needs to be refined and improved. After a first batch of testing one year ago, we ended up with a list of about 50 sites having some issues. Most of them have been tested against a Firefox OS User Agent aka something like User-Agent: Mozilla/5.0 (Mobile; rv:40.0) Gecko/40.0 Firefox/40.0 (the version number is irrelevant).

Here I'm making a summary of the issues to help us

  1. refine our future testing
  2. have a better understanding of the issues at stake

We currently have 51 bugs on Bugzilla (json) related to Web Compatibility issues on Japanese Web sites. Of these 51 bugs, 1 is a duplicate and 13 are resolved.

Type Of Issues

  • HTTP Redirection to a mobile domain based on User-Agent: HTTP header
  • JavaScript redirection to a mobile domain based on navigator.userAgent on the client side through window.location
  • Content customization based on User-Agent: or navigator.userAgent
  • Display of a banner to switch to a mobile version of the site based on User-Agent: or navigator.userAgent. Example: Asahi Web site
  • Receiving a mobile site with outdated WebKit CSS only properties
  • Site using a Web framework or JavaScript library which is exclusively compatible with a set of browsers. Example: Sencha on Nezu Museum. Not a lot can be done here.

Todo List For Better Testing

  • Most of these issues only scratch the surface, as we have usually tested only the home page. We need to test a couple of subpages per site as well
  • The sites need to be tested again with screenshots for:
  • Firefox OS User Agent (User-Agent: Mozilla/5.0 (Mobile; rv:40.0) Gecko/40.0 Firefox/40.0).
  • Firefox for Android User Agent (User-Agent: Mozilla/5.0 (Android; Mobile; rv:40.0) Gecko/40.0 Firefox/40.0).
  • Firefox for Android Modified User Agent (User-Agent: Mozilla/5.0 (Android 5.0; Mobile; rv:40.0) Gecko/40.0 Firefox/40.0) (this is a fake Firefox for Android UA, but some sites keep sending different versions, or no version at all, based on detection like /.*Android ([0-9])\.([0-9])/ or match(/Android\s+(\d\.\d)/i). DO NOT DO THIS AT HOME, at least not without a sensible fallback; see the sketch after this list)
  • A recent Android Chrome User Agent
  • A recent iOS Safari User Agent
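To illustrate what a sensible fallback looks like, here is a hypothetical sketch (the serveMobileSite function is made up for illustration):

// Hypothetical sketch: if you must sniff the Android version, keep a
// sensible fallback for UAs that carry no version at all.
// serveMobileSite() is made up for illustration.
var match = navigator.userAgent.match(/Android\s+(\d+\.\d+)/i);

if (match) {
  var androidVersion = parseFloat(match[1]);
  serveMobileSite(androidVersion); // version-specific handling
} else {
  // No Android version in the UA (e.g. Firefox OS): serve the default
  // mobile site instead of breaking.
  serveMobileSite(null);
}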

Some Ideas and Things We Can Do Together

There are a couple of things which can be done and where you can help.

  • Translating this article in Japanese.
  • Advocacy around you.
  • Publish an article about Web Compatibility and Recipes in the Japanese press (Web Designing, Web Creators, etc.). I can help. Or maybe we could propose a monthly column in Web Designing on “let's fix it”, where we would go through known site issues and how to solve them.
  • Contact Web sites directly and point them to the bugs.
  • Share with us if you know a person, or a friend of a friend, working on these sites/companies. Talk about it around you! A CTO, a Web developer, someone who can help us negotiate a change on the site.
  • Report sites which are broken on webcompat.com. It helps.

Old WebKit CSS, Flexbox Nightmare

Of all these efforts in contacting Web sites, the flexbox story is maybe the most frustrating one. I talked a couple of times about it: Fix Your Flexbox Web site and Flexbox old syntax to new syntax converter. The frustration comes from two things:

  1. It's very easy to fix.
  2. The sites are using the outdated first version of Flexbox, which was developed for WebKit only.

Switching to the new standard syntax would actually improve their customer reach and make them compatible with the future. It must also be frustrating for Apple and co., because it means they can't really retire the old code from their rendering engine without breaking sites. It's a chicken-and-egg situation. If you remove the support, you break sites but push sites to update. If you keep the support, sites don't get fixed, but users of other browsers can't use these sites. If users don't go to these sites, the browser doesn't show up in the stats, and so the site owners say: “We do not have to support this browser, nobody is using it on our site.” Yes… you know. Running in circles.

In the end, it forces other browser vendors to do dirty things to keep these sites usable for everyone.

Fixing Your CSS - Easy!

Hallvord Steen has developed a quick tool to help you fix your CSS. It's not perfect, but it will remove a big part of the hard work of figuring out how to convert WebKit-only flexbox or gradient properties to standard ones supported everywhere.
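For flexbox, the conversion is typically along these lines (an illustrative sketch, not output from the tool):

/* Before: old WebKit-only syntax, the kind many of these sites still ship */
.container {
  display: -webkit-box;
  -webkit-box-orient: horizontal;
}

/* After: standard syntax, with the old prefixes kept as fallbacks */
.container {
  display: -webkit-box;  /* old WebKit */
  display: -webkit-flex; /* newer WebKit */
  display: flex;         /* standard */
  -webkit-box-orient: horizontal;
  -webkit-flex-direction: row;
  flex-direction: row;
}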

Conclusion

All of this is part of a much bigger effort for Web Compatibility in general. In the next couple of days, I will go through all the bugs we have already opened and check if there is anything new.

If we get the flexbox/gradient issues and the User Agent sniffing right, we will probably have solved 80% of the Web Compatibility issues in Japan.

Otsukare!

Cameron Kaiser: systemsetupusthebomb revisited: Vulnerable after all

Previously, previously. tl;dr: Apple uses a setuid binary called writeconfig to alter certain system settings which on at least 10.7+ could be used to write arbitrary files as setuid root, allowing almost instantaneous privilege escalation -- i.e., your computer is now pwned. This was fixed in Yosemite 10.10.3, but not any previous version. Originally I had not been able to exploit my 10.4 systems in the same fashion, so despite the binary being there, I concluded the actual vulnerability did not exist.

Well, Takashi Yoshi has succeeded where I failed (I'm still pretty confident on Darwin Nuke, though), and I have confirmed it on my systems using his RootPipe Tester tool. Please note, before you run it, that this tool specifically exploits the vulnerability to write a setuid root file to disk, which, if he weren't a nice guy, would mean he now owns your system. Takashi is clearly a good guy, but with any such tool you may wish to get in the habit of building from source you've closely examined, which he provides. The systemsetupusthebomb vulnerability is indeed successful on all versions of OS X going back to at least 10.2.8.

The workaround for this vulnerability is straightforward in concept -- disable writeconfig or neuter it -- but has side effects, because if you monkey with writeconfig the system will lose the capability to control certain configuration profiles (in 10.4, this generally affects the Sharing pane in System Preferences; 10.5+, which specifically exposes systemsetup, may be affected in other ways) and may also affect remote administration capabilities. Takashi and I exchanged E-mails on two specific solutions. Both of these possible solutions will alter system functionality, in a hopefully reversible fashion, but a blown command may interfere with administering your computer. Read carefully.

One solution is to rename (or remove, but this is obviously more drastic) writeconfig to something else. Admittedly this works a bit too well. RootPipe Tester actually crashed, which may be useful to completely stop a malicious app in its tracks, but it also made System Preferences unstable and will likely do the same to any app expecting to use Admin.framework. Although 10.4 seemed to handle this a bit better, it too locked up the Sharing pane after banging on it a bit. However, you can be guaranteed nothing will happen in this configuration because it's not possible for it to occur -- apps looking for the victim ToolLiaison class won't be able to find it. Since I'm rarely in that panel, this is the approach I've personally selected for my own systems, but I'm also fully comfortable with the limitations. You can control this with two commands in Terminal on 10.4-10.6 (make sure you fixed the issue with sudo first!):

go to a safe state: cd /System/Library/PrivateFrameworks/Admin.framework/Versions/A/Resources/ ; sudo mv writeconfig noconfig
go to original state: cd /System/Library/PrivateFrameworks/Admin.framework/Versions/A/Resources/ ; sudo mv noconfig writeconfig

For added security, make noconfig a custom filename only you know so an attacker won't be easily able to find it in an alternate location ... or, if you're nucking futs, archive or delete it entirely. (Not recommended except for the fascistic maniac.)

Takashi found the second approach to be gentler, but is slightly less secure: strip the setuid bits off. In this mode, the vulnerability can still be exploited to write arbitrary files, but as it lacks the setuid permission it cannot run as root and the file is only written as the current user (so no privilege escalation, just an unexpected file write). Applications that use Admin.framework simply won't work as expected; they shouldn't crash. For example, System Preferences will just "look at you" in the Sharing panel when you try to change or start a new system service -- nothing will happen. For many users, this will be the better option. Here are the Terminal commands for 10.4-10.6:

go to a safe state: cd /System/Library/PrivateFrameworks/Admin.framework/Versions/A/Resources/ ; sudo chmod u-s writeconfig
go to original state: cd /System/Library/PrivateFrameworks/Admin.framework/Versions/A/Resources/ ; sudo chmod u+s writeconfig

Choose one of these options. Most of the time, you should leave your system in the safe state. If you need to change Sharing or certain other settings with systemsetup or System Preferences, return to the original state, make the change, and return to the safe state.

Of course, one other option is to simply do nothing. This might be a surprising choice, but Takashi does make the well-taken point that this attack can only be perpetrated upon an administrative user where root is just your password away anyhow, and no implementation of this attack other than his runs on PowerPC. This isn't good enough for me personally, but his argument is reasonable, and if you have to do a lot of configuration changes on your system I certainly understand how inconvenient these approaches could be. (Perhaps someone will figure out a patch for System Preferences that does it for you. I'll leave that exercise to the reader.) As in many such situations, you alone will have to decide how much you're willing to put up with, but it's good to see other people are also working to keep our older Macs better protected on OS X.

Ob10.4Fx IonPower status report: 75% of V8 passing, interrupted briefly tonight to watch the new Star Wars trailer. I have cautious, cautious hope it won't suck, but J. J. Abrams, if you disappoint me, it will be for the last time (to paraphrase).

The Rust Programming Language Blog: Mixing matching, mutation, and moves in Rust

One of the primary goals of the Rust project is to enable safe systems programming. Systems programming usually implies imperative programming, which in turn often implies side-effects, reasoning about shared state, et cetera.

At the same time, to provide safety, Rust programs and data types must be structured in a way that allows static checking to ensure soundness. Rust has features and restrictions that operate in tandem to ease writing programs that can pass these checks and thus ensure safety. For example, Rust incorporates the notion of ownership deeply into the language.

Rust's match expression is a construct that offers an interesting combination of such features and restrictions. A match expression takes an input value, classifies it, and then jumps to code written to handle the identified class of data.

In this post we explore how Rust processes such data via match. The crucial elements that match and its counterpart enum tie together are:

  • Structural pattern matching: case analysis with ergonomics vastly improved over a C or Java style switch statement.

  • Exhaustive case analysis: ensures that no case is omitted when processing an input.

  • match embraces both imperative and functional styles of programming: you can continue using break statements, assignments, et cetera, rather than being forced to adopt an expression-oriented mindset.

  • match "borrows" or "moves", as needed: Rust encourages the developer to think carefully about ownership and borrowing. To ensure that one is not forced to yield ownership of a value prematurely, match is designed with support for merely borrowing substructure (as opposed to always moving such substructure).

We cover each of the items above in detail below, but first we establish a foundation for the discussion: What does match look like, and how does it work?

The Basics of match

The match expression in Rust has this form:

match INPUT_EXPRESSION {
    PATTERNS_1 => RESULT_EXPRESSION_1,
    PATTERNS_2 => RESULT_EXPRESSION_2,
    ...
    PATTERNS_n => RESULT_EXPRESSION_n
}

where each of the PATTERNS_i contains at least one pattern. A pattern describes a subset of the possible values to which INPUT_EXPRESSION could evaluate. The syntax PATTERNS => RESULT_EXPRESSION is called a "match arm", or simply "arm".

Patterns can match simple values like integers or characters; they can also match user-defined symbolic data, defined via enum.

The below code demonstrates generating the next guess (poorly) in a number guessing game, given the answer from a previous guess.

enum Answer {
    Higher,
    Lower,
    Bingo,
}

fn suggest_guess(prior_guess: u32, answer: Answer) {
    match answer {
        Answer::Higher => println!("maybe try {} next", prior_guess + 10),
        Answer::Lower  => println!("maybe try {} next", prior_guess - 1),
        Answer::Bingo  => println!("we won with {}!", prior_guess),
    }
}

#[test]
fn demo_suggest_guess() {
    suggest_guess(10, Answer::Higher);
    suggest_guess(20, Answer::Lower);
    suggest_guess(19, Answer::Bingo);
}

(Incidentally, nearly all the code in this post is directly executable; you can cut-and-paste the code snippets into a file demo.rs, compile the file with --test, and run the resulting binary to see the tests run.)

Patterns can also match structured data (e.g. tuples, slices, user-defined data types) via corresponding patterns. In such patterns, one often binds parts of the input to local variables; those variables can then be used in the result expression.

The special _ pattern matches any single value, and is often used as a catch-all; the special .. pattern generalizes this by matching any series of values or name/value pairs.

Also, one can collapse multiple patterns into one arm by separating the patterns by vertical bars (|); thus that arm matches either this pattern, or that pattern, et cetera.

These features are illustrated in the following revision to the guessing-game answer generation strategy:

struct GuessState {
    guess: u32,
    answer: Answer,
    low: u32,
    high: u32,
}

fn suggest_guess_smarter(s: GuessState) {
    match s {
        // First arm only fires on Bingo; it binds `p` to last guess.
        GuessState { answer: Answer::Bingo, guess: p, .. } => {
     // ~~~~~~~~~~   ~~~~~~~~~~~~~~~~~~~~~  ~~~~~~~~  ~~
     //     |                 |                 |     |
     //     |                 |                 |     Ignore remaining fields
     //     |                 |                 |
     //     |                 |      Copy value of field `guess` into local variable `p`
     //     |                 |
     //     |   Test that `answer` field is equal to `Bingo`
     //     |
     //  Match against an instance of the struct `GuessState`

            println!("we won with {}!", p);
        }

        // Second arm fires if answer was too low or too high.
        // We want to find a new guess in the range (l..h), where:
        //
        // - If it was too low, then we want something higher, so we
        //   bind the guess to `l` and use our last high guess as `h`.
        // - If it was too high, then we want something lower; bind
        //   the guess to `h` and use our last low guess as `l`.
        GuessState { answer: Answer::Higher, low: _, guess: l, high: h } |
        GuessState { answer: Answer::Lower,  low: l, guess: h, high: _ } => {
     // ~~~~~~~~~~   ~~~~~~~~~~~~~~~~~~~~~   ~~~~~~  ~~~~~~~~  ~~~~~~~
     //     |                 |                 |        |        |
     //     |                 |                 |        |    Copy or ignore
     //     |                 |                 |        |    field `high`,
     //     |                 |                 |        |    as appropriate
     //     |                 |                 |        |
     //     |                 |                 |  Copy field `guess` into
     //     |                 |                 |  local variable `l` or `h`,
     //     |                 |                 |  as appropriate
     //     |                 |                 |
     //     |                 |    Copy value of field `low` into local
     //     |                 |    variable `l`, or ignore it, as appropriate
     //     |                 |
     //     |   Test that `answer` field is equal
     //     |   to `Higher` or `Lower`, as appropriate
     //     |
     //  Match against an instance of the struct `GuessState`

            let mid = l + ((h - l) / 2);
            println!("lets try {} next", mid);
        }
    }
}

#[test]
fn demo_guess_state() {
    suggest_guess_smarter(GuessState {
        guess: 20, answer: Answer::Lower, low: 10, high: 1000
    });
}

This ability to simultaneously perform case analysis and bind input substructure leads to powerful, clear, and concise code, focusing the reader's attention directly on the data relevant to the case at hand.

That is match in a nutshell.

So, what is the interplay between this construct and Rust's approach to ownership and safety in general?

Exhaustive case analysis

...when you have eliminated all which is impossible, then whatever remains, however improbable, must be the truth.

-- Sherlock Holmes (Arthur Conan Doyle, "The Blanched Soldier")

One useful way to tackle a complex problem is to break it down into individual cases and analyze each case individually. For this method of problem solving to work, the breakdown must be collectively exhaustive; all of the cases you identified must actually cover all possible scenarios.

Using enum and match in Rust can aid this process, because match enforces exhaustive case analysis: every possible input value for a match must be covered by the pattern in at least one arm in the match.

This helps catch bugs in program logic and ensures that the value of a match expression is well-defined.

So, for example, the following code is rejected at compile-time.

fn suggest_guess_broken(prior_guess: u32, answer: Answer) {
    let next_guess = match answer {
        Answer::Higher => prior_guess + 10,
        Answer::Lower  => prior_guess - 1,
        // ERROR: non-exhaustive patterns: `Bingo` not covered
    };
    println!("maybe try {} next", next_guess);
}

Many other languages offer a pattern matching construct (ML and various macro-based match implementations in Scheme both come to mind), but not all of them have this restriction.

Rust has this restriction for these reasons:

  • First, as noted above, dividing a problem into cases only yields a general solution if the cases are exhaustive. Exhaustiveness-checking exposes logical errors.

  • Second, exhaustiveness-checking can act as a refactoring aid. During the development process, I often add new variants for a particular enum definition. The exhaustiveness check helps point out all of the match expressions where I only wrote the cases from the prior version of the enum type.

  • Third, since match is an expression form, exhaustiveness ensures that such expressions always either evaluate to a value of the correct type, or jump elsewhere in the program.

Jumping out of a match

The following code is a fixed version of the suggest_guess_broken function we saw above; it directly illustrates "jumping elsewhere":

fn suggest_guess_fixed(prior_guess: u32, answer: Answer) {
    let next_guess = match answer {
        Answer::Higher => prior_guess + 10,
        Answer::Lower  => prior_guess - 1,
        Answer::Bingo  => {
            println!("we won with {}!", prior_guess);
            return;
        }
    };
    println!("maybe try {} next", next_guess);
}

#[test]
fn demo_guess_fixed() {
    suggest_guess_fixed(10, Answer::Higher);
    suggest_guess_fixed(20, Answer::Lower);
    suggest_guess_fixed(19, Answer::Bingo);
}

The suggest_guess_fixed function illustrates that match can handle some cases early (and then immediately return from the function), while computing whatever values are needed from the remaining cases and letting them fall through to the remainder of the function body.

We can add such special case handling via match without fear of overlooking a case, because match will force the case analysis to be exhaustive.

Algebraic Data Types and Structural Invariants

Algebraic data types succinctly describe classes of data and allow one to encode rich structural invariants. Rust uses enum and struct definitions for this purpose.

An enum type allows one to define mutually-exclusive classes of values. The examples shown above used enum for simple symbolic tags, but in Rust, enums can define much richer classes of data.

For example, a binary tree is either a leaf, or an internal node with references to two child trees. Here is one way to encode a tree of integers in Rust:

enum BinaryTree {
    Leaf(i32),
    Node(Box<BinaryTree>, i32, Box<BinaryTree>)
}

(The Box<V> type describes an owning reference to a heap-allocated instance of V; if you own a Box<V>, then you also own the V it contains, and can mutate it, lend out references to it, et cetera. When you finish with the box and let it fall out of scope, it will automatically clean up the resources associated with the heap-allocated V.)

The above enum definition ensures that if we are given a BinaryTree, it will always fall into one of the above two cases. One will never encounter a BinaryTree::Node that does not have a left-hand child. There is no need to check for null.

One does need to check whether a given BinaryTree is a Leaf or is a Node, but the compiler statically ensures such checks are done: you cannot accidentally interpret the data of a Leaf as if it were a Node, nor vice versa.

Here is a function that sums all of the integers in a tree using match.

fn tree_weight_v1(t: BinaryTree) -> i32 {
    match t {
        BinaryTree::Leaf(payload) => payload,
        BinaryTree::Node(left, payload, right) => {
            tree_weight_v1(*left) + payload + tree_weight_v1(*right)
        }
    }
}

/// Returns a tree that looks like:
///
///      +----(4)---+
///      |          |
///   +-(2)-+      [5]
///   |     |   
///  [1]   [3]
///
fn sample_tree() -> BinaryTree {
    let l1 = Box::new(BinaryTree::Leaf(1));
    let l3 = Box::new(BinaryTree::Leaf(3));
    let n2 = Box::new(BinaryTree::Node(l1, 2, l3));
    let l5 = Box::new(BinaryTree::Leaf(5));

    BinaryTree::Node(n2, 4, l5)
}

#[test]
fn tree_demo_1() {
    let tree = sample_tree();
    assert_eq!(tree_weight_v1(tree), (1 + 2 + 3) + 4 + 5);
}

Algebraic data types establish structural invariants that are strictly enforced by the language. (Even richer representation invariants can be maintained via the use of modules and privacy; but let us not digress from the topic at hand.)

Both expression- and statement-oriented

Unlike many languages that offer pattern matching, Rust embraces both statement- and expression-oriented programming.

Many functional languages that offer pattern matching encourage one to write in an "expression-oriented style", where the focus is always on the values returned by evaluating combinations of expressions, and side-effects are discouraged. This style contrasts with imperative languages, which encourage a statement-oriented style that focuses on sequences of commands executed solely for their side-effects.

Rust excels in supporting both styles.

Consider writing a function which maps a non-negative integer to a string rendering it as an ordinal ("1st", "2nd", "3rd", ...).

The following code uses range patterns to simplify things, but also, it is written in a style similar to a switch in a statement-oriented language like C (or C++, Java, et cetera), where the arms of the match are executed for their side-effect alone:

fn num_to_ordinal(x: u32) -> String {
    let suffix;
    match (x % 10, x % 100) {
        (1, 1) | (1, 21...91) => {
            suffix = "st";
        }
        (2, 2) | (2, 22...92) => {
            suffix = "nd";
        }
        (3, 3) | (3, 23...93) => {
            suffix = "rd";
        }
        _                     => {
            suffix = "th";
        }
    }
    return format!("{}{}", x, suffix);
}

#[test]
fn test_num_to_ordinal() {
    assert_eq!(num_to_ordinal(   0),    "0th");
    assert_eq!(num_to_ordinal(   1),    "1st");
    assert_eq!(num_to_ordinal(  12),   "12th");
    assert_eq!(num_to_ordinal(  22),   "22nd");
    assert_eq!(num_to_ordinal(  43),   "43rd");
    assert_eq!(num_to_ordinal(  67),   "67th");
    assert_eq!(num_to_ordinal(1901), "1901st");
}

The Rust compiler accepts the above program. This is notable because its static analyses ensure both:

  • suffix is always initialized before we run the format! at the end of the function, and

  • suffix is assigned at most once during the function's execution (because if we could assign suffix multiple times, the compiler would force us to mark suffix as mutable).

To be clear, the above program certainly can be written in an expression-oriented style in Rust; for example, like so:

fn num_to_ordinal_expr(x: u32) -> String {
    format!("{}{}", x, match (x % 10, x % 100) {
        (1, 1) | (1, 21...91) => "st",
        (2, 2) | (2, 22...92) => "nd",
        (3, 3) | (3, 23...93) => "rd",
        _                     => "th"
    })
}

Sometimes expression-oriented style can yield very succinct code; other times the style requires contortions that can be avoided by writing in a statement-oriented style. (The ability to return from one match arm in the suggest_guess_fixed function earlier was an example of this.)

Each of the styles has its use cases. Crucially, switching to a statement-oriented style in Rust does not sacrifice every other feature that Rust provides, such as the guarantee that a non-mut binding is assigned at most once.

An important case where this arises is when one wants to initialize some state and then borrow from it, but only on some control-flow branches.

fn sometimes_initialize(input: i32) {
    let string: String; // a dynamically-constructed string value
    let borrowed: &str; // a reference to string data
    match input {
        0...100 => {
            // Construct a String on the fly...
            string = format!("input prints as {}", input);
            // ... and then borrow from inside it.
            borrowed = &string[6..];
        }
        _ => {
            // String literals are *already* borrowed references
            borrowed = "expected between 0 and 100";
        }
    }
    println!("borrowed: {}", borrowed);

    // Below would cause compile-time error if uncommented...

    // println!("string: {}", string);

    // ...namely: error: use of possibly uninitialized variable: `string`
}

#[test]
fn demo_sometimes_initialize() {
    sometimes_initialize(23);  // this invocation will initialize `string`
    sometimes_initialize(123); // this one will not
}

The interesting thing about the above code is that after the match, we are not allowed to directly access string, because the compiler requires that the variable be initialized on every path through the program before it can be accessed. At the same time, we can, via borrowed, access data that may be held within string, because a reference to that data is held by the borrowed variable when we go through the first match arm, and we ensure borrowed itself is initialized on every execution path through the program that reaches the println! that uses borrowed.

(The compiler ensures that no outstanding borrows of the string data could possibly outlive string itself, and the generated code ensures that at the end of the scope of string, its data is deallocated if it was previously initialized.)

In short, for soundness, the Rust language ensures that data is always initialized before it is referenced, but the designers have strived to avoid requiring artificial coding patterns adopted solely to placate Rust's static analyses (such as requiring one to initialize string above with some dummy data, or requiring an expression-oriented style).

Matching without moving

Matching an input can borrow input substructure, without taking ownership; this is crucial for matching a reference (e.g. a value of type &T).

The "Algebraic Data Types" section above described a tree datatype, and showed a program that computed the sum of the integers in a tree instance.

That version of tree_weight has one big downside, however: it takes its input tree by value. Once you pass a tree to tree_weight_v1, that tree is gone (as in, deallocated).

#[test]
fn tree_demo_v1_fails() {
    let tree = sample_tree();
    assert_eq!(tree_weight_v1(tree), (1 + 2 + 3) + 4 + 5);

    // If you uncomment this line below ...

    // assert_eq!(tree_weight_v1(tree), (1 + 2 + 3) + 4 + 5);

    // ... you will get: error: use of moved value: `tree`
}

This is not a consequence, however, of using match; it is rather a consequence of the function signature that was chosen:

fn tree_weight_v1(t: BinaryTree) -> i32 { 0 }
//                   ^~~~~~~~~~ this means this function takes ownership of `t`

In fact, in Rust, match is designed to work quite well without taking ownership. In particular, the input to match is an L-value expression; this means that the input expression is evaluated to a memory location where the value lives. match works by doing this evaluation and then inspecting the data at that memory location.

(If the input expression is a variable name or a field/pointer dereference, then the L-value is just the location of that variable or field/memory. If the input expression is a function call or other operation that generates an unnamed temporary value, then it will be conceptually stored in a temporary area, and that is the memory location that match will inspect.)

So, if we want a version of tree_weight that merely borrows a tree rather than taking ownership of it, then we will need to make use of this feature of Rust's match.

fn tree_weight_v2(t: &BinaryTree) -> i32 {
    //               ^~~~~~~~~~~ The `&` means we are *borrowing* the tree
    match *t {
        BinaryTree::Leaf(payload) => payload,
        BinaryTree::Node(ref left, payload, ref right) => {
            tree_weight_v2(left) + payload + tree_weight_v2(right)
        }
    }
}

#[test]
fn tree_demo_2() {
    let tree = sample_tree();
    assert_eq!(tree_weight_v2(&tree), (1 + 2 + 3) + 4 + 5);
}

The function tree_weight_v2 looks very much like tree_weight_v1. The only differences are: we take t as a borrowed reference (the & in its type), we added a dereference *t, and, importantly, we use ref-bindings for left and right in the Node case.

The dereference *t, interpreted as an L-value expression, is just extracting the memory address where the BinaryTree is represented (since the t: &BinaryTree is just a reference to that data in memory). The *t here is not making a copy of the tree, nor moving it to a new temporary location, because match is treating it as an L-value.

The only piece left is the ref-binding, which is a crucial part of how destructuring bind of L-values works.

First, let us carefully state the meaning of a non-ref binding:

  • When matching a value of type T, an identifier pattern i will, on a successful match, move the value out of the original input and into i. Thus we can always conclude in such a case that i has type T (or more succinctly, "i: T").

For some types T, known as copyable T (also pronounced "T implements Copy"), the value will in fact be copied into i for such identifier patterns. (Note that in general, an arbitrary type T is not copyable.)

Either way, such pattern bindings do mean that the variable i has ownership of a value of type T.

Thus, the bindings of payload in tree_weight_v2 both have type i32; the i32 type implements Copy, so the weight is copied into payload in both arms.

Now we are ready to state what a ref-binding is:

  • When matching an L-value of type T, a ref-pattern ref i will, on a successful match, merely borrow a reference into the matched data. In other words, a successful ref i match of a value of type T will imply that i has the type of a reference to T (or more succinctly, "i: &T").

Thus, in the Node arm of tree_weight_v2, left will be a reference to the left-hand box (which holds a tree), and right will likewise reference the right-hand tree.

We can pass these borrowed references to trees into the recursive calls to tree_weight_v2, as the code demonstrates.

Likewise, a ref mut-pattern (ref mut i) will, on a successful match, borrow a mutable reference into the input: i: &mut T. This allows mutation and ensures there are no other active references to that data at the same time. A destructuring binding form like match allows one to take mutable references to disjoint parts of the data simultaneously.

This code demonstrates this concept by incrementing all of the values in a given tree.

fn tree_grow(t: &mut BinaryTree) {
    //          ^~~~~~~~~~~~~~~ `&mut`: we have exclusive access to the tree
    match *t {
        BinaryTree::Leaf(ref mut payload) => *payload += 1,
        BinaryTree::Node(ref mut left, ref mut payload, ref mut right) => {
            tree_grow(left);
            *payload += 1;
            tree_grow(right);
        }
    }
}

#[test]
fn tree_demo_3() {
    let mut tree = sample_tree();
    tree_grow(&mut tree);
    assert_eq!(tree_weight_v2(&tree), (2 + 3 + 4) + 5 + 6);
}

Note that the code above now binds payload by a ref mut-pattern; if it did not use a ref pattern, then payload would be bound to a local copy of the integer, while we want to modify the actual integer in the tree itself. Thus we need a reference to that integer.

Note also that the code is able to bind left and right simultaneously in the Node arm. The compiler knows that the two values cannot alias, and thus it allows both &mut-references to live simultaneously.

Conclusion

Rust takes the ideas of algebraic data types and pattern matching pioneered by the functional programming languages, and adapts them to imperative programming styles and Rust's own ownership and borrowing systems. The enum and match forms provide clean data definitions and expressive power, while static analysis ensures that the resulting programs are safe.

For more information on details that were not covered here, such as:

  • how to say Higher instead of Answer::Higher in a pattern,

  • defining new named constants,

  • binding via ident @ pattern, or

  • the potentially subtle difference between { let id = expr; ... } versus match expr { id => { ... } },

consult the Rust documentation, or quiz our awesome community (in #rust on IRC, or in the user group).
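
As a small, hedged taste of the first and third of those items, here is a sketch assuming an Answer enum with variants Higher, Lower, and Bingo (the post's actual Answer type may differ slightly):

enum Answer { Higher, Lower, Bingo }

// A `use` declaration brings the variants into scope, so a pattern
// can say `Bingo` rather than `Answer::Bingo`.
use Answer::*;

fn report(a: Answer) -> String {
    match a {
        Bingo => format!("you got it!"),
        // `ident @ pattern` binds the whole matched value to `ident`
        // while still checking it against `pattern`.
        hint @ Higher | hint @ Lower => {
            let dir = match hint { Higher => "higher", _ => "lower" };
            format!("try {}", dir)
        }
    }
}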

(Many thanks to those who helped review this post, especially Aaron Turon and Niko Matsakis, as well as Mutabah, proc, libfud, asQuirrel, and annodomini from #rust.)

Sean McArthurmerit badge

Are you jealous of the badges that npm modules or rubygems have, showing off their latest published version? Of course you are. I was too. When looking at a repository, the version may change as code is merged in, before being published to crates.io. Now, instead of looking at a repo and wondering what version is latest, that repo can add a crates.io badge.

Here’s hyper’s: crates.io

Besides wanting to be able to use badges like these for my own repositories, I also wanted a simple example of using hyper to create a small app. Bonus points: it uses the hyper Server and Client together.

Go ahead, get your merit badge.

Air MozillaParticipation at Mozilla

Participation at Mozilla The Participation Forum

Gervase MarkhamTop 50 DOS Problems Solved: Whoops, I Deleted Everything

Q: I accidentally deleted all the files in the root directory of my hard disk for the second time this month. I managed to reinstall everything, but is there a way of avoiding the problem?

A: There are two approaches you could try, both of which have applications for other things too:

  • Modify the files so that they cannot be deleted without first explicitly making them deletable. You can do this with the DOS utility Attrib which was supplied with your system. … To protect the file use the command:

    ATTRIB +R filename

    The +R switch means “make this file read-only”.

  • Stop using the DEL command to delete files. Use a batch file instead which will prompt you before taking action.

    … <batch file code is given> …

    This batch file has a useful enhancement beyond the precautionary message. You can use it to specify multiple files, for example:

    DF *.BAK FRED.BAS ?.DOC

    With one command this would delete all .BAK files, FRED.BAS, and all .DOC files whose names begin with a single letter.

A delete command which takes multiple arguments – wow…

Gervase MarkhamIf You Like…

Air MozillaKids' Vision - Mentorship Series

Kids' Vision - Mentorship Series Mozilla hosts Kids Vision Bay Area Mentor Series

Air MozillaQuality Team (QA) Public Meeting

Quality Team (QA) Public Meeting This is the meeting where all the Mozilla quality teams meet, swap ideas, exchange notes on what is upcoming, and strategize around community building and...

Air MozillaProduct Coordination Meeting

Product Coordination Meeting Duration: 10 minutes This is a weekly status meeting, every Wednesday, that helps coordinate the shipping of our products (across 4 release channels) in order...

Air MozillaThe Joy of Coding (mconley livehacks on Firefox) - Episode 10

The Joy of Coding (mconley livehacks on Firefox) - Episode 10 Watch mconley livehack on Firefox Desktop bugs!

Kim MoirMozilla pushes - March 2015

Here's March 2015's monthly analysis of the pushes to our Mozilla development trees. You can load the data as an HTML page or as a json file.

Trends
The number of pushes increased from the previous month, with a total of 10943.

Highlights
  • 10943 pushes
  • 353 pushes/day (average)
  • Highest number of pushes/day: 579 pushes on Mar 11, 2015
  • 23.18 pushes/hour (highest average)

General Remarks
  • Try continues to account for around 49% of all the pushes
  • The three integration repositories (fx-team, mozilla-inbound and b2g-inbound) account for around 26% of all the pushes.

Records
  • August 2014 was the month with most pushes (13090 pushes)
  • August 2014 had the highest pushes/day average with 422 pushes/day
  • July 2014 had the highest average of "pushes-per-hour" with 23.51 pushes/hour
  • October 8, 2014 had the highest number of pushes in one day with 715 pushes

Soledad PenadesRunning a web server on the front-end

The introduction of TCP sockets support in Firefox OS made it possible to run a web server from the front-end, written entirely in JavaScript. Think of having something similar to express.js… but running on a browser (because after all, Firefox OS is a superturbocharged browser).

Again, JS server superstar Justin d’Archangelo wrote an implementation of a web server that works on Firefox OS. It’s called fxos-web-server and it includes a few examples you can run.

None of the examples particularly fit my use case–I wanted to serve static content from a phone to other phones, and the examples were a bit more contrived. So I decided to build a simpler proof-of-concept example: catserver, a web server that serves a simple page with full-screen animated GIFs of cats:

Browserify

The first thing I wanted to do was to use Browserify “proper”, to write my app in a more modular way. For this I had to fork Justin’s original project and modify its package.json so it would let me require() the server instead of tucking its variable in the window globals :-) – sadly I have yet to send him a PR, so you will need to be aware of this difference. The dependency in package.json points to my fork for now:

"fxos-web-server": "git+https://github.com/sole/fxos-web-server.git"

Building (with gulp)

My example is composed of two “websites”. The first one is the web server app itself which is what is executed in the “server” device. The sources for this are in src.

The second website is what the server will transfer to the devices that connect to it, so this actually gets executed on the client devices. This is where the cats are! Its contents are in the www folder.

Both websites are packaged in just one ZIP file and then installed onto the server device.

This build process also involves running Browserify first so I can get from node-style code that uses require() to a JavaScript bundle that runs on Firefox OS. So I’m using gulp to do all that. The tasks are in gulpfile.js.

Web server app

Thanks to Justin’s work, this is fairly simple. We create an HTTP server on port 80:

var HTTPServer = require('fxos-web-server');
var server = new HTTPServer(80);

And then we add a listener for when a request is made, so we can respond to it (and serve content to the connected client):

server.addEventListener('request', function(evt) {
      var request = evt.request;
      var response = evt.response;

      //... Decide what to send
});

The “decide what to send” part is a bit like writing nginx or express config files. In this case I wanted to:

  • serve an index.html file if we request a “directory” (i.e. a path that ends with “/”)
  • serve static content if we request a file (i.e. the path doesn’t end with “/”)

Before serving a file we need to tell the client what kind of content we’re sending to it. I’m using a very simple “extension to content type” function that determines MIME types based on the extension in the path. E.g. ‘html’ returns text/html.

Once we know the content type, we set it on the response headers:

response.headers['Content-Type'] = getContentType(fileToSend);

and use Justin’s shortcut function sendFile to send a file from our app to the client:

response.sendFile(fileToSend);

With the request handler set up, we can finally start the server!

server.start();

But… how does it even work?!

Welcome to the Hack of The Week!

When you call sendFile, the server app creates an XMLHttpRequest with responseType = 'arraybuffer' to load the raw contents of the resource. When the XMLHttpRequest emits its load event, the server takes the loaded data and sends it to the client. That’s it!

Naive! Simple! It works! (for simple cases)

Ways that this could be improved – AKA “wanna make this better?! this is your chance!”

As I mentioned above, right now this is very naive and assumes that the files will exist. If they don’t, well, horrible things will happen. Or you will get a 404 error. I haven’t tried it myself. I’m not sure. I’d say there is no error handling (yet).

The extension to content type function could be made into a module, and probably extended to know about more file types. It probably exists already, somewhere in npmlandia.

Another idea I had is that instead of loading the entire resource in memory and then flushing it down the pipe as sendFile does, we could use progress events on the XMLHttpRequest and feed the data to the client as we load it–so that the server device won’t run out of memory. I don’t know how we could find the length of the resource first; perhaps a HEAD request could work!

And finally we could try to serve each request in a worker, so that the server doesn’t block when responding to a large request. Am I going too far? I don’t even know if the TCP sockets work with workers, or if there are issues with that, or who knows!? This is unexplored territory, welcome to Uncertainty Land! :-D

Even extremer ways to get very… “creative”

What if you wrote a PHP parser that runs in JavaScript and then parsed .php files instead of just returning their contents as text/html?

I’m only half kidding, but you could maybe execute JS templates on the server. Forget security! You trust the content, right? ;-)

Another thing you could do is take advantage of the fact that the server is also a browser. So you can do browsersy things such as using Canvas to generate images, instead of having to load libraries that simulate canvas in node. Or you could synthesise web audio stuff on demand–maybe you could use an OfflineAudioWorker for extra non-blocking goodness! Or, if you want to go towards a relatively more boring direction, you could do DOM / text handling on a server that can deal with that kind of stuff natively.

With all the new platforms on which we can run Firefox OS, there are so many things we can do! Phone servers might be limited by battery, but a Raspberry Pi running Firefox OS and connected to a power source can be an interesting platform with which to experiment with this kind of easy-to-write web server.

But I saw NFC mentioned in the video!

Indeed! But I will cover that in another post, so we keep this one focused on the HTTP server :-)

Happy www serving!


Mozilla Release Management TeamFirefox mobile 37.0.1 to 37.0.2

37.0.2 for mobile fixes a critical issue for our Japanese users.

A 37.0.2 Desktop release is planned to fix some issues (especially graphics). It should be published in the next few days.

  • 1 changeset
  • 4 files changed
  • 4 insertions
  • 5 deletions

Extension  Occurrences
txt        2
sh         1
in         1

Module     Occurrences
mobile     2
config     1
browser    1

List of changesets:

Mark FinkleBug 1151469 - Tweak the package manifest to avoid packaging the wrong file. r=rnewman, a=lmandel - c8866e34cbf3

Eitan Isaacson(re)Introducing eSpeak.js

tl;dr

Look! A flashy demo with buttons!

Background

A long time ago, we were investigating a way to expose text-to-speech functionality on the web. This was long before the Web Speech API was drafted, and it wasn’t yet clear what this kind of feature would look like. Alon Zakai stepped up, and proposed porting eSpeak to Javascript with Emscripten. This was a provocative idea: was our platform powerful enough to support speech synthesis purely in JS? Alon got back a few days later with a working demo; the answer was “yes”.

While the speak.js port was very impressive, it didn’t answer many of our practical needs. For example, the latency was not good enough for making a responsive UI: you could wait more than a couple of seconds to hear a short phrase. In addition, the longer the text you wanted to synthesize, the longer you needed to wait.

It proved a concept, but there were missing pieces we didn’t have four years ago. Today, we live in the future of 2011, and things that were theoretical then are possible now (in the future).

asm.js

Today, Emscripten will compile C/C++ code into a subset of Javascript called asm.js. This subset is optimized on all current browsers, and allows performance within about 2x of native. That is really good. eSpeak is a pretty lightweight library already; the extra performance boost of asm.js makes speech instantaneous.

Transferable Objects

Passing data between a web worker and a parent process used to mean a lot of copying, since the worker doesn’t share memory with the parent process. But today, you can transfer ownership of ArrayBuffers with zero copying. When the web worker is ready to send audio data back to the calling process, it could do so while maintaining a single copy of the audio buffer.

Web Audio API

We have a slick, full featured Audio API today on the web. When speak.js came out in 2011, it used a prefixed method on an <audio> element to write PCM data to. Today, we have a proper API that enables us to take the audio data and send it through an elaborate pipeline of filters and mixers, or even send it into the ether with WebRTC.

Emscripten Got Fancy

This was my first time playing with it, so I am not sure what was available in 2011. But, if I have to guess, it was not as powerful and fun to work with. Emscripten’s new WebIDL support makes adding bindings extremely easy. You still get a chance to do some pointer arithmetic, but that’s supposed to be fun. Right?

So here is eSpeak.js!

I wanted to do a real API port, as opposed to simply porting a command line program that takes input and writes a WAV file. Why? Two main reasons:

  1. eSpeak can progressively synthesize speech. If you provide a callback to espeak_Synth(), it will be called repeatedly with as many samples as you defined in the buffer size. It doesn’t matter how long the text you want synthesized is: it will fill the buffer and return it to you immediately. This allows for a consistent low latency from the moment you call espeak_Synth() until you can start playing audio.
  2. eSpeak supports events. If you use a callback, you get access to a list of events that provide a timestamp in the audio, and the type of event that occurs there, such as word or sentence boundaries.

And, of course, with all the recent-ish platform improvements above, it was really time for a fresh attempt.

Future Work

  • Break up the data files. Right now, eSpeak.js is over a 2MB download. That’s because I packaged all the eSpeak data files indiscriminately. There may be a few bits that are redundant. On the flip side, you get all 99 voice/language combinations (that’s a good deal for 2MB, eh?). It would be cool to break it up into a few data files and allow the developer to choose which voices to bundle or, even better, just grab them on demand.
  • Make a demo of the speech events. It makes my head hurt to think about how to do something compelling. But it is a neat feature that should somehow be shown.
  • ScriptProcessorNode is apparently deprecated. This is going to need to be ported to an AudioWorker once that is widely implemented.

I’m done apologizing; here is the demo.


Mike HommeyAnnouncing git-cinnabar 0.2.1

Git-cinnabar is a git remote helper to interact with mercurial repositories. It allows you to clone, pull and push from/to mercurial remote repositories, using git.

Get it on github.

What’s new since 0.2.0?

Not much, but this felt important enough to warrant a release, even though the issue has been there since before 0.1.0:

Mercurial can be slower when cloning or pulling a list of “heads” that contains non-topological heads. On repositories like the mercurial repository, it’s not so much of a big deal, taking 7s instead of 4s. But on big repositories like mozilla-central, it takes 23 minutes instead of 2 minutes and 20s (on my machine). And that’s with 100% CPU use on the server side.

The problem is that mozilla-central recently merged some old closed heads, such that it now has branch heads that aren’t topological heads. Git-cinnabar, until this release, would request those branch heads, leading the server to use the slow path mentioned above. This release works around the issue.

It also fixes an issue pushing to a remote empty mercurial repository.

Mike HommeyGetting ready for GCC 5.1

Confusingly, GCC has a new, weird version scheme. The first release of GCC 5 will be 5.1. It is due soon (next week). For that reason, and because it’s better to find compiler bugs before it’s released, I started looking into building Firefox with it.

The first round of builds I did was with 5-20150405. That got me to find a small batch of issues, including an Internal Compiler Error (ICE).

So that got me to do a second round with the first 5.1 RC, which had the fix for that ICE.

With all the above fixed, I could finally get builds out of try, and tests running, which revealed two more issues:

  • Another (quickly fixed) Internal Compiler Error on 32-bits PGO builds (but only for a nightly setup, with --enable-profiling, not for a release setup, which doesn’t have it).
  • JS engine assertions during some JIT tests on 64-bits builds (with or without PGO), which Dan Gohman kindly tracked down and reduced to a small test case, allowing me to file a GCC bug and bisect to pinpoint the GCC upstream commit that broke it (yay git bisect run on a 36-CPU EC2 instance).

Preliminary results are promising, with benchmarks improving up to 16%, but the comparison wasn’t entirely fair, because it compared GCC 4.8 builds with frame pointers and JS engine diagnostics to GCC 5.1 builds without.

I’ll also give LTO a spin, possibly finding more GCC bugs in the process.

Anthony HughesReport on Recognition

Earlier this year I embarked on a journey to investigate how we could improve participation in the Mozilla QA community. I was growing concerned that we were failing in one key component of a vibrant community: recognition.

Before I could begin, I needed to better understand Mozilla QA’s recognition story. To this end I published a survey to get anonymous feedback, and today I’d like to share some of that feedback.

Profile of Participants

The first question I asked was intended to profile the respondents in terms of how long they’d been involved with Mozilla and whether they were still contributing.

recognition-activity

This revealed that we have a larger proportion of contributors who’ve been involved for more than a couple of years. I think this indicates we need to be doing a better job of developing long-term relationships with new contributors.

recognition-teams

When asked which projects contributors identified with, 100% of respondents identified as being volunteers with the Firefox QA team. The remaining teams break down fairly evenly between 11% and 33%. I think this indicates most people are contributing to more than one team, and that teams at the lower end of the scale have an excellent opportunity for growth.

Recognizing Recognition

The rest of the questions were focused more on evaluating the forms of recognition we’ve employed in the past.

recognition-forms

When looking at how we’ve recognized contributors, it’s good to see that everyone is being recognized in some form or another, in many cases receiving multiple forms of recognition. However, I suspect the results are somewhat skewed (i.e. people who haven’t been recognized are probably long gone and did not respond to the survey). In spite of that, it appears that seemingly simple things, like being thanked in a meeting, are well below what I’d expect to see.

recognition-likelihood

When looking at the impact of being recognized, it seems that more people found recognition to be nice but not necessarily a motivation for continuing to contribute. 44% found recognition to be either ineffective or very ineffective, while 33% found it to be either effective or very effective. This could point to a couple of different factors: either our forms of recognition are not compelling, or people are motivated by the work itself. I don’t have a good answer here, so it’s probably worth following up.

What did we learn?

When all is said and done, here is what I learned from doing this survey.

1. We need to be focused on building long-term relationships. Helping people through their first year and making sure people don’t get lost long-term.

2. Most people are contributing to multiple projects. We should have a framework in place that facilitates contribution (and recognition of contribution) across QA. Teams with less participation can then scale more quickly.

3. We need to be more proactive in our recognition, especially in its simplest form. There is literally no excuse for not thanking someone for work done.

4. People like to be thanked for their work but it isn’t necessarily a definitive motivator for participation. We need to learn more about what drives individuals and make sure we provide them whatever they need to stay motivated.

5. Recognition is not as well “baked-in” to QA as it is with other teams — we should work with these teams to improve recognition within QA and across Mozilla.

6. Contributors find testing to be difficult due to inadequate description of how to test. In some cases, people spend considerable amounts of time and energy figuring out what and how to test, presenting a huge hurdle to newcomers in particular. We should make sure contribution opportunities are clearly documented so that anyone can get involved.

7. We should be engaging with Mozilla Reps to build a better, more regional network of QA contributors, beginning with giving local leaders the opportunity to lead.

Next Steps

In closing, I’d like to thank everyone who took the time to share their feedback. The survey remains open if you missed the opportunity. I’m hoping this blog post will help kickstart a conversation about improving recognition of contributions to Mozilla QA, and in particular about making progress on some of the lessons learned above.

As always, I welcome comments and questions. Feel free to leave a comment below.

Cheers!

Yunier José Sosa VázquezFirefox will prevent others from collecting our information

When it comes to protecting privacy, we have to acknowledge the efforts Mozilla makes to keep us safe in a world where everyone wants to spy on us and steal our information at any cost. It is for that reason that Mozilla received the award for the most trusted Internet company.

Firefox already includes the Do Not Track feature, which asks websites not to monitor your web activity, although companies are not obliged to comply. Mozilla is not stopping there, however, and the next version of Firefox will introduce a new security filter that we can already enable manually.

Before going on, it is important to understand what online tracking is. Online tracking is the collection of a person’s browsing data across different websites, and this collection generally happens just by viewing the content of a website. Tracking domains try to identify a person through the use of cookies or other technologies, such as fingerprinting.

So what solution does Mozilla offer? Firefox’s Tracking Protection puts you back in control of your privacy by actively blocking domains and websites that are known to track their visitors.

The initial blacklist used by the Tracking Protection feature is based on the Disconnect blacklist.
If you want to try this feature, install the Nightly build for your system, available in our Downloads area.

How do you enable Tracking Protection?

  1. In the address bar, type about:config and press Enter/Return.
    • The about:config warning (“This might void your warranty!”) may appear. Click “I’ll be careful, I promise!” to continue.
  2. Search for privacy.trackingprotection.enabled.
  3. Double-click privacy.trackingprotection.enabled to change its value to true.

This will enable Tracking Protection. If you later want to disable it, repeat the steps above to change the value back to false.

How to use Tracking Protection

Once Tracking Protection is enabled, you will see a shield in the address bar whenever Firefox is blocking tracking from that domain or from mixed content.

firefox-trackingprotection-1

To disable Tracking Protection on a particular website, simply click the shield icon and select “Disable protection for this site”. Once Tracking Protection is disabled on a website, you will see a shield with a red cross. You can re-enable Tracking Protection on that website by clicking the shield again and selecting “Enable protection”.

firefox-trackingprotection-2

To see which resources are being blocked, open the web console and look at the messages in the Security tab.

Source: Firefox Help

Will Kahn-GreeneInput: 2015q1 quarter in review

We got a lot of stuff done in 2015q1--it was a busy quarter.

Things to know:

  • Input is Mozilla's product feedback site.
  • Fjord is the code that runs Input.
  • We maintain project details and plans at https://wiki.mozilla.org/Firefox/Input.
  • I am Will Kahn-Greene and I'm the tech lead, architect, QA and primary developer on Input.

Bugzilla and git stats

Quarter 2015q1 (2015-01-01 -> 2015-03-31)
=========================================


Bugzilla
========

Bugs created: 73
Creators: 7

         Will Kahn-Greene [:willkg] : 67
     Gregg Lind (User Advocacy - He : 1
                  Shashishekhar H M : 1
              Matt Grimes [:Matt_G] : 1
           Mark Banner (:standard8) : 1
                         deshrajdry : 1
                      L. Guruprasad : 1

Bugs resolved: 97

                            WONTFIX : 13
                              FIXED : 82
                          DUPLICATE : 1
                         INCOMPLETE : 1

                         Tracebacks : 4
                           Research : 2
                            Tracker : 6

Research bugs: 2

    1124412: [research] evaluate SUMO search APIs for best results
        given a piece of feedback

Tracker bugs: 6

    907871: [tracker] add analytics infrastructure and reports to
        Input
    967037: [tracker] add classifier page to Firefox OS feedback form
    968230: [tracker] capture carrier in Firefox OS form
    1092280: [tracker] heartbeat v2 (Input-specific changes)
    1104932: [tracker] about:support support tracker
    1130599: [tracker] Alerts phase 1

Resolvers: 6

         Will Kahn-Greene [:willkg] : 65
                      L. Guruprasad : 16
     Ricky Rosario [:rrosario, :r1c : 7
                             aokoye : 5
                            mgrimes : 2
                         deshrajdry : 2

Commenters: 22

                             willkg : 579
                           rrosario : 25
                         deshrajdry : 10
                            mcooper : 10
                          lgp171188 : 10
                            mgrimes : 9
                                cww : 8
                              glind : 8
                         adnan.ayon : 6
                             aokoye : 4
                               robb : 4
                            brnet00 : 3
                          mozaakash : 2
                        John99-bugs : 2
                           chofmann : 2
                            bwalker : 1
                              laura : 1
                          standard8 : 1
                    shashishekharhm : 1
                          christian : 1
                            fwenzel : 1
                          dlucian93 : 1

git
===

Total commits: 276

      Will Kahn-Greene :   189  (+9707, -3904, files 558)
         L. Guruprasad :    38  (+916, -402, files 160)
         Ricky Rosario :    36  (+1756, -10764, files 314)
            Adam Okoye :     6  (+210, -12, files 12)
               deshraj :     3  (+19, -19, files 5)
         Michael Kelly :     2  (+2, -2, files 4)
      Adrian Gaudebert :     2  (+20, -6, files 4)

Total lines added:   12630
Total lines deleted: 15109
Total files changed: 1057

Everyone
========

    Adam Okoye
    adnan.ayon
    Adrian Gaudebert
    brnet00
    bwalker
    chofmann
    christian
    cww
    deshrajdry
    dlucian93
    fwenzel
    Gregg Lind (User Advocacy - Heartbeat - Test Pilot)
    John99-bugs
    L. Guruprasad
    laura
    Mark Banner (:standard8)
    Matt Grimes [:Matt_G]
    mcooper
    Michael Kelly
    mozaakash
    Ricky Rosario
    robb
    Shashishekhar H M
    standard8
    Will Kahn-Greene

Code line counts:

2014q1: April 1st, 2014:        15195 total  6953 Python
2014q2: July 1st, 2014:         20456 total  9247 Python
2014q3: October 7th, 2014:      23466 total  11614 Python
2014q4: December 31st, 2014:    30158 total  13615 Python
2015q1: April 1st, 2015:        28977 total  12623 Python

Input finally shrunk, though this is probably due to switching from the South migration system to the Django 1.7 migration system and in the process of doing that ditching most of our old migration code.

Contributor stats

L Guruprasad worked through 16 bugs this quarter--that's awesome!

Adam worked on the Thank You page overhaul. It's not quite done, but it's in a good place--I'll be finishing up that work in 2015q2.

Ricky finished up the Django 1.7 update just in time for Django 1.8 to be released. In doing that work, we cleaned up a lot of code, shed a bunch of dependencies, and are in a much better place in regards to technical debt. Yay!

Thank you to everyone who contributed!

Accomplishments

Django 1.7 upgrade: We upgraded to Django 1.7. That's a big deal since Django 1.8 was just released so Django 1.6 isn't supported anymore. Django 1.7 has a new migration system, so there was a lot of work required to upgrade Input.

Heartbeat v2: We did most of Heartbeat v2 in 2014q4, however it didn't really launch until 2015q1. We did a bunch of work to tweak things for the release.

Alerts v1: We added an Alerts API. Input collects a variety of feedback-type data. After several discussions, we decided that it was a better idea to have alert systems live outside of Input, but push alert events to Input. This allows us to develop alert-emitting systems faster because they’re outside of the Input development process. Further, it relaxes implementation details. The Alerts API has GET and POST abilities and lets us capture and report on arbitrary alert events.

Alerts API.

Remote troubleshooting data capture: We finished this work in 2015q1. It's now rolled out for specific products and in all locales.

Remote troubleshooting data capture project plan.

12 Factor App: At some point, we're going to move Input to AWS. In the process of doing that, we're going to change how Input is configured and deployed and switch to a 12-factor-app-friendly model. I spent a good portion of this quarter cleaning things up and redoing configuration so it's more 12-factor-app-compliant.

There's still some work to do, but it'll be easier to do as we're in the process of switching to AWS and know more about how the infrastructure is going to be structured.

12 Factor App.

Snow removal: I live next town over from Lowell, MA, USA. We got 118 inches of snow this winter the bulk of which came in a 6-week period where it pretty much snowed every three days. It was exhausting.

I did a lot of shoveling, but never really solved the problem. However, it did subside after a while and now it's gone.

Snow removal.

Summary

2015q1 went by really fast and we got a lot of stuff done and we worked through a lot of technical debt, too. It was a good quarter.

Air MozillaMartes mozilleros

Martes mozilleros Bi-weekly meeting to talk (in Spanish) about the state of Mozilla, the community and its projects.

Mozilla Release Management TeamFirefox 38 beta3 to beta4

This beta release is going back to a normal level in terms of the number of patches and size.

We took some important fixes for graphics and also uplifted some polish patches for the new features which will ship with 38 or 38.1 (Reading list, in-tab preferences, etc).

  • 33 changesets
  • 68 files changed
  • 1204 insertions
  • 986 deletions

Extension  Occurrences
cpp        21
js         14
html       6
ini        5
h          5
xml        4
jsm        4
java       2
css        2
webidl     1
py         1
list       1
inc        1
idl        1

Module     Occurrences
browser    19
dom        8
layout     7
toolkit    6
mobile     6
widget     5
testing    4
gfx        4
editor     3
media      2
js         2
xpcom      1
ipc        1

List of changesets:

Ryan VanderMeulenBug 1153060 - Bump the in-tree mozharness revision. a=test-only - 7f7236a5b6dd
Patrick BrossetBug 1139925 - Make the BoxModelHighlighter highlight all quads and draw guides around the outer-most rect. r=miker, a=sledru - e2f81a3ca1e5
Brian HackettBug 1151401 - Watch for non-object unboxes while optimizing object-or-null operations. r=jandem, a=sledru - d2bade84e15e
Gijs KruitboschBug 1147337 - Stop checking article URL as AboutReader.jsm gets created separately every time anyway. r=margaret, a=sledru - 948241aa9d1a
Gijs KruitboschBug 1152104 - Use command event and delegate clicks to it. r=jaws, a=sledru - f5f2adb88968
Matt WoodrowBug 1116812 - Blacklist two intel GPUs that are trigger driver crashes frequently. r=jrmuizel, a=sledru - 6d9fdd280e65
Jean-Yves AvenardBug 1133633: Part2. Enable async decoding on mac. r=mattmoodrow, a=sledru - 10f75583d21a
Jean-Yves AvenardBug 1153469: Ensure IOSurface isn't released before being composited. r=mattwoodrow, a=sledru - 7efd806788be
Kannan VijayanBug 1134515 - Ensure SPSBaselineOSRMarker checks pseudostack size properly. r=shu, a=sledru - 10c3198eb453
Aryeh GregorBug 1134545 - Insufficient null check. r=ehsan, a=sledru - 5f042fe29707
Chris PearceBug 1148286 - Ensure we don't nullpointer deref if the CDM crashes in MediaKeys and Reader::SetCDMProxy implementations. r=edwin, a=sledru - 999636e73165
Jean-Yves AvenardBug 1147744 - Part 1: Round down display size. r=k17e, a=sledru - 8f8ebd186863
Jean-Yves AvenardBug 1147744 - Part 2: Properly calculate cropping value. r=k17e, a=sledru - c4a01c159cb6
Margaret LeibovicBug 1150695 - Use isProbablyReaderable function from Readability.js. r=Gijs, a=sledru - ab0337907115
Drew WillcoxonBug 1151077 - Make the desktop reading list sync module batch its POST /batch requests. r=markh, a=sledru - 1b6ba1cb52f6
John SchoenickBug 1139560 - Reject non-standard parses of integers in srcset descriptors. r=jst, a=sledru - dffb5c867f47
John SchoenickBug 1139560 - Fix srcset parser 'After descriptor' state mishandling spaces. r=jst, a=sledru - 07666fc071be
John SchoenickBug 1139560 - <img>.currentSrc should be not be nullable. r=jst, a=sledru - 7285a02cd883
Mike TaylorBug 1139560 - Update srcset web-platform expectations. r=jst, a=sledru - cd5b5709b2e4
Ben TurnerBug 1135344 - Don't let IPDL use names that are reserved for compilers. r=froydnj, a=sledru - 8340415e7b27
Jared WeinBug 1149230 - In-content preferences: missing padding between labels and learn more links in Advanced -> Data Choices panel. rs=Gijs, a=sledru - 002faed66e96
Mark HammondBug 1152703 - Prevent desktop reading list sync errors from preventing sync from starting again. r=adw, a=sledru - 8191b45753c7
Blake WintonBug 1149136 - Specify a min-width and min-height to avoid flex making things too small. ui-r=mmaslaney, r=florian, a=sledru - 0a9b3f3e962d
Michael ComellaBug 1153193 - Add EXTRA_DEVICES_ONLY flag to share intents. r=rnewman, a=sledru - 7081bbd2b331
Margaret LeibovicBug 1153262 - Remove length comparison from testReadingListCache. r=gijs, a=sledru - 2ffc047abd90
Michael ComellaBug 1150430 - Set quickshare !visible and !enabled by default. r=mfinkle, a=sledru - b54c44cfa07e
Jared WeinBug 1043612 - Persist the size of resizable in-content subdialogs. r=gijs, a=sledru - 9740c1d817f1
Michael ComellaBug 1152489 - Prevent getMostRecentHomePanel() from being called on null selectedTab. r=mfinkle, a=sledru - afe57494b44d
Mats PalmgrenBug 1143299 - Make frame insertion methods deal with aPrevFrame being on an overflow list. r=roc, a=sledru - 2659ba26dcf2
Gijs KruitboschBug 1152022 - Update Readability to github tip. r=gijs, r=margaret, a=sledru - 333017ad43a9
Andrew McCreightBug 1144649 - Make CCGraph::AddNodeToMap fallible again. r=smaug, a=sledru - 7717f3aa4cf6
L. David BaronBug 1148829 - Backport a safer version of part of Bug 1061364 to make transitions stop running the refresh driver after they've finished. r=bbirtles, a=sledru - 4359c16b7f44
Jeff MuizelaarBug 1153381 - Add a D3D11 ANGLE blacklist. r=mstange, ba=sledru - 05508ccf3ae8

Planet Mozilla InternsWillie Cheong: Academics Complete

I’ve written many last exams before. But today I finish writing my last, last exam; graduation awaits. Life is pretty exciting right now. School has been an amazing experience, but being done feels much better.

Goodbye Waterloo; Hello World!

Byron Joneshappy bmo push day!

the following changes have been pushed to bugzilla.mozilla.org:

  • [1152160] “take” button doesn’t update the ui, so it looks like nothing happened
  • [1152163] passing an invalid bug id to the multiple bug format triggers: Can’t call method “name” on an undefined value
  • [1152167] “powered by” logo requests fails: it sets the assignee to dboswell
  • [1150448] Replace the newline with ” – ” when the bug’s id and summary are copied
  • [1152368] BUGZILLA_VERSION in Bugzill::Constants causes error when installing Perl deps for new BMO installation
  • [1090493] Allow ComponentWatching extension to work on either bmo/4.2 or upstream 5.0+
  • [1152662] user story text should wrap
  • [1152818] changing an assignee to nobody@mozilla.org or any .bugs address should automatically reset the status from ASSIGNED to NEW
  • [1149406] “project flags” label is visible even if there aren’t any project flags
  • [1031035] xmlrpc can be DoS’d with billion laughs attack
  • [1152118] Shortcut for editing gets triggered even when “ctrl” and “e” are not pressed at the same time
  • [1152360] Add parameter to checksetup.pl that generates a cpanfile usable by utilities such as cpanm for installing Perl dependencies
  • [1148490] Custom Budget Request form for FSA program
  • [1154098] Unable to add mentors to bugs
  • [1146767] update relative dates without refreshing the page
  • [1153103] add hooks for legal product disclaimer

discuss these changes on mozilla.tools.bmo.


Filed under: bmo, mozilla

Air MozillaElasticsearch Meetup

Elasticsearch Meetup Join us as Mozilla hosts an ElasticSearch meetup, featuring a duo inviting you along on a whirlwind dive of the ElasticSearch .NET client

Jet VillegasGaia Tips and Tricks for Gecko Hackers

I’m often assigned Firefox Rendering bugs in bugzilla. By the time a bug gets assigned to me, the reporter has usually exhausted other options and assumed (correctly) that I’m ultimately responsible for fixing Firefox rendering bugs. Of course, I often have to reassign most bugs to more capable individuals.

Some of the hardest bugs to assign are the ones reported by our own Gaia team: the team responsible for building the user experience in Firefox OS. The Gaia engineers take CSS and JavaScript and build powerful mobile apps like the phone dialer and SMS client. When they report bugs, the problem is often buried within lots of CSS and JS code. I wanted to learn how to effectively reduce the time it takes to resolve rendering issues reported by the Gaia team. It takes a long time to go from a Gaia bug like “scrolling in the gallery app is slow” to the underlying Gecko bug, for example “rounding issue creates an invalidation rectangle that is too large.”

To do that, I became a Gaia developer for a few days at our Paris office. I reasoned that if I could learn how they work, then I could help my team boil down issues faster and become more responsive to their needs. We already recognize the value of having expert web application developers on staff, but we could do a better job with a better understanding of how they work. With that in mind, I spent the week without any C++ code to look at, and dived into the world of mobile web app development.

I wrote down the steps I took to set up a FirefoxOS build and test environment in an earlier post. This time, I’ll list a few of the tips and tricks I learned while I was working with the Gaia developers.

The first and most important tip: you will brick the phone when working on the OS. In fact, you’re probably not trying hard enough if you don’t brick it :) Fastboot lets you connect ADB to the phone when it becomes unresponsive, so you can flash the device with a known good system (like the base image). Learn how to manually force fastboot on your phone.

Julien showed me how to maintain a Gaia developer profile on your desktop development environment. This set of commands will configure your B2G build to produce the desktop B2G runtime that’s a bit easier to debug than a device build:

# change value of the FIREFOX to point to the full path to the B2G desktop build
 export FIREFOX=/Volumes/firefoxos/B2G/build/dist/B2G.app/Contents/MacOS/b2g
 export PROFILE_FOLDER=gaia-profile DEBUG=1 DESKTOP=0
 make

With a Gaia developer profile, you can switch between B2G desktop and a regular Firefox browser build for testing:

export FIREFOX=/full/path/to/desktop/browser
 $FIREFOX -profile gaia-profile --no-remote app://sms.gaiamobile.org

The Gaia profile lets you use URLs like app://sms.gaiamobile.org to run the Gaia apps on the desktop browser. This trick alone was a huge time saver! Try it with other URLs like app://communications.gaiamobile.org

For a first Gaia development project, I picked up the implementation of the new card view for Gaia that is based on asynchronous panning and zooming (APZC). Etienne did the initial proof-of-concept and my goal is to rebase/finish/polish it and add some CSS Scroll Snapping features. My initial tests for this feature are very promising. CSS Scroll Snapping is much more responsive than the previous JavaScript-based implementation. I’m still working out some bugs but hope to land my first Gaia pull request soon.

I’ve already been able to apply what I’ve learned to triage bugs like this one. The bug started out described as a problem with how we launch GMail on B2G in Arabic. Based on the testing tricks I learned from the Gaia team, I was able to distill it to a root cause with scrollbar rendering on right-to-left (RTL) languages. I added a simplified test case to the bug that should greatly reduce debugging time, and assigned it to one of our RTL experts. That’s quite a bit better than assigning tough bugs to random developers with the entire OS as the test case!

Thanks to Julien and Etienne for helping me get up to speed. I highly recommend that any Gecko engineer spend a few days as a Gaia hacker. I’m humbled by the ingenuity these developers have for building the entire OS user experience with only the capabilities offered by the Web. We could all learn a lot in the trenches with these hackers!

John O'Duinn“why work doesnt happen at work” by Jason Fried on TEDx

While reading “Remote”, I accidentally found this TEDx talk by one of the authors, Jason Fried. Somehow I’d missed this when it first came out in 2010, so I stopped to watch it. I’ve now watched it a few times in a row and found it just as relevant today as it was 4-5 years ago, so I am writing this blogpost.

The main highlights for me were:

1) work, like sleep, needs solid uninterrupted time. However, most offices are designed to enable interrupts. Open plan layouts. Phones. Casual walk-by interrupts from managers asking for status. Unneeded meetings. They are not designed for uninterrupted focus time. No-one would intentionally plan to have frequently-interrupted-sleep every night and consider it “good”, so why set up our work environments like this?

2) Many people go into the office for the day, attempting to get a few hours of uninterrupted work done, only to spend all day reacting to interrupts, and then lament at the end of the day that “they didn’t get anything done”! Been there, lived through that. As a manager, he urges people to try things like “no-talking-Thursdays”, just to see if people can actually be more productive.

3) The “where do you go when you really want to get work done” part of his presentation nailed it for me. He’s been asking people this question for years, and the answers tend to fall into three categories:

  • place: “the kitchen”, “the spare room”, “the coffee shop”, …
  • moving object: plane, train, car… the commute
  • time: “somewhere really early or really late at night or on the weekend”

… and he noted that no-one said “the office during office hours”!! The common theme is that people use locations where they can focus, knowing they will not get interrupted. When I need to focus, I know this is true for me also.

All of which leads to his premise that organizing how people work together, with most communication done in a less interruptive way, is really important for productivity. Anyone who has been at one of my remoties sessions knows I strongly believe this is true – especially for remoties! He also asked why businesses spend so much money on these counter-productive offices.

Aside: I found his “Facebook and twitter are the modern day smoke breaks” comment quite funny! Maybe that’s just my sense of humor. Overall, it’s a short 15min talk, so instead of your next “facebook/twitter/smokebreak”, grab a coffee and watch this. You’ll be glad you did.

Nick CameronContributing to Rust

I wrote a few things about contributing to Rust. What with the imminent 1.0 release, now is a great time to learn more about Rust and contribute code, tests, or docs to Rust itself or a bunch of other exciting projects.

The main thing I wanted to do was make it easy to find issues to work on. I also stuck in a few links to various things that new contributors should find useful.

I hope it is useful, and feel free to ping me (nrc in #rust-internals) if you want more info.

Matt ThompsonMozilla Learning Networks: what’s next?

What have the Mozilla Learning Networks accomplished so far this year? What’s coming next in Q2? This post includes a slide presentation, analysis and interview with Mozilla’s Chris Lawrence, Michelle Thorne and Lainie DeCoursy. It’s a summary of a more detailed report on the quarter here. Join the discussion on #teachtheweb.

What’s the goal?

Establish Mozilla as the best place to teach and learn the web.

Not only the technical aspects of the open web — but also its culture, citizenship and collaborative ethos.

How will we measure that? Through relationships and reach.

2015 goal: ongoing learning activity in 500 cities

In 2015, our key performance indicator (KPI) is to establish ongoing, on-the-ground activity in 500 cities around the world. The key word is ongoing — we’ve had big success in one-off events through programs like Maker Party. This year, we want to grow those tiny sparks into ongoing, year-round activity through clubs and lasting networks.

From one-off events to lasting Clubs and Networks

Maker Party events help activate and on-board local contributors. Clubs give them something more lasting to do. Hive Networks grow further into city-wide impact.

What are we working on?

These key initiatives:

  1. teach.mozilla.org
  2. Web Clubs
  3. Hive Networks
  4. Maker Party
  5. MozFest
  6. Badges


teach.mozilla.org

teach.mozilla.org will provide a new home for all our teaching offerings — including Maker Party.

What we did: developed the site, which will soft launch in late April.

What’s next: adding dynamic content like blogs, curriculum and community features. Then make it easier for our community to find and connect with each other.


Web Clubs

We shipped the model and tested it in 24 cities. Next up: train 10 Regional Coordinators. And grow to 100 clubs.

This is a new initiative, evolved from the success of Maker Party. The goal: take the sparks of activation created through Maker Party and sustain them year-round, with local groups teaching the web on an ongoing basis — in their homes, schools, libraries, everywhere.

What we did:

  • Established pilot Clubs in 24 cities. With 40 community volunteers.
  • Shipped new Clubs curriculum, “Web Literacy Basics.”
  • Field-tested it. With 40 educators and learners from 24 cities, including Helsinki, Pune, Baltimore, Wellington and Cape Town.
  • Developed a community leadership model. With three specific roles: Club Leader, Regional Coordinator, and Organizer. (Learning from volunteer organizing models like Obama for America, Free the Children and Coder Dojo.)

What’s next:

  • Train 10 Regional Coordinators. Each of whom will work to seed 10 clubs in their respective regions.
  • Develop new curriculum. For Privacy, Mobile and “Teach like Mozilla.”

Hive Networks

What we did:

We added four new cities in Q1, bringing our total to 11. Next up: grow to 15.

  • We welcomed 4 new cities into the Hive family: Hive Vancouver, Mombasa, Denver and Bangalore.
  • Made it easier for new cities to join. Clarified how interested cities can become official Hive Learning Communities and shipped new “Hive Cookbook” documentation.

What’s next:

  • Strengthen links between Clubs and new potential Hives. With shared community leadership roles.
  • Document best practices. For building sustainable networks and incubating innovative projects.
  • Ship a fundraising toolkit. To help new Hives raise their own local funding.


Maker Party

A global kick-off from July 15 – 31, seeding local activity that runs year-round.

What we did: created a plan for Maker Party 2015, building off our previous success to create sustained local activity around teaching web literacy.

What’s next: this year Maker Party will start with a big two-week global kick-off campaign, July 15-31. We’ll encourage people to try out activities from the new Clubs curriculum.

Mozilla Festival

This year’s MozFest will focus on leadership development and training

Mark your calendars: MozFest 2015 will take place November 6 – 8 in London.
A key focus this year is on leadership development; we’ll offer training to our Regional Co-ordinators and build skill development for all attendees. Plus run another Hive Global meet-up, following on last year’s success.

What’s next: refine the narrative arc leading up to MozFest. Communicate this year’s focus and outcomes.

Badges

What we did: In Q1 our focus was on planning and decision making.

What’s next: improve the user experience for badge issuers and earners.

Community voices

  • “I run two tech programmes in Argentina. I do it outside of my job, and it can be tricky to find other committed volunteers with skills and staying power. I’d love help, resources and community to do it with.” –Alvar Maciel, school teacher, Buenos Aires, Argentina
  • “I always thought I’d visit websites. Not make them! But now I can.” – middle school student from PASE Explorers, NYC afterschool program
  • “Our partnership with Hive makes us fresh, keeps us moving forward rather than doing the same old thing all the time.” –Dr. Michelle Larson, President and CEO, Adler Planetarium, Hive Chicago
  • “We had constant demand from our community members for web literacy classes, and we were finally able to create a great recipe with Web Clubs and curriculum.” –Elio Qoshi, Super Mentor/Mozilla Rep, Albania

Partnerships

The focus this year is on building partnerships that help us: 1) activate more mentors and 2) reach more cities. This builds on the success of partnerships like National Writing Project (NWP) and CoderDojo, and has sparked conversations with new potential partners like the Peace Corps.

Key challenges

  • It’s hard to track sustained engagement offline. We often rely on contributors to self-report their activity — as much of it happens offline, and can’t be tracked in an automated way. How can we incentivize updates and report-backs from community members? How do other organizations tackle this?
  • Establishing new brand relationships. We’ve changed our branding. Our current community of educators grew in deep connection with Webmaker. But in 2015 we made a decision to more closely align learning network efforts directly with the Mozilla brand. How can we best transition the community through this, and simplify our overall branding?
  • Quantifying impact. We’re getting better at demonstrating quantity, as in the numbers of events we host or cities we reach. But those measurements don’t help us measure the net end result or overall impact. How do we get better at that?

Mozilla Science LabMozilla Science Lab Week in Review, April 6-12

The Week in Review is our weekly roundup of what’s new in open science from the past week. If you have news or announcements you’d like passed on to the community, be sure to share on Twitter with @mozillascience and @billdoesphysics, or join our mailing list and get in touch there.

Blogs & Articles

  • Erin McKiernan put the call out on Twitter last week for examples of collaborations arising from open science & open data, and got a great spectrum (from worm simulations to text mining Philip K. Dick) of responses; see her summary here.
  • Hackaday interviewed Charles Fracchia of the MIT Media Lab on the need and impact of open hardware in open science. Fracchia makes the observation that reproducibility is well-served by distributing standardized data collection hardware that can be deployed in many labs & conditions.
  • Figshare blogged recently about decisions taken by the US Health & Human Services department obliging its operating divisions to make government funded research data available to the public.
  • Jonathan Rochkind blogged on the general unusability of institutional library paywall & login systems, and discusses potential solutions in the form of LibX, bookmarklets and Zotero & co.
  • Nature Biotechnology is engaging in more proactive editorial oversight to ensure the reproducibility of the computational studies it publishes, by way of ensuring the availability of relevant research objects.
  • Shoaib Sufi blogged for the Software Sustainability Institute on their recent Collaborations Workshop 2015. In it, Sufi highlights some of the trends emerging in the conversation around developing research software, including the cultural battle in research with imposter phenomenon (see also our recent article on this matter), and the rising profile of containerization as a fundamental tool for reproducible research.

Conferences & Meetings

  • OpenCon 2015 has been announced for 14-16 November, in Brussels, Belgium. From the conference’s website, ‘the event will bring together students and early career academic professionals from across the world to learn about the issues, develop critical skills, and return home ready to catalyze action toward a more open system for sharing the world’s information — from scholarly and scientific research, to educational materials, to digital data.‘ Applications for OpenCon open on 1 June; updates are available from their mailing list. Also, here’s Erin McKiernan’s thoughts on OpenCon 2014.
  • Jake VanderPlas gave a great talk on Fast Numerical Computing with NumPy at PyCon 2015 on Friday.
  • The European Space Agency is organizing a conference entitled Earth Observation Science 2.0 at ESRIN, Frascati, Italy, on 12-14 October.  Topics include open science & data, citizen science, data visualization and data science as they pertain to earth observation; submissions are open until 15 May.
  • The French National Natural History Museum is planning three open forums on biodiversity, designed to collect broad-based input to inform the theme and goals of a forthcoming observatory. The project extends the principles of citizen science to include the public in the discussion surrounding not just data collection, but scientific program design.

Tools & Services

  • Harvard’s Dataverse.org project has made CC0 the default license for all data deposited in version 4.0 of the service, citing the license’s familiarity to the open data community.
  • The US Federal government’s open data portal, data.gov, has created a new theme section highlighting climate & human health data. From their website, ‘The Human Health Theme section allows users to access data, information, and decision tools describing and analyzing climate change impacts on public health. Extreme heat and precipitation, air pollution, diseases carried by vectors, and food and water-borne illnesses are just some of the topics addressed in these resources.’
  • GitHub is inviting users to participate in a test of their forthcoming support for the new Git large file storage extension to the popular version control system.
  • The Ocean Observation Initiative, a multi-site array of heavily instrumented underwater observatories, is set to come on-line in June. Data from the OOI is slated for open access distribution.

Chris Ilias: My Installed Add-ons – Keyword Search

I love finding new extensions that do things I never even thought to search for. One of the best ways to find them is through word of mouth. In this case, I guess you can call it “word of blog”. I’m doing a series of blog posts about the extensions I use, and maybe you’ll see one that you want to use.

My previous posts have been about:

For this blog post, I’ll talk about Keyword Search.
In Firefox, whenever you do a web search from the location bar, it will use the same search engine as in the search bar. Keyword Search allows you to use a separate search engine for location bar web searches. This is really helpful to people like me who mainly use one search engine (for basic web searches) and others for content-specific use cases.

To set your location bar search engine, go to the add-ons manager.

  1. Beside “Keyword Search”, click Preferences.
  2. Beside “Keyword Search Engine”, select the search engine you want to use.

You can install it via the Mozilla Add-ons site.

This Week In Rust: This Week in Rust 77

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Send me an email! Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors or omissions in this week's issue, please submit a PR.

What's cooking on master?

104 pull requests and 6 RFC PRs were merged in the last week.

Now you can follow breaking changes as they happen!

Breaking Changes

Other Changes

New Contributors

  • Ben Ashford
  • Christopher Chambers
  • Dominick Allen
  • Hajime Morrita
  • Igor Strebezhev
  • Josh Triplett
  • Luke Gallagher
  • Michael Alexander
  • Michael Macias
  • Oak
  • Remi Rampin
  • Sean Bowe
  • Tibor Benke
  • Will Hipschman
  • Xue Fuqiao

Approved RFCs

New RFCs

Notable Links

Project Updates

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email Erick Tryzelaar or Brian Anderson for access.

Quote of the Week

<frankmcsherry> rust is like a big bucket of solder and wire, with the promise that you can't electrocute yourself.

From #rust.

Thanks to BurntSushi for the tip. Submit your quotes for next week!

Cameron Kaiser: Darwin Nuke the Refrigerator, Wet Sprocket, etc.

Two more security notes.

First, as a followup, a couple of you pointed out that there is a writeconfig on 10.4 through 10.6 (and actually earlier) in /System/Library/PrivateFrameworks/Admin.framework/Versions/A/Resources. Yes, there is, and it's even setuid root (I wish Apple wouldn't do that). However, it is not exploitable, at least not by systemsetupusthebomb or a similar notion, because it appears to lack the functionality required for that sort of attack. I should have mentioned this in my prior posting.

Second, Darwin Nuke is now making the rounds. It is similar to the old WinNuke, which plagued early versions of Windows until it was corrected in the Windows 95 days, in that you can send a specially crafted packet to an OS X machine and kernel panic it. It's not as easy as WinNuke was, though -- that was as simple as opening a TCP connection to port 139 on the victim machine and sending it nonsense data with the Urgent Pointer flag set in the TCP header. Anyone could do that with a modified Telnet client, for example, and there were many fire-and-forget tools that were even easier. Unless you specifically blocked such connections on ingress, and many home users and quite a few business networks didn't at the time, WinNuke was a great means to ruin someone's day. (I may or may not have done this from my Power Mac 7300 a couple times to kick annoying people off IRC. Maybe.)

Darwin Nuke, on the other hand, requires you to send a specially crafted invalid ICMP packet. This is somewhat harder to trigger remotely as many firewalls and routers will drop this sort of malformed network traffic, so it's more of a threat on an unprotected LAN. Nevertheless, an attacker with a raw socket interface can engineer and transmit such packets, and the technical knowledge required is relatively commonplace.

That said, even on my test network I'm having great difficulty triggering this against the Power Macs; I have not yet been able to do so. It is also not clear if the built-in firewall protects against this attack, though the level at which the attack exists suggests to me it does not. However, the faulty code is indeed in the 10.4 kernel source, so if it's there and in 10.10, it is undoubtedly in 10.5 and 10.6 as well. For that reason, I must conclude that Power Macs are vulnerable. If your hardware (or non-OS X) firewall or router supports it, blocking incoming ICMP will protect you from the very small risk of being hit at the cost of preventing pings and traceroutes into your network (but this is probably what you want anyway).

Even if you do get nailed, the good news (sort of) is that your computer can't be hacked by this method so far as anyone is aware; it's a denial-of-service attack. You'll lose your work, and you may need to repair the filesystem if the panic hits at a bad time, which sucks, but it doesn't otherwise compromise the machine. And, because this is in open source kernel code, it should be possible to design a fix and build a new kernel if the problem turns out to be easier to exploit than it appears currently. (Please note I'm not volunteering, at least, not yet.)

So, you can all get out of your fridges now, mmkay?

10.4Fx 38 and IonPower update: 50% of V8 passes and I'm about 20% into the test suite. Right now wrestling with a strange bug with return values in nested calls, but while IonPower progress is slow, it's progress!

Raniere Silva: MathML April Meeting

This is a report about the Mozilla MathML April IRC meeting (see the announcement here). The topics of the meeting can be found in this PAD (local copy of the PAD) and the IRC log (local copy of the IRC log) is also available.

The next meeting will be on May 13th at 8pm UTC (check the time at your location here). Please add topics to the PAD.

Read more...

Mike Conley: Things I’ve Learned This Week (April 6 – April 10, 2015)

It’s possible to synthesize native Cocoa events and dispatch them to your own app

For example, here is where we synthesize native mouse events for OS X. I think this is mostly used for testing when we want to simulate mouse activity.

Note that if you attempt to replay a queue of synthesized (or cached) native Cocoa events to trackSwipeEventWithOptions, those events might get coalesced and not behave the way you want. mstange and I ran into this while working on this bug to get some basic gesture support working with Nightly+e10s (Specifically, the history swiping gesture on OS X).

We were able to determine that OS X was coalescing the events because we grabbed the section of code that implements trackSwipeEventWithOptions, and used the Hopper Disassembler to decompile the assembly into some pseudocode. After reading it through, we found some logging messages in there referring to coalescing. We noticed that those log messages were only sent when NSDebugSwipeTrackingLogic was set to true, so we executed this:

defaults write org.mozilla.nightlydebug NSDebugSwipeTrackingLogic -bool YES

in the console, and then re-ran our swiping test in a debug build of Nightly to see what messages came out. Sure enough, this is what we saw:

2015-04-09 15:11:55.395 firefox[5203:707] ___trackSwipeWithScrollEvent_block_invoke_0 coalescing scrollevents
2015-04-09 15:11:55.395 firefox[5203:707] ___trackSwipeWithScrollEvent_block_invoke_0 cumulativeDelta:-2.000 progress:-0.002
2015-04-09 15:11:55.395 firefox[5203:707] ___trackSwipeWithScrollEvent_block_invoke_0 cumulativeDelta:-2.000 progress:-0.002 adjusted:-0.002
2015-04-09 15:11:55.396 firefox[5203:707] ___trackSwipeWithScrollEvent_block_invoke_0 call trackingHandler(NSEventPhaseChanged, gestureAmount:-0.002)

This coalescing means that trackSwipeEventWithOptions is only getting a subset of the events that we’re sending, which is not what we had intended. It’s still not clear what triggers the coalescing – I suspect it might have to do with how rapidly we flush our native event queue, but mstange suspects it might be more sophisticated than that. Unfortunately, the pseudocode doesn’t make it too clear.

String templates and toSource might run the risk of higher memory use?

I’m not sure I “learned” this so much, but I saw it in passing this week in this bug. Apparently, there was some section of the Marionette testing framework that was doing request / response logging with toSource and some string templates, and this caused a 20MB regression on AWSY. Doing away with those in favour of old-school string concatenation and JSON.stringify seems to have addressed the issue.
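
As a rough illustration of the shape of that change (a hypothetical sketch with made-up names, not the actual Marionette code):

// Before: template strings plus toSource() built big throwaway strings.
function logEntryBefore(type, data) {
  return `${type} ${data.toSource()}`;
}

// After: old-school concatenation plus JSON.stringify.
function logEntryAfter(type, data) {
  return type + " " + JSON.stringify(data);
}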

When you change the remote attribute on a <xul:browser> you need to re-add the <xul:browser> to the DOM tree

I think I knew this a while back, but I’d forgotten it. I actually re-figured it out during the last episode of The Joy of Coding. When you change the remoteness of a <xul:browser>, you can’t just flip the remote attribute and call it a day. You actually have to remove it from the DOM and re-add it in order for the change to manifest properly.

You also have to re-add any frame scripts you had specially loaded into the previous incarnation of the browser before you flipped the remoteness attribute.1
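
Here’s a minimal sketch of that dance, assuming chrome-privileged code and a made-up frame script URL:

function setRemoteness(browser, isRemote) {
  let parent = browser.parentNode;
  let next = browser.nextSibling;

  // Flipping the attribute only takes effect once the browser has been
  // removed from and re-added to the DOM tree.
  parent.removeChild(browser);
  browser.setAttribute("remote", isRemote ? "true" : "false");
  parent.insertBefore(browser, next);

  // The new incarnation gets a fresh message manager, so any frame
  // scripts loaded into the old one must be loaded again.
  browser.messageManager.loadFrameScript("chrome://myaddon/content/frame-script.js", false);
}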

Using Mercurial, and want to re-land a patch that got backed out? hg graft is your friend!

Suppose you got backed out, and want to reland your patch(es) with some small changes. Try this:

hg update -r tip
hg graft --force BASEREV:ENDREV

This will re-land your changes on top of tip. Note that you need --force, otherwise Mercurial will skip over changes it notices have already landed in the commit ancestry.

These re-landed changes are in the draft stage, so you can update to them and, assuming you are using the evolve extension2, commit --amend them before pushing. Voila!

Here’s the documentation for hg graft.


  1. We sidestep this with browser tabs by putting those browsers into “groups”, and having any new browsers, remote or otherwise, immediately load a particular set of framescripts. 

  2. And if you’re using Mercurial, you probably should be. 

Mozilla Release Management Team: Firefox 38 beta2 to beta3

Yet another busy beta release!

We took many changes for the reading list feature but also landed some improvements for the sharing actions on mobile (this is why we did a beta 3 release on mobile).

We also took a few changes for Thunderbird and Seamonkey as they base their major releases on ESR releases.

  • 108 changesets
  • 227 files changed
  • 2501 insertions
  • 1138 deletions

Extension (occurrences):

  • js: 29
  • cpp: 29
  • py: 25
  • html: 21
  • xml: 16
  • h: 13
  • java: 12
  • ini: 12
  • jsm: 10
  • xul: 3
  • svg: 3
  • css: 3
  • mn: 2
  • sh: 1
  • rst: 1
  • patch: 1
  • list: 1
  • jsx: 1
  • json: 1
  • in: 1
  • idl: 1

Module (occurrences):

  • mobile: 30
  • dom: 30
  • browser: 26
  • layout: 20
  • python: 18
  • toolkit: 17
  • js: 13
  • testing: 5
  • media: 5
  • media: 5
  • gfx: 5
  • netwerk: 3
  • services: 2
  • xpcom: 1
  • widget: 1
  • tools: 1
  • security: 1
  • parser: 1
  • +media: 1
  • editor: 1

List of changesets:

Ralph Giles: Bug 1080995 - Don't use the h264parser gstreamer element. r=kinetik, a=lizzard - 5b6180fc4286
Mike Hommey: Bug 1147217 - Improve l10n repack error message when locale doesn't contain necessary files. r=mshal, a=NPOTB - 5c4cacd09c9c
Tooru Fujisawa: Bug 1150297 - Move source property to RegExp instance again. r=till, a=sylvestre - f208b7bb88ae
Tom Tromey: Bug 1150646 - Ensure that memory stats show up in treeherder logs. r=chmanchester, a=test-only - a1d7b2cdd950
Jean-Yves Avenard: Bug 1100210 - Mark MPEG2 Layer 1,2,3 audio as MP3. r=k17e, a=sledru - a72d76b284ea
Mike Hommey: Bug 1147283 - Replace mozpack.path with mozpath. r=mshal, a=sledru - d59b572e546f
Mike Hommey: Bug 1147207 - Add a ComposedFinder class that acts like a FileFinder proxy over multiple FileFinders. r=gps, a=sledru - fc1e894eec2f
Mike Hommey: Bug 1147207 - Improve SimplePackager manifest consistency check. r=gps, a=sledru - 46262c24ca5b
Mike Hommey: Bug 1147207 - Allow to give extra l10n directories to l10n-repack.py. r=gps, a=sledru - 7f2d41560360
Mike Hommey: Bug 1147207 - Use SimplePackager code to find manifest entries and base directories during l10n repack. r=gps, a=sledru - 0c29ab096b90
Jon Coppeard: Bug 1146696 - Don't assume there are no arenas available after last ditch GC r=terrence a=sylvestre - 484a6aef6a4f
Patrick Brosset: Bug 1134500 - Fix multiple browser/devtools/animationinspector intermittent tests. r=bgrins, a=test-only - 589aafc2bb13
Matt Woodrow: Bug 1102612 - Don't attempt to read data from a resource if we've evicted the start position. r=jya, a=sledru - 98ac0c020205
Jan Beich: Bug 1147845 - Drop redundant check to keep blocked download data on Tier3 platforms as well. r=jaws, a=sledru - 8ff6cc64abe8
Mike de Boer: Bug 1146921 - Disable the window sharing dropdown item in Loop conversation windows on unsupported platforms. r=Standard8, a=sledru - d384bdaed2fd
Gavin Sharp: Bug 1148562 - Right clicking the reader mode button shouldn't trigger reader mode. r=jaws, a=sledru - 4406ce9ace92
Richard Newman: Bug 1151484 - Account for null result when polling on a latch during Reading List sync. r=nalexander, a=sledru - 8a734418a22e
Mark Finkle: Bug 1149094. r=blassey, a=sledru - d2987ec0e0e7
Margaret Leibovic: Bug 1150872 - Update toast notification when removing a page from reading list from reader view toolbar. r=mcomella, a=sledru - 3f5d7f277471
JW Wang: Bug 1150277 - Match hostname when removing GMP data. r=cpearce, a=sledru, ba=sledru - 8f0271f2c153
Ryan VanderMeulen: Backed out changeset 589aafc2bb13 (Bug 1134500) for a rebase error. - 88bda8094530
Patrick Brosset: Bug 1134500 - Fix multiple browser/devtools/animationinspector intermittent tests. r=bgrins, a=test-only - 16c909280059
Shih-Chiang Chien: Bug 1080130 - Force GC to close all used socket immediately. r=jmaher, a=test-only - eb178aedaaad
Jeff Muizelaar: Bug 1146034 - Cherry pick "Fix struct uniform packing." a=sledru - 9adbbf9a8784
Christoph Kerschbaumer: Bug 1147026 - CSP should ignore query string when checking a resource load. r=dveditz, a=sledru - c2f29d6648e8
Christoph Kerschbaumer: Bug 1147026 - CSP should ignore query string when checking a resource load - tests. r=dveditz, a=sledru - 6d1efbb2c76c
Michael Comella: Bug 1130203 - Clean up OverlayDialogButton's initialization. r=mhaigh a=sylvestre - 4f2f00d1331c
Michael Comella: Bug 1130203 - Remove header container in share overlay & roughly style text. r=mhaigh a=sylvestre - d6200a67e007
Michael Comella: Bug 1130203 - Remove Firefox logo from share overlay. r=mhaigh a=sylvestre - 57a21c5e1100
Michael Comella: Bug 1130203 - Move share overlay title styles into styles.xml and revise to match mocks. r=mhaigh a=sylvestre - 8002be97de82
Michael Comella: Bug 1130203 - Remove dividers in share overlay. r=mhaigh a=sylvestre - 9a5a28809525
Michael Comella: Bug 1130203 - Add @dimen/button_corner_radius and replace corner radius use in code. r=mhaigh a=sylvestre - feb7a6808bfb
Michael Comella: Bug 1130203 - Round the corners of the first item in the share overlay. r=mhaigh a=sylvestre - 5dd03a21c376
Michael Comella: Bug 1130203 - Set width for share overlay. r=mhaigh a=sylvestre - c3ec8ada4705
Michael Comella: Bug 1130203 - Update share overlay text colors to match mocks. r=mhaigh a=sylvestre - ca3650a73fdf
Michael Comella: Bug 1130203 - Rename TextAppearance.ShareOverlay to ShareOverlayTextAppearance. r=mhaigh a=sylvestre - 41eba60614f8
Michael Comella: Bug 1130302 - Move ShareOverlayButton.Text to ShareOverlayTextAppearance.Button. r=mhaigh a=sylvestre - 8691a7ac4c95
Michael Comella: Bug 1130203 - Remove excess LinearLayout from ShareOverlay. r=mhaigh a=sylvestre - b731c0df23aa
Michael Comella: Bug 1130203 - Remove unused share overlay layout. r=mhaigh a=sylvestre - 1c2ce96f9359
Michael Comella: Bug 1130203 - Clean up style inheritance in share overlay. r=mhaigh a=sylvestre - 49441819b75a
Michael Comella: Bug 1130203 - Update share overlay row pressed color & color names. r=mhaigh a=sylvestre - f2cbe1ec6d5a
Michael Comella: Bug 1130203 - Update ShareOverlay icon padding & assets. r=mhaigh a=sylvestre - abff0e240078
Michael Comella: Bug 1130203 - Clean up share overlay toast styles. r=mhaigh a=sylvestre - 8a05ce8c5ff7
Michael Comella: Bug 1130203 - Reset the first item background drawable state onResume. r=mhaigh a=sylvestre - dd5f8068b392
Michael Comella: Bug 1130203 - Review: Remove single use styles in share overlay. r=trivial a=sylvestre - ce7199dbb0af
Michael Comella: Bug 1130203 - Review: Finish off share overlay nits. r=trivial a=sylvestre - dcadb3572692
Michael Comella: Bug 1130203 - Add drop shadow to overlay share dialog result toast. r=margaret a=sylvestre - 994526939c21
Michael Comella: Bug 1130203 - Remove retry button in share overlay retry toast. r=margaret a=sylvestre - d49aaecd32a3
Michael Comella: Bug 1134484 - Add Fennec color palette to colors.xml. r=liuche a=sylvestre - 3eb3b25437dd
Michael Comella: Bug 1130203 - uplift: Add dropshadow assets from Bug 1137921. r=trivial a=sylvestre - e399294c9df3
Michael Comella: Bug 1148041 - Have the ShareOverlay text styles inherit from the default TextAppearance. r=liuche a=sylvestre - 0442cb68ed69
Michael Comella: Bug 1148041 - Inherit from Gecko theme in share overlay. r=liuche a=sylvestre - 0db186d2534c
Michael Comella: Bug 1148197 - Move share overlay margins to child to properly align. r=liuche a=sylvestre - 4db575e80883
Michael Comella: Bug 1151089 - Move slide up animations to onResume. r=liuche a=sylvestre - 358448358c21
Michael Comella: Bug 1148677 - Use larger shareplane icon. r=liuche a=sylvestre - 5b70a93a7f10
Michael Comella: Bug 1132747 - Set the padding for share in the context menu on Lollipop. r=mhaigh a=sylvestre - cbe44fd0d2fc
Ryan VanderMeulen: Bug 984821 - Disable browser_CTP_iframe.js on Linux and OSX for ongoing intermittent failures. - a1c4c4d43776
Anish: Bug 1135091 - Convert remaining SpecialPowers.setBoolPref to pushPrefEnv. r=jmaher, r=mwargers, a=test-only - 8d23b1e2cc0f
Sami Jaktholm: Bug 1148770 - Rewrite browser_styleeditor_bug_870339.js to fix intermittent leaks. r=ejpbruel, a=test-only - a7535132fe8e
Mike de Boer: Bug 1150052 - Report exceptions that occur in MozLoop object APIs directly to the console, so we'll be able to recognize errors better. r=Standard8, a=sledru - 0299772271a8
Michael Comella: Bug 1147661 - Add new device assets. r=liuche, a=sledru - 32e9c40ea3f9
Michael Comella: Bug 1147661 - Use new device icons in share overlay. r=liuche, a=sledru - 2d8d16d8c2ad
John Schoenick: Bug 1139554 - Fix srcset parser mishandling bare URLs followed by a comma. r=jst, a=sledru - a9d1df7af6fc
Allison Naaktgeboren: Bug 1124895 - Add password manager usage data to FHR. r=dolske, r=gfritzsche, a=sledru - ac9862939f3e
Robert Longson: Bug 1149516 - Draw continuous stroke if stroke-dasharray = 0. r=jwatt, a=sledru - 1dc6d70e9022
David Major: Bug 1137614 - Align the mvsadcost array to work around a possible compiler issue. r=rillian, a=sledru - 58b20f079d4f
Matt Woodrow: Bug 1151721 - Disable hardware accelerated video decoding for older intel drivers since it gives black frames on youtube. r=ajones, a=sledru - d4e6fe0b0eb5
Jonathan Kew: Bug 1012640 - Part 1: Add checks for IsOriginalCharSkipped() to the gfxSkipChar unit tests. r=roc, a=sledru - 083361a65349
Jonathan Kew: Bug 1012640 - Part 2: Ensure mCurrentRangeIndex is initialized correctly when creating iterator for a gfxSkipChars that begins with a skipped run. r=roc, a=sledru - 3c64e9fdc3d7
Jonathan Kew: Bug 1012640 - Part 3: Reftest for line break after inline element with white-space:nowrap and whitespace inside the element. r=roc, a=sledru - 4c9214ed82b8
Jean-Yves Avenard: Bug 1149278 - Limit box reads to resource length. r=k17e, a=sledru - de1e5351aad2
Jean-Yves Avenard: Bug 1151299 - Part 1: Only attempt to decode first frame when available. r=mattwoodrow, a=sledru - 68f61e9c41d2
Jean-Yves Avenard: Bug 1151299 - Part 2: Clear EOS flag when new data is received. r=mattwoodrow, a=sledru - 01cf08a90d44
Mark Finkle: Bug 1151469 - Tweak the package manifest to avoid packaging the wrong file. r=rnewman, a=sledru - 2a6a2f558ec2
Richard Newman: Bug 1123389 - Allow Android-side reading list service work to ride the trains. r=rnewman a=sledru - e55db32c5ef6
Ryan VanderMeulen: Backed out changeset a7535132fe8e (Bug 1148770) for test bustage. - 0f0c47f90ab6
Mark Hammond: Bug 1149880 - Avoid readinglist item races logging unhandled promise exceptions. r=dolske, a=sledru - 115865f14324
Patrick Brosset: Bug 1139937 - Don't try accessing the computedStyle of pseudo elements on reflow. r=miker, a=sledru - 9a763ea8d781
Blake Winton: Bug 1149261 - Replace the close icon and adjust the borders. ui-r=mmaslaney, r=jaws, a=sledru - a3c18ef98317
Blake Winton: Bug 1149649 - Design Polish Updates for the Reader View Footer. ui-r=mmaslaney, r=jaws, a=sledru - f7dc5b7781e2
Blake Winton: Bug 1148762 - Tweak the css on the reading list sidebar to prevent unecessary scrollbars. r=mstange, a=sledru - de78faf679e7
Gijs Kruitbosch: Bug 1148024 - Fix wrapping of privacy pane. r=jaws, a=sledru - 1f70f2dba807
Gijs Kruitbosch: Bug 1151252 - Back out content part of the restyle of about:preferences. r=jaws, a=sledru - fa50c9c02b3c
Stephen Pohl: Bug 1151544 - Update Adobe EME's homepage URL in addons manager. r=gfritzsche, a=sledru - 89de3c04af8b
Timothy Nikkel: Bug 1150021 - Backout the patch for Bug 1077085 on beta and aurora. a=sledru - 188117472132
Aaron Klotz: Bug 1141081 - Add weak reference support to HTMLObjectElement and use it in nsPluginInstanceOwner. r=jimm, a=sledru - bfff2ca94766
Jean-Yves Avenard: Bug 1151360: Allow playback of extended AAC profile audio track. r=k17e, a=sledru - a24bdacce4cc
Ryan VanderMeulen: Bug 1129538 - Skip various tests that hit the mProgressTracker abort. a=test-only - 51c5166a338b
Andrea Marchesini: Bug 1134224 - test_bug1132395.html must wait until the port is actually available before sending messages. r=ehsan, a=test-only - 982dba6be01c
Dragana Damjanovic: Bug 1135405 - Use different multicast addresses for each test. r=michal, a=sledru - 8bb13d7a5d2a
Edwin Flores: Bug 1146192 - Whitelist sched_yield syscall in GMP sandbox on Linux. r=jld, a=sledru - e06c5a9ce450
Honza Bambas: Bug 1124880 - Call PR_Close of UDP sockets on new threads. r=mcmanus, a=sledru - b04842ef36ca
Edwin Flores: Bug 1142835 - Null check mPlugin on GMPAudioDecoderParent shutdown. r=cpearce, a=sledru - 0ff855a44d9c
Mark Hammond: Bug 1149869 - Prevent duplicate readinglist items from appearing in the sidebar in some cases. r=Unfocused, a=sledru - 881a59941b04
Seth Fowler: Bug 1148832 - Return early from nsAlertsIconListener::OnLoadComplete if the image has an error. r=baku, a=sledru - bf83a8535bf4
Mark Hammond: Bug 1149896 - Avoid warnings when using sendAsyncMessage on a ReadingListItem object. r=adw, a=sledru - bbbb9f84cf98
Blake Winton: Bug 1149520 - Move the font-size change to the container, so as not to repaint the toolbar. r=jaws, r=margaret, a=sledru - 6ab02e48d0c2
Andrea Marchesini: Bug 1151609 - WebSocket::CloseConnection must be thread-safe. r=smaug, a=sledru - 07f2a01649a4
Mark Banner: Bug 1152245 - Receiving a call whilst in private browsing or not browser windows open can stop any calls to contacts being made or received. r=mikedeboer, a=sledru - 367745bbac8a
Valentin Gosu: Bug 1099209 - Only track leaked URLs on the main thread. r=honzab, a=sledru - 58dca3f7560a
Mark Banner: Fix beta specific xpcshell bustage from Bug 1152245. r+a=bustage-fix - d13016a31d6f
Robert O'Callahan: Bug 1149494 - Part 1: Add a listener directly to the unblocked input stream that reports the size of the first non-empty frame seen. r=pehrsons, a=sledru - d46cb3b3ebb3
Andreas Pehrson: Bug 1149494 - Part 2: Add mochitest. r=jesup, a=sledru - c821f76bf302
Byron Campen [:bwc]: Bug 1151139 - Simplify how we choose which streams to gather stats from. r=mt, a=abillings - e62ca3da49e1
Florian Quèze: Bug 1137603 - WebRTC sharing notifications fail to open from the global indicator when the Hello window has been detached. r=mixedpuppy, a=sledru - d4ee3499fe0d
Florian Quèze: Bug 1144774 - Add to reading list button is blurry. ui-r=mmaslaney, r=jaws, a=sledru - f377c6831282
Mike de Boer: Bug 1152391 - appVersionInfo should use UpdateChannel.jsm to fetch update channel information. r=Standard8, a=sledru - 3f5e298cb641
Margaret Leibovic: backout 7d883361e554 for causing Bug 1150251 - ff91cb79a7c8

Nick Cameron: New tutorial - arrays and vectors in Rust

I've just put up a new tutorial on Rust for C++ programmers: arrays and vectors. This covers everything you might need to know about array-like sequences in Rust (well, not everything, but at least some of the things).

As well as the basics on arrays, slices, and vectors (Vec), I dive into the differences in representing arrays in Rust compared with C/C++, describe how to use Rust's indexing syntax with your own collection types, and touch on some aspects of dynamically sized types (DSTs) and fat pointers in Rust.

Erik Vold: Using JPM Watchpost

jpm watchpost is a feature of jpm which can be used to automatically post changes for an add-on to an instance of Firefox or Fennec running the Extension Auto-Installer extension.

This method can be used to quickly test an extension on the same PC on which the add-on is being developed, on another PC, or on an Android device over wifi. It can also be used to rapidly test a lot of new changes.

Basically, jpm watchpost --post-url <url> watches the folder it is executed in for changes. Whenever a file is updated, added, or removed, a new .xpi file is created for the add-on and sent to the --post-url with an HTTP POST request, which is what the Extension Auto-Installer extension receives. Extension Auto-Installer then auto-installs the new version of the add-on, causing Firefox to disable and remove the old version.
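
For example, if Extension Auto-Installer is listening on its default port (8888) on the same machine, running the following from the add-on’s root directory should rebuild and repost the add-on on every change:

jpm watchpost --post-url http://localhost:8888/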

For more information on using jpm watchpost please see the documentation.

Erik Vold: JPM Mobile Beta

I announced the release of jpm beta nine months ago. Since then we’ve made most of the SDK modules compatible with jpm and figured out how to run jpm on Travis, and now I’d like to announce the beta release of jpm-mobile!

jpm-mobile provides a means for running Jetpacks (aka Add-on SDK add-ons) on Android devices. This is achieved by communicating through adb, which is required by jpm-mobile.

Installing is easy with npm install jpm-mobile -g, then testing an add-on on Firefox for Android (aka Fennec) is as simple as plugging the device in to your computer and running:

jpm-mobile run --adb /path/to/adb -b fennec

Note that the add-on’s package.json or install.rdf should be marked as supporting Fennec. For the former, the package.json should include something like:

{
  ...
  "engines": {
    "fennec": "*"
  }
  ...
}

Hannah Kane: User Testing the Teach Site

We are soooooo close to releasing the new Teach site.

People seem to dig the bright colors and quirky illustrations throughout the site.

In advance of the release, I wanted to conduct some user tests to make sure we’re still on the right track. This week I conducted two user tests with members of the community (yay!). As is always the case with user testing, I learned a lot from observing users interact with the site.

You can see detailed notes here and read my recommendations below.

These recommendations are based on formal user tests with two users, as well as feedback from people who’ve been involved in or observed the process throughout. Also, please note that I wasn’t able to test the primary functionality on the site (adding a Club to the map), so these recommendations are more about IA and other content issues.

Findings and Recommendations


ACTIVITIES / CURRICULUM / RESOURCES

Findings:

  • People want to see more activities and resources.
  • People expect to be able to sort and filter.
  • Our internal distinction between the Clubs Curriculum (the official curriculum for Clubs; with a strong recommendation for following the prescribed path) and Teaching Activities (more “grab and go”-style) is not intuitive to users.
  • The Teach Like Mozilla content needs to be more integrated into common user flows.

Recommendations:

  • Continue with current plan for developing and publishing more approved curriculum and activities.
  • Continue brainstorming work around scalable presentation of curriculum begun in this heartbeat. The ideas discussed so far address sorting and filtering, and make good use of the Web Literacy Map as an organizing tool.
  • As part of that design work, we should also allow users to access all teaching materials from the same page, and provide specific views for “official Clubs curriculum.” I recommend we keep the Teaching Activities page, and remove the Clubs Curriculum sub-page. This content is one of our primary offerings so it belongs at the top level. /cc @iamjessklein
  • We need to offer a solution for sharing resources—e.g. maker tools, other curricula, programs. (Hello, Web Lit Mapper!)
  • We need to design a stronger connection between teaching activities and the Teach Like Mozilla content. A short-term solution might be to link to the TLM page from every individual activity page, but we should also be working towards a better longer-term solution. /cc @laurahilliger


CLUBS

Findings:

  • The Clubs Toolkit is not findable, and needs to be supplemented with content targeted towards helping people “get started.”
  • We are not providing enough information for the use case of a person who is deciding in the moment whether to start a Club.

Recommendations:

  • Make the Clubs Toolkit more visible on the page.
  • Consider renaming the Clubs Toolkit something like “Getting Started Guide” or “A Club’s First Month” – and editing content to match. /cc @thornet
  • Based on my understanding of the expected pathways to starting Clubs, I do not think we need to make any significant changes to the Clubs page to address the use case of someone coming to the site and deciding in the moment whether or not to start a Club. As I understand it, our plan for growing Clubs makes use of the following scenarios:
    • 1) Someone is “groomed” by staff member, Regional Coordinator, or other community member. By the time they arrive at the site, they have the specific intent of adding their Club.
    • 2) Someone finds out about us through Maker Party, and through a series of communications learns about Clubs and decides to start one. They are coming to the site with the specific intent to add their Club.
    • 3) Someone with an existing program or group wants to be listed in the database. Again, they are coming to the site with the intent to add their Club.

In short, I don’t think we’ve yet seen a reason to have the site serve a “selling” or persuasive function. I *do* think the Clubs page is a natural first stop for someone who is looking to understand how to start a Club. I think the changes recommended in the bullet points above address that.


EVENTS/MAKER PARTY

Findings:

  • The copy describing the Events page in the main navigation is misleading, since the content on the Events page is about Maker Party.
  • People may understand throwing a Maker Party as a “first step” to starting a Club, rather than a lower-bar option for people who do not want to start a Club (and perhaps never will).

Recommendations:

  • I think we should re-brand what is currently the Events landing page as “Maker Party.” We’ve already sort of done this in that, while the page is called “Events” in the nav, the h1 copy in the hero image is “Host a Maker Party.” I suggest we change the copy in the nav to “MAKER PARTY” and the teaser copy to “Our annual global campaign”. /cc @amirad


INFORMATION ARCHITECTURE

Findings:

  • Users tend to ignore, not see, or misinterpret the CTAs at the bottom of every page
  • Users do not notice links to the sub-pages in the main navigation

Recommendation:

  • We need to design better, more intuitive pathways for viewing secondary pages

ADDITIONAL RECOMMENDATIONS

I’m going to keep banging this drum: We need to clarify our audience! I think we’ve made good progress in terms of clarifying that our “first line” audience includes educators and activists. But I think we have to take it a step further and clarify who those educators and activists are working with. There are at least two axes that I think are important to be clear about: first, the global nature of our work, and second, the specific age groups of what I’m calling the “end learners,” for lack of a better term.

I think we do a pretty decent job of conveying the global nature of the program through copy and imagery, though obviously implementing our l10n strategy is absolutely fundamental to this.

I think we are less clear when it comes to the age groups we’re targeting with our programs and materials. For example, I think we ought to specify the appropriate age level for each activity. (And the images, activity titles, and copy should reflect the target audience.)

Questions, comments, disagreements wholeheartedly welcomed!


Mark Surman: Q1 Participation update

I asked two questions about participation back in January: 1. what is radical participation? and 2. what practical steps can we take right now to bring more of it to Mozilla? It’s been great to see people across Mozilla digging into these questions. I’m writing to offer an update on what I’ve seen happening.

First, we set ourselves a high bar when we started talking about radical participation at Mozilla late last year. I still believe it is the right bar. The Mozilla community needs more scale and impact than it has today if we want to confront the goliaths who would take the internet down a path of monopoly and control.

However, I don’t think we can invent ‘radical’ in the abstract, even if I sometimes say things that make it sound like I do :). We need to build it as we go, checking along the way to see if we’re getting better at aligning with core Mozilla principles like transparency, distributed leadership, interoperability and generativity. In particular, we need to be building new foundations and systems and ways of thinking that make more radical participation possible. Mitchell has laid out how we are thinking about this exploration in three areas (link).

When I look back at this past quarter, that’s what I see that we’ve done.

As context: we laid out a 2015 plan that included a number of first steps toward more radical participation at Mozilla. The immediate objectives in this plan were to a) invest more deeply in ReMo and our regional communities and b) better connect our volunteer communities to the work of product teams. At the same time, we committed to a longer term objective: c) create a Participation Lab (originally called a task force…more on that name change below) charged with looking for and testing new models of participation.

Progress on our first two objectives

As a way to move the first part of this plan forward, the ReMo Council met in Paris a month or so back. A big theme was how to unleash the leadership potential of the Reps program in order to move Mozilla’s core goals forward in ways that take advantage of our community presence around the world. For example, combining the meteoric smartphone growth in India with the local insights of our Indian community to come up with fresh ideas on how to move Firefox for Android towards its growth goal.

We haven’t been as good as we need to be in recent years in encouraging and then actually integrating this sort of ‘well aligned and ambitious thinking from the edge’. Based on reports I’ve heard back, the Paris meeting set us up for more of this kind of thinking. Rosana Ardila and the Council, along with William Quiviger and Brian King, are working on a “ReMo2.0” plan that builds on this kind of approach, that seeks a deeper integration between our ReMo and Regional Community strategies, and that also adds a strong leadership development element to ReMo.


Reps Council and Peers at the 2015 Paris meet-up

On the second part of our plan, the Participation Team has talked to over 100 people in Mozilla product and functional groups in the past few months. The purpose of these conversations was to find immediate term initiatives that create the sort of ‘help us meet product goals’ and ’empower people to learn and do’ virtuous circle that we’ve been talking about in these discussions about radical participation.

Over 40 possible experiments came out of these conversations. They included everything from leveraging Firefox Hello to provide a new kind of support and mentoring; to taking a holistic, Mozilla-wide approach to community building in our African Firefox OS launch markets; to turning Mozilla.org into a hub that lets millions of people play small but active roles in moving our mission forward. I’m interested in these experiments, and how they will feed into our work over the coming quarters—many of them have real potential IMHO.

I’m even more excited about the fact that these conversations have started around very practical ideas about how volunteers and product teams can work more closely together again. It’s just a start, but I think the right questions are being asked by the right people.

Mozilla Participation Lab

The third part of our plan was to set up a ‘Task Force’ to help us unlock bold new thinking. The bold thinking part is still the right thing to aim for. However, as we thought about it, the phrase ‘task force’ seemed too talky. What we need is thoughtful and forceful action that gets us towards new models that we can expand. With that in mind we’ve replaced the task force idea with the concept of a Participation Lab. We’ve hired former Engineers Without Borders CEO George Roter to define and lead the Lab over the next six months. In George’s words:

“The lab is Mozilla, and participation is the topic.”

With this ethos in mind, we have just introduced the Lab as both a way to initiate focused experiments to test specific hypotheses about how participation brings value to Mozilla and Mozillians, and to support Mozillians who have already initiated similar experiments. The Lab will be an engine for learning about what works and what will get us leverage, via the experiments and relationships with people outside Mozilla. I believe this approach will move us more quickly towards our bold new plan—and will get more people participating more effectively along the way. You can learn more about this approach by reading George’s blog post.

A new team and a new approach

There is a lot going on. More than I’ve summarized above. And, more importantly, hundreds of people from across the Mozilla community are involved in these efforts: each of them is taking a fresh look at how participation fits into their work. That’s a good sign of progress.

However, there is only a very small Participation Team staff contingent at the heart of these efforts. George has joined David Tenser (50% of his time on loan from User Success for six months) to help lead the team. Rosana Ardila is supporting the transformation of ReMo along with Rubén and Konstantina. Emma Irwin is figuring out how we help volunteers learn the things they need to know to work effectively on Mozilla projects. Pierros Papadeas and a small team of developers (Nikos, Tasos and Nemo) are building pieces of tech under the hood. Brian King along with Gen and Guillermo are supporting our regional communities, while Francisco Picolini is helping develop a new approach to community events. William Quiviger is helping drive some of the experiments and invest across the teams in ensuring our communities are strong. As Mitchell and I worked out a plan to rebuild from the old community teams, these people stepped forward and said ‘yes, I want to help everyone across Mozilla be great at participation’. I’m glad they did.

The progress this Participation Team is making is evident not just in the activities I outlined above, but also in how they are working: they are taking a collaborative and holistic approach to connecting our products with our people.

One concrete example is the work they did over the last few months on Mozilla MarketPulse, an effort to get volunteers gathering information about on-the-street smartphone prices in FirefoxOS markets. The team not only worked closely with FirefoxOS product marketing team to identify what information was needed, they also worked incredibly well together to recruit volunteers, train them up with the info they needed on FirefoxOS, and build an app that they could use to collect data locally. This may not sound like a big deal, but it is: we often fail to do the kind of end to end business process design, education and technology deployment necessary to set volunteers up for success. We need to get better at this if we’re serious about participation as a form of leverage and impact. The new Participation Team is starting to show the way.

Looking at all of this, I’m hoping you’re thinking: this sounds like progress. Or: these things sound useful. I’m also hoping you’re saying: but this doesn’t sound radical yet!!! If you are, I agree. As I said above, I don’t think we can invent ‘radical’ in the abstract; we need to build it as we go.

It’s good to look back at the past quarter with this in mind. We could see the meeting in Paris as just another ReMo Council gathering. Or, we could think of it—and follow up on it—as if it was the first step towards a systematic way for Mozilla to empower people, pursue goals and create leaders on the ground in every part of the world. Similarly, we could look at MarketPulse as a basic app for collecting phone prices. Or, we could see it as a first step towards building a community-driven market insights strategy that lets us out-see and outsmart our competitors. It all depends how we see what we’re doing and what we do next. I prefer to see this as the development of powerful levers for participation. What we need to do next is press on these levers and see what happens. That’s when we’ll get the chance to find out what ‘radical’ looks like.
PS. I still owe the world (and the people who responded to me) a post synthesizing people’s suggestions on radical participation. It’s still coming, I promise. :/


Filed under: mozilla

George Roter: Introducing the Mozilla Participation Lab

I’m excited to introduce the Mozilla Participation Lab, an initiative across Mozilla to architect a strategy and new approaches to participation.

As Mitchell articulated, people around Mozilla are deeply invested in the question: how can participation add even more value to the products and communities we build that are advancing the open web?

Across Mozilla there’s a flurry of activity aimed at answering this question and increasing participation. Mitchell framed the scope of this exploration as including three broad areas: First, strengthening the efforts of those who devote the most energy to Mozilla. Second, connecting people more closely to Mozilla’s mission and to each other. And third, thinking about organizational structure and practices that support participation.

The Mozilla Participation Lab is designed to strengthen and augment the efforts and energies that Mozillians are devoting to this exploration in the months ahead. If you count yourself as one of those Mozillians who is working on this problem, my hope is that you’ll see how the Mozilla Participation Lab can be relevant for you.

First, let’s back up for some context…

In January, Mitchell and Mark along with the Participation Team laid out a Participation Plan for Mozilla that articulated an ambitious vision for participation in 2017:

  • Many more people working on Mozilla activities in ways that make Mozilla more effective than we can imagine today.
  • An updated approach to how people around the world are helping to build, improve and promote our products and programs.
  • A steady flow of ideas and execution for programs, products, and initiatives around the world—new and diverse activities that move the mission forward in concrete ways.
  • Ways for people to participate in our mission directly through our products—there is integration of participation into the use and value proposition.
  • Ultimately: more Mozilla activities than employees can track, let alone control.

While this vision describes where Mozilla wants to be, how we’re going to get there still needs to be figured out. The how is an important and explicit goal in the participation plan for 2015: Develop a bold long-term plan for radical participation at Mozilla.

This is the goal you’ve heard Mitchell and Mark talking about, and they’ve hired me to get this work going over the next 6 months.

Initially, they talked about this goal being pursued by a task force—a group of people who could go away and “figure this out”. But as we started to build this out, a task force didn’t feel right.

Mozilla Participation Lab

What is the Mozilla Participation Lab? Concretely, the Lab will have three related sets of activities.

1) Focused experiments.

The Participation Team will initiate experiments, after consulting and coordinating with product/functional teams and volunteers, around particular hypotheses about where participation can bring value and impact in Mozilla. All of these experiments will be designed to move a top-line goal of Mozilla (the product side of the virtuous circle), and give volunteers/participants a chance to learn something, have impact or get some other benefit (the people side of the virtuous circle). If the experiments work, we’ll start to see an impact on our product goals and increased volunteer engagement.


These experiments will be built in a way that will assess whether the hypotheses are true, what’s required for participation to have impact, and what the return on investment is for our key products and programs, and for Mozillians.

For example, many in Mozilla have articulated a belief that participation can enable local content to make our products better and more relevant, and so we are working on a series of experiments in West Africa alongside the launches of the Orange Klif. If these are successful, they will have had an impact on Firefox OS adoption while building vital, sustainable communities of volunteers.

In order to identify these experiments, our team has already talked with Mozilla staff and volunteers from all over the organization, plus Mozilla’s leadership (staff and volunteers). Here’s a long list of rough ideas that came out of these conversations; we obviously need to make some choices! Our aim is to settle on and launch a first set of focused experiments over the next couple of weeks.

2) Distributed experiments.

I’ve had conversations with roughly 100 Mozillians over the past couple of months and realized that, in true Mozilla distributed style, we’re already trying out new approaches to participation all over the world. Buddy Up, TechSpeakers, Mozilla Hispano, Clubs, Marketpulse are just a few of many many examples. I’m also confident that there will be many more initiatives in the coming months.

My hope is that many of these initiatives will be part of the Participation Lab. This will be different than the focused experiments above in two ways. First, the Participation Team won’t be accountable for results; the individual initiative leaders will be. Second, they can probably be lighter-weight experiments; whereas the focused experiments are likely to be resource intensive.

How does an initiative fit? If it meets two simple criteria: (1) it is testing out a set of hypotheses about how participation can bring value and impact to our mission and to Mozillians, and (2) we can work together to apply a systematic methodology for learning and evaluation.

Of course, it’s the leaders of these initiatives who can choose to be part of the Lab—I hope you do! To be upfront, this could mean a bit of extra work, but you can also access some resources and have an influence on our participation strategy. I think it’s worthwhile:

  1. We will work together to apply a systematic learning and experimenting methodology (documented here).
  2. You can unlock support from the Participation Team. This could be in the form of strategic or design advice; specific expertise (for example, volunteer engagement, building metrics or web development); helping you gather best practices from other organizations; or small amounts of money. We do have limited staff and volunteer time, so may need to make some choices depending on the number of initiatives that are part of the Lab.
  3. Your initiative will make a significant contribution to Mozilla’s overall participation strategy moving forward.

3) Outside ideas.

We will bring together experts and capture world-leading ideas about participation from outside of Mozilla. This is a preliminary list of people we are aiming to reach out to.

Who’s involved?

In short, a broad set of Mozillians will be supported by a smaller team of staff and volunteers from the Participation Team. This team will coordinate various experiments in the Lab, curate the learning, build processes to ensure that all of this is working in the open in a way that any Mozillian can engage with, and make recommendations to Mozilla leaders and community members.

What’s the result, and by when?

The primary outputs of the Lab are:

  1. A series of participation initiatives that result in more impactful and fulfilling participation toward reaching Mozilla’s goals. (Read more below about how what you’re working on right now can fit into this.)
  2. An evidence-based analysis of the effectiveness of specific participatory activities.
  3. Recommendations on how we might expand or generalize the activities that provided the most value to Mozilla and Mozillians.
  4. A preliminary assessment of the organizational changes we might consider in order to gain an even greater strategic advantage from participation.
  5. A set of learning resources and best practices packaged in a way that teams across Mozilla will be able to use to strengthen our collective participation efforts.
  6. Possibly, a series of strategic choices and opportunities for Mozilla leaders and community members to consider.

The first set of activities will take place primarily in Q2, wrapping up by early July, at which point we will assess what’s next for the Lab.

How is this relevant for you?

You have the opportunity to participate in the Lab and help shape the way forward for participation in Mozilla. Here’s how:

1) Be part of the team. Do you want to have a big hand in shaping how Mozilla moves ahead on participation?

In the coming couple of weeks we’ll be starting some focused experiments. If these are problems you’re also excited about (or are already tackling), please get in touch. We’re certain that coders, marketers, project managers, designers, educators, facilitators, writers, evaluators, and more can make a big difference.

Also, if you’re interested in being part of the learning team that is tracking and synthesizing lessons from inside and outside Mozilla, please get in touch.

2) Are you already running or planning a new participation initiative, or have an idea you’d like to get off the ground? Could you use some help from the Lab (and hopefully volunteers or other resources)? I’d love to have a conversation about whether your initiative can be part of the Participation Lab and how we can help.

3) Can you think of someone we should be talking to, a book or article to read, or a community to engage? Pass it along. Or better yet, help us to get in touch with people outside of Mozilla or summarize the key lessons for participation.

4) Follow along. We’d like many Mozillians to share their feedback and ideas. We’ll be working out in the open with a home base on this wiki page.

Please get in touch! Reply to this post or send me an email: groter <at> mozilla.com

Let’s together use this Lab as a way to architect an approach to participation that will have a massive positive impact on the web and on people’s lives!


Aaron Thornburgh: Mobile Minded

Imagining the future of New Tab for Firefox Android.

New Tab on Firefox for Android - CONCEPT

For over a year, the Content Services team has been busy evolving New Tab beyond a simple directory of recent, frequently visited sites. Once Firefox 39 lands on desktops later this summer, New Tab will include an updated interface, better page controls, and suggested content from our partners. With any luck, these and future product releases for the desktop browser will facilitate more direct, deeper relationships between brands and users. Most importantly (to me, anyway), richer controls on New Tab will also offer users more customization and better utility.

While this ongoing project work has certainly kept me busy, I can’t help but think about “the next big thing” whenever I have the chance. Lately, my mind has been preoccupied with a question that’s easy to ask, but much more difficult to answer:

How could Suggested Sites and more advanced controls work on mobile?

Providing Firefox Desktop users with more control over the sites they see on New Tab is relatively straightforward. The user is likely seated, focused entirely on the large screen in front of them, and is using a mouse pointer to activate hover states. These conditions are appropriate for linear, deliberate interactions. Therefore, New Tab on desktop can take advantage of the inherent screen real estate and mouse precision to support advanced actions like editing or adding sites. And since New Tab is literally one page, users can’t really get “lost”.

Mobile is altogether different. The user may be standing, sitting, or on the move. Their attention is divided. Screens are physically smaller, yet still support resolutions comparable to larger desktop displays. More importantly, there aren’t any hover states, and mobile interactions are imprecise (which is maybe why we call them “gestures”). Because of this imprecision on handheld screens, a tap often launches another view or state that may take the user to another destination – and after a few taps, the user may find themselves down a navigational rabbit hole that’s cumbersome to climb out of. Combined, these factors sometimes make it hard to perform complex actions on a mobile device. Likewise, any action made by the user should be minimal, simple to perform, and always contextual.

Taking all of the above into consideration, the following is an early peek at my vision for the New Tab experience on Firefox Android, with user control in mind.

+++++

New layout

New Tab on Android: Default

New Tab on either desktop or mobile devices has always been about one thing: Helping users navigate the Web more efficiently.

Today, New Tab shows a two-column grid of rectangles depicting Websites the user recently visited. While it may make each destination page easier to see, this is an inefficient use of space.

By shrinking the rectangles, more of them can fit onto the page; and by showing a logo instead of a Web page (when possible), identifying individual sites becomes easier too. These smaller “tiles” could even be grouped, just as the user would group apps on their device home screen.

Some folks may also be interested in discovering something entirely new on the Web. The future New Tab could serve suggested content for these users, based on their browsing history (and with permission, of course). But instead of commandeering a tile, suggestions could be delivered natively, and in line with the user’s history list.

Quick and painless suggestions

New Tab on Android: Suggested content

Viewing suggested content in other applications typically launches a new app or another tab in the user’s browser. Yet it only takes a second or two for the user to decide if the content is actually interesting to them. Personally, I think it would be better to give users a preview of the content, and then give them the option of dismissing it or continuing on without leaving the page they’re on.

Shown above, I imagine that after tapping a suggested item, New Tab could slide away to the left, revealing a preview of the suggested content beneath. If the user scrolls to view more content, a button then slides into view at the bottom of the screen, taking them to the suggested destination page on tap. If they aren’t interested in reading further, they would simply tap the navigation bar (below the search bar) to return to New Tab. Meanwhile, they never actually “left” the original screen.

Drag-and-drop Web addresses

New Tab on Android: Drag a site onto page

However, if the user does find the suggested content interesting, then they should be able to add the destination site directly to New Tab. One solution may be allowing users to drag-and-drop a Web address from the search bar and into New Tab. Perhaps by dragging the address onto another tile, users could even create a new group of related sites.

New Tab on Android: Adding a group

If a user doesn’t care for a particular suggestion, however, then deleting it – or any item on New Tab, for that matter – should be as easy as dragging it off either edge of the screen. Borrowing from another popular email application, swiping an item would reveal the word “delete” beneath, further reinforcing the action being performed. Naturally, this may sometimes happen by accident. As such, a temporary button could appear that allows the user to retrieve the item previously listed, then disappear after a few seconds.

DIY tiles

New Tab for Android: Edit site appearance

Alternatively, a user could add a new site directly from New Tab. Tapping the “+” button would launch a native keyboard and other controls, allowing them to search for a URL, define the tile’s appearance, or opt-out of related content suggestions. For extra clarification – and a little fun – the user would literally “build” their tile in real-time. Selecting any URL from the search bar dropdown would update the example tile shown, displaying a logo by default. Or, the user may choose instead to show an image of the destination homepage, or the last page they visited.

Next steps?

What I’ve proposed should be taken with a few grains of salt. For one, I believe that limiting the need for new, fancy gestures encourages adoption and usage. Likewise, many of these interactions aren’t especially novel. In fact, most of them are intended to mimic native functions a user may find elsewhere on his or her Android device. My ultimate goal here was to introduce new features available on Firefox that won’t require a steep learning curve.

For another, the possibilities for New Tab on mobile devices are numerous, and exciting to think about – but any big changes are a long ways away. By the time a new big update for Firefox on Android lands, this post will probably be totally irrelevant. But in the meantime, I hope to plant a few seeds that will take root and develop further as my team, and many others at Mozilla, contemplate the future of Firefox for the mobile Web.


Mike ConleyThe Joy of Coding (Ep. 9): More View Source Hacking!

In this episode1, I continued the work we had started in Episode 8, by trying to make it so that we don’t hit the network when viewing the source of a page in multi-process Firefox.

It was a little bit of a slog – after some thinking, I decided to undo some of the work we had done in the previous episode, and then I set up the messaging infrastructure for talking to the remote browser in the view source window.

I also rebased and landed a patch that we had written in the previous episode, after fixing up some nits2.

Then, I (re)-learned that flipping the “remote” attribute of a browser is not enough in order for it to run out-of-process; I have to remove it from the DOM, and then re-add it. And once it’s been re-added, I have to reload any frame scripts that I had loaded in the previous incarnation of the browser.
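
From memory, the dance looks roughly like this (a sketch, not the actual patch – the frame script URL here is made up):

function setRemoteness(browser, isRemote) {
  let parent = browser.parentNode;
  // Flipping the attribute alone isn't enough...
  browser.setAttribute("remote", isRemote ? "true" : "false");
  // ...the browser has to be removed from the DOM and re-added
  // for the change to take effect...
  parent.removeChild(browser);
  parent.appendChild(browser);
  // ...and any frame scripts have to be reloaded into the new
  // incarnation of the browser.
  browser.messageManager.loadFrameScript(
    "chrome://browser/content/my-frame-script.js", true);
}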

Anyhow, by the end of the episode, we were able to view the source from a remote browser inside a remote view source browser!3 That’s a pretty big deal!

Episode Agenda

References

Bug 1025146 – [e10s] Never load the source off of the network when viewing source

Notes


  1. A note that I also tried an experiment where I keep my camera running during the entire session, and place the feed into the bottom right-hand corner of the recording. It looks like there were some synchronization issues between audio and video, which are a bit irritating. Sorry about that! I’ll see what I can do for next time. 

  2. and dropping a nit having conversed with :gabor about it 

  3. We were still loading it off the network though, so I need to figure out what’s going on there in the next episode. 

Nick FitzgeraldA Compact Representation Of Captured Stack Frames For Spidermonkey

Last year, I implemented a new, compact representation for captured JavaScript stacks in SpiderMonkey1. This was done to support the allocation tracking I built into SpiderMonkey's Debugger API2 3, which saves the stack for potentially every Object allocation in the observed globals. That's a lot of stacks, and so it was important to make sure that we incurred as little overhead to memory usage as we could. Even more so, because we'd like to affect memory usage as little as possible while simultaneously instrumenting and measuring it.

The key observation is that while there may be many allocations, there are many fewer allocation sites. Most allocations happen at a few places. Thus, much information is duplicated across stacks. For example, if we run the esprima JavaScript parser on its own JavaScript source, there are approximately 54,700 total Object allocations, but just ~1,400 unique JS stacks at allocation time. There are only ~200 allocation sites if you consider only the youngest stack frame.

Consider the example below:

function a() {
  b();
}

function b() {
  c();
  d();
  e();
}

function c() { new Object; }
function d() { new Object; }
function e() { new Object; }

Disregarding compiler optimizations removing allocations, arguments objects, as well as engine internal allocations, calling the a function allocates three Objects. With arrows representing the "called by" relationship from a younger frame to its older, parent frame, this is the set of stacks that existed during an Object allocation:

c -> b -> a
d -> b -> a
e -> b -> a

Instead of duplicating all these saved stack frames, we use a technique called hash consing. Hash consing lets us share older tail stack frames between many unique stacks, similar to the way a trie shares string prefixes. With hash consing, SpiderMonkey internally represents those stacks like this:

c -> b -> a
     ^
d ---|
     |
e ---'

Each frame is stored in a hash table. When saving new stacks, we use this table to find pre-existing saved frames. If such an object is already extant, it is reused; otherwise a new saved frame is allocated and inserted into the table. During the garbage collector's sweep phase, we remove old entries from this hash table that are no longer used.
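
To make the idea concrete, here's a toy sketch of hash consing in JavaScript (illustrative only – the real implementation lives in C++ inside the engine and participates in GC):

// Toy hash consing: each saved frame is keyed by its own data plus
// the identity of its (already hash cons'd) parent frame.
const framePool = new Map();

function saveFrame(functionName, parent) {
  const key = functionName + "@" + (parent ? parent.id : "root");
  let frame = framePool.get(key);
  if (!frame) {
    frame = { id: key, functionName: functionName, parent: parent };
    framePool.set(key, frame);
  }
  return frame;
}

// The three example stacks share their "b -> a" tail, so building
// them allocates only five frame objects in total:
const a = saveFrame("a", null);
const b = saveFrame("b", a);
const c = saveFrame("c", b), d = saveFrame("d", b), e = saveFrame("e", b);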

I just landed a series of patches to make Error objects use these hash cons'd stacks internally, and lazily stringify the stack when the stack property is accessed4. Previously, every time an Error object was created, the stack string was eagerly computed. This change can result in significant memory savings for JS programs that allocate many Error objects. In one Real World™ test case5, we dropped memory usage from 1.2GB down to 167MB. Additionally, we use almost half of the memory that V8 uses on this same test case.
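
Conceptually, the lazy stringification is just a caching getter (again, a sketch, not SpiderMonkey's actual code):

// Sketch: compute the stack string on first access of `.stack`,
// then hand back the cached result on every later access.
function attachLazyStack(errorObject, savedFrame) {
  let cached = null;
  Object.defineProperty(errorObject, "stack", {
    get: function() {
      if (cached === null) {
        const parts = [];
        for (let frame = savedFrame; frame; frame = frame.parent) {
          parts.push(frame.functionName);
        }
        cached = parts.join("\n");
      }
      return cached;
    }
  });
}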

Many thanks to Jim Blandy for guidance and reviews. Thanks also to Boris Zbarsky, Bobby Holley, and Jason Orendorff for insight into implementing the security wrappers needed for passing stack frame objects between privileged and un-privileged callers. Thanks to Paolo Amadini for adding support for tracing stacks across asynchronous operations (more about this in the future). Thanks again to Boris Zbarsky for making DOMExceptions use these hash cons'd stacks. Another thanks to Jason Orendorff for reviewing my patches to make the builtin JavaScript Error objects use hash cons'd stacks behind the scenes.

In the future, I would like to make capturing stacks even faster by avoiding re-walking stack frames that we already previously walked capturing another stack6. The cheaper capturing a stack is, the more ubiquitous it can be throughout the platform, and that puts more context and debugging information into developers' hands.

Cameron Kaisersystemsetupusthebomb

This article has been superseded: Power Macs are vulnerable after all.

Oh, Apple. Ohhh, Apple. Today's rookie mistake is a system process called writeconfig that, through a case of the infamous confused deputy problem (it exists to allow certain operations by System Preferences and its command line equivalent systemsetup to be performed by admin users that are not root), can be coerced to allow any user to create arbitrary files with arbitrary permissions -- including setuid -- as root. That's, to use the technical term, bad.

This problem exists in 10.10, and is fixed in 10.10.3, but Apple will not fix it for 10.9 (or 10.8, or 10.7; the reporters confirmed it as far back as 10.7.2), citing technical limitations. Thanks, Apple!

The key is a privileged process called writeconfig which can be tricked into writing files anywhere using a cross-process attack. You would ask, reasonably, why such a process would exist in the first place, and the apparent reason is to allow these later versions of systemsetup et al to create user-specific Apache webserver configurations for guest users. If systemsetup doesn't have this functionality in your version of Mac OS X, then this specific vulnerability, at least, does not exist.

Fortunately, 10.6 and earlier do not support this feature; for that matter, there's no ToolLiaison or WriteConfigClient Objective-C class to exploit either. In fact, systemsetup isn't even in /usr/sbin in non-Server versions of OS X prior to 10.5: it's actually in /System/Library/CoreServices/RemoteManagement/ARDAgent.app/Contents/Support/, as a component of Apple Remote Desktop. I confirmed all this on my local 10.4 and 10.6 systems and was not able to replicate the issue with the given proof of concept or any reasonable variation thereof, so I am relieved to conclude that Power Macs and Snow Leopard do not appear to be vulnerable to this exploit. All your PowerPC-base systems are still belong to you.

Meanwhile, on the TenFourFox 38 front, IonPower is almost passing the first part of V8. Once I get Richards, DeltaBlue and Crypto working the rest of it should bust wide open. Speed numbers are right in line with what I'd expect based on comparison tests on my 2014 i7 MacBook Air. It's gonna be nice.

The Rust Programming Language BlogFearless Concurrency with Rust

The Rust project was initiated to solve two thorny problems:

  • How do you do safe systems programming?
  • How do you make concurrency painless?

Initially these problems seemed orthogonal, but to our amazement, the solution turned out to be identical: the same tools that make Rust safe also help you tackle concurrency head-on.

Memory safety bugs and concurrency bugs often come down to code accessing data when it shouldn't. Rust's secret weapon is ownership, a discipline for access control that systems programmers try to follow, but that Rust's compiler checks statically for you.

For memory safety, this means you can program without a garbage collector and without fear of segfaults, because Rust will catch your mistakes.

For concurrency, this means you can choose from a wide variety of paradigms (message passing, shared state, lock-free, purely functional), and Rust will help you avoid common pitfalls.

Here's a taste of concurrency in Rust:

  • A channel transfers ownership of the messages sent along it, so you can send a pointer from one thread to another without fear of the threads later racing for access through that pointer. Rust's channels enforce thread isolation.

  • A lock knows what data it protects, and Rust guarantees that the data can only be accessed when the lock is held. State is never accidentally shared. "Lock data, not code" is enforced in Rust.

  • Every data type knows whether it can safely be sent between or accessed by multiple threads, and Rust enforces this safe usage; there are no data races, even for lock-free data structures. Thread safety isn't just documentation; it's law.

  • You can even share stack frames between threads, and Rust will statically ensure that the frames remain active while other threads are using them. Even the most daring forms of sharing are guaranteed safe in Rust.

All of these benefits come out of Rust's ownership model, and in fact locks, channels, lock-free data structures and so on are defined in libraries, not the core language. That means that Rust's approach to concurrency is open ended: new libraries can embrace new paradigms and catch new bugs, just by adding APIs that use Rust's ownership features.

The goal of this post is to give you some idea of how that's done.

Background: ownership

We'll start with an overview of Rust's ownership and borrowing systems. If you're already familiar with these, you can skip the two "background" sections and jump straight into concurrency. If you want a deeper introduction, I can't recommend Yehuda Katz's post highly enough. And the Rust book has all the details.

In Rust, every value has an "owning scope," and passing or returning a value means transferring ownership ("moving" it) to a new scope. Values that are still owned when a scope ends are automatically destroyed at that point.

Let's look at some simple examples. Suppose we create a vector and push some elements onto it:

fn make_vec() {
    let mut vec = Vec::new(); // owned by make_vec's scope
    vec.push(0);
    vec.push(1);
    // scope ends, `vec` is destroyed
}

The scope that creates a value also initially owns it. In this case, the body of make_vec is the owning scope for vec. The owner can do anything it likes with vec, including mutating it by pushing. At the end of the scope, vec is still owned, so it is automatically deallocated.

Things get more interesting if the vector is returned or passed around:

fn make_vec() -> Vec<i32> {
    let mut vec = Vec::new();
    vec.push(0);
    vec.push(1);
    vec // transfer ownership to the caller
}

fn print_vec(vec: Vec<i32>) {
    // the `vec` parameter is part of this scope, so it's owned by `print_vec`

    for i in vec.iter() {
        println!("{}", i)
    }

    // now, `vec` is deallocated
}

fn use_vec() {
    let vec = make_vec(); // take ownership of the vector
    print_vec(vec);       // pass ownership to `print_vec`
}

Now, just before make_vec's scope ends, vec is moved out by returning it; it is not destroyed. A caller like use_vec then receives ownership of the vector.

On the other hand, the print_vec function takes a vec parameter, and ownership of the vector is transferred to it by its caller. Since print_vec does not transfer the ownership any further, at the end of its scope the vector is destroyed.

Once ownership has been given away, a value can no longer be used. For example, consider this variant of use_vec:

fn use_vec() {
    let vec = make_vec();  // take ownership of the vector
    print_vec(vec);        // pass ownership to `print_vec`

    for i in vec.iter() {  // continue using `vec`
        println!("{}", i * 2)
    }
}

If you feed this version to the compiler, you'll get an error:

error: use of moved value: `vec`

for i in vec.iter() {
         ^~~

The compiler is saying vec is no longer available; ownership has been transferred elsewhere. And that's very good, because the vector has already been deallocated at this point!

Disaster averted.

Background: borrowing

The story so far isn't totally satisfying, because it's not our intent for print_vec to destroy the vector it was given. What we really want is to grant print_vec temporary access to the vector, and then continue using the vector afterwards.

This is where borrowing comes in. If you have access to a value in Rust, you can lend out that access to the functions you call. Rust will check that these leases do not outlive the object being borrowed.

To borrow a value, you make a reference to it (a kind of pointer), using the & operator:

fn print_vec(vec: &Vec<i32>) {
    // the `vec` parameter is borrowed for this scope

    for i in vec.iter() {
        println!("{}", i)
    }

    // now, the borrow ends
}

fn use_vec() {
    let vec = make_vec();  // take ownership of the vector
    print_vec(&vec);       // lend access to `print_vec`
    for i in vec.iter() {  // continue using `vec`
        println!("{}", i * 2)
    }
    // vec is destroyed here
}

Now print_vec takes a reference to a vector, and use_vec lends out the vector by writing &vec. Since borrows are temporary, use_vec retains ownership of the vector; it can continue using it after the call to print_vec returns (and its lease on vec has expired).

Each reference is valid for a limited scope, which the compiler will automatically determine. References come in two flavors:

  • Immutable references &T, which allow sharing but not mutation. There can be multiple &T references to the same value simultaneously, but the value cannot be mutated while those references are active.

  • Mutable references &mut T, which allow mutation but not sharing. If there is an &mut T reference to a value, there can be no other active references at that time, but the value can be mutated.

Rust checks these rules at compile time; borrowing has no runtime overhead.

Why have two kinds of references? Consider a function like:

fn push_all(from: &Vec<i32>, to: &mut Vec<i32>) {
    for i in from.iter() {
        to.push(*i);
    }
}

This function iterates over each element of one vector, pushing it onto another. The iterator keeps a pointer into the vector at the current and final positions, stepping one toward the other.

What if we called this function with the same vector for both arguments?

push_all(&vec, &mut vec)

This would spell disaster! As we're pushing elements onto the vector, it will occasionally need to resize, allocating a new hunk of memory and copying its elements over to it. The iterator would be left with a dangling pointer into the old memory, leading to memory unsafety (with attendant segfaults or worse).

Fortunately, Rust ensures that whenever a mutable borrow is active, no other borrows of the object are active, producing the message:

error: cannot borrow `vec` as mutable because it is also borrowed as immutable
push_all(&vec, &mut vec);
                    ^~~

Disaster averted.

Message passing

Now that we've covered the basic ownership story in Rust, let's see what it means for concurrency.

Concurrent programming comes in many styles, but a particularly simple one is message passing, where threads or actors communicate by sending each other messages. Proponents of the style emphasize the way that it ties together sharing and communication:

Do not communicate by sharing memory; instead, share memory by communicating.

--Effective Go

Rust's ownership makes it easy to turn that advice into a compiler-checked rule. Consider the following channel API (channels in Rust's standard library are a bit different):

fn send<T: Send>(chan: &Channel<T>, t: T);
fn recv<T: Send>(chan: &Channel<T>) -> T;

Channels are generic over the type of data they transmit (the <T: Send> part of the API). The Send part means that T must be considered safe to send between threads; we'll come back to that later in the post, but for now it's enough to know that Vec<i32> is Send.

As always in Rust, passing in a T to the send function means transferring ownership of it. This fact has profound consequences: it means that code like the following will generate a compiler error.

// Suppose chan: Channel<Vec<i32>>

let mut vec = Vec::new();
// do some computation
send(&chan, vec);
print_vec(&vec);

Here, the thread creates a vector, sends it to another thread, and then continues using it. The thread receiving the vector could mutate it as this thread continues running, so the call to print_vec could lead to a race condition or, for that matter, a use-after-free bug.

Instead, the Rust compiler will produce an error message on the call to print_vec:

Error: use of moved value `vec`

Disaster averted.

Locks

Another way to deal with concurrency is by having threads communicate through passive, shared state.

Shared-state concurrency has a bad rap. It's easy to forget to acquire a lock, or otherwise mutate the wrong data at the wrong time, with disastrous results -- so easy that many eschew the style altogether.

Rust's take is that:

  1. Shared-state concurrency is nevertheless a fundamental programming style, needed for systems code, for maximal performance, and for implementing other styles of concurrency.

  2. The problem is really about accidentally shared state.

Rust aims to give you the tools to conquer shared-state concurrency directly, whether you're using locking or lock-free techniques.

In Rust, threads are "isolated" from each other automatically, due to ownership. Writes can only happen when the thread has mutable access, either by owning the data, or by having a mutable borrow of it. Either way, the thread is guaranteed to be the only one with access at the time. To see how this plays out, let's look at locks.

Remember that mutable borrows cannot occur simultaneously with other borrows. Locks provide the same guarantee ("mutual exclusion") through synchronization at runtime. That leads to a locking API that hooks directly into Rust's ownership system.

Here is a simplified version (the standard library's is more ergonomic):

// create a new mutex
fn mutex<T: Send>(t: T) -> Mutex<T>;

// acquire the lock
fn lock<T: Send>(mutex: &Mutex<T>) -> MutexGuard<T>;

// access the data protected by the lock
fn access<T: Send>(guard: &mut MutexGuard<T>) -> &mut T;

This lock API is unusual in several respects.

First, the Mutex type is generic over a type T of the data protected by the lock. When you create a Mutex, you transfer ownership of that data into the mutex, immediately giving up access to it. (Locks are unlocked when they are first created.)

Later, you can lock to block the thread until the lock is acquired. This function, too, is unusual in providing a return value, MutexGuard<T>. The MutexGuard automatically releases the lock when it is destroyed; there is no separate unlock function.

The only way to access the data is through the access function, which turns a mutable borrow of the guard into a mutable borrow of the data (with a shorter lease):

fn use_lock(mutex: &Mutex<Vec<i32>>) {
    // acquire the lock, taking ownership of a guard;
    // the lock is held for the rest of the scope
    let mut guard = lock(mutex);

    // access the data by mutably borrowing the guard
    let vec = access(&mut guard);

    // vec has type `&mut Vec<i32>`
    vec.push(3);

    // lock automatically released here, when `guard` is destroyed
}

There are two key ingredients here:

  • The mutable reference returned by access cannot outlive the MutexGuard it is borrowing from.

  • The lock is only released when the MutexGuard is destroyed.

The result is that Rust enforces locking discipline: it will not let you access lock-protected data except when holding the lock. Any attempt to do otherwise will generate a compiler error. For example, consider the following buggy "refactoring":

fn use_lock(mutex: &Mutex<Vec<i32>>) {
    let vec = {
        // acquire the lock
        let mut guard = lock(mutex);

        // attempt to return a borrow of the data
        access(&mut guard)

        // guard is destroyed here, releasing the lock
    };

    // attempt to access the data outside of the lock.
    vec.push(3);
}

Rust will generate an error pinpointing the problem:

error: `guard` does not live long enough
access(&mut guard)
            ^~~~~

Disaster averted.

Thread safety and "Send"

It's typical to distinguish some data types as "thread safe" and others not. Thread safe data structures use enough internal synchronization to be safely used by multiple threads concurrently.

For example, Rust ships with two kinds of "smart pointers" for reference counting:

  • Rc<T> provides reference counting via normal reads/writes. It is not thread safe.

  • Arc<T> provides reference counting via atomic operations. It is thread safe.

The hardware atomic operations used by Arc are more expensive than the vanilla operations used by Rc, so it's advantageous to use Rc rather than Arc. On the other hand, it's critical that an Rc<T> never migrate from one thread to another, because that could lead to race conditions that corrupt the count.

Usually, the only recourse is careful documentation; most languages make no semantic distinction between thread-safe and thread-unsafe types.

In Rust, the world is divided into two kinds of data types: those that are Send, meaning they can be safely moved from one thread to another, and those that are !Send, meaning that it may not be safe to do so. If all of a type's components are Send, so is that type -- which covers most types. Certain base types are not inherently thread-safe, though, so it's also possible to explicitly mark a type like Arc as Send, saying to the compiler: "Trust me; I've verified the necessary synchronization here."

Naturally, Arc is Send, and Rc is not.

We already saw that the Channel and Mutex APIs work only with Send data. Since they are the point at which data crosses thread boundaries, they are also the point of enforcement for Send.

Putting this all together, Rust programmers can reap the benefits of Rc and other thread-unsafe types with confidence, knowing that if they ever do accidentally try to send one to another thread, the Rust compiler will say:

`Rc<Vec<i32>>` cannot be sent between threads safely

Disaster averted.

Sharing the stack: "scoped"

So far, all the patterns we've seen involve creating data structures on the heap that get shared between threads. But what if we wanted to start some threads that make use of data living in our stack frame? That could be dangerous:

fn parent() {
    let mut vec = Vec::new();
    // fill the vector
    thread::spawn(|| {
        print_vec(&vec)
    })
}

The child thread takes a reference to vec, which in turn resides in the stack frame of parent. When parent exits, the stack frame is popped, but the child thread is none the wiser. Oops!

To rule out such memory unsafety, Rust's basic thread spawning API looks a bit like this:

fn spawn<F>(f: F) where F: 'static, ...

The 'static constraint is a way of saying, roughly, that no borrowed data is permitted in the closure. It means that a function like parent above will generate an error:

error: `vec` does not live long enough

essentially catching the possibility of parent's stack frame popping. Disaster averted.

But there is another way to guarantee safety: ensure that the parent stack frame stays put until the child thread is done. This is the pattern of fork-join programming, often used for divide-and-conquer parallel algorithms. Rust supports it by providing a "scoped" variant of thread spawning:

fn scoped<'a, F>(f: F) -> JoinGuard<'a> where F: 'a, ...

There are two key differences from the spawn API above:

  • The use of a parameter 'a, rather than 'static. This parameter represents a scope that encompasses all the borrows within the closure, f.

  • The return value, a JoinGuard. As its name suggests, JoinGuard ensures that the parent thread joins (waits on) its child, by performing an implicit join in its destructor (if one hasn't happened explicitly already).

Including 'a in JoinGuard ensures that the JoinGuard cannot escape the scope of any data borrowed by the closure. In other words, Rust guarantees that the parent thread waits for the child to finish before popping any stack frames the child might have access to.

Thus by adjusting our previous example, we can fix the bug and satisfy the compiler:

fn parent() {
    let mut vec = Vec::new();
    // fill the vector
    let guard = thread::scoped(|| {
        print_vec(&vec)
    });
    // guard destroyed here, implicitly joining
}

So in Rust, you can freely borrow stack data into child threads, confident that the compiler will check for sufficient synchronization.

Data races

At this point, we've seen enough to venture a strong statement about Rust's approach to concurrency: the compiler prevents all data races.

A data race is any unsynchronized, concurrent access to data involving a write.

Synchronization here includes things as low-level as atomic instructions. Essentially, this is a way of saying that you cannot accidentally "share state" between threads; all (mutating) access to state has to be mediated by some form of synchronization.

Data races are just one (very important) kind of race condition, but by preventing them, Rust often helps you prevent other, more subtle races as well. For example, it's often important that updates to different locations appear to take place atomically: other threads see either all of the updates, or none of them. In Rust, having &mut access to the relevant locations at the same time guarantees atomicity of updates to them, since no other thread could possibly have concurrent read access.

It's worth pausing for a moment to think about this guarantee in the broader landscape of languages. Many languages provide memory safety through garbage collection. But garbage collection doesn't give you any help in preventing data races.

Rust instead uses ownership and borrowing to provide its two key value propositions:

  • Memory safety without garbage collection.
  • Concurrency without data races.

The future

When Rust first began, it baked channels directly into the language, taking a very opinionated stance on concurrency.

In today's Rust, concurrency is entirely a library affair; everything described in this post, including Send, is defined in the standard library, and could be defined in an external library instead.

And that's very exciting, because it means that Rust's concurrency story can endlessly evolve, growing to encompass new paradigms and catch new classes of bugs. Libraries like syncbox and simple_parallel are taking some of the first steps, and we expect to invest heavily in this space in the next few months. Stay tuned!

Air MozillaCantina Speaker: Jason MacPherson

Cantina Speaker: Jason MacPherson For April, we're welcoming Jason MacPherson, Chief Scientist from Culture Amp, the folks who are running our employee engagement survey - which kicks off earlier...

The Servo BlogThis Week In Servo 30

In the past week, we merged 66 pull requests.

We now use homu to queue pull requests and coordinate with buildbot, in place of bors. Homu is a bit more efficient when it comes to API usage, and responds immediately to changes (bors needs to wait till it can hit the queue again after three minutes). It’s also got a bunch of other useful features like prioritization and efficient usage of build machines when retrying on a failure. You can try it out for yourself!

Last week’s post was discussed on Hacker News.

Notable additions

New contributors

Meeting

Minutes

We had some issues with James’ CSS test PR breaking GitHub’s API, and the fallout on our CI. At the time of writing, the issue seems fixed. There were a couple of announcements regarding the switch to homu and the new CSS tests, along with some discussion on the growing pull request backlog. We’re moving all our submodules to crates.io, with many of them running on Rust beta – please help if you can!

Will Kahn-Greenepyvideo status: April 9th, 2015

What is pyvideo.org

pyvideo.org is an index of Python-related conference and user-group videos on the Internet. Saw a session you liked and want to share it? It's likely you can find it, watch it, and share it with pyvideo.org.

Status

It's been about a year since my last status report. The last year was tough for a bunch of reasons I don't want to go into. On top of that, I was pretty burned out and I had a ton of other stuff I needed to do. So I didn't do much with pyvideo for a long time.

However, we had a lot of help from other people. Sheila Miguez took over adminning the site and added a bunch of conferences. Paul Collins did a ton of work fixing technical debt issues and cleaning up the richard codebase. Trey Hunner got richard working with Python 3. Spencer Herzberg, Magnun Leno, Reiner Gerecke, Wes Turner, Benjamin Bertrand, and Burak Guven fixed a bunch of bugs, added new functionality and cleaned up documentation.

I started to get back to things mid-March. I started with the ansible work that Sheila did and tweaked it so we could use it to deploy pyvideo. Then I redid all the infrastructure, updated richard to Django 1.7, fixed a bunch of django-browserid related issues, nixed some code and did some other cleanup. Then I had a few days left before PyCon US 2015 and I decided to throw together a rough playlist implementation.

Playlists are something Sheila and I have talked about for a while. It's great that pyvideo is an index of videos, but until now there's been no good way of collecting a subset of videos you think are interesting to watch later. There's been no good way of curating a group of videos and then sharing that list with other people. Perhaps you want to help people learn Flask. Perhaps you want to share videos about debugging in Python. Perhaps you want to collect videos related to a class you're teaching. Perhaps you want everyone to experience Erik Rose: Man of Mystery.

I landed a rough implementation of playlists today. It's not perfect; it's missing some key things. I wrote up some issues in the richard issue tracker for features that should get implemented to make it really useful. Even without some of those things, it's useful today.

Want to try it out?

Sign in to pyvideo. You'll see a "My playlists" link in the navbar at the top. Go to that page and create a playlist.

Now you can go to any video page on the site and add that video to your playlists.

All playlists are public. You can share the url of your playlists with other people.

Next step is to implement some of the other features listed in the richard issue tracker. If there are other things you want to see or you bump into problems, toss an issue in the tracker.

I hope you find it helpful!

Air MozillaGerman speaking community bi-weekly meeting

German speaking community bi-weekly meeting Bi-weekly meeting of the German-speaking community.

Air MozillaCommunity Education Call

Community Education Call The Community Education Working Group exists to merge ideas, opportunities, efforts and impact across the entire project through Education & Training.

Soledad PenadesGetting logs of your Firefox OS device

Often you want to output debugging data from your app, but the space on the screen is limited! And perhaps you don’t want to connect to the app with WebIDE and launch the debugger, etc., etc…

One solution is to use any of the console.* functions in your code. They will be sent to the device’s log. For example:

console.log('hello friends!');

and you can also use console.error, console.info, etc.
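
If you know you'll be grepping the log later (see below), it can also help to tag your messages consistently. A tiny, purely illustrative wrapper does the trick:

// Hypothetical helper: prefix every message with a stable tag so
// it's easy to pick out of the logcat firehose later.
var TAG = 'HelloApp';
function log() {
  var args = Array.prototype.slice.call(arguments);
  console.log.apply(console, [TAG + ':'].concat(args));
}

log('hello friends!'); // logged as "HelloApp: hello friends!"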

Then, if you have the adb utility installed, you can get instant access to those logs when your device is connected to your computer with USB.

You can get adb from the Android Developer Toolkit or if you just want that binary, with package managers such as brew on a Mac:

brew install android-platform-tools

Once adb is installed, if you execute this in a terminal:

adb logcat

it will start to print everything that is sent to the device’s log. This includes your app’s messages AND EVERYTHING ELSE! It’s like a huge kitchen sink where all the debug data goes.

For example:

I/HelloApp(21456): Content JS LOG: hello friends
I/HelloApp(21456):     at hello (app://5ae38330-dde0-11e4-9397-fd926d95d498/js/app.js:87:4)
D/wpa_supplicant(  900): RX ctrl_iface - hexdump(len=11): 53 49 47 4e 41 4c 5f 50 4f 4c 4c
D/wpa_supplicant(  900): wlan0: Control interface command 'SIGNAL_POLL'

Although sometimes you want to see the whole bunch of messages, more often than not you’re just interested in your app’s messages. You can use grep to filter the output of adb logcat. Stop the current process with CTRL+C and then type this:

adb logcat | grep HelloApp

The result:

I/HelloApp(21456): Content JS LOG: hello friends
I/HelloApp(21456):     at hello (app://5ae38330-dde0-11e4-9397-fd926d95d498/js/app.js:87:4)

What we’re saying is: only show me lines that contain HelloApp. This depends on your app’s name, so adjust accordingly—if you enter the wrong argument for grep, you won’t see anything at all ;-)

And what if you connect multiple devices…?

When you connect multiple devices and run adb logcat, you get this message:

error: more than one device and emulator

adb doesn’t know what you actually want to see, so it just resorts to not showing you anything.

To tell it what you want to look at, you need to find the label for each device first:

adb devices

This will produce a list similar to this:

List of devices attached
3739ce99        device
356cd099        device

Where the first column represents each device’s label. Then you can use that to filter when calling adb logcat, like this:

adb -s 3739ce99 logcat | grep HelloApp

You could also open another terminal window and run logcat there for another device, simultaneously.

Saving a log to a file

Often when you file a bug you’re asked to produce a log. You can create one by redirecting the output of adb to a file! The way to do this is to go to a terminal, and type:

adb logcat > debug.txt
# or also...
adb -s 356cd099 logcat > debug.txt

Instead of outputting the logs to the screen, they will be stored in debug.txt until you press CTRL+C. You should run the steps to reproduce the bug while adb is logging to debug.txt, and then stop it afterwards, and then you can attach said debug.txt file to the bug in Bugzilla – it contains valuable debug information, especially if you don’t filter by your app name!

And there are more useful adb bits and pieces on this MDN page.

Happy logging ;-)


Emma IrwinOn being forces of good for each other

This is part two of two – on recognition.

My last post focused on personalized recognition design. We need to be deliberate about designing recognition that’s valuable to community (staff and volunteers), recognition that aligns with participation goals, recognition that provides a sustainable vision for the future between project and person.

If that sounds like a big task, it’s actually not, compared with the scale of what we need to accomplish. The truly big task is to make the Mozilla community a place worth hanging your hat. Hoping you’ve read Leslie’s “A place to hang my hat”, here she is on surfacing the accomplishments of others:

And I want the same things for everyone I know. For all those folks who pour their heart into things and are unsung heroes. For people who give freely of their time and knowledge, and don’t expect a big party in return, just respect for having contributed. I’d rather none of us had to spend the time proving what we know.

(And this is especially true for women.)

I’d rather we all spent some time concentrating our energies on being forces for good for each other.

I watched the huge and positive response to Leslie’s post with interest – because awesome. There were tons of Mozilla tweets for this initiative #LABHR, but then – none in the last month or so. Why is that? Perhaps because the rush of participation felt good, but we fail to personalize why surfacing the efforts of others on a regular basis matters.

Possibly, many of us are in a privileged place of already being appreciated; and because the consequences are silent, it does little to erode our personal glow. Perhaps we feel we’re doing enough (and some teams, to be fair, do this really well already) or we’re just bad at time management – I’m sure there are a few reasons, but I know it’s not because we’re out of people who need to be recognized :)

Here’s my proposal: Let’s reboot, or actually *start* the #mozlove idea that everyone loved (posted to the CBT list) earlier this year, and breathe life into it:

Find a community member (volunteer or staff) you admire and write/blog about their impact on the project, perhaps on you personally: encouraging stories about people who you haven’t seen highlighted previously – as inspirations for 2015. Tag your shares with #mozlove

Let’s move from being ‘bandwagon-y’ about appreciation to being active participants and believers in surfacing the accomplishments of others.


There are lots of suggestions for how to do that on Leslie’s blog post, but I want to emphasize one key suggestion:

Ultra-mega-bonus points if your first few write ups are for people who are not like you.

I’ll share what I’ve been doing in the past month (so it’s possible!)

  • I added a bi-weekly, recurring calendar event – half an hour to catch up on appreciation. This month I have written 2 LinkedIn recommendations. I could not believe some of the amazing volunteers with no recommendations at all.
  • I am slowly writing blog posts profiling a few young contributors at Mozilla who work tirelessly for Community IT (posting 1 later this week).  I wrote one on Nigel earlier this year as well. Here’s a template I use in case it helps you.
  • I’m trying to work appreciation into my workflow – adding meaningful comments to issue trackers, and tweeting when I know it’s OK to do so.

Feels good, and important. #LABHR and #mozlove – hope to see you there.

Firefox Image Credit – Faisal!

Emma IrwinPersonalizing Community Recognition

This is part one of two – on recognition.

Something I’m thinking a LOT about these days is community recognition:  meaningful and personalized recognition.  Especially for community education, and especially to celebrate milestones of success navigating contribution ladders/pathways.

Earlier this year, we sent out a survey asking Mozilla Community (staff and volunteers) to evaluate, from a provided list,  methods of recognition they most valued. Interestingly, no single method had more than 75% approval, with most hovering around 30% negative response. From digital badges, to shout-outs and printable-certificates there was no clear winner, and I think this is a compelling thing to solve for.

Early thinking around this includes solutions that add ‘preferred recognition’ as a choice in our Mozillians.org and/or Reps profiles, so that when we want to acknowledge someone’s accomplishments, we can literally ‘look up’ what is most valued by that individual, and do that thing. I’m also mid-journey with community infrastructure friends to add badges to our profiles – which I hope will finally help Mozillians share those badges they’ve been collecting.

The panic starts when we add the word ‘scalable’.

How can we design scalable, personalized recognition when we have so many amazing people moving the needle every day? When those people are in tiny corners of the project, or lost in a sea of greater community initiatives – how can we ever, ever manage to make recognition part of our reputation?

Well I’ll tell you how we can do it: stop thinking of recognition as this huge thing we need to set aside our precious time to do. That’s not to say all of what we’re building doesn’t need dedicated planning – it does – but the majority of what we can accomplish comes from making recognition part of our workflow.

My next post will talk a bit about that, and how I hope a rebooted version of the #mozlove initiative  can help.  But first read this blog post from Leslie Hawthorn, and you’ll see where I’m going.

If you are working on recognition, or have thoughts, ideas and inventions that relate to personalized recognition I would love to hear from you!

Air MozillaKids' Vision - Mentorship Series

Kids' Vision - Mentorship Series Mozilla hosts Kids Vision Bay Area Mentor Series

PomaxTouch events, Reactjs, and Android. Good luck.

We're doing a bit of prototype work over at the Mozilla Foundation, playing around with what possible future ways of interacting with makable web things could look like (can that be more vague?), and one of these prototypes takes the shape of dropping HTML elements onto a page and, photo book style, moving them around (or rather, moving, rotating, and scaling, using CSS3) without necessarily affecting the markup ordering.

And that works well! We're currently exploring React.js (which comes with a refreshing look at what programming for the web can look like) and so I figured I'd try my hand at the idea by writing a React component/mixin that could be used in conjunction with arbitrary content to magically make it movable, rotatable and scalable. And in desktop browsers, it works really well!

Unfortunately, we also need things to work on mobile devices, where there are no mouse cursors, and instead you have to work with touch. Touch changes some things (the CSS :hover state, for instance, becomes meaningless) but for the most part if your code worked with mousedown, mousemove and mouseup, those map fairly straightforwardly to touchstart, touchmove and touchend. Add the touch listeners and make them do the same as the mouse listeners, and done. Or, you would be, if these generated the same data. They don't, so you have a bit more work to do for getting the correct coordinates out of the touch events (mouse events have evt.clientX, touch events are an array of possible multitouch, so you end up with evt.touches[0].pageX, for instance). Still, entirely doable.

Unfortunately, things get weird when you do these things and then try to use them on, say, Android. Android has bugs when it comes to touch events. Outside of the expected, that is. First, it turns out that Android won't fire off touchend events, even if they occur, if you never told Android to "prevent the default behaviour" on a touchstart or touchmove. Why? Because if you don't, Android will treat the finger gesture first as what you needed to do, and then as "oh but the default behaviour should still happen, the user wants to scroll the page" and then the touchend that stops Android from listening to page scroll gets consumed and never sent on to your code. If you didn't know about that, you're wasting quite a bit of time figuring out what the heck is going on.
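
A minimal sketch of the workaround (handler names are arbitrary):

// Without these preventDefault() calls, Android treats the gesture
// as a page scroll and the touchend event never reaches our code.
function touchStart(evt) {
  evt.preventDefault();
  // ...handle the start of the gesture...
}

function touchMove(evt) {
  evt.preventDefault();
  // ...handle the move...
}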

But now you know about that, so adding evt.preventDefault() in your start and move handling should fix things, right? Well... no. It turns out there's another, far more magical, feature in Android that does what should reasonably be impossible in any programming setting. Have a look at this code:

var element = ...;
element.addEventListener("touchstart", touchStart);
element.addEventListener("touchmove", touchMove);
element.addEventListener("touchend", touchEnd);

function touchStart(evt) {
  console.log("touch started");
}

function touchMove(evt) {
  console.log("touch move");
}

function touchEnd(evt) {
  console.log("touch ended");
}

This works great. Loading pages with code like this on Android will show that all three events fire if you put down your finger, move it around a bit, and take it off the screen again. But, we might want to know where all those events happen, so let's write a helper function and modify the handlers:

function fixEvtCoords(evt) {
  evt.clientX = evt.clientX || evt.touches[0].pageX;
  evt.clientY = evt.clientY || evt.touches[0].pageY;
}
...
function touchStart(evt) {
  fixEvtCoords(evt);
  console.log("touch started at " + evt.clientX + "," + evt.clientY);
}

function touchMove(evt) {
  fixEvtCoords(evt);
  console.log("touch move at " + evt.clientX + "," + evt.clientY);
}

function touchEnd(evt) {
  fixEvtCoords(evt);
  console.log("touch ended at " + evt.clientX + "," + evt.clientY);
}

That looks perfectly reasonable, and start and move now show the coordinates at which the events are generated. But touchend no longer works... what? It gets more interesting: what if we don't fix the coordinates for the end event?

function touchEnd(evt) {
  console.log("touch ended at " + evt.clientX + "," + evt.clientY);
}

This logs "touch ended at undefined,undefined", which makes sense because touch events don't have the .clientX and .clientY properties. So, let's change those to the real thing:

function touchEnd(evt) {
  console.log("touch ended at " + evt.touches[0].pageX + "," + evt.touches[0].pageY);
}

This won't actually do anything. There is nothing in .touches[0] anymore, so there will be a JS error and the code won't run. So what do we do? The simplest solution is to rely on the fact that we're only using single finger interaction, and just assume that if a touchend fired at all, we no longer have any fingers on the screen:

function touchEnd(evt) {
  console.log("touch ended");
}

This is weird for several reasons: if we want to deal with multi touch, how do we track which finger just stopped being on the screen? You'd be tempted to try something like this:

function touchEnd(evt) {
  console.log("touch ended", JSON.stringify(evt, false, 2));
}

This would get us an easy-to-debug bit of string data telling us what's in that event – but if we do this, we get more JS errors, and the log call will throw instead of logging useful data.

The worst part is that you just read all of this in a matter of minutes, but discovering it, if you don't really work with Android all that much, is pretty much hours and hours of trying things, not understanding why they work on desktop but not on Android, trying more things, case reducing, starting from scratch, noticing things do work, slowly building things back up, noticing they break at some point, going back to where things weren't broken, and slowly figuring out what's going wrong as you home in on specific calls and patterns that just don't seem to work.

Over the course of 6 hours I went from not knowing these things to knowing both how to deal with this in the future, as well as how to write my React code in such a way that touch events will propagate properly. Fun fact: if you're using React in an Android WebView "browser", there are some things you can do that work perfectly fine on desktop and not at all on Android.

For instance, React has onTouchStart, onTouchMove and onTouchEnd component event handlers, with augmented events to make sure every browser will work the same. That's great, except it has bugs. The event augmentation does something (and without looking at the React source code, I have no real idea what that something is) that breaks event propagation. So, this code doesn't work:

var Positionable = ... ({
  render: function() {
    return (
      <div onTouchStart={this.handleTouchStart}>
        <RotationControls />
        <ScaleControls />
      </div>
    );
  }
})

var RotationControls = ... ({
  render: function() {
    return (
      <div onTouchStart={this.handleTouchStart}>
        ...
      </div>
    );
  }
})

var ScaleControls = ... ({
  render: function() {
    return (
      <div onTouchStart={this.handleTouchStart}>
        ...
      </div>
    );
  }
})

You might think it would, but nope: not on Android. While this works fine on desktop, on Android tapping the RotationControls element actually sends the event to the higher-level Positionable instead. No matter how much you tap, that touch event is not going to make it into the handler defined in RotationControls to rotate our element. So, ultimately, despite React having code in place to make working with touch events nicer, we actually need to go back to the drawing board and use the good old low-level addEventListener('touchstart', ...) and friends in order to make sure that nothing interferes with event propagation.

var TouchMixin = {
  componentDidMount: function() {
    var localNode = this.getDOMNode();
    localNode.addEventListener('touchstart', this.handleTouchStart);
  },
  componentWillUnmount: function() {
    var localNode = this.getDOMNode();
    localNode.removeEventListener('touchstart', this.handleTouchStart);
  }
};

var Positionable = ... ({
  mixins: [
    TouchMixin
  ],
  render: function() {
    return (
      <div>
        <RotationControls />
        <ScaleControls />
      </div>
    );
  }
})

With similar changes in RotationControls and ScaleControls. Fun!

But wait, there's more. The component I'm writing also has a ZIndexController, which gives you two buttons for changing a number, and that number gets communicated up, and used as z-index for the element on the page:

var Positionable = ... ({
  render: function() {
    return (
      <div>
        <RotationControls />
        <ScaleControls />
        <ZIndexController />
        { this.props.children }
      </div>
    );
  }
})

var ZIndexController = ... ({
  getInitialState: function() {
    return { zIndex: this.props.zIndex || 0 };
  },
  render: function() {
    return (
      <div className="zindex-controller">
        layer position:
        <span className="zmod left" onClick={this.zDown}>◀</span>
        { this.state.zIndex }
        <span className="zmod right" onClick={this.zUp}>▶</span>
      </div>
    );
  },
  zUp: function(evt) {
    evt.stopPropagation();
    this.setState({ zIndex: this.state.zIndex + 1 }, function() {
      this.props.onChange(this.state.zIndex);
    });
  },
  zDown: function(evt) {
    evt.stopPropagation();
    this.setState({ zIndex: Math.max(0, this.state.zIndex - 1) }, function() {
      this.props.onChange(this.state.zIndex);
    });
  }
})

Again, this works great in desktop browsers, but does not work on Android. What's going on? As it turns out, React events like onClick work because React intercepts all events at the document level and then routes them on further based on which things registered first, rather than "the most specific thing first". We can try to work around this, to try to force the ordering that we want, but that's just making a bad situation worse should more touch propagation need to happen in the future. Instead, the only workable solution that I've found is to just say "alright, forget it, make touch regions non-overlapping". As such, rather than a positionable thing with rotation and scale controls, the solution is to have an inert "thing" with positioning, rotation, and scale controls instead. That way the touch events for moving the element around do not overlap with, for instance, the z-index controls, and things work. It's less nice, but the only tractable solution:

var Positionable = ... ({
  render: function() {
    return (
      <div>
        <PlacementController />
        <RotationControls />
        <ScaleControls />
        <ZIndexController />
        { this.props.children }
      </div>
    );
  }
})

And with that, things work. New technologies have a great way of bringing back the pain you thought you'd left behind.

Christian HeilmannKeeping it simple: coding a carousel

One of the things that drives me crazy in our “modern development” world is our fetish for over-complicating things. We build solutions, and then we add layers and layers of complexity for the sake of “making them easier to maintain”. In many cases this is a fool’s errand: the layers of complexity, and with them the necessary documentation, keep people from using our solutions. Instead, in many cases, people build their own, simpler versions of the same thing and call them superior. Until that solution gets disputed, and the whole dance happens once again.

In this article I want to approach the creation of a carousel differently: by keeping it as simple as possible whilst not breaking backwards compatibility or adding any dependencies. Things break on the web: JavaScript might not be loaded, and CSS capabilities vary from browser to browser. It is not up to us to tell the visitor what browser to use. And as good developers we shouldn’t create interfaces that look interactive but do nothing when you click them.

So, let’s have a go at building a very simple carousel that works across browsers without going overboard. You can see the result and get the code on GitHub.

The HTML structure of a carousel

Let’s start very simple: a carousel in essence is an ordered list in HTML. Thus, the basic HTML is something like this:

<div class="carouselbox">
  <ol class="content">
    <li>1</li>
    <li>2</li>
    <li>3</li>
    <li>4</li>
  </ol>
</div>

Using this, and a bit of CSS we have something that works and looks good. This is the base we are starting from.

The basic CSS

The CSS used here is simple, but hints at some of the functionality we rely on later:

.carouselbox {
  font-family: helvetica,sans-serif;
  font-size: 14px;
  width: 100px;
  position: relative;
  margin: 1em;
  border: 1px solid #ccc;
  box-shadow: 2px 2px 10px #ccc;
  overflow: hidden;
}
.content {
  margin: 0;
  padding: 0;
}
.content li {
  font-size: 100px;
  margin: 0;
  padding: 0;
  width: 100%;
  list-style: none;
  text-align: center;
}

The main thing here is to position the carousel box relatively, allowing us to position the list items absolutely inside it. This is how we’ll achieve the effect. The hidden overflow ensures that later on only the current item of the carousel will be shown. As there is no height set on the carousel and the items aren’t positioned yet, we now see all the items.

All carousel items visible

The carousel visuals in CSS

A lot of carousel scripts you can find will loop through all the items, or expect classes on each of them. They then hide all of them and show the current one on every interaction. This seems like overkill, if you think about it. All we need is two classes:

  • We need a class on the container element that triggers the functional display of our carousel. This one gets applied with JavaScript, so the look and feel only changes when the browser is capable of showing the effect.
  • We need a class on the currently visible carousel element. This is the odd one out; none of the others need a class.

We can hard-code these for now:

<div class="carouselbox active">
  <ol class="content">
    <li>1</li>
    <li class="current">2</li>
    <li>3</li>
    <li>4</li>
  </ol>
</div>

All we need to show and hide the different carousel items is to change the height of the carousel container and position all but the current one outside this height:

.active {
  height: 130px;
}
.active li {
  position: absolute;
  top: 200px;
}
.active li.current {
  top: 30px;
}

You can see this in action here. Use your browser developer tools to move the “current” class from item to item to show a different one.

carousel changing items
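If you’d rather do that from the console, here is a quick sketch against the hard-coded markup above:

var items = document.querySelectorAll('.carouselbox .content li');
items[1].classList.remove('current'); // item “2” is the current one in the markup
items[2].classList.add('current');    // now item “3” is shown instead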

Interaction with JavaScript

To make the carousel work, we need controls. And we also need some JavaScript. Whenever you need a control that triggers functionality that only works when JavaScript is executed, a button is the thing to use. These magical things were meant for exactly this use case and they are keyboard, mouse, touch and pen accessible. Everybody wins.

In this case, I added the following controls in our HTML:

<div class="carouselbox">
  <div class="buttons">
    <button class="prev">
      <span class="offscreen">Previous</span>
    </button>
    <button class="next">
      <span class="offscreen">Next</span>
    </button>
  </div>
  <ol class="content">
    <li>1</li>
    <li>2</li>
    <li>3</li>
    <li>4</li>
  </ol>
</div>

Now, here is where the hard-liners of semantic markup could chime in and chide me for writing HTML that is dependent on JavaScript instead of creating the HTML with JavaScript. And they’d be correct to do so. There is nothing stopping me from wrapping this chunk of HTML in a DOM call or an innerHTML write-out (a quick sketch of that follows below). However, as buttons are meant to trigger JS functionality, I think it is easier to just keep them in the HTML, which also lets us style them with much less hassle. As a precaution, we hide them in the non-active state and show them when the “active” class has been applied:

.active .buttons {
  padding: 5px 0;
  background: #eee;
  text-align: center;
  z-index: 10;
  position: relative;
}
.carouselbox button {
  border: none;
  display: none;
}
.active button {
  display: block;
} 
.offscreen {
  position: absolute;
  left: -2000px;
}

The offscreen spans are there to explain what these buttons really mean, as the triangle alone is not enough for some people.
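And for the purists: a minimal sketch of generating the controls with script instead, reusing the class names from above:

var box = document.querySelector('.carouselbox');
var buttons = document.createElement('div');
buttons.className = 'buttons';
buttons.innerHTML =
  '<button class="prev"><span class="offscreen">Previous</span></button>' +
  '<button class="next"><span class="offscreen">Next</span></button>';
box.insertBefore(buttons, box.firstChild);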

All that is left to make the carousel work is the JavaScript. And here it is:

var carousel = (function() {
  var box = document.querySelector('.carouselbox');
  var next = box.querySelector('.next');
  var prev = box.querySelector('.prev');
  var items = box.querySelectorAll('.content li');
  var counter = 0;
  var amount = items.length;
  var current = items[0];
  box.classList.add('active');
  function navigate(direction) {
    current.classList.remove('current');
    counter = counter + direction;
    // wrap around at both ends to make the carousel endless
    if (direction === -1 && counter < 0) {
      counter = amount - 1;
    }
    if (direction === 1 && !items[counter]) {
      counter = 0;
    }
    current = items[counter];
    current.classList.add('current');
  }
  next.addEventListener('click', function(ev) {
    navigate(1);
  });
  prev.addEventListener('click', function(ev) {
    navigate(-1);
  });
  navigate(0);
})();

As you can see, by relying on CSS and its built-in crawling of the DOM, there is no need for any loop whatsoever. Here’s what’s going on in this script:

  • We grab all the HTML elements we need with querySelector.
  • We set the counter to 0 – this is the variable that keeps track of which item of the carousel is currently shown.
  • We read the number of items in the carousel and store it in a variable – this allows us to loop the carousel.
  • We set the current item as the first one in the carousel. The current variable holds a reference to the element currently visible; all we do when the carousel state changes is remove the CSS class from it and add the class to the new current item.
  • We add the class of “active” to the container element to change its styling and trigger the CSS functionality explained earlier.
  • The navigate method takes a parameter called direction which defines whether we should go backwards (negative values) or forwards in the carousel. It starts by removing the “current” class from the current carousel item, thus hiding it. We then modify the counter and make sure it doesn’t go beyond the number of items available or below 0. In each case we wrap around to the other extreme, making the carousel an endlessly rotating one (for example, with four items, going forward from the last item resets the counter to 0, and going back from the first sets it to 3). We define the new current item and add the class to show it.
  • We apply event handlers to the buttons to navigate forwards and backwards.
  • We show the first carousel item by calling navigate with 0 as the value.

Pretty simple, isn’t it? By allowing CSS to do what it is good at, we end up with JavaScript that is more or less only about keeping state and shifting classes around.

You can see the basic carousel in action here. It’s nothing fancy, but it does the job.

simple carousel in action

Getting fancy

The showing and hiding of the items by positioning them in a container with overflow hidden should work in any browser in use these days – even the ones that should be retired. And as all we do is add and remove CSS classes, we can now tap into the beautiful features browsers have these days. Using transitions, opacity and transforms, we can add a pretty effect with a few lines of CSS:

fancier carousel

.active li {
  position: absolute;
  top: 130px;
 
  opacity: 0;
  transform: scale(0);
  transition: 1s;
}
.active li.current {
  top: 30px;
 
  opacity: 1;
  transform: scale(1);
  transition: 1s;
}

The beauty of this is that the performance handling and the timing (in case you click too fast) are handled by the browser for us. No need to count FPS or juggle timeouts. And because browsers simply ignore CSS they don’t understand, ones that do not support these features just don’t show the effect instead of throwing an error.

Bullet-proofing our JavaScript

When a browser doesn’t support some JavaScript feature we use, things get trickier: we get an error and things break. Thus, it makes sense to test for the things we use and only proceed when they are supported. In this code, we rely on classList and querySelector, so let’s just check for those:

if (!document.querySelector ||
    !('classList' in document.body)) {
  return false;
}

We could get much more paranoid and ensure that all the DOM elements are available before proceeding, but that would be overkill. If a maintainer forgets to add the “carouselbox” class to the main element, the error thrown is pretty obvious.
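For context, this check sits at the very top of the module, so the carousel never activates (and the buttons stay hidden) in browsers that lack support – a sketch, with the rest of the code elided:

var carousel = (function() {
  // Bail out early; the plain ordered list stays usable,
  // and the buttons never show because "active" is never added.
  if (!document.querySelector ||
      !('classList' in document.body)) {
    return false;
  }
  // ... the carousel code from above ...
})();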

Bonus round: Stacking with CSS

One last trick to mention: if you were to stack all the elements of the carousel visually and only use opacity to blend them, there is a problem with links. You’d always get the link of the first item, no matter which one is shown.

The trick to work around that is using “pointer-events: none” in your CSS:

.active li {
  position: absolute;
  top: 130px;
 
  pointer-events: none;
  opacity: 0;
  transform: scale(0);
  transition: 1s;
}
.active li.current {
  top: 30px;
 
  pointer-events: auto;
  opacity: 1;
  transform: scale(1);
  transition: 1s;
}

You can see this workaround in action here.

Where to go now?

The natural drive as a developer now is to enhance this: to allow users to define a different starting element to show, to define lots of preset effects that can be chosen with a data attribute, to allow for non-looping carousels, and to define an API so other components on the page can interact with the carousel. And an API to create, remove and shuffle items of the carousel. And, and, and… All of these are great exercises, but let’s ask ourselves: who do we do that for?

We have such amazing functionality built into the platform of the web now. Maybe it is time to stop writing the perfect generic re-usable widget and just stick with simple things and let people extend them when they need to? Who knows, by not doing the work for them, people might learn to be better coders themselves.

Ehsan AkhgariIntercepting beacons through service workers

Beacons are a way to send asynchronous pings to a server for purposes such as logging and analytics.  The API itself doesn’t give you a way to get notified when the ping has been successfully sent, which is intentional, since the ping may be sent a while after the page has been closed or navigated away from.  There are use cases where the web developer wants to send a ping that is a natural candidate for a beacon, but also needs to know when/if the ping is delivered successfully, which makes beacons on their own an unsuitable solution.
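To see the limitation, consider what sendBeacon itself gives you back (the /log endpoint here is made up for the example):

// sendBeacon only reports whether the ping was queued, not delivered.
var queued = navigator.sendBeacon('/log', JSON.stringify({ event: 'unload' }));
console.log(queued); // true if queued; says nothing about actual delivery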

Service workers are a new technology that allows (among other things) intercepting the network requests made by the browser.  It recently occurred to me that combining these two technologies can solve the use case very well.  The idea is to intercept the beacon fetch inside a service worker and then tell the web page whether the beacon was successfully sent.  I made a demo which shows how this can work.  This demo works on Firefox Nightly if you toggle the dom.serviceWorkers.enabled pref.  It currently doesn’t work on Chrome, because Chrome doesn’t allow a service worker to intercept the beacon, and I filed a bug about it.

Here is how this demo works:  It registers a service worker as you would usually do, and then for sending the beacon, we create a new iframe to make sure the document where we call sendBeacon is indeed intercepted by the service worker, and call sendBeacon as usual in that iframe.  Inside the service worker, we intercept the beacon.  So at this point the beacon fetch has gotten to our service worker.  My simple demo just sends a message to all controlled windows about this.  A real service worker however would probably do a fetch on its own for sending the beacon to the network, wait for the returned promise to resolve, and then record a log of some sort such as in the DOM Cache, or send a message back to the controlled document.
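The service worker side of that idea could be sketched roughly like this – this is not the demo’s actual code, and the /log endpoint and message shape are assumptions of mine:

// sw.js
self.addEventListener('fetch', function(event) {
  var url = new URL(event.request.url);
  if (url.pathname === '/log') {
    event.respondWith(
      fetch(event.request).then(function(response) {
        // Tell every controlled page whether the beacon made it out.
        return self.clients.matchAll().then(function(clients) {
          clients.forEach(function(client) {
            client.postMessage({ beaconSent: response.ok });
          });
          return response;
        });
      })
    );
  }
});

And on the page, listening for the result:

navigator.serviceWorker.addEventListener('message', function(e) {
  console.log('beacon delivered:', e.data.beaconSent);
});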

It’s nice that service workers give you a way to delve into the guts of the platform and retrieve the information that interests you even if the rest of the platform hides that information!  I hope this demo is useful to people who have this use case.

Sean McArthurhyper on beta

Since I announced hyper in December of last year, it has continued to grow as Rust’s http library.

highlights

Of course, a lot of the effort these past few months has gone into keeping up with all the changes to Rust and the standard library. A million thanks to all those who helped with these upgrades. I can’t overstate the joy of waking up in the morning, reading that there are breaking changes in the latest nightly, and then seeing in my inbox that a pull request has already been filed fixing hyper.

up next

Now that the breaking changes are behind us, development can focus entirely on making hyper do things better-er. Specifically, here are the things that are either in progress or highly desired (hint hint).

  • The Client is very close to receiving Connection Pool support. Code is in a branch.
  • The Server is switching to a request queue, such that keep-alive connections don’t starve the server.
  • Marko Lalic has written an HTTP/2 library in Rust. Many have stated wanting to help integrate it into hyper. Hopefully, this should happen soon!
  • We’d like to switch to asynchronous IO. There’s been plenty of work happening in mio, which brings us async IO for Unix-y platforms. There’s need for a Windows async IO library, and a library that can wrap the two. Then we can start switching over, and that means more speeeed.

I could imagine aiming for a 1.0 of hyper once we have asynchronous IO.

Again, all of this is thanks to you guys, the amazing community. And if you want to get involved, please join in. Perhaps try tackling one of the easy issues first.


  1. Or you can check the Releases. I try to keep them in sync. 

Air MozillaProduct Coordination Meeting

Product Coordination Meeting Duration: 10 minutes This is a weekly status meeting, every Wednesday, that helps coordinate the shipping of our products (across 4 release channels) in order...

Anthony RicaudElectrolysis without tabs underlined

Electrolysis has been re-enabled in Nightly. It brings lots of end-user benefits, but it also brings a new style for tabs: tabs running in a remote (content) process are underlined. I guess this underlining is there to help users easily notice the difference from a non-Electrolysis build.

But you can easily disable that:

  1. Locate your profile
  2. Go into the chrome directory. You may need to create it if it doesn't exist.
  3. Create a userChrome.css file with the following content:
    .tabbrowser-tab[remote] {
      text-decoration: none !important;
    }
  4. Restart your Nightly
  5. Enjoy!

Thanks Jonathan Kew for the tip.