Mitchell Baker: UN High Level Panel and UN Secretary General Ban Ki-moon issue report on Women’s Economic Empowerment

“Gender equality remains the greatest human rights challenge of our time.”  UN Secretary General Ban Ki-moon, September 22, 2016.

To address this challenge, the Secretary General championed the 2010 creation of UN Women, the UN’s newest entity. To focus attention on concrete actions in the economic sphere, he created the “High Level Panel on Women’s Economic Empowerment”, of which I am a member.

The Panel presented its initial findings and commitments last week during the UN General Assembly Session in New York. Here is the Secretary General, with the co-chairs, the heads of the IMF and the World Bank, the Executive Director of UN Women, and the moderator and founder of All Africa Media, each of whom is a panel member.

UN General Assembly Session in New York

Photo Credit: Anar Simpson

The findings are set out in the Panel’s initial report. Key to the report is its identification of drivers of change, which the Panel has determined enhance women’s economic empowerment:

  1. Breaking stereotypes: Tackling adverse social norms and promoting positive role models
  2. Leveling the playing field for women: Ensuring legal protection and reforming discriminatory laws and regulations
  3. Investing in care: Recognizing, reducing and redistributing unpaid work and care
  4. Ensuring a fair share of assets: Building assets—Digital, financial and property
  5. Businesses creating opportunities: Changing business culture and practice
  6. Governments creating opportunities: Improving public sector practices in employment and procurement
  7. Enhancing women’s voices: Strengthening visibility, collective voice and representation
  8. Improving sex-disaggregated data and gender analysis

Chapter Four of the report describes a range of actions that are being undertaken by Panel Members for each of the above drivers. For example, under the Building assets driver, DFID and the government of Tanzania are extending land rights to more than 150,000 Tanzanian women by the end of 2017. Tanzania will use media to educate people on women’s land rights and laws pertaining to property ownership. Clearly this is a concrete action that can serve as a precedent for others.

As a panel member, Mozilla is contributing to the work on Building Assets – Digital. Here is my statement from the session in New York:

“Mozilla is honored to be a part of this Panel. Our focus is digital inclusion. We know that access to the richness of the Internet can bring huge benefits to Women’s Economic Empowerment. We are working with technology companies in Silicon Valley and beyond to identify those activities which provide additional opportunity for women. Some of those companies are with us today.

Through our work on the Panel we have identified a significant interest among technology companies in finding ways to do more. We are building a working group with these companies and the governments of Costa Rica, Tanzania and the U.A.E. to address women’s economic empowerment through technology.

We expect the period from today’s report through the March meeting to be rich with activity. The possibilities are huge and the rewards great. We are committed to an internet that is open and accessible to all.”

You can watch a recording of the UN High Level Panel on Women’s Economic Empowerment here. For my statement, start viewing at 2:07:53.

There is an immense amount of work to be done to meet the greatest human rights challenge of our time. I left the Panel’s meeting hopeful that we are on the cusp of great progress.

Hub Figuière: Introducing gudev-rs

A couple of weeks ago, I released gudev-rs, Rust wrappers for gudev. The goal was to be able to receive events from udev into a Gtk application written in Rust. I had a need for it, so I made it and shared it.

It is mostly auto-generated using gir-rs from the gtk-rs project. The license is MIT.
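
To give a flavour of what using it looks like, here is a minimal sketch of receiving udev events inside a Gtk main loop. The API names below (Client::new taking a list of subsystems, connect_uevent, get_name on Device) are my assumptions based on how gir maps the GUdev C API; the generated bindings may name things slightly differently.

    // Minimal sketch; the API names here are assumptions based on how gir
    // maps the GUdev C API, and may differ in the generated bindings.
    extern crate gtk;
    extern crate gudev;

    fn main() {
        gtk::init().expect("failed to initialize GTK");

        // Watch the "block" subsystem (disks, partitions, ...).
        let client = gudev::Client::new(&["block"]);

        // "uevent" fires for add/remove/change events on the watched subsystems.
        client.connect_uevent(|_, action, device| {
            println!("{}: {:?}", action, device.get_name());
        });

        // Events arrive through the GLib main loop that gtk::main() runs.
        gtk::main();
    }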

Source code

If you have problems, suggestions, or patches, please feel free to submit them.

Mozilla Addons Blog: How Video DownloadHelper Became Compatible with Multiprocess Firefox

Today’s post comes from Michel Gutierrez (mig), the developer of Video DownloadHelper, among other add-ons. He shares his story about the process of modernizing his XUL add-on to make it compatible with multiprocess Firefox (e10s).

***

Video DownloadHelper (VDH) is an add-on that extracts videos and image files from the Internet and saves them to your hard drive. As you surf the Web, VDH will show you a menu of download options when it detects something it can save for you.

It was first released in July 2006, when Firefox was on version 1.5. At the time, both the main add-on code and DOM window content were running in the same process. This was helpful because video URLs could easily be extracted from the window content by the add-on. The Smart Naming feature was also able to extract video names from the Web page.

When the multiprocess Firefox architecture was first discussed, it was immediately clear that VDH needed a full rewrite with a brand new architecture. In multiprocess Firefox, DOM content for webpages runs in a separate process, which means the amount of asynchronous communication required with the add-on code increases significantly. It wasn’t possible to simply adapt the existing code and architecture, because doing so would have made the code hard to read and unmaintainable.

The Migration

After some consideration, we decided to update the add-on using SDK APIs. Here were our requirements:

  • Code running in the content process needed to run separately from code running in JavaScript modules and the main process, with communication occurring via message passing.
  • Preferences needed to be available in the content process, as there are many adjustable parameters that affect the user interface.
  • Localization of HTML pages within the content script should be as easy as possible.

In VDH, the choice was made to handle all of these requirements using the same Client-Server architecture commonly used in regular Web applications: the components that have access to the preferences, localization, and data storage APIs (running in the main process) serve this data to the UI components and the components injected into the page (running in the content process), through the messaging API provided by the SDK.

Limitations

Migrating to the SDK enabled us to become compatible with multiprocess Firefox, but it wasn’t a perfect solution. Low-level SDK APIs, which aren’t guaranteed to work with e10s or stay compatible with future versions of Firefox, were required to implement anything more than simple features. Also, an increased amount of communication between processes is required even for seemingly simple interactions.

  • Resizing content panels can only occur in the background process, but only the content process knows what the dimensions should be. This gets more complicated when the size dynamically changes or depends on various parameters.
  • Critical features in VDH, like monitoring network traffic or launching external programs, require low-level APIs.
  • Capturing tab thumbnails from the Add-on SDK API does not work in e10s mode. This feature had to be reimplemented in the add-on using a framescript.
  • When intercepting network responses, the Add-on SDK does not decode compressed responses.
  • The SDK provides no easy means to determine if e10s is enabled or not, which would be useful as long as glitches remain where the add-on has to act differently.

Future Direction

Regardless of these limitations, making VDH compatible with multiprocess Firefox was a great success. Taking the time to rewrite the add-on also improved the general architecture and prepared it for the changes needed for WebExtensions. The first e10s-compatible version of VDH is version 5.0.1, which has been available since March 2015.

Looking forward, the next big challenge is making VDH compatible with WebExtensions. We considered migrating directly to WebExtensions, but the legacy and low-level SDK APIs used in VDH could not be replaced at the time without compromising the add-on’s features.

To fully complete the transition to WebExtensions, additional APIs may need to be created. As extension developers, we’ve found it helpful to work with Mozilla to define those APIs and to design them in a way that is general enough to be useful in many other types of add-ons.

Armen Zambrano: [NEW] Added build status updates - Usability improvements for Firefox automation initiative - Status update #6

[NEW] Starting with this newsletter, we will also be covering build automation improvements, since they help with the end-to-end time of your pushes.

In this update we look at the progress made in the last two weeks.

A reminder that this quarter’s main focus is on:
  • Debugging tests on interactive workers (only Linux on TaskCluster)
  • Improve end to end times on Try (Thunder Try project)

For all bugs and priorities you can check out the project management page for it:
https://wiki.mozilla.org/EngineeringProductivity/Projects/Debugging_UX_improvements

Status update:
Debugging tests on interactive workers
---------------------------------------------------
Tracking bug: https://bugzilla.mozilla.org/show_bug.cgi?id=1262260

Accomplished recently:
  • Fixed regression that broke the interactive wizard
  • Support for Android reftests landed

Upcoming:
  • Support for Android xpcshell
  • Video demonstration


Thunder Try - Improve end to end times on try
---------------------------------------------

Project #1 - Artifact builds on automation
##########################################
Tracking bug: https://bugzilla.mozilla.org/show_bug.cgi?id=1284882

Accomplished recently:
  • Windows and Mac artifact builds are soon to land
  • |mach try| now supports --artifact option
  • Compiled-code test jobs error out early when run with --artifact on Try

Upcoming:
  • Windows and Mac artifact builds available on Try
  • Fix triggering of test jobs on Buildbot with artifact build

Project #2 - S3 Cloud Compiler Cache
####################################
Tracking bug: https://bugzilla.mozilla.org/show_bug.cgi?id=1280641

Nothing new in this edition.

Project #3 - Metrics
####################
Tracking bug: https://bugzilla.mozilla.org/show_bug.cgi?id=1286856

Accomplished recently:

  • Drill-down charts:

  • Which lead to a detailed view:

  • With optional wait times included (missing 10% outliers, so looks almost the same):


Upcoming:
  • Iron out interactivity bugs
  • Show outliers
  • Post these (static) pages to my people page
  • Fix ActiveData production to handle these queries (I am currently using a development version of ActiveData, but that version has some nasty anomalies)

Project #4 - Build automation improvements
##########################################
Upcoming:


Project #5 - Run Web platform tests from the source checkout
############################################################
Accomplished recently:
  • WPT is now running from the source checkout in automation

Upcoming:
  • There are still parts of automation relying on a test zip. The next step is to minimize those so you can get a loaner, pull any revision from any repo, and test WPT changes in an environment that exactly matches what the automation tests run in.

Other
#####
  • Bug 1300812 - Make Mozharness downloads and unpacks actions handle better intermittent S3/EC2 issues
    • This adds retry logic to reduce intermittent oranges


Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

Air Mozilla: The Joy of Coding - Episode 73

mconley livehacks on real Firefox bugs while thinking aloud.

Cameron Kaiser: Firefox OS goes Tier WONTFIX

I suppose it shouldn't be totally unexpected given the end of FirefoxOS phone development a few months ago, but a platform going from supported to take-it-completely-out-of-mozilla-central in less than a year is rather startling: not only has commercial development on FirefoxOS completely ceased (at version 2.6), but the plan is to remove all B2G code from the source tree entirely. It's not an absolutely clean comparison to us because some APIs are still relevant to current versions of OS X/macOS, but even support for our now ancient cats was only gradually removed in stages from the codebase and even some portions of pre-10.4 code persisted until relatively recently. The new state of FirefoxOS, where platform support is actually unwelcome in the repository, is beyond the lowest state of Tier-3 where even our own antiquated port lives. Unofficially this would be something like the "tier WONTFIX" BDS referenced in mozilla.dev.planning a number of years ago.

There may be a plan to fork the repository, but they'd need someone crazy dedicated to keep chugging out builds. We're not anything on that level of nuts around here. Noooo.

Air Mozilla: Weekly SUMO Community Meeting Sept 28, 2016

This is the SUMO weekly call.

Air Mozilla: Kernel Recipes 2016 Day 1 PM Session

Three days of talks around the Linux Kernel.

The Mozilla Blog: Firefox’s Test Pilot Program Launches Three New Experimental Features

Earlier this year we launched our first set of experiments for Test Pilot, a program designed to give you access to experimental Firefox features that are in the early stages of development. We’ve been delighted to see so many of you participating in the experiments and providing feedback, which will ultimately help us determine which features end up in Firefox for all to enjoy.

Since our launch, we’ve been hard at work on new innovations, and today we’re excited to announce the release of three new Test Pilot experiments. These features will help you share and manage screenshots; keep streaming video front and center; and protect your online privacy.

What Are The New Experiments?

Min Vid:

Keep your favorite entertainment front and center. Min Vid plays your videos in a small window on top of your other tabs so you can continue to watch while answering email, reading the news or, yes, even while you work. Min Vid currently supports videos hosted by YouTube and Vimeo.

Page Shot:

The print screen button doesn’t always cut it. The Page Shot feature lets you take, find and share screenshots with just a few clicks by creating a link for easy sharing. You’ll also be able to search for your screenshots by their title, and even the text captured in the image, so you can find them when you need them.

Tracking Protection:

We’ve had Tracking Protection in Private Browsing for a while, but now you can block trackers that follow you across the web by default. Turn it on, and browse free and breathe easy. This experiment will help us understand where Tracking Protection breaks the web so that we can improve it for all Firefox users.

How do I get started?

Test Pilot experiments are currently available in English only. To activate Test Pilot and help us build the future of Firefox, visit testpilot.firefox.com.

As you’re experimenting with new features within Test Pilot, you might find some bugs, or lose some of the polish from the general Firefox release, so Test Pilot allows you to easily enable or disable features at any time.

Your feedback will help us determine what ultimately ends up in Firefox – we’re looking forward to your thoughts!

Air Mozilla: Kernel Recipes 2016 Day 1 AM Session

Three days of talks around the Linux Kernel.

Wil Clouser: Test Pilot is launching three new experiments for Firefox

The Test Pilot team has been heads-down for months working on three new experiments for Firefox and you can get them all today!

Min Vid

Min Vid is an add-on that allows you to shrink a video into a small always-on-top frame in the corner of your browser. This lets you watch and interact with a video while browsing the web in other tabs. Opera and Safari are implementing similar features so this one might have some sticking power.

Thanks to Dave, Jen, and Jared for taking this from some prototype code to in front of Firefox users in six months.

Tracking Protection

Luke has been working hard on Tracking Protection - an experiment focused on collecting feedback from users about which sites break when Firefox blocks the trackers from loading. As we collect data from everyday users we can make decisions about how best to block what people don't want and still show them what they do. Eventually this could lead to us protecting all Firefox users with Tracking Protection by default.

Page Shot

Page Shot is a snappy experiment that enables users to quickly take screenshots in their browser and share them on the internet. There are a few companies in this space already, but their products always felt too heavy to me, or they ignored privacy, or some simply didn't even work (this was on Linux). Page Shot is light and quick and works great everywhere.

As a bonus, and a feature I haven't seen anywhere else, Page Shot also lets you search the text within the images themselves. So if you take a screenshot of a pizza recipe and later search for "mozzarella", it will find the recipe.

I was late to the Page Shot party and my involvement is just standing on the shoulders of giants at this point: by the time I was involved the final touches were already being put on. A big thanks to Ian and Donovan for bringing this project to life.

I called out the engineers who have been working to bring their creations to life, but of course there are so many teams who were critical to today's launches. A big thank you to the people who have been working tirelessly and congratulations on launching your products! :)

Daniel Stenberg: 1,000,000 sites run HTTP/2

… out of the top ten million sites that is. So there’s at least that many, quite likely a few more.

This is according to w3techs, which runs checks daily. Over the last few months, about 50,000 new sites per month have been switching it on.

ht2-10-percent

It also shows that HTTP/2 deployment has increased from a little over 1% a year ago to 10% today.

The more popular a site is, the more likely it is to use HTTP/2. Among the top 1,000 sites on the web, more than 20% use HTTP/2. HTTP/2 also just recently (September 9) overtook SPDY among the top 1,000 most popular sites.

h2-sep28

On September 7, Amazon announced that their CloudFront service had enabled HTTP/2, which could explain an adoption boost over the last few days. New CloudFront users get it enabled by default, but existing users actually need to go in and click a checkbox to make it happen.

As the world’s web traffic is severely skewed toward the most popular sites, we can be sure that a significantly larger share than 10% of the world’s HTTPS traffic is using version 2.

Recent usage stats in Firefox show that HTTP/2 is used in half of all its HTTPS requests!

http2

Cameron Kaiser: And now for something completely different: Rehabilitating the Performa 6200?

The Power Macintosh 6200 in its many Performa variants has one of the worst reputations of any Mac, and its pitifully small 603 L1 caches add insult to injury (its poor 68K emulation performance was part of the reason Apple held up the PowerBook's migration to PowerPC until the 603e, and then screwed it up with the PowerBook 5300, a unit that is IMHO overly harshly judged by history but not without justification). LowEndMac has a long list of its perceived faults.

But every unloved machine has its defenders, and I noticed that the Wikipedia entry on the 6200 series radically changed recently. The "Dtaylor372" listed in the edit log appears to be this guy, one "Daniel L. Taylor". If it is, here's his reasoning why the seething hate for the 6200 series should be revisited.

Daniel does make some cogent points, cites references, and even tries to back some of them up with benchmarks (heh). He helpfully includes a local copy of Apple's tech notes on the series, though let's be fair here -- Apple is not likely to say anything unbecoming in that document. That said, the effort is commendable even if I don't agree with everything he's written. I'll just cite some of what I took as highlights and you can read the rest.

  • The Apple tech note says, "The Power Macintosh 5200 and 6200 computers are electrically similar to the Macintosh Quadra 630 and LC 630." It might be most accurate to say that these computers are Q630 systems with an on-board PowerPC upgrade. It's an understatement to observe that's not the most favourable environment for these chips, but it would have required much less development investment, to be sure.

  • He's right that the L2 cache, which is on a 64-bit bus and clocked at the actual CPU speed, certainly does mitigate some of the problems with the Q630's 32-bit interface to memory, and 256K L2 in 1995 would have been a perfectly reasonable amount of cache. (See page 29 for the block diagram.) A 20-25% speed penalty (his numbers), however, is not trivial and I think he underestimates how this would have made the machines feel comparatively in practice even on native code.

  • His article claims that both the SCSI bus and the serial ports have DMA, but I don't see this anywhere in the developer notes (and at least one source contradicts him). While the NCR controller that the F108 ASIC incorporates does support it, I don't see where this is hooked up. More to the point, the F108's embedded IDE controller -- because the 6200 actually uses an IDE hard drive -- doesn't have DMA set up either: if the Q630 is any indication, the 6200 is also limited to PIO Mode 3. While this was no great sin when the Q630 was in production, it was verging on unacceptable even for a low-to-midrange system by the time the 6200 hit the market. More on that in the next point.

    Do note that the Q630 design does support bus mastering, but not from the F108. The only two entities which can be bus master are the CPU or either the PDS expansion card or communications card via the PrimeTime II IC "southbridge."

  • Daniel makes a very well-reasoned assertion that the computer's major problems were due to software instead of hardware design, which is at least partially true, but I think his objections are oversimplified. Certainly the Mac OS (that's with a capital M) was not well-suited for handling the real-time demands of hardware: ADB, for example, requires quite a bit of polling, and the OS could not service the bus sufficiently often to make it effective for large-volume data transfer (condemning it to a largely HID-only capacity, though that's all it was really designed for). Even interrupt-driven device drivers could be problematic; a large number of interrupts pending simultaneously could get dropped (the limit on outstanding secondary interrupt requests prior to MacOS 9.1 was 40, see Apple TN2010) and a badly-coded driver that did not shunt work off to a deferred task could prevent other drivers from servicing their devices because those other interrupts were disabled while the bad driver tied up the machine.

    That said, however, these were hardly unknown problems at the time and the design's lack of DMA where it counts causes an abnormal reliance on software to move data, which for those and other reasons the MacOS was definitely not up to doing and the speed hit didn't help. Compare this design with the 9500's full PCI bus, 64-bit interface and hardware assist: even though the 9500 was positioned at a very different market segment, and the weak 603 implementation is no comparison to the 604, that doesn't absolve the 6200 of its other deficiencies and the 9500 ran the same operating system with considerably fewer problems (he does concede that his assertions to the contrary do "not mean that [issues with redraw, typing and audio on the 6200s] never occurred for anyone," though his explanation of why is of course different). Although Daniel states that relaying traffic for an Ethernet card "would not have impacted Internet handling" based on his estimates of actual bandwidth, the real rate limiting step here is how quickly the CPU, and by extension the OS, can service the controller. While the comm slot at least could bus master, that only helps when it's actually serviced to initiate it. My personal suspicion is because the changes in OpenTransport 1.3 reduced a lot of the latency issues in earlier versions of OT, that's why MacOS 8.1 was widely noted to smooth out a lot of the 6200's alleged network problems. But even at the time of these systems' design Copland (the planned successor to System 7) was already showing glimmers of trouble, and no one seriously expected the MacOS to explosively improve in the machines' likely sales life. Against that historical backdrop the 6200 series could have been much better from the beginning if the component machines had been more appropriately engineered to deal with what the OS couldn't in the first place.

In the United States, at least, the Power Macintosh 6200 family was only ever sold under the budget "Performa" line, and you should read that as Michael Spindler being Spindler, i.e., cheap. In that sense, putting as little extra design money into it as possible wasn't ill-conceived, even if it was crummy, and I will freely admit my own personal bias in that I've never much cared for the Quadra 630 or its derivatives because there were better choices then and later. I do have to take my hat off to Daniel for trying to salvage the machine's bad image and he goes a long way to dispelling some of the more egregious misconceptions, but crummy's still as crummy does. I think the best that can be said here is that while it's likely better than its reputation, even with careful reconsideration of its alleged flaws the 6200 family is still notably worse than its peers.

Air Mozilla: Connected Devices Meetup - Sensor Web

Mozilla's own Cindy Hsiang discusses SensorWeb. SensorWeb wants to advance Mozilla's mission to promote the open web as it evolves to the physical world....

Air Mozilla: Connected Devices Meetup - Laurynas Riliskis

We are on the verge of the next revolution: connected devices have emerged during the last decade into what's known as the Internet of Things. These...

Air Mozilla: Connected Devices Meetup - Johannes Ernst: UBOS and the Indie IoT

We are on the verge of the next revolution: connected devices have emerged during the last decade into what's known as the Internet of Things. These...

Air Mozilla: Connected Devices Meetup - Nicholas van de Walle

Nicholas van de Walle is a local web developer who loves shoving computers into everything from clothes to books to rocks. He has worked for...

Air Mozilla: Connected Devices Meetup - September 27, 2016

6:30pm - Johannes Ernst; UBOS and the Indie IoT: Will the IoT inevitably make us all digital serfs to a few overlords in the cloud,...

David Lawrence: Happy BMO Push Day 2!

the following changes have been pushed to bugzilla.mozilla.org:

  • [1305713] BMO: Persistent XSS via Git commit messages in comments

discuss these changes on mozilla.tools.bmo.


Air Mozilla: B2G OS Announcements September 2016

The weekly B2G meeting on Tuesday 27th September will be attended by Mozilla senior staff members who would like to make some announcements to the...

Air Mozilla: Martes Mozilleros, 27 Sep 2016

Bi-weekly meeting to talk (in Spanish) about Mozilla status, community and...

Christian Heilmann: JavaScript turned off is not the problem – Talk at the Frontend Rhein Main Meetup

A few days ago I was a guest at AOE in Wiesbaden, speaking at the Frontend Rhein-Main meetup about the dangers of overblown criticism of JavaScript. This talk will also be available in English soon.

Chris presenting

AOE has already finished editing the video of the talk, and it is available on YouTube:

The (English) slides are on Slideshare:

I also gave a preview of the full talk at the SmashingConf Freiburg Jam Session, and the screencast of that (shouty, in English) is also on YouTube:

Both AOE and the user group have blogged about the evening.

It was a nice evening, and even though I thought it was the ten-year anniversary rather than the tenth meetup, it was worth stopping by in Wiesbaden after SmashingConf and before A-Tag. Especially because Patricia baked a cake and is now eagerly learning JavaScript!

Gervase Markham: Off Trial

Six weeks ago, I posted “On Trial”, which explained that I was taking part in a medical trial in Manchester. In the trial, I was trying out some interesting new DNA repair pathway inhibitors which, it was hoped, might have a beneficial effect on my cancer. However, as of ten days ago, my participation has ended. The trial parameters say that participants can continue as long as their cancer shrinks or stays the same. Scans are done every six weeks to determine what change, if any, there has been. As mine had been stable for the five months before starting participation, I was surprised to discover that after six weeks of treatment my liver metastasis had grown by 7%. This level of growth was outside the trial parameters, so they concluded (probably correctly!) the treatment was not helping me and that was that.

The Lord has all of this in his hands, and I am confident of his good purposes for me :-)

Emily Dunham: Rust’s Community Automation

Rust’s Community Automation

Here’s the text version, with clickable links, of my Automacon lightning talk today.

Intro

I’m a DevOps engineer at Mozilla Research and a member of the Rust Community subteam, but the conclusions and suggestions in this talk are my own observations and opinions.

The slides are a result of trying to write my own CSS for sliderust... Sorry about the ugliness.

I have 10 minutes, so this is not the time to teach you any Rust. Check out rust-lang.org, the Rust Community Resources, or your city’s Rust meetup to get started with the language.

What we are going to cover is how Rust automates some community tasks, and what you can learn from our automation.

Community

I define “community”, in this context, as “the human interaction work necessary for a project’s success”. This work is done by a wide variety of people in many situations. Every interaction, from helping a new contributor to discussing a proposed code change to criticizing someone’s behavior, affects the overall climate of a project’s community.

Automation

To me, “automation” means “offloading peoples’ work onto a system”. This can be a computer system, but I think it can also mean changes to the social systems that guide peoples’ behavior.

So, community automation is a combination of:

  • Building tools to do things the humans used to have to
  • Tweaking the social systems to minimize the overhead they create

Scoping the Problem

While not all things can be automated and not all factors of the community are under the project leadership’s control, it’s not totally hopeless.

Choices made and automation deployed by project leaders can help control:

  • Which contributors feel welcome or unwelcome in a project
  • What code makes it into the project’s tree
  • Robots!

Moderation

Our robots and social systems to improve workflow and contributor experience all rely on community members’ cooperation. To create a community of people who want to work constructively together and not be jerks to each other, Rust defines behavior expectations in its code of conduct. The important thing to note about the CoC is that half the document is a clear explanation of how the policies in it will be enforced. This would be impossible without the dedication of the amazing mod team.

The process of moderation cannot and should not be left to a computer, but we can use technology to make our mods’ work as easy as possible. We leave the human tasks to humans, but let our technologies do the rest.

In this case, while the mods need to step in when a human has a complaint about something, we can automate the process of telling people that the rules exist. You can’t join the IRC channel, post on the Discourse forums, or even read the Rust subreddit without being made aware that you’re expected to follow the CoC’s guidelines in official Rust spaces.

Depending on the forums where your project communicates, try to automate the process of excluding obvious spammers and trolls. Not everybody has the skills or interest to be an excellent moderator, so when you find them, avoid wasting their time on things that a computer could do for them!

It didn’t fit in the talk, but this Slashdot post is one of my favorite examples of somebody being filtered out of participating in the Rust community due to their personal convictions about how project leadership should work. While we do miss out on that person’s potential technical contributions, we also save all of the time that might be spent hashing out our disagreements with them if we had a less clear set of community guidelines.

Robots

This lightning talk highlighted 4 categories of robots:

  • Maintaining code quality
  • Engaging in social pleasantries
  • Guiding new contributors
  • Widening the contributor pipeline

Longer versions of this talk also touch on automatically testing compiler releases, but that’s more than 10 minutes of content on its own.

The Not Rocket Science Rule of Software Engineering

To my knowledge, this blog post by Rust’s inventor Graydon Hoare is the first time that this basic principle has been put so succinctly:

Automatically maintain a repository of code that always passes all the tests.

This policy guides the Rust compiler’s development workflow, and has trickled down into libraries and projects using the language.

Bors

The name Bors has been handed down from Graydon’s original autolander bot to an instance of Homu, and is often verbed to refer to the simple actions he does (sketched in code after the list):

  1. Notice when a human says “r+” on a PR
  2. Create a branch that looks like master will after the change is applied
  3. Test that branch
  4. Fast-forward the master branch to the tested state, if it passed.
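
Here is that loop as a rough Rust sketch, purely for illustration: every helper in it (approved_prs, create_staging_branch, run_tests, fast_forward_master) is a hypothetical stand-in for what Homu actually does against the GitHub API and CI, not real Homu code.

    // Illustrative sketch of the Bors/Homu merge-queue loop.
    // Every helper here is a hypothetical stand-in, not real Homu code.

    struct PullRequest {
        number: u64,
        approved: bool, // a human said "r+"
    }

    fn approved_prs() -> Vec<PullRequest> {
        // Poll the review tool for approved pull requests.
        vec![]
    }

    fn create_staging_branch(pr: &PullRequest) -> String {
        // Merge the PR onto current master in a temporary branch.
        format!("staging-{}", pr.number)
    }

    fn run_tests(_branch: &str) -> bool {
        // Kick off CI for the staging branch and wait for the verdict.
        true
    }

    fn fast_forward_master(_branch: &str) {
        // Point master at the commit that was just tested.
    }

    fn main() {
        // 1. Notice which PRs a human has approved with "r+".
        for pr in approved_prs() {
            if !pr.approved {
                continue;
            }
            // 2. Build what master would look like with the change applied...
            let staging = create_staging_branch(&pr);
            // 3. ...run the full test suite against exactly that state...
            if run_tests(&staging) {
                // 4. ...and only then fast-forward master, so it always stays green.
                fast_forward_master(&staging);
            }
        }
    }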

Keep your tree green

Saying “we can’t break the tests any more” is a pretty significant cultural change, so be prepared for some resistance. With that disclaimer, the path to following the Not Rocket Science Rule is pretty simple:

  1. Write tests that fail when your code is bad and pass when it’s good (see the tiny example after this list)
  2. Run the tests on every change
  3. Only merge code if it passes all the tests
  4. Fix the tests whenever they’re wrong.
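
As a tiny illustration of step 1, here is what such a test looks like in Rust (the add function is made up for the example; it is not from any real project):

    // A test that fails when the code is wrong and passes when it is right.
    fn add(a: i32, b: i32) -> i32 {
        a + b
    }

    #[test]
    fn add_works() {
        assert_eq!(add(2, 2), 4);
    }

Under the rule above, a change only lands once cargo test reports this test, and every other one, passing on the merged result.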

This strategy encourages people to maintain the tests, because a broken test becomes everyone’s problem and interrupts their workflow until it’s fixed.

I believe that tests are necessary for all code that people work on. If the code was fully and perfectly correct, it wouldn’t need changes – we only write code when something is wrong, whether that’s “It crashes” or “It lacks such-and-such a feature”. And regardless of the changes you’re making, tests are essential for catching any regressions you might accidentally introduce.

Automating social pleasantries

Have you ever submitted an issue or change request to a project, then not heard back for several months? It feels bad to be ignored, and the project loses out on potential contributors.

Rust automates basic social pleasantries with a robot called Highfive. Her tasks are easy to explain, though the implementation details can be tricky:

  1. Notice when a change is submitted by a new contributor, then welcome them
  2. Assign reviewers, based on what code changed, to all PRs
  3. Nag the reviewer if they seem to have forgotten about their assignment

If you don’t want a dedicated greeter-bot, you can get many of these features from your code management system:

  • Use issue and pull request templates to guide potential contributors to the docs that can help them improve their report or request.
  • Configure notifications so you find out when someone is trying to interact with your project. This could mean muting all the noise notifications so the signal ones are available, or intermittently polling the repositories that you maintain (a daily cron job or weekly calendar reminder works just fine).

Guide new contributors

In open source projects, “I’m new; what can I work on?” is a common inquiry. In internal projects, you’ll often meet colleagues from elsewhere in your organization who ask you to teach them something about the project or the skills you use when working on it.

The Rust-implemented browser engine Servo is actually a slightly better example of this than the compiler itself, since the smaller and younger codebase has more introductory-level issues remaining. The site starters.servo.org automatically scrapes the organization’s issue trackers for easy and unclaimed issues.

Issue triage is often unrewarding, but using the tags for a project like this creates a greater incentive to keep them up to date.

When filing introductory issues, try to include links to the relevant documentation, instructions for reproducing the bug, and a suggestion of what file you would look in first if you tackled the problem yourself.

Automating mentorship

Mentorship is a highly personalized process in which one human transfers their skills to another. However, large projects often have more contributors seeking the same basic skills than mentors with time to teach them.

The parts of mentorship which don’t explicitly require a human mentor can be offloaded onto technology.

The first way to automate mentorship tasks is to maintain correct and up-to-date documentation. Correct docs train humans to consult them before interrupting an expert, whereas docs that are frequently outdated or wrong condition their users to skip them entirely.

Use tools like octohatrack and your project status updates to identify and recognize contributors who help with docs and issue triage. Docs contributions may actually save more developer and community time than new code features, so respect them accordingly.

Finally, maintain a list of introductory or mentored issues – even if that’s just a Google Doc or Etherpad.

Bear in mind that an introductory issue doesn’t necessarily mean “suitable for someone who has never coded before”. Someone with great skills in a scripting language might be looking for a place to help with an embedded codebase, or a UX designer might want to get involved with a web framework that they’ve used. Introductory issues should be clear about what knowledge a contributor should acquire in order to try them, but they don’t have to all be “easy”.

Automating the pipeline

Drive-by fixes are to being a core contributor as interviews are to full time jobs. Just as a company attempts to interview as many qualified candidates as it can, you can recruit more contributors by making your introductory issues widely available.

Before publicizing your project, make sure you have a CONTRIBUTING.txt or good README outlining where a new contributor should start, or you’ll be barraged with the same few questions over and over.

There are a variety of sites, which I call issue aggregators, where people who already know a bit about open source development can go to find a new project to work on. I keep a list on this page <http://edunham.net/pages/issue_aggregators.html>, pull requests welcome <https://github.com/edunham/site/blob/master/pages/issue_aggregators.rst> if I’m missing anything. Submitting your introductory issues to these sites broadens your pipeline, and may free up humans’ recruiting time to focus on people who need more help getting up to speed.

If you’re working on internal rather than public projects, issue aggregators are less relevant. However, if you have the resources, it’s worthwhile to consider the recruiting device of open sourcing an internal tool that would be useful to others. If an engineer uses and improves that tool, you get a tool improvement and they get some mentorship. In the long term, you also get a unique opportunity to improve that engineer’s opinion of your organization while networking with your engineers, which can make them more likely to want to work for you later.

Follow Up

For questions, you’re welcome to chat with me on Twitter (@QEDunham), email (automacon <at> edunham <dot> net), or IRC (edunham on irc.freenode.net and irc.mozilla.org).

Slides from the talk are here.

The Mozilla Blog: Help Fix Copyright: Send a Rebellious Selfie to European Parliament (Really!)

The EU’s proposed copyright reform keeps in place retrograde laws that make many normal online creative acts illegal. The same restrictive laws will stifle innovation and hurt technology businesses. Let’s fix it. Sign Mozilla’s petition, watch and share videos, and snap a rebellious selfie

Earlier this month, the EU Commission released their proposal for a reformed copyright framework. In response, we are asking everyone reading this post to take a rebellious selfie and send that doctored snapshot to EU Parliament. Seem ridiculous? So is an outdated law that bans taking and sharing selfies in front of the Eiffel Tower at night in Paris, or in front of the Little Mermaid in Copenhagen.

Of course, no one is actually going to jail for subversive selfies. But the technical illegality of such a basic online act underscores the grave shortcomings in the EU’s latest proposal on copyright reform. As Mozilla’s Denelle Dixon-Thayer noted in her last post on the proposed reform, it “thoroughly misses the goal to deliver a modern reform that would unlock creativity and innovation.” It doesn’t, for instance, include needed exceptions for panorama, parody, or remixing, nor does it include a clause that would allow noncommercial transformations of works (like remixes, or mashups) or a flexible user clause like an open norm, or fair dealing.

Translation? Making memes and gifs will remain an illicit act.

And that’s just the start. Exceptions for text and data mining are limited to public institutions. This could stifle startups looking to online data to build innovative businesses. Then there is the dangerous “neighbouring right,” similar to the ancillary copyright laws we’ve seen in Spain and Germany (both of which have been clear failures). This misguided part of the reform would allow online publishers to copyright “press publications” for up to 20 years, with retroactive effect. The vague wording makes it unclear exactly to whom and for whom this new exclusive right would apply.

Finally, another unclear provision would require any internet service that provides access to “large amounts” of works to users to broker agreements with rightsholders for the use of, and protection of, their works. This could include the use of “effective content recognition technologies” — which imply universal monitoring and strict filtering technologies that identify and/or remove copyrighted content.

These proposals, if adopted as they are, would deal a blow to EU startups, to independent coders, creators, and artists, and to the health of the internet as a driver for economic growth and innovation.

We’re not advocating plagiarism or piracy. Creators must be treated fairly, including proper remuneration for their creations and works. Mozilla wants to improve copyright for everyone, so individuals are not discouraged from creating and innovating.

Mozilla isn’t alone in our objections: Over 50,000 individuals have signed our petition and demanded modern copyright laws that foster creativity, innovation, and opportunity online.

We have our work cut out for us. As the European Parliament revises the proposal this fall, we need a movement — a collection of passionate internet users who demand better, modern laws. Today, Mozilla is launching a public education campaign to support that movement.

post-crimes

Mozilla has created an app to highlight the absurdity of some of Europe’s outdated copyright laws. Try Post Crimes: Take a selfie in front of European landmarks that can be technically unlawful to photograph — like the Eiffel Tower’s night-time light display, or the Little Mermaid in Denmark — due to restrictive copyright laws.

Then, send your selfie as a postcard to your Member of the European Parliament (MEP). Show European policymakers how outdated copyright laws are, and encourage them to forge a more future-looking and innovation-friendly copyright reform.

We’ve also created three short videos that outline the need for reform. They’re educational, playful, and a little bit weird — just like the internet. But they explore a serious issue: The harmful effect outdated and restrictive copyright laws have on our creativity and the open internet. We hope you’ll watch them and share them with others.

We need your help standing up for better copyright laws. When you sign the petition, snap a selfie, or share our videos, you’re supporting creativity, innovation and opportunity online — for everyone.

This Week In Rust: This Week in Rust 149

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

RustConf/RustFest Experiences

New Crates & Project Updates

Crate of the Week

Somewhat unsurprisingly, this week's crate of the week is ripgrep. In case you've missed it, this is a replacement for grep/ag/pt/whatever search tool you use, and it absolutely smokes the competition in most performance tests. Thanks to DanielKeep for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

77 pull requests were merged in the last two weeks.

New Contributors

  • aclarry
  • Alexander von Gluck IV
  • Andrew Lygin
  • Ashley Williams
  • Austin Hicks
  • Eitan Adler
  • Gianni Ciccarelli
  • jacobpadkins
  • James Duley
  • Joe Neeman
  • Niels Sascha Reedijk
  • Vanja Cosic

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email the Rust Community Team for access.

fn work(on: RustProject) -> Money

No jobs listed for this week.

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

You can actually return Iterators without summoning one of the Great Old Ones now, which is pretty cool.

/u/K900_ on reddit.

Thanks to Johan Sigfrids for the suggestion.

Submit your quotes for next week!

This Week in Rust is edited by: nasa42, llogiq, and brson.

Nikki Bee: RustConf 2016

So, a couple weeks ago I got to go to RustConf 2016, the first instance of an annual event! What was it about? The relatively new programming language Rust, from Mozilla!

Attending

I was only able to attend thanks to three factors: a travel reimbursement allowance from Outreachy for my recent internship with them; a scholarship ticket from RustConf to cover attendance fees; and being able to stay at a friend’s place in Portland for a few days. Were it not for each of those, I would not have felt financially comfortable attending! I’m very grateful for all of the support.

The Morning

I went to RustConf with my now wife, Decky Coss (who also was able to attend under the same circumstances as me!). We decided that since the event is only a day long, and due to our track record for having a hard time getting the “full” experience out of conventions, that we would work extra hard on getting there on time. As such, we only missed 25 minutes of the first talk.

Of course, I’m not pleased that that happened, but we had the disadvantage of some rough flights the day before, as well as not staying on or near the location. Plus, we’re just not cut out for whatever it takes to attend conferences studiously. But we did our best that day and I think that was OK.

As I said, we missed the first half of the opening keynote, given by Aaron Turon and Niko Matsakis, but I really liked what I saw. It felt like a neat overview of Rust, especially concerning common issues newbies (like me!) run into with the language. It was gratifying hearing them point out the very same trip-ups I deal with, especially since they talked about how Rust developers are aware of these issues! They are looking into ways to help stop these trip-ups, such as better language syntax, and updating the Rust documentation.

Acting as a perfect follow-up to the keynote was “The Illustrated Adventure Survival Guide”, by Liz Baillie, which was easily the funnest talk. It was a drawn slideshow/comic making many metaphors and jokes representing how Rust works. I felt like it would be hard to enjoy if you didn’t have a decent understanding of Rust but then again, I suppose nobody would be there if they didn’t!

The last talk of the morning was “Integrating Rust and VLC”, by Geoffroy Couprie. It felt a bit hard to follow in that I didn’t know what it was leading to. All the same, I greatly enjoyed seeing both a practical implementation of Rust (in this case, handling video streaming codecs) and how Rust can act as a friendlier alternative to C-like languages (something that has been heavily on my mind).

The Afternoon

Lunchtime! Lunch was set to be an hour and a half long, which, when I first saw the schedule, I thought would be excessively long. Somehow it went by very fast! Plus, following such a long morning, I was very grateful to get to unwind for a bit from listening to so much talking.

However, the dessert I had gave me a bad sugar rush and crash and I felt very out of it for the next few hours, so I don’t remember much about the talks during this time. None of them really did much for me, not that they were bad, but I just wasn’t able to pay much attention.

The Evening

Snack time! Again, I was glad to have a respite where I could chill out for a bit. Especially after having a hard time keeping focus for a few hours. This break was not nearly as long as lunch, but if it was any longer, I think the conference would have been too long.

The first talk of the evening was “A Modern Editor in Rust”, presented by Raph Levien. Raph talked about the text editor Xi. I was excited by some of the things offered in it, since it’s made to tackle unique issues, such as quickly loading and scrolling through massive (like, hundreds of megabytes big) files. I didn’t see much in it that I thought I needed personally, but I appreciate seeing programs for solving unique challenges like that.

The next talk I ended up writing so much about, I put it into another section.

The Second-Last Talk

Next was “RFC: In Order to Form a More Perfect union”, by Josh Triplett, which was easily the most impactful talk to me. It was a chronicle of a Request For Comments idea created by the speaker, which was recently committed into Rust! Josh described the RFC process in the Rust community as “welcoming, open, and inclusive”, all of which echoes my experience as a Rust newbie.

Josh talked about each step of their submission, including how far their proposal went (which was a bit of a surprise to them!). The best part to me was, despite it becoming one of the most discussed issues on Rust’s Github, Josh never found the proposal being taken from them by a more experienced contributor, no matter how high-level the discussion got. It’s not that I would expect such a rude thing to happen easily, but more that I greatly appreciate the speaker being given ample room to learn how to become a better contributor.

Previously, I contributed to Servo (the browser engine written in Rust) for my Outreachy internship, which had formal processes for my additions to be merged into the main code. That makes it obviously different than how it would be for any other contributor, since my internship gave me a unique position in relation to a mentor who knew the ins and outs of the Servo project. I never felt like I would be able to replicate this anywhere else.

All that said, if I ever contribute to Rust more informally, this talk has given me a great idea of what to expect, and how to prepare to enter into doing so. Whenever I have an idea I want to share with Rust, I won’t hesitate over doing so! Also, the next time I contribute to another FOSS project, I’ll be thinking of the process here, and try to map the open Rust process to that project!

The Closing Keynote

Finally came the closing keynote, presented by Julia Evans. She’s the only person I already recognized, as we’re both Recurse Center alums. During the snack break Decky and I decided to try to find her to chat for a couple minutes. We managed to do so, as well as get copies of a delightful zine she wrote about Linux tools she’s found invaluable for programming! You can find it on her website.

By this time I felt pretty fatigued from the long day, but Julia’s talk was very energetic which made me enthused to pay attention. Her keynote was focused on learning systems programming with Rust, while new to the language- two things that felt challenging to her, but that she was able to get into well surprisingly quickly. I really relate to that sort of experience, since many times a new project I wanted to do seemed insurmountable, but became much easier after I started!

I’ve never done any systems programming, but I’ve had a couple friends ask me when I’ve told them about Rust, if they can do X or Y sort of thing in it, to which I don’t often have an answer. Based on Julia’s talk, I feel comfortable saying systems programming is a good bet!

Other Things

That’s it for the talks themselves, although I had a few more thoughts I wanted to share.

  • As part of being late (as I mentioned before), Decky and I missed the breakfast offered by RustConf, although we hadn’t realized there was any before we got there. Having lunch and snacks offered as part of the conference was fantastic. If Decky and I had to go out for lunch we probably would have missed a talk. Although, in the future, I’ll be skipping out on sweets, even free ones, because sugar is just really bad for me :(.

  • I’ve never done an event that’s so continuously long. The schedule for RustConf was about 9 hours straight, and I was quite exhausted by the end of it. In the future, I’d prefer an event with multiple, shorter days! Conventions in general are fatiguing for me, but having to sit all the time for a long day takes a lot out of me.

  • I know better than to assume on an individual level, but I felt dismayed by the low ratio of women to men present at the conference. As a result of feeling a bit stranded by that, I felt uncomfortable talking to strangers. The only people I talked to - for more than a few sentences - were people I knew, or recognized from online.

  • One of the talks ended in a very forced parody of Trump’s campaign slogan which got a lot of laughs from the crowd. When that happened, my only reaction was to turn to Decky and grimace to show her how unhappy I was. Comedy shouldn’t be used to trivialize fascists, it should be used to kill fascism.

Doug Belshaw: Curriculum as algorithm

algorithm.jpg

Way back in Episode 39 of Today In Digital Education, the podcast I record every week with Dai Barnes, we discussed the concept of ‘curriculum as algorithm’. If I remember correctly, it was Dai who introduced the idea.

The first couple of things that pop into my mind when considering curricula through an algorithmic lens are:

But let’s rewind and define our terms, including their etymology. First up, curriculum:

In education, a curriculum… is broadly defined as the totality of student experiences that occur in the educational process. The term often refers specifically to a planned sequence of instruction, or to a view of the student’s experiences in terms of the educator’s or school’s instructional goals.
[…]
The word “curriculum” began as a Latin word which means “a race” or “the course of a race” (which in turn derives from the verb currere meaning “to run/to proceed”). (Wikipedia)

…and algorithm:

In mathematics and computer science, an algorithm… is a self-contained step-by-step set of operations to be performed. Algorithms perform calculation, data processing, and/or automated reasoning tasks.

The English word ‘algorithm’ comes from Medieval Latin word algorism and French-Greek word “arithmos”. The word ‘algorism’ (and therefore, the derived word ‘algorithm’) come from the name al-Khwārizmī. Al-Khwārizmī (Persian: خوارزمی‎‎, c. 780–850) was a Persian mathematician, astronomer, geographer, and scholar. English adopted the French term, but it wasn’t until the late 19th century that “algorithm” took on the meaning that it has in modern English. (Wikipedia)

So my gloss on the above would be that a curriculum is the container for student experiences, and an algorithm provides the pathways. In order to have an algorithmic curriculum, there need to be disaggregated learning content and activities that can serve as data points. Instead of a forced group march, the curriculum takes shape around the individual or small groups - much like what is evident in Khan Academy’s Knowledge Map.

The problem is, as the etymology of ‘algorithm’ suggests, it’s much easier to do this for subjects like maths, science, and languages than it is for humanities subjects. The former deal in true/false answers, have concepts that build on top of one another in an obvious way, and are based on logic. Not all of human knowledge works like that.

For this reason, then, we need to think of a way in which learning content and activities in the humanities can be represented in a way that builds around the learner. This, of course, is exactly what great teachers do in these fields: they personalise the quest for human knowledge based on student interests and experience.

One thing that I think is under-estimated in learning is the element of serendipity. To some extent, serendipity is the opposite of an algorithm. My Amazon recommendations mean I get more of the same. Stumbling across a secondhand bookshop, on the other hand, means I could head off in an entirely new direction due to a serendipitous find.

Using services such as StumbleUpon makes the web a giant serendipity engine. But it’s not a curriculum, as such. What I envisage by curriculum as algorithm is a fine balance between:

  • Continuously-curated learning content and activities
  • Formative feedback
  • Multiple pathways to diverse goals
  • Flexible accreditation

I’m hoping that as the Open Badges system evolves, we move beyond the metaphor of ‘playlists’ for learning, towards curriculum as algorithm. I’d love to get some funding to explore this further…


Comments? Questions? Get in touch: @dajbelshaw / mail@dougbelshaw.com

Image CC BY x6e38

Christian HeilmannHelp making the fourth industrial revolution less scary

Last week I was in Germany at an event sponsored by the government agency for unemployment, covering the digitalisation of the job market and the subsequent loss of jobs.

me, giving a keynote on machine learning and work

When the agency approached me to give a keynote on the upcoming “fourth industrial revolution” and what machine learning and artificial intelligence mean for the job market I was – to put it mildly – bricking it. All the other presenters at the event had doctorates and were professors of this and that. And here I was, being asked to deliver the “future” to an audience of company owners, university professors and influential people who decide the employment fate of thousands of people.

Expert Panel

I went into hermit mode and watched, read and translated dozens of videos and articles on AI and the work environment. In the end, I took a more detailed look at the conference schedule and realised that most of the subject matter data would be covered by the presenter before me.

Thus I delivered a talk covering the current state of AI and what it means for us as job seekers and employers. The slides and screencast are in German, but I am looking forward to translating them and maybe delivering them in a European setting soon.

The slide deck is on Slideshare, and even without knowing German, you should get the gist:

The screencast is on YouTube:

The feedback was overwhelming and humbling. I got interviewed by the local TV station, where I mostly deflected the negative and defeatist attitudes towards artificial intelligence that the media loves to portray.

tv interview

I also got a half-page spread in the local newspaper where – to the amusement of my friends – I was touted as a “fascinating prophet”.

Newspaper article

During the expert panel on digital security I had a few interesting encounters. Whilst in general it felt tough to see how inflexible and outdated some of the attitudes of companies towards computers were, there is a lot of innovation happening even in rural areas. I was especially impressed with the state of robots in warehouses and the investment of the European Union in Blockchain solutions and security research.

One thing I am looking forward to is working with a cybersecurity centre in the area, giving workshops on social engineering and the security of IoT.

A few things I learned and I’d like you to also consider:

  • We are at the cusp – if not in the middle of – a new digital revolution
  • Our job as people in the know is to reach out to those who are afraid of it and give out sensible information as a counterpoint to some of the fearmongering of the press
  • It is incredibly rewarding to go out of our comfort zone and echo chamber and talk to people with real business and social change issues. It humbles you and makes you wonder just how you ended up knowing all that you do.
  • The good social aspects of our jobs could be a blueprint for how other companies work and change, making them more resilient to replacement by machines
  • German is hard :)

So, be brave, offer to present at places not talking about the latest flavour of JavaScript or CSS preprocessing. The world outside our echo chamber needs us.

Or as The Interrupters put it: what’s your plan for tomorrow?

Gervase MarkhamGPLv2 Combination Exception for the Apache 2 License

CW: heavy open source license geekery ahead.

One unfortunate difficulty with open source licensing is that some lawyers, including those at the FSF, consider the Apache License 2.0 incompatible with the GPL 2.0, which is to say that you can’t combine Apache 2.0-licensed code with GPL 2.0-licensed code and distribute the result. This is annoying because when choosing a permissive licence, we want people to use the more modern Apache 2.0 over the older BSD or MIT licenses, because it provides some measure of patent protection. And this incompatibility discourages people from doing that.

This was a concern for Mozilla when determining the correct licensing for Rust, and this is why the standard Rust license is a dual license – the choice of Apache 2.0 or MIT. The idea was that Apache 2.0 would be the normal license, but people could choose MIT if they wanted to combine “Rust license” code with GPL 2.0 code.

However, the LLVM project has now had notable open source attorney Heather Meeker come up with an exception to be added to the Apache 2.0 license to enable GPL 2.0 compatibility. This exception meets a number of important criteria for a legal fix for this problem:

  • It’s an additional permission, so is unlikely to affect the open source-ness of the license;
  • It doesn’t require the organization using it to take a position on the question of whether the two licenses are actually compatible or not;
  • It’s specific to the GPL 2.0, thereby constraining its effects to solving the problem.

Here it is:

---- Exceptions to the Apache 2.0 License: ----

In addition, if you combine or link compiled forms of this Software with software that is licensed under the GPLv2 (“Combined Software”) and if a court of competent jurisdiction determines that the patent provision (Section 3), the indemnity provision (Section 9) or other Section of the License conflicts with the conditions of the GPLv2, you may retroactively and prospectively choose to deem waived or otherwise exclude such Section(s) of the License, but only in their entirety and only with respect to the Combined Software.

---- end ----

It seems very well written to me; I wish it had been around when we were licensing Rust.

Gervase MarkhamIntroducing Deliberate Protocol Errors: Langley’s Law

Google have just published the draft spec for a protocol called Roughtime, which allows clients to determine the time to within the nearest 10 seconds or so without the need for an authoritative trusted timeserver. One part of their ecosystem document caught my eye – it’s like a small “chaos monkey” for protocols, where their server intentionally sends out a small subset of responses with various forms of protocol error:

A healthy software ecosystem doesn’t arise by specifying how software should behave and then assuming that implementations will do the right thing. Rather we plan on having Roughtime servers return invalid, bogus answers to a small fraction of requests. These bogus answers would contain the wrong time, but would also be invalid in another way. For example, one of the signatures might be incorrect, or the tags in the message might be in the wrong order. Client implementations that don’t implement all the necessary checks would find that they get nonsense answers and, hopefully, that will be sufficient to expose bugs before they turn into a Blackhat talk.

The fascinating thing about this is that it’s a complete reversal of the ancient Postel’s Law regarding internet protocols:

Be conservative in what you send, be liberal in what you accept.

This behaviour instead requires implementations to be conservative in what they accept, otherwise they will get garbage data. And it also involves being, if not liberal, then certainly occasionally non-conforming in what they send.
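
To make that flipped discipline concrete, here is a minimal Rust sketch of the checking attitude it demands of clients. The Response type and its fields are hypothetical stand-ins for illustration, not the actual Roughtime wire format.

// Hypothetical response shape; the real Roughtime protocol defines its own
// tags, signature scheme, and encoding.
struct Response {
    midpoint_secs: u64,    // the server's claimed time
    radius_secs: u32,      // the server's stated uncertainty
    signature_valid: bool, // result of verifying the signature
    tags_in_order: bool,   // result of strictly checking message structure
}

// Be conservative in what you accept: discard anything malformed, because a
// small fraction of responses is deliberately bogus.
fn accept(r: &Response) -> Option<(u64, u32)> {
    if !r.signature_valid || !r.tags_in_order {
        return None;
    }
    Some((r.midpoint_secs, r.radius_secs))
}

fn main() {
    let bogus = Response {
        midpoint_secs: 0,
        radius_secs: 10,
        signature_valid: true,
        tags_in_order: false, // deliberately malformed, as the spec promises
    };
    assert!(accept(&bogus).is_none());
}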

Postel’s law has long been criticised for leading to interoperability issues – see HTML for an example of how accepting anything can be a nightmare, with the WHATWG having to come along and spec things much more tightly later. However, simply reversing the second half to be conservative in what you accept doesn’t work well either – see XHTML/XML and the yellow screen of death for an example of a failure to solve the HTML problem that way. This type of change wouldn’t work in many protocols, but the particular design of this one, where you have to ask a number of different servers for their opinion, makes it possible. It will be interesting to see whether reversing Postel will lead to more interoperable software. Let’s call it “Langley’s Law”:

Be occasionally evil in what you send, and conservative in what you accept.

QMOFirefox 50 Beta 3 Testday, September 30th

Hello Mozillians,

We are happy to announce that Friday, September 30th, we are organizing Firefox 50 Beta 3 Testday. We will be focusing our testing on Pointer Lock API and WebM EME support for Widevine features. Check out the detailed instructions via this etherpad.

No previous testing experience is required, so feel free to join us in the #qa IRC channel, where our moderators will offer you guidance and answer your questions.

Join us and help us make Firefox better! See you on Friday!

Karl Dubost[worklog] Edition 037. Autumn is here. The fall of -webkit-

Two Japanese national holidays during the week. And there goes the week. Tune of the Week: Anderson .Paak - Silicon Valley

Webcompat Life

Progress this week:

Today: 2016-09-26T09:24:48.064519
336 open issues
----------------------
needsinfo       13
needsdiagnosis  110
needscontact    9
contactready    27
sitewait        161
----------------------

You are welcome to participate

  • Monday day off in Japan: Respect for the Aged Day.
  • Thursday day off in Japan: Autumn Equinox.

Firefox 49 has been released and an important piece of cake is delivered now to every user. You can get some context on why some (not all) -webkit- properties landed in Gecko and the impact on Web standards.

We have a team meeting soon in Taipei.

The W3C was meeting this week in Lisbon. Specifically about testing.

I did a bit of Prefetch links testing and how they appear in devtools.

Webcompat issues

(a selection of some of the bugs worked on this week).

  • Little by little we are accumulating issues about CSS zoom. Firefox is the only browser that does not support this non-standard property. It originated in IE (Trident), was imported into WebKit (Safari), then kept alive in Blink (Chrome, Opera), and finally made its way into Edge. Sadness.

Webcompat.com development

Reading List

Follow Your Nose

TODO

  • Document how to write tests on webcompat.com using test fixtures.
  • ToWrite: Amazon prefetching resources with <object> for Firefox only.

Otsukare!

Mic Berman

How will you Lead? From a talk at MarsDD in Toronto April 2016

The Servo BlogThis Week In Servo 79

In the last week, we landed 96 PRs in the Servo organization’s repositories.

Promise support has arrived in Servo, thanks to hard work by jdm, dati91, and mmatyas! This does not fully implement microtasks, but unblocks the uses of Promises in many places (e.g., the WebBluetooth test suite).

Emilio rewrote the bindings generation code for rust-bindgen, dramatically improving the flow of the code and output generated when producing Rust bindings for C and C++ code.

The TPAC WebBluetooth standards meeting talked a bit about the great progress by the team at the University of Szeged in the context of Servo.

Planning and Status

Our overall roadmap is available online and now includes the Q3 plans. The Q4 and 2017 planning will begin shortly!

This week’s status updates are here. We have been having a conversation on the mailing list about how to better involve all contributors to the Servo project and especially improve the visibility into upcoming work - please make your ideas and opinions known!

Notable Additions

  • bholley made it possible to manage the Gecko node data without using FFI calls
  • aneeshusa improved Homu so that it would ignore Work in Progress (WIP) pull requests
  • wdv4758h implemented iterators for FormData
  • nox updated our macOS builds to use libc++ instead of libstdc++
  • TheKK added support for noreferrer when determining referrer policies
  • manish made style unit tests run on all properties (including stylo-only ones)
  • gw added the OSMesa source, a preliminary step towards better headless testing on CI
  • emilio implemented improved support for function pointers, typedefs, and macOS’s stdlib in bindgen
  • schuster styled the input text element with user-agent CSS rather than hand-written Rust code
  • jeenalee added support for open-ended dictionaries in the Headers API
  • saneyuki fixed the build failures in SpiderMonkey on macOS Sierra
  • mrobinson added support for background-repeat properties space and round
  • pcwalton improved the layout of http://python.org
  • phrohdoh implemented the minlength attribute for text inputs
  • anholt improved WebGL support
  • mmatyas added ARM support to WebRender
  • ms2ger implemented safe, high-level APIs for manipulating JS typed arrays
  • manish added the ability to uncompute a style back to its specified value, in support of animations
  • cbrewster added an option to replace the current session entry when reloading a page
  • kichjang changed the loading of external scripts to use the Fetch network stack
  • splav implemented the HTMLOptionsCollection API
  • cynicaldevil fixed a panic involving <link> elements and the rel attribute

New Contributors

Interested in helping build a web browser? Take a look at our curated list of issues that are good for new contributors!

Screenshot

Demo of the in-progress fetch() API:

Niko MatsakisIntersection Impls

As some of you are probably aware, on the nightly Rust builds, we currently offer a feature called specialization, which was defined in RFC 1210. The idea of specialization is to improve Rust’s existing coherence rules to allow for overlap between impls, so long as one of the overlapping impls can be considered more specific. Specialization is hotly desired because it can enable powerful optimizations, but also because it is an important component for modeling object-oriented designs.

The current specialization design, while powerful, is also limited in a few ways. I am going to work on a series of articles that explore some of those limitations as well as possible solutions.

This particular post serves two purposes: it describes the running example I want to consider, and it describes one possible solution: intersection impls (more commonly called lattice impls). We’ll see that intersection impls are a powerful feature, but they don’t completely solve the problem I am aiming to solve, and they also introduce other complications. My conclusion is that they may be a part of the final solution, but are not sufficient on their own.

Running example: interconverting between Copy and Clone

I’m going to structure my posts around a detailed look at the Copy and Clone traits, and in particular about how we could use specialization to bridge between the two. These two traits are used in Rust to define how values can be duplicated. The idea is roughly like this:

  • A type is Copy if it can be copied from one place to another just by copying bytes (i.e., with memcpy). This is basically types that consist purely of scalar values (e.g., u32, [u32; 4], etc).
  • The Clone trait expands upon Copy to include all types that can be copied at all, even if doing so requires executing custom code or allocating memory (for example, a String or Vec<u32>).

These two traits are clearly related. In fact, Clone is a supertrait of Copy, which means that every type that is copyable must also be cloneable.
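
If it helps to see that relationship spelled out, this is roughly how the standard library declares it (simplified: the real declaration also carries lang-item attributes):

// Simplified sketch of the declaration in core::marker: Clone is a
// supertrait of Copy, so every Copy type must also implement Clone.
pub trait Copy: Clone {
    // Empty: Copy is a pure marker trait with no methods.
}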

For better or worse, supertraits in Rust work a bit differently than superclasses from OO languages. In particular, the two traits are still independent from one another. This means that if you want to declare a type to be Copy, you must also supply a Clone impl. Most of the time, we do that with a #[derive] annotation, which auto-generates the impls for you:

#[derive(Copy, Clone, ...)]
struct Point {
    x: u32,
    y: u32,
}

That derive annotation will expand out to two impls looking roughly like this:

struct Point {
    x: u32,
    y: u32,
}

impl Copy for Point {
    // Copy has no methods; it can also be seen as a "marker"
    // that indicates that a cloneable type can also be
    // memcopy'd.
}

impl Clone for Point {
    fn clone(&self) -> Point {
        *self // this will just do a memcpy
    }
}

The second impl (the one implementing the Clone trait) seems a bit odd. After all, that impl is written for Point, but in principle it could be used for any Copy type. It would be nice if we could add a blanket impl that converts from Copy to Clone and applies to all Copy types:

// Hypothetical addition to the standard library:
impl<T:Copy> Clone for T {
    fn clone(&self) -> T {
        *self
    }
}

If we had such an impl, then there would be no need for Point above to implement Clone explicitly, since it implements Copy, and the blanket impl can be used to supply the Clone impl. (In other words, you could just write #[derive(Copy)].) As you have probably surmised, though, it’s not that simple. Adding a blanket impl like this has a few complications we’d have to overcome first. This is still true with the specialization system described in RFC 1210.

There are a number of examples where these kinds of blanket impls might be useful. Some examples: implementing PartialOrd in terms of Ord, implementing PartialEq in terms of Eq, and implementing Debug in terms of Display.

Coherence and backwards compatibility

Hi! I’m the language feature coherence! You may remember me from previous essays like Little Orphan Impls or RFC 1023.

Let’s take a step back and just think about the language as it is now, without specialization. With today’s Rust, adding a blanket impl<T:Copy> Clone for T would be massively backwards incompatible. This is because of the coherence rules, which aim to prevent there from being more than one impl of a given trait applicable to any type (or, for generic traits, set of types).

So, if we tried to add the blanket impl now, without specialization, it would mean that every type annotated with #[derive(Copy, Clone)] would stop compiling, because we would now have two clone impls: one from derive and the blanket impl we are adding. Obviously not feasible.

Why didn’t we add this blanket impl already then?

You might then wonder why we didn’t add this blanket impl converting from Copy to Clone in the wild west days, when we broke every existing Rust crate on a regular basis. We certainly considered it. The answer is that, if you have such an impl, the coherence rules mean that it would not work well with generic types.

To see what problems arise, consider the type Option:

#[derive(Copy, Clone)]
enum Option<T> {
    Some(T),
    None,
}

You can see that Option<T> derives Copy and Clone. But because Option is generic over T, those impls look slightly different once we expand them out:

impl<T:Copy> Copy for Option<T> { }

impl<T:Clone> Clone for Option<T> {
    fn clone(&self) -> Option<T> {
        match *self {
            Some(ref v) => Some(v.clone()),
            None => None,
        }
    }
}

Before, the Clone impl for Point was just *self. But for Option<T>, we have to do something more complicated, which actually calls clone on the contained value (in the case of a Some). To see why, imagine a type like Option<Rc<u32>> – this is clearly cloneable, but it is not Copy. So the impl is rewritten so that it only assumes that T: Clone, not T: Copy.

The problem is that types like Option<T> are sometimes Copy and sometimes not. So if we had the blanket impl that converts all Copy types to Clone, and we have the impl above that implements Clone for Option<T> if T: Clone, then we can easily wind up in a situation where there are two applicable impls. For example, consider Option<u32>: it is Copy, and hence we could use the blanket impl that just returns *self. But it also fits the Clone-based impl I showed above. This is a coherence violation, because now the compiler has to pick which impl to use. Obviously, in the case of the trait Clone, it shouldn’t matter too much which one it chooses, since they both have the same effect, but the compiler doesn’t know that.
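
If you want to watch the compiler push back on exactly this situation, here is a minimal sketch using a made-up local trait (MyClone stands in for Clone so that we are even allowed to write the blanket impl); today’s rustc rejects the pair with E0119, “conflicting implementations”:

trait MyClone {
    fn my_clone(&self) -> Self;
}

// The blanket impl: anything Copy can be "cloned" by a plain copy.
impl<T: Copy> MyClone for T {
    fn my_clone(&self) -> Self {
        *self
    }
}

// The Option impl: clone the contents. rustc rejects this second impl with
// E0119, because for a type like Option<u32> both impls would apply.
impl<T: MyClone> MyClone for Option<T> {
    fn my_clone(&self) -> Self {
        match self {
            Some(v) => Some(v.my_clone()),
            None => None,
        }
    }
}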

Enter specialization

OK, all of that prior discussion was assuming the Rust of today. So what if we adopted the existing specialization RFC? After all, its whole purpose is to improve coherence so that it is possible to have multiple impls of a trait for the same type, so long as one of those implementations is more specific. Maybe that applies here?

In fact, the RFC as written today does not. The reason is that the RFC defines rules that say an impl A is more specific than another impl B if impl A applies to a strict subset of the types which impl B applies to. Let’s consider some arbitrary trait Foo. Imagine that we have an impl of Foo that applies to any Option<T>:

impl<T> Foo for Option<T> { .. }

The more specific rule would then allow a second impl for Option<i32>; this impl would specialize the more generic one:

impl Foo for Option<i32> { .. }

Here, the second impl is more specific than the first, because while the first impl can be used for Option<i32>, it can also be used for lots of other types, like Option<u32>, Option<i64>, etc. So that means that these two impls would be accepted under RFC #1210. If the compiler ever had to choose between them, it would prefer the impl that is specific to Option<i32> over the generic one that works for all T.
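
For the record, here is roughly how that accepted pair looks under the feature as implemented on nightly today; the describe method is made up for illustration, and the generic impl has to mark its items default so that the more specific impl is allowed to override them:

#![feature(specialization)] // nightly only

trait Foo {
    fn describe(&self) -> &'static str;
}

impl<T> Foo for Option<T> {
    // `default` opts this item in to being specialized.
    default fn describe(&self) -> &'static str {
        "some Option<T>"
    }
}

impl Foo for Option<i32> {
    // More specific: applies to a strict subset of the types covered above.
    fn describe(&self) -> &'static str {
        "specifically Option<i32>"
    }
}

fn main() {
    assert_eq!(Some(1i32).describe(), "specifically Option<i32>");
    assert_eq!(Some("hi").describe(), "some Option<T>");
}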

But if we try to apply that rule to our two Clone impls, we run into a problem. First, we have the blanket impl:

impl<T:Copy> Clone for T { .. }

and then we have an impl tailored to Option<T> where T: Clone:

impl<T:Clone> Clone for Option<T> { .. }

Now, you might think that the second impl is more specific than the blanket impl. After all, the blanket impl can be used for any type, whereas the second impl can only be used for Option<T>. Unfortunately, this isn’t quite right. The blanket impl cannot be used for just any type T: it can only be used for Copy types. And we already saw that there are lots of types for which the second impl can be used where the first impl is inapplicable. In other words, neither impl is a subset of the other – rather, they cover two distinct, but overlapping, sets of types.

To see what I mean, let’s look at some examples:

| Type              | Blanket impl | `Option` impl |
| ----              | ------------ | ------------- |
| i32               | APPLIES      | inapplicable  |
| Box<i32>          | inapplicable | inapplicable  |
| Option<i32>       | APPLIES      | APPLIES       |
| Option<Box<i32>>  | inapplicable | APPLIES       |

Note in particular the first and fourth rows. The first row shows that the blanket impl is not a subset of the Option impl. The last row shows that the Option impl is not a subset of the blanket impl either. That means that these two impls would be rejected by RFC #1210 and hence adding a blanket impl now would still be a breaking change. Boo!

To see the problem from another angle, consider this Venn diagram, which indicates, for each impl, the set of types that it matches. As you can see, there is overlap between our two impls, but neither is a strict subset of the other:

+-----------------------------------------+
|[impl<T:Copy> Clone for T]               |
|                                         |
| Example: i32                            |
| +---------------------------------------+-----+
| |                                       |     |
| | Example: Option<i32>                  |     |
| |                                       |     |
+-+---------------------------------------+     |
  |                                             |
  |   Example: Option<Box<i32>>                 |
  |                                             |
  |          [impl<T:Clone> Clone for Option<T>]|
  +---------------------------------------------+

Enter intersection impls

One of the first ideas proposed for solving this is the so-called lattice specialization rule, which I will call intersection impls, since I think that captures the spirit better. The intuition is pretty simple: if you have two impls that have a partial intersection, but which don’t strictly subset one another, then you can add a third impl that covers precisely that intersection, and hence which subsets both of them. So now, for any type, there is always a most specific impl to choose. To get the idea, it may help to consider this ASCII art Venn diagram. Note the difference from above: there is now an impl (indicated with = lines and . shading) covering precisely the intersection of the other two.

+-----------------------------------------+
|[impl<T:Copy> Clone for T]               |
|                                         |
| Example: i32                            |
| +=======================================+-----+
| |[impl<T:Copy> Clone for Option<T>].....|     |
| |.......................................|     |
| |.Example: Option<i32>..................|     |
| |.......................................|     |
+-+=======================================+     |
  |                                             |
  |   Example: Option<Box<i32>>                 |
  |                                             |
  |          [impl<T:Clone> Clone for Option<T>]|
  +---------------------------------------------+

Intersection impls have some nice properties. For one thing, it’s a kind of minimal extension of the existing rule. In particular, if you are just looking at any two impls, the rules for deciding which is more specific are unchanged: the only difference when adding in intersection impls is that coherence permits overlap when it otherwise wouldn’t.

They also give us a good opportunity to recover some optimization. Consider the two impls in this case: the blanket impl that applies to any T: Copy simply copies some bytes around, which is very fast. The impl that is tailored to Option<T>, however, does more work: it matches on the Option and then recursively calls clone. This work is necessary if T: Copy does not hold, but otherwise it’s wasted work. With an intersection impl, we can recover the full performance:

// intersection impl:
impl<T:Copy> Clone for Option<T> {
    fn clone(&self) -> Option<T> {
        *self // since T: Copy, we can do this here
    }
}

A note on compiler messages

I’m about to pivot and discuss the shortcomings of intersection impls. But before I do so, I want to talk a bit about the compiler messages here. I think that the core idea of specialization – that you want to pick the impl that applies to the most specific set of types – is fairly intuitive. But working it out in practice can be kind of confusing, especially at first. So whenever we propose any extension, we have to think carefully about the error messages that might result.

In this particular case, I think that we could give a rather nice error message. Imagine that the user had written these two impls:

impl<T: Copy> Clone for T { // impl A
    fn clone(&self) -> T { ... }
}

impl<T: Clone> Clone for Option<T> { // impl B
    fn clone(&self) -> Option<T> { ... }
}

As we’ve seen, these two impls overlap but neither specializes the other. One might imagine an error message that says as much, and which also suggests the intersection impl that must be added:

error: two impls overlap, but neither specializes the other
  |
2 | impl<T: Copy> Clone for T {...}
  | ----
  |
4 | impl<T: Clone> Clone for Option<T> {...}
  |
  | note: both impls apply to a type like `Option<T>` where `T: Copy`;
  |       to specify the behavior in this case, add the following intersection impl:
  |       `impl<T: Copy> Clone for Option<T>`

Note the message at the end. The wording could no doubt be improved, but the key point is that we should be able to tell you exactly what impl is still needed.

Intersection impls do not solve the cross-crate problem

Unfortunately, intersection impls don’t give us the backwards compatibility that we want, at least not by themselves. The problem is, if we add the blanket impl, we also have to add the intersection impl. Within the same crate, this might be ok. But if this means that downstream crates have to add an intersection impl too, that’s a big problem.

Intersection impls may force you to predict the future

There is one other problem with intersection impls that arises in cross-crate situations, which nrc described on the tracking issue: sometimes there is a theoretical intersection between impls, but that intersection is empty in practice, and hence you may not be able to write the code you wanted to write. Let me give you an example. This problem doesn’t show up with the Copy/Clone trait, so we’ll switch briefly to another example.

Imagine that we are adding a RichDisplay trait to our project. This is much like the existing Display trait, except that it can support richer formatting like ANSI codes or a GUI. For convenience, we want any type that implements Display to also implement RichDisplay (but without any fancy formatting). So we add a trait and blanket impl like this one (let’s call it impl A):

trait RichDisplay { /* elided */ }
impl<D: Display> RichDisplay for D { /* elided */ } // impl A

Now, imagine that we are also using some other crate widget that contains various types, including Widget<T>. This Widget<T> type does not implement Display. But we would like to be able to render a widget, so we implement RichDisplay for this Widget<T> type. Even though we didn’t define Widget<T>, we can implement a trait for it because we defined the trait:

impl<T: RichDisplay> RichDisplay for Widget<T> { ... } // impl B

Well, now we have a problem! You see, according to the rules from RFC 1023, impls A and B are considered to potentially overlap, and hence we will get an error. This might surprise you: after all, impl A only applies to types that implement Display, and we said that Widget<T> does not. The problem has to do with semver: because Widget<T> was defined in another crate, it is outside of our control. In this case, the other crate is allowed to implement Display for Widget<T> at some later time, and that should not be a breaking change. But imagine that this other crate added an impl like this one (which we can call impl C):

impl<T: Display> Display for Widget<T> { ... } // impl C

Such an impl would cause impls A and B to overlap. Therefore, coherence considers these to be overlapping – however, specialization does not consider impl B to be a specialization of impl A, because, at the moment, there is no subset relationship between them. So there is a kind of catch-22 here: because the impl may exist in the future, we can’t consider the two impls disjoint, but because it doesn’t exist right now, we can’t consider them to be specializations.

Clearly, intersection impls don’t help to address this issue, as the set of intersecting types is empty. You might imagine having some alternative extension to coherence that permits impl B on the logic of if impl C were added in the future, that’d be fine, because impl B would be a specialization of impl A.

This logic is pretty dubious, though! For example, impl C might have been written another way (we’ll call this alternative version of impl C impl C2):

impl<T: WidgetDisplay> Display for Widget<T> { ... } // impl C2
//   ^^^^^^^^^^^^^^^^ changed this bound

Note that instead of working for any T: Display, there is now some other trait T: WidgetDisplay in use. Let’s say it’s only implemented for optional 32-bit integers right now (for some reason or another):

trait WidgetDisplay { ... }
impl WidgetDisplay for Option<i32> { ... }

So now if we had impls A, B, and C2, we would have a different problem. Now impls A and B would overlap for Widget<Option<i32>>, but they would not overlap for Widget<String>. The reason here is that Option<i32>: WidgetDisplay, and hence impl A applies. But String: RichDisplay (because String: Display) and hence impl B applies. Now we are back in the territory where intersection impls come into play. So, again, if we had impls A, B, and C2, one could imagine writing an intersection impl to cover this situation:

impl<T: RichDisplay + WidgetDisplay> RichDisplay for Widget<T> { ... } // impl D

But, of course, impl C2 has yet to be written, so we can’t really write this intersection impl now, in advance. We have to wait until the conflict arises before we can write it.

You may have noticed that I was careful to specify that both the Display trait and Widget type were defined outside of the current crate. This is because RFC 1023 permits the use of negative reasoning if either the trait or the type is under local control. That is, if the RichDisplay and the Widget type were defined in the same crate, then impls A and B could co-exist, because we are allowed to rely on the fact that Widget does not implement Display. The idea here is that the only way that Widget could implement Display is if I modify the crate where Widget is defined, and once I am modifying things, I can also make any other repairs (such as adding an intersection impl) that are necessary.

Conclusion

Today we looked at a particular potential use for specialization: adding a blanket impl that implements Clone for any Copy type. We saw that the current subset-only logic for specialization isn’t enough to permit adding such an impl. We then looked at one proposed fix for this, intersection impls (often called lattice impls).

Intersection impls are appealing because they increase expressiveness while keeping the general feel of the subset-only logic. They also have an explicit nature that appeals to me, at least in principle. That is, if you have two impls that partially overlap, the compiler doesn’t select which one should win: instead, you write an impl to cover precisely that intersection, and hence specify it yourself. Of course, that explicit nature can also be verbose and irritating sometimes, particularly since you will often want the intersection impl to behave the same as one of the other two (rather than doing some third, different thing).

Moreover, the explicit nature of intersection impls causes problems across crates:

  • they don’t allow you to add a blanket impl in a backwards compatible fashion;
  • they interact poorly with semver, and specifically the limitations on negative logic imposed by RFC 1023.

My conclusion then is that intersection impls may well be part of the solution we want, but we will need additional mechanisms. Stay tuned for additional posts.

A note on comments

As is my wont, I am going to close this post for comments. If you would like to leave a comment, please go to this thread on Rust’s internals forum instead.

Cameron KaiserTenFourFox 45.5.0b1 available: now with little-endian (integer) typed arrays, AltiVec VP9, improved MP3 support and a petulant rant

The TenFourFox 45.5.0 beta (yes, it says it's 45.4.0, I didn't want to rev the version number yet) is now available for testing (downloads, hashes). This blog post will serve as the current "release notes" since we have until November 8 for the next release and I haven't decided everything I'll put in it, so while I continue to do more work I figured I'd give you something to play with. Here's what's new so far, roughly in order of importance.

First, minimp3 has been converted to a platform decoder. Simply doing that fixed a number of other bugs, which were probably related to how we chunked frames, such as Google Translate voice clips getting truncated and problems with some types of MP3 live streams; now we use Mozilla's built-in frame parser instead and in this capacity minimp3 acts mostly as a disembodied codec. The new implementation works well with Google Translate, Soundcloud, Shoutcast and most of the other things I tried. (See, now there's a good use for that Mac mini G4 gathering dust on your shelf: install TenFourFox and set it up for remote screensharing access, and use it as a headless Internet radio -- I'm sitting here listening to National Public Radio over Shoutcast in a foxbox as I write this. Space-saving, environmentally responsible computer recycling! Yes, I know I'm full of great ideas. Yes. You're welcome.)

Interestingly, or perhaps frustratingly, although it somewhat improved Amazon Music (by making duration and startup more reliable) the issue with tracks not advancing still persisted for tracks under a certain critical length, which is dependent on machine speed. (The test case here was all the little five or six second Fingertips tracks from They Might Be Giants' Apollo 18, which also happens to be one of my favourite albums, and is kind of wrecked by this problem.) My best guess is that Amazon Music's JavaScript player interface ends up on a different, possibly asynchronous code path in 45 than 38 due to a different browser feature profile, and if the track runs out somehow it doesn't get the end-of-stream event in time. Since machine speed was a factor, I just amped up JavaScript to enter the Baseline JIT very quickly. That still doesn't fix it completely and Apollo 18 is still messed up, but it gets the critical track length down to around 10 or 15 seconds on this Quad G5 in Reduced mode and now most non-pathological playlists will work fine. I'll keep messing with it.

In addition, this release carries the first pass at AltiVec decoding for VP9. It has some of the inverse discrete cosine and one of the inverse Hadamard transforms vectorized, and I also wrote vector code for two of the convolutions but they malfunction on the iMac G4 and it seems faster without them because a lot of these routines work on unaligned data. Overall, our code really outshines the SSE2 versions I based them on if I do say so myself. We can collapse a number of shuffles and merges into a single vector permute, and the AltiVec multiply-sum instruction can take an additional constant for use as a bias, allowing us to skip an add step (the SSE2 version must do the multiply-sum and then add the bias rounding constant in separate operations; this code occurs quite a bit). Only some of the smaller transforms are converted so far because the big ones are really intimidating. I'm able to model most of these operations on my old Core 2 Duo Mac mini, so I can do a step-by-step conversion in a relatively straightforward fashion, but it's agonizingly slow going with these bigger ones. I'm also not going to attempt any of the encoding-specific routines, so if Google wants this code they'll have to import it themselves.

G3 owners, even though I don't support video on your systems, you get a little boost too because I've also cut out the loopfilter entirely. This improves everybody's performance and the mostly minor degradation in quality just isn't bad enough to be worth the CPU time required to clean it up. With this initial work the Quad is able to play many 360p streams at decent frame rates in Reduced mode and in Highest Performance mode even some 480p ones. The 1GHz iMac G4, which I don't technically support for video as it is below the 1.25GHz cutoff, reliably plays 144p and even some easy-to-decode (pillarboxed 4:3, mostly, since it has lots of "nothing" areas) 240p. This is at least as good as our AltiVec VP8 performance and as I grind through some of the really heavyweight transforms it should get even better.

To turn this on, go to our new TenFourFox preference pane (TenFourFox > Preferences... and click TenFourFox) and make sure MediaSource is enabled, then visit YouTube. You should have more quality settings now and I recommend turning annotations off as well. Pausing the video while the rest of the page loads is always a good idea as well as before changing your quality setting; just click once anywhere on the video itself and wait for it to stop. You can evaluate it on my scientifically validated set of abuses of grammar (and spelling), 1970s carousel tape decks, gestures we make at Gmail other than the middle finger and really weird MTV interstitials. However, because without further configuration Google will "auto-"control the stream bitrate and it makes that decision based on network speed rather than dropped frames, I'm leaving the "slower" appellation because frankly it will be, at least by default. Nevertheless, please advise if you think MSE should be the default in the next version or if you think more baking is necessary, though the pref will be user-exposed regardless.

But the biggest and most far-reaching change is, as promised, little-endian typed arrays (the "LE" portion of the IonPower-NVLE project). The rationale for this change is that, largely due to the proliferation of asm.js code and the little-endian Emscripten systems that generate it, there will be more and more code our big-endian machines can't run properly being casually imported into sites. We saw this with images on Facebook, and later with WhatsApp Web, and also with MEGA.nz, and others, and so on, and so forth. asm.js isn't merely the domain of tech demos and high-end ported game engines anymore.

The change is intentionally very focused and very specific. Only typed array access is converted to little-endian, and only integer typed array access at that: DataView objects, the underlying ArrayBuffers and regular untyped arrays in particular remain native. When a multibyte integer (16-bit halfword or 32-bit word) is written out to a typed array in IonPower-LE, it is transparently byteswapped from big-endian to little-endian and stored in that format. When it is read back in, it is byteswapped back to big-endian. Thus, the intrinsic big-endianness of the engine hasn't changed -- jsvals and doubles are still tag followed by payload, and integers and single-precision floats are still MSB at the lowest address -- only the way it deals with an integer typed array. Since asm.js uses a big typed array buffer essentially as a heap, this is sufficient to present at least a notional illusion of little-endianness as the asm.js script accesses that buffer as long as those accesses are integer.
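
As a conceptual illustration only (the real implementation is C++ and JIT-generated PowerPC assembly inside SpiderMonkey, not Rust, and the function names here are made up), the idea is equivalent to something like this: integer elements get swapped on the way out and swapped back on the way in, so the rest of the engine keeps its native big-endian view.

// Sketch of the hybrid-endian idea for 32-bit integer elements.
fn store_u32_element(heap: &mut [u8], offset: usize, value: u32) {
    // On a big-endian host this performs the byte swap (the JIT emits stwbrx).
    heap[offset..offset + 4].copy_from_slice(&value.to_le_bytes());
}

fn load_u32_element(heap: &[u8], offset: usize) -> u32 {
    // Swap back on load (lwbrx in the JIT), so the engine still sees a
    // native big-endian integer internally.
    let mut buf = [0u8; 4];
    buf.copy_from_slice(&heap[offset..offset + 4]);
    u32::from_le_bytes(buf)
}

fn main() {
    let mut heap = [0u8; 8];
    store_u32_element(&mut heap, 0, 0xDEADBEEF);
    assert_eq!(load_u32_element(&heap, 0), 0xDEADBEEF);
    // The backing bytes are little-endian, which is what Emscripten-generated
    // asm.js expects to find when it indexes into its heap.
    assert_eq!(&heap[0..4], &[0xEF, 0xBE, 0xAD, 0xDE]);
}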

I mentioned that floats (neither single-precision nor doubles) are not byteswapped, and there's an important reason for that. At the interpreter level, the virtual machine's typed array load and store methods are passed through the GNU gcc built-in to swap the byte order back and forth (which, at least for 32 bits, generates pretty efficient code). At the Baseline JIT level, the IonMonkey MacroAssembler is modified to call special methods that generate the swapped loads and stores in IonPower, but it wasn't nearly that simple for the full Ion JIT itself because both unboxed scalar values (which need to stay big-endian because they're native) and typed array elements (which need to be byte-swapped) go through the same code path. After I spent a couple days struggling with this, Jan de Mooij suggested I modify the MIR for loading and storing scalar values to mark it if the operation actually accesses a typed array. I added that to the IonBuilder and now Ion compiled code works too.

All of these integer accesses have almost no penalty: there's a little bit of additional overhead on the interpreter, but Baseline and Ion simply substitute the already-built-in PowerPC byteswapped load and store instructions (lwbrx, stwbrx, lhbrx, sthbrx, etc.) that we already employ for irregexp for these accesses, and as a result we incur virtually no extra runtime overhead at all. Although the PowerPC specification warns that byte-swapped instructions may have additional latency on some implementations, no PPC chip ever used in a Power Mac falls in that category, and they aren't "cracked" on G5 either. The pseudo-little endian mode that exists on G3/G4 systems but not on G5 is separate from these assembly language instructions, which work on all PowerPCs including the G5 going all the way back to the original 601.

Floating point values, on the other hand, are a different story. There are no instructions to directly store a single or double precision value in a byteswapped fashion, and since there are also no direct general purpose register-floating point register moves, the float has to be spilled to memory and picked up by a GPR (or two, if it's a double) and then swapped at that point to complete the operation. To get it back requires reversing the process, along with the GPR (or two) getting spilled this time to repopulate the double or float after the swap is done. All that would have significantly penalized float arrays and we have enough performance problems without that, so single and double precision floating point values remain big-endian.

Fortunately, most of the little snippets of asm.js floating around (that aren't entire Emscriptenized blobs: more about that in a moment) seem perfectly happy with this hybrid approach, presumably because they're oriented towards performance and thus integer operations. MEGA.nz seems to load now, at least what I can test of it, and WhatsApp Web now correctly generates the QR code to allow your phone to sync (just in time for you to stop using WhatsApp and switch to Signal because Mark Zuckerbrat has sold you to his pimps here too).

But what about bigger things? Well ...

Yup. That's DOSBOX emulating MECC's classic Oregon Trail (from the Internet Archive's MS-DOS Game Library), converted to asm.js with Emscripten and running inside TenFourFox. Go on and try that in 45.4. It doesn't work; it just throws an exception and screeches to a halt.

To be sure, it doesn't fully work in this release of 45.5 either. But some of the games do: try playing Oregon Trail yourself, or Where in the World is Carmen Sandiego or even the original, old school in its MODE 40 splendour, Те́трис (that's Tetris, comrade). Even Commander Keen Goodbye Galaxy! runs, though not even the Quad can make it reasonably playable. In particular the first two probably will run on nearly any Power Mac since they're not particularly dependent on timing (I was playing Oregon Trail on my iMac G4 last night), though you should expect it may take anywhere from 20 seconds to a minute to actually boot the game (depending on your CPU) and I'd just mute the tab since not even the Quad G5 at full tilt can generate convincing audio. But IonPower-LE will now run them, and they run pretty well, considering.

Does that seem impractical? Okay then: how about something vaguely useful ... like ... Linux?

This is, of course, Fabrice Bellard's famous jslinux emulator, and yes, IonPower now runs this too. Please don't expect much out of it if you're not on a high-end G5; even the Quad at full tilt took about 80 seconds of elapsed time to get to a root prompt. But it really works and it's usable.

Getting into ridiculous territory was running Linux on OpenRISC:

This is the jor1k emulator and it's only for the highest end G5 systems, folks. Set it to 5fps to have any chance of booting it in less than five minutes. But again -- it's not that the dog walked well.

vi freaks like me will also get a kick out of vim.js. Or, if you miss Classic apps, now TenFourFox can be your System 7 (mouse sync is a little too slow here but it boots):

Now for the bad news: notice that I said things don't fully work. With em-dosbox, the Emscriptenoberated DOSBOX, notice that I only said some games run in TenFourFox, not most, not even many. Wolfenstein 3D, for example, gets as far as the main menu and starting a new game, and then bugs out with a "Reboot requested" message which seems to originate from the emulated BIOS. (It works fine on my MacBook Air, and I did get it to run under PCE.js, albeit glacially.) Catacombs 3D just sits there, trying to load a level and never finishing. Most of the other games don't even get that far and a few don't start at all.

I also tried a Windows 95 emulator (also DOSBOX, apparently), which got part way into the boot sequence and then threw a JavaScript exception "SimulateInfiniteLoop"; the Internet Archive's arcade games under MAME which starts up and then exhausts recursion and aborts (this seems like it should be fixable or tunable, but I haven't explored this further so far); and of course programs requiring WebGL will never, ever run on TenFourFox.

Debugging Emscripten goo output is quite difficult and usually causes tumours in lab rats, but several possible explanations come to mind (none of them mutually exclusive). One could be that the code actually does depend on the byte ordering of floats and doubles as well as integers, as do some of the Mozilla JIT conformance tests. However, that's not ever going to change because it requires making everything else suck for that kind of edge case to work. Another potential explanation is that the intrinsic big-endianness of the engine is causing things to fail somewhere else, such as they managed to get things inadvertently written in such a way that the resulting data was byteswapped an asymmetric number of times or some other such violation of assumptions. Another one is that the execution time is just too damn long and the code doesn't account for that possibility. Finally, there might simply be a bug in what I wrote, but I'm not aware of any similar hybrid endian engine like this one and thus I've really got nothing to compare it to.

In any case, the little-endian typed array conversion definitely fixes the stuff that needed to get fixed and opens up some future possibilities for web applications we can also run like an Intel Mac can. The real question is whether asm.js compilation (OdinMonkey, as opposed to IonPower) pays off on PowerPC now that the memory model is apparently good enough at least for most things. It would definitely run faster than IonPower, possibly several times faster, but the performance delta would not be as massive as IonPower versus the interpreter (about a factor of 40 difference), the compilation step might bring lesser systems to their knees, and it would require some significant additional engineering to get it off the ground (read: a lot more work for me to do). Given that most of our systems are not going to run these big massive applications well even with execution time cut in half or even 2/3rds (and some of them don't work correctly as it is), it might seem a real case of diminishing returns to make that investment of effort. I'll just have to see how many free cycles I have and how involved the effort is likely to be. For right now, IonPower can run them and that's the important thing.

Finally, the petulant rant. I am a fairly avid reader of Thom Holwerda's OSNews because it reports on a lot of marginal and unusual platforms and computing news that most of the regular outlets eschew. The articles are in general very interesting, including this heads-up on booting the last official GameCube game (and since the CPU in the Nintendo GameCube is a G3 derivative, that's even relevant on this blog). However, I'm going to take issue with one part of his otherwise thought-provoking discussion on the new Apple A10 processor and the alleged impending death of Mac OS macOS, where he says, "I didn't refer to Apple's PowerPC days for nothing. Back then, Apple knew it was using processors with terrible performance and energy requirements, but still had to somehow convince the masses that PowerPC was better faster stronger than x86; claims which Apple itself exposed — overnight — as flat-out lies when the company switched to Intel."

Besides my issue with what he links in that last sentence as proof, which actually doesn't establish Apple had been lying (it's actually a Low End Mac piece contemporary with the Intelcalypse asking if they were), this is an incredibly facile oversimplification. Before the usual suspects hop on the comments with their usual suspecty things, let's just go ahead for the sake of argument and say everything its detractors said about the G5 and the late generation G4 systems are true, i.e., they're hot, underpowered and overhungry. (I contest the overhungry part in particular for the late laptop G4 systems, by the way. My 2005 iBook G4 to this day still gets around five hours on a charge if I'm aggressive and careful about my usage. For a 2005 system that's damn good, especially since Apple said six for the same model I own but only 4.5 for the 2008 MacBooks. At least here you're comparing Reality Distortion Field to Reality Distortion Field, and besides, all the performance/watt in the world doesn't do you a whole hell of a lot of good if your machine's out of puff.)

So let's go ahead and just take all that as given for discussion purposes. My beef with that comment is it conveniently ignores every other PowerPC chip before the Intel transition just to make the point. For example, PC Magazine back in the day noted that a 400MHz Yosemite G3 outperformed a contemporary 450MHz Pentium II on most of their tests (read it for yourself, April 20, 1999, page 53). The G3, which doesn't have SIMD of any kind, even beat the P2 running MMX code. For that matter, a 350MHz 604e was over twice as fast at integer performance than a 300MHz P2. I point all of this out not (necessarily) to go opening old wounds but to remind those ignorant of computing history that there was a time in "Apple's PowerPC days" when even the architecture's detractors will admit it was at least competitive. That time clearly wasn't when the rot later set in, but he certainly doesn't make that distinction.

To be sure, was this the point of his article? Not really, since he was more addressing ARM rather than PowerPC, but it is sort of. Thom asserts in his exchange with Grüber Alles that Apple and those within the RDF cherrypick benchmarks to favour what suits them, which is absolutely true and I just did it myself, but Apple isn't any different than anyone else in that regard (put away the "tu quoque" please) and Apple did this as much in the Power Mac days to sell widgets as they do now in the iOS ones. For that matter, Thom himself backtracks near the end and says, "there is one reason why benchmarks of Apple's latest mobile processors are quite interesting: Apple's inevitable upcoming laptop and desktop switchover to its own processors." For the record I see this as highly unlikely due to the Intel Mac's frequent use as a client virtual machine host, though it's interesting to speculate. But the rise of the A-series is hardly comparable with Apple's PowerPC days at all, at least not as a monolithic unit. If he had compared the benchmark situation with when the PowerPC roadmap was running out of gas in the 2004-5 timeframe, by which time even boosters like yours truly would have conceded the gap was widening but Apple relentlessly ginned up evidence otherwise, I think I'd have grudgingly concurred. And maybe that's actually what he meant. However, what he wrote lumps everything from the 601 to the 970MP into a single throwaway comment, is baffling from someone who also uses and admires Mac OS 9 (as I do), and dilutes his core argument. Something like that I'd expect from the breezy mainstream computer media types. Thom, however, should know better.

(On a related note, Ars Technica was a lot better when they were more tech and less politics.)

Next up: updates to our custom gdb debugger and a maintenance update for TenFourFoxBox. Stay tuned and in the meantime try it and see if you like it. Post your comments, and, once you've played a few videos or six, what you think the default should be for 45.5 (regular VP8 video or MSE/VP9).

Mitchell BakerLiving with Diverse Perspectives

Diversity and Inclusion is more than having people of different demographics in a group.  It is also about having the resulting diversity of perspectives included in the decision-making and action of the group in a fundamental way.

I’ve had this experience lately, and it demonstrated to me both why it can be hard and why it’s so important.  I’ve been working on a project where I’m the individual contributor doing the bulk of the work. This isn’t because there’s a big problem or conflict; instead it’s something I feel needs my personal touch. Once the project is complete, I’m happy to describe it with specifics. For now, I’ll describe it generally.

There’s a decision to be made.  I connected with the person whose comfort with the idea mattered most to me, to make sure it sounded good.  I checked with our outside attorney just in case there was something I should know.  I checked with the group of people who are most closely affected and who would lead the decision and implementation if we proceed. I received lots of positive responses.

Then one last person from my first level of vetting checked in with me and spoke up.  He’s sorry for the delay, etc., but he has concerns.  He wants us to explore a bunch of different options before deciding if we’ll go forward at all, and if so how.

At first I had that sinking feeling of “Oh bother, look at this.  I am so sure we should do this and now there’s all this extra work and time and maybe change. Ugh!”  I got up and walked around a bit and did a few things that put me in a positive frame of mind.  Then I realized — we had added this person to the group for two reasons.  First, he’s awesome — both creative and effective. Second, he has a different perspective.  We say we value that different perspective. We often seek out his opinion precisely because of that perspective.

This is the first time his perspective has pushed me to do more, or to do something differently, or perhaps even prevented me from doing something that I think I want to do.  So this is the first time the different perspective is doing more than reinforcing what seemed right to me.

That led me to think “OK, got to love those different perspectives” a little ruefully.  But as I’ve been thinking about it I’ve come to internalize the value and to appreciate this perspective.  I expect the end result will be more deeply thought out than I had planned.  And it will take me longer to get there.  But the end result will have investigated some key assumptions I started with.  It will be better thought out, and better able to respond to challenges. It will be stronger.

I still can’t say I’m looking forward to the extra work.  But I am looking forward to a decision that has a much stronger foundation.  And I’m looking forward to the extra learning I’ll be doing, which I believe will bring ongoing value beyond this particular project.

I want to build Mozilla into an example of what a trustworthy organization looks like.  I also want to build Mozilla so that it reflects experience from our global community and isn’t living in a geographic or demographic bubble.  Having great people be part of a diverse Mozilla is part of that.  Creating a welcoming environment that promotes the expression of, and positive reaction to, different perspectives is also key.  As we learn more and more about how to do this we will strengthen the ways we express our values in action and strengthen our overall effectiveness.

Mozilla WebDev CommunityBeer and Tell – September 2016

Once a month, web developers from across the Mozilla Project get together to talk about our side projects and drink, an occurrence we like to call “Beer and Tell”.

There’s a wiki page available with a list of the presenters, as well as links to their presentation materials. There’s also a recording available courtesy of Air Mozilla.

emceeaich: Gopher Tessel

First up was emceeaich, who shared Gopher Tessel, a project for running a Gopher server (Gopher being an Internet protocol that was popular before the World Wide Web) on a Tessel. Tessel is a small circuit board that runs Node.js projects; Gopher Tessel reads sensors (such as the temperature sensor) connected to the board, and exposes their values via Gopher. It can also control lights connected to the board.

groovecoder: Crypto: 500 BC – Present

Next was groovecoder, who shared a preview of a talk about cryptography throughout history. The talk is based on “The Code Book” by Simon Singh. Notable moments and techniques mentioned include:

  • 499 BCE: Histiaeus of Miletus shaves the heads of messengers, tattoos messages on their scalps, and sends them after their hair has grown back to hide the message.
  • ~100 AD: Milk of tithymalus plant is used as invisible ink, activated by heat.
  • ~700 BCE: Scytale
  • 49 BC: Caesar cipher
  • 1553 AD: Vigenère cipher

bensternthal: Home Monitoring & Weather Tracking

bensternthal was up next, and he shared his work building a dashboard with weather and temperature information from his house. Ben built several Node.js-based applications that collect data from his home weather station, from his Nest thermostat, and from Weather Underground and send all the data to an InfluxDB store. The dashboard itself uses Grafana to plot the data, and all of these servers are run using Docker.

The repositories for the Node.js applications and the Docker configuration are available on GitHub:

craigcook: ByeHolly

Next was craigcook, who shared a virtual yearbook page that he made as a farewell tribute to former teammate Holly Habstritt-Gaal, who recently took a job at another company. The page shows several photos that are clipped at the edges to look curved like an old television screen. This is done in CSS using clip-path with an SVG-based path for clipping. The SVG used is also defined using proportional units, which allows it to warp and distort correctly for different image sizes, as can be seen from the variety of images it is used on in the page.

peterbe: react-buggy

peterbe told us about react-buggy, a client for viewing GitHub issues implemented in React. It is a rewrite of buggy, a similar client peterbe wrote for Bugzilla bugs. Issues are persisted in Lovefield (a wrapper for IndexedDB) so that the app can function offline. The client also uses elasticlunr.js to provide full-text search on issue titles and comments.

shobson: tic-tac-toe

Last up was shobson, who shared a small Tic-Tac-Toe game on the viewsourceconf.org offline page that is shown when the site is in offline mode and you attempt to view a page that is not available offline.


If you’re interested in attending the next Beer and Tell, sign up for the dev-webdev@lists.mozilla.org mailing list. An email is sent out a week beforehand with connection details. You could even add yourself to the wiki and show off your side-project!

See you next month!

Air MozillaParticipation Q3 Demos

Participation Q3 Demos Watch the Participation Team share the work from the last quarter in the Demos.

Emily DunhamSetting a Freenode channel's taxonomy info

Some recent flooding in a Freenode channel sent me on a quest to discover whether the network’s services were capable of setting a custom message rate limit for each channel. As far as I can tell, they are not.

However, the problem caused me to re-read the ChanServ help section:

/msg chanserv help
- ***** ChanServ Help *****
- ...
- Other commands: ACCESS, AKICK, CLEAR, COUNT, DEOP, DEVOICE,
-                 DROP, GETKEY, HELP, INFO, QUIET, STATUS,
-                 SYNC, TAXONOMY, TEMPLATE, TOPIC, TOPICAPPEND,
-                 TOPICPREPEND, TOPICSWAP, UNQUIET, VOICE,
-                 WHY
- ***** End of Help *****

Taxonomy is a cool word. Let’s see what taxonomy means in the context of IRC:

/msg chanserv help taxonomy
- ***** ChanServ Help *****
- Help for TAXONOMY:
-
- The taxonomy command lists metadata information associated
- with registered channels.
-
- Examples:
-     /msg ChanServ TAXONOMY #atheme
- ***** End of Help *****

Follow its example:

/msg chanserv taxonomy #atheme
- Taxonomy for #atheme:
- url                       : http://atheme.github.io/
- ОХЯЕБУ                    : лололол
- End of #atheme taxonomy.

That’s neat; we can elicit a URL and some field with a Cyrillic and apparently custom name. But how do we put metadata into a Freenode channel’s taxonomy section? Google has no useful hits (hence this blog post), but further digging into ChanServ’s manual does help:

/msg chanserv help set

- ***** ChanServ Help *****
- Help for SET:
-
- SET allows you to set various control flags
- for channels that change the way certain
- operations are performed on them.
-
- The following subcommands are available:
- EMAIL           Sets the channel e-mail address.
- ...
- PROPERTY        Manipulates channel metadata.
- ...
- URL             Sets the channel URL.
- ...
- For more specific help use /msg ChanServ HELP SET command.
- ***** End of Help *****

Set arbitrary metadata with /msg chanserv set #channel property key value

The commands /msg chanserv set #channel email a@b.com and /msg chanserv set #channel property email a@b.com appear to function identically, with the former being a convenient wrapper around the latter.

So that’s how #atheme got their fancy Cyrillic taxonomy: Someone with the appropriate permissions issued the command /msg chanserv set #atheme property ОХЯЕБУ лололол.

Behaviors of channel properties

I’ve attempted to deduce the rules governing custom metadata items, because I couldn’t find them documented anywhere.

  1. Issuing a set property command with a property name but no value deletes the property, removing it from the taxonomy.
  2. A property is overwritten each time someone with the appropriate permissions issues a /set command with a matching property name (more on the matching in a moment). The property name and value are stored with the same capitalization as the command issued.
  3. The algorithm which decides whether to overwrite an existing property or create a new one is not case sensitive. So if you set ##test email test@example.com and then set ##test EMAIL foo, the final taxonomy will show no field called email and one field called EMAIL with the value foo.
  4. When displayed, taxonomy items are sorted first in alphabetical order (case-insensitively), then by length. For instance, properties with the names a, AA, and aAa would appear in that order, because the initial alphabetization is case-insensitive.
  5. Attempting to place mIRC color codes (http://www.mirc.com/colors.html) in the property name results in the error “Parameters are too long. Aborting.” However, placing color codes in the value of a custom property works just fine.
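
To make the overwrite and delete rules concrete, here's a hypothetical sequence for a channel called ##test (the property name "greeting" and its values are invented purely for illustration; the syntax is the same SET PROPERTY form described above):

/msg chanserv set ##test property greeting hello
/msg chanserv set ##test property GREETING goodbye
/msg chanserv set ##test property GREETING
/msg chanserv taxonomy ##test

The first command creates a property named greeting, the second overwrites it (case-insensitive match) and re-stores it as GREETING, the third deletes it because no value is given, and the last command displays whatever taxonomy remains.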

Other uses

As a final note, you can also do basically the same thing with Freenode’s NickServ, to set custom information about your nickname instead of about a channel.

Support.Mozilla.OrgWhat’s Up with SUMO – 22nd September

Hello, SUMO Nation!

How are you doing? Have you seen the First Inaugural Firefox Census already? Have you filled it out? Help us figure out what kind of people use Firefox! You can get to it right after you read through our latest & greatest news below.

Welcome, new contributors!

If you just joined us, don’t hesitate – come over and say “hi” in the forums!

Contributors of the week

Don’t forget that if you are new to SUMO and someone helped you get started in a nice way you can nominate them for the Buddy of the Month!

SUMO Community meetings

  • LATEST ONE: 21st of September – you can read the notes here and see the video at AirMozilla.
  • NEXT ONE: happening on the 28th of September!
  • If you want to add a discussion topic to the upcoming meeting agenda:
    • Start a thread in the Community Forums, so that everyone in the community can see what will be discussed and voice their opinion here before Wednesday (this will make it easier to have an efficient meeting).
    • Please do so as soon as you can before the meeting, so that people have time to read, think, and reply (and also add it to the agenda).
    • If you can, please attend the meeting in person (or via IRC), so we can follow up on your discussion topic during the meeting with your feedback.

Community

Platform

  • PLATFORM REMINDER! The Platform Meetings are BACK! If you missed the previous ones, you can find the notes in this document. (here’s the channel you can subscribe to). We really recommend going for the document and videos if you want to make sure we’re covering everything as we go.
  • A few important and key points to make regarding the migration:
    • We are trying to keep as many features from Kitsune as possible. Some processes might change. We do not know yet how they will look.
    • Any and all training documentation that you may be accessing is generic – both for what you can accomplish with the platform and the way roles and users are called within the training. They do not have much to do with the way Mozilla or SUMO operate on a daily basis. We will use these to design our own experience – “translate” them into something more Mozilla, so to speak.
    • All the important information that we have has been shared with you, one way or another.
    • The timelines and schedule might change depending on what happens.
  • We started discussions about Ranks and Roles after the migration – join in! More topics will start popping up in the forums up for discussion, but they will all be gathered in the first post of the main migration thread.
  • If you are interested in test-driving the new platform now, please contact Madalina.
    • IMPORTANT: the whole place is a work in progress, and a ton of the final content, assets, and configurations (e.g. layout pieces) are missing.
  • QUESTIONS? CONCERNS? Please take a look at this migration document and use this migration thread to put questions/comments about it for everyone to share and discuss. As much as possible, please try to keep the migration discussion and questions limited to those two places – we don’t want to chase ten different threads in too many different places.

Social

Support Forum

  • Once again, and with gusto – SUUUUMO DAAAAAY! Go for it!
  • Final reminder: If you are using email notifications to know what posts to return to, jscher2000 has a great tip (and tool) for you. Check it out here!

Knowledge Base & L10n

Firefox

  • for Android
    • Version 49 is out! Now you can enjoy the following:

      • caching selected pages (e.g. mozilla.org) for offline retrieval
      • usual platform and bug fixes
  • for Desktop
    • Version 49 is out! Enjoy the following:

      • text-to-speech in Reader mode (using your OS voice modules)
      • ending support for older Mac OS versions
      • ending support for older CPUs
      • ending support for Firefox Hello
      • usual platform and bug fixes
  • for iOS
    • No news from under the apple tree this time!

By the way – it’s the first day of autumn, officially! I don’t know about you, but I am looking forward to mushroom hunting, longer nights, and a bit of rain here and there (as long as it stops at some point). What is your take on autumn? Tell us in the comments!

Cheers and see you around – keep rocking the helpful web!

Mozilla WebDev CommunityExtravaganza – September 2016

Once a month, web developers from across Mozilla get together to talk about the work that we’ve shipped, share the libraries we’re working on, meet new folks, and talk about whatever else is on our minds. It’s the Webdev Extravaganza! The meeting is open to the public; you should stop by!

You can check out the wiki page that we use to organize the meeting, or view a recording of the meeting in Air Mozilla. Or just read on for a summary!

Shipping Celebration

The shipping celebration is for anything we finished and deployed in the past month, whether it be a brand new site, an upgrade to an existing one, or even a release of a library.

Survey Gizmo Integration with Google Analytics

First up was shobson, who talked about a survey feature on MDN that prompts users to leave feedback about how MDN helped them complete a task. The survey is hosted by SurveyGizmo, and custom JavaScript included on the survey reports the user’s answers back to Google Analytics. This allows us to filter on the feedback from users to answer questions like, “What sections of the site are not helping users complete their tasks?”.
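
As a rough illustration of the reporting mechanism (this is not the actual MDN/SurveyGizmo code, and the event category and answer names are made up; it only assumes the survey page loads Google Analytics' standard analytics.js, which defines the global ga() function):

// Hypothetical sketch: report one survey answer to Google Analytics as an event.
function reportSurveyAnswer(question, answer) {
  // 'survey' is an arbitrary event category chosen for this example.
  ga('send', 'event', 'survey', question, answer);
}

// e.g. after the user submits the SurveyGizmo form:
reportSurveyAnswer('helped-complete-task', 'no');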

View Source Offline Mode

shobson also mentioned the View Source website, which is now offline-capable thanks to Service Workers. The pages are now cached if you’ve ever visited them, and the images on the site have offline fallbacks if you attempt to view them with no internet connection.

SHIELD Content Signing

Next up was mythmon, who shared the news that Normandy, the backend service for SHIELD, now signs the data that it sends to Firefox using the Autograph service. The signature is included with responses via the Content-Signature header. This signing will allow Firefox to only execute SHIELD recipes that have been approved by Mozilla.

Open-source Citizenship

Here we talk about libraries we’re maintaining and what, if anything, we need help with for them.

Neo

Eli was up next, and he shared Neo, a tool for setting up new React-based projects with zero configuration. It installs and configures many useful dependencies, including Webpack, Babel, Redux, ESLint, Bootstrap, and more! Neo is installed either as a command used to initialize new projects or as a dependency added to existing projects, and acts as a single dependency that pulls in all the different libraries you’ll need.

Roundtable

The Roundtable is the home for discussions that don’t fit anywhere else.

Standu.ps Reboot

Last up was pmac, who shared a note about how he and willkg are rewriting the standu.ps service using Django, and are switching the rewrite to use GitHub authentication instead of Persona. They have a staging server set up and expect to have news next month about the availability of the new service.

Standu.ps is a service used by several teams at Mozilla for posting status updates as they work, and includes an IRC bot for quick posting of updates.


If you’re interested in web development at Mozilla, or want to attend next month’s Extravaganza, subscribe to the dev-webdev@lists.mozilla.org mailing list to be notified of the next meeting, and maybe send a message introducing yourself. We’d love to meet you!

See you next month!

Air MozillaReps weekly, 22 Sep 2016

Reps weekly This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

Firefox NightlyThese Weeks in Firefox: Issue 1

Every two weeks, engineering teams working on Firefox Desktop get together and update each other on things that they’re working on. These meetings are public. Details on how to join, as well as meeting notes, are available here.

We feel that the bleeding edge development state captured in those meeting notes might be interesting to our Nightly blog audience. To that end, we’re taking a page out of the Rust and Servo playbook, and offering you handpicked updates about what’s going on at the forefront of Firefox development!

Expect these every two weeks or so.

Thanks for using Nightly, and keep on rocking the free web!

Highlights

Contributor(s) of the Week

  • The team has nominated Adam (adamgj.wong), who has helped clean-up some of our Telemetry APIs. Great work, Adam!

Project Updates

 

Add-ons

  • andym wants to remind everybody that the Add-ons team is still triaging and fixing SDK bugs (like this one, for example).

Electrolysis (e10s)

Core Engineering

  • ksteuber rewrote the Snappy Symbolication Server (mainly used for the Gecko Profiler for Windows builds) and this will be deployed soon.
  • felipe is in the process of designing experiment mechanisms for testing different behaviours for Flash (allowing some, denying some, click-to-play some, based on heuristics).

Platform UI and other Platform Audibles

Quality of Experience

Sync / Firefox Accounts

Uncategorized

Here are the raw meeting notes that were used to derive this list.

Want to help us build Firefox? Get started here!

Here’s a tool to find some mentored, good first bugs to hack on.

Air MozillaPrivacy Lab - September 2016 - EU Privacy Panel

Privacy Lab - September 2016 - EU Privacy Panel Want to learn more about EU Privacy? Join us for a lively panel discussion of EU Privacy, including GDPR, Privacy Shield, Brexit and more. After...

About:CommunityOne Mozilla Clubs


In 2015, The Mozilla Foundation launched the Mozilla Clubs program to bring people together locally to teach, protect and build the open web in an engaging and collaborative way. Within a year it grew to include 240+ Clubs in 100+ cities globally, and now is growing to reach new communities around the world.

Today we are excited to share a new focus for Mozilla Clubs taking place on a University or College Campus (Campus Clubs). Mozilla Campus Clubs blend the passion and student focus of the former Firefox Student Ambassador program and Take Back The Web Campaign with the existing structure of  Mozilla Clubs to create a unified model for participation on campuses!

Mozilla Campus Clubs take advantage of the unique learning environments of Universities and Colleges to bring groups of students together to teach, build and protect the open web. It builds upon the Mozilla Club framework to provide targeted support to those on campus through its:

  1. Structure:  Campus Clubs include an Executive Team in addition to the Club Captain position, who help develop programs and run activities specific to the 3 impact areas (teach, build, protect).
  2. Training & Support: Like all Mozilla Clubs, Regional Coordinators and Club Captains receive training and mentorship throughout their clubs journey. However the nature of the training and support for Campus Clubs is specific to helping students navigate the challenges of setting up and running a club in the campus context.
  3. Activities: Campus Club activities are structured around 3 impact areas (teach, build, protect). Club Captains in a University or College can find suggested activities (some specific to students) on the website here.

These clubs will be connected to the larger Mozilla Club network to share resources, curriculum, mentorship and support with others around the world. In 2017 you’ll see additional unification in terms of a joint application process for all Regional Coordinators and a unified web presence.

This is an exciting time for us to unite our network of passionate contributors and create new opportunities for collaboration, learning, and growth within our Mozillian communities. We also see the potential of this unification to allow for greater impact across Mozilla’s global programs, projects and initiatives.

If you’re currently involved in Mozilla Clubs and/or the FSA program, here are some important things to know:

  • The Firefox Student Ambassador Program is now Mozilla Campus Clubs: After many months of hard work and careful planning the Firefox Ambassador Program (FSA) has officially transitioned to Mozilla Clubs as of Monday September 19th, 2016. For full details about the Firefox Student Ambassador transition check out this guide here.
  • Firefox Club Captains will now be Mozilla Club Captains: Firefox Club Captains who already have a club, a structure, and a community set up on a university/college should register their club here to be partnered with a Regional Coordinator and have access to new resources and opportunities; more details are here.
  • Current Mozilla Clubs will stay the same: Any Mozilla Club that already exists will stay the same. If they happen to be on a university or college campus Clubs may choose to register as a Campus Club, but are not required to do so.
  • There is a new application for Regional Coordinators (RC’s): Anyone interested in taking on more responsibility within the Clubs program can apply here.  Regional Coordinators mentor Club Captains that are geographically close to them. Regional Coordinators support all Club Captains in their region whether they are on campus or elsewhere.
  • University or College students who want to start a Club at their University or College may apply here. Students who primarily want to lead a club on a campus for/with other university/college students will apply to start a Campus Club.
  • People who want to start a club for any type of learner apply here. Anyone who wants to start a club that is open to all kinds of learners (not limited to specifically University students) may apply to start a Club here.

Individuals who are leading Mozilla Clubs commit to running regular (at least monthly) gatherings, participate in community calls, and contribute resources and learning materials to the community. They are part of a network of leaders and doers who support and challenge each other. By increasing knowledge and skills in local communities Club leaders ensure that the internet is a global public resource, open and accessible to all.

This is the beginning of a long term collaboration for the Mozilla Clubs Program. We are excited to continue to build momentum for Mozilla’s mission through new structures and supports that will help engage more people with a passion for the open web.

Air MozillaThe Joy of Coding - Episode 72

The Joy of Coding - Episode 72 mconley livehacks on real Firefox bugs while thinking aloud.

Julia ValleraIntroducing Mozilla Campus Clubs


In 2015, The Mozilla Foundation launched the Mozilla Clubs program to bring people together locally to teach, protect and build the open web in an engaging and collaborative way. Within a year it grew to include 240+ Clubs in 100+ cities globally, and now is growing to reach new communities around the world.

Today we are excited to share a new focus for Mozilla Clubs taking place on a University or College Campus (Campus Clubs). Mozilla Campus Clubs blend the passion and student focus of the former Firefox Student Ambassador program and Take Back The Web Campaign with the existing structure of  Mozilla Clubs to create a unified model for participation on campuses!

Mozilla Campus Clubs take advantage of the unique learning environments of Universities and Colleges to bring groups of students together to teach, build and protect the open web. It builds upon the Mozilla Club framework to provide targeted support to those on campus through its:

  1. Structure:  Campus Clubs include an Executive Team in addition to the Club Captain position, who help develop programs and run activities specific to the 3 impact areas (teach, build, protect).
  2. Specific Training & Support: Like all Mozilla Clubs, Regional Coordinators and Club Captains receive training and mentorship throughout their clubs journey. However the nature of the training and support for Campus Clubs is specific to helping students navigate the challenges of setting up and running a club in the campus context.
  3. Activities: Campus Club activities are structured around 3 impact areas (teach, build, protect). Club Captains in a University or College can find suggested activities (some specific to students) on the website here.

These clubs will be connected to the larger Mozilla Club network to share resources, curriculum, mentorship and support with others around the world. In 2017 you’ll see additional unification in terms of a joint application process for all Club leaders and a unified web presence.

This is an exciting time for us to unite our network of passionate contributors and create new opportunities for collaboration, learning, and growth within our Mozillian communities. We also see the potential of this unification to allow for greater impact across Mozilla’s global programs, projects and initiatives.

If you’re currently involved in Mozilla Clubs and/or the FSA program, here are some important things to know:

  • The Firefox Student Ambassador Program is now Mozilla Campus Clubs: After many months of hard work and careful planning the Firefox Ambassador Program (FSA) has officially transitioned to Mozilla Clubs as of Monday September 19th, 2016. For full details about the Firefox Student Ambassador transition check out this guide here.
  • Firefox Club Captains will now be Mozilla Club Captains: Firefox Club Captains who already have a club, a structure, and a community set up on a university/college should register their club here to be partnered with a Regional Coordinator and have access to new resources and opportunities; more details are here.
  • Current Mozilla Clubs will stay the same: Any Mozilla Club that already exists will stay the same. If they happen to be on a university or college campus Clubs may choose to register as a Campus Club, but are not required to do so.
  • There is a new application for Regional Coordinators (RC’s): Anyone interested in taking on more responsibility within the Clubs program can apply here.  Regional Coordinators mentor Club Captains that are geographically close to them. Regional Coordinators support all Club Captains in their region whether they are on campus or elsewhere.
  • University or College students who want to start a Club at their University or College may apply here. Students who primarily want to lead a club on a campus for/with other university/college students will apply to start a Campus Club.
  • People who want to start a club for any type of learner apply here. Anyone who wants to start a club that is open to all kinds of learners (not limited to specifically University students) may apply on the Mozilla Club website.

Individuals who are leading Mozilla Clubs commit to running regular (at least monthly) gatherings, participate in community calls, and contribute resources and learning materials to the community. They are part of a network of leaders and doers who support and challenge each other. By increasing knowledge and skills in local communities Club leaders ensure that the internet is a global public resource, open and accessible to all.

This is the beginning of a long term collaboration for the Mozilla Clubs Program. We are excited to continue to build momentum for Mozilla’s mission through new structures and supports that will help engage more people with a passion for the open web.

Andy McKaySystem Add-ons

System add-ons are a new kind of add-on in Firefox; you might also know them as Go Faster add-ons.

These are interesting add-ons: they allow Firefox developers to ship code faster by writing it in an add-on, which can then be developed and shipped independently of the main Firefox code.

Mostly these are not using WebExtensions, and there are some questions about whether they should. I've been thinking about this one for a while and here are my thoughts at the moment - they aren't more than thoughts at this time.

System add-ons are really "internal" pieces of code that would otherwise be shipped in mozilla-central, blessed by the module owner and generally approved. They are maintained by someone who is actively involved in their code (usually but not always a Mozilla employee). They have gone through security and privacy reviews. They are tested against Firefox code in the test infrastructure on each release. They sometimes do things that no other add-on should be allowed to do.

This is all in contrast to third party add-ons that you'll find on AMO. When you look through all the reasoning behind WebExtensions, you'll find that a lot of the reasons involve things like "hard to maintain", "security problems" and so on. Please see my earlier posts for more on this. I would say that these reasons don't apply to system add-ons.

So do system add-ons need to be WebExtensions? Maybe they don't. In fact I think if we try and push them into being WebExtensions, we'll create a scenario where WebExtensions become the blocker.

System add-ons will want to do things that don't exist, so APIs will need to be added to WebExtensions. Some of the things that system add-ons will want to do are things that third party add-ons should not be allowed to do. Then we need to add in another permissions layer to say some add-on developers can use those APIs and others can't.

Already there is a distinction between what can and cannot land in Firefox and that's made by the module owners and people who work on Firefox.

If a system add-on does something unusual, do you end up in a scenario where you write a WebExtension API that only one add-on uses? The maintenance burden of creating an API for one part of Firefox that only one add-on can use doesn't seem worth it. It also makes WebExtensions the blocker and slows system add-on development down.

It feels to me like there isn't a compelling reason for system add-ons to be WebExtensions. Instead we should encourage them to be WebExtensions where it makes sense, and let them be regular bootstrapped add-ons otherwise.

George WrightAn Introduction to Shmem/IPC in Gecko

We use shared memory (shmem) pretty extensively in the graphics stack in Gecko. Unfortunately, there isn’t a huge amount of documentation regarding how the shmem mechanisms in Gecko work and how they are managed, which I will attempt to address in this post.

Firstly, it’s important to understand how IPC in Gecko works. Gecko uses a language called IPDL to define IPC protocols. This is effectively a description language which formally defines the format of the messages that are passed between IPC actors. The IPDL code is then compiled into C++ code by our IPDL compiler, and we can then use the generated classes in Gecko’s C++ code to do IPC related things. IPDL class names start with a P to indicate that they are IPDL protocol definitions.
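
For orientation, an IPDL protocol definition looks roughly like the following. This is a trimmed, illustrative sketch only (PFoo and the ShmemData message are invented, and real protocol files also carry namespaces, includes and management declarations):

// Illustrative IPDL sketch, not a real Gecko protocol.
protocol PFoo
{
parent:
  // The child fills a shared memory region and hands it to the parent.
  async ShmemData(Shmem data);
};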

IPDL has a built-in shmem type, simply called mozilla::ipc::Shmem. This holds a weak reference to a SharedMemory object, and code in Gecko operates on this. SharedMemory is the underlying platform-specific implementation of shared memory and facilitates the shmem subsystem by implementing the platform-specific API calls to allocate and deallocate shared memory regions, and obtain their handles for use in the different processes. Of particular interest is that on OS X we use the Mach virtual memory system, which uses a Mach port as the handle for the allocated memory regions.

mozilla::ipc::Shmem objects are fully managed by IPDL, and there are two different types: normal Shmem objects, and unsafe Shmem objects. Normal Shmem objects are mostly intended to be used by IPC actors to send large data chunks between themselves as this is more efficient than saturating the IPC channel. They have strict ownership policies which are enforced by IPDL; when the Shmem object is sent across IPC, the sender relinquishes ownership and IPDL restricts the sender’s access rights so that it can neither read nor write to the memory, whilst the receiver gains these rights. These Shmem objects are created/destroyed in C++ by calling PFoo::AllocShmem() and PFoo::DeallocShmem(), where PFoo is the Foo IPDL interface being used. One major caveat of these “safe” shmem regions is that they are not thread safe, so be careful when using them on multiple threads in the same process!
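
To give a rough feel for the flow described above, here is a minimal C++ sketch of the "safe" Shmem path. The actor (FooChild), the message SendShmemData(), and the exact argument lists are illustrative assumptions rather than real Gecko code; only the AllocShmem/get/DeallocShmem shape follows the description above, and the real generated signatures may differ slightly.

// Hypothetical child-side actor of a PFoo protocol; names are illustrative.
#include <cstring>
#include "mozilla/ipc/Shmem.h"

bool FooChild::ShipLargeBuffer(const uint8_t* aData, size_t aLen)
{
  mozilla::ipc::Shmem shmem;
  // Ask IPDL to allocate a shared memory region managed by this protocol.
  if (!AllocShmem(aLen, mozilla::ipc::SharedMemory::TYPE_BASIC, &shmem)) {
    return false;
  }
  // While this side owns the Shmem, it may read and write the mapped buffer.
  memcpy(shmem.get<uint8_t>(), aData, aLen);
  // Sending it over IPC transfers ownership (and access rights) to the other side.
  return SendShmemData(shmem);
  // Had we kept it instead, we would release it with DeallocShmem(shmem).
}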

Unsafe Shmem objects are basically a free-for-all in terms of access rights. Both sender and receiver can always read/write to the allocated memory and careful control must be taken to ensure that race conditions are avoided between the processes trying to access the shmem regions. In graphics, we use these unsafe shmem regions extensively, but use locking vigorously to ensure correct access patterns. Unsafe Shmem objects are created by calling PFoo::AllocUnsafeShmem(), but are still destroyed in the same manner as normal Shmem objects by simply calling PFoo::DeallocShmem().

With the work currently ongoing to move our compositor to a separate GPU process, there are some limitations with our current shmem situation. Notably, a SharedMemory object is effectively owned by an IPDL channel, and when the channel goes away, the SharedMemory object backing the Shmem object is deallocated. This poses a problem as we use shmem regions to back our textures, and when/if the GPU process dies, it’d be great to keep the existing textures and simply recreate the process and IPC channel, then continue on like normal. David Anderson is currently exploring a solution to this problem, which will likely be to hold a strong reference to the SharedMemory region in the Shmem object, thus ensuring that the SharedMemory object doesn’t get destroyed underneath us so long as we’re using it in Gecko.

Justin CrawfordDebugging WebExtension Popups

Note: In the time since I last posted here I have been doing a bit more hands-on web development [for example, on the View Source website]. Naturally this has led me to learn new things. I have learned a few things that may be new to others, too. I’ll drop those here when I run across them.


I have been looking for a practical way to learn about WebExtensions, the new browser add-on API in Firefox. This API is powerful for a couple reasons: It allows add-on developers to build add-ons that work across browsers, and it’s nicer to work with than the prior Firefox add-on API (for example, it watches code and reloads changes without restarting the browser).

So I found a WebExtensions add-on to hack on, which I’ll probably talk about in a later post. The add-on has a chrome component, which is to say it includes changes to the browser UI. Firefox browser chrome is just HTML/CSS/JavaScript, which is great. But it took me a little while to figure out how to debug it.

The tools for doing this are all fairly recent. The WebExtension documentation on MDN is fresh from the oven, and the capabilities shown below were missing just a few months ago.

Here’s how to get started debugging WebExtensions in the browser:

First, enable the Browser Toolbox. This is a special instance of Firefox developer tools that can inspect and debug the browser’s chrome. Cool, eh? Here’s how to make it even cooler:

  • Set up a custom Firefox profile with the Toolbox enabled, so you don’t have to enable it every time you fire up your development environment. Consider just using the DevPrefs add-on, which toggles a variety of preferences (including Toolbox) to optimize the browser for add-on development. (The two Toolbox-related preferences are shown in the snippet after this list.)
  • Once you have a profile with DevPrefs installed, you can launch it with your WebExtension like so: ./node_modules/.bin/web-ext run --source-dir=src --firefox-binary {path to firefox binary} --firefox-profile {name of custom profile} (See the WebExtensions command reference for more information.)
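
If you would rather flip the preferences by hand than install DevPrefs, these are (to the best of my knowledge) the two prefs that gate the Browser Toolbox; you can set them in about:config or drop them into a user.js file in your development profile:

// user.js in the development profile directory
user_pref("devtools.chrome.enabled", true);          // allow devtools to target browser chrome
user_pref("devtools.debugger.remote-enabled", true); // allow the remote connection the Toolbox relies on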

Next, with the instance of Firefox that appears when you run the above command, go to the Tools -> Web Developer -> Browser Toolbox menu. A window should appear that looks just like a standard Firefox developer tools window. But this window is imbued with the amazing ability to debug the browser itself. Try it: Use the inspector to look at the back button!

[Screenshot: the Browser Toolbox window]

In that window you’ll see a couple small icons near the top right. One looks like a waffle. This button makes the popup sticky — just like a good waffle. This is quite helpful, since otherwise the popup will disappear the minute you try to inspect, debug, or modify it using the Browser Toolbox.

[Screenshot: a popup pinned open (“sticky”) for inspection]

Next to the waffle is a button with a downward arrow on it. This button lets you select which content to debug — so, for example, you could select the HTML of your popup. When you have a sticky popup selected, you can inspect and hack on its HTML and CSS just like you would any other web content.

[Screenshot: the content-to-debug selector menu]

This information is now documented in great detail on MDN. Check it out!

The Mozilla BlogLatest Firefox Expands Multi-Process Support and Delivers New Features for Desktop and Android

With the change of the season, we’ve worked hard to release a new version of Firefox that delivers the best possible experience across desktop and Android.

Expanding Multiprocess Support

Last month, we began rolling out the most significant update in our history, adding multiprocess capabilities to Firefox on desktop, which means Firefox is more responsive and less likely to freeze. In fact, our initial tests show a 400% improvement in overall responsiveness.

Our first phase of the rollout included users without add-ons. In this release, we’re expanding support for a small initial set of compatible add-ons as we move toward a multiprocess experience for all Firefox users in 2017.

Desktop Improvement to Reader Mode

This update also brings two improvements to Reader Mode. This feature strips away clutter like buttons, ads and background images, and changes the page’s text size, contrast and layout for better readability. Now we’re adding the option for the text to be read aloud, which means Reader Mode will narrate your favorite articles, allowing you to listen and browse freely without any interruptions.

We also expanded the ability to customize in Reader Mode so you can adjust the text and fonts, as well as the voice. Additionally, if you’re a night owl like some of us, you can read in the dark by changing the theme from light to dark.

Offline Page Viewing on Android

On Android, we’re now making it possible to access some previously viewed pages when you’re offline or have an unstable connection. This means you can interact with much of your previously viewed content when you don’t have a connection. The feature works with many pages, though it is dependent on your specific device specs. Give it a try by opening Firefox while your phone is in airplane mode.

We’re continuing to work on updates and new features that make your Firefox experience even better. Download the latest Firefox for desktop and Android and let us know what you think.

Will Kahn-GreeneStandup v2: system test

What is Standup?

Standup is a system for capturing standup-style posts from individuals, making it easier to see what's going on for teams and projects. It has an associated IRC bot, standups, for posting messages from IRC.

Join us for a Standup v2 system test!

Paul and I did a ground-up rewrite of the Standup web-app to transition from Persona to GitHub auth, release us from the shackles of the old architecture and usher in a new era for Standup and its users.

We're done with the most minimal of minimal viable products. It's missing some features that the current Standup has, mostly around team management, but otherwise it's the same-ish down to the lavish shade of purple in the header that Rehan graced the site with so long ago.

If you're a Standup user, we need your help testing Standup v2 on the -stage environment before Thursday, September 22nd, 2016!

We've thrown together a GitHub issue to (ab)use as a forum for test results and working out what needs to get fixed before we push Standup v2 to production. It's got instructions that should cover everything you need to know.

Why you would want to help:

  1. You get to see Standup v2 before it rolls out and point out anything that's missing that affects you.

  2. You get a chance to discover parts of Standup you may not have known about previously.

  3. This is a chance for you to lend a hand on this community project that helps you which we're all working on in our free time.

  4. Once we get Standup v2 up, there are a bunch of things we can do with Standup that will make it more useful. Freddy is itching to fix IRC-related issues and wants https support [1]. I want to implement user API tokens, a CLI and search. Paul wants to have better weekly team reports and project pages.

    There are others listed in the issue tracker and some that we never wrote down.

    We need to get over the Standup v2 hurdle first.

Why you wouldn't want to help:

  1. You're on PTO.

    Stop reading--enjoy that PTO!

  2. It's the end of the quarter and you're swamped.

    Sounds like you're short on time. Spare a minute and do something in the "Short on time, but want to help anyhow?" section.

  3. You're looking to stop using Standup.

    I'd love to know what you're planning to switch to. If we can meet peoples' needs with some other service, that's more free time for me and Paul.

  4. Some fourth thing I lack the imagination to think of.

    If you have some other blocker to helping, toss me an email.

Hooray for the impending Standup v2!

[1] This is in progress--we're just waiting for a cert.

David LawrenceHappy BMO Push Day!

the following changes have been pushed to bugzilla.mozilla.org:

  • [1275568] bottom of page ‘duplicate’ button focuses top of page duplicate field
  • [1283930] Add Makefile.PL & local/lib/perl5 support to bmo/master
  • [1278398] Enable “Due Date” field for all websites, web services, infrastructure(webops, netops, etc), infosec bugs (all components)
  • [1213791] “suggested reviewers” menu overflows horizontally from visible area if reviewers have long name.
  • [1297522] changes to legal form
  • [1302835] Enable ‘Rank’ field for Tech Evangelism Product
  • [1267347] Editing the Dev-Events Form to be current
  • [1303659] Bug.comments (/rest/bug/<id>/comment) should return the count value in the results

discuss these changes on mozilla.tools.bmo.


Andy McKayAutonomous cars are not the answer

I'm frustrated by the suggestion that self-driving cars are the answer to congestion, gridlock and a bunch of other things.

There are a bunch of people you have to move from location X to Y, maybe going via location Z. Locations X, Y and Z will vary.

The assumption from the self-driving car lobby is that this will increase the effectiveness of the transportation system. First, some problems:

  • Use of a car transportation system is dependent upon a solid tax base to support a system which is inherently inefficient in so many ways. It is expensive to maintain in low-density environments. Conversely, it is almost impossible to maintain in high-density environments.

  • The car transportation system has no limit on capacity, so as the efficiency of the transportation system increases, so will demand and density.

Secondly, one assumption seems to be wrong:

  • That X and Y are usually some parts of the suburbs and that people are commuting from X to Y because they are trying to maintain a standard of living. As a result, everything else must change to support that. That seems to be terribly wrong.

And so, what troubles me is:

  • The location of X and Y. Can X (people's homes) be moved closer to Y (where people work)? Can Y be made redundant (e.g. tele-commuting)? Will anyone have a Y in the near future (job automation, robots, etc.)?

  • The mode of transportation. Does another option other than cars exist? How about bikes? Walking? Even public transportation? Why do we have to depend upon cars? Is this really the best we can do?

  • If a lot of people change to another transportation system, what's to stop a large number of people moving to cars and causing gridlock again? Isn't that what history has shown us will happen?

  • Why do we keep supporting the most inefficient and expensive form of transport ever?

As an engineer I am frustrated that the choices here seem to be: self-driving cars or not self-driving cars. The real question is, why do we need cars at all?

This Week In RustThis Week in Rust 148

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

RustConf Experiences

New Crates & Project Updates

Crate of the Week

This week's crate of the week is (in the best TWiR tradition, shamelessly self-promoted) mysql-proxy, a flexible, lightweight and scalable proxy for MySQL databases. Thanks to andygrove for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

98 pull requests were merged in the last two weeks.

New Contributors

  • Caleb Jones
  • dangcheng
  • Eugene Bulkin
  • knight42
  • Liigo

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

  • RFC 1696: mem::discriminant(). Add a function that extracts the discriminant from an enum variant as a comparable, hashable, printable, but (for now) opaque and unorderable type.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email the Rust Community Team for access.

fn work(on: RustProject) -> Money

No jobs listed for this week.

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

No quote was selected for QotW.

(Full disclosure: we removed QotW for this issue because selected QotW was deemed inappropriate and against the core values of Rust community. Here is the relevant discussion on reddit. If you are curious, you can find the quote in git history).

Submit your quotes for next week!

This Week in Rust is edited by: nasa42, llogiq, and brson.

Daniel GlazmanW3C

I have always said that standardization at the W3C is blood on the walls in a hushed atmosphere. I will not change one iota of that statement. But the W3C is also the story of an industry from its very first hours, and of honest friendships built in the explosion of a new era. Tonight, on the sidelines of the W3C Technical Plenary Meeting in Lisbon, I had an unforgettable dinner with my old pals Yves and Olivier, whom I have known and appreciated for, oh my, such a long time. A delicious, friendly and funny moment around a fabulous meal in a dream of a little Lisbon eatery. Bursts of laughter, confidences, a great evening; in short, a true moment of happiness. Thanks to them for this wonderful evening, and to Olivier for the address, outstanding in every way. I'll do it again whenever you want, guys, and it's an honor to have you as pals :-)

Chris CooperRelEng & RelOps Weekly highlights - September 19, 2016

Welcome back to our *cough*weekly*cough* updates!

Modernize infrastructure:

Amy and Alin decommissioned all but 20 of our OS X 10.6 test machines, and those last few will go away when we perform the next ESR release. The next ESR release corresponds to Firefox 52, and is scheduled for March next year.

Improve Release Pipeline:

Ben finally completed his work on Scheduled Changes in Balrog. With it, we can pre-schedule changes to Rules, which will help minimize the potential for human error when we ship, and make it unnecessary for RelEng to be around just to hit a button.

Lots of other good Balrog work has happened recently too, which is detailed in Ben’s blog post.

Improve CI Pipeline:

Windows TaskCluster builders were split into level-1 (try) and level-3 (m-i, m-c, etc) worker types with sccache buckets secured by level.

Windows 10 AMI generators were added to automation in preparation for Windows 10 testing on TaskCluster. We’ve been looking to switch from testing on Windows 8 to Windows 10, as Windows 8 usage continues to decline. The move to TaskCluster seems like a natural breakpoint to make that switch.

Dustin’s massive patch set to enable in-tree config of the various build kinds landed last week. This was no small feat. Kudos to him for the testing and review stamina it took to get that done. Those of us working to migrate nightly builds to TaskCluster are now updating – and simplifying – our task graphs to leverage his work.

Operational:

After work to fix some bugs and make it reliable, Mark re-enabled the cron job that generates our Windows 7 AWS AMIs each night.

Now that many of the Windows 7 tests are being run in AWS, Amy and Q reallocated 20 machines from Windows 7 testing to XP testing to help with the load. We are reallocating 111 additional machines from Windows 7 to XP and Windows 8 in the upcoming week.

Amy created a template for postmortems and created a folder where all of Platform Operations can consolidate their postmortem documents.

Jake and Kendall took swift action on the TCP challenge ACK side-channel vulnerability. This has been fixed with a sysctl workaround on all Linux hosts and instances.

Jake pushed a new version of the mig-agent client which was deployed across all Linux and OS X platforms.

Hal implemented the new GitHub feature to require two-factor authentication for several Mozilla organizations on GitHub.

Release:

Rail has automated re-generating our SHA-1 signed Windows installers for Firefox, which are served to users on old versions of Windows (XP, Vista). This means that users on those platforms will no longer need to update through a SHA-1 signed, watershed release (we were using Firefox 43 for this) before updating to the most recent version. This will save XP/Vista users some time and bandwidth by creating a one-step update process for them to get the latest Firefox.

See you next *cough*week*cough*!

David Rajchenbach TellerThis blog has moved

You can find my new blog on github. Still rough around the edges, but I’m planning to improve this as I go.


Firefox NightlyGetting Firefox Nightly to stick to Ubuntu’s Unity Dock

I installed Ubuntu 16.04.1 this week and decided to try out Unity, the default window manager. After I installed Nightly I assumed it would be simple to get the icon to stay in the dock, but Unity seemed confused about Nightly vs the built-in Firefox (I assume because the executables have the same name).

It took some doing to get Nightly to stick to the Dock with its own icon. I retraced my steps and wrote them down below.

My goal was to be able to run a couple versions of Firefox with several profiles. I thought the easiest way to accomplish that would be to add a new icon for each version+profile combination and a single left click on the icon would run the profile I want.

After some research, I think the Unity way is to have a single icon for each version of Firefox, and then add Actions to it so you can right click on the icon and launch a specific profile from there.

Installing Nightly

If you don’t have Nighly yet, download Nightly (these steps should work fine with Aurora or Beta also). Open a terminal:

$ mkdir /opt/firefox
$ tar -xvjf ~/Downloads/firefox-51.0a1.en-US.linux-x86_64.tar.bz2 -C /opt

You may need to chown some directories to get that into /opt, which is fine. At the end of the day, make sure your regular user can write to the directory or else you won't be able to install Nightly's updates.

Adding the icon to the dock

Then create a file in your home directory named nightly.desktop and paste this into it:

[Desktop Entry]
Version=1.0
Name=Nightly
Comment=Browse the World Wide Web
Icon=/opt/firefox/browser/icons/mozicon128.png
Exec=/opt/firefox/firefox %u
Terminal=false
Type=Application
Categories=Network;WebBrowser;
Actions=Default;Mozilla;ProfileManager;

[Desktop Action Default]
Name=Default Profile
Exec=/opt/firefox/firefox --no-remote -P minefield-default

[Desktop Action Mozilla]
Name=Mozilla Profile
Exec=/opt/firefox/firefox --no-remote -P minefield-mozilla

[Desktop Action ProfileManager]
Name=Profile Manager
Exec=/opt/firefox/firefox --no-remote --profile-manager

Adjust anything that looks like it should change, the main callout being that the Exec line should have the names of the profiles you want to use (in the above file mine are called minefield-default and minefield-mozilla). If you have more profiles, just make more copies of that section, name them appropriately, and list each one on the Actions= line (see the example below).
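
For example, to add a third (hypothetical) profile named minefield-testing, you would append Testing; to the Actions= line in the [Desktop Entry] section and add one more block like this:

[Desktop Action Testing]
Name=Testing Profile
Exec=/opt/firefox/firefox --no-remote -P minefield-testing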

If you think you’ve got it, run this command:

$ desktop-file-validate nightly.desktop

No output? Great — it passed the validator. Now install it:

$ desktop-file-install --dir=.local/share/applications nightly.desktop

Two notes on this command:

  • If you leave off --dir it will write to /usr/share/applications/ and affect all users of the computer. You'll probably need to sudo the command if you want that.
  • Something is weird with the parsing. Originally I passed in --dir=~/.local/… and it literally made a directory named ~ in my home directory, so if the menu isn't updating, double check the file is getting copied to the right spot.

Some people report having to run unity again to get the change to appear, but it showed up for me. Now left-clicking runs Nightly and right-clicking opens a menu asking me which profile I want to use.

Screenshot of menu

Right-click menu for Nightly in Unity’s Dock.

Modifying the Firefox Launcher

I also wanted to launch profiles off the regular Firefox icon in the same way.

The easiest way to do that is to copy the built-in one from /usr/share/applications/firefox.desktop and modify it to suit you. Conveniently, Unity will override a system-wide .desktop file if you have one with the same name in your local directory so installing it with the same commands as you did for Nightly will work fine.

Postscript

I should probably add a disclaimer that I’ve used Unity for all of two days and there may be a smoother way to do this. I saw a couple of 3rd-party programs that will generate .desktop files but I didn’t want to install more things I’d rarely use. Please leave a comment if I’m way off on these instructions! 🙂

This post originally appeared on Wil’s blog.

Henrik SkupinMoving home folder to another encrypted volume on OS X

Over the last weekend I reinstalled my older MacBook Pro (late 2011 model) after replacing its hard drive with a fresh and modern 512GB SSD from Crucial. That change was really necessary given that simple file operations took about a minute, even though every system tool claimed that the HDD was fine.

So after installing Mavericks I moved my home folder to another partition, to make a later reinstall of OS X easier. As it turned out, that is not so easy, especially given that OS X does not yet support mounting additional encrypted partitions (beside the system partition) during start-up. If you only have a single user, you will be locked out after moving the home directory and rebooting. That is what happened to me. The fix in such a situation is to put OS X back into its "post install" state and create a new administrator account via single-user mode. With that account you can at least sign in again, and after unlocking the other encrypted partition you will have access to your original account again.

Having to first log in via an account whose data is still hosted on the system partition is not a workable solution for me, so I kept looking for a way to unlock the second encrypted partition during startup. After some searching I finally found a tool that lets me do exactly that. It's called Unlock and can be found on GitHub. To make it work, it installs a LaunchDaemon that retrieves the encryption password from the System keychain and unlocks the partition during start-up. To be on the safe side I compiled the code myself with Xcode and installed it with some small modifications to the install script (which I definitely want to contribute back to the repository :).

In case you have similar needs, I hope this post helps you avoid the hassles I experienced.

Mozilla Privacy BlogImproving Government Disclosure of Security Vulnerabilities

Last week, we wrote about the shared responsibility of protecting Internet security. Today, we want to dive deeper into this issue and focus on one very important obligation governments have: proper disclosure of security vulnerabilities.

Software vulnerabilities are at the root of so much of today’s cyber insecurity. The revelations of recent attacks on the DNC, the state electoral systems, the iPhone, and more, have all stemmed from software vulnerabilities. Security vulnerabilities can be created inadvertently by the original developers, or they can be developed or discovered by third parties. Sometimes governments acquire, develop, or discover vulnerabilities and use them in hacking operations (“lawful hacking”). Either way, once governments become aware of a security vulnerability, they have a responsibility to consider how and when (not whether) to disclose the vulnerability to the affected company so that developer can fix the problem and protect their users. We need to work with governments on how they handle vulnerabilities to ensure they are responsible partners in making this a reality today.

In the U.S., the government’s process for reviewing and coordinating the disclosure of vulnerabilities that it learns about or creates is called the Vulnerabilities Equities Process (VEP). The VEP was established in 2010, but not operationalized until the Heartbleed vulnerability in 2014 that reportedly affected two thirds of the Internet. At that time, White House Cybersecurity Coordinator Michael Daniel wrote in a blog post that the Obama Administration has a presumption in favor of disclosing vulnerabilities. But, policy by blog post is not particularly binding on the government, and as Daniel even admits, “there are no hard and fast rules” to govern the VEP.

It has now been two years since Heartbleed and the U.S. government’s blog post, but we haven’t seen improvement in the way that vulnerabilities disclosure is being handled. Just one example is the alleged hack of the NSA by the Shadow Brokers, which resulted in the public release of NSA “cyberweapons”, including “zero day” vulnerabilities that the government knew about and apparently had been exploiting for years. Companies like Cisco and Fortinet whose products were affected by these zero day vulnerabilities had just that, zero days to develop fixes to protect users before the vulnerabilities were possibly exploited by hackers.

The government may have legitimate intelligence or law enforcement reasons for delaying disclosure of vulnerabilities (for example, to enable lawful hacking), but these same vulnerabilities can endanger the security of billions of people. These two interests must be balanced, and recent incidents demonstrate just how easily stockpiling vulnerabilities can go awry without proper policies and procedures in place.

Cybersecurity is a shared responsibility, and that means we all must do our part – technology companies, users, and governments. The U.S. government could go a long way in doing its part by putting transparent and accountable policies in place to ensure it is handling vulnerabilities appropriately and disclosing them to affected companies. We aren’t seeing this happen today. Still, with some reforms, the VEP can be a strong mechanism for ensuring the government is striking the right balance.

More specifically, we recommend five important reforms to the VEP:

  • All security vulnerabilities should go through the VEP and there should be public timelines for reviewing decisions to delay disclosure.
  • All relevant federal agencies involved in the VEP must work together to evaluate a standard set of criteria to ensure all relevant risks and interests are considered.
  • Independent oversight and transparency into the processes and procedures of the VEP must be created.
  • The VEP Executive Secretariat should live within the Department of Homeland Security because they have built up significant expertise, infrastructure, and trust through existing coordinated vulnerability disclosure programs (for example, US CERT).
  • The VEP should be codified in law to ensure compliance and permanence.

These changes would improve the state of cybersecurity today.

We’ll dig into the details of each of these recommendations in a blog post series from the Mozilla Policy team over the coming weeks – stay tuned for that.

Today, you can watch Heather West, Mozilla Senior Policy Manager, discuss this issue at the New America Open Technology Institute event on the topic of “How Should We Govern Government Hacking?” The event can be viewed here.

Chris McDonaldi-can-management Weekly Update 1

A couple of weeks ago I started writing a game, and i-can-management is the directory I made for the project, so that'll be the codename for now. I'm going to write these updates to journal the process of making this game. As I'm going through this process alone, you'll see all aspects of game development as I go through them. That means some weeks may be art heavy, while others may be about game rules or engine refactoring. I also want to give a glimpse of how I'm feeling about the project and the rules I make for myself.

Speaking of rules, those are going to be a central theme on how I actually keep this project moving forward.

  • Optimize only when necessary. This seems obvious, but folks define necessary differently. 60 frames per second with 750×750 tiles on the screen is my current benchmark for whether I need to optimize. I’ll be adding numbers for load times and other aspects once they grow beyond a size that feels comfortable.
  • Abstractions are expensive, use them sparingly. This is something I learned from a Jonathan Blow talk I mention in my previous post. Abstractions can increase or remove flexibility. On one hand, reusing components may allow more rapid iteration. On the other hand, it may take considerable effort to make systems communicate that weren't designed to pass messages. I'm making it clear in each effort whether I'm in exploration mode, where I work mostly with just one function, or in architect mode, where I'm trying to make the next feature a little easier to implement. This may mean 1000-line functions and lots of global-like state for a while until I understand how the data will be used. Or it may mean abstracting a concept like the camera into a struct because the data is always used together.
  • Try the easier-to-implement answer before trying the better answer. I have two goals with this. First, it means I get to start trying stuff faster, so I know whether I want to pursue it or whether I'm kind of off on the idea. Maybe the first implementation will show that some other subsystem needs features first, so I decide to delay the more correct answer. In short: quicker to test, and it exposes unexpected requirements. The other goal is to explore building games in a more holistic way. Knowing a quick and dirty way to implement something may help when trying to throw an idea together really quickly. Then knowing how to evolve that code into a better long-term solution means the next games, or ideas that cross-pollinate, are faster to compose because the underlying concepts are better known.

The last couple of weeks have been an exploration of OpenGL via glium, the library I'm using to access OpenGL from Rust as well as to abstract away the window creation. I'd only ever run the examples before this dive into building a game. From what I remember of doing this in C++, the abstraction it provides for window creation and interaction (using the glutin library) is pretty great. I was able to create a window of whatever size, hook up keyboard and mouse events, and render to the screen pretty fast after going through the tutorial in the glium book.

This brings me to one of the first frustrating points in this project. So many things are focused on 3d these days that finding resources for 2d rendering is harder. When you do find them, they are for old versions of OpenGL or use libraries to handle much of the tile rendering. I was hoping to find an article like "I built a 2d tile engine that is pretty fast and these are the techniques I used!" but no such luck. OpenGL guides go immediately into 3d space after getting past basic polygons. But it just means I get to explore more, which is probably a good thing.

I already had a deterministic map generator built to use as the source of the tiles on the screen. So I copied and pasted some of the matrices from the glium book and then tweaked the numbers I was using for my tiles until they showed up on the screen and looked OK. From there I was pretty stoked. I mean, if I have 25×40 tiles on the screen, what more could someone ask for? I didn't know how to make triangle strips work well for drawing all the tiles at once, so I drew each tile to the screen separately, calculating everything on every frame.
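
To make that concrete, here is a minimal sketch of the kind of per-tile geometry involved, in plain Rust with no glium types; the Vertex struct, tile size, and helper are placeholders for illustration, not the actual project code:

// A bare-bones vertex: a position in world units plus an RGB color.
#[derive(Copy, Clone)]
struct Vertex {
    position: [f32; 2],
    color: [f32; 3],
}

const TILE_SIZE: f32 = 1.0; // one tile per world unit (arbitrary choice)

/// Build the six vertices (two triangles) for the tile at column `x`, row `y`.
fn tile_quad(x: u32, y: u32, color: [f32; 3]) -> [Vertex; 6] {
    let (x0, y0) = (x as f32 * TILE_SIZE, y as f32 * TILE_SIZE);
    let (x1, y1) = (x0 + TILE_SIZE, y0 + TILE_SIZE);
    let v = |position| Vertex { position, color };
    [
        v([x0, y0]), v([x1, y0]), v([x0, y1]), // lower-left triangle
        v([x1, y0]), v([x1, y1]), v([x0, y1]), // upper-right triangle
    ]
}

Rebuilding and submitting a quad like this for every tile on every frame is obviously wasteful, which is exactly what the later switch to a single triangle strip is meant to address.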

I started to add numbers here and there to see how to adjust the camera in different directions. I didn't understand the math I was working with yet, so I was mostly treating it like a black box: I would add or multiply numbers and recompile to see if it did anything. I quickly realized I needed it to be more dynamic, so I added detection for mouse scrolling. Since I'm on my MacBook most of the time I'm doing development, I can scroll vertically as well as horizontally, which makes for a natural panning feel.
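
The scroll handling boils down to something like the following sketch (plain Rust; the struct, field names, and scroll-speed constant are made up for illustration):

/// A simple 2d camera: an offset into the world, measured in tile units.
struct Camera {
    x: f32,
    y: f32,
}

impl Camera {
    /// Apply a scroll event; `dx` and `dy` are the horizontal and vertical
    /// scroll deltas reported by the windowing library.
    fn pan(&mut self, dx: f32, dy: f32) {
        const SCROLL_SPEED: f32 = 0.1; // tiles per scroll unit, tuned by eye
        self.x += dx * SCROLL_SPEED;
        self.y += dy * SCROLL_SPEED;
    }
}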

I noticed that my rendering had a few quirks, and I didn't understand any of the math that was being used, so I went looking for more sources of information on how these transforms work. At first I was directed to the OpenGL transformations page, which set me on the right path, including a primer on the linear algebra I needed. Unfortunately, it quickly turned toward 3d graphics and I didn't quite understand how to apply it to my use case. In looking for more resources I found Solarium Programmers' OpenGL 101 page, which spends more time on orthographic projections, which is what I wanted for my 2d game.

Over a few sessions I rewrote all the math to use a coordinate system I understood. This was greatly satisfying, but if I hadn't started by ignoring the math, I wouldn't have had a testbed to check whether I actually understood it. A good lesson to remember: if you can ignore a detail for a bit and keep going, prioritize getting something working, then transform it into something you understand more thoroughly.
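
For reference, an orthographic projection for a 2d view boils down to a scale plus a translation. Here is a minimal sketch in plain Rust, building a column-major 4×4 matrix of the [[f32; 4]; 4] shape you can hand to a shader as a uniform (the function and parameter names are mine, not the project's):

/// Map the box [l, r] x [b, t] x [n, f] onto OpenGL's [-1, 1] clip cube.
/// Column-major layout, as GLSL expects.
fn ortho(l: f32, r: f32, b: f32, t: f32, n: f32, f: f32) -> [[f32; 4]; 4] {
    [
        [2.0 / (r - l), 0.0, 0.0, 0.0],
        [0.0, 2.0 / (t - b), 0.0, 0.0],
        [0.0, 0.0, -2.0 / (f - n), 0.0],
        [
            -(r + l) / (r - l),
            -(t + b) / (t - b),
            -(f + n) / (f - n),
            1.0,
        ],
    ]
}

For a tile map, the left/right/bottom/top planes can simply be the camera offset and the camera offset plus the number of visible tiles, so that world coordinates are tile coordinates.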

There's more I learned in the last week, but this post is getting quite long. I hope to write a post this week about changing from drawing individual tiles to using a single triangle strip for the whole map.

In the coming week my goal is to get mouse clicks interacting with the map. This involves figuring out which tile the mouse has clicked, which I've learned isn't trivial. In parallel I'll be developing the first set of tiles using Pyxel Edit and hopefully integrating them into the game. Then my map will become richer than just some flat-colored tiles.
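
The rough idea I expect the click handling to follow, sketched in plain Rust (all names are placeholders, and it assumes an axis-aligned view with no rotation or zoom):

/// Convert a mouse position in window pixels into a tile coordinate,
/// assuming the window shows `tiles.0` x `tiles.1` tiles starting at the
/// camera offset `cam`, with window y growing downward and world y upward.
fn tile_under_cursor(
    mouse: (f32, f32),
    window: (f32, f32),
    cam: (f32, f32),
    tiles: (f32, f32),
) -> (i32, i32) {
    // Window pixels -> world (tile) coordinates.
    let world_x = cam.0 + mouse.0 / window.0 * tiles.0;
    let world_y = cam.1 + (window.1 - mouse.1) / window.1 * tiles.1;

    // The tile index is just the integer part of the world coordinate.
    (world_x.floor() as i32, world_y.floor() as i32)
}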

Here is a screenshot of the game so far, for posterity's sake. It is showing 750×750 tiles with a deterministic weighted distribution between grass, water, and dirt.


The Servo BlogThis Week In Servo 78

In the last week, we landed 68 PRs in the Servo organization’s repositories.

Planning and Status

Our overall roadmap is available online and now includes the initial Q3 plans. From now on, we plan to include the quarterly plan with a high-level breakdown in the roadmap page.

This week’s status updates are here.

Special thanks to canaltinova for their work on implementing the matrix interpolation algorithms for CSS3 transform animations. This allows rotate(), perspective() and matrix() functions (both 2D and 3D) to be interpolated, as well as interpolations between arbitrary transformations, though that last bit is yet to be implemented. In the process of implementation, we had to deal with many spec bugs, as well as implementation bugs in other browsers, which complicated things immensely; in complicated algorithms like these it's very hard to tell whether your code has a mistake or the spec itself is wrong. Great work, canaltinova!

Notable Additions

  • glennw added support for scrollbars
  • canaltinova implemented the matrix decomposition/interpolation algorithm
  • nox landed a rustup to the 9/14 rustc nightly
  • ejpbruel added a websocket server for use in the remote debugging protocol
  • creativcoder implemented the postMessage() API for ServiceWorkers
  • ConnorGBrewster made Servo recycle session entries when reloading
  • mrobinson added support for transforming rounded rectangles
  • glennw improved webrender startup times by making shaders compile lazily
  • canaltinova fixed a bug where we don’t normalize the axis of rotate() CSS transforms
  • peterjoel added the DOMMatrix and DOMMatrixReadOnly interfaces
  • Ms2ger corrected an unsound optimization in event dispatch
  • tizianasellitto made DOMTokenList iterable
  • aneeshusa excised SubpageId from the codebase, using PipelineId instead
  • gilbertw1 made the HTTP authentication cache use origins instead of full URLs
  • jmr0 fixed the event suppression logic for pages that have navigated
  • zakorgy updated some WebBluetooth APIs to match new specification changes

New Contributors

Interested in helping build a web browser? Take a look at our curated list of issues that are good for new contributors!

Screenshot

Some screencasts of matrix interpolation at work:

This one shows all the basic transformations together (running a tweaked version of this page). The 3d rotate, perspective, and matrix transformations were enabled by the recent change.

Karl Dubost[worklog] Edition 036 - Administrative week

Busy week without much getting done on bugs. The W3C is heading to Lisbon for TPAC, so the tune of the week is Amalia Rodrigues. I'll be there in spirit.

Webcompat Life

Progress this week:

326 open issues
----------------------
needsinfo       12
needsdiagnosis  106
needscontact    8
contactready    28
sitewait        158
----------------------

You are welcome to participate

Webcompat issues

(a selection of some of the bugs worked on this week).

  • yet another appearance: none implemented in Blink. This time for meter.

Webcompat.com development

Reading List

Follow Your Nose

TODO

  • Document how to write tests on webcompat.com using test fixtures.
  • ToWrite: Amazon prefetching resources with <object> for Firefox only.

Otsukare!

Andy McKayTFSA Check

The TFSA is a savings account for Canadians that was introduced in 2009.

As a quick check I wanted to see how much or how little my TFSA had changed against what it should be. That meant double-checking how much room I had in the TFSA each year. So this is a quick calculation of the theoretical case: that you are able to invest the maximum amount each year, at the beginning of the year, and get a 5% return (after fees) on it.

Year   Maximum      Total invested   Compounded
2009   $5,000.00    $5,000.00        $5,250.00
2010   $5,000.00    $10,000.00       $10,762.50
2011   $5,000.00    $15,000.00       $16,550.63
2012   $5,000.00    $20,000.00       $22,628.16
2013   $5,500.00    $25,500.00       $29,534.56
2014   $5,500.00    $31,000.00       $36,786.29
2015   $10,000.00   $41,000.00       $49,125.61
2016   $5,500.00    $46,500.00       $57,356.89

Which always raises the question for me of what is a reasonable rate to calculate with these days. It always used to be 10%, but that's very hard to get now. Since 2006 the annualized return on the S&P 500 is 5.158%, for example. Perhaps 5% represents too conservative a number.
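
For what it's worth, here is a minimal sketch (in Rust) of the calculation behind the table, assuming the maximum contribution is made at the start of each year and the whole balance then grows by the assumed rate:

// Reproduce the table above: contribute the year's maximum at the start of
// the year, then apply the assumed annual return to the entire balance.
fn main() {
    let limits = [
        (2009, 5_000.0),
        (2010, 5_000.0),
        (2011, 5_000.0),
        (2012, 5_000.0),
        (2013, 5_500.0),
        (2014, 5_500.0),
        (2015, 10_000.0),
        (2016, 5_500.0),
    ];
    let rate = 1.05; // 5% after fees; change this to compare scenarios

    let mut invested = 0.0;
    let mut balance = 0.0;
    for &(year, max) in limits.iter() {
        invested += max;
        balance = (balance + max) * rate;
        println!("{}  ${:>10.2}  ${:>10.2}", year, invested, balance);
    }
}

Changing rate (say to 1.03 or 1.07) makes it easy to see how sensitive the end result is to that assumption.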

Matěj CeplOpenWeatherMapProvider for CyanogenMod 13

I don't understand. CyanogenMod 13 introduced a new Weather widget and lock screen support. Great! Unfortunately, the widget requires specific providers for weather services, and CM does not ship any in the default installation. There is a Weather Underground provider, which works, but the only other provider I found (the Yahoo! Weather provider) does not work on my CM installation without Google Play.

I would much prefer an OpenWeatherMap provider, and although CyanogenMod has a GitHub repository for one, there is no APK anywhere (and certainly not one for F-Droid). Fortunately, I found a blog post which describes how simple it is to build the APK from the given code. Unfortunately, the author did not provide an APK on his site. I am not sure whether there is some catch, but here is mine.