Mozilla Science Lab: Teaming Up: How to Build Your Open Science Collaboration

With the Mozilla Science Global Sprint coming up soon (submissions for project ideas are open until May 11), the community has been coming up with tons of awesome suggestions for projects to work on; meanwhile, new projects on everything from collaborative LaTeX to sleuthing out what references are open access on BMC to a curated bioinformatics reaction pathway database are cropping up on Collaborate, our project to highlight interesting open source and open science projects you can jump into, and learn by doing. But all of these projects have to answer the same question: how do you present an open source project in a way that appeals to people, gets them interested, and attracts collaborators who will stick around? Here are a few tips on what we’ve learned during the first six months of Collaborate:

On Collaborate

Collaborate compiles and curates open source, open science projects, in order to make those projects discoverable and connect them with potential contributors. But once someone finds their way to your project, it’s up to what you present there to convince them to jump in. A few tips for a strong listing:

  • Seek out users. Prolific and long-term volunteer contributors to open-source projects are often those who see themselves as the future users of that project. When describing your project, make sure to illustrate who you think the users will be, how you think they will use the project, and what benefit they’re going to get out of it. Your goal should be for a visitor to read your project summary and say, ‘I would totally use that – let’s make it happen!’
  • Offer some examples. Link to examples of the project in action that help newcomers understand what the project is and get excited about being involved. If your project is such that it can be demoed on a website, all the better – if it’s not a web-oriented project, consider producing a blog post with figures or examples of your work thus far.
  • Describe a roadmap. The ‘Main Deliverables’ section of a Collaborate listing is there to let you describe the sort of high-level things collaborators will get to work on in your project. Don’t worry about nitty-gritty bugs or feature descriptions – that’s what your issue tracker is for. Rather, give a bird’s eye view of the direction you’re hoping to take the work in, and give people an idea of how they could fit into that.

On GitHub

Alright! People are excited about your listing on Collaborate, so they’ve joined your project and forked your repo. How can you turn that initial excitement into the kind of engagement that makes people stick around and really dig in? A few more tips:

  • Talk to your collaborators. Collaborate will open an issue introducing new collaborators as they express interest in the site; try and respond to this issue within 24 hours. Making newcomers feel welcome is crucial at this stage; let them know you’re interested in working with them, and help them figure out where they can start contributing based on their skills and interests. Same goes for new issues opened – fast engagement signals a collaboration that cares about participation.
  • Have a good Getting Started guide. Confusion on what to do first can kill enthusiasm to participate in a project very quickly. Make sure the README.md of your project contains complete, current and correct instructions on how to set the project up, including any dependencies it may involve.
  • Use that Issue Tracker. The issue tracker is where you can start getting down to the details of what tasks need attention; use liberally, to get the plan out of your head and into a place where people can jump in. Also keep in mind, different issues are suitable for different collaborators; people new to the programming language or topic are best served by detailed, small issues that lay out an exact problem and perhaps a proposed solution, while experienced contributors will be more interested in big, open-ended problems that give them a chance to use their skills and imaginations. A good collaborative project should try to have a mix of both.

All these things point in the same direction – people participate in open projects when they see the value in them, identify personally with them, have a clear path to getting involved, and feel welcomed by the existing community. As always, we’ll help you polish your project listing and repo as it goes up on Collaborate and as we get ready for the Global Sprint. There is a huge and growing list of exciting projects to get involved with; we hope you’ll join us on June 4-5 for the Sprint, and anytime on Collaborate.

Mozilla Science Lab: Mozilla Science Lab Week in Review, April 20-26

The Week in Review is our weekly roundup of what’s new in open science from the past week. If you have news or announcements you’d like passed on to the community, be sure to share on Twitter with @mozillascience and @billdoesphysics, or join our mailing list and get in touch there.

Blogs & Papers

Government & Policy

  • The European Commission released detailed findings & policy recommendations from its survey late last year on the perceptions, expectations and challenges presented by open science, as understood by the scientific community there.
  • The US National Oceanic and Atmospheric Administration has announced a series of industry partnerships intended to make their environmental data widely available.
  • The NSF-funded Workshop on Supporting Scientific Discovery through Norms and Practices for Software and Data Citation and Attribution released its summary report, including recommendations and action plans surrounding the establishment of clear infrastructure and standards for citing both code and data as part of the scholarly record.

Projects & Resources

Robert Kaiser: "Nothing to Hide"?

I've been bothered for quite a while with people telling me they "have nothing to hide anyhow" when the topic of Internet privacy comes up.

I guess that mostly comes from the impression that the whole story is about our government watching (over) us, and that the worst thing that can happen is incrimination. While that might threaten some people, most people do nothing that is really interesting enough for a government to go into attack mode over (or so they believe, and very firmly so). And I even agree that most governments (including the US and EU countries) do actively seek out what they call "terrorist activities" (even though they often stretch that term in crazy ways) and/or child abuse and similar topics that the vast majority of citizens agree are a bad thing and want no part of - and the vast majority of politicians and government workers believe they act in the best interest of their citizens when "obviously fighting that" via their different programs of privacy-undermining surveillance. That said, most people seem to be OK with their government collecting data about them as long as it's not used to incriminate them (and once that happens, it's too late to protest the practice anyhow).

A lot has been said about that since the "Snowden leaks", but I think the more obvious short-term and direct threat is corporate surveillance, which has been swept under the rug in most discussions recently - to the joy of Facebook, Google and other major players in that area. I have also seen that when I depict some obvious scenarios resulting from it, people start to think about it much more readily and realize the effect on their daily lives (even if those are minor issues compared to a government starting a manhunt against you over terror allegations or similar).

So what I have started asking is:
  • Are you OK with banks determining your credit conditions based on all your comments on Facebook and your Google searches? ("Your friends say you owe them money, and that you live beyond your means, this is gonna be difficult...")
  • Are you OK with insurers changing your rates based on all that data? ("Oh, so you 'like' all those videos about dangerous sports and that deafening music, and you have some quite aggressive or even violent friends - so you see why we need to go a bit higher there, right?")
  • Are you OK with prices for flights or products in online stores (Amazon etc.) being different depending on what other things you have done on the web? ("So, you already planned that vacation at that location, good, so we can charge you a higher fare as you can't back out now anyhow.")
  • And, of course, envision ads in public or half-public locations being customized for whoever is in the area. ("You recently searched for engagement rings, so we'll show ads for them wherever you go." or "Hey, this is the third time today we sat down and a screen nearby shows Viagra ads." or "My dear daughter, why do we see ads for diapers everywhere we go?")
There are probably more examples; those are just the ones that have come to my mind so far. Even if those are smaller things, people can relate to them because they affect their own lives, rather than scenarios that feel very theoretical to them.

And, of course, they are true to a degree even now. Banks are already buying data from Facebook, probably including "private" messages, to determine credit scores; insurers base rates on anything they can find out about you; flight prices, as well as prices for some Amazon and other web shop products, vary based on what you searched for before - and ads both on your screen and even in postal mail get tailored to a profile built on all kinds of your online behavior. My questions above just take all of those another step forward - but a pretty realistic one in my opinion.

I hope thinking about questions like these makes people realize that they might actually want to evade some of this, and that in the end they do have something to hide.

And then, of course, I hope they realize that a non-profit like Mozilla, which doesn't seek to maximize money, can believably be on their side and help them regain some privacy where they - now - want it.

Vaibhav Agrawal: A-Team Contributions in 2015

It has been a while since I last blogged, and I had something interesting to share, so I finally managed to overcome my laziness. In this post, I would like to talk about some of the projects that I have been involved with in 2015 in the A-Team at Mozilla. Before I talk about the projects, I would like to give a shout-out to @jmaher, @armenzg, @kmoir, @dminor, @ahal, @gbrown and @wlach, who were always there to answer my questions, help me find a way forward when I was stuck, and review my patches. I have worked on some exciting problems and made some great progress on these projects in the past 4 months. They are:

1) SETA – Search for Extraneous Test Automation
We run hundreds of builds and thousands of test jobs each day on our integration branches, that is, mozilla-inbound and fx-team. And as more and more platforms are added every month, the load on test machines is ever increasing. But are so many test jobs required for each push? We run the test jobs to catch failures, but the majority of the time the jobs pass, and the jobs that do catch failures often catch duplicates of each other. SETA tries to tackle this problem by being smart about how machine cycles are used. In SETA, we find the minimum set of jobs needed to find all the failures that have occurred in the last six months on the integration machines. From this data, we predict that these jobs are more likely to catch failures than the others, and therefore the other test jobs are set to run less frequently. It is true that we will be wrong a certain number of times, and when we miss a failure the sheriffs will need to backfill some jobs to find the root cause. But most of the time, it will work. Joel has written an excellent blog post giving examples and statistics for the work done in this project. This project has been deployed to Mozilla Releng production systems, and we have reduced the number of jobs from roughly 350-400 jobs/push to 150-190 jobs/push per day on desktop (linux, osx, win) platforms, a 50% reduction during high-load weekdays. To put this into perspective, in the past week we have seen the lowest jobs per push since January 1st on both mozilla-inbound and fx-team. I see this as a huge win, as it drastically reduces the load on our machines as well as the time the sheriffs need to spend starring intermittents, increasing productivity for all. And this data is for desktop platforms only; Android and other platforms are yet to come, after which we should see further gains.
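
To make the core idea concrete, here is a minimal, hypothetical sketch of the kind of greedy selection SETA performs. This is not the actual SETA code; the job names, failure IDs and function are made up for illustration.

# Hypothetical sketch of SETA-style job selection (greedy set cover).
# Input: for each job, the set of historical failures it would have caught.
failures_caught = {
    "linux64-mochitest-1": {"bug-1", "bug-3"},
    "win32-reftest-2":     {"bug-2"},
    "osx-mochitest-1":     {"bug-1"},
    "win32-xpcshell":      {"bug-3", "bug-4"},
}

def pick_high_value_jobs(failures_caught):
    """Greedily choose jobs until every historical failure is covered."""
    uncovered = set().union(*failures_caught.values())
    chosen = []
    while uncovered:
        # Pick the job that covers the most still-uncovered failures.
        best = max(failures_caught, key=lambda job: len(failures_caught[job] & uncovered))
        if not failures_caught[best] & uncovered:
            break  # the remaining jobs add no new coverage
        chosen.append(best)
        uncovered -= failures_caught[best]
    return chosen

# Jobs in the returned list keep running on every push;
# everything else can be scheduled less frequently.
print(pick_high_value_jobs(failures_caught))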

[Graph: jobs/push per week since January for mozilla-inbound. Lowest on April 20th – April 26th]

Project Repo: https://github.com/dminor/ouija
Project Stats: http://alertmanager.allizom.org/dailyjobs.html
Project Information: http://alertmanager.allizom.org/seta.html

2) Mozilla CI Tools
MozCI (Mozilla CI Tools) is a Python library which allows one to trigger builds and test jobs on treeherder.mozilla.org via the command line. It has various use cases, like triggering jobs to bisect intermittent failures and backfilling missing jobs; personally, I use it to bisect performance regressions. The tool is also used by sheriffs and is aimed at increasing developer productivity.
Project Repo: https://github.com/armenzg/mozilla_ci_tools
Project Documentation: https://mozilla-ci-tools.readthedocs.org/en/latest/index.html

3) Firefox Performance Sheriffing
In late January 2015, I took up the role of a performance sheriff. In this role, I look at the performance data produced by the test jobs, find regressions and their root causes, and get bugs on file to track the issues and bring them to the attention of patch authors.
Sheriff documentation: https://wiki.mozilla.org/Buildbot/Talos/Sheriffing

I have also contributed patches to some other projects, like removing android*.json from mochitests (bug 1083347), the A-Team bootcamp and Mozregression. If you are looking to contribute to open source projects, I think this is a great time to start contributing to the Automation and Tools team at Mozilla and make a big impact. For me, it has been one of the most productive quarters, and I plan to keep contributing. As some of you may know, this summer I will be joining Mozilla in San Francisco as the A-Team intern, with @chmanchester as my mentor, and I am looking forward to doing more exciting work there!


Jeff Walden: Offensive speech

Yesterday I discussed First Amendment forum doctrine as it related to Texas’s specialty plate program. Today I discuss the subject where everyone learns to stop worrying and love censorship: “offensive” speech.

[Image: Trollface]
If you reach the end of this post without having been offended at least once, I apologize for the error. (But I am serious about all the arguments made here, trolling for offense notwithstanding.)

“Offensive” speech

Texas denied Texas SCV‘s design because “the design might be offensive to any member of the public” (emphases added). A mere possibility of offense, to a single person, allows Texas (if it wants, as a matter of even-handed objectivity and fairness) to tell the plate requester to pound sand.

What does the First Amendment say about “offensive” speech? Offensive speech particularly is strongly protected. The First Amendment lets you victim-blame soldiers at their own funerals. It lets you burn the American flag, notwithstanding that this “seriously offended” several witnesses. It lets you satirically attribute false quotations to someone implying that his first sexual experience occurred in an outhouse with his own mother…after kicking out the goat. It protects the right to wear an undoubtedly-offensive jacket bearing the words “Fuck the Draft”. It protects a student newspaper cover depicting police officers raping the Statue of Liberty. It lets you falsely claim to have received the highest and most respected of military awards, in some sense diminishing that award’s value. The list goes on and on.

[Image: an American flag held, burning, touching the ground. Freedom of speech (picture apparently by noplur, CC-BY-SA-2.0)]

The First Amendment was designed to protect offensive and unpopular speech, full stop. “If there is a bedrock principle underlying the First Amendment, it is that the government may not prohibit the expression of an idea simply because society finds the idea itself offensive or disagreeable.” “The fact that protected speech may be offensive to some does not justify its suppression.” As the Court stated regarding a more restrictive forum than the public sphere where license plates reside, “The mere dissemination of ideas — no matter how offensive to good taste — may not be shut off in the name alone of ‘conventions of decency.'”

In short, Texas is horribly wrong to think it has authority to censor offensive specialty plate designs based on “levels of offensiveness” (which, we must remember, was Texas’s post hoc rationale, not anything in the law).

Could the Confederate flag be “fighting words”?

Is some offensive speech nonetheless unprotected? Yes, but it must be fighting words that “by their very utterance, inflict injury or tend to incite an immediate breach of the peace.” No one here argues Texas law is really trying to criminalize fighting words, because doing so would run into (at least) two major problems.

First, Texas concedes that Texas SCV could display a bumper sticker with identical design on vehicles. But it’s irrational to think moving a hypothetically-inciting message a few inches would change that aspect of its nature. If a plate design met that standard, then so would a bumper sticker, and both could be forbidden.

Second, Texas’s “might be offensive to any member of the public” rule goes far beyond fighting words. “Choose Life” or “Respect Choice” specialty plates (or even “All Lives Matter”, which is clearly offensive and you should check your privilege, you racist member of the patriarchy) would offend some people. But it’s inconceivable (not meant that way) these could ever be considered fighting words, rather than protected political statements.

If Texas were to argue its “offensiveness” standard addressed fighting words (and it doesn’t), the statute would be substantially overbroad in criminalizing too much speech and accordingly vulnerable to challenge. Even Texas isn’t stupid enough to try to argue this.

What about “hate speech”?

The First Amendment protects hate speech. You can burn a cross on your own property without intent to intimidate, and it’s protected speech. You can hold a Nazi parade on public streets, even if Holocaust survivors say that “seeing the swastika was like being physically attacked.” You can make various forms of pornography that “subordinat[e] women” in a variety of reprehensible ways, and laws against such will be struck down.

[Image: a neo-Nazi rally on the US Capitol grounds. I hate Illinois Nazis. But this is, rightly, protected speech. (original by David, CC-BY-2.0)]

And the recent American kerfuffles over various fraternities’ obnoxious racist speech pretty much all constitute protected speech, for which the perpetrators have been widely judged protected from punishment by public universities. (The universities can get away with whatever punishment they want if the perpetrators won’t stand up for themselves in court. [An action which doesn’t prevent a sincere apology and work toward rehabilitation.] Fortunately, not all universities [#notalluniversities] [#butmost] have forgotten their duties under the First Amendment and their obligations to protect academic freedom even when it hurts.)

The imagined “hate speech” exception to the First Amendment, and to freedom of speech more generally, doesn’t exist.

I see. So you’re a racist bigot standing in the way of progress and social justice?

No. There are plenty of horrible forms of expression that nonetheless are protected by the First Amendment, and should be protected. I don’t need to agree with anything anyone says in order to defend their right to say it. (But by all means continue with the insults! I appreciate the encouragement.)

But consider the alternative to allowing all this speech. Suppose we really did think it was okay to punish some of this speech. What neutral arbitrator is going to decide which categories of speech are okay and which are not okay? Who’s going to neutrally enforce such laws in a non-discriminatory manner? There is no neutral arbitrator: rather, the majority will. And history shows that when a majority has the power to censor, it’ll censor things according to its own interests, not according to some “neutral” perspective able to judge what speech is and isn’t permissible.

Indeed, when the standard is that “offensive” speech is prohibited, aren’t minority views precisely those most likely to be deemed offensive? Inoffensive views aren’t the ones that need protection. Rather, it’s the offensive views that need protection, because majorities often aren’t inclined to protect offensive views. Nobody’s going to complain when someone says “Stop Child Abuse”, but they might about a discussion of then-offensive homosexual marriage.

Offensiveness and correctness aren’t always in agreement

Protecting offensive speech is particularly important because offensive views may also be right. Martin Luther opening a scholarly debate among theologians about corrupt Church practices in the Ninety-Five Theses offended Johann Tetzel and the Church. (Whether or not Luther or the Church was right, I think we can generally agree the world is a better place for that debate having taken place.) Galileo’s astronomical hypotheses offended many astronomers of the day, and notoriously the Church. John Quincy Adams’s speech in bringing petitions “about” slavery before Congress, where Southern congressmen, fearful of what petitions on the topic of slavery might say, had previously voted in a rule to immediately table any petition on the subject, offended those congressmen. Paul Robert Cohen offended many people in protesting a draft later ended, attributable to some small degree to his speech about it. Archie Bunker epitomized bigotry, yet without it All in the Family couldn’t have made the points it made. (Query whether they could make that series today without being shouted down by the congenitally-stupid.)

It’s easier to judge people who say offensive things

Additionally, offensive speech permits the observer to judge people for what they’ve said. Censorship that prohibits expression of particular views also makes it harder to determine whether a person nonetheless holds them.

(It’s not a First Amendment question, but as a matter of the broader concept of freedom of speech, and the expression and discussion of ideas, I wonder if sometimes we take stigmatization too far. That is, we take it to the point where most adherents to an odious idea adhere in silence, with no way to recognize such people when it’s most important to be able to do so — for personal physical safety, for example. The apparent absence of bigots, when everyone agrees they still exist at least sometimes, can all too easily trigger paranoia. The consequent imputation of widespread bigotry without actual proof, in an overly-broad manner, serves no useful purpose.)

Moving on…

I could write more about this, but I’m well past the length of any prior post and running out of time to write more. I leave readers with a recommendation for further reading: the brief of the Cato Institute and others (most notably including the humorist P. J. O’Rourke and former ACLU president Nadine Strossen) in this case. Its discussion and examples are truly disgusting, an awful piece of work. And this is precisely what makes it so great.

Next time, various observations on oral argument. I still have writing to do here, so this might slip a day or two. Not clear. But right now I’m still aiming for it to go live tomorrow.

Mike Conley: Things I’ve Learned This Week (April 20 – April 24, 2015)

Short one this week. I must not have learned much! 😀

If you’re using Sublime Text to hack on Firefox or Gecko, make sure it’s not indexing your objdir.

Sublime has this wicked cool feature that lets you quickly search for files within your project folders. On my MBP, the shortcut is Cmd-P. It’s probably something like Ctrl-P on Windows and Linux.

That feature is awesome, because when I need to get to a file, instead of searching the folder hierarchy, I just hit Cmd-P, jam in a few of the characters (they can even be out of order – Sublime does fuzzy matching), and then as soon as my desired file is the top entry, just hit Enter, and BLAM – opened file. It really saves time!

At least, it saves time in theory. I noticed that sometimes, I’d hit Cmd-P, and the UI to enter my search string would take ages to show up. I had no idea why.

Then I noticed that this slowness seemed to show up after I had done a build. My objdir resides beneath my srcdir (as is the default with a mozilla-central checkout), so I figured perhaps Sublime was trying to index all of those binaries and choking on them.

I went to Project > Edit Project, and added this to the configuration file that opened:

{
    "folders":
    [
        {
            "path": "/Users/mikeconley/Projects/mozilla-central",
      "folder_exclude_patterns": ["*.sublime-workspace", "obj-*"]
        }
    ]
}

I added the workspace thing too1, because I figure it’s unlikely I’ll ever want to open that thing.

Anyhow, after setting that, I restarted Sublime, and everything was crazy-fast. \o/

If you’re using Sublime, and your objdir is under your srcdir, maybe consider adding the same thing. Even if you’re not using Cmd-P, it’ll probably save your machine from needlessly burning cycles indexing stuff.


  1. That’s where Sublime holds my session state for my project. 

Mike Conley: The Joy of Coding (Ep. 11): Cleaning up the View Source Patch

For this episode, Richard Milewski and I figured out the syncing issue I’d been having in Episode 9, so I had my head floating in the bottom right corner while I hacked. Now you can see what I do with my face while hacking, if that’s a thing you had been interested in.

I’ve also started mirroring the episodes to YouTube, if YouTube is your choice platform for video consumption.

So, like last week, I was under a bit of time pressure because of a meeting scheduled for 2:30PM (actually the meeting I was supposed to have the week before – it just got postponed), so that gave me 1.5 hours to move forward with the View Source work we’d started back in Episode 8.

I started the episode by explaining that the cache key stuff we’d figured out in Episode 9 was really important, and that a bug had been filed by the Necko team to get the issue fixed. At the time of the video, there was a patch up for review in that bug, and when we applied it, we were able to retrieve source code out of the network cache after POST requests! Success!

Now that we had verified that our technique was going to work, I spent the rest of the episode cleaning up the patches we’d written. I started by doing a brief self-code-review to smoke out any glaring problems, and then started to fix those problems.

We got a good chunk of the way before I had to cut off the camera.

I know back when I started working on this particular bug, I had said that I wanted to take you through right to the end on camera – but the truth of the matter is, the priority of the bug went up, and I was moving too slowly on it, since I was restricting myself to a few hours on Wednesdays. So unfortunately, after my meeting, I went back to hacking on the bug off-camera, and yesterday I put up a patch for review. Here’s the review request, if you’re interested in seeing where I got to!

I felt good about the continuity experiment, and I think I’ll try it again for the next few episodes – but I think I’ll choose a lower-priority bug; that way, I think it’s more likely that I can keep the work contained within the episodes.

How did you feel about the continuity between episodes? Did it help to engage you, or did it not matter? I’d love to hear your comments!

Episode Agenda

References

Bug 1025146 – [e10s] Never load the source off of the network when viewing source

Notes

Jeff Walden: First Amendment forum doctrine

Yesterday I discussed government speech, and Texas’s arguments that specialty plates are government speech that individuals can’t compel Texas to make. Today I discuss the First Amendment forum doctrine.

A reminder: yesterday’s disclaimer still holds. In fact, this post is probably the most likely of all the posts in this series to contain errors. Caveat lector!

Forums

The Supreme Court classifies places (physical and metaphorical) where the First Amendment applies, and freedom of speech requirements hold, in a few different ways. These places are called forums.

[Image: modern ruins of the Roman Forum. One of the original forums, the Roman Forum: per Wikipedia, the center of public life in ancient Rome, and a venue for many public speeches (Carla Tavares, CC-BY-SA-3.0)]

Not a forum

Some places aren’t forums. For example, most public television programs aren’t forums. In such places government may exclude particular viewpoints, topics, and content.

If Texas ran its specialty plate program as a contest, picking a few choices from a vast selection according to its own whims as occurs on public TV, it might not be a forum. (That might also make it government speech.) (Or maybe I’m grasping for a plausible example, and I’ve grasped wrongly. Hedge!) But Texas accepts basically anybody, so it’s a forum.

Traditional public forums

Some forums have always been open places of discussion: traditional public forums. These include public sidewalks and parks. Here, no restrictions on content or viewpoint are permitted (except a very small list). Government may impose reasonable content/viewpoint-neutral restrictions as to speech’s time, place, or manner. But in doing so it must further significant government interests, and restrictions must be narrowly written and leave open ample alternative channels for speech. For example: no use outside park hours, potentially you need a permit granted nondiscriminatorily, don’t exceed noise level limits, stay off grass being reseeded.

Specialty plate programs are a modern invention, obviously not a common-law forum historically open for discussion.

Nonpublic forums

Places that are forums that aren’t traditionally open, that government hasn’t opened to public expression, are nonpublic forums. Examples are courthouses and government buildings: generally, government property reserved for some particular use. Government can employ restrictions on content here, to further those particular interests. Otherwise, restrictions match those in traditional public forums (and particularly can’t restrict speech based on its viewpoint).

Texas invited everyone onto its license plates, opening them to public expression. So Texas’s specialty plate program is not a nonpublic forum.

Designated public forums (and limited public forums)

Finally we have locations not traditionally opened, that government has opened to some public expression: designated public forums. The restrictions applicable in traditional public forums are also applicable in designated public forums.

There’s also an additional sort of forum, sometimes viewed as a subset of the designated public forum, where the government limits speech to certain groups or topics: the limited public forum. A limited public forum may place restrictions on content, but it can’t restrict based on viewpoint.

The various forum definitions logically subdivide the set of all possible forums. (Although to be sure, there remains argument as to what subdivision the Court’s cases have actually recognized.) By process of elimination, Texas’s program must be a designated public forum, possibly a limited public forum.

Recap

Let’s check our work. License plate designs are not traditionally open to ideas, but Texas allows anyone to propose a custom plate design espousing practically any idea whatsoever. The ideas are legion: in-state and out-of-state colleges (even bitter rivals), college sports teams, and professional teams; corporations; non-profits; causes; war veteran status; the whimsical (“Rather Be Golfing”); and many others. The individual selects a specialty plate and displays it publicly. Moreover, Wooley v. Maynard indicates that individuals have First Amendment rights in what their license plates say. So Texas’s specialty plates are public expression, at the government’s invitation.

Texas’s specialty-plate program is a designated public forum, possibly a limited public forum. Hold the question as to which one — we’ll return to this later.

Tomorrow, a discussion of offensive speech.

Gervase Markham: Three Forms of Diversity

I’ve been thinking about the concept of “diversity” recently, and have written some thoughts on the subject, called Three Forms of Diversity. (The title is a play on Three Forms of Unity.)

The piece does include, at the end, a section on the specific applicability of my analysis to the Mozilla community.

Comments, as always, are most welcome. :-)

Cameron Kaiser: Achievement unlocked: lack of any social or romantic activity PAYS OFF

*with two tests suppressed due to big endian

[4484|   0|   0|   0] 100% ==========================================>|1511.7s
PASSED ALL

Yaaaaaay! It's time to get jiggy with it! First, I'll scrub the changesets to remove PPCBC/JM entirely (except for the PowerPC-specific parallel type guards in Baseline inline caches, which are an important optimization); the only other vestiges that shall remain are a couple code snippets from Ben's original work in the new MacroAssembler and the G3/G4 software square root routine from David, which may be replaced with a call to the C library; I'm going to do some profiling there first. (G5 still uses its hardware square root instruction.) After that we'll sprinkle back in gcc 4.6 compatibility, the last branch we will support with this compiler, mix with IonPower, and then try to get the browser up. If I can get the minimp3 seek routine working, MP3 support will be exposed publicly too. We will also unveil our own bespoke browser front page to avoid issues with Electrolysis, which we don't support currently.

In addition, I have some lofty but possible performance goals for IonPower. For these goals, I've selected V8 as the metric because it's a medium-weight benchmark; SunSpider is probably not representative anymore of the bloatier code many sites (especially ad networks, bleah) now foist upon their users, and it is unlikely many Power Macs are braving the really heavy processing JavaScript apps for time reasons (or support reasons: games using WebGL, for example, won't run on TenFourFox anyway) which makes Kraken and Octane less representative as well. The aim is to get the optimized IonPower faster than PPCBC on V8 in Baseline-only mode, and faster than our old JaegerMonkey backend on V8 in full Baseline/Ion mode, which it now supports. The first is definitely achievable, and the second should come awfully close. That should translate into demonstrable real-world improvements on the vast majority of sites vintage Macs still frequent and will also make pdf.js finally viable on TenFourFox.

I treat every ESR release like it's our last, but by the time 38ESR ends, that will mean Power Macs will have had a fully supported branch of Firefox for 10 years after the last Power Mac rolled out of Cupertino. Now, that's support!

ETA for the first beta is 3-4 weeks, with rollout around 38.0.2 to coincide with the end of support for 31ESR. Localizers, stand by!

Armen Zambrano: What Mozilla CI tools is and what it can do for you (aka mozci)

Mozci (Mozilla CI Tools) is a Python package, consisting of a library and scripts, which allows you to trigger jobs on treeherder.mozilla.org.
Not all jobs can be triggered, only those that run on Release Engineering's Buildbot setup. Most (if not all) Firefox desktop and Firefox for Android jobs can be triggered. I believe some B2G jobs can still be triggered.

NOTE: Most B2G jobs are not supported yet since they run on TaskCluster. Support for them will be added this quarter.

Using it

Once you check out the code:
git clone https://github.com/armenzg/mozilla_ci_tools.git
python setup.py develop
you can run scripts like this one (click here for other scripts):
python scripts/trigger.py \
  --buildername "Rev5 MacOSX Yosemite 10.10 fx-team talos dromaeojs" \
  --rev e16054134e12 --times 10
which would trigger a specific job 10 times.

NOTE: This works regardless of whether a build job already exists for the test job; mozci will trigger everything that is required to get you what you need.

One of the many other options: if you want to trigger the same job for the last X revisions, use --back-revisions X, as sketched below.
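
For instance, something along these lines (a sketch that reuses the buildername and revision from the example above; the exact flag combination you need may differ):
python scripts/trigger.py \
  --buildername "Rev5 MacOSX Yosemite 10.10 fx-team talos dromaeojs" \
  --rev e16054134e12 --back-revisions 5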

There are many use cases and options listed in here.


A use case for developers

One use case which could be useful to developers (thanks @mike_conley!) is if you pushed to try and used this try syntax: "try: -b o -p win32 -u mochitests -t none". Unfortunately, you later determine that you really need this one: "try: -b o -p linux64,macosx64,win32 -u reftest,mochitests -t none".

In normal circumstances you would go and push again to the try server; however, with mozci (once someone implements this), we could simply pass the new syntax to a script (or use ./mach) and trigger everything that you need, rather than having to push again and waste resources and your time!

If you have other use cases, please file an issue in here.

If you want to read about the definition of the project, vision, use cases or FAQ please visit the documentation.


Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

Jeff Walden: Government speech and compelled speech

Yesterday I discussed specialty plate programs in lower courts and the parties’ arguments in Walker v. Texas Division, Sons of Confederate Veterans. Today I begin to analyze the questions in the case.

But first, a disclaimer.

Disclaimer

The following is my understanding of First Amendment law, gleaned from years of reading numerous free speech opinions, summaries, and analyses. I’m generally confident in this explanation, but I may well have made mistakes, or simply missed nuance present in the cases but not in the summaries I’ve read. Please point out mistakes in the comments.

Of course, I really have no business trying to explain First Amendment jurisprudence, if I want it explained correctly. First Amendment law is incredibly complex. My haphazard reading will miss things.

But I’m barging ahead anyway, for a few reasons. First, I want to talk about this. Second, it’s fun to talk about it! Third, you don’t learn unless you’re willing to look like a fool from time to time. Fourth, the law is not this recondite, bizarre arcana that only lawyers and judges can understand. It may require some work to correctly understand laws, terms of art, rules of statutory construction, and relevant past decisions in the common law. But any intelligent person can do it if they make the effort.

And fifth, nobody with any sense will unconditionally rely on this as authoritative, not when there are far better places to look for the finest in free Internet legal advice.

Government speech

The “recently minted” government speech doctrine occupies an uneasy place in the realm of speech. For when government speech occurs, non-governmental speech open to First Amendment challenge is reduced. There must be some government speech: otherwise we’d absurdly conclude that the government’s World War II war-bond propaganda must be accompanied by anti-bond propaganda. Government programs often have viewpoints suppressible only in the voting booth. But this mechanism is sluggish and imperfectly responsive, and government speech’s discretion can be abused. So it’s best to be careful anointing government speech.


This is your government. This is your government on beef. Any questions?

Certainly some license plates — the state’s default designs and designs ordered by the legislature — are government speech, even if they’re also individual speech under Wooley v. Maynard. In each case the government wholly chooses what it wishes to say, and that message is government speech. The individual’s choice to assist in conveying it, under Wooley, isn’t government speech.

Circularity

But Texas’s government-speech argument, applied beyond plates it designs itself, is laughable. The linchpin of Texas’s argument is that because they control the program, that makes it government speech they can control. This argument is completely circular! By starting from their control over the program’s speech, they’ve assumed their conclusion.

This doesn’t mean Texas is wrong. But their circular central government-speech argument can prove nothing. This logical flaw is blindingly obvious. Texas’s lawyers can’t have missed this. If they made this their lead argument, they’re scrambling.

Compelling Texas to speak?

Texas’s better argument is that vehicle licenses and plates are its program, implicating its right to speak or not speak under Wooley. But the First Amendment restrains government power, not individual power. And many courts (although so far not the Supreme Court) have held that government can be compelled to “speak” in accepting advertising in government-controlled places (public transit systems, for a common example). The problem is Texas voluntarily created a specialty plate program open to all for speech. No “compulsion” derives from a voluntary act.

Texas didn’t fully control the specialty plate program, but rather opened it to anyone with money. (As Chief Justice Roberts noted in oral argument: “They’re only doing this to get the money.”) It’s possible there’s government speech in Texas SCV‘s plate, perhaps the occasionally-proposed “hybrid” speech. But once Texas opens the program to all, it loses full control over what’s said.

How then do we consider specialty plate programs? What controls may Texas exercise? Now we must decide how to classify the specialty-plate program with respect to First Amendment-protected speech. What sort of forum for speech is Texas’s specialty-plate program?

Tomorrow, First Amendment forum doctrine.

Daniel Stenberg: curl on the NASDAQ tower

Apigee posted this lovely picture over at twitter. A curl command line on the NASDAQ tower.

[Image: the curl command line displayed on the NASDAQ tower]

Armen Zambrano: Firefox UI update testing

We currently trigger UI update tests manually for Firefox releases. There are automated headless update verification tests, but they don't test the Firefox UI.

The goal is to integrate this UI update testing as part of the Firefox release process.
This will require changes to firefox-ui-tests, Buildbot scheduling, Marionette and other Mozbase packages. The ultimate goal is to speed up our turnaround on releases.

The update testing code was recently ported from Mozmill to use Marionette to drive the testing.

I've already written some documentation on how to run the update verification using Release Engineering configuration files. You can use my tools repository until the code lands (update_testing is the branch to be used).

My deliverable is to ensure that the update testing works reliably on Release Engineering infrastructure and that scheduling code exists for it.

You can read more about this project in bug 1148546.


Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

Pierros Papadeas: KPI Dashboard on reps.mozilla.org

Mozilla Reps as a program is full of activities. Reps around the world do extraordinary things every day, promoting Mozilla’s mission and getting new contributors on board.

Moving forward, trying to identify how those activities align with top-tier initiatives, the Mozilla Reps program wanted a way to visualize some Key Progress Indicators (KPIs) around the program.

We (the Participation Infrastructure team) sat down with the programmatic owners of Reps (Nuke & Rosana) and identified what numbers and metrics we would like to expose in a much more digestible way, so we can assess the progress of the program on many levels.

We identified 3 different KPIs:

  • Number of Reps (and growth rates)
  • Number of Events (and growth rates)
  • Number of Reports (and growth rates)

… and also 3 different filters you can apply on those numbers:

  • Country
  • Functional Area (of Mozilla)
  • Initiative (associated with Rep, Event or Report)

You can find the spec for this initial iteration here.

We decided to have the filters as drop-downs, applied on the whole page (in combination or one-by-one). Then for each KPI group we would have a time graph for the past 6 weeks (fixed for now) with a HUD of basic numbers and growth rates.

[Screenshot from 2015-04-24 13:39:29]

Technology-wise, we tied the coding of this new dashboard to the delivery of a proper API for the Reps Portal (more info on that soon). The new API enabled us to easily create custom endpoints to calculate the numbers needed for our Reps KPI graphs (based on the existing Conversion Points). Nemo and Tasos did fantastic work to deliver the new API and the custom endpoints, while making sure that this is not heavy on our DB.

Nikos then worked on the front-end using D3.js as the visualization library to create the graphs dynamically (each time you access the page or you filter using Country, Area or Initiative).

[Screenshot from 2015-04-24 13:37:05]

The overall result is smooth and makes it easy to assess the progress of various Areas and Initiatives in specific Countries, for Reps, Events and Reports.

You can check out the dashboard here.

Next step would be to introduce a time-slider for customizing the time range you want to be displayed.

Chris Lord: Web Navigation Transitions

Wow, so it’s been over a year since I last blogged. Lots has happened in that time, but I suppose that’s a subject for another post. I’d like to write a bit about something I’ve been working on for the last week or so. You may have seen Google’s proposal for navigation transitions, and if not, I suggest reading the spec and watching the demonstration. This is something that I’ve thought about for a while previously, but never put into words. After reading Google’s proposal, I fear that it’s quite complex both to implement and to author, so this pushed me both to document my idea, and to implement a proof-of-concept.

I think Google’s proposal is based on Android’s Activity Transitions, and due to Android UI’s very different display model, I don’t think this maps well to the web. Just my opinion though, and I’d be interested in hearing peoples’ thoughts. What follows is my alternative proposal. If you like, you can just jump straight to a demo, or view the source. Note that the demo currently only works in Gecko-based browsers – this is mostly because I suck, but also because other browsers have slightly inscrutable behaviour when it comes to adding stylesheets to a document. This is likely fixable, patches are most welcome.


 Navigation Transitions specification proposal

Abstract

An API will be suggested that will allow transitions to be performed between page navigations, requiring only CSS. It is intended for the API to be flexible enough to allow for animations on different pages to be performed in synchronisation, and for particular transition state to be selected on without it being necessary to interject with JavaScript.

Proposed API

Navigation transitions will be specified within a specialised stylesheet. These stylesheets will be included in the document as new link rel types. Transitions can be specified for entering and exiting the document. When the document is ready to transition, these stylesheets will be applied for the specified duration, after which they will stop applying.

Example syntax:

<link rel="transition-enter" duration="0.25s" href="URI" />
<link rel="transition-exit" duration="0.25s" href="URI" />

When navigating to a new page, the current page’s ‘transition-exit‘ stylesheet will be referenced, and the new page’s ‘transition-enter‘ stylesheet will be referenced.

When navigation is operating in a backwards direction, by the user pressing the back button in browser chrome, or when initiated from JavaScript via manipulation of the location or history objects, animations will be run in reverse. That is, the current page’s ‘transition-enter‘ stylesheet will be referenced, and animations will run in reverse, and the old page’s ‘transition-exit‘ stylesheet will be referenced, and those animations also run in reverse.

[Update]

Anne van Kesteren suggests that forcing this to be a separate stylesheet and putting the duration information in the tag is not desirable, and that it would be nicer to expose this as a media query, with the duration information available in an @-rule. Something like this:

@viewport {
  navigate-away-duration: 500ms;
}

@media (navigate-away) {
  ...
}

I think this would indeed be nicer, though I think the exact naming might need some work.

Transitioning

When a navigation is initiated, the old page will stay at its current position and the new page will be overlaid over the old page, but hidden. Once the new page has finished loading it will be unhidden, the old page’s ‘transition-exit‘ stylesheet will be applied and the new page’s ‘transition-enter’ stylesheet will be applied, for the specified durations of each stylesheet.

When navigating backwards, the CSS animations timeline will be reversed. This will have the effect of modifying the meaning of animation-direction like so:

Forwards          | Backwards
--------------------------------------
normal            | reverse
reverse           | normal
alternate         | alternate-reverse
alternate-reverse | alternate

and this will also alter the start time of the animation, depending on the declared total duration of the transition. For example, if a navigation stylesheet is declared to last 0.5s and an animation has a duration of 0.25s, when navigating backwards, that animation will effectively have an animation-delay of 0.25s and run in reverse. Similarly, if it already had an animation-delay of 0.1s, the animation-delay going backwards would become 0.15s, to reflect the time when the animation would have ended.

Layer ordering will also be reversed when navigating backwards, that is, the page being navigated from will appear on top of the page being navigated backwards to.

Signals

When a transition starts, a ‘navigation-transition-start‘ NavigationTransitionEvent will be fired on the destination page. When this event is fired, the document will have had the applicable stylesheet applied and it will be visible, but will not yet have been painted on the screen since the stylesheet was applied. When the navigation transition duration is met, a ‘navigation-transition-end‘ will be fired on the destination page. These signals can be used, amongst other things, to tidy up state and to initialise state. They can also be used to modify the DOM before the transition begins, allowing for customising the transition based on request data.
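
As a rough illustration only (the event interface and the object the events fire on are my assumptions here, since this is a proposal rather than a shipping API), a destination page might hook these signals like so:

document.addEventListener('navigation-transition-start', function(event) {
  // The transition stylesheet has been applied but nothing has been painted
  // yet, so this is the last chance to adjust the DOM for this transition.
});

document.addEventListener('navigation-transition-end', function(event) {
  // Tidy up any state that only existed for the duration of the transition.
});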

JavaScript execution could potentially cause a navigation transition to run indefinitely, it is left to the user agent’s general purpose JavaScript hang detection to mitigate this circumstance.

Considerations and limitations

Navigation transitions will not be applied if the new page does not finish loading within 1.5 seconds of its first paint. This can be mitigated by pre-loading documents, or by the use of service workers.

Stylesheet application duration will be timed from the first render after the stylesheets are applied. This should either synchronise exactly with CSS animation/transition timing, or it should be longer, but it should never be shorter.

Authors should be aware that using transitions will temporarily increase the memory footprint of their application during transitions. This can be mitigated by clear separation of UI and data, and/or by using JavaScript to manipulate the document and state when navigating to avoid keeping unused resources alive.

Navigation transitions will only be applied if both the navigating document has an exit transition and the target document has an enter transition. Similarly, when navigating backwards, the navigating document must have an enter transition and the target document must have an exit transition. Both documents must be on the same origin, or transitions will not apply. The exception to these rules is the first document load of the navigator. In this case, the enter transition will apply if all prior considerations are met.

Default transitions

It is possible for the user agent to specify default transitions, so that navigation within a particular origin will always include navigation transitions unless they are explicitly disabled by that origin. This can be done by specifying navigation transition stylesheets with no href attribute, or that have an empty href attribute.

Note that specifying default transitions in all situations may not be desirable due to the differing loading characteristics of pages on the web at large.

It is suggested that default transition stylesheets may be specified by extending the iframe element with custom ‘default-transition-enter‘ and ‘default-transition-exit‘ attributes.

Examples

Simple slide between two pages:

[page-1.html]

<head>
  <link rel="transition-exit" duration="0.25s" href="page-1-exit.css" />
  <style>
    body {
      border: 0;
      height: 100%;
    }

    #bg {
      width: 100%;
      height: 100%;
      background-color: red;
    }
  </style>
</head>
<body>
  <div id="bg" onclick="window.location='page-2.html'"></div>
</body>

[page-1-exit.css]

#bg {
  animation-name: slide-left;
  animation-duration: 0.25s;
}

@keyframes slide-left {
  from {}
  to { transform: translateX(-100%); }
}

[page-2.html]

<head>
  <link rel="transition-enter" duration="0.25s" href="page-2-enter.css" />
  <style>
    body {
      border: 0;
      height: 100%;
    }

    #bg {
      width: 100%;
      height: 100%;
      background-color: green;
    }
  </style>
</head>
<body>
  <div id="bg" onclick="history.back()"></div>
</body>

[page-2-enter.css]

#bg {
  animation-name: slide-from-left;
  animation-duration: 0.25s;
}

@keyframes slide-from-left {
  from { transform: translateX(100%) }
  to {}
}


I believe that this proposal is easier to understand and use for simpler transitions than Google’s; however, it becomes harder to express animations where one element is transitioning to a new position/size in a new page, and it’s also impossible to interleave contents between the two pages (as the pages will always draw separately, in the predefined order). I don’t believe this last limitation is a big issue, however, and I don’t think the cognitive load required to craft such a transition is considerably higher. In fact, you can see it demonstrated by visiting this link in a Gecko-based browser (recommended viewing in responsive design mode, Ctrl+Shift+m).

I would love to hear peoples’ thoughts on this. Am I actually just totally wrong, and Google’s proposal is superior? Are there huge limitations in this proposal that I’ve not considered? Are there security implications I’ve not considered? It’s highly likely that parts of all of these are true and I’d love to hear why. You can view the source for the examples in your browser’s developer tools, but if you’d like a way to check it out more easily and suggest changes, you can also view the git source repository.

Cameron Kaiser: IonPower progress report

Remember: comparing the G5 optimized PPCBC Baseline-only compiler against the unoptimized test version of IonPower on V8!

% /Applications/TenFourFoxG5.app/Contents/MacOS/js --no-ion -f run.js
Richards: 203
DeltaBlue: 582
Crypto: 358
RayTrace: 584
EarleyBoyer: 595
RegExp: 616
Splay: 969
NavierStokes: 432
----
Score (version 7): 498

% ../../../../mozilla-36t/obj-ff-dbg/dist/bin/js -f run.js
Richards: 337
DeltaBlue: 948
Crypto: 1083
RayTrace: 913
EarleyBoyer: 350
RegExp: 259
Splay: 584
NavierStokes: 3262
----
Score (version 7): 695

I've got one failing test case left to go (the other is not expected to pass because it assumes a little-endian memory alignment)! We're almost to the TenFourFox 38 port!

The Rust Programming Language Blog: Rust Once, Run Everywhere

Rust’s quest for world domination was never destined to happen overnight, so Rust needs to be able to interoperate with the existing world just as easily as it talks to itself. For this reason, Rust makes it easy to communicate with C APIs without overhead, and to leverage its ownership system to provide much stronger safety guarantees for those APIs at the same time.

To communicate with other languages, Rust provides a foreign function interface (FFI). Following Rust’s design principles, the FFI provides a zero-cost abstraction where function calls between Rust and C have identical performance to C function calls. FFI bindings can also leverage language features such as ownership and borrowing to provide a safe interface that enforces protocols around pointers and other resources. These protocols usually appear only in the documentation for C APIs – at best – but Rust makes them explicit.

In this post we’ll explore how to encapsulate unsafe FFI calls to C in safe, zero-cost abstractions. Working with C is, however, just an example; we’ll also see how Rust can talk to languages like Python and Ruby just as seamlessly as it talks to C.

Rust talking to C

Let’s start with a simple example of calling C code from Rust and then demonstrate that Rust imposes no additional overhead. Here’s a C program which will simply double all the input it’s given:

int double_input(int input) {
    return input * 2;
}

To call this from Rust, you might write a program like this:

extern crate libc;

extern {
    fn double_input(input: libc::c_int) -> libc::c_int;
}

fn main() {
    let input = 4;
    let output = unsafe { double_input(input) };
    println!("{} * 2 = {}", input, output);
}

And that’s it! You can try this out for yourself by checking out the code on GitHub and running cargo run from that directory. At the source level we can see that there’s no burden in calling an external function beyond stating its signature, and we’ll see soon that the generated code indeed has no overhead, either. There are, however, a few subtle aspects of this Rust program, so let’s cover each piece in detail.

First up we see extern crate libc. The libc crate provides many useful type definitions for FFI bindings when talking with C, and it makes it easy to ensure that both C and Rust agree on the types crossing the language boundary.

This leads us nicely into the next part of the program:

extern {
    fn double_input(input: libc::c_int) -> libc::c_int;
}

In Rust this is a declaration of an externally available function. You can think of this along the lines of a C header file. Here’s where the compiler learns about the inputs and outputs of the function, and you can see above that this matches our definition in C. Next up we have the main body of the program:

fn main() {
    let input = 4;
    let output = unsafe { double_input(input) };
    println!("{} * 2 = {}", input, output);
}

We see one of the crucial aspects of FFI in Rust here, the unsafe block. The compiler knows nothing about the implementation of double_input, so it must assume that memory unsafety could happen whenever you call a foreign function. The unsafe block is how the programmer takes responsibility for ensuring safety – you are promising that the actual call you make will not, in fact, violate memory safety, and thus that Rust’s basic guarantees are upheld. This may seem limiting, but Rust has just the right set of tools to allow consumers to not worry about unsafe (more on this in a moment).
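
As a small taste of that (this is a sketch of the idea only, with a hypothetical wrapper name, not how the linked example repository is organized), the unsafe block can be hidden behind an ordinary safe function so that callers never write unsafe themselves:

fn double_input_safe(input: i32) -> i32 {
    // double_input only reads its integer argument, so taking responsibility
    // for memory safety here is straightforward; callers of this wrapper
    // never need an unsafe block.
    unsafe { double_input(input as libc::c_int) as i32 }
}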

Now that we’ve seen how to call a C function from Rust, let’s see if we can verify this claim of zero overhead. Almost all programming languages can call into C one way or another, but it often comes at a cost with runtime type conversions or perhaps some language-runtime juggling. To get a handle on what Rust is doing, let’s go straight to the assembly code of the above main function’s call to double_input:

mov    $0x4,%edi
callq  3bc30 <double_input>

And as before, that’s it! Here we can see that calling a C function from Rust involves precisely one call instruction after moving the arguments into place, exactly the same cost as it would be in C.

Safe Abstractions

Most features in Rust tie into its core concept of ownership, and the FFI is no exception. When binding a C library in Rust you not only have the benefit of zero overhead, but you are also able to make it safer than C can! Bindings can leverage the ownership and borrowing principles in Rust to codify comments typically found in a C header about how its API should be used.

For example, consider a C library for parsing a tarball. This library will expose functions to read the contents of each file in the tarball, probably something along the lines of:

// Gets the data for a file in the tarball at the given index, returning NULL if
// it does not exist. The `size` pointer is filled in with the size of the file
// if successful.
const char *tarball_file_data(tarball_t *tarball, unsigned index, size_t *size);

This function is implicitly making assumptions about how it can be used, however, by assuming that the char* pointer returned cannot outlive the input tarball. When bound in Rust, this API might look like this instead:

pub struct Tarball { raw: *mut tarball_t }

impl Tarball {
    pub fn file(&self, index: u32) -> Option<&[u8]> {
        unsafe {
            let mut size = 0;
            let data = tarball_file_data(self.raw, index as libc::c_uint,
                                         &mut size);
            if data.is_null() {
                None
            } else {
                Some(slice::from_raw_parts(data as *const u8, size as usize))
            }
        }
    }
}

Here the *mut tarball_t pointer is owned by a Tarball, which is responsible for any destruction and cleanup, so we already have rich knowledge about the lifetime of the tarball’s memory. Additionally, the file method returns a borrowed slice whose lifetime is implicitly connected to the lifetime of the source tarball itself (the &self argument). This is Rust’s way of indicating that the returned slice can only be used within the lifetime of the tarball, statically preventing dangling pointer bugs that are easy to make when working directly with C. (If you’re not familiar with this kind of borrowing in Rust, have a look at Yehuda Katz’s blog post on ownership.)
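
For instance, the cleanup responsibility mentioned above would typically live in a Drop implementation. Here is a minimal sketch, where tarball_free is a hypothetical C destructor assumed to be declared in the same extern block (it is not from any particular library):

impl Drop for Tarball {
    fn drop(&mut self) {
        // Hypothetical C function that frees the tarball and all memory it owns.
        // Because any &[u8] returned by `file` borrows from &self, the borrow
        // checker guarantees no such slice can outlive the Tarball, so this
        // cannot free memory that safe Rust code is still using.
        unsafe { tarball_free(self.raw) }
    }
}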

A key aspect of the Rust binding here is that it is a safe function, meaning that callers do not have to use unsafe blocks to invoke it! Although it has an unsafe implementation (due to calling an FFI function), the interface uses borrowing to guarantee that no memory unsafety can occur in any Rust code that uses it. That is, due to Rust’s static checking, it’s simply not possible to cause a segfault using the API on the Rust side. And don’t forget, all of this is coming at zero cost: the raw types in C are representable in Rust with no extra allocations or overhead.

Rust’s amazing community has already built some substantial safe bindings around existing C libraries, including OpenSSL, libgit2, libdispatch, libcurl, sdl2, Unix APIs, and libsodium. This list is also growing quite rapidly on crates.io, so your favorite C library may already be bound or will be bound soon!

C talking to Rust

Despite guaranteeing memory safety, Rust does not have a garbage collector or runtime, and one of the benefits of this is that Rust code can be called from C with no setup at all. This means that the zero overhead FFI not only applies when Rust calls into C, but also when C calls into Rust!

Let’s take the example above, but reverse the roles of each language. As before, all the code below is available on GitHub. First we’ll start off with our Rust code:

#[no_mangle]
pub extern fn double_input(input: i32) -> i32 {
    input * 2
}

As with the Rust code before, there’s not a whole lot here but there are some subtle aspects in play. First off, we’ve labeled our function definition with a #[no_mangle] attribute. This instructs the compiler to not mangle the symbol name for the function double_input. Rust employs name mangling similar to C++ to ensure that libraries do not clash with one another, and this attribute means that you don’t have to guess a symbol name like double_input::h485dee7f568bebafeaa from C.

Next we’ve got our function definition, and the most interesting part about this is the keyword extern. This is a specialized form of specifying the ABI for a function which enables the function to be compatible with a C function call.

Finally, if you take a look at the Cargo.toml you’ll see that this library is not compiled as a normal Rust library (rlib) but instead as a static archive which Rust calls a ‘staticlib’. This enables all the relevant Rust code to be linked statically into the C program we’re about to produce.
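
A minimal sketch of the relevant Cargo.toml section might look like the following (the library name here is an assumption; the crate-type line is the important part):

[lib]
name = "double"
crate-type = ["staticlib"]

With a configuration along these lines, cargo build produces a static archive (libdouble.a on Unix-like systems, given that name) under the target directory that a C program can link against.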

Now that we’ve got our Rust library squared away, let’s write our C program which will call Rust.

#include <stdint.h>
#include <stdio.h>

extern int32_t double_input(int32_t input);

int main() {
    int input = 4;
    int output = double_input(input);
    printf("%d * 2 = %d\n", input, output);
    return 0;
}

Here we can see that C, like Rust, needs to declare the double_input function that Rust defined. Other than that though everything is ready to go! If you run make from the directory on GitHub you’ll see these examples getting compiled and linked together and the final executable should run and print 4 * 2 = 8.
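
If you prefer to see roughly what the build is doing, the manual steps look something like this. This is only a sketch: it assumes the staticlib is named libdouble.a, and on Linux you generally also have to link the system libraries the Rust standard library depends on (the exact set varies by platform and Rust version):

$ cargo build --release          # produces target/release/libdouble.a
$ gcc main.c -L target/release -ldouble -lpthread -ldl -lm -o main
$ ./main
4 * 2 = 8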

Rust’s lack of a garbage collector and runtime enables this seamless transition from C to Rust. The external C code does not need to perform any setup on Rust’s behalf, making the transition that much cheaper.

Beyond C

Up to now we’ve seen how FFI in Rust has zero overhead and how we can use Rust’s concept of ownership to write safe bindings to C libraries. If you’re not using C, however, you’re still in luck! These features of Rust enable it to also be called from Python, Ruby, JavaScript, and many more languages.

When writing code in these languages, you sometimes want to speed up some component that’s performance critical, but in the past this often required dropping all the way to C, and thereby giving up the memory safety, high-level abstractions, and ergonomics of these languages.

The fact that Rust can talk so easily with C, however, means that it is also viable for this sort of usage. One of Rust’s first production users, Skylight, was able to improve the performance and memory usage of their data collection agent almost instantly by just using Rust, and the Rust code is all published as a Ruby gem.

Moving from a language like Python and Ruby down to C to optimize performance is often quite difficult as it’s tough to ensure that the program won’t crash in a difficult-to-debug way. Rust, however, not only brings zero cost FFI, but also makes it possible to retain the same safety guarantees as the original source language. In the long run, this should make it much easier for programmers in these languages to drop down and do some systems programming to squeeze out critical performance when they need it.

FFI is just one of many tools in the toolbox of Rust, but it’s a key component to Rust’s adoption as it allows Rust to seamlessly integrate with existing code bases today. I’m personally quite excited to see the benefits of Rust reach as many projects as possible!

Emma IrwinMy year on Reps Council

It’s been one year! An incredible year of learning, leading and helping evolve the Mozilla Reps program as a council member. As my term ends I want to share my experiences with those considering this same path, but also as a way to lend to the greater narrative of Reps as a leadership platform.

I could easily write 12 posts to cover the experience – but I thought this might be more helpful:

The 7 things I know for sure

(after 12 months on Reps Council)

1. Mozilla Reps Council Is a Journey of Learning and Inspiration

I assumed, when I first started council, that my workload would consist mostly of administrative tasks (although, to be truthful, there is a lot of that). I also assumed I would effortlessly lean on my existing leadership skills to ‘help out’ where needed. It turns out I had a lot to learn and improve on – here are some of the new and sharpened skills I am emerging with:

  • Problem solving
  • Conflict Resolution/ Crisis Management
  • Communication
  • Strategy
  • Transparency
  • Project Planning
  • Task Management
  • Writing
  • Respecting Work-Life Balance
  • Debating Respectfully
  • Public Speaking
  • Facilitation
  • The art of saying ‘no’/when to step back
  • The art of ‘not dropping balls’ or knowing which balls will bounce back, and which will break
  • Being brave (talking to leadership, even with nagging imposter syndrome)
  • Empathy
  • Planning for Diversity
  • Outreach
  • Teaching
  • Mentorship

2. 2015 is a (super) important year for Reps

Nurtured by the loving hands of 5 previous Reps councils, a strong mentorship structure, over 400 Reps and thousands of community members, the Mozilla Reps program has come to an important milestone as a recognized body of leadership across Mozilla. The clearly articulated vision of Reps as a ‘launch pad for leadership’ has pushed us to be more strategic in our goals. And we are. The next council, together with mentors, will be critical in executing these goals.

3. The voice of community is valued, and Mozilla is listening

In the past few months, we’ve worked with Mitchell Baker, Chris Beard, Mark Surman and David Slater, Mary-Ellen and others on everything from conflict resolution, to VP interview and on-boarding processes. Reps Council is on the Mozilla leadership page. The Mozilla Reps call has been attended by Firefox and Brand teams in need of feedback.  It’s not a coincidence, and it’s not casual – your voice matters.  Reps as leaders have the ear of the entire organization, because Reps are the voice of their extended community.

I encourage Rep Mentors with loud minds to run for council this year.

4. Mozilla Reps is ever-evolving

When I joined Reps Council, I had a lot of ideas about what I would ‘fix’. And I laugh at myself now – ‘fixing’ is something we do to flaws, to errors and mistakes – but the Reps program is not an immovable force – it’s a living organism, alive with people, their ideas, inventions and actions. How we evolve, while aligning with the needs of project goals, is a bit like changing the tire on a moving car. If you are considering a run for council, it might help to envision ways you can evolve, improve and grow the program as it shifts, and in response to the community’s vision for their own participation goals.

5. Changing minds is hard / Outreach matters

I can’t write a list like this without acknowledging my personal challenge of recognizing and trying to change ‘perception problems’. It was strange to move from what had been a fairly easy transition between community, Rep and mentor to Reps Council, where almost suddenly I was regarded as part of a bureaucratic structure. I didn’t see or feel that from my fellow council members, and it’s been important to me that we change that perception through outreach.

Perceptions of our extended community have also been challenging – the idea that Reps is somehow isolated or a special  contributor group is contrary to the leadership platform we are really building.

Slowly we are changing minds, slowly outreach is making a difference – I am happy and optimistic about this.

6. Diversity Matters

Reps is an incredibly diverse community with diverse representation in many areas, including age, geography and experience. Few other communities can compare. But, like much of the technology world, we struggle with the representation of women on our council and in our mentorship base. To be truly reflective of our community and our world – to have the benefit of all perspectives – we need to encourage women leaders. As I leave council, my hope is that we will continue to prioritize women in leadership roles.

7. Our community rocks

Brilliant, creative, energetic, passionate, motivated – friends and second family. The heart of what we do lies here.

To the Reps community, mentors, the Reps team, Mozilla leadership and community I thank you for this incredible opportunity to contribute and to grow.  I plan to pay it forward.

Feature Image Credit: Fay Tandog

Air MozillaPrivacy Lab and Cryptoparty with guest speaker Melanie Ensign - How Security/Crypto Experts Can Communicate with Non-Technical Audiences

Privacy Lab and Cryptoparty with guest speaker Melanie Ensign - How Security/Crypto Experts Can Communicate with Non-Technical Audiences Our April Privacy Lab will include a speaker and an optional and free Cryptoparty, hosted by Wildbee (https://wildbee.org/cryptoparty.html). Our speaker will be Melanie Ensign. Melanie's...

L. David BaronThoughts on migrating to a secure Web

Brad Hill asked what I and other candidates in the TAG election think of Tim Berners-Lee's article Web Security - "HTTPS Everywhere" harmful. The question seems worth answering, and I don't think an answer fits within a tweet. So this is what I think, even though I feel the topic is a bit outside my area of expertise:

  • The current path of switching content on the Web to being accessed through secure connections generally involves making the content available via http URLs also available via https URLs, redirecting http URLs to https ones, and (hopefully, although not all that frequently in reality) using HSTS to ensure that the user's future attempts to access HTTP resources get converted to HTTPS without any insecure connection being made. This is a bit hacky, and hasn't solved the problem of the initial insecure connection, but it mostly works, and doesn't degrade the security of anything we have today (e.g., bookmarks or links to https URLs).

  • It's not clear to me what the problem that Tim is trying to solve is. I think some of it is concern over the semantic Web (e.g., his concern over the “identity of the resource”), although there may be other concerns there that I don't understand. I'd tend to prioritize the interests of the browseable Web (with users counted in the billions) and other uses of the Web that are widespread, over those of the semantic Web.

  • There are good reasons for the partitioning that browsers do between http and https:

    • Some of the partitioning prevents attacks directly (for example, sending a cookie that should be sent only to an https site to its http equivalent could allow an active attacker to steal the information in that cookie). Likewise for many other attacks involving the same-origin policy, where http and https are considered different origins.
    • Some of it (e.g., identifying https pages that load resources over http as insecure) is intended to prevent large classes of mistakes that would otherwise be widespread and drastically reduce the security of the Web. Circa 2000, a common Web developer complaint about browser security UI was that a site couldn't be considered secure if an image was loaded over HTTP. This might have been fine if the image was the company logo (and the attack under consideration was avoiding theft of money or credentials rather than avoiding monitoring), but isn't fine if the image is a graph of a bank account balance or if the image's URL has authentication information in it. (On the other hand, if it were a script rather than an image, an active attacker could compromise the entire page if the script could be loaded without authentication.) I think a similar rationale applies for not having mechanisms to do authentication without encryption (even though there are many cases where that would be fine).

    It's not clear to me how Tim's proposal of making http secure would address these issues (and keep everything else working at the same time). For example, is a secure-http page same-origin with insecure-http on the same host, or with https, or neither? They may well be solvable, but I don't see how to solve them off the top of my head, and I think they'd need to be solved before actually pursuing this approach.

  • One problem that I think is worth solving is that HTTPS as a user-presentable prefix has largely failed. Banks tell their customers to go to links like "bofa.com/activate" or "wellsfargo.com/activate". (The first one doesn't even work if the user adds "https://". I guess there's a chance that the experience of existing users could be fixed with HSTS, but that's not the case today.) They do this for a good reason; each additional character (especially the strange characters) is going to reduce the chance the user succeeds at the task.

    It's possible Tim's proposal might help solve this, although it's not clear to me how it could do so with an active man-in-the-middle attacker. (It could help against passive attackers, as could browsers trying https before trying http.) In the long term, maybe the Web will get to a point where typing such URLs tries https and doesn't try http, but I think we're a long way away from a browser being able to do that without losing a large percentage of its users.

I think I basically understand the current approach of migrating to secure connections by migrating to https, which seems to be working, although slowly. I'm hopeful that Let's Encrypt will help speed this up. It's possible that the approach Tim is suggesting could lead to a faster migration to secure connections on the Web, although I don't see enough in Tim's article to evaluate its security and feasibility.

The Servo BlogThis Week In Servo 31

In the past two weeks, we merged 100 pull requests.

Lars and Mike wrote a blog post on Servo on the Samsung OSG blog.

Notable additions

New contributors

Screenshots

This is a simple demo of our new WebGL support.

Meetings

Minutes

  • We’re trying out Reviewable for code review instead of Critic. It’s pretty neat!
  • Homu is working out very well for us
  • We ought to have some new team members soon!
  • Integration with the Firefox timeline devtool has landed

Air MozillaGerman speaking community bi-weekly meeting

German speaking community bi-weekly meeting Zweiwöchentliches Meeting der deutschsprachigen Community. ==== German speaking community bi-weekly meeting.

Jeff WaldenSpecialty plates in circuit courts, and the parties’ arguments at the Supreme Court

Yesterday I discussed the background to Walker v. Texas Division, Sons of Confederate Veterans. Stated briefly, Texas denied Texas SCV‘s application for a specialty license plate with a Confederate flag on it, because the design might be “offensive”. The question is whether Texas is required by the First Amendment to grant the application.

Today I discuss how specialty plate programs have fared in lower courts, and the arguments Texas and Texas SCV bring to the case.

In the courts

Almost every circuit court has required that specialty plate programs be viewpoint-neutral, not restricting designs because of their views. (And the one exception judged a program without an open invitation for designs.) So it’s unsurprising that Texas SCV won its Fifth Circuit case.

Texas appealed to the Supreme Court, which agreed to answer two questions. Are specialty plate programs “government speech” that need not be viewpoint-neutral, such that the design can be rejected as “offensive” (or, indeed, for almost any reason)? And did Texas discriminate by viewpoint in rejecting Texas SCV’s design?

Texas’s argument

Texas says license plates are entirely the government speaking, and it can say or not say whatever it wants. Texas relies on two cases: Pleasant Grove City v. Summum, in which a city’s approval of a limited set of monuments in its city park (and denial of a particular monument) was deemed government speech; and Johanns v. Livestock Marketing Association, in which a government beef-promotion plan that exacted a fee from beef producers to support speech (including the Beef. It’s What’s For Dinner tagline) was deemed government speech that program participants couldn’t challenge on the grounds that it compelled them to speak.

According to Texas, its specialty plates are government speech because Texas “effectively control[s]” the whole program. What matters is whether Texas “exercises final approval authority over every word used” — and it does. Texas allows private citizens to participate, but it has “final approval authority” over every design. Texas also argues that it can’t be compelled to speak by displaying the Confederate flag. By making a license plate, the state’s authority backs (or doesn’t back) every design approved or rejected. Plate purchasers shouldn’t be able to force Texas to espouse the views of an unwanted specialty plate, which drivers would then ascribe to Texas.

The Texas-approved Mighty Fine Burger specialty plate

And of course, Texas says ruling against them would lead to “untenable consequences”. For every “Stop Child Abuse” plate there’d have to be an opposing plate supporting child abuse, and so on for the whole parade of horribles. Texas particularly notes that the Eighth Circuit forced Missouri to let the Ku Klux Klan join the state’s Adopt-a-Highway program under this logic. (The person behind me in the oral argument line related that one of the highways entering Arkansas was adopted by the KKK under that rule, giving Arkansas visitors that delightful first impression of the state.)

Texas also asserted that assessing how members of the public view a specialty plate is “an objective inquiry”, so that deciding a specialty plate “might be offensive” doesn’t discriminate on the basis of the specialty plate’s viewpoint. As to the Fifth Circuit’s criticism of the “unbridled discretion” provided by the “might be offensive” bar, Texas instead describes it as “discriminating among levels of offensiveness”, such latitude permitted because the state is “assisting speech”.

Texas SCV’s argument

Texas SCV says Texas is being hypocritical. The Capitol gift shop sells Confederate flags. Texas recognizes a state Confederate Heroes Day. It maintains monuments to Confederate soldiers. Either Texas doesn’t really think the Confederate flag is offensive to the public, or its other “government speech” is flatly inconsistent with its specialty-plate stance.

Texas SCV also distinguishes the plates designed by the state legislature from plates designed by private entities. The former are the product of the government, but the only government involvement in the latter is in approval or disapproval. The driver has ultimate control, because only when he designs a plate and ultimately drives a vehicle with it does speech occur. And under Wooley v. Maynard — a case where a Jehovah’s Witness protested New Hampshire’s fining of people who covered up “Live Free or Die” on their license plates, and the Court said New Hampshire couldn’t force a person to espouse the state motto — it’s the individual’s speech (at least for non-legislatively-designed plates).

Texas SCV brushes off Summum and Johanns. Permanent monuments in parks have always been associated with the government, because parks physically can’t accommodate all monuments. Not so for license plates. (And Texas’s $8000 deposit covers startup costs that might justify treating rare plates differently.) And while the beef-promotion messages were part of a “coordinated program” by government to “advance the image and desirability of beef and beef products”, privately-designed specialty plates are not – especially as their fully contradictory messages are “consistent” only as a fundraiser.

Finally, given that privately-designed specialty plates are private speech, the First Amendment requires that restrictions be viewpoint-neutral. By restricting Texas SCV’s message based on its potential for offensiveness, Texas endorsed viewpoints that deem the Confederate flag racist and discriminated against viewpoints that do not.

Tomorrow, analysis of Texas’s government speech and compelled speech arguments.

Edward LeeWhys and Hows of Suggested Tiles

As Darren discussed on Monday’s project call [wiki.mozilla.org], Suggested Tiles has been on track to go live to a larger audience next week for wider beta testing. I’ll provide some context around why we implemented this feature, details of how it works, and open questions of how to make it even better.

A suggestion made for people who visit Mozilla sites

We’ve been looking for ways to improve the user experience within Firefox by combining data that Firefox knows about with data Mozilla can provide. We’re also in a good position to work with the advertising ecosystem so that we can change it to care more about the values Mozilla cares about. We want to create advertising products that respect users’ choice and control over their data, and get others interested by showing that money can be made.

Last November, we launched Tiles with a framework to show external content within Firefox’s new tab page. A relatively small portion of Firefox users saw these because they only showed up if there would have been empty tiles, i.e., new users with little history or existing users who cleared history. Suggested Tiles expands on this to be a little bit smarter by showing content based on the user’s top sites. For example, if a user tends to visit sites about mobile phones, Firefox can now decide to show a suggestion for Firefox for Android.

We intend to bring value to users by showing them content that they would be interested in and engage with. On the flip side, this means we purposefully hold back on showing content that users might get annoyed with and block. We do this in a way that requires using a minimal amount of the user’s data, and as usual, we provide controls to the user to turn things off if that’s desired. For the initial release of Suggested Tiles, we plan to show content from Mozilla such as mobile Firefox, MDN, and HTML5 gaming.

The Tiles framework has been built in ways that are different from traditional web advertising in both how it gets data into Firefox [blog.mozilla.org] and how it reports on Tiles performance. The two linked posts have quite a bit of detail, but to summarize, Firefox makes generic encrypted cookieless requests to get enough data to decide locally in Firefox whether content should be added to the new tab page. In order for us to have data on how to improve the experience, Firefox reports back when users see and click on these tiles and includes tile data such as IDs of added content but no URLs. We have aggressive data deletion policies and don’t keep any unique identifiers that can be associated with our users.

The technical changes to support Suggested Tiles are not overly complex as the server provides one additional value specifying when a suggestion should be shown. This value informs Firefox which sites need to be in the user’s top sites before showing the tile. The reporting mechanism is unchanged, so if a Suggested Tile is shown or clicked, Firefox reports back the tile’s ID and no URLs just as before.

Even though the technical changes are not too complicated, the effect of this can be significant. In particular, Firefox reports back if it shows a Suggested Tile, and if that tile is only shown when the user has been to one of various news sites, Firefox reveals to our servers that this user reads news. Our handling of the data is no different from before.

Because we care about user privacy, we have policies around how and what suggestions can be made. For example, to match on news sites, we make sure there are at least 5 popular news sites for Firefox to check against. This gives the user deniability as to which site they actually visited. We also focus on broad, uncontroversial topics, so we don’t make suggestions based on adult content or illegal gambling sites.

An additional layer of protection is built into Firefox by only allowing predefined sets of sites [hg.mozilla.org] for making suggestions. These include mobile phones, technology news, web development, and video games. This rigidness protects Firefox from accepting fake/malicious suggestions that could reveal data Mozilla doesn’t want to collect.

This last point is important to highlight because we want to have a discussion around how we can be more flexible in showing more relevant content and fixing mistakes. For example, people who care about video games might only care about a specific gaming platform, but because Firefox only allows for the predefined sites, we would end up suggesting content that many users didn’t actually want to see.

Feel free to respond with comments about Suggested Tiles or to join in on the discussion about various topics on dev.planning [groups.google.com]:

As usual, we have the source code available for Firefox [hg.mozilla.org] and our servers that send/receive tiles data [github.com], create tiles, and process tiles data. You can also find additional details in the Directory Links Architecture and Data Formats documentation [people.mozilla.org].

Air MozillaCommunity Education Call

Community Education Call The Community Education Working Group exists to merge ideas, opportunities, efforts and impact across the entire project through Education & Training.

Air MozillaReps weekly

Reps weekly Weekly Mozilla Reps call

William LachancePyCon 2015

So I went to PyCon 2015. While I didn’t leave quite as inspired as I did in 2014 (when I discovered iPython), it was a great experience and I learned a ton. Once again, I was incredibly impressed with the organization of the conference and the diversity and quality of the speakers.

Since Mozilla was nice enough to sponsor my attendance, I figured I should do another round up of notable talks that I went to.

Technical stuff that was directly relevant to what I work on:

  • To ORM or not to ORM (Christine Spang): Useful talk on when using a database ORM (object-relational mapper) can be helpful and even faster than using a database directly. I feel like there’s a lot of misinformation and FUD on this topic, so this was refreshing to see. video slides
  • Debugging hard problems (Alex Gaynor): Exactly what it says — how to figure out what’s going on when things aren’t behaving as they should. Great advice and wisdom in this one (hint: take nothing for granted, dive into the source of everything you’re using!). video slides
  • Python Performance Profiling: The Guts And The Glory (Jesse Jiryu Davis): Quite an entertaining talk on how to properly profile python code. I really liked his systematic and realistic approach — which also discussed the thought process behind how to do this (hint: again it comes down to understanding what’s really going on). Unfortunately the video is truncated, but even the first few minutes are useful. video

Non-technical stuff:

  • The Ethical Consequences Of Our Collective Activities (Glyph): A talk on the ethical implications of how our software is used. I feel like this is an under-discussed topic – how can we know that the results of our activity (programming) serve others and do not harm? video
  • How our engineering environments are killing diversity (and how we can fix it) (Kate Heddleston): This was a great talk on how to make the environments in which we develop more welcoming to under-represented groups (women, minorities, etc.). This is something I’ve been thinking a bunch about lately, especially in the context of expanding the community of people working on our projects in Automation & Tools. The talk had some particularly useful advice (to me, anyway) on giving feedback. video slides

I probably missed out on a bunch of interesting things. If you also went to PyCon, please feel free to add links to your favorite talks in the comments!

Mozilla Release Management Team38.0 & 38.0.5: An update on the uplifts and branches

During the 38 cycle, we are going to publish a release between 38 & 39 (called 38.0.5).

In order to continue the development of 38 & 38.0.5 in parallel, we merged mozilla-beta (m-b) into mozilla-release (m-r).

Before:

  • m-b = 38.0.0 beta
  • m-r = 37.0.2

Now:

  • m-b = 38.0.5 beta (even if we won't build any for now)
  • m-r = 38.0 beta (next one being beta7)

We will do regular m-r => m-b merges to make sure 38.0.5 is up to date.

This does not impact aurora (aka 39). In case we have to make a new 37 dot release, we would use a relbranch.

The m-b tree is closed to avoid any confusion.

Last but not least, uplift requests for 38 should be filed against mozilla-release, while requests for 38.0.5 should go to mozilla-beta. However, release managers and sheriffs will translate the information if the uplift requests are incorrect.

The schedule has been updated.

Ian BickingA Product Journal: As A Building Block

I’m blogging about the development of a new product in Mozilla, look here for my other posts in this series

I teeter between thinking big about PageShot and thinking small. The benefit of thinking small is: how can this tool provide value to people who wouldn’t know if it would provide any value? And: how do we get it done?

Still I can’t help but thinking big too. The web gave us this incredible way to talk about how we experience the web: the URL. An incredible amount of stuff has been built on that, search and sharing and archiving and ways to draw people into content and let people skim. Indexes, summaries, APIs, and everyone gets to mint their own URLs and accept anyone else’s URLs, pointing to anything.

But not everyone gets to mint URLs. Developers and site owners get to do that. If something doesn’t have a URL, you can’t point to it. And every URL is a pointer, a kind of promise that the site owner has to deliver on, and sometimes doesn’t choose to, or they lose interest.

I want PageShot to give a capability to users, the ability to address anything, because PageShot captures the state of any page at a moment, not an address so someone else can try to recreate that page. The frozen page that PageShot saves is handy for things like capturing or highlighting parts of the page, which I think is the feature people will find attractive, but that’s just a subset of what you might want to do with a snapshot of web content. So I also hope it will be a building block. When you put content into PageShot, you will know it is well formed, you will know it is static and available, you can point to exact locations and recover those locations later. And all via a tool that is accessible to anyone, not just developers. I think there are neat things to be built on that. (And if you do too, I’d be interested in hearing about your thoughts.)

Jeff WaldenTexas specialty license plates

Yesterday I discussed the second Supreme Court oral argument I attended in a recent trip to the Supreme Court. Today I describe the basic controversy in the first oral argument I attended, in a case potentially implicating the First Amendment. First Amendment law is complicated, so this is the first of several posts on the case.

Texas specialty license plates

State license plates, affixed to vehicles to permit legal use on public roads, typically come in one or very few standard designs. But in many states you can purchase a specialty plate with special imagery, designs, coloring, &c. (Specialty plates are distinct from “vanity” plates. A vanity plate has custom letters and numbers, e.g. a vegetarian might request LUVTOFU.) Some state legislatures direct that specialty designs delivering particular messages be offered. And some state legislatures enact laws that permit organizations or individuals to design specialty plates.

The state of Texas sells both legislatively-requested designs and designs ordered by organizations or individuals. (The latter kind requires an $8000 bond, covering ramp-up costs until a thousand plates are sold.) The DMVB evaluates designs for compliance with legislated criteria: for example, reflectivity and legibility concerns. One criterion allows (but does not require) Texas to reject “offensive” plates.

The department may refuse to create a new specialty license plate if the design might be offensive to any member of the public.

An “offensive” specialty plate design

Texas rejected one particular design for just this reason. As they say, a picture is worth a thousand words:

A Texas license plate with the Texas Sons of Confederate Veterans logo on the left side, prominently including a Confederate flag
The Texas Sons of Confederate Veterans’ proposed specialty plate…incorporating a Confederate flag. (Yes, Texas – including Rick Perry and Greg Abbott both – rejected this design.) (source)

For those unfamiliar with American imagery: the central feature of the Texas SCV insignia is the Confederate flag. Evoking many things, but in some minds chiefly representative of revanchist desire to resurrect Southern racism, Jim Crow, and the rest of that sordid time. Such minds naturally find the Confederate flag offensive.

Is the SCV actually racist? (Assuming you don’t construe mere use of the flag as prima facie evidence.) A spokesman denies the claim. Web searches find some who disagree and others who believe it is (or was) of divided view. I find no explicit denunciation of racism on the SCV’s website, but I searched only very briefly. Form your own conclusions.

Tomorrow, specialty plate programs in the courts, and the parties’ arguments.

Air MozillaProduct Coordination Meeting

Product Coordination Meeting Duration: 10 minutes This is a weekly status meeting, every Wednesday, that helps coordinate the shipping of our products (across 4 release channels) in order...

Mozilla Release Management TeamFirefox 38 beta5 to beta6

A smaller beta release.

In this release, we disabled screen sharing (it will arrive with 38.0.5); reading list and reading view are going to be disabled in beta 7. We also took some stability fixes (as usual) and some polishing patches.

  • 32 changesets
  • 71 files changed
  • 857 insertions
  • 313 deletions

Extension    Occurrences
js           25
cpp          11
jsm          7
java         6
mn           5
ini          4
h            4
html         2
css          2
xul          1
list         1
json         1
idl          1

Module       Occurrences
browser      29
toolkit      9
mobile       7
dom          7
js           6
testing      4
layout       3
widget       2
modules      1
media        1
docshell     1

List of changesets:

Carsten Tomcat Book – Bug 1155679 - "mozharness update to ref 4f1cf3369955" on a CLOSED TREE . r=ryanvm, a=test-only - 6103268d785d
Terrence Cole – Bug 1152177 - Make jsid and Value pre barriers symetrical. r=jonco, a=abillings - d79194507f32
Mats Palmgren – Bug 1153478 - Part 1: Add nsInlineFrame::StealFrame and make it deal with being called on the wrong parent for aChild (due to lazy reparenting). r=roc, a=sledru - 18b8b10f2fbd
Mats Palmgren – Bug 1153478 - Part 2: Remove useless assertions. r=roc, a=sledru - e1dd0d7756c5
Mike Shal – Bug 1152031 - Bump mozharness.json to 23dee28169d6. a=test-only - 4411b07ee6bd
Gijs Kruitbosch – Bug 1153900 - Fix IE cookies migration. a=sylvestre - 55837b9aa111
Jim Chen – Bug 1072529 - Only create GeckoEditable once. r=esawin, a=sledru - 69e54b268783
Paul Adenot – Bug 1136360 - Take into account the output device latency in the clock, and be more robust about rounding error accumulation, in cubeb_wasapi.cpp. r=kinetik, a=sledru - fff936b47a9f
Mike de Boer – Bug 1155195 - Disable Loop screensharing for Fx38. r=Standard8, a=sledru - 6a5c3aa5b912
Gijs Kruitbosch – Bug 1153900 - add fixes to tests for aurora, rs=me, a=RyanVM - b158e9bdd8a0
Robert Strong – Bug 1154591 - getCanStageUpdates has incorrect checks for Windows. r=spohl, a=sledru - 86d3b1103197
Ed Lee – Bug 1152145 - Filter for specific suggested tiles adgroups/buckets/frecent_sites lists with display name [r=adw, a=sylvestre] - e66ad17db13f
Gijs Kruitbosch – Bug 1147487 - Don't bother sending reader mode updates when isArticle is false. r=margaret, a=sledru - 125ec6c54576
Ehsan Akhgari – Bug 1151873 - Stop forcing text/plain-only content being copied to the clipboard when an ancestor of the selected node has significant whitespace. r=roc, a=sledru - 7e31d76c4d7b
Margaret Leibovic – Bug 785549 - Use textContent instead of innerHTML to set domain and credits in reader view. r=Gijs, a=sledru - 38e095acde46
Paul Kerr [:pkerr] – Bug 1154482 - about:webrtc intermittently throws a js type error. r=jib, a=sledru - 899ee022ed4c
Jared Wein – Bug 1134501 - UITour: Force page into Reader View automatically whenever the ReaderView/ReadingList tour page is loaded. r=gijs, a=dolske - e5d6dc48f6de
Gijs Kruitbosch – Bug 1152219 - Make reader mode node limit a pref, turn off entirely for desktop because of isProbablyReaderable. r=margaret, a=sledru - 4a98323f8e68
Gijs Kruitbosch – Bug 1124217 - Don't gather telemetry for windows that have died. r=mconley, a=sledru - 849bf3c58408
Blake Winton – Bug 1149068 - Use the correct font for the Sans Serif font button. ui-r=maritz, r=jaws, r=margaret, a=sledru - 44de10db57a6
Gijs Kruitbosch – Bug 1155692 - Include latest Readability/JSDOMParser changes into m-c. a=sledru - eb5e2063637b
Bas Schouten – Bug 1150376 - Do not try to use D3D11 for popup windows. r=jrmuizel, a=sledru - 746934eab883
Bas Schouten – Bug 1155228 - Only use basic OMTC for popups when using WARP. r=jrmuizel, a=sledru - 4dc8d874746b
Olli Pettay – Bug 1153688 - Treat JS Symbol as void on C++ side of Variant. r=bholley, a=abillings - 18af6cfb3b86
Chenxia Liu – Bug 1154980 - Localize first run pager titles. r=ally, a=sledru - 65cf03fc2bc9
Gijs Kruitbosch – Bug 1141031 - Fix in-content prefs dialogs overflowing. r=jaws, a=sledru - 9117f9af554e
Boris Zbarsky – Bug 1155788 - Make the Ion inner-window optimizations work again. r=efaust, a=sledru - e4192150f53a
Gijs Kruitbosch – Bug 1150520 - Disable EME for Windows XP. r=dolske, a=sledru - 704989f295eb
Luke Wagner – Bug 1152280 - OdinMonkey: tighten changeHeap mask validation. r=bbouvier, a=abillings - 5dc0d44c8dbd
Boris Zbarsky – Bug 1154366 - Pass in a JSContext to StructuredCloneContainer::InitFromJSVal so it will throw its exceptions somewhere where people might see them. r=bholley, ba=sledru - 72f1b4086067
Ryan VanderMeulen – Bug 1150376 - Fix rebase typo. a=bustage - f3dd042acc18
Ralph Giles – Bug 1144875 - Disable EME on ESR releases. r=dolske, a=sledru - 630336da65f2

Air MozillaThe Joy of Coding (mconley livehacks on Firefox) - Episode 11

The Joy of Coding (mconley livehacks on Firefox) - Episode 11 Watch mconley livehack on Firefox Desktop bugs!

Mozilla Science LabIntroducing Mozilla Science Study Groups

For a long time now, I’ve been thinking about three big challenges in open science:

  • Coding is hard enough by any measure – coding for sharing & reuse is even more demanding. Given that our traditional education system isn’t yet imparting these skills to scientists & researchers, and given that it takes sustained practice over a long time to integrate these skills into our research, how can we help build those skills at scale?
  • Many students and early career researchers feel intensely isolated and unsupported in their efforts to learn to code, leading to fear of embarrassment before their colleagues, struggles with imposter syndrome, and uncertainty on how or even if to proceed with their research careers.
  • The production of open source software in support of open science is not enough on its own; we also need to lower the barriers to discoverability and collaboration so that those projects actually get reused, as was done at the NCEAS Codefest last year – but we need to do it at scale and at home, without requiring expensive trips to conferences.

At some level, these are all the same problem: they are all endemic to a fragmented community. Taken all together, the scientific community has a huge amount of programming knowledge; but it’s split up across individuals that rarely have the opportunity to share that knowledge. Crippling self doubt often arises not from genuine inadequacy, but a loss of perspective that comes from working in isolation where it becomes possible to imagine that we are the worst of all our peers. And as we saw at the NCEAS event, the so-called discoverability problem evaporates very quickly with even a small group of people pooling their experience.

The skills & knowledge we need are there in pieces – we have to find a way to assemble them in a way that elevates us all. The Mozilla Science Lab thinks we can do this via a loose federation of Study Groups.

Our Powers Combined

I started thinking about Study Groups last Autumn, after a conversation with Rachel Sanders (PyLadies San Francisco); Sanders described regular small PyLadies meetups where learners would support each other as they explored a tutorial, project or idea, where the emphasis was on communal, participatory learning, where lecturing and leadership roles took a distant back seat, and where learning occurred over the long term. By blending these ideas with something like the journal clubs familiar to many academics, I think we can build Study Groups that powerfully address the questions I started with. I’d like Study Groups to do a few things:

  • Promote learning via a network effect of skill sharing. By highlighting the authentic, practice-driven use of code, tools and packages led by the researchers who actually use them in the wild, we create an exchange of skills that scales, grows richer and tracks real scientific practice the more people participate.
  • Create and normalize a custom of discussing code as a research object. Scientists and researchers need forums where the focus is on code and the methodologies surrounding it, in order to create space for the conversations that lead to discovering new tools and improving personal practice.
  • Acknowledge the ongoing process of learning to code by putting that learning process out in the open & making it shared among colleagues, in order to dispel the misconception that these skills are intuitive, obvious, or in any way inherent.

In practice, these things can be achieved by getting together in an open meetup anywhere from once a month to once a week, where individuals can lead follow-along demos, have a co-working space to explore and experiment together, and everyone feels comfortable asking the group for ideas and help.

Predecessors & Beta Tests

A number of powerful examples of similar groups predate this project, and I had the good fortune to learn from them over the past several months. Noam Ross leads the Davis R User’s Group, a tremendously successful R meetup that has generated a wealth of teaching content on R over the past few years; Ross also organized a recent Ask Us Anything panel on the Mozilla Science Forum, and invited the leads from a number of different similar programs to sit in and share their stories and experiences. I met Rob Johnson and others behind Data Science Hobart while I was in Australia recently; DaSH is doing an amazing job of pulling in speakers and demo leaders from an eclectic range of disciplines and interests, to great effect. And I’ve recently had the privilege of sitting in on lessons from the UBC Earth & Ocean Science coding workout group, which informed my thinking around community-led demos on tools as they are actually used, such as Kathi Unglert‘s work on awk and Nancy Soontiens‘s basemap demo.

Hacky Hour is one of the many awesome ideas from the Research Bazaar.

From these examples and others, I and a team of people at UBC began discussing what a Study Group could look like. For the first few weeks, we met over beers at a university pub, in the Hacky Hour tradition started by our colleagues in Melbourne at the Research Bazaar. Enthusiasm was high – people were very keen to have a place to come and learn about coding in the lab, and find out what that would look like. Soon, with the help of many but particularly with the energetic leadership and community organizing of Amy Lee, we had booked our first event; Andrew MacDonald led a packed (and about 2/3 female) room through introductory R, and within 24 hours attendees had stepped up to volunteer to lead half a dozen further sessions on more advanced topics in R from their research.

There was no shortage of enthusiasm at UBC for the opportunities a Study Group presented, and I see no reason why UBC should be a unique case; the Mozilla Science Lab is prepared to help support and iterate on similar efforts where you are. All that’s required to start a Study Group at your home institution is your leadership.

Your Turn

In order to support you as you start your own Study Group, the Mozilla Science Lab has a collection of tools for you:

  • We’ve built a template website using GitHub Pages that you can fork and remix for your own use. Not only is the website served automagically from GitHub, but we took a page from Nodeschool.io, and set things up to direct conversation & event listings to your issue tracker, thus adding a free message board & mailing list. Check out the Vancouver R Study Group‘s use of the page; setup instructions are in the README, as well as on YouTube – and as always, feel free to open an issue or contact us at sciencelab@mozillafoundation.org if something isn’t working for you.
  • We’ve written a first draft of the Study Group Handbook, that pulls in lessons learned from other groups and guides newcomers through the process of setting up their own, including a step-by-step guide for your first few events, lesson resources, and more. This is a work in progress, and it’ll only get better as more people try it out and send us feedback!
  • We have begun to collect lesson plans & resources delivered in similar meetups for reuse community-wide. If you’d like to maintain your own lessons, send us a link and we’ll point to your work from our Study Group Handbook; if you’d rather we do the maintenance for you, send a pull request to our collection and we’ll make sure your work helps elevate the entire community.
  • Finally, get on the map! Whether you start a Study Group with our tools, or you’re in one running on its own, send us a link and a location and we’ll add you to the map of Study Groups Worldwide, so others in your community can find your meetup, and we can all see the global community that is emerging around working together.

We’re very much looking forward to working with you to help you spool up your own Study Group, and learn from your experiences on how to make this program what the research community needs it to be; we hope you’ll join us.

Study Groups, Hacky Hours & Open Science Meetups

Adam LoftingMeasuring Quality

At the end of last year, Cassie raised the question of ‘how to measure quality?’ on our metrics mailing list, which is an excellent question. And like the best questions, I come back to it often. So, I figured it needed a blog post.

There are a bunch of tactical opportunities to measure quality in various processes, like the QA data you might extract from a production line for example. And while those details interest me, this thought process always bubbles up to the aggregate concept: what’s a consistent measure of quality across any product or service?

I have a short answer, but while you’re here I’ll walk you through how I get there. Including some examples of things I think are of high quality.

One of the reasons this question is interesting, is that it’s quite common to divide up data into quantitative and qualitative buckets. Often splitting the crisp metrics we use as our KPIs from the things we think indicate real quality. But, if you care about quality, and you operate at ‘scale’, you need a quantitative measure of quality.

On that note, in a small business or on a small project, the quality feedback loop is often direct to the people making design decisions that affect quality. You can look at the customers in your bakery and get a feel for the quality of your business and products. This is why small initiatives are sometimes immensely high in quality but then deteriorate as they attempt to replicate and scale what they do.

What I’m thinking about here is how to measure quality at scale.

Some things of quality, IMHO:

This axe is wonderful. As my office is also my workshop, this axe is usually near to hand. It will soon be hung on the wall. Not because I am preparing for the zombie apocalypse, but because it is both useful as a tool, and as a visual reminder about what it means to build quality products. If this ramble of mine isn’t enough of a distraction, watch Why Values are Important to understand how this axe relates to measures of quality especially in product design.

This toaster is also wonderful. We’ve had this toaster more than 10 years now, and it works perfectly. If it were to break, I can get the parts locally and service it myself (it’s deliberately built to last and be repaired). It was an expensive initial purchase, but works out cheap in the long run. If it broke today, I would fix it. If I couldn’t fix it for some extreme reason, I would buy the same toaster in a blink. It is a high quality product.

This is the espresso coffee I drink every day. Not the tin, it’s another brand that comes in a bag. It has been consistently good for a couple of years until the last two weeks when the grind has been finer than usual and it keeps blocking the machine. It was a high-quality product in my mind, until recently. I’ll let another batch pass through the supermarket shelves and try it again. Otherwise I’ll switch.

This spatula looks like a novelty product, and typically I don’t think very much of novelty products in place of useful tools, but it’s actually a high quality product. It was a gift, and we use it a lot and it just works really well. If it went missing today, I’d want to get another one the same. That said, it’s surprisingly expensive for a spatula. I’ve only just looked at the price, as a result of writing this. I think I’d pay that price though.

All of those examples are relatively expensive products within their respective categories, but price is not the measure of quality, even if price sometimes correlates with quality. I’ll get on to this.

How about things of quality that are not expensive in this way?

What is quality music, or art, or literature to you? Is it something new you enjoy today? Or something you enjoyed several years ago? I personally think it’s the combination of those two things. And I posit that you can’t know the real quality of something until enough time has passed. Though ‘enough time’ varies by product.

Ten years ago, I thought all the music I listened to was of high quality. Re-listening today, I think some of it was high-quality. As an exercise, listen to some music you haven’t listened to for a while, and think about which tracks you enjoy for the nostalgia and which you enjoy for the music itself.

In the past, we had to rely on sales as a measure of the popularity of music. But like price, sales don’t always relate to quality. Initial popularity indicates potential quality, but not quality in itself (or it indicates manipulation of the audience via effective marketing). Though there are debates around streaming music services and artist payment, we do now have data points about the ongoing value of music beyond the initial parting of listener from cash. I think this can do interesting things for the quality of music overall; in particular, the future is bleak for album filler tracks when you’re paid per stream.

Another question I enjoy thinking about is why over the centuries, some art has lasting value, and other art doesn’t. But I think I’ve taken enough tangents for now.

So, to join this up.

My view is that quality is reflected by loyalty. And for most products and services, end-user loyalty is something you can measure and optimize for.

Loyalty comes from building things that both last, and continue to be used.

Every other measurable detail about quality adds up to that.

Reducing the defect rate of component X by 10% doesn’t matter unless it impacts end-user loyalty.

It’s harder to measure, but this is true even for things which are specifically designed not to last. In particular, “experiences”; a once-in-a-lifetime trip, a festival, a learning experience, etc, etc. If these experiences are of high quality, the memory lasts and you re-live them and re-use them many times over. You tell stories of the experience and you refer your friends. You are loyal to the experience.

Bringing this back to work.

For MoFo colleagues reading this, our organization goals this year already point us towards Quality. We use the industry term ‘Retention’. We have targets for Retention Rates and Ongoing Teaching Activity (i.e. retained teachers). And while the word ‘retention’ sounds a bit cold and business-like, it’s really the same thing as measuring ‘loyalty’. I like the word loyalty but people have different views about it (in particular whether it’s earned or expected).

This overarching theme also aligns nicely with the overall Mozilla goal of increasing the ‘number of long term relationships’ we hold with our users.

Language is interesting though. Thinking about a ‘20% user loyalty rate’ 7 days after sign-up focuses my mind slightly differently than a ‘20% retention rate’. ‘Retention’ can sound a bit too much like ‘detention’, which might explain why so many businesses strive for consumer ‘lock-in’ as part of their business model.
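To make that arithmetic concrete, here is a rough sketch of what a 7-day retention (or loyalty) rate calculation looks like. The data shape and field names are made up for illustration; this is not our actual pipeline.

// Hypothetical example only: compute an N-day retention rate from a list of
// users, where each user record has a sign-up time and a last-active time.
var DAY = 24 * 60 * 60 * 1000;

function retentionRate(users, days) {
  var eligible = users.filter(function (u) {
    // Only count users who signed up at least `days` ago.
    return Date.now() - u.signedUpAt >= days * DAY;
  });
  var retained = eligible.filter(function (u) {
    // "Retained" here means they came back `days` or more after signing up.
    return u.lastActiveAt - u.signedUpAt >= days * DAY;
  });
  return eligible.length ? retained.length / eligible.length : 0;
}

// e.g. retentionRate(users, 7) === 0.2 is the '20% rate' mentioned above.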

Talking to OpenMatt about this recently, he put a better MoFo frame on it than loyalty: retention is a measure of how much people love what we’re doing. When we set goals for increasing retention rate, we are committing to building things people love so much that they keep coming back for more.

In summary:

  • You can measure quality by measuring loyalty
  • I’m happy retention rates are one of our KPIs this year

My next post will look more specifically at the numbers and how retention rates factor into product growth.

And I’ll try not to make it another essay. ;)

Robert O'Callahan: rr 3.1 Released

I released rr 3.1 just now. It contains reverse execution support, but I'm not yet convinced that feature is stable enough to release rr 4.0. (Some people are already using it with joy, but it's much more fun to use when you can really trust it to not blow up your debugging session, and I don't yet.) We needed to do a minor release since we've added a lot of bug fixes and system-call support since 3.0, and for Firefox development rr 3.0 is almost unusable now. In particular various kinds of sandboxing support are being added to desktop Linux Firefox (using unshare and seccomp syscalls) and supporting those in rr was nontrivial.

In the future we plan to do more frequent minor releases to ensure low-risk bug fixes and expanded system-call support are quickly made available in a release.

An increasing number of Mozilla developers are trying rr, which is great. However, I need to figure out a way to measure how much rr is being used successfully, and the impact it's having.

Laura de Reynal: Understanding Web Literacy within the Web Journey

Thinking out loud with Michelle Thorne, and posted on the Webmaker Blog.

Since 2012, pioneering educators and web activists have been reflecting and developing answers to the question, “What is web literacy?”

These conversations have shaped our Web Literacy Map, a guiding document that outlines the skills and competencies that are essential to reading, writing, and participating on the Web.

Just the other week, we wrapped up improvements to the Web Literacy Map, proudly unveiling version 1.5. Thank you to all who contributed to that discussion, and to Doug Belshaw for facilitating it.

We believe being web literate is not just knowing how to code in HTML, CSS, and JavaScript. These are great tools, but they’re only one aspect of being a Web creator and citizen. Therefore, the updated Web Literacy Map includes competencies like privacy, remixing, and collaboration.

As we design and test offerings to foster web literacy, we are also determining how these skills fit into a larger web journey. Prompted by user research in Bangladesh, India, Kenya, and beyond, we’re asking: What skill levels and attitudes encourage people to learn more about web literacy? And how can one wield the Web after learning its fundamentals?

Mozilla believes this is an important question to reflect on in the open. With this blog post, we’d like to start a series of discussions, and warmly invite you to think this through with us.

What is the Web Journey?

As we talked to 356 people in four different countries (India, Bangladesh, Kenya, and Brazil) over the past six months, we learned how people perceive and use the Web in their daily lives. Our research teams identified common patterns, and we gathered them into one framework called “The Web Journey.”

The Web Journey

This framework outlines five stages of engagement with the Web:

  • Unaware: Have never heard of the Web, and have no idea what it is (for example, these smartphone owners in Bangladesh)
  • No use: Are aware of the existence of the Web, but do not use it, either by rejection (“the Web is not for me, women don’t go online”), inability (“I can’t afford data”), or perceived inability (“The Web is only for businessmen”)
  • Basic use: Are online, and are stuck in the “social media bubble,” unaware of what else is possible (Internet = Facebook). These users have little understanding of the Web, and don’t leverage its full range of possibilities
  • Leverage: Are able to seize the opportunities the Web has to offer to improve their quality of life (to find jobs, to learn, or to grow their business)
  • Creation: From the tinkerer to the web developer, creators understand how to build the Web and are able to make it their own

You can read the full details of the Web Journey, with constraints and triggers, in the Webmaker Field Research Report from India.

Why do the Web Literacy Map and the Web Journey fit together?

While the Web Literacy Map explores the skills needed, the Web Journey describes various stages of engagement with the Web. It appears certain skills may be more necessary for some stages of the Web Journey. For example: Is there a list of skills that people need to acquire to move from “Basic use” to “Leverage”?

As we continue to research digital literacy in Chicago and London (April – August 2015), we’ll seek to understand how to couple skills listed in the Web Literacy Map with steps of engagement outlined in the Web Journey. Bridging the two can help us empower Mozilla Clubs all around the world.

What are the discussion questions?

To kick off the conversation, consider the following:

  1. Literacy isn’t an on/off state. It’s more a continuum, and there are many learning pathways. How can this nuance be illustrated and made more intuitive?
  2. How can we leverage the personal motivators highlighted along the Web Journey to propose interest-driven learning pathways?
  3. Millions of people think Facebook is the Internet. How can the Web Literacy Map be a guide for these learners to know more and do more with the Web?
  4. As web literacy skills and competencies increase throughout a learner’s journey, and as people participate in web cultures, particular attitudes emerge and evolve. What are those nuances of web culture? How might we determine a “fluency” in the Web?
  5. How does the journey continue after someone has learned the fundamentals of the Web? How can they begin to participate in their community and share that knowledge forward? How can mentorship, and eventually leadership, be a more explicit part of a web journey? How do confidence and ability to teach others become part of the web journey?

Michael Kaply: Firefox 38 ESR is Almost Here

Just a reminder that the next Firefox ESR is only three weeks away. In my next post I'll give you some details on what to expect.

Also, if there are any Firefox enterprise topics you'd like to see me cover on my blog, please let me know.

John O'Duinn: “We are ALL Remoties” (Apr2015 edition)

Last week, I had the great privilege of talking with people at Wikimedia Foundation about “we are all remoties”!

This was also the first presentation by a non-Wikimedia person in their brand new space, and was further complicated with local *and* remote attendees! Chip, Greg and Rachel did a great job of making sure everything went smoothly, quickly setting up a complex multi-display remote-and-local video configuration, debugging some initial audio issues, moderating questions from remote attendees, etc. We even had extra time to cover topics like “Disaster Recovery”, “interviewing tips for remoties” and “business remotie trends”. Overall, it was a long, very engaged session, but it felt helpful, informative and great fun, and seemed to be well received by everyone.

As usual, you can get the latest version of these slides, in handout PDF format, by clicking on the thumbnail image. I’ve changed the PDF format slightly as requested, so let me know if you think this format is better/worse.

As always, if you have any questions, suggestions or good/bad stories about working in remote or geo-distributed teams, please let me know – I’d love to hear them.

Thanks
John.
=====
ps: Oh, and by the way, Wikimedia are hiring – see here for current job openings. They are smart, nice people, literally changing the world – and yes, remoties ARE welcome. :-)

Jeff Walden: Police, force, and armed and violent disabled people: San Francisco v. Sheehan

Yesterday I began a series of posts discussing the Supreme Court cases I saw in my latest visit for oral arguments. Today I discuss San Francisco v. Sheehan.

San Francisco v. Sheehan concerned a messy use of force by police in San Francisco in responding to a violent, mentally-ill person making threats with a knife — an unhappy situation for all. Very imprecisely, the question is whether the officers used excessive force to subdue an armed and violent, disabled suspect, knowing that suspect might require special treatment under the Americans with Disabilities Act or the Fourth Amendment while being arrested. (Of course, whatever baseline those laws require, police often should and will be held to a higher standard.)

Chief Justice Rehnquist, standing upon a white, blue-outlined Nevada, holding volume 530 of the US Reports
Chief Justice William Rehnquist, one of the other objects of the trip

The obvious prediction

Mildly-interested readers need know but two things to predict this case’s outcome. First, this case arose in the Ninth Circus Circuit: a court that regularly holds outlier views. And not solely along the tired left-right axis: when the Court often summarily reverses the Ninth Circuit without even hearing argument, partisanship can play no role. Second, Sheehan must overcome qualified immunity, which for better and worse protects “all but the plainly incompetent” police against lawsuit. These facts typically guarantee San Francisco will win and Sheehan will lose.

That aside, one observation struck me. Stereotyping heavily, it’s surprising that San Francisco in particular would argue, to use overly-reductive descriptions, “for” police and “against” the disabled. Usually we’d assume San Francisco would stand by, not against, underprivileged minorities.

“Bait and switch”

That expectation makes this letter from advocacy groups requesting San Francisco abandon its appeal very interesting. At oral argument Justice Scalia interrupted San Francisco’s argument before it even started to bluntly charge the city with changing its argument, between its request for the Supreme Court to hear the case and when San Francisco presented its argument for why it should win — even calling it a “bait and switch”. Minutes later, Justice Sotomayor echoed his views (in more restrained terms).

When requesting Supreme Court review, San Francisco argued that the ADA “does not require accommodations for armed and violent suspects who are disabled” — during an arrest, all such suspects may be treated identically regardless of ability. In response the Court agreed to decide “whether Title II of the Americans with Disabilities Act requires law enforcement officers to provide accommodations to an armed, violent, and mentally ill suspect” while bringing him into custody.

But San Francisco’s written argument instead argued, “Sheehan was not entitled to receive accommodations in her arrest under Title II of the [ADA]” because her armed violence “posed a direct threat in the reasonable judgment of the officers”. In other words, San Francisco had changed from arguing no armed and violent, disabled suspect deserved an ADA accommodation, to arguing Sheehan particularly deserved no ADA accommodation because she appeared to be a direct threat.

The San Francisco City Hall dome and building

The followup

Thus San Francisco’s argument derailed, on this and other points. Several minutes in, Justice Kagan even prefaced a question with, “And while we are talking about questions that are not strictly speaking in the case,” to audience laughter. A Ninth-Circuit, plaintiff-friendly, appeal-by-the-government case is usually a strong bet for reversal, but San Francisco seems to have complicated its own case.

The Court could well dismiss this case as “improvidently granted”, preserving the lower court’s decision without creating precedent. Oral argument raised the possibility, but a month later it seems unlikely. San Francisco’s still likely to win, but the justices’ frustration with San Francisco’s alleged argument change might not bode well when San Francisco next wants the Court to hear a case.

Back to the letter

Again consider the letter urging San Francisco to abandon its appeal. Suppose the letter’s authors first privately requested San Francisco drop the case, resorting to an open letter once those overtures failed.

But what if the letter wasn’t a complete failure? Could San Francisco have changed its argument to “split the baby”, protecting its officers and attempting to placate interest groups? The shift couldn’t have responded to just the letter, sent one day before San Francisco made its final argument. But it might have been triggered by prior behind-the-scenes negotiation.

This fanciful possibility requires that the open letter not be San Francisco’s first chance to hear its arguments. It further grants the letter’s authors extraordinary political power…yet too little to change San Francisco’s position. Occam’s Razor absolutely rejects this explanation. But if some involved interest group promptly tried to dissuade San Francisco, the letter might have been partially effective.

Final analysis

Are Justice Scalia’s and Sotomayor’s criticisms reasonable? I didn’t fully read the briefs, and I don’t know when it’s acceptable for a party to change its argument (except by settling the case). It appears to me that San Francisco changed its argument; my sense is doing so but claiming you didn’t is the wrong way to change one’s position. But I don’t know enough to be sure of either conclusion.

As I said yesterday, I didn’t fully prepare for this argument, so I hesitate to say too much. And frankly the messy facts make me glad I don’t have to choose a position. So I’ll leave my discussion at that.

Tomorrow I continue to the primary case I came to see, a First Amendment case.

Mozilla Release Management Team: Firefox 37.0.1 to 37.0.2

This stability release for Desktop focuses on graphic issues. We took patches to fix graphic rendering issues or crashes.

  • 7 changesets
  • 15 files changed
  • 164 insertions
  • 100 deletions

Extension    Occurrences
cpp          7
txt          2
h            2
sh           1
json         1
in           1
idl          1

Module       Occurrences
widget       4
gfx          4
mobile       2
dom          2
testing      1
config       1
browser      1

List of changesets:

Mark Finkle: Bug 1151469 - Tweak the package manifest to avoid packaging the wrong file. r=rnewman, a=lmandel - c8866e34cbf3
Jeff Muizelaar: Bug 1137716 - Increase the list of devices that are blocked. a=sledru, a=lmandel - 1931c4e48e39
Matt Woodrow: Bug 1151721 - Disable hardware accelerated video decoding for older intel drivers since it gives black frames on youtube. r=ajones, a=lmandel - 29e130e0b166
Aaron Klotz: Bug 1141081 - Add weak reference support to HTMLObjectElement and use it in nsPluginInstanceOwner. r=jimm, a=lmandel - fa7d8b9db216
Jeff Muizelaar: Bug 1153381 - Add a D3D11 ANGLE blacklist. r=mstange, a=lmandel, ba=const-only-change - 56fada8104a6
Ryan VanderMeulen: Bug 1154434 - Bump mozharness.json to revision 4567c42063b7. a=test-only - a550f8bc2f26
Bas Schouten: Bug 1151361 - Wrap WARP D3D11 creation in a try catch block like done with regular D3D11. r=jrmuizel, a=sledru - 5aa012e8ba58

Byron Jones: happy bmo push day!

the following changes have been pushed to bugzilla.mozilla.org:

  • [1113375] Make changes to MDN’s feature request form
  • [579089] Change default Hardware / OS values to be “Unspecified/Unspecified”
  • [1154730] rewrite product/component searching to use jquery-ui instead of yui
  • [1155528] stop linking bzr commit messages to bzr.mozilla.org’s loggerhead
  • [1155869] Ctrl+e is consumed by Edit button and cursor doesn’t move to the end of the line.
  • [880227] Install of Bugzilla DBI module fails due to mirror.hiwaay.net not being available

discuss these changes on mozilla.tools.bmo.


Filed under: bmo, mozilla

Ian Bicking: A Product Journal: What Are We Making?

I’m blogging about the development of a new product in Mozilla; look here for my other posts in this series.

I’ve managed to mostly avoid talking about what we’re making here. Perhaps it’s shyness: we (the PageShot team) don’t yet know where it’s going, or whether we’ll manage to get this into Firefox.

We are making a tool for sharing on the web. This tool creates a new kind of thing to share; it’s not a communication medium of any kind. We’re calling it PageShot: similar to a screenshot, but with all the power we can add to it, since web pages are much more understandable than pixels. (The things it makes we call Shots.)

The tool emphasizes sharing clips or highlights from pages. These can be screenshots (full or part of the screen) or text clippings. Along with those clips we keep an archival copy of the entire web page, preserving the full context of the page you were looking at and the origin of each clip. Generally we try to save as much information and context about the page as we can. We are trying to avoid choices, the burdensome effort to decide what you might want in the future. The more effort you put into using this tool, the more information or specificity you can add to your Shot, but we do what we can to save everything so you can sort it out later if you want.
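To give a rough sense of what that archival copy involves, here is an illustration only, not PageShot’s actual code: the simplest possible way to grab a “frozen” string copy of the current page’s DOM, which could then be stored alongside any clips.

// Illustration, not PageShot's implementation: capture the current page's
// markup as a string so it can be archived next to the clips taken from it.
var frozenPage = "<!DOCTYPE html>\n" + document.documentElement.outerHTML;
console.log("Captured " + frozenPage.length + " characters of page markup");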

I mentioned earlier that I started this idea thinking about how to make use of frozen copies of the DOM. What we’re working on now looks much more like a screenshotting tool that happens to keep this copy of the page. This change happened in part because of user research done at Mozilla around saving and sharing, where I became aware of just how prevalent screenshots had become for many people.

The current (rough) state of the tool

It’s not hard to understand the popularity of screenshots, specifically on mobile devices. iPhone users at least have mostly figured out screenshotting, functionality that remains somewhat obscure on desktop devices (and for the life of me I can’t get my Android device to make a screenshot). Also screenshots are the one thing that works across applications – even with an application that supports sharing, you don’t really know what’s going to be shared, but you know what the screenshot will contain. You can also share screenshots with confidence: the recipient won’t have to log in or sign up, they can read it on any device they want, once it has arrived they don’t need a network connection. Screenshots are a reliable tool. A lesson I try to regularly remind myself of: availability beats fidelity.

In a similar vein we’ve seen the rise of the animated gif over the video (though video is resurging now that it’s just a file again), and the smuggling of long texts into Twitter via images.

A lot of this material moves through communication mediums via links and metadata, but those links and metadata are generally under the control of site owners. It’s up to the site owner what someone sees when they click a link; it’s up to them what the metadata suggests should go into the image preview and description. PageShot gives that control to the person sharing, since each Shot is your link, your copy and your perspective.

As of this moment (April 2015) our designs are still ahead of our implementation, so there’s not a lot to try out just yet, but this is what we’re putting together.

If you want to follow along, check out the repository.

Nick Thomas: Changes coming to ftp.mozilla.org

ftp.mozilla.org has been around for a long time in the world of Mozilla, dating back to the original source release in 1998. Originally it was a single server, but it’s grown into a cluster storing more than 60TB of data, and serving more than a gigabit/s in traffic. Many projects store their files there, and there must be a wide range of ways that people use the cluster.

This quarter there is a project in the Cloud Services team to move ftp.mozilla.org (and related systems) to the cloud, which Release Engineering is helping with. It would be very helpful to know what functionality people are relying on, so please complete this survey to let us know. Thanks!

Mozilla Science Lab: Effective Code Review for Journals

Nature Biotechnology recently announced that it would be requiring authors to ‘check the accessibility of code used in computational studies’, in an effort to mitigate retractions and errors resulting from bugs & under-validated code. The article quoted the Science Lab’s director, Kaitlin Thaney, in observing the Science Lab’s position that openness in research is not only a matter of releasing information, but making sure it is effectively reusable, too, in order to reproduce and confirm results and carry that work forward.

But, technical challenges remain. As was discovered in the series of code review pilot studies from the Science Lab and Marian Petre from Open University in 2013 and 2014, third parties reviewing code they weren’t involved in writing leads to superficial reviews without much value; see reflections on these studies from Thaney as well as Greg Wilson, in addition to recent comments to the same effect from Wilson here.

However, journals like Nature Biotech can still compel some very valuable change by marshaling a system of code review for their submissions. As we discuss in our teaching kit on code review (and as was originally investigated in this study), much value can be derived from setting expectations for code clarity and integrity. By demanding authors submit a high-coverage test suite for any original code used, journals can encourage researchers to use this fundamental technique for ensuring code quality; also, as discussed in depth in the study linked above, the act of requiring authors to describe and justify the changes made at each pull request results in measurably fewer bugs committed – before code review has even begun. Specifically, journals could require:

  • a passing test suite with a minimum standard of coverage (>90%)
  • a commit log consisting of small pull requests (<500 lines each), each with an accompanying description & justification of the changes made and strategies taken.

Neither of these requires reviewers to read code in depth, but both push authors to seriously reflect on their code, and thus improve its quality.
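As a toy illustration of the first requirement (our own sketch, not taken from any journal’s policy or from the study above), this is the kind of small, readable test a journal could ask authors to submit alongside an analysis script; movingAverage stands in for a hypothetical function from the authors’ own code.

// Hypothetical example: a tiny test file a reviewer can read without running it.
var assert = require("assert");

// Stand-in for a function from the authors' analysis code.
function movingAverage(values, window) {
  var out = [];
  for (var i = 0; i + window <= values.length; i++) {
    var sum = 0;
    for (var j = i; j < i + window; j++) {
      sum += values[j];
    }
    out.push(sum / window);
  }
  return out;
}

// Small, explicit checks covering the normal case and an edge case.
assert.deepEqual(movingAverage([1, 2, 3, 4], 2), [1.5, 2.5, 3.5]);
assert.deepEqual(movingAverage([5], 2), []);
console.log("all tests passed");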

For more strategies on how to implement a system of code review for scientific software, check out our curriculum on code review. The ideas and strategies presented there are crafted with busy scientists in mind, and explore how to get the most out of short, low-time-commitment reviews; feedback and contributions always welcome over at the project repo.

Jeff Walden: Another D.C. trip

A month ago, I visited Washington, D.C. to see (unfortunately only a subset of) friends in the area, to get another Supreme Court bobblehead (Chief Justice Rehnquist) — and, naturally, to watch interesting Supreme Court oral arguments. I attended two arguments on March 23: the first for a First Amendment case, the second for (roughly) a police use-of-force case.

Early morning at the Supreme Court, on a cloudy morning with a mostly-blue sky at sunrise
A deep-blue morning at the Court

I did relatively little preparation for the police use-of-force case, limiting myself to the facts, questions presented, and cursory summaries of the parties’ arguments. My discussion of that case will be brief.

But the other case (for which I amply prepared) will receive different treatment. First Amendment law is extraordinarily complicated. A proper treatment of the case, its background, legal analysis, and oral argument discussion well exceeds a single post.

So a post series it is. Tomorrow: the police use-of-force case.

Air Mozilla: Mozilla Weekly Project Meeting

Mozilla Weekly Project Meeting The Monday Project Meeting

Benjamin Smedberg: Using crash-stats-api-magic

A while back, I wrote the tool crash-stats-api-magic which allows custom processing of results from the crash-stats API. This tool is not user-friendly, but it can be used to answer some pretty complicated questions.

As an example and demonstration, see a bug that Matthew Gregan filed this morning asking for a custom report from crash-stats:

In trying to debug bug 1135562, it’s hard to guess the severity of the problem or look for any type of version/etc. correlation because there are many types of hangs caught under the same mozilla::MediaShutdownManager::Shutdown stack. I’d like a report that contains only those with mozilla::MediaShutdownManager::Shutdown in the hung (main thread) stack *and* has wasapi_stream_init on one of the other threads, please.

To build this report, start with a basic query and then refine it in the tool:

  1. Construct a supersearch query to select the crashes we’re interested in. The only criterion for this query was “signature contains ‘MediaShutdownManager::Shutdown’”. When possible, filter on channel, OS, and version to reduce noise.
  2. After the supersearch query is constructed, choose “More Options” from the results page and copy the “Public API URL” link.
  3. Load crash-stats-api-magic and paste the query URL. Choose “Fetch” to fetch the results. Look through the raw data to get a sense for its structure. Link
  4. The meat of this function is to filter out the crashes that don’t have “wasapi_stream_init” on a thread. Choose “New Rule” and create a filter rule:
    function(d) {
      // Keep this crash only if some frame on some thread mentions wasapi_stream_init.
      var ok = false;
      d.json_dump.threads.forEach(function(thread) {
        thread.frames.forEach(function(frame) {
          if (frame.function && frame.function.indexOf("wasapi_stream_init") != -1) {
            ok = true;
          }
        });
      });
      return ok;
    }

    Choose “Execute” to run the filter. Link

  5. To get the final report we output only the signature and the crash ID for each result. Choose “New Rule” again and create a mapping rule:
    function(d) {
      // Emit just the crash ID and signature for each remaining crash.
      return [d.uuid, d.signature];
    }

    Link

One of the advantages of this tool is that it is possible to iterate quickly on the data without constantly re-querying, but at the end it should be possible to permalink to the results in bugzilla or email exchanges.

If you need to do complex crash-stats analysis, please try it out! email me if you have questions, and pull requests are welcome.

Armen Zambrano: How to install pywin32 on Windows

In Mozilla's Release Engineering Windows machines we have pywin32 installed.
You need this dependency if you're going to run older scripts that are needed for the release process.
Unfortunately, at the moment, we can't get rid of this dependency and need to install it.

If you're not using Mozilla-build, you can easily install it with these steps:
NOTE: These are 32-bit binary installers; 64-bit binaries are also available.

In Mozilla we use Mozilla-build which brings most of the tools you need to build Firefox.
Python is included in it; however, pywin32 is currently not part of it (bug to fix this).

Since the process was a bit painful for me, I will take note of it for future reference.
I tried a few approaches until I figured out that we need to use easy_install instead of pip, and that we need to point it at an .exe file rather than a normal Python package.

Use easy_install

Here it is:
easy_install http://hivelocity.dl.sourceforge.net/project/pywin32/pywin32/Build%20219/pywin32-219.win32-py2.7.exe
You will know that it worked if you can run this without any errors:
python -c "import win32api" 


Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

Mozilla Science Lab: Mozilla Science Week in Review, April 13-19

The Week in Review is our weekly roundup of what’s new in open science from the past week. If you have news or announcements you’d like passed on to the community, be sure to share on Twitter with @mozillascience and @billdoesphysics, or join our mailing list and get in touch there.

Government & Policy

  • The World Health Organization has released a statement demanding the results of all medical clinical trials be published, in order to combat the effects of dissemination bias.

Tools & Projects

Events & Conferences

  • NASA held its annual NASA Space Apps Challenge, a worldwide hackathon encouraging people to build and learn with a collection of open data sets released by the space agency.

Blogs & Papers

  • Jon Udell blogged at PLOS on an update to his 2000 work, Internet Groupware for Scientific Collaboration. The update is entitled When Open Access is the norm, how do scientists work together online?, and reflects on where science on the web has come since the original report was published.
  • April Wright wrote a compelling blog post describing her methods in pursuit of reproducibility on one of her recent papers; Wright eloquently makes the point that openness is not an all-or-nothing effort, and that ‘perfection’ in openness is neither possible nor required for efforts at reproducibility to be very valuable.
  • Eva Amsen reviewed Ontspoorde Wetenschap (‘Derailed Science’), a book describing examples of research fraud and misconduct in the Netherlands by Frank van Kolfschooten. Van Kolfschooten concludes the book with the observation that the free dissemination of data would have prevented many of these failures of process from progressing as far as they did.
  • Don’t miss F1000’s open science roundup from last week, written by our colleague Eva Amsen.

 

Jen Fong-Adwent: Fast Forward

There was a time not too long ago where I thought my iPhone 5s was completely perfect and optimal

Chris AtLee: RelEng Retrospective - Q1 2015

RelEng had a great start to 2015. We hit some major milestones on projects like Balrog and were able to turn off some old legacy systems, which is always an extremely satisfying thing to do!

We also made some exciting new changes to the underlying infrastructure, got some projects off the drawing board and into production, and drastically reduced our test load!

Firefox updates

Balrog

All Firefox update queries are now being served by Balrog! Earlier this year, we switched all Firefox update queries off of the old update server, aus3.mozilla.org, to the new update server, codenamed Balrog.

Already, Balrog has enabled us to be much more flexible in handling updates than the previous system. As an example, in bug 1150021, the About Firefox dialog was broken in the Beta version of Firefox 38 for users with RTL locales. Once the problem was discovered, we were able to quickly disable updates just for those users until a fix was ready. With the previous system it would have taken many hours of specialized manual work to disable the updates for just these locales, and to make sure they didn't get updates for subsequent Betas.

Once we were confident that Balrog was able to handle all previous traffic, we shut down the old update server (aus3). aus3 was also one of the last systems relying on CVS (!! I know, rite?). It's a great feeling to be one step closer to axing one more old system!

Funsize

When we started the quarter, we had an exciting new plan for generating partial updates for Firefox in a scalable way.

Then we threw out that plan and came up with an EVEN MOAR BETTER plan!

The new architecture for funsize relies on Pulse for notifications about new nightly builds that need partial updates, and uses TaskCluster for doing the generation of the partials and publishing to Balrog.

The current status of funsize is that we're using it to generate partial updates for nightly builds, but not published to the regular nightly update channel yet.

There's lots more to say here...stay tuned!

FTP & S3

Brace yourselves... ftp.mozilla.org is going away...

...in its current incarnation at least.

Expect to hear MUCH more about this in the coming months.

tl;dr is that we're migrating as much of the Firefox build/test/release automation to S3 as possible.

The existing machinery behind ftp.mozilla.org will be going away near the end of Q3. We have some ideas of how we're going to handle migrating existing content, as well as handling new content. You should expect that you'll still be able to access nightly and CI Firefox builds, but you may need to adjust your scripts or links to do so.

Currently we have most builds and tests doing their transfers to/from S3 via the task cluster index in addition to doing parallel uploads to ftp.mozilla.org. We're aiming to shut off most uploads to ftp this quarter.

Please let us know if you have particular systems or use cases that rely on the current host or directory structure!

Release build promotion

Our new Firefox release pipeline got off the drawing board, and the initial proof-of-concept work is done.

The main idea here is to take an existing build based on a push to mozilla-beta, and to "promote" it to a release build. So we need to generate all the l10n repacks, partner repacks, generate partial updates, publish files to CDNs, etc.

The big win here is that it cuts our time-to-release nearly in half, and also simplifies our codebase quite a bit!

Again, expect to hear more about this in the coming months.

Infrastructure

In addition to all those projects in development, we also tackled quite a few important infrastructure projects.

OSX test platform

10.10 is now the most widely used Mac platform for Firefox, and it's important to test what our users are running. We performed a rolling upgrade of our OS X testing environment, migrating from 10.8 to 10.10 while spending nearly zero capital, and with no downtime. We worked jointly with the Sheriffs and A-Team to green up all the tests, and shut coverage off on the old platform as we brought it up on the new one. We have a few 10.8 machines left riding the trains that will join our 10.10 pool with the release of ESR 38.1.

Got Windows builds in AWS

We saw the first successful builds of Firefox for Windows in AWS this quarter as well! This paves the way for greater flexibility, on-demand burst capacity, faster developer prototyping, and disaster recovery and resiliency for Windows Firefox builds. We'll be working on making these virtualized instances more performant and being able to do large-scale automation before we roll them out into production.

Puppet on windows

RelEng uses puppet to manage our Linux and OS X infrastructure. Presently, we use a very different tool chain, Active Directory and Group Policy Object, to manage our Windows infrastructure. This quarter we deployed a prototype Windows build machine which is managed with puppet instead. Our goal here is to increase visibility and hackability of our Windows infrastructure. A common deployment tool will also make it easier for RelEng and community to deploy new tools to our Windows machines.

New Tooltool Features

We've redesigned and deployed a new version of tooltool, the content-addressable store for large binary files used in build and test jobs. Tooltool is now integrated with RelengAPI and uses S3 as a backing store. This gives us scalability and a more flexible permissioning model that, in addition to serving public files, will allow the same access outside the releng network as inside. That means that developers as well as external automation like TaskCluster can use the service just like Buildbot jobs. The new implementation also boasts a much simpler HTTP-based upload mechanism that will enable easier use of the service.

Centralized POSIX System Logging

Using syslogd/rsyslogd and Papertrail, we've set up centralized system logging for all our POSIX infrastructure. Now that all our system logs are going to one location and we can see trends across multiple machines, we've been able to quickly identify and fix a number of previously hard-to-discover bugs. We're planning on adding additional logs (like Windows system logs) so we can do even greater correlation. We're also in the process of adding more automated detection and notification of some easily recognizable problems.

Security work

Q1 included some significant effort to avoid serious security exploits like GHOST, escalation of privilege bugs in the Linux kernel, etc. We manage 14 different operating systems, some of which are fairly esoteric and/or no longer supported by the vendor, and we worked to backport some code and patches to some platforms while upgrading others entirely. Because of the way our infrastructure is architected, we were able to do this with minimal downtime or impact to developers.

API to manage AWS workers

As part of our ongoing effort to automate the loaning of releng machines when required, we created an API layer to facilitate the creation and loan of AWS resources, which was previously, and perhaps ironically, one of the bigger time-sinks for buildduty when loaning machines.

Cross-platform worker for task cluster

Release engineering is in the process of migrating from our stalwart, buildbot-driven infrastructure, to a newer, more purpose-built solution in taskcluster. Many FirefoxOS jobs have already migrated, but those all conveniently run on Linux. In order to support the entire range of release engineering jobs, we need support for Mac and Windows as well. In Q1, we created what we call a "generic worker," essentially a base class that allows us to extend taskcluster job support to non-Linux operating systems.

Testing

Last, but not least, we deployed initial support for SETA, the search for extraneous test automation!

This means we've stopped running all tests on all builds. Instead, we use historical data to determine which tests to run that have been catching the most regressions. Other tests are run less frequently.

Robert O'Callahan: Another VMWare Hypervisor Bug

Single-stepping through instructions in VMWare (6.0.4 build-2249910 in my case) with a 32-bit x86 guest doesn't trigger hardware watchpoints.

Steps to reproduce:

  1. Configure a VMWare virtual machine (6.0.4 build-2249910 in my case) booting 32-bit Linux (Ubuntu 14.04 in my case).
  2. Compile this program with gcc -g -O0 and run it in gdb:
    int main(int argc, char** argv) {
      char buf[100];
      buf[0] = 99;
      return buf[0];
    }
  3. In gdb, do
    1. break main
    2. run
    3. watch -l buf[0]
    4. stepi until main returns
  4. This should trigger the watchpoint. It doesn't :-(.

Doing the same thing in a KVM virtual machine works as expected.

Sigh.

This Week In Rust: This Week in Rust 78

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Send me an email! Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors or omissions in this week's issue, please submit a PR.

What's cooking on master?

108 pull requests were merged in the last week, and 5 RFC PRs.

Now you can follow breaking changes as they happen! Or, in html.

Breaking Changes

Other Changes

New Contributors

  • Abhishek Chanda
  • Andrew Seidl
  • Aram Visser
  • Avdi Grimm
  • fenduru
  • James Perry
  • Manuel Hoffmann
  • Mickaël Salaün
  • Nelo Onyiah
  • Nick Hamann
  • pez
  • Robin Kruppe
  • rundrop1
  • Theo Belaire
  • Thomas Jespersen
  • Ting-Yu Lin

Approved RFCs

New RFCs

Notable Links

Project Updates

Upcoming Events

A number of meetups will be celebrating 1.0. Watch this space!

If you are running a Rust event please add it to the calendar to get it mentioned here. Email Erick Tryzelaar or Brian Anderson for access.

Quote of the Week

"unsafe restricts which code could contain undefined behavior, but it doesn't isolate the effects of that undefined behavior." - kmc on the limits of unsafety.

Thanks to tshepang for the tip. Submit your quotes for next week!

Mike Conley: Things I’ve Learned This Week (April 13 – April 17, 2015)

When you send a sync message from a frame script to the parent, the return value is always an array

Example:

// Some contrived code in the browser
let browser = gBrowser.selectedBrowser;
browser.messageManager.addMessageListener("GIMMEFUE,GIMMEFAI", function onMessage(message) {
  return "GIMMEDABAJABAZA";
});

// Frame script that runs in the browser
let result = sendSyncMessage("GIMMEFUE,GIMMEFAI");
console.log(result[0]);
// Writes to the console: GIMMEDABAJABAZA

From the documentation:

Because a single message can be received by more than one listener, the return value of sendSyncMessage() is an array of all the values returned from every listener, even if it only contains a single value.

I don’t use sync messages from frame scripts a lot, so this was news to me.

You can use [cocoaEvent hasPreciseScrollingDeltas] to differentiate between scrollWheel events from a mouse and a trackpad

scrollWheel events can come from a standard mouse or a trackpad1. According to this Stack Overflow post, one potential way of differentiating between the scrollWheel events coming from a mouse, and the scrollWheel events coming from a trackpad is by calling:

bool isTrackpad = [theEvent hasPreciseScrollingDeltas];

since mouse scrollWheel is usually line-scroll, whereas trackpads (and Magic Mouse) are pixel scroll.

The srcdoc attribute for iframes lets you easily load content into an iframe via a string

It’s been a while since I’ve done web development, so I hadn’t heard of srcdoc before. It was introduced as part of the HTML5 standard, and is defined as:

The content of the page that the embedded context is to contain. This attribute
is expected to be used together with the sandbox and seamless attributes. If a
browser supports the srcdoc attribute, it will override the content specified in
the src attribute (if present). If a browser does NOT support the srcdoc
attribute, it will show the file specified in the src attribute instead (if
present).

So that’s an easy way to inject some string-ified HTML content into an iframe.
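For example, a quick sketch (nothing Firefox-specific, just the standard DOM API):

// Build an iframe whose document comes from a string instead of a URL.
var frame = document.createElement("iframe");
frame.setAttribute("sandbox", "");  // keep the injected markup sandboxed
frame.setAttribute("srcdoc", "<h1>Hello</h1><p>This markup came from a string, not a file.</p>");
document.body.appendChild(frame);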

Primitives on IPDL structs are not initialized automatically

I believe this is true for structs in C and C++ (and probably some other languages) in general, but primitives on IPDL structs do not get initialized automatically when the struct is instantiated. That means that things like booleans carry random memory values in them until they’re set. Having spent most of my time in JavaScript, I found that a bit surprising, but I’ve gotten used to it. I’m slowly getting more comfortable working lower-level.

This was the ultimate cause of this crasher bug that dbaron was running into while exercising the e10s printing code on a debug Nightly build on Linux.

This bug was opened to investigate initializing the primitives on IPDL structs automatically.

Networking is ultimately done in the parent process in multi-process Firefox

All network requests are proxied to the parent, which serializes the results back down to the child. Here’s the IPDL protocol for the proxy.

On bi-directional text and RTL

gw280 and I noticed that in single-process Firefox, a <select> dropdown set with dir=”rtl”, containing an <option> with the value “A)” would render the option as “(A”.

If the value was “A) Something else”, the string would come out unchanged.

We were curious to know why this flipping around was happening. It turned out that this is called “BiDi”, and some documentation for it is here.
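If you want to poke at this yourself, here’s a rough way to reproduce the dropdown described above (my own minimal sketch, not the original testcase):

// The BiDi algorithm mirrors brackets in an RTL context, so the option text
// "A)" displays as "(A" once the <select> is set to dir="rtl".
var select = document.createElement("select");
select.setAttribute("dir", "rtl");
var option = document.createElement("option");
option.textContent = "A)";
select.appendChild(option);
document.body.appendChild(select);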

If you want to see an interesting demonstration of BiDi, click this link, and then resize the browser window to reflow the text. Interesting to see where the period on that last line goes, no?

It might look strange to someone coming from a LTR language, but apparently it makes sense if you’re used to RTL.

I had not known that.

Some terminal spew

Now what’s all this?

My friend and colleague Mike Hoye showed me the above screenshot upon coming into work earlier this week. He had apparently launched Nightly from the terminal, and at some point, all that stuff just showed up.

“What is all of that?”, he had asked me.

I hadn’t the foggiest idea – but a quick DXR search showed basic_code_modules.cc inside Breakpad, the tool used to generate crash reports when things go wrong.

I referred him to bsmedberg, since that fellow knows tons about crash reporting.

Later that day, mhoye got back to me, and told me that apparently this was output spew from Firefox’s plugin hang detection code. Mystery solved!

So if you’re running Firefox from the terminal, and suddenly see some basic_code_modules.cc stuff show up… a plugin you’re running probably locked up, and Firefox shanked it.


  1. And probably a bunch of other peripherals as well 

Mike Conley: The Joy of Coding (Ep. 10): The Mystery of the Cache Key

In this episode, I kept my camera off, since I was having some audio-sync issues1.

I was also under some time-pressure, because I had a meeting scheduled for 2:30 ET2, giving me exactly 1.5 hours to do what I needed to do.

And what did I need to do?

I needed to figure out why an nsISHEntry, when passed to nsIWebPageDescriptor’s loadPage, was not enough to get the document out from the HTTP cache in some cases. 1.5 hours to figure it out – the pressure was on!

I don’t recall writing a single line of code. Instead, I spent most of my time inside XCode, walking through various scenarios in the debugger, trying to figure out what was going on. And I eventually figured it out! Read this footnote for the TL;DR:3

Episode Agenda

References

Bug 1025146 – [e10s] Never load the source off of the network when viewing source

Notes


  1. I should have those resolved for Episode 11! 

  2. And when the stream finished, I found out the meeting had been postponed to next week, meaning that next week will also be a short episode. :( 

  3. Basically, the nsIChannel used to retrieve data over the network is implemented by HttpChannelChild in the content process. HttpChannelChild is really just a proxy to a proper nsIChannel on the parent-side. On the child side, HttpChannelChild does not implement nsICachingChannel, which means we cannot get a cache key from it when creating a session history entry. With no cache key comes no ability to retrieve the document from the network cache via nsIWebPageDescriptor’s loadPage. 

Alex Gibson: My second year working at Mozilla

This week marked my second Mozillaversary. I did plan to write this blog post on the 15th of April, which would have marked the day I started, but this week flew by so quickly I almost completely missed it!

Carrying on from last year’s blog post, much of my second year at Mozilla has been spent working on various parts of mozilla.org, to which I made a total of 196 commits this year.

Much of my time has been spent working on Firefox on-boarding. Following the success of the on-boarding flow we built for the Firefox 29 Australis redesign last year, I went on to work on several more on-boarding flows to help introduce new features in Firefox. These included introducing the Firefox 33.1 privacy features, Developer Edition firstrun experience, 34.1 search engine changes, and 36.0 for Firefox Hello. I also got to work on the first time user experience for when a user makes their first Hello video call, which initially launched in 35.0. It was all a crazy amount of work from a lot of different people, but something I really enjoyed getting to work on alongside various other teams at Mozilla.

In between all that I also got to work on some other cool things, including the 2015 mozilla.org homepage redesign. Something I consider quite a privilege!

On the travel front, I got to visit both San Francisco and Santa Clara a bunch more times (I’m kind of losing count now). I also got to visit Portland for the first time when Mozilla had their all-hands week last December; it was such a great city!

I’m looking forward to whatever year three has in store!

Air Mozilla: Webmaker Demos April 17 2015

Webmaker Demos April 17 2015

Gregory Szorc: My Current Thoughts on System Administration

I attended PyCon last week. It's a great conference. You should attend. While I should write up a detailed trip report, I wanted to quickly share one of my takeaways.

Ansible was talked about a lot at PyCon. Sitting through a few presentations and talking with others helped me articulate why I've been drawn to Ansible (over say Puppet, Chef, Salt, etc) lately.

First, Ansible doesn't require a central server. Administration is done remotely: Ansible establishes an SSH connection to a remote machine and does stuff. Having Ruby, Python, support libraries, etc installed on production systems just for system administration never really jived with me. I love Ansible's default hands-off approach. (Yes, you can use a central server for Ansible, but that's not the default behavior. While tools like Puppet could be used without a central server, it felt like they were optimized for central server use and thus local mode felt awkward.)

Related to central servers, I never liked how that model consists of clients periodically polling for and applying updates. I like the idea of immutable server images and periodic updates work against this goal. The central model also has a major bazooka pointed at you: at any time, you are only one mistake away from completely hosing every machine doing continuous polling. e.g. if you accidentally update firewall configs and lock out central server and SSH connectivity, every machine will pick up these changes during periodic polling and by the time anyone realizes what's happened, your machines are all effectively bricked. (Yes, I've seen this happen.) I like having humans control exactly when my systems apply changes, thank you. I concede periodic updates and central control have some benefits.

Choosing not to use a central server by default means that hosts are modeled as a set of applied Ansible playbooks, not necessarily as a host with a set of Ansible playbooks attached. Although, Ansible does support both models. I can easily apply a playbook to a host in a one-off manner. This means I can have playbooks represent common, one-off tasks and I can easily run these tasks without having to muck around with the host to playbook configuration. More on this later.

I love the simplicity of Ansible's configuration. It is just YAML files. Not some Ruby-inspired DSL that takes hours to learn. With Ansible, I'm learning what modules are available and how they work, not complicated syntax. Yes, there is complexity in Ansible's configuration. But at least I'm not trying to figure out the file syntax as part of learning it.

Along that vein, I appreciate the readability of Ansible playbooks. They are simple, linear lists of tasks. Conceptually, I love the promise of full dependency graphs and concurrent execution. But I've spent hours debugging race conditions and cyclic dependencies in Puppet that I'm left unconvinced the complexity and power is worth it. I do wish Ansible could run faster by running things concurrently. But I think they made the right decision by following KISS.

I enjoy how Ansible playbooks are effectively high-level scripts. If I have a shell script or block of code, I can usually port it to Ansible pretty easily. One pass to do the conversion 1:1. Another pass to Ansibilize it. Simple.

I love how Ansible playbooks can be checked in to source control and live next to the code and applications they manage. I frequently see people maintain separate source control repositories for configuration management from the code it is managing. This always bothered me. When I write a service, I want the code for deploying and managing that service to live next to it in version control. That way, I get the configuration management and the code versioned in the same timeline. If I check out a release from 2 years ago, I should still be able to use its exact configuration management code. This becomes difficult to impossible when your organization is maintaining configuration management code in a separate repository where a central server is required to do deployments (see Puppet).

Before PyCon, I was having an internal monolog about adopting the policy that all changes to remote servers be implemented with Ansible playbooks. I'm pleased to report that a fellow contributor to the Mercurial project has adopted this workflow himself and he only has great things to say! So, starting today, I'm going to try to enforce that every change I make to a remote server is performed via Ansible and that the Ansible playbooks are checked into version control. The Ansible playbooks will become implicit documentation of every process involved with maintaining a server.

I've already applied this principle to deploying MozReview. Before, there was some internal Mozilla wiki documenting commands to execute in a terminal to deploy MozReview. I have replaced that documentation with a one-liner that invokes Ansible. And, the Ansible files are now in a public repository.

If you poke around that repository, you'll see that I have Ansible playbooks referencing Docker. I have Ansible provisioning Docker images used by the test and development environment. That same Ansible code is used to configure our production systems (or is at least in the process of being used in that way). Having dev, test, and prod using the same configuration management has been a pipe dream of mine and I finally achieved it! I attempted this before with Puppet but was unable to make it work just right. The flexibility that Ansible's design decisions have enabled has made this finally possible.

Ansible is my go-to system management tool right now. And I still feel like I have a lot to learn about its hidden powers.

If you are still using Puppet, Chef, or other tools invented in previous generations, I urge you to check out Ansible. I think you'll be pleasantly surprised.

Mozilla Release Management Team: Firefox 38 beta4 to beta5

In this beta, we disabled the EARLY_BETA_OR_EARLIER define (used by some features to get testing during the first half of the beta cycle).

In this release, we took some changes related to the reading list, polish of the in-tab preferences, and various minor crash fixes.

We also landed the stability fixes which should ship with release 37.0.2.

  • 52 changesets
  • 86 files changed
  • 3766 insertions
  • 2141 deletions

Extension    Occurrences
cpp          25
js           14
h            9
jsm          6
java         6
css          4
list         3
ini          3
xml          2
idl          2
html         2
sh           1
py           1
MOZILLA      1
mn           1
mk           1
json         1
ipdl         1
in           1
common       1
c            1

Module       Occurrences
dom          25
mobile       12
browser      12
media        7
layout       6
toolkit      5
gfx          3
db           3
testing      2
services     2
netwerk      2
build        2
widget       1
security     1
js           1
config       1

List of changesets:

Jon Coppeard - Bug 1149526 - Check HeapPtrs have GC lifetime r=terrence a=sylvestre - 7ca7e178de40
Sylvestre Ledru - Post Beta 4: disable EARLY_BETA_OR_EARLIER a=me - 4c2454564144
Bill McCloskey - Back out Bug 1083897 a=backout - 56f805ac34ce
Bill McCloskey - Back out Bug 1103036 to resolve shutdown hangs a=backout - 8a5486269821
JW Wang - Bug 1153739 - Make Log() usable outside EME test cases. r=edwin, a=test-only - bf3ca76f10c3
JW Wang - Bug 1080685 - Add more debug aids and longer timeout. r=edwin, a=test-only - b2d1be38dab1
Sami Jaktholm - Bug 1150005 - Don't wait for "editor-selected" event in browser_styleeditor_fetch-from-cache.js as it may have already been emitted. r=bgrins, a=test-only - d1e3ce033c7a
Mark Hammond - Bug 1151666 - Fix intermittent orange by reducing verified timer intervals and always using mock storage. r=zaach, a=test-only - 87f3453f6cc0
Shu-yu Guo - Bug 996982 - Fix Debugger script delazification logic to account for relazified clones. r=bz, a=sledru - 5ca4e237b259
Brian Grinstead - Bug 1151259 - Switch <toolbar> to <box> to get rid of -moz-appearance styles for devtools sidebar. r=jryans, a=sledru - 7af104b169fa
Jared Wein - Bug 1152327 - ReadingListUI.init() should be called from delayedStartup, not onLoad. r=gavin, a=sledru - 9e1bf10888cd
Tim Nguyen - Bug 1013714 - Remove old OSX focusring from links in in-content prefs. r=Gijs, a=sledru - 48976876cdb9
Chris Pearce - Bug 1143278 - Make gmp-clearkey not require a Win8 only DLL to decode audio on Win7. r=edwin, a=sledru - f9f96ba1dbdb
Chris Pearce - Bug 1143278 - Add more null checks in gmp-clearkey's decoders. r=edwin, a=sledru - 5779893b39a5
Chris Pearce - Bug 1143278 - Use a different CLSID to instantiate the H264 decoder MFT in gmp-clearkey, as Win 7 Enterprise N requires that. r=edwin, a=sledru - dfce472edd1e
Chris Pearce - Bug 1143278 - Support IYUV and I420 in gmp-clearkey on Windows, as Win 7 Enterprise N's H.264 decoder doesn't output I420. r=edwin, a=sledru - 3beb9cbddb3f
Cameron McCormack - Bug 1153693 - Only call ReleaseRef on nsStyle{ClipPath,Filter} once when setting a new value. r=dbaron, a=sledru - f5d0342230c0
Milan Sreckovic - Bug 1152331 - If we do not delete indices array, it gets picked up down the line and breaks some assumptions in aboutSupport.js. r=dvander, a=sledru - 4cc36a9a958b
Richard Newman - Bug 1153358 - Client mitigation: don't upload stored_on. r=nalexander, a=sledru - 1412c445ff0d
Mark Hammond - Bug 1148701 - React to Backoff and Retry-After headers from Reading List server. r=adw, a=sledru - 91df81e2edac
Ryan VanderMeulen - Backed out changeset d1e3ce033c7a (Bug 1150005) for leaks - 4f36d5aff5cf
Cameron McCormack - Bug 1146101 - Call ClearCachedInheritedStyleDataOnDescendants on more style contexts that had structs swapped out from them. r=dbaron, a=sledru - baa8222aaafd
Reed Loden - Bug 1152939 - Upgrade to SQLite 3.8.9. r=mak77, a=sledru - 01e0d4e09b6d
Mike Hommey - Bug 1146738 - Fix race condition between js/src/target and js/src/host. r=mshal, a=NPOTB - 7496d2eea111
Ben Turner - Bug 1114788 - Disable failing test on workers. r=mrbkap, a=test-only - c82fcbeb7194
Matthew Gregan - Bug 1144199 - Require multiple consecutive timeouts to accumulate before triggering timeout error handling in libcubeb's WASAPI backend; this avoids spurious timeout errors triggered by system sleep/wake cycles. r=padenot, a=sledru - ea342656f3cb
Xidorn Quan - Bug 1145448 - Avoid painting native frame on fullscreen window when activate/inactivate. r=jimm, a=sledru - a27fb9b83867
Bas Schouten - Bug 1151361 - Wrap WARP D3D11 creation in a try catch block like done with regular D3D11. r=jrmuizel, a=sledru - 4954faa47dd0
Jan-Ivar Bruaroey - Bug 1153056 - Fix about:webrtc to not blank on zero allocated PeerConnections. r=jesup, a=sledru - e487ace8d7f9
Ryan VanderMeulen - Bug 1154434 - Bump mozharness.json to revision 4567c42063b7. a=test-only - 97856a6ac44d
Richard Newman - Bug 1153357 - Don't set SYNC_STATUS_MODIFIED unless an update touches fields that we sync. r=nalexander, a=sledru - 199b60ec60dc
vivek - Bug 1145567 - Display toolbar only after Domcontentloaded is triggered. r=margaret, a=sledru - df47a99c442f
Mark Goodwin - Bug 1153090 - Unaligned access in cert bock list. r=keeler, a=sledru - 58f203b17be2
Michael Comella - Bug 1148390 - Dynamically add padding to share icon on GB devices. r=wesj, a=sledru - e10ddd2bc05f
Ben Turner - Bug 1154599 - Revert unintentional change to crash reporting infra in changeset ce2692d64bcf. a=sledru - 7b296a71b115
Edwin Flores - Bug 1148071 - Fix CDM update behaviour. r=cpearce, a=sledru, ba=jorgev - 6c7e8d9f955c
Gijs Kruitbosch - Bug 1154447 - add aero asset for update badge, r=me, a=sylvestre - 98703ce041e2
Gijs Kruitbosch - Bug 1150703, allow about: pages to be unlinkable even if "safe for content", r=mcmanus, IGNORE IDL, ba=sylvestre - 5c9df6adebed
Gijs Kruitbosch - Bug 1150862, make about:reader unlinkable from content on mobile, r=margaret, a=sylvestre - a5203cabcc04
Gijs Kruitbosch - Bug 1150862, make about:reader unlinkable from content on desktop, r=gavin, a=sylvestre - 062e49bcb2da
Ryan VanderMeulen - Bug 1092202 - Skip testGetUserMedia for frequent failures. a=test-only - 85106e95bcb8
Ryan VanderMeulen - Bug 1123563 - Annotate test-animated-image-layers.html and test-animated-image-layers-background.html as random on Android and Linux. a=test-only - fe141895d7ab
Ryan VanderMeulen - Bug 1097721 - Skip test_mozaudiochannel.html on OSX 10.6 due to intermittent crashes. a=test-only - 86b6cb966d95
Ryan VanderMeulen - Bug 1021174 - Skip test_bug495145.html on OSX 10.6 due to intermittent crashes. a=test-only - 34331bbc9575
Mark Goodwin - Bug 1120748 - Resolve intermittent failure of browser_ssl_error_reports.js. r=ttaubert, a=test-only - 22eb12ac64e9
Ryan VanderMeulen - Bug 847903 - Skip 691096-1.html on OSX 10.6 due to intermittent crashes. a=test-only - 348cc6be3ba0
Nicolas Silva - Bug 1145981 - Do not crash when a DIB texture is updated without a compositor. r=jrmuizel, a=sledru - 16d7e20d9565
Xidorn Quan - Bug 1141931 - Part 0: Fix unicode-bidi value of ruby elements in html.css. a=sledru - ccb54262291d
Margaret Leibovic - Bug 1152121 - Factor out logic to get original URL from reader URL into shared place, and handle malformed URI excpetions. r=Gijs, r=mcomella, a=sledru - 7a10ff7fd9e4
Richard Newman - Bug 1153973 - Don't blindly apply deletions as insertions. r=nalexander, a=sledru - f9d36adcdf51
Xidorn Quan - Bug 1154814 - Move font rules from 'rt' to 'rtc, rt' and make text-emphasis conditional. r=heycam, a=sledru - 8fd05ce16a5f
Gijs Kruitbosch - Bug 1148923 - min-width the font menulists. r=jaws, a=sledru - 45a5eaa7813b

Matjaž Horvat: Offline localization by Sandra

Pontoon is a web application, which is great. You can run it on almost any device with any operating system. You can be sure you always have the latest version, so you don’t need to worry about updates. You don’t even need to download or install anything. There’s just one particular occasion when web applications aren’t so great.

When you’re offline.

Mostly that means the game is over. But it doesn't need to be so. Application caching together with web storage has made offline web applications a reality. In its latest edition, released yesterday, Pontoon now allows translating even when you're offline. See the full changelog for details.

There are many scenarios where offline localization is the only option our localizers have. A decent internet connection simply cannot be taken for granted in many parts of the world. If it's hard for you to believe that, visit any local tech conference. :-) Or, if you started localizing at home, you can now continue with localization on your daily commute to work. And vice versa.

The way it works is very simple. When Pontoon detects you no longer have a connection, it saves translations to localStorage instead of to the server. Once you get online again, the translations are stored to the server. In the meantime, connection-dependent functionality like History and Machinery is of course unavailable.
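
For the curious, the underlying pattern is roughly the following (a simplified sketch, not Pontoon's actual code; the storage key and the sendToServer helper are made up for illustration):

// Save a translation, falling back to localStorage when offline.
function saveTranslation(entity, translation) {
  if (navigator.onLine) {
    sendToServer({ entity: entity, translation: translation }); // hypothetical POST helper
  } else {
    // No connection: queue the translation locally until we reconnect.
    var queue = JSON.parse(localStorage.getItem('pending') || '[]');
    queue.push({ entity: entity, translation: translation });
    localStorage.setItem('pending', JSON.stringify(queue));
  }
}

// Once connectivity comes back, push everything that was queued.
window.addEventListener('online', function () {
  var queue = JSON.parse(localStorage.getItem('pending') || '[]');
  queue.forEach(sendToServer);
  localStorage.removeItem('pending');
});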

Offline mode was single-handedly developed by our new contributor Sandra Shklyaeva. She just joined the Mozilla community and has already fixed one of our oldest bugs. She's attacking the bugs everybody was pushing away. I can't wait to see what the future holds (shhhhh)!

Sandra has an interesting story on what got her attracted to Mozilla:

I was exploring some JS API on the developer.mozilla.org when I noticed pretty tabzilla on the top. I clicked it and my chrome became unresponsive completely XD. Maybe it was just a coincidence… Anyway, the tabzilla has caught my attention and that’s how I found out about Get Involved stuff in Mozilla.

If you also want to get involved, now you know where you can find us!

Karl Dubost: Web Compatibility in Japan

I'm living in Japan. And from time to time, I'm confronted with issues on the Japanese market when using Firefox on mobile (Firefox OS and Firefox for Android). The situation in Japan has similarities with the Chinese market. For example, many sites have been designed with old WebKit CSS properties only, such as flexbox. The sites have not been updated to the new set of properties.

We started our testing with a list of around 100 Japanese Web sites. This list needs to be refined and improved. After a first batch of testing one year ago, we ended up with a list of about 50 sites having some issues. Most of them have been tested against a Firefox OS User Agent aka something like User-Agent: Mozilla/5.0 (Mobile; rv:40.0) Gecko/40.0 Firefox/40.0 (the version number is irrelevant).

Here I'm making a summary of the issues to help us

  1. refine our future testing
  2. have a better understanding of the issues at stake

We currently have 51 bugs on Bugzilla (json) related to Web Compatibility issues on Japanese Web sites. Of these 51 bugs, 1 is a duplicate and 13 are resolved.

Type Of Issues

  • HTTP Redirection to a mobile domain based on User-Agent: HTTP header
  • JavaScript redirection to a mobile domain based on navigator.userAgent on the client side through window.location (see the sketch after this list)
  • Content customization based on User-Agent: or navigator.userAgent
  • Display of a banner to switch to a mobile version of the site based on User-Agent: or navigator.userAgent. Example: Asahi Web site
  • Receiving a mobile site with outdated WebKit CSS only properties
  • Site using a Web framework or JavaScript library which is exclusively compatible with a set of browsers. Example: Sencha on Nezu Museum. Not a lot can be done here.
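
To make the JavaScript redirection case concrete, the typical (problematic) pattern looks something like this (a generic sketch; the domain is made up):

// UA sniffing as commonly found on these sites: only recognized browsers
// get redirected to the mobile version.
if (/iPhone|Android.+Mobile/.test(navigator.userAgent)) {
  window.location = 'http://sp.example.jp/';
}
// Firefox OS and Firefox for Android don't match, so they are served the
// desktop site (or a broken page). A feature- or viewport-based check,
// or at least a sensible fallback, would be safer.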

Todo List For Better Testing

  • Most of these only scratch the surface, as we have usually tested only the home page. We need to try to test a couple of subpages as well
  • The sites need to be tested again with screenshots for:
  • Firefox OS User Agent (User-Agent: Mozilla/5.0 (Mobile; rv:40.0) Gecko/40.0 Firefox/40.0).
  • Firefox for Android User Agent (User-Agent: Mozilla/5.0 (Android; Mobile; rv:40.0) Gecko/40.0 Firefox/40.0).
  • Firefox for Android Modified User Agent (User-Agent: Mozilla/5.0 (Android 5.0; Mobile; rv:40.0) Gecko/40.0 Firefox/40.0) (this is a fake Firefox for Android UA, but some sites keep sending different versions or no version at all based on detection such as /.*Android ([0-9])\.([0-9])/ or match(/Android\s+(\d\.\d)/i). DO NOT DO THIS AT HOME, at least not without a sensible fallback)
  • A recent Android Chrome User Agent
  • A recent iOS Safari User Agent

Some Ideas and Things We Can Do Together

There are a couple of things which can be done and where you can help.

  • Translating this article in Japanese.
  • Advocacy around you.
  • Publish an article about Web Compatibility and Recipes in the Japanese press (Web Designing, Web Creators, etc.). I can help. Or maybe we could propose a monthly column in Web Designing on "let's fix it", where we would go through a known site issue and how to solve it.
  • Contact Web sites directly and point them to the bugs.
  • Share with us if you know a person or a friend of a friend working on these sites/companies. Talk about it around you! A CTO, a Web developer, someone who can help us negotiate a change on the site.
  • Report sites which are broken on webcompat.com. It helps.

Old WebKit CSS, Flexbox Nightmare

Of all these efforts in contacting Web sites, the flexbox story is maybe the most frustrating one. I have talked about it a couple of times: Fix Your Flexbox Web site and Flexbox old syntax to new syntax converter. The frustration comes from two things:

  1. It's very easy to fix.
  2. The sites are using the outdated first version of Flexbox, which was developed for WebKit only.

Switching to the new standard syntax would actually improve their customer reach and make them compatible with the future. It must also be frustrating for Apple and co., because it means they can't really retire the old code from their rendering engine without breaking sites. It's a chicken-and-egg situation. If you remove the support, you break sites but push sites to update. If you keep the support, sites don't get fixed, but users of other browsers can't use these sites. If they don't go to these sites, the browser doesn't show up in the stats, and so the site owners say: "We do not have to support this browser, nobody is using it on our site." Yes… you know. Running in circles.

In the end it forces other browser vendors to do dirty things to make the Web usable for everyone.

Fixing Your CSS - Easy!

Hallvord Steen has developed a quick tool to help you fix your CSS. It's not perfect, but it will remove a big part of the hard work of figuring out how to convert WebKit-only flexbox or gradients to standard ones supported everywhere.
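
To give an idea of how mechanical the conversion is, here is roughly how the old WebKit-only syntax maps to the standard one for a simple centered container (an illustrative sketch; real sites usually involve more properties):

/* Old, WebKit-only (2009 draft) syntax */
.container {
  display: -webkit-box;
  -webkit-box-orient: horizontal;
  -webkit-box-pack: center;
  -webkit-box-align: center;
}

/* Standard syntax, supported by all current engines */
.container {
  display: flex;
  flex-direction: row;
  justify-content: center;
  align-items: center;
}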

Conclusion

All of this is part of a much bigger effort for Web Compatibility in general. In the next couple of days, I will go through all the bugs we have already opened and check if there is anything new.

If we get the flexbox/gradient issues right, along with the User Agent sniffing, we will probably have solved 80% of the Web Compatibility issues in Japan.

Otsukare!

Cameron Kaiser: systemsetupusthebomb revisited: Vulnerable after all

UPDATE: Edited the below to clarify Takashi's baseline solution.

Previously, previously. tl;dr: Apple uses a setuid binary called writeconfig to alter certain system settings which on at least 10.7+ could be used to write arbitrary files as setuid root, allowing almost instantaneous privilege escalation -- i.e., your computer is now pwned. This was fixed in Yosemite 10.10.3, but not any previous version. Originally I had not been able to exploit my 10.4 systems in the same fashion, so despite the binary being there, I concluded the actual vulnerability did not exist.

Well, Takashi Yoshi has succeeded where I failed (I'm still pretty confident on Darwin Nuke, though), and I have confirmed it on my systems using his RootPipe Tester tool. Please note, before you run, that this tool specifically exploits the vulnerability to write a setuid root file to disk, which if he weren't a nice guy means he now owns your system. Takashi is clearly a good guy but with any such tool you may wish to get in the habit of building from source you've closely examined, which he provides. The systemsetupusthebomb vulnerability is indeed successful on all versions of OS X going back to at least 10.2.8.

Takashi's page also gives a simple workaround: in System Preferences, under Security, check "Require password to unlock each secure system preference" (and make sure the lock at the lower left is locked). Now any known use of the vulnerable tool will either fail or at least prompt you for a password. With this fix in place, the RootPipe Tester cannot function on my 10.4 systems.

If you're more paranoid, however, you can almost completely avoid the risk of an alternative future attack at the price of some system functionality. At its lowest level, the workaround for this vulnerability is straightforward in concept -- disable writeconfig or neuter it -- but has side effects, because if you monkey with writeconfig the system will lose the capability to control certain configuration profiles (in 10.4, this generally affects the Sharing pane in System Preferences; 10.5+, which specifically exposes systemsetup, may be affected in other ways) and may also affect remote administration capabilities. Takashi and I exchanged E-mails on two specific solutions. Both of these possible solutions will alter system functionality, in a hopefully reversible fashion, but a blown command may interfere with administering your computer. Read carefully.

One solution is to rename (or remove, but this is obviously more drastic) writeconfig to something else. Admittedly this works a bit too well. RootPipe Tester actually crashed, which may be useful to completely stop a malicious app in its tracks, but it also made System Preferences unstable and will likely do the same to any app expecting to use Admin.framework. Although 10.4 seemed to handle this a bit better, it too locked up the Sharing pane after banging on it a bit. However, you can be guaranteed nothing will happen in this configuration because it's not possible for it to occur -- apps looking for the victim ToolLiaison class won't be able to find it. Since I'm rarely in that panel, this is the approach I've personally selected for my own systems, but I'm also fully comfortable with the limitations. You can control this with two commands in Terminal on 10.4-10.6 (make sure you fixed the issue with sudo first!):

go to a safe state: cd /System/Library/PrivateFrameworks/Admin.framework/Versions/A/Resources/ ; sudo mv writeconfig noconfig
go to original state: cd /System/Library/PrivateFrameworks/Admin.framework/Versions/A/Resources/ ; sudo mv noconfig writeconfig

For added security, make noconfig a custom filename only you know so an attacker won't be easily able to find it in an alternate location ... or, if you're nucking futs, archive or delete it entirely. (Not recommended except for the fascistic maniac.)

Takashi found the second approach to be gentler, but it is slightly less secure: strip the setuid bits off. In this mode, the vulnerability can still be exploited to write arbitrary files, but as it lacks the setuid permission it cannot run as root and the file is only written as the current user (so no privilege escalation, just an unexpected file write). Applications that use Admin.framework simply won't work as expected; they shouldn't crash. For example, System Preferences will just "look at you" in the Sharing panel when you try to change or start a new system service -- nothing will happen. For many users, this will be the better option. Here are the Terminal commands for 10.4-10.6:

go to a safe state: cd /System/Library/PrivateFrameworks/Admin.framework/Versions/A/Resources/ ; sudo chmod u-s writeconfig
go to original state: cd /System/Library/PrivateFrameworks/Admin.framework/Versions/A/Resources/ ; sudo chmod u+s writeconfig

Choose one of these options. Most of the time, you should leave your system in the safe state. If you need to change Sharing or certain other settings with systemsetup or System Preferences, return to the original state, make the change, and return to the safe state.

Of course, one other option is to simply do nothing. This might be a surprising choice, but Takashi does make the well-taken point that this attack can only be perpetrated upon an administrative user where root is just your password away anyhow, and no implementation of this attack other than his runs on PowerPC. This isn't good enough for me personally, but his argument is reasonable, and if you have to do a lot of configuration changes on your system I certainly understand how inconvenient these approaches could be. (Perhaps someone will figure out a patch for System Preferences that does it for you. I'll leave that exercise to the reader.) As in many such situations, you alone will have to decide how much you're willing to put up with, but it's good to see other people are also working to keep our older Macs better protected on OS X.

Ob10.4Fx IonPower status report: 75% of V8 passing, interrupted briefly tonight to watch the new Star Wars trailer. I have cautious, cautious hope it won't suck, but J. J. Abrams, if you disappoint me, it will be for the last time (to paraphrase).

The Rust Programming Language Blog: Mixing matching, mutation, and moves in Rust

One of the primary goals of the Rust project is to enable safe systems programming. Systems programming usually implies imperative programming, which in turn often implies side-effects, reasoning about shared state, et cetera.

At the same time, to provide safety, Rust programs and data types must be structured in a way that allows static checking to ensure soundness. Rust has features and restrictions that operate in tandem to ease writing programs that can pass these checks and thus ensure safety. For example, Rust incorporates the notion of ownership deeply into the language.

Rust’s match expression is a construct that offers an interesting combination of such features and restrictions. A match expression takes an input value, classifies it, and then jumps to code written to handle the identified class of data.

In this post we explore how Rust processes such data via match. The crucial elements that match and its counterpart enum tie together are:

  • Structural pattern matching: case analysis with ergonomics vastly improved over a C or Java style switch statement.

  • Exhaustive case analysis: ensures that no case is omitted when processing an input.

  • match embraces both imperative and functional styles of programming: you can continue using break statements, assignments, et cetera, rather than being forced to adopt an expression-oriented mindset.

  • match “borrows” or “moves”, as needed: Rust encourages the developer to think carefully about ownership and borrowing. To ensure that one is not forced to yield ownership of a value prematurely, match is designed with support for merely borrowing substructure (as opposed to always moving such substructure).

We cover each of the items above in detail below, but first we establish a foundation for the discussion: What does match look like, and how does it work?

The Basics of match

The match expression in Rust has this form:

match INPUT_EXPRESSION {
    PATTERNS_1 => RESULT_EXPRESSION_1,
    PATTERNS_2 => RESULT_EXPRESSION_2,
    ...
    PATTERNS_n => RESULT_EXPRESSION_n
}

where each of the PATTERNS_i contains at least one pattern. A pattern describes a subset of the possible values to which INPUT_EXPRESSION could evaluate. The syntax PATTERNS => RESULT_EXPRESSION is called a “match arm”, or simply “arm”.

Patterns can match simple values like integers or characters; they can also match user-defined symbolic data, defined via enum.

The below code demonstrates generating the next guess (poorly) in a number guessing game, given the answer from a previous guess.

enum Answer {
    Higher,
    Lower,
    Bingo,
}

fn suggest_guess(prior_guess: u32, answer: Answer) {
    match answer {
        Answer::Higher => println!("maybe try {} next", prior_guess + 10),
        Answer::Lower  => println!("maybe try {} next", prior_guess - 1),
        Answer::Bingo  => println!("we won with {}!", prior_guess),
    }
}

#[test]
fn demo_suggest_guess() {
    suggest_guess(10, Answer::Higher);
    suggest_guess(20, Answer::Lower);
    suggest_guess(19, Answer::Bingo);
}

(Incidentally, nearly all the code in this post is directly executable; you can cut-and-paste the code snippets into a file demo.rs, compile the file with --test, and run the resulting binary to see the tests run.)

Patterns can also match structured data (e.g. tuples, slices, user-defined data types) via corresponding patterns. In such patterns, one often binds parts of the input to local variables; those variables can then be used in the result expression.

The special _ pattern matches any single value, and is often used as a catch-all; the special .. pattern generalizes this by matching any series of values or name/value pairs.

Also, one can collapse multiple patterns into one arm by separating the patterns by vertical bars (|); thus that arm matches either this pattern, or that pattern, et cetera.

These features are illustrated in the following revision to the guessing-game answer generation strategy:

struct GuessState {
    guess: u32,
    answer: Answer,
    low: u32,
    high: u32,
}

fn suggest_guess_smarter(s: GuessState) {
    match s {
        // First arm only fires on Bingo; it binds `p` to last guess.
        GuessState { answer: Answer::Bingo, guess: p, .. } => {
     // ~~~~~~~~~~   ~~~~~~~~~~~~~~~~~~~~~  ~~~~~~~~  ~~
     //     |                 |                 |     |
     //     |                 |                 |     Ignore remaining fields
     //     |                 |                 |
     //     |                 |      Copy value of field `guess` into local variable `p`
     //     |                 |
     //     |   Test that `answer` field is equal to `Bingo`
     //     |
     //  Match against an instance of the struct `GuessState`

            println!("we won with {}!", p);
        }

        // Second arm fires if answer was too low or too high.
        // We want to find a new guess in the range (l..h), where:
        //
        // - If it was too low, then we want something higher, so we
        //   bind the guess to `l` and use our last high guess as `h`.
        // - If it was too high, then we want something lower; bind
        //   the guess to `h` and use our last low guess as `l`.
        GuessState { answer: Answer::Higher, low: _, guess: l, high: h } |
        GuessState { answer: Answer::Lower,  low: l, guess: h, high: _ } => {
     // ~~~~~~~~~~   ~~~~~~~~~~~~~~~~~~~~~   ~~~~~~  ~~~~~~~~  ~~~~~~~
     //     |                 |                 |        |        |
     //     |                 |                 |        |    Copy or ignore
     //     |                 |                 |        |    field `high`,
     //     |                 |                 |        |    as appropriate
     //     |                 |                 |        |
     //     |                 |                 |  Copy field `guess` into
     //     |                 |                 |  local variable `l` or `h`,
     //     |                 |                 |  as appropriate
     //     |                 |                 |
     //     |                 |    Copy value of field `low` into local
     //     |                 |    variable `l`, or ignore it, as appropriate
     //     |                 |
     //     |   Test that `answer` field is equal
     //     |   to `Higher` or `Lower`, as appropriate
     //     |
     //  Match against an instance of the struct `GuessState`

            let mid = l + ((h - l) / 2);
            println!("lets try {} next", mid);
        }
    }
}

#[test]
fn demo_guess_state() {
    suggest_guess_smarter(GuessState {
        guess: 20, answer: Answer::Lower, low: 10, high: 1000
    });
}

This ability to simultaneously perform case analysis and bind input substructure leads to powerful, clear, and concise code, focusing the reader’s attention directly on the data relevant to the case at hand.

That is match in a nutshell.

So, what is the interplay between this construct and Rust’s approach to ownership and safety in general?

Exhaustive case analysis

…when you have eliminated all which is impossible, then whatever remains, however improbable, must be the truth.

– Sherlock Holmes (Arthur Conan Doyle, “The Blanched Soldier”)

One useful way to tackle a complex problem is to break it down into individual cases and analyze each case individually. For this method of problem solving to work, the breakdown must be collectively exhaustive; all of the cases you identified must actually cover all possible scenarios.

Using enum and match in Rust can aid this process, because match enforces exhaustive case analysis: every possible input value for a match must be covered by the pattern in at least one arm in the match.

This helps catch bugs in program logic and ensures that the value of a match expression is well-defined.

So, for example, the following code is rejected at compile-time.

fn suggest_guess_broken(prior_guess: u32, answer: Answer) {
    let next_guess = match answer {
        Answer::Higher => prior_guess + 10,
        Answer::Lower  => prior_guess - 1,
        // ERROR: non-exhaustive patterns: `Bingo` not covered
    };
    println!("maybe try {} next", next_guess);
}

Many other languages offer a pattern matching construct (ML and various macro-based match implementations in Scheme both come to mind), but not all of them have this restriction.

Rust has this restriction for these reasons:

  • First, as noted above, dividing a problem into cases only yields a general solution if the cases are exhaustive. Exhaustiveness-checking exposes logical errors.

  • Second, exhaustiveness-checking can act as a refactoring aid. During the development process, I often add new variants for a particular enum definition. The exhaustiveness check helps point out all of the match expressions where I only wrote the cases from the prior version of the enum type.

  • Third, since match is an expression form, exhaustiveness ensures that such expressions always either evaluate to a value of the correct type, or jump elsewhere in the program.

Jumping out of a match

The following code is a fixed version of the suggest_guess_broken function we saw above; it directly illustrates “jumping elsewhere”:

fn suggest_guess_fixed(prior_guess: u32, answer: Answer) {
    let next_guess = match answer {
        Answer::Higher => prior_guess + 10,
        Answer::Lower  => prior_guess - 1,
        Answer::Bingo  => {
            println!("we won with {}!", prior_guess);
            return;
        }
    };
    println!("maybe try {} next", next_guess);
}

#[test]
fn demo_guess_fixed() {
    suggest_guess_fixed(10, Answer::Higher);
    suggest_guess_fixed(20, Answer::Lower);
    suggest_guess_fixed(19, Answer::Bingo);
}

The suggest_guess_fixed function illustrates that match can handle some cases early (and then immediately return from the function), while computing whatever values are needed from the remaining cases and letting them fall through to the remainder of the function body.

We can add such special case handling via match without fear of overlooking a case, because match will force the case analysis to be exhaustive.

Algebraic Data Types and Structural Invariants

Algebraic data types succinctly describe classes of data and allow one to encode rich structural invariants. Rust uses enum and struct definitions for this purpose.

An enum type allows one to define mutually-exclusive classes of values. The examples shown above used enum for simple symbolic tags, but in Rust, enums can define much richer classes of data.

For example, a binary tree is either a leaf, or an internal node with references to two child trees. Here is one way to encode a tree of integers in Rust:

enum BinaryTree {
    Leaf(i32),
    Node(Box<BinaryTree>, i32, Box<BinaryTree>)
}

(The Box<V> type describes an owning reference to a heap-allocated instance of V; if you own a Box<V>, then you also own the V it contains, and can mutate it, lend out references to it, et cetera. When you finish with the box and let it fall out of scope, it will automatically clean up the resources associated with the heap-allocated V.)

The above enum definition ensures that if we are given a BinaryTree, it will always fall into one of the above two cases. One will never encounter a BinaryTree::Node that does not have a left-hand child. There is no need to check for null.

One does need to check whether a given BinaryTree is a Leaf or is a Node, but the compiler statically ensures such checks are done: you cannot accidentally interpret the data of a Leaf as if it were a Node, nor vice versa.

Here is a function that sums all of the integers in a tree using match.

fn tree_weight_v1(t: BinaryTree) -> i32 {
    match t {
        BinaryTree::Leaf(payload) => payload,
        BinaryTree::Node(left, payload, right) => {
            tree_weight_v1(*left) + payload + tree_weight_v1(*right)
        }
    }
}

/// Returns a tree that looks like:
///
///      +----(4)---+
///      |          |
///   +-(2)-+      [5]
///   |     |   
///  [1]   [3]
///
fn sample_tree() -> BinaryTree {
    let l1 = Box::new(BinaryTree::Leaf(1));
    let l3 = Box::new(BinaryTree::Leaf(3));
    let n2 = Box::new(BinaryTree::Node(l1, 2, l3));
    let l5 = Box::new(BinaryTree::Leaf(5));

    BinaryTree::Node(n2, 4, l5)
}

#[test]
fn tree_demo_1() {
    let tree = sample_tree();
    assert_eq!(tree_weight_v1(tree), (1 + 2 + 3) + 4 + 5);
}

Algebraic data types establish structural invariants that are strictly enforced by the language. (Even richer representation invariants can be maintained via the use of modules and privacy; but let us not digress from the topic at hand.)

Both expression- and statement-oriented

Unlike many languages that offer pattern matching, Rust embraces both statement- and expression-oriented programming.

Many functional languages that offer pattern matching encourage one to write in an “expression-oriented style”, where the focus is always on the values returned by evaluating combinations of expressions, and side-effects are discouraged. This style contrasts with imperative languages, which encourage a statement-oriented style that focuses on sequences of commands executed solely for their side-effects.

Rust excels in supporting both styles.

Consider writing a function which maps a non-negative integer to a string rendering it as an ordinal (“1st”, “2nd”, “3rd”, …).

The following code uses range patterns to simplify things, but also, it is written in a style similar to a switch in a statement-oriented language like C (or C++, Java, et cetera), where the arms of the match are executed for their side-effect alone:

fn num_to_ordinal(x: u32) -> String {
    let suffix;
    match (x % 10, x % 100) {
        (1, 1) | (1, 21...91) => {
            suffix = "st";
        }
        (2, 2) | (2, 22...92) => {
            suffix = "nd";
        }
        (3, 3) | (3, 23...93) => {
            suffix = "rd";
        }
        _                     => {
            suffix = "th";
        }
    }
    return format!("{}{}", x, suffix);
}

#[test]
fn test_num_to_ordinal() {
    assert_eq!(num_to_ordinal(   0),    "0th");
    assert_eq!(num_to_ordinal(   1),    "1st");
    assert_eq!(num_to_ordinal(  12),   "12th");
    assert_eq!(num_to_ordinal(  22),   "22nd");
    assert_eq!(num_to_ordinal(  43),   "43rd");
    assert_eq!(num_to_ordinal(  67),   "67th");
    assert_eq!(num_to_ordinal(1901), "1901st");
}

The Rust compiler accepts the above program. This is notable because its static analyses ensure both:

  • suffix is always initialized before we run the format! at the end of the function, and

  • suffix is assigned at most once during the function’s execution (because if we could assign suffix multiple times, the compiler would force us to mark suffix as mutable).

To be clear, the above program certainly can be written in an expression-oriented style in Rust; for example, like so:

fn num_to_ordinal_expr(x: u32) -> String {
    format!("{}{}", x, match (x % 10, x % 100) {
        (1, 1) | (1, 21...91) => "st",
        (2, 2) | (2, 22...92) => "nd",
        (3, 3) | (3, 23...93) => "rd",
        _                     => "th"
    })
}

Sometimes expression-oriented style can yield very succinct code; other times the style requires contortions that can be avoided by writing in a statement-oriented style. (The ability to return from one match arm in the suggest_guess_fixed function earlier was an example of this.)

Each of the styles has its use cases. Crucially, switching to a statement-oriented style in Rust does not sacrifice every other feature that Rust provides, such as the guarantee that a non-mut binding is assigned at most once.

An important case where this arises is when one wants to initialize some state and then borrow from it, but only on some control-flow branches.

fn sometimes_initialize(input: i32) {
    let string: String; // a dynamically-constructed string value
    let borrowed: &str; // a reference to string data
    match input {
        0...100 => {
            // Construct a String on the fly...
            string = format!("input prints as {}", input);
            // ... and then borrow from inside it.
            borrowed = &string[6..];
        }
        _ => {
            // String literals are *already* borrowed references
            borrowed = "expected between 0 and 100";
        }
    }
    println!("borrowed: {}", borrowed);

    // Below would cause compile-time error if uncommented...

    // println!("string: {}", string);

    // ...namely: error: use of possibly uninitialized variable: `string`
}

#[test]
fn demo_sometimes_initialize() {
    sometimes_initialize(23);  // this invocation will initialize `string`
    sometimes_initialize(123); // this one will not
}

The interesting thing about the above code is that after the match, we are not allowed to directly access string, because the compiler requires that the variable be initialized on every path through the program before it can be accessed. At the same time, we can, via borrowed, access data that may be held within string, because a reference to that data is held by the borrowed variable when we go through the first match arm, and we ensure borrowed itself is initialized on every execution path through the program that reaches the println! that uses borrowed.

(The compiler ensures that no outstanding borrows of the string data could possibly outlive string itself, and the generated code ensures that at the end of the scope of string, its data is deallocated if it was previously initialized.)

In short, for soundness, the Rust language ensures that data is always initialized before it is referenced, but the designers have strived to avoid requiring artificial coding patterns adopted solely to placate Rust’s static analyses (such as requiring one to initialize string above with some dummy data, or requiring an expression-oriented style).

Matching without moving

Matching an input can borrow input substructure, without taking ownership; this is crucial for matching a reference (e.g. a value of type &T).

The “Algebraic Data Types” section above described a tree datatype, and showed a program that computed the sum of the integers in a tree instance.

That version of tree_weight has one big downside, however: it takes its input tree by value. Once you pass a tree to tree_weight_v1, that tree is gone (as in, deallocated).

#[test]
fn tree_demo_v1_fails() {
    let tree = sample_tree();
    assert_eq!(tree_weight_v1(tree), (1 + 2 + 3) + 4 + 5);

    // If you uncomment this line below ...

    // assert_eq!(tree_weight_v1(tree), (1 + 2 + 3) + 4 + 5);

    // ... you will get: error: use of moved value: `tree`
}

This is not a consequence, however, of using match; it is rather a consequence of the function signature that was chosen:

fn tree_weight_v1(t: BinaryTree) -> i32 { 0 }
//                   ^~~~~~~~~~ this means this function takes ownership of `t`

In fact, in Rust, match is designed to work quite well without taking ownership. In particular, the input to match is an L-value expression; this means that the input expression is evaluated to a memory location where the value lives. match works by doing this evaluation and then inspecting the data at that memory location.

(If the input expression is a variable name or a field/pointer dereference, then the L-value is just the location of that variable or field/memory. If the input expression is a function call or other operation that generates an unnamed temporary value, then it will be conceptually stored in a temporary area, and that is the memory location that match will inspect.)

So, if we want a version of tree_weight that merely borrows a tree rather than taking ownership of it, then we will need to make use of this feature of Rust’s match.

fn tree_weight_v2(t: &BinaryTree) -> i32 {
    //               ^~~~~~~~~~~ The `&` means we are *borrowing* the tree
    match *t {
        BinaryTree::Leaf(payload) => payload,
        BinaryTree::Node(ref left, payload, ref right) => {
            tree_weight_v2(left) + payload + tree_weight_v2(right)
        }
    }
}

#[test]
fn tree_demo_2() {
    let tree = sample_tree();
    assert_eq!(tree_weight_v2(&tree), (1 + 2 + 3) + 4 + 5);
}

The function tree_weight_v2 looks very much like tree_weight_v1. The only differences are: we take t as a borrowed reference (the & in its type), we added a dereference *t, and, importantly, we use ref-bindings for left and right in the Node case.

The dereference *t, interpreted as an L-value expression, is just extracting the memory address where the BinaryTree is represented (since the t: &BinaryTree is just a reference to that data in memory). The *t here is not making a copy of the tree, nor moving it to a new temporary location, because match is treating it as an L-value.

The only piece left is the ref-binding, which is a crucial part of how destructuring bind of L-values works.

First, let us carefully state the meaning of a non-ref binding:

  • When matching a value of type T, an identifier pattern i will, on a successful match, move the value out of the original input and into i. Thus we can always conclude in such a case that i has type T (or more succinctly, “i: T”).

For some types T, known as copyable T (also pronounced “T implements Copy”), the value will in fact be copied into i for such identifier patterns. (Note that in general, an arbitrary type T is not copyable.)

Either way, such pattern bindings do mean that the variable i has ownership of a value of type T.

Thus, the bindings of payload in tree_weight_v2 both have type i32; the i32 type implements Copy, so the weight is copied into payload in both arms.

Now we are ready to state what a ref-binding is:

  • When matching an L-value of type T, a ref-pattern ref i will, on a successful match, merely borrow a reference into the matched data. In other words, a successful ref i match of a value of type T will imply that i has the type of a reference to T (or more succinctly, “i: &T”).

Thus, in the Node arm of tree_weight_v2, left will be a reference to the left-hand box (which holds a tree), and right will likewise reference the right-hand tree.

We can pass these borrowed references to trees into the recursive calls to tree_weight_v2, as the code demonstrates.

Likewise, a ref mut-pattern (ref mut i) will, on a successful match, borrow a mutable reference into the input: i: &mut T. This allows mutation and ensures there are no other active references to that data at the same time. A destructuring binding form like match allows one to take mutable references to disjoint parts of the data simultaneously.

This code demonstrates this concept by incrementing all of the values in a given tree.

fn tree_grow(t: &mut BinaryTree) {
    //          ^~~~~~~~~~~~~~~ `&mut`: we have exclusive access to the tree
    match *t {
        BinaryTree::Leaf(ref mut payload) => *payload += 1,
        BinaryTree::Node(ref mut left, ref mut payload, ref mut right) => {
            tree_grow(left);
            *payload += 1;
            tree_grow(right);
        }
    }
}

#[test]
fn tree_demo_3() {
    let mut tree = sample_tree();
    tree_grow(&mut tree);
    assert_eq!(tree_weight_v2(&tree), (2 + 3 + 4) + 5 + 6);
}

Note that the code above now binds payload by a ref mut-pattern; if it did not use a ref pattern, then payload would be bound to a local copy of the integer, while we want to modify the actual integer in the tree itself. Thus we need a reference to that integer.

Note also that the code is able to bind left and right simultaneously in the Node arm. The compiler knows that the two values cannot alias, and thus it allows both &mut-references to live simultaneously.

Conclusion

Rust takes the ideas of algebraic data types and pattern matching pioneered by the functional programming languages, and adapts them to imperative programming styles and Rust’s own ownership and borrowing systems. The enum and match forms provide clean data definitions and expressive power, while static analysis ensures that the resulting programs are safe.

For more information on details that were not covered here, such as:

  • how to say Higher instead of Answer::Higher in a pattern,

  • defining new named constants,

  • binding via ident @ pattern, or

  • the potentially subtle difference between { let id = expr; ... } versus match expr { id => { ... } },

consult the Rust documentation, or quiz our awesome community (in #rust on IRC, or in the user group).

(Many thanks to those who helped review this post, especially Aaron Turon and Niko Matsakis, as well as Mutabah, proc, libfud, asQuirrel, and annodomini from #rust.)

Sean McArthur: merit badge

Are you jealous of the badges that npm modules or rubygems have, showing off their latest published version? Of course you are. I was too. When looking at a repository, the version number may change as code is merged in, before it is published to crates.io. Now, instead of looking at a repo and wondering which version is the latest published one, that repo can add a crates.io badge.

Here’s hyper’s: crates.io

Besides wanting to be able to use badges like these for my own repositories, I also wanted a simple example of using hyper to create a simple app. Bonus points: it uses the hyper Server and Client together.

Go ahead, get your merit badge.

Air Mozilla: Participation at Mozilla

Participation at Mozilla The Participation Forum

Gervase Markham: Top 50 DOS Problems Solved: Whoops, I Deleted Everything

Q: I accidentally deleted all the files in the root directory of my hard disk for the second time this month. I managed to reinstall everything, but is there a way of avoiding the problem?

A: There are two approaches you could try, both of which have applications for other things too:

  • Modify the files so that they cannot be deleted without first explicitly making them deletable. You can do this with the DOS utility Attrib which was supplied with your system. … To protect the file use the command:

    ATTRIB +R filename

    The +R switch means “make this file read-only”.

  • Stop using the DEL command to delete files. Use a batch file instead which will prompt you before taking action.

    … <batch file code is given> …

    This batch file has a useful enhancement beyond the precautionary message. You can use it to specify multiple files, for example:

    DF *.BAK FRED.BAS ?.DOC

    With one command this would delete all .BAK files, FRED.BAS, and all .DOC files whose names begin with a single letter.

A delete command which takes multiple arguments – wow…

Gervase Markham: If You Like…

Air Mozilla: Kids' Vision - Mentorship Series

Kids' Vision - Mentorship Series Mozilla hosts Kids Vision Bay Area Mentor Series

Air Mozilla: Quality Team (QA) Public Meeting

Quality Team (QA) Public Meeting This is the meeting where all the Mozilla quality teams meet, swap ideas, exchange notes on what is upcoming, and strategize around community building and...

Air Mozilla: Product Coordination Meeting

Product Coordination Meeting Duration: 10 minutes This is a weekly status meeting, every Wednesday, that helps coordinate the shipping of our products (across 4 release channels) in order...

Air Mozilla: The Joy of Coding (mconley livehacks on Firefox) - Episode 10

The Joy of Coding (mconley livehacks on Firefox) - Episode 10 Watch mconley livehack on Firefox Desktop bugs!

Kim Moir: Mozilla pushes - March 2015

Here's March 2015's monthly analysis of the pushes to our Mozilla development trees. You can load the data as an HTML page or as a json file.

Trends
The number of pushes increased from those recorded in the previous month, with a total of 10943.

Highlights
  • 10943 pushes
  • 353 pushes/day (average)
  • Highest number of pushes/day: 579 pushes on Mar 11, 2015
  • 23.18 pushes/hour (highest average)

General Remarks
  • Try keeps having around 49% of all the pushes
  • The three integration repositories (fx-team, mozilla-inbound and b2g-inbound) account for around 26% of all the pushes.

Records
  • August 2014 was the month with the most pushes (13090 pushes)
  • August 2014 had the highest pushes/day average with 422 pushes/day
  • July 2014 had the highest average of "pushes-per-hour" with 23.51 pushes/hour
  • October 8, 2014 had the highest number of pushes in one day with 715 pushes 


Soledad Penades: Running a web server on the front-end

The introduction of TCP socket support in Firefox OS made it possible to run a web server from the front-end, all written in JavaScript. Think of having something similar to express.js… but running on a browser (because after all, Firefox OS is a superturbocharged browser).

Again, JS server superstar Justin d’Archangelo wrote an implementation of a web server that works on Firefox OS. It’s called fxos-web-server and it includes a few examples you can run.

None of the examples particularly fit my use case–I want to serve static content from a phone to other phones, and the examples were a bit more contrived. So I decided to build a simpler proof-of-concept example: catserver, a web server that serves a simple page with full-screen animated GIFs of cats.

Browserify

The first thing I wanted to do was to use Browserify “proper”, to write my app in a more modular way. For this I had to fork Justin’s original project and modify its package.json so it would let me require() the server instead of tucking its variable in the window globals :-) – sadly I have yet to send him a PR, so you will need to be aware of this difference. The dependency in package.json points to my fork for now:

"fxos-web-server": "git+https://github.com/sole/fxos-web-server.git"

Building (with gulp)

My example is composed of two “websites”. The first one is the web server app itself which is what is executed in the “server” device. The sources for this are in src.

The second website is what the server will transfer to devices that connect to it, so this actually gets executed in the client devies. This is where the cats are! Its contents are in the www folder.

Both websites are packaged in just one ZIP file and then installed onto the server device.

This build process also involves running Browserify first so I can get from node-style code that uses require() to a JavaScript bundle that runs on Firefox OS. So I’m using gulp to do all that. The tasks are in gulpfile.js.

Web server app

Thanks to Justin’s work, this is fairly simple. We create an HTTP server on port 80:

var HTTPServer = require('fxos-web-server');
var server = new HTTPServer(80);

And then we add a listener for when a request is made, so we can respond to it (and serve content to the connected client):

server.addEventListener('request', function(evt) {
      var request = evt.request;
      var response = evt.response;

      //... Decide what to send
});

The “decide what to send” part is a bit like writing nginx or express config files. In this case I wanted to:

  • serve an index.html file if we request a “directory” (i.e. a path that ends with “/”)
  • serve static content if we request a file (i.e. the path doesn’t end with “/”)

Before serving a file we need to tell the client what kind of content we’re sending to it. I’m using a very simple “extension to content type” function that determines MIME types based on the extension in the path. E.g. ‘html’ returns text/html.
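
It is something along these lines (a sketch; the actual function in the repository may differ and cover more types):

// Map a file extension to a MIME type; default to plain text.
function getContentType(path) {
  var types = {
    'html': 'text/html',
    'js': 'application/javascript',
    'css': 'text/css',
    'json': 'application/json',
    'png': 'image/png',
    'gif': 'image/gif',
    'jpg': 'image/jpeg'
  };
  var extension = path.split('.').pop().toLowerCase();
  return types[extension] || 'text/plain';
}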

Once we know the content type, we set it on the response headers:

response.headers['Content-Type'] = getContentType(fileToSend);

and use Justin’s shortcut function sendFile to send a file from our app to the client:

response.sendFile(fileToSend);

With the request handler set up, we can finally start the server!

server.start();

But… how does it even work?!

Welcome to the Hack of The Week!

When you call sendFile, what the server app does is this: it creates an XMLHttpRequest with responseType set to arraybuffer to load the raw contents of the resource. When the XMLHttpRequest emits the load event, the server takes the loaded data and sends it to the client. That’s it!
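
In other words, conceptually something like this (a rough sketch of the idea, not the library's actual code; response.send here stands in for whatever method fxos-web-server uses internally to write the bytes to the socket):

// Load the resource with XHR and flush the raw bytes to the client.
function sendFile(response, path) {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', path, true);
  xhr.responseType = 'arraybuffer'; // we want the raw bytes, not text
  xhr.addEventListener('load', function () {
    response.send(xhr.response); // hypothetical: write the ArrayBuffer to the socket
  });
  xhr.send();
}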

Naive! Simple! It works! (for simple cases)

Ways that this could be improved – AKA “wanna make this better?! this is your chance!”

As I mentioned above, right now this is very naive and assumes that the files will exist. If they don’t, well, horrible things will happen. Or you will get a 404 error. I haven’t tried it myself. I’m not sure. I’d say there is no error handling (yet).

The extension to content type function could be made into a module, and probably extended to know about more file types. It probably exists already, somewhere in npmlandia.

Another idea I had is that instead of loading the entire resource in memory and then flushing it down the pipe as sendFile does, we could use progress events on the XMLHttpRequest and feed the data to the client as we load it–so that the server device won’t run out of memory. I don’t know how we could find the length of the resource first; perhaps a HEAD request could work!

And finally we could try to serve each request in a worker, so that the server doesn’t block when responding to a large request. Am I going too far? I don’t even know if the TCP sockets work with workers, or if there are issues with that, or who knows!? This is unexplored territory, welcome to Uncertainty Land! :-D

Even extremer ways to get very… “creative”

What if you wrote a PHP parser that runs in JavaScript and then parsed .php files instead of just returning their contents as text/html?

I’m only half kidding, but you could maybe execute JS templates on the server. Forget security! You trust the content, right? ;-)

Another thing you could do is take advantage of the fact that the server is also a browser. So you can do browsersy things such as using Canvas to generate images, instead of having to load libraries that simulate canvas in node. Or you could synthesise web audio stuff on demand–maybe you could use an OfflineAudioWorker for extra non-blocking goodness! Or, if you want to go towards a relatively more boring direction, you could do DOM / text handling on a server that can deal with that kind of stuff natively.

With all the new platforms in which we can run Firefox OS, there are so many things we can do! Phone servers might be limited by battery, but a Raspberry PI running Firefox OS and connected to a power source can be an interesting platform with which to experiment with this kind of easy-to-write web servers.

But I saw NFC mentioned in the video!

Indeed! But I will cover that in another post, so we keep this one focused on the HTTP server :-)

Happy www serving!
