Asa Dotzler: thoughts on web performance and ad blockers

I often use an ad blocker with my web browser. I do this not because I hate seeing ads. I block ads because I can’t take the performance hit.

Running an ad blocker, or using Firefox’s tracking protection, makes the web responsive again and a pleasure to use. Sites load fast, navigation is smooth, everything is just better in terms of performance when the ads and their scripts are removed from the web.

I don’t like the idea, though, that I’m depriving lots of great independent sites (some of them run by friends) of their ad revenue. Unfortunately, ads have grown worse and worse over the last decade. They are now simply too much of a load, physically and cognitively, and the current state is unsustainable. Users are going to move to ad blockers if web sites and the big ad networks don’t clean up their act.

Sure, everyone moving to ad blockers would make the web feel speedy again, but it would probably mean we all lose a lot of great ad-supported content on the Web and that’s not a great outcome. One of the wonderful things about the web is the long-tail of independent content it makes available to the world — mostly supported by ads.

I think we can find a middle ground that sees ad-tech pull back to something that still generates reasonable returns but doesn’t destroy the experience of the web. I think we can reverse the flow of people off of the web into content silos and apps. But I don’t see that happening without some browser intervention. (Remember when browsers, Firefox leading the pack, decided pop-ups were a step too far? That’s the kind of intervention I’m thinking of.)

I’ve been thinking about what that could look like and how it could be deployed so it’s a win for publishers and users and so that the small and independent publishers especially don’t get crushed in the escalating battle between users and advertising networks.

Web publishers and readers both want sites to be blazing fast and easy to use. The two are very well aligned here. There’s less alignment around tracking and attention grabbing, but there’s agreement from both publishers and readers, I’m sure, that slow sites suck for everyone.

So, with this alignment on a key part of the larger advertising mess, let’s build a feedback loop that makes the web fast again. Browsers can analyze page load speed and perhaps bandwidth usage, figure out what part of that comes from the ads, and when it crosses a certain threshold warn the user with a dialog something like “Ads appear to be slowing this site. Would you like to block ads for a week?”

If deployed at enough scale, sites would quickly see a drop-off in ad revenue if their ads started slowing the site down too much. But unlike current ad-blockers, sites would have the opportunity and the incentive to fix the problem and get the users back after a short period of time.

This also makes the ad networks clearly responsible for the pain they’re bringing and gets publishers and readers both on the same side of the debate. It should, in theory, push ad networks to lean down and *still* provide good returns and that’s the kind of competition we need to foster.

What do you all think? Could something like this work to make the web fast again?

 

Yunier José Sosa Vázquez: Update on Firefox support for OS X

In a post on the blog about future Firefox versions, Mozilla has announced the end of support for OS X 10.6, 10.7 and 10.8 next August, when a new update of the browser is released. After that, Firefox will continue to work on those platforms, but they will no longer receive new features or security fixes.

The main reason Mozilla has made this decision is that Apple itself has ended support for those platforms; since they no longer receive security updates, they become targets for attack, are highly dangerous, and put users' security at risk. Mozilla strongly recommends that everyone running these obsolete operating systems upgrade to a version of OS X currently supported by Apple.

For its part, Firefox ESR 45 will continue to support OS X 10.6, 10.7 and 10.8 until mid-2017, which should give Firefox fans enough time to migrate to secure platforms.

If you are a fan of Apple products and own an iPhone, iPad or iTouch, you should know that Firefox is also available for them and can be downloaded from the Apple Store.


Daniel Stenberg: everybody runs this code all the time

I was invited to talk about curl at the recent FOSS North conference in Gothenburg on May 26th. It was the first time the conference ran, but I think it went smoothly and the ~110 visitors seemed to have a good time. It was a single track and there was a fairly good and interesting mix of speakers and subjects, I think. They’re already planning to make it return again in spring 2017, so if you’re into FOSS and you’re in the Nordic region, consider this event next year…

I took on the subject of talking about my hacker ring^W^Wcurl project insights. Here’s my slide set:

At the event I sat down and had a chat with Simon Campanello, a reporter at IDG Techworld here in Sweden who subsequently posted this article about curl (in Swedish) and how our code has ended up getting used so widely.

photo of me from the Techworld article

Tantek Çelik: Tomorrowland: A Change Of Perspective & A Flight To Paris

One Saturday morning last August I got a ride with some friends from San Francisco to Mill Valley for the weekly San Francisco Running Company (SFRC) trail run. My favorite route is the ~7 mile round trip to Tennessee Valley Beach and back.

The SFRC crowd tends to be pretty quick, both those from November Project SF and other regulars. I kept up with the beach group most of the way but slowly fell behind as we got closer to the shore. I reached the beach just as everyone else was turning around from the sand. I still wanted to go touch the Pacific.

Running down to the surf by myself brought a lot of things to mind. I was inspired by the waves & rocks to try a handstand on the rocky black sand.

Tantek doing a handstand on Tennessee Valley Beach.

I didn’t think anyone was watching, but not everyone had run back right away, and my friend Ali caught me handstanding at a distance with her iPhone 6.

I held it for a split second, long enough to feel the physical shift of perspective, and also gain a greater sense of the possible, of possibilities. Being upside down, feeling gravity the opposite of normal, makes you question the normal, question dominant views, dominant forces. The entire run back felt different. Different thoughts, different views of the trail. I stopped and took different photos.

A bridge along the Tennessee Valley trail.

They say never look back.

I say reflection is a source of insight, wisdom, and inspiration.

...

After returning to the SFRC store, we promptly drove back to SF and grabbed a quick bite at the Blue Front Cafe.

I went home to shower, change, pack, and take a car to the airport. Made it to the gate just as my flight started boarding and found a couple of my colleagues also waiting for the same direct flight to Paris.

After boarding and settling into my seat, the next moments were a bit fuzzy. I don’t remember if they fed us right after take-off or not, or if I took a nap immediately, or if I started to look through the entertainment options.

Whether I napped first or not, I do distinctly remember scrolling through new movie releases and coming upon Tomorrowland. When I saw the thumbnail of the movie poster I got a very different impression from when I saw the Metreon marquee three months earlier.

Tomorrowland movie poster.

I decided to give it a try.

Mitchell Baker: What is the Role of Mozilla’s Executive Chair?

What does the Executive Chair do at Mozilla? This question comes up frequently in conversations with people inside and outside of Mozilla. I want to answer that question and clearly define my role at Mozilla. The role of Executive Chair is unique and entails many different responsibilities. In particular at Mozilla, the Executive Chair is something more than the well understood role of “Chairman of the Board.” Because Mozilla is a very different sort of organization, the role of Executive Chair can be highly customized and personal. It is not generally an operational role although I may initiate and oversee some programs and initiatives.   

In this post I’ll outline the major areas I’m focused on. In subsequent posts I’ll go into more detail.

#1. Chair the Board

This portion of my role is similar to the more traditional Chair role. At Mozilla in this capacity I work on mission focus, governance, development and operation of the Board and the selection, support and evaluation of the most senior executives. In our case these are Mark Surman, Executive Director of Mozilla Foundation and Chris Beard, CEO of Mozilla Corporation. Less traditionally, this portion of my role  includes an ongoing activity I call “weaving all aspects of Mozilla into a whole.”  Mozilla is an organizationally complex mixture of communities, legal entities and programs.  Unifying this into “one Mozilla” is important.

#2. Represent Mozilla and our mission to Mozillians and the world

This is our version of the general “public spokesperson” role. In this part of my role, I speak at conferences, events and to the media to communicate Mozilla’s message. The goal of this portion of my role is to grow our influence and educate the world on issues that are important to us. This role is particularly important as we transform the company and the products we create, and as we refocus on entirely new challenges to the Open Web, interoperability, privacy and security.

#3. Reform the ways in which Mozilla values are reflected in our culture, management and leadership

This is the core of the work that is intensely customized for Mozilla. It is an area where Mozilla looks to me for leadership, and for which I have a unique vision. Mozilla’s core DNA is a mix of the open source/free software movement and the open architecture of the Internet. We were born as a radically open, radically participatory organization, unbound by traditional corporate structure. We played a role in bringing the “open” movement into mainstream consciousness. How does and how can this DNA manifest itself today? How do we better integrate this DNA into an organization of our current size?  Needless to say, I work hand-in-hand with Chris Beard and Mark Surman in these areas.

#4. Strategically advise Mozilla’s technology and product direction

I’ve played this role for just over 20 years now, working closely with Mozilla’s technologists, individual contributors and leadership. I help us take new directions that might be difficult to chart. And in this role I can take risks that may make us uncomfortable in the shorter term but yield us great value over the longer term. By helping to point us towards the cutting edge of our technology, I reinforce the importance of change and adaptation in how we express our values.

#5. Help Mozilla ideas expand into new contexts

I’ve been working with Mark Surman on this topic since he joined us. We’ve expanded our mission and programs into digital literacy and education, journalism, science, women and technology and now the Mozilla Leadership Network. I have also championed Mozilla’s expanded efforts in public policy. I continue to look at how we can do more (and am always open to suggestions).

So these are the different parts of my role. Hopefully it provides you with a framework for understanding what I do and how I see myself interacting with Mozilla. I’m planning to write a series of posts describing the work underway in these areas. Please send comments or feedback or questions to “office of the chair mailing list.” And thanks for your interest in Mozilla.

Mitchell Baker: All Change, All the Time

Mozilla is in a time of change.  For those of us deeply involved with Mozilla, we see this everywhere we turn.  Projects are changing, organizational structures are changing, product direction is under constant discussion, process is changing. Even more disorienting, the changes often don’t seem to be well connected.  And even within a particular group the changes are not always consistent — sometimes there is change after change in direction within the same group.

This amount of change can feel bad.  For many, it’s deeply destabilizing.  Nevertheless, it is required.  We need to be doing this, and the degree of change will not slow down anytime soon.  At least I hope not.  We need massive change to build openness into the next era of the internet and online life.  A working principle for us these days is that  change will be a constant.

Ongoing change is the iteration process through which we test ideas, iterate, change direction (“pivot” in Silicon Valley lingo), take what works, discard what doesn’t and repeat.  On the product side, this is classic Silicon Valley product and idea exploration.  We are also doing the same thing with our tools, engineering processes, organizational structures and skillsets —everything from how we build our software, to our toolchain, to how volunteers and employees interact, to how we build our values into software.

This process to test, iterate, magnify what works and drop what doesn’t can feel inconsistent.  If something doesn’t work well today, then tomorrow we won’t be doing it and we’ll be trying something else.  The new idea you just got used to is gone and now it’s time to talk about a different idea.  And next week there might be yet another focus.

This is where Mozilla is today.  For example, the Firefox team is testing ideas for what keeps Firefox great today.  I like their ideas today better than their ideas nine months ago, or six months ago.  (They probably do too 🙂 .)  That doesn’t mean the previous ideas were bad.  It means they are trying things, iterating, taking what they learn and trying new things.

We should expect this everywhere for a while.  It’s how we design the future.

We  can do a better or worse job of execution during this time of change.  There is a whole genre of literature on start-up iteration, “change management,” and leadership. So here I’ll note two things I find particularly important for Mozilla.

Mozillians identify with the mission, and with the sense of more people having more ability to affect our own lives.  As a result, building grass-roots and distributed understanding and engagement is extremely important and effective at Mozilla.  That in turn requires a bunch of things, which we can dive into shortly.  The key will be finding ways to do this that are also quick and nimble and allow rapid iteration, especially when we don’t have a consensus.

Second, change that is happening *to* me feels different than change that is happening *by* me.  Given Mozilla’s goals of distributed leadership and engagement, I predict that making the change process itself inclusive and empowering is also particularly important and effective at Mozilla.

That’s more change.  I’m excited about these kinds of activities; they should make life better for all of us.  I’ll be working on this, with the work on decision-making as a first step.

Simultaneously I encourage each of us to think about our own adaptive capacity — how do we assess and respond to change, both internal and external?  How do our teams, and how does and should Mozilla?  This is a rich topic for exploration and culture development.

Mitchell Baker: Pilot Working Group on Decision-Making

Decision-Making is hard at Mozilla.  We often face inertia and ambiguity regarding who owns a decision.  It can feel as if many people can say “no” and it’s hard to figure out who can and will say “yes.”  Our focus on individuals as empowered leaders can make it hard to understand how to make a decision that is accepted as legitimate.  This impacts everything from deciding what platforms or tools to use, to saying yes or no to projects, to the role of volunteers and supporters in our work.  I regularly hear that decision-making at Mozilla is hard, and I’ve experienced it myself.

A few months back I decided to get involved directly.  I decided to develop case studies of how to make decisions well, using the decision-making model I presented during Mozilla’s Portland gathering as a guide.  The slide is below.

If you’d like to spend 20 minutes or so listening to this part of the presentation go to minute 1:35 here.

[Slide: the decision-making model]

I started by recruiting Jane to do the project management and make this a more consistent project than I would do on my own.  The next step was asking a few people if they have decisions they are struggling to get made and if they would be interested in piloting this plan with me.  I started with people who know me well enough to be comfortable with give-and-take.  In other words, a few people who aren’t so intimidated by my role and can tell me things I don’t really want to hear :-).  I’ll widen the circle over time.  There is also a set of people I think of as “standing members.”  This latter group currently includes:

   — Jane Finette, for project design and management;
   — George Roter, for his participation focus;
   — Larissa Shapiro, to help build inclusion of diverse voices into our decision-making; and
   — me.

We’ve learned a few things already, even though we’re not far enough to have a case study yet.  Here’s what we’ve learned so far.

  1. The decision-making model is missing at least one key element.  It’s so key that we currently call it “Item 0.”  (We’ll probably rename it Item 1 at some point.)  This is: identify the actual decision that needs to be made.  This sounds obvious, but the actual decision may be very different from the question as first presented.  For example, in one case the initial question was something like “what’s the most cost-effective way to do X?” but the real decision turned out to be something like “how important is this product feature?”
  2.  We need a shared understanding of what “community” means and how we think about our volunteers and community when we make decisions.  We need a way of doing this that respects the work being done and that simultaneously allows us to decide that not every activity should be supported forever.  George Roter is our point person for this.
  3.  We decided to create something I dubbed the “Map of the Land Mines.”  This will start as a list of questions that tend to stop forward momentum and leave us paralyzed.  Our starting list here is:

— How public should we be? When is open appropriate / when is it not?
— Are we collecting data?
— Is it important we choose open source for our tools?
— How do we think about supporters and volunteers?

Of course we’ll want more than a list.  We’ll want tools for how to approach these topics so we can all get unstuck and stay unstuck.

We’ll meet every 2 weeks or so.  Steps 3, 4 and 6 of the decision-making model are about communication, participation and documentation.  So expect to hear more, both about the process and the decision areas we’re using to build case studies.

Comments and suggestions are welcome, either here or via other channels.

Karl Dubost: [worklog] Understanding Gecko code to better understand Webcompat bugs.

Last weekend, I went to see the work of a forestry company in the Kanagawa mountains, from cutting down trees, to managing the forest resources, to distributing them on the market. Sustainability is key for small structures. To achieve that sustainability, you need to take care of the health of the ecosystem. I relate that a lot to the Web ecosystem and its zillions of small Web agencies versus big companies such as Facebook, Google, etc. Some need the Web to be sustainable; the others have the power to wipe out the entire landscape and replace the Web with something for their own needs. Web standards are key for the Web ecosystem. Not caring for them because “it just works” creates a Trojan horse for the big companies.

Tune of the week: 殯の森

Webcompat Life

Progress this week:

Today: 2016-05-30T07:35:11.615533
355 open issues
----------------------
needsinfo       4
needsdiagnosis  111
needscontact    29
contactready    43
sitewait        154
----------------------

You are welcome to participate

London agenda.

I wrote a couple of blog posts

The CSS rounding issue was interesting for me because it helped me dive into a place outside my comfort zone, aka the C++ code of Gecko. I have tried to fix a couple of issues here and there in the past, but nothing really too dangerous. Each time I ventured through the code, I always wondered where things like the 1/60 for the CSS width that I explained this week are documented. Maybe it's not necessary, but I have the feeling it would help beginners. Or maybe these are things that should be documented in the source code. Not sure.

Webcompat issues

(a selection of some of the bugs worked on this week).

  • A strange issue on the Chromium blog where some images appear and some don't. Still investigating.
  • Found another instance of an SVG filter failing on an image. Currently, if there is a mistake in the filter, the image is not displayed in Firefox but is displayed in Safari.
  • A lot of false bug reports this week.

WebCompat.com dev

Reading List

Follow Your Nose

TODO

  • Document how to write tests on webcompat.com using test fixtures.
  • ToWrite: Amazon prefetching resources with <object> for Firefox only.

Otsukare!

Alex Vincent: Associate’s Degree in Computer Science (Emphasis in Mathematics)

Hi, all.  I know I’ve been really quiet lately, because I’ve been really busy.  My full-time job is continuing along well, and I just completed an Associate in Arts degree, majoring in Computer Science with an emphasis in Mathematics, at Chabot College.

I have an online music course to take to complete my lower-division education requirements, and then I’ll be starting in the fall quarter at California State University, East Bay on a Bachelor of Science degree, also majoring in Computer Science.

No, I don’t have any witty pearls of wisdom to offer in speeches, so I will defer to the expert in commencement speeches, Baz Luhrmann:

Eitan Isaacson: Here is a web interface for switching on your light

Like I mentioned in a previous post, I wanted to try out a more hackable wifi plug. I got a Kankun “smart” plug. Like the other one I have, the software is horrible. The good news is that they left SSH enabled on it.

Once I got SSH access, I got to work on a web app for it. I spent an hour styling a pretty button, look:

Screen shot of button in off state

And when you turn it on it looks like this:

Screen shot of button in on state

Anyway, if you want to have a pretty webby button for controlling your wifi socket you can grab it here.


Jeff Walden: Using Yubikeys with Fedora 24, for example for Github two-factor authentication

My old laptop’s wifi went on the fritz, so I got a new Lenovo P50. Fedora 23 wouldn’t work with the Skylake architecture, so I had to jump headfirst into the Fedora 24 beta.

I’ve since hit one new issue: Yubikeys wouldn’t work for FIDO U2F authentication. Logging into a site using a Yubikey (inserting a Yubikey USB device and tapping the button when prompted) wouldn’t work. Attempting this on Github would display the error message, “Something went really wrong.” Nor would registering Yubikeys with sites work. On Github, attempting to register Yubikeys would give the error message, “This device cannot be registered.”

Interwebs sleuthing suggests that Yubikeys require special udev configuration to work on Linux. The problem is that udev doesn’t grant access to the Yubikey, so when the browser tries to access the key, things go Bad. A handful of resources pointed me toward a solution: tell udev to grant access to the device.

As root, go to the directory /etc/udev/rules.d. It contains files with names of the form *.rules, specifying rules for how to treat devices added and removed from the system. In that directory create the file 70-u2f.rules. Its contents should be those of 70-u2f.rules, from Yubico’s libu2f-host repository. (Most of this file is just selecting various Yubikey devices to apply rules against. The important part of this file is the TAG+="uaccess" ending the various lines. This adds the “uaccess” tag to those devices; systemd-logind recognizes this tag and will grant access to the device to the current logged-in user.) Finally, run these two commands to refresh udev state:

udevadm control --reload
udevadm trigger

Yubikeys should now work for authentication.

These steps work for me, and they appear to me a sensible way to solve the problem. But I can’t say for sure that they’re the best way to solve it. (Nor am I sure why Fedora doesn’t handle this for me.) If anyone knows a better way, that doesn’t involve modifying the root file system, I’d love to hear it in comments.

Mike Hoye: Developers Are The New Mainframes

This is another one of those rambling braindump posts. I may come back for some fierce editing later, but in the meantime, here’s some light weekend lunacy. Good luck getting through it. I believe in you.

I said that thing in the title with a straight face the other day, and not without reason. Maybe not good reasons? I like the word “reason”, I like the little sleight-of-hand it does by conflating “I did this on purpose” and “I thought about this beforehand”. It may not surprise you to learn that in my life at least those two things are not the same at all. In any case this post by Moxie Marlinspike was rattling around in the back of my head when somebody asked me on IRC why it’s hard-and-probably-impossible to make a change to a website in-browser and send a meaningful diff back to the site’s author, so I rambled for a bit and ended up here.

This is something I’ve asked for in the past myself: something like dom-diff and dom-merge, so site users could share changes back with creators. All the “web frameworks” I’ve ever seen are meant to make development easier and more manageable, but at the end of the day what goes over the wire is a pile of minified angle-bracket hamburger that has almost no connection to the site “at rest” on the filesystem. The only way to share a usable change with a site author, if it can be done at all, is to stand up a containerized version of the entire site and edit that. This disconnect between the scale of the change and the work needed to make it is, to put it mildly, a huge barrier to somebody who wants to correct a typo, tweak a color or add some alt-text to an image.

I ranted about this for a while, about how JavaScript has made classic View Source obsolete and how even if you had dom-diff and dom-merge you’d need a carefully designed JS framework underneath designed specifically to support them, and how it makes me sad that I don’t have the skill set or free time to make that happen. But I think that if you dig a little deeper, there are some cold economics underneath that whole state of affairs that are worth thinking about.

I think that the basic problem here is the misconception that federation is a feature of distributed systems. I’m pretty confident that it’s not; specifically, I believe that federated systems are a byproduct of computational scarcity.

Building and deploying federated systems has a bunch of hard tradeoffs around development, control and speed of iteration that people are stuck with when computation is so expensive that no single organization can have or do enough of it to give a service global reach. Usenet, XMPP, email and so forth were products of this mainframe-and-minicomputer era; the Web is the last and best of them.

Protocol consensus is hard, but not as hard or expensive as a room full of $40,000 or $4,000,000 computers, so you do that work and accept the fact that what you gain in distributed stability you lose in iteration speed and design flexibility. The nature of those costs means the pressure to get it pretty close to right on the first try is very high, because real opportunities to revisit will be rare and costly. You’re fighting your own established success at that point, and nothing in tech has more inertia than a status quo that its supporters think is good enough. (See also: how IPV6 has been “right around the corner” for 20 years.)

But that’s just not true anymore. If you need a few thousand more CPUs, you twiddle the dials on your S3 page and go back to unified deployment, rapid experimental iteration and trying to stay ahead of everyone else who’s doing the same. That’s how WhatsApp can deploy end to end encryption with one software update, just like that. It’s how Facebook can update a billion users’ experiences whenever they feel like it, and presumably how Twitter does whatever the hell Twitter’s doing this week. They don’t ask permission or seek consensus because they don’t have to; they deploy, test and iterate.

So the work that used to enable, support and improve federated systems now mostly exists where domain-computation is still scarce and expensive: the development process itself. Specifically, the inside of developers’ heads: developers who stubbornly, and despite our best efforts, remain expensive, high-maintenance and relatively low-bandwidth, with lots of context and application-reasoning locked up in their heads and poorly distributed.

Which is to say: developers are the new mainframes.

Right now the great majority of what they’re “connected” to from a development-on-device perspective are de-facto dumb terminals. Apps, iPads, Android phones. Web pages you can’t meaningfully modify, for values of “meaningful” that involve upstreaming a diff. From a development perspective those are the endpoints of one-way transmissions, and there’s no way to duplex that line to receive development-effort back.

So, if that’s the trend – that is, if in general centralized-then-federated systems get reconsolidated in socially-oriented verticals, (and that’s what issue trackers are when compared to mailing lists) – then development as a practice is floating around the late middle step, but development as an end product – via cheap CPU and hackable IoT devices – that’s just getting warmed up. The obvious Next Thing in that space will be a resurgence of something like the Web, made of little things that make little decisions – effectively distributing, commodifying and democratizing programming as a product, duplexing development across those newly commodified development-nodes.

That’s the real revolution that’s coming, not the thousand-dollar juicers or the bluetooth nosehair trimmers, but the mess of tiny hackable devices that start to talk to each other via decentralized, ultracommodified feedback loops. We’re missing a few key components – bug trackers aren’t quite source-code-managers or social-ey, IoT build tools aren’t one-click-to-deploy and so forth, but eventually there will be a single standard for how these things communicate and run despite everyone’s ongoing efforts to force users into the current and very-mainframey vendor lock-in, the same way there were a bunch of proprietary transport protocols before TCP/IP settled the issue. Your smarter long-game players will be the ones betting on JavaScript to come out on top there, though it’s possible there will be other contenders.

The next step will be the social one, though “tribal” might be a better way of putting it – the eventual recentralization of this web of thing-code into cultural-preference islands making choices about how they speak to the world around them and the world speaks back. Basically a hardware scripting site with a social aspect built in, communities and trusted sources building social/subscriber model out for IoT agency. What the Web became and is still in a lot of ways becoming as we figure the hard part – the people at scale part, out. The Web of How Stuff Works.

Anyway, if you want to know what the next 15-20 years will look like, that’s the broad strokes. Probably more like 8-12, on reflection. Stuff moves pretty quick these days, but like I said, building consensus is hard. The hard part is always people. This is one of the reasons I think Mozilla’s mission is only going to get more important for the foreseeable future; the Web was the last and best of the federated systems, worth fighting for on those grounds alone, and we’re nowhere close to done learning everything it’s got to teach us about ourselves, each other and what it’s possible for us to become. It might be the last truly open, participatory system we get, ever. Consensus is hard and maybe not necessary anymore, so if we can’t keep the Web and the lessons we’ve learned and can still learn from it alive long enough to birth its descendants, we may never get a chance to build another system like it.

[minor edits since first publication. -mhoye]

Mozilla WebDev Community: Django, Pipeline, and Gulp

Bedrock, the code behind www.mozilla.org, is a very large Django project. It is mostly large due to the volume and diversity of independent pages it serves. These pages come with a surprising amount of static media (css, js, images, fonts, etc.). So, any system that we use to deal with said media should be efficient in order to keep our development servers fast.

We like django-pipeline for managing our static media in production. It does a great job of bundling, minifying, and compressing our css and js. When using it in a development environment, however, it does not scale well. The issue is that it does not watch for changes to your files, so all it can do is copy them all from their source to the static directory on every page load. For a reasonable number of files this is probably fine, but as I said, ours is not that. This is exacerbated in slow I/O environments like Docker on a non-Linux system (like OSX). We’ve not been able to set up an acceptable Docker-based local dev environment yet because it can literally take several minutes to render the home page.

Due to all of the issues noted above, we’ve been looking for other ways of handling static media. We’ve considered a few times moving to a completely nodejs-based system that would be entirely independent of the Django side, and may still do that some day, but the problem has always been scope and impact. Again, because the project is so large, making sweeping changes that affect workflow and all static files can both take a lot of time and be very disruptive when they land. So for a long time we figured we were stuck. But recently a conversation started in IRC about being able to just disable django-pipeline’s copying of files. If we could do that, we could use gulp-watch to manage these files much more quickly and efficiently while they are being edited, and still get the benefits of django-pipeline for production. It turned out that someone else already had this idea and mostly we just needed to upgrade django-pipeline.

After that it was a simple matter of adding a task to our Gulpfile:

// These snippets assume the usual requires near the top of gulpfile.js:
// var gulp = require('gulp'), watch = require('gulp-watch'),
// and spawn = require('child_process').spawn (used further below).
gulp.task('media:watch', function () {
    // Copy files from ./media to ./static as they change on disk.
    return gulp.src('./media/**/*')
        .pipe(watch('./media/**/*', {
            'verbose': true
        }))
        .pipe(gulp.dest('./static'));
});

But it was still a bit odd now having to have two shells open, one for the gulp task and another for the Django dev server. So we did a little more gulp magic and now have a single command to start up both the file watching and the Django server that combines the output of both in a single terminal.

gulp.task('serve:backend', function () {
    // Run the Django dev server as a child process, inheriting stdio so its
    // output is interleaved with gulp's in the same terminal.
    var devServerPort = process.env.PORT || 8000;
    process.env.PYTHONUNBUFFERED = 1;
    process.env.PYTHONDONTWRITEBYTECODE = 1;
    spawn('python', ['manage.py', 'runserver', '0.0.0.0:' + devServerPort], {
        stdio: 'inherit'
    });
});

gulp.task('default', function() {
    // Start the Django server and the media watcher together.
    gulp.start('serve:backend');
    gulp.start('media:watch');
});

You can see the full gulpfile.js in our repo on Github if you’d like. It’s a simple but very effective change and has made a very noticeable improvement in the speed of the local development server. And now we hope that we can finally complete and recommend Docker as the default setup for local development on bedrock.

Yunier José Sosa Vázquez: Meet the featured add-ons for May

If you like to look after your security while browsing the Internet, you will be pleased this month, because we bring you an add-on that blocks the third-party scripts tied to the site you are visiting, lets you easily see who they are, and lets you allow or deny their execution. On the other hand, if you want to download things for later, here we present the solution. You will also learn about old Firefox features that come back in the form of extensions, and about the new design AMO is showing off.

AMO debuts a new design

Mozilla's add-ons gallery (AMO, from its initials for Addons.Mozilla.Org) recently changed its look and now sports a new, attractive design. As we can see in the images shown below, the new interface is simpler and cleaner than the previous one. In this way, AMO renews its image and offers users greater quality and modernity.

AMO home page. Viewing the details of an add-on. Browsing a category.

Pick of the month: uBlock Origin by Raymond Hill.

It is an efficient, lightweight blocker of requests, based on filters applied to the current site. It also speeds up page loading and lets you view the websites linked from the page in a panel located in Firefox's toolbar.

uBlock Origin's main interface. uBlock Origin showing the blocked sites.

Install uBlock Origin »

We also recommend

Download Plan by Abraham, a close acquaintance of ours.

Schedule downloads to run at set times when network usage is lower.

Download Plan interface. Download Plan options.

Install Download Plan »

Emoji Keyboard by Harry N.

Keeps today's most widely used emoticons within easy reach for sending fun messages.

Emoji Keyboard panel. Searching for an emoji.

Install Emoji Keyboard »

Tab Groups by Quicksaver.

Lets you organize your tabs into different groups. This is Firefox's old Panorama feature brought back as an add-on.

Viewing tab groups. Searching for a tab.

Install Tab Groups »

Nominate your favorite add-on

We would love for you to be part of the process of selecting the best add-ons for Firefox, and we would like to hear from you. Don't know how? Just send an email to amo-featured@mozilla.org with the add-on's name or its installation file, and the members will evaluate your recommendation.

Gervase Markham: Thank You For Trying, Switzerland

Various bits of the TiSA (Trade in Services Agreement, yet another multilateral trade treaty) were leaked recently. On the very first page of General Provisions:

[CH propose; AU/CA/CL/TW/CO/EU/IL/JP/MX/NZ/PE oppose; MU/PK considering:
Without prejudice to the policy objectives and legislation of the Parties in areas such as the protection of intellectual property, the protection of privacy and of the confidentiality of personal and commercial data, the protection of consumers and the protection and promotion of the diversity of cultural expressions (including through public funding and assistance) and fiscal measures.]

So the Swiss said “Hey, wouldn’t it be good if we had a thing at the start that said that this treaty doesn’t stop governments protecting privacy, the confidentiality of data, consumer rights, cultural diversity or other important things like that? Wouldn’t that be neat?”

And Australia, Canada, Chile, Taiwan, Colombia, the EU, Israel, Japan, Mexico, New Zealand and Peru all said “Er, no. We want this agreement to be capable of preventing us from protecting those things, thanks. Where it speaks, it should be more important than the domestic law enacted by your elected representatives.”

Seems like that tells you a lot of what you need to know about the way such treaties are assembled. At least Mauritius and Pakistan are still thinking about it… Sheesh.

Air Mozilla: Foundation Demos May 27 2016


Niko Matsakis: The 'Tootsie Pop' model for unsafe code

In my previous post, I spent some time talking about the idea of unsafe abstractions. At the end of the post, I mentioned that Rust does not really have any kind of official guidelines for what kind of code is legal in an unsafe block and what is not. What this means in practice is that people wind up writing what seems reasonable and checking it against what the compiler does today. This is of course a risky proposition, since it means that if we start doing more optimization in the compiler, we may well wind up breaking unsafe code (the code would still compile; it would just not execute like it used to).

Now, of course, merely having published guidelines doesn’t entirely change that dynamic. It does allow us to assign blame to the unsafe code that took actions it wasn’t supposed to take. But at the end of the day we’re still causing crashes, so that’s bad.

This is partly why I have advocated that I want us to try and arrive at guidelines which are human friendly. Even if we have published guidelines, I don’t expect most people to read them in practice. And fewer still will read past the introduction. So we had better be sure that reasonable code works by default.

Interestingly, there is something of a tension here: the more unsafe code we allow, the less the compiler can optimize. This is because it would have to be conservative about possible aliasing and (for example) avoid reordering statements. We’ll see some examples of this as we go.

Still, to some extent, I think it’s possible for us to have our cake and eat it too. In this blog post, I outline a proposal to leverage unsafe abstraction boundaries to inform the compiler where it can be aggressive and where it must be conservative. The heart of the proposal is the intuition that:

  • when you enter the unsafe boundary, you can rely that the Rust type system invariants hold;
  • when you exit the unsafe boundary, you must ensure that the Rust type system invariants are restored;
  • in the interim, you can break a lot of rules (though not all the rules).

I call this the Tootsie Pop model: the idea is that an unsafe abstraction is kind of like a Tootsie Pop. There is a gooey candy interior, where the rules are squishy and the compiler must be conservative when optimizing. This is separated from the outside world by a hard candy exterior, which is the interface, and where the rules get stricter. Outside of the pop itself lies the safe code, where the compiler ensures that all rules are met, and where we can optimize aggressively.

One can also compare the approach to what would happen when writing a C plugin for a Ruby interpreter. In that case, your plugin can assume that the inputs are all valid Ruby objects, and it must produce valid Ruby objects as its output, but internally it can cut corners and use C pointers and other such things.

In this post, I will elaborate a bit more on the model, and in particular cover some example problem cases and talk about the grey areas that still need to be hammered out.

How do you define an unsafe boundary?

My initial proposal is that we should define an unsafe boundary as a module that has unsafe code somewhere inside of it. So, for example, the module that contains split_at_mut, which we saw earlier is a fn defined with unsafe code, would form an unsafe boundary. Public functions in this module would therefore be entry points into the unsafe boundary; returning from such a function, or issuing a callback via a closure or trait method, would be an exit point.

Initially when considering this proposal, I wanted to use an unsafe boundary defined at the function granularity. So any function which contained an unsafe block but which did not contain unsafe in its signature would be considered the start of an unsafe boundary; and any unsafe fn would be part of its caller’s boundary (note that its caller must contain an unsafe block). This would mean that e.g. split_at_mut is its own unsafe boundary. However, I have come to think that this definition is too precise and could cause problems in practice – we’ll see some examples below. Therefore, I have loosened it.

Ultimately I think that deciding where to draw the unsafe boundary is still somewhat of an open question. Even using the module barrier means that some kinds of refactorings that might seem innocent (migrating code between modules, specifically) can change code from legal to illegal. I will discuss various alternatives later on.
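To make the module-granularity definition concrete, here is a minimal sketch of my own (the module name and the private helper are hypothetical; this is not code from the post). Because the module contains an unsafe block, the whole module forms one unsafe boundary, and the private helper sits inside it even though it contains no unsafe code itself.

mod my_slice {
    // This module contains `unsafe`, so under the proposal the whole
    // module is treated as a single unsafe boundary.
    pub fn split_at_mut<T>(s: &mut [T], mid: usize) -> (&mut [T], &mut [T]) {
        let len = s.len();
        assert!(mid <= len);
        let ptr = s.as_mut_ptr();
        unsafe {
            (std::slice::from_raw_parts_mut(ptr, mid),
             std::slice::from_raw_parts_mut(ptr.add(mid), len - mid))
        }
    }

    // Safe code, but still inside the boundary, so the proposal says the
    // compiler must optimize it conservatively.
    #[allow(dead_code)]
    fn midpoint<T>(s: &[T]) -> usize {
        s.len() / 2
    }
}

fn main() {
    let mut v = [1, 2, 3, 4];
    let (a, b) = my_slice::split_at_mut(&mut v, 2);
    a[0] = 10;
    b[0] = 30;
    assert_eq!(v, [10, 2, 30, 4]);
}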

Permissions granted/required at the unsafe boundary

In the model I am proposing, most of your reasoning happens as you cross into or out of an unsafe abstraction. When you enter into an unsafe abstraction – for example, by calling a method like split_at_mut, which is not declared as unsafe but uses unsafe code internally – you implicitly provide that function with certain permissions. These permissions are derived from the types of the function’s arguments and the rules of the Rust type system. In the case of split_at_mut, there are two arguments:

  • The slice self that is being split, of type &'a mut [T]; and,
  • the midpoint mid at which to perform the split, of type usize.

Based on these types, the split_at_mut method can assume that the variable self refers to a suitably initialized slice of values of type T. That reference is valid for the lifetime 'a, which represents some span of execution time that encloses at least the current call to split_at_mut. Similarly, the argument mid will be an unsigned integer of suitable size.

At this point we are within the unsafe abstraction. It is now free to do more-or-less whatever it likes, so long as all the actions it takes fall within the initial set of permissions. More on this below.

Finally, when you exit from the unsafe boundary, you must ensure that you have restored whatever invariants and permissions the Rust type system requires. These are typically going to be derived from the types of the function’s outputs, such as its return type. In the case of split_at_mut, the return type is (&mut [T], &mut [T]), so this implies that you will return a tuple of slices. Since those slices are both active at the same time, they must (by the rules of Rust’s type system) refer to disjoint memory.

Specifying the permissions

In this post, I am not trying to define the complete set of permissions. We have a reasonably good but not formalized notion of what these permissions are. Ralf Jung and Derek Dreyer have been working on making that model more precise as part of the Rust Belt project. I think writing up those rules in one central place would obviously be a big part of elaborating on the model I am sketching out here.

If you are writing safe code, the type system will ensure that you never do anything that exceeds the permissions granted to you. But if you dip into unsafe code, then you take on the responsibility for verifying that you obey the given permissions. Either way, the set of permissions remain the same.

Permissions on functions declared as unsafe

If a function is declared as unsafe, then its permissions are not defined by the type system, but rather in comments and documentation. This is because the unsafe keyword is a warning that the function’s arguments may carry additional requirements for its caller – or that the function may return values that don’t meet the full requirements of the Rust type system.
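As a small illustration, here is a sketch of my own (the function name and its contract are hypothetical, not from the post) of spelling out such extra requirements in a "# Safety" doc section, with the caller discharging them at the call site:

/// Reads the `i32` that `ptr` points to.
///
/// # Safety
///
/// `ptr` must be non-null, properly aligned, and point to an initialized
/// `i32` that remains valid for reads during the call.
unsafe fn read_i32(ptr: *const i32) -> i32 {
    *ptr
}

fn main() {
    let x = 42;
    // The caller is the one who discharges the documented obligations.
    let v = unsafe { read_i32(&x as *const i32) };
    assert_eq!(v, 42);
}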

Optimizations within an unsafe boundary

So far I’ve primarily talked about what happens when you cross an unsafe boundary, but I’ve not talked much about what you can do within an unsafe boundary. Roughly speaking, the answer that I propose is: whatever you like, so long as you don’t exceed the initial set of permissions you were given.

What this means in practice is that when the compiler is optimizing code that originates inside an unsafe boundary, it will make pessimistic assumptions about aliasing. This is effectively what C compilers do today (except they sometimes employ type-based alias analysis; we would not).

As a simple example: in safe code, if you have two distinct variables that are both of type &mut T, the compiler would assume that they represent disjoint memory. This might allow it, for example, to re-order reads/writes or re-use values that have been read if it does not see an intervening write. But if those same two variables appear inside of an unsafe boundary, the compiler would not make that assumption when optimizing. If that was too hand-wavy for you, don’t worry, we’ll spell out these examples and others in the next section.
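Here is a minimal sketch of my own (not from the post; the function is hypothetical) of the kind of assumption at stake:

fn store_and_sum(a: &mut i32, b: &mut i32) -> i32 {
    *a = 1;
    *b = 2;
    // In safe code the compiler may assume `a` and `b` are disjoint, so it
    // can treat `*a` as still being 1 here and fold the result to 3. Inside
    // an unsafe boundary, under this proposal, it would reload `*a` instead.
    *a + *b
}

fn main() {
    let (mut x, mut y) = (0, 0);
    assert_eq!(store_and_sum(&mut x, &mut y), 3);
}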

Examples

In this section I want to walk through some examples. Each one contains unsafe code doing something potentially dubious. In each case, I will do the following:

  1. walk through the example and describe the dubious thing;
  2. describe what my proposed rules would do;
  3. describe some other rules one might imagine and what their repercussions might be.

By the way, I have been collecting these sorts of examples in a repository, and am very interested in seeing more such dubious cases which might offer insight into other tricky situations. The names of the sections below reflect the names of the files in that repository.

split-at-mut-via-duplication

Let’s start with a familiar example. This is a variant of the familiar split_at_mut method that I covered in the previous post:

impl [T] {
    pub fn split_at_mut(&mut self, mid: usize) -> (&mut [T], &mut [T]) {
        let copy: &mut [T] = unsafe { &mut *(self as *mut _) };
        let left = &mut self[0..mid];
        let right = &mut copy[mid..];
        (left, right)
    }
}

This version works differently from the ones I showed before. It doesn’t use raw pointers. Instead, it cheats the compiler by duplicating self via a cast to *mut. This means that both self and copy are &mut [T] slices pointing at the same memory, at the same time. In ordinary, safe Rust, this is impossible, but using unsafe code, we can make it happen.

The rest of the function looks almost the same as our original attempt at a safe implementation (also in the previous post). The only difference now is that, in defining right, it uses copy[mid..] instead of self[mid..]. The compiler accepts this because it assumes that copy and self, since they are both simultaneously valid, must be disjoint (remember that, in unsafe code, the borrow checker still enforces its rules on safe types; it’s just that we can use tricks like raw pointers or transmutes to sidestep them).

Why am I showing you this? The key question here is whether the optimizer can trust Rust types within an unsafe boundary. After all, this code is only accepted because the borrowck thinks (incorrectly) that self and copy are disjoint; if the optimizer were to think the same thing, that could lead to bad optimizations.

My belief is that this program ought to be legal. One reason is just that, when I first implemented split_at_mut, it’s the most natural thing that I thought to write. And hence I suspect that many others would write unsafe code of this kind.

However, to put this in terms of the model, the idea is that the unsafe boundary here would be the module containing split_at_mut. Thus the dubious aliasing between left and right occurs within this boundary. In general, my belief is that whenever we are inside the boundary we cannot fully trust the types that we see. We can only assume that the user is supplying the types that seem most appropriate to them, not necessarily that they are accounting for the full implications of those types under the normal Rust rules. When optimizing, then, the compiler will not assume that the normal Rust type rules apply – effectively, it will treat &mut references the same way it might treat a *mut or *const pointer.

(I have to work a bit more at understanding LLVM’s annotations, but I think that we can model this using the aliasing metadata that LLVM provides. More on that later.)

Alternative models. Naturally alternative models might consider this code illegal. They would require that one use raw pointers, as the current implementation does, for any pointer that does not necessarily obey Rust’s memory model.

(Note that this raises another interesting question, though, about what the legal aliasing is between (say) a &mut and a *mut that are actively in use – after all, an &mut is supposed to be unique, but does that uniqueness cover raw pointers?)

refcell-ref

The borrow() method on the type RefCell returns a value of a helper type called Ref:

pub struct Ref<'b, T: ?Sized + 'b> {
    value: &'b T,
    borrow: BorrowRef<'b>,
}

Here the value field is a reference to the interior of the RefCell, and the borrow is a value which, once dropped, will cause the lock on the RefCell to be released. This is important because it means that once borrow is dropped, value can no longer safely be used. (You could imagine the helper type MutexGuard employing a similar pattern, though actually it works ever so slightly differently for whatever reason.)

This is another example of unsafe code using Rust types in a creative way. In particular, the type &'b T is supposed to mean: a reference that can be safely used right up until the end of 'b (and whose referent will not be mutated). However, in this case, the actual meaning is “until the end of 'b or until borrow is dropped, whichever comes first.”

So let’s consider some imaginary method defined on Ref, copy_drop(), which works when T == u32. It would copy the value and then drop the borrow to release the lock.

use std::mem;
impl<'b> Ref<'b, u32> {
    pub fn copy_drop(self) -> u32 {
        let t = *self.value; // copy contents of `self.value` into `t`
        mem::drop(self.borrow); // release the lock
        t // return what we read before
    }
}

Note that there is no unsafe code in this function at all. I claim then that the Rust compiler would, ideally, be within its rights to rearrange this code and to delay the load of self.value to occur later, sort of like this:

mem::drop(self.borrow); // release the lock
let t = *self.value; // copy contents of `self.value` into `t`
t // return what we read before

This might seem surprising, but the idea here is that the type of self.value is &'b u32, which is supposed to mean a reference valid for all of 'b. Moreover, the lifetime 'b encloses the entire call to copy_drop. Therefore, the compiler would be free to say, “well, maybe I can save a register if I move this load down.”

However, I think that reordering this code would be an invalid optimization. Logically, as soon as self.borrow is dropped, *self.value becomes inaccessible – if you imagine that this pattern were being used for a mutex, you can see why: another thread might acquire the lock!

Note that because these fields are private, this kind of problem can only arise for the methods defined on Ref itself. The public cannot gain access to the raw self.value reference. They must go through the deref trait, which returns a reference for some shorter lifetime 'r, and that lifetime 'r always ends before the ref is dropped. So if you were to try and write the same copy_drop routine from the outside, there would be no problem:

let some_ref: Ref<u32> = ref_cell.borrow();
let t = *some_ref;
mem::drop(some_ref);
use(t);

In particular, the let t = *some_ref desugars to something like:

let t = {
    let ptr: &u32 = Deref::deref(&some_ref);
    *ptr
};

Here the lifetime of ptr is just going to be that little enclosing block there.

Why am I showing you this? This example illustrates that, in the presence of unsafe code, the unsafe keyword itself is not necessarily a reliable indicator to where funny business could occur. Ultimately, I think what’s important is the unsafe abstraction barrier.

My belief is that this program ought to be legal. Frankly, to me, this code looks entirely reasonable, but also it’s the kind of code I expect people will write (after all, we wrote it). Examples like this are why I chose to extend the unsafe boundary to enclose the entire module that uses the unsafe keyword, rather than having it be at the fn granularity – because there can be functions that, in fact, do unsafe things where the full limitations on ordering and so forth are not apparent, but which do not directly involve unsafe code. Another classic example is modifying the length or capacity fields on a vector.

Now, I chose to extend to the enclosing module because it corresponds to the privacy boundary, and there can be no unsafe abstraction barrier without privacy. But I’ll explain below why this is not a perfect choice and we might consider others.

usize-transfer

Here we have a trio of functions. They collaborate to hide a reference in a usize and then later dereference it:

// Cast the reference `x` into a `usize`
fn escape_as_usize(x: &i32) -> usize {
    // interestingly, this cast is currently legal in safe code,
    // which is a mite unfortunate, but doesn't really affect
    // the example
    x as *const _ as usize
}

// Cast `x` back into a pointer and dereference it 
fn consume_from_usize(x: usize) -> i32 {
    let y: &i32 = unsafe { &*(x as *const i32) };
    *y
}

pub fn entry_point() {
    let x: i32 = 2;
    let p: usize = escape_as_usize(&x);

    // (*) At this point, `p` is in fact a "pointer" to `x`, but it
    // doesn't look like it!

    println!("{}", consume_from_usize(p));
}

The key point in this example is marked with a (*). At that point, we have in effect created a pointer to x and stored it in p, but the type of p does not reflect that (it just says it’s a pointer-sized integer). Note also that entry_point does not itself contain unsafe code (further evidence that private helper functions can easily cause unsafe reasoning to spread beyond the border of a single fn). So the compiler might assume that the stack slot x is dead and reuse the memory, or something like that.

There are a number of ways that this code might be made less shady. escape_as_usize might have, for example, returned a *const i32 instead of usize. In that case, consume_from_usize would look like:

fn consume_from_usize(x: *const i32) -> i32 { ... }

This itself raises a kind of interesting question though. If a function is not declared as unsafe, and it is given a *const i32 argument, can it dereference that pointer? Ordinarily, the answer would clearly be no. It has no idea what the provenance of that pointer is (and if you think back to the idea of permissions that are granted and expected by the Rust type system, the type system does not guarantee you that a *const can be dereferenced). So effectively there is no difference, in terms of the public permissions, between x: usize and x: *const i32. Really I think the best way to structure this code would have been to declare consume_from_usize() as unsafe, which would have served to declare to its callers that it has extra requirements regarding its argument x (namely, that it must be a pointer that can be safely dereferenced).
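To illustrate, here is a minimal sketch of that more honest signature (my own rewrite of the helper above, not code from the original):

// Sketch: the unsafe keyword documents the extra requirement on `x` that
// the type system cannot express.
unsafe fn consume_from_usize(x: usize) -> i32 {
    // Caller must guarantee that `x` holds the address of a live, valid i32.
    *(x as *const i32)
}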

Now, if consume_from_usize() were a public function, then not having an unsafe keyword would almost certainly be flat out wrong. There is nothing that stops perfectly safe callers from calling it with any old integer that they want; even if the signature were changed to take *const u32, the same is basically true. But consume_from_usize() is not public: it’s private, and that perhaps makes a difference.

It often happens, as we’ve seen in the other examples, that people cut corners within the unsafe boundary and declare private helpers as safe that are in fact assuming quite a bit beyond the normal Rust type rules.

Why am I showing you this? This is a good example for playing with the concept of an unsafe boundary. By moving these functions about, you can easily create unsafety, as they must all three be contained within the same unsafe boundary to be legal (if indeed they are legal at all). Consider these variations:

Private helper module.

mod helpers {
    pub fn escape_as_usize(x: &i32) -> usize { ... }
    pub fn consume_from_usize(x: usize) -> i32 { ... }
}

pub fn entry_point() {
    ... // calls now written as `helpers::escape_as_usize` etc
}

Private helper module, but restricted to an outer scope.

mod helpers {
    pub(super) fn escape_as_usize(x: &i32) -> usize { ... }
    pub(super) fn consume_from_usize(x: usize) -> i32 { ... }
}

pub fn entry_point() {
    ... // calls now written as `helpers::escape_as_usize` etc
}

Public functions, but restricted to an outer scope.

pub mod some_bigger_abstraction {
    mod helpers {
        pub(super) fn escape_as_usize(x: &i32) -> usize { ... }
        pub(super) fn consume_from_usize(x: usize) -> i32 { ... }
        pub(super) fn entry_point() { ... }
    }
}

Public functions, but de facto restricted to an outer scope.

pub mod some_bigger_abstraction {
    mod helpers {
        pub fn escape_as_usize(x: &i32) -> usize { ... }
        pub fn consume_from_usize(x: usize) -> i32 { ... }
        pub fn entry_point() { ... }
    }

    // no `pub use`, so in fact they are not accessible
}

Just plain public.

pub fn escape_as_usize(x: &i32) -> usize { ... }
pub fn consume_from_usize(x: usize) -> i32 { ... }
pub fn entry_point() { }

Different crates.

// crate A:
pub fn escape_as_usize(x: &i32) -> usize { ... }
// crate B:
pub fn consume_from_usize(x: usize) -> i32 { ... }
// crate C:
extern crate a;
extern crate b;
pub fn entry_point() {
    ...
    let p = a::escape_as_usize(&x);
    ...
    b::consume_from_usize(p)
    ...
}

My belief is that some of these variations ought to be legal. The current model as I described it here would accept the original variation (where everything is in one module) but reject all other variations (that is, they would compile, but result in undefined behavior). I am not sure this is right: I think that at least the private helper module variations seem reasonable.

Note that I think any or all of these variations should be fine with appropriate use of the unsafe keyword. If the helper functions were declared as unsafe, then I think they could live anywhere. (This is actually an interesting point that deserves to be drilled into a bit more, since it raises the question of how distinct unsafe boundaries interact; I tend to think of there as just being safe and unsafe code, full stop, and hence any time that unsafe code in one module invokes unsafe code in another, we can assume they are part of the same boundary and hence that we have to be conservative.)

On refactorings, harmless and otherwise

One interesting thing to think about with any kind of memory model or other guidelines is what sorts of refactorings people can safely perform. For example, under this model, manually inlining a fn body is always safe, so long as you do so within an unsafe abstraction. Inlining a function from inside an abstraction into the outside is usually safe, but not necessarily – the reason it is usually safe is that most such functions have unsafe blocks, and so by manually inlining, you will wind up changing the caller from a safe function into one that is part of the unsafe abstraction.

(Grouping items and functions into modules is another example that may or may not be safe, depending on how we chose to draw the boundary lines.)

EDIT: To clarify a confusion I have seen in a few places. Here I am talking about inlining by the user. Inlining by the compiler is different. In that case, when we inline, we would track the provenance of each instruction, and in particular we would track whether the instruction originated from unsafe code. (As I understand it, LLVM already does this with its alias sets, because it is needed for handling C99 restrict.) This means that when we decide e.g. if two loads may alias, if one (or both) of those loads originated in unsafe code, then the answer would be different than if they did not.

Impact of this proposal and mapping it to LLVM

I suspect that we are doing some optimizations now that would not be legal under this proposal, though probably not that many – we haven’t gone very far in terms of translating Rust’s invariants to LLVM’s alias analysis metadata. Note though that in general this proposal is very optimization friendly: all safe code can be fully optimized. Unsafe code falls back to more C-like reasoning, where one must be conservative about potential aliasing (note that I do not want to employ any type-based alias analysis, though).

I expect we may want to add some annotations that unsafe code can use to recover optimizations. For example, perhaps something analogous to the restrict keyword in C, to declare that pointers are unaliased, or some way to say that an unsafe fn (or module) nonetheless ensures that all safe Rust types meet their full requirements.

One of the next steps for me personally in exploring this model is to try and map out (a) precisely what we do today and (b) how I would express what I want in LLVM’s terms. It’s not the best formalization, but it’s a concrete starting point at least!

Tweaking the concept of a boundary

As the final example showed, a module boundary is not clearly right. In particular, the idea of using a module is that it aligns with privacy, but by that definition it should probably include submodules (that is, any module where an unsafe keyword appears either in the module or in some parent of the module is considered to be an unsafe boundary module).

Conclusion

Here I presented a high-level proposal for how I think a Rust memory model ought to work. Clearly this doesn’t resemble a formal memory model and there are tons of details to work out. Rather, it’s a guiding principle: be aggressive outside of unsafe abstractions and conservative inside.

I have two major concerns:

  • First, what is the impact on execution time? I think this needs to be investigated, but ultimately I am sure we can overcome any deficit by allowing unsafe code authors to opt back in to more aggressive optimization, which feels like a good tradeoff.
  • Second, what’s the best way to draw the optimization boundary? Can we make it more explicit?

In particular, the module-based rule that I proposed for the unsafe boundary is ultimately a kind of heuristic that makes an educated guess as to where the unsafe boundary lies. Certainly the boundary must be aligned with modules, but as the last example showed, there may be a lot of ways to set things up that seem reasonable. It might be nicer if we could have a way to declare that boundary affirmatively. I’m not entirely sure what this looks like. But if we did add some way, we might then say that if you use the older unsafe keyword – where the boundary is implicit – we’ll just declare the whole crate as being an unsafe boundary. This likely won’t break any code (though of course I mentioned the different crates variation above…), but it would provide an incentive to use the more explicit form.

For questions or discussion, please see this thread on the Rust internals forum.

Edit log

Some of the examples of dubious unsafe code originally used transmute and transmute_copy. I was asked to change them because transmute_copy really is exceptionally unsafe, even for unsafe code (type inference can make it go wildly awry from what you expected), and so we didn’t want to tempt anyone into copy-and-pasting them. For the record: don’t copy and paste the unsafe code I labeled as dubious – it is indeed dubious and may not turn out to be legal! :)

Adam StevensonBack in Compat

In 2014 I had the opportunity to join the Web Compatibility team on a six-month contract. This mission-based team and community opened me up to a way of working I had never experienced before. It had a major impact on how I view my work/life balance and overall happiness. When the possibility of returning came up this year, I was dead in the middle of a corporate job that had me flying around Canada and the U.S. every week. When I’d return home for the weekend I was exhausted, with little time for family, friends and hobbies. I knew that I needed to get back to a place where I could be excited to work and able to find balance. Back in October of 2015, I attended a conference in Austin, TX for a week. Before leaving Austin I met up with Mike Taylor for some breakfast tacos and caught up with him. At some point I remember saying how much I enjoyed working at Mozilla and would love to return one day. Four months later when a position on the Web Compat team opened up, Mike reached out to see if I was still interested - I was pumped!

The on-boarding process this time around was pretty sweet. In May I attended the new hire on-boarding week in Mountain View. The group must have been 25+ people in size, all from different places in the world and within Mozilla. Each day we’d go through team building activities and get to hear from the leaders of each department. It was so much fun and we learned an incredible amount about the past, present and future of Mozilla. For sure, it was the best on-boarding experience I’ve been through. One topic that came up during our break time was imposter syndrome. Mozilla is full of talented people, which can be intimidating for new hires. I’m thankful that someone opened up about the topic as the conversations were reassuring.

In the last two years things have changed a bit at Mozilla and in Web Compat. We’re no longer supporting Firefox OS, which was previously our main focus. Firefox is now on iOS; since it uses WebKit there, unsurprisingly we don’t see many compatibility issues. Firefox on desktop is having more issues though: there are sites using only WebKit prefixes, various video problems and even some tracking protection related bugs. Our website webcompat.com has come a long way and now we see anonymous bugs reported by regular web users.

It feels great to be back. I’m excited about the work ahead.

William LachanceQuarter of Contribution: June / July 2016 edition

Just wanted to announce that, once again, my team (Mozilla Engineering Productivity) is just about to start running another quarter of contribution — a great opportunity for newer community members to dive deep on some of the projects we’re working on, brush up on their programming and problem solving skills, and work with experienced mentors. You can find more information on this program here.

I’ve found this program to be a really great experience on both sides — it’s an opportunity for contributors to really go beyond the “good first bug” style of patches to having a really substantial impact on some of the projects that we’re working on while gaining lots of software development skills that are useful in the real world.

Once again, I’m going to be mentoring one or two people on the Perfherder project, a tool we use to measure and sheriff Firefox performance. If you’re inclined to work on some really interesting data analysis and user interface problems in Python and JavaScript, please have a look at the project page and get in touch. :)

QMOFirefox 48.0 Aurora Testday, June 3rd

Hello Mozillians,

We are happy to let you know that on Friday, June 3rd, we are organizing Firefox 48.0 Aurora Testday. We’ll be focusing our testing on the following features: New Awesomebar, Windows Child Mode and APZ. Check out the detailed instructions via this etherpad.

No previous testing experience is required, so feel free to join us on #qa IRC channel where our moderators will offer you guidance and answer your questions.

Join us and help us make Firefox better! See you on Friday!

Daniel Stenberga new curl logo

The original logo for the curl project was created in 1998. I can’t even recall who made the original one. When we started version controlling the web site contents in June 2001, it had already been revised slightly and it looked like this:

the original logo

I subsequently did the slightly refreshed version that’s been visible on the site since early 2003 which sort of emphasized the look by squeezing the letters together. The general “cURL” look was obviously kept.

curl-refined

It was never considered a good-looking logo by anyone. At best, people thought it was cute because it looked goofy, and it helped give the web site a sort of retro feeling, something that was never intended.

(As you can see I shortened the tag line slightly between those two versions.)

Over the years I’ve tried to refresh the logo and even gently tried to ask around for help to get it revamped, but it was never done and my own futile attempts didn’t end up in any improvements.

The funny casing of the word cURL might’ve contributed to the fact that people considered it ugly. It came with the original naming of the project that was focused on the fact that we work with URLs. A client for URLs could be cURL. Or if you’d pronounce the c as “see”, you’d “see URL”. The casing is really only used in the logo these days. We just call it curl now. The project is curl, the command line tool is curl and even I have started to give up somewhat and have actually referred to libcurl as the curl library several times. “curl” with no upper case being the common denominator.

When I got this offer to refresh the logo this time, I was a bit skeptical since attempts had failed before, but what the heck, we had nothing to lose so sure, please help us!

We started out with their own free-style suggestions for a new logo. That proved these guys are good and have an esthetic sense for this. That first set also quite clearly showed me that the funny casing had to go. A lower case version seemed calmer, more modern and more in line with our current view of the naming.

As an example from their first set, a clean and stylish version that tried to use the letter c as a sort of symbol. Unfortunately it makes you mostly read “URL”.

experimental curl logo

As step two, I also suggested we’d try to work with :// (colon slash slash) somehow as a symbol, since after all the :// letter sequence is commonly used in all the URL formats that curl supports. And it is sort of a global symbol for URLs when you start to think about it. Made sense to me.

Their second set of logo versions was good. Really good. But they mostly focused on having the :// symbol to the left of ‘curl’ and that made it look a bit weird. Like “:// curl” – which looks so strange when you’re used to seeing URLs. They also made some attempts at writing it “c:URL” and having the // as part of the U letter like “c://RL”, but those versions felt too crazy and ended up more funny-looking than cool.

An example from the second set with a colon-slash-slash symbol on the left side:

experimental curl logo4

The third set of logos was then made for us with various :// variations placed on the right side of the word curl, now in lowercase only.

Among that third set of suggestions there was one that was a notch better than the rest. The new curl logo. We made it.

I offer you: the curl logo of 2016.

good_curl_logo

Used in a real-life like situation:

logo-on-box

and the plan is as this image shows, to be able to use the colon-slash-slash symbol stand-alone:

curl-symbol

The new curl logo was made by Soft Dreams. Thanks a lot for this stellar work!

Eitan IsaacsonGoogle Inbox Notifications

Do you use Google Inbox? Do you miss getting notifications on your desktop when new mail arrives? In Gmail you could opt in to get desktop notifications, and the tab title reflects how many unread messages you have.

icon.svg.png

I made a Firefox addon that brings that functionality to Google Inbox. It gives you a notification when new mail arrives and updates the page’s title with the unread mail count. You can get it here!


Support.Mozilla.OrgWhat’s Up with SUMO – 26th May

Hello, SUMO Nation!

We’ve been through a few holidays here and there, so there’s not a lot to report this week. We hope you’ll enjoy this much lighter-than-usual set of updates :-)

Welcome, new contributors!

If you just joined us, don’t hesitate – come over and say “hi” in the forums!

Contributors of the week

Don’t forget that if you are new to SUMO and someone helped you get started in a nice way you can nominate them for the Buddy of the Month!

Most recent SUMO Community meeting

The next SUMO Community meeting

  • …is happening on WEDNESDAY the 1st of June – join us!
  • Reminder: if you want to add a discussion topic to the upcoming meeting agenda:
    • Start a thread in the Community Forums, so that everyone in the community can see what will be discussed and voice their opinion here before Wednesday (this will make it easier to have an efficient meeting).
    • Please do so as soon as you can before the meeting, so that people have time to read, think, and reply (and also add it to the agenda).
    • If you can, please attend the meeting in person (or via IRC), so we can follow up on your discussion topic during the meeting with your feedback.

Community

Social

Support Forum

Knowledge Base & L10n

Firefox

This is it for the diet version of WUWS. We’re going “lite” for this week, to keep your minds slim for the summer ;-) See you around SUMO!

Air MozillaTwelve Technology Forces Shaping the Next 30 Years: Interview with Kevin Kelly

Twelve Technology Forces Shaping the Next 30 Years: Interview with Kevin Kelly. Much of what will happen in the next thirty years is inevitable, driven by technological trends that are already in motion. Wired founder Kevin Kelly has...

Mozilla Reps CommunityRep of the Month – February 2016

Abdelrahman Samy started his journey as a Firefox Student Ambassador in Egypt. As Club lead of the AAST club, he is involved in the day to day awareness raising of Mozilla’s mission at the Alexandria University. He is also engaged in localization and helped with l10n for Firefox OS.

Further, he is one of the most active Core members of the Egypt community. Recently he coordinated Club launches in Egypt (the MIS Club, for example). With his enthusiasm, he is coordinating two more Club launches at the end of April / start of May.

Karl DubostHow CSS width is computed in Gecko?

Yesterday, I wrote about the strange value of the CSS width when set with a percentage. One Japanese night later, Boris Zbarsky sent me a private email explaining how CSS widths are computed in Gecko. These short emails are a pleasure to read and full of insights into Gecko. I should almost open a new category here called "Santa Boris". I love them. Thanks.

So what did I learn from Boris? Here is an extended version of Boris' email with C++ code. (I'm out of my comfort zone, so if you see glaring mistakes, tell me and I will happily fix them.)

Yesterday's Summary

In yesterday's article, the following CSS rule was applied to the child of a 360px parent element.

.foo {
    width: 49.99999%;
    border-right: 1px solid #CCC;
}
  • The 49.99999 % of 360 px in Gecko
  • is not 179.999964 px (as you could mathematically expect)
  • but 179.98333740234375 px (rounded to 179.983 px in the devtools UI).

Gecko storage of CSS

In Boris' words:

Gecko doesn't store CSS stuff as floating-point numbers. It stores them as integers, with the unit being 1/60 of a px (so that common fractions like 1/2, 1/3, 1/4, 1/5 are exactly representable).

Or, said another way, 1 px = 60 app units (in Gecko). The 60 constant is defined in AppUnits.h (Brian Birtles helped me find it).

inline int32_t AppUnitsPerCSSPixel() { return 60; }

which is in turn called a lot in Units.h.

Specifically in ToAppUnits()

  static nsRect ToAppUnits(const CSSRect& aRect) {
    return nsRect(NSToCoordRoundWithClamp(aRect.x * float(AppUnitsPerCSSPixel())),
                  NSToCoordRoundWithClamp(aRect.y * float(AppUnitsPerCSSPixel())),
                  NSToCoordRoundWithClamp(aRect.width * float(AppUnitsPerCSSPixel())),
                  NSToCoordRoundWithClamp(aRect.height * float(AppUnitsPerCSSPixel())));
  }

Let's continue with Boris' explanation, broken down into steps. Instead of 360 px, we work with an integer number of app units (360 * 60 = 21600):

  1. 49.99999% * 360
  2. 0.4999999 * (360 * 60)
  3. 0.4999999 * 21600
  4. 10799.99784

but as you can see above when computing the values, it uses a function NSToCoordRoundWithClamp().

inline nscoord NSToCoordRoundWithClamp(float aValue)
{
  /* cut for clarity */
  return NSToCoordRound(aValue);
}

And it goes to a rounding function.

The return trip is similar, with FromAppUnitsRounded()

  static CSSIntRect FromAppUnitsRounded(const nsRect& aRect) {
    return CSSIntRect(NSAppUnitsToIntPixels(aRect.x, float(AppUnitsPerCSSPixel())),
                      NSAppUnitsToIntPixels(aRect.y, float(AppUnitsPerCSSPixel())),
                      NSAppUnitsToIntPixels(aRect.width, float(AppUnitsPerCSSPixel())),
                      NSAppUnitsToIntPixels(aRect.height, float(AppUnitsPerCSSPixel())));
  }

where

inline int32_t NSAppUnitsToIntPixels(nscoord aAppUnits, float aAppUnitsPerPixel)
{
  return NSToIntRound(float(aAppUnits) / aAppUnitsPerPixel);
}

which calls NSToIntRound()

inline int32_t NSToIntRound(float aValue)
{
  return NS_lroundf(aValue);
}

which calls NS_lroundf()

inline int32_t
NS_lroundf(float aNum)
{
  return aNum >= 0.0f ? int32_t(aNum + 0.5f) : int32_t(aNum - 0.5f);
}

(My C++ kung-fu (功夫) being minimal, I asked Brian again.) NS_lroundf() converts the float into a 32-bit integer by adding 0.5 (or subtracting 0.5 for a negative number) and then truncating. The construct

return aNum >= 0.0f ? int32_t(aNum + 0.5f) : int32_t(aNum - 0.5f);
/*      (condition) ? true                 : false                 */
  • 1.2 is a positive number, so the condition is true: 1.2 + 0.5 = 1.7, which converted to int32_t gives 1 by truncation.
  • 1.8, positive, true: 1.8 + 0.5 = 2.3, truncated to 2.

Let's go back to our 10799.99784 found earlier. It is being truncated to 10799 (and not rounded to 10800).

You can see that happening in nsRuleNode.cpp

/* static */ nscoord
nsRuleNode::ComputeCoordPercentCalc(const nsStyleCoord& aCoord,
                                    nscoord aPercentageBasis)
{
  switch (aCoord.GetUnit()) {
    case eStyleUnit_Coord:
      return aCoord.GetCoordValue();
    case eStyleUnit_Percent:
      return NSToCoordFloorClamped(aPercentageBasis * aCoord.GetPercentValue());
    case eStyleUnit_Calc:
      return ComputeComputedCalc(aCoord, aPercentageBasis);
    default:
      MOZ_ASSERT(false, "unexpected unit");
      return 0;
  }
}

Now let's get back to pixel units.

10799 / 60 = 179.983333333333333… with the 3s repeating to infinity.

The astute reader will have noticed that this is still not 179.98333740234375 px, but it is close. Boris says:

The difference between that (aka 179.983333333333333…) and 179.98333740234375 is because the multiplication is done on 32-bit floats, which only have 7 decimal digits of accuracy; 179.98333740234375 is the closest representable 32-bit float to 179.983333333333333…
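If you want to reproduce the whole chain yourself, here is a small Rust sketch that approximates the arithmetic (it is not Gecko's code; it just performs the same operations: a 32-bit float multiplication, a floor for the percentage case, and a 32-bit float division back to pixels):

fn main() {
    // Gecko stores CSS lengths as integer app units: 1 px = 60 app units.
    let basis_app_units: i32 = 360 * 60;                  // 21600
    let percent: f32 = 0.4999999;                         // 49.99999%

    // The percentage case goes through a floor, not a round.
    let product: f32 = basis_app_units as f32 * percent;  // ~10799.998 as an f32
    let coord: i32 = product.floor() as i32;              // 10799, not 10800

    // Converting back to CSS pixels divides by 60, again in 32-bit floats.
    let px: f32 = coord as f32 / 60.0;
    println!("{}", px as f64);                            // 179.98333740234375
}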

And now you know exactly why and how Gecko computes its CSS width.

Browsers and their sub-pixel units

What other browsers do? The "Ahahahah" moment of this blog post.

Browsers do layout with subpixel units: WebKit/Blink: 1/64 pixels. Gecko: 1/60. Edge: 1/100 — smfr

If a developer from WebKit/Blink could explain how they compute their own widths with links to code, I'm interested. And Edge developers too (without the link to the code but I'll take snippets of code). All of this in the spirit of Web Compatibility and interoperability.

PS: Again, thousands of thanks to Boris and Brian.

Otsukare!

Karl Dubost49.99999% of a CSS width is?

Chintai is yet another Japanese Web site with plenty of WebKit issues, but in this case I want to focus on width rounding in CSS. Let's see this navigation bar in Firefox for Android (aka Gecko).

navbar as seen in Gecko

And in Blink (Chrome, Opera).

navbar as seen in Blink

What's happening?

<ul class="btn_read tapon">
    <li><p><a href="/info/inquiry.html" 
              target="_blank" 
              class="ui-link">ご意見・お問い合わせ</a></p></li>
    <li><p><a href="/info/kiyaku.html" 
              target="_blank" 
              class="ui-link">利用規約</a></p></li>
    <li><p><a href="/info/privacy.html" 
              target="_blank" 
              class="ui-link">個人情報の取り扱い</a></p></li>
    <li><p><a id="pcLink" href="javascript:void(0);" 
              class="ui-link">PC版 マイナビ賃貸</a>
            <input id="pcUrl" 
                   value="https://chintai.mynavi.jp/" type="hidden"></p></li>
</ul>

And here is the CSS used for this part:

ul.btn_read li:nth-of-type(2n+1) {
    width: 49.99999%;
    border-right: 1px solid #CCC;
}

ul.btn_read li {
    display: table;
    float: left;
    width: 50%;
    height: 50px;
    background: gradient(linear, left top, left bottom, color-stop(0%,rgb(0,165,231)), color-stop(100%,rgb(0,142,217)));
    background: -webkit-gradient(linear, left top, left bottom, color-stop(0%,white), color-stop(100%,#EFEFEF));
    border-bottom: 1px solid #CCC;
}

We see here that the person carefully tried to fix the width so the li elements would fit inside. The 49.99999% already smells bad compared to using 50%. We can also notice the invalid gradient property. The correct value for the gradient is (easy to get with the CSS Fix me tool):

ul.btn_read li {
  /* … */
  background: -webkit-gradient(linear, left top, left bottom, color-stop(0%,white), color-stop(100%,#EFEFEF));
  background: linear-gradient(to bottom, white 0%, #EFEFEF 100%);
}

But back to the rounding of the width. What is happening? Let's say the viewport is 360px wide. In both devtools, the ul takes 100% of the viewport size. All is good.

ul width in CSS (blink left, gecko right)

Let's see the first li, aka li:nth-of-type(2n+1) (width: 49.99999%;)

li width in CSS (blink left, gecko right)

and the second li (width: 50%;)

li width in CSS (blink left, gecko right)

Oooops… In Gecko, we get 179.983px + 1px + 180px = 360.983px. Wider than the viewport.

Math is hard

Update 2016-05-26: How CSS width is computed in Gecko?

At least, for browsers it seems. I'm taking a calculator.

360*(49.99999/100) = 179.999964. Yet another result. Hmmm, let's reverse it to see how Gecko rounded the value.

(179.983/360)*100 = 49.995277778. What!?

Or maybe it's the devtools rounding the value in the UI? OK, let's try to see what the console is giving us in Gecko.

width in the Firefox console

  • width: 180.98333740234375 (because of the 1px border). The rounding in the devtools is correct.
  • width: 180

If I take this value (minus the 1px border), that would mean that Firefox used something for the width which is 49.995371501%.

width in the Blink console

  • width: 180 for both li. At least consistent with their strange rounding.

Widths in CSS are not mathematical widths.

The Correct Way To Compute This Width

The front-end developer by now understands that he or she should not use this kind of percentage for this. So what can we do to solve the issue?

The simplest would be to use calc()

ul.btn_read li:nth-of-type(2n+1) {
    width: calc(50% - 1px);
    border-right: 1px solid #CCC;
}

but I suppose that at the time the Web site was done, this option was not available.

The other working solution would have been to set box-sizing: border-box; so that the border is included in the computation of the width. This removes the need for the 49.99999%:

ul.btn_read li {
    display: table;
    float: left;
    width: 50%;
    height: 50px;
    background: linear-gradient(to bottom, white 0%, #EFEFEF 100%);
    border-bottom: 1px solid #CCC;

    /* set box-sizing on li */

    box-sizing: border-box;
}

ul.btn_read li:nth-of-type(2n+1) {
    /* width: 49.99999%; */
    border-right: 1px solid #CCC;
}

Otsukare!

Eitan IsaacsonAutonomous Plant Watering

Here in our house we keep our plants on the threshold of death. They are in constant limbo as we remember to water them every few months. It is really quite disgraceful.

A pathetic house plant
Our sad sad plant.

One day I looked at our plants and thought we needed to do something. Go and water them? The plant doesn’t just need water. It is suffering from the systemic problem of us never remembering to water it. No. Watering the plant would be too easy; we need a technological solution that will hydrate the plant and not require us to change our comfortable habit of neglect.

I googled “automated house plant watering” and the first link that came up was an instructable. It promised to be a cheap and easy project. So I went ahead and got all the materials: an aquarium air pump, some tubing and valves. I then followed the instructions and assembled the parts as they describe. The result was underwhelming. I got some gurgling at the plant end of the tube, not a steady or measurable flow. I really don’t understand the physics that makes that system work, but it has a lot to do with water pressure: the deeper your reservoir, the more efficiently the water gets pumped. As the water gets consumed, the pump gives less output. So to get an optimal plant-watering we would need to make sure the tank is always full. What’s the point of automating it if you have to fill the tank after every watering?? This won’t do. I plan to put this on a timer. I need to know that if I have it on for 1 minute I will always pump a comparable amount of water to the plant.

So I got to thinking, how can an air pump pump water? Specifically, how can it pump water with a constant pressure? I came up with this schematic:

Brilliant schematic
Air is pumped into a sealed jar with an outlet tube that relieves the jar’s pressure by pushing water out through another tube!

I grabbed a mason jar and drilled two 3/16″ holes in the lid. This allowed me to insert the two air tubes, which are slightly thicker (5mm). They fit snugly and formed a seal.

Jar lid with two tubes running through

Next I attached one of the tubes to the pump, and placed the second one’s end in a glass. Turned on the pump and… yes! It works! Science! I was getting a steady flow of water. The jar was emptied at a constant rate. This setup will do. I’m so pleased with this!

I splurged and got a slightly more expensive pump with 4 outputs so I can water 4 plants individually.

Aquarium air pump with 4 outlets

This setup has a few advantages over other pump setups:

  • It is cheap. So far the bill of parts is around $12.50.
  • It offers predictable water throughput.
  • You can connect any sealable container. Don’t want to refill the water after 32oz of watering? Get a gallon jug.
  • If the reservoir runs dry the motor won’t catch fire. That apparently is a thing with water pumps.
  • Since the water is only going through a simple tube and not an expensive motor, you can pump a nutrient solution. If you want to pamper your plants, we don’t.

I made a stupid pump. Why is that cool? Because with a WiFi plug it becomes smart! It is now a Connected Device™. I plugged it into a Bayit WiFi socket, and set it to turn on for 20 seconds each Monday afternoon. That will feed our plants about half a cup a week. If we like the results we may extend it to a full cup!

Bayit WiFi Socket
The Internet is in control!

A Word on WiFi Sockets

They suck. They take the simplest operation of closing a circuit and abstract it in a shitty smartphone app that only works half the time. Well, at least that has been my impression with this Bayit gadget. For my next project I am going to use a Kankun smart plug. Apparently it runs OpenWRT and is very hackable.

Bill of Materials

Air Pump $6.99
Air tube $3.42
Mason jar $2.09
Wifi plug $24.99

James LongImplementing a Stepping Debugger in JavaScript

In my previous post I introduced Unwinder, a project which implements continuations in JavaScript. What does that have to do with stepping debuggers? Unwinder uses continuations to implement a debugger, since it can reuse the same machinery to pause code at any time.

This post could be titled "Implementing Continuations in JavaScript," but a lot more people know what a stepping debugger is. Besides, implementing a stepping debugger is a pretty cool use case for this stuff.

If you haven't read the previous article (which explains in-depth what continuations are and heavily uses stepping debuggers to help explain them), here is a live example of a stepping debugger. You can edit the code, and click on any line to set a breakpoint.

function foo(x) {
  console.log(x);
  if (x <= 0) {
    return x;
  } else {
    return x + foo(x - 1);
  }
}

function main() {
  console.log(foo(3));
}

main();
A working stepping debugger! Edit the code or click on any line to add a breakpoint.

Isn't that so cool!? Well... that may have been anticlimactic. It's not a fun visual or anything. But still, this is all implemented in JavaScript! No hook into the native JavaScript engine or anything! My previous post goes into a lot more detail about what you can do with Unwinder.

This works with a sophisticated transform explained in the paper "Exceptional Continuations in JavaScript". It's a very clear and approachable paper so you should read it if this is interesting. The neat thing about this technique is performance: we are still using native function scoping, variables, and more which allows the JIT to highly optimize it normally. However, the code is instrumented in ways that allow us to save the stack.

Both continuations and stepping debuggers need to arbitrarily pause the code at any time. This means that we need to save the entire stack, and be able to restore it. Let's take a look at the technique that paper describes to implement this. While the paper is very clear, it was fun for me to write down the basic process.

State Machine

The first thing we need to do is compile the code into something that we can control. That means a state machine. This is a rather common transform; it's how regenerator compiles generators, Clojure compiles go blocks, and many more. Unwinder actually started as a fork of regenerator for this reason.

Let's take a look at this code:

function foo() {
  var x = 5;
  var y = 6;
  return x + y;
}

Turning the function foo into a state machine looks like this:

function foo() {
  var $__next = 0, x, y;
  while(1) {
    switch($__next) {
      case 0:
        x = 5;
        $__next = 1;
        break;
      case 1:
        y = 6;
        $__next = 2;
        break;
      case 2:
        return x + y;
    }
  }
}

The $__next variable is the important thing here: it controls exactly where execution is happening. Note that we also hoisted x and y manually because we need to re-implement variable scoping. We compile various blocks into linear case statements so we need to maintain scopes and enforce any shadowing between var/let/const declarations.

You can already see that this is not a simple transform. We just started and the generated code is already a lot bigger than the original, but it works fine for small programs. This isn't for writing real-world production code; it's for smaller demos and experiments.

Re-implementing Native Control Constructs

Let's take a look at some more complicated code. Here we have a while statement and variable shadowing.

function foo(x) {
  while(Math.random() < .3) {
    let x = 5;
    console.log(x);
  }
}

What if we want to pause on line 3 inside the while loop? We cannot use the native while statement, of course. We must re-implement it in our state machine. A clean version of the generated code would look like this. Note how we shadowed the x passed into the function by renaming the x declared inside the while loop as x$0.

function foo(x) {
  var $__next = 0, $__t1, x$0;
  while(1) {
    switch($__next) {
      case 0:
        $__t1 = Math.random();
        $__next = 1;
        break;
      case 1:
        if(!($__t1 < .3)) {
          $__next = 5;
        }

        $__next = 2;
        break;
      case 2:
        x$0 = 5;
        $__next = 3;
        break;
      case 3:
        console.log(x$0)
        $__next = 4;
        break;
      case 4:
        $__next = 0;
        break;
      case 5:
        return;
    }
  }
}

The while loop starts at label 0 and loops around at label 4, where $__next is set back to 0. In label 1, the while condition is checked, and if it's falsy it skips to label 5, which exits the loop and returns from the function.

Unfortunately, we must re-implement all native control constructs. This means while, for, try, and everything else has custom logic in Unwinder (which was inherited from regenerator).

Temporary Variables

You may have noticed the weird $__t1 variable in the above code. That is a temporary variable allocated by the compiler. We must use temporary variables because we are splitting up nested expressions, and we must save the intermediate result of each expression somewhere.

For example, take the following code:

function foo(x) {
  return func1(func2(func3(x)));
}

The output would be:

function foo(x) {
  var $__next = 0, $__t1, $__t2, $__t3;
  while(1) {
    switch($__next) {
      case 0:
        $__next = 1;
        $__t3 = func3(x);
        break;
      case 1:
        $__next = 2;
        $__t2 = func2($__t3);
        break;
      case 2:
        $__next = 3;
        $__t1 = func1($__t2);
        break;
      case 3:
        return $__t1;
    }
  }
}

This allows us to pause in between each expression.

Optimizing Regenerator

I started this project by forking regenerator, which compiles generators to ES5 code. I was able to take advantage of its existing state machine transform, which does all the hard work. However, it compiles code into something like this:

var marked0$0 = [foo].map(regeneratorRuntime.mark);
function foo() {
  var x, y;

  return regeneratorRuntime.wrap(function foo~D(context$1$0) {
    while (1) switch (context$1$0.prev = context$1$0.next) {
    case 0:
      x = 5;
      y = 6;
      return context$1$0.abrupt("return", x + y);
    case 3:
    case "end":
      return context$1$0.stop();
    }
  }, marked0$0[0], this);
}

This is fine for generators because they aren't used everywhere. More importantly, you only need to break up yield expressions. Everything else can stay the same, as you see in the x and y assignments above. We need to break up every expression and pump each one through the switch statement. (We can't even fall through case statements because we need to check for breakpoints, as you'll see below.)

Wrapping every function is too expensive, and I wanted to remove the context object completely and only use local variables. We need to read $__next between every single expression, and surely it's faster to read from a local variable than off the context object (there might be ways to make sure the engine JITs the context access appropriately, but local variables guarantee the best performance).

Wrapping the function is only needed to allocate the context object, so if we get rid of it we don't need to wrap every function. I changed the compiler to emit local variables instead, which is harder than it sounds. First, when we resume a function, how do we restore the $__next variable? It's trapped as a local variable now. We will show how to restore it below, and that technique is the only reason we can compile to local variables (normal regenerator could not make this optimization).

Temporary variables are also stored on the context, so we need to track those and declare them as local variables as well. That's why you saw $__t1, $__t2, and $__t3 in the "Temporary Variables" section above.

Doing all of this work means that we generate code more like the example state machines you saw above. There are a few cases where we need direct access to the virtual machine, like checking if breakpoints exist at a certain expression, and right now the code assumes that a global variable VM exists. There are probably better ways to do that.

Saving and Restoring

Now that we have a state machine, we need a way to save all the state and restore it. This seems like a tricky problem, but the paper presents an interesting solution: exceptions.

Exceptions are the only way we can divert control flow. They allow us to generate code which uses native calling conventions: function calls are normal function calls. Exceptions allow us to stop the program and inspect the state machine.

Let's look at the first example again, but with a debugger statement:

function foo() {
  var x = 5;
  var y = 6;
  debugger;
  return x + y;
}

The trick is to wrap the entire function body in a try/catch statement, and within the catch save the values of $__next, x, and y. We need to save not only the state machine, but also the values of local variables so that we can restore them later.

Here's how the output would look:

function foo() {
  var $__next = 0, x, y;
  try {
    while(1) {
      switch($__next) {
        case 0:
          x = 5;
          $__next = 1;
          break;
        case 1:
          y = 6;
          $__next = 2;
          break;
        case 2:
          $__next = 3;
          throw new $ContinuationExc();
        case 3:
          return x + y;
      }
    }
  }
  catch(e) {
    if(!(e instanceof $ContinuationExc))
      e = new $ContinuationExc({ error: e })

    e.save($__next, { x: x, y: y });
    throw e;
  }
}

In label 2, which is what the debugger statement compiled to, we throw what's called a "continuation exception". This is caught by our catch statement, where we record the current state in it and throw it again.

If an error occurs, we will catch that as well and save the error object in our continuation exception.

In our virtual machine, we run all code within a top-level try/catch which switches into the "paused" state when continuation exceptions are thrown and saves them for a future resumption.

Note: you might be worried about the performance of throwing exceptions around, and you aren't wrong. This technique favors the performance of normal code at the cost of the performance of continuations. For some use cases, like debuggers, the performance of pausing and resuming does not matter. For other use cases (where continuations are frequently saved and restored), this technique will be too slow. But it's still a great way to teach continuations.

The Stack

You're probably wondering about function calls and the stack. What happens to functions on the stack? We can't reliably restore a program without restoring the stack as well, so we must capture the entire stack.

It's natural for the above code to capture the whole stack. In the continuation exception, instead of saving a single function's state, make it save a list of frames (the stack). Let's rename the save method to pushFrame, introduce a $Frame object, and it looks like this:

function foo() {
  var $__next = 0, x, y;
  try {
    // while & switch statements ...
  }
  catch(e) {
    if(!(e instanceof $ContinuationExc))
      e = new $ContinuationExc({ error: e })
    e.pushFrame(new $Frame($__next, { x: x, y: y }));
    throw e;
  }
}

function bar() {
  var $__next = 0;
  try {
    // while & switch statements ...
    foo();
    // ...
  }
  catch(e) {
    if(!(e instanceof $ContinuationExc))
      e = new $ContinuationExc({ error: e })
    e.pushFrame(new $Frame($__next, {}));
    throw e;
  }
}

This shows two functions, bar which calls foo. When foo pauses, it creates a continuation exception, saves the frame, and re-throws it. bar will then capture it, save its own frame, and re-throw it.

As mentioned before, our virtual machine installed a top-level try/catch which will grab the accumulated call stack and save it.

This way the continuation exception will have all the frames of the current stack. We could serialize the continuation exception into this:

[["foo", 2, { x: 5, y: 6 }]
 ["bar", 10, {}]]

The stack starts with the innermost frame and goes out from there. The $__next value is saved as the second entry, and local variables as the third. This is all the information we need to restore the entire program!

Restoring

Now it's time to take that information and "resume" the program by restoring the call stack. This is also a bit tricky, but I think it's ingenious.

Let's start with this simple function again:

function foo() {
  var x = 5;
  var y = 6;
  debugger;
  return x + y;
}

We need to restore the position of the state machine ($__next), x, and y, and we have all this information in the frame object. The problem is how to give the function the frame to restore.

By toggling an internal flag doRestore, we tell functions to do a restore. They need a reference to the virtual machine, and right now they assume a global VM variable exists. There might be a better way to achieve that. To restore, they pop a frame off the saved stack in the VM and apply it. This new code exists at the top of the function:

function foo() {
  var $__next = 0, x, y;

  try {
    if(VM.doRestore) {
      var $__frame = VM.popFrame();
      $__next = $__frame.next;
      x = $__frame.state.x;
      y = $__frame.state.y;
    }

    while(1) {
      switch($__next) {
        // ... code ...
      }
    }
  }
  catch(e) {
    // ... save frame ...
  }
}

All of the local variables, including $__next, are restored so we can resume execution as if nothing happened.

The popFrame method on the virtual machine is pretty simple: pop a frame, and if there are no more frames left, set doRestore to false to turn off restoration mode:

Machine.prototype.popFrame = function() {
  var r = this.stack.pop();
  if(!this.stack.length) {
    this.doRestore = false;
    this.stack = null;
  }
  return r;
};

But we still have a problem with this: what about the stack? We need to restore the whole stack, and this only restores the first function. More trickery is needed for that.

The trick is to also save a reference to the corresponding function in the frame itself when saving in the catch block. This allows us to call it again when restoring (we also save this to call it with the right context). With these new fields on the frame, we can restore the entire stack with this code:

function bar() {
  var $__next = 0, $__t1;

  try {
    if(VM.doRestore) {
      var $__frame = VM.popFrame();
      $__next = $__frame.next;
      var $__child = $__frame.child;

      // This is new! It will call the next function, which will also
      // pop off a frame and restore
      if($__child) {
        $__frame.state.$__t1 = $__child.fn.call($__child.thisPtr);
      }

      $__t1 = $__frame.state.$__t1;
    }

    while(1) {
      switch($__next) {
        // ... code ...
      }
    }
  }
  catch(e) {
    // ... save frame ...
  }
}

We actually just call the function which re-creates the stack. The called function will also restore itself, cascading the restoration until the entire stack exists again. Without going into details, the compiler knows which temporary variable to store the result in, which is $__t1 in this case. The code within the switch is waiting for the result in this temporary variable.

Live Breakpoints

If you want the ability to set breakpoints on a running script, you need to add another check. Supporting debugger statements is easy: just compile it to a throw new $ContinuationExc(). However, to support live breakpoints we need to check if a breakpoints exists at every single step of execution.

We can do this in the while(1) loop which is running the state machine. Again, we need access to the virtual machine to check for breakpoints. All it takes is this:

while(1) {
  if (VM.breakpoints[1][$__next] !== undefined)
    throw new $ContinuationExc();

  switch($__next) {
      // ... code ...
  }
}

Don't worry about the structure of VM.breakpoints; it is broken up in a way that lets us track functions separately (hence the [1] lookup). It gives us the ability to stop at any point in a script by resolving a line/column location to a specific label in a function's state machine, and setting a breakpoint there.

That's It!

Although it's a large transform, it gives us everything we need to arbitrarily save and resume the stack. It's not something you should use in production, but it's useful for demoing or prototyping ideas without having to know the internals of a real JS engine.

I showed a small function and incrementally explained the compiler output. To see the final output which has all the saving, restoration, and breakpoint machinery, check out this gist.

If you want to learn more, check out Unwinder, particularly the compiler and the virtual machine code. You can see the implementation of $ContinuationExc, $Frame, and how it runs programs.

You can also learn more about continuations (and see demos that this enables) in my last post.


The Rust Programming Language BlogAnnouncing Rust 1.9

The Rust team is happy to announce the latest version of Rust, 1.9. Rust is a systems programming language focused on safety, speed, and concurrency.

As always, you can install Rust 1.9 from the appropriate page on our website, and check out the detailed release notes for 1.9 on GitHub. About 1000 patches were landed in this release.

What’s in 1.9 stable

Controlled unwinding

The biggest shift in Rust 1.9 is the stabilization of the std::panic module, which includes methods for halting the unwinding process started by a panic:

use std::panic;

let result = panic::catch_unwind(|| {
    println!("hello!");
});
assert!(result.is_ok());

let result = panic::catch_unwind(|| {
    panic!("oh no!");
});
assert!(result.is_err());

This new API was defined in RFC 1236.

In general, Rust distinguishes between two ways that an operation can fail:

  • Due to an expected problem, like a file not being found.
  • Due to an unexpected problem, like an index being out of bounds for an array.

Expected problems usually arise from conditions that are outside of your control; robust code should be prepared for anything its environment might throw at it. In Rust, expected problems are handled via the Result type, which allows a function to return information about the problem to its caller, which can then handle the error in a fine-grained way.

Unexpected problems are bugs: they arise due to a contract or assertion being violated. Since they are unexpected, it doesn’t make sense to handle them in a fine-grained way. Instead, Rust employs a “fail fast” approach by panicking, which by default unwinds the stack (running destructors but no other code) of the thread which discovered the error. Other threads continue running, but will discover the panic any time they try to communicate with the panicked thread (whether through channels or shared memory). Panics thus abort execution up to some “isolation boundary”, with code on the other side of the boundary still able to run, and perhaps to “recover” from the panic in some very coarse-grained way. A server, for example, does not necessarily need to go down just because of an assertion failure in one of its threads.

The new catch_unwind API offers a way to introduce new isolation boundaries within a thread. There are a couple of key motivating examples:

  • Embedding Rust in other languages
  • Abstractions that manage threads

For the first case, unwinding across a language boundary is undefined behavior, and often leads to segfaults in practice. Allowing panics to be caught means that you can safely expose Rust code via a C API, and translate unwinding into an error on the C side.

For the second case, consider a threadpool library. If a thread in the pool panics, you generally don’t want to kill the thread itself, but rather catch the panic and communicate it to the client of the pool. The catch_unwind API is paired with resume_unwind, which can then be used to restart the panicking process on the client of the pool, where it belongs.
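As a rough sketch of that second case (a toy example of my own, not a real threadpool): a worker thread catches the panic and ships the payload back to the client, which then chooses to resume it.

use std::panic;
use std::sync::mpsc;
use std::thread;

fn main() {
    let (tx, rx) = mpsc::channel();

    // A stand-in for a pool worker: catch the panic instead of letting it
    // tear down the worker thread. (The default panic hook still prints the
    // panic message from the worker.)
    thread::spawn(move || {
        let result = panic::catch_unwind(|| {
            panic!("task failed");
        });
        if let Err(payload) = result {
            // Ship the boxed panic payload back to the client.
            tx.send(payload).unwrap();
        }
    });

    // The client decides what to do with the panic; here we re-raise it
    // on our side of the isolation boundary.
    let payload = rx.recv().unwrap();
    panic::resume_unwind(payload);
}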

In both cases, you’re introducing a new isolation boundary within a thread, and then translating the panic into some other form of error elsewhere.

A final point: why catch_unwind rather than catch_panic? We are in the process of adding an additional strategy for panics: aborting the entire process (possibly after running a general hook). For some applications, this is the most reasonable way to deal with a programmer error, and avoiding unwinding can have performance and code size wins.

Deprecation warnings

We introduced a new attribute for library authors: #[deprecated]. This attribute allows you to tag an API with a deprecation warning, which users of your crate will receive whenever they use the API, directing them to a replacement API. Deprecation warnings have long been a part of the standard library, but thanks to RFC 1270 they’re now usable ecosystem-wide.
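A quick sketch of how a library author might use it (the names here are made up):

#[deprecated(since = "1.9.0", note = "use `replacement` instead")]
pub fn old_api() {}

pub fn replacement() {}

fn main() {
    // rustc warns here: use of deprecated item, "use `replacement` instead".
    old_api();
}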

New targets

We now publish standard library binaries for several new targets:

  • mips-unknown-linux-musl,
  • mipsel-unknown-linux-musl, and
  • i586-pc-windows-msvc.

The first two targets are particularly interesting from a cross-compilation perspective; see the recent blog post on rustup for more details.

Compile time improvements

The time complexity of comparing variables for equivalence during type unification is reduced from O(n!) to O(n). As a result, some programming patterns compile much, much more quickly.

Rolling out use of specialization

This release sees some of the first use of specialization within the standard library. Specialization, which is currently available only on nightly, allows generic code to automatically be specialized based on more specific type information.

One example where this comes up in the standard library: conversion from a string slice (&str) to an owned String. One method, to_string, comes from a generic API which was previously relatively slow, while the custom to_owned implementation provided better performance. Using specialization, these two functions are now equivalent.
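To give a flavor of the mechanism, here is an illustrative, nightly-only sketch with a made-up trait (this is not the actual standard library code):

#![feature(specialization)]

use std::fmt::Display;

trait Stringify {
    fn stringify(&self) -> String;
}

// Generic "slow path": goes through the formatting machinery.
impl<T: Display + ?Sized> Stringify for T {
    default fn stringify(&self) -> String {
        format!("{}", self)
    }
}

// Specialized "fast path" for string slices: a straight copy.
impl Stringify for str {
    fn stringify(&self) -> String {
        String::from(self)
    }
}

fn main() {
    assert_eq!(5.stringify(), "5");
    assert_eq!("hello".stringify(), "hello");
}

The default keyword on the blanket implementation is what allows the more specific str implementation to take precedence.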

With this simple test of specialization under our belt, we have more performance improvements on the way in upcoming releases.

Library stabilizations

About 80 library functions and methods are now stable in 1.9. The most significant is the std::panic module, described earlier, but there’s a lot more too:

Networking

  • TcpStream, TcpListener, and UdpSocket gained a number of methods for configuring the connection.
  • SocketAddr and its variants gained set_ip() and set_port() conveniences (a short sketch follows this list).
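A small sketch of the newly stabilized setters in use, assuming a UDP socket so that no remote peer is needed (the TTL value and addresses are arbitrary):

use std::net::{SocketAddr, UdpSocket};

fn main() {
    // Bind to an ephemeral local port; no remote peer is needed for this demo.
    let socket = UdpSocket::bind("127.0.0.1:0").expect("bind failed");

    // One of the newly stabilized configuration setters.
    socket.set_ttl(64).expect("set_ttl failed");
    println!("ttl = {}", socket.ttl().expect("ttl failed"));

    // SocketAddr convenience setters: tweak the port without rebuilding the address.
    let mut addr: SocketAddr = "127.0.0.1:8080".parse().expect("parse failed");
    addr.set_port(9090);
    println!("addr = {}", addr);
}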

Collections

  • BTreeSet and HashSet gained the take(), replace(), and get() methods, which make it possible to recover ownership of the original key.
  • OsString gained a few methods, bringing it closer to parity with String.
  • Slices gained copy_from_slice(), a safe form of memcpy (see the sketch after this list).
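For example (copy_from_slice panics if the two slices have different lengths):

fn main() {
    let src = [1u8, 2, 3, 4];
    let mut dst = [0u8; 4];
    // A safe memcpy: the element type must be Copy, and the lengths must match.
    dst.copy_from_slice(&src);
    assert_eq!(dst, [1, 2, 3, 4]);
}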

Encoding

  • char gained the ability to decode into UTF-16.

Pointers

  • Raw pointers gained as_ref() and as_mut(), which return an Option<&T> (or Option<&mut T>), translating null pointers into None (see the sketch after this list).
  • ptr::{read,write}_volatile() allow for volatile reading and writing from a raw pointer.
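A minimal sketch of as_ref on a raw pointer (the call itself is unsafe because the pointer must be either valid or null):

fn main() {
    let x = 5u32;
    let p: *const u32 = &x;
    let null: *const u32 = std::ptr::null();

    // A null pointer is translated into None rather than a dangling reference.
    unsafe {
        assert_eq!(p.as_ref(), Some(&5));
        assert_eq!(null.as_ref(), None);
    }
}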

Finally, many of the types in libcore did not contain a Debug implementation. This was fixed in the 1.9 release.

See the detailed release notes for more.

Cargo features

There were two major changes to Cargo:

First, Cargo can now be run concurrently.

Second, a new environment variable, RUSTFLAGS, was added. It allows you to specify arbitrary extra flags to be passed to rustc, which is useful for packagers, for example.
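For example, a packager might run something like the following (the particular codegen flag is just an illustration) to pass -C target-cpu=native to every rustc invocation Cargo spawns:

RUSTFLAGS="-C target-cpu=native" cargo build --release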

See the detailed release notes for more.

Contributors to 1.9

We had 127 individuals contribute to 1.9. Thank you so much!

  • Aaron Turon
  • Abhishek Chanda
  • Adolfo Ochagavía
  • Aidan Hobson Sayers
  • Alan Somers
  • Alejandro Wainzinger
  • Aleksey Kladov
  • Alex Burka
  • Alex Crichton
  • Amanieu d'Antras
  • Andrea Canciani
  • Andreas Linz
  • Andrew Cantino
  • Andrew Horton
  • Andrew Paseltiner
  • Andrey Cherkashin
  • Angus Lees
  • Ariel Ben-Yehuda
  • Benjamin Herr
  • Björn Steinbrink
  • Brian Anderson
  • Brian Bowman
  • Christian Wesselhoeft
  • Christopher Serr
  • Corey Farwell
  • Craig M. Brandenburg
  • Cyryl Płotnicki-Chudyk
  • Daniel J Rollins
  • Dave Huseby
  • David AO Lozano
  • David Henningsson
  • Devon Hollowood
  • Dirk Gadsden
  • Doug Goldstein
  • Eduard Burtescu
  • Eduard-Mihai Burtescu
  • Eli Friedman
  • Emanuel Czirai
  • Erick Tryzelaar
  • Evan
  • Felix S. Klock II
  • Florian Berger
  • Geoff Catlin
  • Guillaume Gomez
  • Gökhan Karabulut
  • JP Sugarbroad
  • James Miller
  • Jeffrey Seyfried
  • John Talling
  • Jonas Schievink
  • Jonathan S
  • Jorge Aparicio
  • Joshua Holmer
  • Kai Noda
  • Kamal Marhubi
  • Katze
  • Kevin Brothaler
  • Kevin Butler
  • Manish Goregaokar
  • Markus Westerlind
  • Marvin Löbel
  • Masood Malekghassemi
  • Matt Brubeck
  • Michael Huynh
  • Michael Neumann
  • Michael Woerister
  • Ms2ger
  • NODA, Kai
  • Nathan Kleyn
  • Nick Cameron
  • Niko Matsakis
  • Noah
  • Novotnik, Petr
  • Oliver Middleton
  • Oliver Schneider
  • Philipp Oppermann
  • Piotr Czarnecki
  • Pyfisch
  • Richo Healey
  • Ruud van Asseldonk
  • Scott Olson
  • Sean McArthur
  • Sebastian Wicki
  • Seo Sanghyeon
  • Simon Sapin
  • Simonas Kazlauskas
  • Steve Klabnik
  • Steven Allen
  • Steven Fackler
  • Stu Black
  • Sébastien Marie
  • Tang Chenglong
  • Ted Horst
  • Ticki
  • Tim Montague
  • Tim Neumann
  • Timon Van Overveldt
  • Tobias Bucher
  • Tobias Müller
  • Todd Lucas
  • Tom Tromey
  • Tshepang Lekhonkhobe
  • Ulrik Sverdrup
  • Vadim Petrochenkov
  • Valentin Lorentz
  • Varun Vats
  • Wang Xuerui
  • Wangshan Lu
  • York Xiang
  • arcnmx
  • ashleysommer
  • bors
  • ggomez
  • gohyda
  • ituxbag
  • mitaa
  • nicholasf
  • petevine
  • pierzchalski
  • pravic
  • srinivasreddy
  • tiehuis
  • ubsan
  • vagrant
  • vegai
  • vlastachu
  • Валерий Лашманов

Jen Kaganday 3: draw it out

i’m working on a patch for adding a button that lights up if there’s a playable URL in the page.

jared sent some links and explanations, but i also needed to draw it out.

button-logic-drawing

logic:

“OK, so you’ve got a content script, that runs in each web page, and your addon code, which isn’t in the page and can’t see it

So you’ll want your content script to always load on every page, and do a querySelectorAll check for the youtube href*= selectors used by the context menu code

Once the content script (loaded by pageMod) finds a hit in the page, it should send a message back to the main addon script, telling it to light up the button

We’ll also need to detect when a new page loads and un-highlight the button (hopefully you can do this from the addon, without involving a content script, but I’m not sure)”

step-by-step:

  1. get the script in the page to send over a message, and have the addon code just console.log something when it gets the message so you can convince yourself messaging is working. I think self.port is the addon SDK wrapper around postmessage.
  2. once you’re able to send messages over, then you can have your content script look for youtube links in the page, and send over a message that says if it found a match
  3. and then you can update the addon code to toggle the button state based on whether a matching link was found (rough sketch of all three steps below)
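here's a rough sketch (not jared's code; the file name find-links.js and the event name are made up) of what those three steps might look like with the SDK's page-mod and port messaging:

// main addon code (e.g. index.js)
var self = require("sdk/self");
var pageMod = require("sdk/page-mod");

pageMod.PageMod({
  // run the content script on every page
  include: "*",
  contentScriptFile: self.data.url("find-links.js"),
  onAttach: function(worker) {
    // steps 1 & 3: receive the message, then toggle the button state here
    worker.port.on("playable-link", function(found) {
      console.log("content script says playable link found:", found);
    });
  }
});

// content script (data/find-links.js)
// step 2: look for youtube links in the page and report back to the addon
var matches = document.querySelectorAll('a[href*="youtube.com"], a[href*="youtu.be"]');
self.port.emit("playable-link", matches.length > 0);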

links:

sdk guide to content scripts

port object

postmessage

page-mod

Kevin BrosnanServo python build error Windows 10

While compiling Servo on Windows 10, I hit the following error:

./mach build -r
<time passes>
--- stderr
thread '<main>' panicked at 'Can't find python (tried python27.exe and python.exe)! Try fixing PATH or setting the PYTHON env var', C:\msys64\home\kbrosnan\src\servo\components\style\build.rs:22
note: Run with `RUST_BACKTRACE=1` for a backtrace.

Setting the PYTHON environment variable allowed the build to succeed: export PYTHON=/c/Python27/python.exe

Air MozillaThe Joy of Coding - Episode 58

The Joy of Coding - Episode 58 mconley livehacks on real Firefox bugs while thinking aloud.

Mozilla Open Policy & Advocacy BlogThe countdown is on: 24 months to GDPR compliance

Twenty four months from now, a new piece of legislation will apply throughout Europe: the General Data Protection Regulation (GDPR). Broadly speaking, we see the GDPR as advantageous for both users and companies, with trust and security being key components of a successful business in today’s digital age. We’re glad to see an update for European data protection law – the GDPR is replacing the earlier data protection “directive”,  95/46/EC, which was drafted over 20 years ago when only 1% of Europeans had access to the Internet. With the GDPR’s formal adoption as of 14th April 2016, the countdown to compliance has begun. Businesses operating in all 28 European Union (EU) member states have until 25th May 2018 to get ready for compliance, or face fines of up to 4% of their worldwide turnover.

The GDPR aims to modernise data protection rules for today’s digital challenges, increase harmonisation within the EU, strengthen enforcement powers, and increase user control over personal data. The Regulation moved these goals forward, although it is not without its flaws. With some elements of it, the devil will be in the details, and it remains to be seen what the impact will be in practice.

That aside, there are many good pieces of the Regulation which stand out. We want to call out five:

  1. Less is more: we welcome the reaffirmation of core privacy principles requiring that businesses should limit the amount of data they collect and justify for what purpose they collect data. At Mozilla, we put these principles into action and advocate for businesses to adopt lean data practices.
  2. Greater transparency equals smarter individual choice: we applaud the Regulation’s endorsement of transparency and user education as key assets.
  3. Privacy as the default setting: businesses managing data will have to consider privacy throughout the entire lifecycle of products and services. That means that from the day teams start designing a product, privacy must be top of mind. It also means that strong privacy should always be the “by-default setting”.
  4. Privacy and competition are mutually reinforcing: with added controls for users like the ability to port their personal data, users remain the owner of their data, even when they leave a service. Because this increases the ability to move to another provider, this creates competition and prevents user lock-in within one online platform.
  5. What’s good for the user is good for business: strengthened data and security practices also decrease the risks associated with personal data collection and processing for both users and businesses. This is not negligible: in 2015, data breaches cost on average USD 3.79 million per impacted company, not to mention the customer trust they lost.

Above and beyond the direct impact of the GDPR, its standard-setting potential is substantial. It is more than a purely regional regulation, as it will have global impact. Any business that markets goods or services to users in the EU will be subject to compliance, regardless of whether their business is located in the EU.

We will continue to track the implications of the GDPR over the next 24 months as it comes into force, and will stay engaged with any opportunities to work out the final details. We encourage European Internet users and businesses everywhere to join us – stay tuned as we continue to share thoughts and updates here.

George RoterRefreshing the Participation Buffet

Have an impact on Mozilla’s mission.

The answer has been consistent every time we’ve asked volunteer Mozillians what their primary motivation is for contributing to Mozilla. 3 years ago, last year, this month, “have an impact on Mozilla’s mission” is always the number one answer.

What we’ve also heard over the past few months is that lacking clarity on the best areas to have an impact is one of the main things limiting contribution. When I look at this objectively, I completely understand. Mozilla is a huge and complex project, with multiple products, projects and activities, and we’re right now in a constant state of change as Mitchell recently articulated nicely. To exacerbate the situation, we haven’t been very good at clearly communicating where Mozilla is heading.

Our much needed effort to reinvent participation at Mozilla has added yet more ambiguity to the mix. As a Participation Team, we’ve made some progress on bringing more clarity, and have laid out a broad strategy for 2016. But this isn’t enough. We need to do better.

We need to help Mozillians know where they can apply their skills, time and passions to have an impact on Mozilla’s mission.

My teammate Rosana Ardila had a great analogy that I’ll borrow: We once had a delicious, well laid out food buffet of contribution and impact areas at Mozilla. But that food has been sitting out for quite a while. Some of it is stale. Some of the dishes aren’t even on the menu anymore. We’ve moved around the serving stations so that some are sitting in a dark corner that nobody can find. Some of the food is still really tasty and just needs a flame lit underneath it … okay, I’ll stop killing the analogy!

Over the next couple of months we’re going to refresh this participation buffet.

We’re going to do this by both designing and rolling out some new, high impact and well designed contribution areas and campaigns, in partnership with teams around Mozilla. And we’re also going to curate and highlight fantastic contribution opportunities driven primarily by other teams.

We need all Mozillians (employees and volunteers) to help with this, by helping to build, highlight and lead great areas of participation. Please get in touch with your thoughts and ideas (groter@mozilla.com).

Our filters for what makes it into the buffet are simple, and they reflect what we have come to understand will represent outstanding participation at Mozilla.

First, we will focus on participation areas that:

  1. Help Mozilla innovate, driving the leading edge of our work and thinking.
  2. Extend Mozilla’s reach, by bringing Mozilla products, ideas and issues to more people, and connecting more people with Mozilla.

Second, we will highlight contribution opportunities that have a well designed participant experience, and that have thought-through methods for bringing value to both Mozilla’s mission and to Mozillians. To be clear, these opportunities won’t each be applicable to everyone; most will require a specific set of interests or backgrounds. We do want to make sure that the full buffet will provide opportunities for a diverse range of participants and plenty of opportunities for people to build new skills.

Third, we know that we don’t have the next phase of Mozilla figured out — far from it! So activities that help us learn about the future of participation at Mozilla are a priority. And by definition, that will mean trying things out that may not work.

How about a sneak preview? What can you get involved with RIGHT NOW!?

You’ll be able to find great opportunities highlighted here – https://wiki.mozilla.org/Participation#Focus_Areas_for_Participation

There’s a couple ready right now, and many more that we’ll highlight soon on Discourse and the Wiki page above.

Helping Mozilla innovate:

  • Dino Tank London – We want you to pitch problems that are worth solving at Mozilla! This opportunity closes on May 27th, but will be refreshed again in June/July.

Coming soon…

Extending Mozilla’s reach:

  • Open Comms: Making Encryption Mainstream – submit ideas on how to educate people around the world or in your community about the importance of encryption. This closes on June 3rd, but Open Comms will have many more opportunities coming soon.

Coming soon…

  • Next phase of the Take Back the Web campaign
  • MakerParty
  • Growing the Firefox Nightly community
  • Something exciting on engaging developers

Final ask

Please do let me know what you think of the above. Does it resonate? Do you have areas you’d suggest we highlight or build for? What else might be missing from our approach?

Head over to this conversation on Discourse to share your thoughts and ideas.


Tantek ÇelikTomorrowland: Misjudging A Movie By Its Name And Associations

I have a distinct memory, it must have been late last May, of seeing the digital movie marquees on the outside of San Francisco’s Metreon complex. I don’t remember exactly what day it was, and the experience at the time was too unremarkable for me to bother capturing in my personal log.

I remember seeing “Tomorrowland, George Clooney” crawling on the marquee and reflexively thinking: another film based on a theme park (ride), likely to be silly and shallow. And in particular, disappointing.

As a child growing up in Southern California, Tomorrowland was my favorite part of Disneyland. So many rides that inspired imagination, and hope for the potential of technology to explore, educate, empower. From Space Mountain to Adventure Thru Inner Space. I also remember slowly becoming disillusioned with Tomorrowland. Rides changed from science hope & curiosity, to science fantasy & entertainment (Star Wars based Star Tours, Captain EO). Exploration fell out of fashion; the Submarine Voyage and Mission to Mars rides were both closed.

I had not even seen a trailer for the Tomorrowland film.

I expected disappointment from something I had no direct experience with, based on what I associated it with, and assumed it would be. I was also extrapolating from other theme-park-ride movies like Pirates of the Caribbean.

The difference between a themed “land” and a specific ride didn’t seem important. Little did I know, that difference apparently allowed for sufficient writer(s)/director creativity for the film to be something much more than anything defined by a particular ride.

It would be almost three months before I returned to Tomorrowland.

Next: Tomorrowland: A Change Of Perspective & A Flight To Paris

Karl DubostHow To Fix border-image in your CSS?

I have already explained this in the context of Gmail, but I thought we should make it more explicit.

If you go on Music Oricon Japanese Web site with Firefox Android with about:config?filter=prefixes.webkit set to false, you will see massive breakages, including this navigation bar:

Oricon navigation bar in Firefox

There are a couple of issues, including flexbox, but let's ignore that for the moment and focus on the border-image issues. The markup is simple.

<ul class="btn-navi">
        <li class="current"><a href="http://music.oricon.co.jp/?&amp;clt=WW5retn8C31gDygq5DOAPvc732o8EWMcQAtlkTOLQNcOUlh72xokjWq9tkN7-TQX24EuyA~~">J-POP</a></li>
        <li><a href="http://music-world.oricon.co.jp/?&amp;clt=WW5retn8C31gDygq5DOAPvc732o8EWMcQAtlkTOLQNcOUlh72xokjWq9tkN7-TQX24EuyA~~">洋楽</a></li>
        <li><a href="http://music-club.oricon.co.jp/?&amp;clt=WW5retn8C31gDygq5DOAPvc732o8EWMcQAtlkTOLQNcOUlh72xokjWq9tkN7-TQX24EuyA~~">クラブ</a></li>
        <li><a href="http://music-indies.oricon.co.jp/?&amp;clt=WW5retn8C31gDygq5DOAPvc732o8EWMcQAtlkTOLQNcOUlh72xokjWq9tkN7-TQX24EuyA~~"><span class="nowrap">イン</span><wbr><span class="nowrap">ディーズ</span></a></li>
        <li><a href="http://music-anime.oricon.co.jp/?&amp;clt=WW5retn8C31gDygq5DOAPvc732o8EWMcQAtlkTOLQNcOUlh72xokjWq9tkN7-TQX24EuyA~~">アニメ</a></li>
    </ul>

Let's look at the CSS for each a href:

.btn-navi > li.current > a {
    border-right: black solid 5px;
    border-left: black solid 5px;
    -webkit-border-image: url(image/background_btn_current.png) 0 10 0 10;
    -moz-border-image: url(image/background_btn_current.png) 0 10 0 10;
}

The author had good intentions. -webkit- and -moz- are there. -ms- is not present, but that's normal: IE never supported border-image with a prefix.

Missing the Non-Prefixed Version

Always add the non-prefixed version in the last position. Two main reasons:

  • Managing the legacy. Your client might not give you a budget to maintain the CSS in the future, so it's better to be ready and universal for other browsers coming along.
  • Helping browser implementers unship support for the prefix. Fewer things to maintain, moving forward.

Let's see what happens if we add border-image.

.btn-navi > li.current > a {
    border-right: black solid 5px;
    border-left: black solid 5px;
    -webkit-border-image: url(image/background_btn_current.png) 0 10 0 10;
    border-image: url(image/background_btn_current.png) 0 10 0 10;
}

It's taken into account

Devtools CSS properties

Missing fill keyword

Always add the fill keyword to your border-image if it's the intended effect.

.btn-navi > li.current > a {
    border-right: black solid 5px;
    border-left: black solid 5px;
    -webkit-border-image: url(image/background_btn_current.png) 0 10 0 10;
    -moz-border-image: url(image/background_btn_current.png) 0 10 0 10;
    border-image: url(image/background_btn_current.png) 0 10 0 10 fill;
}

Now it already looks better.

navbar half fixed

But we just fixed the selected one. We do the same for the others.

.btn-navi > li > a {
    display: table-cell;
    vertical-align: middle;
    height: 36px;
    font-size: 12px;
    font-weight: bold;
    color: #333;
    line-height: 1em;
    text-decoration: none;
    text-shadow: 0px 2px 1px #FFF;
    border-right: black solid 5px;
    border-left: black solid 5px;
    -webkit-border-image: url(image/background_btn_normal.png) 0 10 0 10;
    -moz-border-image: url(image/background_btn_normal.png) 0 10 0 10;
    border-image: url(image/background_btn_normal.png) 0 10 0 10 fill;
}

And this time this is fixed!

navbar fixed

Simple no?

border-image

  • border-image is defined in the CSS Backgrounds specification.
  • It needs the fill keyword at the end of the declaration. The ‘fill’ keyword, if present, causes the middle part of the border-image to be preserved (by default it is discarded, i.e., treated as empty).
  • border-style needs to be defined.

Icing on the Cake: flexbox

The issue is classic and simple.

.btn-navi {
    display: box;
    display: -webkit-box;
    display: -moz-box;
    width: 100%;
    list-style: none;
    margin: 8px 0;
    padding: 0;
}

And as you can notice, the devtools has chosen display: -moz-box:

devtools css

If I set about:config?filter=prefixes.webkit to true

navbar

To fix it properly, it just needs display: flex

devtools css

And we get a proper navbar.

navbar

Otsukare!

Mike TaylorNeil deGrasse Tyson is blurry

In Bug 1275069 (which I stumbled upon thanks to one of my "Neil deGrasse Tyson" bugmail filters), we run into an interesting bug that's the result of same origin policy restrictions and vendor prefixes.

For those unfamiliar with how Buzzfeed works, frequently you get a list of blurry images and click on them for a dramatic reveal. Riveting stuff.

To set up the blur, if you're using Firefox, they serve you an SVG <filter>, because before Firefox 35, CSS filters weren't yet supported. Other browsers get a vendor prefixed CSS filter (and whatever the progid:DXImageTransform junk for IE is called).

But, there's a problem for Firefox users. There's no blurry image—just a blank space. A literal Neil deGrasse Tyson vacuum.

Here's what that SVG <filter> looks like in devtools:

The problem is that img.buzzfeed.com is a different origin from www.buzzfeed.com and that server isn't sending any CORS headers, so Firefox doesn't actually render the filter (...or the image at all. That seems weird to me—see 1105145).

OK, whatever. I still want to click on that to see some sensuous lounging (the user is probably thinking).

And the plan is that once you click the (missing) blurry image, they remove the inline style with the filter url() function, but, oops, now the following class applies (they remove the entire class in other browsers):

.graphic_image {
...
  -webkit-filter: blur(30px);
  -moz-filter: blur(30px);
  -o-filter: blur(30px);
  -ms-filter: blur(30px);
...
}

So just when you thought you were gonna get the big Neil deGrasse Tyson reveal, you're stuck with the following:

(Until we added support for -webkit-filter as an alias of (unprefixed) filter (-moz-filter was never like, a thing), that rule was ignored, fun!)

Anyways, in your own web journalism going forward, you can just use unprefixed CSS blur filters and skip the super fragile UA sniffing code paths.
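For reference, the unprefixed declaration is all you need (using the same 30px radius as the rules above):

.graphic_image {
  filter: blur(30px);
}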

And if you'd like to stick with SVG filters, make sure you're serving those from the same origin as your content, or at the very least go copy pasta some CORS headers from stack overflow for your asset servers.

Mozilla Open Policy & Advocacy BlogMozilla’s Transparency Report

Today, Mozilla released our transparency report. Transparency and openness are among Mozilla’s founding principles and a key part of who we are and how we operate: from our open, auditable codebase to our open development work in Bugzilla and Github. The report is another example of our commitment to these principles.

Asa DotzlerMy New Role @ Mozilla

After a couple of years working on Mozilla’s mobile operating system project, I’m coming back to Firefox!

I’ll be doing some familiar things and some new things. My official title is Product Manager, Firefox Roadmap and Community. What that means, first and foremost, is that I’ll be returning as our storyteller, making sure that we’re communicating regularly about where Firefox is heading, and that we’re fully engaged with Firefox users, fans, and contributors.

My first few weeks will be spent getting up to speed with the Firefox teams, from Product  Management and User Experience to Engineering and Program/Project Management. We’re doing a lot with Firefox in 2016 and 2017. I can’t wait to start sharing that story.

If you’ve got ideas about what needs improving first with Firefox communications, perhaps the Monday all-hands meeting content, or the roadmap documents on the wiki, or something completely different, please let me know in comments or email.

I’m over the moon excited about this role. Stay tuned. It’s gonna be great.

Yunier José Sosa VázquezHow-to: Use ownCloud in Thunderbird to send large files

Sending and receiving files by email is very common in these busy days of today's world. But what happens when what you want to send exceeds the limit allowed by your email provider? In that case you would have to compress the file and split it into several parts, so that the other person can download each of them and join them back together. So far so good, but if the mailboxes fill up, ours or the recipient's, the transfer will be affected and we will have to try again another time. This method is therefore not very reasonable: besides saturating the servers, for some people it turns out to be very complicated.

Thanks to advances in technology, today we have “cloud” systems that, among many other features, let us synchronize and store files easily. Among these systems we can find Box, Mega, Amazon, Dropbox, Google Drive and ownCloud, to mention just a few.

Since version 13, Thunderbird has included Filelink, a feature that lets you use these online storage services to send large files, so that your recipients only receive a link to download the file and you save quota on your mail server.

To our delight, at UCI both staff and students have ownCloud servers available to store part of our information. And, while we are at it, we can use them to send files large or small.

How do I add ownCloud support to Thunderbird?

  1. By default Thunderbird does not include ownCloud, so we have to install this add-on and restart Thunderbird.
  2. In Thunderbird's preferences, go to the Attachments section and then to the Outgoing tab. In this window we can configure the size that files must exceed before Thunderbird offers to link attachments to the cloud; by default this value is 5 MB.
    filelink_owncloud_opciones_thunderbird
  3. Click the Add button. When the Set up Filelink window appears, choose ownCloud from the drop-down list.filelink_seleccionar_servicio
  4. Enter the following details:
    Server: https://misarchivos.uci.cu/owncloud; if you are a student you should use https://misarchivos.estudiantes.uci.cu
    Username: your username
    Storage path in ownCloud: /, by default. If you want, you can create a folder to upload only the files linked from Thunderbird and keep better track of them.
    Password for uploaded files: if we want to set a password on every file we share from Thunderbird.
    filelink_owncloud_configuracion
  5. Click the Set up account button and enter your password to access the service; if you want, you can store it in the password manager.
    filelink_owncloud_password
  6. If everything went well, some details about your account will be shown
    filelink_owncloud_mostrardatos
  7. Done, you can now send a file using Thunderbird and ownCloud.

At this point we have the add-on installed and configured. All that's left is to try it out and see it in action.

How do I link a file?

  1. When you attach a file to an email, if it is larger than the maximum size configured in Thunderbird, the mail client will automatically ask whether you want to “convert it into a linked attachment”.
    filelink_enlazar_archivos
  2. When you click the Link button, the file is immediately uploaded to the cloud, and you will then see the name, the size and the link to download it from ownCloud attached in the body of the message.
    filelink_owncloud_adjuntar_archivo2
    You can also “convert” an attachment by right-clicking on it, going to Convert to … and selecting the desired Filelink service, in this case ownCloud.
    filelink_owncloud_adjuntar_archivo3
  3. By default, Filelink authenticates with the server only when necessary, so sometimes you need to do this manually in the Attachments/Outgoing section of Thunderbird's options, selecting the desired service and clicking the Authorize button.
    filelink_owncloud_autorizar

Now all that remains is to start using Filelink to send files and thus avoid filling up our mailbox and our recipients'. If you want to try other ways of using Filelink in Thunderbird, you can visit the filelink tag on our add-ons site. I hope you find it useful.

About:CommunityA New Firefox Development Forum

We’ve been looking for the right home for Firefox browser development Q&A for a while now. It’s taken longer than it should have, but after a lot of discussion and experimentation with different tools and forums, we’ve finally come to a conclusion.

In retrospect the decision was obvious; hindsight is like that. But here it is; if we want everyone in the community to be a part of making Firefox great, then we should be where the community is: part of the Mozilla Community Discourse forum.

Things are a bit thin on the ground there now; I’ll be migrating over some questions and answers from other forums to stock that pond shortly. In the meantime if you’re new to Discourse it’s a very civilized piece of forum software. You can keep track of discussions happening there by logging in and taking a look in the upper right-hand corner, where you’ll see “Watching”, “Tracking”, “Normal” and “Muted”. Set that to “Watching”, and you’ll get a notification when a new topic comes up for discussion. Set it to “Tracking”, and you’ll also get a note when you’re called out by name. You can also watch or track individual threads, which is a nice touch.

Alternatively, if you’re a fan of syndicated feeds you can grab an Atom feed as follows:

https://discourse.mozilla-community.org/c/firefox-development.rss

I hope you’ll join us in helping build Firefox into everything it can be, the best browser in the world and the cornerstone of a free, open and participatory Web. And as always, if you’ve got questions about that, please email me directly.

Thank you,

– mhoye

Air MozillaConnected Devices Weekly Program Update, 24 May 2016

Connected Devices Weekly Program Update Weekly project updates from the Mozilla Connected Devices team.

Air MozillaMartes mozilleros, 24 May 2016

Martes mozilleros Bi-weekly meeting to talk (in Spanish) about Mozilla status, community and...

David LawrenceHappy BMO Push Day!

the following changes have been pushed to bugzilla.mozilla.org:

  • [1274139] QuickSearch: searching for text containing colons no longer works
  • [1262039] typo in error message “Failed to retreive components…”
  • [1272548] bug link in header broken if format param specified
  • [1225241] Section titles should be highlighted
  • [1149384] implement time tracking interface
  • [1265392] Clicking “stop following this bug” also turned off the ‘restrict comments to users not in editbugs’ flag
  • [1262462] automatically collapse the first comment when it’s empty
  • [1262468] add reporter_accessible checkbox
  • [1275077] Add Alex Salkever to the VP drop down list? Thanks!:)
  • [1268305] Integrate Readable Bug Status in Bug Detail Page

discuss these changes on mozilla.tools.bmo.


Michael KaplyDefault Profile Directory Doesn’t Work in Firefox 46

It was recently discovered that support for using the defaults/profile directory to prepopulate a Firefox profile was removed in Firefox 46.

Here’s an AutoConfig file that adds back the functionality:

//
const {classes: Cc, interfaces: Ci, utils: Cu} = Components;
Cu.import("resource://gre/modules/Services.jsm");
Cu.import("resource://gre/modules/FileUtils.jsm");

if (!Services.prefs.prefHasUserValue("browser.startup.homepage_override.mstone")) {
  // New profile
  var defaultProfileDir = Services.dirsvc.get("GreD", Ci.nsIFile);
  defaultProfileDir.append("defaults");
  defaultProfileDir.append("profile");
  if (defaultProfileDir.exists()) {
    var profileDir = Services.dirsvc.get("ProfD", Ci.nsIFile);
    try {
      copyDir(defaultProfileDir, profileDir);
    } catch (e) {
      Components.utils.reportError(e);
    }
  }
}

function copyDir(aOriginal, aDestination) {
  var enumerator = aOriginal.directoryEntries;
  while (enumerator.hasMoreElements()) {
    var file = enumerator.getNext().QueryInterface(Components.interfaces.nsIFile);
    if (file.isDirectory()) {
      var subdir = aDestination.clone();
      subdir.append(file.leafName);
      try {
        subdir.create(Ci.nsIFile.DIRECTORY_TYPE, FileUtils.PERMS_DIRECTORY);
        copyDir(file, subdir);
      } catch (e) {
        Components.utils.reportError(e);
      }
    } else {
      try {
        file.copyTo(aDestination, null);
      } catch (e) {
        Components.utils.reportError(e);
      }
    }
  }
}

This should work for most file types, although it probably won’t work for bookmarks.html

I’ll also be adding this functionality to the next version of the CCK2 which will be released later today.

Giorgos LogiotatidisRemove users from git-crypt enabled repository

Git-crypt is a neat git extension to encrypt some files - if not all - in a git repository. It integrates nicely with git using filters, and its use is completely transparent once you have unlocked a repository.

Using git-crypt you can still share a repository in public and maintain a set of files with secrets that are accessible to a limited number of users. Especially useful for open source projects.

At some point you may need to remove one of the users who have access to the encrypted files. Git-crypt does not provide a command to remove users (yet), because it is complicated to do right while preserving history.

Until this issue is fixed, I put together a script to automate the procedure. The script will de-initialize git-crypt from the repository and completely remove the git-crypt keys and users. Then it will re-encrypt the repository and re-add all the keys (users) except the one requested to be removed.

This is obviously a workaround, and a proper solution with multiple git-crypt key versions would be preferable, but it gets the work done. The solution was inspired by comments in the corresponding issue.

Grab the script and documentation from GitHub.

Varun JoshiSolving Update Races in Balrog: The Plan

The coding period for the Google Summer of Code has begun, so here is the plan for the project, as was promised.

Currently, when two submitter tasks request a blob from Balrog to update at the same time, they both have the same data_version in the blob they send back with the added locale. This leads to the server rejecting one of them and the submitter having to retry. In most cases, the updates could simply be merged, preventing the retries. For example, this series of events is something like what happens now:

  1. Submitter 1 requests data from Balrog (and receives data specifying data_version = 1)
  2. Submitter 2 requests data from Balrog (data_version = 1)
  3. Submitter 1 submits its release blob to Balrog with data_version = 1 (data_version changes to 2)
  4. Submitter 2 fails to submit release blob to Balrog (since data_version is now 2, but submitter specified 1 in the request)

Most of the time, the data in the blobs is similar apart from some additions, such as new locales. So we can try to reduce submission failures by devising a way to merge blob versions when we receive a request with an outdated data_version. For my project, I will implement the merging of two blobs using a three-way merge algorithm, similar to what git uses for merges. Since we currently do not have any libraries for this task, I will write a module for three-way merges of Python data structures (and hopefully get it published on PyPI!) and use it within Balrog to accomplish the goals of this project.

The basic algorithm for three-way merges is described here:

  1. We first calculate a diff between the new versions of the file (data_version 2 and the version where we fail, in our example) and the old version (data_version 1). There are multiple tools that allow diffs of the form we want. DictDiffer and deepdiff are two alternatives we could utilize for this step. Both of them return an easy to use list of differences in our blobs.
  2. If there are no changes for that particular item in either diff or if both diffs have added an element with equal values, we add the item in the resultant object.
  3. If both diffs have removed some item, we ignore that item and do not include it in our output.
  4. If one diff has added an element while the other hasn't, we add the element.
  5. If both diffs have added or changed an element but they have unequal values, we recursively apply our algorithm (see the sketch after this list). If there is a difference in a string that was present in the root version (data_version 1 in our example), we might apply a string three-way merge algorithm.
  6. For lists, we can treat every list element as a line in text and apply a traditional three way merge algorithm to it.
  7. For all other cases, we may consider the two changes to be conflicting and we may apply some conflict resolution strategies.
  8. If the type of an element changes, we can consider it to be a merge conflict.
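To make steps 2–5 concrete, here is a rough sketch of the recursive case for nested dictionaries (this is not the eventual module; the function name and the fallback conflict-resolution choice are placeholders):

def merge(base, left, right):
    """Three-way merge: `base` is the common ancestor, `left` and `right`
    are the two divergent versions of the same structure."""
    if left == right:
        return left            # both sides agree (or neither changed anything)
    if left == base:
        return right           # only the right side changed
    if right == base:
        return left            # only the left side changed
    if isinstance(base, dict) and isinstance(left, dict) and isinstance(right, dict):
        result = {}
        for key in set(base) | set(left) | set(right):
            in_left, in_right = key in left, key in right
            if not in_left and not in_right:
                continue                   # removed on both sides
            if not in_left or not in_right:
                survivor = left if in_left else right
                # removed on one side: keep it only if the other side changed it
                if key in base and survivor[key] == base[key]:
                    continue
                result[key] = survivor[key]
            else:
                result[key] = merge(base.get(key), left[key], right[key])
        return result
    # both sides changed the same value differently: a conflict; arbitrarily
    # prefer the left change here (a real implementation would make this
    # strategy configurable, as discussed below)
    return left

# Example: two submitters each add a locale to the same release blob.
base = {"locales": {"de": {}}}
left = {"locales": {"de": {}, "en-US": {"buildID": "1"}}}
right = {"locales": {"de": {}, "fr": {"buildID": "2"}}}
print(merge(base, left, right))
# e.g. {'locales': {'de': {}, 'en-US': {'buildID': '1'}, 'fr': {'buildID': '2'}}} (key order may vary)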

Further analysis needs to be done for the handling of list and tuple values, since the preservation of order might be important in those data structures. We might want to support two modes: one where the preservation of the order of the list elements is important and another where it isn't. In some cases, merging strings might also be undesirable, so string merging might need to be made optional too.

We may employ several merge-conflict resolution strategies:

  • Select one of the changes.
  • Discard both changes

This algorithm is based on this research paper and this email. Both show how a three way merge would work out for strings.

Here is a link to my proposal. Keep checking this space every week for updates on my project and feel free to point out if I can do anything better! Thank you :)

Air MozillaBringing the Next Billion Online

Bringing the Next Billion Online Nearly 4 billion people around the world don't use the Internet. Bringing developing countries into the global digital community should be a priority for the...

Jen Kaganday 1: learn all the words

i started my internship today! many etherpads and video calls and IRC windows. here are my first tasks:

  1. review XUL school on mozilla developer network. XUL is the language/framework for developing firefox extensions.
  2. review issues on github.com/meandavejustice/min-vid, an early prototype that i’ll be working on with a small but mighty team. start digging into issue #20, “doesn’t work for youtu.be links”

here’s jared telling me how to get started.

Screen Shot 2016-05-23 at 4.23.57 PM

i’m interested in the structure of a distributed project like this. seems like the smaller the team/project, the less structure is needed. as projects get bigger, github issues need to be better maintained, style guidelines more strictly adhered to, deadlines created and met, meetings guided by agendas, etc. regardless of size, it’s amazing that so much can get done with so many people sitting in their own homes, checking in via IRC, video chats, and shared checklists.

anyway. so far, i’ve found that there are lots of words and concepts i don’t know. so, glossary, compiled from various parts of mdn:

gecko – “Gecko is the name of the layout engine developed by the Mozilla Project. It was originally named NGLayout. Gecko’s function is to read web content, such as HTML, CSS, XUL, JavaScript, and render it on the user’s screen or print it.” “Gecko SDK is a collection of header files and tools used to develop general XPCOM components which add functionality to the existing platform”

XULRunner – “XULRunner is a framework which can be used to run standalone or embedded applications based on the Mozilla framework” “includes the Gecko rendering engine, the Necko networking library, and several other components that provide OS-independent file management, accessibility, and localization, among others.”

webextensions – “WebExtensions are the future of Firefox add-ons. If you can use the WebExtensions API, it’s the best choice.”

sdk – “software development kit is typically a set of software development tools that allows the creation of applications for a certain software package, software framework, hardware platform, computer system, video game console, operating system, or similar development platform.”

add-on vs. extension – all extensions are add-ons, but add-ons can also be themes, plugins, language packs

elements of an add-on

  • chrome.manifest- chrome.manifest file tells Firefox where to look for chrome files; the chrome is composed of 3 sections:
    • content,
    • locale,
    • skin
  • install.rdf- holds unique IDs of the add-on

context-menu – “Instead of manually adding items when particular contexts occur and then removing them when those contexts go away, you bind items to contexts, and the adding and removing is automatically handled for you. Items are bound to contexts in much the same way that event listeners are bound to events.”

classic menu vase who knows, who cares? CLASSIC MENU VASE

Marcia KnousExcited to be working with our Outreachy intern!

Today marks the first day I will be co-mentoring Outreachy intern Manel Rhaiem. I will be teaming up with Kate Glazko who is part of the Project SmartHome team. I am excited to see what Manel will accomplish as part of this project - it should be an exciting few months! I plan on blogging about our experience and sharing everything that all of us are learning. One of the things we discussed the other day was StrengthsFinder 2.0 - and how all of us compared as far as our strengths. I find this to be

Daniel PocockPostBooks, PostgreSQL and pgDay.ch talk

PostBooks 4.9.5 was recently released and the packages for Debian (including jessie-backports), Ubuntu and Fedora have been updated.

Postbooks at pgDay.ch in Rapperswil, Switzerland

pgDay.ch is coming on Friday, 24 June. It is at the HSR Hochschule für Technik Rapperswil, at the eastern end of Lake Zurich.

I'll be making a presentation about Postbooks in the business track at 11:00.

Getting started with accounting using free, open source software

If you are not currently using a double-entry accounting system or if you are looking to move to a system that is based on completely free, open source software, please see my comparison of free, open source accounting software.

Free and open source solutions offer significant advantages: flexibility (businesses can choose any programmer to modify the code), and SQL back-ends, multi-user support and multi-currency support come as standard. These are all things that proprietary vendors charge extra money for.

Accounting software is the lowest common denominator in the world of business software; people keen on the success of free and open source software may find that encouraging businesses to use one of these solutions is a great way to lay a foundation where other free software solutions can thrive.

PostBooks new web and mobile front end

xTuple, the team behind Postbooks, has been busy developing a new Web and Mobile front-end for their ERP, CRM and accounting suite, powered by the same PostgreSQL backend as the Linux desktop client.

More help is needed to create official packages of the JavaScript dependencies before the Web and Mobile solution itself can be packaged.

Mozilla Reps CommunityRep of the Month – January 2016

Please join us in congratulating Dian Ina Mahendra as Rep of the Month for January!

Dian Ina is one of the core contributors from Indonesia. Art Manager by day and enthusiastic Mozillian by night, Ina contributed heavily to building her community’s roadmap for 2016. She is also contributing to Firefox social media materials and is helping, along with the rest of the Indonesian community, to create their community space in Jakarta.
One of her biggest pieces of work was building the budget plan for her community’s activities for 2016 and taking on the accountability role for her community along with Andi.
 

QMOFirefox 47 beta 7 Testday Results

Howdy mozillians!

Last week on Friday (May 20th), we held another successful event – Firefox 47 beta 7 Testday.

Thank you all – Ilse Macías, Stelian Ionce, Iryna Thompson, Nazir Ahmed Sabbir, Rezaul Huque Nayeem, Tanvir Rahman, Zayed News, Azmina Akter Papeya, Roman Syed, Raihan Ali, Sayed Ibn Masudn, Samad Talukdar, John Sujoy, Nafis Ahmed Muhit, Sajedul Islam, Asiful Kabir Heemel, Sunny, Maruf Rahman, Md. Tanvir Ahmed, Saddam Hossain, Wahiduzzaman Hridoy, Ishak Herock, Md.Tarikul Islam Oashi, Md Rakibul Islam, Niaz Bhuiyan Asif, MD. Nnazmus Shakib (Robin), Akash, Towkir Ahmed, Saheda Reza Antora, Md. Almas Hossain, Hasibul Hasan Shanto, Tazin Ahmed, Badiuzzaman Pranto, Md.Majedul islam, Aminul Islam Alvi, Toufiqul Haque Mamun, Fahim, Zubayer Alam, Forhad Hossain, Mahfuza Humayra Mohona – for the participation!

A big thank you goes out to all our active moderators too!

Results:

  • there were no bugs verified nor triaged
  • some failures were mentioned for APZ feature in the etherpads (link 1 and link 2); therefore, please add the requested details in the etherpads or, even better, join us on #qa IRC channel and let’s figure them out 😉

I strongly advise every one of you to reach out to us, the moderators, via #qa during the events when you encounter any kind of failure. Keep up the great work! \o/

And keep an eye on QMO for upcoming events! 😉

Yunier José Sosa VázquezFirefox for iOS improves its security and speeds up your browsing

Last week Mozilla released a new version of Firefox for iOS, and at Mozilla Hispano we want to show you what's new. This release mainly improves people's privacy and security while browsing the Web, and delivers a more streamlined experience that gives you greater control over your mobile browsing.

What's new in this update?

The iOS Today widget: We know that quickly getting what you are looking for on the Web matters to you, especially on your phone. That's why you can now access Firefox through the iOS Today widget to open new tabs or a recently copied link.

The iOS Today widget in Firefox for iOS

The Awesome Bar: From now on, typing in the address bar will show your bookmarks, history and search suggestions that match the term you type. This makes getting to your favorite websites faster and easier.

The Awesome Bar shows bookmarks and search suggestions.

Manage your security: By default, Firefox helps keep you secure by warning you when the connection to a given site is not safe. When you try to access an insecure site, you will see an “error” message warning you that the connection is not to be trusted, protecting you when you access such sites. With Firefox for iOS, you can temporarily dismiss those error messages for pages you consider “safe” but that Firefox may flag as potentially insecure.

Certificate error in Firefox for iOS

Because the mechanism Apple uses for downloading and installing applications on its phones is very complicated, we cannot offer this version for download from our site. Perhaps later, if this rule changes, we will be able to do so and complete our kit of Firefox versions. So, to try out and enjoy the new features added to Firefox for iOS, you should download this update from the App Store.

download_on_appstore

Sources: The Mozilla Blog and Mozilla Press

Niko MatsakisUnsafe abstractions

The unsafe keyword is a crucial part of Rust’s design. For those not familiar with it, the unsafe keyword is basically a way to bypass Rust’s type checker; it essentially allows you to write something more like C code, but using Rust syntax.

The existence of the unsafe keyword sometimes comes as a surprise at first. After all, isn’t the point of Rust that Rust programs should not crash? Why would we make it so easy then to bypass Rust’s type system? It can seem like a kind of flaw in the design.

In my view, though, unsafe is anything but a flaw: in fact, it’s a critical piece of how Rust works. The unsafe keyword basically serves as a kind of escape valve – it means that we can keep the type system relatively simple, while still letting you pull whatever dirty tricks you want to pull in your code. The only thing we ask is that you package up those dirty tricks with some kind of abstraction boundary.

This post introduces the unsafe keyword and the idea of unsafety boundaries. It is in fact a lead-in for another post I hope to publish soon that discusses a potential design of the so-called Rust memory model, which is basically a set of rules that help to clarify just what is and is not legal in unsafe code.

Unsafe code as a plugin

I think a good analogy for thinking about how unsafe works in Rust is to think about how an interpreted language like Ruby (or Python) uses C modules. Consider something like the JSON module in Ruby. The JSON bundle includes a pure Ruby implementation (JSON::Pure), but it also includes a re-implementation of the same API in C (JSON::Ext). By default, when you use the JSON bundle, you are actually running C code – but your Ruby code can’t tell the difference. From the outside, that C code looks like any other Ruby module – but internally, of course, it can play some dirty tricks and make optimizations that wouldn’t be possible in Ruby. (See this excellent blog post on Helix for more details, as well as some suggestions on how you can write Ruby plugins in Rust instead.)

Well, in Rust, the same scenario can arise, although the scale is different. For example, it’s perfectly possible to write an efficient and usable hashtable in pure Rust. But if you use a bit of unsafe code, you can make it go faster still. If this is a data structure that will be used by a lot of people or is crucial to your application, this may be worth the effort (so e.g. we use unsafe code in the standard library’s implementation). But, either way, normal Rust code should not be able to tell the difference: the unsafe code is encapsulated at the API boundary.

Of course, just because it’s possible to use unsafe code to make things run faster doesn’t mean you will do it frequently. Just like the majority of Ruby code is in Ruby, the majority of Rust code is written in pure safe Rust; this is particularly true since safe Rust code is very efficient, so dropping down to unsafe Rust for performance is rarely worth the trouble.

In fact, probably the single most common use of unsafe code in Rust is for FFI. Whenever you call a C function from Rust, that is an unsafe action: this is because there is no way the compiler can vouch for the correctness of that C code.

Extending the language with unsafe code

To me, the most interesting reason to write unsafe code in Rust (or a C module in Ruby) is so that you can extend the capabilities of the language. Probably the most commonly used example of all is the Vec type in the standard library, which uses unsafe code so it can handle uninitialized memory; Rc and Arc, which enable shared ownership, are other good examples. But there are also much fancier examples, such as how Crossbeam and deque use unsafe code to implement non-blocking data structures, or Jobsteal and Rayon use unsafe code to implement thread pools.

In this post, we’re going to focus on one simple case: the split_at_mut method found in the standard library. This method is defined over mutable slices like &mut [T]. It takes as argument a slice and an index (mid), and it divides that slice into two pieces at the given index. Hence it returns two subslices: one that ranges from 0..mid, and one that ranges from mid...

You might imagine that split_at_mut would be defined like this:

impl [T] {
    pub fn split_at_mut(&mut self, mid: usize) -> (&mut [T], &mut [T]) {
        (&mut self[0..mid], &mut self[mid..])
    }
}

If it compiled, this definition would do the right thing, but in fact if you try to build it you will find it gets a compilation error. It fails for two reasons:

  1. In general, the compiler does not try to reason precisely about indices. That is, whenever it sees an index like foo[i], it just ignores the index altogether and treats the entire array as a unit (foo[_], effectively). This means that it cannot tell that &mut self[0..mid] is disjoint from &mut self[mid..]. The reason for this is that reasoning about indices would require a much more complex type system.
  2. In fact, the [] operator is not builtin to the language when applied to a range anyhow. It is implemented in the standard library. Therefore, even if the compiler knew that 0..mid and mid.. did not overlap, it wouldn’t necessarily know that &mut self[0..mid] and &mut self[mid..] return disjoint slices.

Now, it’s plausible that we could extend the type system to make this example compile, and maybe we’ll do that someday. But for the time being we’ve preferred to implement cases like split_at_mut using unsafe code. This lets us keep the type system simple, while still enabling us to write APIs like split_at_mut.

Abstraction boundaries

Looking at unsafe code as analogous to a plugin helps to clarify the idea of an abstraction boundary. When you write a Ruby plugin, you expect that when users from Ruby call into your function, they will supply you with normal Ruby objects and pointers. Internally, you can play whatever tricks you want: for example, you might use a C array instead of a Ruby vector. But once you return values back out to the surrounding Ruby code, you have to repackage up those results as standard Ruby objects.

It works the same way with unsafe code in Rust. At the public boundaries of your API, your code should act as if it were any other safe function. This means you can assume that your users will give you valid instances of Rust types as inputs. It also means that any values you return or otherwise output must meet all the requirements that the Rust type system expects. Within the unsafe boundary, however, you are free to bend the rules (of course, just how free you are is the topic of debate; I intend to discuss it in a follow-up post).

Let’s look at the split_at_mut method we saw in the previous section. For our purposes here, we only care about the public interface of the function, which is its signature:

impl [T] {
    pub fn split_at_mut(&mut self, mid: usize) -> (&mut [T], &mut [T]) {
        // body of the fn omitted so that we can focus on the
        // public interface; safe code shouldn't have to care what
        // goes in here anyway
    }
}

So what can we derive from this signature? To start, split_at_mut can assume that all of its inputs are valid (for safe code, the compiler’s type system naturally ensures that this is true; unsafe callers would have to ensure it themselves). Part of writing the rules for unsafe code will require enumerating more precisely what this means, but at a high-level it’s stuff like this:

  • The self argument is of type &mut [T]. This implies that we will receive a reference that points at some number N of T elements. Because this is a mutable reference, we know that the memory it refers to cannot be accessed via any other alias (until the mutable reference expires). We also know the memory is initialized and the values are suitable for the type T (whatever it is).
  • The mid argument is of type usize. All we know is that it is some unsigned integer.

There is one interesting thing missing from this list, however. Nothing in the API assures us that mid is actually a legal index into self. This implies that whatever unsafe code we write will have to check that.

Next, when split_at_mut returns, it must ensure that its return value meets the requirements of the signature. This basically means it must return two valid &mut [T] slices (i.e., pointing at valid memory, with a length that is not too long). Crucially, since those slices are both valid at the same time, this implies that the two slices must be disjoint (that is, pointing at different regions of memory).
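
To see why the disjointness requirement matters, here is a small usage sketch; double_both is an invented example, not something from the standard library. Safe callers are entitled to treat the two returned slices as completely independent mutable borrows, so an implementation that returned overlapping slices would silently break code like this:

fn double_both(data: &mut [u32]) {
    let mid = data.len() / 2;

    // Safe code may treat `left` and `right` as fully independent.
    // If a buggy `split_at_mut` returned overlapping slices, the two
    // loops below could alias the same memory without the type system
    // ever noticing.
    let (left, right) = data.split_at_mut(mid);

    for x in left.iter_mut() {
        *x *= 2;
    }
    for y in right.iter_mut() {
        *y *= 2;
    }
}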

Possible implementations

So let’s look at a few different implementation strategies for split_at_mut and evaluate whether they might be valid or not. We already saw that a pure safe implementation doesn’t work. So what if we implemented it using raw pointers like this:

impl [T] {
    pub fn split_at_mut(&mut self, mid: usize) -> (&mut [T], &mut [T]) {
        use std::slice::from_raw_parts_mut;

        // The unsafe block gives us access to raw pointer
        // operations. By using an unsafe block, we are claiming
        // that none of the actions below will trigger
        // undefined behavior.
        unsafe {
            // get a raw pointer to the first element
            let p: *mut T = &mut self[0];

            // get a pointer to the element `mid`
            let q: *mut T = p.offset(mid as isize);

            // number of elements after `mid`
            let remainder = self.len() - mid;

            // assemble a slice from 0..mid
            let left: &mut [T] = from_raw_parts_mut(p, mid);

            // assemble a slice from mid..
            let right: &mut [T] = from_raw_parts_mut(q, remainder);

            (left, right)
        }
    }
}

This is a mostly valid implementation, and in fact fairly close to what the standard library actually does. However, this code is making a critical assumption that is not guaranteed by the input: it is assuming that mid is in range. Nowhere does it check that mid <= len, which means that the q pointer might be out of range, and also means that the computation of remainder might overflow and hence (in release builds, at least by default) wrap around. So this implementation is incorrect, because it requires more guarantees than what the caller is required to provide.

We could make it correct by adding an assertion that mid is a valid index (note that the assert macro in Rust always executes, even in optimized code):

impl [T] {
    pub fn split_at_mut(&mut self, mid: usize) -> (&mut [T], &mut [T]) {
        use std::slice::from_raw_parts_mut;

        // check that `mid` is in range:
        assert!(mid <= self.len());

        // as before, with fewer comments:
        unsafe {
            let p: *mut T = &mut self[0];
            let q: *mut T = p.offset(mid as isize);
            let remainder = self.len() - mid;
            let left: &mut [T] = from_raw_parts_mut(p, mid);
            let right: &mut [T] = from_raw_parts_mut(q, remainder);
            (left, right)
        }
    }
}

OK, at this point we have basically reproduced the implementation in the standard library (it uses some slightly different helpers, but it’s the same idea).

Extending the abstraction boundary

Of course, it might happen that we actually wanted to assume that mid is in bounds, rather than checking it. We couldn’t do this for the actual split_at_mut, of course, since it’s part of the standard library. But you could imagine wanting a private helper for safe code that made this assumption, so as to avoid the runtime cost of a bounds check. In that case, the helper is relying on the caller to guarantee that mid is in bounds. This means that it is no longer safe to call, because it has additional requirements for its arguments that must be satisfied in order to guarantee memory safety.

Rust allows you to express the idea of a fn that is not safe to call by moving the unsafe keyword out of the fn body and into the public signature. Moving the keyword makes a big difference to the meaning of the function: the unsafety is no longer just an implementation detail of the function, it’s now part of the function’s interface. So we could make a variant of split_at_mut called split_at_mut_unchecked that avoids the bounds check:

impl [T] {
    // Here the **fn** is declared as unsafe; calling such a function is
    // now considered an unsafe action for the caller, because they
    // must guarantee that `mid <= self.len()`.
    pub unsafe fn split_at_mut_unchecked(&mut self, mid: usize) -> (&mut [T], &mut [T]) {
        use std::slice::from_raw_parts_mut;
        let p: *mut T = &mut self[0];
        let q: *mut T = p.offset(mid as isize);
        let remainder = self.len() - mid;
        let left: &mut [T] = from_raw_parts_mut(p, mid);
        let right: &mut [T] = from_raw_parts_mut(q, remainder);
        (left, right)
    }
}

When a fn is declared as unsafe like this, calling that fn becomes an unsafe action: what this means in practice is that the caller must read the documentation of the function and ensure that the conditions the function requires are met. In this case, it means that the caller must ensure that mid <= self.len().

If you think about abstraction boundaries, declaring a fn as unsafe means that it does not form an abstraction boundary with safe code. Rather, it becomes part of the unsafe abstraction of the fn that calls it.

Using split_at_mut_unchecked, we could now re-implement split_at_mut so that it simply layers the bounds check on top:

impl [T] {
    pub fn split_at_mut(&mut self, mid: usize) -> (&mut [T], &mut [T]) {
        assert!(mid <= self.len());

        // By placing the `unsafe` block in the function, we are
        // claiming that we know the extra safety conditions
        // on `split_at_mut_unchecked` are satisfied, and hence calling
        // this function is a safe thing to do.
        unsafe {
            self.split_at_mut_unchecked(mid)
        }
    }

    // **NB:** Requires that `mid <= self.len()`.
    pub unsafe fn split_at_mut_unchecked(&mut self, mid: usize) -> (&mut [T], &mut [T]) {
        ... // as above
    }
}

Unsafe boundaries and privacy

Although there is nothing in the language that explicitly connects the privacy rules with unsafe abstraction boundaries, they are naturally interconnected. This is because privacy allows you to control the set of code that can modify your fields, and this is a basic building block to being able to construct an unsafe abstraction.

Earlier we mentioned that the Vec type in the standard library is implemented using unsafe code. This would not be possible without privacy. If you look at the definition of Vec, it looks something like this:

pub struct Vec<T> {
    pointer: *mut T,
    capacity: usize,
    length: usize,
}

Here the field pointer is a pointer to the start of the allocated memory, capacity records how many elements’ worth of memory has been allocated, and length records how many of those elements have been initialized.

The vector code is all very careful to maintain the invariant that the first length elements of the memory that pointer refers to are always initialized and hence safe to access. You can imagine that if the length field were public, this would be impossible: anybody from the outside could go and change the length to whatever they want!
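
To make the danger concrete, here is a deliberately hypothetical sketch – the names LeakyVec and corrupt are invented for illustration, and this is not how the standard library defines Vec. If the fields were public, entirely safe code could destroy the invariant that the unsafe code elsewhere in the module relies on:

// Hypothetical: a Vec-like struct that exposes its fields publicly.
pub struct LeakyVec<T> {
    pub pointer: *mut T,
    pub capacity: usize,
    pub length: usize,
}

// This function contains no unsafe code at all, yet it breaks the
// invariant that the first `length` elements are initialized. Any later
// element access that trusted `length` would be undefined behavior.
fn corrupt<T>(v: &mut LeakyVec<T>) {
    v.length = usize::max_value();
}

Keeping the fields private means that only code inside the defining module can touch them, so that module alone is responsible for upholding the invariant.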

For this reason, unsafety boundaries tend to fall into one of two categories:

  • a single function, like split_at_mut
    • this could include unsafe callees like split_at_mut_unchecked
  • a type, typically contained in its own module, like Vec
    • this type will naturally have private helper functions as well
    • and it may contain unsafe helper types too, as described in the next section

Types with unsafe interfaces

We saw earlier that it can be useful to define unsafe functions like split_at_mut_unchecked, which can then serve as the building block for a safe abstraction. The same is true of types. In fact, if you look at the actual definition of Vec from the standard library, you will see that it looks just a bit different from what we saw above:

pub struct Vec<T> {
    buf: RawVec<T>,
    len: usize,
}

What is this RawVec? Well, that turns out to be an unsafe helper type that encapsulates the idea of a pointer and a capacity:

pub struct RawVec<T> {
    // Unique is actually another unsafe helper type
    // that indicates a uniquely owned raw pointer:
    ptr: Unique<T>,
    cap: usize,
}

What makes RawVec an unsafe helper type? Unlike with functions, the idea of an unsafe type is a rather fuzzy notion. I would define such a type as one that doesn’t really let you do anything useful without using unsafe code. Safe code can construct a RawVec, for example, and even resize the backing buffer, but if you want to actually access the data in that buffer, you can only do so by calling the ptr method, which returns a *mut T. This is a raw pointer, so dereferencing it is unsafe, which means that, to be useful, RawVec has to be incorporated into another unsafe abstraction (like Vec) which tracks initialization.

Conclusion

Unsafe abstractions are a pretty powerful tool. They let you play just about any dirty performance trick you can think of – or access any system capability – while still keeping the overall language safe and relatively simple. We use unsafety to implement a number of the core abstractions in the standard library, including core data structures like Vec and Rc. But because all of these abstractions encapsulate the unsafe code behind their API, users of those modules don’t carry the risk.

How low can you go?

One thing I have not discussed in this post is a lot of specifics about exactly what is legal within unsafe code and not. Clearly, the point of unsafe code is to bend the rules, but how far can you bend them before they break? At the moment, we don’t have a lot of published guidelines on this topic. This is something we aim to address. In fact there has even been a first RFC introduced on the topic, though I think we can expect a fair amount of iteration before we arrive at the final and complete answer.

As I wrote on the RFC thread, my take is that we should be shooting for rules that are human friendly as much as possible. In particular, I think that most people will not read our rules and fewer still will try to understand them. So we should ensure that the unsafe code that people write in ignorance of the rules is, by and large, correct. (This implies also that the majority of the code that exists ought to be correct.)

Interestingly, there is something of a tension here: the more unsafe code we allow, the less the compiler can optimize. This is because it would have to be conservative about possible aliasing and (for example) avoid reordering statements.

In my next post, I will describe how I think that we can leverage unsafe abstractions to actually get the best of both worlds. The basic idea is to aggressively optimize safe code, but to be more conservative within an unsafe abstraction (while allowing people to opt back in with additional annotations).

Edit note: Tweaked some wording for clarity.

This Week In RustThis Week in Rust 131

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us an email! Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

This week's edition was edited by: Vikrant and llogiq.

Updates from Rust Community

News & Blog Posts

New Crates & Project Updates

  • Systemd Manager. A systemd service manager written in Rust with the GTK-rs wrapper and direct integration with dbus.
  • FLAME. A flamegraph profiling tool for Rust.
  • Jobsteal. A work-stealing fork-join threadpool written in Rust.
  • pest. Simple, efficient parser generator.

Crate of the Week

This week's Crate of the Week is parking_lot, which gives us synchronization primitives (Mutex, RwLock, Condvar and friends) that are both smaller and faster than the standard library's implementations.

Submit your suggestions for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

117 pull requests were merged in the last two weeks.

New Contributors

  • Daniel Campoverde [alx741]
  • mark-summerfield
  • Postmodern
  • Rémy Rakic
  • Robert Habermeier
  • Val Vanderschaegen

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email Erick Tryzelaar or Brian Anderson for access.

fn work(on: RustProject) -> Money

No jobs listed for this week.

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

No quote was selected for QotW.

Submit your quotes for next week!

The Servo BlogThis Week In Servo 64

In the last week, we landed 99 PRs in the Servo organization’s repositories.

This is a week of many additions to the Servo team - a Research Assistant and two full-time hires!

Kyle Headley is joining us from U. Colorado at Boulder, where he is working with Matthew Hammer on incremental computation. He’s going to be working as a Research Assistant this summer, helping us find ways we can use incremental computation to improve the performance of Servo. He’s kheadley on IRC.

Manish Goregaokar is a long-time Servo contributor, initially participating in the first round of Google Summer of Code with Servo. He has mostly worked on DOM-related issues and Rust itself, but is looking forward to working on new things. He is currently working remotely from Mumbai, but will be relocating to the San Francisco office later this year. He is manishearth on IRC.

Diane Hosfelt previously did network and protocol analysis for the Department of Defense, and will start out working on Servo’s networking (an area sorely in need of some expert work!). Diane is working remotely from the UK. She is dd0x68 on IRC.

Welcome to the team, everybody!

Planning and Status

Our overall roadmap and quarterly goals are available online.

This week’s status updates are here.

Notable Additions

  • Manish added support for submit button data in form submissions
  • Jack made Servo DPI-aware on Windows
  • nox hoisted out a channel creation to reduce the number of channels and threads Servo creates
  • larsberg enabled AppVeyor/Windows testing on ipc-channel
  • dati implemented Included Services support for WebBluetooth
  • ajeffrey reduced the number of threads used in our scheduler
  • rzambre changed the profiler file output from CSV to TSV format
  • emilio added support for constants in classes in geckolib
  • ms2ger implemented reporting of panics in web worker threads
  • bholley added basic support for Gecko atoms
  • mbrubeck optimized text shaping for ASCII text
  • KiChjang implemented support for -moz-user-* CSS longhands in geckolib
  • jdm created markers for network and JS-related events in the timeline profiler
  • izgzhen filled in many missing pieces related to file inputs in forms
  • fduraffourg ported a large set of HTML/JS tests for cookie handling to Rust unit tests
  • wafflespeanut improved the usability of the highfive automated tests
  • creativcoder enabled intercepting network requests and synthesizing responses

New Contributors

Get Involved

Interested in helping build a web browser? Take a look at our curated list of issues that are good for new contributors!

Screenshot

None this week.

Anjana VakilOutreachy: What? How? Why?

Today was my first day as an Outreachy intern with Mozilla! What does that even mean? Why is it super exciting? How did I swing such a sweet gig? How will I be spending my summer non-vacation? Read on to find out!

Outreachy logo

What is Outreachy?

Outreachy is a fantastic initiative to get more women and members of other underrepresented groups involved in Free & Open Source Software. Through Outreachy, organizations that create open-source software (e.g. Mozilla, GNOME, Wikimedia, to name a few) take on interns to work full-time on a specific project for 3 months. There are two internship rounds each year, May-August and December-March. Interns are paid for their time, and receive guidance/supervision from an assigned mentor, usually a full-time employee of the organization who leads the given project.

Oh yeah, and the whole thing is done remotely! For a lot of people (myself included) who don’t/can’t/won’t live in a major tech hub, the opportunity to work remotely removes one of the biggest barriers to jumping in to the professional tech community. But as FOSS developers tend to be pretty distributed anyway (I think my project’s team, for example, is on about 3 continents), it’s relatively easy for the intern to integrate with the team. It seems that most communication takes place over IRC and, to a lesser extent, videoconferencing.

What does an Outreachy intern do?

Anything and everything! Each project and organization is different. But in general, interns spend their time…

Coding (or not)

A lot of projects involve writing code, though what that actually entails (language, framework, writing vs. refactoring, etc.) varies from organization to organization and project to project. However, there are also projects that don’t involve code at all, and instead have the intern working on equally important things like design, documentation, or community management.

As for me specifically, I’ll be working on the project Test-driven Refactoring of Marionette’s Python Test Runner. You can click through to the project description for more details, but basically I’ll be spending most of the summer writing Python code (yay!) to test and refactor a component of Marionette, a tool that lets developers run automated Firefox tests. This means I’ll be learning a lot about testing in general, Python testing libraries, the huge ecosystem of internal Mozilla tools, and maybe a bit about browser automation. That’s a lot! Luckily, I have my mentor Maja (who happens to also be an alum of both Outreachy and RC!) to help me out along the way, as well as the other members of the Engineering Productivity team, all of whom have been really friendly & helpful so far.

Traveling

Interns receive a $500 stipend for travel related to Outreachy, which is fantastic. I intend, as I’m guessing most do, to use this to attend conference(s) related to open source. If I were doing a winter round I would totally use it to attend FOSDEM, but there are also a ton of conferences in the summer! Actually, you don’t even need to do the traveling during the actual 3 months of the internship; they give you a year-long window so that if there’s an annual conference you really want to attend but it’s not during your internship, you’re still golden.

At Mozilla in particular, interns are also invited to a week-long all-hands meet up! This is beyond awesome, because it gives us a chance to meet our mentors and other team members in person. (Actually, I doubly lucked out because I got to meet my mentor at RC during “Never Graduate Week” a couple of weeks ago!)

Blogging

One of the requirements of the internship is to blog regularly about how the internship and project are coming along. This is my first post! Though we’re required to write a post every 2 weeks, I’m aiming to write one per week, on both technical and non-technical aspects of the internship. Stay tuned!

How do you get in?

I’m sure every Outreachy participant has a different journey, but here’s a rough outline of mine.

Step 1: Realize it is a thing

Let’s not forget that the first step to applying for any program/job/whatever is realizing that it exists! Like most people, I think, I had never heard of Outreachy, and was totally unaware that a remote, paid internship working on FOSS was a thing that existed in the universe. But then, in the fall of 2015, I made one of my all-time best moves ever by attending the Recurse Center (RC), where I soon learned about Outreachy from various Recursers who had been involved with the program. I discovered it about 2 weeks before applications closed for the December-March 2015-16 round, which was pretty last-minute; but a couple of other Recursers were applying and encouraged me to do the same, so I decided to go for it!

Step 2: Frantically apply at last minute

Applying to Outreachy is a relatively involved process. A couple months before each round begins, the list of participating organizations/projects is released. Prospective applicants are supposed to find a project that interests them, get in touch with the project mentor, and make an initial contribution to that project (e.g. fix a small bug).

But each of those tasks is pretty intimidating!

First of all, the list of participating organizations is long and varied, and some organizations (like Mozilla) have tons of different projects available. So even reading through the project descriptions and choosing one that sounds interesting (most of them do, at least to me!) is no small task.

Then, there’s the matter of mustering up the courage to join the organization/project’s IRC channel, find the project mentor, and talk to them about the application. I didn’t even really know what IRC was, and had never used it before, so I found this pretty scary. Luckily, I was at RC, and one of my batchmates sat me down and walked me through IRC basics.

However, the hardest and most important part is actually making a contribution to the project at hand. Depending on the project, this can be long & complicated, quick & easy, or anything in between. The level of guidance/instruction also varies widely from project to project: some are laid out clearly in small, hand-holdy steps, others are more along the lines of “find something to do and then do it”. Furthermore, prerequisites for making the contribution can be anything from “if you know how to edit text and send an email, you’re fine” to “make a GitHub account” to “learn a new programming language and install 8 million new tools on your system just to set up the development environment”. All in all, this means that making that initial contribution can often be a deceptively large amount of work.

Because of all these factors, for my application to the December-March round I decided to target the Mozilla project “Contribute to the HTML standard”. In addition to the fact that I thought it would be awesome to contribute to such a fundamental part of the web, I chose it because the contribution itself was really simple: just choose a GitHub issue with a beginner-friendly label, ask some questions via GitHub comments, edit the source markup file as needed, and make a pull request. I was already familiar with GitHub so it was pretty smooth sailing.

Once you’ve made your contribution, it’s time to write the actual Outreachy application. This is just a plain text file you fill out with lots of information about your experience with FOSS, your contribution to the project, etc. In case it’s useful to anyone, here’s my application for the December-March 2015-16 round. But before you use that as an example, make sure you read what happened next…

Step 3: Don’t get in

Unfortunately, I didn’t get in to the December-March round (although I was stoked to see some of my fellow Recursers get accepted!). Honestly, I wasn’t too surprised, since my contributions and application had been so hectic and last-minute. But even though it wasn’t successful, the application process was educational in and of itself: I learned how to use IRC, got 3 of my first 5 GitHub pull requests merged, and became a contributor to the HTML standard! Not bad for a failure!

Step 4: Decide to go for it again (at last minute, again)

Fast forward six months: after finishing my batch at RC, I had been job hunting & interview-prepping, but still hadn’t gotten a job. When the applications for the May-August round opened up, I took a glance at the projects and found some cool ones, but decided that I wouldn’t apply this round because a) I needed a Real Job, not an internship, and b) the last round’s application process was a pretty big time investment which hadn’t paid off (although it actually had, as I just mentioned!).

But as the weeks went by, and the application deadline drew closer, I kept thinking about it. I was no closer to finding a Real Job, and upheaval in my personal life made my whereabouts over the summer an uncertainty (I seem never to know what continent I live on), so a paid, remote internship was becoming more and more attractive. When I broached my hesitation over whether or not to apply to other Recursers, they unanimously encouraged me (again) to go for it (again). Then, I found out that one of the project mentors, Maja, was a Recurser, and since her project was one of the ones I had shortlisted, I decided to apply for it.

Of course, by this point it was once again two weeks until the deadline, so panic once again set in!

Step 5: Learn from past mistakes

This time, the process as a whole was easier, because I had already done it once. IRC was less scary, I already felt comfortable asking the project mentor questions, and having already been rejected in the previous round made it somehow lower-stakes emotionally (“What the hell, at least I’ll get a PR or two out of it!”). During my first application I had spent a considerable amount of time reading about all the different projects and fretting about which one to do, flipping back and forth mentally until the last minute. This time, I avoided that mistake and was laser-focused on a single project: Test-driven Refactoring of Marionette’s Python Test Runner.

From a technical standpoint, however, contributing to the Marionette project was more complicated than the HTML standard had been. Luckily, Maja had written detailed instructions for prospective applicants explaining how to set up the development environment etc., but there were still a lot of steps to work through. Then, because there were so many folks applying to the project, there was actually a shortage of “good-first-bugs” for Marionette! So I ended up making my first contributions to a different but related project, Perfherder, which meant setting up a different dev environment and working with a different mentor (who was equally friendly). By the time I was done with the Perfherder stuff (which turned out to be a fun little rabbit hole!), Maja had found me something Marionette-specific to do, so I ended up working on both projects as part of my application process.

When it came time to write the actual application, I also had the luxury of being able to use my failed December-March application as both a starting point and an example of what not to do. Some of the more generic parts (my background, etc.) were reusable, which saved time. But when it came to the parts about my contribution to the project and my proposed internship timeline, I knew I had to do a much better job than before. So I opted for over-communication, and basically wrote down everything I could think of about what I had already done and what I would need to do to complete the goals stated in the project description (which Maja had thankfully written quite clearly).

In the end, my May-August application was twice as long as my previous one had been. Much of that difference was the proposed timeline, which went from being one short paragraph to about 3 pages. Perhaps I was a bit more verbose than necessary, but I decided to err on the side of too many details, since I had done the opposite in my previous application.

Step 6: Get a bit lucky

Spoiler alert: this time I was accepted!

Although I knew I had made a much stronger application than in the previous round, I was still shocked to find out that I was chosen from what seemed to be a large, competitive applicant pool. I can’t be sure, but I think what made the difference the second time around must have been a) more substantial contributions to two different projects, b) better, more frequent communication with the project mentor and other team members, and c) a much more thorough and better thought-out application text.

But let’s not forget d) luck. I was lucky to have encouragement and support from the RC community throughout both my applications, lucky to have the time to work diligently on my application because I had no other full-time obligations, lucky to find a mentor who I had something in common with and therefore felt comfortable talking to and asking questions of, and lucky to ultimately be chosen from among what I’m sure were many strong applications. So while I certainly did work hard to get this internship, I have to acknowledge that I wouldn’t have gotten in without all of that luck.

Why am I doing this?

Last week I had the chance to attend OSCON 2016, where Mozilla’s E. Dunham gave a talk on How to learn Rust. A lot of the information applied to learning any language/new thing, though, including this great recommendation: When embarking on a new skill quest, record your motivation somewhere (I’m going to use this blog, but I suppose Twitter or a vision board or whatever would work too) before you begin.

The idea is that once you’re in the process of learning the new thing, you will probably have at least one moment where you’re stuck, frustrated, and asking yourself what the hell you were thinking when you began this crazy project. Writing it down beforehand is just doing your future self a favor, by saving up some motivation for a rainy day.

So, future self, let it be known that I’m doing Outreachy to…

  • Write code for an actual real-world project (as opposed to academic/toy projects that no one will ever use)
  • Get to know a great organization that I’ve respected and admired for years
  • Try out working remotely, to see if it suits me
  • Learn more about Python, testing, and automation
  • Gain confidence and feel more like a “real developer”
  • Launch my career in the software industry

I’m sure these goals will evolve as the internship goes along, but for now they’re the main things driving me. Now it’s just a matter of sitting back, relaxing, and working super hard all summer to achieve them! :D

Got any more questions?

Are you curious about Outreachy? Thinking of applying? Confused about the application process? Feel free to reach out to me! Go on, don’t be shy, just use one of those cute little contact buttons and drop me a line. :)

Christian HeilmannGoogle IO – A tale of two Googles

Google IO main stage with audience

Disclaimer: The following are my personal views and experiences at this year’s Google IO. They are not representative of my employer. Should you want to quote me, please do so as Chris Heilmann, developer.

TL;DR: Is Google IO worth the $900? Yes, if you’re up for networking, getting information from experts and enjoy social gatherings. No, if you expect to be able to see talks. You’re better off watching them from home. The live streaming and recordings are excellent.

Google IO this year left me confused and disappointed. I found a massive gap between the official messaging and the tech on display. I was underwhelmed by the keynote and the media outreach, but the much more interesting work in the breakout sessions, talks and demos excited me. It seems to me that what Google wants to promote and have the media pick up is different from what its engineers showed. That’s OK, but it feels like sales stepping onto developer conference turf.

I enjoyed the messaging of the developer outreach and product owner team in the talks and demos. At times I was wondering if I was at a Google or a Mozilla event. The web and its technologies were front and centre. And there was a total lack of “our product $X leads the way” vibes.

Kudos to everyone involved. The messaging about progressive Web Apps, AMP and even the new Android Instant Apps was honest. It points to a drive in Google to return to the web for good.

Illuminated dinosaur at the after party

The vibe of the event changed a lot since moving out of Moscone Center in San Francisco. Running it on Google’s homestead in Mountain View made the whole show feel more like a music festival than a tech event. It must have been fun for the presenters to stand on the same stage they went to see bands at.

Having smaller tents for the different product and technology groups was great. It invited much more communication than booths. I saw a lot of neat demos. Having experts at hand to talk with about technologies I wanted to learn about was great.

Organisation

Feet in the sun watching a talk at the Amphitheatre

Here are the good and bad things about the organisation:

  • Good: traffic control wasn’t as much of a nightmare as I expected. I got there two hours in advance as I anticipated traffic jams, but it wasn’t bad at all. Shuttles and bike sheds helped get people there.
  • Good: there was no queue at badge pickup. Why I had to have my picture taken and a – somehow sticky – plastic badge printed was a bit beyond me, though. It seems wasteful.
  • Good: the food and beverages were plentiful and applicable. With a group this big it is hard to deliver safe to eat and enjoyable food. The sandwiches, apples and crisps did the trick. The food at the social events was comfort food/fast food, but let’s face it – you’re not at a food fair. I loved that all the packaging was paper and cardboard and there was not too much excess waste in the form of plastics. We also got a reusable water bottle you could re-fill at water dispensers like you have in offices. Given the weather, this was much needed. Coffee and tea was also available throughout the day. We were well fed and watered. I’m no Vegan, and I heard a few complaints about a lack of options, but that may have been personal experiences.
  • Good: the toilets were amazing. Clean, with running water and plenty of paper, mirrors, free sunscreen and no queues. Not what I expected from a music festival surrounding.
  • Great: as it was scorching hot on the first day the welcome pack you got with your badge had a bandana to cover your head, two sachets of sun screen, a reusable water bottle and sunglasses. As a ginger: THANK YOU, THANK YOU, THANK YOU. The helpers even gave me a full tube of sunscreen on re-entry the second day, taking pity on my red skin.
  • Bad: the one thing that was exactly the same as in Moscone was the abysmal crowd control. Except for the huge stage tent number two (called HYDRA - I am on to you, people) all others were far too small. It was not uncommon to stand for an hour in a queue for the talk you wanted to see just to be refused entry as it was full up. Queuing up in the scorching sun isn’t fun for anyone and impossible for me. Hence I missed all but two talks I wanted to see.
  • Good: if you were lucky enough to see a talk, the AV quality was great. The screens were big and readable, all the talks were live transcribed and the presenters audible.

The bad parts

Apart from the terrible crowd control, two things let me down the most: the keynote and a total lack of hardware giveaways – and the two might actually be related.

Don’t get me wrong, I found the showering of attendees with hardware excessive at the first few IOs. But announcing something like a massive move into VR with Daydream and Tango without giving developers something to test it on is assuming a lot. Nine hundred dollars plus flying to the US and spending a lot of money on accommodation is a lot for many attendees. Getting something amazing to bring back would be a nice “Hey, thanks”.

There was no announcement at the keynote about anything physical except for some vague “this will be soon available” products. This might be the reason.

My personal translation of the keynote is the following:

We are Google, we lead in machine learning, cloud technology and data insights. Here are a few products that may soon come out that play catch-up with our competition. We advocate diversity and try to make people understand that the world is bigger than the Silicon Valley. That’s why we solve issues that aren’t a problem but annoyances for the rich. All the things we’re showing here are solving issues of people who live in huge houses, have awesome cars and suffer from the terrible ordeal of having to answer text messages using their own writing skills. Wouldn’t it be better if a computer did that for you? Why go and wake up your children with a kiss using the time you won by becoming more effective with our products when you can tell Google to do that for you? Without the kiss that is – for now.

As I put it during the event:

I actually feel poor looking at the #io16 keynote. We have lots of global problems technology can help with. This is pure consumerism.

I stand by this. Hardly anything in the keynote excited me as a developer. Or even as a well-off professional who lives in a city where public transport is a given. The announcement of Instant Apps, the Firebase bits and the new features of Android Studio are exciting. But it all got lost in an avalanche of “Look what’s coming soon!” product announcements without the developer angle. We want to look under the hood. We want to add to the experience and we want to understand how things work. This is how developer events work. Google Home has some awesome features. Where are the APIs for that?

As far as I understand it, there was a glitch in the presentation. But the part where a developer in Turkey used his skills to help the Syrian refugee crisis was borderline insulting. There was no information about what the app did, who benefited from it or what it ran on. No information on how the data got in and how it reached the people who help the refugees. The same goes for using machine learning to help with the issue of blindness. Both were teasers without any meat and felt like “Well, we’re also doing good, so here you go”.

Let me make this clear: I am not criticising the work of any Google engineer, product owner or other worker here. All these things are well done and I am excited about the prospects. I find it disappointing that the keynote was a sales pitch. It did not pay respect to this work and showed only the final products rather than their workings. IO is advertised as a developer conference, not an end-user-oriented sales show. It felt disconnected.

Things that made me happy

Chris Heilmann covered in sunscreen, wearing a bandana in front of Google Loon

  • The social events were great – the concert in the amphitheatre was for those who wanted to go. Outside was a lot of space to have a chat if you’re not the dancing type. The breakout events on the second day were plentiful, all different and arty. The cynic in me sniggered at Burning Man performers (the antithesis of commercialism by design) doing their thing at a commercial IT event, but it gave the whole event a good vibe.
  • Video recording and live streaming – I watched quite a few of the talks I missed the last two days in the gym and I am grateful that Google offers these on YouTube immediately, well described and easy to find in playlists. Using the app after the event makes it easy to see the talks you missed.
  • Boots on the ground – everyone I wanted to meet from Google was there and had time to chat. My questions got honest and sensible answers and there was no hand-waving or over-promising.
  • A good focus on health and safety – first aid tents, sunscreen and wet towels for people to cool down, creature comforts for an outside environment. The organisers did a good job making sure people are safe. Huge printouts of the Code of Conduct also made no qualm about it that antisocial or aggressive behaviour was not tolerated.

Conclusion

Jatinder and me at the keynote

I will go again to Google IO, to talk, to meet, to see product demos and to have people at hand that can give me insight beyond the official documentation. I am likely not to get up early next time to see the keynote, though, and I would love to see a better handle on the crowd control. It is frustrating to queue and still not be able to see talks at the conference of a company that prides itself on organising huge datasets and building self-driving cars.
Here are a few things that could make this better:

  • Having screening tents with the video and the transcription screens outside the main tents. These don’t even need sound (which is the main outside issue)
  • Use the web site instead of two apps. Advocating progressive web apps and then telling me in the official conference mail to download the Android app was not a good move. Especially as the PWA outperformed the native app at every turn – including usability (the thing native apps are supposed to be better at). It was also not helpful that the app showed the name of the stage but not the number of the tent.
  • Having more places to charge phones would have been good, or giving out power packs. As we were outside all the time and moving I didn’t use my computer at all and did everything on the phone.

I look forward to interacting and working with the tech Google. I am confused about the Google that tries to be in the hands of end users without me being able to crack the product open and learn from how it is done.

Daniel GlazmanCSS Variables in BlueGriffon

I guess the title says it all :-) Click on the thumbnail to enlarge it.

CSS Variables in BlueGriffon

Gian-Carlo PascuttoTechnical Debt, Episode 1

One of the projects I'm working on for Mozilla is our Content Sandboxing. We've been using sandboxing for a while to protect some plugins like Flash, as well as media plugins, but now that Firefox can render webpages in a separate process, we can apply restrictions to what those "Web Content" processes can do, too. Those processes are the part of Firefox that is essentially exposed to the internet, and hence to potentially dangerous webpages.

Although we go to great lengths to make this impossible, there is always a chance that a bug in Firefox would allow an attacker to exploit and take over a Web Content process. But by using features provided by the operating system, we can prevent them from taking over the rest of the computing device by disallowing many ways to interact with it, for example by stopping them from starting new programs or reading or writing specific files.

This feature has been enabled on Firefox Nightly builds for a while, at least on Windows and Mac OS X. Due to the diversity of the ecosystem, it's taken a bit longer for Linux, but we are now ready to flip that switch too.

The initial version on Linux will block very, very little. It's our goal to get Firefox working and shipping with this first and foremost, while we iterate rapidly and hammer down the hatches as we go, shipping a gradual stream of improvements to our users.

One of the first things to hammer down is filesystem access. If an attacker is free to write to any file on the filesystem, he can quickly take over the system. Similarly, if he can read any file, it's easy to leak out confidential information to an attacking webpage. We're currently figuring out the list of files and locations the Web Content process needs to access (e.g. system font directories) and which ones it definitely shouldn't (your passwords database).

And that's where this story about technical debt really starts.

While tracing filesystem access, we noticed at some point that the Web Content process accesses /etc/passwd. Although on most modern Unix systems this file doesn't actually contain any (hashed) passwords, it still typically contains the complete real name of the users on the system, so it's definitely not something that we'd want to leave accessible to an attacker.

My first thought was that something was trying to enumerate valid users on the system, because that would've been a good reason to try to read /etc/passwd.

Tracing the system call to its origin revealed another caller, though. libfreebl, a part of NSS (Network Security Services) was reading it during its initialization. Specifically, we traced it to this array in the source. Reading on what it is used for is, eh, quite eyebrow-raising in the modern security age.

The NSS random number generator seeds itself by attempting to read /dev/urandom (good), ignoring whether that fails or not (not so good), and then continuing by reading and hashing the password file into the random number generator as additional entropy. The same code then goes on to read in several temporary directories (and I do mean directories, not the files inside them) and perform the same procedure.

Should all of this have failed, it will make a last ditch effort to fork/exec "netstat -ni" and hash the output of that. Note that the usage of fork here is especially "amusing" from the sandboxing perspective, as it's the one thing you'll absolutely never want to allow.

Now, almost none of this has ever been a *good* idea, but in its defense NSS is old and caters to many exotic and ancient configurations. The discussion about /dev/urandom reliability was raised in 2002, and I'd wager the relevant Linux code has seen a few changes since. I'm sure that 15 years ago, this might've been a defensible decision to make. Apparently one could even argue that some unnamed Oracle product running on Windows 2000 was a defensible use case to keep this code in 2009.

Nevertheless, it's technical debt. Debt that hurt on the release of Firefox 3.5, when it caused Firefox startup to take over 2 minutes on some people's systems.

It's not that people didn't notice this idea was problematic:
I'm fully tired of this particular trail of tears. There's no good reason to waste users' time at startup pretending to scrape entropy off the filesystem.
-- Brendan Eich, July 2009
RNG_SystemInfoForRNG - which tries to make entropy appear out of the air.
-- Ryan Sleevi, April 2014
Though sandboxing was clearly not considered much of a use case in 2006:
Only a subset of particularly messed-up applications suffer from the use of fork.
-- Well meaning contributor, September 2006
Nevertheless, I'm - still - looking at this code in the year of our Lord 2016 and wondering if it shouldn't all just be replaced by a single getrandom() call.
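
For comparison, here is a rough sketch of how little code a getrandom()-based seed needs. This is a hedged illustration in Rust using the libc crate on Linux – the function name seed_from_kernel is made up, and this is not the actual NSS code, which is C:

extern crate libc;

// Fill `buf` with kernel-provided entropy via the getrandom(2) syscall.
// Treat a failed or short read as an error (e.g. on kernels older than
// 3.17, which lack the syscall).
fn seed_from_kernel(buf: &mut [u8]) -> bool {
    let ret = unsafe {
        libc::getrandom(buf.as_mut_ptr() as *mut libc::c_void, buf.len(), 0)
    };
    ret == buf.len() as libc::ssize_t
}

No password files, no hashing of directory listings, no forking of netstat.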

If your system doesn't have getrandom(), well maybe there's a solution for that too.



Don't agree? Can we then at least agree that if your /dev/urandom isn't secure, it's your problem, not ours?

Air MozillaWebdev Beer and Tell: May 2016

Webdev Beer and Tell: May 2016 Once a month web developers across the Mozilla community get together (in person and virtually) to share what cool stuff we've been working on in...

Support.Mozilla.OrgEvent Report: Mozilla Ivory Coast SUMO Sprint

We’re back, SUMO Nation! This time with a great event report from Abbackar Diomande, our awesome community spirit in Ivory Coast! Grab a cup of something nice to drink and enjoy his report from the Mozilla Ivory Coast SUMO Sprint.

The Mozilla Ivory Coast community is not yet ready to forget Saturday, May 15. It was then that the first SUMO Sprint in Ivory Coast took place, lasting six hours!
For this occasion, we were welcomed and hosted by the Abobo Adjame University, the second largest university in the country.
Many students, some members of the Mozilla local community, and other members of the free software community gathered on this day.

The event began with a Mozilla manifesto presentation by Kouadio – a young member of our local SUMO team and the Lead of the Firefox Club at the university.

After that, I introduced everyone to SUMO, the areas of SUMO contribution, our Nouchi translation project, and Locamotion (the tool we use to localize).
During my presentation I learned that all the guests were really surprised and happy to learn of the existence of support.mozilla.org and a translation project for Nouchi.
They were very happy and excited to participate in this sprint, and you can see that in the photos – in their smiles and the joy that you can read on their faces.

After all presentations and introductions, the really serious things could begin. Everyone spent two hours answering questions of French users on Twitter – the session passed very quickly in the friendly atmosphere.

We couldn’t reach the goal of answering all the Army of Awesome posts in French, but everyone appreciated what we achieved, providing answers to over half the posts – we were (and still are) very proud of our work!

After the Army of Awesome session, our SUMO warriors have turned to Locamotion for Nouchi localization. It was at once serious and fun. Originally planned for three hours, we localized for four – because it was so interesting :-)

Mozilla and I received congratulations from all participants for this initiative, which promotes the Ivorian language and Ivory Coast as a digital country present on the internet.

Even though we were not able to reach all our objectives, we are still very proud of what we have done. We contributed very intensely, both to help people who needed it and to improve the scale and quality of Nouchi translations in open source, with the help of new and dynamic contributors.

The sprint ended with a group tasting of garba (a traditional local dish) and a beautiful family picture.

Thank you, Abbackar! It’s always great to see happy people contributing their skills and time to open source initiatives like this. SUMO is proud to be included in Ivory Coast’s open source movement! We hope to see more awesomeness coming from the local community in the future – in the meantime, I think it’s time to cook some garba! ;-)

Patrick ClokeGoogle Summer of Code 2016 projects

I’d like to introduce the 13 students that are being mentored by Mozilla this year as part of Google Summer of Code 2016! Currently the “community bonding” period is ongoing, but we are on the cusp of the “coding period” starting.

As part of Google Summer of Code (GSoC), we ask students to provide weekly updates of their progress in a public area (usually a blog). If you’re interested in a particular project, please follow along! Lastly, remember that GSoC is a community effort: if a student is working in an area where you consider yourself knowledgable, please introduce yourself and offer to provide help and/or advice!

Below is a listing of each student’s project (linked to their weekly updates), the name of each student and the name of their mentor(s).

  • Download app assets at runtime (Firefox for Android) – Krish (mentor: skaspari)
  • File API Support (Servo) – izgzhen (mentor: Manishearth)
  • Implement RFC7512 PKCS#11 URI support and system integration (NSS) – varunnaganathan (mentors: Bob Relyea, David Woodhouse)
  • Implementing Service Worker Infrastructure in Servo Browser Engine – creativcoder (mentor: jdm)
  • Improving and expanding the JavaScript XMPP Implementation – Abdelrhman Ahmed (mentors: aleth, nhnt11)
  • Mozilla Calendar – Event in a Tab – paulmorris (mentor: Philipp Kewisch)
  • Mozilla Investigator (MIG): Auditd integration – Arun (mentor: kang)
  • Prevent Failures due to Update Races (Balrog) – varunjoshi (mentor: Ben Hearsum)
  • Proposal of Redesign SETA – MikeLing (mentor: Joel Maher)
  • Schedule TaskCluster Jobs in Treeherder – martianwars (mentor: armenzg)
  • Thunderbird - Implement mbox -> maildir converter – Shiva (mentor: mkmelin)
  • Two Projects to Make A-Frame More Useful, Accessible, and Exciting – bryik (mentor: Diego Marcos)
  • Web-based GDB Frontend – baygeldin (mentor: jonasfj)

Doug BelshawWhat does it mean to be a digitally literate school leader?

As part of the work I’m doing with London CLC, their Director, Sarah Horrocks, asked me to write something on what it means to be a digitally literate school leader. I’d like to thank her for agreeing to me writing this for public consumption.

Image CC BY K.W. Barrett

Before I start, I think it’s important to say why I might be in a good position to be able to answer this question. First off, I’m a former teacher and senior leader. I used to be Director of E-Learning of a large (3,000 student), all-age, multi-site Academy. I worked for Jisc on their digital literacies programme, writing my thesis on the same topic. I’ve written a book entitled The Essential Elements of Digital Literacies. I also worked for the Mozilla Foundation on their Web Literacy Map, taking it from preliminary work through to version 1.5. I now consult with clients around identifying, developing, and credentialing digital skills.

That being said, it’s now been a little over six years since I last worked in a school, and literacy practices change quickly. So I’d appreciate comments and pushback on what follows.


Let me begin by saying that, as Allan Martin (2006) pointed out, “Digital literacy is a condition, not a threshold.” That’s why, as I pointed out in my 2012 TEDx talk, we shouldn’t talk about ‘digital literacy’ as a binary. People are not either digitally literate or digitally illiterate - instead literacy practices in a given domain exist on a spectrum.

In the context of a school and other educational institutions, we should be aware that there are several cultures at play. As a result, there are multiple, overlapping literacy practices. For this reason we should talk of digital literacies in their plurality. As I found in the years spent researching my thesis, there is no one, single definition of digital literacy that adequately captures the complexity of human experience when using digital devices.

In addition, I think that it’s important to note that digital literacies are highly context dependent. This is perhaps most evident when addressing the dangerous myth of the 'digital native’. We see young people confidently using smartphones, tablets, and other devices and therefore we assume that their skillsets in one domain are matched by the requisite mindsets from another.

So to recap so far, I think it’s important to note that digital literacies are plural and context-dependent. Although it’s tempting to attempt to do so, it’s impossible to impose a one-size-fits-all digital literacy programme on students, teachers, or leaders and meet with success. Instead, and this is the third 'pillar' on which my approach rests, I’d suggest that definitions of digital literacies need to be co-created.

By 'co-created’ I mean that there are so many ways in which one can understand both the 'digital’ and 'literacies’ aspects of the term 'digital literacies’ that it can be unproductively ambiguous. Instead, a dialogic approach to teasing out what this means in your particular context is much more useful. In my thesis and book I came up with eight elements of digital literacies from the research literature which prove useful to scaffold these conversations:

  1. Cultural
  2. Cognitive
  3. Constructive
  4. Communicative
  5. Confident
  6. Creative
  7. Critical
  8. Civic

In order not to make this post any longer than it needs to be, I’ll encourage you to look at my book and thesis for more details on this. Suffice to say, it’s important both to collaboratively define the above eight terms and then to define what you mean by 'digital literacies’ in a particular context.

All of this means that the job of the school leader is not to reach a predetermined threshold laid down by a governing body or professional body. Instead, the role of the school leader is to be always learning, questioning their practice, and encouraging colleagues and students in all eight of the 'essential elements’ listed above.

As with any area of interest and focus, school leaders should model the kinds of knowledge, skills, and behaviours they want to see develop in those around them. Just as we help people learn that being punctual is important by always turning up on time ourselves, so the importance of developing digital literacies can be demonstrated by sharing learning experiences and revelations.

There is much more on this in my thesis, book, and presentations but I’ll finish with some recommendations as to what school leaders can do to ensure they are constantly improving their practices around digital literacies:

  • Seek out new people: it’s easy for us to become trapped in what are known as filter bubbles, either through the choices we make as a result of confirmation bias, or through algorithmically curated newsfeeds. Why not find people and organisations you wouldn’t usually follow, and add them to your daily reading habits?
  • Share what you learn: why not create a regular way to update those in your school community about issues relating to the considered use of technology? This could be a discussion forum, a newsletter pointing to the work of people like the Electronic Frontier Foundation or Common Sense Media, or 'clubs’ that help staff and students get to grips with new technologies.
  • Find other ways: the danger of ‘best practices’ or established workflows is that they can make you blind to new, better ways of doing things. As Clay Shirky notes in this interview, it can be liberating to jettison existing working practices in favour of new ones. What other ways can you find to write documents, collaborate with others, be creative, and/or keep people informed?

Comments? Questions? I’m @dajbelshaw or you can get in touch with me at: hello@dynamicskillset.com. I consult around identifying, developing, and credentialing digital skills.

Air MozillaBay Area Accessibility and Inclusive Design meetup: Fifth Annual Global Accessibility Awareness Day

Digital accessibility meetup with speakers for Global Accessibility Awareness Day. #a11ybay. Welcome at 6pm, with a 6:30pm start time.

Support.Mozilla.OrgWhat’s Up with SUMO – 19th May

Hello, SUMO Nation!

Glad to see all of you on this side of spring… How are you doing? Have you missed us as much as we missed you? Here we go again with another small collection of updates for your reading pleasure :-)

Welcome, new contributors!

If you just joined us, don’t hesitate – come over and say “hi” in the forums!

Contributors of the week

Don’t forget that if you are new to SUMO and someone helped you get started in a nice way you can nominate them for the Buddy of the Month!

Most recent SUMO Community meeting

The next SUMO Community meeting

  • …is happening on WEDNESDAY the 25th of May – join us!
  • Reminder: if you want to add a discussion topic to the upcoming meeting agenda:
    • Start a thread in the Community Forums, so that everyone in the community can see what will be discussed and voice their opinion here before Wednesday (this will make it easier to have an efficient meeting).
    • Please do so as soon as you can before the meeting, so that people have time to read, think, and reply (and also add it to the agenda).
    • If you can, please attend the meeting in person (or via IRC), so we can follow up on your discussion topic during the meeting with your feedback.

Community

Social

Support Forum

Knowledge Base & L10n

  • The Polish team have reached their monthly milestone – congratulations!
  • Final reminder: if you want to participate in the ongoing discussion about source material quality and frequency, take a look at this thread. We are going to propose a potential way of addressing your issues once we collate enough feedback.
  • Reminder: L10n hackathons everywhere! Find your people and get organized!

Firefox

  • for Android
    • Version 46 support discussion thread.
    • Reminder: version 47 will stop supporting Gingerbread. High time to update your Android installations!
      • Other than that, it should be a minor release. Documentation in progress!

And that’s it! We hope you are looking forward to the end of this week and the beginning of the next one… We surely are! Don’t forget to follow us on Twitter!

Yunier José Sosa VázquezMozilla Introduces Alex Salkever as Vice President of Marketing

Today, Mozilla announced the most recent addition to its leadership team: Alex Salkever, who will serve as the new Vice President of Marketing.

In the article published on the Mozilla blog, Jascha Kaykas-Wolff (Chief Marketing Officer) explains that, in his new role, Alex will lead strategic positioning and marketing campaigns. He will also oversee global communications, social media, user support, and the content marketing teams, working across the organization to develop impactful external communications for Mozilla and Firefox products.

Alex Salkever, photo taken from blog.mozilla.org

Previously, Alex was Chief Marketing Officer at Silk.co, where he focused on user growth and platform partnerships. He has also held a variety of product marketing roles in fields ranging from scientific instruments and cloud computing to telecommunications and the Internet of Things. In those roles, Alex managed campaigns across every aspect of marketing and product commercialization, including public relations, content marketing, user acquisition, developer recruitment, and marketing analytics.

Alex also brings to Mozilla his experience as a former technology editor at BusinessWeek.com. Among his many accomplishments, he is co-author of “The Immigrant Exodus”, a book named to The Economist’s Book of the Year list in the Business Books category for 2012.

Welcome to Mozilla, Alex!

Air MozillaWeb QA Team Meeting, 19 May 2016

Weekly Web QA team meeting - please feel free and encouraged to join us for status updates, interesting testing challenges, cool technologies, and perhaps a...

Air MozillaReps weekly, 19 May 2016

This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and to invite Reps to share their work with everyone.

About:CommunityJakarta Community Space Launch

This post was written by Fauzan Alfi.

May 13th, 2016 was no ordinary Friday the 13th for Mozilla Indonesia – it was a very big day for us. After months of planning and preparation, the Mozilla Community Space Jakarta finally launched and opened for the community. It’s the 4th volunteer-run physical community space, after Bangalore (now closed), Manila and Taipei, with another one opening soon in Berlin. Strategically located in Cikini – Central Jakarta, the Space will become a place for Mozillians from Greater Jakarta and Bandung to do many activities, especially developer-focused events, and to build relationships with other tech communities in the city.

The Space. Photo by Yofie Setiawan

Many open source and other communities from around the city were invited to the event, and Mozilla Reps, FSAs and Mozillians also joined to celebrate the Space opening. In his presentation, Yofie Setiawan (Mozilla Rep, Jakarta Space Manager) said he hopes the Jakarta Community Space can be useful to many people and communities, especially by educating anyone who comes and joins the events that take place in the space.

Opening Event

Dian Ina and Rara talk to guests. Photo by Yofie Setiawan

Ceremonial first piece

Brian gets the ceremonial first bite. Photo by Yofie Setiawan

Also joining the event was Brian King from the Participation Team at Mozilla. During his remarks, Brian said that the reason behind the Jakarta Community Space is that “the Mozilla community here is one of the most active globally, with deep roots and a strong network in tech scene”. He also added that “Indonesia is an important country with a very dynamic Web presence, and we’d like to engage with more people to make the online experience better for everyone.”

The Jakarta Community Space is around 40 square meters in area and fits 20-30 people. At the front, a glass wall is covered by a frosted sticker printed with the names of various Mozilla projects. Inside, there are chairs, tables, a home theater set, food and drink supplies, and a coffee machine. Most of the items were donated by Mozillians in Jakarta.

The tour. Photo by Yofie Setiawan

One area where the Jakarta community excelled was planning and design: the whole process was carried out by the community itself. One of the Reps from Indonesia, Fauzan Alfi – who has a background in architecture – helped design the space and kept the process transparent on the Community Design GitHub. The purpose is to ignite collaborative design, not only from the Indonesian community but also from other parts of the globe. The community also showed its creativity by painting murals of landmarks from selected cities around the world – including Jakarta’s Monas.

The Jakarta Community Space means a lot to the Mozilla community in Greater Jakarta and in Indonesia in general. Having a physical place means the Indonesian community has a home of its own from which to spread the mission and collaborate with more communities aligned with Mozilla, especially developer communities. Hopefully, the Space will bring more and more people to contribute to Mozilla and help shape the future of the Web.

Pascal ChevrelLet's give Firefox Nightly some love!

After a decade working on making Mozilla Web properties available in dozens of languages, creating communities of localizers around the globe, and building Quality Assurance tools, dashboards and APIs to help ship our software and websites internationally, I recently left the Localization department to report to Doug Turner and work on a new project directly benefiting the Platform and Firefox teams!

I am now in charge of a project aiming to turn Nightly into a maintained channel (just as we have the Aurora, Beta and Release channels) whose goal is to engage our very technical Nightly users in the Mozilla project, through activities that have a measurable impact on the quality of our products.

Here are a few key goals I would like us to achieve in 2016-2017:

  • Double the number of Nightly users so as to detect regressions, crashes and Web compatibility issues much earlier. A regression detected and reported a couple of days after the code landed on mozilla-central is a simple backout; the same regression reported weeks or even months later on the Aurora or Beta channels, or discovered on the Release channel, can be much more work to get fixed.

  • Make Firefox Nightly a real entry point for the more technical users who want to get involved in Mozilla and help us ship software (QA, code, Web compatibility, security…) – not only for Firefox, but also for all the technical Mozilla projects that would benefit from wider participation.

  • Make Firefox Nightly a better experience for these technical contributors. As a first step, this means using the built-in communication channels (about:home promotional snippets, default tiles, first run / What's New pages…) to share information adapted to technical users and to propose resources, activities and ways of participating in Mozilla that are technical by nature. I also want a specific focus on three countries – Germany, France and Spain – where we have strong local communities, staff and MozSpaces and can engage people more easily IRL.

I will not be working on this alone: Sylvestre Ledru, our Release Management Lead, has created a new team (with Marcia Knous in the US and Calixte Denizet in France) to work on improving the quality of the Nightly channel and to analyse crashes and regressions. Members of other departments (Participation, MDN, Security, Developer Relations…) have also shown interest in the project and intend to get involved.

But first and foremost, I intend to get the Mozilla community involved, and hopefully also bring in people not yet involved in Mozilla, to help us make this "Nightly Reboot" project a success!

A few pointers for this project:

  • There is an existing #nightly IRC channel that we are restoring with Marcia and a few contributors. I am pascalc on IRC and I am in the CET timezone; don't hesitate to ping me there if you want to offer your help, learn more about the project, or propose your own ideas.

  • Marcia created a "Nightly Testers" Telegram channel; ping me if you are already using Nightly to report bugs and want to be added.

  • For asynchronous communication, there is a Nightly Testers mailing list.

  • If you want to download Nightly, go to nightly.mozilla.org. Unfortunately the site only offers en-US builds, and this is definitely something I want to get fixed! If you are a French speaker, our community maintains its own download site for Nightly with links to French builds at nightly.mozfr.org; other localized builds can be found on our FTP.

  • If you want to know about all the new stuff that lands in our Nightly channel, follow our @FirefoxNightly Twitter account.

  • If you are a Nightly user and report a bug on https://bugzilla.mozilla.org, please put the tag [nightly-community] in the whiteboard field of your bug report; this allows us to measure the impact of our active Nightly community on Bugzilla (see the sketch below).
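To make that measurement concrete, here is a minimal sketch – my illustration, not an official Mozilla tool – of how one could count recently filed bugs carrying the [nightly-community] whiteboard tag via Bugzilla's public REST API. It assumes the standard /rest/bug search endpoint with its whiteboard and creation_time filters; treat the exact parameter names and the limit=0 "return everything" behaviour as assumptions to check against bugzilla.mozilla.org.

    # Minimal sketch: count recent bugs tagged [nightly-community] in the whiteboard field.
    # Assumes the standard Bugzilla REST search endpoint; verify parameter names and
    # server-side result limits against bugzilla.mozilla.org before relying on the numbers.
    from datetime import datetime, timedelta

    import requests

    BUGZILLA_SEARCH = "https://bugzilla.mozilla.org/rest/bug"

    def count_nightly_community_bugs(days=30):
        """Return how many bugs filed in the last `days` days carry the tag."""
        since = (datetime.utcnow() - timedelta(days=days)).strftime("%Y-%m-%d")
        params = {
            "whiteboard": "[nightly-community]",  # substring match on the whiteboard field
            "creation_time": since,               # bugs created on or after this date
            "include_fields": "id",               # ids are enough, we only need a count
            "limit": 0,                           # assumption: 0 means "no limit" on this server
        }
        response = requests.get(BUGZILLA_SEARCH, params=params, timeout=30)
        response.raise_for_status()
        return len(response.json().get("bugs", []))

    if __name__ == "__main__":
        print("[nightly-community] bugs filed in the last 30 days:",
              count_nightly_community_bugs())

Run as-is it just prints a single number, but the same query could be repeated over time to follow how the community's reporting activity evolves.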

Interested? Do get involved, and don't hesitate to contact me if you have any suggestion or idea that could fit into this project. Several people I spoke with in recent weeks gave me very interesting feedback and concrete ideas, which I have carefully noted!

You can contact me (in English, French or Spanish) through the following communication channels:

  • Email: pascal AT mozilla DOT com
  • IRC on Moznet and Freenode: pascalc
  • Twitter: @pascalchevrel

Update 15:33: see also this blog post by Mozilla engineer Nicholas Nethercote, “I want more users on the Nightly channel”.