Ben Hearsum: Stop stripping (OS X builds), it leaves you vulnerable

While investigating some strange update requests on our new update server, I discovered that we have thousands of update requests from Beta users on OS X that aren’t getting an update, but should. After some digging I realized that most, if not all, of these are coming from users who have installed one of our official Beta builds and subsequently stripped out the architecture they do not need from it. In turn, this causes their builds to report in such a way that we don’t know how to serve updates for them.
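
"Stripping" here usually means thinning the universal (multi-architecture) Mac build down to a single architecture. As a purely illustrative sketch of what such users appear to be running against their own install (not something we recommend, and not a supported workflow):

# Hypothetical example of thinning a universal binary to one architecture.
# Doing this to Firefox.app breaks update matching; please don't.
lipo Firefox.app/Contents/MacOS/firefox -thin x86_64 -output firefox-x86_64
mv firefox-x86_64 Firefox.app/Contents/MacOS/firefox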

We’ll look at ways of addressing this, but the bottom line is that if you want to be secure: Stop stripping Firefox binaries!

Lucas Rocha: New Features in Picasso

I’ve always been a big fan of Picasso, the Android image loading library by the Square folks. It provides some powerful features with a rather simple API.

Recently, I started working on a set of new features for Picasso that will make it even more awesome: request handlers, request management, and request priorities. These features have all been merged to the main repo now. Let me give you a quick overview of what they enable you to do.

Request Handlers

Picasso supports a wide variety of image sources, from simple resources to content providers, network, and more. Sometimes though, you need to load images in unconventional ways that are not supported by default in Picasso.

Wouldn’t it be nice if you could easily integrate your custom image loading logic with Picasso? That’s what the new request handlers are about. All you need to do is subclass RequestHandler and implement a couple of methods. For example:

public class PonyRequestHandler extends RequestHandler {
    private static final String PONY_SCHEME = "pony";

    @Override public boolean canHandleRequest(Request data) {
        return PONY_SCHEME.equals(data.uri.getScheme());
    }

    @Override public Result load(Request data) {
         return new Result(somePonyBitmap, MEMORY);
    }
}

Then you register your request handler when instantiating Picasso:

Picasso picasso = new Picasso.Builder(context)
    .addRequestHandler(new PonyRequestHandler())
    .build();

Voilà! Now Picasso can handle pony URIs:

Picasso.with(context)
       .load("pony://somePonyName")
       .into(someImageView);

This pull request also involved rewriting all built-in bitmap loaders on top of the new API. This means you can also override the built-in request handlers if you need to.

Request Management

Even though Picasso handles view recycling, it does so in an inefficient way. For instance, if you do a fling gesture on a ListView, Picasso will still keep triggering and canceling requests blindly because there was no way to make it pause/resume requests according to the user interaction. Not anymore!

The new request management APIs allow you to tag associated requests that should be managed together. You can then pause, resume, or cancel requests associated with specific tags. The first thing you have to do is tag your requests as follows:

Picasso.with(context)
       .load("http://example.com/image.jpg")
       .tag(someTag)
       .into(someImageView);

Then you can pause and resume requests with this tag based on, say, the scroll state of a ListView. For example, Picasso’s sample app now has the following scroll listener:

public class SampleScrollListener implements AbsListView.OnScrollListener {
    ...
    @Override
    public void onScrollStateChanged(AbsListView view, int scrollState) {
        Picasso picasso = Picasso.with(context);
        if (scrollState == SCROLL_STATE_IDLE ||
            scrollState == SCROLL_STATE_TOUCH_SCROLL) {
            picasso.resumeTag(someTag);
        } else {
            picasso.pauseTag(someTag);
        }
    }
    ...
}

These APIs give you much finer control over your image requests. The scroll listener case is just the canonical use case.
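
If the user leaves a screen entirely, you can also drop its requests instead of just pausing them. A minimal sketch, assuming the cancelTag() call that ships alongside pauseTag() and resumeTag():

@Override
protected void onDestroy() {
    // Sketch only: cancel any requests still pending for this screen.
    Picasso.with(this).cancelTag(someTag);
    super.onDestroy();
}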

Request Priorities

It’s very common for images in your Android UI to have different priorities. For instance, you may want to give higher priority to the big hero image in your activity relative to other secondary images on the same screen.

Up until now, there was no way to give Picasso a hint about the relative priorities between images. The new priority API allows you to tell Picasso about the intended order of your image requests. You can just do:

Picasso.with(context)
       .load("http://example.com/image.jpg")
       .priority(HIGH)
       .into(someImageView);

These priorities don’t guarantee a specific order; they just tilt the balance towards higher-priority requests.


That’s all for now. Big thanks to Jake Wharton and Dimitris Koutsogiorgas for the prompt code and API reviews!

You can try these new APIs now by fetching the latest Picasso code on GitHub. These features will probably be available in the 2.4 release. Enjoy!

Michael Kaply: Fileblock

One of the things that I get asked the most is how to prevent a user from accessing the local file system from within Firefox. This generally means preventing file:// URLs from working, as well as removing the most common methods of opening files from the Firefox UI (the open file button, menuitem and shortcut). Because I consider this outside of the scope of the CCK2, I wrote an extension to do this and gave it out to anyone who asked. Unfortunately, over time it developed a serious case of feature creep.

Going forward, I've decided to go back to basics and just produce a simple local file blocking extension. The only features that it supports are whitelisting by directory and whitelisting by file extension. I've made that available here. There is a README that gives full information on how to use it.

For the other functionality that used to be a part of FileBlock, I'm going to produce a specific extension for each feature. They will probably be AboutBlock (for blocking specific about pages), ChromeBlock (for preventing the loading of chrome files directly into the browser) and SiteBlock (for doing simple whitelisting).

Hopefully this should cover the most common cases. Let me know if you think there is a case I missed.

Gervase Markham: Licensing Policy Change: Tests are Now Public Domain

I’ve updated the Mozilla Foundation License Policy to state that:

PD Test Code is Test Code which is Mozilla Code, which does not carry an explicit license header, and which was either committed to the Mozilla repository on or after 10th September 2014, or was committed before that date but all contributors up to that date were Mozilla employees, contractors or interns. PD Test Code is made available under the Creative Commons Public Domain Dedication. Test Code which has not been demonstrated to be PD Test Code should be considered to be under the MPL 2.

So in other words, new tests are now CC0 (public domain) by default, and some or many old tests can be relicensed as well. (We don’t intend to do explicit relicensing of them ourselves, but people have the ability to do so in their copies if they do the necessary research.) This should help us share our tests with external standards bodies.

This was bug 788511.

Gervase Markham: Survey on FLOSS Contribution Policies

In the “dull but important” category: my friend Allison Randal is doing a survey on people’s attitudes to contribution policies (committer’s agreements, copyright assignment, DCO etc.) in free/libre/open source software projects. I’m rather interested in what she comes up with. So if you have a few minutes (it should take less than 5 – I just did it) to fill in her survey about what you think about such things, she and I would be most grateful:

http://survey.lohutok.net is the link. You want the “FLOSS Developer Contribution Policy Survey” – I’ve done the other one on Mozilla’s behalf.

Incidentally, this survey is notable as I believe it’s the first online multiple-choice survey I’ve ever taken where I didn’t think “my answer doesn’t fit into your narrow categories” about at least one of the questions. So it’s definitely well-designed.

Erik Vold: Jetpack Pro Tip - Reusing files for content and chrome

I’ve seen this issue come up a lot: an add-on developer wants to reuse a library file, like Underscore, in both their add-on code and their content scripts.

Typically the SDK documentation will say to put all of your content scripts in your add-on’s data/ folder, and that is the best thing to do if the script is only going to be used as a content script. But if you want to use the file in your add-on scope too, then the file should not be in the data/ folder; it should be in your lib/ folder instead.

Once this is done, the add-on scope can easily require it, so all that is left is to figure out a URI for the file in your lib/ folder which can be used for content scripts. To do this there are two options, one of which only works with JPM.

JPM

The JPM solution is very simple (thanks to Jordan Santell for implementing this):

let underscoreURI = require.resolve("./lib/underscore");

if the file is in lib/underscore, but it should only be there if you copied and pasted it there, which pros don’t do. Pros use NPM, because they know underscore is there, so they just make it a dependency by adding this to package.json:

{
  // ..
  "dependencies": {
    "underscore": "1.6.0"
  }
  //..
}

Then, simply use:

let underscoreURI = require.resolve("underscore");

CFX

With CFX you will have to copy and paste the file into your lib/ folder, then you can get a URL for the file by doing this:

let underscoreURI = module.uri.replace("main.js", "underscore.js");

Assuming that the code above is evaluated in lib/main.js.

An issue with the above code is that you have to know the name of the file in which it is evaluated, so another approach could be:

let underscoreURI = module.uri.replace(/\/[^\/]*$/, "/underscore.js");
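
Either way, once you have the URI you can hand it to a content script loader. Here is a minimal sketch using the SDK's page-mod module (the include pattern is just an example):

// Sketch: reuse the resource:// URL computed above as a content script.
let { PageMod } = require("sdk/page-mod");

PageMod({
  include: "*.example.com", // made-up match pattern
  contentScriptFile: underscoreURI
});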

Julien Vehent: Batch inserts in Go using channels, select and Japanese gardening

I was recently looking into the DB layer of MIG, searching for ways to reduce the processing time of an action running on several thousand agents. One area of the code that was blatantly inefficient concerned database insertions.

When MIG's scheduler receives a new action, it pulls a list of eligible agents and creates one command per agent. One action running on 2,000 agents will create 2,000 commands in the "commands" table of the database. The scheduler needs to generate each command, insert it into Postgres and send it to its respective agent.

MIG uses separate goroutines for the processing of incoming actions and for the storage and sending of commands. The challenge was to avoid inserting each command into the database individually and instead group all inserts together into one big operation.

Go provides a very elegant way to solve this problem.

At a high level, MIG Scheduler works like this:

  1. a new action file in JSON format is loaded from the local spool and passed into the NewAction channel
  2. a goroutine picks up the action from the NewAction channel, validates it, finds a list of target agents and creates a command for each agent, which is passed to the CommandReady channel
  3. a goroutine listens on the CommandReady channel, picks up incoming commands, inserts them into the database and sends them to the agents (plus a few extra things)

The CommandReady goroutine is where the optimization happens. Instead of processing each command as it comes in, the goroutine uses a select statement to either pick up a command or time out after one second of inactivity.

// Goroutine that loads and sends commands dropped in ready state
// it uses a select and a timeout to load a batch of commands instead of
// sending them one by one
go func() {
    ctx.OpID = mig.GenID()
    readyCmd := make(map[float64]mig.Command)
    ctr := 0
    for {
        select {
        case cmd := <-ctx.Channels.CommandReady:
            ctr++
            readyCmd[cmd.ID] = cmd
        case <-time.After(1 * time.Second):
            if ctr > 0 {
                var cmds []mig.Command
                for id, cmd := range readyCmd {
                    cmds = append(cmds, cmd)
                    delete(readyCmd, id)
                }
                err := sendCommands(cmds, ctx)
                if err != nil {
                    ctx.Channels.Log <- mig.Log{OpID: ctx.OpID, Desc: fmt.Sprintf("%v", err)}.Err()
                }
            }
            // reinit
            ctx.OpID = mig.GenID()
            ctr = 0
        }
    }
}()

As long as messages are incoming, the select statement will pick the case where a message is received, and store the command in the readyCmd map.

When messages stop coming for one second, the select statement will fall into its second case: time.After(1 * time.Second).

In the second case, the readyCmd map is emptied and all commands are sent as one operation. Later in the code, a big INSERT statement that includes all commands is executed against the Postgres database.
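
For illustration, building such a multi-row INSERT with database/sql could look roughly like the sketch below. The column names and the Status field are assumptions for the example, not MIG's actual schema or code:

// Sketch: batch-insert a slice of commands in a single statement.
// Assumes imports "database/sql", "fmt" and "strings"; columns are illustrative.
func insertCommands(db *sql.DB, cmds []mig.Command) error {
    if len(cmds) == 0 {
        return nil
    }
    vals := make([]string, 0, len(cmds))
    args := make([]interface{}, 0, len(cmds)*2)
    for i, cmd := range cmds {
        // one ($n, $n+1) placeholder group per command
        vals = append(vals, fmt.Sprintf("($%d, $%d)", i*2+1, i*2+2))
        args = append(args, cmd.ID, cmd.Status)
    }
    query := "INSERT INTO commands (id, status) VALUES " + strings.Join(vals, ", ")
    _, err := db.Exec(query, args...)
    return err
}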

In essence, this algorithm is very similar to a Japanese Shishi-odoshi.

(Animated GIF of a shishi-odoshi.)

The current logic is not yet optimal. It does not set a maximum batch size, mostly because it does not currently need to. In my production environment, the scheduler manages about 1,500 agents, and that's not enough to worry about limiting the batch size.

Eric Shepherd: The Sheppy Report: September 19, 2014

I’ve been working on getting a first usable version of my new server-side sample server project (which remains unnamed as yet — let me know if you have ideas) up and running. The goals of this project are to allow MDN to host samples that require a server side component (for example, examples of how to do XMLHttpRequest or WebSockets), and to provide a place to host samples that require the ability to do things that we don’t allow in an <iframe> on MDN itself. This work is going really well and I think I’ll have something to show off in the next few days.

What I did this week

  • Caught up on bugmail and other messages that came in while I was out after my hospital stay.
  • Played with JSMESS a bit to see extreme uses of some of our new technologies in action.
  • Did some copy-editing work.
  • Wrote up a document for my own reference about the manifest format and architecture for the sample server.
  • Got most of the code for processing the manifests for the sample server modules and running their startup scripts written. It doesn’t quite work yet, but I’m close.
  • Filed a bug about implementing a drawing tool within MDN for creating diagrams and the like in-site. Pointed to draw.io as an example of a possible way to do it. Also created a developer project page for this proposal.
  • Exchanged email with Piotr about the editor revamp. It’s making good progress.

Wrap up

I’m really, really excited about the sample server work. With this up and running (hopefully soon), we’ll be able to create examples for technologies we were never able to properly demonstrate in the past. It’s been a long time coming. It’s also been a fun, fun project!

 

Curtis Koenig: The Curtisk report: 2014-09-21

People wanna know what I do, so I am going to give this a shot: each Monday I will make a post about the stuff I did in the previous week.

Idea shamelessly stolen from Eric Shepherd

What I did this week

  • MWoS: SeaSponge Project Proposal (Review)
  • Crusty Bugs data digging
  • Mozillians.org security review (move along)
  • Firefox OS Sec discussion
  • sec triage process massaging
  • Firefox OS Security coordination
  • Vendor site review
    • testing plan for vendor site testing
    • testing coordination with team and vendor
  • CBT Training survey
  • security scan of [redacted]

Meetings attended this week

Mon

  • Weekly Project Meeting
  • Web Bounty Triage

Tue

  • SecAutomation
  • Cloud Services Security Team

Wed

  • MWoS team Project meeting
  • Vendor testing call
  • Web Bug Triage

Thu

  • Security Open Mic
  • Grow Mozilla / Community Building
  • Computer Science Teachers Association (guest speaker)

Christian Heilmann: Notes on my closing keynote of From the Front 2014

These are some notes about my closing keynote at From the Front in Bologna, Italy last week. The overall theme of the event was “Temple of the DOM”, thus I kept it Indiana Jones themed (one could say shoehorned, but I wasn’t alone in this).

(Photo: From the Front 2014 speakers.)

The slides are available on Slideshare.

In Indiana Jones and the Temple of Doom, the Sankara Stones are very powerful stones that can bring prosperity or destroy people, depending on how they are used. When you bring the stones together they light up, and in general all is very mystical and amazing. It gives the movie an adventure angle that cannot be explained and allows us to suspend our disbelief and see Indy as being capable of more than a normal human being.

A tangent: Blowing people’s mind is pretty easy. All you need to do is take a known concept and then make an assumption from it. For example, when you see Luigi from Super Mario Brothers and immediately recognise him, there is quite a large chance you have an older sibling. You were always the one who had to wait till your sibling failed in the game so it was your turn to play with “green Mario”. Also, if Luigi and Mario are the Mario brothers then Mario’s name is Mario Mario. Ponder this.

The holy trinity of web development

On the web we also have magical stones that we can put together and create good or evil. These are the standardised technologies of the web: HTML, CSS and JavaScript. These are available in every browser, understood by them without any extra compilation step, very well documented and easy to learn (but harder to master).

Back in 1999, Jeffrey Zeldman taught us all not to write tag-soup any longer and use the technologies of the web to build intelligent solutions that use them to their strengths. These are commonly referred to as the separation of concerns:

  • Structure (HTML and added extra-value semantics like Microformats)
  • Presentation (CSS, Images)
  • Behaviour (JavaScript)

Back then this was a very necessary stake in the ground, explaining that web development is not a random WYSIWYG result but something with a lot of planning and organisation behind it. The separation of concerns played all the different technologies to their strengths and also meant that one or two could fail and nothing would go wrong.

This also paved the way for the progressive enhancement idea that all you really need is a proper HTML document and the rest gets added when and if needed or – in the case of JavaScript – once it has been tested to be available and applicable.

The problems started when people with different agendas skewed the concept of the separation of concerns:

  • HTML and semantic markup enthusiasts advocated far too loudly for very clean markup, validation and adding things like Microformats. For engineers just trying to get something to show up in a browser this has always been a source of confusion, as the tangible benefits of this are, well, not tangible. Browsers are very forgiving and will fix HTML for you, and when there is no interface in browsers that surfaces the data in Microformats, why do it? Of course, I disagree and have stated very often that semantic, clean markup is the good grammar of the web – you don’t need it, but it does make you much easier to understand and shows that you learned what you are doing. But that doesn’t really matter. Fact is that we continuously try to make people understand something we hold dear without giving them tangible benefits.
  • JavaScript enthusiasts, on the other hand, create far too much with JavaScript. This is a matter of control. You know JavaScript, you are happy seeing parts of an app or a page as objects and you want to instantiate them, inherit from them and re-use them. You don’t want to write much code but feel that generating it is the most clever way of using technology. Many JS enthusiasts also keep citing that browser differences are a real issue and that in JS they have the chance to test and fix problems browsers have. The fallacy here, of course, is that by doing that they also made the current and future browser issues their own.
  • CSS enthusiasts started to shoot against JavaScript as a tool when CSS became more powerful. Are animations and transitions behaviour or presentation? Should it be done in CSS or in JS where there is much more granular control? What about generated content? Where does this fall into? We can create whole drawings from one DIV element, but should we?

All of this, together with lots and lots of libraries promising to solve all kinds of cross-browser issues, led to the massively obese web we see today. An average web site size of almost 2MB would have blown our minds in the past, but these days it seems the right thing to do if you want to be professional and use the tools professionals use. Starting with a vanilla HTML file feels like a hack – using a build script to start a boilerplate seems to be the intelligent, full-stack development thing to do.

Best practice reminders, repeated

This is nothing new, of course.

Back in 2004, I wrote a self training course on Unobtrusive JavaScript trying to make people understand the need for separation of behaviour and look and feel. In 2005 I questioned the validity of three layers of separation as I worked on CMS and framework driven web products where I did not have full control over the markup but had to deal with the result of .NET 1.0 renderers.

Web technologies have always been a struggle for people to grasp and understand. JavaScript is very powerful whilst being a very loosely architected language compared to C or Java. The ability to use inline styling and scripting always tempted people to write everything in one document rather than separating it out into several, which would allow for caching and re-use. That way we always created bloated, hard to maintain documents or over-used scripts and style sheets we don’t control or understand.

It seems the epic struggle about which technology to use for what is far from over, and we still argue until we are blue in the face about whether an animation should be done in CSS or in JavaScript and whether static HTML or deferred loading and creation of markup using template engines is the fastest way to go.

So what can we do to stop this endless debate?

The web has moved on a lot since Zeldman laid down the law and I think it is time to move on with it. We have to understand that not everything is readily enhanceable. We also have standard definitions that just seem odd and could have very much been better with our input. But we, the people who know and love the web, were too busy fighting smaller fights and complaining about things we should have taken for granted a while ago:

  • There will always be marketing materials or commercial training programs that get everything wrong we stand for. Mentioning them or trying to debunk them will just get more people to look at them. Yes, I do consider W3Schools part of this. We make these obsolete and unnecessary by creating better resources, not by telling people about their dangers.
  • Browsers will always get things wrong and no, there will not be an amazing future where all browsers are ever-green and users upgrade all the time.
  • Materials by standards bodies like this “Standards for Web Applications on Mobile: current state and roadmap” will always be verbose and seem academic in their wording. That’s what a standard is. There can not be wiggle room that’s why it sounds far more complex than we think it is.
  • There will always be people who use a certain technology for things we consider inappropriate. A great example I saw lately was a Mandelbrot fractal renderer, written in SASS, that creates a span for each pixel and needs 5 minutes to compile.

A fault tolerant web? Think again

One of the great things about the web of old was that it was fault tolerant: if something broke, you could provide a fallback or the browser would ignore it. There were no broken interfaces.

This changed when multimedia became a larger part of HTML5. Of course, you can use a fallback image for a CANVAS element (and you should as these get shown as thumbnails on Facebook for example) but it isn’t the same thing as you don’t add a CANVAS for the fun of it but as an interactive part of the page.

The plain fallback case does not quite cut it any longer.

Take a simple example of an image in the page:

<img src="meh.jpg" alt="cute kitten photo">

This is cool. If the image cannot be loaded or rendered, the browser shows the alternative text provided in the alt attribute (no, it is not a tag). In most browsers these days, this is just a text display. You even have full control in JavaScript: you can detect that the image wasn’t loaded and provide a different fallback:

var img = document.querySelector('img');
img.addEventListener('error', function(ev) {
  if (this.naturalWidth === 0 && 
      this.naturalHeight === 0) {
    console.log('Image ' + this.src + ' not loaded');
  }
}, false);

With video, it is slightly different. Take the following example:

<video controls>
  <source src="dynamicsearch.mp4" type="video/mp4">
  </source>
  <a href="dynamicsearch.mp4">
    <img src="dynamicsearch.jpg" 
         alt="Dynamic app search in Firefox OS">
  </a>
  <p>Click image to play a video demo of 
     dynamic app search</p>
</video>

If the browser is not capable of supporting HTML5 video, we get a fallback image (again, great for indexing by Facebook and others). However, these browsers are not that likely to be in use any longer. The more interesting question is what happens when the browser can not play the video because the video codec is not supported? What end users get now is a grey box with the grace of a Java Applet that failed to load.

How do you find out that the video playback failed? You’d expect an error handler on the video to do it, right? Well, not according to the specs, which ask for an error handler on the last source element in the video element. That means that if you want the alternative content in the video element to show up when the video cannot be played, you need the following code:

var v = document.querySelector('video'),
    sources = v.querySelectorAll('source'),
    lastsource = sources[sources.length-1];
lastsource.addEventListener('error', function(ev) {
  var d = document.createElement('div');
  d.innerHTML = v.innerHTML;
  v.parentNode.replaceChild(d, v);
}, false);

Codec detection is incredibly flaky and hard, as it happens at the OS and hardware level and is not fully in the control of the browser. That’s probably also the reason why the canPlayType() method of a video element (which is meant to tell you if a video format is supported) returns “maybe”, “probably” or an empty string. A coy method, that one.
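
In practice that looks like this (the codec string is just an example):

var v = document.createElement('video');
// canPlayType() never gives a definite yes: only "probably", "maybe" or an empty string
console.log(v.canPlayType('video/mp4; codecs="avc1.42E01E"')); // "probably" or "maybe"
console.log(v.canPlayType('video/x-unknown')); // ""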

It is the web, deal with it!

We could get very annoyed with this, or we can just deal with it. In my 18 years of web development I learned to take things like that in stride and I am actually happy about the quirky issues of the web. It makes it a constantly changing and interesting environment to be in.

I really think Mattias Petter Johansson of Spotify nailed it when he answered someone on Quora asking why JavaScript is the only language in a browser:

Hating JavaScript is like hating the Internet.
The Internet is a cobweb of different technologies cobbled together with duct tape, string and chewing gum. It’s not elegantly designed in any way, because it’s more of a growing organism than it is a machine constructed with intent.

This is also why we should stop trying to make people love the web no matter what and force our ideas down their throats.

Longevity? Meh!

One of the main things we keep harping on about is the lovely longevity of the web. Whether it is Microsoft’s first web page still working in browsers now after 20 years or the web being the only platform with backwards compatibility and forward enhancement – we love to point out that we are in for the long game.

Sadly, this argument means nothing to developers who currently work in the mobile app space where being first out of the door is the most important part and people know that two months down the line nobody is going to be excited about your game any more. This is not sustainable, and reminds me of other fast-moving technologies that came and went. So let’s not waste our time trying to convince people who already subscribed to an idea of creating consumable software with a very short shelf-life.

I put it this way:

If you enable people world-wide to get a good experience and solve a problem they have, I like it. The technology you use is not the important part. How much you lock them in is. Don’t lock people in.

Let’s analyse our own behaviour

A lot of the bloat and repetitiveness of the web seems to me to stem from three mistakes we make:

  • we optimise prematurely
  • we tend to strive for generic solutions to very specific problems
  • we build stop-gap solutions to use upcoming technology before it is ready and become dependent on them

A glimpse at the state of the componentised web seems to validate this. Web Components are amazingly necessary for the future of apps on the web platform, but they aren’t ready yet. Many of these frameworks give me great solutions right now, and the effort I have to put in to learn them will make it hard for me to ever switch away from them. We’ve been there before: just try to find a junior JavaScript developer who knows the DOM instead of using jQuery for everything.

The cool new thing now is static HTML pages, as they run fast, don’t take many resources and are very portable. Except that we already have 298 different generators to choose from if we want to create them. Or we could write static HTML by hand if all we have is a few sites. But where’s the fun in that?

Fredrik Noren had a great article about this lately called On Generalisation and put it quite succinctly:

Generalization is, in fact, prediction. We look at some things we have and we predict that any current and following entities in that group will look and behave sufficiently similar in the future to what we have now. We predict that we can write something that will cater to all, or most, of its future needs. And that makes perfect sense, if you just disregard one simple fact of life: humans are awful at predicting the future!

So let’s stop trying to build for an assumed problematic future that probably will never come and instead be thankful for what we have right now.

Such amazing times we live in

If you play with the web these days and you leave your “everything is broken, I must fix it!” hat off, it is amazing how much fun you can have. The other day I wrote Makethumbnails.com – a quick app that allows you to drag and drop images into your browser and get a zip of thumbnails back. All without a server in between, all working offline and written on a plane without a web connection using only the developer tools built into the browser these days.

We have an amazing amount of new events, sensors and data to play with. For example, reading out the ambient light around a laptop is a simple event handler:

window.addEventListener('devicelight', function(e) {
  var lv = e.value;
  // lv is the light in lux
});

You can use this to switch from a dark on light to a light on dark display. Or you could detect a 0 and know that the end user is currently covering their camera with their hands and provide a very simple hand gesture interface that way. This sensor is always on and you don’t need to have the camera enabled. How cool is that?
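
A light-on-dark switch based on that event can be as small as the following sketch (the 50 lux threshold is an arbitrary assumption):

window.addEventListener('devicelight', function(e) {
  // below ~50 lux, assume a dark room and flip to a light-on-dark theme
  document.body.classList.toggle('lightondark', e.value < 50);
});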

Are there other sensors or features in devices you’d like to have? Please ask on the feedback form about Open Web Apps and you can be part of the next iteration of web interaction.

Developer tools across browsers have moved beyond view-source functionality, and all of them now offer timelines, step-by-step debugging, network information and even device or screen emulation and visual editors for colours and animations. Most also offer some sort of built-in editor and remote debugging of devices. If you miss something there, here is another channel to tell the makers about it.

It is a big, fragmented world out there

The next big boom of the web is not in the Western world and not on laptops and desktops connected with massively fast lines. We live in a mobile world, and the predictability of what device our end users will have is gone. Surveys of Android usage showed 18,796 different devices in use already, and both Mozilla’s and Google’s reach into emerging markets with under-$100 devices means the light-weight web is going to be a massive thing for all of us. This is why we need to re-think our ways.

First of all, offline first should be our mantra. There is no steady connection we can rely on. Alex Feyerke has a great talk about this.

Secondly, we need to ensure that our solutions run smoothly on very low end devices. For this, there are a few tricks and developer tools give us great insight into where we waste memory and framerate. Angelina Fabbro has a great talk about that.

In general, the web is and stays an amazingly useful resource, now more than ever. Tools like GitHub, JSFiddle, JSBin, CodePen and many others allow us to build things together and be in constant communication. Together.js (built into JSFiddle as the ‘collaboration’ button) allows us to code together with text or voice chat while seeing each other’s cursors. This is an incredible opportunity to make coding more human and help one another whilst we develop, instead of telling each other how we should develop.

Let’s use the web to work on things together. Don’t wait to build the perfect solution. Share it early, take on advice and pull requests and together we can build much better products.

Soledad Penades: JSConf.eu 2014

I accidentally ended up attending JSConf.eu 2014–it wasn’t my initial intent, but someone from Mozilla who was going to be at the Hacker Lounge couldn’t make it for personal reasons, and he asked me to join in, so I did!

I hung around the lounge for a while every day, but at times it was so full of people that I just went downstairs and talked hacks & business while having coffee, or simply attended some of the talks instead. The following are notes from the talks I attended and from random conversations on the Hallway and Hacker Lounge tracks ;)

Parallel JavaScript by Jaswanth Sreeram

After having heard about it during the “Future JS” session at the Extensible Web Summit, this one seemed most exciting to me! Data crunching in JS via “invisible” translation to OpenCL? Yay! Power save of 8x, speed increases of 6x thanks to the power of the GPU! Also it is already available in Firefox Nightly.

I got so excited that I started building a test on the same day to try and trigger the parallel code path, but the performance is 2x slower than traditional sequential code. I spoke to Jaswanth in one of the breaks and explained my issue to him; he said that the code needs to be complex enough for the “paralleliser” to kick in, and there is a certain amount of work involved in determining this, so that might be the reason why the performance is so bad.

Still, existing PJS examples seem a bit too contrived to explain/demonstrate to people in a nutshell why it is so cool, so I would be interested in finding the right function that triggers parallelism and is not overly complex—things with matrices just go over the heads of people who are not used to this kind of data manipulation, and the rest of the example just “does not compute” in their minds.

What Harry Potter can teach us about JavaScript by Sara Robinson

I went to this one because the title seemed intriguing. Basically, if people like something they will talk about it on the Internet, and also: regionalisms and variations to better target the market are important.

This rang a tiny little bell for me as it sounded a bit like the work we’re doing at Moz by working closely with communities where Firefox OS is launching–each launch is different as the features are specific to each market.

Bookwise, I am not overly convinced by adaptations that try to convert the work so that it conveys something that is not initially conveyed in the work itself. E.g. in France there was a strong push for highlighting the teaching/learning/school concept, and so the book title was translated into something like “Harry Potter and the Wizard’s SCHOOL”. I’m totally OK with good translations that have to change some character name in order for it to still sound funny, but trying to change the meaning of the book is off limits for me–I think the metaphor with JS didn’t quite work here.

We’re struggling to keep up (a brief story of browser security features) by Frederik Braun

I was expecting more scare and more in-depth tech from this one! Frederik, step it up! (Disclaimer: Frederik works at Mozilla so we’re colleagues and hence the friendly complaint).

Keeping secrets with JavaScript: an introduction to the WebCrypto API by Tim Taubert

I also went to this talk by another fellow German Mozillian (it seems the Berlin office has a thing for security and privacy… which makes total sense). It was a good introduction to how all the pieces fit together. After the talk there were some discussions in the “hallway track” about whether every developer should know cryptography, and to what extent. I have mixed feelings: crypto is hard enough that it is easy to mess it up and render it useless (while still thinking you’re safe, even if you’re not), so maybe we need better libraries/tooling that make it easy not to mess it up. Or maybe we need easier crypto. I definitely think anyone handling data should know about cryptography. If you’re a purely front-end person and only doing things such as CSS… well, maybe you can go a long way without knowing your SHAs from your MD5s…

Monster Audio-Visual demos in a TCP packet by Matthieu Henry ‘p01′

I went to this one expecting a whole bunch of demoscene tricks, and I ended up coming back from the forest of dropped jaws and “OMG it’s just 4K” utterances, which was fun anyway! It’s always entertaining to see people’s minds being blown, although I expected a bit of new material from p01 too.

Usefulness of Uselessness by Brad Bouse

I saw Brad at CascadiaJS in Vancouver last year and he was entertaining, but maybe I wasn’t in the right mood. This talk, in contrast, was way more focused on a simple message: do something useless. Do more useless stuff. Useless stuff is actually useful.

So now I’m giving myself free rein to do more useless stuff. Not that I wasn’t already, but now it is CONSCIOUSLY USELESS and just because.

The meaning of words by Stephan Seidt

Speaking of usefulness… I don’t know if this is because it was the last talk and I was developing a massive headache, but I found it a bit of a gimmick. Maybe if I had watched it at another time and in another mood I’d have found it more impressive, but it didn’t quite work for me. Other people clapped, so I guess it did work for them.

Javascript for Everybody by Marcy Sutton

This one was a really moving talk on how we should not break accessibility with JavaScript. It’s not just about ARIA roles in mark-up, it’s also about the things we create live, and about patching our frameworks of choice so they help less experienced developers be accessible by default, thus improving the ecosystem.

After the talk I was left with this persistent sensation that I wasn’t doing the right thing in my code, which prompted me to review it and file bugs. Uuuurgh (and yaaaay, thanks for calling us out).

This is bigger than us: Building a future for Open Source by Lena Reinhard

Lena made a very compelling talk about why you should analyse your project and get worried if it is not diverse, because it won’t survive for long, as monocultures are fragile and prone to disappear.

Communities start diverse by default, but each incident makes the community less diverse, as people abstain from participating ever again. Do you care about your community? Then you need to ensure it stays diverse.

A note: diversity is not only about “having women”, but about having people who are representative of your population. It is also not only about representing the developers who use your code, but the USERS who use what those developers build–and this is way more important than we usually deem it to be, as the ratio tends to be 1 developer per 400 users.

Yet another demonstration of team Hoodie‘s high human standards :-)

(I’m also very excited that I got to meet and speak to a few of them during the conf, but sadly not the doge in their twitter account avatar–although it would have been weird to have him speak, but who knows what offline can enable?)

Server-less applications powered by Web Components by Sébastien Cevey

Sébastien had been at the Web Components session at the Extensible Web Summit, but he didn’t share as much as he did during this talk.

First he asked the audience how many people had heard about Web Components before; I’d estimate about 40% of people raised their hands. Then he asked them how many had actually used web components and I’d say the number of raised hands was just 5% of the audience.

Basically they had a series of status dashboards rendered with “horrible PHP” and other horrors of legacy code, and they didn’t want to have this mashup of front-end/back-end code because it was unmaintainable. So they set to rewrite the whole thing with Web Components, and so they did.

In the process they came up with a bit of metalanguage to connect the whole thing together, and some metamagic too, and finally they managed to have the whole thing running on the front-end. With just one big caveat: you have to be logged into The Guardian’s VPN to access the dashboards, because the auth seemed to be taking place client-side, and heh heh.

I was looking at the diagram of the web components they were using and the whole message passing chart and maybe it was because it was a bunch of information all of a sudden but I had the same experience of metamagic overdose I get with these all-declarative approaches to web components: some elements send messages to other elements by detecting them in the same document, like for example the modules that needed config would try to detect a config element in the tree, and use it. Maybe I didn’t understand it correctly, but this seemed akin to a global variable :-/

I still need to get my thoughts in order re: the all-declarative web components pattern, but I think one major reason it doesn’t work for me is that the DOM is a hierarchical structure; when people tuck several elements into it without any hierarchical relation between them, yet things happen magically and the elements interact with each other with hardly any way to know, I feel something’s not quite right there.

Another interesting take-away was that they were able to include other modular components into their component. For example they used a google chart element, and Paper components. I guess there is another minor unmentioned caveat here, and it is that it worked because they used Polymer to build their components, so the 2-way data binding worked seamlessly :-P

Using the web for music production and for live performances by Jan Monschke

I had seen Jan’s earlier talk at Scotland JS which was similar to this one but less cool–this time he convinced his brother and a friend to connect to his online collaborative audio workstation so we could see them playing live via WebRTC, and then he arranged the tracks they had recorded remotely from different points in the country. It was way more engaging and spectacular!

Then he also made a demo with an iPad and a home-made web audio app for live performances, which was really cool–you don’t need to program native code in order to build audiovisual apps! It is super awesome, come to think of it!

He still hasn’t fixed the things that I found “not OK” (as discussed in my Scotland JS post) but he is aware now of them! So maybe we might collaborate on The Definitive Collaborative Editor!

The Linguistic Relativity of Programming Languages by Jenna Zeigen

I didn’t catch this one in its entirety, but I got a few take-aways:

To all language snobs:

  • stop criticising
  • let other people use whatever language they’re comfortable with
  • languages that do not evolve will become obsolete

and a fantastic motto:

let’s keep JavaScript weird.

I think this was Jenna’s first talk. I want to see more!

Abusing phones to make the internet of things by Jan Jongboom

Jan works at Telenor and contributes heavily to Firefox OS, just like his coworker Sergi, whom I had been following on the internets for a while and met at the Mozilla Summit last year. I thought I’d finally meet Jan when I went to Amsterdam for GOTO, but it wasn’t meant to be. I then happened to find him in the Hacker Lounge and he gave me a preview of the talk he was going to give later on, which promised to be super exciting. And it was!

Jan was a very entertaining speaker, and delighted the audience with both technical prowess and loads of jokes, including his own take on Firefox OS competition—Jan OS:

Basically he took away the UI layer in Firefox OS and got full root access to do whatever he pleases with the phones, which, when the screen is not used, have an extremely long-lasting battery life (on the scale of weeks). So they are effectively integrated autonomous computers that come in cheaper than a Raspberry Pi and similar boards. He showed some practical examples of “Things” for the Internet of Things, such as a custom GPS tracker that reported via push notifications where one of his very easy-to-get-lost friends was, or a cheap wireless contactless doorbell (using the proximity sensor) that can play a custom sound through Bluetooth speakers.

This reminds me that I wanted to ask someone of the people in QA if they had any spare old phone they’re not testing with any more so I could rip its guts apart, but maybe now it’s not such a good idea–I would need to come up with a project!

GIFs vs Web Components by Glen Maddern

I finally got to watch Glen’s talk! He gave it at CascadiaJS too, but I was hiding in my room preparing for mine, so I couldn’t join in the GIF celebration.

The most important takeaway is: GIF with a hard G.

As it should be.

And then, in no less serious terms:

GIFs are important.

Do not impose your conventions or the conventions of your framework of choice onto other potential users—just talk HTML/DOM/JS. He initially started building the <x-gif> component with Polymer, then got requests to port it to Angular, to React, to… every imaginable framework, but adoption wasn’t really catching up, and translating to each framework made him have to learn about each framework and its mannerisms, so it was a really long and nonproductive process, until he realised that it’s better if your component is generic and not tied to any framework.

Finally, lesson learnt: Polymer !== Web Components.

Know your /’s by Lindsay Eyink

This was a good closing talk and I’m so grateful that it wasn’t super overloaded with FEELINGS but instead full of pragmatism and common sense, and a call to have more common sense out there–especially in planning departments that try to foster artificial constructs such as the “Silicon Roundabout” and so on.

Takeaway: don’t try to copy Silicon Valley. Be your own city, with your own idiosyncrasies and differences. That’s what makes your environment unique and attracts people from other places – not making a bad copy of Silicon Valley, just with expensive gas and rent.

All in all—yet another great JSConf.eu! For many more years!


Mozilla WebDev Community: Beer and Tell – September 2014

September’s Beer and Tell has come and gone.

A practical lesson in the ephemeral nature of networks interrupted the live feed and the recording, but fear not! A wiki page archives the meeting structure and this very post will lay plain the private ambitions of the Webdev cabal.

Mike Cooper: GMR.js

Mythmon is a Civilization V enthusiast, but multiplayer games are difficult — games can last a dozen hours or more. The somewhat archaic play-by-mail format removes the simultaneous, continuous time commitment, and the Giant Multiplayer Robot service abstracts away the other hassles of coordinating turns and save game files.

GMR provides a client for Windows only, so Mythmon created GMR.js to provide equivalent functionality cross-platform with Node.js. It presents an interactive command line UI. This enables participation from a Steam Box and other non-Windows platforms.

Bramwelt: pewpew

Trevor Bramwell, summer intern for the Web Engineering team, presented a homebrew clone of Space Invaders he calls pewpew. He built it using PhaserJS as an exercise to better understand prototypal inheritance. You can follow along as he develops it by playing the live demo on gh-pages.

Cvan: honeyishrunktheurl

Chris Van shared two new takes on the classic URL shortener. The first is written in Go, with configuration stored in JSON on the server; it was used as an exercise for learning Go. The second is an HTML page that handles the redirect on the client side.
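
A client-side redirect page can be as simple as the following sketch (a generic illustration, not Chris Van's actual code; the route map is made up):

<!doctype html>
<meta charset="utf-8">
<script>
  // Look up the shortened path in a hard-coded map and redirect.
  var routes = { "/fx": "https://www.mozilla.org/firefox/" };
  var target = routes[location.pathname];
  if (target) { location.replace(target); }
</script>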

He intends to put them into production on a side project, but hasn’t found a suitable domain name.

Cvan: honeyishrunktheurl

Chris Van held the stage for a second demo. He showed how the CSS order property can be used to cheaply rearrange DOM nodes without destroying and re-rendering new nodes. An accompanying blog post delves into the details. The post is worth a read, since it covers some limitations of the technique that came up in discussion during the demo.

Lonnen: Alonzo, pt II

Last time he joined us, Lonnen was showing off Alonzo, a Scheme interpreter he is writing in Haskell. This month Alonzo gained a number of new features, including variable assignment, functions, closures, and IO. Next he’ll pursue building a standard library and adding a test suite.


If you’re interested in attending the next Beer and Tell, sign up for the dev-webdev@lists.mozilla.org mailing list. An email is sent out a week beforehand with connection details. You could even add yourself to the wiki and show off your side-project!

See you next month!

Daniel Stenberg: daniel.haxx.se week #3

I won’t keep posting every video update here, but I mostly wanted to mention that I’ve kept posting a weekly video over at youtube basically explaining what’s going on right now within my dearest projects. Mostly curl and some Firefox stuff.

This week: libcurl server cert verification API got a bashing at SEC-T, is HTTP for UDP a good idea? How about adding HTTP cache support to libcurl? HTTP/2 is getting deployed as we speak. Interesting curl bug when used by XBMC. The patch series for Firefox bug 939318 is improving slowly – will it ever land?

Erik Vold: Add-on Directionless

At the moment there is no one “in charge” at Mozilla who has awareness of the add-on community’s plight. It’s sadly true that Mozilla has been divesting from add-on support for a while now. To be fair, I think the initial divestments were good ones; the Add-on Builder website was a money pit, for example. The priority has shifted to web apps: the old addons.mozilla.org team is working mostly on the Marketplace (which is no longer for add-ons, and is now just for apps), and the Add-on SDK team is now mostly working on Firefox DevTools projects.

At the moment we have only a few staffers working on addons.mozilla.org and the SDK, and none of us has the authority to make decisions or end debates. There is a tech lead for the SDK, but that is not a position with the authority to make directional decisions, or to decide how staffers prioritize their work and spend their time; each person’s manager is in charge of that, and our managers are DevTools and Marketplace people.

Either we all agree on a direction or we don’t.

Chris McAvoy: Hand Crafted Open Badges Display

Earning an Open Badge is easy, there’s plenty of places that offer them, with more issuers signing up every day. Once you’ve earned an open badge, you can push it to your backpack, but what if you want to include the badge on your blog, or your artisanal hand crafted web page?

You could download the baked open badge and host it on your site. You could tell people it’s a baked badge, but using that information isn’t super easy. Last year, Mike Larsson had a great idea to build a JS library that would discover open badges on a page, and make them dynamic so that a visitor to the page would know what they were, not just a simple graphic, but a full-blown recognition for a skill or achievement.

Since his original prototype, the process of baking a badge has changed, plus Atul Varma built a library to allow baking and unbaking in the browser. This summer, Joe Curlee and I took all these pieces, prototypes and ideas and pulled them together into a single JS library you can include in a page to make the open badges on that page more dynamic.

There’s a demo of the library in action on Curlee’s GitHub. It shows a baked badge on the page; when you click the unbake button, it takes the baked information from the image and makes the badge dynamic and clickable. We added the button to make it clear what was happening on the page, but in a normal scenario you’d just let the library do its thing and transform the badges on the page automatically. You can grab the source for the library on GitHub, or download the compiled / minified library directly.

There’s lots more we can do with the library; I’ll be writing more about it soon.

John O'Duinn: San Francisco Car Culture: Unusual Jaguar XK8 paint job

Found this earlier this month while on the way to work. The color scheme really threw me off, so at first I couldn’t even tell it was a Jaguar. I remain speechless.

Mark Surman: You did it! (maker party)

This past week marked the end of Maker Party 2014. The results are well beyond what we expected and what we did last year — 2,513 learning events in 86 countries. If you were one of the 5,000+ teachers, librarians, parents, Hivers, localizers, designers, engineers and marketing ninjas who contributed to Webmaker over the past few months, I want to say: Thank you! You did it! You really did it!

(Infographic: Maker Party 2014 results.)

What did you do? You taught over 125,000 people how to make things on the web — which is the point of the program and an important end in itself. At the same time, you worked tirelessly to build out and expand Webmaker in meaningful ways. Some examples:

  • Mozilla India organized over 250 learning events in the past two months, showing the kind of scale and impact you can get with well organized corps of volunteers.
  • Countries including Iran, New Zealand, and Sweden held their first ever Maker Party, adding to the idea that Webmaker is a truly global effort.
  • Tools and curriculum focused on mobile were added into the Webmaker suite — AppMaker was launched in June and was well received in Maker Parties around the world.
  • Over 300 partners orgs including major library and after school networks participated, bringing even more skilled teachers and mentors into our community.
  • New and innovative ways to teach the web in a very low touch manner rolled out, including a Firefox snippet that let you hack our home page x-ray goggles style.
  • Webmaker teamed up with Mozilla’s policy team, with a sub-campaign for Net Neutrality teach-ins plus a related reddit AMA.

It’s important to say: these things add up to something. Something big. They add up to a better Webmaker — more curriculum, better tools, a larger network of contributors. These things are assets that we can build on as we move forward. And you made them.

You did one other thing this summer that I really want to call out — you demonstrated what the Mozilla community can be when it is at its best. So many of you took leadership and organized the people around you to do all the things I just listed above. I saw that online and as I traveled to meet with local communities this summer. And, as you did this, so many of you also reached out and mentored others new to this work. You did exactly what Mozilla needs to do more of: you demonstrated the kind of commitment, discipline and thoughtfulness that is needed to both grow and have impact at the same time. As I wrote in July, I believe we need to simultaneously drive hard on both depth and scale if we want Webmaker to work. You showed that this was possible.

Celebrating at MozFest East Africa

So, if you were one of the 5000+ people who contributed to Webmaker during Maker Party: pat yourself on the back. You did something great! Also, consider: what do you want to do next? Webmaker doesn’t stop at the end of Maker Party. We’re planning a fall campaign with key partners and networks. We’re also moving quickly to expand our program for mentors and leaders, including thinking through ideas like Webmaker Clubs. These are all things that we need your help with as we build on the great work of the past few months.


Filed under: education, mozilla, webmakers

ArkyNoto Fonts Update

The Google Internationalization team released a new update of the Noto Fonts this week. The update brings numerous new features and enhancements. Please read the project release notes for the full list of changes.

You can preview the fonts and download them at google.com/get/noto.

Google Noto project logo

Testing fonts on Firefox OS device

It is very simple to test the Noto fonts on a Firefox OS device. Just copy the font files into the /system/fonts folder and reboot the device. Don't forget to back up the existing fonts on the device first.

I am writing this blog post in Bangkok, so I am going to use the Thai Noto fonts in these instructions. Connect your Firefox OS device to the computer with a USB cable. Make sure to turn on developer settings to enable debugging via USB.


# Backup the existing Thai font
$ adb pull /system/fonts/DroidSansThai.ttf  

# Remount the /system partition as read-write
$ adb remount /system 

# Remove the font on the device
$ adb shell rm /system/fonts/DroidSansThai.ttf

# Unzip the previously downloaded Thai font package
$ unzip NotoSansThai-hinted.zip 

# Push to Firefox OS device 
$ adb push NotoSansThai-Regular.ttf /system/fonts

# Reboot the phone. Test your localization by selecting your language
# in the Language settings menu or by navigating to a local-language webpage with the Browser app.
$ adb reboot


Wait, all I see is Tofu?

If you see square blocks (lovingly referred to as Tofu) instead of characters, that means the font file for your language is missing. Please double-check the steps; if everything fails, restore the previous copy of your font file.

What is font Tofu, Firefox OS screenshot

Happy Hacking!

Mozilla Release Management TeamFirefox 33 beta4 to beta5

  • 36 changesets
  • 81 files changed
  • 990 insertions
  • 1572 deletions

Extension: occurrences

  • cpp: 21
  • js: 10
  • java: 10
  • h: 6
  • in: 3
  • html: 3
  • cc: 3
  • xml: 2
  • mozbuild: 2
  • ini: 2
  • txt: 1
  • nsi: 1
  • mn: 1
  • list: 1
  • jsm: 1
  • css: 1

Module: occurrences

  • mobile: 16
  • netwerk: 13
  • layout: 7
  • media: 6
  • browser: 6
  • gfx: 5
  • toolkit: 4
  • security: 3
  • dom: 3
  • js: 2
  • widget: 1
  • extensions: 1
  • caps: 1

List of changesets:

Sylvestre LedruPost Beta 4: disable EARLY_BETA_OR_EARLIER a=me - abf1c1e6b222
Aaron KlotzBug 937306 - Improvements to WinUtils::WaitForMessage. r=jimm, a=sylvestre - 60aecc9d11ab
Richard NewmanBug 1065523 - Part 1: locale picker screen displays short locale display name, not capitalized region-decorated name. r=nalexander, a=sledru - cea1db6ec4ac
Jason DuellBug 966713 - Intermittent test_cookies_read.js times out. r=mcmanus, a=test-only - bd8bbb683257
Brian HackettBug 1061600 - Fix PropertyWriteNeedsTypeBarrier. r=jandem, a=abillings - 025117f71163
Bobby HolleyBug 1066718 - Get sIOService before invoking ReadPrefs. r=bz, a=sledru - 262de5944a01
Richard NewmanBug 1045087 - Remove Product Announcements integration points from Fennec. r=mfinkle, a=sledru - c0ba357c4c89
Richard NewmanBug 1045085 - Remove main Product Announcements code. r=mcomella, a=lmandel - d5ed7dd8f996
Oscar PatinoBug 1064882 - Receive RTCP SR's on recvonly streams for A/V sync. r=jesup, a=sledru - e99eaafdbda1
Matt WoodrowBug 1044129 - Don't crash if ContainerLayer temporary surface allocation fails. r=jrmuizel, a=sledru - 11e34dc2f591
Eric FaustBug 1033873 - "Differential Testing: Different output message involving __proto__". r=jandem, a=sledru - 2dbe6d8a5c30
Mo ZanatyBug 1054624 - Fix high-packet-loss problems with H.264 WebRTC calls. r=jesup, a=lmandel - 75eddbd6dc80
Michal NovotnyBug 1056919 - Crash in memcpy | mozilla::net::CacheFileChunk::OnDataRead(mozilla::net::CacheFileHandle*, char*, tag_nsresult). r=honzab, a=sledru - 62d020eff891
Stephen PohlBug 1065509: Bump maximum download size from 35 MB to 70 MB in stub installer. r=rstrong, a=lmandel - e85a6d689148
Cameron McCormackBug 1041512 - Mark intrinsic widths dirty on a style change even if the frame hasn't had its first reflow yet. r=dbaron, a=abillings - dafe68644b45
Andrea MarchesiniBug 1064481 - URLSearchParams should encode % values correcty. r=ehsan, a=lmandel - f44f06112715
Jonathan WattBug 1067998 - Fix OOM crash in gfxAlphaBoxBlur::Init on large blur surface. r=Bas, a=sylvestre - 023a362fab21
Matt WoodrowBug 1037226 - Don't crash when surface allocation fails in BasicCompositor. r=Bas, a=sledru - 9dd2e1834651
Margaret LeibovicBug 996753 - Telemetry probes for settings pages. r=liuche, a=sledru - 8d7b3bfaf3ab
Margaret LeibovicBug 996753 - Telemetry probes for changing settings and hitting back. r=liuche, a=sledru - 3504f727e58c
Margaret LeibovicBug 1063128 - Make sure all preferences have keys. r=liuche, a=sledru - e981cc82a3e5
Margaret LeibovicBug 1058813 - Add telemetry probe for clicking sync preference. r=liuche, a=sledru - 340bddec5bf5
Honza BambasBug 1065478 - POSTs are coming from offline application cache. r=jduell, a=sledru - 6c39ccb686a5
Honza BambasBug 1066726 - Concurrent HTTP cache read and write issues. r=michal, r=jduell, a=sledru - 8a1cffa4c130
David KeelerBug 1066190 - Ensure that pinning checks are done for otherwise overridable errors. r=mmc, a=sledru - 1e3320340bd2
Wes JohnstonBug 1063896 - Loop over all url list, not just ones with metadata. r=lucasr, a=sledru - 792d0824a8f0
Blair McBrideBug 1039028 - Show license info for OpenH264 plugin. r=irving, a=sledru - 01411f43df67
Drew WillcoxonBug 1066794 - Make the search suggestions popup on about:home/about:newtab more consistent with the main search bar's popup. r=MattN, a=sledru - 44cc9f25426d
Drew WillcoxonBug 1060888 - Autocomplete drop down list item should not be copied to the search fields when mouse over the list item. r=MattN, a=sledru - 6975bbd6c73a
Richard NewmanBug 1057247 - Increase favicon refetch time to four hours. r=mfinkle, a=sledru - 515fa121e700
Mats PalmgrenBug 1067088 - Use aBorderArea when not skipping any sides (e.g. ::first-letter), not the joined border area. r=roc, a=sledru - af1dbe183e3d
Ryan VanderMeulenBacked out changeset af1dbe183e3d (Bug 1067088) for bustage. - f5ba94d7170d
Honza BambasBug 1000338 - nsICacheEntry.lastModified not properly implemented. r=michal, a=sledru - b88069789828
Benjamin SmedbergBug 1063052 - In case a user ends up with unpacked chrome, on update use omni.ja again by removing chrome.manifest. r=rstrong, r=glandium, sr=dbaron, a=lmandel - 2dce6525ddfe
Mats PalmgrenBug 1067088 - Use aBorderArea when not skipping any sides (e.g. ::first-letter), not the joined border area. r=roc a=sledru - 9f2dc7a2df34
Nick AlexanderBug 996753 - Workaround for Fx33 not having AppConstants.Versions. r=rnewman, a=bustage - 7cd3ae0255ec

Kevin NgoBuilding the Marketplace Feed

New front page of the Firefox Marketplace.

The "Feed" is the new feature I spent the last three months grinding out for the Firefox Marketplace. The Feed transforms the FirefoxOS app store to provide an engaging and customized app discovery experience by presenting fresh, user-tailored content on every visit. The concept was invented by Liu Liu, a Mozilla design intern I briefly hung out with last year. It quickly gained traction by getting featured on Engadget, presented at the Mozilla Summit, and shown off on prototypes at Mobile World Congress. With more traction came more pressure to ship. We built that ship, and it sailed on time.

Planning Phase

The whole concept had a large scope so we broke it into four versions. For the first version, we focused on getting initial content pushing into the Feed. We planned to build a curation tool for our editorial team to control the content of the Feed, with the ability to tailor different content for different countries. Users would then be able to see fresh and new content on the homepage every day, thus increasing engagement.

Curation Tools

The final product for the Curation Tool, used by the editorial team to feature content and tailor different content for different countries.

Some time was spent on the feature requirements and specifications as well as the visual design. We mainly had three engineers: myself, Chuck, and Davor. It was a slow start. Chuck started early on the technical architecture, building out some APIs and documentation. A few months later, I started work on the curation tool. My initial work was actually done off-the-grid in an RV in the middle of Alaska.

But then the project to optimize the app for the $25 FirefoxOS smartphone took over. Once the air cleared, we all gathered for a work week hosted at Davor's place in Fort Myers, FL (since I was already in FL for a vacation).

Building Phase

In early June, we had a solid start during the work week, with each of the three of us working on separate components: Chuck on the backend API, Davor on the Feed visuals, me on the curation tools. Being face-to-face rather than remote was nice. I could ask any question that came to mind about unclear requirements, quickly ask for an API endpoint to be whipped up, or check on how the visuals were looking.

After the foundations were in place, I transitioned to working on all parts of the feature from ElasticSearch, the API, the JS powering the curation tools and the Feed, to the CSS for the newer layout of the site. It was a fun grind for a couple of months just writing and refactoring tons of code, getting dirty with ElasticSearch, optimizing the backend.

The launch date was set for late August. A couple of weeks out, the feature was there, but there were some bugs to iron out and tons of polish to be done. The most sketchy part was having no design mocks for desktop-sized screens so I had to improvise. The last week and a half was a sprint. I split my day up into 6 hours, a break to play some tournament poker at the poker club, and then 2 hours at night. Throw in a late Friday, and we made it to the finish line.

Backend Bits

The Feed needed to have an early focus on scalability. We should be able to add more and more factors (such as where they're from, what device they're using, apps they previously installed, content they "loved") that tailor the Feed towards each user. ElasticSearch lets us easily dump a bunch of stuff into it, and we can do a quick query weighing in all of those factors. We cache the results behind a CDN, throw in a couple of layers of client-side caching, and we've got a stew going.

Figuring out how to relate data between different indices was a difficult decision. I had several options in managing relations, but chose to manually denormalize the data and manage the relations myself. This allowed for flexibility and fine-tuned optimizations without having to wrestle with ElasticSearch under-the-hood. We had three indices to relate:

  • Apps
  • Feed Elements - an individual piece of the Feed such as a featured app or a collection of apps that can contain accompanying descriptions or images. Many-to-many relationship with Apps.
  • Feed Items - a wrapper around Feed Elements containing metadata (region, category, carrier) that helps determine to whom the Feed Element should be displayed. Many-to-many relationship with Feed Elements.

This meant three ES queries overall (and 0 database queries). I used the new ElasticSearch Python DSL to help construct queries. I wrote a query (weighing all of the factors about a user) to fetch Feed Items, a query to fetch all of the Feed Elements to attach to the Feed Items, and a query to fetch all of the Apps to attach to the Feed Elements. We throw all of the data to our Django Rest Framework serializers to serialize the final response.

A complication was having to filter out apps, such as when an app is not public, when an app is banned from the user's region, or when an app is not supported on the user's device. With time constraints, I wrote the filtering code in the view. But after the launch, I was able to consolidate our project's app filtering code into a single query factory, use that, and tweak our serializers to handle filtered apps.

Frontend Bits

With the Feed being made of visual blocks and components, encapsulated template macros kept things clean. These macros could be reused by our curation tools to display previews of the Feed on-the-fly for content curators. We used Isotope.js to arrange the layout of these visual blocks.
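As a rough illustration of that last piece, here is a minimal Isotope.js sketch (not the Marketplace's actual code; the .feed container and .feed-item selector are made-up names):

// Minimal sketch: arrange the Feed's visual blocks with Isotope.js.
// '.feed' and '.feed-item' are assumed names, not the real markup.
var container = document.querySelector('.feed');
var iso = new Isotope(container, {
  itemSelector: '.feed-item',  // each visual block in the Feed
  layoutMode: 'masonry'        // pack blocks into a grid layout
});

// When fresh content is appended to the Feed, let Isotope lay it out too:
// iso.appended(newItemElements);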

Our CSS eventually turned messy, with everything namespaced under the .feed CSS class and falling into the trap of OOP-style CSS. A good style guide to follow in the future is @fat's CSS style guide for Medium.

And since a lot of the code is shared between the frontend of the Firefox Marketplace and the curation tools, I'm currently trying to get our reusable frontend assets managed under Bower.

Conclusion

The Feed is only going to grow in features. It's a breath of fresh air in comparison to what used to be a never-changing front page on the app store. I've seen new apps I never even knew we had, and some are actually fun, like Astro Alpaca. Hope everyone likes it!

Yunier José Sosa VázquezMozilla launches sponsored “Tiles” in Firefox

Mozilla has just launched sponsored Tiles in Firefox Nightly. Tiles are the thumbnails with links to web pages that appear when you open a new tab in Firefox. They come in three types:

  • Directory Tiles: shown to new Firefox users to suggest sites of interest. They are built from the Tiles and sites most popular among Firefox users, and are gradually replaced by history Tiles based on the sites the user visits most.
  • Enhanced Tiles: for users with existing (history) Tiles on their new tab page, the preview image will now be replaced by a higher-quality one provided by the site or by a partner. The pages shown in these Tiles are those recorded in the user's history.
  • Sponsored Tiles: any Tile that includes a site with a commercial agreement with Mozilla; it is labeled as Sponsored.

If you are worried about privacy, don't be: only Tile information on a new tab page is collected, in order to offer more interesting sites to new Firefox users and improve the recommendations shown to existing ones. All of that information is aggregated and includes no way to identify the user; only the data needed to make sure the Tiles deliver value to our users and commercial partners is collected.

Where does the shared data go?

The data is transmitted directly to Mozilla and stored on Mozilla's servers. For all types of Tiles, Mozilla shares numbers with partners such as: number of impressions, clicks, pins, and hidings of the received content.

How do I turn it off?

You can turn it off by clicking the gear icon in the top-right corner of a new tab page and selecting Classic to show non-enhanced Tiles, or Blank mode, which disables this feature entirely.

Source: Mozilla Hispano

Ben HearsumNew update server has been rolled out to Firefox/Thunderbird Beta users

Yesterday marked a big milestone for the Balrog project when we made it live for Firefox and Thunderbird Beta users. Those with a good long-term memory may recall that we switched Nightly and Aurora users over almost a year ago. Since then, we’ve been working on and off to get Balrog ready to serve Beta updates, which are quite a bit more complex than our Nightly ones. Earlier this week we finally got the last blocker closed and we flipped it live yesterday morning, Pacific time. We have significantly (~10x) more Beta users than Nightly+Aurora, so it’s no surprise that we immediately saw a spike in traffic and load, but our systems stood up to it well. If you’re into this sort of thing, here are some graphs with spiky lines:
The load average on 1 (of 4) backend nodes:

The rate of requests to 1 backend node (requests/second):

Database operations (operations/second):

And network traffic to the database (MB/sec):

Despite hitting a few new edge cases (mostly around better error handling), the deployment went very smoothly – it took less than 15 minutes to be confident that everything was working fine.

While Nick and I are the primary developers of Balrog, we couldn’t have gotten to this point without the help of many others. Big thanks to Chris and Sheeri for making the IT infrastructure so solid, to Anthony, Tracy, and Henrik for all the testing they did, and to Rail, Massimo, Chris, and Aki for the patches and reviews they contributed to Balrog itself. With this big milestone accomplished we’re significantly closer to Balrog being ready for Release and ESR users, and retiring the old AUS2/3 servers.

Soledad PenadesExtensible Web Summit Berlin 2014: my lightning talk on Web Components

I was invited to join and give a lightning talk at the Extensible Web Summit that was held in Berlin last week, as part of the whole JSFest.berlin series of events.

The structure of the event consisted of a series of introductory lightning talks to “set the tone”, after which the rest would be a sort of unconference where people would suggest topics to talk about and then we would build a timetable collaboratively.

My lightning talk

The topic for my talk was… Web Components. Which was quite interesting because I have been working/fighting with them and various implementations in various levels of completeness at the same time lately, so I definitely had some things to add!

I didn’t want people to get distracted by slides (including myself) so I didn’t have any. Exciting! Also challenging.

These are the notes I more or less followed for my minitalk:

When I speak to the average developer they often cannot see any reason to use Web Components

The question I’m asked 99% of the time is “why, when I can do the same with divs? with jQuery even? what is the point?”

And that’s probably because they are SO hard to understand

  • The specs–the four of them–are really confusing and dense. There are four specs you need to understand to totally grasp how everything works together.
  • Explainer articles too often drink the Kool Aid, so readers are like: fine, this seems amazing, but what can it do for me and why should I use any of this in my projects?
  • Libraries/frameworks built on top of Web Components hide too much of their complexity by adding more complexity, further confusing matters (perhaps they are trying to do too many things at the same time?). Often, people cannot even distinguish between Polymer and Web Components: are they the same? which one does what? do I need Polymer? or only some parts?
  • Are we supposed to use Web Components for visual AND non visual components? Where do you draw the line? How can you explain to people that they let you write your own HTML elements, and next thing you do is use an invisible tag that has no visual output but performs some ~~~encapsulated magic~~~?

And if they are supposed to be a new encapsulation method, they don’t play nicely with established workflows–they are highly disruptive both for humans and computers:

  • It’s really hard to parse in our minds what the dependencies of a component (described in an HTML import) and all its dependencies (described in possibly multiple nested HTML imports) are. Taking over a component-based project can easily get horrible.
  • HTML imports totally break existing CSS/JS compression/linting chains and workflows.
  • Yes, there is Vulcaniser, a tool from Polymer that amalgamates all the imports into a couple of files, but it still doesn’t feel quite there: we get an HTML and a CSS file that still need a polyfill to be loaded.
  • We need this polyfill for using HTML imports, and we will need it for a while, and it doesn’t work with file:/// urls because it makes a hidden XMLHttpRequest that no one expects. In contrast, we don’t need one for loading JS and CSS locally.
  • HTML Imports generate a need for tools that parse the imports and identify the dependencies and essentially… pretend to be a browser? Doesn’t this smell like duplication of efforts? And why two dependency loading systems? (ES6 require and HTML imports)

There’s also a problem with hiding too much complexity and encapsulating too much:

  • Users of “third party” Web Components might not be aware of the “hell” they are conjuring in the DOM when said components are “heavyweight” but also encapsulated, so it’s hard to figure out what is going on.
  • It might also make components hard to extend: you might have a widget that almost does all you need except for one thing, but it’s all encapsulated and ooh you can’t hook on any of the things it does so you have to rewrite it all.
  • Perhaps we need to discuss more about use cases and patterns for writing modular components and extending them.

It’s hard to make some things degrade nicely or even just make them work at all when the specs are not fully implemented in a platform–especially the CSS side of the spec

  • For example, the Shadow DOM selectors are not implemented in Firefox OS yet, so a component that uses Shadow DOM needs double styling selectors and some weird tricks to work in Gaia (Firefox OS’ UI layer) and platforms that do have support for Shadow DOM

And not directly related to Web Components, but in relation to spec work and disruptive browser support for new features:

  • Spec and browser people live in a different bubble where you can totally rewrite things from one day to the other. Throw everything away! Change the rules! No backwards compatibility? No problem!
  • But we need to be considerate with “normal” developers.
  • We need to understand that most people cannot afford to totally change their tool chain or workflows, or they just do not understand what we are getting at (reasons above)
  • Then if they try to understand, they go to mailing lists and they see those fights and all the politics and… they step back, or disappear for good. It’s just not a healthy environment. I am subscribed to several API lists and I only read them when I’m on a plane so I can’t go and immediately reply.
  • If the W3C, or any other standardisation organisation wants to attract “normal” developers to get more diverse inputs, they/we should start by being respectful to everyone. Don’t try to show everyone how superclever you are. Don’t be a jerk. Don’t scare people away, because then only the loud ones stay, and the quieter shy people, or people who have more urgent matters to attend (such as, you know, having a working business website even if it’s not using the latest and greatest API) will just leave.
  • So I want to remind everyone that we need to be considerate of each other’s problems and needs. We need to make an effort to speak other people’s language–and especially technical people need to do that. Confusing or opaque specs only lead to errors and misinterpretations.
  • We all want to make the web better, but we need to work on this together!

Thanks

With thanks to all the people whose brain I’ve been picking lately on the subject of Web Components: Angelina, Wilson, Francisco, Les, Potch, Fred and Christian.

Daniel StenbergUsing APIs without reading docs

This morning, my debug session was interrupted for a brief moment when two friends independently of each other pinged me to inform me about a talk at the current SEC-T conference going on here in Stockholm right now. It was yet again time to bring up the good old fun called libcurl API bashing. Again from the angle that users who don’t read the API docs might end up using it wrong.

Updated: You can see Meredith Patterson’s talk here, and the libcurl parts start at 24:15.

The specific libcurl topic at hand once again mostly had the CURLOPT_VERIFYHOST option in focus, with basically the same argument that was thrown at us two years ago when libcurl was said to be dangerous. It is not a boolean. It is an option that takes (or took) three different values, where 2 is the secure level and 0 is disabled.

SEC-T on curl API

(This picture is a screengrab from the live stream off youtube, I don’t have any link to a stored version of it yet. Click it for slightly higher resolution.)

Speaker Meredith L. Patterson actually spoke for quite a long time about curl and its options to verify server certificates. While I will agree that she has a few good points, it was still riddled with errors and I think she deliberately phrased things in a manner to make the talk good and snappy rather than to be factually correct and trying to understand why things are like they are.

The VERIFYHOST option apparently sounds as if it takes a boolean, but it doesn’t. She says verifying a certificate has to be a Yes/No question so obviously it is a boolean. First, let’s be really technical: the libcurl options that take numerical values always accept a ‘long’ and all documentation specifies which values you can pass in. None of them are boolean, not by actual type in the C language and not described like that in the man pages. There are however language bindings running on top of libcurl that may use booleans for the values that take 0 or 1, but there’s no guarantee we won’t add more values in the future to numerical options.

I wrote down a few quotes from her that I’d like to address.

“In order for it to do anything useful, the value actually has to be set to two”

I get it, she wants a fun presentation that makes the audience listen and grin cheerfully. But this is highly inaccurate. libcurl has it set to verify by default. An application doesn’t have to set it to anything. The only reason to set this value is if you’re not happy with checking the cert unconditionally, and then you’ve already wandered off the secure route.

“All it does when set to two is to check that the common name in the cert matches the host name in the URL. That’s literally all it does.”

No, it’s not. It “only” verifies the host name curl connects to against the name hints in the server cert, yes, but that’s a lot more than just the common name field.

“there’s been 10 versions and they haven’t fixed this yet [...] the docs still say they’re gonna fix this eventually [...] I wanna know when eventually is”

Qualified BS and ignorance of details. Let’s see the actual code first: it ignores the 1 value and returns an error, and thus leaves the internal default of 2. Alas, code that sets 1 or 2 gets the same effect == a verified certificate. Why is this a problem?

Then, she says she really wants to know when “eventually” is. (The docs say “Future versions will…”) So if she was so curious you’d think she would’ve tried to ask us? We’re an accessible bunch, on mailing lists, on IRC and on twitter. No she didn’t ask.

But perhaps most importantly: did she really consider why it returns an error for 1? Since libcurl silently accepted 1 as a value for something like 10 years, there are a lot of old installations “out there” in the wild, and by returning an error for 1 we try to make applications notice and adjust. By silently accepting 1 without errors, there would be no notice and people would keep using 1 in new applications as well, and thus when running such a newly written application with an older libcurl – you’d be back to having the security problem again. So, we have the error there to improve the situation.

“a peer is someone like you [...] a host is a server”

I’ve been a networking guy for 20+ years and I’m not used to people having a hard time understanding these terms. While perhaps there are rookies out in the world who don’t immediately understand some terms in the curl option names, should we really be criticized for that? I find that a hilarious critique. Also, these names were picked 13 years ago and we keep them around for compatibility and API stability.

“why would you ever want to …”

Welcome to the real world. Why would an application author ever want to set these options to something other than just full check and no check? Because people and software development is a large world with many different desires and use case scenarios, and curl is more widely used and abused than many people realize. Lots of people have wanted something other than just a Yes/No to server cert verification. In fact, I’ve had many users ask for even more switches and fine-grained ways to fiddle with verification. Yes/No is a layman’s simplified view of certificate verification.

SEC-T curl slide

(This picture is the slide from the above picture, just zoomed and straightened out a bit.)

API age, stability and organic growth

We started working on libcurl in spring 1999, we added the CURLOPT_SSL_VERIFYPEER option in October 2000 and we added CURLOPT_SSL_VERIFYHOST in August 2001. All that quite a long time ago.

Then add thousands of hours, hundreds of hackers, thousands of applications, a user count that probably surpasses one billion users by now. Then also add the fact that option names are sticky in the way we write docs, examples pop up all over the internet and everyone who’s close to the project learns them by name and spirit and we quite simply grow attached to them and the way they work. Changing the name of an option is really painful and causes a lot of confusion.

I’ve instead tried to more and more emphasize the functionality in the docs, to stress what the options do and how to do server cert verifications with curl the safe way.

I can’t force users to read docs. I can’t forbid users to blindly assume something and I’m not in control of, nor do I want to affect, the large population of third party bindings that exist for using on top of libcurl to cater for every imaginable programming language – and some of them may of course themselves have documentation problems and what not.

Would I change some of the APIs and names for options we have in libcurl if I would redo them today? Yes I would.

So what do we do about it?

I think this is the only really interesting question to take from all this. Everyone wants stable APIs. Everyone wants sensible and easy to understand APIs and as we can see they should also basically be possible to figure out without reading any documentation. And yet the API has to be powerful and flexible enough to be really useful for all those different applications.

At this point, with the options that we have, when you’ve done your mud slinging and the finger of blame is firmly pointed at us: how exactly do you suggest we move forward to fix these claimed problems?

Taking it personally

Before anyone tells me to not take it personally: curl is my biggest hobby and a project I’ve spent many years and thousands of hours on. Of course I take it personally, otherwise I would’ve stopped working in the project a long time ago. This is personal to me. I give it my loving care and personal energy and then someone comes here and throws ill-founded and badly researched criticisms at me. I think critics of open source projects should learn to discuss the matters with the projects directly as their primary approach, instead of using them to make their conference presentations more feisty.

Luke Wagnerasm.js on status.modern.ie

I was excited to see that asm.js has been added to status.modern.ie as “Under Consideration”. Since asm.js isn’t a JS language extension like, say, Generators, what this means is that Microsoft is currently considering adding optimizations to Chakra for the asm.js subset of JS. (As explained in my previous post, explicitly recognizing asm.js allows an engine to do a lot of exciting things.)

Going forward, we are hopeful that, after consideration, Microsoft will switch to “Under Development” and we are quite happy to collaborate with them and any other JS engine vendors on the future evolution of asm.js.

On a more general note, it’s exciting to see that there have been across-the-board improvements on asm.js workloads in the last 6 months. You can see this on arewefastyet.com or by loading up the Dead Trigger 2 demo on Firefox, Chrome or (beta) Safari. Furthermore, with the recent release of iOS8, WebGL is now shipping in all modern browsers. The future of gaming and high-performance applications on the web is looking good!

Kartikaya GuptaMaker Party shout-out

I've blogged before about the power of web scale; about how important it is to ensure that everybody can use the web and to keep it as level a playing field as possible. That's why I love hearing about announcements like this one: 127K Makers, 2513 Events, 86 Countries, and One Party That Just Won't Quit. Getting more people all around the world to learn about how the web works and keeping that playing field level is one of the reasons I love working at Mozilla. Even though I'm not directly involved in Maker Party, it's great to see projects like this having such a huge impact!

Henrik SkupinMemchaser 0.6 has been released

The Firefox Automation team would like to announce the release of memchaser 0.6. After nearly a year of no real feature updates, but also some weeks of not being able to run Memchaser in Firefox Aurora (34.0a2) at all (due to a regression), we decided to release the current state of development as a public release. We are aware that we still do not fully support the default Australis theme since Firefox 29.0, but that is an issue which will take some more time to finish up.

Changes in 0.6

  • Upgrade to Add-on SDK 1.17 (#201)
  • Fix test_start_stop_logging for ‘File not found’ error (#199)
  • contentURL expects a string in Panel() and Widget() constructors (#198)
  • Bring the minimizeMemory() implementation up to date (#193)
  • Use ‘residentFast’ instead of ‘resident’ for memory reporter (#192)
  • Use postMessage for widget and panel communication instead port object (#185)
  • Support incremental cycle collection statistics (#187)
  • Added a usage section to the README (#178)
  • Change require statements and update the SDK to 1.14 (#174)
  • Simplify .travis.yml (#172)
  • Use . (dot) to include files while invoking /bin/sh (#169)
  • Use failonerror attribute in exec tasks (#168)

For all the details about the 0.6 release please check our issue tracker on Github.

Matt BrubeckLet's build a browser engine! Part 6: Block layout

Welcome back to my series on building a toy HTML rendering engine:

This article will continue the layout module that we started in Part 5. This time, we’ll add the ability to lay out block boxes. These are boxes that are stacked vertically, such as headings and paragraphs.

To keep things simple, this code implements only normal flow: no floats, no absolute positioning, and no fixed positioning.

Traversing the Layout Tree

The entry point to this code is the layout function, which takes a LayoutBox and calculates its dimensions. We’ll break this function into three cases, and implement only one of them for now:

impl LayoutBox {
    /// Lay out a box and its descendants.
    fn layout(&mut self, containing_block: Dimensions) {
        match self.box_type {
            BlockNode(_) => self.layout_block(containing_block),
            InlineNode(_) => {} // TODO
            AnonymousBlock => {} // TODO
        }
    }

    // ...
}

A block’s layout depends on the dimensions of its containing block. For block boxes in normal flow, this is just the box’s parent. For the root element, it’s the size of the browser window (or “viewport”).

You may remember from the previous article that a block’s width depends on its parent, while its height depends on its children. This means that our code needs to traverse the tree top-down while calculating widths, so it can lay out the children after their parent’s width is known, and traverse bottom-up to calculate heights, so that a parent’s height is calculated after its children’s.

fn layout_block(&mut self, containing_block: Dimensions) {
    // Child width can depend on parent width, so we need to calculate
    // this box's width before laying out its children.
    self.calculate_block_width(containing_block);

    // Determine where the box is located within its container.
    self.calculate_block_position(containing_block);

    // Recursively lay out the children of this box.
    self.layout_block_children();

    // Parent height can depend on child height, so `calculate_height`
    // must be called *after* the children are laid out.
    self.calculate_block_height();
}

This function performs a single traversal of the layout tree, doing width calculations on the way down and height calculations on the way back up. A real layout engine might perform several tree traversals, some top-down and some bottom-up.

Calculating the Width

The width calculation is the first step in the block layout function, and also the most complicated. I’ll walk through it step by step. To start, we need the values of the CSS width property and all the left and right edge sizes:

fn calculate_block_width(&mut self, containing_block: Dimensions) {
    let style = self.get_style_node();

    // `width` has initial value `auto`.
    let auto = Keyword("auto".to_string());
    let mut width = style.value("width").unwrap_or(auto.clone());

    // margin, border, and padding have initial value 0.
    let zero = Length(0.0, Px);

    let mut margin_left = style.lookup("margin-left", "margin", &zero);
    let mut margin_right = style.lookup("margin-right", "margin", &zero);

    let border_left = style.lookup("border-left-width", "border-width", &zero);
    let border_right = style.lookup("border-right-width", "border-width", &zero);

    let padding_left = style.lookup("padding-left", "padding", &zero);
    let padding_right = style.lookup("padding-right", "padding", &zero);

    // ...
}

This uses a helper function called lookup, which just tries a series of values in sequence. If the first property isn’t set, it tries the second one. If that’s not set either, it returns the given default value. This provides an incomplete (but simple) implementation of shorthand properties and initial values.

Note: This is similar to the following code in, say, JavaScript or Ruby:

margin_left = style["margin-left"] || style["margin"] || zero;

Since a child can’t change its parent’s width, it needs to make sure its own width fits the parent’s. The CSS spec expresses this as a set of constraints and an algorithm for solving them. The following code implements that algorithm.

First we add up the margin, padding, border, and content widths. The to_px helper method converts lengths to their numerical values. If a property is set to 'auto', it returns 0 so it doesn’t affect the sum.

let total = [&margin_left, &margin_right, &border_left, &border_right,
             &padding_left, &padding_right, &width].iter().map(|v| v.to_px()).sum();

This is the minimum horizontal space needed for the box. If this isn’t equal to the container width, we’ll need to adjust something to make it equal.

If the width or margins are set to 'auto', they can expand or contract to fit the available space. Following the spec, we first check if the box is too big. If so, we set any expandable margins to zero.

// If width is not auto and the total is wider than the container, treat auto margins as 0.
if width != auto && total > containing_block.width {
    if margin_left == auto {
        margin_left = Length(0.0, Px);
    }
    if margin_right == auto {
        margin_right = Length(0.0, Px);
    }
}

If the box is too large for its container, it overflows the container. If it’s too small, it will underflow, leaving extra space. We’ll calculate the underflow—the amount of extra space left in the container. (If this number is negative, it is actually an overflow.)

let underflow = containing_block.width - total;

We now follow the spec’s algorithm for eliminating any overflow or underflow by adjusting the expandable dimensions. If there are no 'auto' dimensions, we adjust the right margin. (Yes, this means the margin may be negative in the case of an overflow!)

match (width == auto, margin_left == auto, margin_right == auto) {
    // If the values are overconstrained, calculate margin_right.
    (false, false, false) => {
        margin_right = Length(margin_right.to_px() + underflow, Px);
    }

    // If exactly one size is auto, its used value follows from the equality.
    (false, false, true) => { margin_right = Length(underflow, Px); }
    (false, true, false) => { margin_left  = Length(underflow, Px); }

    // If width is set to auto, any other auto values become 0.
    (true, _, _) => {
        if margin_left == auto { margin_left = Length(0.0, Px); }
        if margin_right == auto { margin_right = Length(0.0, Px); }

        if underflow >= 0.0 {
            // Expand width to fill the underflow.
            width = Length(underflow, Px);
        } else {
            // Width can't be negative. Adjust the right margin instead.
            width = Length(0.0, Px);
            margin_right = Length(margin_right.to_px() + underflow, Px);
        }
    }

    // If margin-left and margin-right are both auto, their used values are equal.
    (false, true, true) => {
        margin_left = Length(underflow / 2.0, Px);
        margin_right = Length(underflow / 2.0, Px);
    }
}

At this point, the constraints are met and any 'auto' values have been converted to lengths. The results are the used values for the horizontal box dimensions, which we will store in the layout tree. You can see the final code in layout.rs.

Positioning

The next step is simpler. This function looks up the remaining margin/padding/border styles, and uses these along with the containing block dimensions to determine this block’s position on the page.

fn calculate_block_position(&mut self, containing_block: Dimensions) {
    let style = self.get_style_node();
    let d = &mut self.dimensions;

    // margin, border, and padding have initial value 0.
    let zero = Length(0.0, Px);

    // If margin-top or margin-bottom is `auto`, the used value is zero.
    d.margin.top = style.lookup("margin-top", "margin", &zero).to_px();
    d.margin.bottom = style.lookup("margin-bottom", "margin", &zero).to_px();

    d.border.top = style.lookup("border-top-width", "border-width", &zero).to_px();
    d.border.bottom = style.lookup("border-bottom-width", "border-width", &zero).to_px();

    d.padding.top = style.lookup("padding-top", "padding", &zero).to_px();
    d.padding.bottom = style.lookup("padding-bottom", "padding", &zero).to_px();

    // Position the box below all the previous boxes in the container.
    d.x = containing_block.x +
          d.margin.left + d.border.left + d.padding.left;
    d.y = containing_block.y + containing_block.height +
          d.margin.top + d.border.top + d.padding.top;
}

Take a close look at that last statement, which sets the y position. This is what gives block layout its distinctive vertical stacking behavior. For this to work, we’ll need to make sure the parent’s height is updated after laying out each child.

Children

Here’s the code that recursively lays out the box’s contents. As it loops through the child boxes, it keeps track of the total content height. This is used by the positioning code (above) to find the vertical position of the next child.

fn layout_block_children(&mut self) {
    let d = &mut self.dimensions;
    for child in self.children.iter_mut() {
        child.layout(*d);
        // Track the height so each child is laid out below the previous content.
        d.height = d.height + child.dimensions.margin_box_height();
    }
}

The total vertical space taken up by each child is the height of its margin box, which we calculate just by adding up all the vertical dimensions.

impl Dimensions {
    /// Total height of a box including its margins, border, and padding.
    fn margin_box_height(&self) -> f32 {
        self.height + self.padding.top + self.padding.bottom
                    + self.border.top + self.border.bottom
                    + self.margin.top + self.margin.bottom
    }
}

For simplicity, this does not implement margin collapsing. A real layout engine would allow the bottom margin of one box to overlap the top margin of the next box, rather than placing each margin box completely below the previous one.

The ‘height’ Property

By default, the box’s height is equal to the height of its contents. But if the 'height' property is set to an explicit length, we’ll use that instead:

fn calculate_block_height(&mut self) {
    // If the height is set to an explicit length, use that exact length.
    match self.get_style_node().value("height") {
        Some(Length(h, Px)) => { self.dimensions.height = h; }
        _ => {}
    }
}

And that concludes the block layout algorithm. You can now call layout() on a styled HTML document, and it will spit out a bunch of rectangles with widths, heights, margins, etc. Cool, right?

Exercises

Some extra ideas for the ambitious implementer:

  1. Collapsing vertical margins.

  2. Relative positioning.

  3. Parallelize the layout process, and measure the effect on performance.

If you try the parallelization project, you may want to separate the width calculation and the height calculation into two distinct passes. The top-down traversal for width is easy to parallelize just by spawning a separate task for each child. The height calculation is a little trickier, since you need to go back and adjust the y position of each child after its siblings are laid out.

To Be Continued…

Thank you to everyone who’s followed along this far!

These articles are taking longer and longer to write, as I journey further into unfamiliar areas of layout and rendering. There will be a longer hiatus before the next part as I experiment with font and graphics code, but I’ll resume the series as soon as I can.

Paul Rouget(video) DevTools Timeline and Firefox OS

Just thought I'd share that (if you're a Firefox OS hacker, you want to read this).

youtube video

We recently landed a very basic timeline in the Firefox DevTools (enable it in the devtools options. Firefox and B2G nightly both required). It is basic. But already useful, especially to debug Firefox OS apps. For example today, we were looking at the System App and realized that the main thread was never idle. Looking at the timeline, we realized that a restyle was triggered many, many times per second, even if nothing was happening on the screen. By tweaking the DOM with the inspector, we figured it was coming from a CSS animation that was display:none (see bug 962594).

We are working hard to build new Firefox performance tools. Expect to see better tools coming in the coming months. More info about this new timeline tool here.

James LongTransducers.js: A JavaScript Library for Transformation of Data

If you didn't grab a few cups of coffee for my last post, you're going to want to for this one. While I was writing my last post about js-csp, a port of Clojure's core.async, the Clojure folks announced transducers, which solve a key problem when working with transformations of data. The technique works particularly well with channels (exactly what js-csp uses), so I dug into it.

What I discovered is mind-blowing. So I also ported it to JavaScript, and today I'm announcing transducers.js, a library to build transformations of data and apply them to any data type you could imagine.

Whoa, what did I just say? Let's take a step back for a second. If you haven't heard of transducers before, you can read about their history in Clojure's announcement. Additionally, there's an awesome post that explores these ideas in JavaScript and walks you through them from start to finish. I give a similar (but brief) walkthrough at the end of this post.

The word transduce is just a combination of transform and reduce. The reduce function is the base transformation; any other transformation can be expressed in terms of it (map, filter, etc).

var arr = [1, 2, 3, 4];

arr.reduce(function(result, x) {
    result.push(x + 1);
    return result;
}, []);
// -> [ 2, 3, 4, 5 ]

The function passed to reduce is a reducing function. It takes a result and an input and returns a new result. Transducers abstract this out so that you can compose transformations completely independent of the data structure. Here's the same call but with transduce:

function append(result, x) {
    result.push(x);
    return result;
}

transduce(map(x => x + 1), append, [], arr);

We created append to make it easier to work with arrays, and are using ES6 arrow functions (you really should too, they are easy to cross-compile). The main difference is that the push call on the array is now moved out of the transformation. In JavaScript we always couple transformation with specific data structures, and we've got to stop doing that. We can reuse transformations across all data structures, even streams.

There are three main concerns involved in making reduce work. The first is iterating over the source data structure. The second is transforming each value. The third is building up a new result.

These are completely separate concerns, and yet most transformations in JavaScript are tightly coupled with specific data structures. Transducers decouple these concerns, so you can apply all the available transformations on any data structure.

Transformations

We have a small amount of transformations that will solve most of your needs like map, filter, dedupe, and more. Here's an example of composing transformations:

sequence(
  compose(
    cat,
    map(x => x + 1),
    dedupe(),
    drop(3)
  ),
  [[1, 2], [3, 4], [4, 5]]
)
// -> [ 5, 6 ]

The compose function combines transformations, and sequence just creates a new collection of the same type and runs the transformations. Note that nothing within the transformations assumes anything about the data structure it comes from or where it's going.

Most of the transformations that transducers.js provides can also simply take a collection, and it will immediately run the transformation over the collection and return a new collection of the same type. This lets you do simple transformations the familiar way:

map(x => x + 1, [1, 2, 3, 4]);
filter(x => x % 2 === 0, [1, 2, 3, 4])

These functions are highly optimized for the builtin types like arrays, so the above map literally just runs a while loop and applies your function over each value.
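As a rough sketch of what that fast path looks like (my own illustration, not the library's actual source):

function fastArrayMap(f, arr) {
  // A plain loop over a native array: no reduce machinery involved,
  // and no allocations beyond the single result array.
  var result = new Array(arr.length);
  var i = 0;
  while (i < arr.length) {
    result[i] = f(arr[i]);
    i++;
  }
  return result;
}

fastArrayMap(x => x + 1, [1, 2, 3, 4]);
// -> [ 2, 3, 4, 5 ]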

Iterating and Building

These transformations aren't useful unless you can actually apply them. We figured out the transform concern, but what about iterate and build?

First lets take a look at the available functions for applying transducers:

  • sequence(xform, coll) - get a collection of the same type and fill it with the results of applying xform over each item in coll
  • transduce(xform, f, init, coll) - reduce a collection starting with the initial value init, applying xform to each value and running the reducing function f
  • into(to, xform, from) - apply xform to each value in the collection from and append it to the collection to

Each of these has different levels of assumptions. transduce is the lowest-level in that it iterates over coll but lets you build up the result. into assumes the result is a collection and automatically appends to it. Finally, sequence assumes you want a collection of the same type so it creates it and fills it with the results of the transformation.
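To make the difference concrete, here's the same transformation run through each entry point (a quick sketch reusing the append helper from earlier):

var xform = map(x => x + 1);

// transduce: you supply the reducing function and the initial value.
transduce(xform, append, [], [1, 2, 3]);
// -> [ 2, 3, 4 ]

// into: you supply the target collection, and it handles appending.
into([], xform, [1, 2, 3]);
// -> [ 2, 3, 4 ]

// sequence: it creates a new collection of the same type as the source.
sequence(xform, [1, 2, 3]);
// -> [ 2, 3, 4 ]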

Ideally our library wouldn't care about the details of iteration or building either, otherwise it kind of kills the point of generic transformations. Luckily ES6 has an iteration protocol, so we can use that for iteration.

But what about building? Unfortunately there is no protocol for that, so we need to create our own. transducers.js looks for @@append and @@empty methods on a collection for adding to it and creating new collections. (Of course, it works out of the box for native arrays and objects).

Let's drive this point home with an example. Say you wanted to use the immutable-js library. It already supports iteration, so you can automatically do this:

into([],
     compose(
       map(x => x * 2),
       filter(x => x > 5)
     ),
     Immutable.Vector(1, 2, 3, 4));
// -> [ 6, 8 ]

We really want to use immutable vectors all the way through, so let's augment the vector type to support "building":

Immutable.Vector.prototype['@@append'] = function(x) {
  return this.push(x);
};

Immutable.Vector.prototype['@@empty'] = function(x) {
  return Immutable.Vector();
};

Now we can just use sequence, and we get an immutable vector back:

sequence(compose(
           map(x => x * 2),
           filter(x => x > 5)
         ),
         Immutable.Vector(1, 2, 3, 4));
// -> Immutable.Vector(6, 8)

This is experimental, so I would wait a little while before using this in production, but so far this gives a surprising amount of power for a 500-line JavaScript library.

Implications

Works with Everything (including Streams and Channels)!

Let's play around with all the kinds of data structures we can use now. A type must at least be iterable to use with into or transduce, but if it is also buildable then it can also be used with sequence or the target collection of into.

var xform = compose(map(x => x * 2),
                    filter(x => x > 5));


// arrays (iterable & buildable)

sequence(xform, [1, 2, 3, 4]);
// -> [ 6, 8 ]

// objects (iterable & buildable)

into([],
     compose(map(kv => kv[1]), xform),
     { x: 1, y: 2, z: 3, w: 4 })
// -> [ 6, 8 ]

sequence(map(kv => [kv[0], kv[1] + 1]),
         { x: 1, y: 2, z: 3, w: 4 })
// -> { x: 2, y: 3, z: 4, w: 5 }

// generators (iterable)

function *data() {
  yield 1;
  yield 2;
  yield 3;
  yield 4;
}

into([], xform, data())
// -> [ 6, 8 ]

// Sets and Maps (iterable)

into([], xform, new Set([1, 2, 3, 3]))
// -> [ 6 ]

into({}, map(kv => [kv[0], kv[1] * 2]), new Map([['x', 1], ['y', 2]]))
// -> { x: 2, y: 4 }

// or make it buildable

// Map entries arrive as [key, value] pairs, so append them with set().
Map.prototype['@@append'] = function(kv) { return this.set(kv[0], kv[1]); };
Map.prototype['@@empty'] = function() { return new Map(); };
Set.prototype['@@append'] = Set.prototype.add;
Set.prototype['@@empty'] = function() { return new Set(); };

sequence(xform, new Set([1, 2, 3, 2]))
sequence(xform, new Map([['x', 1], ['y', 2]]));

// node lists (iterable)

into([], map(x => x.className), document.querySelectorAll('div'));

// custom types (iterable & buildable)

into([], xform, Immutable.Vector(1, 2, 3, 4));
into(MyCustomType(), xform, Immutable.Vector(1, 2, 3, 4));

// if implemented append and empty:
sequence(xform, Immutable.Vector(1, 2, 3, 4));

// channels

var ch = chan(1, xform);

go(function*() {
  yield put(ch, 1);
  yield put(ch, 2);
  yield put(ch, 3);
  yield put(ch, 4);
});

go(function*() {
  while(!ch.closed) {
    console.log(yield take(ch));
  }
});
// output: 6 8

Now that we've decoupled the data that comes in, how it's transformed, and what comes out, we have an insane amount of power. And with a pretty simple API as well.

Did you notice that last example with channels? That's right, a js-csp channel, which I introduced in my last post, can now take a transducer to apply over each item that passes through the channel. This easily lets us do Rx-style (reactive) code by simply reusing all the same transformations.

A channel is basically just a stream. You can reuse all of your familiar transformations on streams. That's huge!

This is possible because transducers work differently: instead of applying each transformation to a collection one at a time (and creating multiple intermediate collections), they take each value separately and fire it through the whole transformation pipeline. That leads us to the next point, in which there are...

No Intermediate Allocations!

Not only do we have a super generic way of transforming data, we get good performance on large arrays. This is because transducers create no intermediate collections. If you want to apply several transformations, usually each one is performed in order, creating a new collection each time.

Transducers, however, take one item off the collection at a time and fire it through the whole transformation pipeline. So it doesn't need any intermediate collections; each value runs through the pipeline separately.

Think of it as favoring a computational burden over a memory burden. Since each value runs through the pipeline, there are several function calls per item but no allocations, instead of 1 function call per item but an allocation per transformation. For small arrays there is a small difference, but for large arrays the computation burden easily wins out over the memory burden.
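As a rough sketch of the difference (illustrative only, not a benchmark):

var big = [];
for (var i = 0; i < 200000; i++) { big.push(i); }

// Chained array methods: each step allocates a full intermediate array.
var viaChaining = big.map(x => x * 2)      // intermediate array
                     .filter(x => x > 5);  // final array

// Transducers: each value runs through the whole pipeline once,
// so only the final output array is allocated.
var viaTransducers = into([], compose(map(x => x * 2),
                                      filter(x => x > 5)),
                          big);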

To be frank, early benchmarks show that this doesn't win anything in V8 until you reach a size of around 100,000 items (after that this really wins out). So it only matters for very large arrays. It's too early to post benchmarks.

How a Transducer is Born

If you are interested in walking through how transducers generalize reduce into what you see above, read the following. Feel free to skip this part though, or read this post which also does a great job of that.

The reduce function is the base transformation; any other transformation can be expressed in terms of it (map, filter, etc), so let's start with that. Here's an example call to reduce, which is available on native JS arrays:

var arr = [1, 2, 3, 4];

arr.reduce(function(result, x) { return result + x; }, 0);
// -> 10

This sums up all numbers in arr. Pretty simple, right? Hm, let's try and implement map in terms of reduce:

function map(f, coll) {
  return coll.reduce(function(result, x) {
    result.push(f(x));
    return result;
  }, []);
}

map(function(x) { return x + 1; }, arr);
// -> [2, 3, 4, 5]

That works. But our map only works with native JS arrays. It assumes a lot of knowledge about how to reduce, how to append an item, and what kind of collection to create. Shouldn't our map only be concerned with mapping? We've got to stop coupling transformations with data; every single collection is forced to completely re-implement map, filter, take, and all the collection operations, with varying incompatible properties!

But how is that possible? Well, let's start with something simple: the mapping function that we meant to create. It's only concerned with mapping. The key is that reduce will always be at the bottom of our transformation, but there's nothing stopping us from abstracting the function we pass to reduce:

function mapper(f) {
  return function(result, x) {
    result.push(f(x));
    return result;
  }
}

That looks better. We would use this by doing arr.reduce(mapper(function(x) { return x + 1; }), []). Note that now mapper has no idea how the reduction is actually done, or how the initial value is created. Unfortunately, it still has result.push embedded so it still only works with arrays. Let's abstract that out:

function mapper(f) {
  return function(combine) {
    return function(result, x) {
      return combine(result, f(x));
    }
  }
}

That looks crazy, but now we have a mapper function that is literally only concerned about mapping. It calls f with x before passing it to combine. The above function may look daunting, but it's simple to use:

function append(arr, x) {
    arr.push(x);
    return arr;
}

arr.reduce(mapper(function(x) { return x + 1; })(append),
           []);
// -> [ 2, 3, 4, 5 ]

We create append to make it easy to functionally append to arrays. So that's about it, now we can just make this a little easi-- hold on, doesn't combine look a little like a reducer function?

If the result of applying append to the result of mapper creates a reducer function, can't we apply that itself to mapper?

arr.reduce(
  mapper(function(x) { return x * 2; })(
    mapper(function(x) { return x + 1; })(append)
  ),
  []
);
// -> [ 3, 5, 7, 9 ]

Wow! So now we can compose these super generic transformation functions. For example, let's create a filterer. You wouldn't normally apply two maps right next to each other, but you would certainly map and filter!

function filterer(f) {
  return function(combine) {
    return function(result, x) {
      return f(x) ? combine(result, x) : result;
    }
  }
}

arr.reduce(
  filterer(function(x) { return x > 2; })(
    mapper(function(x) { return x * 2; })(append)
  ),
  []
);
// -> [ 6, 8 ]

Nobody wants to write code like that though. Let's make one more function, compose, which makes it easy to compose these: that's right, transducers. You just wrote transducers without even knowing it.

// All this does is transform
// `compose(x(), y(), z())(val)` into `x(y(z(val)))`
function compose() {
  var funcs = Array.prototype.slice.call(arguments);
  return function(r) {
    var value = r;
    for(var i=funcs.length-1; i>=0; i--) {
      value = funcs[i](value);
    }
    return value;
  }
}

arr.reduce(
  compose(
    filterer(function(x) { return x > 2; }),
    mapper(function(x) { return x * 2; })
  )(append),
  []
);
// -> [ 6, 8 ]

Now we can write really clean sequential-looking transformations! Hm, there's still that awkward syntax to pass in append. How about we make our own reduce function?

function transduce(xform, f, init, coll) {
  return coll.reduce(xform(f), init);
}

transduce(
  compose(
    filterer(function(x) { return x > 2; }),
    mapper(function(x) { return x * 2; })
  ),
  append,
  [],
  arr
);
// -> [ 6, 8 ]

Voilà, you have transduce. Given a transformation, a function for appending data, an initial value, and a collection, run the whole process and return the final result from whatever append is. Each of those arguments is a distinct piece of information that shouldn't care at all about the others. You could easily apply the same transformation to any data structure you can imagine, as you saw above.

This transduce is not completely correct, though: it shouldn't care how the collection reduces itself.
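
A more general version would walk the collection itself instead of delegating to coll.reduce. Here's a rough sketch of the idea (not how the library actually implements it):

// works over anything iterable: arrays, Sets, Maps, generators, ...
function genericReduce(f, init, coll) {
  var result = init;
  for (var x of coll) {
    result = f(result, x);
  }
  return result;
}

function transduce(xform, f, init, coll) {
  return genericReduce(xform(f), init, coll);
}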

Final Notes

You might think that this is sort of lazy evaluation, but that's not true. If you want lazy sequences, you will still have to explicitly build a lazy sequence type that handles those semantics. This just makes transformations first-class values, but you still always have to eagerly apply them. Lazy sequences are something I think should be added to transducers.js in the future. (edit: well, this paragraph isn't exactly true, but we'll have to explain laziness more in the future)

Some of the examples may also feel similar to ES6 comprehensions, but while they look alike, comprehensions don't give you the ability to control what type is built up. You can only get a generator or an array back. They also aren't composable; you still need to solve the problem of building up transformations that can be reused.

When you correctly separate concerns in a program, it breeds super simple APIs that allow you to build up all sorts of complex programs. This is a simple 500-line JavaScript library that, in my opinion, radically changes how I interact with data, and all with just a few methods.

transducers.js is still early work and it will be improved a lot. Let me know if you find any bugs (or if it blows your mind).

David BoswellLearning more about the Mozilla community

We’ve learned a lot this year as we’ve been working on enabling communities that have impact. We discovered that there is a high contributor churn rate, that scaling the size of the community doesn’t meet the needs of all teams, and that there is a need to make community building more reliable and predictable.

There are still many more questions than answers right now though and there is much more to learn. A contributor audit was done in 2011 that had useful findings and recommendations, but it has now been 3 years since that research was completed. It is time to do more.


Research has provided other community-driven organizations with lessons that help them be more successful. For example, Lego has an active community and research has helped them develop a set of principles that promote successful interactions that provide value for both community members and the Lego organization.

We’ll be kicking off a new research project soon and we’d love to get your help. This will involve creating a survey to send out to Mozillians and will also dive into the contributor data we’ve started collecting. This won’t answer all the questions we have, but this will give us some insight and can provide a starting point for other research projects.


Some specific asks for helping with the survey include thinking about what questions we want to ask and thinking about the audience of people we want to reach out to. For the data analysis part, please comment here or contact me if you’re interested in helping.


Henrik Skupinmozdownload 1.12 has been released

The Firefox Automation team would like to announce the release of mozdownload 1.12. Since there hasn’t been another release of our universal download tool in the last 7 months, a couple of nice new features and bug fixes make this release even more useful. You can upgrade your installation easily via pip, or by downloading it from PyPI.

Changes in 1.12

  • Display selected build when downloading (#149)
  • Add support for downloading B2G desktop builds (#104)
  • Download candidate builds from candidates/ and not nightly/ (#218)
  • Add Travis CI build status and PyPI version badges to README (#220)
  • Add Python 2.6 to test matrix (#210)
  • Fix broken download of mac64 tinderbox builds (#144)
  • Allow download even if content-length header is missing (#194)
  • Convert run_tests script to Python (#168)
  • Ensure that --date option is a valid date (#196)

Henrik SkupinMozmill 2.0.7 and 2.0.8 have been released

The Firefox Automation team would like to announce the release of Mozmill 2.0.7 and Mozmill 2.0.8. Both versions had to be released in such a short time frame to ensure continuing support for Firefox. Some recent changes in Firefox Nightly broke Mozmill, or at least made it misbehave. If you run tests with Mozmill, make sure to upgrade to the latest version. You can do this via PyPI, or simply download the already pre-configured environment.

Changes in 2.0.7

  • Bug 1066295 – testMultipleLoads.js is failing due to new HTTP Cache v2
  • Bug 1000098 – Fix testPageLoad.js test for invalid cert page
  • Bug 1065436 – Disable e10s until full support landed
  • Bug 1062773 – Disconnect errors invalidate the report
  • Bug 999393 – Expose assert and expect by default in sub modules
  • Bug 970820 – Mozmill aborts with socket.timeout when trying to send the report

Changes in 2.0.8

  • Bug 1067939 – JSBridge and Mozmill broken due to ‘let’ changes in bug 1001090

Please keep in mind that Mozmill 2.0 does not support electrolysis (e10s) builds of Firefox yet. We are working hard to get full support for e10s added, and hope it will be done by the next version bump in mid-October.

Thanks to everyone who helped with those releases!

Sean MartellMozilla ID Project: Palette Explorations

Churning along with our explorations into the Mozilla brand, we’ve been looking at nailing down an expanded base color palette to bring more choice and a bolder feel. We’ve injected the colors used in the Firefox color palette – derived from the logo itself – and added a few more previously missed hues, such as greens and pinks.


Along with the straight expansion of a single set, we’ve picked a vibrant set to complement the base. With the two sets we then chose two stepped-down variants for each. This makes 6 variants for each of the named colors within the set.

All this is still in the works and not fully locked down, but we thought we’d share the thinking around this and offer a glimpse at where we’re going. We’re also updating the names of the base colors and adding a few fun labels in the process. Again, the labels aren’t finalized yet either, but thought we’d share. More to come on color and usage scenarios later!

Byron Joneswe’re changing the default search settings for advanced search

at the end of this week we will change the default search settings on bugzilla.mozilla.org’s advanced search page:


the resolution DUPLICATE will no longer be selected by default – only open bugs will be searched if the resolution field is left unmodified.

this change will not impact any existing saved searches or queries.

 

DUPLICATE has been a part of our default query for as long as i can remember, and was included to accommodate using that form to search for existing bugs.

since the addition of “possible duplicates” to the bug creation workflow, the importance of searching duplicates has lessened, and returning duplicates by default to advanced users is more of a hindrance than a help.  the data reflects this – the logs indicate that over august less than 4% of advanced queries included DUPLICATE as a resolution.

 

this change is being tracked in bug 1068648.


Filed under: bmo

Mozilla Release Management TeamFirefox 33 beta3 to beta4

  • 43 changesets
  • 114 files changed
  • 6806 insertions
  • 3156 deletions

Extension   Occurrences
cpp         49
h           28
js          12
cc          9
build       4
xul         2
java        2
xml         1
sh          1
dat         1

Module      Occurrences
security    50
gfx         12
media       11
js          9
ipc         7
mobile      3
dom         3
content     3
browser     3
image       2
caps        2
accessible  2
netwerk     1
modules     1

List of changesets:

Robert O'CallahanBug 1063052. NS_RUNTIMEABORT if a builtin stylesheet fails to load. r=heycam,a=lmandel - c43d3d833973
EKRBug 1063730 - Require HTTPS for Screen/window sharing. r=mt,sstamm a=lmandel - a706b85f6d4d
Jim MathiesBug 1066242 - Use a 'ui' chromium message loop/pump for the Windows compositor thread so that it can process native windowing events. r=Bas a=sylvestre - e3fe616ef9a2
Alexander SurkovBug 1020039 - Fix intermittent failures in relations/test_embeds.xul. a=test-only - 5007a59d2d92
Nick AlexanderBug 1041770 - Update missed reference. r=mrbkap, a=lmandel - 40044a225ae7
Chia-hung TaiBug 1057174 - [WebRTC] |DesktopDeviceInfoImpl::initializ| in desktop_device_info.cc use wrong argument while calling snprintf. r=rjesup, a=sledru - 645d232705b3
Tim AbraldesBug 1027906 - Set delayed token level for GMP plugin processes to USER_RESTRICTED. Whitelist certain files and registry keys that are required for EME plugins to successfully load. r=bobowen. r=jesup, r=bent, a=lmandel - 0af2575571f3
Jim MathiesBug 1066242 - Use a 'ui' chromium message loop/pump for the Windows compositor thread so that it can process native windowing events. r=Bas a=sylvestre - a128f3f1ce1f
Jim MathiesBug 1060738 - Add support for webrtc ThreadWindowsUI for use by webrtc desktop capture thread. r=jesup a=sylvestre - 2c6a2069023a
Jim MathiesBug 1060738 - Implement MessagePumpForNonMainUIThreads for Windows, a xpcom compatible subclass of chromium's MessagePumpForUI. r=tabraldes a=sylvestre - b6a5a3973477
Jim MathiesBug 1060738 - Switch to using chromium's Thread/tasks in MediaManager. On Windows, use MessagePumpForNonMainUIThreads for the background media thread. r=jesup a=sylvestre - 1355bb2a2765
Jim MathiesBug 1060738 - Add IsGUIThread asserts in various webrtc capture related methods. r=jesup a=sylvestre - 84daded3719c
Gervase MarkhamBug 1065977 - Uplift recent PSL changes to the release branches. a=lmandel - 5b7a15b4fee2
Gian-Carlo PascuttoBug 1063547 - Return no available devices where not supported, disable on Android. r=jesup, a=lmandel - abbbaa040046
Michael WuBug 1063733 - Optimize DataSourceSurface allocation. r=bas, r=seth, a=sledru - 5982da7a1215
Dão GottwaldBug 1061947 - Avoid flushing layout and making it dirty repeatedly in ToolbarIconColor.inferFromText. r=gijs, a=lmandel - 2938d6cea847
Lucas RochaBug 1041448 - Fix crash when double-tapping on empty top site spot. r=bnicholson, a=sledru - e0c49c71cc55
Margaret LeibovicBug 1063518 - Hide MLS "Learn More" link when MLS is disabled. r=liuche, a=lmandel - 275330447f6d
Mark FinkleBug 887755 - Lightweight theme preview is broken. r=margaret, a=lmandel - c21b3ccb9c19
Nicolas SilvaBug 1041744 - Don't crash if tile allocation fails. r=Bas, a=sledru - 9148cd599e9f
Nicolas SilvaBug 1061696 - Don't crash release builds when failing to allocate a surface in AutoRestoreClippedOut::save. r=Bas, a=sledru - ff9cef7b2f9d
Richard NewmanBug 1065531 - Crash in java.lang.NoSuchMethodError: android.os.Bundle.getString at org.mozilla.gecko.preferences.GeckoPreferences.setupPreferences. r=nalexander, a=lmandel - 430b3512f177
David MajorBug 1058131 - Avoid getting a crashy hook from Avast 10 Beta. r=bzbarsky, a=sledru - cd04e5bf0fec
Bobby HolleyBug 1061136 - Assume both http:// and https:// for schemeless URIs in CAPS prefs. r=bz, a=sledru - e608db37bafb
Bobby HolleyBug 1053725 - When one domain is whitelisted for file:// URI access, whitelist all subdomains. r=bz, a=sledru - a91c79c7e64e
Bobby HolleyBug 1008481 - Switch to the root dir instead of the profile dir. a=test-only - f58da8f6f47e
Michael ComellaBug 1062338 - Remove useless ic_menu_back drawable xml. r=lucasr, a=sledru - 1c636d0e8ec1
Brian SmithBug 1039064: Use strongly-typed enum instead of NSPR-style error handling, r=keeler a=lmandel - f3115a9f645c
David KeelerBug 1040446 - mozilla::pkix: add error code for CA cert used as end-entity cert r=briansmith a=lmandel - 15c382469fd1
David KeelerBug 1034124 - allow overrides when a CA cert is used as an end-entity cert r=briansmith a=lmandel - 198d06258284
Ryan VanderMeulenBug 1037618 - Skip ice_unittest on OSX. a=test-only - 0225b61c4f71
Jan de MooijBug 1057598 - Suppress the object metadata callback in RStringSplit::recover. r=nbp, a=sledru - 62f5d35f2210
Ryan VanderMeulenBug 1057598 - s/warmup/usecount on older release branches. rs=nbp, a=bustage - 3e6571e74e01
Dan GohmanBug 1054972 - IonMonkey: Truncation for phis. r=nbp, a=sledru - 94dc71a06159
Dan GohmanBug 1054972 - IonMonkey: GVN: More misc cleanups. r=nbp, a=sledru - c0d46e44a6cb
Dan GohmanBug 1054972 - IonMonkey: GVN: Avoid setting UseRemoved flags unnecessarily. r=nbp, a=sledru - 316374007734
Dan GohmanBug 1062612 - IonMonkey: Fix cast insertion for truncation of phi operands. r=nbp, a=lmandel - c5ee54bc44f8
Ryan VanderMeulenBacked out 3 changesets (Bug 1039064, Bug 1040446, Bug 1034124) for ASAN xpcshell hangs. - b8c9b76b6585
Chris CooperBug 1066403 - replace empty blocklist - a=blocklist-update - 06300676d4cd
Gabriel LuongBug 1061003 - Add New Rule won't work in non-english locales. r=harth, a=lmandel - bacdfedd7241
Brian SmithBug 1039064 - Use strongly-typed enum instead of NSPR-style error handling. r=keeler, a=lmandel - 1f599d357743
David KeelerBug 1040446 - mozilla::pkix: add error code for CA cert used as end-entity cert. r=briansmith, a=lmandel - 93cd4a068e9d
David KeelerBug 1034124 - Allow overrides when a CA cert is used as an end-entity cert. r=briansmith, a=lmandel - a6856f90ce36

Kim MoirMozilla Releng: The ice cream

A week or so ago, I was commenting in IRC that I was really impressed that our interns had such amazing communication and presentation skills.  One of the interns, John Zeller, said something like "The cream rises to the top", to which I replied "Releng: the ice cream of CS".  From there, the conversation went on to discuss what would be the best ice cream flavour to make that would capture the spirit of Mozilla releng.  The consensus at the end was that Irish Coffee (coffee with whisky) with cookie dough chunks was the favourite, because a lot of people on the team like coffee, whisky makes it better, and who doesn't like cookie dough?

I made this recipe over the weekend with some modifications.  I used the coffee recipe from the Perfect Scoop.  After it was done churning in the ice cream maker,  instead of whisky, which I didn't have on hand, I added Kahlua for more coffee flavour.  I don't really like cookie dough in ice cream but cooked chocolate chip cookies cut up with a liberal sprinkling of Kahlua are tasty.

Diced cookies sprinkled with Kahlua

Ice cream ready to put in freezer

Finished product
I have to say, it's quite delicious :-) If open source ever stops being fun, I'm going to start a dairy empire.  Not really. Now back to bugzilla...

Daniel StenbergSnaxx delivers

A pint of Guinness

Late in the year 1999 I quit my job. I handed over a signed paper where I wrote that I quit and then I started my new job first thing in the year 2000. I had a bunch of friends at the job I left and together with my closest friends (who coincidentally also switched jobs at roughly the same time) we decided we needed a way to keep in touch with friends that isn’t associated with our current employer.

The fix, the “employer independent” social thing to help us keep in touch with friends and colleagues in the industry, started on the last day of February 2000. The 29th of February, since it was a leap year; that fact alone must’ve been a subject of discussion at that first meetup.

Snaxx was born.

Snaxx is getting a bunch of friends to a pub somewhere in Stockholm. Preferably a pub with lots of great beers and a sensible sound situation. That means as little music as possible and certainly no TVs or anything. We keep doing them at a pace of two or three per year or so.

Bishops Arms logo

Yesterday we had the 31st Snaxx and just under 30 guests showed up (that might actually have been the new all time high). We had many great beers, food and we argued over bug reporting, discussed source code formats, electric car charging, C64 nostalgia, mentioned Linux kernel debugging methods, how to transition from Erlang to javascript development and a whole load of other similarly very important topics. The Bishops Arms just happens to be a brand of pubs here that have a really sensible view on how to run pubs to be suitable for our events so yesterday we once again visited one of their places.

Thanks for a great time yesterday, friends! I’ll be setting up a date for number 32 soon. I figure it’ll be in the January 2015 time frame… If you want to get notified by email, sign up on the snaxx mailing list.

A few pictures from yesterday can be found on the Snaxx-31 G+ event page.

Raniere SilvaMathml September Meeting

Mathml September Meeting


This is a report about the Mozilla MathML September Meeting. The topics of the meeting can be found in this PAD (local copy of the PAD). The meeting took place on appear.in, so we don’t have a log.

The next meeting will be on October 10th at 8pm UTC (note that October 10th is a Friday; we change the day to try to make the meeting suitable for as many people as possible). Please add topics to the PAD.

Read more...

Ludovic HirlimannGnupg / PGP key signing party in mozilla's San francisco space

I’m organizing a PGP key signing party in the Mozilla San Francisco office on September 26th 2014, from 6PM to 8PM.

For security and assurance reasons I need to count how many people will attend. I’ve set up an Eventbrite event for that at https://www.eventbrite.com/e/gnupg-pgp-key-signing-party-making-the-web-of-trust-stronger-tickets-12867542165 (please take one ticket if you think you’ll attend - if you change your mind, cancel so more people can come).

I will use the Eventbrite tool to send reminders, and I will try to make a list with keys and fingerprints before the event to make things more manageable (but I don’t promise).

For those using Lanyrd, you will be able to use http://lanyrd.com/ccckzw. (Please tweet the event to get more people in.)

Clint TalbertGetting Ready for Fall

As leaves begin to turn in some places, the Bay area gets a nice blast of heat in September. It’s like a week of summer that forgot to happen back when we were socked in with fog in July. But, that nice last bit of warmth before our leaves start to turn (yes we have some leaves that turn) is also the harbinger of the fourth quarter at Mozilla. This year, I’m excited to start putting into place some of the changes we identified at the work week. So, to that end, three quick notes.

First, we are starting off with a Quality and Automation Community Call. This is going to be in the style of the WebMaker Community Calls, and the WebMaker folks have been graciously teaching us how they work. Our first one will be next Tuesday at 8AM Pacific/15:00 UTC. Unlike most of our other meetings where we talk in glib Mozilla shorthand, this will be a call that will focus on fully explaining a few specific projects that people can help with Right Now, regardless of their skill level. And, this will also be a forum for folks in the community to tell us about what they are doing. Giving our community a way to easily let us know what they are doing, and giving them a forum to talk to one another is not something we do enough of at Mozilla. So, I’m extremely excited to see where this goes. Additionally, we are going to do this in conjunction with the Automation and Tools team and I’m extremely psyched about starting this process off. Many thanks to Lyre Calliope who helped nudge us in this direction.

Second, we are going to define our efforts in quality around some core principles, which I can talk more about later, but I’ll briefly introduce them here. They are probably best envisioned as a set of overlapping circles in a Venn diagram:

Dependability, Delight, Security & Privacy, Performance underpinned by the web platform

There are likely only two things in there that you weren’t expecting. One is Delight–in this ultra competitive world, it’s not enough for our products to be right and correct. We have to go the extra mile and delight our users with our products. Nothing short of that is good enough.

The other one is the web platform. At Mozilla, our mission to extend, enhance and empower the web platform while ensuring it remains open underpins everything we do. As an odd anachronism, our QA systems and metrics have not historically taken into account how a given feature does or does not help to move the web platform forward. Likewise, we’ve not been very involved in looking at how we are implementing and testing our support for various web standards in any kind of systemic way. While I don’t believe that there are problems in these areas, I do believe that this is a core piece of quality at Mozilla and it is an area we should work to get more involved in and be more cognizant of.

Third, time is unfortunately a zero sum game.  The amount of work doesn’t shrink, especially when attempting to venture into new areas. So in order to make room for our new directions, we are going to experiment with stopping and/or pausing some of our endeavors. That can be scary when you’re changing how things have been done for years, but it’s what we need to do to move Mozilla forward. The world we live in has changed, and there is no going back. There is constant iteration toward our vision of being a more technically astute, more data-driven, more community empowering team that propels quality forward.

Armen ZambranoWhich builders get added to buildbot?

To add/remove jobs on tbpl.mozilla.org, we have to modify buildbot-configs.

Making changes can be learnt by looking at previous patches; however, there's a bit of an art to getting it right.

I just landed a script that sets up buildbot for you inside of a virtualenv and you can pass a buildbot-config patch and determine which builders get added/removed.

You can run this by checking out braindump and running something like this:
buildbot-related/list_builder_differences.sh -j path_to_patch.diff

NOTE: This script does not check that the job has all the right parameters once live (e.g. you forgot to specify the mozharness config for it).

Happy hacking!


Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

Tantek ÇelikIndieWebCampUK 2014 Hack Day Demos: HTTPS, #webactions, new & improved #indieweb sites

One weekend ago, 18 IndieWebCampUK participants (including 2 remote) showed 25 demos in just under 75 minutes of what they designed and built that weekend in 19 different interoperable projects. Every single demo exemplified an indieweb community member scratching their own personal site itch(es), helping each other do so, and together advancing the state of the indieweb. We can all say:

I'm building Indie Web Camp.

During the demos I took realtime notes in IRC, with some help from Barnaby Walters. Archived on the IndieWebCamp wiki, here's a summary of what each of us got working.

Glenn Jones

Glenn Jones built improvements to Transmat. (IRC notes)

He built a map view that shows the venues nearest to his current location (via GeoLocation API).

He also found an open source HTML5 JS pedometer and repurposed it into Transmat so that, when running on his Android phone as a web app, it can detect when he's walking and only do GPS lookups while he's walking, so it saves battery.

Now he has an HTML5 JS app that can auto-checkin for him while he's walking.

Barnaby and Pelle

Barnaby Walters and Pelle Wessman built cross-site reply webactions that work purely via their websites - no browser extension needed! This is the first time this has been done. (IRC notes)

Barnaby has set up registerProtocolHandler on Taproot to register a handler for the "web+indie:" (since updated to "web+action:") protocol when he loads a particular page on his website, so that his website is registered to handle webactions via the <indie-action> tag.

Barnaby demonstrates loading the page that calls registerProtocolHandler. The browser asks to confirm that he wants waterpigs.co.uk to handle "web+indie" URLs.

Then Barnaby goes to Pelle's website home page where he has a list of posts that he's written, now with "Reply", "Like", and "Tip" webactions next to each post, each webaction represented and wrapped by <indie-action> tags in the markup.

Pelle's site also has a web component (open sourced on GitHub at https://github.com/voxpelli/indie-action-component) to handle his <indie-action> tags. It creates an iframe that uses that same protocol handler via a Promise, connecting the iframe to the handler that Taproot registered.

Thus without anything installed in the browser, Barnaby can go to Pelle's site, click the "Reply" button next to a post which automatically goes to Barnaby's site's Taproot UI to post a reply!

Barnaby Walters

Barnaby Walters also built a map-view post aggregator that shows icons for people at the locations embedded in their recent posts. (IRC notes)

The map-view aggregator is at a self-standing demo URL for now, but Barnaby plans to include this view as another column type in Shrewdness, so you can have a map view of recent posts from people you're following.

Grant Richmond

Grant Richmond got a fancy new domain (grant.codes) and setup Glenn Jones's Transmat on it - which makes it the second installation of Transmat! (IRC notes)

Grant also built a contact page, grant.codes/contact, that has links for various methods of communication.

All of the links are text links for now, no icons yet.

Grant has implemented a people focused communication UI on his site!

Jeremy Keith

Jeremy Keith added https on adactio.com, and implemented <indie-action> tag webactions. (IRC notes)

adactio https

Jeremy took his site adactio.com from no https support to https Level 4. All adactio.com URLs redirect to https. However subdomains (e.g. austin.adactio.com) are still http.

adactio webactions

Jeremy's also implemented the new <indie-action> tag for webactions around his existing Tweet action links, both on his post permalinks, and on his posts in-stream (e.g. on his home page or when paginated).

Shane Hudson

Shane Hudson went from no SSL and no comments yesterday to https level 5! He also imported the contents of all his old comments from his WordPress blog to his Craft install (the CMS he's dogfooding, contributing plugins to, selfdogfooding). (IRC notes)

He was able to get SSL setup on his site with an A rating, and forward secrecy, and is thus https level 5.

Shane also wrote a script to do the import of comments from WordPress to Craft. It's "a bit crude, dealing with XML to CSV a few times".

Nat Welch

Nat Welch (AKA icco on IRC) got his blog running (his own software) in Go (language) hosted on AppEngine with SSL, achieving https level 4! (IRC notes)

AppEngine does SSL for free if you're ok with SNI.

So now Nat has SSL Labs rating A- on writing.natwelch.com! And also automatic redirect works from http to https. Thus he has also achieved https Level 4!

Right now he's using AppEngine default auth, using his Google account. Eventually he wants to use indieauth to auth into his site.

Tim Retout

Tim Retout got pump.io running on his site and added support to it for POSSEing to Twitter. (IRC notes)

His goal is to add all the indieweb feature support too like webmentions, microformats etc. He has to run off to catch a train.

He is also too humble, as he helped numerous people in person at the camp get on SSL, https level 4 or 5 at that. A round of applause for Tim!

Tom Morris

Tom Morris added https to his site, made it responsive, and setup mf2py as a service. (IRC notes)

responsive tommorris.org

Tom showed his current site tommorris.org with different window sizes. His CSS is now "less sucky" and he has made his site more responsive on mobile / small display etc.

mf2py as a service

Tom also got the Python microformats2 parser (mf2py) running as a service that you can submit your URLs to and get back pretty-printed JSON.

tommorris https

Tom got his main site tommorris.org up to https Level 4 with an A- rating, but has not yet done so with *.tommorris.org (e.g. wiki.tommorris.org).

During the next demo, Tom got his SSL Labs rating from A- to A with some help from Aral. And during the demo after that, he took his rating up to A+ thanks to this blog post.

Kevin Beynon

Kevin Beynon got IndieAuth login to his own site working! (IRC notes)

Kevin started by showing us his site home page kevinbeynon.com using a tablet. We projected it by holding up to the Talky HD camera.

He pointed out that there is no admin link on the home page, then went to his "secret" URL at /admin/ which has an IndieAuth login screen. He entered his own URL and chose to authenticate via RelMeAuth using Twitter, which redirected there and back with the message "Log-in Successful".

Kevin went to his home page again, and showed that it now has visible links to "admin" and "log out". Next he plans to bring his post creating and editing interface into his home page front end, so that he can do inline editing and post notes from his home page.

Joschi Kuphal

Joschi Kuphal got his site's https support to SSL rating A+, fixed his webmention implementation, and implemented webactions on permalinks. (IRC notes)

jkphl https A+

Joschi noted that his site was running with SSL before but had some flaws. He worked on it and improved his site's rating from F to A+.

jkphl webmentions fixed

He also fixed some flaws with his webmention implementation thanks to feedback from Ryan Barrett online.

jkphl permalinks webactions

Third, Joschi implemented webactions on permalinks, in particular he added <indie-action> markup around his default Twitter, G+, Facebook "share" links. He then demonstrated his site working with Barnaby Walters's Web Action Hero Toolkit browser extension.

Chris Asteriou

Chris Asteriou is fairly new to the IndieWeb and started with going through IndieMark, adding h-entry and h-card markup, and a notes section to his site. (IRC notes)

digitalbliss microformats

Chris showed digitalbliss.uk.com and noted that he added h-entry on his page with entries. He clicked the "Play" link at the top to show this. And then he marked up the info at the bottom of his home page with h-card.

digitalbliss notes

Chris added a notes section and used the verification tools on indiewebify.me to check it and verify that he reached IndieMark Level 2.

Tantek

Tantek Çelik switched his permalink webactions from <action> tags to <indie-action> tags and researched the UX of webactions on posts in a stream (e.g. a home page).

tantek indie-action

Based on the webactions discussion session on the first day with Tantek, Jeremy, and Pelle, they concluded that the <indie-action> tag was more appropriate than the <action> tag.

Tantek initially publicly proposed the <action> tag for consideration in a session on Web Actions at Open Source Bridge 2012, and then later implemented them at last year's IndieWebcampUK 2013 which were then demonstrated working with Barnaby Walters's browser extension.

Changing from <action> to <indie-action> at a minimum better fits with the web component model. Jeremy Keith pointed out that an <indie-action> tag in particular would be a good example of a web component, worthy as a case-study for web components.

Tantek updated his permalink webactions to use <indie-action> tags and Barnaby updated his browser extension to support them as well.

in-stream webactions

Tantek analyzed the UI of various silos, in particular Instagram and Twitter.

Instagram has a very minimal simple webaction UI, with just "Like", "Comment", and "..." (more) buttons, the first two with both icon and text labels, which makes sense since their primary content is large (relative to the UI) images/video (visual media). Instagram's webactions are identical on photos viewed on their own screen, and when in a stream of media. Deliberately designed consistency.

Twitter on the other hand is horribly inconsistent between different views of tweets, and even different streams; sometimes their webactions are:

  • on the right with text labels
  • on the left with text labels
  • on the left without text labels

Their trend seems to be icon only, likely because the text label distracts from the tweet text content around it, especially in a stream of tweets that are primarily (nearly all) just text.

Tantek walked through comparisons of Twitter's different webactions button icon/text usage/placements with Aral, who came to the same conclusions from the data.

It may be ok to use both icon and text labels on note/post permalink pages, as there is more distinction between the (single) content area, and the footer of webactions.

However, the conclusion is that in-stream webactions should use just icons (clear ones at that) when among posts that are primarily, mostly, or perhaps even often just text.

Next Tantek is working on implementing icon-only webactions on his home page posts stream. He made some progress but realized it will require him to rework some storage code first.

Aral Balkan

Aral Balkan upgraded his site's https support to SSL rating A+ and https Level 5, and his how-to blog post about it! (IRC notes)

Aral already supported https on his site aralbalkan.com beforehand. On IndieWebCampUK hack day he added support for forward secrecy, which raised its SSL rating from A- to A+ and thus he achieved https Level 5!

Apparently it took him only 2 lines of code to implement that change on nginx, and he noted that it's a bit harder on Apache.

After his demo, Aral also updated his blog post about SSL setup with nginx with what he learned and how to get to SSL rating A+.

Rosa Fox

Rosa Fox created a UI on her site for CRUD posting of projects. (IRC notes)

Rosa wanted to make her own CMS with support for posting images and tags. She demonstrated her local dev install of her new CMS with the following new features she built at Hack Day:

  • a UI for creating a new project
  • CRUD posting interface for projects
  • using Postgres to store data

Aaron Parecki

Aaron Parecki participated remotely, added support for posting bookmarks to his site, and added bookmarks posting via micropub to his Quill app! (IRC notes)

Aaron has been publishing bookmarks to another place for a long time in a WordPress install at aaron.pk/bookmarks and he wanted to integrate them into his main site aaronparecki.com.

Once Aaron got the bookmark post type implemented in his publishing software p3k and deployed to his site, he did a mass import from the aaron.pk/bookmarks WordPress XML export.

That was the last thing aaronpk was using WordPress for, so he's no longer using WordPress to publish any of his own content.

Now all of Aaron's bookmarks are at aaronparecki.com/bookmarks all marked up with microformats. Each bookmark is an h-entry, and embedded inside is an h-cite of the bookmark itself.

This also means you can comment, bookmark, and like his bookmarks themselves!

During later demos, Aaron also updated his Quill app with a bookmark posting interface, as well as a bookmarklet so you can quickly open the Quill UI to make a bookmark.

Kevin Marks

Kevin Marks built a feed converter that takes legacy RSS/Atom feeds and produces a modern readable and usable h-entry page, including such niceties as inline playable audio elements in converted podcasts. (IRC notes)

Kevin noticed that people are building h-feed readers, so he built a tool that takes legacy RSS/Atom feeds, unmunges them, and produces nice clean h-entry feeds.

The converter is at feed.unmung.com/. Unmung.com is a URL he bought ages ago, and set it up on Google AppEngine.

E.g. if you put xkcd.com/rss.xml into it, it generates a nice readable HTML page with h-entry, which you can then subscribe to in an indie reader like Barnaby's Shrewdness.

Kevin demonstrated using unmung to convert a podcast feed feeds.wnyc.org/onthemedia into an h-feed with embedded playable HTML5 <audio> elements, providing an actual useful interface, much better than the original feed.

Kevin made the point that no one wants to parse RSS or Atom any more. Now, by parsing the microformats JSON representation, you can consume any existing RSS or Atom feed.

You can now subscribe to iTunes podcasts etc. in your indieweb reader!

Robin Taylor

Robin Taylor added support for https (including forward secrecy, getting an SSL "A" rating) to his site robintaylor.uk and automatic redirects from http to https, achieving https Level 5! (IRC notes)

UK Homebrew Website Clubs

As we were wrapping up, Tom Morris asked openly if anyone would be interested in coming to a Homebrew Website Club in London. Jeremy Keith similarly asked the group for interest in a Homebrew Website Club Brighton.

Both had quite a bit of interest, so we can expect to start seeing more Homebrew Website Club meetups in more locations!

See also

Join Us At The Next IndieWebCamp In Cambridge

IndieWebCamp Cambridge is next month on the East Coast.

Join us. Share ideas. Come work on your personal web site. Help grow and evolve the independent web. Be the change you want to see in the world wide web.

"The people I met at @indiewebcamp are the A-Team of the Internet. Give them some tape and an oxy-acetalyne torch and they'll fix the web."

Adam Lofting“Conclusions”

Mile long string of baloons (6034077499)

  • Removing the second sentence increases conversion rate (hypothesis = simplicity is good).
  • The button text ‘Go!’ increased the conversion rate.
  • Both variations on the headline increased conversion rate, but ‘Welcome to Webmaker’ performed the best.
  • We should remove the bullet points on this landing page.
  • The log-in option is useful on the page, even for a cold audience who we assume do not have accounts already.
  • Repeating the ask ‘Sign-up for Webmaker’ at the end of the copy, even when it duplicates the heading immediately above, is useful. Even at the expense of making the copy longer.
  • The button text ‘Create an account’ works better than ‘Sign up for Webmaker’ even when the headline and CTA in the copy are ‘Sign up for Webmaker’.
  • These two headlines are equivalent. In the absence of other data we should keep the version which includes the brand name, as it adds one further ‘brand impression’ to the user journey.
  • The existing blue background color is the best variant, given the rest of the page right now.

The Webmaker Testing Hub

If any of those “conclusions” sound interesting to you, you’ll probably want to read more about them on the Webmaker Testing Hub (it’s a fancy name for a list on a wiki).

This is where we’ll try and share the results of any test we run, and document the tests currently running.

And why that image for this blog post?

Because blog posts need an image, and this song came on as I was writing it. And I’m sure it’s a song about statistical significance, or counting, or something…

Lucas RochaIntroducing Probe

We’ve all heard of the best practices regarding layouts on Android: keep your view tree as simple as possible, avoid multi-pass layouts high up in the hierarchy, etc. But the truth is, it’s pretty hard to see what’s actually going on in your view tree in each UI traversal (measure → layout → draw).

We’re well served with developer options for tracking graphics performance—debug GPU overdraw, show hardware layers updates, profile GPU rendering, and others. However, there is a big gap in terms of development tools for tracking layout traversals and figuring out how your layouts actually behave. This is why I created Probe.

Probe is a small library that allows you to intercept view method calls during Android’s layout traversals e.g. onMeasure(), onLayout(), onDraw(), etc. Once a method call is intercepted, you can either do extra things on top of the view’s original implementation or completely override the method on-the-fly.

Using Probe is super simple. All you have to do is implement an Interceptor. Here’s an interceptor that completely overrides a view’s onDraw(). Calling super.onDraw() would call the view’s original implementation.

public class DrawGreen extends Interceptor {
    private final Paint mPaint;

    public DrawGreen() {
        mPaint = new Paint();
        mPaint.setColor(Color.GREEN);
    }

    @Override
    public void onDraw(View view, Canvas canvas) {
        canvas.drawPaint(mPaint);
    }
}

Then deploy your Interceptor by inflating your layout with a Probe:

Probe probe = new Probe(this, new DrawGreen(), new Filter.ViewId(R.id.view2));
View root = probe.inflate(R.layout.main_activity, null);

Just to give you an idea of the kind of things you can do with Probe, I’ve already implemented a couple of built-in interceptors. OvermeasureInterceptor tints views according to the number of times they got measured in a single traversal i.e. equivalent to overdraw but for measurement.

LayoutBoundsInterceptor is equivalent to Android’s “Show layout bounds” developer option. The main difference is that you can show bounds only for specific views.

Under the hood, Probe uses Google’s DexMaker to generate dynamic View proxies during layout inflation. The stock ProxyBuilder implementation was not good enough for Probe because I wanted to avoid using reflection entirely after the proxy classes were generated. So I created a specialized View proxy builder that generates proxy classes tailored for Probe’s use case.

This means Probe takes longer than your usual LayoutInflater to inflate layout resources. There’s no use of reflection after layout inflation though. Your views should perform the same. For now, Probe is meant to be a developer tool only and I don’t recommend using it in production.

The code is available on Github. As usual, contributions are very welcome.

ArkyFirefox OS: Designing Khmer Keyboards and Fonts

Back in Cambodia this week to participate in Barcamp Phnom Penh 2014. It is great to experience the energy and openness of Phnom Penh and the Cambodian youth's insatiable zeal to learn all things tech. Over the past few years, the barcamps helped us build the Mozilla community in Cambodia.

Cambodia is a fast growing economy in the region. One survey notes a significant increase in smartphone ownership since last year, and also an increase in Khmer-supporting smartphones and feature phones in the market. At Barcamp Phnom Penh I presented a Firefox OS talk about the on-going Khmer internationalization (i18n) work and invited the audience to contribute to Firefox OS. I'm planning to organize hackathons to work on Khmer keyboards with the Mozilla community here.

After my talk, Vannak of Mozilla Cambodia community talked briefly about Mozilla community to the audience. And we did a presentation about Mozilla Web Maker tools. I hope we'll organize more web literacy events in future. Keep watching this space for more news from Cambodia, the kingdom of wonder.

Byron Joneshappy bmo push day!

the following changes have been pushed to bugzilla.mozilla.org:

  • [1064878] Use of uninitialized value in pattern match (m//) at /loader/0x7ffa9dedc498/Bugzilla/Extension/BugmailFilter/Filter.pm line 172
  • [1020558] Add Involved with Bugs and Never Visited Query to MyDashboard
  • [1062944] Product::Component autocomplete when filing new bug shows disabled components.
  • [1046213] datetime_from() generates wrong dates if year < 1901
  • [1053513] remove last-visited entries when a user removes involvement from a bug
  • [1021902] UI to view a user’s review history
  • [1064678] searching for tracking flag “is empty” is generating incorrect sql
  • [1064329] splinter displays patches that remove lines starting with hyphens incorrectly
  • [1065594] Enable ‘due date’ field in ‘Community Building’ product (all components)
  • [1052851] add the ability to search by “assignee last login date”
  • [1066777] The kick-off form isn’t creating dependent bugs
  • [1039940] serialisation of objects for webservice responses is extremely slow
  • [1058615] New Custom Bugzilla Form Needed For PR Team

discuss these changes on mozilla.tools.bmo.


Filed under: bmo, mozilla

William Lachancemozregression 0.24

I just released mozregression 0.24. This would be a good time to note some of the user-visible fixes / additions that have gone in recently:

  1. Thanks to Sam Garrett, you can now specify a branch other than inbound to get finer grained regression ranges from. E.g. if you’re pretty sure a regression occurred on fx-team, you can do something like:

    mozregression --inbound-branch fx-team -g 2014-09-13 -b 2014-09-14

  2. Fixed a bug where we could get an incorrect regression range (bug 1059856). Unfortunately the root cause of the bug is still open (it’s a bit tricky to match mozilla-central commits to that of other branches) but I think this most recent fix should make things work in 99.9% of cases. Let me know if I’m wrong.
  3. Thanks to Julien Pagès, we now download the inbound build metadata in parallel, which speeds up inbound bisection quite significantly

If you know a bit of python, contributing to mozregression is a great way to have a high impact on Mozilla. Many platform developers use this project in their day-to-day work, but there’s still lots of room for improvement.

Mozilla Open Policy & Advocacy BlogFCC Reply Comments on Net Neutrality

Today is the final deadline to file comments as part of the Federal Communications Commission’s open proceeding on net neutrality in the United States. The show of support for real net neutrality over the past six months has been tremendous – so much so that this issue has now received more public comments than any other in FCC history, nearly 1.5 million in total.

Mozilla has been pulling out all the stops as well. In May, we submitted an original petition to the FCC to propose a new path forward on the difficult question of authority, and to shake up a debate that had not seen many new ideas. We’ve also launched global teach-ins, filed comments, and joined last week’s Day of Action, among other activities.

Our reply comments filed today build on these past actions, summarize the state of the debate, and respond to net neutrality opponents. Our comments are structured around four points:

  1. Most parties agree on most issues. The FCC should adopt enforceable rules, including some form of a no blocking and no unreasonable discrimination rule, with an exception for reasonable network management.
  2. Mozilla’s classification theory is a viable and strong path forward. Mozilla’s approach ensures real net neutrality while bypassing the political conversation over reclassification, by articulating a new, not yet classified Title II service offered to remote end points.
  3. The FCC must adopt a presumption against paid prioritization. Allowing prioritization would degrade other uses of the Internet, and thus cause harm to user choice, innovation, and competition.
  4. The same rules should apply to mobile and fixed services. There is one Internet and it must remain open for all. Technical requirements for mobile networks can be protected through reasonable network management.

This week, the FCC will conduct roundtables on net neutrality, with varying focuses including technical, legal, and enforcement aspects. I’ll be participating in one of the Friday sessions, focusing on the topic of enforcement. The roundtables will be held in DC, and will include a moderated discussion among a diversity of viewpoints. At this stage of the process, I don’t expect much in the way of agreement – but at least a range of options will be presented and defended for the agency’s consideration. The public has been invited to submit questions in advance over email or Twitter – roundtables@fcc.gov or #FCCRoundtables – though a caveat from the session description: Your questions and identifying information will be made public and included in the official record.

We’ve seen comments, petitions, roundtables, protests, and events for months now. The main thing left is for the agency to make some decisions – and as we note in our comments, the outcome will set the course for the future of the industry, for better or for worse.

There’s still time for you to make your voice heard. You can contact the FCC and members of Congress, and ask them to protect net neutrality, and the choice, innovation, and freedom enjoyed today by all Internet users and developers.

Eric ShepherdThe Sheppy Report: September 12, 2014

You may notice that I’m posting this on Monday instead of Friday (despite what the title says). Last week was a bit of a mess; I had a little bit of a surprise on Wednesday that involved my being taken to the hospital by ambulance (no lights and siren — it wasn’t that big a surprise), and that kept me away from work for most of the rest of the week.

I’m feeling fine now, although there’s some follow-up medical testing to do to check things out.

Anyway! On to the important stuff!

What I did this week

  • Deleted an inappropriate article from the MDN inbox and emailed the author explaining why.
  • Succeeded in first deployment of a test of my new live sample server-side component server project.
  • Discovered the native GitHub application for Mac, which makes my life enormously easier.
  • Added more information to the comments on bug 1063580, about how shift-refreshing of macros no longer lets you pick up changes to macros and submacros.
  • Updated “How to create an MDN account” to consider that you can use either Persona or GitHub now as your authentication service. This fixed bug 1049972.
  • Added the dev-doc-needed keyword to bug 1064843, which is about implementing ::backdrop in Gecko.
  • Updated the draft of the State of MDN for August to include everything I could think of, so Ali can get it posted (which has happened!).
  • Created the soapbox message to appear at the top of MDN pages about the net neutrality petition. This has since been removed once again, since the event in question has passed. This resolved bug 1060483.
  • Finished debugging the SubpageMenuByCategories and MakeColumnsForDL macros. The former is new and the latter has been updated to support multiple <dl> lists with interspersed headings.

Meetings attended this week

Monday

  • #mdndev planning meeting.
  • #mdndev triage meeting.

Last week started off on a high note, with some very productive activity, and then went sideways on Wednesday and never really recovered. I have high hopes for this week though!

Daniel StenbergDaladevelop hackathon

On Saturday the 13th of September, I took part in a hackathon in Falun, Sweden, organized by Daladevelop.

20-something hacker enthusiasts gathered in a rather large and comfortable room in this place, an almost three hour drive from my home. A number of talks and lectures were held through the day and the difficulty level ranged from newbie to more advanced. My own contribution was a talk about curl followed by one about HTTP/2. Blabbermouth as I am, I exhausted the friendly audience by talking a good total of almost 90 minutes straight. I got a whole range of clever and educated questions and I think and hope we all had a good time as a result.

The organizers ran a quiz for two-person teams. I teamed up with Andreas Olsson in team Emacs, and after having identified x86 assembly, written binary, spotted perl, named Ada Lovelace, used the term lightfoot and provided about 15 more answers we managed to get first prize and the honor of having beaten the others. Great fun!

Daniel GlazmanMolly needs you, again!

There are bad Mondays. This is a bad Monday. And this is a bad Monday because I just discovered two messages - among others - posted by our friend Molly Holzschlag (ANC is Absolute Neutrophil Count):

First message

Second message

If you care about our friend Molly and value all that she gave to Web Standards and CSS over all these years, please consider donating again to the fund some of her friends set up a while ago to support her health and daily life expenses. There are no little donations, there are only love messages. Send Molly a love message. Please.

Thank you.

Marco Zehe: Your must read post for this week

This goes out to all my readers who are web developers, or who work with web developers closely enough to hand this to them.

It’s Monday morning, and for this week, I have a must-read post for you which you will now bookmark and reference and use with every single web component you build! No, this is not a suggestion, it’s an order which you will follow. Because if you don’t, you’ll miss out on a lot of fun and gratitude! I’m serious! So here goes:

Web Components punch list by Steve Faulkner of the Paciello Group

Read. Read again. Begin to understand. Read again. Understand more. Read yet another time. Get the tools referenced in the post. Check your web component(s) against this list top to bottom. If even a single point is answered “no”, fix it, or get on Twitter and ask for help in the accessibility community on how to fix it. Listen and learn. And repeat for every future web component you build!

And don’t be shy! Tell the world that your web component is accessible from the start, usable by at least twenty percent more people than it would be otherwise! I kid you not!

Happy Monday, and happy coding!

Andy McKay: Working on Open Source

When hiring at Mozilla, having potential candidates who know open source software is almost a requirement. But there's a huge difference between people that work with open source software and those who work on open source.

About 14 years ago when I started interviewing candidates for open source software, even seeing candidates who knew what open source was could be unusual and seen as an advantage. That's not enough now. These days working with open source software is seen as a base requirement. But that's still not enough.

In fact it's almost staggering these days to understand how anyone can build any systems, especially web sites, without using a large amount of open source. So go ahead, fill your resume with how you've used Linux, MySQL, PostgreSQL, Python, JavaScript, Ruby and so on. Show me your GitHub, Bitbucket, whatever account. Those are buzzwords that keep recruiters happy.

What I really want to see is that you've worked on open source. Have you:

  • contributed to an open source project?
  • published an open source project that is used by someone other than yourself (or your company)?
  • given a talk at an open source conference?
  • helped out an open source foundation?
  • written some documentation on open source?
  • participated on mailing lists with other developers about an open source project?
  • dealt with those awesome people who want to help and those trolls who don't?

There's a reason we look for open source developers at Mozilla. It's partly because Mozilla is basically a collection of open source projects with some funding behind it. But also because developers on open source are great at developing code and at working with other people.

Working on open source separates you from those who just use it.

Jeff Walden: Racism from a United States judge. You’ll never guess which one!

A couple days ago I found this ugly passage in a United States legal opinion:

The white race deems itself to be the dominant race in this country. And so it is in prestige, in achievements, in education, in wealth and in power. So, I doubt not, it will continue to be for all time if it remains true to its great heritage and holds fast to the principles of constitutional liberty.

Take a guess who wrote it, and in what context.

A hint

The same person who wrote this immediately continued with these further words, some of which might sound familiar (if improbable):

But in view of the Constitution, in the eye of the law, there is in this country no superior, dominant, ruling class of citizens. There is no caste here. Our Constitution is color-blind, and neither knows nor tolerates classes among citizens. In respect of civil rights, all citizens are equal before the law. The humblest is the peer of the most powerful. The law regards man as man, and takes no account of his surroundings or of his color when his civil rights as guaranteed by the supreme law of the land are involved.

I’ll give you a little space to try to come up with the name and context, if you haven’t already gotten it.

The answer

These passages were written by the first Justice Harlan, dissenting in the notorious Plessy v. Ferguson case. It’s interesting how we now remember Justice Harlan for this solo dissent and for his statement that, “Our Constitution is color-blind, and neither knows nor tolerates classes among citizens.” Yet I’d never heard before, anywhere, that in the exact same paragraph he validated the idea of a dominant race and basically asserted that whites would always be so in the United States.

Justice Harlan certainly deserves credit as the only one of eight justices to hold in favor of Homer Plessy, the New Orleans Comité des Citoyens, and the railroad company that ejected him from a whites-only car (all of whom conspired in a test case to overturn the law). (The ninth justice, David Josiah Brewer, didn’t participate in the case because of the abrupt death of his daughter. It’s unclear how he would have voted had he participated, with his personal history and voting record pointing in somewhat different directions.) As the only Southerner on the Court, and a former slave owner at that, it’s far from what one might have expected of Harlan, or of his colleagues.

Yet at the same time, Justice Harlan adhered to some of the beliefs and prejudices of his time. It is an unfortunate gloss on history that we are less aware of this than we are of his better-known, more admirable words. We should be aware of both: to correctly understand history, to not fall prey to knowing only that which we want to be true, and to place a historical figure in full context.

Tantek Çelik: Happy 8-bit day 2014! #8bitday

8-bit day is the 256th day of the year. This year (and most years) that happens to be Gregorian September 13th. Five years ago I proposed making today an (un)official holiday in honor of all things 8-bit: art, music, video, games, and sure programmers too.

The Math

If you number the days of the year starting from 0, then in 2014, 2014-09-13 (ordinal date 2014-256) is day number 255 (a quick check in code follows the list).

  • 255 decimal = FF hex
  • FF hex = 11111111 binary
  • 11111111 binary = 8 bits.
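
If you want to double-check the arithmetic, here is a quick Python sketch (mine, not part of the original post) that computes the zero-based day number for 2014-09-13 and prints it in hex and binary:

import datetime

# tm_yday is 1-based (2014-09-13 is the 256th day), so subtract 1 for the zero-based day number
day_number = datetime.date(2014, 9, 13).timetuple().tm_yday - 1
print(day_number)        # 255
print(hex(day_number))   # 0xff
print(bin(day_number))   # 0b11111111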

Enjoy some 8-bit stuff

Music

Videos

See Also

Related

Previously

Previously I kept this on my wiki, which is unfortunately still on pbworks.com, so starting this year I'm reclaiming that content and blogging it here on my site, until I've implemented my own wiki pages. I'll write a new post once a year, like I have in past years.

Post your favorite 8-bit stuff

Take a moment today to post and celebrate the 8-bit things that you've found and enjoy, and hashtag it #8bitday (e.g. on your own site, Twitter, Instagram, etc.)

Tim Taubert: Talk: Keeping secrets with JavaScript - An Introduction to the WebCrypto API

With the web slowly maturing as a platform, the demand for cryptography in the browser has risen, especially in a post-Snowden era. Many of us have heard about the upcoming Web Cryptography API, but at the time of writing there seem to be no good introductions available. We will take a look at the proposed W3C spec and its current state of implementation.

Slides

Code

https://github.com/ttaubert/secret-notes
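
For a taste of what the API looks like (this snippet is illustrative only, not taken from the slides or the linked repository, and implementations of the then-draft spec varied, with some builds using vendor prefixes), here is the promise-based way to compute a SHA-256 digest:

// Hash a string with SHA-256 using the WebCrypto API
var data = new TextEncoder().encode("keep this secret");
crypto.subtle.digest("SHA-256", data).then(function (digest) {
  // digest is an ArrayBuffer; render it as a hex string
  var hex = Array.prototype.map.call(new Uint8Array(digest), function (b) {
    return ("0" + b.toString(16)).slice(-2);
  }).join("");
  console.log(hex);
});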

Mozilla WebDev Community: Webdev Extravaganza – September 2014

Once a month, web developers from across Mozilla gather to continue work on our doomsday robot that will force the governments of the world to relinquish control of the internet to us. Crafting robotic monsters is hard work, so we take frequent breaks to avoid burnout, and we find these breaks are a convenient time to talk about the work that we’ve shipped, share the libraries we’re working on, meet new folks, and talk about whatever else is on our minds. It’s the Webdev Extravaganza! The meeting is open to the public; you should stop by!

You can check out the wiki page that we use to organize the meeting, view a recording of the meeting in Air Mozilla, or attempt to decipher the aimless scrawls that are the meeting notes. Or just read on for a summary!

Shipping Celebration

The shipping celebration is for anything we finished and deployed in the past month, whether it be a brand new site, an upgrade to an existing one, or even a release of a library.

Marketplace Redesign

clouserw stopped by to tell us that Firefox Marketplace shipped a redesign! The front page now has a set of modules that can be customized using a set of admin tools, including changing what apps are shown, setting colors and features, and more. Of particular note is the fact that the admin interface for the modules was given a lot of UX attention as well (as opposed to our standard practice of using the default Django admin design), and includes a live preview of what the modules will look like.

Socorro: Out of Memory Crashes and new ADI source

lonnen informs us that Socorro has landed support for logging out-of-memory crashes, meaning that crashes that are suspected of relating to memory now include about:memory logs in the crash data, to help us diagnose those problems. In addition, Socorro is now fetching data about the number of active daily instances of Firefox instead of depending on the data being sent to Socorro in bulk. Socorro uses this data to normalize crash data, and the new source reduces the time spent pulling in the data to under ten minutes.

Air Mozilla now supports pop-out videos

peterbe shared the news that Air Mozilla now supports pop-out videos, meaning you can now launch a new window with the video you want to watch. This gives the viewer more options in how to watch a video while working on something else, as previously you were limited to in-page viewing or full-screen viewing.

Open-source Citizenship

Here we talk about libraries we’re maintaining and what, if anything, we need help with for them.

contribute.json

peterbe had a few pieces of news about contribute.json. First, Air Mozilla and Peekaboo both have live contribute.json files, and Socorro is deploying one soon. Second,  seanbolton and espressive are working on a redesign of the contribute.json webpage. And finally, the validator now supports text and file upload as well as URLs.

New Hires / Interns / Volunteers / Contributors

Here we introduce any newcomers to the Webdev group, including new employees, interns, volunteers, or any other form of contributor. Unfortunately we had no one new to introduce this month.

Roundtable

The Roundtable is the home for discussions that don’t fit anywhere else.

Marketplace in multiple datacenters

clouserw shared an “exploration” he’s working on for hosting Marketplace in multiple datacenters. While the primary goals are redundancy (if a datacenter goes down) and performance (being geographically closer to users who normally have to reach servers in the US), one major issue that was raised was handling differing privacy laws between the countries we have datacenters in. Feedback is welcome!

Bedrock running on Cloud9

jgmize wanted to let everyone know that Bedrock can now be set up on Cloud9, allowing developers and contributors to get a running instance of Bedrock with almost no interaction or software installed on their own machine. There’s a quickstart guide for setting it up, and he’s looking for people to try it out and also to consider trying out the model on their own projects as a way of helping on-board new contributors.


If you’re curious, the robot is coming along nicely. Once we’re able to get the imported railgun to clear customs, we should be good to go!

If you’re interested in web development at Mozilla, or want to attend next month’s Extravaganza, subscribe to the dev-webdev@lists.mozilla.org mailing list to be notified of the next meeting, and maybe send a message introducing yourself. We’d love to meet you!

See you next month!

 

Nick Cameron: A gotcha with raw pointers and unsafe code

This bit me today. It's not actually a bug and it only happens in unsafe code, but it is non-obvious and something to be aware of.

There are a few components to the issue. First off, we must look at `&expr` where `expr` is an rvalue, that is, a temporary value. Rust allows you to write (for example) `&42` and, through some magic, `42` will be allocated on the stack and the expression evaluates to a reference to it with an inferred lifetime no longer than the value's. For example, `let x = &42i;` works, as does

struct Foo<'a> {
    f: &'a int,
}
fn main() {
    let x = Foo { f: &42 };
}

Next, we must know that borrowed pointers (`&T`) can be implicitly coerced to raw pointers (`*T`). So if you write `let x: *const int = &42;`, `x` is a raw pointer produced by coercing the borrowed pointer. Once this happens, you have no safety guarantees - a raw pointer can point at memory that has already been freed. This is fine, since you see the raw pointer type and must be aware of it, but if the type comes from a struct field, what looks like a borrowed pointer could actually be a raw pointer:

struct Bar {
    f: *const int,
}
fn main() {
    let x = Bar { f: &42 };
}

Imagine that `Bar` is from some other module or crate; then you might assume that `main` is ok here. But it is not. Since the borrowed pointer has a narrow scope and is not stored (the raw pointer does not count for this analysis), the compiler can choose to delete the `42` allocated on the stack and reuse that memory straight after (or even during, probably) the `let` statement. So, `x.f` is potentially a dangling pointer as soon as `x` is available, and accessing it will give you bugs. That is OK: you can only do so in an `unsafe` block, and thus you should check (i.e., as a programmer you should check) that you can't get a dangling pointer. You must do this whenever you dereference a raw pointer, and the fact that it must happen in unsafe code is your cue to do so.

The final part of this gotcha was that I was already in unsafe code and was transmuting. Of course transmuting is awful and you should never do it, but sometimes you have to. If you do `unsafe { transmute(x) }` then there is no cue in the code that you have a raw pointer. You have no cue to check the dereference, because there is no dereference! You just get a weird bug that only appears on some platforms and depends on the optimisation level of compilation.
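
To make the trap concrete, here is a small sketch (mine, not the original code; the types and values are arbitrary) of how a transmute removes the one remaining cue:

use std::mem::transmute;

struct Bar {
    f: *const i32,
}

fn main() {
    // As described above, only a raw pointer to the &42 temporary survives,
    // and raw pointers are invisible to the borrow checker.
    let x = Bar { f: &42 };

    // After the transmute, `y` looks like an ordinary borrowed pointer.
    // There is no raw-pointer dereference anywhere in sight, so nothing
    // reminds you to check whether the pointee is still alive.
    let y: &i32 = unsafe { transmute(x.f) };

    // Under the compiler behaviour described in this post, this could print
    // garbage or crash, depending on platform and optimisation level.
    println!("{}", *y);
}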

Unfortunately, there is nothing we can really do from the language point of view - you just have to be super-careful around unsafe code, and especially transmutes.

Hat-tip to eddyb for figuring out what was going on here.

Mark Surman: Snapping the puzzle together

I’ve had a picture in mind for a while: a vision of FirefoxOS + Appmaker + Webmaker mentor programs coming together to drive a new wave of creativity and content on the web. I believe this would be a way to really show what Mozilla stands for right now: putting access to the Internet in more hands and then helping people unlock the full potential of the web as a part of their lives and their livelihoods.

Puzzle pieces

The thing is: this picture has felt a bit like a puzzle until recently — I can see where it’s going, but we don’t have all the pieces. It’s like a vision or a theory more than a plan. However, over the past few months, things are getting clearer — feels like the puzzle pieces are becoming real and snapping together.

Bangladesh

Dinner w/ Mozilla Bangladesh

I had this ‘it’s coming together’ feeling in spades the other day as I had dinner w/ 20 members of the Mozilla community in Bangladesh. Across from me was a college student named Ani who was telling me about the Bengali keyboard he’d written for FirefoxOS. To his right was a woman named Maliha who was explaining how she’d helped the Mozilla Bangladesh community organize nearly 50 Webmaker workshops in the last two months. And then beside me, Mak was enthusiastically — and accurately — describing Mozilla’s new Mobile Webmaker to the rest of the group. I was rapt. And energized.

More importantly, I was struck by how the people around the table had nearly all the pieces of the puzzle amongst them. At a practical level, they are all actively working on localizing FirefoxOS and making it work on the ground in Bangladesh. They are finding people and places to teach Webmaker workshops. They have offered to help develop and test Appmaker to see if it can really work for users in Bangladesh. And, they see how these things fit together: people around the table talked about how all these things combined have the potential for huge impact. In particular, they talked about how phones, skills and publishing tools built with Mozilla values could unleash a huge wave of Bengali language content onto the mobile internet. In a country where less than 10% of people speak English, this is a big deal.

The overall theory behind this puzzle is: open platforms + digital skills + local content = an opportunity to disrupt and open up the mobile Internet.


Well, at least, that’s my theory. I see open platforms like Firefox OS — and HTML5 in general — as the baseline. They make it possible for anyone to create apps and content for the mobile web on their own terms — and they are easy to learn. In order to unlock the potential of these platforms, we also need large numbers of people to have the skills to create their own apps and content. Which is what we’re trying to tee up with our Webmaker program. Finally, we need a huge wave of local content that smartphone users make for each other — which both Webmaker and Appmaker are meant to fuel. These are the puzzle pieces I think we need.

On this last point: the content needn’t be local per se — but it does need to be something of value to users that the web / HTML5 can provide better than existing mobile app stores and social networks can. Local apps and content — and especially local language content — are a very likely sweet spot here. The Android Play Store and Facebook are bad — or at least limited — in how they support people creating content and apps. In languages like Bengali, the web — and Mozilla — have historically been much better.

But it’s a theory with enough promise — with enough pieces of the puzzle coming together — that we should get out there and test it out in practice. Doing this will require both discipline and people on the ground. Luckily, the Mozilla community has these things in spades.

India Community

Mozillians at Webmaker event in Pune

Talking with a bunch of people from the Mozilla India community underlined this part of things for me — and helped my thinking on how to test the local content theory. Vineel, Sayak and others told me about the recent launch of low-cost Firefox OS smartphones in India — including a $33/Rs 1999 phone from a company called Intex. As with Firefox releases in many other countries, the core launch team behind this effort were volunteer Mozilla contributors. Working with Mozilla marketing staff from Taiwan, members of the Mozilla India community made a plan, trained Intex sales staff and promoted the phone. Early results: Intex sold 15,000 units in the first three days. And things have been picking up from there.

It’s exactly this kind of community driven plan and discipline that we will need to test out the Firefox OS + Appmaker + Webmaker theory. What we need is something like:

  1. Pick a couple of places to test out our theory — India and Bangladesh are likely options, maybe also Brazil and Kenya.
  2. Work with the community to test out the ‘everyone can author an app’ software first — find out what regular users want, adapt the software with them, test again.
  3. Make sure this test includes a strong Webmaker / training component — we should be testing how to teach skills at the same time as testing the software idea.
  4. Make sure we have both phones and a v1 of Mobile Webmaker in local languages
  5. Also, work with community to develop a set of basic app templates in local language — it’s important not to have an ‘empty shelf’ and also to build around things people actually want to make.
  6. Move from research to ‘market’ testing — put Mobile Webmaker on FirefoxOS phones and do a campaign of related Webmaker training sessions.
  7. Step back. See what worked. What didn’t. Iterate. In the market.

This sort of thing is doable in the next six months — but only if we get the right community teams behind us. I’m going to work on doing just that at ReMoCamp in Berlin this weekend. If there is interest and traction, we’ll start moving ahead quickly.

In the meantime, I’d be interested in comments on my theory above. We’re going to do something like this — we need everybody’s feedback and ideas to increase the likelihood of getting it right.


Filed under: mozilla, webmakers

Doug Belshaw: Weeknote 37/2014

This week I’ve been:

  • Interviewing more people about Web Literacy Map 2.0:
  • Agreeing (with Lyndsey Britton & Lauren Summers) on 8th November 2014 as the date for a re-arranged Maker Party North East.
  • Selling my iPad Mini. I’ve bought a 6.4″ Sony Xperia Z Ultra phablet to replace both it and my Moto G (the 3G version). I’m still waiting for delivery as it was about £50 cheaper to order it from Amazon Germany instead of Amazon UK!
  • Giving feedback on designs for a (much needed) Webmaker badge landing page.
  • Attending GitClub, led by Ricardo Vazquez.
  • Connecting with Gordon Gow about some potentially overlapping interests in web literacy-related mobile learning projects in Sri Lanka.
  • Speaking at (and attending) a Code Acts in Education seminar at the University of Stirling, organised by Ben Williamson. It was a great event, at which I met great people and learned loads! My slides are here.
  • Recording videos as part of the guidance for people to earn Mozilla’s ‘remix the web’ badge on the forthcoming iDEA award platform.

Next week I’m at home all week, interviewing more people about the Web Literacy Map, and starting to think about synthesizing what I’ve been hearing so far. I should also start thinking about my Mozilla Festival sessions and deliverable for the Badge Alliance Digital & Web Literacy working group….

Image CC BY Michael Himbeault

Will Kahn-Greene: Input status: September 12th, 2014

Development

High-level summary:

  • Updated to ElasticUtils v0.10, which will allow us to upgrade our cluster to Elasticsearch 1.1. I'm working on a fix that'll let us go to Elasticsearch 1.2, but that hasn't been released yet.
  • Integrated the spicedham library prototype and set it up to classify abusive Input feedback. It's not working great, but that's entirely to be expected. I'm hoping to spend more time on spicedham and classification in Input in 2014q4. Ian did a great job with laying the foundation! Thank you, Ian!
  • Implemented a data retention policy and automated data purging.
  • Made some changes to the Input feedback GET and POST APIs to clarify things in the docs, fix some edge cases and make it work better for Firefox for Android and Loop.
  • Fixed the date picker in Chrome. Thank you, Ruben!

Landed and deployed:

  • c4e8e34 [bug 1055520] Update to ElasticUtils v0.10
  • e023fa4 [bug 1055520] Fix two reshape issues post EU 0.10 update
  • f9ba829 [bug 1055785] Codify data retention policy
  • 91396a8 Generalize About page text so it works for all products
  • 6fc03bf [bug 1053863] Update django to 1.5.9
  • 85709b2 [bug 1055788] Implement data purging
  • c0677a1 [bug 965796] Add a products update page
  • 121588d [bug 1057353] Update django-statsd and pystatsd
  • fe1c740 Add PII-related notes to the API fields
  • f77ecfa [bug 799562] Clarify API field documentation
  • c5eec03 [bug 1055789] Restrict front page dashboard and api to 6 months
  • 0892546 [bug 1059826] Add max_length to url field in API
  • f192f84 [bug 1057617] Fix url data validation
  • aad961d [bug 1030901] Document Input GET API
  • 2f212c5 [bug 1015788] Add flake8 linting
  • d673947 Update coding conventions
  • 27a1b6b Add "maximum" arg to GET API
  • 4f671e4 [bug 1062436] Add flags app and Flag model
  • 9d03d4b Fix flake8_lint issues
  • 0411d91 [bug 1062453] Add flagged view
  • 56f7e24 [bug 1062439] Celery task for classification
  • 7aa2930 [bug 1062455] Add spicedham to vendor (Ian Kronquist)
  • 0d90df3 We don't need spicedham under vendor/packages (Ian Kronquist)
  • a2a491d fix bug 1012965 - Date picker looks broken in chrome (Ruben Vereecken)
  • 0c42213 [bug 1063825] Integrate spicedham into fjord
  • 78a2d63 [bug 1062444] Initial training data
  • 5ca816e [bug 1020307] Prepare for adding gradient to generic form

Current head: 5ca816e

Rough plan for the next two weeks

  1. Working on Dashboards-for-everyone bits. Documenting the GET API. Making it a bit more functional. Writing up some more examples. (https://wiki.mozilla.org/Firefox/Input/Dashboards_for_Everyone)
  2. Gradients (https://wiki.mozilla.org/Firefox/Input/Gradient_Sentiment)

What I need help with

  1. (django) Update to django-rest-framework 2.3.14 (bug #934979) -- I think this is straight-forward. We'll know if it isn't if the tests fail.
  2. (django, cookies, debugging) API response shouldn't create anoncsrf cookie (bug #910691) -- I have no idea what's going on here because I haven't looked into it much.

For details, see our GetInvolved page:

https://wiki.mozilla.org/Webdev/GetInvolved/input.mozilla.org

If you're interested in helping, let me know! We hang out on #input on irc.mozilla.org and there's the input-dev mailing list.

Additional thoughts

I've been codifying project plan details on the wiki:

https://wiki.mozilla.org/Firefox/Input

I have no idea who's going to use that information or whether it helps. If you see things that are missing, let me know. It'll help me hone the project management templates I'm using and know which information is important to keep up to date and which information I can let slide until rainy days.

That's it!

Matěj Cepl: On bibshare

(this is originally a comment on the post about “scientific Markdown”)

In my previous life I used TeX and BibTeX heavily for writing scholarly articles while working on my PhD in sociology. When building a large BibTeX database of bibliography, there comes a moment when one needs to establish some order in how keys for the individual references are created. When I hit that moment, I looked around to see whether somebody had already done some thinking about the design of bibliography keys. I found almost nothing on the Web, perhaps because there was actually a file called bibshare (originally in $TEXMF/doc/bibtex/base/bibshare; now I cannot find it anywhere, so I have downloaded a version from an older tetex RPM to my website). It describes a pretty nice standard, which really should be rewritten into an RFC or something of that sort. The two biggest advantages are stable keys (so bibliographies can be exchanged) and more memorable ones. So, whenever I see granovetter:AJS-1973-1360 now, I remember (and it has been a couple of years since I last used BibTeX) that it is the awesome article "The Strength of Weak Ties" by Mark Granovetter.
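
For illustration (my reconstruction, not taken from bibshare itself), an entry using that key scheme of author, journal abbreviation, year and first page might look like this:

@article{granovetter:AJS-1973-1360,
  author  = {Granovetter, Mark S.},
  title   = {The Strength of Weak Ties},
  journal = {American Journal of Sociology},
  year    = {1973},
  volume  = {78},
  number  = {6},
  pages   = {1360--1380}
}

Because the key is built from stable facts about the reference rather than an arbitrary counter, independently maintained bibliographies can be merged without key collisions, which is exactly the exchangeability the standard aims for.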

Mozilla Release Management Team: Firefox 33 beta2 to beta3

  • 21 changesets
  • 55 files changed
  • 696 insertions
  • 410 deletions

Extension   Occurrences
list        14
js          11
h           5
cpp         5
html        4
jsm         2
css         2
cc          2
xul         1
webidl      1
java        1
ini         1
inc         1
in          1
build       1

Module      Occurrences
layout      15
dom         9
toolkit     8
mobile      5
js          5
widget      3
media       3
services    2
gfx         1
content     1

List of changesets:

Mark Finkle: Bug 1042715 - Add support for Restricted Profiles r=rnewman a=lmandel - 080344c7c80b
Ryan VanderMeulen: Backed out changeset 080344c7c80b (Bug 1042715) for Android bustage. - fcf16a67fed4
Tim Taubert: Bug 1046645 - Mark moz-page-thumb:// as local resources to prevent mixed content warnings f=Mardak r=gavin a=lmandel - e0b583b1210e
Jason Orendorff: Follow-up 2 to Bug 1041631, part 1 - Make one last test work when Symbol is not defined. Backported from rev 74637aa07226. a=testonly. - 337d96ca1194
Jason Orendorff: Bug 1041631, part 2 - Make ES6 Symbols Nightly-only for now. r=Waldo, a=sledru. - eee93220473c
Martijn Wargers: Bug 1058797 - Intermittent test_303567.xul | Result logged after SimpleTest.finish(). r=mak, a=test-only - d2d97af8ecdd
Hiroyuki Ikezoe: Bug 1041262 - Disable autofilling of search engines to avoid failures in unified complete tests when searchengines is in the platform directory. r=mak, a=test-only - 320e081cac62
Landry Breuil: Bug 1014375 - Properly define JS_PUNBOX64 or JS_NUNBOX32 depending on the CPU arch r=nbp a=lmandel - 31a06334affd
Landry Breuil: Bug 1014375 followup - add missing ;; to unbreak the tree a=ryanvm. - f89c8ca38b12
Ryan VanderMeulen: Bug 1023323 - Mark 413361-1.html as fuzzy on Android 4.0. a=test-only - 8338468a7588
Benoit Jacob: Bug 1063048 - Backout 35ff4bfb198f because on DriverVersionMismatch our blacklisting logic is fooled and doesn't protect us against real crashes. r=Bas, a=lmandel - d0885f177e37
Wes Johnston: Bug 763671 - Remove gradient from form elements. r=mfinkle, a=lmandel - 7b4d4b3b7598
Mark Capella: Bug 1057685 - Regression: Tweak Browser:Quit to maintain existing support for add-ons - part deux. r=wesj, a=lmandel - dbfd31597299
Matt Woodrow: Bug 1059807 - Mark OSX printing surfaces as being write-only. r=roc, a=lmandel - 7b689c3657e4
Gian-Carlo Pascutto: Bug 1053264 - Do not use CAPTUREBLT when Desktop Composition is enabled. r=jimm, a=lmandel - 5638564e0d94
Gian-Carlo Pascutto: Bug 1060796 - Limit screen capture FPS. r=jesup, a=lmandel - 18ba9aece9bd
Benjamin Smedberg: Bug 1053745 - Add GMP plugin data to FHR. r=gfritzsche, a=lmandel - 02474d192901
Jan-Ivar Bruaroey: Bug 1062981 - Disable bfcache for pages active MediaManager. r=smaug, r=jesup, a=lmandel - 46abad0899f9
Xidorn Quan: Bug 1063856 - Add more counter styles from the Predefined Counter Styles document, for better interop and web-compat. r=jfkthame, a=lmandel - 8e9b139e30b9
Nils Ohlmeier [:drno]: Bug 1021220 - Verify absence of loopback in SDP offer. r=bwc, a=test-only - fdf2f580b665
Jan-Ivar Bruaroey: Bug 1063808 - Support old constraint-like RTCOfferOptions for a bit. r=smaug, r=abr, a=lmandel - d4082d3a082c

Mozilla Open Policy & Advocacy Blog: Reflections on the 9th Internet Governance Forum

I recently returned from Istanbul, Turkey where I attended the 9th annual Internet Governance Forum. This was my third IGF in a row, and my second with Mozilla. Like the others I’ve attended, it was a vibrant event, with over 3000 registrants from very different regions and interests culminating in an energizing, inspiring forum.

This year’s event reinforced my positive position on the IGF. It has a crucial role to play at the core of the Internet governance ecosystem, and it continues to fulfill that role far, far better than any other event. The IGF brings people from all walks of life into the same venue and it gets them to interact with each other and talk about difficult issues, face to face and in real time. This year, even remote participation worked fairly smoothly, as I attended a couple sessions that included speakers on video-conference connections.

Some viewed Turkey as an odd choice for a host, given the country’s history of social media blocking and other interference with free expression and activity online (including a law adopted just after the conclusion of IGF to make it even easier to block Web pages). The sentiment was strong enough to inspire the creation of a competing “Internet Ungovernance Forum” focused on promoting an open, secure, and free-as-in-speech Internet. Despite the undercurrents, both forums were well attended, and featured a broad range of interesting and expert speakers (and even some who were both!).

There is always a spotlight on IGF in the international Internet policy world. This year’s comes from NETmundial in Brazil, and, looking ahead a bit, this October’s ITU Plenipotentiary Conference in Korea, a once-every-four-years convening for high-level intergovernmental activity at the core of the ITU’s mission.

So, what did that spotlight illuminate? As always, there were many broad-ranging discussions on Internet policy issues, and no structural mechanisms to move from policy development to any formalized decision-making. (But for the IGF, this is a feature, not a bug.)

Topically, if last year was the Snowden/surveillance IGF, this year was the net neutrality IGF, with at least three feeder sessions and a three-hour “main session” focused on the topic. I spoke at two of the net neutrality sessions, and attended the others. One of my sessions examined “network enhancement” and its relationship to net neutrality – a timely topic here in the United States, where opponents of strong net neutrality rules often indicate that excessive regulation will discourage investment in infrastructure. The other was the annual working session of the Dynamic Coalition on Network Neutrality, which was praised by conference organizers as one of the most effective examples of the ad-hoc IGF working coalitions. I also contributed a paper to the Coalition’s second annual report, drawing from Mozilla’s petition to the FCC and our July comments.

Surveillance had its moments in the spotlight as well, though it was less emphasized than last year. I spoke on two surveillance-related panels. A session organized by CIGI went straight to one of our core policy themes, trust, and how revelations of expansive surveillance have harmed trust, and what we can do to restore it. A separate session, co-organized by the Internet Society and CDT, focused on responses to surveillance, such as proposals to build additional IXPs and undersea cables, and new laws to mandate localization of data within a country. The group collectively opposed localization mandates as both unhelpful for protecting Internet users from surveillance and potentially disastrous to the global free and open Internet.

The IGF isn’t perfect. But it deserves the role it has as the first stop for collaborative discussion of issues related to governance “on” the Internet. Its mandate from the UN runs for one more year, through the 10th IGF in 2015, and then unless renewed the events will stop. But with massive support from many stakeholder groups in many regions of the world – and a host country for 2016 already lined up, by some accounts – I think the IGF will, and should, continue for many years to come.

Benjamin Kerensa: Off to Berlin

Right now, as this post is published, I’m probably settling into my seat for the next ten hours headed to Berlin, Germany as part of a group of leaders at Mozilla who will be meeting for ReMo Camp. This is my first transatlantic trip ever and perhaps my longest flight so far, so I’m both […]

William Lachance: Hacking on the Treeherder front end: refreshingly easy

Over the past two weeks, I’ve been working a bit on the Treeherder front end (our interface to managing build and test jobs from mercurial changesets), trying to help get things in shape so that the sheriffs can feel comfortable transitioning to it from tbpl by the end of the quarter.

One thing that has pleasantly surprised me is just how easy it’s been to get going and be productive. The process looks like this on Linux or Mac:


git clone https://github.com/mozilla/treeherder-ui.git
cd treeherder-ui/webapp
./scripts/web-server.js

Then just load http://localhost:8000 in your favorite web browser (Firefox) and you should be good to go (it will load data from the actual Treeherder site). If you want to make modifications to the HTML, JavaScript, or CSS, just go ahead and do so with your favorite editor and the changes will be immediately reflected.

We have a fair backlog of issues to get through, many of them related to the front end. If you’re interested in helping out, please have a look:

https://wiki.mozilla.org/Auto-tools/Projects/Treeherder#Bugs_.26_Project_Tracking

If nothing jumps out at you, please drop by irc.mozilla.org #treeherder and we can probably find something for you to work on. We’re most active during Pacific Time working hours.

Liz Henry: How to test new features in Firefox 34 Aurora

If you’re a fan of free and open source software and would like to contribute to Firefox, join me for some Firefox feature testing!

There are some nifty features under development right now for Firefox 34 including translation in the browser, making voice or video calls (a feature called “Hello” or “Loop”), debugging information for web developers in the Dev Tools Inspector, and recent improvements to HTML5 gaming.

I’ve written step-by-step instructions on these ways to test Firefox 34. If you would like to see what it’s like to improve a popular open source project, trying out these tasks is a good introduction.

Aurora

First, install the Aurora version of Firefox. It is best to set it up to use multiple profiles. That ensures you don’t use your everyday version of Firefox for testing, so you won’t risk losing your usual profile information. It also makes it easy to restart Firefox with a new, clean profile with all the default settings, which is very useful for testing. Sometimes I realize I’m running 5 different versions of Firefox at once!
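
If you haven't used multiple profiles before, the profile manager can be started from the command line, roughly like this (the profile name is just an example, and the exact binary path depends on your platform):

# Create or manage profiles interactively
firefox -ProfileManager

# Run Aurora with a dedicated testing profile alongside your normal Firefox
firefox -P aurora-testing -no-remote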

To test “Hello”, try making some voice or video calls from Firefox Aurora. You will need a friend to test with. Or, use two computers that you control. This is a good task to try while joining our chat channels, #qa or #testday on irc.mozilla.org; ask if anyone there wants to test Hello with you. The goal here is mostly to find and report new bugs.

If you test the translation infobar in Aurora you may find some new bugs. This is a fun feature to test. I like trying it on Wikipedia in many different languages, and also looking at newspapers!

If you’re a web developer, you may use Developer Tools in Firefox. I’m asking Aurora users to go through some unconfirmed bug reports, to help improve the Developer Tools Inspector.

If you like games you can test HTML5 web-based games in Firefox Aurora. This helps us improve Firefox and also helps the independent game developers. We have a list of demo games so you can play them, report glitches, and feel like a virtuous open source citizen all at once. Along the way you have opportunities to learn some interesting stuff about how graphics on the web can work (or not work).

Monster madness

These testing tasks are all set up in One and Done, Mozilla QA’s site to start people along the path to joining our open source community. This site was developed with a lot of community contribution including the design and concept by long-time community member Parul and a lot of code by two interns this summer, Pankaj and Maja.

Testing gives a great view into the development process for people who may not (yet) be programmers. I especially love how transparent Mozilla’s process can be. Anyone can report a bug, visible to the entire world in bugzilla.mozilla.org. There are many people watching that incoming stream of bug reports, confirming them and routing them to developer teams, sometimes tagging them as good first bugs for new contributors. Developers who may or may not be Mozilla employees show up in the bugs, like magic . . . if you think of bugmail notifications as magic . . .

It is amazing to see this very public and somewhat anarchic collaboration process at work. Of course, it can also be extremely satisfying to see a bug you discovered and reported, your pet bug, finally get fixed.


Gervase Markham: Praise and Criticism

Praise and criticism are not opposites; in many ways, they are very similar. Both are primarily forms of attention, and are most effective when specific rather than generic. Both should be deployed with concrete goals in mind. Both can be diluted by inflation: praise too much or too often and you will devalue your praise; the same is true for criticism, though in practice, criticism is usually reactive and therefore a bit more resistant to devaluation.

— Karl Fogel, Producing Open Source Software

Wounds from a friend can be trusted, but an enemy multiplies kisses.

Proverbs 27:6

Armen Zambrano: Run tbpl jobs locally with Http authentication (developer_config.py) - take 2

Back in July, we deployed the first version of Http authentication for mozharness; however, under some circumstances, the initial version could fail and affect production jobs.

This time around we have:

  • Removed the need for _dev.py config files
    • Each production config had an associated _dev.py config file
  • Prevented the developer mode from running in a production environment
    • The only way to enable it is by appending --cfg developer_config.py

If you read How to run Mozharness as a developer, you should see the new changes.

As a quick reminder, it only takes 3 steps:

  1. Find the command from the log. Copy/paste it.
  2. Append --cfg developer_config.py
  3. Append --installer-url/--test-url with the right values
To see a real example visit this
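
As a rough illustration of the end result (the script and config names and the URLs below are invented; only the appended options come from the steps above), a reused command might end up looking like this:

# 1. the command copied from the log
python scripts/desktop_unittest.py --cfg unittests/linux_unittest.py

# 2. and 3. the same command with the developer config and URLs appended
python scripts/desktop_unittest.py --cfg unittests/linux_unittest.py \
  --cfg developer_config.py \
  --installer-url https://example.com/firefox-latest.tar.bz2 \
  --test-url https://example.com/firefox-latest.tests.zip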

Niko Matsakis: Attribute and macro syntax

A few weeks back pcwalton introduced a PR that aimed to move the attribute and macro syntax to use a leading @ sigil. This means that one would write macros like:

@format("SomeString: {}", 22)

or

@vec[1, 2, 3]

One would write attributes in the same way:

@deriving(Eq)
struct SomeStruct {
}

@inline
fn foo() { ... }

This proposal was controversial. This debate has been sitting for a week or so. I spent some time last week reading every single comment and I wanted to lay out my current thoughts.

Why change it?

There were basically two motivations for introducing the change.

Free the bang. The first was to “free up” the ! sign. The initial motivation was aturon’s error-handling RFC, but I think that even if we decide not to act on that specific proposal, it’s still worth trying to reserve ! and ? for something related to error-handling. We are very limited in the set of characters we can realistically use for syntactic sugar, and ! and ? are valuable “ASCII real-estate”.

Part of the reason for this is that ! has a long history of being the sigil one uses to indicate something dangerous or surprising. Basically, something you should pay extra attention to. This is partly why we chose it for macros, but in truth macros are not dangerous. They can be mildly surprising, in that they don’t necessarily act like regular syntax, but having a distinguished macro invocation syntax already serves the job of alerting you to that possibility. Once you know what a macro does, it ought to just fade into the background.

Decorators and macros. Another strong motivation for me is that I think attributes and macros are two sides of the same coin and thus should use similar syntax. Perhaps the most popular attribute – deriving – is literally nothing more than a macro. The only difference is that its “input” is the type definition to which it is attached (there are some differences on the implementation side presently – e.g., deriving is based off the AST – but as I discuss below I’d like to erase that distinction eventually). That said, right now attributes and macros live in rather distinct worlds, so I think a lot of people view this claim with skepticism. So allow me to expand on what I mean.

How attributes and macros ought to move closer together

Right now attributes and macros are quite distinct, but looking forward I see them moving much closer together over time. Here are some of the various ways.

Attributes taking token trees. Right now attribute syntax is kind of specialized. Eventually I think we’ll want to generalize it so that attributes can take arbitrary token trees as arguments, much like macros operate on token trees (if you’re not familiar with token trees, see the appendix). Using token trees would allow more complex arguments to deriving and other decorators. For example, it’d be great to be able to say:

@deriving(Encodable(EncoderTypeName<foo>))

where EncoderTypeName<foo> is the name of the specific encoder that you wish to derive an impl for, vs today, where deriving always creates an encodable impl that works for all encoders. (See Issue #3740 for more details.) Token trees seem like the obvious syntax to permit here.

Macros in decorator position. Eventually, I’d like it to be possible for any macro to be attached to an item definition as a decorator. The basic idea is that @foo(abc) struct Bar { ... } would be syntactic sugar for (something like) @foo((abc), (struct Bar { ... })) (presuming foo is a macro).

An aside: it occurs to me that to make this possible before 1.0 as I envisioned it, we’ll need to at least reserve macro names so they cannot be used as attributes. It might also be better to have macros declare whether or not they want to be usable as decorators, just so we can give better error messages. This has some bearing on the “disadvantages” of the @ syntax discussed below, as well.

Using macros in decorator position would be useful for those cases where the macro is conceptually “modifying” a base fn definition. There are numerous examples: memoization, some kind of generator expansion, more complex variations on deriving or pretty-printing, and so on. A specific example from the past was the externfn! wrapper that would both declare an extern "C" function and some sort of Rust wrapper (I don’t recall precisely why). It was used roughly like so:

externfn! {
    fn foo(...) { ... }
}

Clearly, this would be nicer if one wrote it as:

@extern
fn foo(...) { ... }

Token trees as the interface to rule them all. Although the idea of permitting macros to appear in attribute position seems to largely erase the distinction between today’s “decorators”, “syntax extensions”, and “macros”, there remains the niggly detail of the implementation. Let’s just look at deriving as an example: today, deriving is a transform from one AST node to some number of AST nodes. Basically it takes the AST node for a type definition and emits that same node back along with various nodes for auto-generated impls. This is completely different from a macro-rules macro, which operates only on token trees. The plan has always been to move deriving out of the compiler proper and make it “just another” syntax extension that happens to be defined in the standard library (the same applies to other standard macros like format and so on).

In order to move deriving out of the compiler, though, the interface will have to change from ASTs to token trees. There are two reasons for this. The first is that we are simply not prepared to standardize the Rust compiler’s AST in any public way (and have no near term plans to do so). The second is that ASTs are insufficiently general: we want syntax extensions to accept all kinds of inputs, not just Rust ASTs.

Note that syntax extensions, like deriving, that wish to accept Rust ASTs can easily use a Rust parser to parse the token tree they are given as input. This could be a cleaned up version of the libsyntax library that rustc itself uses, or a third-party parser module (think Esprima for JS). Using separate libraries is advantageous for many reasons. For one thing, it allows other styles of parser libraries to be created (including, for example, versions that support an extensible grammar). It also allows syntax extensions to pin to an older version of the library if necessary, allowing for more independent evolution of all the components involved.

What are the objections?

There were two big objections to the proposal:

  1. Macros using ! feels very lightweight, whereas @ feels more intrusive.
  2. There is an inherent ambiguity since @id() can serve as both an attribute and a macro.

The first point seems to be a matter of taste. I don’t find @ particularly heavyweight, and I think that choosing a suitable color for the emacs/vim modes will probably help quite a bit in making it unobtrusive. In contrast, I think that ! has a strong connotation of “dangerous” which seems inappropriate for most macros. But neither syntax seems particularly egregious: I think we’ll quickly get used to either one.

The second point regarding potential ambiguities is more interesting. The ambiguities are easy to resolve from a technical perspective, but that does not mean that they won’t be confusing to users.

Parenthesized macro invocations

The first ambiguity is that @foo() can be interpreted as either an attribute or a macro invocation. The observation is that @foo() as a macro invocation should behave like existing syntax, which means that it should behave either like a method call (in a fn body) or like a tuple struct (at the top level). In both cases, it would have to be followed by a “terminator” token: either a ; or a closing delimiter, that is ), ], or }. Therefore, we can simply peek at the next token to decide how to interpret @foo() when we see it.

I believe that, using this disambiguation rule, almost all existing code would continue to parse correctly if it were mass-converted to use @foo in place of the older syntax. The one exception is top-level macro invocations. Today it is common to write something like:

declaremethods!(foo, bar)

struct SomeUnrelatedStruct { ... }

where declaremethods! expands out to a set of method declarations or something similar.

If you just transliterate this to @, then the macro would be parsed as a decorator:

@declaremethods(foo, bar)

struct SomeUnrelatedStruct { ... }

Hence a semicolon would be required, or else {}:

@declaremethods(foo, bar);
struct SomeUnrelatedStruct { ... }

@declaremethods { foo, bar }
struct SomeUnrelatedStruct { ... }

Note that both of these are more consistent with our syntax in general: tuple structs, for example, are always followed by a ; to terminate them. (If you replace @declaremethods(foo, bar) with struct Struct1(foo, bar), then you can see what I mean.) However, today if you fail to include the semicolon, you get a parser error, whereas here you might get a surprising misapplication of the macro.

Macro invocations with braces, square or curly

Until recently, attributes could only be applied to items. However, recent RFCs have proposed extending attributes so that they can be applied to blocks and expressions. These RFCs introduce additional ambiguities for macro invocations based on [] and {}:

  • @foo{...} could be a macro invocation or an annotation @foo applied to the block {...},
  • @foo[...] could be a macro invocation or an annotation @foo applied to the expression [...].

These ambiguities can be resolved by requiring inner attributes for blocks and expressions. Hence, rather than @cold x + y, one would write (@!cold x) + y. I actually prefer this in general, because it makes the precedence clear.

OK, so what are the options?

Using @ for attributes is popular. It is the use with macros that is controversial. Therefore, how I see it, there are three things on the table:

  1. Use @foo for attributes, keep foo! for macros (status quo-ish).
  2. Use @foo for both attributes and macros (the proposal).
  3. Use @[foo] for attributes and @foo for macros (a compromise).

Option 1 is roughly the status quo, but moving from #[foo] to @foo for attributes (this seemed to be universally popular). The obvious downside is that we lose ! forever and we also miss an opportunity to unify attribute and macro syntax. We can still adopt the model where decorators and macros are interoperable, but it will be a little more strange, since they look very different.

The advantages of Option 2 are what I’ve been talking about this whole time. The most significant disadvantage is that adding a semicolon can change the interpretation of @foo() in a surprising way, particularly at the top-level.

Option 3 offers most of the advantages of Option 2, while retaining a clear syntactic distinction between attributes and macro usage. The main downside is that @deriving(Eq) and @inline follow the precedent of other languages more closely and arguably look cleaner than @[deriving(Eq)] and @[inline].

What to do?

Currently I personally lean towards options 2 or 3. I am not happy with Option 1 both because I think we should reserve ! and because I think we should move attributes and macros closer together, both in syntax and in deeper semantics.

Choosing between options 2 and 3 is difficult. It seems to boil down to whether you feel the potential ambiguities of @foo() outweigh the attractiveness of @inline vs @[inline]. I don’t personally have a strong feeling on this particular question. It’s hard to say how confusing the ambiguities will be in practice. I would be happier if placing or failing to place a semicolon at the right spot yielded a hard error.

So I guess I would summarize my current feeling as being happy with either Option 2, but with the proviso that it is an error to use a macro in decorator position unless it explicitly opts in, or Option 3, without that proviso. This seems to retain all the upsides and avoid the confusing ambiguities.

Appendix: A brief explanation of token trees

Token trees are the basis for our macro-rules macros. They are a variation on token streams in which tokens are basically uninterpreted, except that matching delimiters ((), [], {}) are paired up. A macro-rules macro is then “just” a translation from one token tree to another token tree. This output token tree is then parsed as normal. Similarly, our parser is actually not defined over a stream of tokens but rather over a token tree.

Our current implementation deviates from this ideal model in some respects. For one thing, macros take as input token trees with embedded ASTs, and the parser parses a stream of tokens with embedded token trees, rather than token trees themselves, but these details are not particularly relevant to this post. I also suspect we ought to move the implementation closer to the ideal model over time, but that’s the subject of another post.
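
As a small illustration of what “operating on token trees” means (my example, not from the post), here is a macro-rules macro that counts the token trees it is given; a parenthesised or bracketed group, however large, counts as a single tree:

macro_rules! count_tts {
    () => { 0 };
    ($head:tt $($tail:tt)*) => { 1 + count_tts!($($tail)*) };
}

fn main() {
    // `a` is one token tree, `(b c)` is one, `[d e f]` is one: prints 3.
    println!("{}", count_tts!(a (b c) [d e f]));
}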

Ricky Rosario: SUMO Development Update 2012.1

SUMO Dev goes agile

Inspired by the MDN Dev team, the SUMO Dev team decided to try an agile-style planning process in 2012.

To be fair, we have always been pretty agile, but perhaps we were more on the cowboy side than the waterfall side. We planned our big features for the quarter and worked towards that. Along the way, we picked up (or were thrown) lots of other bugs based on the hot issue of the day or week, contributor requests, scratching our own itch, etc. These bugs ended up taking time away from the major features we set as goals and, in some cases, delaying them. This new process should help us become more predictable.

Starting out by copying what MDN has been doing for some time now, we are doing two-week sprints. We will continue to push out new code weekly for now, so it is kind of weird in that each sprint has two milestones within it. We will continue to name the milestones by the date of the push (i.e., "2012-01-24" for today's push) and we are naming sprints as YEAR.sprint_number (i.e., "2012.1" was our first sprint). We hope to be doing continuous deployment soon. At that point we will only have to track one milestone (the sprint) at a time. For more details on our process, check out our Support/SUMOdev Sprints wiki page.

2012.1 sprint

We just pushed the second half of our first sprint to production. Some data:

  • Closed Stories: 26
  • Closed Points: 34
  • Developer Days: 36
  • Velocity: .94 pts/day

Our major focus of this sprint was getting our Elastic Search implementation (we are in the process of switching from Sphinx) to the point where we can index and start rolling it out to users. After today's push, we will find out whether this is working properly. *fingers crossed* (UPDATE: we did hit an issue with the indexing.)

Other stuff we landed:

  • Initial support for the apps marketplace. Basically, a landing page and a question workflow that integrates with zendesk for 1:1 help.
  • KPI (Key Performance Indicator) Dashboard. We landed the first chart which displays % of solved questions (it has a math bug in it that will get fixed in the next push).
  • Some minor UI fixes and improvements.

2012.2 sprint

We are currently halfway through our second sprint. Our main goals with this sprint are to get Elastic Search out to 15% of our users and to add a bunch of new metrics charts to the KPI Dashboard.

In my opinion, this new planning process is going well so far. The product team has better insight into what the dev team is up to day to day. And the dev team has a better sense of what the short-term priorities are. Probably the most awesome thing about it is that we are collecting lots of great data. The part I have liked the least so far has been the actual planning sessions; I end up pretty tired after those. I think it just needs a little getting used to, and it is only 1-2 hours every two weeks.

:-)

Ricky Rosario: SUMO Development: 2012.3 and 2012.4 Update

Oops, I procrastinated and forgot to post an update for 2012.3, and now we are done with 2012.4 too.

2012.3 sprint

  • Closed Stories: 26
  • Closed Points: 37 (3 aren't used in the velocity calculation as they were fixed by James and Kadir - Thanks!)
  • Developer Days: 28
  • Velocity: 1.21 pts/day

The 2012.3 sprint went very well. We accomplished most of the goals we set out to do. We rolled out Elastic Search to 50% of our users and had it going for several days. We fixed some of the blocker bugs and came up with a plan for reindexing without downtime. Everything was great until we decided to add some timers to the search view in order to compare the Elastic Search and Sphinx code paths. As soon as we saw some data, we decided to shut down Elastic Search: the ES path was taking about 4x as long as the Sphinx path. Yikes! We got on that right away and started looking for improvements.

On the KPI Dashboard side, we landed 4 new charts as well as some other enhancements. The new charts show metrics for:

  • Search click-through rate
  • Number of active contributors to the English KB
  • Number of active contributors to the non-English KB
  • Number of active forum contributors

We did miss the goal of adding a chart for active Army of Awesome contributors, as it turned out to be more complicated than we initially thought. So that slipped to 2012.4.

2012.4 sprint

  • Closed Stories: 20
  • Closed Points: 24
  • Developer Days: 19
  • Velocity: 1.26 pts/day

The 2012.4 sprint was sad. It was the first sprint without ErikRose :-(. We initially planned to have TimW help us part time, but he ended up getting too busy with his other projects. We did miss some of our initial goals, but we did as well as we could.

The good news is that we improved the search performance with ES a bunch. It still isn't on par with Sphinx, but it is good enough that we went back to using it for 50% of users. We have plans to make it faster, but for now it looks like the click-through rates on results are already higher than what we get with Sphinx. That makes us very happy :-D.

We added two new KPI dashboard charts: daily unique visitors and active Army of Awesome contributors. We also landed new themes for the new Aurora community discussion forums.

2012.5 sprint

This week we started working on the 2012.5 sprint. Our goals are:

  • Elastic Search: refactor search view to make it easier to do ES-specific changes.
  • Elastic Search: improve search view performance (get us closer to Sphinx).
  • Hide unanswered questions that are over 3 months old. They don't add any value, so there is no reason to show them to anybody or have them indexed by Google and other search engines.
  • Branding and styling updates for Marketplace pages
  • KPI Dashboard: l10n chart
  • KPI Dashboard: Combine solved and responded charts

We are really hoping to be ready to start dialing up the Elastic Search flag to 100% by the time we are done with this sprint.

Ricky RosarioSUMO Development: 2012.2 Update

Yesterday we shipped the second half of the 2012.2 sprint. We ended up accomplishing most of our goals:

  • [Elastic Search] Perform full index in prod - DONE
  • [Elastic Search] Roll out to 15% of users - DONE
  • Add more metrics to KPI dashboard - INCOMPLETE (We landed 3 out of the 4 new graphs we wanted).

Not too bad. In addition to this, we made other nice improvements to the site:

Great progress for two weeks of work! Some data from the sprint:

  • Closed Stories: 30
  • Closed Points: 38
  • Developer Days: 35
  • Velocity: 1.08 pts/day

Onward to 2012.3

We are now a little over halfway into the 2012.3 sprint. Our goals are to roll out Elastic Search to 50% of users, be ready to roll out to 100% (fix all blockers), and add 5 new metrics to the KPI dashboard. So far so good, although we keep finding new issues as we continue to roll out Elastic Search to more users. That deserves its own blog post, though.

Ricky RosarioJoined the Mozilla Web Team

After 3 great years at Razorfish, I decided to move on and joined Mozilla 2 weeks ago. I will be working remotely, but I spent the first week in Mountain View doing new-hire orientation, setting up my shiny new MBP i7, setting up development environments for zamboni (the new add-ons site) and kitsune (the new support site), and fixing some easy bugs to start getting familiar with the codebase.

So far, I am loving it. Some of my initial observations:

  • My coworkers are super smart and awesome.
  • The main communication channel is IRC (even when people are sitting nearby in the office). This works out great for the remote peeps like me.
  • We use git/GitHub for our branch -> work on bug/feature -> review -> commit workflow. I am loving the process, and GitHub's UI for commenting on code helps a ton.
  • Continuous Integration is the nuts.
  • Automated functional testing ^^.
  • Writing open source software full-time, and getting paid? Unreal!

I am working on SUMO (support.mozilla.com). It is currently going through a rewrite from TikiWiki to Django (the kitsune project). Working full time with Django is like a dream come true for me (a very nerdy dream :).

Anyway, it is very exciting to work for Mozilla serving over 400 million Firefox users. I am looking forward to this new chapter in my career!