Yunier José Sosa VázquezNew Firefox version adds more security

Firefox Update

Today Mozilla has released a new Firefox, and you can already download it from our Downloads section.

As the title says, the highlights among the new features are additions that will help you browse more securely. Among them you can find:

  • HTTP traffic is opportunistically encrypted when the server supports HTTP/2 AltSvc.
  • Improved protection against site impersonation via OneCRL centralized certificate revocation.
  • Disabled insecure TLS version fallback to improve site security.
  • SSL error reporting has been extended to report non-certificate errors.
  • Improved certificate and TLS communication security by removing support for DSA.

From now on, Firefox in Turkish uses Yandex as its default search engine, and Bing now uses HTTPS for secure searches.

Developers can now debug tabs opened in Chrome for desktop and mobile, and in Safari for iOS, thanks to the Valence project; they can also debug chrome:// and about: URIs from the Debugger panel; and support has been added for the CSS property display:contents.

Meanwhile, Firefox for Android now speaks Albanian [sq], Burmese [my], Lower Sorbian [dsb], Songhai [son], Upper Sorbian [hsb], and Uzbek [uz].

If you want to learn more, you can read the release notes (in English).

You can get this version from our Downloads section in Spanish and English for Linux, Mac, Windows and Android. Remember that to browse through proxy servers you must set the preference network.negotiate-auth.allow-insecure-ntlm-v1 to true in about:config.

 

Christian HeilmannRedact.js – having 60FPS across devices made simple

A lot of people are releasing amazing frameworks these days, so I thought I should have a go at an opinionated micro framework, too.

Redact.js logo

Redact.js lets you build really fast JS apps across devices, on desktop and mobile. The framework is only a few bytes and uses gulp to get minified and ready to use.

The main trick is to avoid HTML rendering before the user interacts with it. In many cases this happens by accident when some JS fails to load. I thought, however, why not grab this opportunity?

Read more about Redact.js on GitHub and download the source to use it in your own solutions.

Built with love in London, England, where it is now 00:27 on April 1st.

Justin CrawfordExperiments: Services

The vision for our Services products is to bring the power of MDN directly into professional web developers’ daily coding environments. Experiments in this area will take the form of web services built iteratively. Each iteration should either attract enthusiastic users or provide market insight that helps guide the next iteration.

In addition to exploring the market for developer services, these experiments will also explore new architectures, form factors and contribution pathways for MDN’s information products.

Four services have been identified for exploration so far.

1. Compatibility Data service

The compatibility data service (a.k.a. Browsercompat.org) is a read/write API intended to replace the tables of compatibility data that currently accompany many features in MDN’s reference documentation. The project is justified for maintenance reasons alone: Unstructured compatibility data on MDN is very difficult to keep current because it requires editors to maintain every page (in every language) where identical data might appear. It offers a fantastic opportunity to answer several questions likely to recur in MDN’s future evolution:

  • Can we maintain so-called “micro-services” without creating significant engineering overhead?
  • Can we build reasonable contribution pathways and monitoring processes around structured data residing in a separate data store?
  • Is the effort involved in “decomposing” MDN’s wiki content into structured APIs justified by the improvement in reusability such services provide?

These questions are essential to understand as we move toward a future where data are increasingly structured and delivered contextually.
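For illustration, a consumer of such a read/write API might pull structured compatibility data into its own tooling along these lines. This is a minimal sketch: the base URL, endpoint path and response fields are assumptions for the sake of example, not the actual BrowserCompat API schema.

    import requests  # pip install requests

    # Hypothetical compatibility-data API consumer. The URL and fields below
    # are illustrative assumptions, not the real browsercompat.org schema.
    BASE_URL = "https://browsercompat.example.org/api/v1"

    def feature_support(feature_slug):
        """Return (browser, version, support) tuples for one web platform feature."""
        resp = requests.get(f"{BASE_URL}/features/{feature_slug}/supports", timeout=10)
        resp.raise_for_status()
        payload = resp.json()
        return [
            (row["browser"], row["version"], row["support"])
            for row in payload.get("supports", [])
        ]

    if __name__ == "__main__":
        for browser, version, support in feature_support("css-display-contents"):
            print(f"{browser} {version}: {support}")

The point of a structured service like this is exactly that reuse: the same data could render a table on MDN, feed an editor plugin, or drive automated checks.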

Status of this experiment: Underway and progressing. Must achieve several major milestones before it can be assessed.

2. Security scanning service

In surveys run in Q4 2014 and Q1 2015, a large number of MDN visitors said they currently do not use a security scanning service but are motivated to do so. This experiment will give many more web developers access to security scanning tools. It will answer these questions:

  • Can we disrupt the security scanning space with tools aimed at individual web developers?
  • Can we help more web developers make their web sites more secure by providing services in a more familiar form factor?
  • Is there value in releasing services for web developers under the MDN brand?

Status of this experiment: Underway and progressing toward an MVP release. Must achieve several major milestones before it can be assessed.

3. Compatibility scanning service

In the surveys mentioned above, a large number of MDN visitors said they currently do not use a compatibility scanning tool but are motivated to do so. This experiment will build such a tool using a variety of existing libraries. It will answer these questions:

  • Are web developers enthusiastic about using a tool that promises to make their web sites more compatible across devices?
  • What form factor is most effective?
  • Can we successfully create automation from MDN’s information products and contribution workflows?

Status of this experiment: MVP planned for Q2/Q3 2015.

4. Accessibility scanning service

Also in the surveys mentioned above, a large number of MDN visitors said they currently do not scan for accessibility but are motivated to do so. This experiment will build an accessibility scanning service that helps answer the questions above, as well as:

  • If the tool fits into their workflow, will more developers make their web sites more accessible?

Status of this experiment: MVP planned for Q2/Q3 2015.

The market success of any of the latter three services would make possible an additional experiment:

5. Revenue

Professional web developers are accustomed to paying for services that increase their capacity to deliver high-quality professional results. The success of such services as Github, Heroku, NewRelic and many others is evidence of this.

MDN services that bring the high quality of MDN into professional web developers’ workflows may be valuable enough to generate revenue for Mozilla. This possibility depends on reaching a number of important milestones before it is feasible, such as…

  • Market demand for services built
  • Community discussion about paid services under the MDN banner
  • Analysis of appropriate pricing and terms
  • Integration with payment systems

In other words, this cannot happen until services prove themselves valuable. Meanwhile, simply discussing it is an experiment in itself: Is it conceivable for MDN to generate revenue with valuable developer-facing services?

Justin CrawfordExperiments: Reference

The vision of MDN’s Reference product is to use the power of MDN to build the most accessible, authoritative source of information about web standard technologies for web developers. Accomplishing this vision means optimizing and improving on the product’s present success.

Optimization requires measurement, and MDN’s current measurements need improvement. Below I describe two measurement improvements underway, plus a few optimization experiments:

1. Helpfulness Ratings

Information quality is the essential feature of any reference, but MDN currently does not implement direct quality measures. Bug 1032455 hypothesizes that MDN’s audience would provide qualitative feedback that will help measure and improve MDN’s content quality. But qualitative feedback is a new feature on MDN that we need to explore. Comment 37 on that bug suggests that we use a 3rd-party “micro-survey” widget to help us understand how to get the most from this mechanism before we implement it in our own codebase. The widget will help us answer these critical questions:

  • How can we convince readers to rate content? (We can experiment with different calls to action in the widget.)
  • How do we make sense of ratings? (We can tune the questions in the widget until their responses give us actionable information.)
  • How can we use those ratings to improve content? (We can design a process that turns good information gleaned from the widget into a set of content improvement opportunities; we can solicit contributor help with those opportunities.)
  • How will we know it is working? (We can review revisions before and after the widget’s introduction; our own qualitative assessment should be enough to validate whether a qualitative feedback mechanism is worth more investment.)

If the 3rd-party widget and lightweight processes we build around it make measurable improvements, we may wish to invest more heavily into…

  • a proprietary micro-survey tool
  • dashboards for content improvement opportunities
  • integration with MDN analytics tools

Status of this experiment: MDN’s product council has agreed with the proposal and vendor review bugs for the 3rd party tool are filed.

2. Metrics Dashboard
In an earlier post I depicted the state of MDN’s metrics with this illustration:

metrics_status

The short summary of this is, MDN has not implemented sufficient measures to make good data-driven decisions. MDN doesn’t have any location to house most of those measurements. Bug 1133071 hypothesizes that creating a place to visualize metrics will help us identify new opportunities for improvement. With a metrics dashboard we can answer these questions:

  • What metrics should be on a metrics dashboard?
  • Who should have access to it?
  • What metrics are most valuable for measuring the success of our products?
  • How can we directly affect the metrics we care about?

Status of this experiment: At the 2015 Hack on MDN meetup, this idea was pitched and undertaken. A pull request attached to bug 973612 includes code to extract data from the MDN platform and add it to Elasticsearch. Upcoming bugs will create periodic jobs to populate the Elasticsearch index, create a Kibana dashboard for the data and add it (via iframe) to a page on MDN.
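As a rough sketch of the kind of plumbing involved (not the actual pull request), a periodic job might push daily metrics from the MDN platform into Elasticsearch along these lines; the cluster URL, index name and field names are made up for illustration:

    from datetime import datetime, timezone

    from elasticsearch import Elasticsearch  # pip install elasticsearch

    # Cluster URL, index name and field names are illustrative only.
    es = Elasticsearch(["http://localhost:9200"])

    def index_daily_metrics(day, revisions, new_contributors):
        """Store one day's worth of (made-up) MDN metrics as a single document."""
        doc = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "day": day,
            "revisions": revisions,
            "new_contributors": new_contributors,
        }
        # Older clients take body=; elasticsearch-py 8.x prefers document=.
        es.index(index="mdn-metrics", id=day, body=doc)

    if __name__ == "__main__":
        index_daily_metrics("2015-04-01", revisions=412, new_contributors=9)

Kibana can then visualize whatever lands in that index, which is all a first dashboard really needs.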

3. Social Sharing
For user-generated content sites like MDN, social media is an essential driver of traffic. People visiting a page may share it with their social networks, and those shares will drive more traffic to MDN. But MDN lacks a social sharing widget (among other things common to user-generated content sites):

feature_status

Bug 875062 hypothesizes that adding a social sharing widget to MDN’s reference pages could create 20 times more social sharing than MDN’s current average. Since that bug was filed MDN saw some validation of this via the Fellowship page. That page included a social sharing link at the bottom that generated 10 times as many shares as MDN’s average. This experiment will test social sharing and answer questions such as…

  • What placement/design is the most powerful?
  • What pages get the most shares and which shares get the most interaction?
  • Can we derive anything meaningful from the things people say when they share MDN links?

Status of this experiment: The code for social sharing has been integrated into the MDN platform behind a feature flag. Bug 1145630 proposes to split-test placement and design to determine the optimal location before final implementation.
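For readers unfamiliar with split testing, a deterministic bucketing function along these lines is a common way to assign visitors to variants; this is a generic sketch, not the implementation proposed in bug 1145630, and the variant names are invented:

    import hashlib

    # Variant names are made up for illustration.
    VARIANTS = ["top-of-article", "bottom-of-article", "floating-sidebar"]

    def assign_variant(visitor_id, experiment="social-sharing-placement"):
        """Deterministically map a visitor to one variant of the experiment."""
        digest = hashlib.sha256(f"{experiment}:{visitor_id}".encode("utf-8")).hexdigest()
        return VARIANTS[int(digest, 16) % len(VARIANTS)]

    # The same visitor always lands in the same bucket, so their experience
    # stays consistent across page views while the population splits evenly.
    print(assign_variant("anonymous-session-1234"))

Deterministic bucketing keeps each visitor’s experience stable while the share counts for each placement accumulate.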

4. Interactive Code Samples

Popular online code sandboxes like Codepen.io and JSFiddle let users quickly experiment with code and see its effects. Some of MDN’s competitors also implement such a feature. Surveys indicate that MDN’s audience considers this a gap in MDN’s features. Anecdotes indicate that learners consider this feature essential to learning. Contributors also might benefit from using a code sandbox for composing examples since such tools provide validation and testing opportunities.

These factors suggest that MDN should implement interactive code samples, but they imply a multitude of use cases that do not completely overlap. Bug 1148743 proposes to start with a lightweight implementation serving one use case and expand to more as we learn more. It will create a way for viewers of a code sample in MDN to open the sample in JSFiddle. This experiment will answer these questions:

  • Do people use the feature?
  • Who uses it?
  • How long do they spend tinkering with code in the sandbox?
  • Was it helpful to them?

The 3rd party widget required for the Helpfulness Ratings experiment can power the qualitative assessment necessary to know how this feature performs with MDN’s various audiences. If it is successful, future investment in this specific approach (or another similar approach) could…

  • Allow editors of a page to open samples in JSFiddle from the editing interface
  • Allow editors of a sample to save it to an MDN page
  • Create learning exercises that implement the sandbox

Status of this experiment: A pull request attached to Bug 1148743 will make this available for testing by administrators.

5. Akismet spam integration

Since late 2014 MDN has been the victim of a persistent spam attack. Triaging this spam is a constant vigil for MDN contributors and staff. Most of the spam is blatant: It seems likely that a heuristic spam detection application could spare the human triage team some work. Bug 1124358 hypothesizes that Akismet, a popular spam prevention tool, might be up to the task. Implementing this bug will answer just one question:

  • Can Akismet accurately flag spam posts like the ones MDN’s triage team handles, without improperly flagging valid content?
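To make that question concrete, here is a minimal sketch of a check against Akismet’s public comment-check endpoint. The wrapper function, field values and site URL are illustrative; MDN’s actual integration would live in the Kuma platform and feed off wiki revisions rather than blog comments.

    import requests  # pip install requests

    AKISMET_API_KEY = "your-api-key"            # placeholder
    SITE_URL = "https://developer.mozilla.org"  # the "blog" Akismet judges against

    def looks_like_spam(content, author, user_ip, user_agent):
        """Ask Akismet's comment-check endpoint whether a submission looks like spam."""
        resp = requests.post(
            f"https://{AKISMET_API_KEY}.rest.akismet.com/1.1/comment-check",
            data={
                "blog": SITE_URL,
                "user_ip": user_ip,
                "user_agent": user_agent,
                "comment_type": "comment",
                "comment_author": author,
                "comment_content": content,
            },
            timeout=10,
        )
        resp.raise_for_status()
        # Akismet answers with a plain "true" (spam) or "false" (not spam).
        return resp.text.strip() == "true"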

Status of this experiment: Proposed. MDN fans and contributors with API development experience are encouraged to reach out!

Monica ChewTwo Short Stories about Tracking Protection

Here are two slide decks I made about why online tracking is a privacy concern, and a metaphor for how tracking works.

[Animated gif version]



[Animated gif version]

Nathan Froydtsan bug finding update

At the beginning of Q1, I set a goal to investigate races with Thread Sanitizer and to fix the “top” 10 races discovered with the tool.  Ten races seemed like a conservative number; we didn’t know how many races there were, their impact, or how difficult fixing them would be.  We also weren’t sure how much participation we could count on from area experts, and it might have turned out that I would spend a significant amount of time figuring out what was going on with the races and fixing them myself.

I’m happy to report that according to Bugzilla, nearly 30 of the races reported this quarter have been fixed.  Folks in the networking stack, JavaScript GC, JavaScript JIT, and ImageLib have been super-responsive in addressing problems.  There are even a few bugs in the afore-linked query that have been fixed by folks on the WebRTC team running TSan or similar thread-safety tools (Clang’s Thread Safety Analysis, to be specific) to detect bugs themselves, which is heartening to see.  And it’s also worth mentioning that at least one of the unfixed DOM media bugs detected by TSan has seen some significant threadsafety refactoring work in its dependent bugs.  All this attention to data races has been very encouraging.

I plan on continuing the TSan runs in Q2 along with getting TSan-on-Firefox working with more-or-less current versions of Clang.  Having to download specific Subversion revisions of the tools or precompiled Clang 3.3 (!) binaries to make TSan work is discouraging, and that could obviously work much better.

Chris CooperThe changing face of buildduty

Buildduty is the friendly face of Mozilla release engineering that contributors see first. Whether you need a production machine to debug a failure, your try server push is slow, or hg is on the fritz, we’re the ones who dig in, help, and find out why. The buildduty role is almost entirely operational and interrupt-driven: we respond to requests as they come in.

We think it’s important for everyone in release engineering to rotate through the role so they can see how the various systems interact (and fail!) in production. It also allows them to forge the relationships with other teams — developers, sheriffs, release management, developer services, IT — necessary to fix failures when they occur. The challenge has been finding a suitable tenure for buildduty that allows us to make quantifiable improvements in the process.

Originally the tenure for buildduty was one week. This proved to be too short, and often conflicted with other work. Sometimes a big outage would make it impossible to tackle any other buildduty tasks in a given week. Some people were more conscientious than others about performing all of the buildduty tasks. Work tended to pile up until one of those conscientious people cycled through buildduty again. There were enough people on the team that each person might not be on buildduty more than once a quarter. One week was not long enough to become proficient at any of the buildduty tasks. We made almost no progress on process during this time, and our backlog of work grew.

In September of last year, we made buildduty a quarter-long (3 month) commitment. This made it easy to plan quarterly goals for the people involved in buildduty, but also proved hard to swallow for release engineers who were more used to doing development work than operational work. 3 months was too long, and had the potential to burn people out.

One surprising development was that even though the duration of buildduty was longer, it didn’t necessarily translate to process improvements. The volume of interrupts has been quite high over the past 6 months, so despite some standout innovations in machine health monitoring and reconfig automation, many buildduty focus areas still lack proper tooling.

Now we’re trying something different.

For the next 3 months, we’ve changed the buildduty tenure to be one month. This will allow more release engineers to rotate through the position more quickly, but hopefully still give them each enough time in the role to become proficient.

To address the tooling deficiency, we also created an adjunct role called “buildduty tools.” The buildduty person from one month will automatically rotate into the buildduty tools role for the following month. While in the buildduty tools role, you assist the front-line buildduty person as required, but primarily you write tools or fix bugs that you wish had existed when you were doing the front-line support the month before.

Hopefully this will prove to be the “Goldilocks” zone for buildduty.

Without further ado, here’s the buildduty schedule for Q2:

  • April: Massimo, with Callek in buildduty tools
  • May 1-22: Selena, with Massimo in buildduty tools
  • May 25-June 19: Kim, with Selena in buildduty tools
  • June 22-30: me. I’m not going to Whistler, so I’ll be back-stopping buildduty while everyone else is in BC.

This is also in the shared buildduty Google calendar.

Callek will be covering afternoons PT for Massimo in April because, honestly, that’s when most of the action happens anyway, and it would be irresponsible to not have coverage during that part of the day.

Massimo starts his buildduty tenure tomorrow. It’s hard but rewarding work. Please be gentle as he finds his feet.

Air MozillaMartes mozilleros

Martes mozilleros: a bi-weekly meeting to talk about the state of Mozilla, the community and its projects.

The Mozilla BlogNew Firefox Releases Now Available

New versions of Firefox for Windows, Mac, Linux and Android are now available to update or download. For more info on what’s new in Firefox, please see the release notes for Firefox and Firefox for Android.

Mozilla Science LabNew Workshop on Negative Results in e-Science

This guest post is by Ketan Maheshwari, Daniel S. Katz,  Justin Wozniak, Silvia Delgado Olabarriaga, and Douglas Thain on the ERROR Conference, 3 September 2015.

Introduction

Edison performed 10,000 failed experiments before successfully creating the long-lasting electrical light bulb. While Edison meticulously kept a list of failed experiments, a wider dissemination of earlier failures might have led to a quicker invention of the bulb and related technologies. Scientists learn a great deal from their own mistakes, as well as from the mistakes of others.  The pervasive use of computing in science, or “e-science,” is fraught with complexity and is extremely sensitive to technical difficulties, leading to many missteps and mistakes. Our new workshop intends to treat this as a first-class problem, by focusing on the hard cases where computing broke down. We believe that the computational processes or experiments that yielded negative results can be a source of information for others to learn from.

Why it’s time for this workshop

  1. Publicizing negative results leads to quicker and more critical evaluation of new techniques, tools, technologies, and ideas by the community.
  2. Negative results and related issues are real and happen frequently. A publication bias towards positive results hurts progress since not enough people learn from these experiences.
  3. We want to get something valuable out of failed experiments and processes. This redeems costs, time and agony. Analysis of these failures helps narrow down possible causes and hastens progress.
  4. We want to promote a culture of accepting, analyzing, communicating and learning from negative results.

The ERROR Workshop

The 1st E-science ReseaRch leading tO negative Results (ERROR) workshop (https://press3.mcs.anl.gov/errorworkshop), to be held in conjunction with the 11th IEEE International Conference on eScience (http://escience2015.mnm-team.org) on 3 September 2015 in Munich, Germany, will provide the community with a dedicated and active forum for exchanging cross-discipline experiences in research leading to negative results.

The ERROR workshop aims to provide a forum for researchers who have invested significant effort in a piece of work that failed to bear the expected fruit. The focus is on various aspects of negative results, such as premises and assumptions made, divergence between expected and actual outcomes, possible causes and remedial actions to avoid or prevent such situations, and possible course corrections. Both applications and systems areas are covered, including topics in research methodology, reproducibility, the applications/systems interface, resilience, fault tolerance and social problems in computational science, and other relevant areas. We invite original work in an accessible format of 8 pages.

Rail AliievTaskcluster: First Impression

Good news. We decided to redesign Funsize a little and now it uses Taskcluster!

The nature of Funsize is that we may start hundreds of jobs at the same time, then stop sending new jobs and wait for hours. In other words, the service is very bursty. Elastic Beanstalk is not ideal for this use case. Scaling up and down very fast is hard to configure using EB-only tools. Also, running zero instances is not easy.

I tried using Terraform, Cloud Formation and Auto Scaling, but they were also not well suited. There were too many constraints (e.g. Terraform doesn't support all needed AWS features) and they required considerable bespoke setup/maintenance to auto-scale properly.

The next option was Taskcluster, and I was pleased that its design fitted our requirements very well! I was impressed by the simplicity and flexibility offered.

I have implemented a service which consumes Pulse messages for particular buildbot jobs. For nightly builds, it schedules a task graph with three tasks:

  • generate a partial MAR
  • sign it (at the moment a dummy task)
  • publish to Balrog

All tasks are run inside Docker containers which are published on the docker.com registry (other registries can also be used). The task definition essentially consists of the docker image name and a list of commands it should run (usually this is a single script inside a docker image). In the same task definition you can specify what artifacts should be published by Taskcluster. The artifacts can be public or private.
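To make that concrete, here is a hedged sketch of what one of those task definitions might look like. The field names follow the general shape of the Taskcluster task schema of the time, but treat them as approximations rather than Funsize's actual definitions, and check the Taskcluster documentation for the exact schema.

    import datetime
    import json
    import uuid

    def _iso(dt):
        return dt.strftime("%Y-%m-%dT%H:%M:%SZ")

    def make_task(docker_image, command):
        """Build a rough, illustrative Taskcluster task definition."""
        now = datetime.datetime.now(datetime.timezone.utc)
        return {
            "provisionerId": "aws-provisioner",   # illustrative values, not Funsize's
            "workerType": "funsize",
            "created": _iso(now),
            "deadline": _iso(now + datetime.timedelta(hours=1)),
            "payload": {
                "image": docker_image,            # any image from a Docker registry
                "command": command,               # usually a single script in the image
                "maxRunTime": 3600,
            },
            "metadata": {
                "name": "Generate partial MAR",
                "description": "Funsize partial update generation (example)",
                "owner": "release@example.com",
                "source": "https://example.com/funsize",
            },
        }

    # Task IDs can be generated client-side before submission ("fire and forget",
    # see below). Real Taskcluster slug IDs are URL-safe base64-encoded UUIDs;
    # a plain UUID here just illustrates the idea.
    task_id = str(uuid.uuid4())
    print(task_id)
    print(json.dumps(make_task("example/funsize-update-generator", ["/runme.sh"]), indent=2))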

Things that I really liked

  • Predefined task IDs. This is a great idea! There is no need to talk to the Taskcluster APIs to get the ID (or multiple IDs for task graphs), nor to parse the response. Fire and forget! The task IDs can be used in different places, like artifact URLs, dependent tasks, etc.
  • Task graphs. This is basically a collection of tasks that can run in parallel and can depend on each other. This is a nice way to declare your jobs and know them in advance. If needed, a task graph can be extended dynamically by its own tasks (decision tasks).
  • Simplicity. All you need is to generate a valid JSON document and submit it to Taskcluster using the HTTP API.
  • User defined docker images. One of the downsides of Buildbot is that you have a predefined list of slaves with predefined environments (OS, installed software, etc). Taskcluster leverages Docker by default to let you use your own images.

Things that could be improved

  • Encrypted variables. I spent 2-3 days fighting with the encrypted variables. My scheduler was written in Python, so I tried to use a half dozen different Python PGP libraries, but for some reason all of them were generating an incompatible OpenPGP format that Taskcluster could not understand. This forced me to rewrite the scheduling part in Node.js using openpgpjs. There is a bug to address this problem globally. Also, using ISO time stamps would have saved me hours of time. :)
  • It would be great to have a generic scheduler that doesn't require third-party Taskcluster consumers to write their own daemons watching for changes (AMQP, VCS, etc.) in order to generate tasks. This would lower the entry barrier for beginners.

Conclusion

There are many other things that can be improved (and I believe they will!) - Taskcluster is still a new project. Regardless of this, it is very flexible, easy to use and develop. I would recommend using it!

Many thanks to garndt, jonasfj and lightsofapollo for their support!

Justin CrawfordMDN Product Talk: Vision

As I wrote this post, MDN’s documentation wiki hit a remarkable milestone: For the first time ever MDN saw more than 4 million unique visitors in a single month.

I always tell people, if we have a good quarter it’s because of the work we did three, four, and five years ago. It’s not because we did a good job this quarter.

- Jeff Bezos


Mozilla’s MDN project envisions a world where software development begins with web development — a world where developers build for the web by default. That is our great ambition, an audacious summary of MDN’s raison d’être. I discussed MDN’s business strategy at length in an earlier post. In this post I will talk about MDN’s product strategy.

Several posts ago I described MDN as a product ecosystem — “…a suite of physical products that fits into a bigger ecosystem that may entail services and digital content and support”, to use one designer’s words. The components of MDN’s product ecosystem — audience, contributors, platform, products, brand, governance, campaigns, and so forth — are united by a common purpose: to help people worldwide understand and build things with standard web technologies.

The Future

The efforts we undertake now must help people years hence. But projecting even a few years into the future of technology is … challenging, to say the least. It is also exactly what MDN’s product vision must do. So what does the future look like?

Looking at the future, we see futuristic shapes emerging from fog. We can’t tell yet whether they are riding hoverboards or wearing virtual reality headsets in self-driving cars. Maybe they are piloting webcam-equipped moth cyborgs. Will hoverboards implement web-standard APIs? Will MDN contributors need watch-optimized contribution pathways? We cannot know now.

We can be confident about a few things:

  1. Future information tools will take a marvelous variety of forms, from fashion accessories to appliances to autonomous vehicles. There is no replacement for the web; it will appear in some shape on all of these devices.
  2. Future information tools will deliver information when and where it is needed. Digging for information in search results will be less common than it is today, even among web developers.
  3. The future will have even more demand for capable web developers than the present has. Many of them will read documentation in their own language.

The three MDN products under heavy development now — the mature Reference product (MDN’s documentation wiki) and the new Services and Learning products — will evolve to meet this future:

Reference

In the future, web developers will still need a source of truth about open web technology development. MDN’s last 10 years of success have established it as that source of truth. Millions of web developers choose MDN over other online sources because, as a reference, MDN is more authoritative, more comprehensive, and more global. The vision of our Reference product is to use the power of MDN to build the most accessible, authoritative source of information about standard and emerging web technologies for web developers.

Services

In the future, MDN will still be an information resource, but its information will take different shapes depending on how and where it is accessed. MDN may look like the present-day reference product when it is rendered in a web browser; but sometimes it may render in a browser’s developer tools, in a pluggable code editor, or elsewhere. In those instances the information presented may be more focused on the things developers commonly need while coding. Some developers may still access MDN via search results; others will get something from MDN the moment they need it, in the context where it is most helpful. MDN’s articles will be used for learning and understanding; but subsets of MDN’s information may also power automation that enhances productivity and quality. These new uses all share one characteristic: They bring MDN’s information closer to developers through a service architecture. The vision for our Services products is to bring the power of MDN directly into professional web developers’ daily coding environments.

Learning

The future’s web developers are learning web development right now. MDN’s present-day material is already essential to many of them even though it is aimed at a more advanced audience. MDN’s new learning content and features will deliver beginner-focused content that authoritatively covers the subjects essential to becoming a great web developer. Unlike many other online learning tools, MDN needn’t keep learners inside the platform: We can integrate with any 3rd-party tool that helps learners become web developers, and we can create opportunities for web developers to learn from each other in person. The vision for our Learning products is to use the power of MDN to teach web development to future web developers.

The power of MDN

The success of all three products depends on something I above call “the power of MDN” — something that sets MDN apart from other sources of information about web development.

I have previously described information about web development as an “oral tradition”. Web development is a young, complex and constantly changing field. It is imperfectly documented in innumerable blogs, forums, Q&A sites and more. MDN’s unique power is its ability to aggregate the shared experience of web developers worldwide into an authoritative catalog of truth about web technologies.

This aspect of MDN is constant: We carry it with us into the future, come what may. For any MDN product to succeed at scale it must implement a contribution pathway that allows web developers to directly contribute their knowledge about web development. MDN’s products advance the field of web development at a global scale by sharing essential information discovered through the collective experience of the world’s web developers.

Together we are advancing the field. In 10 years web development will be concerned with new questions and new challenges, thanks to the state of the art that MDN aggregates and promulgates. We build on what we know; we share what we know on MDN. Here’s to another 10 years!

Mark SurmanBuilding an Academy

Last December in Portland, I said that Mozilla needs a more ambitious stance on how we teach the web. My argument: the web is at an open vs. closed crossroads, and helping people build know-how and agency is key if we want to take the open path. I began talking about Mozilla needing to do something in ‘learning’ in ways that can have  the scale and impact of Firefox if we want this to happen.

Mozilla Academy

The question is: what does this look like? We’ve begun talking about developing a common approach and brand for all our learning efforts: something like Mozilla University or Mozilla Academy. And we have a Mozilla Learning plan in place this year to advance our work on Webmaker products, Mozilla Clubs (aka Maker Party all year round), and other key building blocks. But we still don’t have a crisp and concrete vision for what all this might add up to. The idea of a global university or academy begins to get us there.

My task this quarter is to take a first cut at this vision — a consolidated approach  for Mozilla’s efforts in learning. My plan is to start a set of conversations that get people involved in this process. The first step is to start to document the things we already know. That’s what this post is.

What’s the opportunity?

First off, why are we even having this conversation? Here’s what we said in the Mozilla Learning three-year plan:

Within 10 years there will be five billion citizens of the web. Mozilla wants all of these people to know what the web can do. What’s possible. We want them to have the agency, tools and know-how they need to unlock the full power of the web. We want them to use the web to make their lives better. We want them to be full citizens of the web.

We wrote this paragraph right before Portland. I’d be interested to hear what people think about it a few months on?

What do we want to build?

The thing is even if we agree that we want everyone to know what the web can do, we may not yet agree on how we get there. My first cut at what we need to build is this:

By 2017, we want to build a Mozilla Academy: a global classroom and lab for the citizens of the web. Part community, part academy, people come to Mozilla to unlock the power of the web for themselves, their organizations and the world.

This language is more opinionated than what’s in the Mozilla Learning plan: it states we want a global classroom and lab. And it suggests a name.

Andrew Sliwinski has pointed out to me that this presupposes we want to engage primarily with people who want to learn. And, that we might move toward our goals in other ways, including using our product and marketing to help people ‘just figure the right things out’ as they use the web. I’d like to see us debate these two paths (and others) as we try to define what it is we need to build. By the way, we also need to debate the name — Mozilla Academy? Mozilla University? Something else?

What do we want people to know?

We’re fairly solid on this part: we want people to know that the web is a platform that belongs to all of us and that we can all use to do nearly anything.

We’ve spent three years developing Mozilla’s web literacy map to describe exactly what we mean by this. It breaks down ‘what we want people know’ into three broad categories:

  • Exploring the web safely and effectively
  • Building things on the web that matter to you and others
  • Participating on the web as a critical, collaborative human

Helping people gain this know-how is partly about practical skills: understanding enough of the technology and mechanics of the web so they can do what they want to do (see below). But it is also about helping people understand that the web is based on a set of values — like sharing information and human expression — that are worth fighting for.

How do we think people learn these things?

Over the last few years, Mozilla and our broader network of partners have been working on what we might call ‘open source learning’ (my term) or ‘creative learning’ (Mitch Resnick’s term, which is probably better :)). The first principles of this approach include:

  • Learn by making things
  • Make real shit that matters
  • Do it with other people (or at least with others nearby)

There is another element though that should be manifested in our approach to learning, which is something like ‘care about good’ or even ‘care about excellence’ — the idea that people have a sense of what to aspire to and feedback loops that help them know if they are getting there. This is important both for motivation and for actually having the impact on ‘what people know’ that we’re aiming for.

My strong feeling is that this approach needs to be at the heart of all Mozilla’s learning work. It is key to what makes us different than most people who want to teach about the web — and will be key to success in terms of impact and scale. Michelle Thorne did a good post on how we embrace these principles today at Mozilla. We still need to have a conversation about how we apply this approach to everything we do as part of our broader learning effort.

How much do we want people  to know?

Ever since we started talking about learning five years ago, people have asked: are you saying that everyone on the planet should be a web developer? The answer is clearly ‘no’. Different people need — and want — to understand the web at different levels. I think of it like this:

  • Literacy: use the web and create basic things
  • Skill: know how that gets you a better job / makes your life better
  • Craft: expert knowledge that you hone over a lifetime

There is also a piece that includes  ‘leadership’ — a commitment and skill level that has you teaching, helping, guiding or inspiring others. This is a fuzzier piece, but very important and something we will explore more deeply as we develop a Mozilla Academy.

We want a way to engage with people at all of these levels. The good news is that we have the seeds of an approach for each. SmartOn is an experiment by our engagement teams to provide mass-scale web literacy in-product and through marketing. Mozilla Clubs, Maker Party and our Webmaker Apps offer deeper web literacy and basic skills. MDN and others are thinking about teaching web developer skills and craft. Our fellowships do the same, although they use a lab method rather than teaching. What we need now is a common approach and brand like Mozilla Academy that connects all of these activities and speaks to a variety of audiences.

What do we have?

It’s really worth making this point again: we already have much of what we need to build an ambitious learning offering. Some of the things we have or are building include:

We also have an increasingly good reputation among people who care about and  fund learning, education and empowerment programs. Partners like MacArthur Foundation, UNESCO, the National Writing Project and governments in a bunch of countries. Many of these organizations want to work with us to build — and be a part of — a more ambitious approach teaching people about the web.

What other things are we thinking about?

In addition to the things we have in hand, people across our community are also talking about a whole range of ideas that could fit into something like a Mozilla Academy. Things I’ve heard people talking about include:

  • Basic web literacy for mass market (SmartOn)
  • Web literacy marketing campaigns with operators
  • Making and learning tools in Firefox (MakerFox)
  • MDN developer conference
  • Curriculum combining MDN + Firefox Dev Edition
  • Developer education program based on Seneca model
  • A network of Mozilla alumni who mentor and coach
  • Ways to help people get jobs based on what they’ve learned
  • Ways to help people make money based on what they’ve learned
  • Ways for people to make money teaching and mentoring with Mozilla
  • People teaching in Mozilla spaces on a regular basis
  • Advanced leadership training for our community
  • Full set of badges and credentials

Almost all of these ideas are at a nascent stage. And many of them are extensions or versions of the things we’re already doing, but with an open source learning angle. Nonetheless, the very fact that these conversations are actively happening makes me believe that we have the creativity and ambition we need to build something like a Mozilla Academy.

Who is going to do all this?

There is a set of questions that starts with ‘who is the Mozilla Academy?’ Is it all people who are flag waving, t-shirt donning Mozillians? Or is it a broader group of people loosely connected under the Mozilla banner but doing their own thing?

If you look at the current collection of people working with Mozilla on learning, it’s both. Last year, we had nearly 10,000 contributors working with us on some aspect of this ‘classroom and lab’ concept. Some of these people are Mozilla Reps, Firefox Student Ambassadors and others heavily identified as Mozillians. Others are teachers, librarians, parents, journalists, scientists, activists and others who are inspired by what we’re doing and want to work alongside us. It’s a big tent.

My sense is that this is the same sort of mix we need if we want to grow: we will want a core set of dedicated Mozilla people and a broader set of people working with us in a common way for a common cause. We’ll need a way to connect (and count) all these people: our tools, skills framework and credentials might help. But we don’t need them all to act or do things in exactly the same way. In fact, diversity is likely key to growing the level of scale and impact we want.

Snapping it all together

As I said at the top of this post, we need to boil all this down and snap it into a crisp vision for what Mozilla — and others — will build in the coming years.

My (emerging) plan is to start this with a series of blog posts and online conversations that delve deeper into the topics above. I’m hoping that it won’t just be me blogging — this will work best if others can also riff on what they think are the key questions and opportunities. We did this process as we were defining Webmaker, and it worked well. You can see my summary of that process here.

In addition, I’m going to convene a number of informal roundtables with people who might want to participate and help us build Mozilla Academy. Some of these will happen opportunistically at events like eLearning Africa in Addis and the Open Education Global conference in Banff that are happening over the next couple of months. Others will happen in Mozilla Spaces or in the offices of partner orgs. I’ll write up a facilitation script so other people can organize their own conversations, as well. This will work best if there is a lot of conversation going on.

In addition to blogging, I plan to report out on progress at the Mozilla All-Hands work week in Whistler this June. By then, my hope is that we have a crisp vision that people can debate and get involved in building out. From there, I expect we can start figuring out how to build some of the pieces we’ll need to pull this whole idea together in 2016. If all goes well, we can use MozFest 2015 as a bit of a barn raising to prototype and share out some of these pieces.

Process-wise, we’ll use the Mozilla Learning wiki to track all this. If you write something or are doing an event, post it there. And, if you post in response to my posts, please put a link to the original post so I see the ping back. Twittering #mozacademy is also a good thing to do, at least until we get a different  name.

Join me in building Mozilla Academy. It’s going to be fun. And important.


Filed under: mozilla

QMOMarcela Oniga: open source fan, Linux enthusiast and proud Mozillian

Marcela Oniga has been involved with Mozilla since 2011. She is from Cluj Napoca, Romania and works as a software quality assurance engineer in Bucharest. She has solid Linux system administration skills, based on which she founded her own web hosting company that provides VPS and managed hosting services. She keeps herself focused by being actively involved in many challenging projects. Volunteering is one of her favorite things to do. In her spare time, she plays ping pong and lawn tennis.

Marcela Oniga is from Romania in eastern Europe.

Hi Marcela! How did you discover the Web?

I guess I discovered the Web when I first installed Firefox. Before that, I had read articles about the Internet in computer magazines.

How did you hear about Mozilla?

I heard about Mozilla in 2010. This was a time when open source conferences and events in Romania were not so popular. Now it is very easy to teach and learn about Mozilla projects; Mozillians are all over.

How and why did you start contributing to Mozilla?

I’m passionate about technology and I’m a big fan of open source philosophy. This is one of the reasons why I founded a non-profit organization called Open Source Open Mind (OSOM) in 2010. OSOM compelled me to contribute to open source projects. Along with other fans of open source, I have organized FLOSS events in many cities across Romania. We organize an annual OSOM conference, through which we support and promote Free, Libre and Open Source Software.

Marcela Oniga speaks at the Open Source Open Mind conference in February 2013.

Ubuntu was my first open source project. I’m a big Ubuntu fan, Ubuntu Evangelist, an active member of Ubuntu LoCo Romania and also part of the Ubuntu-Women project.

In 2011 Ioana Chiorean and many others started to rebuild the tech community in Romania. At that point Mozilla became the obvious choice for me. I knew it was one of the biggest open-source software projects around and that it was very community-driven.

Have you contributed to any other Mozilla projects in any other way?

I performed quality assurance activities for Firefox for Android and Firefox OS. I also contribute a bit to SUMO and the Army of Awesome. I’m a part of WoMoz, through which I attended AdaCamp, an unconference dedicated to increasing gender diversity in open technology and culture.

Marcela Oniga at AdaCamp 2013, San Francisco. During the make-a-thon session she created a robot badge whose eyes are small LEDs.

What’s the contribution you’re the most proud of?

Firefox OS is my favorite project in Mozilla. I’m very excited about having the chance to meet and work with the Firefox OS QA team. I received lot of support from the team. I’m proud and happy that my small contribution to Firefox OS as a quality assurance engineer matters.

You belong to the Mozilla Romania community. Please tell us more about your community. Is there anything you find particularly interesting or special about it?

We are a bunch of different people with different ideas and views, but we all have the same mission: to grow Mozilla and help the open web.

Marcela Oniga along with Alina Mierlus and Alex Lakatos at the Mozilla Romania booth during the World Fair, Mozilla Summit 2013.

What’s your best memory with your fellow community members?

My best memory is a trip I took back in 2013 with the coolest Mozilla Romania community members. We went to Fundatica, a commune in the historic region of Transylvania. Best weekend ever!

Marcela Oniga on a 2013 trip to Fundatica, Brașov County, Romania with members of the Mozilla Romania community.

What advice would you give to someone who is new and interested in contributing to Mozilla?

I advise everyone to contribute to open source projects, especially to Mozilla. It is an opportunity to learn something new; it’s fun and interesting and you can only gain from it.

Marcela Oniga with other members of WoMoz – Ioana Chiorean, Flore Allemandou and Delphine Lebédel – in front of the San Francisco Bay Bridge in June 2013.

If you had one word or sentence to describe Mozilla, what would it be?

Open minded community that’s making the web a better place.

What exciting things do you envision for you and Mozilla in the future?

I believe Mozilla’s future is bright. Millions of people around the world will help push the open web forward through amazing open source software and new platforms and tools.

Is there anything else you’d like to say or add to the above questions?

Let’s keep the web open :)


The Mozilla QA and Tech Evangelism teams would like to thank Marcela Oniga for her contribution over the past 4 years.

Marcela has been a very enthusiastic contributor to the Firefox OS project. She really “thinks like a tester” when she files a bug, and I enjoy looking at the issues she uncovers during her testing. – Marcia Knous


I met Marcela when she invited me to speak at the OSOM conference in Cluj-Napoca, Romania. It was my first time that far in Eastern Europe and I wasn’t too sure what to expect there. Granted, I had already met Ioana and a few other Mozilla reps from Romania at the whirlpool of activity which is MozFest, but sadly those were only brief interactions.

However, Marcela shone from day one. She organised everything super efficiently, told me what they needed from me, ensured all my questions were answered promptly and made us feel at home. On the day of the conference, she pulled all the strings and everything just fell into place, as if it was the most natural thing to do.

Probably the thing I liked most is that she is an accomplished, quiet leader. You don’t need to be loud and immensely popular to be successful. Passion and hard, constant work are what actually matters. Marcela is passionate about doing good and good things, and her dedication is nothing short of spectacular.

I’m equally proud and humbled that she chose to contribute to Mozilla. Thanks for being so excellent, Marcela! – Soledad Penadés

Byron Jonesbugzilla.mozilla.org’s new look

this quarter i’ve been working on redesigning how bugs are viewed and edited on bugzilla.mozilla.org — expect large changes to how bmo looks and feels!

unsurprisingly some of the oldest code in bugzilla is that which displays bugs; it has grown organically over time to cope with the many varying requirements of its users worldwide.  while there have been ui improvements over time (such as the sandstone skin), we felt it was time to take a step back and start looking at bugzilla with a fresh set of eyes. we wanted something that was designed for mozilla’s workflow, that didn’t look like it was designed last century, and would provide us with a flexible base upon which we could build further improvements.

a core idea of the design is to load the bug initially in a read-only “view” mode, requiring the user to click on an “edit” button to make most changes. this enables us to defer loading of a lot of data when the page is initially loaded, as well as providing a much cleaner and less overwhelming view of bugs.

bug-modal-1

major interface changes include:

  • fields are grouped by function, with summaries of the functional groups where appropriate
  • fields which do not have a value set are not shown
  • an overall “bug summary” panel at the top of the bug should provide an “at a glance” status of the bug

the view/edit mode:

  • allows for deferring of loading data only required while editing a bug (eg. list of all products, components, versions, milestones, etc)
    • this results in 12% faster page loads on my development system
  • still allows for common actions to be performed without needing to switch modes
    • comments can always be added
    • the assignee can change the bug’s status/resolution
    • flag requestee can set flags

bug-modal-2

you can use it today!

this new view has been deployed to bugzilla.mozilla.org, and you can enable it by setting the user preference “experimental user interface” to “on”.

you can also enable it per-bug by appending &format=modal to the url (eg. https://bugzilla.mozilla.org/show_bug.cgi?id=1096798&format=modal).  once enabled you can disable it per-bug by appending &format=default to the url.

what next?

there’s still a lot to be done before there’s feature parity between the new modal view and the current show_bug.  some of the major items missing with the initial release include:

  • cannot edit cc list (cannot remove or add other people)
  • comment previews
  • comment tagging (existing tags are shown, cannot add/delete tags)
  • cc activity is not visible
  • bulk comment collapsing/expanding (all, by tag, tbpl push bot)
  • alternative ordering of comments (eg. newest-first)
  • bmo show_bug extensions (eg mozreview, orange factor, bounty tracking, crash signature rendering)

you can view the complete list of bugs, or file a new bug if you discover something broken or missing that hasn’t already been reported.


Filed under: bmo

Byron Joneshappy bmo push day!

the following changes have been pushed to bugzilla.mozilla.org:

  • [1146806] “new bug” menu has literal “…” instead of a horizontal ellipsis
  • [1146360] remove the winqual bug entry form
  • [1147267] the firefox “iteration” and “points” fields are visible on all products
  • [1146886] after publishing a review with splinter, the ‘edit’ mode doesn’t work
  • [1138767] retry and/or avoid push_notify deadlocks
  • [1147550] Require a user to change their password if they log in and their current password does not meet the password complexity rules
  • [1147738] the “Rank” field label is visible when editing, even if the field itself isn’t
  • [1147740] map format=default to format=__default__
  • [1146762] honour gravatar visibility preference
  • [1146910] Button styles are inconsistent and too plentiful
  • [1146906] remove background gradient from assignee and reporter changes
  • [1125987] asking for review in a restricted bug doesn’t work as expected (“You must provide a reviewer for review requests” instead of “That user cannot access that bug” error)
  • [1149017] differentiate between the bug’s short-desc and the bug’s status summary in the header
  • [1149026] comment/activity buttons are not top-aligned
  • [1141770] merge_users.pl fails if the two accounts have accessed the same bug and is in the bug_interest table
  • [972040] For bugs filed against Trunk, automatically set ‘affected’ release-tracking flags
  • [1149233] Viewing a bug with timetracking information fails: file error – formattimeunit: not found
  • [1149390] “duplicates” are missing from the modal view
  • [1149038] renaming a tracking flag isn’t clearing a memcached cache, resulting in Can’t locate object method “cf_status_thunderbird_esr39″ via package “Bugzilla::Bug” errors

discuss these changes on mozilla.tools.bmo.


Filed under: bmo, mozilla

Daniel StenbergThe state and rate of HTTP/2 adoption

http2 logo

The protocol HTTP/2, as defined in draft-17, was approved by the IESG and is being implemented and deployed widely on the Internet today, even before it has turned up as an actual RFC. Back in February, upwards of 5% or maybe even more of the web traffic was already using HTTP/2.

My prediction: We’ll see >10% usage by the end of the year, possibly as much as 20-30%, depending a little on how fast some of the major and most popular platforms switch (Facebook, Instagram, Tumblr, Yahoo and others). In 2016 we might see HTTP/2 serve a majority of all HTTP requests – done by browsers at least.

Counted how? Yeah, the second I mention a rate I know you guys will start throwing me hard questions like exactly what do I mean. What is the Internet and how would I count this? Let me express it loosely: the share of HTTP requests (by volume of requests, not by bandwidth of data and not just counting browsers). I don’t know how to measure it and we can debate the numbers in December and I guess we can all end up being right depending on what we think is the right way to count!

Who am I to tell? I’m just a person deeply interested in protocols and HTTP/2, so I’ve been involved in the HTTP working group for years and I also work on several HTTP/2 implementations. You can guess as well as I can, but this just happens to be my blog!

The HTTP/2 Implementations wiki page currently lists 36 different implementations. Let’s take a closer look at the current situation and prospects in some areas.

Browsers

Firefox and Chrome have had solid support for a while now. Just use a recent version and you’re good.

Internet Explorer has been shown speaking HTTP/2 just fine in a tech preview. So, run that or wait for it to ship in a public version soon.

There is no news from Apple regarding support in Safari. Give up on them and switch over to a browser that keeps up!

Other browsers? Ask them what they do, or replace them with a browser that supports HTTP/2 already.

My estimate: By the end of 2015 the leading browsers with a market share way over 50% combined will support HTTP/2.

Server software

Apache HTTPd is still the most popular web server software on the planet. mod_h2 is a recent module for it that can speak HTTP/2 – still in “alpha” state. Give it time and help out in other ways and it will pay off.

Nginx has told the world they’ll ship HTTP/2 support by the end of 2015.

IIS was showing off HTTP/2 in the Windows 10 tech preview.

H2O is a newcomer on the market with a focus on performance, and it has shipped with HTTP/2 support for a while already.

nghttp2 offers an HTTP/2 => HTTP/1.1 proxy (and lots more) to front your old server with, and it can thus help you deploy HTTP/2 at once.

Apache Traffic Server supports HTTP/2 fine. Will show up in a release soon.

Also, netty, jetty and others are already on board.

HTTPS initiatives like Let’s Encrypt help to make it even easier to deploy and run HTTPS on your own sites, which will smooth the way for HTTP/2 deployments on smaller sites as well. Getting sites onto the TLS train will remain a hurdle and will perhaps be the single biggest obstacle to even more adoption.

My estimate: By the end of 2015 the leading HTTP server products with a market share of more than 80% of the server market will support HTTP/2.

Proxies

Squid is working on HTTP/2 support.

HAproxy? I haven’t gotten a straight answer from that team, but Willy Tarreau has been actively participating in the HTTP/2 work all the time so I expect them to have work in progress.

While very critical of the protocol, PHK of the Varnish project has said that Varnish will support it if it gets traction.

My estimate: By the end of 2015, the leading proxy software projects will start to have or are already shipping HTTP/2 support.

Services

Google (including YouTube and other sites in the Google family) and Twitter have run with HTTP/2 enabled for months already.

Lots of existing services offer SPDY today and I would imagine most of them are considering and pondering how to switch to HTTP/2, as Chrome has already announced it is going to drop SPDY during 2016 and Firefox will also abandon SPDY at some point.

My estimate: By the end of 2015 lots of the top sites of the world will be serving HTTP/2 or will be working on doing it.

Content Delivery Networks

Akamai plans to ship HTTP/2 by the end of the year. Cloudflare have stated that they “will support HTTP/2 once NGINX with it becomes available”.

Amazon has not given any response publicly that I can find for when they will support HTTP/2 on their services.

Not a totally bright situation but I also believe (or hope) that as soon as one or two of the bigger CDN players start to offer HTTP/2 the others might feel a bigger pressure to follow suit.

Non-browser clients

curl and libcurl have supported HTTP/2 for months, and the HTTP/2 implementations page lists available implementations for just about all major languages now. Like node-http2 for JavaScript, http2-perl, http2 for Go, Hyper for Python, OkHttp for Java, http-2 for Ruby and more. If you do HTTP today, you should be able to switch over to HTTP/2 relatively easily.
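
To show why that switch tends to be painless, here is a plain Node.js client written against the built-in https module; the URL is a placeholder, and the point is simply that HTTP/2 client libraries such as node-http2 aim to offer a similar interface, so code in this shape needs little more than swapping the library it loads (check each project's documentation for the exact API).

```javascript
// A plain HTTPS GET in Node.js using the built-in https module.
// HTTP/2 client libraries generally aim to be drop-in replacements for
// this style of code, which is why migrating an existing client is easy.
var https = require('https');

https.get('https://example.com/', function (response) {
  console.log('status:', response.statusCode);
  response.pipe(process.stdout); // stream the response body to stdout
}).on('error', function (err) {
  console.error('request failed:', err.message);
});
```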

More?

I’m sure I’ve forgotten a few obvious points but I might update this as we go as soon as my dear readers point out my faults and mistakes!

How long is HTTP/1.1 going to be around?

My estimate: HTTP 1.1 will be around for many years to come. A double-digit percentage share of the existing sites on the Internet (and who knows how many that aren’t even accessible from the Internet) will keep using it for the foreseeable future. For technical reasons, for philosophical reasons and for good old we’ll-never-touch-it-again reasons.

The survey

Finally, I asked friends on Twitter, G+ and Facebook what they think the HTTP/2 share will be by the end of 2015, with the help of a little poll. This of course does not make for a sound or statistically safe number; it is just a collection of what a set of random people guessed. A quick poll to get a rough feel. This is how the 64 responses I received were distributed:

http2 share at end of 2015

Evidently, if you take a median of these results you can see that the middle point is between the 5-10 and 10-15 buckets. I’ll make it easy and say that the poll showed a group estimate of 10%. Ten percent of the total HTTP traffic to be HTTP/2 at the end of 2015.

I didn’t vote here but I would’ve checked the 15-20 choice, thus a fair bit over the median but only slightly into the top quarter.

In plain numbers this was the distribution of the guesses:

0-5% 29.1% (19)
5-10% 21.8% (13)
10-15% 14.5% (10)
15-20% 10.9% (7)
20-25% 9.1% (6)
25-30% 3.6% (2)
30-40% 3.6% (3)
40-50% 3.6% (2)
more than 50% 3.6% (2)

Mike TaylorWeb Compatibility Summit Summary

This afternoon I officially moved into the "bend over to use a bike pump and throw your back out so you need to be on pain meds and muscle relaxers" phase of life. Which is also probably considered to be the best time to write a blog post (by 0 out of 10 doctors).

Anyways.

On February 17th Mozilla hosted the first Web Compatibility Summit in sunny-yet-weird Mountain View, California. If you want to get a sense of the schedule for the day, we have a wiki page for that.

Justin Crawford wrote up a nice summary of his presentation on structured compatibility data. Karl Dubost was kind enough to write a summary of the presentations during the day. The talks were also recorded, so go check out all these resources.

It was great to have people from Mozilla, Google, Microsoft and Vivaldi discussing problems that we all face together. Remotely, we also had the participation of the good people of Opera and the W3C. In terms of collaboration among vendors, it was cool to see.

I think if we do this again next year I'd like to work harder on inviting developers and framework authors. If you would like to get involved or have any ideas, feel free to shoot an email to the Compatibility mailing list.

But for now I'm gonna go turn on The Cure's Faith album and let the drugs do their thing, man.

Gervase MarkhamHappy Birthday, Mozilla!

17 years ago today, the code shipped, and the Mozilla project was born. I’ve been involved for over 15 of those years, and it’s been a fantastic ride. With Firefox OS taking off, and freedom coming to the mobile space (compare: when the original code shipped, the hottest new thing you could download to your phone was a ringtone), I can’t wait to see where we go next.

Tim Guan-tin ChienService Worker and the grand re-architecture proposal of Firefox OS Gaia

TL;DR: Service Worker, a new Web API, can be used as a means of re-engineering client-side web applications and as a departure from the single-page web application paradigm. The details of realizing that are being experimented with and proposed for Gaia. In Gaia particularly, the “hosted packaged app” serves as a new iteration of the security model work needed to make sure Service Workers work with Gaia.

Last week I spent an entire week in face-to-face meetings going through the technical plans for re-architecting Gaia apps, the web applications that power the front-end of Firefox OS, along with the management plan for resourcing and deployment. Given that there were only a few developers in the meeting and the public promise of “the new architecture”, I think it makes sense to do a recap of what’s being proposed and what challenges are already foreseen.

Using Service Worker

Before diving into the re-architecture plan, we need to explain what Service Worker is. From a broader perspective, Service Worker can be understood as simply a browser feature/Web API that allows web developers to insert a JavaScript-implemented proxy between the server content and the actual page shown. It is the latest piece of sexy Web technology, heavily marketed by Google. Mozilla’s platform engineering team is committed to shipping it as well.

Many things previously not possible can be done with the worker proxy. For starters, it could replace AppCache while keeping the flexibility of managing the cache in the hands of the app. The “flexibility” bit is the part where it gets interesting — theoretically everything not touching the DOM can be moved into the worker — effectively re-creating the server-client architecture without a real remote HTTP server.
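
To make the proxy idea concrete, here is a minimal sketch (not code from the Gaia proposal itself) of a Service Worker that pre-caches a handful of files at install time and then answers fetches from that cache before falling back to the network; the cache name and file list are invented for illustration.

```javascript
// sw.js: hypothetical example; cache name and URLs are made up.
var CACHE = 'app-shell-v1';

self.addEventListener('install', function (event) {
  // Pre-populate the cache so the app can boot without the network.
  event.waitUntil(
    caches.open(CACHE).then(function (cache) {
      return cache.addAll(['/', '/index.html', '/app.js', '/style.css']);
    })
  );
});

self.addEventListener('fetch', function (event) {
  // Act as the proxy: serve from cache first, fall back to the network.
  event.respondWith(
    caches.match(event.request).then(function (cached) {
      return cached || fetch(event.request);
    })
  );
});
```

The page opts in with navigator.serviceWorker.register('/sw.js'); once the worker is active, every request from the app passes through its fetch handler, which is the proxy role described above.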

The Gaia Re-architecture Plan

Indeed, that’s what the proponents of the re-architecture are aiming for — my colleagues, mostly based in Paris, proposed such an architecture as the 2015 iteration of, and departure from, the “traditional” single-page web application. What’s more, the intention is to create a framework where the backend, or “server” parts of the code, are individually contained in their own worker threads, with strong interface definitions to achieve maximum reusability of these components — much like Web APIs themselves, if I understand it correctly.

It is not, however, tied to a specific front-end framework. Users of the proposed framework should be free to use any strategy they feel comfortable with — the UI can be as hardcore as entirely rendered in WebGL, or simply plain HTML/CSS/jQuery.

The plan has been made public on a wiki page, where I expect there will be changes as progress is made. This post intentionally does not cover many of the features the architecture promises to unlock, in favor of fresh content (as opposed to copy-editing), so I recommend readers check out the page.

Technical Challenges around using Service Workers

There are two major technical challenges: one is the possible performance (memory and cold-launch time) impact of fitting this multi-thread framework and its binding middleware into a phone; the other is the security model changes needed to make the framework usable in Gaia.

Speaking of the backend, “server” side: the one key difference between real remote servers and workers is that one lives in a data center with an endless power supply, and the other depends on your phone battery. Remote servers can push constructed HTML as soon as possible, but a local web app backed by workers might need to wait for the worker to spin up. For that, the architecture might depend on yet another out-of-spec feature of Service Worker, a cache that the worker thread has control of. The browser should render this pre-constructed HTML without waiting for the worker to launch.

Setting the cache feature aside and considering memory usage, we kind of get to a point where we can only say anything for sure about performance once there is an implementation to measure. The other solution the architecture proposes, to work around that on low-end phones, would be to “merge back” the back-end code into one single thread, although I personally suspect a risk of timing issues, as doing so would essentially require the implementation to simulate multi-threading in one single thread. We will just have to wait for the real implementation.

The security model part is really tricky. Gaia currently exists as packaged zips shipped on the phone and updated with OTA images, pinned to the Gecko version it ships along with. Packaging has been one sad workaround since Firefox OS v1.0 — the primary reasons for doing so are (1) we want to make sure proprietary APIs do not pollute the general Web and (2) we want a trusted third party (Mozilla) to be involved in security decisions for users by checking and signing contents.

The current Gecko implementation of Service Worker does not work with the classic packaged apps, which are served from an app: URL. Incidentally, the app: URL is something we feel is not webby enough, so we are trying to get rid of it. The proposal of the week is called “hosted packaged apps”, which serves packages from the real, remote Web and allows content in the package to be referenced directly with a special path syntax. We can’t get rid of packages yet for the reasons stated above, but serving content over HTTP should allow us to use Service Worker from the trusted contents, i.e. Gaia.

One thing to note about this mix is that a signed package is offline by default in its own right, and its updates must be signed as well. The Service Worker spec will be violated a bit in order to make them work well — it’s a detail currently being worked out.

Technical Challenges on the proposed implementation

As already mentioned in the paragraph on Service Worker challenges, one worker might introduce performance issues, let alone many workers. Each worker thread implies memory usage as well. For that, the proposal is for the framework to start up and shut down threads (i.e. parts of the app) as necessary. But again, we will have to wait for the implementation and evaluate it.

The proposed framework asks for Web API access to be restricted to the “back-end” only, to decouple the UI (front-end) from the back-end as far as possible. However, having few Web APIs available in the worker threads will be a problem. The framework proposes to work around this with a message routing bus that sends the calls back to the UI thread, and by asking Gecko to implement APIs for workers from time to time.

As an abstraction over platform worker threads and an attempt to insulate against platform/component changes, the architecture deserves special attention for the classic abstraction problems: abstractions eventually leak, and abstractions always come with overhead, whether it is runtime performance overhead or the human cost of learning and debugging the abstraction. I am not the expert; Joel is.

Technical Challenges on enabling Gaia

Arguably, Gaia is one of the most complex web projects in the industry. We inherit a strong Mozilla tradition of continuous integration. The architecture proposal calls for a strong separation between the front-end application codebase and the back-end application codebase — including separate integration between the two when building for different form factors. The integration plan itself is something worth rethinking in order to meet such a requirement.

With hosted packaged apps, the architecture proposal unlocks the possibility of deploying Gaia from the Web, instead of always shipping it with the OTA image. How to match Gaia/Gecko versions all the way down to every Nightly build is something to figure out too.

Conclusion

Given that everything is in flux and the amount of work is immense (as outlined above), it’s hard to achieve any of the end goals without prioritizing the internals and landing/deploying them separately. From last week, it was already concluded that parts of the security model changes will block Service Worker usage in signed packages — we would need to identify those parts and resolve them first. It’s also important to make sure the implementation does not suffer from any performance issues before deploying the code and starting the major work of revamping every app. We should be able to figure out a scaled-back version of the work and realize that first.

If we can plan and manage the work properly, I remain optimistic about the technical outcome of the architecture proposal. I trust my colleagues, particularly those who made the architecture proposal, to make reasonable technical judgements. It’s been years since the introduction of the single-page web application — it’s indeed worth rethinking what’s possible if we depart from it.

The key here is trying not to do all the things at once: strengthen what’s working and amend what’s not, along the process of turning the proposal into a usable implementation.

Edit: This post has since been modified to fix some of the grammar errors.

Air MozillaMozilla Winter of Security: Seasponge, a tool for easy Threat Modeling

Mozilla Winter of Security: Seasponge, a tool for easy Threat Modeling Threat modeling is a crucial but often neglected part of developing, implementing and operating any system. If you have no mental model of a system...

Air MozillaMozilla Weekly Project Meeting

Mozilla Weekly Project Meeting The Monday Project Meeting

QMOImproving Recognition

Earlier this month I blogged about improving recognition of our Mozilla QA contributors. The goal was to get some constructive feedback this quarter and to take action on that feedback next quarter. As this quarter is coming to a close, we’ve received four responses. I’d like to think a lot more people out there have ideas and experiences from which we could learn. This is an opportunity to make your voice heard and make contributing at Mozilla better for everyone.

Please, take some time this week to send me your feedback.

Thank you

Zack WeinbergAnnouncing readings.owlfolio.org

I’d like to announce my new project, readings.owlfolio.org, where I will be reading and reviewing papers from the academic literature mostly (but not exclusively) about information security. I made a false start at this near the end of 2013 (it is the same site that’s been linked under readings in the top bar since then) but now I have a posting queue and a rhythm going. Expect three to five reviews a week. It’s not going to be syndicated to Planet Mozilla, but I may mention it here when I post something I think is of particular interest to that audience.

Longtime readers of this blog will notice that it has been redesigned and matches readings. That process is not 100% complete, but it’s close enough that I feel comfortable inviting people to kick the tires. Feedback is welcome, particularly regarding readability and organization; but unfortunately you’re going to have to email it to me, because the new CMS has no comment system. (The old comments have been preserved.) I’d also welcome recommendations of comment systems which are self-hosted, open-source, database-free, and don’t involve me manually copying comments out of my email. There will probably be a technical postmortem on the new CMS eventually.

(I know about the pages that are still using the old style sheet.)

Michael Verdi5 Years at Mozilla

Today is my 5 year Mozilla anniversary. Back in 2010, I joined the support team to create awesome documentation for Firefox. That quickly evolved into looking for ways to help users before they ever reach the support site. And this year I joined the Firefox UX team to expand on that work. A lot of things have changed in those five years but Mozilla’s work is as relevant as ever. That’s why I’m even more excited about the work we’re doing today than I was back in 2010. These last 5 years have been amazing and I’m looking forward to many more to come.

For fun, here’s some video from my first week.

Planet Mozilla viewers – you can watch this video on YouTube.

Mozilla Science LabMozilla Science Lab Week in Review March 23-29

The Week in Review is our weekly roundup of what’s new in open science from the past week. If you have news or announcements you’d like passed on to the community, be sure to share on Twitter with @mozillascience and @billdoesphysics, or join our mailing list and get in touch there.

Conferences & Events

  • The International Science 2.0 Conference in Hamburg ran this past week; a couple of highlights from the conference:
    • The OKFN highlighted their proposed Open Definition, to help bring technical and legal clarity to what is meant by ‘openness’ in science.
    • GROBID (site, code), a tool for extracting bibliographic information from PDFs, was well-received by attendees.
  • The first World Seabird Twitter Conference presentations have been compiled in a post over at Storify (what’s a ‘Twitter Conference’? Check out their event info here).
  • Document Freedom Day was this past Wednesday; more than 50 events worldwide highlighted the value of open standards, and the key role of interoperability in functional openness.
  • Right here at the Mozilla Science Lab, we ran our first Ask us Anything forum event on ‘local user groups for coding in research’, co-organized with Noam Ross – check out the thread and our reflections on the event.

Blogs & Papers

Government & Policy

Open Projects

 

Andreas GalData is at the heart of search. But who has access to it?

In my February 23 blog post, I gave a brief overview of how search engines have evolved over the years and how today’s search engines learn from past searches to anticipate which results will be most relevant to a given query. This means that who succeeds in the $50 billion search business and who doesn’t mostly depends on who has access to search data. In this blog post, I will explore how search engines have obtained queries in the past and how (and why) that’s changing.

For some 90% of searches, a modern search engine analyzes and learns from past queries, rather than searching the Web itself, to deliver the most relevant results. Most of the time, this approach yields better results than full-text search. The Web has become so vast that searches often find millions or billions of result pages that are difficult to rank algorithmically.

One important way a search engine obtains data about past queries is by logging and retaining search results from its own users. For a search engine with many users, there’s enough data to learn from and make informed predictions. It’s a different story for a search engine that wants to enter a new market (and thus has no past search data!) or compete in a market where one search engine is very dominant.

In Germany, for example, where Google has over 95% market share, competing search engines don’t have access to adequate past search data to deliver search results that are as relevant as Google’s. And, because their search results aren’t as relevant as Google’s, it’s difficult for them to attract new users. You could call it a vicious circle.

Search engines with small user bases can acquire search traffic by working with large Internet Service providers (also called ISPs, think Comcast, Verizon, etc.) to capture searches that go from users’ browsers to competing search engines. This is one option that was available in the past to Google’s competitors such as Yahoo and Bing as they attempted to become competitive with Google’s results.

In an effort to improve privacy, Google began using encrypted connections to make searches unintelligible to ISPs. One side effect was that an important avenue was blocked for competing search engines to obtain data that would improve their products.

An alternative to working with ISPs is to work with popular content sites to track where visitors are coming from. In Web lingo this is called a “referer header.” When a user clicks on a link, the browser tells the target site where the user was before (what site “referred” the user). If the user was referred by a search result page, that address contains the query string, making it possible to associate the original search with the result link. Because the vast majority of Web traffic goes to a few thousand top sites, it is possible to reconstruct a pretty good model of what people frequently search for and what results they follow.
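
As a rough illustration (not code from any real analytics product), here is a hedged sketch of how a destination page could have recovered the search terms from an unencrypted referer; the q parameter name and the regular expression are assumptions chosen to match the classic search result URLs described above.

```javascript
// Hypothetical sketch: pull the search query out of document.referrer.
// This only works while the search engine still exposes the query string.
function searchQueryFromReferrer(referrer) {
  // e.g. "http://www.google.com/search?q=open+source+browser&hl=en"
  var match = /[?&]q=([^&]*)/.exec(referrer);
  if (!match || match[1] === '') {
    return null; // no query parameter present, or it was stripped
  }
  return decodeURIComponent(match[1].replace(/\+/g, ' '));
}

console.log(searchQueryFromReferrer(document.referrer));
```

Once the query is encrypted out of the referer, the function above returns null and the site only learns that the visitor came from the search engine, which is exactly the change described next.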

Until late 2011, that is, when Google began encrypting the query in the referer header. Today, it’s no longer possible for the target site to reconstruct the user’s original query. This is of course good for user privacy—the target site knows only that a user was referred from Google after searching for something. At the same time, though, query encryption also locked out everyone (except Google) from accessing the underlying query data.

This chain of events has led to a “winner take all” situation in search, as a commenter on my previous blog post noted: a successful search engine is likely to get more and more successful, leaving in the dust the competitors who lack access to vital data.

These days, the search box in the browser is essentially the last remaining place where Google’s competitors can access a large volume of search queries. In 2011, Google famously accused Microsoft’s Bing search engine of doing exactly that: logging Google search traffic in Microsoft’s own Internet Explorer browser in order to improve the quality of Bing results. Having almost tripled Chrome’s market share since then, Google has to worry much less about this in the future. Its competitors will not be able to use Chrome’s search box to obtain data the way Microsoft did with Internet Explorer in the past.

So, if you have ever wondered why, in most markets, Google’s search results are so much better than their competitors’, don’t assume it’s because Google has a better search engine. The real reason is that Google has access to so much more search data. And, the company has worked diligently over the past few years to make sure it stays that way.


Filed under: Mozilla

Rob HawkesJoin 250 others in the Open Data Community on Slack

This is a cross-post with the ViziCities blog.

This is a short and personal post written with the hope that it encourages you to join the new Open Data Community on Slack – a place for realtime communication and collaboration on the topic of open data.



It's important to foster open data, and just as important to provide a place for the discussion and sharing of ideas around its production and use. It's for this reason that we've created the Open Data Community, in the hope of not only giving something back for the things that we have taken, but also providing a place for people to come together to help further this common goal.

The Open Data Community is not ViziCities; it's a group of like-minded individuals, non-profits and corporations alike. It's for anyone interested in open data, as well as for those who produce, use or are otherwise involved in its lifecycle.

In just 2 days the community has grown to 250 strong – I look forward to seeing you there and talking open data!

Sign up and get involved.

Robin - ViziCities Founder

Cameron Kaiser31.6.0 available

31.6.0 is available (downloads, release notes, hashes). This includes all the security issues to date, but no specific TenFourFox changes. It becomes final Monday evening Pacific time as usual assuming no critical issues are identified by you, our lovely and wonderful testing audience.

Geoff Lankow1 Million Add-Ons Downloaded

This is a celebratory post. Today I learned that the add-ons I've created for Firefox, Thunderbird, and SeaMonkey have been downloaded over 1,000,000 times in total. For some authors I'm sure that's not a major milestone – some add-ons have more than a million users – but for me it's something I think I can be proud of. (By comparison my add-ons have a collective 80,000 users.)

Here are some of them:

I started six years ago with Shrunked Image Resizer, which makes photos smaller for uploading them to websites. Later I modified it to also make photos smaller in Thunderbird email, and that's far more popular than the original purpose.

Around the same time I got frustrated when developing websites, having to open the page I was looking at in different browsers to test. The process involved far more keystrokes and mouse clicks than I'd like, so I created Open With, which turned that into a one-click job.

Later on I created Tab Badge, to provide a visual alert of stuff happening with a tab. This can be quite handy when watching trees, as well as with Twitter and Facebook.

Then there's New Tab Tools – currently my most popular add-on. It's the standard Firefox new tab page, plus a few things, and minus a few things. Kudos to those of you who wrote the built-in page, but I like mine better. :-)

Lastly I want to point out my newest add-on, which I think will do quite well once it gets some publicity. I call it Noise Control and it provides a visual indicator of tabs playing audio. (Yes, just like Chrome does.) I've seen lots of people asking for this sort of thing over the years, and the answer was always "it can't be done". Yes it can.

Big thanks to all of you reading this who've downloaded my add-ons, use them, helped me fix bugs, translated, sent me money, answered my inane questions or otherwise done something useful. Thank you. Really.

Robert O'CallahanEclipse + Gecko = Win

With Eclipse 4.4.1 CDT and the in-tree Eclipse project builder (./mach build-backend -b CppEclipse), the Eclipse C++ tools work really well on Gecko. Features I really enjoy:

  • Ctrl-click to navigate to definitions/declarations
  • Ctrl-T to popup the superclasses/subclasses of a class, or the overridden/overriding implementations of a method
  • Shift-ctrl-G to find all uses of a declaration (not 100% reliable, but almost always good enough)
  • Instant coloring of syntax errors as you type (useless messages, but still worth having)
  • Instant coloring of unknown identifier and type errors as you type; not 100% reliable, but good enough that most of my compiler errors are caught before doing a build.
  • Really good autocomplete. E.g. given
    nsTArray<nsRefPtr<Foo>> array;
    for (auto& v : array) {
    v->P
    Eclipse will autocomplete methods of Foo starting with P ... i.e., it handles "auto", C++ for-range loops, nsTArray and nsRefPtr operator overloading.
  • Shift-ctrl-R: automated renaming of identifiers. Again, not 100% reliable but a massive time saver nonetheless.

Thanks to Jonathan Watt and Benoit Girard for the mach support and other Eclipse work over the years!

I assume other IDEs can do these things too, but if you're not using a tool at least this powerful, you might be leaving some productivity on the table.

With Eclipse, rr, and a unified hg repo, hacking Gecko has never felt so good :-).

L. David BaronThe need for government

I've become concerned about the attitudes towards government in the technology industry. It seems to me (perhaps exaggerating a little) that much of the effort in computer security these days considers the major adversary to be the government (whether acting legally or illegally), rather than those attempting to gain illegal access to systems or information (whether private or government actors).

Democracy requires that the government have power. Not absolute power, but some, limited, power. Widespread use of technology that makes it impossible for the government to exercise certain powers could be a threat to democracy.

Let's look at one recent example: a recent article in the Economist about ransomware: malicious software that encrypts files on a computer, whose authors then demand payment to decrypt the files. The payment demanded these days is typically in Bitcoin, a system designed to avoid the government's power. This means that Bitcoin avoids the mechanisms that the international system has to find and catch criminals by following the money they make, and thus makes a perfect system for authors of ransomware and other criminals. The losers are those who don't have the mix of computer expertise and luck needed to avoid the ransomware.

One of the things that democracies often try to do is to protect the less powerful. For example, laws to protect property (in well-functioning governments) protect everybody's property, not just the property of those who can defend their property by force. Having laws like these not only (often) provides a fairer chance for the weak, but it also lets people use their labor on things that can improve people's lives rather than on zero-sum fighting over existing resources. Technology that keeps government out risks making it impossible for government to do this.

I worry that things like ransomware payment in Bitcoin could be just the tip of the iceberg. Technology is changing society quickly, and I don't think this will be the only harmful result of technology designed to keep government out. I don't want the Internet to turn into a “wild west,” where only the deepest experts in technology can survive. Such a change to the Internet risks either giving up many of the potential benefits of the Internet for society by keeping important things off of it, or alternatively risks moving society towards anarchy, where there is no government power that can do what we have relied on governments to do for centuries.

Now I'm not saying today's government is perfect; far from it. Government has responsibility too, including to deserve the trust that we need to place in it. I hope to write about that more in the future.

Andrew HalberstadtMaking mercurial bookmarks more git-like

I mentioned in my previous post a mercurial extension I wrote for making bookmarks easier to manipulate. Since then it has undergone a large overhaul, and I believe it is now stable and intuitive enough to advertise a bit more widely.

Introducing bookbinder

When working with bookmarks (or anonymous heads) I often wanted to operate on the entire series of commits within the feature I was working on. I often found myself digging out revision numbers to find the first commit in a bookmark to do things like rebasing, grafting or diffing. This was annoying. I wanted bookmarks to work more like a git-style branch, that has a definite start as well as an end. And I wanted to be able to easily refer to the set of commits contained within. Enter bookbinder.

First, you can install bookbinder by cloning:

```bash
$ hg clone https://bitbucket.org/halbersa/bookbinder
```

Then add the following to your hgrc:

```ini
[extensions]
bookbinder = path/to/bookbinder
```

Usage is simple. Any command that accepts a revset with --rev will be wrapped so that bookmark labels are replaced with the series of commits contained within the bookmark.

For example, let's say we create a bookmark to work on a feature called foo and make two commits:

```bash
$ hg log -f
changeset:   2:fcd3bdafbc88
bookmark:    foo
summary:     Modify foo

changeset:   1:8dec92fc1b1c
summary:     Implement foo

changeset:   0:165467d1f143
summary:     Initial commit
```

Without bookbinder, bookmarks are only labels to a commit:

```bash
$ hg log -r foo
changeset:   2:fcd3bdafbc88
bookmark:    foo
summary:     Modify foo
```

But with bookbinder, bookmarks become a logical series of related commits. They are more similar to git-style branches:

```bash
$ hg log -r foo
changeset:   2:fcd3bdafbc88
bookmark:    foo
summary:     Modify foo

changeset:   1:8dec92fc1b1c
summary:     Implement foo
```

Remember, hg log is just one example. Bookbinder automatically detects and wraps all commands that have a --rev option and that can receive a series of commits. It even finds commands from arbitrary extensions that may be installed! Here are a few examples that I've found handy in addition to hg log:

```bash
$ hg rebase -r <bookmark> -d <dest>
$ hg diff -r <bookmark>
$ hg graft -r <bookmark>
$ hg grep -r <bookmark>
$ hg fold -r <bookmark>
$ hg prune -r <bookmark>

etc.
```

They all replace the single commit pointed to by the bookmark with the series of commits within the bookmark. But what if you actually only want the single commit pointed to by the bookmark label? Bookbinder uses '.' as an escape character, so using the example above:

```bash
$ hg log -r .foo
changeset:   2:fcd3bdafbc88
bookmark:    foo
summary:     Modify foo
```

Bookbinder will also detect if bookmarks are based on top of one another:

```bash
$ hg rebase -r my_bookmark_2 -d my_bookmark_1
```

Running hg log -r my_bookmark_2 will not print any of the commits contained by my_bookmark_1.

The gory details

But how does bookbinder know where one feature ends, and another begins? Bookbinder implements a new revset called "feature". The feature revset is roughly equivalent to the following alias (kudos to smacleod for coming up with it):

```ini
[revsetalias]
feature($1) = ($1 or (ancestors($1) and not (excludemarks($1) or ancestors(excludemarks($1))))) and not public() and not merge()
excludemarks($1) = ancestors(parents($1)) and bookmark()
```

Here is a formal definition. A commit C is "within" a feature branch ending at revision R if all of the following statements are true:

  1. C is R or C is an ancestor of R
  2. C is not public
  3. C is not a merge commit
  4. no bookmarks exist in [C, R) for C != R
  5. all commits in (C, R) are also within R for C != R

In easier-to-understand terms, this means all ancestors of a revision that aren't public, a merge commit or part of a different bookmark are within that revision's 'feature'. One thing to be aware of is that this definition allows empty bookmarks. For example, if you create a new bookmark on a public commit and haven't made any changes yet, that bookmark is "empty". Running hg log -r with an empty bookmark won't have any output.

The feature revset that bookbinder exposes, works just as well on revisions that don't have any associated bookmark. For example, if you are working with an anonymous head, you could do:

```bash
$ hg log -r 'feature(<rev>)'
```

In fact, when you pass in a bookmark label to a supported command, bookbinder is literally just substituting -r <bookmark> with -r feature(<bookmark>). All the hard work is happening in the feature revset.

In closing, bookbinder has helped me make a lot more sense out of my bookmark-based workflow. It's solving a problem I think should be handled in mercurial core; maybe one day I'll attempt to submit a patch upstream. But until then, I hope it can be useful to others as well.

Christian HeilmannNo more excuses – a “HTML5 now” talk at #codemotion Rome

Yesterday I closed up the “inspiration” track of Codemotion Rome with a talk about the state of browsers and how we as developers make it much too hard for ourselves. You can see the slides on Slideshare and watch a screencast on YouTube.

Mozilla Privacy BlogMozilla Privacy Teaching Task Force

For about a year, the Mozilla community in Pune, India has had an informal task force where four active members go out to local universities and conferences to speak about privacy, security, surveillance, and other topics. Mozilla Reps Ankit Gadgil, … Continue reading

Cameron KaiserIonPower: phase 5!

Progress! I got IonPower past the point PPCBC ran aground at -- it can now jump in and out of Baseline and Ion code on PowerPC without crashing or asserting. That's already worth celebrating, but as the judge who gave me the restraining order on behalf of Scarlett Johansson remarked, I always have to push it. So I tried our iterative π calculator again and really gave it a workout by forcing 3 million iterations. Just to be totally unfair, I've compared the utterly unoptimized IonPower (in full Ion mode) versus the fully optimized PPCBC (Baseline) in the forthcoming TenFourFox 31.6. Here we go (Quad G5, Highest Performance mode):

% /usr/bin/time /Applications/TenFourFoxG5.app/Contents/MacOS/js --no-ion -e 'var pi=4,top=4,bot=3,minus = true;next(pi,top,bot,minus,3000000);function next(pi,top,bot,minus,num){for(var i=0;i<num;i++){pi += (minus == true)?-(top/bot):(top/bot);minus = \!minus;bot+=2;}print(pi);}'
3.1415929869229293
0.48 real 0.44 user 0.03 sys

% /usr/bin/time ../../../obj-ff-dbg/dist/bin/js --ion-offthread-compile=off -e 'var pi=4,top=4,bot=3,minus = true;next(pi,top,bot,minus,3000000);function next(pi,top,bot,minus,num){for(var i=0;i<num;i++){pi += (minus == true)?-(top/bot):(top/bot);minus = \!minus;bot+=2;}print(pi);}'
3.1415929869229293
0.37 real 0.21 user 0.16 sys

No, that's not a typo. The unoptimized IonPower, even in its primitive state, is 23 percent faster than PPCBC on this test, largely due to its superior use of floating point. The gap gets even wider when we do 30 million iterations:

% /usr/bin/time /Applications/TenFourFoxG5.app/Contents/MacOS/js --no-ion -e 'var pi=4,top=4,bot=3,minus = true;next(pi,top,bot,minus,30000000);function next(pi,top,bot,minus,num){for(var i=0;i<num;i++){pi += (minus == true)?-(top/bot):(top/bot);minus = \!minus;bot+=2;}print(pi);}'
3.1415926869232984
4.20 real 4.15 user 0.03 sys

% /usr/bin/time ../../../obj-ff-dbg/dist/bin/js --ion-offthread-compile=off -e 'var pi=4,top=4,bot=3,minus = true;next(pi,top,bot,minus,30000000);function next(pi,top,bot,minus,num){for(var i=0;i<num;i++){pi += (minus == true)?-(top/bot):(top/bot);minus = \!minus;bot+=2;}print(pi);}'
3.1415926869232984
1.55 real 1.38 user 0.16 sys

That's 63 percent faster. And I haven't even gotten to fun things like leveraging the G5's square root instruction (the G3 and G4 versions will use David Kilbridge's software square root from JaegerMonkey), parallel compilation on the additional cores or even working on some of the low-hanging fruit with branch optimization, and on top of all that IonPower is still running all its debugging code and sanity checks. I think this qualifies as IonPower phase 5 (basic operations), so now the final summit will be getting the test suite to pass in both sequential and parallel modes. When it does, it's time for TenFourFox 38!

By the way, for Ben's amusement, how does it compare to our old, beloved and heavily souped up JaegerMonkey implementation? (17.0.11 was our fastest version here; 19-22 had various gradual degradations in performance due to Mozilla's Ion development screwing around with methodjit.)

% /usr/bin/time /Applications/TenFourFoxG5-17.0.11.app/Contents/MacOS/js -m -n -e 'var pi=4,top=4,bot=3,minus = true;next(pi,top,bot,minus,30000000);function next(pi,top,bot,minus,num){for(var i=0;i<num;i++){pi += (minus == true)?-(top/bot):(top/bot);minus = \!minus;bot+=2;}print(pi);}'
3.1415926869232984
4.15 real 4.11 user 0.02 sys

Yup. I'm that awesome. Now I'm gonna sit back and go play some well-deserved Bioshock Infinite on the Xbox 360 (tri-core PowerPC, thank you very much, and I look forward to cracking the firmware one of these days) while the G5 is finishing the 31.6 release candidates overnight. They should be ready for testing tomorrow, so watch this space.

Jordan LundMozharness is moving into the forest

Since its beginnings, Mozharness has been living in its own world (repo). That's about to change. Next quarter we are going to be moving it in-tree.

what's Mozharness?

it's a configuration-driven script harness

why in tree?
  1. First and foremost: transparency.
    • There is an overarching goal to provide developers the keys to manage and stand up their own builds & tests (AKA self-serve). Having the automation step logic side by side with the compile and test step logic gives developers transparency and a sense of determinism. Which leads to reason number 2.
  2. deterministic builds & tests
    • This is somewhat already in place thanks to Armen's work on pinning specific Mozharness revisions to in-tree revisions. However, the pins can end up behind the latest Mozharness revisions, so we often end up landing multiple changes to Mozharness at once against a single in-tree revision.
  3. Mozharness automated build & test jobs are not just managed by Buildbot anymore. Taskcluster is starting to take the weight off Buildbot's hands and, because of its own behaviour, Mozharness is better suited in-tree.
  4. ateam is going to put effort this quarter into unifying how we run tests locally vs automation. Having mozharness in-tree should make this easier

this sounds great. why wouldn't we want to do this?

There are downsides. It arguably puts extra strain on Release Engineering for managing infra health. Though issues will be more isolated, it does become trickier to get a high-level view of when and where Mozharness changes land.

In addition, there is going to be more friction for deployments. This is because a number of our Mozharness scripts are not directly related to continuous integration jobs: e.g. releases, vcs-sync, b2g bumper, and merge tasks.

why wasn't this done yester-year?

Mozharness now handles > 90% of our build and test jobs. Its internal components (config, script, and log logic) are starting to mature. However, this wasn't always the case.

When it was being developed and its uses were unknown, it made sense to develop it on the side and tie it closely to Buildbot deployments.

okay. I'm sold. can we just simply hg add mozharness?

Integrating Mozharness in-tree comes with a few challenges:

  1. chicken and egg issue

    • currently, for build jobs, Mozharness is in charge of managing version control of the tree itself. How can Mozharness check out a repo if it itself lives within that repo?
  2. test jobs don't require the src tree

    • test jobs only need a binary and a tests.zip. It doesn't make sense to keep a copy of our branches on each machine that runs tests. In line with that, putting mozharness inside tests.zip also leads us back to a similar 'chicken and egg' issue.
  3. which branch and revisions do our release engineering scripts use?

  4. how do we handle releases?

  5. how do we not cause extra load on hg.m.o?

  6. what about integrating into Buildbot without interruption?

it's easy!

This shouldn't be too hard to solve. Here is a basic outline of my plan of action and roadmap for this goal:

  • land copy of mozharness on a project branch
  • add an end point on relengapi with the following logic
    1. endpoint will contain 'mozharness' and a '$REVISION'
    2. look in s3 for equivalent mozharness archive
    3. if not present: download a sub repo dir archive from hg.m.o, run tests, and push that archive to s3
    4. finally, return the url to the s3 archive
  • integrate the endpoint into buildbot
    • call endpoint before scheduling jobs
    • add builder step: download and unpack the archive on the slave
  • for machines that run mozharness based releng scripts
    • add manifest that points to 'known good s3 archive'
    • fix the deploy model to listen for manifest changes and download/unpack mozharness in a similar manner to builds+tests

This is a loose outline of the integration strategy. What I like about this:

  1. no code change required within Mozharness' code
  2. there is very little code change within Buildbot
  3. allows Taskcluster to use Mozharness in whatever way it likes
  4. no chicken-and-egg problem, as (in the Buildbot world) Mozharness will exist before the tree exists on the slave
  5. no need to manage multiple repos and keep them in sync

I'm sure I am not taking into account many edge cases and I look forward to hitting those edges head on as I start this in Q2. Stay tuned for further developments.

One day, I'd like to see Mozharness (at least its internal parts) be made into isolated python packages installable by pip. However, that's another problem for another day.

Questions? Concerns? Ideas? Please comment here or in the tracking bug

Doug BelshawWeeknote 13/2015

This week I’ve been:

Mozilla

  • Finishing off my part of the Hive Toronto Privacy badges project. GitHub repo here.
  • Submitting my final expenses and health & wellness invoices.
  • Writing about Web Literacy Map v1.5 (my last post on the Webmaker blog!)
  • Editing the Learning Pathways whitepaper. I’ll do as much as I can, but it’s up to Karen Smith to shepherd from this point forward!
  • Backing up everything.
  • Catching-up one to one with a few people.
  • Leaving Mozilla. I wrote about that here. Some colleagues gave me a Gif tribute send-off and dressed up an inflatable dinosaur in a party hat. Thanks guys!

Dynamic Skillset

  • Helping out DigitalMe with an event in Leeds around Open Badges. I wrote that up here.
  • Preparing my presentation for a keynote next week.
  • Collaborating on a proposal to scope out Open Badges for UK Scouting.
  • Replying to lots of people/organisations who’d like to work with me! :)
  • Finalising things for next week when I start working with City & Guilds for most (OK, nearly all) of my working week.
  • Getting to grips with Xero (which is what I’m using for accounting/invoicing)

Other

Next week I’m spending most of Monday with my family before heading off to London. I’ll be keynoting and running a workshop at the London College of Fashion conference on Tuesday. On Wednesday and Thursday I’ll be working from the City & Guilds offices, getting to know people and putting things into motion!

Image CC BY Kenny Louie