Air MozillaBringing the Next Billion Online

Bringing the Next Billion Online Nearly 4 billion people around the world don't use the Internet. Bringing developing countries into the global digital community should be a priority for the...

Marcia KnousExcited to be working with our Outreachy intern!

Today marks the first day I will be co-mentoring Outreachy intern Manel Rhaiem. I will be teaming up with Kate Glazko, who is part of the Project SmartHome team. I am excited to see what Manel will accomplish as part of this project - it should be an exciting few months! I plan on blogging about our experience and sharing everything that all of us are learning. One of the things we discussed the other day was StrengthsFinder 2.0 - and how all of us compared as far as our strengths. I find this to be

Air MozillaMozilla Weekly Project Meeting, 23 May 2016

Mozilla Weekly Project Meeting The Monday Project Meeting

Daniel PocockPostBooks, PostgreSQL and pgDay.ch talk

PostBooks 4.9.5 was recently released and the packages for Debian (including jessie-backports), Ubuntu and Fedora have been updated.

Postbooks at pgDay.ch in Rapperswil, Switzerland

pgDay.ch is coming on Friday, 24 June. It is at the HSR Hochschule für Technik Rapperswil, at the eastern end of Lake Zurich.

I'll be making a presentation about Postbooks in the business track at 11:00.

Getting started with accounting using free, open source software

If you are not currently using a double-entry accounting system or if you are looking to move to a system that is based on completely free, open source software, please see my comparison of free, open source accounting software.

Free and open source solutions offer significant advantages: flexibility, since businesses can choose any programmer to modify the code, plus SQL back-ends, multi-user support and multi-currency support as standard. These are all things that proprietary vendors charge extra money for.

Accounting software is the lowest common denominator in the world of business software. People keen on the success of free and open source software may find that encouraging businesses to use one of these solutions is a great way to lay a foundation where other free software solutions can thrive.

PostBooks new web and mobile front end

xTuple, the team behind Postbooks, has been busy developing a new Web and Mobile front-end for their ERP, CRM and accounting suite, powered by the same PostgreSQL backend as the Linux desktop client.

More help is needed to create official packages of the JavaScript dependencies before the Web and Mobile solution itself can be packaged.

Mozilla Reps CommunityRep of the Month – January 2016

Please join us in congratulating Dian Ina Mahendra as Rep of the Month for January!

Dian Ina is one of the core contributors from Indonesia. An Art Manager by day and an enthusiastic Mozillian by night, Ina contributed heavily to building her community’s roadmap for 2016. She is also contributing to Firefox social media materials and is helping, along with the rest of the Indonesian community, to create their community space in Jakarta.
One of her biggest pieces of work was building the budget plan for her community’s activities for 2016 and taking on the accountability role for her community along with Andi.
 

QMOFirefox 47 beta 7 Testday Results

Howdy mozillians!

Last week on Friday (May 20th), we held another successful event – Firefox 47 beta 7 Testday.

Thank you all – Ilse Macías, Stelian Ionce, Iryna Thompson, Nazir Ahmed Sabbir, Rezaul Huque Nayeem, Tanvir Rahman, Zayed News, Azmina Akter Papeya, Roman Syed, Raihan Ali, Sayed Ibn Masudn, Samad Talukdar, John Sujoy, Nafis Ahmed Muhit, Sajedul Islam, Asiful Kabir Heemel, Sunny, Maruf Rahman, Md. Tanvir Ahmed, Saddam Hossain, Wahiduzzaman Hridoy, Ishak Herock, Md.Tarikul Islam Oashi, Md Rakibul Islam, Niaz Bhuiyan Asif, MD. Nnazmus Shakib (Robin), Akash, Towkir Ahmed, Saheda Reza Antora, Md. Almas Hossain, Hasibul Hasan Shanto, Tazin Ahmed, Badiuzzaman Pranto, Md.Majedul islam, Aminul Islam Alvi, Toufiqul Haque Mamun, Fahim, Zubayer Alam, Forhad Hossain, Mahfuza Humayra Mohona – for the participation!

A big thank you goes out to all our active moderators too!

Results:

  • there were no bugs verified nor triaged
  • some failures were mentioned for the APZ feature in the etherpads (link 1 and link 2); therefore, please add the requested details in the etherpads or, even better, join us on the #qa IRC channel and let’s figure them out 😉

I strongly advise every one of you to reach out to us, the moderators, via #qa during the events whenever you encounter any kind of failure. Keep up the great work! \o/

And keep an eye on QMO for upcoming events! 😉

Yunier José Sosa VázquezFirefox for iOS improves its security and makes you move faster on the Web

Last week Mozilla released a new version of Firefox for iOS, and at Mozilla Hispano we want to show you what’s new. Mainly, this release improves people’s privacy and security while browsing the Web and delivers a more streamlined experience that gives you greater control over your mobile browsing.

What’s new in this update?

The iOS Today widget: We know that quickly getting what you are looking for on the Web matters to you, especially on your phone. For that reason, you can now access Firefox through the iOS Today widget to open new tabs or a recently copied link.

The iOS Today widget in Firefox for iOS

The Awesome Bar: From now on, typing in the address bar will show your bookmarks, history, and search suggestions that match the term you entered. This makes getting to your favorite websites faster and easier.

The Awesome Bar shows bookmarks and search suggestions.

Manage your security: By default, Firefox helps keep you safe by warning you when the connection to a given website is not secure. When you try to reach an insecure site, you will see an “error” message warning you that the connection is not trustworthy, and you will be protected when accessing such sites. With Firefox for iOS, you can temporarily dismiss those error messages for pages you consider “safe” even though Firefox may flag them as potentially insecure.

Certificate error in Firefox for iOS

Because the mechanism Apple uses for downloading and installing applications on its phones is very complicated, we cannot offer this version for download from our site. Perhaps later, if this rule changes, we will be able to do so and complete our kit of Firefox versions. So, to try out and enjoy the new features added to Firefox for iOS, you need to download this update from the App Store.


Sources: The Mozilla Blog and Mozilla Press

Niko MatsakisUnsafe abstractions

The unsafe keyword is a crucial part of Rust’s design. For those not familiar with it, the unsafe keyword is basically a way to bypass Rust’s type checker; it essentially allows you to write something more like C code, but using Rust syntax.

The existence of the unsafe keyword sometimes comes as a surprise at first. After all, isn’t the point of Rust that Rust programs should not crash? Why would we make it so easy then to bypass Rust’s type system? It can seem like a kind of flaw in the design.

In my view, though, unsafe is anything but a flaw: in fact, it’s a critical piece of how Rust works. The unsafe keyword basically serves as a kind of escape valve – it means that we can keep the type system relatively simple, while still letting you pull whatever dirty tricks you want to pull in your code. The only thing we ask is that you package up those dirty tricks with some kind of abstraction boundary.

This post introduces the unsafe keyword and the idea of unsafety boundaries. It is in fact a lead-in for another post I hope to publish soon that discusses a potential design of the so-called Rust memory model, which is basically a set of rules that help to clarify just what is and is not legal in unsafe code.

Unsafe code as a plugin

I think a good analogy for thinking about how unsafe works in Rust is to think about how an interpreted language like Ruby (or Python) uses C modules. Consider something like the JSON module in Ruby. The JSON bundle includes a pure Ruby implementation (JSON::Pure), but it also includes a re-implementation of the same API in C (JSON::Ext). By default, when you use the JSON bundle, you are actually running C code – but your Ruby code can’t tell the difference. From the outside, that C code looks like any other Ruby module – but internally, of course, it can play some dirty tricks and make optimizations that wouldn’t be possible in Ruby. (See this excellent blog post on Helix for more details, as well as some suggestions on how you can write Ruby plugins in Rust instead.)

Well, in Rust, the same scenario can arise, although the scale is different. For example, it’s perfectly possible to write an efficient and usable hashtable in pure Rust. But if you use a bit of unsafe code, you can make it go faster still. If this is a data structure that will be used by a lot of people or is crucial to your application, this may be worth the effort (so e.g. we use unsafe code in the standard library’s implementation). But, either way, normal Rust code should not be able to tell the difference: the unsafe code is encapsulated at the API boundary.

Of course, just because it’s possible to use unsafe code to make things run faster doesn’t mean you will do it frequently. Just like the majority of Ruby code is in Ruby, the majority of Rust code is written in pure safe Rust; this is particularly true since safe Rust code is very efficient, so dropping down to unsafe Rust for performance is rarely worth the trouble.

In fact, probably the single most common use of unsafe code in Rust is for FFI. Whenever you call a C function from Rust, that is an unsafe action: this is because there is no way the compiler can vouch for the correctness of that C code.
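
As a quick illustration (my own sketch, not from the original post), here is what a minimal FFI declaration and call might look like; strlen comes from the C standard library, which Rust programs already link against on most platforms:

use std::ffi::CStr;
use std::os::raw::c_char;

// We declare the C function's signature ourselves; the compiler simply
// trusts that this declaration matches the real strlen.
extern "C" {
    fn strlen(s: *const c_char) -> usize;
}

// A safe wrapper: a &CStr is guaranteed to be valid and NUL-terminated,
// so the unsafe call below cannot misbehave.
fn c_string_len(s: &CStr) -> usize {
    unsafe { strlen(s.as_ptr()) }
}

The unsafe block marks only the call itself; everything around it remains ordinary, type-checked Rust.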

Extending the language with unsafe code

To me, the most interesting reason to write unsafe code in Rust (or a C module in Ruby) is so that you can extend the capabilities of the language. Probably the most commonly used example of all is the Vec type in the standard library, which uses unsafe code so it can handle uninitialized memory; Rc and Arc, which enable shared ownership, are other good examples. But there are also much fancier examples, such as how Crossbeam and deque use unsafe code to implement non-blocking data structures, or how Jobsteal and Rayon use unsafe code to implement thread pools.

In this post, we’re going to focus on one simple case: the split_at_mut method found in the standard library. This method is defined over mutable slices like &mut [T]. It takes as argument a slice and an index (mid), and it divides that slice into two pieces at the given index. Hence it returns two subslices: one that ranges from 0..mid, and one that ranges from mid.. to the end.

You might imagine that split_at_mut would be defined like this:

impl [T] {
    pub fn split_at_mut(&mut self, mid: usize) -> (&mut [T], &mut [T]) {
        (&mut self[0..mid], &mut self[mid..])
    }
}

If it compiled, this definition would do the right thing, but in fact if you try to build it you will find it gets a compilation error. It fails for two reasons:

  1. In general, the compiler does not try to reason precisely about indices. That is, whenever it sees an index like foo[i], it just ignores the index altogether and treats the entire array as a unit (foo[_], effectively). This means that it cannot tell that &mut self[0..mid] is disjoint from &mut self[mid..]. The reason for this is that reasoning about indices would require a much more complex type system.
  2. In fact, the [] operator is not built into the language when applied to a range anyhow. It is implemented in the standard library. Therefore, even if the compiler knew that 0..mid and mid.. did not overlap, it wouldn’t necessarily know that &mut self[0..mid] and &mut self[mid..] return disjoint slices.
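
To illustrate the first point, here is a tiny example of my own (not from the original post) that the borrow checker rejects even though the two ranges are obviously disjoint:

fn demo(data: &mut [i32]) {
    let left = &mut data[0..1];
    let right = &mut data[1..2]; // error: cannot borrow `*data` as mutable more than once at a time
    left[0] += right[0];
}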

Now, it’s plausible that we could extend the type system to make this example compile, and maybe we’ll do that someday. But for the time being we’ve preferred to implement cases like split_at_mut using unsafe code. This lets us keep the type system simple, while still enabling us to write APIs like split_at_mut.

Abstraction boundaries

Looking at unsafe code as analogous to a plugin helps to clarify the idea of an abstraction boundary. When you write a Ruby plugin, you expect that when users from Ruby call into your function, they will supply you with normal Ruby objects and pointers. Internally, you can play whatever tricks you want: for example, you might use a C array instead of a Ruby vector. But once you return values back out to the surrounding Ruby code, you have to repackage up those results as standard Ruby objects.

It works the same way with unsafe code in Rust. At the public boundaries of your API, your code should act as if it were any other safe function. This means you can assume that your users will give you valid instances of Rust types as inputs. It also means that any values you return or otherwise output must meet all the requirements that the Rust type system expects. Within the unsafe boundary, however, you are free to bend the rules (of course, just how free you are is the topic of debate; I intend to discuss it in a follow-up post).

Let’s look at the split_at_mut method we saw in the previous section. For our purposes here, we only care about the public interface of the function, which is its signature:

impl [T] {
    pub fn split_at_mut(&mut self, mid: usize) -> (&mut [T], &mut [T]) {
        // body of the fn omitted so that we can focus on the
        // public interface; safe code shouldn't have to care what
        // goes in here anyway
    }
}

So what can we derive from this signature? To start, split_at_mut can assume that all of its inputs are valid (for safe code, the compiler’s type system naturally ensures that this is true; unsafe callers would have to ensure it themselves). Part of writing the rules for unsafe code will require enumerating more precisely what this means, but at a high-level it’s stuff like this:

  • The self argument is of type &mut [T]. This implies that we will receive a reference that points at some number N of T elements. Because this is a mutable reference, we know that the memory it refers to cannot be accessed via any other alias (until the mutable reference expires). We also know the memory is initialized and the values are suitable for the type T (whatever it is).
  • The mid argument is of type usize. All we know is that it is some unsigned integer.

There is one interesting thing missing from this list, however. Nothing in the API assures us that mid is actually a legal index into self. This implies that whatever unsafe code we write will have to check that.

Next, when split_at_mut returns, it must ensure that its return value meets the requirements of the signature. This basically means it must return two valid &mut [T] slices (i.e., pointing at valid memory, with a length that is not too long). Crucially, since those slices are both valid at the same time, this implies that the two slices must be disjoint (that is, pointing at different regions of memory).

Possible implementations

So let’s look at a few different implementation strategies for split_at_mut and evaluate whether they might be valid or not. We already saw that a pure safe implementation doesn’t work. So what if we implemented it using raw pointers like this:

impl [T] {
    pub fn split_at_mut(&mut self, mid: usize) -> (&mut [T], &mut [T]) {
        use std::slice::from_raw_parts_mut;

        // The unsafe block gives us access to raw pointer
        // operations. By using an unsafe block, we are claiming
        // that none of the actions below will trigger
        // undefined behavior.
        unsafe {
            // get a raw pointer to the first element
            let p: *mut T = &mut self[0];

            // get a pointer to the element `mid`
            let q: *mut T = p.offset(mid as isize);

            // number of elements after `mid`
            let remainder = self.len() - mid;

            // assemble a slice from 0..mid
            let left: &mut [T] = from_raw_parts_mut(p, mid);

            // assemble a slice from mid..
            let right: &mut [T] = from_raw_parts_mut(q, remainder);

            (left, right)
        }
    }
}

This is a mostly valid implementation, and in fact fairly close to what the standard library actually does. However, this code is making a critical assumption that is not guaranteed by the input: it is assuming that mid is in range. Nowhere does it check that mid <= len, which means that the q pointer might be out of range, and also means that the computation of remainder might overflow and hence (in release builds, at least by default) wrap around. So this implementation is incorrect, because it requires more guarantees than what the caller is required to provide.

We could make it correct by adding an assertion that mid is a valid index (note that the assert macro in Rust always executes, even in optimized code):

impl [T] {
    pub fn split_at_mut(&mut self, mid: usize) -> (&mut [T], &mut [T]) {
        use std::slice::from_raw_parts_mut;

        // check that `mid` is in range:
        assert!(mid <= self.len());

        // as before, with fewer comments:
        unsafe {
            let p: *mut T = &mut self[0];
            let q: *mut T = p.offset(mid as isize);
            let remainder = self.len() - mid;
            let left: &mut [T] = from_raw_parts_mut(p, mid);
            let right: &mut [T] = from_raw_parts_mut(q, remainder);
            (left, right)
        }
    }
}

OK, at this point we have basically reproduced the implementation in the standard library (it uses some slightly different helpers, but it’s the same idea).

Extending the abstraction boundary

Of course, it might happen that we actually wanted to assume that mid is in bounds, rather than checking it. We couldn’t do this for the actual split_at_mut, of course, since it’s part of the standard library. But you could imagine wanting a private helper for safe code that made this assumption, so as to avoid the runtime cost of a bounds check. In that case, this variant of split_at_mut is relying on the caller to guarantee that mid is in bounds. This means it is no longer safe to call, because it has additional requirements for its arguments that must be satisfied in order to guarantee memory safety.

Rust allows you to express the idea of a fn that is not safe to call by moving the unsafe keyword out of the fn body and into the public signature. Moving the keyword makes a big difference to the meaning of the function: the unsafety is no longer just an implementation detail of the function; it’s now part of the function’s interface. So we could make a variant of split_at_mut called split_at_mut_unchecked that avoids the bounds check:

impl [T] {
    // Here the **fn** is declared as unsafe; calling such a function is
    // now considered an unsafe action for the caller, because they
    // must guarantee that `mid <= self.len()`.
    pub unsafe fn split_at_mut_unchecked(&mut self, mid: usize) -> (&mut [T], &mut [T]) {
        use std::slice::from_raw_parts_mut;
        let p: *mut T = &mut self[0];
        let q: *mut T = p.offset(mid as isize);
        let remainder = self.len() - mid;
        let left: &mut [T] = from_raw_parts_mut(p, mid);
        let right: &mut [T] = from_raw_parts_mut(q, remainder);
        (left, right)
    }
}

When a fn is declared as unsafe like this, calling that fn becomes an unsafe action: what this means in practice is that the caller must read the documentation of the function and ensure that the conditions the function requires are met. In this case, it means that the caller must ensure that mid <= self.len().

If you think about abstraction boundaries, declaring a fn as unsafe means that it does not form an abstraction boundary with safe code. Rather, it becomes part of the unsafe abstraction of the fn that calls it.

Using split_at_mut_unchecked, we could now re-implement split_at_mut to simply layer the bounds check on top:

impl [T] {
    pub fn split_at_mut(&mut self, mid: usize) -> (&mut [T], &mut [T]) {
        assert!(mid <= self.len());

        // By placing the `unsafe` block in the function, we are
        // claiming that we know the extra safety conditions
        // on `split_at_mut_unchecked` are satisfied, and hence calling
        // this function is a safe thing to do.
        unsafe {
            self.split_at_mut_unchecked(mid)
        }
    }

    // **NB:** Requires that `mid <= self.len()`.
    pub unsafe fn split_at_mut_unchecked(&mut self, mid: usize) -> (&mut [T], &mut [T]) {
        ... // as above
    }
}

Unsafe boundaries and privacy

Although there is nothing in the language that explicitly connects the privacy rules with unsafe abstraction boundaries, they are naturally interconnected. This is because privacy allows you to control the set of code that can modify your fields, and that is a basic building block for constructing an unsafe abstraction.

Earlier we mentioned that the Vec type in the standard library is implemented using unsafe code. This would not be possible without privacy. If you look at the definition of Vec, it looks something like this:

pub struct Vec<T> {
    pointer: *mut T,
    capacity: usize,
    length: usize,
}

Here the field pointer is a pointer to the start of some memory. capacity is the amount of memory that has been allocated and length is the amount of memory that has been initialized.

The vector code is all very careful to maintain the invariant that it is always safe to access the first length elements of the memory that pointer refers to. You can imagine that if the length field were public, this would be impossible: anybody from the outside could go and change the length to whatever they want!
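
As a simplified, hypothetical sketch (not the real standard library code) of how that invariant is upheld, consider a push method: it writes into the slot at index length, which it knows is allocated but not yet initialized, and only afterwards bumps length:

impl<T> Vec<T> {
    pub fn push(&mut self, value: T) {
        if self.length == self.capacity {
            self.grow(); // hypothetical private helper that reallocates
        }
        unsafe {
            // OK because `length < capacity`: this slot is allocated,
            // uninitialized memory owned by this vector.
            std::ptr::write(self.pointer.offset(self.length as isize), value);
        }
        // Only now do we declare the element initialized.
        self.length += 1;
    }
}

Because length is private, only code inside this module can perform that final increment, which is exactly what makes the invariant enforceable.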

For this reason, unsafety boundaries tend to fall into one of two categories:

  • a single function, like split_at_mut
    • this could include unsafe callees like split_at_mut_unchecked
  • a type, typically contained in its own module, like Vec
    • this type will naturally have private helper functions as well
    • and it may contain unsafe helper types too, as described in the next section

Types with unsafe interfaces

We saw earlier that it can be useful to define unsafe functions like split_at_mut_unchecked, which can then serve as the building block for a safe abstraction. The same is true of types. In fact, if you look at the actual definition of Vec from the standard library, you will see that it looks just a bit different from what we saw above:

pub struct Vec<T> {
    buf: RawVec<T>,
    len: usize,
}

What is this RawVec? Well, that turns out to be an unsafe helper type that encapsulates the idea of a pointer and a capacity:

pub struct RawVec<T> {
    // Unique is actually another unsafe helper type
    // that indicates a uniquely owned raw pointer:
    ptr: Unique<T>,
    cap: usize,
}

What makes RawVec an unsafe helper type? Unlike with functions, the idea of an unsafe type is a rather fuzzy notion. I would define such a type as one that doesn’t really let you do anything useful without using unsafe code. Safe code can construct a RawVec, for example, and even resize the backing buffer, but if you want to actually access the data in that buffer, you can only do so by calling the ptr method, which returns a *mut T. This is a raw pointer, so dereferencing it is unsafe, which means that, to be useful, RawVec has to be incorporated into another unsafe abstraction (like Vec) which tracks initialization.
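
As a hypothetical sketch (again, not the actual standard library code), here is how Vec might read an element through RawVec. The raw pointer returned by ptr can only be dereferenced inside an unsafe block, and it is Vec, not RawVec, that knows which elements are initialized:

impl<T> Vec<T> {
    pub fn get(&self, index: usize) -> Option<&T> {
        if index < self.len {
            // OK because Vec guarantees that elements 0..len of the
            // RawVec buffer are initialized.
            unsafe { Some(&*self.buf.ptr().offset(index as isize)) }
        } else {
            None
        }
    }
}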

Conclusion

Unsafe abstractions are a pretty powerful tool. They let you play just about any dirty performance trick you can think of – or access any system capability – while still keeping the overall language safe and relatively simple. We use unsafety to implement a number of the core abstractions in the standard library, including core data structures like Vec and Rc. But because all of these abstractions encapsulate the unsafe code behind their API, users of those modules don’t carry the risk.

How low can you go?

One thing I have not discussed in this post is a lot of specifics about exactly what is legal within unsafe code and not. Clearly, the point of unsafe code is to bend the rules, but how far can you bend them before they break? At the moment, we don’t have a lot of published guidelines on this topic. This is something we aim to address. In fact there has even been a first RFC introduced on the topic, though I think we can expect a fair amount of iteration before we arrive at the final and complete answer.

As I wrote on the RFC thread, my take is that we should be shooting for rules that are human friendly as much as possible. In particular, I think that most people will not read our rules and fewer still will try to understand them. So we should ensure that the unsafe code that people write in ignorance of the rules is, by and large, correct. (This implies also that the majority of the code that exists ought to be correct.)

Interestingly, there is something of a tension here: the more unsafe code we allow, the less the compiler can optimize. This is because it would have to be conservative about possible aliasing and (for example) avoid reordering statements.

In my next post, I will describe how I think that we can leverage unsafe abstractions to actually get the best of both worlds. The basic idea is to aggressively optimize safe code, but be more conservative within an unsafe abstraction (but allow people to opt back in with additional annotations).

Edit note: Tweaked some wording for clarity.

This Week In RustThis Week in Rust 131

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us an email! Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

This week's edition was edited by: Vikrant and llogiq.

Updates from Rust Community

News & Blog Posts

New Crates & Project Updates

  • Systemd Manager. A systemd service manager written in Rust with the GTK-rs wrapper and direct integration with dbus.
  • FLAME. A flamegraph profiling tool for Rust.
  • Jobsteal. A work-stealing fork-join threadpool written in Rust.
  • pest. Simple, efficient parser generator.

Crate of the Week

This week's Crate of the Week is parking_lot, which gives us synchronization primitives (Mutex, RwLock, Condvar and friends) that are both smaller and faster than the standard library's implementations. I wonder why those are not put into std, but there's probably some reason.

Submit your suggestions for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

117 pull requests were merged in the last two weeks.

New Contributors

  • Daniel Campoverde [alx741]
  • mark-summerfield
  • Postmodern
  • Rémy Rakic
  • Robert Habermeier
  • Val Vanderschaegen

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email Erick Tryzelaar or Brian Anderson for access.

fn work(on: RustProject) -> Money

No jobs listed for this week.

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

No quote was selected for QotW.

Submit your quotes for next week!

Christian HeilmannGoogle IO – A tale of two Googles

Google IO main stage with audience

Disclaimer: The following are my personal views and experiences at this year’s Google IO. They are not representative of my employer. Should you want to quote me, please do so as Chris Heilmann, developer.

TL;DR: Is Google IO worth the $900? Yes, if you’re up for networking, getting information from experts and enjoy social gatherings. No, if you expect to be able to see talks. You’re better off watching them from home. The live streaming and recordings are excellent.

Google IO this year left me confused and disappointed. I found a massive gap between the official messaging and the tech on display. I was underwhelmed by the keynote and the media outreach, while the much more interesting work in the breakout sessions, talks and demos excited me. It seems to me that what Google wants to promote and have the media pick up is different from what its engineers showed. That’s OK, but it feels like sales stepping on a developer conference’s turf.

I enjoyed the messaging of the developer outreach and product owner team in the talks and demos. At times I was wondering if I was at a Google or a Mozilla event. The web and its technologies were front and centre. And there was a total lack of “our product $X leads the way” vibes.

Kudos to everyone involved. The messaging about progressive Web Apps, AMP and even the new Android Instant Apps was honest. It points to a drive in Google to return to the web for good.

Illuminated dinosaur at the after party

The vibe of the event changed a lot since moving out of Moscone Center in San Francisco. Running it on Google’s homestead in Mountain View made the whole show feel more like a music festival than a tech event. It must have been fun for the presenters to stand on the same stage they went to see bands at.

Having smaller tents for the different product and technology groups was great. It invited much more communication than booths. I saw a lot of neat demos. Having experts at hand to talk with about technologies I wanted to learn about was great.

Organisation

Feet in the sun watching a talk at the Amphitheatre

Here are the good and bad things about the organisation:

  • Good: traffic control wasn’t as much of a nightmare as I expected. I got there two hours in advance as I anticipated traffic jams, but it wasn’t bad at all. Shuttles and bike sheds helped get people there.
  • Good: there was no queue at badge pickup. Why I had to have my picture taken and a – somehow sticky – plastic badge printed was a bit beyond me, though. It seems wasteful.
  • Good: the food and beverages were plentiful and appropriate. With a group this big it is hard to deliver safe-to-eat and enjoyable food. The sandwiches, apples and crisps did the trick. The food at the social events was comfort food/fast food, but let’s face it – you’re not at a food fair. I loved that all the packaging was paper and cardboard and there was not too much excess waste in the form of plastics. We also got a reusable water bottle you could re-fill at water dispensers like you have in offices. Given the weather, this was much needed. Coffee and tea were also available throughout the day. We were well fed and watered. I’m no vegan, and I heard a few complaints about a lack of options, but that may have been personal experiences.
  • Good: the toilets were amazing. Clean, with running water and plenty of paper, mirrors, free sunscreen and no queues. Not what I expected from a music festival surrounding.
  • Great: as it was scorching hot on the first day the welcome pack you got with your badge had a bandana to cover your head, two sachets of sun screen, a reusable water bottle and sunglasses. As a ginger: THANK YOU, THANK YOU, THANK YOU. The helpers even gave me a full tube of sunscreen on re-entry the second day, taking pity on my red skin.
  • Bad: the one thing that was exactly the same as in Moscone was the abysmal crowd control. Except for the huge stage tent number two (called HYDRA - I am on to you, people) all the others were far too small. It was not uncommon to stand for an hour in a queue for the talk you wanted to see just to be refused entry as it was full up. Queuing up in the scorching sun isn’t fun for anyone and was impossible for me. Hence I missed all but two talks I wanted to see.
  • Good: if you were lucky enough to see a talk, the AV quality was great. The screens were big and readable, all the talks were live transcribed and the presenters audible.

The bad parts

Apart from the terrible crowd control, two things let me down the most. The keynote and a total lack of hardware giveaway – something that might actually be related.

Don’t get me wrong, I found the showering of attendees with hardware excessive at the first few IOs. But announcing something like a massive move into VR with Daydream and Tango without giving developers something to test it on is assuming a lot. Nine hundred dollars plus flying to the US and spending a lot of money on accommodation is a lot for many attendees. Getting something amazing to bring back would be a nice “Hey, thanks”.

There was no announcement at the keynote about anything physical except for some vague “this will be soon available” products. This might be the reason.

My personal translation of the keynote is the following:

We are Google, we lead in machine learning, cloud technology and data insights. Here are a few products that may soon come out that play catch-up with our competition. We advocate diversity and try to make people understand that the world is bigger than the Silicon Valley. That’s why we solve issues that aren’t a problem but annoyances for the rich. All the things we’re showing here are solving issues of people who live in huge houses, have awesome cars and suffer from the terrible ordeal of having to answer text messages using their own writing skills. Wouldn’t it be better if a computer did that for you? Why go and wake up your children with a kiss using the time you won by becoming more effective with our products when you can tell Google to do that for you? Without the kiss that is – for now.

As I put it during the event:

I actually feel poor looking at the #io16 keynote. We have lots of global problems technology can help with. This is pure consumerism.

I stand by this. Hardly anything in the keynote excited me as a developer. Or even as a well-off professional who lives in a city where public transport is a given. The announcement of Instant Apps, the Firebase bits and the new features of Android Studio are exciting. But it all got lost in an avalanche of “Look what’s coming soon!” product announcements without the developer angle. We want to look under the hood. We want to add to the experience and we want to understand how things work. This is how developer events work. Google Home has some awesome features. Where are the APIs for that?

As far as I understand it, there was a glitch in the presentation. But the part where a developer in Turkey used his skills to help the Syrian refugee crisis was borderline insulting. There was no information what the app did, who benefited from it and what it ran on. No information how the data got in and how the data was going to the people who help the refugees. The same goes for using machine learning to help with the issue of blindness. Both were teasers without any meat and felt like “Well, we’re also doing good, so here you go”.

Let me make this clear: I am not criticising the work of any Google engineer, product owner or other worker here. All these things are well done and I am excited about the prospects. I find it disappointing that the keynote was a sales pitch. It did not pay respect to this work, showing the final product rather than the workings. IO is advertised as a developer conference, not an end-user-oriented sales show. It felt disconnected.

Things that made me happy

Chris Heilmann covered in sunscreen, wearing a bandana in front of Google Loon

  • The social events were great – the concert in the amphitheatre was for those who wanted to go. Outside was a lot of space to have a chat if you’re not the dancing type. The breakout events on the second day were plentiful, all different and arty. The cynic in me sniggered at Burning Man performers (the antithesis of commercialism by design) doing their thing at a commercial IT event, but it gave the whole event a good vibe.
  • Video recording and live streaming – I watched quite a few of the talks I missed the last two days in the gym and I am grateful that Google offers these on YouTube immediately, well described and easy to find in playlists. Using the app after the event makes it easy to see the talks you missed.
  • Boots on the ground – everyone I wanted to meet from Google was there and had time to chat. My questions got honest and sensible answers and there was no hand-waving or over-promising.
  • A good focus on health and safety – first aid tents, sunscreen and wet towels for people to cool down, creature comforts for an outside environment. The organisers did a good job making sure people are safe. Huge printouts of the Code of Conduct also made no qualm about it that antisocial or aggressive behaviour was not tolerated.

Conclusion

Jatinder and me at the keynote

I will go again to Google IO, to talk, to meet, to see product demos and to have people at hand who can give me insight beyond the official documentation. I am likely not to get up early next time to see the keynote, though, and I would love to see a better handle on the crowd control. It is frustrating to queue and not be able to see talks at the conference of a company that prides itself on organising huge datasets and having self-driving cars.
Here are a few things that could make this better:

  • Having screening tents with the video and the transcription screens outside the main tents. These don’t even need sound (which is the main outside issue)
  • Use the web site instead of two apps. Advocating progressive web apps and then telling me in the official conference mail to download the Android app was not a good move. Especially as the PWA outperformed the native app at every turn – including usability (the area where native is supposed to be much better). It was also not helpful that the app showed the name of the stage but not the number of the tent.
  • Having more places to charge phones would have been good, or giving out power packs. As we were outside all the time and moving I didn’t use my computer at all and did everything on the phone.

I look forward to interacting and working with the tech Google. I am confused about the Google that tries to be in the hands of end users without me being able to crack the product open and learn from how it is done.

Daniel GlazmanCSS Variables in BlueGriffon

I guess the title says it all :-) Click on the thumbnail to enlarge it.

CSS Variables in BlueGriffon

Gian-Carlo PascuttoTechnical Debt, Episode 1

One of the projects I'm working on for Mozilla is our Content Sandboxing. We've been using sandboxing for a while to protect some plugins like Flash, as well as media plugins, but now that Firefox can render webpages in a separate process, we can apply restrictions to what those "Web Content" processes can do, too. Those processes are the part of Firefox that is essentially exposed to the internet, and hence to potentially dangerous webpages.

Although we go to great lengths to make this impossible, there is always a chance that a bug in Firefox would allow an attacker to exploit and take over a Web Content process. But by using features provided by the operating system, we can prevent them from taking over the rest of the computing device by disallowing many ways to interact with it, for example by stopping them from starting new programs or reading or writing specific files.

This feature has been enabled on Firefox Nightly builds for a while, at least on Windows and Mac OS X. Due to the diversity of the ecosystem, it's taken a bit longer for Linux, but we are now ready to flip that switch too.

The initial version on Linux will block very, very little. It's our goal to get Firefox working and shipping with this first and foremost, while we iterate rapidly and hammer down the hatches as we go, shipping a gradual stream of improvements to our users.

One of the first things to hammer down is filesystem access. If an attacker is free to write to any file on the filesystem, he can quickly take over the system. Similarly, if he can read any file, it's easy to leak out confidential information to an attacking webpage. We're currently figuring out the list of files and locations the Web Content process needs to access (e.g. system font directories) and which ones it definitely shouldn't (your passwords database).

And that's where this story about technical debt really starts.

While tracing filesystem access, we noticed at some point that the Web Content process accesses /etc/passwd. Although on most modern Unix systems this file doesn't actually contain any (hashed) passwords, it still typically contains the complete real name of the users on the system, so it's definitely not something that we'd want to leave accessible to an attacker.

My first thought was that something was trying to enumerate valid users on the system, because that would've been a good reason to try to read /etc/passwd.

Tracing the system call to its origin revealed another caller, though. libfreebl, a part of NSS (Network Security Services), was reading it during its initialization. Specifically, we traced it to this array in the source. Reading up on what it is used for is, eh, quite eyebrow-raising in the modern security age.

The NSS random number generator seeds itself by attempting to read /dev/urandom (good), ignoring whether that fails or not (not so good), and then continuing by reading and hashing the password file into the random number generator as additional entropy. The same code then goes on to read in several temporary directories (and I do mean directories, not the files inside them) and perform the same procedure.

Should all of this have failed, it will make a last ditch effort to fork/exec "netstat -ni" and hash the output of that. Note that the usage of fork here is especially "amusing" from the sandboxing perspective, as it's the one thing you'll absolutely never want to allow.

Now, almost none of this has ever been a *good* idea, but in its defense NSS is old and caters to many exotic and ancient configurations. The discussion about /dev/urandom reliability was raised in 2002, and I'd wager the relevant Linux code has seen a few changes since. I'm sure that 15 years ago, this might've been a defensible decision to make. Apparently one could even argue that some unnamed Oracle product running on Windows 2000 was a defensible use case to keep this code in 2009.

Nevertheless, it's technical debt. Debt that hurt on the release of Firefox 3.5, when it caused Firefox startup to take over 2 minutes on some people's systems.

It's not that people didn't notice this idea was problematic:
I'm fully tired of this particular trail of tears. There's no good reason to waste users' time at startup pretending to scrape entropy off the filesystem.
-- Brendan Eich, July 2009
RNG_SystemInfoForRNG - which tries to make entropy appear out of the air.
-- Ryan Sleevi, April 2014
Though sandboxing was clearly not considered much of a use case in 2006:
Only a subset of particularly messed-up applications suffer from the use of fork.
-- Well meaning contributor, September 2006
Nevertheless, I'm - still - looking at this code in the year of our Lord 2016 and wondering if it shouldn't all just be replaced by a single getrandom() call.

If your system doesn't have getrandom(), well maybe there's a solution for that too.
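
For what it's worth, here is a rough Rust sketch of that idea, assuming a libc-style binding for the getrandom(2) syscall is available (this is an illustration, not NSS's or Firefox's actual code):

extern crate libc;

// Fill `buf` with entropy from the kernel CSPRNG. For seed-sized requests
// (256 bytes or fewer) getrandom(2) does not return short reads.
fn seed_rng(buf: &mut [u8]) -> std::io::Result<()> {
    let ret = unsafe {
        libc::getrandom(buf.as_mut_ptr() as *mut libc::c_void, buf.len(), 0)
    };
    if ret < 0 {
        Err(std::io::Error::last_os_error())
    } else {
        Ok(())
    }
}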



Don't agree? Can we then at least agree that if your /dev/urandom isn't secure, it's your problem, not ours?

Air MozillaWebdev Beer and Tell: May 2016

Webdev Beer and Tell: May 2016 Once a month web developers across the Mozilla community get together (in person and virtually) to share what cool stuff we've been working on in...

Support.Mozilla.OrgEvent Report: Mozilla Ivory Coast SUMO Sprint

We’re back, SUMO Nation! This time with a great event report from Abbackar Diomande, our awesome community spirit in Ivory Coast! Grab a cup of something nice to drink and enjoy his report from the Mozilla Ivory Coast SUMO Sprint.

The Mozilla Ivory Coast community is not yet ready to forget Saturday, May 15. It was then that the first SUMO Sprint in Ivory Coast took place, lasting six hours!
For this occasion, we were welcomed and hosted by the Abobo Adjame University, the second largest university in the country.
Many students, some members of the Mozilla local community, and other members of the free software community gathered on this day.

The event began with a Mozilla manifesto presentation by Kouadio – a young member of our local SUMO team and the Lead of the Firefox Club at the university.

After that, I introduced everyone to SUMO, the areas of SUMO contribution, our Nouchi translation project, and Locamotion (the tool we use to localize).
During my presentation I learned that all the guests were really surprised and happy to learn of the existence of support.mozilla.org and a translation project for Nouchi.
They were very happy and excited to participate in this sprint, and you can see it in the photos – in their smiles and the joy you can read on their faces.

After all presentations and introductions, the really serious things could begin. Everyone spent two hours answering questions of French users on Twitter – the session passed very quickly in the friendly atmosphere.

We couldn’t reach the goal of answering all the Army of Awesome posts in French, but everyone appreciated what we achieved, providing answers to over half the posts – we were (and still are) very proud of our job!

After the Army of Awesome session, our SUMO warriors turned to Locamotion for Nouchi localization. It was at once serious and fun. Originally planned for three hours, we localized for four – because it was so interesting :-)

Mozilla and I received congratulations from all the participants for this initiative, which promotes the Ivorian language and Ivory Coast as a digital country present on the internet.

Even though we were not able to reach all our objectives, we are still very proud of what we have done. We contributed very intensely, both to help people who needed it and to improve the scale and quality of Nouchi translations in open source, with the help of new and dynamic contributors.

The sprint ended with a group tasting of garba (a traditional local dish) and a beautiful family picture.

Thank you, Abbackar! It’s always great to see happy people contributing their skills and time to open source initiatives like this. SUMO is proud to be included in Ivory Coast’s open source movement! We hope to see more awesomeness coming from the local community in the future – in the meantime, I think it’s time to cook some garba! ;-)

Patrick ClokeGoogle Summer of Code 2016 projects

I’d like to introduce the 13 students that are being mentored by Mozilla this year as part of Google Summer of Code 2016! Currently the “community bonding” period is ongoing, but we are on the cusp of the “coding period” starting.

As part of Google Summer of Code (GSoC), we ask students to provide weekly updates of their progress in a public area (usually a blog). If you’re interested in a particular project, please follow along! Lastly, remember that GSoC is a community effort: if a student is working in an area where you consider yourself knowledgeable, please introduce yourself and offer to provide help and/or advice!

Below is a listing of each student’s project (linked to their weekly updates), the name of each student and the name of their mentor(s).

  • Download app assets at runtime (Firefox for Android), by Krish (mentor: skaspari)
  • File API Support (Servo), by izgzhen (mentor: Manishearth)
  • Implement RFC7512 PKCS#11 URI support and system integration (NSS), by varunnaganathan (mentors: Bob Relyea, David Woodhoue)
  • Implementing Service Worker Infrastructure in Servo Browser Engine, by creativcoder (mentor: jdm)
  • Improving and expanding the JavaScript XMPP Implementation, by Abdelrhman Ahmed (mentors: aleth, nhnt11)
  • Mozilla Calendar – Event in a Tab, by paulmorris (mentor: Philipp Kewisch)
  • Mozilla Investigator (MIG): Auditd integration, by Arun (mentor: kang)
  • Prevent Failures due to Update Races (Balrog), by varunjoshi (mentor: Ben Hearsum)
  • Proposal of Redesign SETA, by MikeLing (mentor: Joel Maher)
  • Schedule TaskCluster Jobs in Treeherder, by martianwars (mentor: armenzg)
  • Thunderbird - Implement mbox -> maildir converter, by Shiva (mentor: mkmelin)
  • Two Projects to Make A-Frame More Useful, Accessible, and Exciting, by bryik (mentor: Diego Marcos)
  • Web-based GDB Frontend, by baygeldin (mentor: jonasfj)

Doug BelshawWhat does it mean to be a digitally literate school leader?

As part of the work I’m doing with London CLC, their Director, Sarah Horrocks, asked me to write something on what it means to be a digitally literate school leader. I’d like to thank her for agreeing to me writing this for public consumption.

Image CC BY K.W. Barrett

Before I start, I think it’s important to say why I might be in a good position to be able to answer this question. First off, I’m a former teacher and senior leader. I used to be Director of E-Learning of a large (3,000 student), all-age, multi-site Academy. I worked for Jisc on their digital literacies programme, writing my thesis on the same topic. I’ve written a book entitled The Essential Elements of Digital Literacies. I also worked for the Mozilla Foundation on their Web Literacy Map, taking it from preliminary work through to version 1.5. I now consult with clients around identifying, developing, and credentialing digital skills.

That being said, it’s now been a little over six years since I last worked in a school, and literacy practices change quickly. So I’d appreciate comments and pushback on what follows.


Let me begin by saying that, as Allan Martin (2006) pointed out, “Digital literacy is a condition, not a threshold.” That’s why, as I pointed out in my 2012 TEDx talk, we shouldn’t talk about ‘digital literacy’ as a binary. People are not either digitally literate or digitally illiterate - instead literacy practices in a given domain exist on a spectrum.

In the context of a school and other educational institutions, we should be aware that there are several cultures at play. As a result, there are multiple, overlapping literacy practices. For this reason we should talk of digital literacies in their plurality. As I found in the years spent researching my thesis, there is no one, single definition of digital literacy that is adequate in capturing the complexity of human experience when using digital devices.

In addition, I think that it’s important to note that digital literacies are highly context dependent. This is perhaps most evident when addressing the dangerous myth of the 'digital native’. We see young people confidently using smartphones, tablets, and other devices and therefore we assume that their skillsets in one domain are matched by the requisite mindsets from another.

So to recap so far, I think it’s important to note that digital literacies are plural and context-dependent. Although it’s tempting to attempt to do so, it’s impossible to impose a one-size-fits-all digital literacy programme on students, teachers, or leaders and meet with success. Instead, and this is the third 'pillar’ on which my approach rests, I’d suggest that definitions of digital literacies need to be co-created.

By 'co-created’ I mean that there are so many ways in which one can understand both the 'digital’ and 'literacies’ aspects of the term 'digital literacies’ that it can be unproductively ambiguous. Instead, a dialogic approach to teasing out what this means in your particular context is much more useful. In my thesis and book I came up with eight elements of digital literacies from the research literature which prove useful to scaffold these conversations:

  1. Cultural
  2. Cognitive
  3. Constructive
  4. Communicative
  5. Confident
  6. Creative
  7. Critical
  8. Civic

In order not to make this post any longer than it needs to be, I’ll encourage you to look at my book and thesis for more details on this. Suffice to say, it’s important both to collaboratively define the above eight terms and define then what you mean by 'digital literacies’ in a particular context.

All of this means that the job of the school leader is not to reach a predetermined threshold laid down by a governing body or professional body. Instead, the role of the school leader is to be always learning, questioning their practice, and encouraging colleagues and students in all eight of the 'essential elements’ listed above.

As with any area of interest and focus, school leaders should model the kinds of knowledge, skills, and behaviours they want to see develop in those around them. Just as we help people learn that being punctual is important by always turning up on time ourselves, so the importance of developing digital literacies can be demonstrated by sharing learning experiences and revelations.

There is much more on this in my thesis, book, and presentations but I’ll finish with some recommendations as to what school leaders can do to ensure they are constantly improving their practices around digital literacies:

  • Seek out new people: it’s easy for us to become trapped in what are known as filter bubbles, either through the choices we make as a result of confirmation bias, or algorithmically-curated newsfeeds. Why not find people and organisations who you wouldn’t usually follow, and add them to your daily reading habits?
  • Share what you learn: why not create a regular way to update those in your school community about issues relating to the considered use of technology? This could be a discussion forum, a newsletter pointing to the work of people like the Electronic Frontier Foundation or Common Sense Media, or 'clubs’ that help staff and students get to grips with new technologies.
  • Find other ways: the danger of 'best practices’ or established workflows is that they can make you blind to new, better ways of doing things. As Clay Shirky notes in this interview it can be liberating to jettison existing working practices in favour of new ones. What other ways can you find to write documents, collaborate with others, be creative, and/or keep people informed?

Comments? Questions? I’m @dajbelshaw or you can get in touch with me at: hello@dynamicskillset.com. I consult around identifying, developing, and credentialing digital skills.

Air MozillaBay Area Accessibility and Inclusive Design meetup: Fifth Annual Global Accessibility Awareness Day

Bay Area Accessibility and Inclusive Design meetup: Fifth Annual Global Accessibility Awareness Day Digital Accessibility meetup with speakers for Global Accessibility Awareness Day. #a11ybay. 6pm Welcome with 6:30pm Start Time.

Support.Mozilla.OrgWhat’s Up with SUMO – 19th May

Hello, SUMO Nation!

Glad to see all of you on this side of spring… How are you doing? Have you missed us as much as we missed you? Here we go yet again,  another small collection of updates for your reading pleasure :-)

Welcome, new contributors!

If you just joined us, don’t hesitate – come over and say “hi” in the forums!

Contributors of the week

Don’t forget that if you are new to SUMO and someone helped you get started in a nice way you can nominate them for the Buddy of the Month!

Most recent SUMO Community meeting

The next SUMO Community meeting

  • …is happening on WEDNESDAY the 25th of May – join us!
  • Reminder: if you want to add a discussion topic to the upcoming meeting agenda:
    • Start a thread in the Community Forums, so that everyone in the community can see what will be discussed and voice their opinion here before Wednesday (this will make it easier to have an efficient meeting).
    • Please do so as soon as you can before the meeting, so that people have time to read, think, and reply (and also add it to the agenda).
    • If you can, please attend the meeting in person (or via IRC), so we can follow up on your discussion topic during the meeting with your feedback.

Community

Social

Support Forum

Knowledge Base & L10n

  • The Polish team have reached their monthly milestone – congratulations!
  • Final reminder: if you want to participate in the ongoing discussion about source material quality and frequency, take a look at this thread. We are going to propose a potential way of addressing your issues once we collate enough feedback.
  • Reminder: L10n hackathons everywhere! Find your people and get organized!

Firefox

  • for Android
    • Version 46 support discussion thread.
    • Reminder: version 47 will stop supporting Gingerbread. High time to update your Android installations!
      • Other than that, it should be a minor release. Documentation in progress!

And that’s it! We hope you are looking forward to the end of this week and the beginning of the next one… We surely are! Don’t forget to follow us on Twitter!

Yunier José Sosa VázquezMozilla introduces Alex Salkever as Vice President of Marketing

Today, Mozilla announced its most recent addition to the foundation’s leadership team: Alex Salkever, who will serve as the new Vice President of Marketing.

In the article published on the Mozilla blog, Jascha Kaykas-Wolff (Chief Marketing Officer) explains that in his new role, Alex will be responsible for driving strategic positioning and marketing campaigns. He will also oversee the global communications, social media, user support and content marketing teams, working across the organization to develop impactful external communications for Mozilla and Firefox products.

Alex Salkever, photo taken from blog.mozilla.org

Previously, Alex was Chief Marketing Officer at Silk.co, where he focused on user growth and platform partnerships. Salkever has also held a variety of product marketing roles in the fields of scientific instruments, cloud computing, telecommunications and the Internet of Things. In these various capacities, Alex has managed campaigns across all aspects of marketing and product marketing, including public relations, content marketing, user acquisition, developer recruitment and marketing analytics.

Alex also brings to Mozilla his experience as a former Technology Editor at BusinessWeek.com. Among his many accomplishments, Alex is the co-author of “The Immigrant Exodus” (in Spanish, El Éxodo del Inmigrante), a book named to The Economist Book of the Year List in the Business Books category in 2012.

Welcome to Mozilla, Alex!

Air MozillaWeb QA Team Meeting, 19 May 2016

Web QA Team Meeting Weekly Web QA team meeting - please feel free and encouraged to join us for status updates, interesting testing challenges, cool technologies, and perhaps a...

Air MozillaReps weekly, 19 May 2016

Reps weekly This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

About:CommunityJakarta Community Space Launch

This post was written by Fauzan Alfi.

It was not an ordinary Friday the 13th for Mozilla Indonesia, because May 13th, 2016 was a very big day for us. After months of planning and preparation, the Mozilla Community Space Jakarta finally launched and opened for the community. It’s the 4th volunteer-run physical community space after Bangalore (now closed), Manila and Taipei, with another one opening soon in Berlin. Strategically located in Cikini – Central Jakarta, the Space will become a place for Mozillians from Greater Jakarta and Bandung to do many activities, especially developer-focused events, and to build relationships with other tech communities in the city.

The Space

The Space. Photo by Yofie Setiawan

Invited to the event were many open source and other communities around the city. Mozilla Reps, FSAs and Mozillians also joined to celebrate the Space opening. In his presentation, Yofie Setiawan (Mozilla Rep, Jakarta Space Manager) expressed his hope that the Jakarta Community Space will be useful for many people and communities, especially in educating anyone who comes and joins events that take place in the space.

Opening Event

Dian Ina and Rara talk to guests. Photo by Yofie Setiawan

Ceremonial first piece

Brian gets the ceremonial first bite. Photo by Yofie Setiawan

Also joining the event was Brian King from the Participation Team at Mozilla. During his remarks, Brian said that the reason behind the Jakarta Community Space is that “the Mozilla community here is one of the most active globally, with deep roots and a strong network in the tech scene”. He also added that “Indonesia is an important country with a very dynamic Web presence, and we’d like to engage with more people to make the online experience better for everyone.”

The Jakarta Community Space is around 40 square meters in area and fits 20-30 people. At the front, it has a glass wall covered by a frosted sticker with some Mozilla project wording printed on it. Inside, we have some chairs, tables, a home theater set, food & drink supplies and a coffee machine. Most of the items were donated by Mozillians in Jakarta.

The tour

The tour. Photo by Yofie Setiawan

One area where the Jakarta community excelled was the planning and design. All of the work was done by the community itself. One of the Reps from Indonesia, Fauzan Alfi – who has a background in architecture – helped design the space and kept the process transparent on the Community Design GitHub. The purpose is to ignite collaborative design, not only from the Indonesian community but also from other parts of the globe. The community also showed its creativity by painting murals of landmarks in selected cities around the world – including the Monas of Jakarta.

The Jakarta Community Space means a lot for the Mozilla community in Greater Jakarta and in Indonesia in general. Having a physical place means the Indonesian community will have its own home to spread the mission and collaborate with more communities that are aligned with Mozilla, especially developer communities. Hopefully, the Space will bring more and more people to contribute to Mozilla and help shape the future of the Web.

Pascal ChevrelLet's give Firefox Nightly some love!

After a decade working on making Mozilla Web properties available in dozens of languages, creating communities of localizers around the globe and building Quality Assurance tools, dashboards and APIs to help ship our software and websites internationally, I recently left the Localization department to report to Doug Turner and work on a new project directly benefiting the Platform and Firefox teams!

I am now in charge of a project aiming to turn Nightly into a maintained channel (just as the Aurora, Beta and Release channels are), whose goal will be to engage our very technical Nightly users in the Mozilla project, through activities that have a measurable impact on the quality of our products.

Here are a few key goals I would like us to achieve in 2016-2017:

  • Double the number of Nightly users so as to detect regressions, crashes and Web compatibility issues much earlier. A regression detected and reported a couple of days after the code landed on mozilla-central is a simple backout; the same regression reported weeks or even months later on the Aurora or Beta channels, or discovered on the Release channel, can be much more work to get fixed.

  • Make Firefox Nightly a real entry point for the more technical users who want to get involved in Mozilla and help us ship software (QA, code, Web Compatibility, security…). Not only for Firefox, but also for all technical Mozilla projects that would benefit from wider participation.

  • Make Firefox Nightly a better experience for these technical contributors. This means, as a first step, using the built-in communication channels (about:home promotional snippets, default tiles, first-run / What's New pages…) to communicate information adapted to technical users and propose resources, activities and ways to participate in Mozilla that are technical by nature. I also want to have a specific focus on three countries, Germany, France and Spain, where we have strong local communities, staff and MozSpaces and can engage people more easily IRL.

I will not work on that alone, Sylvestre Ledru, our Release Management Lead, has created a new team (with Marcia Knous in the US and Calixte Denizet in France) to work on improving the quality of the Nightly channel and analyse crashes and regressions. Members of other departments (Participation, MDN, Security, Developer Relations…) have also shown interest in the project and intend to get involved.

But first and foremost, I do intend to get the Mozilla community involved, and hopefully also get people not yet involved in Mozilla to join us and help us make this "Nightly Reboot" project a success!

A few pointers for this project:

  • There is an existing #nightly IRC channel that we are restoring with Marcia and a few contributors. I am pascalc on IRC and I am in the CET timezone; don't hesitate to ping me there if you want to offer your help, learn more about the project or propose your own ideas.

  • Marcia created a "Nightly Testers" Telegram channel, ping me if you are already using Nightly to report bugs and want to be added

  • For asynchronous communication, there is a Nightly Testers mailing list

  • If you want to download Nightly, go to nightly.mozilla.org. Unfortunately the site only offers en-US builds, and this is definitely something I want to get fixed! If you are a French speaker, our community maintains its own download site for Nightly with links to French builds that you can find at nightly.mozfr.org; otherwise, other localized builds can be found on our FTP.

  • If you want to know all the new stuff that gets into our Nightly channel, follow our @FirefoxNightly twitter account

  • If you are a Nightly user and report a bug on https://bugzilla.mozilla.org, please put the tag [nightly-community] in the whiteboard field of your bug report; this allows us to measure the impact of our active Nightly community on Bugzilla.

Interested? Do get involved, and don't hesitate to contact me if you have any suggestion or idea that could fit into this project. Several people I spoke with in the last weeks gave me very interesting feedback and concrete ideas that I carefully noted!

You can contact me (in English, French or Spanish) through the following communication channels:

  • Email: pascal AT mozilla DOT com
  • IRC on Moznet and Freenode: pascalc
  • Twitter: @pascalchevrel

Update 15:33: See also this blog post by Mozilla engineer Nicholas Nethercote, I want more users on the Nightly channel.

Nicholas NethercoteI want more users on the Nightly channel

I have been working recently on a new Platform Engineering initiative called Uptime, the goal of which is to reduce Firefox’s crash rate on both desktop and mobile. As a result I’ve been spending a lot of time looking at crash reports, particularly on the Nightly channel. This in turn has increased my appreciation of how important Nightly channel users are.

A crash report from a Nightly user is much more useful than a crash report from a non-Nightly user, for two reasons.

  • If a developer lands a change that triggers crashes for Nightly users, they will get fast feedback via crash reports, often within a day or two.  This maximizes the likelihood of a fix, because the particular change will be fresh in the developer’s mind. Also, backing out changes is usually easy at this point. In contrast, finding out about a crash weeks or months later is less useful.
  • Because a new Nightly build is done every night, if a new crash signature appears, we have a fairly small regression window. This makes it easier to identify which change caused the new crashes.

Also, Nightly builds contain some extra diagnostics and checks that can also be helpful with identifying a range of problems. (See MOZ_DIAGNOSTIC_ASSERT for one example.)

If we could significantly increase the size of our Nightly user population, that would definitely help reduce crash rates. We would get data about a wider range of crashes. We would also get stronger signals for specific crash-causing defects. This is important because the number of crash reports received for each Nightly build is relatively low, and it’s often the case that a cluster of crash reports that come from two or more different users will receive more attention than a cluster that comes from a single user.

(You might be wondering how we distinguish those two cases. Each crash report doesn’t contain enough information to individually identify the user — unless the user entered their email address into the crash reporting form — but crash reports do contain enough information that you can usually tell if two different crash reports have come from two different users. For example, the installation time alone is usually enough, because it’s measured to the nearest second.)

All this is doubly true on Android, where the number of Nightly users is much smaller than on Windows, Mac and Linux.

Using the Nightly channel is not the best choice for everyone. There are some disadvantages.

  • Nightly is less stable than later channels, but not drastically so. The crash rate is typically 1.5–2.5 times higher than Beta or Release, though occasionally it spikes higher for a short period. So a Nightly user should be comfortable with the prospect of less stability.
  • Nightly gets updated every 24 hours, which some people would find annoying.

There are also advantages.

  • Nightly users get to experience new features and fixes immediately.
  • Nightly users get the satisfaction that they are helping produce a better Firefox. The frustration of any crash is offset by the knowledge that the information in the corresponding crash report is disproportionately valuable. Indeed, there’s a non-trivial likelihood that a single crash report from a Nightly user will receive individual attention from an engineer.

If you, or somebody you know, thinks that those advantages outweigh the disadvantages, please consider switching. Thank you.

Robert O'Callahanx86-64 SSE/AVX Register Usage

Following up on my previous post, I counted the fraction of instructions in Firefox opt/debug libxul.so that use each XMM/YMM register.
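
As a rough sketch of how such a count could be produced (this is my assumption of the method, not roc's actual script), one could pipe the AT&T-syntax disassembly from objdump through a small Node script and tally which %xmm/%ymm registers each instruction mentions:

// count-mm-registers.js — hypothetical helper, run as:
//   objdump -d libxul.so | node count-mm-registers.js
const readline = require('readline');

const counts = new Array(16).fill(0);   // xmm0..xmm15 / ymm0..ymm15
let total = 0;

const rl = readline.createInterface({ input: process.stdin });
rl.on('line', (line) => {
  // disassembly lines look like "  4004d6:<tab>48 89 e5<tab>mov %rsp,%rbp"
  if (!/^\s+[0-9a-f]+:\t/.test(line)) return;
  total++;
  const seen = new Set();
  for (const m of line.matchAll(/%[xy]mm(\d+)/g)) {
    seen.add(Number(m[1]));
  }
  for (const reg of seen) {
    if (reg < counts.length) counts[reg]++;   // count each register once per instruction
  }
});
rl.on('close', () => {
  counts.forEach((n, reg) =>
    console.log(`mm${reg}: ${(100 * n / total).toFixed(3)}% of instructions`));
});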

Observations:

  • As before, debug builds are heavily weighted towards use of the first few registers, and opt builds allocate across more registers as you'd expect.
  • In debug builds, usage of the higher-numbered registers (up to 7) is a combination of va_start spilling all parameter registers (0-7) to the stack, and handwritten assembly. It looks like almost all the handwritten assembly in Firefox restricts itself to registers 0-7, presumably so it works in x86-32 as well. Maybe some of that code would benefit from being updated for x86-64 with more registers?
  • In opt builds there's a clear drop-off in usage after register 7, more than can be explained by handwritten assembly or va_start spilling (since those equally affect debug). It's not related to caller/callee-saves status, because all MM registers are caller-saves on Linux. It appears that in some functions experiencing moderate register pressure, gcc has freely used registers 0-7 but avoided using 8-15. Maybe that's because the latter require longer instruction encodings in some cases. You don't see the same drop-off moving to the upper eight GP registers, which have the same encoding-length issue, but that may be because of callee-saves and generally increased register pressure.
  • In libxul at least, MM registers are used far less often than GP registers. Register 0, the most-used by far, is used by barely 1% of instructions, comparable to the least-used GP registers. Registers 8 to 15 are each used by less than 0.1% of instructions.

As before, these are static counts and I'd expect weighting instructions by dynamic frequency would change the results dramatically --- on the right workloads --- since most of the hand-written assembly in Firefox is hand-written specifically to optimize use of MM registers in hot loops.

Update One interesting takeaway is that you have eight huge registers (256 bits each, 512 soon) unused by most code. That creates some interesting possibilities...

The Mozilla BlogWelcome Alex Salkever, Vice President of Marketing Communications

I’m excited to announce that Alex Salkever joins the Mozilla leadership team today as the Vice President of Marketing Communications.

In this role, Alex Salkever will be responsible for driving strategic positioning and marketing communications campaigns. Alex will oversee the global communications, social media, user support and content marketing teams and work across the organization to develop impactful outbound communications for Mozilla and Firefox products.

Alex Salkever, Mozilla

Alex was most recently Chief Marketing Officer of Silk.co, a data publishing and visualization startup, where he led efforts focused on user growth and platform partnerships. Alex has held a variety of senior marketing, marketing communications and product marketing roles working on products in the fields of scientific instruments, cloud computing, telecommunications and Internet of Things. In these various capacities, Alex has managed campaigns across all aspects of marketing and product marketing including PR, content marketing, user acquisition, developer marketing and marketing analytics.

Alex also brings to Mozilla his experience as a former Technology Editor for BusinessWeek.com. Among his many accomplishments, Alex is the co-author of “The Immigrant Exodus”, a book named to The Economist Book of the Year List in the Business Books category in 2012.

Welcome Alex!

Background:

Alex’s bio & Mozillians profile

LinkedIn profile

High-resolution photo

Air MozillaThe Joy of Coding - Episode 57

The Joy of Coding - Episode 57 mconley livehacks on real Firefox bugs while thinking aloud.

Karl DubostWorking AS a distributed team. No more remote workers

When you say "remote workers", you have already failed in your intent to have an effective team. Companies setting up a "remote-friendly work environment" are most of the time failing to understand the nature of the issues. The best way to set up a distributed team is by forgetting about the workers and focusing on the work itself.

I'm working at Mozilla in the Web Compatibility Activity. I'm part of what is now a 4-person team (Mozilla staff), plus a couple of active contributors. For the Mozilla staff part:

  • Mike Taylor in Texas, USA
  • Hallvord Steen in Norway
  • Adam Stevenson in Ontario, Canada
  • Karl Dubost (myself) in Kanagawa, Japan
  • We have contributors in India, Japan, France, Romania, Brazil, Mexico, etc.

We are working together.

The most important part of creating a successful distributed team is when you stop thinking that there are remote workers in your team. You need to consider that the work can be done from anywhere by anyone competent for the job. This will give a good base for organizing the work in terms of process, protocols and tools to be productive and effective. That's the key, the only one.

Some tips:

  1. Choose open first: opening up a private discussion is a lot harder than making a private comment on an open discussion.
  2. Record Action Items: Have action items which are identifiable by all the team members you are working with (and broader when possible. See 1.). These items need an owner, an unambiguous actionable task, a target or context and a deadline.
  3. Record any meetings: When there is a meeting, write down detailed minutes on the spot. Give these minutes a unique and stable URI. The context might be useful for another team or a new employee later on.
  4. Share your work assets with others: Anything you produce, code, documents, etc. Give access to it. Share it as early as possible with again stable and public URIs.
  5. Share your worklog: This helps others to decide if they can request more things from you. This will help them to decide if they can make progress on their own job.
  6. Web Archived Mailing-Lists: Set your mailing-list archives in a way that makes them accessible to everyone (to the world if possible, otherwise to the entire company, and in some very rare contexts only to your team).

There are many other small tips to make this more effective, but these will go a long way towards achieving your goals.

The amazing benefit of working that way is that it doesn't just allow for a distributed team; it makes the whole organization more robust by having solid information-flow management.

PS: I have been working in a distributed way at W3C, Opera and now Mozilla for the last 15 years. In my work history, I still consider the W3C (2000-2008) the best place for distributed work among staff. I don't know about W3C today.

Otsukare!

Maja FrydrychowiczA Week of Reflection and Programming for Fun

I recently got to spend a week back at the heart of an excellent, delightful, inspiring technical community: Recurse Center or RC. This friendly group consists mostly of programmers from around the world who have, at some point, participated in RC’s three-month “retreat” in New York City to work on whatever projects happen to interest them. The retreat’s motto is “never graduate”, and so participants continue to support each other’s technical growth and curiosity forever and ever.

I’m an RC alum from 2014! RC’s retreat is how I ended up contributing to open source software and eventually gathering the courage to join Mozilla. Before RC, despite already having thousands of hours of programming and fancy math under my belt, I held myself back with doubts about whether I’m a “real programmer”, whatever that stereotype means. That subconscious negativity hasn’t magically disappeared, but I’ve had a lot of good experiences in the past few years to help me manage it. Today, RC helps me stay excited about learning all the things for the sake of learning all the things.

A retreat at RC looks something like this: you put your life more-or-less on hold, move to NYC, and spend three months tinkering in a big, open office with around fifty fellow (thoughtful, kind, enthusiastic) programmers. During my 2014 retreat, I worked mostly on lowish-level networking things in Python, pair programmed on whatever else people happened to be working on, gave and received code review, chatted with wise “residents”, attended spontaneous workshops, presentations and so on.

Every May, alumni are invited to return to the RC space for a week, and this year I got to go! (Thanks, Mozilla!) It was awesome! Exclamation points! This past week felt like a tiny version of the 3-month retreat. After two years away, I felt right at home — that says a lot about the warm atmosphere RC manages to cultivate. My personal goal for the week was just to work in a language that’s relatively new to me - JavaScript - but I also happened to have really interesting conversations about things like:

  • How to implement a basic debugger?
  • How to improve the technical interview process?
  • What holds developers back or slows them down? What unnecessary assumptions do we have about our tools and their limitations?

RC’s retreat is a great environment for growing as a developer, but I don’t want to make it sound like it’s all effortless whimsy. Both the hardest and most wonderful part of RC (and many other groups) is being surrounded by extremely impressive, positive people who never seem to struggle with anything. It’s easy to slip into showing off our knowledge or to get distracted by measuring ourselves against our peers. Sometimes this is impostor syndrome. Sometimes it’s the myth of the 10x developer. RC puts a lot of effort into being a safe space where you can reveal your ignorance and ask questions, but insecurity can always be a challenge.

Similarly, the main benefit of RC is learning from your peers, but the usual ways of doing this seem to be geared toward people who are outgoing and think out loud. These are valuable skills, but when we focus on them exclusively we don’t hear from people who have different defaults. There is also little structure provided by RC so you are free to self-organize and exchange ideas as you deem appropriate. The risk is that quiet people are allowed to hide in their quiet corners, and then everyone misses out on their contributions. I think RC makes efforts to balance this out, but the overall lack of structure means you really have to take charge of how you learn from others. I’m definitely better at this than I used to be.

RC is an experiment and it’s always changing. Although at this point my involvement is mostly passive, I’m glad to be a part of it. I love that I’ve been able to work closely with vastly different people, getting an inside look at their work habits and ways of thinking. Now, long after my “never-graduation”, the RC community continues to expose me to a variety of ideas about technology and learning in a way that makes us all get better. Continuous improvement, yeah!

Armen ZambranoSchedule a Linux64 TaskCluster task from the command line

I've created an experimental repository to play with TaskCluster scheduling using your personal temporary credentials.

If you want to try to schedule a real task from the command line feel free to give it a try:
https://github.com/armenzg/TC_developer_scheduling_experiments


Here's the output of scheduling a Linux64 debug task.
NOTE: It will not post to Treeherder
NOTE: It will open a new tab asking you to grant access to your TaskCluster temp credentials.

(TC_scheduling) armenzg@armenzg-thinkpad:~/repos/TC_developer_scheduling_experiments$ python schedule_linux64_task.py 
04:48:50 root Setting INFO level
04:48:50 mozci.taskcluster.tc We're going to open a new tab and authenticate you with TaskCluster.
-------------------------------------------------------
  Opening browser window to login.taskcluster.net
  Asking you to grant temporary credentials to:
     http://localhost:39025
-------------------------------------------------------
04:48:54 mozci.taskcluster.tc Inspect the task in https://tools.taskcluster.net/task-inspector/#bmt-5IqPTwmn8JrMzdofGg




Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

Air MozillaConnected Devices Weekly Program Update, 17 May 2016

Connected Devices Weekly Program Update Weekly project updates from the Mozilla Connected Devices team.

Mozilla Addons BlogAdd-on Compatibility for Firefox 48

Firefox 48 will be released on August 2nd. Here’s the list of changes that went into this version that can affect add-on compatibility. There is more information available in Firefox 48 for Developers, so you should also give it a look.

General

XPCOM and Modules

New

Let me know in the comments if there’s anything missing or incorrect on these lists. If your add-on breaks on Firefox 48, I’d like to know.

The automatic compatibility validation and upgrade for add-ons on AMO will happen in a few weeks, so keep an eye on your email if you have an add-on listed on our site with its compatibility set to Firefox 47.

About:CommunityReinventing Mozilla on Campus

Re-post from George Roter’s blog, “Reinventing Mozilla on Campus” .

Throughout history, University students, staff and professors have often shaped the leading edge of change and innovation. The history of the web is no different: the student-built Lynx browser was one of the first, and Mosaic (Firefox’s distant ancestor!), pioneered by students and staff, opened the graphical web to millions.

I saw the impact that students and professors can make through my own experience at Engineers Without Borders Canada. Engineering students and professors on campuses across Canada and in Africa built remarkable ventures, reshaped curriculum, changed on-campus and government policy, and taught hundreds of thousands of young people about global development.

I fully believe in the potential of students, staff and professors on campuses around the world to have massive impact on Mozilla’s mission. As innovators, contributors and open web advocates. Engineers, scientists, lawyers, social scientists, economists and designers.

From my own past experience, and from what I have heard in the past year working for Mozilla, our mission resonates tremendously with students and professors. The range of impact and involvement is considerable. Until now, we’ve only just scraped the surface of this potential.

We need to reinvent Mozilla on campus.

Our existing engagement on University campuses around the world is an assortment of largely disconnected programs and people. Firefox Student Ambassadors and Firefox Clubs. Mozilla Clubs. Code contribution by individual contributors. Maker Party. Mozilla Science Lab. Various professor and lab partnerships. Employee recruitment. Many of these are successful in their own right; there’s an opportunity to learn from each of them, find connections, and imagine opportunities to scale their impact with a more coordinated approach.

Photo credit: Tanha Islam and Trisa Islam

The largest of these by student involvement, Firefox Student Ambassadors (FSAs) and Firefox Clubs, has been constrained by limited and variable employee support and a focus on marketing. Our student leaders have already been “hacking” this program to introduce advocacy, code contribution, support, localization, teaching and many other activities; official support for this has lagged.

Our team came into this year with a key hypothesis as part of our strategy: That we can supercharge participation with a reinvented campus program.

The Take Back the Web campus campaign focused on privacy and security has been our first effort to test this hypothesis. Already it’s showing great promise, with over 600 campus teams signed up (including hundreds of FSAs) to have impact in 3 areas. We’re focused on learning as much as we can from this campaign.

The campus campaign is a step toward reinvention. But I think it’s now time to take a step back to ask: What impact can we imagine with a coordinated effort on campuses around the world? What do students, staff and professors want and need to be involved with Mozilla’s mission? How might we evolve our existing programs? What programs and structures would we design, and how do they relate to one another? How can we invite people on campus to innovate with Mozilla?

These are the broad questions that will guide a process over the next 9 weeks. By July 15th we aim to have a clear articulation of the impact we can have, the programs we’ll invest in and how they relate to one another, and the opportunities for students, staff and professors to participate.

We’re hoping that this process of reinventing Mozilla on campus will be participatory, and we’re inviting many voices to contribute. Lucy Harris on the Participation Team will be stewarding this process and shaping the final options. Mark Surman, Mitchell Baker, Chris Lawrence, Katharina Borchert and I will be involved in making a final decision on the direction we take.

You can read more about the details of the process in this post, but let me summarize it and the opportunities you have to be involved:

Phase 1: Listening (May 16-27)

→ provide thoughts on existing programs and opportunities you see

Phase 2: Synthesis and options (May 27-June 10)

→ we’ll frame some tensions for you to weigh in on

→ we’ll shape a set of options for conversation during the London All Hands

Phase 3: Final input (June 10-24)

→ we’ll articulate a set of options for you to consider as we move forward, and will be diving deep into these and key questions during the Mozilla All Hands in London

Phase 4: Final Decision and Disseminate (June 24-July 15)

→ we’ll take all the input and decide on a direction for moving forward

Let me finish by reiterating the opportunity. University campuses are a hotbed of innovation and a locus for creating change. Mozilla can tap into this energy and catalyze involvement in unleashing the next wave of openness and opportunity in online life. Finally, our team is excited about helping to shape a direction we can take, and investing in a robust program of participation moving forward.

I’m excited for this journey of reinventing Mozilla on campus.

The Mozilla BlogMozilla Expands Its National Gigabit Project to Austin, TX

Mozilla will provide $150,000 in funding, and also grow the local maker community, to spur gigabit innovation in Texas’ capital

When you couple lightning-fast Internet with innovative projects in the realms of education and workforce development, amazing things can happen.

That’s the philosophy behind the Mozilla Gigabit Community Fund, our joint initiative with the National Science Foundation and US Ignite. The Mozilla Gigabit Community Fund brings funding and staffing to U.S. cities equipped with gigabit connectivity, the next-generation Internet that’s 250 times faster than most other connections. Our goal: spark the creation of groundbreaking, gigabit-enabled educational technologies so that more people of all ages and backgrounds can read, write, and participate on this next-generation Web.

As we just announced at the Gigabit City Summit in Kansas City, we’re expanding our gigabit work to the city of Austin, TX in August 2016. Selected from a list of contenders from across the country, Austin stood out due to its existing city-wide digital inclusion plan, active developer community, and growing informal education landscape. Beginning this fall, Mozilla will provide $150,000 in grant funding to innovative and local projects and tools that leverage Austin’s Google Fiber network. Think: 4K streaming in classrooms, immersive virtual reality, and more.

(In the existing Mozilla Gigabit cities of Chattanooga, TN and Kansas City, projects include real-time water monitoring systems, 3D learning tools for classrooms, and specialized technology for first responder training. Read more about those projects here.)

Individuals from the Chattanooga gigabit project Hyperaudio participate in a New York City Maker Party.

Mozilla is also investing in the makers and educators who make Austin great. We’ll help create Gigabit Hive Austin — a network of individuals, schools, nonprofits, museums, and other local organizations passionate about teaching and learning the Web. Hive Austin will be one of 14 Mozilla Hive networks and communities across four continents that teach web literacy and 21st-century skills.

Mozilla will open the first round of grant applications in Austin this August, and accept applications through October 18, 2016. Applicants and projects don’t have to be from Austin originally, but must be piloted locally. Click here to learn about the RFP process.

This spring, Mozilla is also providing $134,000 in new gigabit funding in Chattanooga and Kansas City. Funds will support projects that explore gigabit and robotics, big data, the Internet of Things, and more. Learn more.

Over the next two years, Mozilla will be expanding its Gigabit work to two additional cities. Interested in becoming a future Gigabit Hive city? We will reopen the city application process in late 2016.

Karl Dubost[worklog] From 17s to 0.5s, testing for performance

Tune of the week: Hip-Hop Mixtapes.

Webcompat Life

Progress this week:

Today: 2016-05-23T15:18:18.501043
354 open issues
----------------------
needsinfo       3
needsdiagnosis  119
needscontact    27
contactready    43
sitewait        157
----------------------

You are welcome to participate

London agenda.

The French Mozilla community seems interested in Web Compatibility work. Replying to a thread where they asked me for more information.

Seen this http-prompt library. Shared the link with mike who replied "wow! gonna install this". A couple of minutes later I opened a new issue for the missing HEAD method. Then I thought "Oh shoot! I can solve this". I created a pull request, which was merged in the following hour. ❤️ opensource.

Webcompat issues

(a selection of some of the bugs worked on this week).

WebCompat.com dev

  • We got a performance regression with some code I had written recently. Maybe there is a solution. A bit later: found. Reminder to myself: do performance testing for any I/O operations before pushing them to prod. On Thursday, I did a lot of testing. Maybe I should write a separate post about this. It was an interesting experiment. Upload performance went from 17s to 0.5s.
  • We probably need to block the about: URIs in reporting. It doesn't make sense in the context of our work.

Reading List

  • Evaluating scripts using Marionette: "I finally managed to land a patch that makes evaluated scripts have lasting side-effects in the document. This means that scripts such as window.foo = 42 or document.documentElement.style.background = "pink" will take effect on the DOM. This is a requirement for WebDriver conformance, and a major inhibitor for people trying to replace FirefoxDriver with Marionette in their automation. "
  • Web Standards: "Standards are made by the people who show up, and not nearly enough people are showing up. We need publishers and designers and engineers, typographers and librarians and entrepreneurs, web folks and book folks and business folks. We certainly need more diversity of gender and color and class."
  • GridViz: A Firefox extension to visualize grid lines defined by the CSS Grid API

Follow Your Nose

TODO

  • Document how to write tests on webcompat.com using test fixtures.
  • ToWrite: rounding numbers in CSS for width
  • ToWrite: Amazon prefetching resources with <object> for Firefox only.

Otsukare!

James LongWhat’s in a Continuation

Many people have heard the word “continuation” because it has something to do with node’s callback hell. I don’t think most people understand what continuations really are, though. They aren’t just a callback function used by async functions.

A continuation is a representation of the control flow of your program at any point in time, essentially the stack. In abstract terms, it represents "the rest of your program." In languages like Scheme that expose continuations as first-class values, you can capture the current continuation and invoke it later. When invoked, the current program state is replaced with the state at which the continuation was captured (i.e. the current stack is replaced with the stack from the continuation).

Continuations allow you to literally "jump" to different places in your code. They are a low-level primitive that gives you control over execution flow, allowing you to implement everything from resumable exceptions to coroutines. Understanding continuations was the best thing I did as a young programmer; it forces you to understand how control flow works.

It would be neat to see something like continuations implemented in JavaScript engines because you can implement everything on top of them (note that I said something like, as continuations themselves are very hard to optimize). I'm a fan of low-level primitives for the same reasons as the Extensible Web Manifesto: let users evolve the language over time.

I recently ended up implementing continuations in JavaScript. It wasn’t on purpose; I was originally pursuing a way to arbitrarily pause JavaScript in user-land so I could write tutorials and interactive editors. I realized that to arbitrarily pause JS, I would need all the machinery necessary for continuations. The ability to save a stack and resume it later. Eventually I discovered the paper "Exceptional Continuations in JavaScript" and was able to achieve my in-browser stepping debugger by implementing continuations.

I wrote more about the backstory of my work at the end of this post. I did most of this work 2 years ago and I'm now polishing it up and publishing it.

function foo(x) {
  console.log(x);
  if (x <= 0) {
    return x;
  } else {
    return x + foo(x - 1);
  }
}

function main() {
  console.log(foo(3));
}

main();
A working stepping debugger! Click on any line to add a breakpoint.

And just like that, I realized that I could expose continuations first-class to this special variant of JS, which is way more interesting than my stepping debugger. In this article, I will use my work to explain what continuations are and give you a chance to interact with them.

I will explain how it is implemented in the next post. A short version: it transforms all code into a state machine and uses exceptions to save the state of all functions on the stack. This means that every function is transformed into a big switch statement with every expression as separate cases, giving us the ability to arbitrarily jump around.
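
To make that concrete, here is a rough, hypothetical illustration of the shape of such a transform (not the actual output of unwinder or regenerator; the state layout and names are made up): the function body becomes a switch over numbered states, so execution can stop after any case and pick up again later.

// Original:
//   function add(a, b) { var x = a + 1; return x + b; }
//
// Transformed (sketch only):
function add$transformed(ctx) {
  while (true) {
    switch (ctx.next) {
      case 0:                    // var x = a + 1;
        ctx.x = ctx.a + 1;
        ctx.next = 1;
        break;
      case 1:                    // return x + b;
        return ctx.x + ctx.b;
    }
  }
}

// Resuming from a saved state is just calling the machine again with the
// same ctx object: add$transformed({ a: 1, b: 2, next: 0 }) === 4.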

This transform is very similar to what regenerator does, which compiles JavaScript generators to ES5 code. In fact, that project is what motivated this work. Two years ago, I forked regenerator and implemented everything you see here. That means that it doesn't support a lot of recent ES6 features and it's missing a lot of bug fixes.

Visit the unwinder repo to see the code and try it yourself. Warning: this is very prototype quality and many things are ugly. There's a very good chance that you will hit bugs. However, with some polish work this has a chance to become a place where we can explore interesting patterns.

A few other caveats:

  • You cannot step through native frames, or use continuations when they are on the stack. If you use the native array forEach and capture the continuation in the callback, things will go badly. This requires all code to be compiled through this transformer if you want to use continuations (normal code can call out to native code just fine, however).

  • This technique favors performance of code that does not use continuations. Capturing continuations is not very fast, but if you are implementing something like a debugger, that doesn't matter. However, if you are implementing advanced control flow operators, you will likely hit performance problems. This is a good place to experiment with them though.

Introducing Continuations

Let's revisit the definition of a continuation; Wikipedia describes it as "an abstract representation of the control state of a computer program. A continuation reifies the program control state…" The key words are control state. This means that when a continuation is created, it contains all the necessary information to resume the program exactly at the point in time at which it was created.

This is how the stepping debugger works internally. The generated code looks for breakpoints, and when one is hit, it captures the current continuation and stops executing. Resuming is as simple as invoking the saved continuation.
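
A minimal sketch of that idea, using the callCC helper introduced just below and hypothetical helper names (this is not the actual unwinder internals, only the shape of it):

var breakpoints = new Set([13]);     // hypothetical: line numbers with breakpoints set
var savedContinuation = null;

// The generated code would call something like this before each statement.
function maybeBreak(line) {
  if (breakpoints.has(line)) {
    var pausing = callCC(function(cont) {
      savedContinuation = cont;      // the entire paused stack lives here
      return true;                   // no continuation invoked yet: we are pausing
    });
    if (pausing) {
      throw { __pause: true };       // unwind to the driver below
    }
    // pausing is false only when resume() jumps back in: keep executing.
  }
}

function run(program) {
  try {
    program();
  } catch (e) {
    if (!e.__pause) throw e;
    console.log("paused; call resume() to continue");
  }
}

// Clicking "Step"/"Resume" in the UI boils down to this:
function resume() {
  savedContinuation(false);          // jump back into maybeBreak and carry on
}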

Let's have some real fun though and expose continuations as first-class values! In Scheme, you use call-with-current-continuation to capture the current continuation, or the shorthand call/cc. Any experienced Scheme coder is familiar with code like this:

(define (foo)
  (let ([x (call/cc
            (lambda (cont)
              (display "captured continuation")
              (cont 5)
              (display "continuation called")))])
    (display "returning x")
    x))

(display (foo))

I implemented a callCC function in my special JavaScript variant since it already has all the necessary machinery. Additionally, we can use the stepping debugger to study how continuations affect the control flow.

function foo() {
  var x = callCC(function(cont) {
    console.log("captured continuation");
    cont(5);
    console.log("continuation called");
  });
  console.log("returning x", x);
  return x;
}

console.log(foo());
A basic example of continuations.

This is a very simple example of using a continuation. Click "Run & Ignore Breakpoints" to see what happens. We capture the continuation using callCC, which gives us the continuation as the function cont. We then log "captured continuation" and invoke cont. Note how "continuation called" is never logged. Why is that?

Now go back and click "Run" to hit the breakpoint on line 13, and continually click "Step" to step through the program to see what happened. What happens when cont is invoked?

It jumps back to line 3! The previous control flow is aborted and the stack when cont was captured is restored. Any arguments passed to continuations replace the call to callCC, as if callCC returned that value. The continuation represents a state of the program where the callCC function is waiting to return a value.

Note: While the interface for calling a continuation looks like calling a function, it's quite different. The call to a continuation never returns. While this may be confusing, we will look into alternative continuation interfaces in future posts. Also, continuations are a low-level interface and rarely used directly.

Continuations are like portals. If the image below represents your control flow, you can capture the current stack (the blue portal) and jump back to it at any time (the orange portals).

The control flow of a program. Jumping through the orange portals will reset the stack to where the blue portal was created.

However, just like in the game Portal, these portals do not traverse time. The only thing that a continuation saves is the stack, so any changes to variables will still be seen after jumping through a continuation. Look at what happens with a closure:

function foo() {  
  var x = 5;
  var func = function() { return x; };
  x = 6;
  return func;  
}

console.log(foo()())

This will print 6 because closures reference the same variable that is mutated later. With continuations, it's the same thing except stack frames are pointing to the variables. We will show examples of this later.
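
In the meantime, here is a quick sketch (using this article's callCC) of the same point with a continuation: jumping back through the portal restores the stack, not the old variable values.

var resume = null;
var x = 5;

var label = callCC(function(cont) {
  resume = cont;
  return "first pass";
});

console.log(label, "x =", x);   // first time through: "first pass", x = 5

if (x === 5) {
  x = 6;                        // mutate after the continuation was captured
  resume("second pass");        // jump back: logs "second pass", x = 6
}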

Practical Uses

Now that you understand the general idea, let's put continuations to good use. This certainly seems powerful, but you might have trouble thinking of problems this solves. In fact, you might be thinking that this would just make programs harder to follow.

Abusing continuations definitely makes programs harder to follow. But there are constructs that continuations allow you to build that are generally helpful; break and continue might make your program a little harder to follow, but they solve real problems, just like other control flow operators. Additionally, in a future post we will talk about delimited continuations which force developers to use continuations in a clearer way.

The first exercise is to implement the JavaScript some method, which checks if any element in an array passes a predicate check. Importantly, it is "short-circuiting," meaning it stops iterating after finding the first passing element because it doesn't need to check the rest.

function some(predicate, arr) {
  var x = callCC(function(cont) {
    for (var idx = 0; idx < arr.length; idx++) {
      console.log('testing', arr[idx]);
      if(predicate(arr[idx])) {
        cont(true);
      }
    }
    return false;
  });
  return x;
}

console.log(some(x => x >= 2, [1, 2, 3, 4]));
An implementation of some using continuations.

If you run this, you'll notice that it doesn't check 3 or 4. It stops executing once 2 passes the predicate check. Step through the code and watch how it does that.

Of course, we could use break to stop the loop. But this is a trivial example; it's common to call out to other functions within the loop where you can't break. The native control operators are quite limiting. Continuations, however, allow you to travel across stack frames.

For example, let's say you wanted to use forEach method instead of a for loop, because you already use that method everywhere else. Here is the example now:

// Note: don't use native forEach so you can step
// into this one
function forEach(arr, func) {
  for (var i=0; i < arr.length; i++) {
    func(arr[i]);
  }
}

function some(predicate, arr) {
  var x = callCC(function(cont) {
    forEach(arr, function(val) {
      console.log('testing', val);
      if(predicate(val)) {
        cont(true);
      }
    });
    return false;
  });
  return x;
}

console.log(some(x => x === 2, [1, 2, 3, 4]));
An implementation of some using continuations across stack frames.

It works exactly the same way, even if we are calling the predicate within the function passed to forEach. It still short-circuits. Note how we didn't have to change anything about forEach; we are able to reuse the same method that we already use everywhere else.

This highlights a fundamental difference of continuations and anything currently in JavaScript: it suspends the entire stack. Generators suspend code as well, but their yield is shallow. It only suspends one frame, the generator itself.
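
For comparison, a small sketch of what "shallow" means for plain ES6 generators: yield can only appear directly in the generator's own frame, so you cannot suspend from inside a helper like forEach the way some did with callCC above.

function* numbers(arr) {
  for (var i = 0; i < arr.length; i++) {
    yield arr[i];   // fine: this sits directly in the generator's frame
  }
  // arr.forEach(function(x) { yield x; });  // not possible: the callback is
  //                                         // an ordinary, non-generator function
}

var it = numbers([1, 2, 3]);
console.log(it.next().value); // 1
console.log(it.next().value); // 2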

While that makes code clearer, it leads to a proliferation of special syntax across all code and forces a lot of work on the developer over the lifetime of a project. Converting a single sync function to async requires a massive refactoring, changing the interface of every thing that uses it. I recommend reading "What Color is Your Function?" for a great description of this problem.

In the next post we will show how having a single function interface (no function* or async function) and deep stack control greatly improves the reusability and readability of code.

Exception Handling

Let's get real. The above exercises are pretty stupid. You wouldn't actually use continuations like that; there are much better constructs for looping over values and short-circuiting. Those examples were kept deliberately simple for illustration.

Now we will implement a new fundamental control construct: exceptions. This shows that continuations allow you to implement things previously built into the language.

Users should be able to throw exceptions and install handlers to catch them. Installed handlers are dynamically scoped for a given section of code: any exception that occurs within a given block of code, even if it comes from an external function, should be caught.

Exception handlers must exist as a stack: you can install new handlers that override existing ones for a given period of time, but the previous ones are always restored once the newer ones are "popped" off the stack. So we must manage a stack.

The stack is a list of continuations, because when a throw happens we need to be able to jump back to where the try was created. That means in Try we need to capture the current continuation, push it onto the stack, run the code, and dispatch exceptions. Here is the full implementation of try/catch:

var tryStack = [];

function Try(body, handler) {
  var ret = callCC(function(cont) {
    tryStack.push(cont);
    return body();
  });
  tryStack.pop();

  if(ret.__exc) {
    return handler(ret.__exc);
  }
  return ret;
}

function Throw(exc) {
  if(tryStack.length > 0) {
    tryStack[tryStack.length - 1]({ __exc: exc });
  }
  console.log("unhandled exception", exc);
}

The key here is that continuations can be resumed with values. The return body() line will return the final value of the code. At that point no continuation was invoked; it just passes that value through. But if Throw is invoked, it will call the captured continuation with an exception value, which gets assigned to ret, and we check for that type of value and call the handler. (We could do more sophisticated detection of exception types.)

Note that we pop the current handler off the stack before calling it, meaning that any exceptions that occur within exception handlers will properly be passed up the handler stack.

Here's what it looks like using Try/Catch:

function bar(x) {
  if(x < 0) {
    Throw(new Error("error!"));
  }
  return x * 2;
}

function foo(x) {
  return bar(x);
}

Try(
  function() {
    console.log(foo(1));
    console.log(foo(-1));
  },
  function(ex) {
    console.log("caught", ex);
  }
);

Unfortunately JavaScript does not allow us to extend syntax (although this can be solved with sweet.js macros, as we'll show in future posts). Instead of using blocks we must pass functions into Try. The output of this code would be 2 \n caught "error!".

The above implementation and example code are loaded into the editor below, with a breakpoint already set at the Try block. Hit "Run & Ignore Breakpoints" to verify the output, and "Run" to break and step through the code to see how it unfolds.

var tryStack = [];

function Try(body, handler) {
  var ret = callCC(function(cont) {
    tryStack.push(cont);
    return body();
  });

  console.log('ret is', JSON.stringify(ret));
  tryStack.pop();

  if(ret.__exc) {
    return handler(ret.__exc);
  }
  return ret;
}

function Throw(exc) {
  if(tryStack.length > 0) {
    tryStack[tryStack.length - 1]({ __exc: exc });
  }
  console.log("unhandled exception", exc);
}

// Example code:
function bar(x) {
  console.log('x is', x);
  if(x < 0) {
    Throw(new Error("error!"));
  }
  return x * 2;
}

function foo(x) {
  return bar(x);
}

Try(
  function() {
    console.log(foo(1));
    console.log(foo(-1));
  },
  function(ex) {
    console.log("caught", ex);
  }
);
When x is -1 in bar, it will throw an exception which will be handled by our handler. Step through the code to see.

There are far more complicated control constructs that you can implement using continuations, and we will look into many of them in a future post.

Calling from the Outside

So far we have always invoked the continuation inside the callCC call. That means we are only ever jumping up the stack, back to a previous stack frame.

There's a name for these kinds of continuations: escape continuations. These are more limited continuations that can only be called within the dynamic extent of the function passed to callCC (in this case it would be callWithEscapeContinuation, or callEC). A lot of things, like exceptions, can be implemented with only escape continuations.

The reason for the differentiation is performance. Escape continuations don't need to save the entire stack and they can assume that the stack frames at the point of the callEC call will always exist in memory whenever the continuation is invoked.

However, my implementation provides full continuations. This is where things really start getting mind-bending. In future posts, we will use this technique to implement features like coroutines, but it's worth looking at a simple example for now.

Within the callCC call, you can just return the continuation itself:

var value = callCC(cont => cont);

Initially, value will be the continuation itself, but we don't name it cont because when the continuation is later invoked, value will be whatever value it was invoked with. We can make this easily reusable by wrapping it in a function:

function currentContinuation() {
  return callCC(cont => cont);
}

Now we can do things like:

function currentContinuation() {
  return callCC(cont => cont);
}

function foo() {
  var value = currentContinuation();

  if(typeof value === "function") {
    console.log("got a continuation!");
    // Do some stuff
    var x = 5;
    value(x * 2);
  }
  else {
    console.log("computation finished", value);
  }
}

foo();
We "fork" the control flow depending on if we got the continuation or a normal value.

This is really powerful because it shows that we can invoke a continuation from any point in time, and it all works.

The above example is trivial, so in the spirit of attempting to show more value, here is a more complex example. This implements a very basic form of a coroutine that can pause itself and be resumed with a value.

function currentContinuation() {
  return callCC(cont => ({ __cont: cont }));
}

function pause() {
  var value = currentContinuation();

  if(value.__cont) {
    throw value;
  }
  else {
    return value;
  }
}

function run(func) {
  try {
    return func();
  }
  catch(e) {
    if(!e.__cont) {
      throw e;
    }

    var exc = e;
    return {
      send: function(value) {
        exc.__cont(value);
      }
    };
  }
}

When a coroutine calls pause, the continuation is thrown, the scheduler catches it, and returns an object that gives the caller the ability to resume it. A very simple program that uses this:

function foo() {
  var x = pause();
  return x * 2;
}

var process = run(foo);
if(process.send) {
  process.send(10);
}
else {
  console.log(process);
}

The check for process.send is needed because our implementation is very naive. It saved the full continuation, which includes the top-level stack at the point when run is called. That means when the process is resumed, the top-level control is restored as well and we will see run return again.

Challenge: implement a version where process.send returns the final value instead of forcing the user to handle the return from run multiple times.

Here is the full program in an editor that lets you step through:

function currentContinuation() {
  return callCC(cont => ({ __cont: cont }));
}

function pause() {
  var value = currentContinuation();

  if(value.__cont) {
    throw value;
  }
  else {
    return value;
  }
}

function run(func) {
  try {
    return func();
  }
  catch(e) {
    if(!e.__cont) {
      throw e;
    }

    var exc = e;
    return {
      send: function(value) {
        exc.__cont(value);
      }
    };
  }
}

function foo() {
  var x = pause();
  return x * 2;
}

var process = run(foo);
if(process.send) {
  process.send(10);
}
else {
  console.log(process);
}
The full program with a breakpoint already set. Click "Run" to step through it.

In future posts, we will look at more robust techniques for implementing coroutines with continuations.

Closing Over Data

It's very important to understand that continuations only save the call stack, not any of the data that stack frames may reference. Restoring a continuation does not restore any of the variables that those stack frames use. In this way, think of each stack frame as a closure that simply references those variables, and any external changes will still be seen.

This can be confusing at first, so here is a simple illustration:

function foo() {
  var x = 5;

  callCC(function(cont) {
    x = 6;
    cont();
  });

  console.log(x);
}

foo();
This will log 6 because the change to x is still seen after the continuation is restored. Capturing does not save the value of x.

It doesn't matter when the continuation is invoked. If we saved the continuation for later, changed some local variables, and returned from the function, when the continuation is invoked it will still see all the local variable changes. A continuation closes over its data.

Note: There may be bugs in my continuation implementation where this is not the case. In my implementation, I have to go out of my way to ensure that data is closed over instead of copied. If changes are not seen from continuations invoked later, that's a bug.

The Backstory

There's a long history here, but I'll keep it short:

  • In 2011 I worked on an in-browser game editor and I wanted the ability to interactively debug code.

  • Around this time I implemented my own Scheme-inspired language, Outlet, and tried to make it debuggable. I did it with a continuation-passing-style (CPS) transformation, effectively implementing continuations, but this forced me to re-implement stacks and scopes. It was very slow (can't compete with native JS stacks & scopes). I blogged the details here:

  • In an attempt to use native JS function scoping, I thought about abusing generators to suspend functions. While I still needed to re-implement the stack, at least variables are native and the implementation is much simpler (generators were just landing in JS engines). I called it YPS and it works by yielding every single expression and running in a special machine. It was horrendously slow.

    • Later @amasad ended up doing exactly this and writing an awesome post about it.
  • In response to my generator-based suspension idea, @msimoni pointed me to the paper "Exceptional Continuations in JavaScript". I realized that what I wanted required all the machinery of continuations, and that paper outlines a technique to implement them without much run-time performance cost. Although capturing continuations is slow, all other code has a minimal perf hit.

That paper describes a really neat trick to implement continuations, which gives me the power to arbitrarily jump around code. Unfortunately it requires a sophisticated transformation, but right around this time regenerator came out which implemented a similar transformation! I forked regenerator, implemented continuations, got a stepping debugger working, and then realized that I could expose continuations first-class and be able to do all the things I'm about to show you. (That was around 2 years ago. This project sat on my computer for that time until I resurrected it a few weeks ago.)

Next: The Implementation Details

I was going to explore the implementation details in this post, but it's already so dense that I am pulling this out into a separate post. Check out the next post if you are interested in details!

Explore!

I think this could be a fun playground for experimenting with various advanced control operators. I'm also pretty proud that I was able to get an in-browser stepping debugger working for interactive tutorials.

I will go into more advanced usages of continuations, particularly delimited continuations, in future posts.

If you are interested in this, check out unwinder!


Robert KaiserTools I Wrote for Crash (Stats) Analysis

Now that I'm off the job that dominated my life (and almost burned me out) for the last few years, I finally have some time again to blog. And I'll start with stuff I actually did for that job, as I'm still happy to help others continue from where I left off.

The more fun part of the stability management job was actually creating new analyses - and tools. And those tools are still helpful to people working on crash analysis or crash stats analysis now - so as my last task on the job, I wrote some documentation for the tools I had created.

One of the first things I created (and which was part of the original job description when I started) was a prototype for detecting crash "explosiveness", i.e. a detector for crashes that are rising significantly in volume. This turned out to be quite helpful for me and others to use, and the newest reports of it are listed in my Report Overview. I probably should talk about it in more detail at some point, but I did write up a plan on the wiki for the tool, and the (PHP) code is on hg.m.o (that was the language I knew best and gave me the fastest result for a prototype). I had plans to port/rewrite it in python, but didn't get to it. Calixte, who is looking after most of "my" tools now, is working on that though, and I have already promised to review his work as a volunteer so we can make sure we have this helpful capability in better code (and hopefully better UI in the end) for future use.

In general, I have created one-line docs for all the PHP scripts I had in the Mercurial repository, and put them into the run-reports script that is called by a daily cron job. Outside of the explosiveness script, most of those have been obsoleted by Socorro Super Search (yay for Adrian's work and for the ElasticSearch backend!) nowadays.

Also, the scripts that generate the summed-up data for Are We Stable Yet dashboard and graphs (also see an older blog post discussing the graphs) have been ported to python (thanks Peter for helping me to get started there) - and those are available in the Magdalena repository on GitHub. You'll see that this repository doesn't just have more modern code, using python instead of PHP and the public Socorro API instead of private PostgreSQL access, it also has a decent README documenting what it and every script in it does. :)

The most important tools for people analyzing crash stats are in the Datil repository on GitHub (and its deployment on crash-analysis), though. I used all four of those dashboards/tools daily in the last months to determine what to report to Release Managers and other parties, and to find out what we needed to file as bugs and/or push to get fixed. Datil, like Magdalena, has good docs right in the repository now, readable directly on GitHub.

So, what's there?
Well, the aforementioned "Are We Stable Yet" dashboard and graphs, for sure (see the longtermgraph docs for what graphs you can get and a legend of what the lines mean).
There's also a tool/prototype for "what's important" weighed top crash lists that I called "Top Crash Score", see the score docs for what it does and examples on how to use that tool.
And finally, I created a search query comparison tool that let me answer questions like "which crashes happen more with or without multi-process support (e10s) being active?" or "which crashes have vanished with the new beta and which have appeared (instead)?" - which was incredibly helpful, to me at least. Read the searchcompare docs for more details and examples.

I probably won't spend a lot of time with those tools any more, neither in usage nor in development, but I'm still happy to see people using them and giving me feedback, and I'm also happy to review and merge pull requests that make sense to me!

Varun JoshiMultifile Responses in Balrog

Apart from Firefox, Balrog is also used by Mozilla to provide updates for the Gecko Media Plugin (GMP) package. The Gecko Media Plugin package contains various plugins for media support, like the OpenH264 codec and the Widevine plugin. To handle updates to these, we have a special GMP blob that lists updates to the plugins. Updates to every plugin are included in one blob. This leads to problems when there are multiple versions of a plugin that we can use. For example, we might serve OpenH264 version 1.5.3 on Firefox 42 on Windows and version 1.5.2 on Firefox 40. We have to maintain a blob for each possible combination of versions. With an increase in the number of versions available, this method of serving updates might become intractable.

Ben and I discussed various strategies we could use to tackle this. A gist of what went through our minds is on the bug page. What we eventually went ahead with was to add a blob type that got its contents from other blobs. We call this blob type SuperBlob.

SuperBlob

A SuperBlob is basically just a redirection mechanism. It contains the names of the products that we wish to include in the generated XML:

 {
    "name": "fake",
    "schema_version": 4000,
    "products": [
        "c",
        "d"
    ]
}

This superblob is called fake. It just asks to look at the products named c and d. A rule can be set to point at this SuperBlob if we wish the response product to have all the files listed in products c and d.

How does this work?

The web interface, while processing requests, checks if the rule evaluates to a SuperBlob. If it does, it gets the product names from the SuperBlob and the corresponding blobs by evaluating the query with the requested product changed. So, in our example, the web interface will evaluate the query with the product name changed to c and d and obtain the resultant blobs. It will pick up the header and footer XML from the output of processing the blob obtained from the first product, and the inner XML will be the concatenation of the inner XMLs of all the blobs obtained from the products listed in the SuperBlob.

So, if product a gave:

<updates>
    <update type="minor" version="None" extensionVersion="2.5" buildID="25">
        <patch type="complete" URL="http://a.com/b" hashFunction="sha512" hashValue="23" size="27777777"/>
    </update>
</updates>

and product b gave:

<updates>
    <update type="minor" version="None" extensionVersion="2.5" buildID="25">
        <patch type="complete" URL="http://a.com/public" hashFunction="sha512" hashValue="23" size="22"/>
    </update>
</updates>

the SuperBlob will give:

<updates>
    <update type="minor" version="None" extensionVersion="2.5" buildID="25">
        <patch type="complete" URL="http://a.com/public" hashFunction="sha512" hashValue="23" size="22"/>
        <patch type="complete" URL="http://a.com/b" hashFunction="sha512" hashValue="23" size="27777777"/>
    </update>
</updates>

So, we can now have one fixed rule for all GMP responses and have several rules for each constituent plugin without having to worry about the combinations like we had to do earlier.

While working on this, I also removed the createXML method in favour of three methods that return the header, the inner XML and the footer respectively. This helped in separating the various components of the XML output without having to parse it. The XML generation logic has moved to the client view.
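To make the assembly step a bit more concrete, here is a rough Python sketch of the idea (illustrative only, not Balrog's actual code; the function and method names below are made up):

def superblob_response(superblob, evaluate_query, query):
    # evaluate_query(query) is assumed to return the blob that the
    # normal (non-SuperBlob) rules would serve for that query.
    blobs = []
    for product in superblob["products"]:
        product_query = dict(query, product=product)
        blobs.append(evaluate_query(product_query))

    # Header and footer come from the first product's blob; the inner
    # XML is the concatenation of every blob's inner XML.
    header = blobs[0].get_header_xml()
    footer = blobs[0].get_footer_xml()
    inner = "\n".join(blob.get_inner_xml() for blob in blobs)
    return "\n".join([header, inner, footer])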

Tarek ZiadéConsistent Load Balancing

This is an interesting problem to look at. Let's say you have 1 million users and 5 storage servers. You are building an application that needs to pick one of the 5 storage servers depending on the user id.

You want your app to do it as consistently as possible. e.g. a given user should always land on the same backend.

When a request comes in:

  • each request is identified by a unique user id.
  • the app should always call the same storage server given that id.
  • you want all your users to be distributed equally across your storage servers.
  • when a storage server is removed from the list, you want users to stick with the servers they initially got. For users on the removed server, they should be dispatched equally on other servers.
  • when a server is added, you want the minimal numbers of users to be moved around.
  • The application is stateless about this, so when I deploy a new node and give it the list of the storage servers, it should be able to start distributing users among them without prior knowledge.

Points 4 and 6 rule out a simple round-robin solution.

The solution to that problem is to build a deterministic function that projects a user id into the space composed of the servers. (yeah well, clustering I guess.)

There are two well-known algorithms to do that: consistent hashing and rendezvous hashing.

Consistent Hashing

Consistent Hashing is a hashing scheme that can be used to minimize the shuffling of users when a server is removed or added.

This is how it's implemented:

  • each server name is converted into a unique number
  • that number is projected on a modulo interval (a circle)
  • every user is also converted into a unique number and projected on the circle
  • the server that's the closest to the user is picked

If you want a nice drawing, go here.

This is an elegant solution because removing a server keeps the rest stable, and adding one server shuffles a minimal number of users.

The conversion from a name to an integer is key here: you have to be deterministic, but at the same time try to have the numbers randomly and kind-of-evenly distributed on the circle.

Here's how you can do it using MD5:

import hashlib

def hash(key):
    return long(hashlib.md5(key).hexdigest(), 16)

Using a classical hash like MD5 gives us the random part, but depending on the server names you might well end up with two servers that are very close to each other on the circle.

And the result will be that when the users are converted into numbers, only a very small number of users will go to some servers.

One solution to fix that is to add replicas: for each server, instead of projecting a single number on the circle, we'll project 100. For example, "server1" becomes "server1:1", "server1:2", .., "server1:100" and those values are transformed into numbers.

Using replicas is very efficient to make sure users are spread evenly.
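To make the circle and the replicas concrete, here is a small, illustrative Python sketch of consistent hashing (this is not the implementation from the gist linked later in the post; the class and its details are my own):

import bisect
import hashlib

def hash_key(key):
    # Same idea as the hash() above: turn a name into a big number.
    return int(hashlib.md5(key.encode("utf-8")).hexdigest(), 16)

class ConsistentHashRing(object):
    def __init__(self, servers, replicas=100):
        self.replicas = replicas
        self.ring = []   # sorted list of (point, server) tuples
        for server in servers:
            self.add(server)

    def add(self, server):
        # Project `replicas` points for this server onto the circle.
        for i in range(self.replicas):
            point = hash_key("%s:%d" % (server, i))
            bisect.insort(self.ring, (point, server))

    def remove(self, server):
        self.ring = [(p, s) for (p, s) in self.ring if s != server]

    def get_server(self, user):
        # Pick the first server point "clockwise" from the user's point,
        # wrapping around the circle if needed.
        point = hash_key(user)
        index = bisect.bisect(self.ring, (point,)) % len(self.ring)
        return self.ring[index][1]

ring = ConsistentHashRing(["server1", "server2", "server3", "server4", "server5"])
print(ring.get_server("user-12345"))

Removing a server from the ring only moves the users that were mapped to that server's points; everyone else stays where they were.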

RendezVous

The other algorithm is called RendezVous and is based on a similar idea where servers are converted into numbers with a hash.

The difference is that instead of projecting servers and their replicas on a circle, the algorithm uses weights. To find which server a user should use, for each combination of server and user, a number is created with a classical hash function.

The server that's picked is the one with the highest number.

The Python code looks like this:

def get_server(user):
    high_score = -1
    winner = None

    for server in servers:
        score = hash(server + user)
        if score > high_score:
            high_score, winner = score, server
        elif score == high_score:
            high_score, winner = score, max(server, winner)

    return winner

The advantage of this method is that you don't have to create replicas to worry about distribution. In fact, according to my tests, RendezVous is doing a better job than Consistent Hashing for distributing users.

One key decision is to decide which hashing algorithm you want to use.

It's all about the hashing!

Both RendezVous and Consistent hashing are using a classical hashing function to convert the servers and users into numbers - and picking one was not obvious.

I came across this amazing stackexchange post that sheds some light on different hashing functions, their strengths and weaknesses. You should read it; you will learn a lot.

The take away for my experiment was that some hashing functions are doing a better job at randomizing values in the hash space. Some very simple functions are also colliding quite often, which can be a problem.

So I tried a bunch of them and benchmarked them.
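The core of such a benchmark is simply checking how evenly simulated users end up spread across the servers for a given algorithm and hash combination. A simplified sketch of that kind of check (illustrative only; the placement function passed in is a placeholder, e.g. one of the get_server implementations above):

import collections

def distribution(get_server, n_users=100000):
    # Count how many simulated users land on each server for a given
    # placement function.
    counts = collections.Counter()
    for i in range(n_users):
        counts[get_server("user-%d" % i)] += 1
    return counts

# The spread of the counts (and how much it changes when a server is
# added or removed) is what you compare across hash functions.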

It's interesting to note that I had much better results for my use case using RendezVous & sha256 than RendezVous & Murmur, when the latter is usually what people use with RendezVous.

I ended up removing Murmur from my tests; the results were too bad.

Anyways, here's the full implementation I did, based on snippets I found here and there, and the result:

The gist

And the winner is: RendezVous and sha256

Of course, that entirely depends on how many servers & users you have.

Chris CooperRelEng & RelOps Weekly highlights - May 16, 2016

REQUEST FOR URGENT BUSINESS RELATIONSHIP

FIRST, I MUST SOLICIT YOUR STRICTEST CONFIDENCE IN THIS MIGRATION. THIS IS BY VIRTUE OF ITS NATURE AS BEING UTTERLY CONFIDENTIAL AND ‘TOP SECRET’. I AM SURE AND HAVE CONFIDENCE OF YOUR ABILITY AND RELIABILITY TO HANDLE A TASK OF THIS GREAT MAGNITUDE INVOLVING A PENDING MIGRATION REQUIRING MAXIMUM CONFIDENCE.

WE ARE TOP OFFICIAL OF THE RELENG TEAM WHO ARE INTERESTED IN MIGRATION OF TASKS INTO TASKCLUSTER WITH JOBS WHICH ARE PRESENTLY TRAPPED IN BUILDBOT. IN ORDER TO COMMENCE THIS BUSINESS WE SOLICIT YOUR ASSISTANCE TO ENABLE US MIGRATE INTO YOUR TASKGRAPH THE SAID TRAPPED JOBS.

<marked as spam><deleted>

Modernize infrastructure:

The ongoing work to get TaskCluster building Firefox on Windows reached an important milestone with our first Mozharness-based build going green in the TreeHerder dashboard. This represents underlying effort in documenting the dependency chain for Windows builders and producing simple manifests that give greater transparency to changes in this area.

Alin, Amy, and Van brought 192 new OS X 10.10.5 Mac minis online. This should eliminate our Yosemite backlog and allow us to enable more e10s tests.

Q, catlee, and jmaher migrated the first batch of Windows 7 tests to AWS this week. Currently we’re running these suites for all branches of gecko 49 and higher: Web platform tests + reftests, gtest, cppunit, jittest, jsreftest, crashtest. This will reduce our reliance on hardware and allow us to scale dynamically. They are still working on greening larger sets of tests which are more sensitive to their operating environment. Once we have moved a significant portion of tests, we can add additional e10s tests on w7 as well.

Improve Release Pipeline:

In the interest of making progress on migrating Nightly builds to TaskCluster, we had a meeting last week to discuss the security requirements around our nightly release process. Based on the discussions in that meeting, Aki is now iterating on a “one graph” solution for Nightlies (as opposed to a two-graph approach where signing is separate). If this approach works, i.e. we can’t find major security holes in the proposed model, it will simplify our process greatly.

Improve CI Pipeline:

We have achieved our first deprecation milestone in the TaskCluster migration by turning off Linux 64-bit debug builds/tests for aurora/trunk branches in buildbot. These builds are now only generated in TaskCluster.

Aki added scriptworker to readthedocs: http://scriptworker.readthedocs.io/en/latest/

Operational:

We had a planned tree closing window over the weekend where, amongst other things, we upgraded our database hosts. It went smoothly, so you probably didn’t even notice.

Our new buildduty associates from Taipei were able to successfully loan machines to developers for the first time last week. Welcome aboard!

Release:

Lots of releases published or in-flight: Firefox 47 betas 3-5, Fennec 47.0b4, and Thunderbird 38.8.0 and 45.1.0. As always, you can find more details in our post-mortem wiki pages:

See you next week, freelance writer!

Giorgos LogiotatidisTakis - A util that blocks until a port is open.

Over at Mozilla's Engagement Engineering we use Docker to ship our websites. We build the docker images in CI and then we run tests against them. Our tests usually need a database or a cache server, which you can get running with a single command:

docker run -d mariadb

The problem is that this container will take some time to initialize and become available to accept connections. Depending on what you test and how you run your tests, this delay can cause test failures due to database connection timeouts.

We used to delay running our tests with the sleep command, but that - besides being an ugly hack - will not always work. For example, you may set the sleep timeout to 10 seconds and, due to CI server load, database initialization may take 11 seconds. And nobody wants a non-deterministic test suite.

Meet Takis. Takis checks once per second if a host:port is open. Once it's open, it just returns. It blocks the execution of your pipeline until services become available. No messages or other output to get in the way of your build logs. No complicated configuration either: It reads CHECK_PORT and optionally CHECK_HOST environment variables, waits and eventually returns.
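The behaviour described above is easy to picture with a few lines of Python (this is just an illustrative sketch, not Takis itself; the localhost default for CHECK_HOST is an assumption):

import os
import socket
import time

def wait_for_port(host, port, interval=1):
    # Block until a TCP connection to host:port succeeds,
    # checking roughly once per second.
    while True:
        try:
            socket.create_connection((host, port), timeout=interval).close()
            return
        except OSError:
            time.sleep(interval)

if __name__ == "__main__":
    # Read the same environment variables Takis uses.
    wait_for_port(os.environ.get("CHECK_HOST", "localhost"),
                  int(os.environ["CHECK_PORT"]))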

Takis is built using Go and is fully statically linked, as Adriaan explains in this intriguing read. You can download it and use it directly in your scripts:

~$ wget https://github.com/glogiotatidis/takis/raw/master/bin/takis
~$ chmod +x takis
~$ CHECK_PORT=3306 ./takis

or use its super small Docker image:

docker run -e CHECK_PORT=3306 -e CHECK_HOST=database.example.com giorgos/takis

For example here's how we use it to build Snippets Service in TravisCI:

script:
  - docker run -d --name mariadb -e MYSQL_ALLOW_EMPTY_PASSWORD=yes -e MYSQL_DATABASE=snippets mariadb:10.0
  # Wait mariadb to initialize.
  - docker run --link mariadb:db -e CHECK_PORT=3306 -e CHECK_HOST=db giorgos/takis
  - docker run --env-file .env --link mariadb:db mozorg/snippets:latest coverage run ./manage.py test

My colleague Paul also built urlwait, a Python utility and library with similar functionality that can be nicely added to your docker-compose workflow to fight the same problem. Neat!

Daniel Stenbergcurl user poll 2016

It is time for our annual survey on how you use curl and libcurl. Your chance to tell us how you think we’ve done and what we should do next. The survey will close on midnight (central European time) May 27th, 2016.

If you use curl or libcurl from time to time, please consider helping us out with providing your feedback and opinions on a few things:

http://goo.gl/forms/e4CoSDEKde

It’ll take you a couple of minutes and it’ll help us a lot when making decisions going forward. Thanks a lot!

The poll is hosted by Google and that short link above will take you to:

https://docs.google.com/forms/d/1JftlLZoOZLHRZ_UqigzUDD0AKrTBZqPMpnyOdF2UDic/viewform

This Week In RustThis Week in Rust 130

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us an email! Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

This week's edition was edited by: Vikrant and llogiq.

Updates from Rust Community

News & Blog Posts

New Crates & Project Updates

  • Announcing RustFest Berlin 2016 (17-18 September 2016).
  • hydrogen. Multithreaded, non-blocking Linux server framework in Rust.
  • cargo-profiler. Cargo subcommand to profile binaries.
  • cargo clippy subcommand for convenient production of annoying warnings.
  • rfmt. Rust source code formatter.
  • RustDT 0.6.0 is out with improved build target support and experimental auto-check.

Crate of the Week

This week's Crate of the Week is cargo-profiler, which lets us profile our code directly from cargo using a variety of tools, notably valgrind's callgrind and cachegrind. Thanks to kbknapp for the suggestion!

Submit your suggestions for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

132 pull requests were merged in the last two weeks.

New Contributors

  • billyevans
  • bnewbold
  • Brian Green
  • Cristian Oliveira
  • Dan Fockler
  • Geordon Worley
  • Haiko Schol
  • mrmiywj
  • Pavel Sountsov
  • silvo38
  • Stefan Schindler
  • Steven Burns

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

No RFCs are currently in final comment period.

New RFCs

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email Erick Tryzelaar or Brian Anderson for access.

fn work(on: RustProject) -> Money

No jobs listed for this week.

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

No quote was selected for QotW.

Submit your quotes for next week!

Karl Dubost[worklog] Make Web sites simpler.

Not a song this week, but just a documentary to remind me that some sites are overly complicated and there are strong benefits and resilience in choosing a solid, simple framework for working. Not that it makes the work easier. I think it's even the opposite: it's basically harder to make a solid, simple Web site. But the cost is beneficial in the long term. Tune of the week: The Depth of simplicity in Ozu's movie.

Webcompat Life

Progress this week:

Today: 2016-05-16T10:12:01.879159
354 open issues
----------------------
needsinfo       3
needsdiagnosis  109
needscontact    30
contactready    55
sitewait        142
----------------------

In my journey of getting the contactready and needscontact numbers lower, we are making progress. You are welcome to participate.

London agenda.

Reorganizing a bit the wiki so it better aligns with our current work. In Progress.

Good news on the front of appearance in CSS.

The CSSWG just resolved that "appearance: none" should turn checkbox & radio <input> elements into a normal non-replaced element.

Learning how to do mozregression.

We are looking at creating a mechanism similar to Opera's browser.js in Firefox. Read and participate in the discussion.

Webcompat issues

(a selection of some of the bugs worked on this week).

Reading List

Follow Your Nose

TODO

  • Document how to write tests on webcompat.com using test fixtures.
  • ToWrite: rounding numbers in CSS for width
  • ToWrite: Amazon prefetching resources with <object> for Firefox only.

Otsukare!

Nick DesaulniersWhat's in a Word?

Recently, there was some confusion between myself and a coworker over the definition of a “word.” I’m currently working on a blog post about data alignment and figured it would be good to clarify some things now that we can refer to later.

Having studied computer engineering and being quite fond of processor design, when I think of a “word,” I think of the number of bits wide a processor’s general purpose registers are (aka word size). This places hard requirements on the largest representable number and address space. A 64 bit processor can represent 2^64-1 (about 1.8x10^19) as the largest unsigned long integer, and address up to 2^64-1 (16 EiB) different addresses in memory.

Further, word size limits the possible combinations of operations the processor can perform, length of immediate values used, inflates the size of binary files and memory needed to store pointers, and puts pressure on instruction caches.

Word size also has implications on loads and stores based on alignment, as we’ll see in a follow up post.

When I think of 8 bit computers, I think of my first microcontroller: an Arduino with an Atmel AVR processor. When I think of 16 bit computers, I think of my first game console, a Super Nintendo with a Ricoh 5A22. When I think of 32 bit computers, I think of my first desktop with Intel’s Pentium III. And when I think of 64 bit computers, I think modern smartphones with ARMv8 instruction sets. When someone mentions a particular word size, what are the machines that come to mind for you?

So to me, when someone’s talking about a 64b processor, to that machine (and me) a word is 64b. When we’re referring to an 8b processor, a word is 8b.

Now, some confusion.

Back in my previous blog posts about x86-64 assembly, JITs, or debugging, you might have seen me use instructions that have suffixes of b for byte (8b), w for word (16b), dw for double word (32b), and qw for quad word (64b) (since SSE2 there’s also double quadwords of 128b).

Wait a minute! How can a “word” suddenly refer to 16b on a 64b processor, as opposed to a 64b “word”?

In short, historical baggage. Intel’s first hit processor was the 4004, a 4b processor released in 1971. It wasn’t until 1979 that Intel created the 16b 8086 processor.

The 8086 was created to compete with other 16b processors that beat it to the market, like the Zilog Z80 (any Gameboy emulator fans out there? Yes, I know about the Sharp LR35902). The 8086 was the first design in the x86 family, and it allowed for the same assembly syntax from the earlier 8008, 8080, and 8085 to be reassembled for it. The 8086’s little brother (8088) would be used in IBM’s PC, and the rest is history. x86 would become one of the most successful ISAs in history.

For backwards compatibility, it seems that both Microsoft’s (whose success has tracked that of x86 since MS-DOS and IBM’s PC) and Intel’s documentation refers to words still as being 16b. This allowed 16b PE32+ executables to be run on 32b or even 64b newer versions of Windows, without requiring recompilation of source or source code modification.

This isn’t necessarily wrong to refer to a word based on backwards compatibility, it’s just important to understand the context in which the term “word” is being used, and that there might be some confusion if you have a background with x86 assembly, Windows API programming, or processor design.

So the next time someone asks: why does Intel’s documentation commonly refer to a “word” as 16b, you can tell them that the x86 and x86-64 ISAs have maintained the notion of a word being 16b since the first x86 processor, the 8086, which was a 16b processor.

Side Note: for an excellent historical perspective programming early x86 chips, I recommend Michael Abrash’s Graphics Programming Black Book. For instance he talks about 8086’s little brother, the 8088, being a 16b chip but only having an 8b bus with which to access memory. This caused a mysterious “cycle eater” to prevent fast access to 16b variables, though they were the processor’s natural size. Michael also alludes to alignment issues we’ll see in a follow up post.

Mark CôtéBMO's database takes a leap forward

For historical reasons (or “hysterical raisins” as gps says) that elude me, the BMO database has been in (ughhh) Pacific Time since it was first created. This caused some weirdness on every daylight savings time switch (particularly in the fall when 2:00-3:00 am technically occurs twice), but not enough to justify the work in fixing it (it’s been this way for close to two decades, so that means lots of implicit assumptions in the code).

However, we’re planning to move BMO to AWS at some point, and their standard db solution (RDS) only supports UTC. Thus we finally had the excuse to do the work, and, after a bunch of planning, developing, and reviewing, the migration happened yesterday without issues. I am unreasonably excited by this and proud to have witnessed the correction of this egregious violation of standard db principles 18 years after BMO was originally deployed.

Thanks to the BMO team and the DBAs!

The Servo BlogThis Week In Servo 63

In the last week, we landed 156 PRs in the Servo organization’s repositories.

Emily Dunham has been working on an org-wide report of the contributor metrics. It’s looking good already, but there is still a lot of fun python work to do on the scripts, for those who want to make sure we’re not counting things incorrectly!

Planning and Status

Our overall roadmap and quarterly goals are available online.

This week’s status updates are here.

Notable Additions

  • nox fixed all the warnings on Windows
  • mbrubeck added support for vertical-align in geckolib
  • paul made Servo no longer always quit when the Escape key is pressed
  • larsberg added initial builder support for more “zealous” SpiderMonkey GC configurations
  • jack improved our build time by adding ccache support to skia and azure
  • larsberg enabled caching on Windows builds, taking them down from 50 minutes to 15
  • kaksmet made text-align: justify safe for incremental layout
  • ms2ger added support for Gecko’s string atoms in rust-selectors
  • ddefisher implemented several WebGL functions
  • pcwalton improved window resize performance on MacOS
  • rzambre implemented the ability to log profiling output to a CSV file
  • shinglyu added support for *-reverse flex directions for flexbox
  • mmatyas enabled the use of NEON instructions on ARM and AArch64
  • dzbarsky implemented the readPixels WebGL API
  • asajeffrey made the WebDriver server wait for pages to load before continuing
  • jdm fixed a garbage collection hazard
  • connorgbrewster merged the redundant-yet-awkwardly-intertwined Page and BrowsingContext types
  • kichjang implemented text/plain form encoding
  • broesamle fixed a panic on reddit

New Contributors

Get Involved

Interested in helping build a web browser? Take a look at our curated list of issues that are good for new contributors!

Screenshot

A trace of loading CNN.com’s 356 web fonts using the snazzy new HTML-based profiler:

Profiler HTML output

Servo’s new application icon:

Servo application in the OS X dock

The Rust Programming Language BlogOne year of Rust

Rust is a language that gives you:

It’s a language for writing highly reliable, screamingly fast software—and having fun doing it.

And yesterday, Rust turned one year old.

Rust in numbers

A lot has happened in the last 365 days:

  • 11,894 commits by 702 contributors added to the core repository;
  • 88 RFCs merged;
  • 18 compiler targets introduced;
  • 9 releases shipped;
  • 1 year of stability delivered.

On an average week this year, the Rust community merged two RFCs and published 53 brand new crates. Not a single day went by without at least one new Rust library hitting the central package manager. And Rust topped the “most loved language” in this year’s StackOverflow survey.

Speaking of numbers: we recently launched a survey of our own, and want to hear from you whether you are an old hat at Rust, or have never used it.

One place where our numbers are not where we want them to be: community diversity. We’ve had ongoing local outreach efforts, but the Rust community team will soon be launching a coordinated, global effort following the Bridge model (e.g. RailsBridge). If you want to get involved, or have other ideas for outreach, please let the community team know.

Rust in production

This year saw more companies betting on Rust. Each one has a story, but two particularly resonated.

First, there’s Dropbox. For the last several years, the company has been secretively working on a move away from AWS and onto its own infrastructure. The move, which is now complete, included developing custom-built hardware and the software to drive it. While much of Dropbox’s back-end infrastructure is historically written in Go, for some key components the memory footprint and lack of control stood in the way of achieving the server utilization they were striving for. They rewrote those components in Rust. In the words of Jamie Turner, a lead engineer for the project, “the advantages of Rust are many: really powerful abstractions, no null, no segfaults, no leaks, yet C-like performance and control over memory.”

Second, there’s Mozilla. They’ve long been developing Servo as a research browser engine in Rust, but their first production Rust code shipped through a different vehicle: Firefox. In Firefox 45, without any fanfare, Rust code for mp4 metadata parsing went out to OSX and 64-bit Linux users; it will hit Windows in version 48. The code is currently running in test mode, with its results compared against the legacy C++ library: 100% correctness on 1 billion reported executions. But this code is just the tip of the iceberg: after laying a lot of groundwork for Rust integration, Firefox is poised to bring in significant amounts of new Rust code, including components from Servo—and not just in test mode.

We’re hearing similar stories from a range of other shops that are putting Rust into production: Rust helps a team punch above its weight. It gives many of the same benefits as traditional systems languages while being more approachable, safer and often more productive.

These are just a few stories of Rust in production, but we’d love to hear yours!

Rust, improved

Of course, Rust itself hasn’t been standing still. The focus in its first year has been growing and polishing its ecosystem and tooling:

There’s a lot more to say about what’s happened and what’s coming up in the Rust world—over the coming months, we’ll be using this blog to say it.

Rust in community

It turns out that people like to get together and talk Rust. We had a sold out RustCamp last August, and several upcoming events in 2016:

  • September 9-10, 2016: the first RustConf in Portland, OR, USA;
  • September 17, 2016: RustFest, the European community conference, in Berlin, Germany;
  • October 27-28, 2016: Rust Belt Rust, a Rust conference in Pittsburgh, PA, USA;
  • 71 Rust-related meetup groups worldwide.

And that’s no surprise. From a personal perspective, the best part about working with Rust is its community. It’s hard to explain quite what it’s like to be part of this group, but two things stand out. First, its sheer energy: so much happens in any given week that This Week in Rust is a vital resource for anyone hoping to keep up. Second, its welcoming spirit. Rust’s core message is one of empowerment—you can fearlessly write safe, low-level systems code—and that’s reflected in the community. We’re all here to learn how to be better programmers, and support each other in doing so.

There’s never been a better time to get started with Rust, whether through attending a local meetup, saying hello in the users forum, watching a talk, or reading the book. No matter how you find your way in, we’ll be glad to have you.

Happy birthday, Rust!

Andreas TolfsenEvaluating scripts using Marionette

One roadblock using Marionette as the backend for WebDriver automation has been its lacking execute script primitives: It has been impossible to modify the DOM as all scripts were evaluated in sandboxes.

As of last week, I finally managed to land a patch that makes evaluated scripts have lasting side-effects in the document. This means that scripts such as window.foo = 42 or document.documentElement.style.background = "pink" will take effect on the DOM. This is a requirement for WebDriver conformance, and a major inhibitor for people trying to replace FirefoxDriver with Marionette in their automation.
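As a rough illustration, here is how this could look from the Marionette Python client (a sketch only; the exact import path, default port and the sandbox parameter below are assumptions and may differ in your setup):

from marionette_driver.marionette import Marionette

# Connect to a Firefox instance started with the -marionette flag.
client = Marionette(host="localhost", port=2828)
client.start_session()

# Requesting no sandbox is assumed to trigger the new behaviour,
# so the script's side effects persist in the document.
client.execute_script('document.documentElement.style.background = "pink";',
                      sandbox=None)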

The patch also refactors the way we do script evaluation in Marionette. Six distinct and different script evaluation implementations have been merged into one, and sandbox management, which is still needed for our Gecko specific automation, is more reliable.

Triggering the lasting side-effects behaviour happens through the absence of a "sandboxName" parameter on the JSON object passed to the GeckoDriver#executeScript function. This lets us retain backwards compatibility for the sandboxed script behaviour for the Mozilla-specific Marionette Python client. However, if you’re using the geckodriver HTTP frontend for Marionette, this will trigger the correct WebDriver compatible behaviour out of the box.

David LawrenceHappy BMO Push Day!

the following changes have been pushed to bugzilla.mozilla.org:

  • [1270295] don’t update timestamps when the tab is not active / in the background
  • [1270867] confusing error message when I was just searching for a bug
  • [232193] bmo’s systems (webheads, database, etc) should use UTC natively for o/s timezone and date storage

discuss these changes on mozilla.tools.bmo.


Emma HumphriesReadable Bug Statuses For Bugzilla: Update

First, thank you for your interest in this project. Over 100 npm users have downloaded the package this week!

Second, I've been making updates:

  • Added a script so that you can run npm script bundle and create a browserify'ed version of the module to include on web pages.
  • Use target milestone to indicate the release, before any uplifts, a bug is targeted for.
  • Aggressive exception handling so you just need to handle an error message if the package can't parse a bug.

Make sure you're using the latest version, and if you have a feature request, find a bug, or want to make an improvement, submit it to the GitHub repo.



comment count unavailable comments

Yunier José Sosa VázquezMozilla opens its free software support program to all projects

Last year Mozilla launched MOSS (Mozilla Open Source Support), a program to financially support open source projects. Initially, MOSS was aimed mainly at the projects that Mozilla relies on every day. Now, with the addition of “Mozilla Partners”, any project carrying out activities related to Mozilla's mission can apply.

Our mission, as set out in our Manifesto, is to ensure that the Internet remains a global public resource, open and accessible to all. An Internet that truly puts people first. We know that many other software projects share these goals with us, and we want to use our resources to help and encourage others to work towards them.

If you think your project qualifies, we encourage you to submit an application by filling out this form. The selection criteria used by the committee in charge of choosing among the applying projects can be read on the wiki. The budget for this year is approximately 1.25 million US dollars (USD).

The application deadline for the initial round closes on May 31 at 11:59 PM (Pacific time). The first awardees will be announced in mid-June in London during the Mozilla All Hands event. It is worth mentioning that applications will remain open after that.

If you would like to join the discussion list or stay up to date on the program's progress, you can do so through the following channels:

Source: The Mozilla Blog

Matt ThompsonOur network is full of stories

Our network is full of stories, impact and qualitative data. Colleagues and community members discover and use these narratives daily across a broad range — from communications and social media, to metrics and evaluation, to grant-writing, curriculum case studies, and grist for new outlets like the State of the Web.

Our challenge is: how do we capture and analyze these stories and qualitative data in a more systematic and rigorous way?

Can we design a unified “Story Engine” that serves multiple customers and use cases simultaneously — in ways that knit together much of our existing work? That’s the challenge we undertook in our first “Story Engine 0.1” sprint: document the goals, interview colleagues, and develop personas. Then design a process, ship a baby prototype, and test it out using some real data.


Designing a network story Engine

Here’s what we shipped in our first 3-day sprint:

  1. A prototype web site. With a “file a story tip” intake process.
  2. A draft business plan / workflow
  3. A successful test around turning network survey data into story leads
  4. Some early pattern-matching / ways to code and tag evidence narratives
  5. Documented our key learnings and next steps

1) A prototype web site

http://mzl.la/story is now a thing! It packages what we’ve done so far. Plus a work-bench for ongoing work. It includes:

  • “File a story” tip sheet — A quick, easy intake form for filing potential story leads to follow up on. Goal: make it fast and easy for anyone to file the “minimum viable info” we’d need to then triage and follow up. http://mzl.la/tip
  • See stories — See story tips submitted via the tip sheet. (Requires password for now, as it contains member emails.) Just a spreadsheet at this point — it will eventually become a GitHub repo for easier tasking, routing and follow-up. And maybe: a “story garden” with a prettier, more usable interface for humans to browse through and see the people and stories inside our network. http://mzl.la/leads
  • Personas — Who does this work need to serve? Who are the humans at the center of the design? We interviewed colleagues and documented their context, jobs, pains, and gains. Plus the claims they’d want to make and how they might use our findings. Focused on generating quick wins for the Mozilla Foundation grants, State of the Web, communications and metrics teams. http://mzl.la/customers
  • About — Outlining our general thinking, approach and potential methodologies. http://mzl.la/about
  • How-To Guides — (Coming soon.) Will eventually become: interview templates, guidance and training on how to conduct effective interviews, our methodology, and coding structure.

2) A draft business process / workflow 

What happens when a story tip gets filed? Who does what? Where are the decision points? We mapped some of this initial process, including things like: assess the lead, notify the right staff, conduct follow-up interviews, generate writing/ artefacts, share via social, code and analyze the story, then package and use findings.


3) Turning network survey data into stories

Our colleagues in the “Insights and Impact” team recently conducted their first survey of the network. These survey responses are rich in potential stories, evidence narratives, and qualitative data that can help test and refine our value proposition.

We tested the first piece of our baby story engine by pulling from the network survey and mapping data we just gathered.

This proved to be rich. It proved that our network surveys are not only great ways to gather quantitative data and map network relationships — they also provide rich leads for our grants / comms / M&E / strategy teams to follow up on.

Sample story leads

(Anonymous for privacy reasons):

  • “The network helps us form connections to small organizations that offer digital media and learning programs. We learn from their practices and are able to share them out to our broader network of over 1600 Afterschool providers in NYC. It also expands our staff capacity to teach Digital Media and Learning activities.”
  • “My passion is youth advocacy and fighting in solidarity with them in their corner. Being part of the network helps me do more with them like working with libraries in the UK to develop ‘Open source library days’ lead by our youths who have so much to share with us all.”
  • “The collaboration has allowed the local community to learn about the Internet and be able to contribute to it. The greatest joy is seeing young community girls being a part of this revolution through clubs. Through the process of learning they also meet local girls who share the same passion as they do.”

These are examples of leads that may be worth following up on to help flesh out theory of change, analyze trends, and tell a story about impact. Some of the leads we gathered also include critique or ways we need to do better — combined with explicit offers to help.


4) Early pattern-matching / coding and tagging

One of our goals is to combine the power of both qualitative and quantitative data. Out of this can come tagging and codes around the benefit / value the network is providing to members. Some early patterns in the benefits network members are reporting:

  • Support — advice, links to resources, financial support, partners (“matchmaking”)
  • Connections — professional, social
  • Credibility / legitimacy of being associated with Mozilla
  • Belongingness — being part of a group and drawing strength from that
  • Skills / practises / knowhow
  • Employability / “Helped me get a job”
  • Educational opportunity / “Helped me get into school”
  • Entrepreneurship & innovation / developing new models, products, services

Imagine these as simple tags we could apply to story tickets in a repo. This will help colleagues sift, sort and follow up on specific evidence narratives that matter to them. Plus allow us to spot patterns, develop claims, and test assumptions over time.


5) Key Learnings

Some of our “a ha!” moments from this first sprint:

  • Increased empathy and understanding is key. Increasing our empathy and understanding of network members is a key goal for this work.

This is a key muscle we want to strengthen in MoFo’s culture and systems: the ability to empathize with our members’ aspirations, challenges and benefits.

Regularly exposing staff and community to these stories from our network can ground our strategy, boost motivation, aid our change management process, and regularly spark “a ha” moments.

  • We are rich in qualitative data. We sometimes fall into a trap of assuming that what we observe, hear about and help facilitate is too ephemeral and anecdotal to be useful. In reality, it’s a rich source of data that needs to be systematically aggregated, analyzed, and fed back to teams and partners. Working on processes and frameworks to improve that was illuminating in terms of the quality of what we already have.
  • The network mapping survey is already full of great stories. Our early review and testing proved this thesis — there’s greater fodder for evidence narratives / human impact in that data.
  • Connect the dots between existing work. This “story engine” work is not about creating another standalone initiative; the opportunity is to provide some process and connective tissue to good work that is already ongoing.
  • We can start to see patterns emerging in terms of the value members are seeing in the network. We can turn these into a recurring set of themes / tags / codes that can inform our research and feedback loops.

Feedback on the network survey process:

Open-ended questions like “What’s the value or benefit you get from the network?” generate great material.

  • This survey question was a rich vein. (Mozilla Science Lab did not ask this open-ended question about value, which meant we lost an opportunity to gather great stories there — we can’t get story tips when people are selecting from a list of benefits.)
  • Criticism / suggestions for improvement are great. We’re logging people who will likely also have good critiques, not just ra-ra success stories. And (importantly) some of these critiques come with explicit offers to help.
  • Consider adding an open-ended “link or artefact field” to the survey next time. e.g., “Got a link to something cool that you made or documented as part of your interaction with the network?” This could be blog posts, videos, tweets, etc. These can generate easy wins and rich media.


What’s next?

We’ve documented our next steps here. Over the last three days, we’ve dug into how to better capture the impact of what we do. We’ve launched the first discovery phase of a design thinking process centred around: “How might we create stories that are also data?”

We’re listening, reviewing existing thinking, digging into people’s needs and context — asking “what if?” Based on the Mozilla Foundation strategy, we’ve created personas, thought about claims they might want to make, pulled from the results of a first round of surveys on network impacts (New York Hive, Open Science Lab, Mozilla Clubs), and created a prototype workflow and tip sheet. Next up: more digging, listening, and prototyping.

What would you focus on next?

If we consider what we’ve done above as version 0.1, what would you prioritize or focus on for version 0.2? Let us know!

 

Tim TaubertSix Months as a Security Engineer

It’s been a little more than six months since I officially switched to the Security Engineering team here at Mozilla to work on NSS and related code. I thought this might be a good time to share what I’ve been up to in a short status update:

Removed SSLv2 code from NSS

NSS contained quite a lot of SSLv2-specific code that was waiting to be removed. It was not compiled by default so there was no way to enable it in Firefox even if you wanted to. The removal was rather straightforward as the protocol changed significantly with v3 and most of the code was well separated. Good riddance.

Added ChaCha20/Poly1305 cipher suites to Firefox

Adam Langley submitted a patch to bring ChaCha20/Poly1305 cipher suites to NSS two years ago already, but at that time we likely didn’t have enough resources to polish and land it. I picked up where he left off and updated it to conform to the slightly updated specification. Firefox 47 will ship with two new ECDHE/ChaCha20 cipher suites enabled.

RSA-PSS for TLS v1.3 and the WebCrypto API

Ryan Sleevi, also a while ago, implemented RSA-PSS in freebl, the lower cryptographic layer of NSS. I hooked it up to some more APIs so Firefox can support RSA-PSS signatures in its WebCrypto API implementation. In NSS itself we need it to support new handshake signatures in our experimental TLS v1.3 code.
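
For a feel of what that enables, here is a minimal, hypothetical sketch of creating an RSA-PSS signature from page script via the WebCrypto API; the key size, hash and salt length are example values I picked for illustration, not values from this post.

// Minimal sketch: generate an RSA-PSS key pair and sign some data.
// The parameters (2048-bit modulus, SHA-256, 32-byte salt) are example values.
async function rsaPssExample(): Promise<ArrayBuffer> {
  const keyPair = await crypto.subtle.generateKey(
    {
      name: "RSA-PSS",
      modulusLength: 2048,
      publicExponent: new Uint8Array([1, 0, 1]), // 65537
      hash: "SHA-256",
    },
    false, // not extractable
    ["sign", "verify"]
  );
  const data = new TextEncoder().encode("hello");
  return crypto.subtle.sign(
    { name: "RSA-PSS", saltLength: 32 },
    keyPair.privateKey,
    data
  );
}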

Improve continuous integration for NSS

Kai Engert from Red Hat is currently doing a hell of a job maintaining quite a few buildbots that run all of our NSS tests whenever someone pushes a new changeset. Unfortunately the current setup doesn’t scale too well, and the machines are old and slow.

Similar to e.g. Travis CI, Mozilla maintains its own continuous integration and release infrastructure, called TaskCluster. Using TaskCluster we now have an experimental Docker image that builds NSS/NSPR and runs all of our 17 (so far) test suites. The turnaround time is already very promising. This is an ongoing effort; there are lots of things left to do.

Joined the WebCrypto working group

I’ve been working on the Firefox WebCrypto API implementation for a while, long before I switched to the Security Engineering team, and so it made sense to join the working group to help finalize the specification. I’m unfortunately still struggling to carve out more time for involvement with the WG than just attending meetings and representing Mozilla.

Added HKDF to the WebCrypto API

The main reason the WebCrypto API in Firefox did not support HKDF until recently is that no one found the time to implement it. I finally did find some time and brought it to Firefox 46. It is fully compatible with Chrome’s implementation (RFC 5869); the WebCrypto specification still needs to be updated to reflect those changes.
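
As a rough illustration (not taken from the post itself), deriving bits with HKDF from page script looks roughly like this; the secret, salt and info values below are placeholders.

// Minimal sketch of HKDF via the WebCrypto API; all inputs are made-up values.
async function hkdfExample(): Promise<ArrayBuffer> {
  // In practice the secret would come from somewhere meaningful,
  // e.g. an ECDH key agreement.
  const secret = crypto.getRandomValues(new Uint8Array(32));
  const baseKey = await crypto.subtle.importKey(
    "raw", secret, "HKDF", false, ["deriveBits"]
  );
  return crypto.subtle.deriveBits(
    {
      name: "HKDF",
      hash: "SHA-256",
      salt: crypto.getRandomValues(new Uint8Array(16)),
      info: new TextEncoder().encode("example-context"),
    },
    baseKey,
    256 // number of bits to derive
  );
}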

Added SHA-2 for PBKDF2 in the WebCrypto API

Since we shipped the first early version of the WebCrypto API, SHA-1 was the only available PRF to be used with PBKDF2. We now support PBKDF2 with SHA-2 PRFs as well.
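
For illustration, a minimal sketch of deriving an AES key from a password with a SHA-256 PRF might look like this; the iteration count and derived-key parameters are arbitrary example choices, not recommendations.

// Minimal sketch of PBKDF2 with a SHA-2 PRF via the WebCrypto API.
// Iteration count, salt handling and the AES-GCM key type are example choices.
async function deriveAesKey(password: string, salt: Uint8Array): Promise<CryptoKey> {
  const baseKey = await crypto.subtle.importKey(
    "raw", new TextEncoder().encode(password), "PBKDF2", false, ["deriveKey"]
  );
  return crypto.subtle.deriveKey(
    { name: "PBKDF2", salt, iterations: 100000, hash: "SHA-256" }, // SHA-2 PRF
    baseKey,
    { name: "AES-GCM", length: 256 },
    false,
    ["encrypt", "decrypt"]
  );
}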

Improved the Firefox WebCrypto API threading model

Our initial implementation of the WebCrypto API would naively spawn a new thread every time a crypto.subtle.* method was called. We now use a thread pool per process that is able to handle all incoming API calls much faster.

Added WebCrypto API to Workers and ServiceWorkers

After working on this on and off for more than six months (even before I officially joined the Security Engineering team), I finally managed to get it landed, with a lot of help from Boris Zbarsky, who had to adapt our WebIDL code generation quite a bit. The WebCrypto API can now finally be used from (Service)Workers.
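
To give a feel for it, here is a hypothetical sketch of a dedicated Worker that hashes whatever buffer it receives; the file name and message shape are invented for the example.

// hash-worker.ts — hypothetical sketch of calling WebCrypto from a Worker.
// Assumes TypeScript's "webworker" lib so that `self` is the worker global scope.
self.onmessage = async (event: MessageEvent<ArrayBuffer>) => {
  // crypto.subtle is now available inside Workers and ServiceWorkers too.
  const digest = await crypto.subtle.digest("SHA-256", event.data);
  self.postMessage(digest);
};

The main thread would then create the worker with new Worker("hash-worker.js") and post it an ArrayBuffer.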

What’s next?

In the near future I’ll be working further on improving our continuous integration infrastructure for NSS, and clean up the library and its tests. I will hopefully find the time to write more about it as we progress.

Kim MoirWelcome Mozilla Releng summer interns

We're delighted to have Francis Kang and Connor Sheehan join the Mozilla release engineering team as summer interns.  Francis is studying at the University of Toronto while Connor attends McMaster University in Hamilton, Ontario.  We'll have another intern (Anthony) join us later on in the summer who will be working from our San Francisco office.

Francis and Connor will be working on implementing some new features in release promotion as well as  migrating some builds to taskcluster.  I'll be mentoring Francis,  while Rail will be mentoring Connor.  If you are in the Toronto office, please drop by to say hi to them.  Or welcome them on irc as fkang or sheehan. 

Kim, Francis, Connor and Rail
They are both already off to a great start and have pull requests merged into production that fixed some release promotion issues.  Their code was used in the Firefox 47.0 beta 5 release promotion that we ran last night so their first week was quite productive.


Mentoring an intern provides an opportunity to see the systems we run from a fresh perspective. They both have lots of great questions, which makes us revisit why design decisions were made and whether we could do things better. Like all teaching roles, I always find that I learn a tremendous amount from the experience, and I hope they have fun learning real world software engineering concepts with respect to running large distributed systems.

Welcome to Mozilla!

Shing LyuRust Code Completion with YouCompleteMe

Every programmer knows that code completion can give you a productivity boost. Without code completion, you’ll have to read the documentation (or StackOverflow?) every time you forget what to type next. Since I’m a vim user and I work with Rust very often, I’m glad that YouCompleteMe (YCM), a powerful code completion plug-in for vim, officially supports Rust code completion (see this blog post). I’ll walk you through the process of installing YCM on Ubuntu/Linux Mint in this post.

Prerequisite

You’ll need:

  • vim (obviously!)
  • git (Vundle needs it)
  • latest stable rustc and cargo (YCM needs to compile the racerd completion server, follow this installation guide)
  • Python 2
  • Install the required packages for YCM: sudo apt-get install build-essential cmake python-dev python3-dev

If you use other OS, you can find the installation guide here.

Installation

First, we’ll install YouCompleteMe through Vundle. YCM is not a simple vimscript- or Python-based vim plug-in; it has compiled components, so you’ll have to compile it when you first install it and re-compile it every time you update. Vundle is a package manager for vim; it allows you to specify the plug-ins you want in your .vimrc file, and it will handle the install/uninstall for you. Vundle can be installed easily by following the official quick start guide.

After you installed Vundle, you can add the following line to your ~/.vimrc:

Plugin 'Valloric/YouCompleteMe'

Then run :PluginInstall in vim (in normal mode). Vundle should download and install YCM into the ~/.vim/bundle/YouCompleteMe folder. YCM is quite big, so the :PluginInstall step may take a long time depending on your network connection; be patient and wait for it to finish. If you already have an old version of YCM installed, you can run :PluginUpdate instead.

Now, let’s compile YCM and include the Rust support:

cd ~/.vim/bundle/YouCompleteMe
./install.py --racer-completer

The install.py command will take some time as it compiles everything. The --racer-completer argument tells the YCM installer to download racerd via Cargo and compile it for you.

If you ever wonder why it’s called --racer-completer, it’s because YCM relies on racerd for Rust semantic completion. racerd is a JSON/HTTP server powered by racer, a Rust code completion tool. (You can install racer directly and use it with vim plugins like vim-racer, but it’s not as convenient as YCM.) When you type your Rust code in vim, YCM will communicate with the racerd to get the code completion suggestions.

For the Rust completer to work, you’ll also need a local copy of the Rust source code. You can git clone the Rust source code from GitHub or download the tarball. Choose the version you’ll be writing Rust code in. Then you’ll need to point to the source code in your ~/.vimrc like so:

" In this example, the rust source code zip has been extracted to
" /usr/local/rust/rustc-1.8.0
let g:ycm_rust_src_path = '/usr/local/rust/rustc-1.8.0/src'

Now we are all set! The next time you open a Rust file, YCM will start to work.

Usage

YCM automatically kicks in when you type :: or .. It offers completion for use statements as well as for functions, and as you type more, the list of suggestions is filtered automatically.

It can also help you “jump” to the definition of a variable, function, struct, etc. Simply locate your cursor on anything (in normal mode) and run :YcmCompleter GoTo to jump.

Let’s say we want to know the definition of inline_mode_assign_inline_sizes(). Place the cursor on it, run the command, and boom! It jumps to the function definition.

You can map the verbose command to a hotkey, for example \] (backslash followed by right square bracket), by adding the following line to your ~/.vimrc:

nnoremap <Leader>] :YcmCompleter GoTo<CR>

QMOFirefox 47 Beta 7 Testday, May 20th

Hey y’all!

I am writing to let you know that next week on Friday (May 20th) we are organizing Firefox 47 Beta 7 Testday. The main focus will be on the APZ feature and on plugin compatibility. Check out all the details via this etherpad.

No previous testing experience is needed, so feel free to join us on #qa IRC channel where our moderators will offer you guidance and answer your questions.

Join us and help us make Firefox better! 😉

See you on Friday! \o/

Pascal ChevrelMozFR Transvision Reloaded: 1 year later

Just one year ago, the French Mozilla community was living through a time of major change: several key historical contributors were leaving the project, our various community portals were no longer updated or were broken, and our tools were no longer maintained. At the same time, a few new contributors were popping into our IRC channel asking for ways to get involved in the French Mozilla community.

As a result, Kaze decided to organize the first ever community meetup for the French-speaking community in the Paris office (and we will repeat this meetup in June in the brand new Paris office!).


This resulted in a major and successful community reboot. Leaving contributors passed on the torch to other members of the community, newer contributors were meeting in real life for the first time. This is how Clarista officially became our events organizer, this is how Théo replaced Cédric as the main Firefox localizer and this is how I became the new developer for Transvision! :)

What is Transvision? Transvision is a web application created by Philippe Dessantes to help the French team find localized/localizable strings in Mozilla repositories.

Summarized like that, it doesn't sound that great, but believe me, it is! Mozilla applications have gigantic repos: there are tens of thousands of strings in our mercurial repositories, some of which we translated a decade ago. When you decide to change a verb for a better one, for example, it is important to be able to find all the occurrences of the verb you used in the past to see if they need updating too. When somebody spots a typo or a clumsy wording, it's good to be able to check whether you made the same translation mistakes in other parts of the Mozilla applications several years ago, and of course it's good to be able to check that in just a few seconds. Basically, Philippe had built the QA/assistive technology that best fitted our team's localization process and we just couldn't let it die.

During the MozFR meetup, Philippe showed to me how the application worked and we created a github repository where we put the currently running version of the code. I tagged that code as version 1.0.

Over the summer, I familiarized myself with the code which was mostly procedural PHP, several Bash scripts to maintain copies of our mercurial repos and a Python script used to extract the strings. Quickly, I decided that I would follow the old Open Source strategy of Release early, release often. Since I was doing that on the sidelines of my job at Mozilla,  I needed the changes to be small but frequent incremental steps as I didn't know how much time I could devote to this project. Basically, having frequent releases means that I always have the codebase in mind which is good since I can implement an idea quickly, without having to dive into the code to remember it.

One year and 15 releases later, we are now at version 2.5, so here are the features and achievements I am most proud of:

  1. Transvision is alive and kicking :)
  2. We are now a team! Jesús Perez has been contributing code since last December, a couple more people have shown interest in contributing and Philippe is interested in helping again too. We also have a dynamic community of localizers giving feedback, reporting bugs and asking for improvements
  3. The project is now organized and if some day I need to step down and pass the torch to another maintainer, he should not have difficulties setting the project up and maintaining it. We have a github repo, release notes, bugs, tagged releases, a beta server, unit testing, basic stats to understand what is used in the app and a mostly cleaned up codebase using much more modern PHP and tools (Atoum, Composer). It's not perfect, but I think that for amateur developers, it's not bad at all and the most important thing is that the code keeps on improving!
  4. There are now more than 3000 searches per week done by localizers on Transvision. That was more like 30 per week a year ago. There are searches in more than 70 languages, although 30 locales are doing the bulk of searches and French is still the biggest consumer with 40% of requests.
  5. Some people are using Transvision in ways I hadn't anticipated, for example our documentation localizers use it to find the translation of UI mentioned in help articles they translate for support.mozilla.org, people in QA use it to point to localized strings in Bugzilla

A quick recap of what we have done, feature-wise, in the last 12 months:

  • Completely redesigned the application to look and feel good
  • Locale to Locale searches, English is not necessarily the locale you want to use as the source (very useful to check differences from languages of the same family, for example Occitan/French/Catalan/Spanish...).
  • Hints and warnings for strings that look too long or too short compared to English, potentially bad typography, access keys that don't match your translation...
  • Possibility for anybody to file a bug in Bugzilla with a pointer to the badly translated string (yes we will use it for QA test days within the French community!)
  • Firefox OS strings are now there
  • Search results are a lot more complete and accurate
  • We now have a stable Json/JsonP API, I know that Pontoon uses it to provide translation suggestions, I heard that the Moses project uses it too. (if you use the Transvision API, ping me, I'd like to know!)
  • We can point any string to the right revision-controlled file in the source and target repos
  • We have a companion add-on called MozTran for heavy users of the tool provided by Goofy, from our Babelzilla friends.

The above list is of course just a highlight of the main features, you can get more details on the changelog.

If you use Transvision, I hope you enjoy it and that it is useful to you. If you don't use Transvision (yet), give it a try, it may help you in your translation process, especially if your localization process is similar to the French one (targets Firefox Nightly builds first, work directly on the mercurial repo, focus on QA).

This was the first year of the rebirth of Transvision, I hope that the year to come will be just as good as this one. I learnt a lot with this project and I am happy to see it grow both in terms of usage and community, I am also happy that one tool that was created by a specific localization team is now used by so many other teams in the world :)

Yunier José Sosa VázquezHelp us build the future of Firefox with the new Test Pilot program

Mozilla's Test Pilot program has a new face and a new website, as Mozilla showed in an article published on its blog by Nick Nguyen, Vice President of Firefox. Test Pilot gives you the chance to try out the experimental features that will be incorporated into Firefox and to say what you think, what should be changed, or suggest new ideas through the feedback attached to each new feature.

In the video below you can get a quick look at the available experiments.

The experiments in depth

Activity Stream: This makes it easier to browse through your history to find sites and web content faster. Activity Stream helps you rediscover the things you love on the Web. Every time you open a new tab, you will see your most visited sites along with highlighted content from your bookmarks and history.

Activity Stream

Tab Center: Displays your tabs vertically on the left-hand side of the screen to provide a new tab-browsing experience.

Tab Center

Universal Search: Combines the Awesome Bar history with Firefox's drop-down search menu to give you the best recommendations so you spend less time filtering search results. You will notice that search suggestions now look different. If you have recently visited a site, it will show up as a search suggestion, and the recommended results include more information about the suggested site.

Universal Search

How do you get started with Test Pilot?

If you want to contribute to Mozilla, go to https://testpilot.firefox.com, create or sign in with a Firefox Account (by clicking the button that says Get Started with a Firefox Account) and then install the add-on. For now the experiments are only available in English, but more languages will be added later.

The Rust Programming Language BlogTaking Rust everywhere with rustup

Cross-compilation is an imposing term for a common kind of desire:

  • You want to build an app for Android, or iOS, or your router using your laptop.

  • You want to write, test and build code on your Mac, but deploy it to your Linux server.

  • You want your Linux-based build servers to produce binaries for all the platforms you ship on.

  • You want to build an ultraportable binary you can ship to any Linux platform.

  • You want to target the browser with Emscripten or WebAssembly.

In other words, you want to develop/build on one “host” platform, but get a final binary that runs on a different “target” platform.

Thanks to the LLVM backend, it’s always been possible in principle to cross-compile Rust code: just tell the backend to use a different target! And indeed, intrepid hackers have put Rust on embedded systems like the Raspberry Pi 3, bare metal ARM, MIPS routers running OpenWRT, and many others.

But in practice, there are a lot of ducks you have to get in a row to make it work: the appropriate Rust standard library, a cross-compiling C toolchain including linker, headers and binaries for C libraries, and so on. This typically involves poring over various blog posts and package installers to get everything “just so”. And the exact set of tools can be different for every pair of host and target platforms.

The Rust community has been hard at work toward the goal of “push-button cross-compilation”. We want to provide a complete setup for a given host/target pair with the run of a single command. Today we’re happy to announce that a major portion of this work is reaching beta status: we’re building binaries of the Rust standard library for a wide range of targets, and shipping them to you via a new tool called rustup.

Introducing rustup

At its heart, rustup is a toolchain manager for Rust. It can download and switch between copies of the Rust compiler and standard library for all supported platforms, and track Rust’s nightly, beta, and release channels, as well as specific versions. In this way rustup is similar to the rvm, rbenv and pyenv tools for Ruby and Python. I’ll walk through all of this functionality, and the situations where it’s useful, in the rest of the post.

Today rustup is a command line application, and I’m going to show you some examples of what it can do, but it’s also a Rust library, and eventually these features are expected to be presented through a graphical interface where appropriate — particularly on Windows. Getting cross-compilation set up should eventually be a matter of checking a box in the Rust installer.

Our ambitions go beyond managing just the Rust toolchain: to have a true push-button experience for cross-compilation, it needs to set up the C toolchain as well. That functionality is not shipping today, but it’s something we hope to incorporate over the next few months.

Basic toolchain management

Let’s start with something simple: installing multiple Rust toolchains. In this example I create a new library, ‘hello’, then test it using rustc 1.8, then use rustup to install and test that same crate on the 1.9 beta.

That’s an easy way to verify your code works on the next Rust release. That’s good Rust citizenship!

We can use rustup show to show us the installed toolchains, and rustup update to keep them up to date with Rust’s releases.

Finally, rustup can also change the default toolchain with rustup default:

$ rustc --version
rustc 1.8.0 (db2939409 2016-04-11)
$ rustup default 1.7.0
info: syncing channel updates for '1.7.0-x86_64-unknown-linux-gnu'
info: downloading component 'rust'
info: installing component 'rust'
info: default toolchain set to '1.7.0-x86_64-unknown-linux-gnu'

  1.7.0-x86_64-unknown-linux-gnu installed - rustc 1.7.0 (a5d1e7a59 2016-02-29)

$ rustc --version
rustc 1.7.0 (a5d1e7a59 2016-02-29)

On Windows, where Rust supports both the GNU and MSVC ABI, you might want to switch from the default stable toolchain on Windows, which targets the 32-bit x86 architecture and the GNU ABI, to a stable toolchain that targets the 64-bit, MSVC ABI.

$ rustup default stable-x86_64-pc-windows-msvc
info: syncing channel updates for 'stable-x86_64-pc-windows-msvc'
info: downloading component 'rustc'
info: downloading component 'rust-std'
...

  stable-x86_64-pc-windows-msvc installed - rustc 1.8.0-stable (db2939409 2016-04-11)

Here the “stable” toolchain name is appended with an extra identifier indicating the compiler’s architecture, in this case x86_64-pc-windows-msvc. This identifier is called a “target triple”: “target” because it specifies a platform for which the compiler generates (targets) machine code; and “triple” for historical reasons (in many cases “triples” are actually quads these days). Target triples are the basic way we refer to particular common platforms; rustc by default knows about 56 of them, and rustup today can obtain compilers for 14, and standard libraries for 30.

Example: Building static binaries on Linux

Now that we’ve got the basic pieces in place, let’s apply them to a simple cross-compilation task: building an ultraportable static binary for Linux.

One of the unique features of Linux that has become increasingly appreciated is its stable syscall interface. Because the Linux kernel puts exceptional effort into maintaining a backward-compatible kernel interface, it’s possible to distribute ELF binaries with no dynamic library dependencies that will run on any version of Linux. Besides being one of the features that make Docker possible, it also allows developers to build self-contained applications and deploy them to any machine running Linux, regardless of whether it’s Ubuntu or Fedora or any other distribution, and regardless of exact mix of software libraries they have installed.

Today’s Rust depends on libc, and on most Linuxes that means glibc. It’s technically challenging to fully statically link glibc, which presents difficulties when using it to produce a truly standalone binary. Fortunately, an alternative exists: musl, a small, modern implementation of libc that can be easily statically linked. Rust has been compatible with musl since version 1.1, but until recently developers have needed to build their own compiler to benefit from it.

With that background, let’s walk through compiling a statically-linked Linux executable. For this example you’ll want to be running Linux — that is, your host platform will be Linux, and your target platform will also be Linux, just a different flavor: musl. (Yes, this is technically cross-compilation even though both the host and the target are Linux.)

I’m going to be running on Ubuntu 16.04 (using this Docker image). We’ll be building the basic hello world:

rust:~$ cargo new --bin hello && cd hello
rust:~/hello$ cargo run
   Compiling hello v0.1.0 (file:///home/rust/hello)
     Running `target/debug/hello`
Hello, world!

That’s with the default x86_64-unknown-linux-gnu target. And you can see it has many dynamic dependencies:

rust:~/hello$ ldd target/debug/hello
        linux-vdso.so.1 =>  (0x00007ffe5e979000)
        libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007fca26d03000)
        libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007fca26ae6000)
        libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007fca268cf000)
        libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fca26506000)
        /lib64/ld-linux-x86-64.so.2 (0x000056104c935000)
        libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007fca261fd000)

To compile for musl instead call cargo with the argument --target=x86_64-unknown-linux-musl. If we just go ahead and try that we’ll get an error:

rust:~/hello$ cargo run --target=x86_64-unknown-linux-musl
   Compiling hello v0.1.0 (file:///home/rust/hello)
error: can't find crate for `std` [E0463]
error: aborting due to previous error
Could not compile `hello`.
...

The error tells us that the compiler can’t find std. That is of course because we haven’t installed it.

To start cross-compiling, you need to acquire a standard library for the target platform. Previously, this was an error-prone, manual process — cue those blog posts I mentioned earlier. But with rustup, it’s just part of the usual workflow:

rust:~/hello$ rustup target add x86_64-unknown-linux-musl
info: downloading component 'rust-std' for 'x86_64-unknown-linux-musl'
info: installing component 'rust-std' for 'x86_64-unknown-linux-musl'
rust:~/hello$ rustup show
installed targets for active toolchain
--------------------------------------

x86_64-unknown-linux-gnu
x86_64-unknown-linux-musl

active toolchain
----------------

stable-x86_64-unknown-linux-gnu (default)
rustc 1.8.0 (db2939409 2016-04-11)

So I’m running the 1.8 toolchain for Linux on 64-bit x86, as indicated by the x86_64-unknown-linux-gnu target triple, and now I can also target x86_64-unknown-linux-musl. Neat. Surely we are ready to build a slick statically-linked binary we can release into the cloud. Let’s try:

rust:~/hello$ cargo run --target=x86_64-unknown-linux-musl
   Compiling hello v0.1.0 (file:///hello)
     Running `target/x86_64-unknown-linux-musl/debug/hello`
Hello, world!

And that… just worked! Run ldd on it for proof that it’s the real deal:

rust:~/hello$ ldd target/x86_64-unknown-linux-musl/debug/hello
        not a dynamic executable

Now take that hello binary and copy it to any x86_64 machine running Linux and it’ll run just fine.

For more advanced use of musl consider rust-musl-builder, a Docker image set up for musl development, which helpfully includes common C libraries compiled for musl.

Example: Running Rust on Android

One more example. This time building for Android, from Linux, i.e., arm-linux-androideabi from x86_64-unknown-linux-gnu. This can also be done from OS X or Windows, though on Windows the setup is slightly different.

To build for Android we need to add the Android target, so let’s set up another 'hello, world’ project and install it.

rust:~$ cargo new --bin hello && cd hello
rust:~/hello$ rustup target add arm-linux-androideabi
info: downloading component 'rust-std' for 'arm-linux-androideabi'
info: installing component 'rust-std' for 'arm-linux-androideabi'
rust:~/hello$ rustup show
installed targets for active toolchain
--------------------------------------

arm-linux-androideabi
x86_64-unknown-linux-gnu

active toolchain
----------------

stable-x86_64-unknown-linux-gnu (default)
rustc 1.8.0 (db2939409 2016-04-11)

So let’s see what happens if we try to just build our 'hello’ project without installing anything further:

rust:~/hello$ cargo build --target=arm-linux-androideabi
   Compiling hello v0.1.0 (file:///home/rust/hello)
error: linking with `cc` failed: exit code: 1
... (lots of noise elided)
error: aborting due to previous error
Could not compile `hello`.

The problem is that we don’t have a linker that supports Android yet, so let’s take a moment’s digression to talk about building for Android. To develop for Android we need the Android NDK. It contains the linker rustc needs to create Android binaries. To just build Rust code that targets Android the only thing we need is the NDK, but for practical development we’ll want the Android SDK too.

On Linux, download and unpack them with the following commands (the output of which is not included here):

rust:~/home$ cd
rust:~$ curl -O https://dl.google.com/android/android-sdk_r24.4.1-linux.tgz
rust:~$ tar xzf android-sdk_r24.4.1-linux.tgz
rust:~$ curl -O http://dl.google.com/android/repository/android-ndk-r10e-linux-x86_64.zip
rust:~$ unzip android-ndk-r10e-linux-x86_64.zip

We further need to create what the NDK calls a “standalone toolchain”. We’re going to put ours in a directory called android-18-toolchain:

rust:~$ android-ndk-r10e/build/tools/make-standalone-toolchain.sh \
      --platform=android-18 --toolchain=arm-linux-androideabi-clang3.6 \
      --install-dir=android-18-toolchain --ndk-dir=android-ndk-r10e/ --arch=arm
Auto-config: --toolchain=arm-linux-androideabi-4.8, --llvm-version=3.6
Copying prebuilt binaries...
Copying sysroot headers and libraries...
Copying c++ runtime headers and libraries...
Copying files to: android-18-toolchain
Cleaning up...
Done.

Let’s notice a few things about these commands. First, the NDK we downloaded, android-ndk-r10e-linux-x86_64.zip is not the most recent release (which at the time of this writing is 'r11c’). Rust’s std is built against r10e and links to symbols that are no longer included in the NDK. So for now we have to use the older NDK. Second, in building the standalone toolchain we passed --platform=android-18 to make-standalone-toolchain.sh. The “18” here is the Android API level. Today, Rust’s arm-linux-androideabi target is built against Android API level 18, and should theoretically be forwards-compatible with subsequent Android API levels. So we’re picking level 18 to get the greatest Android compatibility that Rust presently allows.

The final thing for us to do is tell Cargo where to find the android linker, which is in the standalone NDK toolchain we just created. To do that we configure the arm-linux-androideabi target in .cargo/config with the 'linker’ value. And while we’re doing that we’ll go ahead and set the default target for this project to Android so we don’t have to keep calling cargo with the --target option.

[build]
target = "arm-linux-androideabi"

[target.arm-linux-androideabi]
linker = "/home/rust/android-18-toolchain/bin/arm-linux-androideabi-gcc"

Now let’s change back to the 'hello’ project directory and try to build again:

rust:~$ cd hello
rust:~/hello$ cargo build
   Compiling hello v0.1.0 (file:///home/rust/hello)

Success! Of course just getting something to build is not the end of the story. You’ve also got to package your code up as an Android APK. For that you can use cargo-apk.

Rust everywhere else

Rust is a software platform with the potential to run on anything with a CPU. In this post I showed you a little bit of what Rust can already do, with the rustup tool. Today Rust runs on most of the platforms you use daily. Tomorrow it will run everywhere.

So what should you expect next?

In the coming months we’re going to continue removing barriers to Rust cross-compilation. Today rustup provides access to the standard library, but as we’ve seen in this post, there’s more to cross-compilation than rustc + std. It’s acquiring and configuring the linker and C toolchain that is the most vexing — each combination of host and target platform requires something slightly different. We want to make this easier, and will be adding “NDK support” to rustup. What this means will again depend on the exact scenario, but we’re going to start working from the most demanded, like Android, and try to automate as much of the detection, installation and configuration of the non-Rust toolchain components as we can. On Android for instance, the hope is to automate everything for a basic initial setup except for accepting the licenses.

In addition to that there are multiple efforts to improve Rust cross-compilation tooling, including xargo, which can be used to build the standard library for targets unsupported by rustup, and cargo-apk, which builds Android packages from Cargo packages.

Finally, the most exciting platform on the horizon for Rust is not a traditional target for systems languages: the web. With Emscripten today it’s quite easy to run C++ code on the web by converting LLVM IR to JavaScript (or the asm.js subset of JavaScript). And the upcoming WebAssembly (wasm) standard will cement the web platform as a first-class target for programming languages.

Rust is uniquely positioned to be the most powerful and usable wasm-targeting language for the immediate future. The same properties that make Rust so portable to real hardware make it nearly trivial to port Rust to wasm. The same can’t be said for languages with complex runtimes that include garbage collectors.

Rust has already been ported to Emscripten (at least twice), but the code has not yet fully landed. This summer it’s happening though: Rust + Emscripten. Rust on the Web. Rust everywhere.

Epilogue

While many people are reporting success with rustup, it remains in beta, with some key outstanding bugs, and is not yet the officially recommended installation method for Rust (though you should try it). We’re going to keep soliciting feedback, applying polish, and fixing bugs. Then we’re going to improve the rustup installation experience on Windows by embedding it into a GUI that behaves like a proper Windows installer.

At that point we’ll likely update the download instructions on www.rust-lang.org to recommend rustup. I expect all the existing installation methods to remain available, including the non-rustup Windows installers, but at that point our focus will be on improving the installation experience through rustup. It’s also plausible that rustup itself will be packaged for package managers like Homebrew and apt.

If you want to try rustup for yourself, visit www.rustup.rs and follow the instructions. Then leave feedback on the dedicated forum thread, or file bugs on the issue tracker. More information about rustup is available in the README.

Thanks

Rust would not be the powerful system it is without the help of many individuals. Thanks to Diggory Blake for creating rustup, to Jorge Aparicio for fixing lots of cross-compilation bugs and documenting the process, Tomaka for pioneering Rust on Android, and Alex Crichton for creating the release infrastructure for Rust’s many platforms.

And thanks to all the rustup contributors: Alex Crichton, Brian Anderson, Corey Farwell, David Salter, Diggory Blake, Jacob Shaffer, Jeremiah Peschka, Joe Wilm, Jorge Aparicio, Kai Noda, Kamal Marhubi, Kevin K, llogiq, Mika Attila, NODA, Kai, Paul Padier, Severen Redwood, Taylor Cramer, Tim Neumann, trolleyman, Vadim Petrochenkov, V Jackson, Vladimir, Wayne Warren, Yasushi Abe, Y. T. Chung

Mozilla Addons BlogAMO technical architecture

addons.mozilla.org (AMO) has been around for more than 12 years, making it one of the oldest websites at Mozilla. It celebrated its 10th anniversary a couple of years ago, as Wil blogged about.

AMO started as a PHP site that grew and grew as new pieces of functionality were bolted on. In October 2009 the rewrite from PHP to Python began. New features were added, the site grew ever larger, and now a few cracks are starting to appear. These are merely the result of a site that has lots of features and functionality and has been around for a long time.

The site architecture is currently something like below, but please note this simplifies the site and ignores the complexities of AWS, the CDN and other parts of the site.

Basically, all the code is in one repository and the main application (a Django app) is responsible for generating everything, from HTML to emails to APIs, and it all gets deployed at the same time. There are a few problems with this:

  • The amount of functionality in the site has caused such a growth in interactions between the features that it is harder and harder to test.
  • Large JavaScript parts of the site have no automated testing.
  • The JavaScript and CSS spill over between different parts of the site, so changes in one regularly break other parts of the site.
  • Not all parts of the site have the same expectation of uptime but are all deployed at the same time.
  • Not all parts of the site have the same requirements for code contributions.

We are moving towards a new model similar to the one used for Firefox Marketplace. Whereas Marketplace built its own front-end framework, we are going to be using React on the front end.

The end result will start to look something like this:


A separate version of the site is rendered for the different use cases, for example developers or users. In this case a request comes in and hits the appropriate front-end stack. That stack renders the site using universal React in node.js on the server, and accesses the data store by calling the appropriate Python REST APIs.

In this scenario, the legacy Python code will migrate to being a REST API that manages storage, transactions, workflow, permissions and the like. All the front-facing user interface work will be done in React and be independent from each other as much as possible.
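
As a purely illustrative sketch of that split (the endpoint, data shape and markup below are invented, not AMO's actual API), the node.js front end fetches JSON from the Python REST API and renders it with React on the server:

// Hypothetical sketch of server-side ("universal") React rendering backed by
// a REST API. The endpoint and the Addon shape are made up for illustration;
// assumes a Node version with a global fetch().
import { createElement } from "react";
import { renderToString } from "react-dom/server";

interface Addon {
  name: string;
  summary: string;
}

async function renderAddonPage(slug: string): Promise<string> {
  // The front-end stack only talks to the data store through the REST API.
  const response = await fetch(`https://api.example.org/addons/${slug}/`);
  const addon: Addon = await response.json();
  // React produces the HTML on the server; the same components can be
  // re-rendered on the client.
  return renderToString(
    createElement("section", null,
      createElement("h1", null, addon.name),
      createElement("p", null, addon.summary))
  );
}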

It’s not quite micro services, but the breaking of a larger site into smaller independent pieces. The first part of this is happening with the “discovery pane” (accessible at about:addons). This is our first project using this infrastructure, which features a new streamlined way to install add-ons with a new technical architecture to serve it to users.

As we roll out this new architecture we’ll be doing more blog posts, so if you’d like to get involved then join our mailing list or check out our repositories on Github.

Support.Mozilla.OrgWhat’s Up with SUMO – 12th May

Hello, SUMO Nation!

Yes, we know, Friday the 13th is upon us… Fear not, in good company even the most unlucky days can turn into something special ;-) Pet a black cat, find a four leaf clover, smile and enjoy what the weekend brings!

As for SUMO, we have a few updates coming your way. Here they are!

Welcome, new contributors!

If you just joined us, don’t hesitate – come over and say “hi” in the forums!

Contributors of the week

We salute you!

Don’t forget that if you are new to SUMO and someone helped you get started in a nice way you can nominate them for the Buddy of the Month!

Most recent SUMO Community meeting

The next SUMO Community meeting

  • …is happening on WEDNESDAY the 18th of May – join us!
  • Reminder: if you want to add a discussion topic to the upcoming meeting agenda:
    • Start a thread in the Community Forums, so that everyone in the community can see what will be discussed and voice their opinion here before Wednesday (this will make it easier to have an efficient meeting).
    • Please do so as soon as you can before the meeting, so that people have time to read, think, and reply (and also add it to the agenda).
    • If you can, please attend the meeting in person (or via IRC), so we can follow up on your discussion topic during the meeting with your feedback.

Community

Social

Support Forum

Knowledge Base & L10n

Firefox

  • for iOS
    • Firefox for iOS 4.0 IS HERE! The highlights are:
      • Firefox is now present on the Today screen.
      • You can access your bookmarks in the search bar.
      • You can override the certificate warning on sites that present them (but be careful!).
      • You can print webpages.
      • Users with older versions of iOS 8 or lower will not be able to add the Firefox widget. See Common Response Available.
    • Start your countdown clocks ;-) Firefox for iOS 5.0 should be with us in approximately 6 weeks!

Thanks for your attention and see you around SUMO, soon!

Air MozillaWeb QA Team Meeting, 12 May 2016

Web QA Team Meeting Weekly Web QA team meeting - please feel free and encouraged to join us for status updates, interesting testing challenges, cool technologies, and perhaps a...

Air MozillaReps weekly, 12 May 2016

Reps weekly This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

Daniel GlazmanBlueGriffon 2.0 approaching...

BlueGriffon 2.0 is approaching, a major revamp of my cross-platform Wysiwyg Gecko-based editor. You can find previews here for OSX, Windows and Ubuntu 16.04 (64 bits).


Warnings:

  • it's HIGHLY recommended to NOT overwrite your existing 1.7 or 1.8 version; install it for instance in /tmp instead of /Applications
  • it's VERY HIGHLY recommended to start it creating a dedicated profile
    • open BlueGriffon.app --args -profilemanager (on OSX)
    • bluegriffon.exe -profilemanager (on Windows)
  • add-ons will NOT work with it, don't even try to install them in your test profile
  • it's a work in progress, expect bugs, issues and more

Changes:

- major revamp, you won't even recognize the app :-)
- based on a very recent version of Gecko, that was a HUGE work.
- no more floating panels, too hacky and expensive to maintain
- rendering engine support added for Blink, Servo, Vivliostyle and Weasyprint!
- tons of debugging in *all* areas of the app
- BlueGriffon now uses the native colorpicker on OSX. Yay!!! The native colorpicker of Windows is so weak and ugly we just can't use it (it can't even deal with opacity...) and decided to stick to our own implementation. On Linux, the situation is more complicated, the colorpicker is not as ugly as the Windows' one, but it's unfortunately too weak compared to what our own offers.
- more CSS properties handled
- helper link from each CSS property in the UI to MDN
- better templates handling
- auto-reload of html documents if modified outside of BlueGriffon
- better Markdown support
- zoom in Source View
- tech changes for future improvements: support for :active and other dynamic pseudo-classes, support for ::before and ::after pseudo-elements in CSS Properties; rely on Gecko's CSS lexer instead of our own. We're also working on cool new features on the CSS side like CSS Variables and even much cooler than that :-)

Christian HeilmannChakraCore and Node musings at NodeConf London

Yesterday morning I dragged myself to the Barbican to present at NodeConf London. Dragged not because I didn’t want to, but because I had 3 hours sleep coming back from Beyond Tellerand the day before.

Presenting at NodeConf London (photo by Adrian Alexa)

I didn’t quite have time to prepare my talk, and I ended up finishing my slides 5 minutes before it. That’s why I was, to use a simple term, shit scared of my talk. I’m not that involved in the goings on in Node, and the impostor in me assumed the whole audience to be all experts and me making an utter berk of myself. However, this being a good starting point I just went with it and used the opportunity to speak to an audience that much in the know about something I want Node to be.

I see the Node environment and ecosystem as an excellent opportunity to test out new JavaScript features and ideas without the issue of browser interoperability and incompatibility.

The thing I never was at ease with, though, is that everything is based on one JS engine. This is not how you define and test out a standard. You need to have several runtimes to execute your code. Much like a browser monoculture was a terrible thing and gave us thousands of now unmaintainable and hard to use web sites, not opening ourselves to various engines can lead to terrible scripts and apps based on Node.

The talk video is already live and you can also see all the other talks in this playlist:

The slides are on Slideshare:

A screencast recording of the talk is on YouTube.

Resources I mentioned:

I was very happy to get amazing feedback from everyone I met, and to hear that people thoroughly enjoyed my presentation. Goes to show that the voice in your head telling you that you’re not good enough often is just being a dick.