Air Mozilla: Foundation Demos September 30 2016

Foundation Demos September 30 2016

Niko Matsakis: Announcing intorust.com

For the past year or so, I and a few others have been iterating on some tutorial slides for learning Rust. I’ve given this tutorial here at the local Boston Rust Meetup a few times, and we used the same basic approach at RustConf; I’ve been pretty happy with the results. But until now it’s been limited to in-person events.

That’s why I’m so happy to announce a new site, Into Rust. Into Rust contains screencasts of many of these slides, and in particular the ones I consider most important: those that cover Ownership and Borrowing, which I think is the best place to start teaching Rust. I’ve divided up the material into roughly 30-minute screencasts so that they should be relatively easy to consume in one sitting – each also has some associated exercises to help make your knowledge more concrete.

I want to give special thanks to Liz Baillie, who did all the awesome artwork on the site.

Cameron Kaiser: gdb7 patchlevel 4 available

First, in the "this makes me happy" dept.: a Commodore 64 in a Gdansk, Poland auto repair shop still punching a clock for the last quarter-century. Take that, Phil Schiller, you elitist pig.

Second, as promised, patchlevel 4 of the TenFourFox debugger (our hacked version of gdb) is available from SourceForge. This is a minor bugfix update that wallpapers a crash when doing certain backtraces or other operations requiring complex symbol resolution. However, the minimum patchlevel to debug TenFourFox is still 2, so this upgrade is merely recommended, not required.

Yunier José Sosa Vázquez: How-to: Prevent Firefox from wearing out your SSD

Solid-state drives, or SSDs as they are commonly known, keep gaining ground on traditional hard drives, and practically anyone buying a modern computer will choose one of these storage units instead of a mechanical disk. However, SSDs are not eternal: their lifespan is limited by the number of write operations specified by their manufacturers.

With that in mind, we should be careful and stay informed about the time “left” on our SSD so we don’t suddenly lose the data stored on it. If you want to know more about the topic, you can read this article published on Blogthinkbig.

According to a study by STH, the Firefox and Chrome browsers wear down SSDs by writing approximately 10 GB every day, and the main culprit is the generation of the recovery.js files used to save the current session’s data in case of an unexpected crash or shutdown.

The good news for Firefox users is that this value can be changed via the about:config page. In Chrome it is not possible to adjust this setting.

In Firefox, do the following:

  1. Open the about:config page and accept the warning.
  2. Locate the browser.sessionstore.interval preference and change its value to the desired one. You will see the value 15000, which means that a new recovery.js is generated every 15 seconds, so simply change that number to a larger one. 1000 equals 1 second.
  3. If you want Firefox not to store the state of what you are doing (not recommended), change the browser.sessionhistory.max_entries preference to 0. By default, 50 entries are kept in the history.

I hope the article was useful to all of you who have an SSD.

Source: omicrono

Mitchell Baker: Mozilla Hosting the U.S. Commerce Department Digital Economy Board of Advisors

Today Mozilla is hosting the second meeting of the Digital Economy Board of Advisors of the United States Department of Commerce, of which I am co-chair.

Support for the global open Internet is the heart of Mozilla’s identity and strategy. We build for the digital world. We see and understand the opportunities it offers, as well as the threats to its future. We live in a world where a free and open Internet is not available to all of the world’s citizens; where trust and security online cannot be taken for granted; and where independence and innovation are thwarted by powerful interests as often as they are protected by good public policy. As I noted in my original post on being named to the Board, these challenges are central to the “Digital Economy Agenda,” and a key reason why I agreed to participate.

Department of Commerce Secretary Pritzker noted earlier this year: “we are no longer moving toward the digital economy. We have arrived.” The purpose of the Board is to advise the Commerce Department in responding to today’s new status quo. Today technology provides platforms that open up new opportunities for entrepreneurs. Yet not everyone shares in the benefits. The changing nature of work must also be better understood. And we struggle to measure these gains, making it harder to design policies that maximize them, and harder still to defend the future of our digital economy against myopic and reactionary interests.

The Digital Economy Board of Advisors was convened to explore these challenges, and provide expert advice from a range of sectors of the digital economy to the Commerce Department as it develops future policies. At today’s meeting, working groups within the Board will present their initial findings. We don’t expect to agree on everything, of course. Our goal is to draw out the shared conclusions and direction to provide a balanced, sustainable, durable basis for future Commerce Department policy processes. I will follow up with another post on this topic shortly.

Today’s meeting is a public meeting. There will be two live streams: one for the 8:30 am-12:30 pm PT pre-lunch session and one for the 1:30-3:00 pm PT post-lunch session. We welcome you to join us.

Although the Board has many more months left in its tenure, I can see a trend towards healthy alignment between our mission and the outcomes of the Board’s activities. I’m proud to serve as co-chair of this esteemed group of individuals.

Tim Taubert: TLS Version Intolerance

A few weeks ago I listened to Hanno Böck talk about TLS version intolerance at the Berlin AppSec & Crypto Meetup. He explained how with TLS 1.3 just around the corner there again are growing concerns about faulty TLS stacks found in HTTP servers, load balancers, routers, firewalls, and similar software and devices.

I decided to dig a little deeper and will use this post to explain version intolerance, how version fallbacks work and why they’re insecure, as well as describe the downgrade protection mechanisms available in TLS 1.2 and 1.3. It will end with a look at version negotiation in TLS 1.3 and a proposal that aims to prevent similar problems in the future.

What is version intolerance?

Every time a new TLS version is specified, browsers usually are the fastest to implement and update their deployments. Most major browser vendors have a few people involved in the standardization process to guide the standard and give early feedback about implementation issues.

As soon as the spec is finished, and often far before that feat is done, clients will have been equipped with support for the new TLS protocol version and happily announce this to any server they connect to:

Client: Hi! The highest TLS version I support is 1.2.
Server: Hi! I too support TLS 1.2 so let’s use that to communicate.
[TLS 1.2 connection will be established.]

In this case the highest TLS version supported by the client is 1.2, and so the server picks it because it supports that as well. Let’s see what happens if the client supports 1.2 but the server does not:

Client: Hi! The highest TLS version I support is 1.2.
Server: Hi! I only support TLS 1.1 so let’s use that to communicate.
[TLS 1.1 connection will be established.]

This too is how it should work if a client tries to connect with a protocol version unknown to the server. Should the client insist on a specific version and not agree with the one picked by the server, it will have to terminate the connection.

Unfortunately, there are a few servers and more devices out there that implement TLS version negotiation incorrectly. The conversation might go like this:

Client: Hi! The highest TLS version I support is 1.2.
Server: ALERT! I don’t know that version. Handshake failure.
[Connection will be terminated.]

Or:

Client: Hi! The highest TLS version I support is 1.2.
Server: TCP FIN! I don’t know that version.
[Connection will be terminated.]

Or even worse:

Client: Hi! The highest TLS version I support is 1.2.
Server: (I don’t know this version so let’s just not respond.)
[Connection will hang.]

The same can happen with the infamous F5 load balancer that can’t handle ClientHello messages with a length between 256 and 512 bytes. Other devices abort the connection when receiving a large ClientHello split into multiple TLS records. TLS 1.3 might actually cause more problems of this kind due to more extensions and client key shares.

What are version fallbacks?

As browsers usually want to ship new TLS versions as soon as possible, more than a decade ago vendors saw a need to prevent connection failures due to version intolerance. The easy solution was to decrease the advertised version number by one with every failed attempt:

Client: Hi! The highest TLS version I support is 1.2.
Server: ALERT! Handshake failure. (Or FIN. Or hang.)
[TLS version fallback to 1.1.]
Client: Hi! The highest TLS version I support is 1.1.
Server: Hi! I support TLS 1.1 so let’s use that to communicate.
[TLS 1.1 connection will be established.]

A client supporting everything from TLS 1.0 to TLS 1.2 would start trying to establish a 1.2 connection, then a 1.1 connection, and if even that failed a 1.0 connection.
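
To make the dance concrete, here is a minimal Rust sketch of such a fallback loop (the handshake function is a made-up stand-in for a real connection attempt; this only illustrates the mechanism, it is not a recommendation):

#[derive(Clone, Copy, Debug)]
enum Version {
    Tls10,
    Tls11,
    Tls12,
}

// Try versions from highest to lowest until a handshake succeeds. Note
// that an alert, a TCP FIN and a timeout all look the same to this code:
// every failure means "retry with the next lower version".
fn connect_with_fallback<F>(mut try_handshake: F) -> Option<Version>
where
    F: FnMut(Version) -> bool,
{
    for &version in &[Version::Tls12, Version::Tls11, Version::Tls10] {
        if try_handshake(version) {
            return Some(version);
        }
    }
    None
}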

Why are these insecure?

What makes these fallbacks insecure is that the connection can be downgraded by a MITM, by sending alerts or TCP packets to the client, or blocking packets from the server. To the client this is indistinguishable from a network error.

The POODLE attack is one example where an attacker abuses the version fallback to force an SSL 3.0 connection. In response to this, browser vendors disabled version fallbacks to SSL 3.0, and then SSL 3.0 entirely, to prevent even up-to-date clients from being exploited. Insecure version fallback in browsers pretty much breaks the actual version negotiation mechanisms.

Version fallbacks have been disabled since Firefox 37 and Chrome 50. Browser telemetry data showed they were no longer necessary: after years, TLS 1.2 and correct version negotiation were deployed widely enough.

The TLS_FALLBACK_SCSV cipher suite

You might wonder if there’s a secure way to do version fallbacks, and other people did so too. Adam Langley and Bodo Möller proposed a special cipher suite in RFC 7507 that would help a client detect whether the downgrade was initiated by a MITM.

Whenever the client includes TLS_FALLBACK_SCSV {0x56, 0x00} in the list of cipher suites, it signals to the server that this is a repeated connection attempt, but this time with a version lower than the highest it supports, because previous attempts failed. If the server supports a higher version than the one advertised by the client, it MUST abort the connection.
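
As a rough sketch in Rust (version numbers in the TLS wire encoding, e.g. 0x0303 for TLS 1.2; the function name is made up for illustration), the server-side rule boils down to:

const TLS_FALLBACK_SCSV: u16 = 0x5600;

// Per RFC 7507: if the ClientHello carries the fallback SCSV and the
// server supports a higher protocol version than the client advertised,
// the server aborts with an inappropriate_fallback alert.
fn is_inappropriate_fallback(
    client_version: u16,
    server_max_version: u16,
    cipher_suites: &[u16],
) -> bool {
    cipher_suites.contains(&TLS_FALLBACK_SCSV) && server_max_version > client_version
}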

The drawback here, however, is that a client, even if it implements fallback with a Signaling Cipher Suite Value, doesn’t know the highest protocol version supported by the server, nor whether the server implements a TLS_FALLBACK_SCSV check at all. Common web servers will likely be updated faster than others, but router or load balancer manufacturers might not deem it important enough to implement and ship updates for.

Signatures in TLS 1.2

It’s been long known to be problematic that signatures in TLS 1.2 don’t cover the list of cipher suites and other messages sent before server authentication. They sign the ephemeral DH params sent by the server and include the *Hello.random values as nonces to prevent replay attacks:

h = Hash(ClientHello.random + ServerHello.random + ServerParams)

Signing at least the list of cipher suites would have helped prevent downgrade attacks like FREAK and Logjam. TLS 1.3 will sign all messages before server authentication, even though it makes Transcript Collision Attacks somewhat easier to mount. With SHA-1 not allowed for signatures that will hopefully not become a problem anytime soon.

Downgrade Sentinels in TLS 1.3

With neither the client version nor its cipher suites (for the SCSV) included in the hash signed by the server’s certificate in TLS 1.2, how do you secure TLS 1.3 against downgrades like FREAK and Logjam? Stuff a special value into ServerHello.random.

The TLS WG decided to put static values (sometimes called downgrade sentinels) into the server’s nonce sent with the ServerHello message. TLS 1.3 servers responding to a ClientHello indicating a maximum supported version of TLS 1.2 MUST set the last eight bytes of the nonce to:

0x44 0x4F 0x57 0x4E 0x47 0x52 0x44 0x01

If the client advertises a maximum supported version of TLS 1.1 or below the server SHOULD set the last eight bytes of the nonce to:

0x44 0x4F 0x57 0x4E 0x47 0x52 0x44 0x00

If not connecting with a downgraded version, a client MUST check whether the server nonce ends with either of the two sentinels and, in such a case, abort the connection. The TLS 1.3 spec here introduces an update to TLS 1.2 that requires servers and clients to update their implementation.
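
In code, the client-side check is small; here is a minimal Rust sketch, assuming we only ever see 32-byte server nonces and compare their last eight bytes:

// The two downgrade sentinels ("DOWNGRD" followed by 0x01 or 0x00).
const DOWNGRADE_TLS12: [u8; 8] = [0x44, 0x4F, 0x57, 0x4E, 0x47, 0x52, 0x44, 0x01];
const DOWNGRADE_TLS11: [u8; 8] = [0x44, 0x4F, 0x57, 0x4E, 0x47, 0x52, 0x44, 0x00];

// Returns true if a client that did not intend to downgrade must abort
// the connection after seeing this ServerHello.random value.
fn must_abort(server_random: &[u8; 32]) -> bool {
    server_random[24..] == DOWNGRADE_TLS12 || server_random[24..] == DOWNGRADE_TLS11
}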

Unfortunately, this downgrade protection relies on a ServerKeyExchange message being sent and is thus of limited value. Static RSA key exchanges are still valid in TLS 1.2, and unless the server admin disables all non-forward-secure cipher suites the protection can be bypassed.

The comeback of insecure fallbacks?

Current measurements show that enabling TLS 1.3 by default would break a significant fraction of TLS handshakes due to version intolerance. According to Ivan Ristić, as of July 2016, 3.2% of servers from the SSL Pulse data set reject TLS 1.3 handshakes.

This is a very high number and would affect way too many people. Alas, with TLS 1.3 we have only limited downgrade protection for forward-secure cipher suites. And that is assuming that most servers either support TLS 1.3 or update their 1.2 implementations. TLS_FALLBACK_SCSV, if supported by the server, will help as long as there are no attacks tampering with the list of cipher suites.

The TLS working group has been thinking about how to handle intolerance without bringing back version fallbacks, and there might be light at the end of the tunnel.

Version negotiation with extensions

The next version of the proposed TLS 1.3 spec, draft 16, will introduce a new version negotiation mechanism based on extensions. The current ClientHello.version field will be frozen to TLS 1.2, i.e. {3, 3}, and renamed to legacy_version. Any number greater than that MUST be ignored by servers.

To negotiate a TLS 1.3 connection the protocol now requires the client to send a supported_versions extension. This is a list of versions the client supports, in preference order, with the most preferred version first. Clients MUST send this extension, as servers are required to negotiate TLS 1.2 if it’s not present. Any version numbers unknown to the server MUST be ignored.
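
A sketch of the server-side selection logic, based on my reading of the draft (wire-encoded version numbers again, e.g. 0x0303 for TLS 1.2), might look like this in Rust:

// Walk the client's preference-ordered list and pick the first version
// the server also supports; unknown values are simply skipped. If the
// extension is absent, the server falls back to negotiating TLS 1.2.
fn select_version(client_supported: Option<&[u16]>, server_supported: &[u16]) -> Option<u16> {
    match client_supported {
        None => Some(0x0303),
        Some(versions) => versions
            .iter()
            .cloned()
            .find(|v| server_supported.contains(v)),
    }
}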

This still leaves potential problems with big ClientHello messages or choking on unknown extensions unaddressed, but according to David Benjamin the main problem is ClientHello.version. We will hopefully be able to ship browsers that have TLS 1.3 enabled by default, without bringing back insecure version fallbacks.

However, it’s not unlikely that implementers will screw up even the new version negotiation mechanism and we’ll have similar problems a few years down the road.

GREASE-ing the future

David Benjamin, following Adam Langley’s advice to have one joint and keep it well oiled, proposed GREASE (Generate Random Extensions And Sustain Extensibility), a mechanism to prevent extensibility failures in the TLS ecosystem.

The heart of the mechanism is to have clients inject “unknown values” into places where capabilities are advertised by the client, and the best match selected by the server. Servers MUST ignore unknown values to allow introducing new capabilities to the ecosystem without breaking interoperability.

These values will be advertised pseudo-randomly to break misbehaving servers early in the implementation process. Proposed injection points are cipher suites, supported groups, extensions, and ALPN identifiers. Should the server respond with a GREASE value selected in the ServerHello message the client MUST abort the connection.
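
The reserved values follow a simple pattern: sixteen code points of the form 0x0A0A, 0x1A1A, and so on up to 0xFAFA. A small Rust sketch, based on my reading of the proposal, of generating and recognizing them:

// Map an index 0..16 to one of the sixteen reserved GREASE code points.
fn grease_value(index: u8) -> u16 {
    let n = u16::from(index % 16);
    (n << 12) | (n << 4) | 0x0A0A
}

// A client aborts if the server selects and echoes back a GREASE value.
fn is_grease(value: u16) -> bool {
    (value >> 8) == (value & 0xFF) && (value & 0x0F0F) == 0x0A0A
}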

Kim Moir: Beyond the Code 2016 recap

I've had the opportunity to attend the Beyond the Code conference for the past two years. This year the venue moved to Toronto; the last two events had been held in Ottawa. The conference is organized by Shopify, who again managed to put together a really great speaker lineup this year on a variety of interesting topics. It was a two-track conference, so I'll summarize some of the talks I attended.

The conference started off with Anna Lambert of Shopify welcoming everyone to the conference.

The first speaker was Atlee Clark, Director of App and Developer relations at Shopify who discussed the wheel of diversity.

The wheel of diversity is a way of mapping the characteristics that you're born with (age, gender, gender expression, race or ethnicity, national origin, mental/physical ability), along with those that you acquire through life (appearance, education, political belief, religion, income, language and communication skills, work experience, family, organizational role). When you look at your team, you can map how diverse it is by assigning different colours to different characteristics. (Of course, some of these characteristics are personal and might not be shared with others.) If you map your team and it's mostly the same colour, then you probably will not bring different perspectives together when you work, because you all have similar backgrounds and life experiences. This is especially important when developing products.

This wheel applies to hiring too. You want to have different perspectives when you're interviewing someone. Atlee mentioned that when she was hiring for a new role, she mapped out the characteristics of the people who would be conducting the hiring interviews and found there was a lot of yellow.

So she switched up the team that would be conducting the interviews to include people with more diverse perspectives.

She finished by stating that this is just a tool, keep it simple, and practice makes it better. 

The next talk was by Erica Joy, who is a build and release engineer at Slack, as well as a diversity advocate.  I have to admit, when I saw she was going to speak at Beyond the Code, I immediately pulled out my credit card and purchased a conference ticket.  She is one of my tech heroes.  Not only did she build the build and release pipeline at Slack from the ground up, she is an amazing writer and advocate for change in the tech industry.   I highly recommend reading everything she has written on Medium, her chapter in Lean Out and all her discussions on twitter.  So fantastic.

Her talk at the conference was "Building a Diverse Corporate Culture: Diversity and Inclusion in Tech". She talked about how literally thousands of companies say they value inclusion and diversity. However, few talk about what they are willing to give up in order to achieve it. Are you willing to give up your window seat with a great view? Or something else, so that others can be paid fairly? She mentioned that change is never free. People need both mentorship and sponsorship in order to progress in their career.

I really liked her discussion around hiring and referrals. She stated that when you hire people you already know, you're probably excluding equally or better qualified people that you don't know. By default, women of colour are underpaid.

Pay gap for white women, African American women and Hispanic women compared to a white man in the United States.

Some companies have referral systems that give larger referral bonuses for referring people who are underrepresented in tech; she gave the example of Intel, which has this in place. This is a way to incentivize your referral system so you don't just hire all your white friends.

The average white American has 91 white friends and one black friend, so it's not very likely that they will refer non-white people. Not sure what the numbers are like in Canada but I'd guess that they are quite similar.
  
In addition, don't ask people to work for free, to speak at conferences or do diversity and inclusion work.  Her words were "We can't pay rent with exposure".

Spend time talking to diversity and inclusion experts. There are people that have spent their entire lives conducting research in this area, and you can learn from their expertise. Meritocracy is a myth; we are just lucky to be in the right place at the right time. She mentioned that her colleague Duretti Hirpa at Slack points out the need for accomplices, not allies: people who will actually speak up for others, so that people feeling pain or facing a difficult work environment don't have to do all the work of fighting for change.

In most companies, there aren't escalation paths for human issues either.  If a person is making sexist or racist remarks, shouldn't that be a firing offense? 

If people were really working hard on diversity and inclusion, we would see more women and people of colour on boards and in leadership positions.  But we don't.

She closed with a quote from Beyonce:

"If everything was perfect, you would never learn and you would never grow"

💜💜💜

The next talk I attended was by Coraline Ada Ehmke, who is an application engineer at Github.  Her talk was about the "Broken Promise of Open Source".  Open source has the core principles of the free exchange of ideas, success through collaboration, shared ownership and meritocracy.

However, meritocracy is a myth.  Currently, only 6% of Github users are women.  The environment can be toxic, which drives a lot of people away.  She mentioned that we don't have numbers for diversity in open source other than for women, but Github plans to do a survey soon to try to acquire more data.

Gabriel Fayant of the Assembly of Seven Generations gave a talk entitled "Walking in Both Worlds, traditional ways of being and the world of technology".  I found this quite interesting; she talked about traditional ceremonies and how they promote the idea of living in the moment, and thus looking at your phone during a drum ceremony isn't living the full experience.  A question from the audience, from someone who works in the engineering faculty at the University of Toronto, was how we can work with indigenous communities to share our knowledge of technology and make youth producers of tech, not just consumers.

The next talk was by Sandi Metz, entitled "Madame Santi tells your future".  This was a totally fascinating look at the history of printing text from scrolls all the way to computers.

She gave the same talk at another conference earlier, so you can watch it here.  It described the progression of printing technology from 7000 years ago until today.  Each new technology disrupted the previous one, and it was difficult for those who worked on the previous technology to make the jump to work on the new one.

So according to Sandi, what is your future?
  • What you are working on now probably won't be relevant in 10 years
  • You will all die
  • All the people you love will die
  • Your body will start to fail you
  • Life is short
  • Tell people that you love them
  • Guard your health
  • Spend time with your kids
  • Get some exercise (she loves to bike)
  • We are bigger than tech
  • Community and schools need help
  • She gave the example of Habitat for Humanity where she volunteers
  • These organizations also need help to write code, they might not have the knowledge or time to do it right

The last talk I attended was by Sabrina Geremia of Google Canada.  She talked about the factors that encourage a girl to consider computer science (encouragement, career perception, self-perception and academic exposure).

I found that this talk was interesting but it focused a bit too much on the pipeline argument - that the major problem is that girls are not enrolling in CS courses.  If you look at all the problems with environment, culture, lack of pay equity and opportunities for promotion due to bias, maybe choosing a career where there is more diversity is a better choice.  For instance, law, accounting and medicine have much better numbers for these issues, despite there still being an imbalance.

At the end of the day, there was a panel to discuss diversity issues:

Moderator: Ariti Sharma, Shopify. Panelists: Mohammed Asaduallah, Format; Katie Krepps, Capital One Canada; Lateesha Thomas, Dev Bootcamp; Ramya Raghavan, Google; Kara Melton, TWG; Gladstone Grant, Microsoft Canada.
Some of my notes from the panel:
  • Be intentional about seeking out talent
  • Fix culture to be more diverse
  • Recruit from bootcamps. Better diversity today.  Don't wait for universities to change the ratios.
  • Environment impacts retention
  • Conduct an engagement survey to see if underrepresented groups feel that their voices are being heard.
  • There is a need for sponsorship, not just mentoring.  Define a role that doesn't exist at the company.  A sponsor can make that role happen by advocating for it at higher levels
  • Mentors do better if matched with demographics.  They will realize the challenges that you will face in the industry better than a white man who has never directly experienced sexism or racism.
  • Sponsors tend to be men due to the demographics of our industry
  • At Microsoft, when you reach a certain level you are expected to mentor an underrepresented person
  • Look at compensation and representation across diverse groups
  • Attrition is normal; it varies by region and is especially acute in San Francisco.
  • Women leave companies at 2x the rate of men due to culture
  • You shouldn't stay at a place if you are burnt out, take care of yourself.

Compared to the previous two iterations of this conference, it seemed that this time it focused a lot more on solutions to have more diversity and inclusion in your company. The previous two conferences I attended seemed to focus more on technical talks by diverse speakers.

As a side note, there were a lot of Shopify folks in attendance because they ran the conference.  They sent a bus of people from their head office in Ottawa to attend it.  I was really struck by how diverse some of the teams were.  I met a group of women who described themselves as a team of "five badass women developers" 💯 As someone who has been the only woman on her team for most of her career, this was beautiful to see and gave me hope for the future of our industry.   I've visited the Ottawa Shopify office several times (Mr. Releng works there) and I know that the representation at their office doesn't match the demographics of the Beyond the Code attendees, which tended to be more women and people of colour.  But still, it is refreshing to see a company making a real effort to make their culture inclusive.  I've read that it is easier to make your culture inclusive from the start, rather than trying to make difficult culture changes years later when your teams are all homogeneous. So kudos to them for setting an example for other companies.

Thank you Shopify for organizing this conference, I learned a lot and I look forward to the next one!

Mozilla Security Blog: Mitigating Logjam: Enforcing Stronger Diffie-Hellman Key Exchange

In response to recent developments attacking Diffie-Hellman key exchange (https://weakdh.org/) and to protect the privacy of Firefox users, we have increased the minimum key size for TLS handshakes using Diffie-Hellman key exchange to 1023 bits. A small number of servers are not configured to use strong enough keys. If a user attempts to connect to such a server, they will encounter the error “ssl_error_weak_server_ephemeral_dh_key”.

Support.Mozilla.Org: What’s Up with SUMO – 29th September

Hello, SUMO Nation!

Change is a constant, and Mozilla is no different. Bigger and smaller changes are coming up across many a project, including SUMO – and we need your help figuring out what they should be like. Learn more about the ways you can make us better below!

Welcome, new contributors!

If you just joined us, don’t hesitate – come over and say “hi” in the forums!

Contributors of the week

We salute you!

Don’t forget that if you are new to SUMO and someone helped you get started in a nice way you can nominate them for the Buddy of the Month!

SUMO Community meetings

  • LATEST ONE: 28th of September – you can read the notes here and see the video at AirMozilla.
  • NEXT ONE: happening on the 5th of October!
  • If you want to add a discussion topic to the upcoming meeting agenda:
    • Start a thread in the Community Forums, so that everyone in the community can see what will be discussed and voice their opinion here before Wednesday (this will make it easier to have an efficient meeting).
    • Please do so as soon as you can before the meeting, so that people have time to read, think, and reply (and also add it to the agenda).
    • If you can, please attend the meeting in person (or via IRC), so we can follow up on your discussion topic during the meeting with your feedback.

Community

Platform

Social

  • Thank you for the SUMO Day today! It was a record day for the number of people logging in – you rock!
  • The new training for filtering in widgets is available here:

    http://screencast.com/t/llm6PF5rI2 – it also shows the new support thread-specific inbox for the dashboard.

  • Some issues popping up nowadays are startup crashes – caused by AVG and WebSense in particular.
  • Inactive accounts may be removed soon, so if you’re still active, please log in this week. If you no longer have an account, please get in touch with Rachel!
  • Want to join us? Please email Rachel and/or Madalina to get started supporting Mozilla’s product users on Facebook and Twitter. We need your help! Use the step-by-step guide here. Take a look at some useful videos:

Support Forum

Knowledge Base & L10n

  • We are 5 weeks before the next release / 1 week after the current release. What does that mean? (Reminder: we are following the process/schedule outlined here).
    • No work on next release content for KB editors or localizers 
    • All existing content is open for editing and localization as usual; please focus on localizing the most recent / popular content
  • Since pizza turned out to be a great success, if you have ideas on how to virtually gather your l10n teammates, contact me about that!

Firefox

  • for Android
    • Version 50 is slated to come out on November 8th. It should bring video viewing and controlling improvements.
  • for Desktop
    • Version 50 (November 8th as well) will bring the following goodies:
      • WebRTC – full duplex audio streams
      • Tracking Protection supporting Do Not Track
      • Electrolysis – e10s RTL for Windows and Mac
      • First e10s sandbox for Mac OS X and Windows
      • Find in page with a mode to search for whole words only
      • New preference for cycling tabs using Ctrl + Tab
      • Improved printing options via the Reader Mode
  • for iOS
    • Still quiet… Keep using 5.0!

…and that’s it for this week! Remember that we <3 you all for being there for the users when it matters most! Keep rocking the helpful web!

Soledad Penades: Talking about Web Audio in WeCodeSign Podcast

I recorded an episode for the WeCodeSign podcast. It’s in Spanish!

You can download / listen from their website.

We actually talked about more than Web Audio; there’s a list of links to things we mentioned during the episode. From progressive enhancement to Firefox’s Web Audio editor, to the old PCMania tracking stories, to Firefox for iOS… lots of things!

I was really pleased with the experience. The guys were really good at planning, and did a great job editing the podcast as well (and they use Audacity!).

Totally recommended—in fact I suggested that both my fantastic colleague Belén and the very cool Buriticá are interviewed at some point in the future.

I’d love to hear what they have to say!

Throwback to the last time I recorded a podcast in Spanish – at least this time I wasn’t fighting a massive cold! 🙃


Air Mozilla: Reps Weekly Meeting

This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

Mozilla Addons Blog: WebExtensions in Firefox 51

Firefox 51 landed in Developer Edition this week, so we have another update on WebExtensions for you. In this update, we’re making it easier for you to port your existing add-ons to WebExtensions. In addition to being fully compatible with multiprocess Firefox, WebExtensions are becoming the standard for add-on development.

Embedded WebExtensions

In Firefox Developer Edition, you can now embed a WebExtensions add-on inside an existing SDK or bootstrapped add-on.

This is especially useful to developers of SDK or bootstrapped add-ons who want to start migrating to WebExtensions and take advantage of new APIs like Native Messaging, but can’t fully migrate yet. It’s also useful for developers who want to complete data migration towards WebExtensions, and who want to take parts of their add-on that are not compatible with multiprocess Firefox and make them compatible.

For more documentation on this, please head over to MDN or check out some examples.

If you need help porting to WebExtensions, please start with the compatibility checker, and check out these resources.

Manifest Change

Because of confusion around the use of strict_min_version in WebExtensions manifests, we’ve prevented the use of * in strict_min_version, for example 48.* is no longer valid. If you upload an add-on to addons.mozilla.org we’ll warn you of that fact.

API Changes

The clipboardWrite permission is now enabled, which removes the need to be in a user gesture. It is usable from extension tabs, popups and content scripts.

When a WebExtensions add-on is uninstalled, any local storage is now cleared. If you’d like to persist data across an uninstall then you can use the upcoming sync storage.

The management API now supports the uninstallSelf and getSelf methods. The idle.queryState API has been updated to accurately reflect the state; previously it always returned the value “idle”.

In the webRequest API, onBeforeRequest is now supported in Firefox Nightly and Developer Edition. There are some platform changes that are required to get that to land in a Release version of Firefox.

Developers have been testing out Native messaging and a couple of bugs were filed and fixed on that. New, more detailed, documentation has been written. One of the useful pieces of feedback involved the performance of the round-trip time, and that has now improved.

There have been a few improvements to the appearance of popup windows, including the popup arrow, the corners of the popup and reduced flicker in the animation. Here’s a before and after:

popup-before

popup-after

Out of process extensions

Now that the majority of the work on multiprocess Firefox has been completed, we are looking ahead to the many improvements it can bring. One of them is allowing WebExtensions to run in a separate process. This process-sandboxing of add-ons will bring clear performance and security benefits.

But before we can do that, there is quite a bit of work that needs to be done. The main tracking bug lists some of these tasks. There is also a video of Rob Wu presenting the work he has done on this. There currently isn’t a timeline for when this will be landed, but the work is progressing.

Recognition

We’d also like to give a thank you to four new contributors to WebExtensions, who’ve helped with this release. Thanks to sj, Jorg K, fiveNinePlusR and Tomislav.

Update: link to Rob’s presentation fixed.

Firefox Nightly: Firefox Nightly got its “What’s New” page back last week!

Years ago, every time we were releasing a new version of Firefox and bumped the version number for all Firefox channels, nightly builds were also getting a “What’s New” page displayed at restart after that major version number change (this old page is still available on the WayBack Machine and you can even see a video with ex-QA team lead Juan Becerra).

Then, at some point (Bug 748503), the call to that What’s New page was redirected to the First Run page. It made sense at the time as nobody was actively maintaining that content and it had not been updated in years, but it was also shutting down one of the few direct communication channels with our Nightly users.

Kohei Yoshino and I worked on resurrecting that page and turning it into a simple yet effective communication channel with our Nightly users, where they can get news about what’s new in the Nightly world.

What's New page for Nightly

Unlike the old page we had, this new updated version is integrated correctly into the mozilla.org framework (bedrock), which means that we inherit the nice templates they create and have a workflow that allows localization of the page (see the French and Japanese versions) and we might even be able to provide conditional content based on geolocation in the future.

We have created this page with the objective of increasing participation and communication with our core technical users, and we intend to update it periodically and make it useful not only to Mozilla, with calls for feedback and testing of recently landed features, but also to Nightly users (how about having a monthly power-user tip there, for example?).

If you have ideas on what information could be part of this What’s New page, don’t hesitate to leave a comment on the blog or to reach out to me directly (pascal At mozilla Dot com)!

CREDITS

Many thanks to Kohei for his great work on the design and the quality of his code. Thanks to the rest of the Release Management team and in particular to Liz Henry and Marcia Knous for helping fix my English! Many thanks to the mozilla.org webdev team for helping with reviews and suggesting nice visual tricks such as the responsive multi-column layout and improved typography tips for readability. Finally, thanks to the localizers that took the time to translate that page in a couple of days before we shipped it even though the expected audience is very small!

BONUS

We were asked via our @FirefoxNightly Twitter account if we could provide the nice background on the What’s New page as a wallpaper for desktop. Instead of providing the file, I am showing you in the following video tutorial how you can do it by yourself with Firefox Nightly Developer Tools, enjoy hacking with your browser and the Web, that’s what Nightly is for!

Michael Kaply: Keyword Search is No Longer Feeling Lucky

I’m getting a lot of reports that the Google “I’m Feeling Lucky” option is no longer working with Keyword Search. Unfortunately Google seems to have broken this in their latest search update even though they’ve left the button on the homepage. There’s nothing I can really do to work around it at this time.

If you want a similar feature, you can switch to DuckDuckGo and use their “I’m Feeling Ducky” option.

Daniel Stenberg: 25,000 curl questions on stackoverflow

Over time, I’ve reluctantly come to terms with the fact that a lot of the questions and answers about curl are not handled on the mailing lists we have set up in the project itself.

A primary such external site with curl related questions is of course stackoverflow – hardly news to programmers of today. The questions tagged with curl are of course only a very tiny fraction of the vast amount of questions and answers that accumulate on that busy site.

The pile of questions tagged with curl on stackoverflow has just surpassed the staggering number of 25,000. Of course, these questions involve people who ask about particular curl behaviors (and a large portion is about PHP/CURL), but there’s also a significant number of questions where curl is only used to do something, and that other something is actually what the question is about. And ‘libcurl’ is used as a separate tag, often independently of the ‘curl’ one; libcurl is tagged on almost 2,000 questions.

But still. 25,000 questions. Wow.

I visit that site every so often and answer some questions, but I often end up feeling a great “distance” between me and the questions there, and I have a hard time bridging that gap. Also, stackoverflow the site and its format aren’t really suitable for debugging or solving problems within curl, so I often end up trying to get the user to move over and file an issue on curl’s github page or discuss the curl problem on a mailing list instead – forums more suitable for the plenty of back-and-forth needed before the solution or fix is figured out.

Now, any bets for how long it takes until we reach 100K questions?

Niko Matsakis: Distinguishing reuse from override

In my previous post, I started discussing the idea of intersection impls, which are a possible extension to specialization. I am specifically looking at the idea of making it possible to add blanket impls to (e.g.) implement Clone for any Copy type. We saw that intersection impls, while useful, do not enable us to do this in a backwards compatible way.

Today I want to dive a bit deeper into specialization. We’ll see that specialization actually couples together two things: refinement of behavior and reuse of code. This is no accident, and it’s normally a natural thing to do, but I’ll show that, in order to enable the kinds of blanket impls I want, it’s important to be able to tease those apart somewhat.

This post doesn’t really propose anything. Instead it merely explores some of the implications of having specialization rules that are not based purely on subsets of types, but instead go into other areas.

Requirements for backwards compatibility

In the previous post, my primary motivating example focused on the Copy and Clone traits. Specifically, I wanted to be able to add an impl like the following (we’ll call it impl A):

impl<T: Copy> Clone for T { // impl A
    default fn clone(&self) -> Self {
        *self
    }
}

The idea is that if I have a Copy type, I should not have to write a Clone impl by hand. I should get one automatically.

The problem is that there are already lots of Clone impls in the wild (in fact, every Copy type has one, since Copy is a subtrait of Clone, and hence implementing Copy requires implementing Clone too). To be backwards compatible, we have to do two things:

  • continue to compile those Clone impls without generating errors;
  • give those existing Clone impls precedence over the new one.

The last point may not be immediately obvious. What I’m saying is that if you already had a type with a Copy and a Clone impl, then any attempts to clone that type need to keep calling the clone() method you wrote. Otherwise the behavior of your code might change in subtle ways.

So for example imagine that I am developing a widget crate with some types like these:

struct Widget<T> { data: Option<T> }

impl<T: Copy> Copy for Widget<T> { } // impl B

impl<T: Clone> Clone for Widget<T> { // impl C
    fn clone(&self) -> Widget<T> {
        Widget {
            data: self.data.clone()
        }
    }
}

Then, for backwards compatibility, we want that if I have a variable widget of type Widget<T> for any T (including cases where T: Copy, and hence Widget<T>: Copy), then widget.clone() invokes impl C.

Thought experiment: Named impls and explicit specialization

For the purposes of this post, I’d like to engage now in a thought experiment. Imagine that, instead of using type subsets as the basis for specialization, we gave every impl a name, and we could explicitly specify when one impl specializes another using that name. When I say that an impl X specializes an impl Y, I mean primarily that items in the impl X override items in impl Y:

  • When we go looking for an associated item, we use the one in X first.

However, in the specialization RFC as it currently stands, specializing is also tied to reuse. In particular:

  • If there is no item in X, then we go looking in Y.

The point of this thought experiment is to show that we may want to separate these two concepts.

To avoid inventing syntax, I’ll use a #[name] attribute to specify the name of an impl and a #[specializes] attribute to declare when one impl specializes another. So we might declare our two Clone impls from the previous section as follows:

#[name = "A"]
impl<T: Copy> Clone for T {...}

#[name = "B"]
#[specializes = "A"]
impl<T: Clone> Clone for Widget<T> {...}

Interestingly, it turns out that this scheme of using explicit names interacts really poorly with the reuse aspects of the specialization RFC. The Clone trait is kind of too simple to show what I mean, so let’s consider an alternative trait, Dump, which has two methods:

trait Dump {
    fn display(&self);
    fn debug(&self);
}

Now imagine that I have a blanket implementation of Dump that applies to any type that implements Debug. It defines both display and debug to print to stdout using the Debug trait. Let’s call this impl D.

#[name = "D"]
impl<T> Dump for T
    where T: Debug,
{
    default fn display(&self) {
        self.debug()
    }

    default fn debug(&self) {
        println!("{:?}", self);
    }
}

Now, maybe I’d like to specialize this impl so that if I have a type that also implements Display, then display prints using the Display trait instead. I don’t want to change the behavior for debug, so I leave that method unchanged. This is sort of analogous to subtyping in an OO language: I am refining the impl for Dump by tweaking how it behaves in certain scenarios. We’ll call this impl E.

#[name = "E"]
#[specializes = "D"]
impl<T> Dump for T
    where T: Display + Debug,
{
    fn display(&self) {
        println!("{}", self);
    }
}

So far, everything is fine. In fact, if you just remove the #[name] and #[specializes] annotations, this example would work with specialization as currently implemented. But imagine that we did a slightly different thing. Imagine we wrote impl E but without the requirement that T: Debug (everything else is the same). Let’s call this variant impl F.

#[name = "F"]
#[specializes = "D"]
impl<T> Dump for T
    where T: Display,
{
    fn display(&self) {
        println!("{}", self);
    }
}

Now we no longer have the subset of types property. Because of the #[specializes] annotation, impl F specializes impl D, but in fact it applies to an overlapping, but different set of types (those that implement Display rather than those that implement Debug).

But losing the subset of types property makes the reuse in impl F invalid. Impl F only defines the display() method and it claims to inherit the debug() method from impl D. But how can it do that? The code in impl D was written under the assumption that the type implements Debug, and it uses methods from the Debug trait. Clearly we can’t reuse that code, since if we did so we might not have the methods we need.

So the takeaway here is that if an impl A wants to reuse some items from impl B, then impl A must apply to a subset of impl B’s types. That guarantees that the item from impl B will still be well-typed inside of impl A.

What does this mean for copy and clone?

Interesting thought experiment, you are thinking, but how does this relate to `Copy` and `Clone`? Well, it turns out that if we ever want to be able to add things like an autoconversion impl between Copy and Clone (and Ord and PartialOrd, etc), we are going to have to move away from subsets of types as the sole basis for specialization. This implies we will have to separate the concept of when you can reuse (which requires a subset of types) from when you can override (which can be more general).

Basically, in order to add a blanket impl backwards compatibly, we have to allow impls to override one another in situations where reuse would not be possible. Let’s go through an example. Imagine that – at timestep 0 – the Dump trait was defined in a crate dump, but without any blanket impl:

// In crate `dump`, timestep 0
trait Dump {
    fn display(&self);
    fn debug(&self);
}

Now some other crate widget implements Dump for its type Widget, at timestep 1:

// In crate `widget`, timestep 1
extern crate dump;

struct Widget<T> { ... }

// impl G:
impl<T: Debug> Debug for Widget<T> {...}

// impl H:
impl<T> Dump for Widget<T> {
    fn display(&self) {...}
    fn debug(&self) {...}
}

Now, at timestep 2, we wish to add an implementation of Dump that works for any type that implements Debug (as before):

// In crate `dump`, timestep 2
impl<T> Dump for T // impl I
    where T: Debug,
{
    default fn display(&self) {
        self.debug()
    }

    default fn debug(&self) {
        println!("{:?}", self);
    }
}

If we assume that this set of impls will be accepted – somehow, under any rules – we have created a scenario very similar to our explicit specialization. Remember that we said in the beginning that, for backwards compatibility, we need to make it so that adding the new blanket impl (impl I) does not cause any existing code to change what impl it is using. That means that Widget<T>: Dump also needs to be resolved to impl H, the original impl from the crate widget: even if impl I also applies.

This basically means that impl H overrides impl I (that is, in cases where both impls apply, impl H takes precedence). But impl H cannot reuse from impl I, since impl H does not apply to a subset of the blanket impl’s types. Rather, these impls apply to overlapping but distinct sets of types. For example, the Widget impl applies to all Widget<T>, even in cases where T: Debug does not hold. But the blanket impl applies to i32, which is not a widget at all.

Conclusion

This blog post argues that if we want to support adding blanket impls backwards compatibly, we have to be careful about reuse. I actually don’t think this is a mega-big deal, but it’s an interesting observation, and one that wasn’t obvious to me at first. It means that subset of types will always remain a relevant criteria that we have to test for, no matter what rules we wind up with (which might in turn mean that intersection impls remain relevant).

The way I see this playing out is that we have some rules for when one impl specializes another. Those rules do not guarantee a subset of types, and in fact the impls may merely overlap. If, additionally, one impl matches a subset of the other’s types, then that first impl may reuse items from the other impl.

PS: Why not use names, anyway?

You might be thinking to yourself right now boy, it is nice to have names and be able to say explicitly what we specialized by what. And I would agree. In fact, since specializable impls must mark their items as default, you could easily imagine a scheme where those impls had to also be given a name at the same time. Unfortunately, that would not at all support my copy-clone use case, since in that case we want to add the base impl after the fact, and hence the extant specializing impls would have to be modified to add a #[specializes] annotation. Also, we tried giving impls names back in the day; it felt quite artificial, since they don’t have an identity of their own, really.

Comments

Since this is a continuation of my previous post, I’ll just re-use the same internals thread for comments.

Christian Heilmann: Quick tip: using modulo to re-start loops without the need of an if statement

the more you know

A few days ago Jake Archibald posted a JSBin example of five ways to center vertically in CSS, to stop the meme of “CSS is too hard and useless”. What I found really interesting in this example is how he animated showing the different examples (this being a CSS demo, I probably would’ve done a CSS animation and delays, but he wanted to support OldIE, hence the use of className instead of classList):

var els = document.querySelectorAll('p');
var showing = 0;
setInterval(function() {
  // this is easier with classlist, but meh:
  els[showing].className = els[showing].className.replace(' active', '');
  showing = (showing + 1) % 5;
  els[showing].className += ' active';
}, 4000);

The interesting part to me here is the showing = (showing + 1) % 5; line. This means that if showing is 4, showing becomes 0, thus starting the looping demo back from the first example. This is the remainder operator of JavaScript, giving you the remainder of dividing the first value by the second. So, in the case of (4 + 1) % 5, this is zero.

Whenever I used to write something like this, I’d do an if statement, like:

showing++;
if (showing === 5) { showing = 0; }

Using the remainder seems cleaner, especially when instead of the hard-coded 5, you’d just use the length of the element collection.

var els = document.querySelectorAll('p');
var all = els.length;
var c = 'active';
var showing = 0;
setInterval(function() {
  els[showing].classList.remove(c);
  showing = (showing + 1) % all;
  els[showing].classList.add(c);
}, 4000);

A neat little trick to keep in mind.

Chris McDonald: i-can-manage-it Weekly Update 2

A little over a week ago, I started this series about the game I’m writing. Welcome to the second installment. It took a little longer than a week to get around to writing. I wanted to complete the task I set out for myself at the end of my last post – determining what tile the user clicked on – before coming back and writing up my progress. But while we’re on the topic, the “weekly” will likely be a loose amount of time. I’ll aim for each weekend, but I don’t want guilt from not posting getting in the way of building the game.

Also, you may notice the name changed just a little bit. I decided to go with the self-motivating and cuter name of i-can-manage-it. The name better captures my state of mind when I’m building this. I just assume I can solve a problem and keep working on it until I understand how to solve it or why that approach is not as good as some other approach. I can manage building this game, you’ll be able to manage stuff in the game, we’ll all have a grand time.

So with the intro out of the way, let’s talk progress. I’m going to bullet point the things I’ve done and then discuss them in further detail below.

  • Learned more math!
  • Built a bunch of debugging tools into my rendering engine!
  • Can determine what tile the mouse is over!
  • Wrote my first special effect shader!

Learned more math!

If you are still near enough to high school to remember a good amount of the math from it and want to play with computer graphics, keep practicing it! So far I haven’t needed anything terribly advanced to do the graphics I’m currently rendering. In high school, my Algebra 2 class covered matrix math to a minor degree. Back then I didn’t realize that this was a start into linear algebra. Similarly, I didn’t consider all the angle and area calculations in geometry to be an important life lesson, just neat attributes of the world expressed in math.

In my last post I mentioned this blog post on 3d transformations, which talks about several but not necessarily all of the coordinate systems a game would have. So, I organized my world coordinate system – the coordinates that my map outputs and game rules use – so that it matched how X and Y change in OpenGL coordinates. X, as you’d expect, gets larger going toward the right of the screen. And if you’ve done much math or looked at graphs, you’ve seen demonstrations of Y getting larger going toward the top. OpenGL works this way, and so I made my map render this way.

You then apply a series of 4×4 matrices that correspond to things like moving the object to where it should be in world coordinates from its local coordinates, which are the coordinates that might be exported from 3d modelling or generated by the game engine. You also apply a 4×4 matrix for the window’s aspect ratio, zoom, pan and probably other stuff too.

That whole transform process I described above results in a bunch of points that aren’t even on the screen. OpenGL determines that by looking at points between -1 and 1 on each axis; anything outside of that range is culled, which means that the graphics card won’t put it on the screen.

I learned that a neat property of these matrices is that many of them are invertible, which means you can invert the matrix, apply it to a point on the screen, and get back where that point is in your world coordinates. If we wanted to know what object was at the center of the screen, we’d take that inverted matrix and multiply it by {x: 0, y: 0, z: 0, w: 1} (as far as I can tell, the w serves to make this math magic all work) and get back what world coordinates were at the center of the view. In my case, because my world is 2d, that means I can just calculate what tile is at that x and y coordinate and what is the topmost thing on that tile. If you had a 3d world, you’d then need to do something like ray casting, which sends a ray out from the specified point along the camera’s z axis and travels along it until it encounters something (or hits the back edge).

I spent an afternoon at the library and wrote a few example programs to test this inversion stuff, checking my pen-and-paper math using the cgmath crate. That way I could make sure I understood the math, as well as how to make cgmath do the same thing. I definitely ran into a few snags where I multiplied or added the wrong numbers when working on paper due to taking shortcuts. Taking the time to also write the math using code meant I’d catch these errors quickly and then correct how I thought about things. It was so productive and felt great. Also, being surrounded by knowledge in the library is one of my favorite things.
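
A check of the kind described might look like this with cgmath (a sketch; invert comes from cgmath’s SquareMatrix trait, and the numbers are made up):

extern crate cgmath;

use cgmath::{Matrix4, SquareMatrix, Vector3, Vector4};

fn main() {
    // The same sort of view matrix the renderer builds when panning.
    let view = Matrix4::from_translation(Vector3::new(-4.0f32, -3.0, 0.0));
    // invert() returns an Option, because a degenerate matrix
    // has no inverse.
    let inverse = view.invert().expect("view matrix should be invertible");
    // The center of the screen in OpenGL coordinates; w = 1 makes
    // the math work out for positions.
    let center = Vector4::new(0.0, 0.0, 0.0, 1.0);
    // Back to world coordinates: from here a 2d game can work out
    // which tile sits at that x and y.
    let world = inverse * center;
    println!("world coordinates at screen center: {:?}", world);
}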

Built a bunch of debugging tools into my rendering engine!

Throughout my career, I’ve found that the longer you expect a project to last, the more time you should spend on making sure it is debuggable. Since I expect this project to take up the majority of my spare hacking time for at least a few years, maybe even becoming the project I work on longer than any other project before it, I know that each debugging tool is probably a sound investment.

Every time I add a one-off debugging tool, I work on it for a while, getting it to a point where it solves my problem at hand. Then, once I’m ready to clean up my code, I think about how many other types of problems that debugging tool might solve and how hard it would be to make it easy to access in the future. Luckily, most debugging tools are more awesome when you can toggle them on the fly. If the tool is easy to toggle, I definitely leave it in until it causes trouble when adding a new feature.

As an example of adapting a tool to keep it: the FPS (frames per second) counter I built was logging the FPS to the console every second, and it had become a hassle when working on other problems because useful log lines would scroll by due to the FPS chatter. So I added a key to toggle the FPS printing, while still calculating it every frame. I’d thought about removing the calculation too, but decided I’ll probably want to track that metric for a long time, so it should probably be a permanent fixture and cost.
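
The shape of such a counter might look like the following sketch, using only std::time (the field and method names are mine):

use std::time::{Duration, Instant};

// Counts frames and reports once per second; printing can be toggled
// off by a key binding while the metric keeps being calculated.
struct FpsCounter {
    last_report: Instant,
    frames: u32,
    print_enabled: bool,
}

impl FpsCounter {
    fn new() -> FpsCounter {
        FpsCounter { last_report: Instant::now(), frames: 0, print_enabled: false }
    }

    // Called once per rendered frame.
    fn frame(&mut self) {
        self.frames += 1;
        if self.last_report.elapsed() >= Duration::from_secs(1) {
            if self.print_enabled {
                println!("FPS: {}", self.frames);
            }
            self.frames = 0;
            self.last_report = Instant::now();
        }
    }
}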

A tool I’m pretty proud of had to do with my tile map rendering. My tiles are rendered as a series of triangles, 2 per tile, stitched into a triangle strip, which is a series of points where every 3 consecutive points form a triangle. I also used degenerate triangles, which are triangles that have no area, so OpenGL doesn’t render them. I generate this triangle strip once, then save it and reuse it with some updated metadata on each frame.

I had some of the points mixed up, causing triangles that crossed the whole map and rendered over the tiles. I added the ability to switch to line drawing instead of filled triangles, which helped some of the debugging because I could see more of the triangles. Then I realized I could take a slice of the triangle strip and render only the first couple of points. By adding a couple of key bindings I could make that dynamic, so I could step through the vertices and verify the order they were drawn in. I immediately found the issue and felt how powerful this debug tool could be.
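
The stepping trick can be as simple as drawing only a prefix of the saved strip; a sketch of the idea (the types are stand-ins for whatever the engine actually stores):

// A saved triangle strip with an optional debug limit on how many
// vertices to draw; key bindings raise or lower the limit at runtime.
struct TileStrip {
    vertices: Vec<[f32; 2]>,
    debug_limit: Option<usize>,
}

impl TileStrip {
    // The slice that actually gets sent to the GPU this frame.
    fn visible(&self) -> &[[f32; 2]] {
        match self.debug_limit {
            Some(n) => &self.vertices[..n.min(self.vertices.len())],
            None => &self.vertices[..],
        }
    }

    // Bound to a key: show one more vertex of the strip.
    fn step_forward(&mut self) {
        let n = self.debug_limit.unwrap_or(0);
        self.debug_limit = Some((n + 1).min(self.vertices.len()));
    }
}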

Debugging takes up an incredible amount of time; I’m hoping that by making sure I’ve got a large toolkit, I’ll be able to quickly overcome any bug that comes up.

Can determine what tile the mouse is over!

I spent time learning and relearning the math mentioned in the first bullet point to solve this problem. But I found another bit of math I needed to do for this. Because of how older technology worked, mouse pointer coordinates start in the upper left of the screen and grow larger going toward the right (like OpenGL) and going toward the bottom (the opposite of OpenGL). Also, because OpenGL coordinates are a -1 to 1 range for the window, I needed to turn the mouse pointer coordinates into that range as well.
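
The conversion itself is only a couple of lines; a sketch, with width and height being the window size in pixels:

// Convert window pixel coordinates (origin top-left, Y grows down)
// into OpenGL's -1..1 range (origin center, Y grows up).
fn mouse_to_ndc(mouse_x: f32, mouse_y: f32, width: f32, height: f32) -> (f32, f32) {
    let x = mouse_x / width * 2.0 - 1.0;
    // Flip the Y axis exactly once, here, so later code never has to.
    let y = 1.0 - mouse_y / height * 2.0;
    (x, y)
}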

This inversion of the Y coordinate was a huge source of my problems for a couple of days. To make a long story short, I inverted the Y coordinate when I first got it, then I was inverting it again when trying to work out what tile the mouse was over. This was coupled with me inverting the Y coordinate in the triangle strip from my map, instead of using a matrix transform, to account for how I was drawing the map to the console. This combination of bugs meant that if I didn’t pan the camera at all, I could correctly get the tile the mouse was over. But as soon as I panned up or down, the Y coordinate would be off, moving in the opposite direction of the panning. It took me a long time to hunt this combination of bugs down.

But the days of debugging made me take a lot of critical looks at my code, and I took the time to clean up my code and math. Not abstracting it really, just organizing it into more logical blocks and moving some things out of the rendering loop, only recalculating them as needed. This may sound like optimization, but the goal wasn’t to make the code faster, just more logically organized. I also got a bunch of neat debugging tools in addition to the couple I mentioned above.

So while this project took me a bit longer than expected, I made my code better and am better prepared for my next project.

Wrote my first special effect shader!

I was attempting to rest my brain from the mouse pointer problem by working on shader effects. It was something I wanted to start learning, and I set a goal of having a circle at the mouse pointer that moves outwards. I spent most of my Sunday hacking on this problem, and here are the results. In the upper left, click the 2 and change it to 0.5 to make it nice and smooth. Hide the code up in the upper left if that isn’t interesting to you.

First off, glslsandbox is pretty neat. I was able to immediately start experimenting with a shader that had mouse interaction. I started by trying to draw a box around the mouse pointer; I did this because it was simple, and I figured calculating the circle would be more expensive than checking the bounding box. I was quickly able to get there. Then a bit of Pythagorean theorem (thanks, high school geometry) and I was able to calculate a circle.

The only trouble was that it wasn’t actually a circle. It was an elliptical disc instead, matching the aspect ratio of the window: because my window was a rectangle instead of a square, my “circle” reflected that the window was shorter than it was wide. In the interest of just getting things working, I pulled the orthographic projection I was using in my rendering engine, translated it to glsl, and it worked!
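
The fix boils down to scaling one axis by the aspect ratio before the Pythagorean distance check. Here is the same math as a plain function (written in Rust rather than the shader’s glsl, just to show the arithmetic; the names are illustrative):

// Is the point (x, y) inside a circle of radius r around the mouse,
// with all coordinates in the -1..1 range? Without the aspect
// correction the "circle" is an ellipse matching the window's shape.
fn in_circle(x: f32, y: f32, mouse_x: f32, mouse_y: f32, r: f32, aspect: f32) -> bool {
    let dx = (x - mouse_x) * aspect; // aspect = width / height
    let dy = y - mouse_y;
    dx * dx + dy * dy <= r * r
}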

Next was to add another circle on the inside, which was pretty simple because I’d already done it once, and to scale the circle’s size with time. Honestly, despite all the maybe scary-looking math on that page, it was relatively simple to toss together. I know there are whole research papers on just parts of graphical effects, but it is good to know that some of the simpler ones can be tossed together in a few hours. Then later, if I decide I want to really use the effect, I can take the time to deeply understand the problem and write a version using fewer operations to be more efficient.

On that note, I’m not looking for feedback on that shader I wrote. I know the math is inefficient and the code is pretty messy. I want to use this shader as practice for taking an effect shader and making it faster. Once I’ve exhausted my knowledge and research I’ll start soliciting friends for feedback; thanks for respecting that!

Wrapping up this incredibly long blog post, I want to say that everyone in my life has been so incredibly supportive of me building my own game. Co-workers have given me tips on tools to use and books to read; friends have given input on the ideas for my game engine, helping guide me in an area I don’t know well. Last and most amazing is my wife, who listens to me prattle away about my problems in my game engine or how some neat math thing I learned works, and then encourages me with her smile.

Catch you in the next update!


Mitchell BakerUN High Level Panel and UN Secretary General Ban Ki-moon issue report on Women’s Economic Empowerment

“Gender equality remains the greatest human rights challenge of our time.”  UN Secretary General Ban Ki-moon, September 22, 2016.

To address this challenge, the Secretary General championed the 2010 creation of UN Women, the UN’s newest entity. To focus attention on concrete actions in the economic sphere, he created the “High Level Panel on Women’s Economic Empowerment,” of which I am a member.

The Panel presented its initial findings and commitments last week during the UN General Assembly Session in New York. Here is the Secretary General with the co-chairs, the heads of the IMF and the World Bank, the Executive Director of UN Women, and the moderator and founder of All Africa Media, each of whom is a panel member.

UN General Assembly Session in New York

Photo Credit: Anar Simpson

The findings are set out in the Panel’s initial report. Key to the report is the identification of drivers of change, which the panel has deemed to enhance women’s economic empowerment:

  1. Breaking stereotypes: Tackling adverse social norms and promoting positive role models
  2. Leveling the playing field for women: Ensuring legal protection and reforming discriminatory laws and regulations
  3. Investing in care: Recognizing, reducing and redistributing unpaid work and care
  4. Ensuring a fair share of assets: Building assets—Digital, financial and property
  5. Businesses creating opportunities: Changing business culture and practice
  6. Governments creating opportunities: Improving public sector practices in employment and procurement
  7. Enhancing women’s voices: Strengthening visibility, collective voice and representation
  8. Improving sex-disaggregated data and gender analysis

Chapter Four of the report describes a range of actions being undertaken by Panel Members for each of the above drivers. For example, under the Building assets driver: DFID and the government of Tanzania are extending land rights to more than 150,000 Tanzanian women by the end of 2017. Tanzania will use media to educate people on women’s land rights and laws pertaining to property ownership. Clearly this is a concrete action that can serve as a precedent for others.

As a panel member, Mozilla is contributing to the work on Building Assets – Digital. Here is my statement during the session in New York:

“Mozilla is honored to be a part of this Panel. Our focus is digital inclusion. We know that access to the richness of the Internet can bring huge benefits to Women’s Economic Empowerment. We are working with technology companies in Silicon Valley and beyond to identify those activities which provide additional opportunity for women. Some of those companies are with us today.

Through our work on the Panel we have identified a significant interest among technology companies in finding ways to do more. We are building a working group with these companies and the governments of Costa Rica, Tanzania and the U.A.E. to address women’s economic empowerment through technology.

We expect the period from today’s report through the March meeting to be rich with activity. The possibilities are huge and the rewards great. We are committed to an internet that is open and accessible to all.”

You can watch a recording of the UN High Level Panel on Women’s Economic Empowerment here. For my statement, start watching at 2:07:53.

There is an immense amount of work to be done to meet the greatest human rights challenge of our time. I left the Panel’s meeting hopeful that we are on the cusp of great progress.

Hub FiguièreIntroducing gudev-rs

A couple of weeks ago, I released gudev-rs, Rust wrappers for gudev. The goal was to be able to receive events from udev into a Gtk application written in Rust. I had a need for it, so I made it and shared it.

It is mostly auto-generated using gir-rs from the gtk-rs project. The license is MIT.

Source code

If you have problems, suggestions, or patches, please feel free to submit them.

The Rust Programming Language BlogAnnouncing Rust 1.12

The Rust team is happy to announce the latest version of Rust, 1.12. Rust is a systems programming language with the slogan “fast, reliable, productive: pick three.”

As always, you can install Rust 1.12 from the appropriate page on our website, and check out the detailed release notes for 1.12 on GitHub. 1361 patches were landed in this release.

What’s in 1.12 stable

The release of 1.12 might be one of the most significant Rust releases since 1.0. We have a lot to cover, but if you don’t have time for that, here’s a summary:

The largest user-facing change in 1.12 stable is the new error message format emitted by rustc. We’ve previously talked about this format, and this is the first stable release where the new messages are broadly available. They are the result of many hours of volunteer effort to design, test, and update every one of rustc’s errors to the new format. We’re excited to see what you think of them:

A new borrow-check error

The largest internal change in this release is moving to a new compiler backend based on the new Rust MIR. While this feature does not result in anything user-visible today, it paves the way for a number of future compiler optimizations, and for some codebases it already results in improvements to compile times and reductions in code size.

Overhauled error messages

With 1.12 we’re introducing a new error format which helps to surface a lot of the internal knowledge about why an error is occurring to you, the developer. It does this by putting your code front and center, highlighting the parts relevant to the error with annotations describing what went wrong.

For example, in 1.11, if an implementation of a trait didn’t match the trait declaration, you would see an error like the one below:

An old mismatched trait error

In the new error format we represent the error by instead showing the points in the code that matter the most. Here is the relevant line in the trait declaration, and the relevant line in the implementation, using labels to describe why they don’t match:

A new mismatched trait error

Initially, this error design was built to aid in understanding borrow-checking errors, but we found, as with the error above, the format can be broadly applied to a wide variety of errors. If you would like to learn more about the design, check out the previous blog post on the subject.

Finally, you can also get these errors as JSON with a flag. Remember that error we showed above, at the start of the post? Here’s an example of attempting to compile that code while passing the --error-format=json flag:

$ rustc borrowck-assign-comp.rs --error-format=json
{"message":"cannot assign to `p.x` because it is borrowed","level":"error","spans":[{"file_name":"borrowck-assign-comp.rs","byte_start":562,"byte_end":563,"line_start":15,"line_end":15,"column_start":14,"column_end":15,"is_primary":false,"text":[{"text":"    let q = &p;","highlight_start":14,"highlight_end":15}],"label":"borrow of `p.x` occurs here","suggested_replacement":null,"expansion":null}],"label":"assignment to borrowed `p.x` occurs here","suggested_replacement":null,"expansion":null}],"children":[],"rendered":null}
{"message":"aborting due to previous error","code":null,"level":"error","spans":[],"children":[],"rendered":null}

We’ve actually elided a bit of this for brevity’s sake, but you get the idea. This output is primarily for tooling; we are continuing to invest in supporting IDEs and other useful development tools. This output is a small part of that effort.

MIR code generation

The new Rust “mid-level IR”, usually called “MIR”, gives the compiler a simpler way to think about Rust code than its previous way of operating entirely on the Rust abstract syntax tree. It makes analysis and optimizations possible that have historically been difficult to implement correctly. The first of many upcoming changes to the compiler enabled by MIR is a rewrite of the pass that generates LLVM IR, what rustc calls “translation”, and after many months of effort the MIR-based backend has proved itself ready for prime-time.

MIR exposes perfect information about the program’s control flow, so the compiler knows exactly whether types are moved or not. This means that it knows statically whether or not the value’s destructor needs to be run. In cases where a value may or may not be moved at the end of a scope, the compiler now simply uses a single bitflag on the stack, which is in turn easier for optimization passes in LLVM to reason about. The end result is less work for the compiler and less bloat at runtime. In addition, because MIR is a simpler ‘language’ than the full AST, it’s also easier to implement compiler passes on, and easier to verify that they are correct.
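
As a small sketch of the case being described (this example is not from the release notes, and the bitflag itself is invisible to user code), consider a value whose destructor may or may not need to run at the end of a scope:

struct Thing;

impl Drop for Thing {
    fn drop(&mut self) {
        println!("destructor ran");
    }
}

fn main() {
    let t = Thing;
    if std::env::args().count() > 1 {
        // `t` is moved, and therefore dropped, on this path only...
        drop(t);
    }
    // ...so whether the destructor still needs to run when the scope
    // ends is only known at runtime, and MIR tracks it with a single
    // stack bitflag instead of zeroing memory.
}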

Other improvements

See the detailed release notes for more.

Library stabilizations

This release sees a number of small quality of life improvements for various types in the standard library:

See the detailed release notes for more.

Cargo features

The biggest feature added to Cargo this cycle is “workspaces.” Defined in RFC 1525, workspaces allow a group of Rust packages to share the same Cargo.lock file. If you have a project that’s split up into multiple packages, this makes it much easier to keep shared dependencies on a single version. To enable this feature, most multi-package projects need to add a single key, [workspace], to their top-level Cargo.toml, but more complex setups may require more configuration.
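
As a sketch, a minimal top-level Cargo.toml for a two-package project might look like this (the package and path names are made up):

# Top-level Cargo.toml
[package]
name = "app"
version = "0.1.0"
authors = ["You <you@example.com>"]

[dependencies]
utils = { path = "utils" }

# The single added key: path dependencies like `utils` become
# workspace members and share this package's Cargo.lock.
[workspace]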

Another significant feature is the ability to override the source of a crate. Using this with tools like cargo-vendor and cargo-local-registry allows vendoring dependencies locally in a robust fashion. Eventually this support will be the foundation of supporting mirrors of crates.io as well.

There are some other improvements as well:

See the detailed release notes for more.

Contributors to 1.12

We had 176 individuals contribute to 1.12. Thank you so much!

  • Aaron Gallagher
  • abhi
  • Adam Medziński
  • Ahmed Charles
  • Alan Somers
  • Alexander Altman
  • Alexander Merritt
  • Alex Burka
  • Alex Crichton
  • Amanieu d’Antras
  • Andrea Pretto
  • Andre Bogus
  • Andrew
  • Andrew Cann
  • Andrew Paseltiner
  • Andrii Dmytrenko
  • Antti Keränen
  • Aravind Gollakota
  • Ariel Ben-Yehuda
  • Bastien Dejean
  • Ben Boeckel
  • Ben Stern
  • bors
  • Brendan Cully
  • Brett Cannon
  • Brian Anderson
  • Bruno Tavares
  • Cameron Hart
  • Camille Roussel
  • Cengiz Can
  • CensoredUsername
  • cgswords
  • Chiu-Hsiang Hsu
  • Chris Stankus
  • Christian Poveda
  • Christophe Vu-Brugier
  • Clement Miao
  • Corey Farwell
  • CrLF0710
  • crypto-universe
  • Daniel Campbell
  • David
  • decauwsemaecker.glen@gmail.com
  • Diggory Blake
  • Dominik Boehi
  • Doug Goldstein
  • Dridi Boukelmoune
  • Eduard Burtescu
  • Eduard-Mihai Burtescu
  • Evgeny Safronov
  • Federico Ravasio
  • Felix Rath
  • Felix S. Klock II
  • Fran Guijarro
  • Georg Brandl
  • ggomez
  • gnzlbg
  • Guillaume Gomez
  • hank-der-hafenarbeiter
  • Hariharan R
  • Isaac Andrade
  • Ivan Nejgebauer
  • Ivan Ukhov
  • Jack O’Connor
  • Jake Goulding
  • Jakub Hlusička
  • James Miller
  • Jan-Erik Rediger
  • Jared Manning
  • Jared Wyles
  • Jeffrey Seyfried
  • Jethro Beekman
  • Jonas Schievink
  • Jonathan A. Kollasch
  • Jonathan Creekmore
  • Jonathan Giddy
  • Jonathan Turner
  • Jorge Aparicio
  • José manuel Barroso Galindo
  • Josh Stone
  • Jupp Müller
  • Kaivo Anastetiks
  • kc1212
  • Keith Yeung
  • Knight
  • Krzysztof Garczynski
  • Loïc Damien
  • Luke Hinds
  • Luqman Aden
  • m4b
  • Manish Goregaokar
  • Marco A L Barbosa
  • Mark Buer
  • Mark-Simulacrum
  • Martin Pool
  • Masood Malekghassemi
  • Matthew Piziak
  • Matthias Rabault
  • Matt Horn
  • mcarton
  • M Farkas-Dyck
  • Michael Gattozzi
  • Michael Neumann
  • Michael Rosenberg
  • Michael Woerister
  • Mike Hommey
  • Mikhail Modin
  • mitchmindtree
  • mLuby
  • Moritz Ulrich
  • Murarth
  • Nick Cameron
  • Nick Massey
  • Nikhil Shagrithaya
  • Niko Matsakis
  • Novotnik, Petr
  • Oliver Forral
  • Oliver Middleton
  • Oliver Schneider
  • Omer Sheikh
  • Panashe M. Fundira
  • Patrick McCann
  • Paul Woolcock
  • Peter C. Norton
  • Phlogistic Fugu
  • Pietro Albini
  • Rahiel Kasim
  • Rahul Sharma
  • Robert Williamson
  • Roy Brunton
  • Ryan Scheel
  • Ryan Scott
  • saml
  • Sam Payson
  • Samuel Cormier-Iijima
  • Scott A Carr
  • Sean McArthur
  • Sebastian Thiel
  • Seo Sanghyeon
  • Shantanu Raj
  • ShyamSundarB
  • silenuss
  • Simonas Kazlauskas
  • srdja
  • Srinivas Reddy Thatiparthy
  • Stefan Schindler
  • Stephen Lazaro
  • Steve Klabnik
  • Steven Fackler
  • Steven Walter
  • Sylvestre Ledru
  • Tamir Duberstein
  • Terry Sun
  • TheZoq2
  • Thomas Garcia
  • Tim Neumann
  • Timon Van Overveldt
  • Tobias Bucher
  • Tomasz Miąsko
  • trixnz
  • Tshepang Lekhonkhobe
  • ubsan
  • Ulrik Sverdrup
  • Vadim Chugunov
  • Vadim Petrochenkov
  • Vincent Prouillet
  • Vladimir Vukicevic
  • Wang Xuerui
  • Wesley Wiser
  • William Lee
  • Ximin Luo
  • Yojan Shrestha
  • Yossi Konstantinovsky
  • Zack M. Davis
  • Zhen Zhang
  • 吴冉波

Mozilla Addons BlogHow Video DownloadHelper Became Compatible with Multiprocess Firefox

Today’s post comes from Michel Gutierrez (mig), the developer of Video DownloadHelper, among other add-ons. He shares his story about the process of modernizing his XUL add-on to make it compatible with multiprocess Firefox (e10s).

***

Video DownloadHelper (VDH) is an add-on that extracts videos and image files from the Internet and saves them to your hard drive. As you surf the Web, VDH will show you a menu of download options when it detects something it can save for you.

It was first released in July 2006, when Firefox was on version 1.5. At the time, both the main add-on code and DOM window content were running in the same process. This was helpful because video URLs could easily be extracted from the window content by the add-on. The Smart Naming feature was also able to extract video names from the Web page.

When multiprocess Firefox architecture was first discussed, it was immediately clear that VDH needed a full rewrite with a brand new architecture. In multiprocess Firefox, DOM content for webpages runs in a separate process, which means the required asynchronous communication with the add-on code would increase significantly. It wasn’t possible to simply adapt the existing code and architecture, because doing so would have made the code hard to read and unmaintainable.

The Migration

After some consideration, we decided to update the add-on using SDK APIs. Here were our requirements:

  • Code running in the content process needed to run separately from code running in Javascript modules and the main process. Communication must occur via message passing.
  • Preferences needed to be available in the content process, as there are many adjustable parameters that affect the user interface.
  • Localization of HTML pages within the content script should be as easy as possible.

In VDH, the choice was made to handle all of these requirements using the same Client-Server architecture commonly used in regular Web applications: the components that have access to the preferences, localization, and data storage APIs (running in the main process) serve this data to the UI components and the components injected into the page (running in the content process), through the messaging API provided by the SDK.

Limitations

Migrating to the SDK enabled us to become compatible with multiprocess Firefox, but it wasn’t a perfect solution. Low-level SDK APIs, which aren’t guaranteed to work with e10s or stay compatible with future versions of Firefox, were required to implement anything more than simple features. Also, an increased amount of communication between processes is required even for seemingly simple interactions.

  • Resizing content panels can only occur in the background process, but only the content process knows what the dimensions should be. This gets more complicated when the size dynamically changes or depends on various parameters.
  • Critical features like monitoring network traffic or launching external programs in VDH require low-level APIs.
  • Capturing tab thumbnails from the Add-on SDK API does not work in e10s mode. This feature had to be reimplemented in the add-on using a framescript.
  • When intercepting network responses, the Add-on SDK does not decode compressed responses.
  • The SDK provides no easy means to determine if e10s is enabled or not, which would be useful as long as glitches remain where the add-on has to act differently.

Future Direction

Regardless of the limitations posed, making VDH compatible with multiprocess Firefox was a great success. Taking the time to rewrite the add-on also improved the general architecture and prepared it for the changes needed for WebExtensions. The first e10s-compatible version of VDH is version 5.0.1, which has been available since March 2015.

Looking forward, the next big challenge is making VDH compatible with WebExtensions. We considered migrating directly to WebExtensions, but the legacy and low-level SDK APIs used in VDH could not be replaced at the time without compromising the add-on’s features.

To fully complete the transition to WebExtensions, additional APIs may need to be created. As extension developers, we’ve found it helpful to work with Mozilla to define those APIs and to design them in a way that is general enough for them to be useful in many other types of add-ons.

A note from the add-ons team: resources for migrating your add-ons to WebExtensions can be found here.

Armen Zambrano[NEW] Added build status updates - Usability improvements for Firefox automation initiative - Status update #6

[NEW] Starting with this newsletter, we will also be covering build automation improvements, since they help with the end-to-end time of your pushes.

In this update we will look at the progress made in the last two weeks.

A reminder that this quarter’s main focus is on:
  • Debugging tests on interactive workers (only Linux on TaskCluster)
  • Improve end to end times on Try (Thunder Try project)

For all bugs and priorities, you can check out the project management page.

Status update:
Debugging tests on interactive workers
---------------------------------------------------

Accomplished recently:
  • Fixed regression that broke the interactive wizard
  • Support for Android reftests landed

Upcoming:
  • Support for Android xpcshell
  • Video demonstration


Thunder Try - Improve end to end times on try
---------------------------------------------

Project #1 - Artifact builds on automation
##########################################

Accomplished recently:
  • Windows and Mac artifact builds are soon to land
  • |mach try| now supports --artifact option
  • Compiled-code test jobs error out early when run with --artifact on try

Upcoming:
  • Windows and Mac artifact builds available on Try
  • Fix triggering of test jobs on Buildbot with artifact build

Project #2 - S3 Cloud Compiler Cache
####################################

Nothing new in this edition.

Project #3 - Metrics
####################

Accomplished recently:

  • Drill-down charts:

  • Which lead to a detailed view:

  • With optional wait times included (missing 10% outliers, so looks almost the same):


Upcoming:
  • Iron out interactivity bugs
  • Show outliers
  • Post these (static) pages to my people page
  • Fix ActiveData production to handle these queries (I am currently using a development version of ActiveData, but that version has some nasty anomalies)

Project #4 - Build automation improvements
##########################################
Upcoming:


Project #5 - Run Web platform tests from the source checkout
############################################################
Accomplished recently:
  • WPT is now running from the source checkout in automation

Upcoming:
  • There are still parts of automation relying on a test zip. The next step is to minimize those so you can get a loaner, pull any revision from any repo, and test WPT changes in an environment that is exactly what the automation tests run in.

Other
#####
  • Bug 1300812 - Make Mozharness downloads and unpacks actions handle better intermittent S3/EC2 issues
    • This adds retry logic to reduce intermittent oranges


Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

Air MozillaThe Joy of Coding - Episode 73

The Joy of Coding - Episode 73 mconley livehacks on real Firefox bugs while thinking aloud.

Cameron KaiserFirefox OS goes Tier WONTFIX

I suppose it shouldn't be totally unexpected given the end of FirefoxOS phone development a few months ago, but a platform going from supported to take-it-completely-out-of-mozilla-central in less than a year is rather startling: not only has commercial development on FirefoxOS completely ceased (at version 2.6), but the plan is to remove all B2G code from the source tree entirely. It's not an absolutely clean comparison to us because some APIs are still relevant to current versions of macOS, but even support for our now ancient cats was only gradually removed in stages from the codebase, and even some portions of pre-10.4 code persisted until relatively recently. The new state of FirefoxOS, where platform support is actually unwelcome in the repository, is beyond the lowest state of Tier-3, where even our own antiquated port lives. Unofficially this would be something like the "tier WONTFIX" BDS referenced in mozilla.dev.planning a number of years ago.

There may be a plan to fork the repository, but they'd need someone crazy dedicated to keep chugging out builds. We're not anything on that level of nuts around here. Noooo.

The Mozilla BlogFirefox’s Test Pilot Program Launches Three New Experimental Features

Earlier this year we launched our first set of experiments for Test Pilot, a program designed to give you access to experimental Firefox features that are in the early stages of development. We’ve been delighted to see so many of you participating in the experiments and providing feedback, which ultimately, will help us determine which features end up in Firefox for all to enjoy.

Since our launch, we’ve been hard at work on new innovations, and today we’re excited to announce the release of three new Test Pilot experiments. These features will help you share and manage screenshots; keep streaming video front and center; and protect your online privacy.

What Are The New Experiments?

Min Vid:

Keep your favorite entertainment front and center. Min Vid plays your videos in a small window on top of your other tabs so you can continue to watch while answering email, reading the news or, yes, even while you work. Min Vid currently supports videos hosted by YouTube and Vimeo.

Page Shot:

The print screen button doesn’t always cut it. The Page Shot feature lets you take, find and share screenshots with just a few clicks by creating a link for easy sharing. You’ll also be able to search for your screenshots by their title, and even the text captured in the image, so you can find them when you need them.

Tracking Protection:

We’ve had Tracking Protection in Private Browsing for a while, but now you can block trackers that follow you across the web by default. Turn it on, and browse free and breathe easy. This experiment will help us understand where Tracking Protection breaks the web so that we can improve it for all Firefox users.

How do I get started?

Test Pilot experiments are currently available in English only. To activate Test Pilot and help us build the future of Firefox, visit testpilot.firefox.com.

As you’re experimenting with new features within Test Pilot, you might find some bugs, or lose some of the polish from the general Firefox release, so Test Pilot allows you to easily enable or disable features at any time.

Your feedback will help us determine what ultimately ends up in Firefox – we’re looking forward to your thoughts!

Wil ClouserTest Pilot is launching three new experiments for Firefox

The Test Pilot team has been heads-down for months working on three new experiments for Firefox and you can get them all today!

Min Vid

Min Vid is an add-on that allows you to shrink a video into a small always-on-top frame in the corner of your browser. This lets you watch and interact with a video while browsing the web in other tabs. Opera and Safari are implementing similar features so this one might have some sticking power.

Thanks to Dave, Jen, and Jared for taking this from some prototype code to in front of Firefox users in six months.

Tracking Protection

Luke has been working hard on Tracking Protection - an experiment focused on collecting feedback from users about which sites break when Firefox blocks the trackers from loading. As we collect data from everyday users we can make decisions about how best to block what people don't want and still show them what they do. Eventually this could lead to us protecting all Firefox users with Tracking Protection by default.

Page Shot

Page Shot is a snappy experiment that enables users to quickly take screenshots in their browser and share them on the internet. There are a few companies in this space already, but their products always felt too heavy to me, or they ignored privacy, or some simply didn't even work (this was on Linux). Page Shot is light and quick and works great everywhere.

As a bonus, a feature I haven't seen anywhere else, Page Shot also offers searching the text within the images themselves. So if you take a screenshot of a pizza recipe and later search for "mozzarella" it will find the recipe.

I was late to the Page Shot party and my involvement is just standing on the shoulders of giants at this point: by the time I was involved the final touches were already being put on. A big thanks to Ian and Donovan for bringing this project to life.

I called out the engineers who have been working to bring their creations to life, but of course there are so many teams who were critical to today's launches. A big thank you to the people who have been working tirelessly and congratulations on launching your products! :)

Daniel Stenberg1,000,000 sites run HTTP/2

… out of the top ten million sites that is. So there’s at least that many, quite likely a few more.

This is according to w3techs, which runs checks daily. Over the last few months, there have been about 50,000 new sites per month switching it on.


It also shows that the HTTP/2 ratio has increased from a little over 1% deployment a year ago to 10% today.

HTTP/2 gets used more the more popular a site is. Among the top 1,000 sites on the web, more than 20% use HTTP/2. HTTP/2 also just recently (September 9) overtook SPDY among the top 1,000 most popular sites.


On September 7, Amazon announced that their CloudFront service had enabled HTTP/2, which could explain an adoption boost over the last few days. New CloudFront users get it enabled by default, but existing users actually need to go in and click a checkbox to make it happen.

As the web traffic of the world is severely skewed toward the top sites, we can be sure that a significantly larger share than 10% of the world’s HTTPS traffic is using version 2.

Recent usage stats in Firefox show that HTTP/2 is used in half of all its HTTPS requests!


Cameron KaiserAnd now for something completely different: Rehabilitating the Performa 6200?

The Power Macintosh 6200 in its many Performa variants has one of the worst reputations of any Mac, and its pitifully small 603 L1 caches add insult to injury (its poor 68K emulation performance was part of the reason Apple held up the PowerBook's migration to PowerPC until the 603e, and then screwed it up with the PowerBook 5300, a unit that is IMHO overly harshly judged by history but not without justification). LowEndMac has a long list of its perceived faults.

But every unloved machine has its defenders, and I noticed that the Wikipedia entry on the 6200 series radically changed recently. The "Dtaylor372" listed in the edit log appears to be this guy, one "Daniel L. Taylor". If it is, here's his reasoning why the seething hate for the 6200 series should be revisited.

Daniel does make some cogent points, cites references, and even tries to back some of them up with benchmarks (heh). He helpfully includes a local copy of Apple's tech notes on the series, though let's be fair here -- Apple is not likely to say anything unbecoming in that document. That said, the effort is commendable even if I don't agree with everything he's written. I'll just cite some of what I took as highlights and you can read the rest.

  • The Apple tech note says, "The Power Macintosh 5200 and 6200 computers are electrically similar to the Macintosh Quadra 630 and LC 630." It might be most accurate to say that these computers are Q630 systems with an on-board PowerPC upgrade. It's an understatement to observe that's not the most favourable environment for these chips, but it would have required much less development investment, to be sure.

  • He's right that the L2 cache, which is on a 64-bit bus and clocked at the actual CPU speed, certainly does mitigate some of the problems with the Q630's 32-bit interface to memory, and 256K L2 in 1995 would have been a perfectly reasonable amount of cache. (See page 29 for the block diagram.) A 20-25% speed penalty (his numbers), however, is not trivial and I think he underestimates how this would have made the machines feel comparatively in practice even on native code.

  • His article claims that both the SCSI bus and the serial ports have DMA, but I don't see this anywhere in the developer notes (and at least one source contradicts him). While the NCR controller that the F108 ASIC incorporates does support it, I don't see where this is hooked up. More to the point, the F108's embedded IDE controller -- because the 6200 actually uses an IDE hard drive -- doesn't have DMA set up either: if the Q630 is any indication, the 6200 is also limited to PIO Mode 3. While this was no great sin when the Q630 was in production, it was verging on unacceptable even for a low-to-midrange system by the time the 6200 hit the market. More on that in the next point.

    Do note that the Q630 design does support bus mastering, but not from the F108. The only two entities which can be bus master are the CPU or either the PDS expansion card or communications card via the PrimeTime II IC "southbridge."

  • Daniel makes a very well-reasoned assertion that the computer's major problems were due to software instead of hardware design, which is at least partially true, but I think his objections are oversimplified. Certainly the Mac OS (that's with a capital M) was not well-suited for handling the real-time demands of hardware: ADB, for example, requires quite a bit of polling, and the OS could not service the bus sufficiently often to make it effective for large-volume data transfer (condemning it to a largely HID-only capacity, though that's all it was really designed for). Even interrupt-driven device drivers could be problematic; a large number of interrupts pending simultaneously could get dropped (the limit on outstanding secondary interrupt requests prior to MacOS 9.1 was 40, see Apple TN2010) and a badly-coded driver that did not shunt work off to a deferred task could prevent other drivers from servicing their devices because those other interrupts were disabled while the bad driver tied up the machine.

    That said, however, these were hardly unknown problems at the time and the design's lack of DMA where it counts causes an abnormal reliance on software to move data, which for those and other reasons the MacOS was definitely not up to doing and the speed hit didn't help. Compare this design with the 9500's full PCI bus, 64-bit interface and hardware assist: even though the 9500 was positioned at a very different market segment, and the weak 603 implementation is no comparison to the 604, that doesn't absolve the 6200 of its other deficiencies and the 9500 ran the same operating system with considerably fewer problems (he does concede that his assertions to the contrary do "not mean that [issues with redraw, typing and audio on the 6200s] never occurred for anyone," though his explanation of why is of course different). Although Daniel states that relaying traffic for an Ethernet card "would not have impacted Internet handling" based on his estimates of actual bandwidth, the real rate limiting step here is how quickly the CPU, and by extension the OS, can service the controller. While the comm slot at least could bus master, that only helps when it's actually serviced to initiate it. My personal suspicion is because the changes in OpenTransport 1.3 reduced a lot of the latency issues in earlier versions of OT, that's why MacOS 8.1 was widely noted to smooth out a lot of the 6200's alleged network problems. But even at the time of these systems' design Copland (the planned successor to System 7) was already showing glimmers of trouble, and no one seriously expected the MacOS to explosively improve in the machines' likely sales life. Against that historical backdrop the 6200 series could have been much better from the beginning if the component machines had been more appropriately engineered to deal with what the OS couldn't in the first place.

In the United States, at least, the Power Macintosh 6200 family was only ever sold under the budget "Performa" line, and you should read that as Michael Spindler being Spindler, i.e., cheap. In that sense putting as little extra design money into it as possible wasn't ill-conceived, even if it was crummy, and I will freely admit my own personal bias in that I've never much cared for the Quadra 630 or its derivatives because there were better choices then and later. I do have to take my hat off to Daniel for trying to salvage the machine's bad image, and he goes a long way toward dispelling some of the more egregious misconceptions, but crummy's still as crummy does. I think the best that can be said here is that while it's likely better than its reputation, even with careful reconsideration of its alleged flaws the 6200 family is still notably worse than its peers.

Air MozillaConnected Devices Meetup - Sensor Web

Connected Devices Meetup - Sensor Web Mozilla's own Cindy Hsiang to discuss SensorWeb SensorWeb wants to advance Mozilla's mission to promote the open web when it evolves to the physical world....

Air MozillaConnected Devices Meetup - Laurynas Riliskis

Connected Devices Meetup - Laurynas Riliskis We are on the verge of next revolution Connected devices have emerged during the last decade into what's known as the Internet of Things. These...

Air MozillaConnected Devices Meetup - Johannes Ernst: UBOS and the Indie IoT

Connected Devices Meetup - Johannes Ernst: UBOS and the Indie IoT We are on the verge of next revolution Connected devices have emerged during the last decade into what's known as the Internet of Things. These...

Air MozillaConnected Devices Meetup - Nicholas van de Walle

Connected Devices Meetup - Nicholas van de Walle Nicholas van de Walle is a local web developer who loves shoving computers into everything from clothes to books to rocks. He has worked for...

Air MozillaConnected Devices Meetup - September 27, 2016

Connected Devices Meetup - September 27, 2016 6:30pm - Johannes Ernst; UBOS and the Indie IoT: Will the IoT inevitably make us all digital serfs to a few overlords in the cloud,...

David LawrenceHappy BMO Push Day 2!

The following changes have been pushed to bugzilla.mozilla.org:

  • [1305713] BMO: Persistent XSS via Git commit messages in comments

Discuss these changes on mozilla.tools.bmo.


Air MozillaB2G OS Announcements September 2016

B2G OS Announcements September 2016 The weekly B2G meeting on Tuesday 27th September will be attended by Mozilla senior staff members who would like to make some announcements to the...

Air MozillaMartes Mozilleros, 27 Sep 2016

Martes Mozilleros Reunión bi-semanal para hablar sobre el estado de Mozilla, la comunidad y sus proyectos. Bi-weekly meeting to talk (in Spanish) about Mozilla status, community and...

Christian HeilmannTurning JavaScript off is not the problem – talk at the Frontend Rhein Main Meetup

A few days ago I was a guest at AOE in Wiesbaden to speak at the Frontend Rhein-Main meetup about the dangers of exaggerated criticism of JavaScript. This talk will also be available in English soon.

Chris presenting

The video of the talk has already been edited by AOE and can be found on YouTube:

The (English) slides are on Slideshare:

I also gave a preview of the whole talk at the SmashingConf Freiburg Jam Session, and the screencast of that (shouting, in English) is also on YouTube:

Both AOE and the user group have blogged about the evening.

It was a nice evening, and although I had thought it was the ten-year anniversary rather than the tenth meetup, it was worth stopping by between SmashingConf and the A-Tag in Wiesbaden. Especially because Patricia baked cake and is now eagerly learning JavaScript!

Gervase MarkhamOff Trial

Six weeks ago, I posted “On Trial”, which explained that I was taking part in a medical trial in Manchester. In the trial, I was trying out some interesting new DNA repair pathway inhibitors which, it was hoped, might have a beneficial effect on my cancer. However, as of ten days ago, my participation has ended. The trial parameters say that participants can continue as long as their cancer shrinks or stays the same. Scans are done every six weeks to determine what change, if any, there has been. As mine had been stable for the five months before starting participation, I was surprised to discover that after six weeks of treatment my liver metastasis had grown by 7%. This level of growth was outside the trial parameters, so they concluded (probably correctly!) the treatment was not helping me and that was that.

The Lord has all of this in his hands, and I am confident of his good purposes for me :-)

Emily DunhamRust's Community Automation

Rust’s Community Automation

Here’s the text version, with clickable links, of my Automacon lightning talk today.

Intro

I’m a DevOps engineer at Mozilla Research and a member of the Rust Community subteam, but the conclusions and suggestions in this talk are my own observations and opinions.

The slides are a result of trying to write my own CSS for sliderust... Sorry about the ugliness.

I have 10 minutes, so this is not the time to teach you any Rust. Check out rust-lang.org, the Rust Community Resources, or your city’s Rust meetup to get started with the language.

What we are going to cover is how Rust automates some community tasks, and what you can learn from our automation.

Community

I define “community”, in this context, as “the human interaction work necessary for a project’s success”. This work is done by a wide variety of people in many situations. Every interaction, from helping a new contributor to discussing a proposed code change to criticizing someone’s behavior, affects the overall climate of a project’s community.

Automation

To me, “automation” means “offloading peoples’ work onto a system”. This can be a computer system, but I think it can also mean changes to the social systems that guide peoples’ behavior.

So, community automation is a combination of:

  • Building tools to do things the humans used to have to do
  • Tweaking the social systems to minimize the overhead they create

Scoping the Problem

While not all things can be automated and not all factors of the community are under the project leadership’s control, it’s not totally hopeless.

Choices made and automation deployed by project leaders can help control:

  • Which contributors feel welcome or unwelcome in a project
  • What code makes it into the project’s tree
  • Robots!

Moderation

Our robots and social systems to improve workflow and contributor experience all rely on community members’ cooperation. To create a community of people who want to work constructively together and not be jerks to each other, Rust defines behavior expectations in its code of conduct. The important thing to note about the CoC is that half the document is a clear explanation of how the policies in it will be enforced. This would be impossible without the dedication of the amazing mod team.

The process of moderation cannot and should not be left to a computer, but we can use technology to make our mods’ work as easy as possible. We leave the human tasks to humans, but let our technologies do the rest.

In this case, while the mods need to step in when a human has a complaint about something, we can automate the process of telling people that the rules exist. You can’t join the IRC channel, post on the Discourse forums, or even read the Rust subreddit without being made aware that you’re expected to follow the CoC’s guidelines in official Rust spaces.

Depending on the forums where your project communicates, try to automate the process of excluding obvious spammers and trolls. Not everybody has the skills or interest to be an excellent moderator, so when you find them, avoid wasting their time on things that a computer could do for them!

It didn’t fit in the talk, but this Slashdot post is one of my favorite examples of somebody being filtered out of participating in the Rust community due to their personal convictions about how project leadership should work. While we do miss out on that person’s potential technical contributions, we also save all of the time that might be spent hashing out our disagreements with them if we had a less clear set of community guidelines.

Robots

This lightning talk highlighted 4 categories of robots:

  • Maintaining code quality
  • Engaging in social pleasantries
  • Guiding new contributors
  • Widening the contributor pipeline

Longer versions of this talk also touch on automatically testing compiler releases, but that’s more than 10 minutes of content on its own.

The Not Rocket Science Rule of Software Engineering

To my knowledge, this blog post by Rust’s inventor Graydon Hoare is the first time that this basic principle has been put so succinctly:

Automatically maintain a repository of code that always passes all the tests.

This policy guides the Rust compiler’s development workflow, and has trickled down into libraries and projects using the language.

Bors

The name Bors has been handed down from Graydon’s original autolander bot to an instance of Homu, and is often verbed to refer to the simple actions he does:

  1. Notice when a human says “r+” on a PR
  2. Create a branch that looks like master will after the change is applied
  3. Test that branch
  4. Fast-forward the master branch to the tested state, if it passed.

Keep your tree green

Saying “we can’t break the tests any more” is a pretty significant cultural change, so be prepared for some resistance. With that disclaimer, the path to following the Not Rocket Science Rule is pretty simple:

  1. Write tests that fail when your code is bad and pass when it’s good
  2. Run the tests on every change
  3. Only merge code if it passes all the tests
  4. Fix the tests whenever they’re wrong.

This strategy encourages people to maintain the tests, because a broken test becomes everyone’s problem and interrupts their workflow until it’s fixed.

I believe that tests are necessary for all code that people work on. If the code was fully and perfectly correct, it wouldn’t need changes – we only write code when something is wrong, whether that’s “It crashes” or “It lacks such-and-such a feature”. And regardless of the changes you’re making, tests are essential for catching any regressions you might accidentally introduce.

Automating social pleasantries

Have you ever submitted an issue or change request to a project, then not heard back for several months? It feels bad to be ignored, and the project loses out on potential contributors.

Rust automates basic social pleasantries with a robot called Highfive. Her tasks are easy to explain, though the implementation details can be tricky:

  1. Notice when a change is submitted by a new contributor, then welcome them
  2. Assign reviewers, based on what code changed, to all PRs
  3. Nag the reviewer if they seem to have forgotten about their assignment

If you don’t want a dedicated greeter-bot, you can get many of these features from your code management system:

  • Use issue and pull request templates to guide potential contributors to the docs that can help them improve their report or request.
  • Configure notifications so you find out when someone is trying to interact with your project. This could mean muting all the noise notifications so the signal ones are available, or intermittently polling the repositories that you maintain (a daily cron job or weekly calendar reminder works just fine).

Guide new contributors

In open source projects, “I’m new; what can I work on?” is a common inquiry. In internal projects, you’ll often meet colleagues from elsewhere in your organization who ask you to teach them something about the project or the skills you use when working on it.

The Rust-implemented browser engine Servo is actually a slightly better example of this than the compiler itself, since the smaller and younger codebase has more introductory-level issues remaining. The site starters.servo.org automatically scrapes the organization’s issue trackers for easy and unclaimed issues.

Issue triage is often unrewarding, but using the tags for a project like this creates a greater incentive to keep them up to date.

When filing introductory issues, try to include links to the relevant documentation, instructions for reproducing the bug, and a suggestion of what file you would look in first if you tackled the problem yourself.

Automating mentorship

Mentorship is a highly personalized process in which one human transfers their skills to another. However, large projects often have more contributors seeking the same basic skills than mentors with time to teach them.

The parts of mentorship which don’t explicitly require a human mentor can be offloaded onto technology.

The first way to automate mentorship tasks is to maintain correct and up-to-date documentation. Correct docs train humans to consult them before interrupting an expert, whereas docs that are frequently outdated or wrong condition their users to skip them entirely.

Use tools like octohatrack and your project status updates to identify and recognize contributors who help with docs and issue triage. Docs contributions may actually save more developer and community time than new code features, so respect them accordingly.

Finally, maintain a list of introductory or mentored issues – even if that’s just a Google Doc or Etherpad.

Bear in mind that an introductory issue doesn’t necessarily mean “suitable for someone who has never coded before”. Someone with great skills in a scripting language might be looking for a place to help with an embedded codebase, or a UX designer might want to get involved with a web framework that they’ve used. Introductory issues should be clear about what knowledge a contributor should acquire in order to try them, but they don’t have to all be “easy”.

Automating the pipeline

Drive-by fixes are to being a core contributor as interviews are to full time jobs. Just as a company attempts to interview as many qualified candidates as it can, you can recruit more contributors by making your introductory issues widely available.

Before publicizing your project, make sure you have a CONTRIBUTING.txt or good README outlining where a new contributor should start, or you’ll be barraged with the same few questions over and over.

There are a variety of sites, which I call issue aggregators, where people who already know a bit about open source development can go to find a new project to work on. I keep a list on this page <http://edunham.net/pages/issue_aggregators.html>, pull requests welcome <https://github.com/edunham/site/blob/master/pages/issue_aggregators.rst> if I’m missing anything. Submitting your introductory issues to these sites broadens your pipeline, and may free up humans’ recruiting time to focus on people who need more help getting up to speed.

If you’re working on internal rather than public projects, issue aggregators are less relevant. However, if you have the resources, it’s worthwhile to consider the recruiting device of open sourcing an internal tool that would be useful to others. If an engineer uses and improves that tool, you get a tool improvement and they get some mentorship. In the long term, you also get a unique opportunity to improve that engineer’s opinion of your organization while networking with your engineers, which can make them more likely to want to work for you later.

Follow Up

For questions, you’re welcome to chat with me on Twitter (@QEDunham), email (automacon <at> edunham <dot> net), or IRC (edunham on irc.freenode.net and irc.mozilla.org).

Slides from the talk are here.

The Mozilla BlogHelp Fix Copyright: Send a Rebellious Selfie to European Parliament (Really!)

The EU’s proposed copyright reform keeps in place retrograde laws that make many normal online creative acts illegal. The same restrictive laws will stifle innovation and hurt technology businesses. Let’s fix it. Sign Mozilla’s petition, watch and share videos, and snap a rebellious selfie.

Earlier this month, the EU Commission released their proposal for a reformed copyright framework. In response, we are asking everyone reading this post to take a rebellious selfie and send that doctored snapshot to EU Parliament. Seem ridiculous? So is an outdated law that bans taking and sharing selfies in front of the Eiffel Tower at night in Paris, or in front of the Little Mermaid in Copenhagen.

Of course, no one is actually going to jail for subversive selfies. But the technical illegality of such a basic online act underscores the grave shortcomings in the EU’s latest proposal on copyright reform. As Mozilla’s Denelle Dixon-Thayer noted in her last post on the proposed reform, it “thoroughly misses the goal to deliver a modern reform that would unlock creativity and innovation.” It doesn’t, for instance, include needed exceptions for panorama, parody, or remixing, nor does it include a clause that would allow noncommercial transformations of works (like remixes, or mashups) or a flexible user clause like an open norm, or fair dealing.

Translation? Making memes and gifs will remain an illicit act.

And that’s just the start. Exceptions for text and data mining are limited to public institutions. This could stifle startups looking to online data to build innovative businesses. Then there is the dangerous “neighbouring right,” similar to the ancillary copyright laws we’ve seen in Spain and Germany (both of which have been clear failures). This misguided part of the reform would allow online publishers to copyright “press publications” for up to 20 years, with retroactive effect. The vague wording makes it unclear exactly to whom and for whom this new exclusive right would apply.

Finally, another unclear provision would require any internet service that provides access to “large amounts” of works to users to broker agreements with rightsholders for the use of, and protection of, their works. This could include the use of “effective content recognition technologies” — which imply universal monitoring and strict filtering technologies that identify and/or remove copyrighted content.

These proposals, if adopted as they are, would deal a blow to EU startups, to independent coders, creators, and artists, and to the health of the internet as a driver for economic growth and innovation.

We’re not advocating plagiarism or piracy. Creators must be treated fairly, and properly remunerated, for their creations and works. Mozilla wants to improve copyright for everyone, so individuals are not discouraged from creating and innovating.

Mozilla isn’t alone in our objections: Over 50,000 individuals have signed our petition and demanded modern copyright laws that foster creativity, innovation, and opportunity online.

We have our work cut out for us. As the European Parliament revises the proposal this fall, we need a movement — a collection of passionate internet users who demand better, modern laws. Today, Mozilla is launching a public education campaign to support that movement.


Mozilla has created an app to highlight the absurdity of some of Europe’s outdated copyright laws. Try Post Crimes: Take a selfie in front of European landmarks that can be technically unlawful to photograph — like the Eiffel Tower’s night-time light display, or the Little Mermaid in Denmark — due to restrictive copyright laws.

Then, send your selfie as a postcard to your Member of the European Parliament (MEP). Show European policymakers how outdated copyright laws are, and encourage them to forge a more future-looking and innovation-friendly copyright reform.

We’ve also created three short videos that outline the need for reform. They’re educational, playful, and a little bit weird — just like the internet. But they explore a serious issue: The harmful effect outdated and restrictive copyright laws have on our creativity and the open internet. We hope you’ll watch them and share them with others.

We need your help standing up for better copyright laws. When you sign the petition, snap a selfie, or share our videos, you’re supporting creativity, innovation and opportunity online — for everyone.

This Week In RustThis Week in Rust 149

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

RustConf/RustFest Experiences

New Crates & Project Updates

Crate of the Week

Somewhat unsurprisingly, this week's crate of the week is ripgrep. In case you've missed it, this is a replacement for grep/ag/pt/whatever search tool you use, and it absolutely smokes the competition in most performance tests. Thanks to DanielKeep for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

77 pull requests were merged in the last two weeks.

New Contributors

  • aclarry
  • Alexander von Gluck IV
  • Andrew Lygin
  • Ashley Williams
  • Austin Hicks
  • Eitan Adler
  • Gianni Ciccarelli
  • jacobpadkins
  • James Duley
  • Joe Neeman
  • Niels Sascha Reedijk
  • Vanja Cosic

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email the Rust Community Team for access.

fn work(on: RustProject) -> Money

No jobs listed for this week.

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

You can actually return Iterators without summoning one of the Great Old Ones now, which is pretty cool.

/u/K900_ on reddit.

Thanks to Johan Sigfrids for the suggestion.
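For context: the quote is presumably about the then nightly-only impl Trait feature, which lets a function return an iterator without naming (or boxing) its full adapter type. A minimal sketch – the function here is invented for illustration, and on nightly of that era it needed a feature gate:

#![feature(conservative_impl_trait)] // nightly-only at the time

// Return an iterator without spelling out the full adapter type.
fn evens(limit: u32) -> impl Iterator<Item = u32> {
    (0..limit).filter(|n| n % 2 == 0)
}

fn main() {
    assert_eq!(evens(10).collect::<Vec<_>>(), vec![0, 2, 4, 6, 8]);
}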

Submit your quotes for next week!

This Week in Rust is edited by: nasa42, llogiq, and brson.

Nikki BeeRustConf 2016

So, a couple weeks ago I got to go to RustConf 2016, the first instance of an annual event! What was it about? The relatively new programming language Rust, from Mozilla!

Attending

I was only able to attend thanks to three factors: a travel reimbursement allowance from Outreachy for my recent internship with them; a scholarship ticket from RustConf to cover attendance fees; and being able to stay at a friend’s place in Portland for a few days. Were it not for each of those, I would not have felt financially comfortable attending! I’m very grateful for all of the support.

The Morning

I went to RustConf with my now wife, Decky Coss (who also was able to attend under the same circumstances as me!). We decided that, since the event is only a day long and given our track record for having a hard time getting the “full” experience out of conventions, we would work extra hard on getting there on time. As such, we only missed 25 minutes of the first talk.

Of course, I’m not pleased that that happened, but we had the disadvantage of some rough flights the day before, as well as not staying on or near the location. Plus, we’re just not cut out for whatever it takes to attend conferences studiously. But we did our best that day and I think that was OK.

As I said, we missed the first half of the opening keynote, given by Aaron Turon and Niko Matsakis, but I really liked what I saw. It felt like a neat overview of Rust, especially concerning common issues newbies (like me!) run into with the language. It was gratifying hearing them point out the very same trip-ups I deal with, especially since they talked about how Rust developers are aware of these issues! They are looking into ways to help stop these trip-ups, such as better language syntax, and updating the Rust documentation.

Acting as a perfect follow-up to the keynote was “The Illustrated Adventure Survival Guide”, by Liz Baillie, which was easily the funnest talk. It was a drawn slideshow/comic making many metaphors and jokes representing how Rust works. I felt like it would be hard to enjoy if you didn’t have a decent understanding of Rust but then again, I suppose nobody would be there if they didn’t!

The last talk of the morning was “Integrating Rust and VLC”, by Geoffroy Couprie. It felt a bit hard to follow, in that I didn’t know what it was leading to. All the same, I greatly enjoyed seeing both a practical implementation of Rust (in this case, handling video streaming codecs), and how Rust can act as a friendlier alternative to C-like languages (something that has been heavily on my mind).

The Afternoon

Lunchtime! Lunch was set to be an hour and a half long, which, when I first saw the schedule, I thought would be excessively long. Somehow it went by very fast! Plus, following such a long morning, I was very grateful to get to unwind for a bit from listening to so much talking.

However, the dessert I had gave me a bad sugar rush and crash and I felt very out of it for the next few hours, so I don’t remember much about the talks during this time. None of them really did much for me, not that they were bad, but I just wasn’t able to pay much attention.

The Evening

Snack time! Again, I was glad to have a respite where I could chill out for a bit. Especially after having a hard time keeping focus for a few hours. This break was not nearly as long as lunch, but if it was any longer, I think the conference would have been too long.

The first talk of the evening was “A Modern Editor in Rust”, presented by Raph Levien. Raph talked about the text editor Xi. I was excited by some of the things offered in it, since it’s made to tackle unique issues, such as quickly loading and scrolling through massive (like, hundreds of megabytes big) files. I didn’t see much in it that I thought I needed personally, but I appreciate seeing programs for solving unique challenges like that.

The next talk I ended up writing so much about, I put it into another section.

The Second-Last Talk

Next was “RFC: In Order to Form a More Perfect union”, by Josh Triplett, which was easily the most impactful talk to me. It was a chronicle of a Request For Comments idea created by the speaker, which was recently committed into Rust! Josh described the RFC process in the Rust community as “welcoming, open, and inclusive”, all of which echoes my experience as a Rust newbie.

Josh talked about each step of their submission, including how far their proposal went (which was a bit of a surprise to them!). The best part to me was, despite it becoming one of the most discussed issues on Rust’s Github, Josh never found the proposal being taken from them by a more experienced contributor, no matter how high-level the discussion got. It’s not that I would expect such a rude thing to happen easily, but more that I greatly appreciate the speaker being given ample room to learn how to become a better contributor.

Previously, I contributed to Servo (the browser engine written in Rust) for my Outreachy internship, which had formal processes for my additions to be merged into the main code. That makes it obviously different than how it would be for any other contributor, since my internship gave me a unique position in relation to a mentor who knew the ins and outs of the Servo project. I never felt like I would be able to replicate this anywhere else.

All that said, if I ever contribute to Rust more informally, this talk has given me a great idea of what to expect, and how to prepare to enter into doing so. Whenever I have an idea I want to share with Rust, I won’t hesitate over doing so! Also, the next time I contribute to another FOSS project, I’ll be thinking of the process here, and try to map the open Rust process to that project!

The Closing Keynote

Finally, the closing keynote, presented by Julia Evans. She’s the only person I already recognized, as we’re both Recurse Center alums. During the snack break Decky and I decided to try to find her to chat for a couple minutes. We managed to do so, and got copies of a delightful zine she wrote about Linux tools she’s found invaluable for programming! You can find it on her website.

By this time I felt pretty fatigued from the long day, but Julia’s talk was very energetic, which made me enthused to pay attention. Her keynote was focused on learning systems programming with Rust while new to the language: two things that felt challenging to her, but that she was able to get into surprisingly quickly. I really relate to that sort of experience, since many times a new project I wanted to do seemed insurmountable, but became much easier after I started!

I’ve never done any systems programming, but when I’ve told friends about Rust, a couple of them have asked if they can do X or Y sort of thing in it, and I don’t often have an answer. Based on Julia’s talk, I feel comfortable saying systems programming is a good bet!

Other Things

That’s it for the talks themselves, although I had a few more thoughts I wanted to share.

  • As part of being late (as I mentioned before), Decky and I missed the breakfast offered by RustConf, although we hadn’t realized there was any before we got there. Having lunch and snacks offered as part of the conference was fantastic. If Decky and I had to go out for lunch we probably would have missed a talk. Although, in the future, I’ll be skipping out on sweets, even free ones, because sugar is just really bad for me :(.

  • I’ve never done an event that’s so continuously long. The schedule for RustConf was about 9 hours straight, and I was quite exhausted by the end of it. In the future, I’d prefer an event with multiple, shorter days! Conventions in general are fatiguing for me, but having to sit all the time for a long day takes a lot out of me.

  • I know better than to assume on an individual level, but I felt dismayed by the low ratio of women to men present at the conference. As a result of feeling a bit stranded by that, I felt uncomfortable talking to strangers. The only people I talked to - for more than a few sentences - were people I knew, or recognized from online.

  • One of the talks ended in a very forced parody of Trump’s campaign slogan which got a lot of laughs from the crowd. When that happened, my only reaction was to turn to Decky and grimace to show her how unhappy I was. Comedy shouldn’t be used to trivialize fascists, it should be used to kill fascism.

Doug BelshawCurriculum as algorithm


Way back in Episode 39 of Today In Digital Education, the podcast I record every week with Dai Barnes, we discussed the concept of ‘curriculum as algorithm’. If I remember correctly, it was Dai who introduced the idea.

The first couple of things that pop into my mind when considering curricula through an algorithmic lens are:

But let’s rewind and define our terms, including their etymology. First up, curriculum:

In education, a curriculum… is broadly defined as the totality of student experiences that occur in the educational process. The term often refers specifically to a planned sequence of instruction, or to a view of the student’s experiences in terms of the educator’s or school’s instructional goals.
[…]
The word “curriculum” began as a Latin word which means “a race” or “the course of a race” (which in turn derives from the verb currere meaning “to run/to proceed”). (Wikipedia)

…and algorithm:

In mathematics and computer science, an algorithm… is a self-contained step-by-step set of operations to be performed. Algorithms perform calculation, data processing, and/or automated reasoning tasks.

The English word ‘algorithm’ comes from Medieval Latin word algorism and French-Greek word “arithmos”. The word ‘algorism’ (and therefore, the derived word ‘algorithm’) come from the name al-Khwārizmī. Al-Khwārizmī (Persian: خوارزمی, c. 780–850) was a Persian mathematician, astronomer, geographer, and scholar. English adopted the French term, but it wasn’t until the late 19th century that “algorithm” took on the meaning that it has in modern English. (Wikipedia)

So my gloss on the above would be that a curriculum is the container for student experiences, and an algorithm provides the pathways. In order to have an algorithmic curriculum, there need to be disaggregated learning content and activities that can serve as data points. Instead of a forced group march, the curriculum takes shape around the individual or small groups - much like what is evident in Khan Academy’s Knowledge Map.

The problem is, as the etymology of ‘algorithm’ suggests, it’s much easier to do this for subjects like maths, science, and languages than it is for humanities subjects. The former subjects are true/false, have concepts that obviously build on top of one another, and are based on logic. Not all of human knowledge works like that.

For this reason, then, we need to think of a way in which learning content and activities in the humanities can be represented in a way that builds around the learner. This, of course, is exactly what great teachers do in these fields: they personalise the quest for human knowledge based on student interests and experience.

One thing that I think is under-estimated in learning is the element of serendipity. To some extent, serendipity is the opposite of an algorithm. My Amazon recommendations mean I get more of the same. Stumbling across a secondhand bookshop, on the other hand, means I could head off in an entirely new direction due to a serendipitous find.

Using services such as StumbleUpon makes the web a giant serendipity engine. But it’s not a curriculum, as such. What I envisage by curriculum as algorithm is a fine balance between:

  • Continuously-curated learning content and activities
  • Formative feedback
  • Multiple pathways to diverse goals
  • Flexible accreditation

I’m hoping that as the Open Badges system evolves, we move beyond the metaphor of ‘playlists’ for learning, towards curriculum as algorithm. I’d love to get some funding to explore this further…


Comments? Questions? Get in touch: @dajbelshaw / mail@dougbelshaw.com

Image CC BY x6e38

Christian HeilmannHelp making the fourth industrial revolution less scary

I spent last week in Germany at an event, sponsored by the government agency for unemployment, covering the digitalisation of the job market and the subsequent loss of jobs.

me, giving a keynote on machine learning and work

When the agency approached me to give a keynote on the upcoming “fourth industrial revolution” and what machine learning and artificial intelligence mean for the job market I was – to put it mildly – bricking it. All the other presenters at the event had several doctoral titles and were professors of this and that. And here I was, being asked to deliver the “future” to an audience of company owners, university professors and influential people who decide the employment fate of thousands of people.

Expert Panel

I went into hermit mode and watched, read and translated dozens of videos and articles on AI and the work environment. In the end, I took a more detailed look at the conference schedule and realised that most of the subject matter data would be covered by the presenter before me.

Thus I delivered a talk covering the current situation of AI and what it means for us as job seekers and employers. The slides and screencast are in German, but I am looking forward to translating them and maybe delivering them in a European frame soon.

The slide deck is on Slideshare, and even without knowing German, you should get the gist:

The screencast is on YouTube:

The feedback was overwhelming and humbling. I got interviewed by the local TV station, where I mostly deflected the negative and defeatist attitudes towards artificial intelligence that the media loves to portray.

tv interview

I also got a half page spread in the local newspaper where – to the amusement of my friends – I was touted as a “fascinating prophet”.

Newspaper article

During the expert panel on digital security I had a few interesting encounters. Whilst in general it felt tough to see how inflexible and outdated some of the attitudes of companies towards computers were, there is a lot of innovation happening even in rural areas. I was especially impressed with the state of robots in warehouses and the investment of the European Union in Blockchain solutions and security research.

One thing I am looking forward to is working with a cybersecurity centre in the area, giving workshops on social engineering and IoT security.

A few things I learned and I’d like you to also consider:

  • We are at the cusp – if not in the middle of – a new digital revolution
  • Our job as people in the know is to reach out to those who are afraid of it and give out sensible information as a counterpoint to some of the fearmongering of the press
  • It is incredibly rewarding to go out of our comfort zone and echo chamber and talk to people with real business and social change issues. It humbles you and makes you wonder just how you ended up knowing all that you do.
  • The good social aspects of our jobs could be a blueprint for other companies to work and change to be resilient towards replacement by machines
  • German is hard :)

So, be brave, offer to present at places not talking about the latest flavour of JavaScript or CSS preprocessing. The world outside our echo chamber needs us.

Or as The Interrupters put it: What’s your plan for tomorrow?

Gervase MarkhamGPLv2 Combination Exception for the Apache 2 License

CW: heavy open source license geekery ahead.

One unfortunate difficulty with open source licensing is that some lawyers, including those at the FSF, consider the Apache License 2.0 incompatible with the GPL 2.0, which is to say that you can’t combine Apache 2.0-licensed code with GPL 2.0-licensed code and distribute the result. This is annoying because when choosing a permissive licence, we want people to use the more modern Apache 2.0 over the older BSD or MIT licenses, because it provides some measure of patent protection. And this incompatibility discourages people from doing that.

This was a concern for Mozilla when determining the correct licensing for Rust, and this is why the standard Rust license is a dual license – the choice of Apache 2.0 or MIT. The idea was that Apache 2.0 would be the normal license, but people could choose MIT if they wanted to combine “Rust license” code with GPL 2.0 code.

However, the LLVM project has now had notable open source attorney Heather Meeker come up with an exception to be added to the Apache 2.0 license to enable GPL 2.0 compatibility. This exception meets a number of important criteria for a legal fix for this problem:

  • It’s an additional permission, so is unlikely to affect the open source-ness of the license;
  • It doesn’t require the organization using it to take a position on the question of whether the two licenses are actually compatible or not;
  • It’s specific to the GPL 2.0, thereby constraining its effects to solving the problem.

Here it is:

---- Exceptions to the Apache 2.0 License: ----

In addition, if you combine or link compiled forms of this Software with software that is licensed under the GPLv2 (“Combined Software”) and if a court of competent jurisdiction determines that the patent provision (Section 3), the indemnity provision (Section 9) or other Section of the License conflicts with the conditions of the GPLv2, you may retroactively and prospectively choose to deem waived or otherwise exclude such Section(s) of the License, but only in their entirety and only with respect to the Combined Software.

---- end ----

It seems very well written to me; I wish it had been around when we were licensing Rust.

Gervase MarkhamIntroducing Deliberate Protocol Errors: Langley’s Law

Google have just published the draft spec for a protocol called Roughtime, which allows clients to determine the time to within the nearest 10 seconds or so without the need for an authoritative trusted timeserver. One part of their ecosystem document caught my eye – it’s like a small “chaos monkey” for protocols, where their server intentionally sends out a small subset of responses with various forms of protocol error:

A healthy software ecosystem doesn’t arise by specifying how software should behave and then assuming that implementations will do the right thing. Rather we plan on having Roughtime servers return invalid, bogus answers to a small fraction of requests. These bogus answers would contain the wrong time, but would also be invalid in another way. For example, one of the signatures might be incorrect, or the tags in the message might be in the wrong order. Client implementations that don’t implement all the necessary checks would find that they get nonsense answers and, hopefully, that will be sufficient to expose bugs before they turn into a Blackhat talk.
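To make that concrete, here is a toy sketch in Rust of a server that misbehaves on purpose. Everything in it – the types, the field names, the one-in-a-thousand rate – is invented for illustration; a real Roughtime server signs responses with Ed25519 rather than the placeholder below:

use std::time::{SystemTime, UNIX_EPOCH};

struct Response {
    midpoint_secs: u64,   // the server's claimed time
    signature: [u8; 64],  // stand-in for a real signature
}

fn sign(_t: u64) -> [u8; 64] {
    [0u8; 64] // placeholder; a real server signs with its private key
}

fn answer(reply_counter: u64) -> Response {
    let now = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .expect("clock set before 1970")
        .as_secs();
    let mut resp = Response { midpoint_secs: now, signature: sign(now) };
    // For a small fraction of replies, send the wrong time *and* a
    // corrupted signature, so clients that skip validation get nonsense.
    if reply_counter % 1000 == 0 {
        resp.midpoint_secs = now.wrapping_add(1_000_000);
        resp.signature[0] ^= 0xff;
    }
    resp
}

fn main() {
    let r = answer(1000);
    println!("time = {}, sig starts with {}", r.midpoint_secs, r.signature[0]);
}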

The fascinating thing about this is that it’s a complete reversal of the ancient Postel’s Law regarding internet protocols:

Be conservative in what you send, be liberal in what you accept.

This behaviour instead requires implementations to be conservative in what they accept, otherwise they will get garbage data. And it also involves being, if not liberal, then certainly occasionally non-conforming in what they send.

Postel’s law has long been criticised for leading to interoperability issues – see HTML for an example of how accepting anything can be a nightmare, with the WHATWG having to come along and spec things much more tightly later. However, simply reversing the second half, to be conservative in what you accept, doesn’t work well either – see XHTML/XML and the yellow screen of death for an example of a failure to solve the HTML problem that way. This type of change wouldn’t work in many protocols, but the particular design of this one, where you have to ask a number of different servers for their opinion, makes it possible. It will be interesting to see whether reversing Postel will lead to more interoperable software. Let’s call it “Langley’s Law”:

Be occasionally evil in what you send, and conservative in what you accept.

QMOFirefox 50 Beta 3 Testday, September 30th

Hello Mozillians,

We are happy to announce that Friday, September 30th, we are organizing Firefox 50 Beta 3 Testday. We will be focusing our testing on Pointer Lock API and WebM EME support for Widevine features. Check out the detailed instructions via this etherpad.

No previous testing experience is required, so feel free to join us on #qa IRC channel where our moderators will offer you guidance and answer your questions.

Join us and help us make Firefox better! See you on Friday!

Karl Dubost[worklog] Edition 037. Autumn is here. The fall of -webkit-

Two Japanese national holidays during the week. And there goes the week. Tune of the Week: Anderson .Paak - Silicon Valley

Webcompat Life

Progress this week:

Today: 2016-09-26T09:24:48.064519
336 open issues
----------------------
needsinfo       13
needsdiagnosis  110
needscontact    9
contactready    27
sitewait        161
----------------------

You are welcome to participate

  • Monday day off in Japan: Respect for the elders.
  • Thursday day off in Japan: Autumn Equinox.

Firefox 49 has been released, and an important piece of cake is now delivered to every user. You can get some context on why some (not all) -webkit- prefixes landed in Gecko and the impact on Web standards.

We have a team meeting soon in Taipei.

The W3C was meeting this week in Lisbon. Specifically about testing.

I did a bit of Prefetch links testing and how they appear in devtools.

Webcompat issues

(a selection of some of the bugs worked on this week).

  • Little by little we are accumulating our issues list about CSS zoom. Firefox is the only one not to support the non-standard property. It originated in IE (Trident), was imported into WebKit (Safari), then kept alive in Blink (Chrome, Opera), and finally made its way into Edge. Sadness.

Webcompat.com development

Reading List

Follow Your Nose

TODO

  • Document how to write tests on webcompat.com using test fixtures.
  • ToWrite: Amazon prefetching resources with <object> for Firefox only.

Otsukare!

Mic Berman

How will you Lead? From a talk at MarsDD in Toronto April 2016

The Servo BlogThis Week In Servo 79

In the last week, we landed 96 PRs in the Servo organization’s repositories.

Promise support has arrived in Servo, thanks to hard work by jdm, dati91, and mmatyas! This does not fully implement microtasks, but unblocks the use of Promises in many places (e.g., the WebBluetooth test suite).

Emilio rewrote the bindings generation code for rust-bindgen, dramatically improving the flow of the code and output generated when producing Rust bindings for C and C++ code.

The TPAC WebBluetooth standards meeting talked a bit about the great progress by the team at the University of Szeged in the context of Servo.

Planning and Status

Our overall roadmap is available online and now includes the Q3 plans. The Q4 and 2017 planning will begin shortly!

This week’s status updates are here. We have been having a conversation on the mailing list about how to better involve all contributors to the Servo project and especially improve the visibility into upcoming work - please make your ideas and opinions known!

Notable Additions

  • bholley made it possible to manage the Gecko node data without using FFI calls
  • aneeshusa improved Homu so that it would ignore Work in Progress (WIP) pull requests
  • wdv4758h implemented iterators for FormData
  • nox updated our macOS builds to use libc++ instead of libstdc++
  • TheKK added support for noreferrer when determining referrer policies
  • manish made style unit tests run on all properties (including stylo-only ones)
  • gw added the OSMesa source, a preliminary step towards better headless testing on CI
  • emilio implemented improved support for function pointers, typedefs, and macOS’s stdlib in bindgen
  • schuster styled the input text element with user-agent CSS rather than hand-written Rust code
  • jeenalee added support for open-ended dictionaries in the Headers API
  • saneyuki fixed the build failures in SpiderMonkey on macOS Sierra
  • mrobinson added support for background-repeat properties space and round
  • pcwalton improved the layout of http://python.org
  • phrohdoh implemented the minlength attribute for text inputs
  • anholt improved WebGL support
  • mmatyas added ARM support to WebRender
  • ms2ger implemented safe, high-level APIs for manipulating JS typed arrays
  • manish added the ability to uncompute a style back to its specified value, in support of animations
  • cbrewster added an option to replace the current session entry when reloading a page
  • kichjang changed the loading of external scripts to use the Fetch network stack
  • splav implemented the HTMLOptionsCollection API
  • cynicaldevil fixed a panic involving <link> elements and the rel attribute

New Contributors

Interested in helping build a web browser? Take a look at our curated list of issues that are good for new contributors!

Screenshot

Demo of the in-progress fetch() API:

Niko MatsakisIntersection Impls

As some of you are probably aware, on the nightly Rust builds, we currently offer a feature called specialization, which was defined in RFC 1210. The idea of specialization is to improve Rust’s existing coherence rules to allow for overlap between impls, so long as one of the overlapping impls can be considered more specific. Specialization is hotly desired because it can enable powerful optimizations, but also because it is an important component for modeling object-oriented designs.

The current specialization design, while powerful, is also limited in a few ways. I am going to work on a series of articles that explore some of those limitations as well as possible solutions.

This particular post serves two purposes: it describes the running example I want to consider, and it describes one possible solution: intersection impls (more commonly called lattice impls). We’ll see that intersection impls are a powerful feature, but they don’t completely solve the problem I am aiming to solve, and they also introduce other complications. My conclusion is that they may be a part of the final solution, but are not sufficient on their own.

Running example: interconverting between Copy and Clone

I’m going to structure my posts around a detailed look at the Copy and Clone traits, and in particular about how we could use specialization to bridge between the two. These two traits are used in Rust to define how values can be duplicated. The idea is roughly like this:

  • A type is Copy if it can be copied from one place to another just by copying bytes (i.e., with memcpy). This is basically types that consist purely of scalar values (e.g., u32, [u32; 4], etc).
  • The Clone trait expands upon Copy to include all types that can be copied at all, even if it requires executing custom code or allocating memory (for example, a String or Vec<u32>).

These two traits are clearly related. In fact, Clone is a supertrait of Copy, which means that every type that is copyable must also be cloneable.
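Indeed, the standard library encodes this relationship directly in the declaration of Copy (shown here slightly simplified; Clone also has a defaulted clone_from method, elided below):

pub trait Clone {
    fn clone(&self) -> Self;
}

pub trait Copy: Clone {
    // empty: Copy is a pure marker trait
}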

For better or worse, supertraits in Rust work a bit differently than superclasses from OO languages. In particular, the two traits are still independent from one another. This means that if you want to declare a type to be Copy, you must also supply a Clone impl. Most of the time, we do that with a #[derive] annotation, which auto-generates the impls for you:

#[derive(Copy, Clone, ...)]
struct Point {
    x: u32,
    y: u32,
}

That derive annotation will expand out to two impls looking roughly like this:

struct Point {
    x: u32,
    y: u32,
}

impl Copy for Point {
    // Copy has no methods; it can also be seen as a "marker"
    // that indicates that a cloneable type can also be
    // memcopy'd.
}

impl Clone for Point {
    fn clone(&self) -> Point {
        *self // this will just do a memcpy
    }
}

The second impl (the one implementing the Clone trait) seems a bit odd. After all, that impl is written for Point, but in principle it could be used for any Copy type. It would be nice if we could add a blanket impl that converts from Copy to Clone that applies to all Copy types:

// Hypothetical addition to the standard library:
impl<T:Copy> Clone for T {
    fn clone(&self) -> Point {
        *self
    }
}

If we had such an impl, then there would be no need for Point above to implement Clone explicitly, since it implements Copy, and the blanket impl can be used to supply the Clone impl. (In other words, you could just write #[derive(Copy)].) As you have probably surmised, though, it’s not that simple. Adding a blanket impl like this has a few complications we’d have to overcome first. This is still true with the specialization system described in RFC 1210.

There are a number of examples where these kinds of blanket impls might be useful. Some examples: implementing PartialOrd in terms of Ord, implementing PartialEq in terms of Eq, and implementing Debug in terms of Display.

Coherence and backwards compatibility

Hi! I’m the language feature coherence! You may remember me from previous essays like Little Orphan Impls or RFC 1023.

Let’s take a step back and just think about the language as it is now, without specialization. With today’s Rust, adding a blanket impl<T:Copy> Clone for T would be massively backwards incompatible. This is because of the coherence rules, which aim to prevent there from being more than one impl of a trait applicable to any type (or, for generic traits, set of types).

So, if we tried to add the blanket impl now, without specialization, it would mean that every type annotated with #[derive(Copy, Clone)] would stop compiling, because we would now have two clone impls: one from derive and the blanket impl we are adding. Obviously not feasible.

Why didn’t we add this blanket impl already then?

You might then wonder why we didn’t add this blanket impl converting from Copy to Clone in the wild west days, when we broke every existing Rust crate on a regular basis. We certainly considered it. The answer is that, if you have such an impl, the coherence rules mean that it would not work well with generic types.

To see what problems arise, consider the type Option:

#[derive(Copy, Clone)]
enum Option<T> {
    Some(T),
    None,
}

You can see that Option<T> derives Copy and Clone. But because Option is generic over T, those impls have a slightly different look to them once we expand them out:

impl<T:Copy> Copy for Option<T> { }

impl<T:Clone> Clone for Option<T> {
    fn clone(&self) -> Option<T> {
        match *self {
            Some(ref v) => Some(v.clone()),
            None => None,
        }
    }
}

Before, the Clone impl for Point was just *self. But for Option<T>, we have to do something more complicated, which actually calls clone on the contained value (in the case of a Some). To see why, imagine a type like Option<Rc<u32>> – this is clearly cloneable, but it is not Copy. So the impl is rewritten so that it only assumes that T: Clone, not T: Copy.

The problem is that types like Option<T> are sometimes Copy and sometimes not. So if we had the blanket impl that converts all Copy types to Clone, and we have the impl above that implements Clone for Option<T> if T: Clone, then we can easily wind up in a situation where there are two applicable impls. For example, consider Option<u32>: it is Copy, and hence we could use the blanket impl that just returns *self. But it also fits the Clone-based impl I showed above. This is a coherence violation, because now the compiler has to pick which impl to use. Obviously, in the case of the trait Clone, it shouldn’t matter too much which one it chooses, since they both have the same effect, but the compiler doesn’t know that.
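If you want to see this conflict for yourself, here is a self-contained version you can feed to today’s compiler. It uses a stand-in trait MyClone, since we can’t add a second impl of the real Clone from outside the standard library; rustc rejects the pair with a coherence error (E0119):

trait MyClone {
    fn my_clone(&self) -> Self;
}

// The hypothetical blanket impl:
impl<T: Copy> MyClone for T {
    fn my_clone(&self) -> T {
        *self
    }
}

// ERROR: conflicting implementations, because both impls could
// apply to a type like Option<u32>, which is Copy.
impl<T: MyClone> MyClone for Option<T> {
    fn my_clone(&self) -> Option<T> {
        match *self {
            Some(ref v) => Some(v.my_clone()),
            None => None,
        }
    }
}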

Enter specialization

OK, all of that prior discussion was assuming the Rust of today. So what if we adopted the existing specialization RFC? After all, its whole purpose is to improve coherence so that it is possible to have multiple impls of a trait for the same type, so long as one of those implementations is more specific. Maybe that applies here?

In fact, the RFC as written today does not. The reason is that the RFC defines rules that say an impl A is more specific than another impl B if impl A applies to a strict subset of the types which impl B applies to. Let’s consider some arbitrary trait Foo. Imagine that we have an impl of Foo that applies to any Option<T>:

impl<T> Foo for Option<T> { .. }

The more specific rule would then allow a second impl for Option<i32>; this impl would specialize the more generic one:

impl Foo for Option<i32> { .. }

Here, the second impl is more specific than the first, because while the first impl can be used for Option<i32>, it can also be used for lots of other types, like Option<u32>, Option<i64>, etc. So that means that these two impls would be accepted under RFC #1210. If the compiler ever had to choose between them, it would prefer the impl that is specific to Option<i32> over the generic one that works for all T.

But if we try to apply that rule to our two Clone impls, we run into a problem. First, we have the blanket impl:

impl<T:Copy> Clone for T { .. }

and then we have an impl tailored to Option<T> where T: Clone:

impl<T:Clone> Clone for Option<T> { .. }

Now, you might think that the second impl is more specific than the blanket impl. After all, the blanket impl can be used for any type, whereas the second impl can only be used for Option<T>. Unfortunately, this isn’t quite right. After all, the blanket impl cannot be used for any type T: it can only be used for Copy types. And we already saw that there are lots of types for which the second impl can be used where the first impl is inapplicable. In other words, neither impl is a subset of the other – rather, they cover two distinct, but overlapping, sets of types.

To see what I mean, let’s look at some examples:

| Type              | Blanket impl | `Option` impl |
| ----              | ------------ | ------------- |
| i32               | APPLIES      | inapplicable  |
| Box<i32>          | inapplicable | inapplicable  |
| Option<i32>       | APPLIES      | APPLIES       |
| Option<Box<i32>>  | inapplicable | APPLIES       |

Note in particular the first and fourth rows. The first row shows that the blanket impl is not a subset of the Option impl. The last row shows that the Option impl is not a subset of the blanket impl either. That means that these two impls would be rejected by RFC #1210 and hence adding a blanket impl now would still be a breaking change. Boo!

To see the problem from another angle, consider this Venn diagram, which indicates, for every impl, the sets of types that it matches. As you can see, there is overlap between our two impls, but neither is a strict subset of the other:

+-----------------------------------------+
|[impl<T:Copy> Clone for T]               |
|                                         |
| Example: i32                            |
| +---------------------------------------+-----+
| |                                       |     |
| | Example: Option<i32>                  |     |
| |                                       |     |
+-+---------------------------------------+     |
  |                                             |
  |   Example: Option<Box<i32>>                 |
  |                                             |
  |          [impl<T:Clone> Clone for Option<T>]|
  +---------------------------------------------+

Enter intersection impls

One of the first ideas proposed for solving this is the so-called lattice specialization rule, which I will call intersection impls, since I think that captures the spirit better. The intuition is pretty simple: if you have two impls that have a partial intersection, but which don’t strictly subset one another, then you can add a third impl that covers precisely that intersection, and hence which subsets both of them. So now, for any type, there is always a most specific impl to choose. To get the idea, it may help to consider this ASCII Art Venn diagram. Note the difference from above: there is now an impl (indicated with = lines and . shading) covering precisely the intersection of the other two.

+-----------------------------------------+
|[impl<T:Copy> Clone for T]               |
|                                         |
| Example: i32                            |
| +=======================================+-----+
| |[impl<T:Copy> Clone for Option<T>].....|     |
| |.......................................|     |
| |.Example: Option<i32>..................|     |
| |.......................................|     |
+-+=======================================+     |
  |                                             |
  |   Example: Option<Box<i32>>                 |
  |                                             |
  |          [impl<T:Clone> Clone for Option<T>]|
  +---------------------------------------------+

Intersection impls have some nice properties. For one thing, it’s a kind of minimal extension of the existing rule. In particular, if you are just looking at any two impls, the rules for deciding which is more specific are unchanged: the only difference when adding in intersection impls is that coherence permits overlap when it otherwise wouldn’t.

They also give us a good opportunity to recover some optimization. Consider the two impls in this case: the blanket impl that applies to any T: Copy simply copies some bytes around, which is very fast. The impl that is tailored to Option<T>, however, does more work: it matches on the Option and then recursively calls clone. This work is necessary if T: Copy does not hold, but otherwise it’s wasted work. With an intersection impl, we can recover the full performance:

// intersection impl:
impl<T:Copy> Clone for Option<T> {
    fn clone(&self) -> Option<T> {
        *self // since T: Copy, we can do this here
    }
}

A note on compiler messages

I’m about to pivot and discuss the shortcomings of intersection impls. But before I do so, I want to talk a bit about the compiler messages here. I think that the core idea of specialization – that you want to pick the impl that applies to the most specific set of types – is fairly intuitive. But working it out in practice can be kind of confusing, especially at first. So whenever we propose any extension, we have to think carefully about the error messages that might result.

In this particular case, I think that we could give a rather nice error message. Imagine that the user had written these two impls:

impl<T: Copy> Clone for T { // impl A
    fn clone(&self) -> T { ... }
}

impl<T: Clone> Clone for Option<T> { // impl B
    fn clone(&self) -> Option<T> { ... }
}

As we’ve seen, these two impls overlap but neither specializes the other. One might imagine an error message that says as much, and which also suggests the intersection impl that must be added:

error: two impls overlap, but neither specializes the other
  |
2 | impl<T: Copy> Clone for T {...}
  | ----
  |
4 | impl<T: Clone> Clone for Option<T> {...}
  |
  | note: both impls apply to a type like `Option<T>` where `T: Copy`;
  |       to specify the behavior in this case, add the following intersection impl:
  |       `impl<T: Copy> Clone for Option<T>`

Note the message at the end. The wording could no doubt be improved, but the key point is that we should be able to tell you exactly what impl is still needed.

Intersection impls do not solve the cross-crate problem

Unfortunately, intersection impls don’t give us the backwards compatibility that we want, at least not by themselves. The problem is, if we add the blanket impl, we also have to add the intersection impl. Within the same crate, this might be ok. But if this means that downstream crates have to add an intersection impl too, that’s a big problem.
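To make the cross-crate problem concrete, here is a hypothetical downstream crate (all names invented). It compiles today; but if the standard library later added the blanket impl<T:Copy> Clone for T, then under the intersection rule this crate would stop compiling until its author also wrote the intersection impl shown in the comment:

pub struct Pair<T> {
    pub a: T,
    pub b: T,
}

impl<T: Copy> Copy for Pair<T> {}

impl<T: Clone> Clone for Pair<T> {
    fn clone(&self) -> Pair<T> {
        Pair { a: self.a.clone(), b: self.b.clone() }
    }
}

// Required once the upstream blanket impl exists:
//
//     impl<T: Copy> Clone for Pair<T> {
//         fn clone(&self) -> Pair<T> { *self }
//     }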

Intersection impls may force you to predict the future

There is one other problem with intersection impls that arises in cross-crate situations, which nrc described on the tracking issue: sometimes there is a theoretical intersection between impls, but that intersection is empty in practice, and hence you may not be able to write the code you wanted to write. Let me give you an example. This problem doesn’t show up with the Copy/Clone trait, so we’ll switch briefly to another example.

Imagine that we are adding a RichDisplay trait to our project. This is much like the existing Display trait, except that it can support richer formatting like ANSI codes or a GUI. For convenience, we want any type that implements Display to also implement RichDisplay (but without any fancy formatting). So we add a trait and blanket impl like this one (let’s call it impl A):

trait RichDisplay { /* elided */ }
impl<D: Display> RichDisplay for D { /* elided */ } // impl A

Now, imagine that we are also using some other crate widget that contains various types, including Widget<T>. This Widget<T> type does not implement Display. But we would like to be able to render a widget, so we implement RichDisplay for this Widget<T> type. Even though we didn’t define Widget<T>, we can implement a trait for it because we defined the trait:

impl<T: RichDisplay> RichDisplay for Widget<T> { ... } // impl B

Well, now we have a problem! You see, according to the rules from RFC 1023, impls A and B are considered to potentially overlap, and hence we will get an error. This might surprise you: after all, impl A only applies to types that implement Display, and we said that Widget<T> does not. The problem has to do with semver: because Widget<T> was defined in another crate, it is outside of our control. In this case, the other crate is allowed to implement Display for Widget<T> at some later time, and that should not be a breaking change. But imagine that this other crate added an impl like this one (which we can call impl C):

impl<T: Display> Display for Widget<T> { ... } // impl C

Such an impl would cause impls A and B to overlap. Therefore, coherence considers these to be overlapping – however, specialization does not consider impl B to be a specialization of impl A, because, at the moment, there is no subset relationship between them. So there is a kind of catch-22 here: because the impl may exist in the future, we can’t consider the two impls disjoint, but because it doesn’t exist right now, we can’t consider them to be specializations.

Clearly, intersection impls don’t help to address this issue, as the set of intersecting types is empty. You might imagine having some alternative extension to coherence that permits impl B, on the logic that if impl C were added in the future, that’d be fine, because impl B would then be a specialization of impl A.

This logic is pretty dubious, though! For example, impl C might have been written another way (we’ll call this alternative version of impl C impl C2):

impl<T: WidgetDisplay> Display for Widget<T> { ... } // impl C2
//   ^^^^^^^^^^^^^^^^ changed this bound

Note that instead of working for any T: Display, there is now some other trait T: WidgetDisplay in use. Let’s say it’s only implemented for optional 32-bit integers right now (for some reason or another):

trait WidgetDisplay { ... }
impl WidgetDisplay for Option<i32> { ... }

So now if we had impls A, B, and C2, we would have a different problem. Now impls A and B would overlap for Widget<Option<i32>>, but they would not overlap for Widget<String>. The reason here is that Option<i32>: WidgetDisplay, and hence impl A applies. But String: RichDisplay (because String: Display) and hence impl B applies. Now we are back in the territory where intersection impls come into play. So, again, if we had impls A, B, and C2, one could imagine writing an intersection impl to cover this situation:

impl<T: RichDisplay + WidgetDisplay> RichDisplay for Widget<T> { ... } // impl D

But, of course, impl C2 has yet to be written, so we can’t really write this intersection impl now, in advance. We have to wait until the conflict arises before we can write it.

You may have noticed that I was careful to specify that both the Display trait and Widget type were defined outside of the current crate. This is because RFC 1023 permits the use of negative reasoning if either the trait or the type is under local control. That is, if the RichDisplay and the Widget type were defined in the same crate, then impls A and B could co-exist, because we are allowed to rely on the fact that Widget does not implement Display. The idea here is that the only way that Widget could implement Display is if I modify the crate where Widget is defined, and once I am modifying things, I can also make any other repairs (such as adding an intersection impl) that are necessary.

Conclusion

Today we looked at a particular potential use for specialization: adding a blanket impl that implements Clone for any Copy type. We saw that the current subset-only logic for specialization isn’t enough to permit adding such an impl. We then looked at one proposed fix for this, intersection impls (often called lattice impls).

Intersection impls are appealing because they increase expressiveness while keeping the general feel of the subset-only logic. They also have an explicit nature that appeals to me, at least in principle. That is, if you have two impls that partially overlap, the compiler doesn’t select which one should win: instead, you write an impl to cover precisely that intersection, and hence specify it yourself. Of course, that explicit nature can also be verbose and irritating sometimes, particularly since you will often want the intersection impl to behave the same as one of the other two (rather than doing some third, different thing).

Moreover, the explicit nature of intersection impls causes problems across crates:

  • they don’t allow you to add a blanket impl in a backwards compatible fashion;
  • they interact poorly with semver, and specifically the limitations on negative logic imposed by RFC 1023.

My conclusion then is that intersection impls may well be part of the solution we want, but we will need additional mechanisms. Stay tuned for additional posts.

A note on comments

As is my wont, I am going to close this post for comments. If you would like to leave a comment, please go to this thread on Rust’s internals forum instead.

Cameron KaiserTenFourFox 45.5.0b1 available: now with little-endian (integer) typed arrays, AltiVec VP9, improved MP3 support and a petulant rant

The TenFourFox 45.5.0 beta (yes, it says it's 45.4.0, I didn't want to rev the version number yet) is now available for testing (downloads, hashes). This blog post will serve as the current "release notes" since we have until November 8 for the next release and I haven't decided everything I'll put in it, so while I continue to do more work I figured I'd give you something to play with. Here's what's new so far, roughly in order of importance.

First, minimp3 has been converted to a platform decoder. Simply by doing that fixed a number of other bugs which were probably related to how we chunked frames, such as Google Translate voice clips getting truncated and problems with some types of MP3 live streams; now we use Mozilla's built-in frame parser instead and in this capacity minimp3 acts mostly as a disembodied codec. The new implementation works well with Google Translate, Soundcloud, Shoutcast and most of the other things I tried. (See, now there's a good use for that Mac mini G4 gathering dust on your shelf: install TenFourFox and set it up for remote screensharing access, and use it as a headless Internet radio -- I'm sitting here listening to National Public Radio over Shoutcast in a foxbox as I write this. Space-saving, environmentally responsible computer recycling! Yes, I know I'm full of great ideas. Yes. You're welcome.)

Interestingly, or perhaps frustratingly, although it somewhat improved Amazon Music (by making duration and startup more reliable) the issue with tracks not advancing still persisted for tracks under a certain critical length, which is dependent on machine speed. (The test case here was all the little five or six second Fingertips tracks from They Might Be Giants' Apollo 18, which also happens to be one of my favourite albums, and is kind of wrecked by this problem.) My best guess is that Amazon Music's JavaScript player interface ends up on a different, possibly asynchronous code path in 45 than 38 due to a different browser feature profile, and if the track runs out somehow it doesn't get the end-of-stream event in time. Since machine speed was a factor, I just amped up JavaScript to enter the Baseline JIT very quickly. That still doesn't fix it completely and Apollo 18 is still messed up, but it gets the critical track length down to around 10 or 15 seconds on this Quad G5 in Reduced mode and now most non-pathological playlists will work fine. I'll keep messing with it.

In addition, this release carries the first pass at AltiVec decoding for VP9. It has some of the inverse discrete cosine and one of the inverse Hadamard transforms vectorized; I also wrote vector code for two of the convolutions, but they malfunction on the iMac G4, and because a lot of these routines work on unaligned data it seems faster without them. Overall, our code really outshines the SSE2 versions it was based on, if I do say so myself. We can collapse a number of shuffles and merges into a single vector permute, and the AltiVec multiply-sum instruction can take an additional constant for use as a bias, allowing us to skip an add step (the SSE2 version must do the multiply-sum and then add the bias rounding constant in separate operations; this code occurs quite a bit). Only some of the smaller transforms are converted so far because the big ones are really intimidating. I'm able to model most of these operations on my old Core 2 Duo Mac mini, so I can do a step-by-step conversion in a relatively straightforward fashion, but it's agonizingly slow going with these bigger ones. I'm also not going to attempt any of the encoding-specific routines, so if Google wants this code they'll have to import it themselves.

G3 owners, even though I don't support video on your systems, you get a little boost too because I've also cut out the loopfilter entirely. This improves everybody's performance and the mostly minor degradation in quality just isn't bad enough to be worth the CPU time required to clean it up. With this initial work the Quad is able to play many 360p streams at decent frame rates in Reduced mode and in Highest Performance mode even some 480p ones. The 1GHz iMac G4, which I don't technically support for video as it is below the 1.25GHz cutoff, reliably plays 144p and even some easy-to-decode (pillarboxed 4:3, mostly, since it has lots of "nothing" areas) 240p. This is at least as good as our AltiVec VP8 performance and as I grind through some of the really heavyweight transforms it should get even better.

To turn this on, go to our new TenFourFox preference pane (TenFourFox > Preferences... and click TenFourFox) and make sure MediaSource is enabled, then visit YouTube. You should have more quality settings now, and I recommend turning annotations off as well. Pausing the video while the rest of the page loads, and before changing your quality setting, is always a good idea; just click once anywhere on the video itself and wait for it to stop. You can evaluate it on my scientifically validated set of abuses of grammar (and spelling), 1970s carousel tape decks, gestures we make at Gmail other than the middle finger and really weird MTV interstitials. However, without further configuration Google will "auto-"control the stream bitrate, and it makes that decision based on network speed rather than dropped frames, so I'm leaving the "slower" appellation because frankly it will be, at least by default. Nevertheless, please advise if you think MSE should be the default in the next version or if you think more baking is necessary, though the pref will be user-exposed regardless.

But the biggest and most far-reaching change is, as promised, little-endian typed arrays (the "LE" portion of the IonPower-NVLE project). The rationale for this change is that, largely due to the proliferation of asm.js code and the little-endian Emscripten systems that generate it, there will be more and more code our big-endian machines can't run properly being casually imported into sites. We saw this with images on Facebook, and later with WhatsApp Web, and also with MEGA.nz, and others, and so on, and so forth. asm.js isn't merely the domain of tech demos and high-end ported game engines anymore.

The change is intentionally very focused and very specific. Only typed array access is converted to little-endian, and only integer typed array access at that: DataView objects, the underlying ArrayBuffers and regular untyped arrays in particular remain native. When a multibyte integer (16-bit halfword or 32-bit word) is written out to a typed array in IonPower-LE, it is transparently byteswapped from big-endian to little-endian and stored in that format. When it is read back in, it is byteswapped back to big-endian. Thus, the intrinsic big-endianness of the engine hasn't changed -- jsvals and doubles are still tag followed by payload, and integers and single-precision floats are still MSB at the lowest address -- only the way it deals with an integer typed array. Since asm.js uses a big typed array buffer essentially as a heap, this is sufficient to present at least a notional illusion of little-endianness as the asm.js script accesses that buffer as long as those accesses are integer.
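
For illustration only, here is a tiny Rust sketch of that contract (TenFourFox itself is C++, and these function names are invented): integer elements are stored little-endian inside the buffer, every element access byteswaps on a big-endian host, and data outside the buffer stays native.

fn store_u32_le(heap: &mut [u8], offset: usize, value: u32) {
    // On a big-endian host to_le_bytes() performs the swap;
    // on a little-endian host it is a no-op.
    heap[offset..offset + 4].copy_from_slice(&value.to_le_bytes());
}

fn load_u32_le(heap: &[u8], offset: usize) -> u32 {
    let mut buf = [0u8; 4];
    buf.copy_from_slice(&heap[offset..offset + 4]);
    u32::from_le_bytes(buf) // swap back to native on a big-endian host
}

fn main() {
    let mut heap = vec![0u8; 16]; // stand-in for an asm.js heap buffer
    store_u32_le(&mut heap, 0, 0xDEADBEEF);
    assert_eq!(load_u32_le(&heap, 0), 0xDEADBEEF); // round-trips on any host
}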

I mentioned that floats, whether single or double precision, are not byteswapped, and there's an important reason for that, which I'll come back to in a moment. At the interpreter level, the virtual machine's typed array load and store methods are passed through the GNU gcc built-in to swap the byte order back and forth (which, at least for 32 bits, generates pretty efficient code). At the Baseline JIT level, the IonMonkey MacroAssembler is modified to call special methods that generate the swapped loads and stores in IonPower, but it wasn't nearly that simple for the full Ion JIT itself because both unboxed scalar values (which need to stay big-endian because they're native) and typed array elements (which need to be byte-swapped) go through the same code path. After I spent a couple of days struggling with this, Jan de Mooij suggested I modify the MIR for loading and storing scalar values to mark it if the operation actually accesses a typed array. I added that to the IonBuilder and now Ion-compiled code works too.

All of these integer accesses have almost no penalty: there's a little bit of additional overhead in the interpreter, but Baseline and Ion simply substitute the PowerPC byteswapped load and store instructions (lwbrx, stwbrx, lhbrx, sthbrx, etc.) that we already employ for irregexp, so we incur virtually no extra runtime overhead at all. Although the PowerPC specification warns that byte-swapped instructions may have additional latency on some implementations, no PPC chip ever used in a Power Mac falls into that category, and they aren't "cracked" on the G5 either. The pseudo-little endian mode that exists on G3/G4 systems but not on G5 is separate from these assembly language instructions, which work on all PowerPCs, including the G5, going all the way back to the original 601.

Floating point values, on the other hand, are a different story. There are no instructions to directly store a single or double precision value in a byteswapped fashion, and since there are also no direct general purpose register-floating point register moves, the float has to be spilled to memory and picked up by a GPR (or two, if it's a double) and then swapped at that point to complete the operation. To get it back requires reversing the process, along with the GPR (or two) getting spilled this time to repopulate the double or float after the swap is done. All that would have significantly penalized float arrays and we have enough performance problems without that, so single and double precision floating point values remain big-endian.
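
A rough sketch (again in Rust, purely to illustrate the shape of the problem; the function names are invented) of what a swapped double store and load would have to do:

fn store_f64_swapped(heap: &mut [u8], offset: usize, value: f64) {
    // There is no byteswapped float store, so the value must first move
    // to integer form -- on real PowerPC hardware, a spill to memory
    // picked up by one or two GPRs -- and only then can it be swapped.
    let bits: u64 = value.to_bits();
    heap[offset..offset + 8].copy_from_slice(&bits.swap_bytes().to_ne_bytes());
}

fn load_f64_swapped(heap: &[u8], offset: usize) -> f64 {
    // Reading it back reverses the process: integer load, swap, then a
    // trip back through memory into a floating point register.
    let mut buf = [0u8; 8];
    buf.copy_from_slice(&heap[offset..offset + 8]);
    f64::from_bits(u64::from_ne_bytes(buf).swap_bytes())
}

fn main() {
    let mut heap = vec![0u8; 8];
    store_f64_swapped(&mut heap, 0, 3.14);
    assert_eq!(load_f64_swapped(&heap, 0), 3.14); // round-trips either way
}

Those extra register-file crossings on every access are exactly the overhead the integer path avoids, hence the decision to leave floats big-endian.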

Fortunately, most of the little snippets of asm.js floating around (that aren't entire Emscriptenized blobs: more about that in a moment) seem perfectly happy with this hybrid approach, presumably because they're oriented towards performance and thus integer operations. MEGA.nz seems to load now, at least what I can test of it, and WhatsApp Web now correctly generates the QR code to allow your phone to sync (just in time for you to stop using WhatsApp and switch to Signal because Mark Zuckerbrat has sold you to his pimps here too).

But what about bigger things? Well ...

Yup. That's DOSBOX emulating MECC's classic Oregon Trail (from the Internet Archive's MS-DOS Game Library), converted to asm.js with Emscripten and running inside TenFourFox. Go on and try that in 45.4. It doesn't work; it just throws an exception and screeches to a halt.

To be sure, it doesn't fully work in this release of 45.5 either. But some of the games do: try playing Oregon Trail yourself, or Where in the World is Carmen Sandiego or even the original, old school in its MODE 40 splendour, Те́трис (that's Tetris, comrade). Even Commander Keen Goodbye Galaxy! runs, though not even the Quad can make it reasonably playable. In particular the first two probably will run on nearly any Power Mac since they're not particularly dependent on timing (I was playing Oregon Trail on my iMac G4 last night), though you should expect it may take anywhere from 20 seconds to a minute to actually boot the game (depending on your CPU) and I'd just mute the tab since not even the Quad G5 at full tilt can generate convincing audio. But IonPower-LE will now run them, and they run pretty well, considering.

Does that seem impractical? Okay then: how about something vaguely useful ... like ... Linux?

This is, of course, Fabrice Bellard's famous jslinux emulator, and yes, IonPower now runs this too. Please don't expect much out of it if you're not on a high-end G5; even the Quad at full tilt took about 80 seconds elapsed time to get to a root prompt. But it really works and it's usable.

Getting into ridiculous territory was running Linux on OpenRISC:

This is the jor1k emulator and it's only for the highest end G5 systems, folks. Set it to 5fps to have any chance of booting it in less than five minutes. But again -- it's not that the dog walked well; the wonder is that it walked at all.

vi freaks like me will also get a kick out of vim.js. Or, if you miss Classic apps, now TenFourFox can be your System 7 (mouse sync is a little too slow here but it boots).

Now for the bad news: notice that I said things don't fully work. With em-dosbox, the Emscriptenoberated DOSBOX, notice that I only said some games run in TenFourFox, not most, not even many. Wolfenstein 3D, for example, gets as far as the main menu and starting a new game, and then bugs out with a "Reboot requested" message which seems to originate from the emulated BIOS. (It works fine on my MacBook Air, and I did get it to run under PCE.js, albeit glacially.) Catacombs 3D just sits there, trying to load a level and never finishing. Most of the other games don't even get that far and a few don't start at all.

I also tried a Windows 95 emulator (also DOSBOX, apparently), which got partway into the boot sequence and then threw a JavaScript exception "SimulateInfiniteLoop"; the Internet Archive's arcade games under MAME, which start up and then exhaust recursion and abort (this seems like it should be fixable or tunable, but I haven't explored it further so far); and of course programs requiring WebGL will never, ever run on TenFourFox.

Debugging Emscripten goo output is quite difficult and usually causes tumours in lab rats, but several possible explanations come to mind (none of them mutually exclusive). One could be that the code actually does depend on the byte ordering of floats and doubles as well as integers, as do some of the Mozilla JIT conformance tests. However, that's not ever going to change, because it would require making everything else suck for that kind of edge case to work. Another potential explanation is that the intrinsic big-endianness of the engine is causing things to fail somewhere else, such as code inadvertently written in such a way that the resulting data gets byteswapped an asymmetric number of times, or some other such violation of assumptions. Another one is that the execution time is just too damn long and the code doesn't account for that possibility. Finally, there might simply be a bug in what I wrote, but I'm not aware of any similar hybrid-endian engine like this one, and thus I've really got nothing to compare it to.

In any case, the little-endian typed array conversion definitely fixes the stuff that needed to get fixed and opens up some future possibilities for web applications we can also run like an Intel Mac can. The real question is whether asm.js compilation (OdinMonkey, as opposed to IonPower) pays off on PowerPC now that the memory model is apparently good enough at least for most things. It would definitely run faster than IonPower, possibly several times faster, but the performance delta would not be as massive as IonPower versus the interpreter (about a factor of 40 difference), the compilation step might bring lesser systems to their knees, and it would require some significant additional engineering to get it off the ground (read: a lot more work for me to do). Given that most of our systems are not going to run these big massive applications well even with execution time cut in half or even by two-thirds (and some of them don't work correctly as it is), it might seem a real case of diminishing returns to make that investment of effort. I'll just have to see how many free cycles I have and how involved the effort is likely to be. For right now, IonPower can run them and that's the important thing.

Finally, the petulant rant. I am a fairly avid reader of Thom Holwerda's OSNews because it reports on a lot of marginal and unusual platforms and computing news that most of the regular outlets eschew. The articles are in general very interesting, including this heads-up on booting the last official GameCube game (and since the CPU in the Nintendo GameCube is a G3 derivative, that's even relevant on this blog). However, I'm going to take issue with one part of his otherwise thought-provoking discussion on the new Apple A10 processor and the alleged impending death of macOS, where he says, "I didn't refer to Apple's PowerPC days for nothing. Back then, Apple knew it was using processors with terrible performance and energy requirements, but still had to somehow convince the masses that PowerPC was better faster stronger than x86; claims which Apple itself exposed — overnight — as flat-out lies when the company switched to Intel."

Besides my issue with what he links in that last sentence as proof, which actually doesn't establish Apple had been lying (it's actually a Low End Mac piece contemporary with the Intelcalypse asking if they were), this is an incredibly facile oversimplification. Before the usual suspects hop on the comments with their usual suspecty things, let's just go ahead for the sake of argument and say everything its detractors said about the G5 and the late generation G4 systems is true, i.e., they're hot, underpowered and overhungry. (I contest the overhungry part in particular for the late laptop G4 systems, by the way. My 2005 iBook G4 to this day still gets around five hours on a charge if I'm aggressive and careful about my usage. For a 2005 system that's damn good, especially since Apple said six for the same model I own but only 4.5 for the 2008 MacBooks. At least here you're comparing Reality Distortion Field to Reality Distortion Field, and besides, all the performance/watt in the world doesn't do you a whole hell of a lot of good if your machine's out of puff.)

So let's go ahead and just take all that as given for discussion purposes. My beef with that comment is that it conveniently ignores every other PowerPC chip before the Intel transition just to make the point. For example, PC Magazine back in the day noted that a 400MHz Yosemite G3 outperformed a contemporary 450MHz Pentium II on most of their tests (read it for yourself, April 20, 1999, page 53). The G3, which doesn't have SIMD of any kind, even beat the P2 running MMX code. For that matter, a 350MHz 604e was over twice as fast at integer performance as a 300MHz P2. I point all of this out not (necessarily) to go opening old wounds but to remind those ignorant of computing history that there was a time in "Apple's PowerPC days" when even the architecture's detractors will admit it was at least competitive. That time clearly wasn't when the rot later set in, but he certainly doesn't make that distinction.

To be sure, was this the point of his article? Not really: he was addressing ARM more than PowerPC. But it sort of is. Thom asserts in his exchange with Grüber Alles that Apple and those within the RDF cherrypick benchmarks to favour what suits them, which is absolutely true, and I just did it myself; but Apple isn't any different from anyone else in that regard (put away the "tu quoque", please), and Apple did this as much in the Power Mac days to sell widgets as they do now in the iOS ones. For that matter, Thom himself backtracks near the end and says, "there is one reason why benchmarks of Apple's latest mobile processors are quite interesting: Apple's inevitable upcoming laptop and desktop switchover to its own processors." For the record, I see this as highly unlikely due to the Intel Mac's frequent use as a client virtual machine host, though it's interesting to speculate. But the rise of the A-series is hardly comparable with Apple's PowerPC days at all, at least not as a monolithic unit. If he had compared the benchmark situation with the period when the PowerPC roadmap was running out of gas in the 2004-5 timeframe, by which time even boosters like yours truly would have conceded the gap was widening but Apple relentlessly ginned up evidence otherwise, I think I'd have grudgingly concurred. And maybe that's actually what he meant. However, what he wrote lumps everything from the 601 to the 970MP into a single throwaway comment, is baffling from someone who also uses and admires Mac OS 9 (as I do), and dilutes his core argument. Something like that I'd expect from the breezy mainstream computer media types. Thom, however, should know better.

(On a related note, Ars Technica was a lot better when they were more tech and less politics.)

Next up: updates to our custom gdb debugger and a maintenance update for TenFourFoxBox. Stay tuned and in the meantime try it and see if you like it. Post your comments, and, once you've played a few videos or six, what you think the default should be for 45.5 (regular VP8 video or MSE/VP9).

Mitchell BakerLiving with Diverse Perspectives

Diversity and Inclusion is more than having people of different demographics in a group.  It is also about having the resulting diversity of perspectives included in the decision-making and action of the group in a fundamental way.

I’ve had this experience lately, and it demonstrated to me both why it can be hard and why it’s so important.  I’ve been working on a project where I’m the individual contributor doing the bulk of the work. This isn’t because there’s a big problem or conflict; instead it’s something I feel needs my personal touch. Once the project is complete, I’m happy to describe it with specifics. For now, I’ll describe it generally.

There’s a decision to be made.  I connected with the person I most wanted to be comfortable with the idea, to make sure it sounded good.  I checked with our outside attorney just in case there was something I should know.  I checked with the group of people who are most closely affected and who would lead the decision and implementation if we proceed. I received lots of positive response.

Then one last person from my first level of vetting checked in with me and spoke up.  He’s sorry for the delay, etc., but has concerns.  He wants us to explore a bunch of different options before deciding if we’ll go forward at all, and if so how.

At first I had that sinking feeling of “Oh bother, look at this.  I am so sure we should do this and now there’s all this extra work and time and maybe change. Ugh!”  I got up and walked around a bit and did a few things that put me in a positive frame of mind.  Then I realized — we had added this person to the group for two reasons.  First, he’s awesome — both creative and effective. Second, he has a different perspective.  We say we value that different perspective. We often seek out his opinion precisely because of that perspective.

This is the first time his perspective has pushed me to do more, or to do something differently, or perhaps even prevented me from doing something that I think I want to do.  So this is the first time the different perspective is doing more than reinforcing what seemed right to me.

That led me to think “OK, got to love those different perspectives” a little ruefully.  But as I’ve been thinking about it I’ve come to internalize the value and to appreciate this perspective.  I expect the end result will be more deeply thought out than I had planned.  And it will take me longer to get there.  But the end result will have investigated some key assumptions I started with.  It will be better thought out, and better able to respond to challenges. It will be stronger.

I still can’t say I’m looking forward to the extra work.  But I am looking forward to a decision that has a much stronger foundation.  And I’m looking forward to the extra learning I’ll be doing, which I believe will bring ongoing value beyond this particular project.

I want to build Mozilla into an example of what a trustworthy organization looks like.  I also want to build Mozilla so that it reflects experience from our global community and isn’t living in a geographic or demographic bubble.  Having great people be part of a diverse Mozilla is part of that.  Creating a welcoming environment that promotes the expression and positive reaction to different perspectives is also key.  As we learn more and more about how to do this we will strengthen the ways we express our values in action and strengthen our overall effectiveness.

Mozilla WebDev CommunityBeer and Tell – September 2016

Once a month, web developers from across the Mozilla Project get together to talk about our side projects and drink, an occurrence we like to call “Beer and Tell”.

There’s a wiki page available with a list of the presenters, as well as links to their presentation materials. There’s also a recording available courtesy of Air Mozilla.

emceeaich: Gopher Tessel

First up was emceeaich, who shared Gopher Tessel, a project for running a Gopher server (Gopher being an Internet protocol that was popular before the World Wide Web) on a Tessel. Tessel is a small circuit board that runs Node.js projects; Gopher Tessel reads sensors (such as the temperature sensor) connected to the board and exposes their values via Gopher. It can also control lights connected to the board.

groovecoder: Crypto: 500 BC – Present

Next was groovecoder, who shared a preview of a talk about cryptography throughout history. The talk is based on “The Code Book” by Simon Singh. Notable moments and techniques mentioned include:

  • 499 BCE: Histiaeus of Miletus shaves the heads of messengers, tattoos messages on their scalps, and sends them after their hair has grown back to hide the message.
  • ~100 AD: Milk of tithymalus plant is used as invisible ink, activated by heat.
  • ~700 BCE: Scytale
  • 49 BC: Caesar cipher
  • 1553 AD: Vigenère cipher

bensternthal: Home Monitoring & Weather Tracking

bensternthal was up next, and he shared his work building a dashboard with weather and temperature information from his house. Ben built several Node.js-based applications that collect data from his home weather station, from his Nest thermostat, and from Weather Underground and send all the data to an InfluxDB store. The dashboard itself uses Grafana to plot the data, and all of these servers are run using Docker.

The repositories for the Node.js applications and the Docker configuration are available on GitHub.

craigcook: ByeHolly

Next was craigcook, who shared a virtual yearbook page that he made as a farewell tribute to former teammate Holly Habstritt-Gaal, who recently took a job at another company. The page shows several photos that are clipped at the edges to look curved like an old television screen. This is done in CSS using clip-path with an SVG-based path for clipping. The SVG is also defined using proportional units, which allows it to warp and distort correctly for different image sizes, as can be seen from the variety of images it is used on in the page.

peterbe: react-buggy

peterbe told us about react-buggy, a client for viewing GitHub issues implemented in React. It is a rewrite of buggy, a similar client peterbe wrote for Bugzilla bugs. Issues are persisted in Lovefield (a wrapper for IndexedDB) so that the app can function offline. The client also uses elasticlunr.js to provide full-text search on issue titles and comments.

shobson: tic-tac-toe

Last up was shobson, who shared a small Tic-Tac-Toe game on the viewsourceconf.org offline page that is shown when the site is in offline mode and you attempt to view a page that is not available offline.


If you’re interested in attending the next Beer and Tell, sign up for the dev-webdev@lists.mozilla.org mailing list. An email is sent out a week beforehand with connection details. You could even add yourself to the wiki and show off your side-project!

See you next month!

Air MozillaParticipation Q3 Demos

Participation Q3 Demos Watch the Participation Team share the work from the last quarter in the Demos.

Emily DunhamSetting a Freenode channel's taxonomy info

Some recent flooding in a Freenode channel sent me on a quest to discover whether the network’s services were capable of setting a custom message rate limit for each channel. As far as I can tell, they are not.

However, the problem caused me to re-read the ChanServ help section:

/msg chanserv help
- ***** ChanServ Help *****
- ...
- Other commands: ACCESS, AKICK, CLEAR, COUNT, DEOP, DEVOICE,
-                 DROP, GETKEY, HELP, INFO, QUIET, STATUS,
-                 SYNC, TAXONOMY, TEMPLATE, TOPIC, TOPICAPPEND,
-                 TOPICPREPEND, TOPICSWAP, UNQUIET, VOICE,
-                 WHY
- ***** End of Help *****

Taxonomy is a cool word. Let’s see what taxonomy means in the context of IRC:

/msg chanserv help taxonomy
- ***** ChanServ Help *****
- Help for TAXONOMY:
-
- The taxonomy command lists metadata information associated
- with registered channels.
-
- Examples:
-     /msg ChanServ TAXONOMY #atheme
- ***** End of Help *****

Follow its example:

/msg chanserv taxonomy #atheme
- Taxonomy for #atheme:
- url                       : http://atheme.github.io/
- ОХЯЕБУ                    : лололол
- End of #atheme taxonomy.

That’s neat; we can elicit a URL and some field with a Cyrillic and apparently custom name. But how do we put metadata into a Freenode channel’s taxonomy section? Google has no useful hits (hence this blog post), but further digging into ChanServ’s manual does help:

/msg chanserv help set

- ***** ChanServ Help *****
- Help for SET:
-
- SET allows you to set various control flags
- for channels that change the way certain
- operations are performed on them.
-
- The following subcommands are available:
- EMAIL           Sets the channel e-mail address.
- ...
- PROPERTY        Manipulates channel metadata.
- ...
- URL             Sets the channel URL.
- ...
- For more specific help use /msg ChanServ HELP SET command.
- ***** End of Help *****

Set arbitrary metadata with /msg chanserv set #channel property key value

The commands /msg chanserv set #channel email a@b.com and /msg chanserv set #channel property email a@b.com appear to function identically, with the former being a convenient wrapper around the latter.

So that’s how #atheme got their fancy Cyrillic taxonomy: Someone with the appropriate permissions issued the command /msg chanserv set #atheme property ОХЯЕБУ лололол.

Behaviors of channel properties

I’ve attempted to deduce the rules governing custom metadata items, because I couldn’t find them documented anywhere.

  1. Issuing a set property command with a property name but no value deletes the property, removing it from the taxonomy.
  2. A property is overwritten each time someone with the appropriate permissions issues a /set command with a matching property name (more on the matching in a moment). The property name and value are stored with the same capitalization as the command issued.
  3. The algorithm which decides whether to overwrite an existing property or create a new one is not case sensitive. So if you set ##test email test@example.com and then set ##test EMAIL foo, the final taxonomy will show no field called email and one field called EMAIL with the value foo.
  4. When displayed, taxonomy items are sorted first in alphabetical order (case insensitively), then by length. For instance, properties with the names a, AA, and aAa would appear in that order, because the initial alphabetization is case-insensitive.
  5. Attempting to place mIRC color codes (http://www.mirc.com/colors.html) in the property name results in the error “Parameters are too long. Aborting.” However, placing color codes in the value of a custom property works just fine.
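
For example, combining rules 1 and 2 above (##test is a hypothetical channel on which you hold the appropriate permissions):

/msg chanserv set ##test property email a@b.com
/msg chanserv taxonomy ##test
/msg chanserv set ##test property email

The first command creates or overwrites the email property, the second displays the channel’s taxonomy, and the third, supplying a name but no value, deletes the property again.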

Other uses

As a final note, you can also do basically the same thing with Freenode’s NickServ, to set custom information about your nickname instead of about a channel.

Support.Mozilla.OrgWhat’s Up with SUMO – 22nd September

Hello, SUMO Nation!

How are you doing? Have you seen the First Inaugural Firefox Census already? Have you filled it out? Help us figure out what kind of people use Firefox! You can get to it right after you read through our latest & greatest news below.

Welcome, new contributors!

If you just joined us, don’t hesitate – come over and say “hi” in the forums!

Contributors of the week

Don’t forget that if you are new to SUMO and someone helped you get started in a nice way you can nominate them for the Buddy of the Month!

SUMO Community meetings

  • LATEST ONE: 21st of September – you can read the notes here and see the video at AirMozilla.
  • NEXT ONE: happening on the 28th of September!
  • If you want to add a discussion topic to the upcoming meeting agenda:
    • Start a thread in the Community Forums, so that everyone in the community can see what will be discussed and voice their opinion here before Wednesday (this will make it easier to have an efficient meeting).
    • Please do so as soon as you can before the meeting, so that people have time to read, think, and reply (and also add it to the agenda).
    • If you can, please attend the meeting in person (or via IRC), so we can follow up on your discussion topic during the meeting with your feedback.

Community

Platform

  • PLATFORM REMINDER! The Platform Meetings are BACK! If you missed the previous ones, you can find the notes in this document (here’s the channel you can subscribe to). We really recommend going through the document and videos if you want to make sure we’re covering everything as we go.
  • A few important and key points to make regarding the migration:
    • We are trying to keep as many features from Kitsune as possible. Some processes might change. We do not know yet how they will look.
    • Any and all training documentation that you may be accessing is generic – both for what you can accomplish with the platform and the way roles and users are called within the training. They do not have much to do with the way Mozilla or SUMO operate on a daily basis. We will use these to design our own experience – “translate” them into something more Mozilla, so to speak.
    • All the important information that we have has been shared with you, one way or another.
    • The timelines and schedule might change depending on what happens.
  • We started discussions about Ranks and Roles after the migration – join in! More topics will start popping up for discussion in the forums, but they will all be gathered in the first post of the main migration thread.
  • If you are interested in test-driving the new platform now, please contact Madalina.
    • IMPORTANT: the whole place is a work in progress, and a ton of the final content, assets, and configurations (e.g. layout pieces) are missing.
  • QUESTIONS? CONCERNS? Please take a look at this migration document and use this migration thread to put questions/comments about it for everyone to share and discuss. As much as possible, please try to keep the migration discussion and questions limited to those two places – we don’t want to chase ten different threads in too many different places.

Social

Support Forum

  • Once again, and with gusto – SUUUUMO DAAAAAY! Go for it!
  • Final reminder: If you are using email notifications to know what posts to return to, jscher2000 has a great tip (and tool) for you. Check it out here!

Knowledge Base & L10n

Firefox

  • for Android
    • Version 49 is out! Now you can enjoy the following:

      • caching selected pages (e.g. mozilla.org) for offline retrieval
      • usual platform and bug fixes
  • for Desktop
    • Version 49 is out! Enjoy the following:

      • text-to-speech in Reader mode (using your OS voice modules)
      • ending support for older Mac OS versions
      • ending support for older CPUs
      • ending support for Firefox Hello
      • usual platform and bug fixes
  • for iOS
    • No news from under the apple tree this time!

By the way – it’s the first day of autumn, officially! I don’t know about you, but I am looking forward to mushroom hunting, longer nights, and a bit of rain here and there (as long as it stops at some point). What is your take on autumn? Tell us in the comments!

Cheers and see you around – keep rocking the helpful web!

Mozilla WebDev CommunityExtravaganza – September 2016

Once a month, web developers from across Mozilla get together to talk about the work that we’ve shipped, share the libraries we’re working on, meet new folks, and talk about whatever else is on our minds. It’s the Webdev Extravaganza! The meeting is open to the public; you should stop by!

You can check out the wiki page that we use to organize the meeting, or view a recording of the meeting in Air Mozilla. Or just read on for a summary!

Shipping Celebration

The shipping celebration is for anything we finished and deployed in the past month, whether it be a brand new site, an upgrade to an existing one, or even a release of a library.

Survey Gizmo Integration with Google Analytics

First up was shobson, who talked about a survey feature on MDN that prompts users to leave feedback about how MDN helped them complete a task. The survey is hosted by SurveyGizmo, and custom JavaScript included on the survey reports the user’s answers back to Google Analytics. This allows us to filter on the feedback from users to answer questions like, “What sections of the site are not helping users complete their tasks?”

View Source Offline Mode

shobson also mentioned the View Source website, which is now offline-capable thanks to Service Workers. The pages are now cached if you’ve ever visited them, and the images on the site have offline fallbacks if you attempt to view them with no internet connection.

SHIELD Content Signing

Next up was mythmon, who shared the news that Normandy, the backend service for SHIELD, now signs the data that it sends to Firefox using the Autograph service. The signature is included with responses via the Content-Signature header. This signing will allow Firefox to only execute SHIELD recipes that have been approved by Mozilla.

Open-source Citizenship

Here we talk about libraries we’re maintaining and what, if anything, we need help with for them.

Neo

Eli was up next, and he shared Neo, a tool for setting up new React-based projects with zero configuration. It installs and configures many useful dependencies, including Webpack, Babel, Redux, ESLint, Bootstrap, and more! Neo is installed as a command used to initialize new projects or as a dependency added to existing projects, and acts as a single dependency that pulls in all the different libraries you’ll need.

Roundtable

The Roundtable is the home for discussions that don’t fit anywhere else.

Standu.ps Reboot

Last up was pmac, who shared a note about how he and willkg are rewriting the standu.ps service using Django, and are switching the rewrite to use GitHub authentication instead of Persona. They have a staging server set up and expect to have news next month about the availability of the new service.

Standu.ps is a service used by several teams at Mozilla for posting status updates as they work, and includes an IRC bot for quick posting of updates.


If you’re interested in web development at Mozilla, or want to attend next month’s Extravaganza, subscribe to the dev-webdev@lists.mozilla.org mailing list to be notified of the next meeting, and maybe send a message introducing yourself. We’d love to meet you!

See you next month!

Air MozillaReps weekly, 22 Sep 2016

Reps weekly This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

Firefox NightlyThese Weeks in Firefox: Issue 1

Every two weeks, engineering teams working on Firefox Desktop get together and update each other on things that they’re working on. These meetings are public. Details on how to join, as well as meeting notes, are available here.

We feel that the bleeding edge development state captured in those meeting notes might be interesting to our Nightly blog audience. To that end, we’re taking a page out of the Rust and Servo playbook, and offering you handpicked updates about what’s going on at the forefront of Firefox development!

Expect these every two weeks or so.

Thanks for using Nightly, and keep on rocking the free web!

Highlights

Contributor(s) of the Week

  • The team has nominated Adam (adamgj.wong), who has helped clean up some of our Telemetry APIs. Great work, Adam!

Project Updates


Add-ons

  • andym wants to remind everybody that the Add-ons team is still triaging and fixing SDK bugs (like this one, for example).

Electrolysis (e10s)

Core Engineering

  • ksteuber rewrote the Snappy Symbolication Server (mainly used for the Gecko Profiler for Windows builds) and this will be deployed soon.
  • felipe is in the process of designing experiment mechanisms for testing different behaviours for Flash (allowing some, denying some, click-to-play some, based on heuristics).

Platform UI and other Platform Audibles

Quality of Experience

Sync / Firefox Accounts

Uncategorized

Here are the raw meeting notes that were used to derive this list.

Want to help us build Firefox? Get started here!

Here’s a tool to find some mentored, good first bugs to hack on.

Air MozillaPrivacy Lab - September 2016 - EU Privacy Panel

Privacy Lab - September 2016 - EU Privacy Panel Want to learn more about EU Privacy? Join us for a lively panel discussion of EU Privacy, including GDPR, Privacy Shield, Brexit and more. After...

About:CommunityOne Mozilla Clubs

In 2015, The Mozilla Foundation launched the Mozilla Clubs program to bring people together locally to teach, protect and build the open web in an engaging and collaborative way. Within a year it grew to include 240+ Clubs in 100+ cities globally, and now is growing to reach new communities around the world.

Today we are excited to share a new focus for Mozilla Clubs taking place on a University or College Campus (Campus Clubs). Mozilla Campus Clubs blend the passion and student focus of the former Firefox Student Ambassador program and Take Back The Web Campaign with the existing structure of Mozilla Clubs to create a unified model for participation on campuses!

Mozilla Campus Clubs take advantage of the unique learning environments of Universities and Colleges to bring groups of students together to teach, build and protect the open web. It builds upon the Mozilla Club framework to provide targeted support to those on campus through its:

  1. Structure:  Campus Clubs include an Executive Team in addition to the Club Captain position, who help develop programs and run activities specific to the 3 impact areas (teach, build, protect).
  2. Training & Support: As in all Mozilla Clubs, Regional Coordinators and Club Captains receive training and mentorship throughout their Clubs journey. However, the nature of the training and support for Campus Clubs is specific to helping students navigate the challenges of setting up and running a club in the campus context.
  3. Activities: Campus Club activities are structured around 3 impact areas (teach, build, protect). Club Captains in a University or College can find suggested activities (some specific to students) on the website here.

These clubs will be connected to the larger Mozilla Club network to share resources, curriculum, mentorship and support with others around the world. In 2017 you’ll see additional unification in terms of a joint application process for all Regional Coordinators and a unified web presence.

This is an exciting time for us to unite our network of passionate contributors and create new opportunities for collaboration, learning, and growth within our Mozillian communities. We also see the potential of this unification to allow for greater impact across Mozilla’s global programs, projects and initiatives.

If you’re currently involved in Mozilla Clubs and/or the FSA program, here are some important things to know:

  • The Firefox Student Ambassador Program is now Mozilla Campus Clubs: After many months of hard work and careful planning the Firefox Ambassador Program (FSA) has officially transitioned to Mozilla Clubs as of Monday September 19th, 2016. For full details about the Firefox Student Ambassador transition check out this guide here.
  • Firefox Club Captains will now be Mozilla Club Captains: Firefox Club Captains who already have a club, a structure, and a community set up on a university/college campus should register their club here to be partnered with a Regional Coordinator and have access to new resources and opportunities; more details are here.
  • Current Mozilla Clubs will stay the same: Any Mozilla Club that already exists will stay the same. If they happen to be on a university or college campus Clubs may choose to register as a Campus Club, but are not required to do so.
  • There is a new application for Regional Coordinators (RC’s): Anyone interested in taking on more responsibility within the Clubs program can apply here.  Regional Coordinators mentor Club Captains that are geographically close to them. Regional Coordinators support all Club Captains in their region whether they are on campus or elsewhere.
  • University or College students who want to start a Club at their University or College may apply here. Students who primarily want to lead a club on a campus for/with other university/college students will apply to start a Campus Club.
  • People who want to start a club for any type of learner apply here. Anyone who wants to start a club that is open to all kinds of learners (not limited to specifically University students) may apply to start a Club here.

Individuals who are leading Mozilla Clubs commit to running regular (at least monthly) gatherings, participate in community calls, and contribute resources and learning materials to the community. They are part of a network of leaders and doers who support and challenge each other. By increasing knowledge and skills in local communities Club leaders ensure that the internet is a global public resource, open and accessible to all.

This is the beginning of a long term collaboration for the Mozilla Clubs Program. We are excited to continue to build momentum for Mozilla’s mission through new structures and supports that will help engage more people with a passion for the open web.