QMO: Firefox 49 Beta 7 Testday Results

Hello Mozillians!

As you may already know, last Friday – August 26th – we held a new Testday event, for Firefox 49 Beta 7.

Thank you all for helping us make Mozilla a better place – Ron Bentley, Iryna Thompson, Carmen Făt, Logicoma, Moin Shaikh, Aaron Raimist

From Bangladesh: Mohammad Maruf Islam, Samad Talukder, Nazir Ahmed Sabbir, Rezaul huque Nayeem, Azmina Akter Papeya, Saheda Reza Antora, Saddam Hossain, Tanvir Rahman, Kazi nuzhat Tasnem, Maruf Rahman, Sajal Ahmed,  Kazi Ashraf Hossain, Md.Majedul islam, Forhad Hossain, Sajedul Islam, Akash, Tazin Ahmed,  Toki Yasir, Ria, Sourov_Arko,  Amir Hossain Rhidoy, Roy Ayers, Sufi Ahmed Hamim, Fahim.

A big thank you goes out to all our active moderators too! 

Results:


Keep an eye on QMO for upcoming events!

Daniel Stenberg: Mozilla’s search for a new logo

I’m employed by Mozilla. The same Mozilla that recently has announced that it is looking around for feedback on how to revamp its logo and graphical image.

It was with amusement I saw one of the existing suggestions for a new logo: using “://” (colon slash slash) in the name:

mozilla-colon-slashslash-suggestion

… compared with the recently announced new curl logo:

good_curl_logo

Me being in both teams, and a general Internet protocol enthusiast, I couldn’t be more happy if Mozilla would end up using a design so clearly based on the same underlying thoughts. After all, “Imitation is the sincerest of flattery,” as Charles Caleb Colton once so eloquently expressed it.

Air Mozilla: Privacy Lab - August 2016 - Tools to Teach Privacy

Privacy Lab - August 2016 - Tools to Teach Privacy Erin Berman and members of her web team from the San Jose Public Library (SJPL) will talk about their Virtual Privacy Lab tool, developed with...

David Lawrence: Happy BMO Push Day!

The following changes have been pushed to bugzilla.mozilla.org:

  • [1290580] Rebase docker infra off of CentOS 6 & use docker organization
  • [1299280] Recent changes introduced by bug 1294478, flags do not display on the enter_bug.cgi page

Discuss these changes on mozilla.tools.bmo.


Aki Sasaki: 100% test coverage [in python]

Tl;dr: I reached 100% test coverage (as measured by coverage) on several of my projects, and I'm planning on continuing that trend for the important projects I maintain. We were chatting about this, and I decided to write a blog post about how to get to 100% test coverage, and why.

Why 100% test coverage?

Find unreachable code

Back in 2014, Apple issued a critical security update for iOS, to patch a bug known as #gotofail.

static OSStatus
SSLVerifySignedServerKeyExchange(SSLContext *ctx, bool isRsa, SSLBuffer signedParams,
                                 uint8_t *signature, UInt16 signatureLen)
{
    OSStatus err;
    ...

    if ((err = SSLHashSHA1.update(&hashCtx, &serverRandom)) != 0)
        goto fail;
    if ((err = SSLHashSHA1.update(&hashCtx, &signedParams)) != 0)
        goto fail;
        goto fail;
    if ((err = SSLHashSHA1.final(&hashCtx, &hashOut)) != 0)
        goto fail;
    ...

fail:
    SSLFreeBuffer(&signedHashes);
    SSLFreeBuffer(&hashCtx);
    return err;
}

The second goto fail – the duplicated, unconditional one – caused SSLVerifySignedServerKeyExchange calls to jump to the fail block with a successful err. This bypassed the actual signature verification and returned a blanket success. With this bug in place, malicious servers would have been able to serve SSL certs with mismatched or missing private keys, and still look legit.

One might argue for strict indentation, using curly braces for all if blocks, better coding practices, or just Better Reviews. But 100% test coverage would have found it.

"For the bug at hand, even the simplest form of coverage analysis, namely line coverage, would have helped to spot the problem: Since the problematic code results from unreachable code, there is no way to achieve 100% line coverage.

"Therefore, any serious attempt to achieve full statement coverage should have revealed this bug."

Even though python has meaningful indentation, that doesn't mean it's foolproof against unreachable code. Simply marking each line as visited is enough to protect against this class of bug.
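As a rough Python sketch (a hypothetical example, not from the original incident): anything after an unconditional return is unreachable, and because line coverage can never mark it as visited, a 100% coverage requirement flags it immediately.

def verify(signature):
    if not signature:
        return False
    return True
    # Unreachable: the unconditional return above means this line can never
    # execute, so a coverage run will always report it as missed.
    print("verified:", signature)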

And even when there isn't a huge security issue at stake, unreachable code can clutter the code base. At best, it's dead weight, extra lines of code to work around, to refactor, to have to read when debugging. At worst, it's confusing, misleading, and hides serious bugs. Don't do unreachable code.

Uncovering bugs while writing tests

My first attempt at 100% test coverage was on a personal project, on a whim. I knew all my code was good, but full coverage was something I had never seen. It was a lark. A throwaway bit of effort to see if I could do it, and then something I could mention offhand. "Oh yeah, and by the way, it has 100% test coverage."

Imagine my surprise when I uncovered some non-trivial bugs.

This has been the norm. Every time I put forward a sustained effort to add test coverage to a project, I uncover some things that need fixing. And I fix them. Even if it's as small as a confusing variable name or a complex bit of logic that could use a comment or two. And often it's more than that.

Certainly, if I'm able to find and fix bugs while writing tests, I should be able to find and fix those bugs when they're uncovered in the wild, no? The answer is yes. But. It's a difference of headspace and focus.

When I'm focused on isolating a discrete chunk of code for testing, it's easier to find wrong or confusing behavior than when I'm deploying some large, complex production process that may involve many other pieces of software. There are too many variables, too little headspace for the problem, and often too little time. And Murphy says those are the times these problems are likely to crop up. I'd rather have higher confidence at those times. Full test coverage helps reduce those variables when the chips are down.

A helpful guide for future changes

Software is the proverbial shark: it's constantly moving, or it dies. Small patches are pretty easy to eyeball and tell if they're good. Well written and maintained code helps here too, as does documentation, mentoring, and code reviews. But only tests can verify that the code is still good, especially when dealing with larger patches and refactors.

Tests can help new contributors catch problems before they reach code review or production. And sometimes after a long hiatus and mental context switch, I feel like a new contributor to my own projects. The new contributor you are helping may be you in six months.

Sometimes your dependencies release new versions, and it's nice to have a full set of tests to see if something will break before rolling anything out to production. And when making large changes or refactoring, tests can be a way of keeping your sanity.

Momentum is contagious

There's the social aspect of codebase maintenance. If you want to ask a new contributor to write tests for their code, that's an easier ask when you're at 100% coverage than when you're at, say, 47%.

This can also carry over to your peers' and coworkers' projects. Enough momentum, and you can build an ecosystem of well-tested projects, where the majority of all bugs are caught before any code ever lands on the main repo.

Technical wealth

Have you read Forget Technical Debt - Here's How to Build Technical Wealth? It's well worth the read.

Automated testing and self-validation loom large in that post. She does caution against 100% test coverage as an end in itself in the early days of a startup, for example, but that's when time to market and profitability are opportunity costs. When developing technical wealth at an existing enterprise, the long term maintenance benefits speak for themselves.

How did you reach 100% test coverage [in python]?

Clean architecture

If you haven't watched Clean Architecture in Python, add it to your list. It's worth the time. It's the Python take on Uncle Bob Martin's original talk. Besides making your code more maintainable, it makes your code more testable. When I/O is cleanly separated from the logic, the logic is easily testable, and the I/O portions can be tested in mocked code and integration tests.
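As a minimal sketch of that separation (hypothetical function names, not from the talk): keep the decision logic pure and inject the I/O, so the logic can be unit tested with plain values and the thin I/O wrapper can be covered by mocks or integration tests.

import json

def parse_status(payload):
    # Pure logic: easy to unit test with plain strings, no I/O involved.
    data = json.loads(payload)
    return data.get("status") == "ok"

def check_status(read_payload):
    # Thin I/O seam: read_payload does the network/disk work and is injected,
    # so tests can pass in a stub instead of touching the real world.
    return parse_status(read_payload())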

parse_args()

Some people swear by click. I'm not quite sold on it, yet, but it's easy to want something other than optparse or argparse when faced with a block of option logic like in this main() function. How can you possibly sanely test all of that, especially when right after the option parsing we go into the main logic of the script?

I pulled all of the optparse logic into its own function, and I called it with sys.argv[1:]. That let me start writing optparse tests, separate from my signing tests. Signtool is one of those projects where I haven't yet reached 100% coverage, but it's high on my list.
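A sketch of that shape (hypothetical options, not signtool's real ones): the parsing lives in a function that takes an argument list, so tests can feed it lists directly instead of going through sys.argv.

from optparse import OptionParser

def parse_args(argv):
    # All of the option handling lives here, testable in isolation.
    parser = OptionParser()
    parser.add_option("-v", "--verbose", action="store_true", default=False)
    parser.add_option("-o", "--output", default="out.log")
    return parser.parse_args(argv)

def test_verbose_flag():
    options, args = parse_args(["-v", "somefile"])
    assert options.verbose is True
    assert args == ["somefile"]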

Toss if __name__ == '__main__':

if __name__ == '__main__': is one of the first things we learn in python. It allows us to mix script and library functionality into the same file: when you import the file, __name__ is the name of the module, and the block doesn't execute, and when you run it as a script, __name__ is set to __main__, and the block executes. (More detailed descriptions here.)

There are ways to skip these blocks in code coverage. That might be the easiest solution if all you have under if __name__ == '__main__': is a call to main(). But I've seen scripts where there are pages of code inside this block, all code-coverage-unfriendly.
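For reference, the skip-it approach usually looks like a coverage.py exclusion pattern in .coveragerc; exclude_lines is coverage's documented setting, and the regex below is the commonly used one.

[report]
exclude_lines =
    pragma: no cover
    if __name__ == .__main__.: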

I forget where I found my solution. This answer suggests __name__ == '__main__' and main(). I rewrote this as

def main(name=None):
    # Runs for a direct call (name=None) and when executed as a script
    # (name='__main__'); does nothing when imported under any other name.
    if name in (None, '__main__'):
        ...


main(name=__name__)

Either way, you don't have to worry about covering the special if block. You can mock the heck out of main() and get to 100% coverage that way. (See the mock and integration section.)

Rewrite the logic

mimetypes.guess_type('foo.log') returns 'text/plain' on OSX, and None on linux. I fixed it like this:

content_type, _ = mimetypes.guess_type(path)
if content_type is None and path.endswith('.log'):
    content_type = 'text/plain'

And that works. You can get full coverage on linux: a "foo.log" will enter the if block, and a "foo.unknown_extension" will skip it. But on OSX you'll never satisfy the conditional; the content_type will never be None for a foo.log.

More ugly mocking, right? You could. But how about:

content_type, _ = mimetypes.guess_type(path)
if path.endswith('.log'):
    content_type = content_type or 'text/plain'

Coverage will mark that as covered on both linux and OSX.

@pytest.mark.parametrize

I used to use nosetests for everything, just because that's what I first encountered in the wild. When I first hacked on PGPy I had to get accustomed to pytest, but that was pretty easy. One thing that drew me to it was @pytest.mark.parametrize (which is an alternate spelling of "parameterize").

With @pytest.mark.parametrize, you can loop through multiple inputs with the same logic, without having to write the loops yourself. Like this. Or, for the PGPy unicode example, this.

This is doable in nosetests, but pytest encourages it by making it easy.
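A minimal illustration (hypothetical test, reusing the .log content-type logic from the earlier section):

import mimetypes

import pytest

def guess_content_type(path):
    content_type, _ = mimetypes.guess_type(path)
    if path.endswith('.log'):
        content_type = content_type or 'text/plain'
    return content_type

@pytest.mark.parametrize("path,expected", [
    ("foo.log", "text/plain"),
    ("foo.txt", "text/plain"),
    ("foo.html", "text/html"),
])
def test_guess_content_type(path, expected):
    # One test function, three cases; each shows up individually in the report.
    assert guess_content_type(path) == expected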

fixtures

In nosetests, I would often write small functions to give me objects or data structures to test against. That, or I'd define constants, and I'd be careful to avoid changing any mutable constants, e.g. dicts, during testing.

pytest fixtures allow you to do that, but more simply. With a @pytest.fixture(scope="function"), the fixture will get reset every function, so even if a test changes the fixture, the next test will get a clean copy.
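A small sketch (hypothetical fixture and tests): because the fixture is function-scoped, each test gets a freshly built copy, even if an earlier test mutated it.

import pytest

@pytest.fixture(scope="function")
def config():
    # Rebuilt for every test function, so mutations can't leak between tests.
    return {"retries": 3, "verbose": False}

def test_can_mutate(config):
    config["retries"] = 0
    assert config["retries"] == 0

def test_gets_clean_copy(config):
    # Unaffected by the previous test's change.
    assert config["retries"] == 3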

There are also built-in fixtures, like tmpdir, mocker, and event_loop, which allow you to more easily and succinctly perform setup and cleanup around your tests. And there are lots of additional modules which add fixtures or other functionality to pytest.

Here are slides from a talk on advanced fixture use.

mock and integration

It's certainly possible to test the I/O and loop-laden main sections of your project. For unit tests, it's a matter of mocking the heck out of it. Pytest's mocker fixture helps here, although it's certainly possible to nest a bunch of with mock.patch blocks to avoid mocking that code past the end of your test.
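As a hedged sketch of both styles (hypothetical function under test; assumes the requests and pytest-mock packages are available): mocker.patch is reverted automatically when the test ends, while mock.patch scopes the patch to the with block.

from unittest import mock

import requests

def fetch(url):
    return requests.get(url)

def test_fetch_with_mocker(mocker):
    # pytest-mock's mocker fixture; the patch is undone after the test.
    mocker.patch("requests.get", return_value="ok")
    assert fetch("https://example.com") == "ok"

def test_fetch_with_mock_patch():
    # The nested-with style keeps the patch from leaking past the test body.
    with mock.patch("requests.get", return_value="ok"):
        assert fetch("https://example.com") == "ok"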

Even if you have to include some I/O in a function, that doesn't mean you have to use mock to test it. Instead of

def foo():
    requests.get(...)

you could do

def foo(request_function=requests.get):
    request_function(...)

Then when you unit test, you can override request_function with something that raises the exception or returns the value that you want to test.
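So a unit test can exercise both the success and failure paths without any mocking (hypothetical example; the default argument still needs requests, but the tests never touch it):

import pytest
import requests

def foo(url, request_function=requests.get):
    return request_function(url)

def test_foo_success():
    assert foo("https://example.com", request_function=lambda url: "ok") == "ok"

def test_foo_failure():
    def broken(url):
        raise OSError("network down")
    with pytest.raises(OSError):
        foo("https://example.com", request_function=broken)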

Finally, even after you get to 100% coverage on unit tests, it's good to add some integration testing to make sure the project works in the real world. For scriptworker, these tests are marked with @pytest.mark.skipif(os.environ.get("NO_TESTS_OVER_WIRE"), reason=SKIP_REASON), so we can turn them off in Travis.

Use pytest!

Besides parameterization and the mocker fixture, pytest is easily superior to nosetests.

There's parallelization with pytest-xdist. The fixtures make per-test setup and cleanup a breeze. Pytest is more succinct: assert x == y instead of self.assertEquals(x, y); because we don't have the self to deal with, the tests can be functions instead of methods. The following

with pytest.raises(OSError):
    function(*args, **kwargs)

is a lot more legible than self.assertRaises(OSError, function, *args, **kwargs). You've got @pytest.mark.asyncio to await a coroutine; the event_loop fixture is there for more asyncio testing goodness, with auto-open/close for each test.
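For the asyncio side, a hedged sketch (assumes pytest-asyncio is installed; hypothetical coroutine). The event_loop fixture is used under the hood to open and close a loop for each test.

import asyncio

import pytest

async def double(x):
    await asyncio.sleep(0)
    return x * 2

@pytest.mark.asyncio
async def test_double():
    assert await double(21) == 42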

Miscellaneous hints

Use tox, coveralls, and travis or taskcluster-github for per-checkin test and coverage results.

Use coverage>=4.2 for async code. Pre-4.2 can't handle the async.

Use coverage's branch coverage by setting branch = True in your .coveragerc.
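Concretely, that's the branch setting in the [run] section of .coveragerc:

[run]
branch = True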

I haven't tried hypothesis or other behavior- or property- based testing yet. But hypothesis does use pytest. There's an issue open to integrate them more deeply.

It looks like I need to learn more about mutation testing.

What other tricks am I missing?




Frédéric Harper: Free stickers – show others that you are a lion

NLIBK_sticker

As I mentioned in February, I decided to start my own business and go back to the marvelous world of freelancing with No lion is born king. No need to say that it’s an interesting time, not always easy, but overall, I’m quite happy! I’m also lucky as it’s going well, thanks to the amazing network I’ve built over the last couple of years and, of course, my personal brand.

Talking about branding, I love my company’s brand, and with the help of the talented Sylvain Grand’Maison, I’ve ordered some stickers to show how proud I am to the rest of the world. Since it’s more than just a company’s name, I would like to offer you some of those stickers so you can show others that you are a lion, a fighter, that you worked hard to become the king of your jungle. If you want some (size is 2X2 inches), please send me an email with your complete postal address and I’ll send you some for free. The only thing I’m asking in exchange is that you send me pictures of where you put them!

I can’t wait to see which other stickers my lion will hang with…

David Lawrence: Happy BMO Push Day!

The following changes have been pushed to bugzilla.mozilla.org:

  • [1290580] Rebase docker infra off of CentOS 6 & use docker organization
  • [1298815] Re-introduce generate_bmo_data.pl
  • [1294478] Move JSON serialization out of the templates and into perl to improve performance

Discuss these changes on mozilla.tools.bmo.


Myk Melez: Graphene On Gecko Without B2G

Recently, I’ve been experimenting with a reimplementation of Graphene on Gecko that doesn’t require B2G. Graphene is a desktop runtime for HTML apps, and Browser.html uses it to run on Servo. There’s a Gecko implementation of Graphene that is based on B2G, but it isn’t maintained and appears to have bitrotted. It also entrains all of B2G, although Browser.html doesn’t use many B2G features.

My reimplementation is minimal. All it does is load a URL in a native window and expose a few basic APIs (including <iframe mozbrowser>). But that’s enough to run Browser.html. And since Browser.html is a web app, it can be loaded from a web server (with this change that exposes <iframe mozbrowser> to https: URLs).

I forked browserhtml/browserhtml to mykmelez/browserhtml, built it locally, and then pushed the build to my GitHub Pages website at mykmelez.github.io/browserhtml. Then I forked mozilla/gecko-dev to mykmelez/graphene-gecko and reimplemented Graphene in it. I put the implementation into embedding/graphene/, but I could have put it anywhere, including in a separate repo, per A Mozilla App Outside Central.

To try it out, clone, build, and run it:

git clone https://github.com/mykmelez/graphene-gecko
cd graphene-gecko
./mach build && ./mach run https://mykmelez.github.io/browserhtml/

You’ll get a native window with Browser.html running in Graphene:

Graphene on Gecko

This Week In Rust: This Week in Rust 145

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

New Crates & Project Updates

Crate of the Week

No crate was suggested for this week. So I unanimously declare ring, Brian Smith's Rust crypto implementation, which is finally on crates.io, to be this week's crate.

Can we have suggestions and votes next week? Pretty please?

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

138 pull requests were merged in the last two weeks.

New Contributors

  • changchun.fan
  • Daniele Baracchi
  • Mohit Agarwal
  • Paul Fanelli
  • Shyam Sundar B
  • zjhmale

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

No new RFCs were proposed this week.

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email Erick Tryzelaar or Brian Anderson for access.

fn work(on: RustProject) -> Money

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

My Rust memoirs will be called "Once in 'a lifetime".

@Argorak on Twitter.

Thanks to Vincent Esche for the suggestion.

Submit your quotes for next week!

This Week in Rust is edited by: nasa42, llogiq, and brson.

Christian Heilmann: What does a developer evangelist/advocate do?

I love it! What is it?

Today I was asked to define the role of a developer evangelist in my company for a slide deck to be shown at hiring events and university outreach. This was a good opportunity to make a list of all the tasks we have to do in this job and why we do them. It might be useful for you to read through, and a good resource to point people to when they – once again – ask you “but, do you still code?”.

First things first: Whenever you write about this topic there is a massive discussion about the job title “evangelist” and how it is terrible and has religious overtones and whatnot. So if that is what bothers you, please choose below what you’d like to read and your browser will change it accordingly to give you peace of mind:



Now, what is a developer evangelist? As defined in my developer evangelist handbook:

A developer evangelist is a spokesperson, mediator and translator between a company and both its technical staff and outside developers.

What do you need to know to be one? First there are the necessary skills:

  • Great development skills – both in creating and explaining software and products
  • Excellent communication skills – this job is about reaching out, listening and distilling information
  • Excellent networking skills – you’re meant to collect contacts and keep them happy

Then there are also important skills to have:

  • Patience of a saint – you will have to explain your job over and over, and will have to defend yourself against the accusation of “not writing code”
  • Strong sense of independence – your job is to help communication by aiding with your voice. You’re not in sales.
  • Excellent self-organising skills – you’re on the road a lot and if you don’t take care, you burn out quickly or get buried under an avalanche of requests.

Being a developer evangelist is a role that spans across several departments. Your job is to be a communicator. Whilst you are most likely reporting to engineering, you need to spend a lot of time making different departments talk to another. Therefore internal networking skills are very important. Your job also affects the work of other people (PR, legal, comms, marketing). Be aware of these effects and make sure to communicate them before you reach out.

As a translator, you have both outbound (company to people) and inbound (people to your company) tasks to fulfil. Let’s list most of them:

Outbound tasks

Be a social media presence

Provide a personal channel to ask about your company or products. People react much better to a human and interact more with them than with a corporate account. Be yourself – at no point should you become a corporate sales person. Do triage – point people to resources like feedback channels instead of talking on their behalf. Don’t make decisions for your colleagues. Help them, don’t add to their workload.

Keep up to date with competition and market

You’re the ear on the ground – your job is to know what the competition does and what gets people excited. Use the products of your competition to see what works for you and their users and give that feedback to your teams. Detect trends and report them to your company.

Create openly available software products

You need to develop software products to keep up to date technically and show your audience that you’re not just talking but that you know what you’re doing. These should be openly available. In many cases, your company can’t release as freely as you can. Show the world that you’re trusted to do so. Building products also allows you to use the tools that outside developers use and feed your experiences back to your company.

Participate in other products

Take part in other people’s open source products. That way you know the pains and joys of using them. Your job is to be available. By providing helpful contributions to other products people judge you by your work and how you behave as a community member, not as a company person. Analysing how other products are run gives you great feedback for your teams.

Participate in public discussions

A lot of discussion happens outside your company, on channels like Stackoverflow, Slack, Facebook Groups, Hacker News and many more. You should be monitoring those to a degree and be visible, bringing facts where discussions get heated. These are great places to find new influencers and partners in communication.

Participate in other publications

Taking part in other publications than your own and the ones of your company solidifies your status as a thought leader and influencer. Write for magazines, take part in podcasts and interview series. Help others to get their developer oriented products off the ground – even when they are your competition.

Create video tutorials

Creating short, informative and exciting videos is a great opportunity to reach new followers. Make sure you keep those personal – if there is a video about a product, help the product team build one instead. Show why you care about a feature. Keep these quick and easy, don’t over-produce them. These are meant as a “hi, look at this!”.

Participate and help with events

Present and give workshops at events. Introduce event organisers and colleagues or other presenters you enjoyed. Help promote events. Help marketing and colleagues at events to make your presence useful and yielding results.

Act as a “firewall” for internal teams

  • People working on products should spend their time doing that – not arguing with people online.
  • Your job is to take negative feedback and redirect it to productive results.
  • This means in many cases asking for more detailed reporting before working with the team to fix it.
  • It also means managing expectations. Just because a competitor has a cool new thing doesn’t mean your company needs to follow. Explain why.

Help dealing with influencers

If you do your job right and talk to others a lot, what other people call ”influencers” are what you call friends and contacts. Show them how much you care by getting them preview information of things to come. Make sure that communication from your company to them goes through you as it makes outreach much less awkward. Find new upcoming influencers and talk to your company about them.

Inbound tasks

Stay up to date and participate in products

There is no way to be a developer evangelist for a company if you don’t know about the products your company does. You need to work with and on the products, only then can you be inspiring and concentrate on improving them. Working on the products also teaches you a lot about how they are created, which helps with triaging questions from the outside.

Keep product teams and internal engineering up to date

It is important to report to your product teams and engineers. This includes feedback about your own product, but also what positive and negative feedback the competition got. This also includes introducing the teams to people to communicate with in case they want to collaborate on a new feature.

Amplify messaging of internal teams

Your colleagues bring great content, it is your job to give it reach. Amplify their messages by spreading them far and wide and explaining them in an understandable fashion to different audiences. Tell other influencers about the direct information channels your teams have.

Coach and promote internal talent

It is great that you can represent your company, but sometimes the voice of someone working directly on a product has more impact. Coach people in the company and promote their presence on social media and at events. Connect conference organisers and internal people – make sure to check with their manager to have them available. Help people present and prepare materials for outreach.

Report on events and success of campaigns

Being at an event or running a campaign should be half the job. The other half is proving to the people in the company that it was worth it. Make sure to collect questions you got, great things you learned and find a good way to communicate them to the teams they concern. Keep a log of events that worked and those that didn’t – this is great information for marketing when it comes to sponsorship.

Help organising events

It is important to be involved in the events your company organises. Try not to get roped into presenting at those – instead offer to coach people to be there instead. Offer to be on the content board of these events – you don’t want to be surprised by some huge press release that says something you don’t agree with. Use your reach to find external speakers for your events.

Work with PR, legal and marketing

Your work overlaps a lot with those departments. Be a nice colleague and help them out and you get access to more budget and channels you don’t even know about. Make sure that what you say or do in your job is not causing any legal issues.

Give constructive feedback to the product teams and get questions answered

You’re seen as an approachable channel for questions about your products. Make sure that requests coming through you are followed up swiftly. Point out communication problems to your internal teams and help them fix those. Use external feedback to ask for improvements of your own products. It is easy to miss problems when you are too close to the subject matter.

Collate outside feedback and convert to constructive feedback

Your colleagues are busy building products. They shouldn’t have to deal with angry feedback that has no action items in it. Your job is to filter out rants and complaints and to analyse the cause of them. Then you report the root issue and work with the teams on how to fix it.

Why take on this job?

Considering how full your plate is, the following question should come up:

Why would you want to become a developer evangelist (or developer advocate)?

There are perks to being someone in this role. Mostly that it is a natural progression for a developer who wants to move from a delivery role to one where they can influence more what the company does.

  • It bridges the communication gap – developers have a bad reputation when it comes to communication. Showing that someone technical can help people understand each other is a good move for our market.
  • It helps avoid frustration – a lot of engineering work is not needed, but is based on false assumptions or a misguided “I want to use this”. Good evangelism helps build what is needed, not what is cool.
  • It bridges age and culture gaps – if you’re not interested in a cutthroat competition in engineering, you have a chance to use your talent otherwise.

Before you jump on the opportunity, there are some very important points to remember:

  • Developer relations is not a starting position – most developer evangelists graduated from being developers in the same company. You need to know the pain to help prevent it.
  • There are part time opportunities though – engineers or people learning in the company can help with Devrel to ease into the job earlier.
  • Always be ready to prove your worth – measuring the impact of a developer evangelist is tough, you need to make sure you’re well organised in recording your successes.

It’s a versatile, morphing and evolving role. And that – to me – makes it really exciting.

Mike Hoye: Free As In Health Care

This is to some extent a thought experiment.

The video below shows what’s called a “frontal offset crash test” – your garden variety driver-side head-on collision – between a 2009 Chevrolet Malibu and a 1959 Chevrolet Bel Air. I’m about to use this video to make a protracted argument about software licenses, standards organizations, and the definition of freedom. It may not interest you all that much but if it’s ever crossed your mind that older cars are safer because they’re heavier or “solid” or had “real” bumpers or something you should watch this video. In particular, pay attention to what they consider a “fortunate outcome” for everyone involved. Lucky, for the driver in the Malibu, is avoiding a broken ankle. A Bel Air driver would be lucky if all the parts of him make it into the same casket.

 [ https://www.youtube.com/watch?v=joMK1WZjP7g ]

Like most thought experiments this started with a question: what is freedom?

The author of the eighteenth-century tract “Cato’s Letters” expressed the point succinctly: “Liberty is to live upon one’s own Term; Slavery is to live at the mere Mercy of another.” The refrain was taken up with particular emphasis later in the eighteenth century, when it was echoed by the leaders and champions of the American Revolution.’ The antonym of liberty has ceased to be subjugation or domination – has ceased to be defenseless susceptibility to interference by another – and has come to be actual interference, instead. There is no loss of liberty without actual interference, according to most contemporary thought: no loss of liberty in just being susceptible to interference. And there is no actual interference – no interference, even, by a non-subjugating rule of law – without some loss of liberty; “All restraint, qua restraint, is evil,” as John Stuart Mill expressed the emerging orthodoxy.

– Philip Pettit, Freedom As Anti-Power, 1996

Most of our debates define freedom in terms of “freedom to” now, and the arguments are about the limitations placed on those freedoms. If you’re really lucky, like Malibu-driver lucky, the discussions you’re involved in are nuanced enough to involve “freedom from”, but even that’s pretty rare.

I’d like you to consider the possibility that that’s not enough.

What if we agreed to expand what freedom could mean, and what it could be. Not just “freedom to” but a positive defense of opportunities to; not just “freedom from”, but freedom from the possibility of.

Indulge me for a bit but keep that in mind while you exercise one of those freedoms, get in a car and go for a drive. Freedom of movement, right? Get in and go.

Before you can do that a few things have to happen first. For example: your car needs to have been manufactured.

Put aside everything that needs to have happened for the plant making your car to operate safely and correctly. That’s a lot, I know, but consider only the end product.

Here is a chart of the set of legislated standards that vehicle must meet in order to be considered roadworthy in Canada – the full text of CRC c.1038, the Motor Vehicle Safety Regulations section of the Consolidated Regulations of Canada runs a full megabyte, and contains passages such as:

H-point means the mechanically hinged hip point of a manikin that simulates the actual pivot centre of the human torso and thigh, described in SAE Standard J826, Devices for Use in Defining and Measuring Vehicle Seating Accommodation (July 1995); (point H)

H-V axis means the characteristic axis of the light pattern of a lamp, passing through the centre of the light source, used as the direction of reference (H = 0°, V = 0°) for photometric measurements and for the design of the installation of a lamp on a vehicle; (axe H-V)

… and

Windshield Wiping and Washing System

104 (1) In this section,

areas A, B and C means the areas referred to in Column I of Tables I, II, III and IV to this section when established as shown in Figures 1 and 2 of SAE Recommended Practice J903a Passenger Car Windshield Wiper Systems, (May 1966), using the angles specified in Columns III to VI of the above Tables; (zones A, B et C)

daylight opening means the maximum unobstructed opening through the glazing surface as defined in paragraph 2.3.12 of Section E, Ground Vehicle Practice, SAE Aerospace-Automotive Drawing Standards, (September 1963); (ouverture de jour)

glazing surface reference line means the intersection of the glazing surface and a horizontal plane 635 mm above the seating reference point, as shown in Figure 1 of SAE Recommended Practice J903a (May 1966); (ligne de référence de la surface vitrée)

… and that mind-numbing tedium you’re experiencing right now is just barely a taste; a different set of regulations exists for crash safety testing, another for emissions testing, the list goes very far on. This 23 page PDF of Canada’s Motor Vehicle Tire Safety Regulations – that’s just the tires, not the brakes or axles or rims, just the rubber that meets the road – should give you a sense of it.

That’s the car. Next you need roads.

The Ontario Provincial Standards for Roads & Public Works consists of eight volumes. The first of them, General And Construction Specifications, is 1358 pages long. Collectively they detail how roads you’ll be driving on must be built, illuminated, made safe and maintained.

You can read them over if you like, but you can see where I’m going with this. Cars and roads built to these standards don’t so much enable freedom of motion and freedom from harm as they delimit in excruciating detail the space – on what road, at what speeds, under what circumstances – where people must be free from the possibility of specific kinds of harm, where their motion must be free from the possibility of specific kinds of restriction or risk.

But suppose we move away from the opposition to bare interference in terms of which contemporary thinkers tend to understand freedom. Suppose we take up the older opposition to servitude, subjugation, or domination as the key to construing liberty. Suppose we understand liberty not as noninterference but as antipower. What happens then?

– Philip Pettit, ibid.

Let me give away the punchline here: if your definition of freedom includes not just freedom from harassment and subjugation but from the possibility of harassment and subjugation, then software licenses and cryptography have as much to do with real digital rights and freedoms as your driver’s license has to do with your freedom of mobility. Which is to say, almost nothing.

We should be well past talking about the minutia of licenses and the comparative strengths of cryptographic algorithms at this point. The fact that we’re not is a clear sign that privacy, safety and security on the internet are not “real rights” in any meaningful sense. Not only because the state does not meaningfully defend them but because it does not mandate in protracted detail how they should be secured, fund institutions to secure that mandate and give the force of law to the consequences of failure.

The conversation we should be having at this point is not about what a license permits, it’s about the set of standards and practices that constitutes a minimum bar to clear in not being professionally negligent.

The challenge here is that dollar sign. Right now the tech sector is roughly where the automotive sector was in the late fifties. You almost certainly know or know of somebody on Twitter having a very 1959 Bel-Air Frontal-Offset Collision experience right now, and the time for us to stop blaming the driver for that is long past. But if there’s a single grain of good news here, it’s how far off your diminishing returns are. We don’t need detailed standards about the glazing surface reference line of automotive glass, we need standard seatbelts and gas tanks that reliably don’t explode.

But that dollar sign, and those standards, are why I think free software is facing an existential crisis right now.

[ https://www.youtube.com/watch?v=obSOaKTMLIc ]

I think it’s fair to say that the only way that standards have teeth is if there’s liability associated with them. We know from the automotive industry that the invisible hand of the free market is no substitute for liability in driving improvement; when the costs of failure are externalized, diffuse or hidden, those costs can easily be ignored.

According to the FSF, the “Four Freedoms” that define what constitutes Free Software are:

  • The freedom to run the program as you wish, for any purpose (freedom 0).
  • The freedom to study how the program works, and change it so it does your computing as you wish (freedom 1). Access to the source code is a precondition for this.
  • The freedom to redistribute copies so you can help your neighbor (freedom 2).
  • The freedom to distribute copies of your modified versions to others (freedom 3). By doing this you can give the whole community a chance to benefit from your changes. Access to the source code is a precondition for this.

The cannier among you will already have noted – and scarred Linux veterans can definitely attest to the fact – that there’s no mention at all of freedom-from in there. The FSF’s unstated position has always been that anyone who wants to be free from indignities like an opaque contraption of a user experience, buggy drivers and nonexistent vendor support in their software, not to mention the casual sexism and racism of the free software movement itself, well. Those people can go pound sand all the way to the Apple store. (Which is what everyone did, but let’s put that aside for the moment.)

Let’s go back to that car analogy for a moment:

Toyota Motor Corp has recalled 3.37 million cars worldwide over possible defects involving air bags and emissions control units.

The automaker on Wednesday said it was recalling 2.87 million cars over a possible fault in emissions control units. That followed an announcement late on Tuesday that 1.43 million cars needed repairs over a separate issue involving air bag inflators.

About 930,000 cars are affected by both potential defects, Toyota said. Because of that overlap, it said the total number of vehicles recalled was 3.37 million.

No injuries have been linked to either issue.

Potential defects.

I think the critical insight here is that Stallman’s vision of software freedom dates to a time when software was contained. You could walk away from that PDP-11 and the choices you made there didn’t follow you home in your pocket or give a world full of bored assholes an attack surface for your entire life. Software wasn’t everywhere, not just pushing text around a screen but everywhere and in everything from mediating our social lives and credit ratings to pumping our drinking water, insulin and anti-lock brakes.

Another way to say that is: software existed in a well-understood context. And it was that context that made it, for the most part, free from the possibility of causing real human damage, and consequently liability for that damage was a non-question. But that context matters: Toyota doesn’t issue that recall because the brakes failed on the chopped-up fifteen year old Corolla you’ve welded to a bathtub and used as rally car, it’s for the safety of day to day drivers doing day to day driving.

I should quit dancing around the point here and just lay it out:  If your definition of freedom includes freedom from the possibility of interference, it follows that “free as in beer” and “free as in freedom” can only coexist in the absence of liability.

This is only going to get more important as the Internet ends up in more and more Things, and your right – and totally reasonable expectation – to live a life free from arbitrary harassment enabled by the software around you becomes a life-or-death issue.

If we believe in an expansive definition of human freedom and agency in a world full of software making decisions then I think we have three problems, two practical and one fundamental.

The practical ones are straightforward. The first is that the underpinnings of the free-as-in-beer economic model that lets Google, Twitter and Facebook exist are fighting a two-ocean war against failing ad services and liability avoidance. The notion that a click-through non-contract can absolve any organization of their responsibility is not long for this world, and the nasty habit advertising and social networks have of periodically turning into semi-autonomous, weaponized misery-delivery platforms makes it harder to justify letting their outputs talk to your inputs every day.

The second one is the industry prisoner’s dilemma around, if not liability, then at a bare minimum responsibility. There’s a battery of high-caliber first-mover-disadvantages pointed at the first open source developer willing to say “if these tools are used under the following conditions, by users with the following user stories, then we can and should be held responsible for their failures”.

Neither of these problems are insoluble – alternative financial models exist, coalitions can be built, and so forth. It’ll be an upheaval, but not a catastrophic or even sudden one. But anyone whose business model relies on ads should be thinking about transitions five to ten years out, and your cannier nation-states are likely to start sneaking phrases like “auditable and replaceable firmware” in their trade agreements in the next three to five.

The fundamental problem is harder: we need a definition of freedom that encompasses the notion of software freedom and human agency, in which the software itself is just an implementation detail.

We don’t have a definition of freedom that’s both expansive in its understanding of what freedom and agency are, that speaks to a world where the line between data security and bodily autonomy is very blurry, where people can delegate their agency to and gain agency from a construct that’s both an idea and a machine. A freedom for which a positive defense of the scope of the possible isn’t some weird semitangible idea, but a moral imperative and a hill worth dying on.

I don’t know what that looks like yet; I can see the rough outlines of the place it should be but isn’t. I can see the seeds of it in the quantified-self stuff, copyleft pushback and the idea that crypto is a munition. It’s crystal clear that a programmer clinging to the idea that algorithms are apolitical or that software is divorced from human bias or personal responsibility is a physicist holding to the aetheric model or phlogiston when other people are fuelling their rockets. The line between software freedom and personal freedom is meaningless now, and the way we’ve defined “software freedom” just about guarantees its irrelevancy. It’s just freedom now, and at the very least, if our definition of freedom – and our debate about what freedom could be – isn’t as vast and wide-ranging and weird and wonderful and diverse and inclusive and scary as it could possibly be, then the freedom we end up with won’t be either.

And I feel like a world full of the possible would be a hell of a thing to lose.

Air Mozilla: Mozilla Weekly Project Meeting, 29 Aug 2016

Mozilla Weekly Project Meeting The Monday Project Meeting

Andy McKay: The luxury of cycling

A few days ago a senator tweeted this:

This weekend I attended the Ride to Conquer Cancer and cheered along my wife and the other members of her team as they completed a 250km ride from Vancouver to Seattle. It was moving to see cancer survivors and people who've lost loved ones to cancer, doing their bit to help fight cancer.

And let me tell you, these days I treat my ability to ride a bike as a luxury. It's all too easy to get struck down by any number of diseases and be unable to ride. To get into an accident and get injured so you can't ride. To live in a place that doesn't make riding possible. To have a family situation that makes it hard to ride.

A couple of years ago my sister got really sick. When I said I was thinking of doing the Gran Fondo, she said "you are so lucky, I wish I could do that". So I did. And this year is my second race.

Oh and I pretty much obey all the traffic laws, unlike all the asshole car drivers who cut down the bicycle routes in Vancouver.

Smokey Ardisson: Quoting Brent Simmons

Brent Simmons:

One of the rules I’ve used […] is not to argue with “I bet lots of people are like me and want feature X,” but instead say why you specifically want feature X, or why you’d prefer some behavior or design change.

In other words: instead of just asserting that a thing would be better or more popular if done a different way, tell a story with details.

Maybe that’s not right for every beta test, but that’s what works for me. I like stories. A single person can convince me with a good story.

Brent has written about this point before, in the “Tuesday Whipper-Snapping” section of How to manipulate me (or, Tuesday Whipper-Snapping),1 but it’s an important point that bears repeating: when you are a user trying to explain to a developer/QA/support that you’d like feature X or think feature Y would work better in some other way, tell us why. Tell us what you’re trying to do, why you’re trying to do it. As Brent says, tell us a story, with details, about what you need/want to do with the software.

First, as the quotation above indicates, a good use-case (“story”) can often convince the person/people behind the software right off the bat of the importance and/or awesomeness of your proposed feature. Second, as Brent mentioned a decade ago (!) in the discussion of Exposé in “Tuesday Whipper-Snapping,” your story is important because it may lead to the developer(s) devising a new or easier or more clever way of helping you to accomplish your task, rather than the specific implementation you may have conceived yourself.

So, tell us stories about what you hope to do.

        

1 Which I’ve linked to before in my compilation of great “Brent Simmons on software development for end-users” articles. ↩︎

Karl Dubost: A Schrödinger select menu with display: none on option

Let's develop a bit more on an issue that we identified last week.

On the croma Web site, they have a fairly common form.

Croma Store Form

As you can notice, the instruction "Select City" for selecting an item is in the form itself. And the markup of the form looks like this.

<div class="ib">
    <!-- Customization start here for store locator -->
    <div id="citydropdown" name="citydropdown" class="inpBox">
        <select id="city" name="city" onchange="drop_down_list1()">

            <option value="" style="display: none">Select City</option>

            <option value="AHMEDABAD">AHMEDABAD</option>
            <option value="AURANGABAD">AURANGABAD</option>
            <option value="BANGALORE">BANGALORE</option>
            <option value="CHANDIGARH">CHANDIGARH</option>
            <option value="CHENNAI">CHENNAI</option>
            <option value="FARIDABAD">FARIDABAD</option>
            <option value="GHAZIABAD">GHAZIABAD</option>
            <option value="GREATER NOIDA">GREATER NOIDA</option>
            <option value="GURGAON">GURGAON</option>
            <option value="HYDERABAD">HYDERABAD</option>
            <option value="MUMBAI">MUMBAI</option>
            <option value="NASIK">NASIK</option>
            <option value="NEW DELHI">NEW DELHI</option>
            <option value="NOIDA">NOIDA</option>
            <option value="PUNE">PUNE</option>
            <option value="RAJKOT">RAJKOT</option>
            <option value="SURAT">SURAT</option>
            <option value="VADODARA">VADODARA</option>
        </select>
    </div>
</div>

Basically the developers are trying to make the "Select City" instruction non selectable. It works in Safari (WebKit) and Opera/Chrome (Blink) with slight differences. I put up a reduced test case if you want to play with it.

The bug with display: none

It fails in Firefox (Gecko) in an unexpected way, and only for e10s windows. The full list of options is not displayed once the first one is set to display: none.

Test results on Firefox

Do not do this!

If you really want the instruction to be part of the form, choose to do

<div class="ib">
    <!-- Customization start here for store locator -->
    <div id="citydropdown" name="citydropdown" class="inpBox">
        <select id="city" name="city" onchange="drop_down_list1()">

            <option value="" hidden selected disabled>Select City</option>

            <option value="AHMEDABAD">AHMEDABAD</option>
            <option value="AURANGABAD">AURANGABAD</option>
            <!-- cut for brevity -->
        </select>
    </div>
</div>

This will have the intended effect and will "work everywhere". It is still not the best way to do it, but at least it is kind of working except in some cases for accessibility reasons (I added the proper way below).

That said, it should not fail in Gecko. I opened a bug on Mozilla Bugzilla and Neil Deakin already picked it up with a patch.

Quite cool! This will probably be uplifted to Firefox 49 and 50.

select menu form, the proper way

There are a couple of issues with regards to accessibility and the proper way to create a form. This following would be better:

<div class="ib">
    <div id="citydropdown" name="citydropdown" class="inpBox">
        <label for="city">Select City</label>
        <select id="city" name="city" onchange="drop_down_list1()">
            <option value="AHMEDABAD">AHMEDABAD</option>
            <option value="AURANGABAD">AURANGABAD</option>
            <!-- cut for brevity -->
        </select>
    </div>
</div>

The label is an instruction outside of the options list, that tools will pick up and associate with the right list.

I see the appeal for people to have the instruction be part of the drop down. And I wonder if there would be an accessible way to create this design. Perhaps an additional attribute on label could instruct the browser to render it inside the select list of options it targets with the for attribute, as a non-selectable item.

Otsukare!

Karl Dubost: [worklog] Edition 033. Season of typhoons. Bugs clean-up

A moderate typhoon landed this week just above us. Winds in one direction. The peace of the eye of the typhoon. Finally the winds in the opposite direction. Magnificent. The day after it was a cemetery for insects. Tragedy. So it was mostly a week for old bugs cleaning.

The spectacle of the society, the society of spectacle. The turn of international events and the rise of arguments instead of discussions is mesmerizingly sad. Tune of the week: DJ Shadow feat. Run The Jewels - Nobody Speak

Webcompat Life

Progress this week:

Today: 2016-08-29T07:47:04.454102
295 open issues
----------------------
needsinfo       4
needsdiagnosis  83
needscontact    18
contactready    28
sitewait        146
----------------------

You are welcome to participate

Webcompat issues

(a selection of some of the bugs worked on this week).

  • option doesn't like display: none on Gecko at least. It hides the full list. I opened a new bug for Gecko. This is happening only in e10s windows. And there is already a patch in the making.
  • Not really a webcompat issue, but Google alerts breaking on small screens.
  • Sometimes bugs are not bugs. This is one delicate part of webcompat. Can we reproduce? Reproducing a bug is not always straightforward. It sometimes depends on the capabilities of your own devices (mobile AND desktop). But the cool thing is when the user comes back and says "Hey, sorry, it seems to be working now." Thanks for this. It's very useful.

WebCompat.com dev

Reading List

Follow Your Nose

TODO

  • Document how to write tests on webcompat.com using test fixtures.
  • ToWrite: Amazon prefetching resources with <object> for Firefox only.

Otsukare!

Seif Lotfy: Welcome to Ghost

You're live! Nice. We've put together a little post to introduce you to the Ghost editor and get you started. You can manage your content by signing in to the admin area at <your blog URL>/ghost/. When you arrive, you can select this post from a list on the left and see a preview of it on the right. Click the little pencil icon at the top of the preview to edit this post and read the next section!

Getting Started

Ghost uses something called Markdown for writing. Essentially, it's a shorthand way to manage your post formatting as you write!

Writing in Markdown is really easy. In the left hand panel of Ghost, you simply write as you normally would. Where appropriate, you can use shortcuts to style your content. For example, a list:

  • Item number one
  • Item number two
    • A nested item
  • A final item

or with numbers!

  1. Remember to buy some milk
  2. Drink the milk
  3. Tweet that I remembered to buy the milk, and drank it

Links

Want to link to a source? No problem. If you paste in a URL, like http://ghost.org - it'll automatically be linked up. But if you want to customise your anchor text, you can do that too! Here's a link to the Ghost website. Neat.

What about Images?

Images work too! Already know the URL of the image you want to include in your article? Simply paste it in like this to make it show up:

The Ghost Logo

Not sure which image you want to use yet? That's ok too. Leave yourself a descriptive placeholder and keep writing. Come back later and drag and drop the image in to upload:

Quoting

Sometimes a link isn't enough, you want to quote someone on what they've said. Perhaps you've started using a new blogging platform and feel the sudden urge to share their slogan? A quote might be just the way to do it!

Ghost - Just a blogging platform

Working with Code

Got a streak of geek? We've got you covered there, too. You can write inline <code> blocks really easily with back ticks. Want to show off something more comprehensive? 4 spaces of indentation gets you there.

.awesome-thing {
    display: block;
    width: 100%;
}

Ready for a Break?

Throw 3 or more dashes down on any new line and you've got yourself a fancy new divider. Aw yeah.


Advanced Usage

There's one fantastic secret about Markdown. If you want, you can write plain old HTML and it'll still work! Very flexible.

That should be enough to get you started. Have fun - and let us know what you think :)

Mozilla WebDev CommunityBeer and Tell – August 2016

Once a month, web developers from across the Mozilla Project get together to talk about our side projects and drink, an occurrence we like to call “Beer and
Tell”.

There’s a wiki page available with a list of the presenters, as well as links to their presentation materials. There’s also a recording available courtesy of Air Mozilla.

Osmose: PyJEXL

First up was Osmose (that’s me!), who shared PyJEXL, an implementation of JEXL in Python. JEXL is an expression language based on JavaScript that computes the value of an expression when applied to a given context. For example, the statement 2 + foo, when applied to a context where foo is 7, evaluates to 9. JEXL also allows for defining custom functions and operators to be used in expressions.

JEXL is implemented as a JavaScript library; PyJEXL allows you to parse and evaluate JEXL expressions in Python, as well as analyze and validate expressions.
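As a rough sketch of that usage (the class and method names here are assumptions based on PyJEXL's README, so double-check them against the actual library):

import pyjexl

# Evaluate a JEXL expression against a context: "2 + foo" with foo bound to 7
# should evaluate to 9, mirroring the example above.
jexl = pyjexl.JEXL()
result = jexl.evaluate('2 + foo', {'foo': 7})
print(result)  # 9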

pmac: rtoot.org

Next up was pmac, who shared rtoot.org, the website for “The Really Terrible Orchestra of the Triangle”. It’s a static site for pmac’s local orchestra built using lektor, an easy-to-use static site generator. One of the more interesting features pmac highlighted was lektor’s data model. He showed a lektor plugin he wrote that added support for CSV fields as part of the data model, and used it to describe the orchestra’s rehearsal schedule.

The site is deployed as a Docker container on a Dokku instance. Dokku replicates the Heroku style of deployments by accepting Git pushes to trigger deploys. Dokku also has a great Let’s Encrypt plugin for setting up and renewing HTTPS certificates for your apps.

groovecoder: Scary/Tracky JS

Next was groovecoder, who previewed a lightning talk he’s been working on called “Scary JavaScript (and other Tech) that Tracks You Online”. The talk gives an overview of various methods that sites can use to track what sites you’ve visited without your consent. Some examples given include:

  • Reading the color of a link to a site to see if it is the “visited link” color (this has been fixed in most browsers).
  • Using requestAnimationFrame to time how long it took the browser to draw a link; visited links will take longer as the browser has to change their color and then remove it to avoid the previous vulnerability.
  • Embedding a resource from another website as a video and timing how long it takes for the browser to attempt and fail to parse it; a short time indicates the resource was served from the cache and you’ve seen it before.
  • Cookie Syncing

groovecoder plans to give the talk at a few local conferences, leading up to the Hawaii All Hands where he will give the talk to Mozilla.

rdalal: Therapist

rdalal was next with Therapist, a tool to help create robust, easily-configurable pre-commit hooks for Git. It is configured via a YAML file at your project root, and can run any command you want to run before committing code, such as code linting or build commands. Therapist is also able to detect which files were changed by the commit and only run commands on those to help save time.

gregtatum: River – Paths Over Time

Last up was gregtatum, who shared River – Paths Over Time. Inspired by the sight of rivers from above during the flight back from the London All Hands, River is a simulation of rivers flowing and changing over time. The animation is drawn on a 2D canvas using rectangles that are drawn and then slowly faded over time; this slow fade creates the illusion of “drips” seeping into the ground as rivers shift position.

The source code for the simulation is available on Github, and is configured with several parameters that can be tweaked to change the behavior of the rivers.


If you’re interested in attending the next Beer and Tell, sign up for the dev-webdev@lists.mozilla.org mailing list. An email is sent out a week beforehand with connection details. You could even add yourself to the wiki and show off your side-project!

See you next month!

Mozilla Reps CommunityReps Program Objectives – Q3 2016

With “RepsNext” we increased alignment between the Reps program and the Participation team. The main outcome is setting quarterly Objectives and Key Results together. This enables all Reps to see which contributions have a direct link to the Participation team. Of course, Reps are welcome to keep doing amazing work in other areas.

Objective 1 Reps are fired up about the Activate campaign and lead their local community to success through supporting that campaign with the help of the Review Team
KR1 80+ core mozillians coached by new mentors contribute to at least one focus activity.
KR2 75+ Reps get involved in the Activity Campaign, participating in at least one activity and/or mobilizing others.
KR3 The Review Team is working effectively by decreasing the budget review time by 30%


Objective 2 New Reps coaches and regional coaches bring leadership within the Reps program to a new level and increase the trust of local communities towards the Reps program
KR1 40 mentors and new mentors take coaching training and start at least 2 coaching relationships.
KR2 40% of the communities we act on report better shape/effectiveness thanks to the regional coaches
KR3 At least 25% of the communities Regional Coaches work with report feeling represented by the Reps program

Which of the above objectives are you most interested in? What key result would you like to hear more about? What do you find intriguing? Which thoughts cross your mind upon reading this?

Let’s keep the conversation going! Please provide your comments in Discourse.

Cameron KaiserTenFourFox 45.3.0b2 available

TenFourFox 45.3 beta 2 is now available (downloads, hashes, release notes). The issue with video frame skipping gone awry seems to be due to Mozilla's MediaSource implementation, which is (charitably) somewhat incomplete on 45 and not well-tested on systems of our, uh, vintage. Because audio is easier to decode than video the audio decoder thread will already have a speed advantage, but in addition the video decoder thread appears to only adjust itself for the time spent in decoding, not the time spent pushing the frames to the compositor or blitting them to the screen (which are also fairly costly in our software-rendered world). A frame that's simple to decode but requires a lot of painting will cause the video thread to get further and further behind queueing frames that the compositor can't keep up with. Even if you keep skipping keyframes, you'll end up more than one keyframe behind and you'll never catch up, and when this occurs the threads irrevocably go out of sync. The point at which this happens for many videos is almost deterministic and correlates well with time spent in compositing and drawing; you can invariably trigger it by putting the video into full screen mode which dramatically increases the effort required to paint frames. In a few cases the system would just give up and refuse to play further (hitting an assertion in debug mode).

After a few days of screaming and angry hacking (backporting some future bug fixes, new code for dumping frames that can never be rendered, throwing out the entire queue if we get so far behind we'll never reasonably catch up) I finally got the G5 to be able to hold its own in Reduced Performance but it's just too far beyond the G4 systems to do so. That's a shame since when it does work it appears to work pretty well. Nevertheless, for release I've decided to turn MediaSource off entirely and it now renders mostly using the same pipeline as 38 (fortunately, the other improvements to pixel pushing do make this quicker than 38 was; my iBook G4/1.33 now maintains a decent frame rate for most standard definition videos in Automatic Performance mode). If the feature becomes necessary in the future I'm afraid it will only be a reasonable option for G5 machines in its current state, but at least right now disabling MediaSource is functional and sufficient for the time being to get video support back to where it was.

The issue with the Amazon Music player is not related to or fixed by this change, however. I'm thinking that I'll have to redo our minimp3 support as a platform MP3 decoder so that we use more of Mozilla's internal media machinery; that refactoring is not likely to occur by release, so we'll ship with the old decoder that works most other places and deal with it in a future update. I don't consider this a showstopper unless it's affecting other sites.

Other changes in 45.3 beta 2 are some additional performance improvements. Most of these are trivial (including some minor changes to branching and proper use of single-precision math in the JIT), but two are of note, one backport that reduces the amount of allocation with scrolling and improves overall scrolling performance as a result, and another one I backported that reduces the amount of scanning required for garbage collection (by making garbage collection only run in the "dirty" zones after cycle collection instead of effectively globally). You'll notice that RAM use is a bit higher at any given moment, but it does go back down, and the browser has many fewer random pauses now because much less work needs to be done. Additionally in this beta unsigned extensions should work again and the update source is now set to the XML for 45. One final issue I want to repair before launch is largely server-side, hooking up the TenFourFox help button redirector to use Mozilla's knowledge base articles where appropriate instead of simply redirecting to our admittedly very sparse set of help documents on Tenderapp. This work item won't need a browser update; it will "just work" when I finish the code on Floodgap's server.

At this point barring some other major crisis I consider the browser shippable as 45.4 on time on 13 September, with Amazon Music being the highest priority issue to fix, and then we can start talking about what features we will add while we remain on feature parity. There will also be a subsequent point release of TenFourFoxBox to update foxboxes for new features in 45; stay tuned for that. Meanwhile, we could sure use your help on a couple other things:

  • Right now, our localized languages for the 45 launch are German, Spanish, French and Japanese, with an Italian translation in progress. We can launch with that, but we're losing a few languages there we had with 38. You can help.

  • Even minor releases of the browser get a small bump in support tickets on Tenderapp; a major release (such as 31 to 38, and now 38 to 45) generally brings lots of things out of the woodwork that we've never seen during the beta cycle. Most of these reports are spurious, but sometimes serious bugs have been uncovered even though they looked obviously bogus at first glance, and all of them need to be triaged regardless. If you're a reasonably adept user and you've got a decent understanding of how the browser works, help us help your fellow users by being a volunteer on Tenderapp. There's no minimum time commitment required; any help you can offer is appreciated, even if just temporary. Right now it's just Chris Trusch and me trying to field these reports, which is not enough to cover all of them (Theo used to help but we haven't seen him in a while). If you'd like to help, post in the comments and we'll make you a "blue name" on Tenderapp too.

That's it for now. See what you think.

Air MozillaOutreachy Participant Presentations Summer 2016

Outreachy Participant Presentations Summer 2016 Outreachy program participants from the summer 2016 cohort present their contributions and learnings. Mozilla has hosted 15 Outreachy participants this summer, working on technical projects...

Christian HeilmannSongsearch – using ServiceWorker to make a 4 MB CSV easily searchable in a browser

A few days ago I got an email about an old project of mine that uses YQL to turn a CSV into a “web services” interface. The person asking me for help was a Karaoke DJ who wanted to turn his available songs into a search interface. Maintenance of the dataset happens in Excel, so all that was needed was a way to display it on a laptop.

The person had a need and wanted a simple solution. He was OK with doing some HTML and CSS, but felt pretty lost at anything database-related or server side. Yes, the best solution for that would be a relational DB, as it would also speed up searches a lot. As I don't know how much longer YQL will remain reliably usable, I thought I'd use this opportunity to turn this into an offline app using ServiceWorker and do the whole work in client-side JavaScript. Enter SongSearch – with source available on GitHub.

the songsearch interface in action

Here’s how I built it:

  • The first step was to parse a CSV into an array in JavaScript. This has been discussed many a time and I found this solution on StackOverflow to be adequate for my needs.
  • I started by loading the file in and covering the interface with a loading message. This is not pretty, and if I'd had my own way and more time I'd probably look into streams instead.
  • I load the CSV with AJAX and parse it. From there on it is in memory for me to play with.
  • One thing that was fun to find out was how to get the currently selected radio button in a group. This used to be a loop. Nowadays it is totally fine to use document.querySelector(‘input[type=”radio”][name=”what”]:checked’).value where the name is the name of your radio group :)
  • One thing I found is that searching can be slow – especially when you enter just one character and the whole dataset gets returned. Therefore I wanted to show a searching message. This is not possible when you do it in the main thread as it is blocked during computation. The solution was to use a WebWorker. You show the searching message before starting the worker, and hide it once it messaged back. One interesting bit I did was to include the WebWorker code also as a resource in the page, to avoid me having to repeat the songsearch method.
  • The final part was to add a ServiceWorker to make it work offline and cache all the resources. The latter could even be done using AppCache, but this may soon be deprecated.

As you can see the first load gets the whole batch, but subsequent reloads of the app are almost immediate as all the data is cached by the ServiceWorker.

Loading before and after the serviceworker cached the content

It’s not rocket science, it is not pretty, but it does the job and I made someone very happy. Take a look, maybe there is something in there that inspires you, too. I’m especially “proud” of the logo, which wasn’t at all done in Keynote :)

Mozilla Security BlogMitigating MIME Confusion Attacks in Firefox

Scanning the content of a file allows web browsers to detect the format of a file regardless of the Content-Type specified by the web server. For example, if Firefox requests a script from a web server and that web server sends the script using a Content-Type of “image/jpg”, Firefox will detect the actual format and execute the script. This technique, colloquially known as “MIME sniffing”, compensates for incorrect, or even completely absent, metadata that browsers need to interpret the contents of a page resource. Firefox uses contextual clues (the HTML element that triggered the fetch) and also inspects the initial bytes of media type loads to determine the correct content type. While MIME sniffing improves the web experience for the majority of users, it also opens up an attack vector known as a MIME confusion attack.

Consider a web application which allows users to upload image files but does not verify that the user actually uploaded a valid image, e.g., the web application just checks for a valid file extension. This lack of verification allows an attacker to craft and upload an image which contains scripting content. The browser then renders the content as HTML, opening the possibility of a Cross-Site Scripting attack (XSS). Even worse, some files can be polyglots, which means their content satisfies two content types. E.g., a GIF can be crafted in a way to be a valid image and also valid JavaScript, and the correct interpretation of the file then solely depends on the context.

Starting with Firefox 50, Firefox will reject stylesheets, images or scripts if their MIME type does not match the context in which the file is loaded, provided the server sends the response header “X-Content-Type-Options: nosniff” (view specification). More precisely, if the Content-Type of a file does not match the context (see the detailed list of accepted Content-Types for each format underneath), Firefox will block the file, hence preventing such MIME confusion attacks, and will display the following message in the console:

The resource from “https://example.com/bar.jpg” was blocked due to MIME type mismatch (X-Content-Type-Options: nosniff).

Valid Content-Types for Stylesheets:
– “text/css”

Valid Content-Types for images:
– have to start with “image/”

Valid Content-Types for Scripts:
– “application/javascript”
– “application/x-javascript”
– “application/ecmascript”
– “application/json”
– “text/ecmascript”
– “text/javascript”
– “text/json”
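For server operators, opting in is just a matter of sending the header alongside a correct Content-Type. As a minimal illustrative sketch (not taken from any Firefox documentation), a plain Python WSGI application serving a script might look like this:

# Serve a script with an explicit Content-Type and the nosniff header,
# so a conforming browser will not MIME-sniff the response.
from wsgiref.simple_server import make_server

def app(environ, start_response):
    body = b"console.log('hello');"
    start_response('200 OK', [
        ('Content-Type', 'application/javascript'),
        ('X-Content-Type-Options', 'nosniff'),
    ])
    return [body]

if __name__ == '__main__':
    make_server('', 8000, app).serve_forever()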

K Lars Lohn0039 Firefox V2




0039 Firefox V2 - this is a remake of a maze that I originally drew on a flight from Oregon to the Mozilla All Hands meeting in Orlando, Florida in 2015.  I wanted to redo it as the original was numbered 0003 and my skills had advanced so far that I felt I needed to revisit this subject. 

Because it is a corporate logo, this maze is not for sale in the same manner as my other mazes.  I've not worked out a plan on how to distribute this maze with the company. 

The complete image posted here is only 1/4 of the full resolution of the original from which I make prints.  In full resolution it prints spectacularly well at 24" x 24".

Support.Mozilla.OrgWhat’s Up with SUMO – 25th August

Hello, SUMO Nation!

Another hot week behind us, and you kept us going – thank you for that! Here is a portion of the latest and greatest news from the world of SUMO – your world :-)

Welcome, new contributors!

If you just joined us, don’t hesitate – come over and say “hi” in the forums!

Contributors of the week

Don’t forget that if you are new to SUMO and someone helped you get started in a nice way you can nominate them for the Buddy of the Month!

Most recent SUMO Community meeting

The next SUMO Community meeting

  • …is happening on the 31st of August!
  • If you want to add a discussion topic to the upcoming meeting agenda:
    • Start a thread in the Community Forums, so that everyone in the community can see what will be discussed and voice their opinion here before Wednesday (this will make it easier to have an efficient meeting).
    • Please do so as soon as you can before the meeting, so that people have time to read, think, and reply (and also add it to the agenda).
    • If you can, please attend the meeting in person (or via IRC), so we can follow up on your discussion topic during the meeting with your feedback.

Community

Social

Support Forum

  • All quiet, keep up the awesome work, people :-)

Knowledge Base & L10n

Firefox

  • for Android
    • Version 49 will offer offline caching for some web pages. Take a bit of the net with you outside of the network’s range!
  • for iOS
    • Nothing new to report about iOS for now… Stay tuned for news in the future!

… and we’re done! We hope you have a great week(end) and that you come back soon to keep rocking the helpful web with us!

P.S. Just in case you missed it, here’s a great read about the way we want you to make Mozilla look better in the future.

Matěj CeplParsing milestoned XML in Python

I am trying to write a tool in Python (using Python 3.4, so that I can rely on the latest Python standard library without using any external libraries on Windows) for some manipulation of the source code of the Bible texts.

Let me first explain what milestoned XML is, because many Python programmers dealing with normal XML documents may not be familiar with it. There is a problem with using XML markup for documents with a complicated structure. One rather complete article on this topic is DeRose (2016).

Briefly [1], the problem in many areas (especially in document processing) is with multiple possible hierarchies overlapping each other (e.g., in Bibles there are divisions of text which run across verse and chapter boundaries and sometimes terminate in the middle of a verse; many Bibles, especially English ones, mark Jesus’ sayings with a special element; and of course this can span several verses, etc.). One of the ways to overcome the obvious problem that XML doesn't allow overlapping elements is to use milestones. So for example a book of the Bible could be divided not like

<book>
<chapter>
<verse>text</verse>
...
</chapter>
...
</book>

but just putting milestones in the text, i.e.:

<book>
<chapter n="1" />
<verse sID="ID1.1" />text of verse 1.1
<verse eID="ID1.1" /> ....
</book>

So, in my case the part of the document may look like

text text
<verse/>
textB textB <czap> textC textC <verse/> textD textD </czap>

And I would like to get from some kind of iterator this series of outputs:

[(1, 1, "text text", ['text text']),
 (1, 2, "textB textB textC textC",
  ['<verse/>', 'textB textB', '<czap>', 'textC textC']),
 (1, 3, "textD textD", ['<verse/>', 'textD textD', '</czap>'])]

(the first two numbers should be the number of the chapter and of the verse, respectively).

My first attempt was in its core this iterator:

def __iter__(self) -> Tuple[int, int, str]:
    """
    iterate through the first level elements

    NOTE: this iterator assumes only all milestoned elements on the first
    level of depth. If this assumption fails, it might be necessary to
    rewrite this function (or perhaps ``text`` method) to be recursive.
    """
    collected = None

    for child in self.root:
        if child.tag in ['titulek']:
            continue
        if child.tag in ['kap', 'vers']:
            if collected and collected.strip():
                yield self.cur_chapter, self.cur_verse, \
                    self._list_to_clean_text(collected)
            if child.tag == 'kap':
                self.cur_chapter = int(child.get('n'))
            elif child.tag == 'vers':
                self.cur_verse = int(child.get('n'))
            collected = child.tail or ''
        else:
            if collected is not None:
                if child.text is not None:
                    collected += child.text
                for sub_child in child:
                    collected += self._recursive_get_text(sub_child)
                if child.tail is not None:
                    collected += child.tail

(self.root is a product of ElementTree.parse(file_name).getroot()). The problem with this code lies in the note: when the <verse/> element is inside a <czap> element, it is ignored. So, obviously, we have to make our iterator recursive. My first idea was to write this script, which parses and regenerates the XML:

#!/usr/bin/env python3
from xml.etree import ElementTree as ET
from typing import List, Tuple

def start_element(elem: ET.Element) -> str:
    outx = ['<{} '.format(elem.tag)]
    for attr, attval in elem.items():
        outx.append('{}="{}" '.format(attr, attval))
    outx.append('>')
    return ''.join(outx)


def recursive_parse(elem: ET.Element) -> Tuple[List[str], str]:
    col_xml = []
    col_txt = ''

    if elem.text is None:
        col_xml.append(ET.tostring(elem, encoding='unicode'))
        if elem.tail is not None:
            col_txt += elem.tail
    else:
        col_xml.extend([start_element(elem), elem.text])
        col_txt += elem.text
        for subch in elem:
            subch_xml, subch_text = recursive_parse(subch)
            col_xml.extend(subch_xml)
            col_txt += subch_text
        col_xml.append('</{}>'.format(elem.tag))

        if elem.tail is not None:
            col_xml.append(elem.tail)
            col_txt += elem.tail

    return col_xml, col_txt


if __name__ == '__main__':
    # write result XML to CRLF-delimited file with
    # ET.tostring(ET.fromstringlist(result), encoding='utf8')
    # or encoding='unicode'? Better for testing?
    xml_file = ET.parse('tests/data/Mat-old.xml')

    collected_XML, collected_TEXT = recursive_parse(xml_file.getroot())
    with open('test.xml', 'w', encoding='utf8', newline='\r\n') as outf:
       print(ET.tostring(ET.fromstringlist(collected_XML),
                         encoding='unicode'), file=outf)

    with open('test.txt', 'w', encoding='utf8', newline='\r\n') as outf:
        print(collected_TEXT, file=outf)

This works correctly in the sense that the generated file test.xml is identical to the original XML file (after reformatting both files with tidy -i -xml -utf8). However, it is not an iterator, so I would like to somehow combine the virtues of both snippets of code into one. Obviously, the problem is that return in my ideal code should serve two purposes: once it should actually yield the nicely formatted result from the iterator, and the second time it should just provide the content of the inner elements (or not, if the inner element contains a <verse/> element). In my ideal world I would like recursive_parse() to function as an iterator capable of something like this:

if __name__ == '__main__':
    xml_file = ET.parse('tests/data/Mat-old.xml')
    parser = ET.XMLParser(target=ET.TreeBuilder())

    with open('test.txt', 'w', newline='\r\n') as out_txt, \
            open('test.xml', 'w', newline='\r\n') as out_xml:
        for ch, v, verse_txt, verse_xml in recursive_parse(xml_file):
            print(verse_txt, file=out_txt)
            # or directly parser.feed(verse_xml)
            # if verse_xml is not a list
            parser.feed(''.join(verse_xml))

        print(ET.tostring(parser.close(), encoding='unicode'),
              file=out_xml)

So, here is my first attempt to rewrite the iterator (so far without the XML-collecting part):

def __iter__(self) -> Tuple[CollectedInfo, str]:
    """
    iterate through the first level elements
    """
    cur_chapter = 0
    cur_verse = 0
    collected_txt = ''
    # collected XML is NOT directly convertable into Element objects,
    # it should be treated more like a list of SAX-like events.
    #
    # xml.etree.ElementTree.fromstringlist(sequence, parser=None)
    # Parses an XML document from a sequence of string fragments.
    # sequence is a list or other sequence containing XML data fragments.
    # parser is an optional parser instance. If not given, the standard
    # XMLParser parser is used. Returns an Element instance.
    #
    # sequence = ["<html><body>", "text</bo", "dy></html>"]
    # element = ET.fromstringlist(sequence)
    # self.assertEqual(ET.tostring(element),
    #         b'<html><body>text</body></html>')
    # FIXME: also collect the XML fragments
    # collected_xml = None

    for child in self.root.iter():
        if child.tag in ['titulek']:
            collected_txt += '\n{}\n'.format(child.text)
            collected_txt += child.tail or ''
        if child.tag in ['kap', 'vers']:
            if collected_txt and collected_txt.strip():
                yield CollectedInfo(cur_chapter, cur_verse,
                                    re.sub(r'[\s\n]+', ' ', collected_txt,
                                           flags=re.DOTALL).strip()), \
                    child.tail or ''

            if child.tag == 'kap':
                cur_chapter = int(child.get('n'))
            elif child.tag == 'vers':
                cur_verse = int(child.get('n'))
        else:
            collected_txt += child.text or ''

            for sub_child in child:
                for sub_info, sub_tail in MilestonedElement(sub_child):
                    if sub_info.verse == 0 or sub_info.chap == 0:
                        collected_txt += sub_info.text + sub_tail
                    else:
                        # FIXME what happens if sub_element contains
                        # multiple <verse/> elements?
                        yield CollectedInfo(
                            sub_info.chap, sub_info.verse,
                            collected_txt + sub_info.text), ''
                        collected_txt = sub_tail

            collected_txt += child.tail or ''

    yield CollectedInfo(0, 0, collected_txt), ''

Am I going the right way, or did I still not get it?

[1] From the discussion of the topic on the XSL list.
DeRose, Steven. 2016. “Proceedings of Extreme Markup Languages®.” Accessed August 25. http://conferences.idealliance.org/extreme/html/2004/DeRose01/EML2004DeRose01.html.

Air MozillaConnected Devices Weekly Program Update, 25 Aug 2016

Connected Devices Weekly Program Update Weekly project updates from the Mozilla Connected Devices team.

Mozilla Addons BlogWebExtensions in Firefox 50

Firefox 50 landed in Developer Edition this week, so we have another update on WebExtensions for you! Please use the WebExtensions API for any new add-on development, and consider porting your existing add-ons as soon as possible.

It’s also a great time to port because WebExtensions is compatible with multiprocess Firefox, which began rolling out in Firefox 48 to people without add-ons installed. When Firefox 49 reaches the release channel in September, we will begin testing multiprocess Firefox with add-ons. The goal is to turn it on for everyone in January 2017 with the release of Firefox 51.

If you need help porting to WebExtensions, please start with the compatibility checker, and check out these resources.

Since the last release, more than 79 bugs were closed on WebExtensions alone.

API Changes

In Firefox 50, a few more history APIs landed: the getVisits function and two events, onVisited and onVisitRemoved.

Content scripts in WebExtensions now gain access to a few export helpers that existed in SDK add-ons: cloneInto, createObjectIn and exportFunction.

The webNavigation API has gained event filtering. This allows users of the webNavigation API to filter events based on some criteria. Details on the URL Filtering option are available here.

There’s been a change to debugging WebExtensions. If you go to about:debugging and click on debug you now get all the Firefox Developer Tools features that are available to you on a regular webpage.

Why is this significant? Besides providing more developer features, this will work across add-on reloads and allows the debugging of more parts of WebExtensions. More importantly, it means that we are now using the same debugger that the rest of the Firefox Dev Tools team is using. Reducing duplicated code is a good thing.

As mentioned in an earlier blog post, native messaging is now available. This allows you to communicate with other processes on the host’s operating system. It’s a commonly used API for password managers and security software, which need to communicate with external processes.

Documentation

The documentation for WebExtensions has been updated with some amazing resources over the last few months. This has included the addition of a few new areas:

The documentation is hosted on MDN and updates or improvements to the documentation are always welcome.

There are now 17 example WebExtensions on github. Recently added are history-deleter and cookie-bg-picker.

What’s coming

We are currently working on the proxy API. The intent is to ship a slightly different API than the one Chrome provides because we have access to better APIs in Firefox.

The ability to write WebExtensions APIs in an add-on has now landed in Firefox 51 through the implementation of WebExtensions Experiments. This means that you don’t need to build and compile all of Firefox in order to add in new APIs and get involved in WebExtensions. The policy for this functionality is currently under discussion and we’ll have more details soon.

There are also lots of other ways to get involved with WebExtensions, so please check them out!

Air MozillaWeb QA Team Meeting, 25 Aug 2016

Web QA Team Meeting They say a Mozilla Web QA team member is the most fearless creature in the world. They say their jaws are powerful enough to crush...

Air MozillaReps weekly, 25 Aug 2016

Reps weekly This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

Marcia KnousProfessional Development - Strategic Leadership

This week I spent some time on the Harvard campus taking a professional development class in Strategic Leadership. It was an interesting experience, so I thought I would share some of the takeaways from the two days.
The 32 participants in the class represented many different countries and cultures - we had participants from the U.S., Europe, Asia, Africa and Latin America. They spanned many different industries - the Military, Education, BioTech, and Government. The networking event the first evening afforded us an opportunity to meet the diverse array of people from all different countries and industries (there was also another class being held with another 25 students from all over the world). I let many of the participants know that we had community members in their respective areas and described what Mozilla was all about. Some of the participants were quite excited about the fact they got to meet someone who works for the company that makes the browser they use on a daily basis. They were also interested to learn about how community contributions help drive the Mozilla project.
In addition to reviewing some case studies, the content on the first day explored the realms of leadership - including the psychological and emotional dimensions and styles. On Day 2 we took a deep dive into decision making - using emotional intelligence as a foundation and exploring the challenges of decision making.
The great thing about these types of courses is they leave ample time for reflection. You actually get to think about your style, and learn about yourself and your leadership styles and how you apply them to different situations in your life. And since the case studies are taken from real life situations, taking the time to think about how you would have handled the situation is definitely a good process to go through.
Both the recent TRIBE class I took and this one helped me gather some tools that I should be able to use when working with both my co-workers and community members. The important thing to remember is that it is a process - although you won't see changes overnight, if you employ some of the tools they outline, you may be able to hone your leadership skills as time passes.

The Mozilla BlogEU Copyright Law Undermines Innovation and Creativity on the Internet. Mozilla is Fighting for Reform

Mozilla has launched a petition — and will be releasing public education videos — to reform outdated copyright law in the EU


The internet is an unprecedented platform for innovation, opportunity and creativity. It’s where artists create; where coders and entrepreneurs build game-changing technology; where educators and researchers unlock progress; and where everyday people live their lives.

The internet brings new ideas to life every day, and helps make existing ideas better. As a result, we need laws that protect and enshrine the internet as an open, collaborative platform.

But in the EU, certain laws haven’t caught up with the internet. The current copyright legal framework is outdated. It stifles opportunity and prevents — and in many cases, legally prohibits — artists, coders and everyone else from creating and innovating online. This framework was enacted before the internet changed the way we live. As a result, these laws clash with life in the 21st century.

Copyright

Here are just a few examples of outdated copyright law in the EU:

  • It’s illegal to share a picture of the Eiffel Tower light display at night. The display is copyrighted — and tourists don’t have the artists’ express permission.
  • In some parts of the EU, making a meme is technically unlawful. There is no EU-wide fair use exception.
  • In some parts of the EU, educators can’t screen films or share teaching materials in the classroom due to restrictive copyright law.

It’s time our laws caught up with our technology. Now is the time to make a difference: This fall, the European Commission plans to reform the EU copyright framework.

Mozilla is calling on the EU Commission to enact reform. And we’re rallying and educating citizens to do the same. Today, Mozilla is launching a campaign to bring copyright law into the 21st century. Citizens can read and sign our petition. When you add your name, you’re supporting three big reforms:

1. Update EU copyright law for the 21st century.

Copyright can be valuable in promoting education, research, and creativity — if it’s not out of date and excessively restrictive. The EU’s current copyright laws were passed in 2001, before most of us had smartphones. We need to update and harmonise the rules so we can tinker, create, share, and learn on the internet. Education, parody, panorama, remix and analysis shouldn’t be unlawful.

2. Build in openness and flexibility to foster innovation and creativity.

Technology advances at a rapid pace, and laws can’t keep up. That’s why our laws must be future-proof: designed so they remain relevant in 5, 10 or even 15 years. We need to allow new uses of copyrighted works in order to expand growth and innovation. We need to build into the law flexibility — through a User Generated Content (UGC) exception and a clause like an open norm, fair dealing, or fair use — to empower everyday people to shape and improve the internet.

3. Don’t break the internet.

A key part of what makes the internet awesome is the principle of innovation without permission — that anyone, anywhere, can create and reach an audience without anyone standing in the way. But that key principle is under threat. Some people are calling for licensing fees and restrictions on internet companies for basic things like creating hyperlinks or uploading content. Others are calling for new laws that would mandate monitoring and filtering online. These changes would establish gatekeepers and barriers to entry online, and would risk undermining the internet as a platform for economic growth and free expression.

At Mozilla, we’re committed to an exceptional internet. That means fighting for laws that make sense in the 21st century. Are you with us? Voice your support for modern copyright law in the EU and sign the petition today.

Mozilla Addons BlogAdd-ons Update – Week of 2016/08/24

I post these updates every 3 weeks to inform add-on developers about the status of the review queues, add-on compatibility, and other happenings in the add-ons world.

The Review Queues

In the past 3 weeks, 1228 listed add-on submissions were reviewed:

  • 1125 (92%) were reviewed in fewer than 5 days.
  • 57 (4%) were reviewed between 5 and 10 days.
  • 46 (3%) were reviewed after more than 10 days.

There are 203 listed add-ons awaiting review.

You can read about the improvements we’ve made in the review queues here.

If you’re an add-on developer and are looking for contribution opportunities, please consider joining us. Add-on reviewers are critical for our success, and can earn cool gear for their work. Visit our wiki page for more information.

Compatibility

The compatibility blog post for Firefox 49 is up, and the bulk validation was run. The blog post for Firefox 50 should be published in the next week.

Going back to Firefox 48, there are a couple of changes that are worth keeping in mind: (1) release and beta builds no longer have a preference to deactivate signing enforcement, and (2) multiprocess Firefox is now enabled for users without add-ons, and add-ons will be gradually phased in, so make sure you’ve tested your add-on and either use WebExtensions or set the multiprocess compatible flag in your add-on manifest.

As always, we recommend that you test your add-ons on Beta and Firefox Developer Edition to make sure that they continue to work correctly. End users can install the Add-on Compatibility Reporter to identify and report any add-ons that aren’t working anymore.

Recognition

We would like to thank fiveNinePlusR for their recent contributions to the add-ons world. You can read more about their contributions in our recognition page.

Emma HumphriesGoing from "Drive-Thru" Bug Filers to New Mozillians

Earlier this month at PyCon AU, VM Brasseur gave this talk on maximizing the contributions from "drive-thru" participants in your open source project. Her thesis is that most of your contributors are going to report one bug, fix one thing, and move on, so it's in your best interest to set up your project so that these one-and-done contributors have a great experience: a process which is "easy to understand, easy to follow, and which makes it easy to contribute" will make new contributors successful, and more likely to return.

Meanwhile, the Firefox team's trying to increase the number of repeat contributors, in particular, people filing quality bugs. We know that a prompt reply to a first code contribution encourages that contributor to do more, and believe the same is true of bug filing: that quickly triaging incoming bugs will encourage those contributors to file more, and better bugs, making for a healthy QA process and a higher quality product.

For now, it's rare that the first few bugs someone files in Bugzilla are "high quality" bugs - clearly described, reproducible and actionable - which means they're more likely to stall, or be closed as invalid or incomplete, which in turn means that contributor is less likely to file another.

In order to have more repeat contributors, we have to work on the "drive-thru" bug filer's experience that Brasseur describes in her talk, to make new contributors successful, and make them repeat quality bug filers. Here's some of the things we're doing:

  • Michael Hoye's asking new contributors about their bug-filing experience with a quick survey.
  • We're measuring contributor behavior in a dashboard, to find out how often contributors, new and old are filing bugs.
  • Hamilton Ulmer from the Data team is looking at an extract of bugs from Bugzilla to characterize good vs. bad bugs (that is, bugs that stall or get closed as invalid).
  • The BMO and Firefox teams have been working on UI improvements to Bugzilla (Modal View and Readable Bug Statuses) to make it easier to file and follow up on bugs.

These steps don't guarantee success, but will guide us going forward. Thanks to Michael Hoye for review and comments.

Wil ClouserIt's time to upgrade from Test Pilot to Test Pilot

Were you one of the brave pioneers who participated in the original Test Pilot program? You deserve a hearty thanks!

Are you also someone who never bothered to uninstall the old Test Pilot add-on which has been sitting around in your browser ever since? Or maybe you were hoping the program would be revived and another experiment would appear? Or maybe you forgot all about it?

If that's you, I've got news: I'm releasing an upgrade for the old Test Pilot add-on today which will:

  • Open a new tab inviting you to join the new Test Pilot program
  • Uninstall itself and all related settings and data

If you're interested in joining the new program, please click through and install the new add-on when you're prompted. If you're not interested, just close the tab and be happy your browser has one less add-on it needs to load every time you start it up.

Random FAQs:

  • The old add-on is, well, old, so it can only be uninstalled after you restart Firefox. It's all automatic when you do, but if you're waiting for a prompt and don't see it, that could be the culprit.

  • If you already have the new Test Pilot add-on installed, you can just close the new tab. The old add-on will be uninstalled and your existing Test Pilot add-on (and any experiments you have running) will be unaffected.

A special thanks to Philipp Kewisch and Andreas Wagner for their help writing and reviewing the add-on.

ArkyGoogle Cardboard 360° Photosphere Viewer with A-Frame

In my previous post "Embedding Google Cardboard Camera VR Photosphere with A-Frame", I wrote that some talented programmer would probably create a better solution for embedding Google Cardboard camera photosphere using A-Frame.

I didn't know that Chris Car had already created a sophisticated solution for this problem. You can view it here on the A-Frame blog.

You might first have to use the Google Cardboard camera converter tool to make your Google Cardboard photosphere.


Robert O'CallahanRandom Thoughts On Rust: crates.io And IDEs

I've always liked the idea of Rust, but to tell the truth until recently I hadn't written much Rust code. Now I've written several thousand lines of Rust code and I have some more informed comments :-). In summary: Rust delivers, with no major surprises, but of course some aspects of Rust are better than I expected, and others worse.

cargo and crates.io

cargo and crates.io are awesome. They're probably no big deal if you've already worked with a platform that has similar infrastructure for distributing and building libraries, but I'm mainly a systems programmer which until now meant C and C++. (This is one note in a Rust theme: systems programmers can have nice things.) Easy packaging and version management encourages modularising and publishing code. Knowing that publishing to crates.io gives you some protection against language/compiler breakage is also a good incentive.

There's been some debate about whether Rust should have a larger standard library ("batteries included"). IMHO that's unnecessary; my main issue with crates.io is discovery. Anyone can claim any unclaimed name and it's sometimes not obvious what the "best" library is for any given task. An "official" directory matching common tasks to blessed libraries would go a very long way. I know from browser development that an ever-growing centrally-supported API surface is a huge burden, so I like the idea of keeping the standard library small and keeping library development decentralised and reasonably independent of language/compiler updates. It's really important to be able to stop supporting an unwanted library, letting its existing users carry on using it without imposing a burden on anyone else. However, it seems likely that in the long term crates.io will accumulate a lot of these orphaned crates, which will make searches increasingly difficult until the discovery problem is addressed.

IDEs

So far I've been using Eclipse RustDT. It's better than nothing, and good enough to get my work done, but unfortunately not all that good. I've heard that others are better but none are fantastic yet. It's a bit frustrating because in principle Rust's design enables the creation of extraordinarily good IDE support! Unlike C/C++, Rust projects have a standard, sane build system and module structure. Rust is relatively easy to parse and is a lot simpler than C++. Strong static typing means you can do code completion without arcane heuristics. A Rust IDE could generate code for you in many situations, e.g.: generate a skeleton match body covering all cases of a sum type; generate a skeleton trait implementation; one-click #[derive] annotation; automatically add use statements and update Cargo.toml; automatically insert conversion trait calls and type coercions; etc. Rust has quite comprehensive style and naming guidelines that an IDE can enforce and assist with. (I don't like a couple of the standard style decisions --- the way rustfmt sometimes breaks very().long().method().call().chains() into one call per line is galling --- but it's much better to have them than a free-for-all.) Rust is really good at warning about unused cruft up to crate boundaries --- a sane module system at work! --- and one-click support for deleting it all would be great. IDEs should assist with semantic versioning --- letting you know if you've changed stable API but haven't revved the major version. All the usual refactorings are possible, but unlike mainstream languages you can potentially do aggressive code motion without breaking semantics, by leveraging Rust's tight grip on side effects. (More about this in another blog post.)

I guess one Rust feature that makes an IDE's job difficult is type inference. For C and C++ an IDE can get adequate information for code-completion etc by just parsing up to the user's cursor and ignoring the rest of the containing scope (which will often be invalid or non-existent). That approach would not work so well in Rust, because in many cases the types of variables depend on code later in the same function. The IDE will need to deal with partially-known types and try very hard to quickly recover from parse errors so later code in the same function can be parsed. It might be a good idea to track text changes and reuse previous parses of unchanged text instead of trying to reparse it with invalid context. On the other hand, IDEs and type inference have some synergy because the IDE can display inferred types.

Mitchell BakerPracticing Open: Expanding Participation in Hiring Leadership

Last fall I came across a hiring practice that surprised me. We were hiring for a pretty senior position. When I looked into the interview schedule I realized that we didn’t have a clear process for the candidate to meet a broad cross-section of Mozillians.  We had a good clear process for the candidate to meet peers and people in the candidate’s organization.  But we didn’t have a mechanism to go broader.

This seemed inadequate to me, for two reasons. First, the more senior the role, the broader a part of Mozilla we expect someone to be able to lead, and the broader a sense of representing the entire organization we expect that person to have. Our hiring process should reflect this by giving the candidate and a broader section of people a chance to interact.

Second, Mozilla’s core DNA is from the open source world, where one earns leadership by first demonstrating one’s competence to one’s peers. That makes Mozilla a tricky place to be hired as a leader. So many roles don’t have ways to earn leadership through demonstrating competence before being hired. We can’t make this paradox go away. So we should tune our hiring process to do a few things:

  • Give candidates a chance to demonstrate their competence to the same set of people who hope they can lead. The more senior the role, the broader a part of Mozilla we expect someone to be able to lead.
  • Expand the number of people who have a chance to make at least a preliminary assessment of a candidate’s readiness for a role. This isn’t the same as the open source ideal of working with someone for a while. But it is a big difference from never knowing or seeing or being consulted about a candidate. We want to increase the number of people who are engaged in the selection and then helping the newly hired person succeed.

We made a few changes right away, and we’re testing out how broadly these changes might be effective.  Our immediate fix was to organize a broader set of people to talk to the candidate through a panel discussion. We aimed for a diverse group, from role to gender to geography. We don’t yet have a formalized way to do this, and so we can’t yet guarantee that we’re getting a representational group or that other potential criteria are met. However, another open source axiom is that “the perfect is the enemy of the good.” And so we started this with the goal of continual improvement. We’ve used the panel for a number of interviews since then.

We looked at this in more detail during the next senior leadership hire. Jascha Kaykas-Wolff, our Chief Marketing Officer, jumped on board, suggesting we try this out with the Vice President of Marketing Communications role he had open. Over the next few months Jane Finette (executive program manager, Office of the Chair) worked closely with Jascha to design and pilot a program of extending participation in the selection of our next VP of MarComm. Jane will describe that work in the next post. Here, I’ll simply note that the process was well received. Jane is now working on a similar process for the Director level.

Mozilla Cloud Services BlogSending VAPID identified WebPush Notifications via Mozilla’s Push Service


Introduction

The Web Push API provides the ability to deliver real time events (including data) from application servers (app servers) to their client-side counterparts (applications), without any interaction from the user. In other parts of our Push documentation we provide a general reference for the application API and a basic client usage tutorial. This document addresses the app server side portion in detail, including integrating VAPID and Push message encryption into your server effectively, and how to avoid common issues.

Note: Much of this document presumes you’re familiar with programming and have done some light work in cryptography. Unfortunately, since this is new technology, there aren’t many libraries available that make sending messages painless and easy. As new libraries come out, we’ll add pointers to them, but for now, we’re going to spend time talking about how to do the encryption so that folks who need it, or want to build those libraries, can understand enough to be productive.

Bear in mind that Push is not meant to replace richer messaging technologies like Google Cloud Messaging (GCM), the Apple Push Notification service (APNs), or Microsoft’s Windows Notification System (WNS). Each has its benefits and costs, and it’s up to you as developers or architects to determine which system solves your particular set of problems. Push is simply a low cost, easy means to send data to your application.


Push Summary


The Push system looks like:
A diagram of the push process flow

Application — The user facing part of the program that interacts with the browser in order to request a Push Subscription, and receive Subscription Updates.
Application Server — The back-end service that generates Subscription Updates for delivery across the Push Server.
Push — The system responsible for delivery of events from the Application Server to the Application.
Push Server — The server that handles the events and delivers them to the correct Subscriber. Each browser vendor has their own Push Server to handle subscription management. For instance, Mozilla uses autopush.
Subscription — A user request for timely information about a given topic or interest, which involves the creation of an Endpoint to deliver Subscription Updates to. Sometimes also referred to as a “channel”.
Endpoint — A specific URL that can be used to send a Push Message to a specific Subscriber.
Subscriber — The Application that subscribes to Push in order to receive updates, or the user who instructs the Application to subscribe to Push, e.g. by clicking a “Subscribe” button.
Subscription Update — An event sent to Push that results in a Push Message being received from the Push Server.
Push Message — A message sent from the Application Server to the Application, via a Push Server. This message can contain a data payload.

The main parts that are important to Push from a server-side perspective are covered in detail in the sections below.


Identifying Yourself


Mozilla goes to great lengths to respect privacy, but sometimes, identifying your feed can be useful.

Mozilla offers the ability for you to identify your feed content, which is done using the Voluntary Application Server Identification for Web Push (VAPID) specification. This is a set of header values you pass with every subscription update. One value is a VAPID key that validates your VAPID claim, and the other is the VAPID claim itself — a set of metadata describing and defining the current subscription and where it originated from.

VAPID is only useful between your servers and our push servers. If we notice something unusual about your feed, VAPID gives us a way to contact you so that things can go back to running smoothly. In the future, VAPID may also offer additional benefits like reports about your feeds, automated debugging help, or other features.

In short, VAPID is a bit of JSON that contains an email address to contact you, an optional URL that’s meaningful about the subscription, and a timestamp. I’ll talk about the timestamp later, but really, think of VAPID as the stuff you’d want us to have to help you figure out something went wrong.

It may be that you only send one feed, and just need a way for us to tell you if there’s a problem. It may be that you have several feeds you’re handling for customers of your own, and it’d be useful to know if maybe there’s a problem with one of them.

Generating your VAPID key

The easiest way to do this is to use an existing library for your language. VAPID is a new specification, so not all languages may have existing libraries.
Currently, we’ve collected several libraries under https://github.com/web-push-libs/vapid and are very happy to learn about more.

Fortunately, the method to generate a key is fairly easy, so you could implement your own library without too much trouble.

The first requirement is an Elliptic Curve Diffie Hellman (ECDH) library capable of working with Prime 256v1 (also known as “p256” or similar) keys. On many systems, the OpenSSL package provides this feature. Check that your version supports ECDH and Prime 256v1; if not, you may need to download, compile, and link the library yourself.

At this point you should generate an EC key for your VAPID identification. Please remember that you should NEVER reuse the VAPID key for the data encryption key you’ll need later. To generate an ECDH key using OpenSSL, enter the following command in your Terminal:

openssl ecparam -name prime256v1 -genkey -noout -out vapid_private.pem

This will create an EC private key and write it into vapid_private.pem. It is important to safeguard this private key. While you can always generate a replacement key that will work, Push (or any other service that uses VAPID) will recognize the different key as a completely different user.

You’ll need to send the public key as one of the headers. It can be extracted from the private key with the following terminal command:

openssl ec -in vapid_private.pem -pubout -out vapid_public.pem
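
If you would rather generate the key pair programmatically, here is a rough sketch using the Python cryptography package (an assumption on my part; any ECDH-capable library will do, and the libraries collected at https://github.com/web-push-libs/vapid wrap this up for you):

# A sketch only: generate a prime256v1 (P-256) key pair and write it out as PEM,
# mirroring the two openssl commands above. Assumes the "cryptography" package.
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import ec

private_key = ec.generate_private_key(ec.SECP256R1(), default_backend())
with open("vapid_private.pem", "wb") as f:
    f.write(private_key.private_bytes(
        encoding=serialization.Encoding.PEM,
        format=serialization.PrivateFormat.PKCS8,  # note: openssl ecparam writes the older "EC PRIVATE KEY" form
        encryption_algorithm=serialization.NoEncryption()))
with open("vapid_public.pem", "wb") as f:
    f.write(private_key.public_key().public_bytes(
        encoding=serialization.Encoding.PEM,
        format=serialization.PublicFormat.SubjectPublicKeyInfo))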

Creating your VAPID claim

VAPID uses JWT to contain a set of information (or “claims”) that describe the sender of the data. JWTs (JSON Web Tokens) are a pair of JSON objects, turned into base64 strings, and signed with the private ECDH key you just made. A JWT element contains three parts separated by “.”, and may look like:

eyJ0eXAiOiJKV1QiLCJhbGciOiJFUzI1NiJ9.eyJzdWIiOiAibWFpbHRvOmFkbWluQGV4YW1wbGUuY29tIiwgImV4cCI6ICIxNDYzMDg3Njc3In0.uyVNHws2F3k5jamdpsH2RTfhI3M3OncskHnTHnmdo0hr1ZHZFn3dOnA-42YTZ-u8_KYHOOQm8tUm-1qKi39ppA

  1. The first element is a “header” describing the JWT object. This JWT header is always the same — the static string {"typ":"JWT","alg":"ES256"} — which is URL safe base64 encoded to eyJ0eXAiOiJKV1QiLCJhbGciOiJFUzI1NiJ9. For VAPID, this string should always be the same value.
  2. The second element is a JSON dictionary containing a set of claims. For our example, we’ll use the following claims:
    {
        "sub": "mailto:admin@example.com",
        "exp": "1463087677"
    }
    

    The required claims are as follows:

    1. sub : The “Subscriber” — a mailto link for the administrative contact for this feed. It’s best if this email is not a personal email address, but rather a group email so that if a person leaves an organization, is unavailable for an extended period, or otherwise can’t respond, someone else on the list can. Mozilla will only use this if we notice a problem with your feed and need to contact you.
    2. exp : “Expires” — this is an integer that is the date and time that this VAPID header should remain valid until. It doesn’t reflect how long your VAPID signature key should be valid, just this specific update. Normally this value is fairly short, usually the current UTC time + no more than 24 hours. A long lived “VAPID” header does introduce a potential “replay” attack risk, since the VAPID headers could be reused for a different subscription update with potentially different content.

     

    Feel free to add additional items to your claims. This info really should be the sort of things you want to get at 3AM when your server starts acting funny. For instance, you may run many AWS EC2 instances, and one might be acting up. It might be a good idea to include the AMI-ID of that instance (e.g. “aws_id”:”i-5caba953″). You might be acting as a proxy for some other customer, so adding a customer ID could be handy. Just remember that you should respect privacy and should use an ID like “abcd-12345” rather than “Mr. Johnson’s Embarrassing Bodily Function Assistance Service”. Just remember to keep the data fairly short so that there aren’t problems with intermediate services rejecting it because the headers are too big.

Once you’ve composed your claims, you need to convert them to a JSON formatted string, ideally with no padding space between elements[1], for example:

{"sub":"mailto:admin@example.com","exp":"1463087677"}

Then convert this string to a URL-safe base64-encoded string, with the padding ‘=’ removed. For example, if we were to use python:


   import base64
   import json
   # These are the claims
   claims = {"sub":"mailto:admin@example.com",
             "exp":"1463087677"}
   # convert the claims to JSON, then URL-safe base64 encode, dropping the "=" padding
   body = base64.urlsafe_b64encode(json.dumps(claims)).rstrip("=")
   print body

would give us

eyJzdWIiOiAibWFpbHRvOmFkbWluQGV4YW1wbGUuY29tIiwgImV4cCI6ICIxNDYzMDg3Njc3In0

This is the “body” of the JWT base string.

The header and the body are separated with a ‘.’ making the JWT base string.

eyJ0eXAiOiJKV1QiLCJhbGciOiJFUzI1NiJ9.eyJzdWIiOiAibWFpbHRvOmFkbWluQGV4YW1wbGUuY29tIiwgImV4cCI6ICIxNDYzMDg3Njc3In0

  • The final element is the signature. This is an ECDSA signature of the JWT base string created using your VAPID private key. This signature is URL safe base64 encoded, “=” padding removed, and again joined to the base string with a ‘.’ delimiter.

    Generating the signature depends on your language and library, but is done by the ecdsa algorithm using your private key. If you’re interested in how it’s done in Python or Javascript, you can look at the code in https://github.com/web-push-libs/vapid.

    Since your private key will not match the one we’ve generated, the signature you see in the last part of the following example will be different.

    eyJ0eXAiOiJKV1QiLCJhbGciOiJFUzI1NiJ9.eyJzdWIiOiAibWFpbHRvOmFkbWluQGV4YW1wbGUuY29tIiwgImV4cCI6ICIxNDYzMDg3Njc3In0.uyVNHws2F3k5jamdpsH2RTfhI3M3OncskHnTHnmdo0hr1ZHZFn3dOnA-42YTZ-u8_KYHOOQm8tUm-1qKi39ppA
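
    As an illustration only (the libraries in the repository above do this for you), signing with the third-party python-ecdsa package might look roughly like the following sketch; header_b64 and body_b64 are hypothetical variables holding the two URL safe base64 segments built above:

    import base64
    import hashlib
    import ecdsa

    # vapid_private.pem is the key generated earlier
    with open("vapid_private.pem") as f:
        signing_key = ecdsa.SigningKey.from_pem(f.read())

    jwt_base = header_b64 + "." + body_b64
    # python-ecdsa's default signature encoding is the raw r||s concatenation that ES256 expects
    signature = signing_key.sign(jwt_base, hashfunc=hashlib.sha256)
    token = jwt_base + "." + base64.urlsafe_b64encode(signature).rstrip("=")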


  • Forming your headers


    The VAPID claim you assembled in the previous section needs to be sent along with your Subscription Update as an Authorization header Bearer token — the complete token should look like so:

    Authorization: Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJFUzI1NiJ9.eyJzdWIiOiAibWFpbHRvOmFkbWluQGV4YW1wbGUuY29tIiwgImV4cCI6ICIxNDYzMDg3Njc3In0.uyVNHws2F3k5jamdpsH2RTfhI3M3OncskHnTHnmdo0hr1ZHZFn3dOnA-42YTZ-u8_KYHOOQm8tUm-1qKi39ppA

    Note: the header should not contain line breaks. Those have been added here to aid in readability.
    Important! The Authorization header ID will be changing soon from “Bearer” to “WebPush”. Some Push Servers may accept both, but if your request is rejected, you may want to try changing the tag. This, of course, is part of the fun of working with draft specifications.

    You’ll also need to send a Crypto-Key header along with your Subscription Update — this includes a p256ecdsa element that takes the VAPID public key as its value — formatted as a URL safe, base64 encoded DER formatted string of the raw keypair.

    An example follows:
    Crypto-Key: p256ecdsa=BA5vkyMXVfaKuehJuecNh30-NiC7mT9gM97Op5d8LiAKzfIezLzCZMwrY7OypBBNwEnusGkdg9F84WqW1j5Ymjk

    Note: If you like, you can cheat here and use the content of “vapid_public.pem”. You’ll need to remove the “-----BEGIN PUBLIC KEY-----” and “-----END PUBLIC KEY-----” lines, remove the newline characters, and convert all “+” to “-” and “/” to “_”.
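
    A minimal sketch of that cheat in Python, assuming the vapid_public.pem file generated earlier:

    with open("vapid_public.pem") as f:
        raw = "".join(line.strip() for line in f if "PUBLIC KEY" not in line)
    p256ecdsa = raw.replace("+", "-").replace("/", "_").rstrip("=")
    print("Crypto-Key: p256ecdsa=" + p256ecdsa)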

    You can validate your work against the VAPID test page — this will tell you if your headers are properly encoded. In addition, the VAPID repo contains libraries for JavaScript and Python to handle this process for you.

    We’re happy to consider PRs to add libraries covering additional languages.


    Receiving Subscription Information


    Your Application will receive an endpoint and key retrieval functions that contain all the info you’ll need to successfully send a Push message. See
    Using the Push API for details about this. Your application should send this information, along with whatever additional information is required, securely to the Application Server as a JSON object.

    Such a post back to the Application Server might look like this:

    
    {
        "customerid": "123456",
        "subscription": {
            "endpoint": "https://updates.push.services.mozilla.com/push/v1/gAAAA…",
            "keys": {
                "p256dh": "BOrnIslXrUow2VAzKCUAE4sIbK00daEZCswOcf8m3TF8V…",
                "auth": "k8JV6sjdbhAi1n3_LDBLvA"
             }
        },
        "favoritedrink": "warm milk"
    }
    

    In this example, the “subscription” field contains the elements returned from a fulfilled PushSubscription. The other elements represent additional data you may wish to exchange.

    How you decide to exchange this information is completely up to your organization. You are strongly advised to protect this information. If an unauthorized party gained this information, they could send messages pretending to be you. This can be made more difficult by using a “Restricted Subscription”, where your application passes along your VAPID public key as part of the subscription request. A restricted subscription can only be used if the subscription carries your VAPID information signed with the corresponding VAPID private key. (See the previous section for how to generate VAPID signatures.)

    Subscription information is subject to change and should be considered “opaque”. You should consider the data to be a “whole” value and associate it with your user. For instance, attempting to retain only a portion of the endpoint URL may lead to future problems if the endpoint URL structure changes. Key data is also subject to change. The app may receive an update that changes the endpoint URL or key data. This update will need to be reflected back to your server, and your server should use the new subscription information as soon as possible.
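
    As a purely hypothetical sketch of the receiving side (the framework, URL, and storage choices here are assumptions, not part of the Push API), a small Flask endpoint that accepts the JSON post-back above and stores the subscription whole might look like:

    from flask import Flask, request

    app = Flask(__name__)
    subscriptions = {}  # stand-in for a real datastore

    @app.route("/register", methods=["POST"])
    def register():
        data = request.get_json()
        # keep the subscription opaque and whole, as advised above
        subscriptions[data["customerid"]] = data["subscription"]
        return "", 201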


    Sending a Subscription Update Without Data


    Subscription Updates come in two varieties: data free and data bearing. We’ll look at these separately, as they have differing requirements.

    Data Free Subscription Updates

    Data free updates require no additional App Server processing, however your Application will have to do additional work in order to act on them. Your application will simply get a “push” message containing no further information, and it may have to connect back to your server to find out what the update is. It is useful to think of Data Free updates like a doorbell — “Something wants your attention.”

    To send a Data Free Subscription, you POST to the subscription endpoint. In the following example, we’ll include the VAPID header information. Values have been truncated for presentation readability.

    
    curl -v -X POST\
      -H "Authorization: Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJFUzI1NiJ9.eyJhdWQiOiJodHR…"\
      -H "Crypto-Key: p256ecdsa=BA5vkyMXVfaKuehJuecNh30-NiC7mT9gM97Op5d8LiAKzfIezLzC…"\
      -H "TTL: 0"\
      "https://updates.push.services.mozilla.com/push/v1/gAAAAABXAuZmKfEEyPbYfLXqtPW…"
    

    This should result in an Application getting a “push” message, with no data associated with it.
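
    The same data free update, sketched with the Python requests package; vapid_jwt and vapid_public_key are hypothetical variables holding the values built in the previous sections, and subscription is the stored subscription info:

    import requests

    headers = {
        "Authorization": "Bearer " + vapid_jwt,
        "Crypto-Key": "p256ecdsa=" + vapid_public_key,
        "TTL": "0",
    }
    response = requests.post(subscription["endpoint"], headers=headers)
    # a 201 response means the Push Server accepted the update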

    To see how to store the Push Subscription data and send a Push Message using a simple server, see our Using the Push API article.

    Data Bearing Subscription Updates


    Data Bearing updates are a lot more interesting, but do require significantly more work. This is because we treat our own servers as potentially hostile and require “end-to-end” encryption of the data. The message you send across the Mozilla Push Server cannot be read. To ensure privacy, however, your application will not receive data that cannot be decoded by the User Agent.

    There are libraries available for several languages at https://github.com/web-push-libs, and we’re happy to accept or link to more.

    The encryption method used by Push is Elliptic Curve Diffie Hellman (ECDH) encryption, which uses a key derived from two pairs of EC keys. If you’re not familiar with encryption, or the brain twisting math that can be involved in this sort of thing, it may be best to wait for an encryption library to become available. Encryption is often complicated to understand, but it can be interesting to see how things work.

    If you’re familiar with Python, you may want to just read the code for the http_ece package. If you’d rather read the original specification, that is available. While the code is not commented, it’s reasonably simple to follow.

    Data encryption summary

    • Octet — An 8 bit byte of data (between \x00 and \xFF)
    • Subscription data — The subscription data to encode and deliver to the Application.
    • Endpoint — the Push service endpoint URL, received as part of the Subscription data.
    • Receiver key — The p256dh key received as part of the Subscription data.
    • Auth key — The auth key received as part of the Subscription data.
    • Payload — The data to encrypt, which can be any streamable content between 2 and 4096 octets.
    • Salt — 16 octet array of random octets, unique per subscription update.
    • Sender key — A new ECDH key pair, unique per subscription update.

    Web Push limits the size of the data you can send to between 2 and 4096 octets. You can send larger data as multiple segments, however that can be very complicated. It’s better to keep segments smaller. Data, whatever the original content may be, is also turned into octets for processing.

    Each subscription update requires two unique items — a salt and a sender key. The salt is a 16 octet array of random octets. The sender key is a ECDH key pair generated for this subscription update. It’s important that neither the salt nor sender key be reused for future encrypted data payloads. This does mean that each Push message needs to be uniquely encrypted.
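
    As a small sketch, generating those two per-update items in Python might look like this (pyelliptic, as used in the snippets below, and os.urandom for the random salt):

    import os
    import pyelliptic

    salt = os.urandom(16)                             # 📎 16 random octets
    server_key = pyelliptic.ECC(curve="prime256v1")   # a fresh sender key pair for this update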

    The receiver key is the public key from the client’s ECDH pair. It is base64, URL safe encoded and will need to be converted back into an octet array before it can be used. The auth key is a shared “nonce”, or bit of random data like the salt.

    Emoji Based Diagram

    Subscription Data:
    • 🎯 Endpoint
    • 🔒 Receiver key (‘p256dh’)
    • 💩 Auth key (‘auth’)

    Per Update Info:
    • 🔑 Private Server Key
    • 🗝 Public Server Key
    • 📎 Salt

    Update:
    • 📄 Payload

    Other symbols:
    • 🔐 Private Sender Key
    • ✒️ Public Server Key
    • 🏭 Build using / derive
    • 🔓 message encryption key
    • 🎲 message nonce

    Encryption uses a fabricated key and nonce. We’ll discuss how the actual encryption is done later, but for now, let’s just create these items.

    Creating the Encryption Key and Nonce

    The encryption relies heavily on HKDF (HMAC-based Key Derivation Function) using a SHA256 hash.

    Creating the secret

    The first HKDF function you’ll need generates the common secret (🙊), a 32 octet value derived using the auth key (💩) as the salt and the string “Content-Encoding: auth\x00” as the HKDF info value.
    So, in emoji =>
    🔐 = 🔑(🔒);
    🙊 = HKDF(💩, “Content-Encoding: auth\x00”).🏭(🔐)

    An example function in Python could look like so:

    
    # assumes: import base64, pyelliptic
    #          from cryptography.hazmat.primitives import hashes
    #          from cryptography.hazmat.primitives.kdf.hkdf import HKDF
    # receiver_key must have "=" padding added back before it can be decoded.
    # How that's done is an exercise for the reader.
    # 🔒
    receiver_key = subscription['keys']['p256dh']
    # 🔑
    server_key = pyelliptic.ECC(curve="prime256v1")
    # 🔐
    sender_key = server_key.get_ecdh_key(base64.urlsafe_b64decode(receiver_key))

    # auth is the decoded "auth" value from the subscription keys (💩)
    #🙊
    secret = HKDF(
        hashes.SHA256(),
        length = 32,
        salt = auth,
        info = "Content-Encoding: auth\0").derive(sender_key)
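
    One way to handle the "=" padding mentioned in the comment above (a sketch, since the original leaves it as an exercise):

    def repad(b64_string):
        # base64 strings must be a multiple of 4 characters long; restore any stripped "=" padding
        return b64_string + "=" * ((4 - len(b64_string) % 4) % 4)

    receiver_key = repad(subscription['keys']['p256dh'])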
    
    The encryption key and encryption nonce

    The next items you’ll need to create are the encryption key and encryption nonce.

    An important component of these is the context, which is:

    • A string comprised of ‘P-256’
    • Followed by a NULL (“\x00”)
    • Followed by a network ordered, two octet integer of the length of the decoded receiver key
    • Followed by the decoded receiver key
    • Followed by a networked ordered, two octet integer of the length of the public half of the sender key
    • Followed by the public half of the sender key.

    As an example, if we have an example, decoded, completely invalid public receiver key of ‘RECEIVER’ and a sample sender key public key example value of ‘sender’, then the context would look like:

    # ⚓ (because it's the base and because there are only so many emoji)
    context = "P-256\x00\x00\x08RECEIVER\x00\x06sender"

    The “\x00\x08” is the length of the bogus “RECEIVER” key, likewise the “\x00\x06” is the length of the stand-in “sender” key. For real, 32 octet keys, these values will most likely be “\x00\x20” (32), but it’s always a good idea to measure the actual key rather than use a static value.
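
    Building the context from the real key material might look like this sketch in Python; receiver_key_bytes and sender_pub_bytes are hypothetical names for the decoded receiver key and the raw public half of the sender key:

    import struct

    context = ("P-256\x00" +
               struct.pack("!H", len(receiver_key_bytes)) + receiver_key_bytes +
               struct.pack("!H", len(sender_pub_bytes)) + sender_pub_bytes)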

    The context string is used as the base for two more HKDF derived values, one for the encryption key, and one for the encryption nonce. In python, these functions could look like so:

    In emoji:
    🔓 = HKDF(💩 , “Content-Encoding: aesgcm\x00” + ⚓).🏭(🙊)
    🎲 = HKDF(💩 , “Content-Encoding: nonce\x00” + ⚓).🏭(🙊)

    
    #🔓
    encryption_key = HKDF(
        hashes.SHA256(),
        length=16,
        salt=salt,
        info="Content-Encoding: aesgcm\x00" + context).derive(secret)
    
    # 🎲
    encryption_nonce = HKDF(
        hashes.SHA256(),
        length=12,
        salt=salt,
        info="Content-Encoding: nonce\x00" + context).derive(secret)
    

    Note that the encryption_key is 16 octets and the encryption_nonce is 12 octets. Also note the null (\x00) character between the “Content-Encoding” string and the context.

    At this point, you can start working your way through encrypting the data 📄, using your secret 🙊, encryption_key 🔓, and encryption_nonce 🎲.

    Encrypting the Data

    The function that does the encryption (encryptor) uses the encryption_key 🔓 to initialize the Advanced Encryption Standard (AES) function, and derives the Galois/Counter Mode (GCM) Initialization Vector (IV) from the encryption_nonce 🎲, plus the data chunk counter. (If you didn’t follow that, don’t worry. There’s a code snippet below that shows how to do it in Python.) For simplicity, we’ll presume your data is less than 4096 octets (4K bytes) and can fit within one chunk.
    The IV takes the encryption_nonce and XOR’s the chunk counter against the final 8 octets.

    
    def generate_iv(nonce, counter):
        (mask,) = struct.unpack("!Q", nonce[4:])  # get the final 8 octets of the nonce
        iv = nonce[:4] + struct.pack("!Q", counter ^ mask)  # the first 4 octets of nonce,
                                                         # plus the XOR'd counter
        return iv
    

    The encryptor prefixes a “\x00\x00” to the data chunk, processes it completely, and then concatenates its encryption tag to the end of the completed chunk. The encryption tag is the authentication tag that the AES-GCM encryptor produces for that chunk. See your language’s documentation for AES encryption for further information.

    
    # assumes: from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
    #          from cryptography.hazmat.backends import default_backend
    def encrypt_chunk(chunk, counter, encryption_nonce, encryption_key):
        encryptor = Cipher(algorithms.AES(encryption_key),
                           modes.GCM(generate_iv(encryption_nonce,
                                                 counter)),
                           backend=default_backend()).encryptor()
        return (encryptor.update(b"\x00\x00" + chunk) +
                encryptor.finalize() +
                encryptor.tag)

    def encrypt(payload, encryption_nonce, encryption_key):
        result = b""
        counter = 0
        for i in range(0, len(payload) + 2, 4096):
            result += encrypt_chunk(
                payload[i:i + 4096],
                counter,
                encryption_nonce,
                encryption_key)
            counter += 1
        return result
    


    Sending the Data

    Encrypted payloads need several headers in order to be accepted.

    The Crypto-Key header is a composite field, meaning that different things can store data here. There are some rules about how things should be stored, but we can simplify and just separate each item with a semicolon “;”. In our case, we’re going to store three things, a “keyid”, “p256ecdsa” and “dh”.

    “keyid” is the string “p256dh”. Normally, “keyid” is used to link keys in the Crypto-Key header with the Encryption header. It’s not strictly required, but some push servers may expect it and reject subscription updates that do not include it. The value of “keyid” isn’t important, but it must match between the headers. Again, there are complex rules about these that we’re safely ignoring, so if you want or need to do something complex, you may have to dig into the Encrypted Content Encoding specification a bit.

    “p256ecdsa” is the public key used to sign the VAPID header (See Forming your Headers). If you don’t want to include the optional VAPID header, you can skip this.

    The “dh” value is the public half of the sender key we used to encrypt the data. It’s the same value contained in the context string, so we’ll use the same fake, stand-in value of “sender”, which has been encoded as a base64, URL safe value. For our example, the base64 encoded version of the string ‘sender’ is ‘c2VuZGVy’

    Crypto-Key: p256ecdsa=BA5v…;dh=c2VuZGVy;keyid=p256dh

    The Encryption Header contains the salt value we used for encryption, which is a random 16 byte array converted into a base64, URL safe value.

    Encryption: keyid=p256dh;salt=cm5kIDE2IGJ5dGUgc2FsdA

    The TTL Header is the number of seconds the notification should stay in storage if the remote user agent isn’t actively connected. “0” (Zed/Zero) means that the notification is discarded immediately if the remote user agent is not connected; this is the default. This header must be specified, even if the value is “0”.

    TTL: 0

    Finally, the Content-Encoding Header specifies that this content is encoded to the aesgcm standard.

    Content-Encoding: aesgcm

    The encrypted data is set as the Body of the POST request to the endpoint contained in the subscription info. If you have requested that this be a restricted subscription and passed your VAPID public key as part of the request, you must include your VAPID information in the POST.

    As an example, in python:

    # assumes: import requests; encrypted_data is the output of encrypt() above
    headers = {
        'crypto-key': 'p256ecdsa=BA5v…;dh=c2VuZGVy;keyid=p256dh',
        'content-encoding': 'aesgcm',
        'encryption': 'keyid=p256dh;salt=cm5kIDE2IGJ5dGUgc2FsdA',
        'ttl': '0',
    }
    requests.post(subscription_info['subscription']['endpoint'],
                  data=encrypted_data,
                  headers=headers)
    

    A successful POST will return a response of 201; however, if the User Agent cannot decrypt the message, your application will not get a “push” message. This is because the Push Server cannot decrypt the message, so it has no idea whether it is properly encoded. You can see if this is the case by:

    • Going to about:config in Firefox
    • Setting the dom.push.loglevel pref to debug
    • Opening the Browser Console (located under the Tools > Web Developer > Browser Console menu).

    When your message fails to decrypt, you’ll see a message similar to the following:
    The Browser Console displaying the error “The service worker for scope ‘https://mozilla-services.github.io/WebPushDataTestPage/’ encountered an error decrypting a push message”, along with a pointer to where to look for more info

    You can use values displayed in the Web Push Data Encryption Page to audit the values you’re generating to see if they’re similar. You can also send messages to that test page and see if you get a proper notification pop-up, since all the key values are displayed for your use.

    You can find out what errors and error responses we return, and their meanings by consulting our server documentation.


    Subscription Updates


    Nothing (other than entropy) lasts forever. There may come a point where you will need to update your user’s subscription endpoint. There are any number of reasons for this, but your code should be prepared to handle them.

    Your application’s service worker will get a onpushsubscriptionchange event. At this point, the previous endpoint for your user is now invalid and a new endpoint will need to be requested. Basically, you will need to re-invoke the method for requesting a subscription endpoint. The user should not be alerted of this, and a new endpoint will be returned to your app.

    Again, how your app identifies the customer, joins the new endpoint to the customer ID, and securely transmits this change request to your server is left as an exercise for the reader. It’s worth noting that the Push server may return an error of 410 with an errno of 103 when the push subscription expires or is otherwise made invalid. (If a push subscription expired several months ago, the server may return a different errno value.)

    Conclusion

    Push Data Encryption can be very challenging, but worthwhile. Harder encryption means that it is more difficult for someone to impersonate you, or for your data to be read by unintended parties. Eventually, we hope that much of this pain will be buried in libraries that allow you to simply call a function, and as this specification is more widely adopted, it’s fair to expect multiple libraries to become available for every language.

    See also:

    1. WebPush Libraries: A set of libraries to help encrypt and send push messages.
    2. The VAPID libraries for Python or JavaScript can help you understand how to encode VAPID header data.

    Footnotes:

    1. Technically, you don’t need to strip the whitespace from JWS tokens. In some cases, JWS libraries take care of that for you. If you’re not using a JWS library, it’s still a very good idea to make headers and header lines as compact as possible. Not only does it save bandwidth, but some systems will reject overly lengthy header lines. For instance, the library that autopush uses limits header line length to 16,384 bytes. Minus things like the header, signature, base64 conversion and JSON overhead, you’ve got about 10K to work with. While that seems like a lot, it’s easy to run out if you do things like add lots of extra fields to your VAPID claim set.

    Christian HeilmannNew blog design – I let the browser do most of the work…

    Unless you are reading the RSS feed or the AMP version of this blog, you’ll see that some things changed here. Last week I spent an hour redesigning this blog from scratch. No, I didn’t move to another platform (WordPress does the job for me so far), but I fixed a few issues that annoyed me.

    So now this blog is fully responsive, has no dependencies on any CSS frameworks or scripts and should render much nicer on mobile devices where a lot of my readers are.

    It all started with my finding Dan Klammer’s Bytesize Icons – the icons which are now visible in the navigation on top if your screen is wide enough to allow for them. I loved their simplicity and that I could embed them, thus having a visual menu that doesn’t need any extra HTTP overhead. So I copied and pasted, coloured in the lines and that was that.

    The next thing that inspired me incredibly was the trick to use a font-size of 1em + 1vw on the body of the document to ensure a readable text regardless of the resolution. It was one of the goodies in Heydon Pickering’s On writing less damn code post. He attributed this trick to Vasilis who of course is too nice and told the whole attribution story himself.

    Next was creating the menu. For this, I used the power of flexbox and a single media query to ensure that my logo stays but the text links wrap into a few lines next to it. You can play with the code on JSBin.

    The full CSS of the blog is now about 340 lines of code and has no dependency on any libraries or frameworks. There is no JavaScript except for ads.

    The rest was tweaking some font sizes and colours and adding some enhancements like skip links to jump over the navigation. These are visible when you tab into the document, which seems a good enough solution seeing that we do not have a huge navigation as it is.

    Other small fixes:

    • The code display on older posts is now fixed. I used an older plugin not compatible with the current one in the past. The fix was to write yet another plugin to undo what the old one required and give the code the proper HTML structure.
    • I switched the ad to a responsive one, so there should be no problems with this breaking the layout. Go on, test it out, click it a few hundred times to give it a thorough test.
    • I stopped using fixed image sizes quite a while ago and use 100% as the width instead. With this new layout I also gave them a max-width to avoid wasted space and massive blurring.
    • For videos I will now start using Embed Responsively so they don’t break the layout either.

    All in all this was the work of an hour, live in my browser and without any staging. This is a blog, it is here for words, not to do amazing feats of code.

    Here are few views of the blog on different devices (courtesy of the Chrome Devtools):

    iPad:
    Blog on iPad

    iPhone 5:
    Blog on iPhone 5

    iPhone 6:
    Blog on iPhone 6

    Nexus 5:
    Blog on Nexus 5

    Hope you like it.

    All in all I love working on the web these days. Our CSS toys are incredibly powerful, browsers are much more reliable and the insights you get and tweaks you can do in developer tools are amazing. When I think back when I did the first layout here in 2006, I probably wouldn’t go through these pains nowadays. Create some good stuff, just do as much as is needed.

    QMOFirefox 49 Beta 7 Testday, August 26th

    Hello Mozillians,

    We are happy to announce that Friday, August 26th, we are organizing Firefox 49 Beta 7 Testday. We will be focusing our testing on WebGL Compatibility and Exploratory Testing. Check out the detailed instructions via this etherpad.

    No previous testing experience is required, so feel free to join us on #qa IRC channel where our moderators will offer you guidance and answer your questions.

    Join us and help us make Firefox better! See you on Friday!

    This Week In RustThis Week in Rust 144

    Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us an email! Want to get involved? We love contributions.

    This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

    Updates from Rust Community

    News & Blog Posts

    New Crates & Project Updates

    Crate of the Week

    No crate was selected for this week for lack of votes. Ain't that a pity?

    Submit your suggestions and votes for next week!

    Call for Participation

    Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

    Some of these tasks may also have mentors available, visit the task page for more information.

    If you are a Rust project owner and are looking for contributors, please submit tasks here.

    Updates from Rust Core

    167 pull requests were merged in the last two weeks.

    New Contributors

    • Alexandre Oliveira
    • Amit Levy
    • clementmiao
    • DarkEld3r
    • Dustin Bensing
    • Erik Uggeldahl
    • Jacob
    • JessRudder
    • Michael Layne
    • Nazım Can Altınova
    • Neil Williams
    • pliniker

    Approved RFCs

    Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

    Final Comment Period

    Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

    New RFCs

    Upcoming Events

    If you are running a Rust event please add it to the calendar to get it mentioned here. Email Erick Tryzelaar or Brian Anderson for access.

    fn work(on: RustProject) -> Money

    Tweet us at @ThisWeekInRust to get your job offers listed here!

    Quote of the Week

    No quote was selected for QotW.

    Submit your quotes for next week!

    This Week in Rust is edited by: nasa42, llogiq, and brson.

    Alex VincentIntroducing es7-membrane: A new ECMAScript 2016 Membrane implementation

    I have a new ECMAScript membrane implementation, which I will maintain and use in a professional capacity, and which I’m looking for lots of help with in the form of code reviews and API design advice.

    For those of you who don’t remember what a membrane is, Tom van Cutsem wrote about membranes a few years ago, including the first implementations in JavaScript. I recently answered a StackOverflow question on why a membrane might be useful in the first place.

    Right now, the membrane supports “perfect” mirroring across object graphs:  as far as I can tell, separate object graphs within the same membrane never see objects or functions from another object graph.

    The word “perfect” is in quotes because there are probably bugs, facets I haven’t yet tested for (“What happens if I call Object.freeze() on a proxy from the membrane?”, for example).  There is no support yet for the real uses of proxies, such as hiding properties, exposing new ones, or special handling of primitive values.  That support is forthcoming, as I do expect I will need a membrane in my “Verbosio” project (an experimental XML editor concept, irrelevant to this group) and another for the company I work for.

    The good news is the tests pass in Node 6.4, the current Google Chrome release, and Mozilla Firefox 51 (trunk, debug build).  I have not tested any other browser or ECMAScript environment.  I also will be checking in lots of use cases over the next few weeks which will guide future work on the module.

    With all that said, I’d love to get some help.  That’s why I moved it to its own GitHub repository.

    • None of this code has been reviewed yet.  My colleagues at work are not qualified to do a code or API review on this.  (This isn’t a knock on them – proxies are a pretty obscure part of the ECMAScript universe…)  I’m looking for some volunteers to do those reviews.
    • I have two or three different options, design-wise, for making a Membrane’s proxies customizable while still obeying the rules of the Membrane.  I’m assuming there’s some supremely competent people from the es-discuss mailing list who could offer advice through the GitHub project’s wiki pages.
    • I’d like also to properly wrap the baseline code as ES6 modules using the import, export statements – but I’m not sure if they’re safe to use in current release browsers or Node.  (I’ve been skimming Dr. Axel Rauschmeyer’s “Exploring ES6” chapter on ES6 modules.)
      • Side note:  try { import /* … */ } catch (e) { /* … */ } seems to be illegal syntax, and I’d really like to know why.  (The error from trunk Firefox suggested import needed to be on the first line, and I had the import block on the second line, after the try statement.)
    • This is my first time publishing a serious open-source project to GitHub, and my absolute first attempt at publishing to NPM:
    • I’m not familiar with Node, nor with “proper” packaging of modules pre-ES6.  So my build-and-test systems need a thorough review too.
    • I’m having trouble properly setting up continuous integration.  Right now, the build reports as passing but is internally erroring out…
    • Pretty much any of the other GitHub/NPM-specific goodies (a static demo site, wiki pages for discussions, keywords for the npm package, a Tonic test case, etc.) don’t exist yet.
    • Of course, anyone who has interest in membranes is welcome to offer their feedback.

    If you’re not able to comment here for some reason, I’ve set up a GitHub wiki page for that purpose.

    Hal WinePy Bay 2016 - a First Report

    Py Bay 2016 - a First Report

    PyBay held their first local Python conference this last weekend (Friday, August 19 through Sunday, August 21). What a great event! I just wanted to get down some first impressions - I hope to do more after the slides and videos are up.

    Read more...

    Karl Dubost[worklog] Edition 032. The sento chimney

    From the seventh floor, I can see the chimney of a local sento. It's not always on. Whether smoke is coming out or not gives me good visual feedback about the opening hours. It's here. It doesn't have notifications. It doesn't have ics or atom. It's just there. And it's fine as-is. The digital world sometimes seems to create complicated UI and UX over things which are just working. They become disruptive but not helpful.

    Tune of the week: Leo Ferré - Avec le temps

    Webcompat Life

    Progress this week:

    Today: 2016-08-22T13:46:36.150030
    300 open issues
    ----------------------
    needsinfo       4
    needsdiagnosis  86
    needscontact    18
    contactready    28
    sitewait        156
    ----------------------
    

    You are welcome to participate

    Webcompat issues

    (a selection of some of the bugs worked on this week).

    WebCompat.com dev

    Reading List

    • Return of the 10kB Web design contest. Good stuff. I spend a good chunk of my time in the network panel of devtools… And it's horrific.

    Follow Your Nose

    TODO

    • Document how to write tests on webcompat.com using test fixtures.
    • ToWrite: Amazon prefetching resources with <object> for Firefox only.

    Otsukare!

    Will Kahn-Greenepyvideo last thoughts

    What is pyvideo?

    pyvideo.org is an index of Python-related conference and user-group videos on the Internet. Saw a session you liked and want to share it? It's likely you can find it, watch it, and share it with pyvideo.org.

    This is my last update. pyvideo.org is now in new and better hands and will continue going forward.

    Read more… (2 mins to read)

    Michael KellyCaching Async Operations via Promises

    I was working on a bug in Normandy the other day and remembered a fun little trick for caching asynchronous operations in JavaScript.

    The bug in question involved two asynchronous actions happening within a function. First, we made an AJAX request to the server to get an "Action" object. Next, we took an attribute of the action, the implementation_url, and injected a <script> tag into the page with the src attribute set to the URL. The JavaScript being injected would then call a global function and pass it a class function, which was the value we wanted to return.

    The bug was that if we called the function multiple times with the same action, the function would make multiple requests to the same URL, even though we really only needed to download data for each Action once. The solution was to cache the responses, but rather than caching the responses directly, I found it was cleaner to cache the Promise returned when making the request:

    export function fetchAction(recipe) {
      const cache = fetchAction._cache;
    
      if (!(recipe.action in cache)) {
        cache[recipe.action] = fetch(`/api/v1/action/${recipe.action}/`)
          .then(response => response.json());
      }
    
      return cache[recipe.action];
    }
    fetchAction._cache = {};
    

    Another neat trick in the code above is storing the cache as a property on the function itself; it helps avoid polluting the namespace of the module, and also allows callers to clear the cache if they wish to force a re-fetch (although if you actually needed that, it'd be better to add a parameter to the function instead).

    After I got this working, I puzzled for a bit on how to achieve something similar for the <script> tag injection. Unlike an AJAX request, the only thing I had to work with was an onload handler for the tag. Eventually I realized that nothing was stopping me from wrapping the <script> tag injection in a Promise and caching it in exactly the same way:

    export function loadActionImplementation(action) {
      const cache = loadActionImplementation._cache;
    
      if (!(action.name in cache)) {
        cache[action.name] = new Promise((resolve, reject) => {
          const script = document.createElement('script');
          script.src = action.implementation_url;
          script.onload = () => {
            if (!(action.name in registeredActions)) {
              reject(new Error(`Could not find action with name ${action.name}.`));
            } else {
              resolve(registeredActions[action.name]);
            }
          };
          document.head.appendChild(script);
        });
      }
    
      return cache[action.name];
    }
    loadActionImplementation._cache = {};
    

    From a nitpicking standpoint, I'm not entirely happy with this function:

    • The name isn't really consistent with the "fetch" terminology from the previous function, but I'm not convinced they should use the same verb either.
    • The Promise code could probably live in another function, leaving this one to only concern itself about the caching.
    • I'm pretty sure this does nothing to handle the case of the script failing to load, like a 404.

    But these are minor, and the patch got merged, so I guess it's good enough.

    Cameron KaiserTenFourFox 45 beta 2: not yet

    So far the TenFourFox 45 beta is doing very well. There have been no major performance regressions overall (Amazon Music notwithstanding, which I'll get to presently) and actually overall opinion is that 45 seems much more responsive than 38. On that score, you'll like beta 2 even more when I get it out: I raided some more performance-related Firefox 46 and 47 patches and backported them too, further improving scrolling performance by reducing unnecessary memory allocation and also substantially reducing garbage collection overhead. There are also some minor tweaks to the JavaScript JIT, including a gimme I should have done long ago where we use single-precision FPU instructions instead of double-precision plus a rounding step. This doesn't make a big difference for most scripts, but a few, particularly those with floating point arrays, will benefit from the reduction in code size and slight improvement in speed.

    Unfortunately, while this also makes Amazon Music's delay on starting and switching tracks better, it's still noticeably regressed compared to 38. In addition, I'm hitting an assertion with YouTube on certain, though not all, videos over a certain minimum length; in the release builds it just seizes up at that point and refuses to play the video further. Some fiddling in the debugger indicates it might be related to what Chris Trusch was reporting about frameskipping not working right in 45. If I'm really lucky all this is related and I can fix them all by fixing that root problem. None of these are showstoppers but that YouTube assertion is currently my highest priority to fix for the next beta; I have not solved it yet though I may be able to wallpaper it. I'm aiming for beta 2 this coming week.

    On the localization front we have French, German and Japanese translations, the latter from the usual group that operates separately from Chris T's work in issue 42. I'd like to get a couple more in the can before our planned release on or about September 13. If you can help, please sign up.

    Mozilla Open Design BlogNow we’re talking!

    The responses we’ve received to posting the initial design concepts for an updated Mozilla identity have exceeded our expectations. The passion from Mozillians in particular is coming through loud and clear. It’s awesome to witness—and exactly what an open design process should spark.

    Some of the comments also suggest that many people have joined this initiative in progress and may not have enough context for why we are engaging in this work.

    Since late 2014, we’ve periodically been fielding a global survey to understand how well Internet users recognize and perceive Mozilla and Firefox. The data has shown that while we are known, we’re not understood.

    • Less than 30% of people polled know we do anything other than make Firefox.
    • Many confuse Mozilla with our Firefox browser.
    • Firefox does not register any distinct attributes from our closest competitor, Chrome.

    We can address these challenges by strengthening our core with renewed focus on Firefox, prototyping the future with agile Internet of Things experiments and revolutionary platform innovation, and growing our influence by being clear about the Internet issues we stand for. All efforts now underway.

    To support these efforts and help clarify what Mozilla stands for requires visual assets reinforcing our purpose and vision. Our current visual toolkit for Mozilla is limited to a rather generic wordmark and small color palette. And so naturally, we’ve all filled that void with our own interpretations of how the brand should look.

    Our brand will always need to be expressed across a variety of communities, projects, programs, events, and more. But without a strong foundation of a few clearly identifiable Mozilla assets that connect all of our different experiences, we are at a loss. The current proliferation of competing visual expressions contribute to the confusion that Internet users have about us.

    For us to be able to succeed at the work we need to do, we need others to join us. To get others to join us, we need to be easier to find, identify and understand. That’s the net impact we’re seeking from this work.

    Doubling down on open.

    The design concepts being shared now are initial directions that will continue to be refined and that may spark new derivations. It’s highly unusual for work at this phase of development to be shown publicly and for a forum to exist to provide feedback before things are baked. Since we’re Mozilla, we’re leaning into our open-source ethos of transparency and participation.

    It’s also important to remember that we’re showing entire design systems here, not just logos. We’re pressure-testing which of these design directions can be expanded to fit the brilliant variety of programs, projects, events, and more that encompass Mozilla.

    If you haven’t had a chance to read earlier blog posts about the formation of the narrative territories, have a look. This design work is built from a foundation of strategic thinking about where Mozilla is headed over the next five years, the makeup of our target audience, and what issues we care about and want to amplify. All done with an extraordinary level of openness.

    We’re learning from the comments, especially the constructive ones, and are grateful that people are taking the time to write them. We’ll continue to share work as it evolves. Thanks to everyone who has engaged so far, and here’s to keeping the conversation going.

    Gervase MarkhamSomething You Know And… Something You Know

    The email said:

    To better protect your United MileagePlus® account, later this week, we’ll no longer allow the use of PINs and implement two-factor authentication.

    This is united.com’s idea of two-factor authentication:

    united.com screenshot asking two security questions because my device is unknown

    It doesn’t count as proper “Something You Have”, if you can bootstrap any new device into “Something You Have” with some more “Something You Know”.

    Air MozillaWebdev Beer and Tell: August 2016

    Webdev Beer and Tell: August 2016 Once a month web developers across the Mozilla community get together (in person and virtually) to share what cool stuff we've been working on in...

    Air MozillaImproving Pytest-HTML: My Outreachy Project

    Improving Pytest-HTML: My Outreachy Project Ana Ribero of the Outreachy Summer 2016 program cohort describes her experience in the program and what she did to improve Mozilla's Pytest-HTML QA tools.

    Mozilla Addons BlogA Simpler Add-on Review Process

    In 2011, we introduced the concept of “preliminary review” on AMO. Developers who wanted to list add-ons that were still being tested or were experimental in nature could opt for a more lenient review with the understanding that they would have reduced visibility. However, having two review levels added unnecessary complexity for the developers submitting add-ons, and the reviewers evaluating them. As such, we have implemented a simpler approach.

    Starting on August 22nd, there will be one review level for all add-ons listed on AMO. Developers who want to reduce the visibility of their add-ons will be able to set an “experimental” add-on flag in the AMO developer tools. This flag won’t have any effect on how an add-on is reviewed or updated.

    All listed add-on submissions will either get approved or rejected based on the updated review policy. For unlisted add-ons, we’re also unifying the policies into a single set of criteria. They will still be automatically signed and post-reviewed at our discretion.

    We believe this will make it easier to submit, manage, and review add-ons on AMO. Review waiting times have been consistently good this year, and we don’t expect this change to have a significant impact on this. It should also make it easier to work on AMO code, setting up a simpler codebase for future improvements.  We hope this makes the lives of our developers and reviewers easier, and we thank you for your continued support.

    Gervase MarkhamAuditing the Trump Campaign

    When we opened our web form to allow people to make suggestions for open source projects that might benefit from a Secure Open Source audit, some joker submitted an entry as follows:

    • Project Name: Donald J. Trump for President
    • Project Website: https://www.donaldjtrump.com/
    • Project Description: Make America great again
    • What is the maintenance status of the project? Look at the polls, we are winning!
    • Has the project ever been audited before? Its under audit all the time, every year I get audited. Isn’t that unfair? My business friends never get audited.

    Ha, ha. But it turns out it might have been a good idea to take the submission more seriously…

    If you know of an open source project (as opposed to a presidential campaign) which meets our criteria and might benefit from a security audit, let us know.

    Paul RougetServo Homebrew nightly builds

    Servo binaries available via Homebrew

    See github.com/servo/homebrew-servo

    $ brew install servo/servo/servo-bin
    $ servo -w http://servo.org # See `servo --help`
    

    Update (every day):

    $ brew update && brew upgrade servo-bin
    

    Switch to older version (earliest version being 2016.08.19):

    $ brew switch servo-bin YYYY.MM.DD
    

    File issues specific to the Homebrew package here, and Servo issues here.

    This package comes without browserhtml.

    Daniel StenbergRemoving the PowerShell curl alias?

    PowerShell is a spiced up command line shell made by Microsoft. According to some people, it is a really useful and good shell alternative.

    Already a long time ago, we got bug reports from confused users who couldn’t use curl from their PowerShell prompts and it didn’t take long until we figured out that Microsoft had added aliases for both curl and wget. The alias had the shell instead invoke its own command called “Invoke-WebRequest” whenever curl or wget was entered. Invoke-WebRequest being PowerShell’s own version of a command line tool for fiddling with URLs.

    Invoke-WebRequest is of course not anywhere near similar to neither curl nor wget and it doesn’t support any of the command line options or anything. The aliases really don’t help users. No user who would want the actual curl or wget is helped by these aliases, and user who don’t know about the real curl and wget won’t use the aliases. They were and remain pointless. But they’ve remained a thorn in my side ever since. Me knowing that they are there and confusing users every now and then – not me personally, since I’m not really a Windows guy.

    Fast forward to modern days: Microsoft released PowerShell as open source on github yesterday. Without much further ado, I filed a Pull-Request, asking the aliases to be removed. It is a minuscule, 4 line patch. It took way longer to git clone the repo than to make the actual patch and submit the pull request!

    It took 34 minutes for them to close the pull request:

    “Those aliases have existed for multiple releases, so removing them would be a breaking change.”

    To be honest, I didn’t expect them to merge it easily. I figure they added those aliases for a reason back in the day and it seems unlikely that I as an outsider would just make them change that decision just like this out of the blue.

    But the story didn’t end there. Obviously more Microsoft people gave the PR some attention and more comments were added. Like this:

    “You bring up a great point. We added a number of aliases for Unix commands but if someone has installed those commands on WIndows, those aliases screw them up.

    We need to fix this.”

    So, maybe it will trigger a change anyway? The story is ongoing…

    Mike HoyeCulture Shock

    I’ve been meaning to get around to posting this for… maybe fifteen years now? Twenty? At least I can get it off my desk now.

    As usual, it’s safe to assume that I’m not talking about only one thing here.

    I got this document about navigating culture shock from an old family friend, an RCMP negotiator now long retired. I understand it was originally prepared for Canada’s Department of External Affairs, now Global Affairs Canada. As the story made it to me, the first duty posting of all new RCMP recruits used to (and may still?) be to a detachment stationed outside their home province, where the predominant language spoken wasn’t their first, and this was one of the training documents intended to prepare recruits and their families for that transition.

    It was old when I got it 20 years ago, a photocopy of a mimeograph of something typeset on a Selectric years before; even then, the RCMP and External Affairs had been collecting information about the performance of new hires in high-stress positions in new environments for a long time. There are some obviously dated bits – “writing letters back home” isn’t really a thing anymore in the stamped-envelope sense they mean and “incurring high telephone bills”, well. Kids these days, they don’t even know, etcetera. But to a casual search the broad strokes of it are still valuable, and still supported by recent data.

    Traditionally, the stages of cross-cultural adjustment have been viewed as a U curve. What this means is that the first months in a new culture are generally exciting – this is sometimes referred to as the “honeymoon” or “tourist” phase. Inevitably, however, the excitement wears off and coping with the new environment becomes depressing, burdensome, anxiety provoking (everything seems to become a problem; housing, neighbors, schooling, health care, shopping, transportation, communication, etc.) – this is the down part of the U curve and is precisely the period of so-called “culture shock”. Gradually (usually anywhere from 6 months to a year) an individual learns to cope by becoming involved with, and accepted by, the local people. Culture shock is over and we are back, feeling good about ourselves and the local culture.

    Spoiler alert: It doesn’t always work out that way. But if you know what to expect, and what you’re looking for, you can recognize when things are going wrong and do something about it. That’s the key point, really: this slow rollercoaster you’re on isn’t some sign of weakness or personal failure. It’s an absolutely typical human experience, and like a lot of experiences, being able to point to it and give it a name also gives you some agency over it you may not have thought you had.

    I have more to say about this – a lot more – but for now here you go: “Adjusting To A New Environment”, date of publication unknown, author unknown (likely Canada’s Department of External Affairs.) It was a great help to me once upon a time, and maybe it will be for you.

    Air MozillaIntern Presentations 2016, 18 Aug 2016

    Intern Presentations 2016 Group 5 of Mozilla's 2016 Interns presenting what they worked on this summer. Click the Chapters Tab for a topic list. Nathanael Alcock- MV Dimitar...

    Support.Mozilla.OrgWhat’s Up with SUMO – 18th August

    Hello, SUMO Nation!

    It’s good to be back and know you’re reading these words :-) A lot more happening this week (have you heard about Activate Mozilla?), so go through the updates if you have not attended all our meetings – and do let us know if there’s anything else you want to see in the blog posts – in the comments!

    Welcome, new contributors!

    If you just joined us, don’t hesitate – come over and say “hi” in the forums!

    Contributors of the week

    Don’t forget that if you are new to SUMO and someone helped you get started in a nice way you can nominate them for the Buddy of the Month!

    Most recent SUMO Community meeting

    The next SUMO Community meeting

    • …is happening on the 24th of August!
    • If you want to add a discussion topic to the upcoming meeting agenda:
      • Start a thread in the Community Forums, so that everyone in the community can see what will be discussed and voice their opinion here before Wednesday (this will make it easier to have an efficient meeting).
      • Please do so as soon as you can before the meeting, so that people have time to read, think, and reply (and also add it to the agenda).
      • If you can, please attend the meeting in person (or via IRC), so we can follow up on your discussion topic during the meeting with your feedback.

    Community

    Social

    Support Forum

    • The SUMO Firefox 48 Release Report is open for feedback: please add your links, tweets, bugs, threads and anything else that you would like to have highlighted in the report.
    • More details about the audit for the Support Forum:
      • The audit will only be happening this week and next
      • It will determine the forum contents that will be kept and/or refreshed
      • The Get Involved page will be rewritten and redesigned as it goes through a “Think-Feel-Do” exercise.
      • Please take a few minutes this week to read through the document and make a comment or edit.
      • One of the main questions is “what are the things that we cannot live without in the new forum?” – if you have an answer, write more in the thread!
    • Join Rachel in the SUMO Vidyo room on Friday between noon and 14:00 PST for answering forum threads and general hanging out!

    Knowledge Base & L10n

    Firefox

    • for Android
      • Version 49 will not have many features, but will include bug and security fixes.
    • for iOS
      • Version 49 will not have many features, but will include bug and security fixes.

    … and that’s it for now, fellow Mozillians! We hope you’re looking forward to a great weekend and we hope to see you soon – online or offline! Keep rocking the helpful web!

    Chris H-CThe Future of Programming

    Here’s a talk I watched some months ago, and could’ve sworn I’d written a blogpost about. Ah well, here it is:

    Bret Victor – The Future of Programming from Bret Victor on Vimeo.

    It’s worth the 30min of your attention if you have interest in programming or computer history (which you should have an interest in if you are a developer). But here it is in sketch:

    The year is 1973 (well, it’s 2013, but the speaker pretends it is 1973), and the future of programming is bright. Instead of programming in procedures typed sequentially in text files, we are at the cusp of directly manipulating data with goals and constraints that are solved concurrently in spatial representations.

    The speaker (Bret Victor) highlights recent developments in the programming of automated computing machines, and uses it to suggest the inevitability of a very different future than we currently live and work in.

    It highlights how much was ignored in my world-class post-secondary CS education. It highlights how much is lost by hiding research behind paywalled journals. It highlights how many times I’ve had to rewrite the wheel when, more than a decade before I was born, people were prototyping hoverboards.

    It makes me laugh. It makes me sad. It makes me mad.

    …that’s enough of that. Time to get back to the wheel factory.

    :chutten


    Air MozillaConnected Devices Weekly Program Update, 18 Aug 2016

    Connected Devices Weekly Program Update Weekly project updates from the Mozilla Connected Devices team.

    Air MozillaReps weekly, 18 Aug 2016

    Reps weekly This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

    Niko Matsakis'Tootsie Pop' Followup

    A little while back, I wrote up a tentative proposal I called the Tootsie Pop model for unsafe code. It’s safe to say that this model was not universally popular. =) There was quite a long and fruitful discussion on discuss. I wanted to write a quick post summarizing my main take-away from that discussion and to talk a bit about the plans to push the unsafe discussion forward.

    The importance of the unchecked-get use case

    For me, the most important lesson was the importance of the unchecked get use case. Here the idea is that you have some (safe) code which is indexing into a vector:

    fn foo() {
        let vec: Vec<i32> = vec![...];
        ...
        vec[i]
        ...
    }

    You have found (by profiling, but of course) that this code is kind of slow, and you have determined that the bounds-check caused by indexing is a contributing factor. You can’t rewrite the code to use iterators, and you are quite confident that the index will always be in-bounds, so you decide to dip your toe into unsafe by calling get_unchecked:

    fn foo() {
        let vec: Vec<i32> = vec![...];
        ...
        unsafe { vec.get_unchecked(i) }
        ...
    }

    Now, under the precise model that I proposed, this means that the entire containing module is considered to be within an unsafe abstraction boundary, and hence the compiler will be more conservative when optimizing. As a result, the function may actually end up running slower after you skip the bounds check, not faster. (A very similar example is invoking str::from_utf8_unchecked, which skips over the utf-8 validation check.)
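
    As a minimal sketch of that second case (my illustration, not from the original post), the unsafe block is again a drop-in replacement for the checked call, and the caller takes on the obligation that the bytes really are valid UTF-8:

    fn label(bytes: &[u8]) -> &str {
        // In debug builds, double-check the invariant we promise to uphold.
        debug_assert!(std::str::from_utf8(bytes).is_ok());
        // Skip the UTF-8 validation that str::from_utf8 would normally perform.
        unsafe { std::str::from_utf8_unchecked(bytes) }
    }

    fn main() {
        println!("{}", label(b"hello"));
    }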

    Many people were not happy about this side-effect, and I can totally understand why. After all, this code isn’t mucking about with funny pointers or screwy aliasing – the unsafe block is a kind of drop-in replacement for what was there before, so it seems odd for it to have this effect.

    Where to go from here

    Since posting the last blog post, we’ve started a longer-term process for settling and exploring a lot of these interesting questions about the proper use of unsafe. At this point, we’re still in the data gathering phase. The idea here is to collect and categorize interesting examples of unsafe code. I’d prefer at this point not to be making decisions per se about what is legal or not – although in some cases something may be quite unambiguous – but rather just try to get a good corpus with which we can evaluate different proposals.

    While I haven’t given up on the Tootsie Pop model, I’m also not convinced it’s the best approach. But whatever we do, I still believe we should strive for something that is safe and predictable by default – something where the rules can be summarized on a postcard, at least if you don’t care about getting every last bit of optimization. But, as the unchecked-get example makes clear, it is important that we also enable people to obtain full optimization, possibly with some amount of opt-in. I’m just not yet sure what’s the right setup to balance the various factors.

    As I wrote in my last post, I think that we have to expect that whatever guidelines we establish, they will have only a limited effect on the kind of code that people write. So if we want Rust code to be reliable in practice, we have to strive for rules that permit the things that people actually do: and the best model we have for that is the extant code. This is not to say we have to achieve total backwards compatibility with any piece of unsafe code we find in the wild, but if we find we are invalidating a common pattern, it can be a warning sign.

    Daniel StenbergHTTP/2 connection coalescing

    Section 9.1.1 in RFC7540 explains how HTTP/2 clients can reuse connections. This is my lengthy way of explaining how this works in reality.

    Many connections in HTTP/1

    With HTTP/1.1, browsers typically use 6 connections per origin (host name + port). They do this to overcome the problems in HTTP/1 and how it uses TCP – each connection spends a fair amount of time waiting. Plus, each connection is slow at the start and therefore limited in how much data it can get and send quickly; with each additional connection you multiply that data amount. This makes the browser get more data faster (than just using one connection).

    6 connections

    Add sharding

    Web sites with many objects also regularly invent new host names to trigger browsers to use even more connections, a practice known as “sharding”: 6 connections for each name. So if you instead make your site use 4 host names you suddenly get 4 x 6 = 24 connections instead. Mostly all those host names resolve to the same IP address, or the same set of IP addresses, in the end anyway. In reality, some sites use many more than just 4 host names.

    24 connections

    The sad reality is that a very large percentage of connections used for HTTP/1.1 are only ever used for a single HTTP request, and a very large share of the connections made for HTTP/1 are so short-lived they actually never leave the slow start period before they’re killed off again. Not really ideal.

    One connection in HTTP/2

    With the introduction of HTTP/2, the HTTP clients of the world are going toward using a single TCP connection for each origin. The idea being that one connection is better in packet loss scenarios, it makes priorities/dependencies work, and reusing that single connection for many more requests will be a net gain. And as you remember, HTTP/2 allows many logical streams in parallel over that single connection, so the single connection doesn’t limit what the browsers can ask for.

    Unsharding

    The sites that created all those additional host names to make the HTTP/1 browsers use many connections now work against the HTTP/2 browsers’ desire to decrease the number of connections to a single one. Sites don’t want to switch back to using a single host name because that would be a significant architectural change and there are still a fair number of HTTP/1-only browsers still in use.

    Enter “connection coalescing”, or “unsharding” as we sometimes like to call it. You won’t find either term used in RFC7540, as it merely describes this concept in terms of connection reuse.

    Connection coalescing means that the browser tries to determine which of the remote hosts it can reach over the same TCP connection. The different browsers have slightly different heuristics here and some don’t do it at all, but let me try to explain how they work – as far as I know and at this point in time.

    Coalescing by example

    Let’s say that this cool imaginary site “example.com” has two name entries in DNS: A.example.com and B.example.com. When resolving those names over DNS, the client gets a list of IP addresses back for each name. A list that very well may contain a mix of IPv4 and IPv6 addresses. One list for each name.

    You must also remember that HTTP/2 is only ever used over HTTPS by browsers, so for each origin speaking HTTP/2 there’s also a corresponding server certificate with a list of names, or a wildcard pattern, for which that server is authorized to respond.

    In our example we start out by connecting the browser to A. Let’s say resolving A returns the IPs 192.168.0.1 and 192.168.0.2 from DNS, so the browser goes on and connects to the first of those addresses, the one ending with “1”. The browser gets the server cert back in the TLS handshake and as a result of that, it also gets a list of host names the server can deal with: A.example.com and B.example.com. (it could also be a wildcard like “*.example.com”)

    If the browser then wants to connect to B, it’ll resolve that host name too to a list of IPs. Let’s say 192.168.0.2 and 192.168.0.3 here.

    Host A: 192.168.0.1 and 192.168.0.2
    Host B: 192.168.0.2 and 192.168.0.3

    Now hold it. Here it comes.

    The Firefox way

    Host A has two addresses, host B has two addresses. The lists of addresses are not the same, but there is an overlap – both lists contain 192.168.0.2. And the host A has already stated that it is authoritative for B as well. In this situation, Firefox will not make a second connect to host B. It will reuse the connection to host A and ask for host B’s content over that single shared connection. This is the most aggressive coalescing method in use.

    one connection

    The Chrome way

    Chrome features a slightly less aggressive coalescing. In the example above, when the browser has connected to 192.168.0.1 for the first host name, Chrome will require that the IPs for host B contains that specific IP for it to reuse that connection.  If the returned IPs for host B really are 192.168.0.2 and 192.168.0.3, it clearly doesn’t contain 192.168.0.1 and so Chrome will create a new connection to host B.

    Chrome will reuse the connection to host A if resolving host B returns a list that contains the specific IP of the connection host A is already using.
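
    As a rough sketch of the two heuristics (my illustration, not from the post; the names are made up and wildcard certificate matching is ignored), the difference boils down to an IP-overlap check versus an exact-IP check:

    use std::collections::HashSet;
    use std::net::IpAddr;

    // Firefox-style check: the existing connection's cert must cover the new
    // host AND the DNS answers for the two hosts must overlap somewhere.
    fn firefox_would_coalesce(
        cert_names: &HashSet<String>,
        existing_ips: &HashSet<IpAddr>,
        new_host: &str,
        new_ips: &HashSet<IpAddr>,
    ) -> bool {
        cert_names.contains(new_host) && !existing_ips.is_disjoint(new_ips)
    }

    // Chrome-style check: the cert must cover the new host AND the new host's
    // DNS answers must contain the exact IP the connection is already using.
    fn chrome_would_coalesce(
        cert_names: &HashSet<String>,
        connected_ip: IpAddr,
        new_host: &str,
        new_ips: &HashSet<IpAddr>,
    ) -> bool {
        cert_names.contains(new_host) && new_ips.contains(&connected_ip)
    }

    With the example above (host A resolving to 192.168.0.1 and 192.168.0.2, host B to 192.168.0.2 and 192.168.0.3, and the connection made to 192.168.0.1), the Firefox check passes while the Chrome check fails, matching the behaviour described.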

    The Edge and Safari ways

    They don’t do coalescing at all, so each host name will get its own single connection. Better than the 6 connections from HTTP/1 but for very sharded sites that means a lot of connections even in the HTTP/2 case.

    curl also doesn’t coalesce anything (yet).

    Surprises and a way to mitigate them

    Given some comments in the Firefox bugzilla, the aggressive coalescing sometimes causes some surprises. Especially when you have for example one IPv6-only host A and a second host B with both IPv4 and IPv6 addresses. Asking for data on host A can then still use IPv4 when it reuses a connection to B (assuming that host B covers host A in its cert).

    In the rare case where a server gets a resource request for an authority (or scheme) it can’t serve, there’s a dedicated error code 421 in HTTP/2 that it can respond with and the browser can then  go back and retry that request on another connection.

    Starts out with 6 anyway

    Before the browser knows that the server speaks HTTP/2, it may fire up 6 connection attempts so that it is prepared to get the remote site at full speed. Once it figures out that it doesn’t need all those connections, it will kill off the unnecessary unused ones and over time trickle down to one. Of course, on subsequent connections to the same origin the client may have the version information cached so that it doesn’t have to start off presuming HTTP/1.

    Air Mozilla360/VR Meet-Up

    360/VR Meet-Up We explore the potential of this evolving medium and update you on the best new tools and workflows, covering: -Preproduction: VR pre-visualization, Budgeting, VR Storytelling...

    Mitchell BakerPracticing “Open” at Mozilla

    Mozilla works to bring openness and opportunity for all into the Internet and online life.  We seek to reflect these values in how we operate.  At our founding it was easy to understand what this meant in our workflow — developers worked with open code and project management through bugzilla.  This was complemented with an open workflow through the social media of the day — mailing lists and the chat or “messenger” element, known as Internet Relay Chat (“irc”).  The tools themselves were also open-source and the classic “virtuous circle” promoting openness was pretty clear.

    Today the setting is different.  We were wildly successful with the idea of engineers working in open systems.  Today open source code and shared repositories are mainstream, and in many areas they are best practice, expected and the default. On the other hand, the newer communication and workflow tools vary in their openness, with some particularly open and some closed proprietary code.  Access and access control is a constant variable.  In addition, at Mozilla we’ve added a bunch of new types of activities beyond engineering, we’ve increased the number of employees dramatically and we’re a bit behind on figuring out what practicing open in this setting means.

    I’ve decided to dedicate time to this and look at ways to make sure our goals of building open practices into Mozilla are updated and more fully developed.  This is one of the areas of focus I mentioned in an earlier post describing where I spend my time and energy.  

    So far we have three early stage pilots underway sponsored by the Office of the Chair:

    • Opening up the leadership recruitment process to more people
    • Designing inclusive decision-making practices
    • Fostering meaningful conversations and exchange of knowledge across the organization, with a particular focus on bettering communications between Mozillians and leadership.

    Follow-up posts will have more info about each of these projects.  In general the goal of these experiments is to identify working models that can be adapted by others across Mozilla. And beyond that, to help other Mozillians figure out new ways to “practice open” at Mozilla.

    The Rust Programming Language BlogAnnouncing Rust 1.11

    The Rust team is happy to announce the latest version of Rust, 1.11. Rust is a systems programming language focused on safety, speed, and concurrency.

    As always, you can install Rust 1.11 from the appropriate page on our website, and check out the detailed release notes for 1.11 on GitHub. 1109 patches were landed in this release.

    What’s in 1.11 stable

    Much of the work that went into 1.11 was with regards to compiler internals that are not yet stable. We’re excited about features like MIR becoming the default and the beginnings of incremental compilation, and the 1.11 release has laid the groundwork.

    As for user-facing changes, last release, we talked about the new cdylib crate type.

    The existing dylib dynamic library format will now be used solely for writing a dynamic library to be used within a Rust project, while cdylibs will be used when compiling Rust code as a dynamic library to be embedded in another language. With the 1.10 release, cdylibs are supported by the compiler, but not yet in Cargo. This format was defined in RFC 1510.

    Well, in Rust 1.11, support for cdylibs has landed in Cargo! By adding this to your Cargo.toml:

    [lib]
    crate-type = ["cdylib"]
    

    You’ll get one built.
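
    For a sense of what such a crate might contain (a hypothetical example, not from the announcement), a cdylib typically exports C-callable functions that another language can load and call:

    // src/lib.rs of the hypothetical crate: a single C-callable entry point.
    #[no_mangle]
    pub extern "C" fn add(a: i32, b: i32) -> i32 {
        a + b
    }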

    In the standard library, the default hashing function was changed, from SipHash 2-4 to SipHash 1-3. We have been thinking about this for a long time, as far back as the original decision to go with 2-4:

    we proposed SipHash-2-4 as a (strong) PRF/MAC, and so far no attack whatsoever has been found, although many competent people tried to break it. However, fewer rounds may be sufficient and I would be very surprised if SipHash-1-3 introduced weaknesses for hash tables.
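
    The change only matters for code that relies on the standard library’s default hasher. As a quick sketch (mine, not from the release notes) of where that default comes from and how you would opt out of it:

    use std::collections::hash_map::RandomState;
    use std::collections::HashMap;

    fn main() {
        // HashMap::new() picks up the default hasher (SipHash 1-3 as of this
        // release) through RandomState.
        let mut scores: HashMap<&str, u32> = HashMap::new();
        scores.insert("alice", 10);

        // Constructing with an explicit BuildHasher is equivalent; supplying a
        // different BuildHasher here is how you would opt out of the default.
        let mut explicit: HashMap<&str, u32, RandomState> =
            HashMap::with_hasher(RandomState::new());
        explicit.insert("bob", 7);

        println!("{:?} {:?}", scores, explicit);
    }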

    See the detailed release notes for more.

    Library stabilizations

    See the detailed release notes for more.

    Cargo features

    See the detailed release notes for more.

    Contributors to 1.11

    We had 126 individuals contribute to 1.11. Thank you so much!

    • Aaklo Xu
    • Aaronepower
    • Aleksey Kladov
    • Alexander Polyakov
    • Alexander Stocko
    • Alex Burka
    • Alex Crichton
    • Alex Ozdemir
    • Alfie John
    • Amanieu d’Antras
    • Andrea Canciani
    • Andrew Brinker
    • Andrew Paseltiner
    • Andrey Tonkih
    • Andy Russell
    • Ariel Ben-Yehuda
    • bors
    • Brian Anderson
    • Carlo Teubner
    • Carol (Nichols || Goulding)
    • CensoredUsername
    • cgswords
    • cheercroaker
    • Chris Krycho
    • Chris Tomlinson
    • Corey Farwell
    • Cristian Oliveira
    • Daan Sprenkels
    • Daniel Firth
    • diwic
    • Eduard Burtescu
    • Eduard-Mihai Burtescu
    • Emilio Cobos Álvarez
    • Erick Tryzelaar
    • Esteban Küber
    • Fabian Vogt
    • Felix S. Klock II
    • flo-l
    • Florian Berger
    • Frank McSherry
    • Georg Brandl
    • ggomez
    • Gleb Kozyrev
    • Guillaume Gomez
    • Hendrik Sollich
    • Horace Abenga
    • Huon Wilson
    • Ivan Shapovalov
    • Jack O’Connor
    • Jacob Clark
    • Jake Goulding
    • Jakob Demler
    • James Alan Preiss
    • James Lucas
    • James Miller
    • Jamey Sharp
    • Jeffrey Seyfried
    • Joachim Viide
    • John Ericson
    • Jonas Schievink
    • Jonathan L
    • Jonathan Price
    • Jonathan Turner
    • Joseph Dunne
    • Josh Stone
    • Jupp Müller
    • Kamal Marhubi
    • kennytm
    • Léo Testard
    • Liigo Zhuang
    • Loïc Damien
    • Luqman Aden
    • Manish Goregaokar
    • Mark Côté
    • marudor
    • Masood Malekghassemi
    • Mathieu De Coster
    • Matt Kraai
    • Mátyás Mustoha
    • M Farkas-Dyck
    • Michael Necio
    • Michael Rosenberg
    • Michael Woerister
    • Mike Hommey
    • Mitsunori Komatsu
    • Morten H. Solvang
    • Ms2ger
    • Nathan Moos
    • Nick Cameron
    • Nick Hamann
    • Nikhil Shagrithaya
    • Niko Matsakis
    • Oliver Middleton
    • Oliver Schneider
    • Paul Jarrett
    • Pavel Pravosud
    • Peter Atashian
    • Peter Landoll
    • petevine
    • Reeze Xia
    • Scott A Carr
    • Sean McArthur
    • Sebastian Thiel
    • Seo Sanghyeon
    • Simonas Kazlauskas
    • Srinivas Reddy Thatiparthy
    • Stefan Schindler
    • Steve Klabnik
    • Steven Allen
    • Steven Burns
    • Tamir Bahar
    • Tatsuya Kawano
    • Ted Mielczarek
    • Tim Neumann
    • Tobias Bucher
    • Tshepang Lekhonkhobe
    • Ty Coghlan
    • Ulrik Sverdrup
    • Vadim Petrochenkov
    • Vincent Esche
    • Wangshan Lu
    • Will Crichton
    • Without Boats
    • Wojciech Nawrocki
    • Zack M. Davis
    • 吴冉波

    William LachanceHerding Automation Infrastructure

    For every commit to Firefox, we run a battery of builds and automated tests on the resulting source tree to make sure that the result still works and meets our correctness and performance quality criteria. This is expensive: every new push to our repository implies hundreds of hours of machine time. However, this type of quality control is essential to ensure that the product that we’re shipping to users is something that we can be proud of.

    But what about evaluating the quality of the product which does the building and testing? Who does that? And by what criteria would we say that our automation system is good or bad? Up to now, our procedures for this have been rather embarrassingly ad hoc. With some exceptions (such as OrangeFactor), our QA process amounts to motivated engineers doing a one-off analysis of a particular piece of the system, filing a few bugs, then forgetting about it. Occasionally someone will propose turning build and test automation for a specific platform on or off in mozilla.dev.planning.

    I’d like to suggest that the time has come to take a more systemic approach to this class of problem. We spend a lot of money on people and machines to maintain this infrastructure, and I think we need a more disciplined approach to make sure that we are getting good value for that investment.

    As a starting point, I feel like we need to pay closer attention to the following characteristics of our automation:

    • End-to-end times from push submission to full completion of all build and test jobs: if this gets too long, it makes the lives of all sorts of people painful — tree closures become longer when they happen (because it takes longer to either notice bustage or find out that it’s fixed), developers have to wait longer for try pushes (making them more likely to just push directly to an integration branch, causing the former problem…)
    • Number of machine hours consumed by the different types of test jobs: our resources are large (relatively speaking), but not unlimited. We need proper accounting of where we’re spending money and time. In some cases, resources used to perform a task that we don’t care that much about could be redeployed towards an underresourced task that we do care about. A good example of this was linux32 talos (performance tests) last year: when the question was raised of why we were doing performance testing on this specific platform (in addition to Linux64), no one could come up with a great justification. So we turned the tests off and reconfigured the machines to do Windows performance tests (where we were suffering from a severe lack of capacity).

    Over the past week, I’ve been prototyping a project I’ve been calling “Infraherder” which uses the data inside Treeherder’s job database to try to answer these questions (and maybe some others that I haven’t thought of yet). You can see a hacky version of it on my github fork.

    Why implement this in Treeherder, you might ask? Two reasons. First, Treeherder already stores the job data in a historical archive that’s easy to query (using SQL). Using this directly makes sense over creating a new data store. Second, Treeherder provides a useful set of front-end components with which to build a UI to visualize this information. I actually did my initial prototyping inside an ipython notebook, but it quickly became obvious that for my results to be useful to others at Mozilla we needed some kind of real dashboard that people could dig into.

    On the Treeherder team at Mozilla, we’ve found the New Relic software to be invaluable for diagnosing and fixing quality and performance problems for Treeherder itself, so I took some inspiration from it (unfortunately the problem space of our automation is not quite the same as that of a web application, so we can’t just use New Relic directly).

    There are currently two views in the prototype, a “last finished” view and a “total” view. I’ll describe each of them in turn.

    Last finished

    This view shows the counts of which scheduled automation jobs were the “last” to finish. The hypothesis is that jobs that are frequently last indicate blockers to developer productivity, as they are the “long pole” in being able to determine if a push is good or bad.

    Right away from this view, you can see the mochitest devtools 9 test is often the last to finish on try, with Windows 7 mochitest debug a close second. Assuming that the reasons for this are not resource starvation (they don’t appear to be), we could probably get results into the hands of developers and sheriffs faster if we split these jobs into two separate ones. I filed bugs 1294489 and 1294706 to address these issues.
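
    To make the metric itself concrete (a hypothetical sketch; the type and field names are made up and this is not Treeherder’s actual schema), counting the “long pole” per push could look something like this:

    use std::collections::HashMap;

    struct Job {
        push_id: u64,
        job_type: String,
        end_time: u64, // seconds since epoch
    }

    fn last_finished_counts(jobs: &[Job]) -> HashMap<String, u32> {
        // Find the job with the latest end_time for every push...
        let mut last_per_push: HashMap<u64, &Job> = HashMap::new();
        for job in jobs {
            let entry = last_per_push.entry(job.push_id).or_insert(job);
            if job.end_time > entry.end_time {
                *entry = job;
            }
        }
        // ...then tally how often each job type was that "long pole".
        let mut counts = HashMap::new();
        for job in last_per_push.values() {
            *counts.entry(job.job_type.clone()).or_insert(0) += 1;
        }
        counts
    }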

    Total Time

    This view just shows which jobs are taking up the most machine hours.

    Probably unsurprisingly, it seems like it’s Android test jobs that are taking up most of the time here: these tests are running on multiple layers of emulation (AWS instances to emulate Linux hardware, then the already slow QEMU-based Android simulator) so are not expected to have fast runtime. I wonder if it might not be worth considering running these tests on faster instances and/or bare metal machines.

    Linux32 debug tests seem to be another large consumer of resources. Market conditions make turning these tests off altogether a non-starter (see bug 1255890), but how much value do we really derive from running the debug version of linux32 through automation (given that we’re already doing the same for 64-bit Linux)?

    Request for comments

    I’ve created an RFC for this project on Google Docs, as a sort of test case for a new process we’re thinking of using in Engineering Productivity for these sorts of projects. If you have any questions or comments, I’d love to hear them! My perspective on this vast problem space is limited, so I’m sure there are things that I’m missing.

    Mike HoyeThe Future Of The Planet

    I’m not sure who said it first, but I’ve heard a number of people say that RSS solved too many problems to be allowed to live.

    I’ve recently become the module owner of Planet Mozilla, a venerable communication hub and feed aggregator here at Mozilla. Real talk here: I’m not likely to get another chance in my life to put “seize control of planet” on my list of quarterly deliverables, much less cross it off in less than a month. Take that, high school guidance counselor and your “must try harder”!

    I warned my boss that I’d be milking that joke until sometime early 2017.

    On a somewhat more serious note: We have to decide what we’re going to do with this thing.

    Planet Mozilla is a bastion of what’s by now the Old Web – Anil Dash talks about it in more detail here, the suite of loosely connected tools and services that made the 1.0 Web what it was. The hallmarks of that era – distributed systems sharing information streams, decentralized and mutually supportive without codependency – date to a time when the economics of software, hardware, connectivity and storage were very different. I’ve written a lot more about that here, if you’re interested, but that doesn’t speak to where we are now.

    Please note that when talk about “Mozilla’s needs” below, I don’t mean the company that makes Firefox or the non-profit Foundation. I mean the mission and people in our global community of communities that stand up for it.

    I think the following things are true, good things:

    • People still use Planet heavily, sometimes even to the point of “rely on”. Some teams and community efforts definitely rely heavily on subplanets.
    • There isn’t a better place to get a sense of the scope of Mozilla as a global, cultural organization. The range and diversity of articles on Planet is big and weird and amazing.
    • The organizational and site structure of Planet speaks well of Mozilla and Mozilla’s values in being open, accessible and participatory.
    • Planet is an amplifier giving participants and communities an enormous reach and audience they wouldn’t otherwise have to share stories that range from technical and mission-focused to human and deeply personal.

    These things are also true, but not all that good:

    • It’s difficult to say what or who Planet is for right now. I don’t have and may not be able to get reliable usage metrics.
    • The egalitarian nature of feeds is a mixed blessing: On one hand, Planet as a forum gives our smallest and most remote communities the same platform as our executive leadership. On the other hand, headlines ranging from “Servo now self-aware” and “Mozilla to purchase Alaska” to “I like turnips” are all equal citizens of Planet, sorted only by time of arrival.
    • Looking at Planet via the Web is not a great experience; if you’re not using a reader even to skim, you’re getting a dated user experience and missing a lot. The mobile Web experience is nonexistent.
    • The Planet software is, by any reasonable standards, a contraption. A long-running and proven contraption, for sure, but definitely a contraption.

    Maintaining Planet isn’t particularly expensive. But it’s also not free, particularly in terms of opportunity costs and user-time spent. I think it’s worth asking what we want Planet to accomplish, whether Planet is the right tool for that, and what we should do next.

    I’ve got a few ideas about what “next” might look like; I think there are four broad categories.

    1. Do nothing. Maybe reskin the site, move the backing repo from Subversion to Github (currently planned) but otherwise leave Planet as is.
    2. Improve Planet as a Planet, i.e: as a feed aggregator and communication hub.
    3. Replace Planet with something better suited to Mozilla’s needs.
    4. Replace Planet with nothing.

    I’m partial to the “Improve Planet as a Planet” option, but I’m spending a lot of time thinking about the others. Not (or at least not only) because I’m lazy, but because I still think Planet matters. Whatever we choose to do here should be a use of time and effort that leaves Mozilla and the Web better off than they are today, and better off than if we’d spent that time and effort somewhere else.

    I don’t think Planet is everything Planet could be. I have some ideas, but also don’t think anyone has a sense of what Planet is to its community, or what Mozilla needs Planet to be or become.

    I think we need to figure that out together.

    Hi, Internet. What is Planet to you? Do you use it regularly? Do you rely on it? What do you need from Planet, and what would you like Planet to become, if anything?

    These comments are open and there’s a thread open at the Mozilla Community discourse instance where you can talk about this, and you can always email me directly if you like.

    Thanks.

    * – Mozilla is not to my knowledge going to purchase Alaska. I mean, maybe we are and I’ve tipped our hand? I don’t get invited to those meetings but it seems unlikely. Is Alaska even for sale? Turnips are OK, I guess.

    Air MozillaThe Joy of Coding - Episode 68

    The Joy of Coding - Episode 68 mconley livehacks on real Firefox bugs while thinking aloud.

    Mozilla Open Design BlogNow for the fun part.

    On our open design journey together, we’ve arrived at an inflection point. Today our effort—equal parts open crit, performance art piece, and sociology experiment—takes its logical next step, moving from words to visuals. A roomful of reviewers lean forward in their chairs, ready to weigh in on what we’ve done so far. Or so we hope.

    We’re ready. The work with our agency partner, johnson banks, has great breadth and substantial depth for first-round concepts (possibly owing to our rocket-fast timeline). Our initial response to the work has, we hope, helped make it stronger and more nuanced. We’ve jumped off this cliff together, holding hands and bracing for the splash.

    Each of the seven concepts we’re sharing today leads with and emphasizes a particular facet of the Mozilla story. From paying homage to our paleotechnic origins to rendering us as part of an ever-expanding digital ecosystem, from highlighting our global community ethos to giving us a lift from the quotidian elevator open button, the concepts express ideas about Mozilla in clever and unexpected ways.

    There are no duds in the mix. The hard part will be deciding among them, and this is a good problem to have.

    We have our opinions about these paths forward, our early favorites among the field. But for now we’re going to sit quietly and listen to what the voices from the concentric rings of our community—Mozillians, Mozilla fans, designers, technologists, and beyond—have to say in response about them.

    Tag, you’re it.

    Here’s what we’d like you to do, if you’re up for it. Have a look at the seven options and tell us what you think. To make comments about an individual direction and to see its full system, click on its image below.

    Which of these initial visual expressions best captures what Mozilla means to you? Which will best help us tell our story to a youthful, values-driven audience? Which brings to life the Mozilla personality: Gutsy, Independent, Buoyant, For Good?

    If you want to drill down a level, also consider which design idea:

    • Would resonate best around the world?
    • Has the potential to show off modern digital technology?
    • Is most scalable to a variety of Mozilla products, programs, and messages?
    • Would stand the test of time (well…let’s say 5-10 years)?
    • Would make people take notice and rethink Mozilla?

    This is how we’ve been evaluating each concept internally over the past week or so. It’s the framework we’ll use as we share the work for qualitative and quantitative feedback from our key audiences.

    How you deliver your feedback is up to you: writing comments on the blog, uploading a sketch or a mark-up, shooting a carpool karaoke video….bring it on. We’ll be taking feedback on this phase of work for roughly the next two weeks.

    If you’re new to this blog, a few reminders about what we’re not doing. We are not crowdsourcing the final design, nor will there be voting. We are not asking designers to work on spec. We welcome all feedback but make no promise to act on it all (even if such a thing were possible).

    From here, we’ll reduce these seven concepts to three, which we’ll refine further based partially on feedback from people like you, partially on what our design instincts tell us, and very much on what we need our brand identity to communicate to the world. These three concepts will go through a round of consumer testing and live critique in mid-September, and we’ll share the results here. We’re on track to have a final direction by the end of September.

    We trust that openness will prevail over secrecy and that we’ll all learn something in the end. Thanks for tagging along.
