Bryce Van Dyk: Debug Rust on Windows with Visual Studio Code and the MSVC Debugger

Debugging support for Rust when using the Microsoft Visual C++ (MSVC) toolchain is something that a lot of people (myself included) would really benefit from. There have been some improvements in this area that enable better debugging in Visual Studio Code (with a bit of tweaking)! Let's take a look at how to set it up!

Notes on toolchains

As of May 2017, there are two different ABIs that Rust can target on Windows: MSVC and GNU. By default, Rust on Windows targets MSVC, so that's the case this post addresses. For more information on these ABIs, see the rustup README.

Required extensions

To get Rust MSVC debugging working in VSCode we're going to use a couple of extensions:

  • C/C++: Brings in support for debugging MSVC applications. Since we're using MSVC behind the scenes, this gives us some debugging support for our Rust programs.
  • Native Debug: Allows us to set breakpoints by clicking next to line numbers.

Configuration

Debugging in VSCode is configured from the debug view, which we can get to by clicking on the bug icon, or via Ctrl + Shift + D. In this view there is a drop down box to select or add a launch configuration:

From there we're given a drop down menu offering a number of debug configurations. For MSVC Rust debugging we want C++ (Windows).

Now we're presented with a newly created launch.json file, which contains a skeleton debug config. There are various settings that can be tweaked here, but the only one that really needs to change is program, which needs to be set to the binary we want to debug.

Change this value to point to the debug binary (a sketch of the resulting file is below). From there the Native Debug extension should allow us to click next to the line numbers to set breakpoints; these will then be hit when running with the debug settings we've just configured.
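
For reference, once program is filled in, a trimmed-down launch.json ends up looking roughly like the sketch below. The field names follow the generated C++ (Windows) template; the binary path here is just a placeholder, so adjust it for your own project.

{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "(Windows) Launch",
            "type": "cppvsdbg",
            "request": "launch",
            "program": "${workspaceRoot}/target/debug/your_program.exe",
            "args": [],
            "stopAtEntry": false,
            "cwd": "${workspaceRoot}",
            "environment": [],
            "externalConsole": true
        }
    ]
}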

Visualizers

The Visual Studio debugger we're using behind the scenes supports the creation of custom views for native data via natvis files.

If Rust nightly is installed via rustup, it will come with a few of these visualizers. To get them to work with VSCode, copy them from the nightly toolchain directory to VSCode's debugger extension directory. For me that means copying from:

C:\Users\B\.multirust\toolchains\nightly-x86_64-pc-windows-msvc\lib\rustlib\etc

to

C:\Users\B\.vscode\extensions\ms-vscode.cpptools-0.11.2\debugAdapters\vsdbg\bin\Visualizers

Note that these visualizers help with some types and not others. We can open the files up and look at their XML guts to get an idea of which types they work with and how they do so. In my experience, there are still plenty of types with which the debugger doesn't yet play nice.
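
For a rough idea of what's inside, a natvis entry has a shape something like the following. This is only an illustrative sketch, not a copy of the shipped files; the type pattern and the len member are assumptions, so check the actual visualizers for the real names and expressions.

<?xml version="1.0" encoding="utf-8"?>
<AutoVisualizer xmlns="http://schemas.microsoft.com/vstudio/debugger/natvis/2010">
  <!-- Illustrative only: match a Rust type and describe how to display it. -->
  <Type Name="alloc::vec::Vec&lt;*&gt;">
    <!-- Shown in the Variables pane instead of the raw struct fields. -->
    <DisplayString>{{ size={len} }}</DisplayString>
    <Expand>
      <Item Name="[size]">len</Item>
    </Expand>
  </Type>
</AutoVisualizer>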

Worked Example

To showcase debugging, let's use a simple example. If you want to follow along you can copy the code from below, or grab a clone/get a zip from here. If you want a copy with the debug config included, you can find it here.

fn another_function() {  
    let message = "I'm a message in another function";
    println!("{}", message);
}

fn yet_another_function() -> u64 {  
    let number = 2;
    let another_number = 5;
    number + another_number
}

fn main() {  
    println!("Hello, world!");
    another_function();
    let number = yet_another_function();
    println!("{}", number);
}

Following the steps above, let's set launch.json's program value to ${workspaceRoot}/target/debug/vscode-rust-debug-example.exe, make sure the build is up to date with cargo build, and we're good to go.

Let's set a breakpoint early in the program before running:

To start debugging click the little green arrow or hit F5. Doing so, we hit the breakpoint, and from there we're able to step into functions. We get a stack trace and some visualization of data (though we don't get all of the &str visualized):

We're also able to step out, and can keep stepping through the program happily:

Wrap up

And that's Rust debugging in VSCode! If you have any questions or comments (especially if you know how to further improve the debugging experience) let me know!

Daniel Stenberg: Number of semicolon separated words in libreoffice calc

=LEN(TRIM(A1))-LEN(SUBSTITUTE(TRIM(A1),";",""))+1

The length of the entire A1 field (white space trimmed) minus the length of the A1 field with all the semicolons removed, plus one.

A string in A1 that looks like “A;B;C” will give the number 3.
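
To see why that works: LEN(TRIM(A1)) is 5 for “A;B;C”, removing the semicolons leaves “ABC” with a length of 3, so the difference (2) counts the separators, and adding one gives the 3 words.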

I had to figure this out and I thought I’d store it here for later retrieval. Maybe someone else will enjoy it as well. The same formula works in Excel too I think.

Chris Lord: Free Ideas for UI Frameworks, or How To Achieve Polished UI

Ever since the original iPhone came out, I’ve had several ideas about how they managed to achieve such fluidity with relatively mediocre hardware. I mean, it was good at the time, but Android still struggles on hardware that makes that look like a 486… It’s absolutely my fault that none of these have been implemented in any open-source framework I’m aware of, so instead of sitting on these ideas and trotting them out at the pub every few months as we reminisce over what could have been, I’m writing about them here. I’m hoping that either someone takes them and runs with them, or that they get thoroughly debunked and I’m made to look like an idiot. The third option is of course that they’re ignored, which I think would be a shame, but given I’ve not managed to get the opportunity to implement them over the last decade, that would hardly be surprising. I feel I should clarify that these aren’t all my ideas, but include a mix of observation of and conjecture about contemporary software. This somewhat follows on from the post I made 6 years ago(!) So let’s begin.

1. No main-thread UI

The UI should always be able to start drawing when necessary. As careful as you may be, it’s practically impossible to write software that will remain perfectly fluid when the UI can be blocked by arbitrary processing. This seems like an obvious one to me, but I suppose the problem is that legacy makes it very difficult to adopt this at a later date. That said, difficult but not impossible. All the major web browsers have adopted this policy, with caveats here and there. The trick is to switch from the idea of ‘painting’ to the idea of ‘assembling’ and then using a compositor to do the painting. Easier said than done of course, most frameworks include the ability to extend painting in a way that would make it impossible to switch to a different thread without breaking things. But as long as it’s possible to block UI, it will inevitably happen.

2. Contextually-aware compositor

This follows on from the first point; what’s the use of having non-blocking UI if it can’t respond? Input needs to be handled away from the main thread also, and the compositor (or whatever you want to call the thread that is handling painting) needs to have enough context available that the first response to user input doesn’t need to travel to the main thread. Things like hover states, active states, animations, pinch-to-zoom and scrolling all need to be initiated without interaction on the main thread. Of course, main thread interaction will likely eventually be required to update the view, but that initial response needs to be able to happen without it. This is another seemingly obvious one – how can you guarantee a response rate unless you have a thread dedicated to responding within that time? Most browsers are doing this, but not going far enough in my opinion. Scrolling and zooming are often catered for, but not hover/active states, or initialising animations (note; initialising animations. Once they’ve been initialised, they are indeed run on the compositor, usually).

3. Memory bandwidth budget

This is one of the less obvious ideas and something I’ve really wanted to have a go at implementing, but never had the opportunity. A problem I saw a lot while working on the platform for both Firefox for Android and FirefoxOS is that given the work-load of a web browser (which is not entirely dissimilar to the work-load of any information-heavy UI), it was very easy to saturate memory bandwidth. And once you saturate memory bandwidth, you end up having to block somewhere, and painting gets delayed. We’re assuming UI updates are asynchronous (because of course – otherwise we’re blocking on the main thread). I suggest that it’s worth tracking frame time, and only allowing large asynchronous transfers (e.g. texture upload, scaling, format transforms) to take a certain amount of time. After that time has expired, it should wait on the next frame to be composited before resuming (assuming there is a composite scheduled). If the composited frame was delayed to the point that it skipped a frame compared to the last unladen composite, the amount of time dedicated to transfers should be reduced, or the transfer should be delayed until some arbitrary time (i.e. it should only be considered ok to skip a frame every X ms).
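
As a sketch of the idea in Rust (entirely illustrative: the names, numbers, and halving policy are made up, not taken from any real compositor):

use std::time::{Duration, Instant};

// Illustrative only: a per-frame time budget for large asynchronous
// transfers (texture uploads, scaling, format conversions).
struct TransferBudget {
    per_frame: Duration,       // e.g. a few ms out of a ~16.6 ms frame
    spent: Duration,
    last_frame_was_late: bool, // set by the compositor when a frame was skipped
}

impl TransferBudget {
    // Called at the start of each composite.
    fn begin_frame(&mut self) {
        // If the previous composite missed a frame, spend less time on
        // transfers for a while (or defer them entirely).
        if self.last_frame_was_late {
            self.per_frame /= 2;
        }
        self.spent = Duration::from_millis(0);
    }

    // Run a transfer only if there is budget left; otherwise the caller
    // waits for the next composited frame before resuming.
    fn try_run<F: FnOnce()>(&mut self, transfer: F) -> bool {
        if self.spent >= self.per_frame {
            return false;
        }
        let start = Instant::now();
        transfer();
        self.spent += start.elapsed();
        true
    }
}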

It’s interesting that you can see something very similar to this happening in early versions of iOS (I don’t know if it still happens or not) – when scrolling long lists with images that load in dynamically, none of the images will load while the list is animating. The user response was paramount, to the point that it was considered more important to present consistent response than it was to present complete UI. This priority, I think, is a lot of the reason the iPhone feels ‘magic’ and Android phones felt like junk up until around 4.0 (where it’s better, but still not as good as iOS).

4. Level-of-detail

This is something that I did get to partially implement while working on Firefox for Android, though I didn’t do such a great job of it so its current implementation is heavily compromised from how I wanted it to work. This is another idea stolen from game development. There will be times, during certain interactions, where processing time will be necessarily limited. Quite often though, during these times, a user’s view of the UI will be compromised in some fashion. It’s important to understand that you don’t always need to present the full-detail view of a UI. In Firefox for Android, this took the form that when scrolling fast enough that rendering couldn’t keep up, we would render at half the resolution. This let us render more, and faster, giving the impression of a consistent UI even when the hardware wasn’t quite capable of it. I notice Microsoft doing similar things since Windows 8; notice how the quality of image scaling reduces markedly while scrolling or animations are in progress. This idea is very implementation-specific. What can be dropped and what you want to drop will differ between platforms, form-factors, hardware, etc. Generally though, some things you can consider dropping: Sub-pixel anti-aliasing, high-quality image scaling, render resolution, colour-depth, animations. You may also want to consider showing partial UI if you know that it will very quickly be updated. The Android web-browser during the Honeycomb years did this, and I attempted (with limited success, because it’s hard…) to do this with Firefox for Android many years ago.
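
To make the idea concrete, here's a tiny Rust-flavoured sketch of what that decision might look like; the thresholds and names are invented for illustration, and a real implementation would be driven by measured frame times and platform capabilities:

// Illustrative only: pick a level of detail based on how far behind
// rendering is and how fast the user is scrolling.
enum Detail {
    Full,           // sub-pixel AA, high-quality scaling, full resolution
    Reduced,        // cheaper image scaling, no sub-pixel AA
    HalfResolution, // render at half resolution while scrolling fast
}

fn choose_detail(last_frame_ms: f32, scroll_velocity_px_per_s: f32) -> Detail {
    if last_frame_ms > 33.0 || scroll_velocity_px_per_s > 3000.0 {
        Detail::HalfResolution
    } else if last_frame_ms > 16.6 {
        Detail::Reduced
    } else {
        Detail::Full
    }
}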

Pitfalls

I think it’s easy to read ideas like this and think it boils down to “do everything asynchronously”. Unfortunately, if you take a naïve approach to that, you just end up with something that can be inexplicably slow sometimes and the only way to fix it is via profiling and micro-optimisations. It’s very hard to guarantee a consistent experience if you don’t manage when things happen. Yes, do everything asynchronously, but make sure you do your book-keeping and you manage when it’s done. It’s not only about splitting work up, it’s about making sure it’s done when it’s smart to do so.

You also need to be careful about how you measure these improvements, and to be aware that sometimes results in synthetic tests will even correlate to the opposite of the experience you want. A great example of this, in my opinion, is page-load speed on desktop browsers. All the major desktop browsers concentrate on prioritising the I/O and computation required to get the page to 100%. For heavy desktop sites, however, this means the browser is often very clunky to use while pages are loading (yes, even with out-of-process tabs – see the point about bandwidth above). I highlight this specifically on desktop, because you’re quite likely to not only be browsing much heavier sites that trigger this behaviour, but also to have multiple tabs open. So as soon as you load a couple of heavy sites, your entire browsing experience is compromised. I wouldn’t mind the site taking a little longer to load if it didn’t make the whole browser chug while doing so.

Don’t lose sight of your goals. Don’t compromise. Things might take longer to complete, deadlines might be missed… But polish can’t be overrated. Polish is what people feel and what they remember, and the lack of it can have a devastating effect on someone’s perception. It’s not always conscious or obvious either, even when you’re the developer. Ask yourself “Am I fully satisfied with this?” before marking something as complete. You might still be able to ship if the answer is “No”, but make sure you don’t lose sight of that and make sure it gets the priority it deserves.

One last point I’ll make; I think to really execute on all of this, it requires buy-in from everyone. Not just engineers, not just engineers and managers, but visual designers, user experience, leadership… Everyone. It’s too easy to do a job that’s good enough and it’s too much responsibility to put it all on one person’s shoulders. You really need to be on the ball to produce the kind of software that Apple does almost routinely, but as much as they’d say otherwise, it isn’t magic.

Mozilla Open Policy & Advocacy Blog: Aadhaar isn’t progress — it’s dystopian and dangerous


This opinion piece by Mozilla Executive Chairwoman Mitchell Baker and Mozilla community member Ankit Gadgil first appeared in the Business Standard.

Imagine your government required you to consent to ubiquitous stalking in order to participate in society — to do things such as log into a wifi hotspot, register a SIM card, get your pension, or even obtain a food ration of rice. Imagine your government was doing this in ways your Supreme Court had indicated were illegal.

This isn’t some dystopian future, this is happening in India right now. The government of India is pushing relentlessly to roll out a national biometric identity database called Aadhaar, which it wants India’s billion-plus population to use for virtually all transactions and interactions with government services.

The Indian Supreme Court has directed that Aadhaar is only legal if it’s voluntary and restricted to a limited number of schemes. Seemingly disregarding this directive, Prime Minister Narendra Modi’s government has made verification through Aadhaar mandatory for a wide range of government services, including vital subsidies that some of India’s poorest citizens rely on to survive. Vital subsidies aren’t voluntary.

Even worse, the government of India is selling access to this database to private companies to use and combine with other datasets as they wish. This would allow companies to have access to some of your most intimate details and create detailed profiles of you, in ways you can’t necessarily see or control. The government can also share user data “in the interest of national security,” a term that remains dangerously undefined. There are little to no protections on how Aadhaar data is used, and certainly no meaningful user consent. Individual privacy and security cannot be adequately protected and users cannot have trust in systems when they do not have transparency or a choice in how their private information will be used.

This is all possible because India currently does not have any comprehensive national law protecting personal security through privacy. India’s Attorney General has recently cast doubt on whether a right to privacy exists in arguments before the Supreme Court, and has not addressed how individual citizens can enjoy personal security without privacy.

We have long argued that enacting a comprehensive privacy and data protection law should be a national policy priority for India. While it is encouraging to see the Attorney General also indicate to the Supreme Court in a separate case that the government of India intends to develop a privacy and data protection law by Diwali, it is not at all clear that the draft law the government will put forward will contain the robust protections needed to ensure the security and privacy of individuals in India. At the same time, the government of India is still exploiting this vacuum in legal protections by continuing to push ahead with a massive initiative that systematically threatens individuals’ security and privacy. The world is looking to India to be a leader on internet policy, but it is unclear if Prime Minister Modi’s government will seize this opportunity and responsibility for India to take its place as a global leader on protecting individual security and privacy.

The protection of individual security and privacy is critical to building safe online systems. It is the lifeblood of the online ecosystem, without which online efforts such as Aadhaar and Digital India are likely to fail or become deeply dangerous.

One of Mozilla’s founding principles is the idea that security and privacy on the internet are fundamental and must not be treated as optional. This core value underlines and guides all of Mozilla’s work on online privacy and security issues—including our product development and design decisions and policies, and our public policy and advocacy work. The Mozilla Community in India has also long sought to empower Indians to protect their privacy themselves including through national campaigns with privacy tips and tools. Yet, we also need the government to do its part to protect individual security and privacy.

The Mozilla Community in India has further been active in promoting the use, development, and adoption of open source software. Aadhaar fails here as well.

The Government of India has sought to soften the image of Aadhaar by wrapping it in the veneer of open source. It refers to the Aadhaar API as an “Open API” and its corporate partners as “volunteers.” As executive chairwoman and one of the leading contributors to Mozilla, one of the largest open source projects in the world, let us be unequivocally clear: There’s nothing open about this. The development was not open, the source code is not open, and companies that pay to get a license to access this biometric identity database are not volunteers. Moreover, requiring Indians to use Aadhaar to access so many services dangerously intensifies the already worrying trend toward centralisation of the internet. This is disappointing given the government of India’s previous championing of open source technologies and the open internet.

Prime Minister Modi and the government of India should pause the further rollout of Aadhaar until a strong, comprehensive law protecting individual security and privacy is passed. We further urge a thorough and open public process around these much-needed protections; India’s privacy law should not be passed in a rushed manner in the dead of night as the original Aadhaar Act was. As an additional act of openness and transparency, and to enable an informed debate, the government of India should make Aadhaar actually open source rather than use the language of open source for an initiative that has little if anything “open” about it. We hope India will take this opportunity to be a beacon to the world on how citizens should be protected.


Mozilla Addons Blog: View Source links removed from listing pages

Up until a few weeks ago, AMO had “View Source” links that allowed users to inspect the contents of listed add-ons. Due to performance issues, the viewer didn’t work reliably, so we disabled it to investigate the issue. Unfortunately, the standard error message that was shown when people tried to click the links led to some confusion, and we decided to remove them altogether.

What’s Next for Source Viewing

The open issue has most of the background and some ideas on where to go from here. It’s likely we won’t support this feature again. Instead, we can make it easier for developers to point users to their code repositories. I think this is an improvement, since sites like GitHub can provide a much better source-viewing experience. If you still want to inspect the actual package hosted on AMO, you can download it, decompress it, and use your tool of preference to give it a look.


Patrick Cloke: RSS Feeds for Wikipedia Current Events and NHL News

I subscribe to a fair amount of feeds for news, blogs, articles, etc. I’m currently subscribed to 122 feeds, some of which have tens of articles a day (news sites), some of which are dead. [1] Unfortunately there’s still a few sites that I was visiting manually each …

Andy McKay: Ride to Conquer Cancer - Team Mozilla

The Ride to Conquer Cancer is a fundraising event by the BC Cancer Foundation. Each year thousands of riders go from Vancouver down to Seattle in the US, and over the years the event has raised millions of dollars.

This year the Mozilla Vancouver office has put together a team. Roland Tanglao, Eva Szekely, and I will be riding and raising money. If you'd like to support our cause, then please consider donating. We've got a fundraising goal and we are going to make it.

I attended the ride last year and found it moving to see so many people working to help a cause. This year I'm riding for family and friends who have been affected by cancer, but specifically for my father, who passed away a few years ago.

If any Mozillians from the wider Mozilla community in Vancouver want to join in, please drop me a line.

Eric Rahm: Are we slim yet is dead, all hail are we slim yet

Aside from some pangs of nostalgia, it is with great pleasure that I announce the retirement of areweslimyet.com, the areweslimyet github project, and its associated infrastructure (a sad computer in Mountain View under dvander’s desk and a possibly less sad computer running the website that’s owned by the former maintainer).

Wait, what?

Don’t worry! Are we slim yet, aka AWSY, lives on; it’s just moved in-tree and is run within Mozilla’s automated testing infrastructure.

For equivalent graphs check out:
  • Explicit
  • RSS
  • Miscellaneous

You can build your own graph from Perfherder. Just choose ‘+ Add test data’, pick ‘awsy’ as the framework, and then select the tests and platforms you care about.

Wait, why?

I spent a few years maintaining and updating AWSY, and some folks spent a fair amount of time on it before me. It was an ad hoc system that had bits and pieces bolted on over time. I brought it into the modern age, moving it from the mozmill framework over to marionette, added support for e10s, and cleaned up some old, slightly busted code. I tried to reuse packages developed by Mozilla to make things a bit easier (mozdownload and friends).

This was all pretty good, but things kept breaking. We weren’t in-tree, so breaking changes to marionette, mozdownload, etc would cause failures for us and it would take a while to figure out what happened. Sometimes the hard drive filled up. Sometimes the status file would get corrupted due to a poorly timed shutdown. It just had a lot of maintenance for a project with nobody dedicated to it.

The final straw was the retirement of archive.mozilla.org for what we call tinderbox builds, builds that are done more or less per push. This completely broke AWSY back in January and we decided it was just better to give in and go in-tree.

So is this a good thing?

It is a great thing. We’ve gone from 18,000 lines of code to 1,000 lines of code. That is not a typo. We now run on linux64, win32, and win64. Mac is coming soon. We turned on e10s. We have results on mozilla-inbound, autoland, try, mozilla-central, and mozilla-beta. We’re going to have automated crash analysis soon. We were able to use the project to give the greenlight for the e10s-multi project on memory usage.

Oh and guess what? Developers can run AWSY locally via mach. That’s right, try this out:

mach awsy-test --quick

Big thanks go out to Paul Yang and Bob Clary who pulled all this together — all I did was do a quick draft of an awsy-lite implementation — they did the heavy lifting getting it in tree, integrated with task cluster, and integrated with mach.

What’s next?

Now that we’re in-tree we can easily add new tests. Imagine getting data points for running the AWSY test with a specific add-on enabled to see if it regresses memory across revisions. And anyone can do this, no crazy local setup. Just mach awsy-test.

Andreas Gal: Chrome won

Disclaimer: I worked for 7 years at Mozilla and was Mozilla’s Chief Technology Officer before leaving 2 years ago to found an embedded AI startup.

Mozilla published a blog post two days ago highlighting its efforts to make the Desktop Firefox browser competitive again. I used to closely follow the browser market but haven’t looked in a few years, so I figured it’s time to look at some numbers:

[Chart: market share of the four major browsers over the last 6 years, across all devices]

The chart above shows the percentage market share of the 4 major browsers over the last 6 years, across all devices. The data is from StatCounter and you can argue that the data is biased in a bunch of different ways, but at the macro level it’s safe to say that Chrome is eating the browser market, and everyone else except Safari is getting obliterated.

Trend

I tried a couple different ways to plot a trendline and an exponential fit seems to work best. This aligns pretty well with theories around the explosive diffusion of innovation, and the slow decline of legacy technologies. If the 6 year trend holds, IE should be pretty much dead in 2 or 3 years. Firefox is not faring much better, unfortunately, and is headed towards a 2-3% market share. For both IE and Firefox these low market share numbers further accelerate the decline because Web authors don’t test for browsers with a small market share. Broken content makes users switch browsers, which causes more users to depart. A vicious cycle.

Chrome and Safari don’t fit as well as IE and Firefox. The explanation for Chrome is likely that the market share is so large that Chrome is running out of users to acquire. Some people are stuck on old operating systems that don’t support Chrome. Safari’s recent growth is underperforming its trend most likely because iOS device growth has slowed.

Desktop market share

Looking at all devices blends mobile and desktop market shares, which can be misleading. Safari/iOS is dominant on mobile whereas on Desktop Safari has a very small share. Firefox in turn is essentially not present on mobile. So let’s look at the Desktop numbers only.

[Chart: desktop-only browser market share over the last 6 years]

The Desktop-only graph unfortunately doesn’t predict a different fate for IE and Firefox either. The overall desktop PC market is growing slightly (most sales are replacement PCs, but new users are added as well). Despite an expanding market both IE and Firefox are declining unsustainably.

Adding users?

Eric mentioned in the blog post that Firefox added users last year. The relative Firefox market share declined from 16% to 14.85% during that period. For comparison, Safari Desktop is relatively flat, which likely means Safari market share is keeping up with the (slow) growth of the PC/Laptop market. There are two possible explanations. One is that Eric meant that browser installs were added: people often re-install the browser on a new machine, which could be called an “added user”, but it usually comes at the expense of the previous machine becoming disused. The other is that the absolute daily active user count has indeed increased due to the growth of the PC/laptop market, despite the steep decline in relative market share. Firefox ADUs aren’t public, so it’s hard to tell.

From these graphs it’s pretty clear that Firefox is not going anywhere. That means that the esteemed Fox will be around for many many years, albeit with an ever diminishing market share. It also, unfortunately, means that a turnaround is all but impossible.

With a CEO transition about 3 years ago there was a major strategic shift at Mozilla to re-focus efforts on Firefox and thus the Desktop. Prior to 2014 Mozilla heavily invested in building a Mobile OS to compete with Android: Firefox OS. I started the Firefox OS project and brought it to scale. While we made quite a splash and sold several million devices, in the end we were a bit too late and we didn’t manage to catch up with Android’s explosive growth. Mozilla’s strategic rationale for building Firefox OS was often misunderstood. Mozilla’s founding mission was to build the Web by building a browser. Mobile thoroughly disrupted this mission. On mobile browsers are much less relevant–even more so third party mobile browsers. On mobile browsers are a feature of the Facebook and Twitter apps, not a product. To influence the Web on mobile, Mozilla had to build a whole stack with the Web at its core. Building mobile browsers (Firefox Android) or browser-like apps (Firefox Focus) is unlikely to capture a meaningful share of use cases. Both Firefox for Android and Firefox Focus have a market share close to 0%.

The strategic shift in 2014, back to Firefox, and with that back to Desktop, was significant for Mozilla. As Eric describes in his article, a lot of amazing technical work has gone into Firefox for Desktop the last years. The Desktop-focused teams were expanded, and mobile-focused efforts curtailed. Firefox Desktop today is technically competitive with Chrome Desktop in many areas, and even better than Chrome in some. Unfortunately, looking at the graphs, none of this has had any effect on market trends. Browsers are a commodity product. They all pretty much look the same and feel the same. All browsers work pretty well, and being slightly faster or using slightly less memory is unlikely to sway users. If even Eric–who heads Mozilla’s marketing team–uses Chrome every day as he mentioned in the first sentence, it’s not surprising that almost 65% of desktop users are doing the same.

What does this mean for the Web?

I started Firefox OS in 2011 because already back then I was convinced that desktops and browsers were dead. Not immediately–here we are 6 years later and both are still around–but both are legacy technologies that are not particularly influential going forward. I don’t think there will be a new browser war where Firefox or some other competitor re-captures market share from Chrome. It’s like launching a new and improved horse in the year 2017. We all drive cars now. Some people still use horses, and there is value to horses, but technology has moved on when it comes to transportation.

Does this mean Google owns the Web if they own Chrome? No. Absolutely not. Browsers are what the Web looked like in the first decades of the Internet. Mobile disrupted the Web, but the Web embraced mobile and at the heart of most apps beats a lot of JavaScript and HTTPS and REST these days. The future Web will look yet again completely different. Much will survive, and some parts of it will get disrupted. I left Mozilla because I became curious what the Web looks like once it consists predominantly of devices instead of desktops and mobile phones. At Silk we created an IoT platform built around open Web technologies such as JavaScript, and we do a lot of work around democratizing data ownership through embedding AI in devices instead of sending everything to the cloud.

So while Google won the browser wars, they haven’t won the Web. To stick with the transportation metaphor: Google makes the best horses in the world and they clearly won the horse race. I just don’t think that race matters much going forward.

Update: A lot of good comments in a HackerNews thread here. My favorite was this one: “Mozilla won the browser war. Firefox lost the browser fight. But there’s many wars left to fight, and I hope Mozilla dives into a new one.” Couldn’t agree more.


Air Mozilla: Localization Community Bi-Monthly Call, 25 May 2017

Localization Community Bi-Monthly Call. These calls will be held in the Localization Vidyo room every second (14:00 UTC) and fourth (20:00 UTC) Thursday of the month and will be...

Mozilla Localization (L10N): Taipei Localization Workshop

In front of the iconic Taipei 101.

In discussing our plans for this year’s event, the city of Taipei was on a short list of preferred locations. Peter from our Taipei community helped us solidify the plan. We set the date for April 21-22 in favour of cooler weather and to avoid typhoon season! This would be my third visit to Taiwan.

Working with our community leaders, we developed nomination criteria and sent out invitations. In addition to contributing to localizing content, we also reviewed community activities in other areas such as testing Pontoon, leading and managing community projects, and active participation in community channels.

360° view of the meetup.

In total, we invited representatives from 12 communities and all were represented at our event. We had a terrific response: more than 80% of the invitees accepted the invitation and were able to join us. It was a good mix of familiar faces and newcomers. We asked everyone to set personal goals in addition to team goals. Flod and Gary joined me for the second year in a row, while this was Axel’s first meeting with these communities in Asia.

Based on the experience and feedback from last year’s event, we switched things up, balancing discussion and presentation sessions with community-oriented breakout sessions throughout the weekend. These changes were well received.

Our venue was the Mozilla Taipei office, right at the heart of financial centre, a few minutes from Taipei 101. On Saturday morning, Axel covered the removal of the Aurora branch and cross-channel, while later Flod talked about Quantum and Photon and their impact on localization. We then held a panel Q&A session with the localisers and l10n-drivers. Though we solicited questions in advance, most questions were spontaneous, both technical and non-technical. They covered a broad range of subjects including Firefox, new brand design, vendor management and crowd sourcing practices by other companies. We hoped this new format would be interactive. And it was! We loved it, and from the survey, the response was positive too. In fact, we were asked to conduct another session the following day, so more questions could be answered.

Localisers were briefed on product updates.

The upcoming Firefox browser launch in autumn creates new challenges for our communities, including promoting the product in their languages. In anticipation, we are developing a Firefox l10n marketing kit for the communities. We took advantage of the event to collect input on local experiences that worked well and that didn’t. We covered communication channels, materials needed for organising an event, and key messages to promote the localised product. Flod shared the design of Photon, with a fun, new look and feel.

On Sunday, Flod demonstrated all the new development on Pontoon, including how to utilise the tool to work more efficiently. He covered the basic activities for different roles as a suggester, as a translator and as a locale manager. He also covered advanced features such as batch processing, referencing other languages for inspiration and filters, before describing future feature improvements. Though it was unplanned, many localisers tried their hands on the tool while they listened in attentively. It worked out better than expected!

Quality was the focus and theme for this year’s event. We shared test plans for desktop, mobile, and mozilla.org, then allowed the communities to spend the breakout sessions testing their localisation work. Axel also made a laptop available to test Windows Installer. Each community worked on their group goals between sessions for the rest of the weekend.

Last stop of the 貓空纜車 (Maokong Gondola ride)

Of course, we found some time to play. Though the weather was not cooperative, we braved unseasonally cold, wet, and windy weather to take a gondola ride on 貓空纜車 (Taipei Maokong Gondola) over the Taipei Zoo in the dark. Irvin introduced the visitors to the local community contributors at 摩茲工寮  (Mozilla community space). Gary led a group to visit Taipei’s famed night markets. Others followed Joanna to her workplace at 三七茶堂 (7 Tea House), to get an informative session on tea culture. Many brought home some local teas, the perfect souvenir from Taiwan.

Observing the making of the famous dumplings at 鼎泰豐 (Din Tai Fung at Taipei 101)

We were also spoiled by the abundance of food Taipei had to offer. The local community put a lot of thought in the planning phase. Among the challenges were the size of the group, the diversity of the dietary needs, and the desire of having a variety of cuisines. Flod and Axel had an eye opening experience with all the possible food options! There was no shortage, between snacks, lunch and dinner. Many of us gained a few pounds before heading home.

All of us were pleased with the active participation of all the attendees and their collaboration within the community and beyond. We hope you achieved your personal goals. We are especially grateful for the tremendous support from Peter, Joanna and Lora, who helped with each step of the planning: hotel selection, transportation directions, the visa application process, food and restaurant selections, and cultural activities. We could not have done it without their knowledge, patience, and advice in planning and execution. Behind the scenes, community veterans Bob and Irvin lent their support to make sure things went as seamlessly as possible. It was a true team effort to host a successful event of this size. Thanks to you all for creating this wonderful experience together.

We look forward to another event in Asia next year. In which country, using what format? We want to hear from you!

Nathan Froyd: applying amazon’s lessons to mozilla (part 1)

Several days ago, somebody pointed me at Why Amazon is eating the world and the key idea has been rolling around in my head ever since:

[The reason that Amazon’s position is defensible is] that each piece of Amazon is being built with a service-oriented architecture, and Amazon is using that architecture to successively turn every single piece of the company into a separate platform — and thus opening each piece to outside competition.

The most obvious example of Amazon’s [service-oriented architecture] structure is Amazon Web Services (Steve Yegge wrote a great rant about the beginnings of this back in 2011). Because of the timing of Amazon’s unparalleled scaling — hypergrowth in the early 2000s, before enterprise-class SaaS was widely available — Amazon had to build their own technology infrastructure. The financial genius of turning this infrastructure into an external product (AWS) has been well-covered — the windfalls have been enormous, to the tune of a $14 billion annual run rate. But the revenue bonanza is a footnote compared to the overlooked organizational insight that Amazon discovered: By carving out an operational piece of the company as a platform, they could future-proof the company against inefficiency and technological stagnation.

…Amazon has replaced useless, time-intensive bureaucracy like internal surveys and audits with a feedback loop that generates cash when it works — and quickly identifies problems when it doesn’t. They say that money earned is a reasonable approximation of the value you’re creating for the world, and Amazon has figured out a way to measure its own value in dozens of previously invisible areas.

Open source is the analogue of this strategy into the world of software.  You have some small collection of code that you think would be useful to the wider world, so you host your own repository or post it on Github/Bitbucket/etc.  You make an announcement in a couple of different venues where you expect to find interested people.  People start using it, express appreciation for what you’ve done, and begin to generate ideas on how it could be made better, filing bug reports and sending you patches.  Ideally, all of this turns into a virtuous cycle of making your internal code better as well as providing a useful service to external contributors.  The point of the above article is that Amazon has applied an open-source-like strategy to its business relentlessly, and it’s paid off handsomely.

Google is probably the best (unintentional?) practitioner of this strategy, exporting countless packages of software, such as GTest, Go, and TensorFlow, not to mention tools like their collection of sanitizers. They also do software-related exports like their C++ style guide. Facebook opens up in-house-developed components with React, HHVM, and Buck, among others. Microsoft has been charging into this arena in the past couple of years, with examples like Visual Studio Code, TypeScript, and ChakraCore.  Apple doesn’t really play the open source game; their opensource site and available software is practically the definition of “throwing code over the wall”, even if having access to the source is useful in a lot of cases.  To the best of my knowledge, Amazon doesn’t really play in this space either.  I could also list examples of exported code from other smaller but still influential technology companies: Github, Dropbox, Twitter, and so forth, as well as companies that aren’t traditional technology companies, but have still invested in open-sourcing some of their software.

Whither Mozilla in the above list?  That is an excellent question.  I think in many cases, we haven’t tried, and in the Firefox-related cases where we tried, we decided (incorrectly, judging through the above lens) that the risks of the open source approach weren’t worth it.  Two recent cases where we have tried exporting software and succeeded wildly have been asm.js/WebAssembly and Rust, and it’d be worth considering how to translate those successes into Firefox-related ones.  I’d like to make a follow-up post exploring some of those ideas soon.


Air Mozilla: Emerging Technologies Speaker Series 5.25.17

Emerging Technologies Speaker Series 5.25.17. The Emerging Technologies (Distinguished) Speaker Series is a fortnightly 1-hour speaker series on Tuesdays at 10am, for both external and internal speakers to speak on...

Air Mozilla: Reps Weekly Meeting May 25, 2017

Reps Weekly Meeting May 25, 2017. This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

Firefox Nightly: These Weeks in Firefox: Issue 17

Highlights

Friends of the Firefox team

(Give a shoutout/thanks to people for helping fix and test bugs. Introductions)

  • Resolved bugs (excluding employees):
    • More than one bug fixed: https://mzl.la/2rwO12P
      • Dan Banner
      • Kevin Jones
      • Milind L (:milindl)
      • Swapnesh Kumar Sahoo [:swapneshks]
      • tfe
    • New contributors (🌟 = First Patch!)
      • 🌟 jomer14 got rid of some leftover l10n files that Firefox Accounts didn’t need anymore!
      • 🌟 Pauline got rid of PlacesUtils.asyncGetBookmarkIds (which isn’t needed anymore thanks to Bookmarks.jsm), which also reduced our memory footprint!
      • 🌟 Shashwat Jolly cleaned up some of our strings in about:license!

Project Updates

Activity Stream

  • ‘Graduation’ team is getting close to preffing on Activity Stream in Nightly and working on Search Suggestions, about:home, cleaning up tests, and perf telemetry
    • Aiming for more regular / weekly landings from github to mozilla-central
    • Replaced custom React search suggestions from Test Pilot with existing contentSearchUI used for about:home/about:newtab simplifying tests
  • Test Pilot team is finishing up customization (drag’n’drop, add/edit topsite), as well as beginning work on Sections
  • Activity Stream, when enabled on m-c, runs in the content process!
  • Removed most of the Content Services / Suggested Tiles code from about:newtab, resulting in perf improvements and ~8x fewer intermittent test failures

Electrolysis (e10s)

  • The e10s-multi team is still looking over the data being gathered from the Beta population against the release criteria to determine whether or not e10s-multi will ship in Firefox 54, or will have to wait until Firefox 55
  • e10s-a11y support in Firefox 55 has been marked “at risk” due to stability issues. A go/no-go decision will happen no later than June 3rd
  • erahm has a blog post about how memory usage with 4 content processes continues to be the sweet spot

Firefox Core Engineering

  • Reminder: Firefox 55 is also installing 64-bit by default on 64-bit OS’s; updates will come later.
  • Resolved race condition in Rust runtime. This may have been bugging some tests (no pun intended).
  • We have preliminary top crashers lists from crash stacks sent via crash pings. Expanding that analysis is pending review of the data & correlation with crash reports.

Form Autofill

Photon

Performance
  • More rigorous reflow tests have landed for window opening, tab opening and tab closing.
  • Kudos to the Structure / Menus team for making the subview animations smooth as silk! (Notice that Oh no! Reflow! is detecting no synchronous reflows in that video)
  • Task.jsm and Promise.defer() removal big patches landed as pre-announced during the last meeting. This covered the browser/ and toolkit/ folders, more patches coming soon for other folders.
Structure
  • The page action menu has started taking shape and now has items to copy/email links, and will soon have a ‘send to device’ submenu;
  • Tomorrow’s nightly will have Firefox Account and cut/copy/paste items in the main hamburger panel;
  • Main work on the permanent overflow panel (as a replacement for the customizable bits of the existing hamburger panel) is done, working on polish and bugfixes;
  • Work will start on the new library button this week;
  • We’ll be working to flip the photon pref by default on Nightly in the next week or two;
Animation
Visuals
Onboarding
Preferences
  • Search is taking shape on Nightly! It now comes with the right highlight color, tooltips for sub-dialog search results. With the help from QA engineers, we are closing the gap between implementation & spec.
  • UX team asked us to revise re-org. The change will likely postpone re-org shipping by one release (to 56); the good news is the release population will be presented with the new search & re-org at the same time, if that happens.

Project Mortar (PDFium)

  • peterv made some progress on the JSPlugin architecture – the last few pieces of work are in review. Hopefully this is the final round of review, and we will land all of them (8 patches!) soon.

Search

  • One-off buttons in the location bar are ready to ride the trains in 55.
  • Search suggestions are now enabled by default. Users who had explicitly opted out of search suggestions in the past will not see them.
  • Hi-res favicon improvements: the default globe favicon is now hi-res (SVG) everywhere in the UI and some ugly icon rescaling was fixed.

Sync / Firefox Accounts

Test Pilot

  • Containers experiment release 2.3.0 coming this week
    • adds a “site assignment” on-boarding panel to increase site assignments
      A screenshot of a panel explaining how you can choose sites that will always open in a container.
    • Also removes SDK code!
  • Screenshots feature is now aiming for 55
    • We’re hoping WebExtensions start-up will be performant enough by 55
    • Our backup plan: move UI (toolbar button, context menu item) into bootstrap.js code, lazy-load WebExtension on click
  • We’re planning to start a Test Pilot blog (with help from Marketing)
  • Test Pilot and experiments are moving away from the SDK

  • Looking into replacing Test Pilot addon functionality with a WebExtension API Experiment (learn more)

Here are the raw meeting notes that were used to derive this list.

Want to help us build Firefox? Get started here!

Here’s a tool to find some mentored, good first bugs to hack on.

Niko Matsakis: Query structure in chalk

For my next post discussing chalk, I want to take kind of a different turn. I want to talk about the general structure of chalk queries and how chalk handles them right now. (If you’ve never heard of chalk, it’s sort of a “reference implementation” for Rust’s trait system, as well as an attempt to describe Rust’s trait system in terms of its logical underpinnings; see this post for an introduction to the big idea.)

The traditional, interactive Prolog query

In a traditional Prolog system, when you start a query, the solver will run off and start supplying you with every possible answer it can find. So if I put something like this (I’m going to start adopting a more Rust-like syntax for queries, versus the Prolog-like syntax I have been using):

?- Vec<i32>: AsRef<?U>

The solver might answer:

Vec<i32>: AsRef<[i32]>
    continue? (y/n)

This continue bit is interesting. The idea in Prolog is that the solver is finding all possible instantiations of your query that are true. In this case, if we instantiate ?U = [i32], then the query is true (note that the solver did not, directly, tell us a value for ?U, but we can infer one by unifying the response with our original query). If we were to hit y, the solver might then give us another possible answer:

Vec<i32>: AsRef<Vec<i32>>
    continue? (y/n)

This answer derives from the fact that there is a reflexive impl (impl<T> AsRef<T> for T) for AsRef. If we were to hit y again, then we might get back a negative response:

no

Naturally, in some cases, there may be no possible answers, and hence the solver will just give me back no right away:

?- Box<i32>: Copy
    no

In some cases, there might be an infinite number of responses. So for example if I gave this query, and I kept hitting y, then the solver would never stop giving me back answers:

?- Vec<?U>: Clone
   Vec<i32>: Clone
     continue? (y/n)
   Vec<Box<i32>>: Clone
     continue? (y/n)
   Vec<Box<Box<i32>>>: Clone
     continue? (y/n)
   Vec<Box<Box<Box<i32>>>>: Clone
     continue? (y/n)

As you can imagine, the solver will gleefully keep adding another layer of Box until we ask it to stop, or it runs out of memory.

Another interesting thing is that queries might still have variables in them. For example:

?- Rc<?T>: Clone

might produce the answer:

Rc<?T>: Clone
    continue? (y/n)

After all, Rc<?T>: Clone is true no matter what type ?T is.

Do try this at home: chalk has a REPL

I should just note that aturon recently added a REPL to chalk, which means that – if you want – you can experiment with some of the examples from this blog post. It’s not really a “polished tool”, but it’s kind of fun. I’ll give my examples using the REPL.

How chalk responds to a query

chalk responds to queries somewhat differently. Instead of trying to enumerate all possible answers for you, it is looking for an unambiguous answer. In particular, when it tells you the value for a type variable, that means that this is the only possible instantiation that you could use, given the current set of impls and where-clauses, that would be provable.

Overall, chalk’s answers have three parts:

  • Status: Yes, No, or Maybe
  • Refined goal: a version of your original query with some substitutions applied
  • Lifetime constraints: these are relations that must hold between the lifetimes that you supplied as inputs. I’ll come to this in a bit.

Future compatibility note: It’s worth pointing out that I expect some the particulars of a “query response” to change, particularly as aturon continues the work on negative reasoning. I’m presenting the current setup here, for the most part, but I also describe some of the changes that are in flight (and expected to land quite soon).

Let’s look at these three parts in turn.

The status and refined goal of a query response

The “status” tells you how sure chalk is of its answer, and it can be yes, maybe, or no.

A yes response means that your query is uniquely provable, and in that case the refined goal that we’ve given back represents the only possible instantiation. In the examples we’ve seen so far, there was one case where chalk would have responded with yes:

> cargo run
?- load libstd.chalk
?- exists<T> { Rc<T>: Clone }
Solution {
    successful: Yes,
    refined_goal: Query {
        value: Constrained {
            value: [
                Rc<?0>: Clone
            ],
            constraints: []
        },
        binders: [
            U0
        ]
    }
}

(Since this is the first example using the REPL, a bit of explanation is in order. First, cargo run executes the REPL, naturally. The first command, load libstd.chalk, loads up some standard type/impl definitions. The next command, exists<T> { Rc<T>: Clone }, is the actual query. In the earlier Prolog examples, I used the Prolog convention, which is to implicitly add the “existential quantifiers” based on syntax. chalk is more explicit: writing exists<T> { ... } here is saying “is there a T such that ... is true?”. In future examples, I’ll skip over the first two lines.)

You can see that the response here (which is just the Debug impl for chalk’s internal data structures) included not only Yes, but also a “refined-goal”. I don’t want to go into all the details of how the refined goal is represented just now, but if you skip down to the value field you will pick out the string Rc<?0>: Clone – here the ?0 indicates an existential variable. This is saying that the “refined” goal is the same as the query, meaning that Rc<T>: Clone is true no matter what T is. (We saw the same thing in the Prolog case.)

So what about some of the more ambiguous cases? For example, what happens if we ask exists<T> { Vec<T>: Clone }? This case is trickier, because for Vec<T> to be Clone, T must be Clone, so it matters what T is:

?- exists<T> { Vec<T>: Clone }
Solution {
    successful: Maybe,
    ... // elided for brevity
}

Here we get back maybe. This is chalk’s way of saying that the query is provable for some instantiations of ?T, but we need more type information to find a unique answer. The idea is that we will continue type-checking or processing in the meantime, which may yield results that further constrain ?T; e.g., maybe we find a call to vec.push(22), indicating that the type of the values within is i32. Once that happens, we can repeat the query, but this time with a more specific value for ?T, so something like Vec<i32>: Clone:

?- Vec<i32>: Clone
Solution {
    successful: Yes,
    ...
}

Finally, sometimes chalk can decisively prove that something is not provable. This would occur if there is just no impl that could possibly apply (but see aturon’s post, which covers how we plan to extend chalk to be able to reason beyond a single crate):

?- Box<i32>: Copy
`Copy` is not implemented for `Box<i32>` in environment `Env(U0, [])`

Refined goal in action

The refined goal so far hasn’t been very important; but it’s generally a way for the solver to communicate back a kind of substitution – that is, to communicate back what values the type variables have to have in order for the query to be provable. Consider this query:

?- exists<U> { Vec<i32>: AsRef<Vec<U>> }

Now, in general, a Vec<i32> implements AsRef twice:

  • Vec<i32>: AsRef<Slice<i32>> (chalk doesn’t understand the syntax [i32], so I made a type Slice for it)
  • Vec<i32>: AsRef<Vec<i32>>

But here, we know we are looking for AsRef<Vec<U>>. This implies then that U must be i32. And indeed, if we give this query, chalk tells us so, using the refined goal:

?- exists<U> { Vec<i32>: AsRef<Vec<U>> }
Solution {
    successful: Yes,
    refined_goal: Query {
        value: Constrained {
            value: [
                Vec<i32>: AsRef<Vec<i32>>
            ],
            constraints: []
        },
        binders: []
    }
}

Here you can see that there are no variables. Instead, we see Vec<i32>: AsRef<Vec<i32>>. If we unify this with our original query (skipping past the exists part), we can deduce that U = i32.

You might imagine that the refined goal can only be used when the response is yes – but, in fact, this is not so. There are times when we can’t say for sure if a query is provable, but we can still say something about what the variables must be for it to be provable. Consider this example:

?- exists<U, V> { Vec<Vec<U>>: AsRef<Vec<V>> }
Solution {
    successful: Maybe,
    refined_goal: Query {
        value: Constrained {
            value: [
                Vec<Vec<?0>>: AsRef<Vec<Vec<?0>>>
            ],
            constraints: []
        },
        binders: [
            U0
        ]
    }
}

Here, we were asking if Vec<Vec<U>> implements AsRef<Vec<V>>. We got back a maybe response. This is because the AsRef impl requires us to know that U: Sized, and naturally there are many sized types that U could be, so we need to wait until we get more information to give back a definitive response.

However, leaving aside concerns about U: Sized, we can see that Vec<Vec<U>> must equal Vec<V>, which implies that, for this query to be provable, Vec<U> = V must hold. And the refined goal reflects as much:

Vec<Vec<?0>>: AsRef<Vec<Vec<?0>>>

Open vs closed queries

Queries in chalk are always “closed” formulas, meaning that all the variables that they reference are bound by either an exists<T> or a forall<T> binder. This is in contrast to how the compiler works, or a typical prolog implementation, where a trait query occurs in the context of an ongoing set of processing. In terms of the current rustc implementation, the difference is that, in rustc, when you wish to do some trait selection, you invoke the trait solver with an inference context in hand. This defines the context for any inference variables that appear in the query.

In chalk, in contrast, the query starts with a “clean slate”. The only context that it needs is the global context of the entire program – i.e., the set of impls and so forth (and you can consider those part of the query, if you like).

To see the difference, consider this chalk query that we looked at earlier:

?- exists<U> { Vec<i32>: AsRef<Vec<U>> }

In rustc, such a query would look more like Vec<i32>: AsRef<Vec<?22>>, where we have simply used an existing inference variable (?22). Moreover, the current implementation simply gives back the yes/maybe/no part of the response, and does not have a notion of a refined goal. This is because, since we have access to the raw inference variable, we can just unify ?22 (e.g., with i32) as a side-effect of processing the query.

The new idea then is that when some part of the compiler needs to prove a goal like Vec<i32>: AsRef<Vec<?22>>, it will first create a canonical query from that goal (chalk code is in query.rs). This is done by replacing all the random inference variables (like ?22) with existentials. So you would get exists<T> Vec<i32>: AsRef<Vec<T>> as the output. One key point is that this query is independent of the precise inference variables involved: so if we have to solve this same query later, but with different inference variables (e.g., Vec<i32>: AsRef<Vec<?44>>), when we make the canonical form of that query, we’d get the same result.

Once we have the canonical query, we can invoke chalk’s solver. The code here varies depending on the kind of goal, but the basic strategy is the same. We create a “fulfillment context”, which is the combination of an inference context (a set of inference variables) and a list of goals we have yet to prove. (The compiler has a similar data structure, but it is set up somewhat differently; for example, it doesn’t own an inference context itself.)

Within this fulfillment context, we can “instantiate” the query, which means that we replace all the variables bound in an exists<> binder with an inference variable (here is an example of code invoking instantiate()). This effectively converts back to the original form, but with fresh inference variables. So exists<T> Vec<i32>: AsRef<Vec<T>> would become Vec<i32>: AsRef<Vec<?0>>. Next we can actually try to prove the goal, for example by searching through each impl, unifying the goal with the impl header, and then recursively processing the where-clauses on the impl to make sure they are satisfied.
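
To make the canonicalize/instantiate round trip described in the last two paragraphs a bit more concrete, here is a small, self-contained Rust sketch. The types and function names are invented for illustration and are far simpler than chalk’s real data structures; the only point is the mechanics: canonicalization numbers inference variables in order of first appearance, so Vec<i32>: AsRef<Vec<?22>> and Vec<i32>: AsRef<Vec<?44>> produce the same closed query, and instantiation maps the bound existentials back to fresh inference variables.

use std::collections::HashMap;

#[derive(Debug, PartialEq)]
enum Ty {
    Infer(u32),             // an inference variable such as ?22
    Apply(String, Vec<Ty>), // e.g. Vec<i32> is Apply("Vec", [Apply("i32", [])])
}

#[derive(Debug, PartialEq)]
struct Goal {
    trait_ref: String, // e.g. "AsRef"
    self_ty: Ty,
    args: Vec<Ty>,
}

#[derive(Debug, PartialEq)]
struct CanonicalGoal {
    binders: usize, // how many exists<> binders were introduced
    goal: Goal,
}

// Replace each inference variable, in order of first appearance, with a
// bound existential (?0, ?1, ...), so the result no longer depends on the
// original variable numbering.
fn canonicalize(goal: &Goal) -> CanonicalGoal {
    fn fold(ty: &Ty, map: &mut HashMap<u32, u32>) -> Ty {
        match ty {
            Ty::Infer(v) => {
                let next = map.len() as u32;
                Ty::Infer(*map.entry(*v).or_insert(next))
            }
            Ty::Apply(name, args) => Ty::Apply(
                name.clone(),
                args.iter().map(|t| fold(t, map)).collect(),
            ),
        }
    }
    let mut map = HashMap::new();
    let goal = Goal {
        trait_ref: goal.trait_ref.clone(),
        self_ty: fold(&goal.self_ty, &mut map),
        args: goal.args.iter().map(|t| fold(t, &mut map)).collect(),
    };
    CanonicalGoal { binders: map.len(), goal }
}

// The reverse direction: replace each bound existential with a fresh
// inference variable taken from the fulfillment context's counter.
fn instantiate(canonical: &CanonicalGoal, next_infer: &mut u32) -> Goal {
    fn subst(ty: &Ty, base: u32) -> Ty {
        match ty {
            Ty::Infer(v) => Ty::Infer(base + v),
            Ty::Apply(name, args) => Ty::Apply(
                name.clone(),
                args.iter().map(|t| subst(t, base)).collect(),
            ),
        }
    }
    let base = *next_infer;
    *next_infer += canonical.binders as u32;
    Goal {
        trait_ref: canonical.goal.trait_ref.clone(),
        self_ty: subst(&canonical.goal.self_ty, base),
        args: canonical.goal.args.iter().map(|t| subst(t, base)).collect(),
    }
}

fn main() {
    // Vec<i32>: AsRef<Vec<?var>> for two different inference variables.
    let query = |var| Goal {
        trait_ref: "AsRef".into(),
        self_ty: Ty::Apply("Vec".into(), vec![Ty::Apply("i32".into(), vec![])]),
        args: vec![Ty::Apply("Vec".into(), vec![Ty::Infer(var)])],
    };
    // Canonicalization erases the difference between ?22 and ?44...
    let canonical = canonicalize(&query(22));
    assert_eq!(canonical, canonicalize(&query(44)));
    // ...and instantiation converts back to an open goal with fresh variables.
    let mut next_infer = 100;
    println!("{:?}", instantiate(&canonical, &mut next_infer));
}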

An advantage of the chalk approach where queries are closed is that they are much easier to cache. We can solve the query once and then “replay” the result an endless number of times, so long as the enclosing context is the same.

Lifetime constraints

I’ve glossed over one important aspect of how chalk handles queries, which is the treatment of lifetimes. In addition to the refined goal, the response from a chalk query also includes a set of lifetime constraints. Roughly speaking, the model is that the chalk engine gives you back the lifetime constraints that would have to be satisfied for the query to be provable.

In other words, if you have a full, lifetime-aware logic, you might say that the query is provable in some environment Env that also includes some facts about the lifetimes (i.e., which lifetime outlives which other lifetime, and so forth):

Env, LifetimeEnv |- Query

but in chalk we only give in the Env, and the engine gives back to us a LifetimeEnv:

chalk(Env, Query) = LifetimeEnv

with the intention that, if we can prove that LifetimeEnv holds, we know Query also holds.

One of the main reasons for this split is that we want to ensure that the results from a chalk query do not depend on the specific lifetimes involved. This is because, in part, we are going to be solving chalk queries in contexts when lifetimes have been fully erased, and hence we don’t actually know the original lifetimes or their relationships to one another. (In this case, the idea is roughly that we will get back a LifetimeEnv with the relationships that would have to hold, but we can be sure that an earlier phase in the compiler has proven to us that this LifetimeEnv will be satisfied.)

Anyway, I plan to write a follow-up post (or more…) focusing just on lifetime constraints, so I’ll leave it at that for now. This is also an area where we are doing some iteration, particularly because of the interactions with specialization, which are complex.

Future plans

Let me stop here to talk a bit about the changes we have planned. aturon has been working on a branch that makes a few key changes. First, we will replace the notion of “refined goal” with a more straight-up substitution. That is, we’d like chalk to answer back with something that just tells you the values for the variables you’ve given. This will make later parts of the query processing easier.

Second, following the approach that aturon outlined in their blog post, when you get back a “maybe” result, we are actually going to be considering two cases. The current code will return a refined substitution only if there is a unique assignment to your input variables that must be true for the goal to be provable. But in the newer code, we will also have the option to return a “suggestion” – something which isn’t necessary for the goal to be provable, but which we think is likely to be what the user wanted. We hope to use this concept to help replicate, in a more structured and bulletproof way, some of the heuristics that are used in rustc itself.

Finally, we plan to implement the “modal logic” operators, so that you can make queries that explicitly reason about “all crates” vs “this crate”.

Eric Rahm: A Rust-based XML parser for Firefox

Goal: Replace Gecko’s XML parser, libexpat, with a Rust-based XML parser

Firefox currently uses an old, trimmed down, and slightly modified version of libexpat, a library written in C, to support parsing of XML documents. These files include plain old XML on the web, XSLT documents, SVG images, XHTML documents, RDF, and our own XUL UI format. While it has served its job well, it has long been unmaintained and has been a source of many security vulnerabilities, a few of which I’ve had the pleasure of looking into. It’s 13,000 lines of rather hard to understand code, and tracing through everything when looking into security vulnerabilities can take days at a time.

It’s time for a change. I’d like us to switch over to a Rust-based XML parser to help improve our memory safety. We’ve done this already with at least two other projects: an mp4 parser, and a url parser. This seems to fit well into that mold: a standalone component with past security issues that can be easily swapped out.

There have been suggestions of adding full XML 1.0 v5 support, there’s a 6-year old proposal to rewrite our XML stack which doesn’t include replacing expat, and there’s talk of the latest and greatest, but not quite fully specced, XML5. These are all interesting projects, but they’re large efforts. I’d like to see us make a reasonable change now.

What do we want?

In order to avoid scope creep and actually implement something in the short term I just want a library we can drop in that has parity with the features of libexpat that we currently use. That means:

  • A streaming, SAX-like interface that generates events as we feed it a stream of data
  • Support for DTDs and external entities
  • XML 1.0 v4 (possibly v5) support
  • A UTF-16 interface. This isn’t a firm requirement; we could convert from UTF-16 -> UTF-8 -> UTF-16, but that’s clearly sub-optimal
  • As fast as expat with a low memory footprint

Why do we need UTF-16?

Short answer: That’s how our current XML parser stack works.

Slightly longer answer: In Firefox libexpat is wrapped by nsExpatDriver which implements nsITokenizer. nsITokenizer uses nsScanner which exposes the data it wraps as UTF-16 and takes in nsAString, which as you may have guessed is a wide string. It can also read in c-strings, but internally it performs a character conversion to UTF-16. On the other side all tokenized data is emitted as UTF-16 so all consumers would need to be updated as well. This extends further out, but hopefully that’s enough to explain that for a drop-in replacement it should support UTF-16.

What don’t we need?

We can drop the complexity of our parser by excluding parts of expat or more modern parsers that we don’t need. In particular:

  • Character conversion (other parts of our engine take care of this)
  • XML 1.1 and XML5 support
  • Output serialization
  • A full rewrite of our XML handling stack

What are our options?

There are three Rust-based parsers that I know of, none of which quite fit our needs:

  • xml-rs
    • StAX based, we prefer SAX
    • Doesn’t support DTD, entities
    • UTF-8 only
    • Doesn’t seem very active
  • RustyXML
    • Is SAX-like
    • Doesn’t support DTD, entities
    • Seems to only support UTF-8
    • Doesn’t seem to be actively developed
  • xml5ever
    • Used in Servo
    • Only aims to support XML5
    • Permissive about malformed XML
    • Doesn’t support DTD, entities

Where do we go from here?

My recommendation is to implement our own parser that fits the needs and use cases of Firefox specifically. I’m not saying we’d necessarily start from scratch, it’s possible we could fork one of the existing libraries or just take inspiration from a little bit of all of them, but we have rather specific requirements that need to be met.

Air Mozilla: Bugzilla Project Meeting, 24 May 2017

Bugzilla Project Meeting The Bugzilla Project developers meeting.

Air Mozilla: 2017 Global Sprint Ask Me Anything #2

2017 Global Sprint Ask Me Anything #2 We'll be answering questions and chatting about the 2017 Global Sprint, Mozilla's 2-day, world-wide collaboration party for the open web.

Air Mozilla: Weekly SUMO Community Meeting May 24, 2017

Weekly SUMO Community Meeting May 24, 2017 This is the sumo weekly call

Giorgos Logiotatidis: Monitoring python cron jobs with Babis

Over at MozMeao we are using APScheduler to schedule the execution of periodic tasks, like Django management tasks to clear sessions or to fetch new job listings for Mozilla Careers website.

A couple of services provide monitoring of cron job execution including HealthChecks.io and DeadManSnitch. The idea is that you ping a URL after the successful run of the cron job. If the service does not receive a ping within a predefined time window then it triggers notifications to let you know.

With shell scripts this is as simple as running curl after your command:

$ ./manage.py clearsessions && curl https://hchk.io/XXXX

For python based scripts like APScheduler's I created a tool to help with that:

Babis provides a function decorator that will ping monitor URLs for you. It will ping a URL before the start or after the end of the execution of your function. With both before and after options combined, the time required to complete the run can be calculated.

You can also rate limit your pings. So if you’re running a cron job every minute but your check window is every 15 minutes, you can play nicely and avoid DOSing your monitor by defining a rate of at most one request per 15 minutes with 1/15m.

In some cases network hiccups or monitor service maintenance can cause Babis to fail. With the silent_failures flag you can ignore any failures to ping the defined URLs.

The most common use of Babis is to ping a URL after the function has returned without throwing an exception.

@babis.decorator(ping_after='https://hchk.io/XXXX')
def cron_job():
  pass

Babis is available on PyPI and just a pip install away. Learn more at Babis GitHub Page

Air Mozilla: May 2017 Privacy Lab - Privacy on the Blockchain: An Introduction to Zcash

May 2017 Privacy Lab - Privacy on the Blockchain: An Introduction to Zcash Kevin Gallagher will talk about Zcash and privacy. Zcash is the first open, permissionless cryptocurrency that can fully protect the privacy of transactions using zero-knowledge...

Justin Dolske: Photon Engineering Newsletter #2

That’s right! Time for another Photon Engineering Newsletter! (As I gradually catch up with real-time, this update covers through about May 16th).

May got off to a busy start for the Photon team. As I mentioned in last week’s update, the team has largely shifted from planning to implementation, so visible changes are starting to come quickly.

Work Week

A particularly big event was the Photon team gathering in Mozilla’s Toronto office for a work week. About 50 people from Engineering, UX, User Research, and Program Management gathered, from all over the world, with a focus on building Photon. Mozilla operates really well with distributed/remote teams, but periodically getting together to do things face-to-face is super useful to work through issues more quickly.

It was really terrific to see so many people coming together to hack on Photon. We got a lot done (more on that below), saw some great demos, and the energy was high. And of course, no workweek is complete without UX creating some fun posters.

One important milestone reached during the week was setting the initial scope for what’s going to be included in Photon (or, more bluntly, what’s NOT going to be part of Photon when it ships in Firefox 57). We’re still refining estimates, but it looks like all the major work planned for Photon can be accomplished. Most things placed in the “reserve backlog” (meaning we’ll do them if there’s extra time left, but we’re not committing to them) are minor or nice-to-have things.

This is probably a good spot to talk a little more about our schedule. Firefox 57 is the release that Quantum and Photon are targeting. It’s scheduled to ship on November 14th, but there are important milestones before then… September 22nd is when 57 enters Beta; after that point it’s increasingly hard to make changes (because we don’t want to destabilize Firefox right before it ships). August 7th is when Nightly becomes version 57. This is also our target date to be complete with “major work” for Photon. This might seem a little surprising, but it’s due to the recent process change that removed Aurora. Under the old release schedule, August 7th is when Nightly-57 would have uplifted to Aurora (beginning ~12 weeks of stabilization before release). We want to keep the same amount of stabilization time (for a project as big as Photon there is always fallout and followup work), so we kept the same calendar date for Photon’s target. This doesn’t mean we’ll be “done” on August 7th, just that the focus will be shifting from implementing features to fixing bugs, improving quality, and final polish.

And now for the part you’ve all been waiting for, the recent changes!

Recent Changes

Menus/structure:

  • The page action menu (aka the “…” button at the end of the URL bar) got the first menu items added to it, Copy URL and Email Link. More coming, as this becomes the standard place for “actions you can perform with this page” items.
  • The new hamburger menu is coming along, although still disabled by default (via the browser.photon.structure.enabled pref). Items for character encoding, work offline, and the devtools submenu have been added to it.
  • The new overflow menu (also disabled by default, same pref) is nearing completion with 2/3 of the bugs fixed. This menu is now shown in customize mode (instead of the old hamburger menu); so instead of only showing icons that couldn’t fit in the navbar (e.g. because you made your window too narrow), it’s the new place you can customize with buttons you want to be easily accessible without always taking up navbar space.

Animation:

  • When rearranging tabs in the tabstrip, a snappier animation rate is now used.
  • Work continues on animations for downloads toolbar button, stop/reload button, and page loading indicator – but these haven’t landed yet.

Preferences:

  • The “Updates” section of preferences now shows the current Firefox version.
  • Good progress at the workweek on fixing the first set of bugs needed to enable searching within preferences.
  • UX is working on some further changes to the reorganization that we believe will improve it.

Visual redesign:

  • The new styling for the location and search bars landed.
  • The stop/reload button has been removed from the end (inside) of the location bar, and is now a normal toolbar button to the left of the location bar.
  • The back/forward buttons have been detached from the location bar.
  • URLs that are longer than the location box can display now fade out at the end.
  • Minor update to the about:privatebrowsing page (shown when opening a new private window).
  • Upcoming work on compact/touch modes for the toolbar and more toolbar button style changes.

Onboarding:

  • Running Funnelcake tests for the new tour notification.
  • Built a prototype of the new tab page tour overlay at the workweek.
  • Will be adding new automigration UI to the Activity Stream new tab page. Users trying Firefox for the first time will no longer immediately see the old data migration wizard (which makes for a pretty poor first impression). Instead, Firefox will automatically import from your previous browser, so you launch straight into Firefox and can see your data. There’s also a clear message indicating what happened, and giving you the choice to keep (or not) the data, or try importing from a different browser. This screencast shows the general flow:

Performance:

  • Florian landed a massive series of patches (assisted by an automated code-rewriting tool) that switches use of Task.jsm/yield to ES7 async/await. The native ES7 code is more efficient, and we’ve often seen the older Task.jsm usage show in the profiler. This also helps with modernizing Firefox’s front end, which extensively uses JavaScript.
  • The animation shown when opening a window is now suppressed for the first window to be opened.
  • Tab navigation and restoring now cause less visual noise in the tab title, by skipping the display of unnecessary text (e.g. “Loading” and “New Tab”).
  • A few things have been moved off the startup path, so that Firefox launches faster.
  • Removed some synchronous reflows when adding and removing tabs and when interacting with the AwesomeBar.
  • Working on adding tests to detect synchronous reflows, so that we can ensure they don’t sneak back in after we remove them.


This concludes update #2.


Andreas Tolfsen: What is libexec?

Today, as I was working on importing and building geckodriver in mozilla-central, I found myself head first down a rabbit hole trying to figure out why the mach try command we use for testing changesets in continuous integration complained that I didn’t have git-cinnabar installed:

% ./mach try -b do -p all -u all -t none
mach try is under development, please file bugs blocking 1149670.
ERROR git-cinnabar is required to push from git to try with the autotry command.

More information can by found at https://github.com/glandium/git-cinnabar

As I’ve explained previously, git-cinnabar is a git extension for interacting with remote Mercurial repositories. It’s a godsend written by fellow Mozillian Mike Hommey that lets me do my work without getting my hands dirty with hg.

As one might suspect, mach try uses whereis(1) to look for the git-cinnabar binary. However, as it is a helper program that is not meant to be invoked directly, but rather be despatched through git cinnabar (without hyphen), it gets installed into /usr/local/libexec/git-core. Since I had never heard about libexec before, I decided to do some research.

libexec is meant for system daemons and system utilities executed by other programs. That is, the binaries put in this namespaced directory are meant for the consumption of other programs, and are not intended to be executed directly by users.

In fact, libexec is defined in the Filesystem Hierarchy Standard published by the Linux Foundation in this way:

/usr/libexec includes internal binaries that are not intended to be executed directly by users or shell scripts. Applications may use a single subdirectory under /usr/libexec.

On my preferred Linux system, Debian, there is apparently also /usr/local/libexec, which as far as I understand is meant to complement /usr/libexec in the same way that /usr/local complements /usr. It provides a tertiary hierarchy for local data and programs that may be shared amongst hosts and that are safe from being overwritten when the system is upgraded. This is exactly what I want, since I installed git-cinnabar from source.

It was somewhat surprising to me to find that whereis(1) (or at least the util-linux version of it) does not provide a flag for searching auxiliary support programs located in libexec, when it is capable of searching for manuals and sources, in addition to executable binaries:

% whereis -h

Usage:
 whereis [options] [-BMS <dir>... -f] <name>

Locate the binary, source, and manual-page files for a command.

Options:
 -b         search only for binaries
 -B <dirs>  define binaries lookup path
 -m         search only for manuals and infos
 -M <dirs>  define man and info lookup path
 -s         search only for sources
 -S <dirs>  define sources lookup path
 -f         terminate <dirs> argument list
 -u         search for unusual entries
 -l         output effective lookup paths

For more details see whereis(1).

To further complicate things, git itself has no option for reporting where it finds its internally-called programs. It does have an --exec-path flag that tells you where internal git commands are kept, but not where optionally installed git extensions are.

I think the fix to my problem is to tell mach try to include both /usr/libexec/git-core and /usr/local/libexec/git-core in the search path when it looks for git extensions, but maybe there is a more elegant way to check if git has a particular subcommand available? Certainly it’s conceivable to just call git cinnabar --help or similar and check its exit code.
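
For what it’s worth, that last idea is easy to sketch. Here is a rough version in Rust (mach itself is Python, so treat this purely as an illustration of the exit-code check; git may route `git cinnabar --help` through its man-page machinery, which is part of why this probe is not obviously reliable):

use std::process::{Command, Stdio};

// Run `git <subcommand> --help` and treat a zero exit status as "the
// subcommand exists". Whether --help is the most robust probe is exactly
// the open question raised above; this only shows the exit-code idea.
fn git_has_subcommand(name: &str) -> bool {
    Command::new("git")
        .arg(name)
        .arg("--help")
        .stdout(Stdio::null())
        .stderr(Stdio::null())
        .status()
        .map(|status| status.success())
        .unwrap_or(false)
}

fn main() {
    println!("git-cinnabar available: {}", git_has_subcommand("cinnabar"));
}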

Firefox UX: Free Templates to Keep Your Design Sprint on Track

Like many organizations, Mozilla Firefox has been experimenting with the Google Ventures Design Sprint method as one way to quickly align teams and explore product ideas. Last fall I had the opportunity to facilitate a design sprint for our New Mobile Experiences team to explore new ways of connecting people to the mobile web. Our team had a productive week and you can read more about our experience in this post on the Sprint Stories website.

This year, Google I/O included a panel of Google “Sprint Masters” discussing how they use Design Sprints in their work: Using Design Sprints to Increase Cross-Functional Collaboration. A common theme during the discussion was the importance of detailed facilitator planning in advance of the Design Sprint.

What is the most important thing to do in preparing for a Sprint?
“The most important thing is planning, planning, and then more planning. I want to emphasize the fact that there are unknowns during a Design Sprint, so you have to even plan for those. And what that looks like is the agenda, how you want to pace the activities, and who you want in the room.” — Ratna Desai, Google UX Lead

When I was planning my first Design Sprint as a facilitator, I was grateful for the detailed daily outlines in the GV Library, as well as the Monday morning kickoff presentation. But, given the popularity of Design Sprints, I was surprised that I was unable to find (at the time at least) examples of daily presentations that I could repurpose to keep us on track. There is a lot to cover in a 5 day Design Sprint. It’s a challenge to even remember what is coming next, let alone keep everything timely and orderly. I relied heavily on these Keynote slides to pace our activities and I hope that they can serve as a resource for other first time Design Sprint facilitators by cutting down on some of your planning time.

These slides closely follow the basic approach described in the Sprint book, but could be easily modified to suit your specific needs. They also include some presenter notes to help you describe each activity. Shout out to Unsplash for the amazing photos.

Design Sprint Daily Agenda Templates

Monday

Monday Keynote Slides (link)

10:00 — Introductions. Sprint overview. Monday overview.
10:15 — Set a long term goal. List sprint questions.
11:15 — Break
11:30 — Make a map
12:30 — Expert questions
1:00 — Lunch
2:00 — Ask the experts. Update the map. Make “How might we” notes.
4:00 — Break
4:15 — Organize the HMW notes. Vote on HMW notes.
4:35 — Pick a target

Tuesday

Tuesday Keynote Slides (link)

10:00 — Lightning demos
11:30 — Break
11:45 — Continue lightning demos
12:30 — Divide or swarm
1:00 — Lunch
2:00 — Four Step Sketch: Notes
2:30 — Four Step Sketch: Ideas
3:00 — Four Step Sketch: Crazy 8s
3:30 — Break
3:45 — Four Step Sketch: Solution sketch
4:45 — Recruitment criteria

Wednesday

Wednesday Keynote Slides (link)

10:00 — Sticky decisions: Showcase and vote
11:30 — Break
11:45 — Concept selection: Evaluate and vote
1:00 — Lunch
2:00 — Make a storyboard
3:30 — Break
3:45 — Complete storyboard
4:45 — Select participants

Thursday

Thursday Keynote Slides (link)

10:00 — Pick the right tools. Divide and conquer.
10:10 — Prototype
11:30 — Break
11:45 — Prototype. Begin interview guide.
1:00 — Lunch
2:00 — Stitch it together. Test run video set-up
2:45 — Prototype demo
3:00 — Pilot interview
3:30 — Break
3:45 — Finish prototype. Update interview guide.
4:45 — Interview preparations.

Friday

Friday Keynote Slides (link)

9:00 — Interview #1
10:00 — Break
10:30 — Interview #2
11:30 — Early lunch
12:30 — Interview #3
1:30 — Break
2:00 — Interview #4
3:00 — Break
3:30 — Interview #5
4:30 — Debrief


Free Templates to Keep Your Design Sprint on Track was originally published in Firefox User Experience on Medium, where people are continuing the conversation by highlighting and responding to this story.

Air Mozilla: Martes Mozilleros, 23 May 2017

Martes Mozilleros Reunión bi-semanal para hablar sobre el estado de Mozilla, la comunidad y sus proyectos. Bi-weekly meeting to talk (in Spanish) about Mozilla status, community and...

Firefox Nightly: Preview Form Autofill in Firefox Nightly

An early version of the new Form Autofill feature is ready for testing by early adopters with U.S. addresses and websites using Firefox Nightly on desktop. Form Autofill helps you fill out addresses in online forms. You can give the work-in-progress a try and watch it improve over time but keep in mind there are many more months of work left to go.

An address is selected from the autocomplete dropdown on the email field and the rest of the address fields are filled with the related saved information for that profile.

Firefox has an existing form history feature, which helps you fill out one field at a time using frecency. In contrast, Form Autofill completes all related address fields when you select a suggestion from the autocomplete dropdown on the focused field. Autofill profiles can contain one or more of the following: name, mailing address, phone number, and/or email. Multiple profiles are supported, so, for example, you can have separate personal and work profiles.

What’s currently supported

  • Basic address profile management for addresses that follow United States formats. For now you need to use this interface to create profiles for testing
  • Enabling/disabling the feature in preferences
  • Filling a form which uses the @autocomplete attribute on <input> elements for the supported fields, and with minimal transformations. This means it won’t work yet on most websites. Try it on our demo page, Macy’s, or any Shopify-powered store.

Coming soon on Nightly

  • Heuristics to determine field data types when @autocomplete isn’t used. This will make autofill work on many more sites.
  • In-content add/edit dialog UX
  • Support for <select> dropdowns
  • Automatic saving of submitted addresses and prompts to confirm changes after autofilling
  • Showing a preview of what will be autofilled upon highlighting an address in the autocomplete dropdown
  • Full UI translation

Before release

  • Support for credit cards
  • More data validation and cleansing
  • Sync between desktops with your Firefox Account
  • Significantly improved heuristics… there is a lot of work to do here yet
  • Support for more countries based on feedback from our localization communities
  • Support for data transformations on more types e.g. splitting a phone number into country code, area code, exchange, and number when required by sites
  • UI polish
  • Telemetry to measure accuracy of autofilled data as a quality metric

Try it out

To give Form Autofill a try, make sure you’re running Firefox Nightly and open Privacy preferences (you can type about:preferences#privacy in the address bar). Click the “Saved Profiles…” button beside “Enable Profile Autofill” and then click “Add” and save a new address. Then visit our demo page or other sites using the HTML5 autocomplete attribute, and you should see the autofill dropdown as seen in the image above.

Take a look at the Form Autofill wiki page for much more information and to get involved.

Thanks,
The Form Autofill Team

The Mozilla Blog: Mozilla Thimble Gets a Makeover

We’re introducing major upgrades to our educational code editor


Learning to code—from getting the hang of HTML tags to mastering the nuances of JavaScript—shouldn’t be a challenge. It should be fun, intuitive, hands-on and free of cost.

That’s why Mozilla built Thimble nearly five years ago. Much like Firefox enables users to browse the web, Thimble enables users to learn the web. It’s our browser-based tool for learning to code.

Today, we’re proud to announce Thimble is getting a makeover.

We’re introducing a suite of new features to make learning and teaching code even easier. Why? When more people can shape, and not just consume the web, the Internet becomes a healthier and more egalitarian, inclusive and funky place.

Thimble has taught hundreds of thousands of people across more than 200 countries. It’s been localized into 33 languages, and used in classrooms, at hackathons and at home. Thimble has also proved to be more than an educational code editor—it’s a creative platform. Thimble users can create personal webpages, comic strips, post cards, games and more.

And in true Mozilla fashion, Thimble is an open-source project. Many of those who learn to code with Thimble later return to offer tweaks and upgrades. Over 300 contributors from dozens of countries help build Thimble. Learn more about them here.

New Features


  • JavaScript console. Users can now debug their JavaScript projects within Thimble. It’s a simpler experience than the browser console, which can be cluttered and intimidating


  • Code snippets menu. Access a handy menu populated with HTML, CSS and JavaScript snippets. Creating content just got easier


  • ‘Favorite’ feature. Bookmark, organize and easily access projects you’re currently working on


  • Edit SVG image code directly. You can now change properties of SVG images—like fill and stroke color—within Thimble. No need to use an external editor or image software


  • Tajik language support. Add one more language to Thimble’s repertoire: Tajik, which is spoken in Tajikistan and Uzbekistan


  • Plus more, like an upgraded color picker; visible white space in code; updated file icons; an improved homepage gallery; easier file import and export mechanics; and the ability to disable autocomplete

Mozilla owes a debt of gratitude to the network of developers, designers, teachers and localizers around the world who made these upgrades possible—more than 300 contributors from 33 countries, like Brazil, Turkey, China and the UK. Their commitment to web literacy and open-source software makes the Internet healthier.

Now: start coding.

The post Mozilla Thimble Gets a Makeover appeared first on The Mozilla Blog.

Emily Dunham: Salt: Successful ping but highstate says "minion did not return"

Salt: Successful ping but highstate says “minion did not return”

Today I was setting up some new OSX hosts on Macstadium for Servo’s build cluster. The hosts are managed with SaltStack.

After installing all the things, I ran a test ping and it looked fine:

user@saltmaster:~$ salt newbuilder test.ping
newbuilder:
    True

However, running a highstate caused Salt to claim the minion was non-responsive:

user@saltmaster:~$ salt newbuilder state.highstate
newbuilder:
    Minion did not return. [No response]

Googling this problem yielded a bunch of other “minion did not return” kind of issues, but nothing about what to do when the minion sometimes returns fine and other times does not.

The fix turned out to be simple: When a test ping succeeds but a longer-running state fails, it’s an issue with the master’s timeout setting. The timeout defaults to 5 seconds, so a sufficiently slow job will look to the master like the minion was unreachable.

As explained in the Salt docs, you can bump the timeout by adding the line timeout: 30 (or whatever number of seconds you choose) to the file /etc/salt/master on the salt master host.
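
For reference, that is a one-line change to the master’s YAML config file (the salt-master service typically needs a restart to pick up the new value):

# /etc/salt/master
# Allow slow jobs (like a long highstate) more time before the master
# declares the minion unresponsive; the default is 5 seconds.
timeout: 30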

This Week In Rust: This Week in Rust 183

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

Sadly, we had no nominations this week, so stay tuned for next week!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

135 pull requests were merged in the last week.

New Contributors

  • Anders Papitto
  • Daniel Lockyer
  • David LeGare
  • Ivan Dardi
  • Michael Kohl
  • Mike Lubinets
  • Venkata Giri Reddy

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Style RFCs

Style RFCs are part of the process for deciding on style guidelines for the Rust community and defaults for Rustfmt. The process is similar to the RFC process, but we try to reach rough consensus on issues (including a final comment period) before progressing to PRs. Just like the RFC process, all users are welcome to comment and submit RFCs. If you want to help decide what Rust code should look like, come get involved!

We're making good progress and the style is coming together. If you want to see the style in practice, check out our example or use the Integer32 Playground and select 'Proposed RFC' from the 'Format' menu. Be aware that implementation is work in progress.

Issues in final comment period:

Good first issues:

We're happy to mentor these, please reach out to us in #rust-style if you'd like to get involved

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

No quote was selected for QotW.

Submit your quotes for next week!

This Week in Rust is edited by: nasa42, llogiq, and brson.

Mozilla Marketing Engineering & Ops Blog: MozMEAO SRE Status Report - 5/23/2017

Here’s what happened on the MozMEAO SRE team from May 17th - May 23rd.

Current work

Bedrock (mozilla.org)

Bedrock has been deployed to our Virginia Kubernetes cluster, with our Tokyo cluster following shortly, either later today or on Wednesday.

Production deployment

MDN

MDN migration analysis summary

Future work

Decommission openwebdevice.org

Waiting for some internal communications before moving forward.

Nucleus

We’re planning to move nucleus to Kubernetes, and then proceed to decommissioning current nucleus infra.

Basket

We’re planning to move basket to Kubernetes shortly after the nucleus migration, and then proceed to decommissioning existing infra.

Links

Christian Heilmann: Breaking out of the Tetris mindset

This is a longer version of this year’s keynote at Beyond Tellerrand conference. I posted earlier about the presentation with the video and slides. Here you get the longer transcript and explanation. If you prefer Medium to read, here’s the same post there

It is amazing to work on the web. We have a huge gift in the form of a decentralized network of devices. A network that helps people around the world to communicate and learn. And a huge resource of entertainment on demand. A gift we can tweak to our needs and interests rather than unwrap and consume. Some of us even know the technologies involved in creating interfaces of the web. Interfaces that empower others to consume content and become creators.

And yet there is a common feeling of failure. The web never seems to be good enough. Despite all our efforts the web doesn’t feel like a professional software platform. Our efforts to standardise the web to make becoming a “web developer” easier don’t seem to work out. We keep re-defining what that even is.

Lately a lot of conferences have talks about mental health. Talks about us working too much and not going anywhere. Talks about how fragmented, insecure and unprofessional our tech stack is.

There’s been a meme going around for a while that summarises this feeling quite well. It is about the game Tetris:

“If Tetris has taught me anything, it’s that errors pile up and accomplishments disappear”

There is some truth to that. We worked for almost two decades on the web and the current state of it isn’t too encouraging. The average web site is too big to consume on mobile devices and too annoying to do so on laptops. We limit things too much on mobile. We cram a proverbial kitchen sink of features, code and intrusive advertising into our desktop solutions.

Let’s see what’s going on there and how we can shift ourselves away from this Tetris mindset. I’ll start by zooming out and talk a bit about the way we communicate about technologies.

Back when this was all fields…

One of my older tweets – This was my old computer setup. Notice the photos of contacts on the wall.

When I started using computers my setup was basic by current standards. And yet I didn’t use the computer to play. I wanted to use it to communicate. That’s why I reached out to other creators and makers and we shared the things we wrote. We shared tools that made our life easier. We got to know each other writing long notes and explanations. We communicated by mail, sending floppy disks to each other. Once we got to know each other better, we shared photos and visited each other in person. It was a human way to create. Communication came at a high price and you had to put a lot of effort into it. Which is why we ensured that it was high quality, cordial communication. You had to wait weeks for an answer. You made sure that you said everything you wanted to say before you created the package and mailed it.

Speed cheapens communication

Fast forward to now. Communication is a dime a dozen. We’re connected all the time. Instead of spending time and effort to get to know others and explain and lead, we shout. We aim to be the most prolific communicator out there. Or to prove someone wrong who is loud and prolific. Or make fun of everything others spent a lot of effort on.

The immediacy and the ubiquity of communication is cheapening the way we act. We don’t feel like putting up much effort upfront. A stream of information gives you the impression that it is easy to undo a mistake right after you made it. But the constant barrage of information makes others in the conversation tune out. They only prick up their ears when something seems to be out of the ordinary. And this is where polarisation comes in. A drive to impress the audience in a positive way leads to unhealthy competitiveness. Giving out strong negative messages leads to witch hunts and Twitter drama. The systems we use have competition built in. They drive us having to win and release the most amazing things all the time. Gamification is what social networks are about. You get likes and views and retweets and favorites. Writing more and interacting more gets you to a higher level of access and features. You’re encouraged to communicate using in-built features like emoji, GIFs and memes. It is not about accumulating sensible content and finding solutions. It is about keeping the conversations and arguments going. All to get more clicks, views and interactions to show us more ads – the simple way to monetise on the web.

What this model brings is drama and dogma. The more outrageous your views, the more interaction you cause. The more dogmatic your approach, the more ammunition you have proving others wrong. It is not about finding consensus, it is about winning. Winning meaningless internet points and burning quite a few bridges whilst you do it. And it is addictive. The more you do it, the higher the rush to win. And the more devastating the feeling of non-accomplishment when you don’t win.

The scattered landscape of today

Let’s zoom in again on what we do as web developers. Web development is a spectrum of approaches. I’ll try to explain this using Tetrominos (that’s the official name of Tetris blocks).

All the Tetrominos, depicting the spectrum of web development approaches, from trusting and conservative to innovative and controlling

On the one end we have a conservative group that doesn’t trust any browser or device and keeps it all on the server. On the other we have people who want to run everything on the client and replace web standards with their own abstractions. Conservatives allow the user to customise their device and environment to their needs. The other extreme is OK to block users who don’t fulfil certain criteria.

The following are not quotes by me. I encountered them over the years, anonymised and edited them for brevity. Let’s start.

The right, conservative block…

The Right Block

This is a group of people that most likely have been around the web for a long time. They’ve had a lot of promises and seen new technology proving too flaky to use over and over again. People that remember a web where we had to support terrible environments. A web where browsers were dumb renderers without any developer tools.

Here’s some of the quotes I keep hearing:

You can’t trust the client, but you can trust your server.
Web standards are good, but let’s not get overboard with fancy things.

It is great that there are evergreen browsers, but that doesn’t apply to our customers…

This approach results in bullet-proof, working interfaces. Interfaces that are likely to bore end users. It seems clumsy these days to have to re-load a page on every interaction. And it is downright frustrating on a mobile connection. Mobile devices offer a lot of web browsing features using client-side technologies. What’s even worse is that this approach doesn’t reward end users who keep their environment up to date.

The right, conservative piece…

The right leaning block

These people are more excited about the opportunities client-side functionality give you. But they tread carefully. They don’t want to break the web or lock out end users based on their abilities or technical setup.

Some quotes may be:

Progressive enhancement is the way to build working solutions. You build on static HTML generated from a backend and you will never break the user experience.
You have no right to block any user. The web is independent of device and ability. It is our job to make sure people can use our products. We do that by relying on standards.

Web products do not have to look and work the same in every environment.
What you get are working, great looking interfaces that vary with the environment.

This is the best of both worlds. But it also means that you need to test and understand the limits of older environments. Browser makers learned a lot from this group, which wanted to make the web work and get more control over what browsers do. That helps standards bodies a lot.

The Square

The Square

This is the group most concerned about the HTML we create. They are the people advocating for semantic HTML and sensible structures.

Some common phrases you’d hear from them are:

No matter what you do, HTML is always the thing the browser gets. HTML is fault tolerant and will work wherever and forever.
Using semantic HTML gives you a lot of things for free. Accessibility, caching, fast rendering. You can’t lose.

HTML is the final truth, that’s 100% correct. But we dropped the ball when HTML5 leap-frogged innovation. Advanced form interactions aren’t as reliable as we want them to be. Adding ARIA attributes to HTML elements isn’t a silver bullet to make them accessible. Browser makers should have spent more time fixing this. But instead, we competed with each other trying to be the most innovative browser. When it comes to basic HTML support, there was hardly any demand from developers to improve. HTML is boring to most developers and browsers are forgiving. This is why we drown in DIVs and SPANs. We keep having to remind people of the benefits of using real buttons, anchors and headings.

The straight and narrow…

The straight Block

This is a group advocating for a web that allows developers to achieve more by writing less code. They don’t want to have to worry about browser differences. They use libraries, polyfills and frameworks. These give us functionality not yet supported in browsers today.

Some famous ways of saying this are:

Browser differences are annoying and shouldn’t be in the way of the developer. That’s why we need abstraction libraries to fix these issues.
Understanding standards while they are in the making is nothing we have time for.
$library is so much easier – why don’t we add it to browsers?

There is no doubt that jQuery, Modernizr and polyfills are to thank for the web we have today. The issue is that far too many things depend on stop-gap solutions that never went away. Developers became dependent on them and never looked at the problems they solve. Problems that don’t exist in current browsers and yet we keep their fixes alive. That said, browser makers and standard creators learned a lot from these abstractions. We have quite a few convenience methods in JavaScript now because jQuery paved the way.

The T-Block

The T Block

In Tetris, this is the most versatile block. It helps you to remove lines or build a solid foundation for fixing a situation that seems hopeless. On the web, this is JavaScript. It is the only language that spans the whole experience. You can use it on the server and on the client. In an environment that supports JavaScript, you can create CSS and HTML with it.

This leads to a lot of enthusiastic quotes about it:

I can do everything in JavaScript. Every developer on the web should know it.
Using JavaScript, I can test if something I wanted to happen happened. There is no hoping that the browser did it right – we know.

JavaScript was the necessary part to make the web we have now happen. It can make things more accessible and much more responsive. You can even find out what’s possible in a certain environment and cater a fitting solution to it. It is a fickle friend though: many things can go wrong before the script loads and executes. And it is unforgiving. One error and nothing happens.

The innovation piece…

The Left Leaning Block

This group considers JavaScript support a given. They want to have development tool chains. Tooling as rich as that of other programming environments.

This leads to quotes like these:

It is OK to rely on JavaScript to be available. The benefits of computational values without reloads are too many to miss out on.

I don’t want to have to think about older browsers and broken environments. Frameworks and build processes can take care of that.

The concept of starting with a text editor and static files doesn’t exist any more. We have so much more benefits from using a proper toolchain. If that’s too hard for you, then you’re not a web developer.

Web standards like CSS, JavaScript and HTML are conversion targets. Sass, CoffeeScript, Elm, Markdown and Jade give us more reliable control right now. We should not wait until browsers catch up.

It is a waste of time to know about browser differences and broken implementations. It is much more efficient to start with an abstraction.

Developer convenience trumps end users’ experience here. This can result in bloated solutions. We know about that bloat and we create a lot of technologies to fix that issue. Browser makers can’t help much here. Except creating developer tools that connect the abstraction with the final output (sourcemaps).

The innovative blocker…

The Left Block

These are the bleeding edge developers. They want to re-invent what browsers and standards do as they’re not fast enough.

Common quotes coming from this end of the spectrum are:

Browsers and web standards are too slow and don’t give us enough control. We want to know what’s going on and control every part of the interface.
CSS is broken. The DOM is broken. We have the technologies in our evergreen browsers to fix all that reliably as we have insight into what’s happening and can optimise it.

This approach yields high fidelity, beautiful and responsive interfaces. Interfaces that lock out a large group of users as they demand a lot from the device they run on. We assume that everybody has access to a high end environment and we don’t cater to others. Any environment with a high level of control also comes with high responsibility. If you replace web technologies with your own, you are responsible for everything – including maintenance. Browser makers can take these new ideas and standardise them. The danger is that they will never get used once we have them.

Explanations

The more innovative you are, the more you have to be responsible for the interface working for everybody. The more conservative you are, the more you have to trust the browser to do the right thing.

Every single group in this spectrum has its place on the web. In its own frame of reference, each results in better experiences for our users. The difference is responsibility and support.

If we create interfaces dependent on JavaScript, we’re responsible for making them accessible. If we rely on preprocessors, it is up to us to explain these to the maintainers of our products. We also demand more of the devices and connectivity of our end users. This can block out quite a large chunk of people.

The less we rely on browsers and devices, the more we allow end users to customise the interface to their needs. Our products run everywhere. But are they also delivering an enjoyable experience? If I have a high-end device with an up-to-date, evergreen browser, I don’t want an interface that was hot in 1998.

Who defines what and who is in charge?

Who defines who is allowed to use our products?

Who has the final say how an interface looks and what it is used for?

The W3C covered this problem quite a long time ago. In its design principles you can find this gem:

Users over authors over implementors over specifiers over theoretical purity…

If our users are the end goal, findings and best practices should go both ways. Advocates of a sturdy web can learn from innovators and vice versa. We’re not quite doing that. Instead we build silos. Each of the approaches has dedicated conferences, blogs, Slack channels and communities. Often our arguments for why one approach is better amount to discrediting the other one. This is not helpful in the long run.

This is in part based on the developer mindset. We seem to be hard-wired to fix problems, as fast as possible, and with a technological approach. Often this means that we fix some smaller issue and cause a much larger problem.

How great is the web for developers these days?

It is time to understand that we work in a creative, fast moving space that is in constant flux. It is a mess, but mess can be fun and working in a creative space needs a certain attitude.

Creatives in the film industry know this. In the beautiful documentary about the making of Disney’s Zootopia, the creative director explains this without mincing words:

As a storyboard artist at Disney you know that most of your work will be thrown away. The idea is to fail fast and fail often and create and try out a lot of ideas. Using this process you find two or three great ideas that make the movie a great story. (paraphrased)

Instead of trying to fix all the small niggles, let’s celebrate how far we’ve come with the standards and technologies we use. We have solid foundations.

  • JavaScript: JavaScript is eating the world. We use it for everything on the web, from front ends up to APIs and web servers. We write lots of tools in it to build our solutions. JavaScript engines are open source and embeddable. Node.js and npm allow us to build packages and re-use them on demand. Alongside ES6, the DOM gained much more solid access and traversal methods. Inspired by jQuery, we have querySelector() and classList and many more convenience methods. We even replaced the unwieldy XMLHttpRequest with fetch(). And the concept of Promises and Async/Await allows us to build much more responsive systems.
  • CSS: CSS evolved beyond choosing fonts and setting colours. We have transitions to get from one unknown state to another in a smooth fashion. We have animations to aid our users along an information flow. Both of these fire events to interact with JavaScript. We have Flexbox and Grids to lay out elements and pages. And we have Custom Properties, which are variables in CSS but so much more. Custom Properties are a great way to have CSS and JavaScript interact (a short sketch follows this list). You change the property value instead of having to add and remove classes on parent elements. Or – even worse – having to manipulate inline styles.
  • Progressive Web Apps: The concept of Progressive Web Apps is an amazing opportunity. By creating a manifest file we can define that what we built is an app and not a site. That way User Agents and Search Engines can offer install interfaces. And browsers can allow for access to device and Operating System functionality. Service Workers offer offline functionality and work around the problem of unreliable connectivity. They also allow us to convert content on the fly before loading and showing the document. Notifications can make our content stickier without the app even having to be open.
  • Tooling: Our developer tools have also come on in leaps and bounds. Every browser is also a debugging environment. We have hackable editors written in web technologies. Toolchains allow us to produce what we need when it makes sense. No more sending code to environments that don’t even know how to execute it.
  • Information availability: Staying up-to-date is also simple these days. Browser makers are available for feedback and information. We have collaboration tools by the truckload. We have more events than we can count, and almost all publish videos of their talks.
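For example, the CSS and JavaScript interaction via Custom Properties mentioned above can be as small as this (the property and selector names are just for illustration):

/* CSS side: .panel { transform: translateX(var(--offset, 0px)); } */

// JavaScript side: update the property instead of toggling classes
// or writing inline styles.
document.documentElement.style.setProperty('--offset', '120px');

// Read the current value back if needed:
var offset = getComputedStyle(document.documentElement)
  .getPropertyValue('--offset');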

It is time for us to fill the gaps. It is time we realise that not everything we do has to last forever and needs to add up to a perfect solution. It is OK for our accomplishments to vanish. It is not OK for them to become landfill of the web.

Our job right now is to create interfaces that are simple, human and fun to use. There is no such thing as a perfect user – we need to think inclusive. It isn’t about allowing access but about avoiding barriers.

We have come a long way. We made the world much smaller and more connected. Let’s stop fussing over minor details, and show more love to the web, love to the craft and much more respect for one another. Things are looking up, and I can’t wait to see what we – together – will come up with next.

You don’t owe the world perfection, but you have a voice that should be heard and your input matters. Get creative – no creativity is a waste.

David Humphrey"When in doubt, WebIDL"

Yesterday I was fixing a bug in some IndexedDB code we use for Thimble's browser-based filesystem. When I'm writing code against an API like IndexedDB, I love knowing the right way to use it. The trouble is, it can be difficult to find the proper use of an API, or in my case, its edge cases.

Let's take a simple question: should I wrap certain IndexedDB methods in try...catch blocks? In other words, will this code ever throw?

There are a number of obvious ways to approach this problem, from reading the docs on MDN to looking for questions/answers on StackOverflow to reading open source code on GitHub for projects that do similar things to you.

I did all of the above, and the problem is that people don't always know, or at least don't always bother to do the right thing in all cases. Some IndexedDB wrapper libraries on GitHub I read do use try...catch, others don't; some docs discuss it, others gloss over it. Even DOM tests I read sometimes use it, and sometimes don't bother. I hate that level of ambiguity.

Another option is to ask someone smart, which I did yesterday. I thought the reply I got from @asutherland was a great reminder of something I'd known, but didn't think to use in this case:

"When in doubt, WebIDL"

WebIDL is an Interface Definition Language (IDL) for the web that defines interfaces (APIs), their methods, properties, etc. It also clearly defines what might throw an error at runtime.

While implementing DOM interfaces in Gecko, I've had to work with IDL in the past; however, it hadn't occurred to me to consult it in this case, as a user of an API vs. its implementer. I thought it was an important piece of learning, so I wanted to write about it.

Let's take the case of opening a database using IndexedDB. To do this, you need to do something like this:

var db;  
var request = window.indexedDB.open("data");  
request.onerror = function(event) {  
  handleError(request.error);
};
request.onsuccess = function() {  
  db = request.result;
};

Now, given that indexedDB.open() returns an IDBOpenDBRequest object to which one can attach an onerror handler, it's easy to think that you've done everything correctly simply by handling the error case.

However, window.indexedDB implements the IDBFactory interface, so we can also check its WebIDL definition to see if the open method might also throw. It turns out that it does:

/**
 * Interface that defines the indexedDB property on a window.  See
 * http://dvcs.w3.org/hg/IndexedDB/raw-file/tip/Overview.html#idl-def-IDBFactory
 * for more information.
 */
[Exposed=(Window,Worker,System)]
interface IDBFactory {  
  [Throws, NeedsCallerType]
  IDBOpenDBRequest
  open(DOMString name,
       [EnforceRange] unsigned long long version);
...

This is really useful to know, since we should really be wrapping this call in a try...catch if we care about failing in ways to which a user can properly respond (e.g., showing error messages, retrying, etc.).
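As a rough sketch (reusing the handleError callback from the snippet above), guarding against both the synchronous throw and the asynchronous error might look like this:

var db;
var request;
try {
  // open() itself can throw synchronously (bad version, security checks, ...)
  request = window.indexedDB.open("data");
} catch (err) {
  handleError(err);
}

if (request) {
  request.onerror = function() {
    // asynchronous failures are still reported through the request
    handleError(request.error);
  };
  request.onsuccess = function() {
    db = request.result;
  };
}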

Why might it throw? You can see for yourself in the source for WebKit or Gecko, everything from:

  • Passing a database version number of 0
  • Not specifying a database name
  • Failing a security check (e.g., origin check)
  • The underlying database can't be opened for some reason

The list goes on. Try it yourself, by opening your dev console and entering some code that is known to throw, based on what we just saw in the code (e.g., no database name, using 0 as a version):

indexedDB.open()  
Uncaught TypeError: Failed to execute 'open' on 'IDBFactory': 1 argument required, but only 0 present.  
...
indexedDB.open("name", 0)  
Uncaught TypeError: Failed to execute 'open' on 'IDBFactory': The version provided must not be 0.  

The point is, it might throw in certain cases, and knowing this should guide your decisions about how to use it. The work I'm doing relates to edge case uses of a database, in particular when disk space is low, the database is corrupted, and other things we're still not sure about. I want to handle these errors, and try to guide users toward a solution. To accomplish that, I need try...catch.

When in doubt, WebIDL.

Firefox NightlyTry to find the patch which caused a crash.

For some categories of crashes, we are automatically able to pinpoint the patch which introduced the regression.

The issue

Developers make mistakes, not because they’re bad developers but because the code is complex, and sometimes simply because the modifications they make are so trivial that they don’t pay much attention.

In parallel, the sooner we can catch these mistakes, the easier it is for developers to fix them. In the end, this strongly improves the user experience. Indeed, if developers are quickly informed about new regressions introduced by their changes, it becomes much easier for them to fix issues while they still remember the changes.

How do we achieve that?

When a new crash signature shows up, we retrieve the stack trace of the crash, i.e. the sequence of called functions which led to the crash: https://crash-stats.mozilla.com/report/index/53b199e7-30f5-4c3d-8c8a-e39c82170315#frames .

For each function, we have the file name where it is defined and the mercurial changeset from which Firefox was built, so by querying https://hg.mozilla.org it is possible to know what the last changes to this file were.

The strategy is the following:

  1. we retrieve the crashes which just appeared in the last nightly version (no crash in the last three days);
  2. we bucketize crashes by their proto-signature;
  3. for each bucket, we get a crash report and then get the functions and files which appear in the stack trace;
  4. for each file, we query mercurial to know if a patch has been applied to this file in the last three days.

The last stage is to analyze the stack traces and the corresponding patches to infer which patch is probably responsible for a crash, and finally to file a bug.
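A rough sketch of this strategy in Python is shown below; the endpoint path, field names and missing date filtering are assumptions for illustration only, not the actual tool’s code:

import requests

HG = "https://hg.mozilla.org/mozilla-central"

def recent_changes_for_file(path):
    # Hypothetical use of hgweb's JSON file log; the real tool may query
    # mercurial differently and restricts results to the last three days.
    url = "{}/json-log/tip/{}".format(HG, path)
    return requests.get(url).json().get("entries", [])

def candidate_patches(stack_frames):
    # stack_frames: list of dicts with "function" and "file" keys taken
    # from a crash report's stack trace.
    suspects = {}
    for frame in stack_frames:
        path = frame.get("file")
        if not path:
            continue
        changes = recent_changes_for_file(path)
        if changes:
            # Any file with recent changes makes its patches candidate
            # regressors for this crash signature.
            suspects[path] = changes
    return suspects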

Results

As an example:

https://bugzilla.mozilla.org/show_bug.cgi?id=1347836

The patch https://hg.mozilla.org/mozilla-central/diff/99e3488b1ea4/layout/base/nsLayoutUtils.cpp modified the function nsLayoutUtils::SurfaceFromElement and the crash occurred in this function (https://crash-stats.mozilla.com/report/index/53b199e7-30f5-4c3d-8c8a-e39c82170315#frames), a few lines after the modified line.

Finally the issue was a function which returned a pointer which could be dangling (the patch).

https://bugzilla.mozilla.org/show_bug.cgi?id=1347461

The patch https://hg.mozilla.org/mozilla-central/diff/bf33ec027cea/security/manager/ssl/DataStorage.cpp modified the line where the crash occurred (https://crash-stats.mozilla.com/report/index/c7ba45aa-99a9-448b-91df-37da82170314#frames).

Finally the issue was an attempt to use an uninitialized object.

https://bugzilla.mozilla.org/show_bug.cgi?id=1342217

The patch https://hg.mozilla.org/mozilla-central/diff/e6fa8ff0d0be/dom/media/platforms/wrappers/MediaDataDecoderProxy.cpp added the function where the crash occurred (https://crash-stats.mozilla.com/report/index/6a96375e-5c83-4ebe-9078-2d4472170222#frames).

Finally the issue was just a missing return in a function (the patch).

In these different bugs, the crash volume is very low, so almost nobody cares about them, but they reveal real mistakes in the code; the volume could be higher in beta or release.
In the future, we hope that it will be possible to automate most of this process and file a bug automatically.

Air MozillaMozilla Weekly Project Meeting, 22 May 2017

Mozilla Weekly Project Meeting The Monday Project Meeting

Jeff WaldenI’ve stopped hiking the PCT

Every year’s PCT is different. The path routinely changes, mostly as fresh trail is built to replace substandard trail (or no trail at all, if the prior trail was a road walk). But the reason for an ever-shifting PCT, even within hiking seasons, should be obvious to any California resident: fires.

The 2013 Mountain Fire near Idyllwild substantially impacted the nearby trail. A ten-mile stretch of trail is closed to hikers even today, apparently (per someone I talked to at the outfitter in Idyllwild) because the fire burned so hot that essentially all organic matter was destroyed, so they have to rebuild the entire trail to have even a usable tread. (The entire section is expected to open midsummer next year – too late for northbound thru-hikers, but maybe not for southbounders.)

These circumstances lead to an alternate route for hikers to use. Not all do: many hikers simply hitchhike past the ten closed miles and the remote fifteen miles before them.

But technically, there’s an official reroute, and I’ve nearly completed it. Mostly it involves lots of walking on the side of roads. The forest service dirt roads are rough but generally empty, so not the worst. The walking on a well-traveled highway with no usable shoulder, however, was the least enjoyable hiking I’ve ever done. (I mean ever.) I really can’t disagree with the people who just hitchhiked to Idyllwild and skipped it all.

I’ll be very glad to get back on the real trail several miles from now.

Air MozillaWebdev Beer and Tell: May 2017

Webdev Beer and Tell: May 2017 Once a month web developers across the Mozilla community get together (in person and virtually) to share what cool stuff we've been working on in...

Hacks.Mozilla.OrgShowcasing your WebVR experiences

WebVR combines the powerful reach of the Internet with the immersive appeal of virtual reality content. With WebVR, a VR experience is never more than one URL away. Nevertheless, VR equipment is still very expensive and not quite fully adopted for consumer use. For this reason, it is useful to be able to record your VR projects for others to experience and enjoy, at least from a viewer perspective.

Recording VR content

This video tutorial teaches you how to record a virtual experience you’ve created using the mirror mode in SteamVR. Capturing one eye allows your audience to enjoy the video in a regular 2D mode but capturing both eyes will enable a more immersive experience thanks to stereoscopic video.

This video tutorial assumes you have a SteamVR compatible setup and Open Broadcast Software installed.

There are other options for capturing your VR experiences. If you’re a Windows 10 user, perhaps you prefer to use Game DVR, which works out of the box.

Extract a GIF from your favorite cut

Now that you have a video with your VR content, you can make a GIF from it with Instagiffer for Windows. Instagiffer is not the fastest software out there, but the output quality of the GIFs is superb.

Start by installing and launching Instagiffer. The UI is split into three sections or steps.

A window with three sections for choosing the video, settings and preview

Click on Load Video in the Step 1 section, and select the video from which you want to extract the GIF.

When clicking load video, the Windows file selection dialog appears

Locate the sequence you want to convert into a GIF and fill the options in the Step 2 section. In this case, I want to extract 5.5 seconds of video starting from second 18; a sequence in which I shot an enemy bullet.

Three sliders allow you to modify the start time, framerate (smoothness) and frame size. A text box indicates the length of the clip.

Length, Smoothness and Frame Size will affect the size of your GIF: the higher the values, the higher the size of the resulting file.

In the Step 3 section, you can crop the image by dragging the square red handles. In this case, I’m removing the black bands around the video. You can also use it to isolate each eye.

A red rectangle with two handles in each corner represents the cropping area

Notice that the size of the GIF is shown in the bottom-right corner of the preview. You can adjust this size by moving the Frame Size slider in the Step 2 section.

Finally, click on the Create GIF! button at the bottom of the window to start the conversion.

A progress bar shows how much remains until completion

One of the things I love about Instagiffer is that, after finishing, it will display compatibility warnings about the GIF, testing on some of the most popular Internet services.

The notice shows warnings for Tumblr, Imgur and Twitter, pointing out problems with sizes and dimensions

Click on the final result to see the animation. It’s really good!

Capture of A-Blast gameplay

If you are more into old-school tools, check out Kevin’s CLI utility Gifpardy and see how it goes.

Make a 3D YouTube video

One of the advantages of recording both eyes is that you can assemble stereoscopic side-by-side 3D videos. You can use YouTube, for instance.

Just upload your video and edit it. Go to the Advanced settings tab inside the Info & Settings view.

Browser content screenshot at Info & Settings tab of a YouTube video

Check the box that says This video is 3D and select Side by side: Left video on the left side in the combo box.

Checkbox for enabling 3D video, alongside a deprecation warning

The deprecation warning encourages you to do this step offline, with your favorite video editor.

Once you are done, YouTube will select the best option for displaying 3D content, applying the proper filters or corrections as needed.

For instance, you’ll see an anaglyph representation when viewing your video with the Firefox browser on desktop.

An anaglyph red/green representation of the 3D video

You can switch to a 2D representation as well.

Regular 2D representation chooses only one eye to show

When you view the video with Firefox for Android you will see both eyes side by side.

Video on Firefox Android is shown side by side with no distortion (as the original video)

And if you try with the YouTube native app, an icon for Cardboard/Daydream VR will appear, transporting you to a virtual cinema where you can enjoy the clip.

In the YouTube app, a Cardboard is shown in the bottom-right corner to enter VR mode

Theater mode applies the proper distortion to each eye and provides a cinematic view

In conclusion

Virtual reality is not widely adopted or easily accessible yet, but the tools are available now to reach more people and distribute your creative work by recording your WebVR demos in video. Discover VR galleries on Twitter, GIPHY or Tumblr, choose your best clips and share them!

Do you prefer high quality video? Check out the VR content on YouTube or Vimeo.

At Mozilla, we support the success of WebVR and aim to demonstrate that people can share and enjoy virtual reality experiences on the Web! Please share your WebVR projects with the world. We’d love to see what you’re making. Let us know on Twitter by tagging your project with #aframevr, and we’ll RT it! Follow @AframeVR and @MozillaVR for the latest developments and new creative work.

Air MozillaGecko And Native Profiler

Gecko And Native Profiler Ehsan Akhgari: Gecko And Native Profiler. May 18, 2017 Ehsan and Markus etherpad is here: https://developer.mozilla.org/en-US/docs/Mozilla/Performance/Gecko_Profiler_FAQ

Daniel Stenbergcurl: 5000 stars

The curl project has been hosted on github since March 2010, and we just now surpassed 5000 stars. A star on github of course being a completely meaningless measurement for anything. But it does show that at least 5000 individuals have visited the page and shown appreciation this low impact way.

5000 is not a lot compared to the really popular projects.

On August 12, 2014 I celebrated us passing 1000 stars with a beer. Hopefully it won’t be another seven years until I can get my 10000 stars beer!

Ehsan AkhgariQuantum Flow Engineering Newsletter #10

Let’s start this week’s updates with looking at the ongoing efforts to improve the usefulness of the background hang reports data.  With Ben Miroglio’s help, we confirmed that we aren’t blowing up telemetry ping sizes yet by sending native stack traces for BHR hangs, and as a result we can now capture a deeper call stack depth, which means the resulting data will be easier to analyze.  Doug Thayer has also been hard at work at creating a new BHR dashboard based on the perf-html UI.  You can see a sneak peek here, but do note that this is work in progress!  The raw BHR data is still available for your inspection.
Kannan Vijayan has been working on adding some low level instrumentation to SpiderMonkey in order to get some detailed information on the relative runtime costs of various builtin intrinsic operations inside the JS engine in various workloads using the rdtsc instruction on Windows.  He now has a working setup that allows him to take a real world JS workload and get some detailed data on what builtin intrinsics were the most costly in that workload.  This is extremely valuable because it allows us to focus our optimization efforts on these builtins where the most gains are to be achieved first.  He already has some initial results of running this tool on the Speedometer benchmark and on a general browsing workload and some optimization work has already started to happen.
Dominik Strohmeier has been helping with running startup measurements on the reference Acer machine to track the progress of the ongoing startup improvements, using an HDMI video capture card.  For these measurements, we are tracking two numbers: one is the first paint time (the time at which we paint the first frame of the browser window) and the other is the hero element time (the time at which we paint the “hero element”, which is the search box in about:home in this case).  The baseline build here is the Nightly of Apr 1st, as a date before active work on startup optimizations started.  At that time, our median first paint time was 1232.84ms (with a standard deviation of 16.58ms) and our hero element time was 1849.26ms (with a standard deviation of 28.58ms).  On the Nightly of May 18, our first paint time is 849.66ms (with a standard deviation of 11.78ms) and our hero element time is 1616.02ms (with a standard deviation of 24.59ms).
Next week we’re going to have a small work week with some people from the DOM, JS, Layout, Graphics and Perf teams here in Toronto.  I expect to be fully busy at the work week, so you should expect the next issue of this newsletter in two weeks!  With that, it is time to acknowledge the hard work of those who helped make Firefox faster this past week.  I hope I’m not dropping any names by accident!

Manish GoregaokarTeaching Programming: Proactive vs Reactive

I’ve been thinking about this a lot these days. In part because of an idea I had but also due to this twitter discussion.

When teaching most things, there are two non-mutually-exclusive ways of approaching the problem. One is “proactive”1, which is where the teacher decides a learning path beforehand, and executes it. The other is “reactive”, where the teacher reacts to the student trying things out and dynamically tailors the teaching experience.

Most in-person teaching experiences are a mix of both. Planning beforehand is very important whilst teaching, but tailoring the experience to the student’s reception of the things being taught is important too.

In person, you can mix these two, and in doing so you get a “best of both worlds” situation. Yay!

But … we don’t really learn much programming in a classroom setup. Sure, some folks learn the basics in college for a few years, but everything they learn after that isn’t in a classroom situation where this can work2. I’m an autodidact, and while I have taken a few programming courses for random interesting things, I’ve taught myself most of what I know using various sources. I care a lot about improving the situation here.

With self-driven learning we have a similar divide. The “proactive” model corresponds to reading books and docs. Various people have proactively put forward a path for learning in the form of a book or tutorial. It’s up to you to pick one, and follow it.

The “reactive” model is not so well-developed. In the context of self-driven learning in programming, it’s basically “do things, make mistakes, hope that Google/Stackoverflow help”. It’s how a lot of people learn programming; and it’s how I prefer to learn programming.

It’s very nice to be able to “learn along the way”. While this is a long and arduous process, involving many false starts and a lack of a sense of progress, it can be worth it in terms of the kind of experience this gets you.

But as I mentioned, this isn’t as well-developed. With the proactive approach, there still is a teacher – the author of the book! That teacher may not be able to respond in real time, but they’re able to set forth a path for you to work through.

On the other hand, with the “reactive” approach, there is no teacher. Sure, there are Random Answers on the Internet, which are great, but they don’t form a coherent story. Neither can you really be your own teacher for a topic you do not understand.

Yet plenty of folks do this. Plenty of folks approach things like learning a new language by reading at most two pages of docs and then just diving straight in and trying stuff out. The only language I have not done this for is the first language I learned3 4.

I think it’s unfortunate that folks who prefer this approach don’t get the benefit of a teacher. In the reactive approach, teachers can still tell you what you’re doing wrong and steer you away from tarpits of misunderstanding. They can get you immediate answers and guidance. When we look for answers on stackoverflow, we get some of this, but it also involves a lot of pattern-matching on the part of the student, and we end up with a bad facsimile of what a teacher can do for you.

But it’s possible to construct a better teacher for this!

In fact, examples of this exist in the wild already!

The Elm compiler is my favorite example of this. It has amazing error messages.

The error messages tell you what you did wrong, sometimes suggest fixes, and help correct potential misunderstandings.

Rust does this too. Many compilers do. (Elm is exceptionally good at it)

One thing I particularly like about Rust is that from that error you can try rustc --explain E0373 and get a terminal-friendly version of this help text.

Anyway, diagnostics basically provide a reactive component to learning programming. I’ve cared about diagnostics in Rust for a long time, and I often remind folks that many things taught through the docs can/should be taught through diagnostics too. Especially because diagnostics are a kind of soapbox for compiler writers — you can’t guarantee that your docs will be read, but you can guarantee that your error messages will. These days, while I don’t have much time to work on stuff myself I’m very happy to mentor others working on improving diagnostics in Rust.

Only recently did I realize why I care about them so much – they cater exactly to my approach to learning programming languages! If I’m not going to read the docs when I get started and try the reactive approach, having help from the compiler is invaluable.

I think this space is relatively unexplored. Elm might have the best diagnostics out there, and as diagnostics (helping all users of a language – new and experienced) they’re great, but as a teaching tool for newcomers they still have a long way to go. Of course, compilers like Rust are even further behind.

One thing I’d like to experiment with is a first-class tool for reactive teaching. In a sense, clippy is already something like this. Clippy looks out for antipatterns, and tries to help teach. But it also does many other things, and not all teaching moments are antipatterns.

For example, in C, this isn’t necessarily an antipattern:

struct thingy *result;
if (result = do_the_thing()) {  /* assignment, not comparison */
    frob(*result);
}

Many C codebases use if (foo = bar()). It is a potential footgun if you confuse it with ==, but there’s no way to be sure. Many compilers now have a warning for this that you can silence by doubling the parentheses, though.
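Continuing the fragment above, with GCC or Clang the extra pair of parentheses reads as an explicit “yes, I meant assignment” and silences the warning:

/* doubled parentheses: intentional assignment inside the condition */
if ((result = do_the_thing())) {
    frob(*result);
}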

In Rust, this isn’t an antipattern either:

fn add_one(mut x: u8) {
    x += 1;
}

let num = 0;
add_one(num);
// num is still 0

For someone new to Rust, they may feel that the way to have a function mutate arguments (like num) passed to it is to use something like mut x: u8. What this actually does is copies num (because u8 is a Copy type), and allows you to mutate the copy within the scope of the function. The right way to make a function that mutates arguments passed to it by-reference would be to do something like fn add_one(x: &mut u8). If you try the mut x thing for non-Copy values, you’d get a “reading out of moved value” error when you try to access num after calling add_one. This would help you figure out what you did wrong, and potentially that error could detect this situation and provide more specific help.
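As a contrast to the mut x example above, a minimal sketch of the by-reference version (same names as the original snippet) would be:

fn add_one(x: &mut u8) {
    *x += 1;
}

fn main() {
    let mut num = 0;
    add_one(&mut num);
    // num is now 1
}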

But for Copy types, the mut x version will just compile. And it’s not an antipattern – the way this works makes complete sense in the context of how Rust variables work, and is something that you do need to use at times.

So we can’t even warn on this. Perhaps in “pedantic clippy” mode, but really, it’s not a pattern we want to discourage. (At least in the C example that pattern is one that many people prefer to forbid from their codebase)

But it would be nice if we could tell a learning programmer “hey, btw, this is what this syntax means, are you sure you want to do this?”. With explanations and the ability to dismiss the error.

In fact, you don’t even need to restrict this to potential footguns!

You can detect various things the learner is trying to do. Are they probably mixing up String and &str? Help them! Are they writing a trait? Give a little tooltip explaining the feature.

This is beginning to remind me of the original “office assistant” Clippy, which was super annoying. But an opt-in tool or IDE feature which gives helpful suggestions could still be nice, especially if you can strike a balance between being so dense it is annoying and so sparse it is useless.

It also reminds me of well-designed tutorial modes in games. Some games have a tutorial mode that guides you through a set path of doing things. Other games, however, have a tutorial mode that will give you hints even if you stray off the beaten path. Michael tells me that Prey is a recent example of such a game.

This really feels like it fits the “reactive” model I prefer. The student gets to mold their own journey, but gets enough helpful hints and nudges from the “teacher” (the tool) so that they don’t end up wasting too much time and can make informed decisions on how to proceed learning.

Now, rust-clippy isn’t exactly the place for this kind of tool. This tool needs the ability to globally “silence” a hint once you’ve learned it. rust-clippy is a linter, and while you can silence lints in your code, you can’t silence them globally for the current user. Nor does that really make sense.

But rust-clippy does have the infrastructure for writing stuff like this, so it’s an ideal prototyping point. I’ve filed this issue to discuss this topic.

Ultimately, I’d love to see this as an IDE feature.

I’d also like to see more experimentation in the department of “reactive” teaching — not just tools like this.

Thoughts? Ideas? Let me know!

thanks to Andre (llogiq) and Michael Gattozzi for reviewing this


  1. This is how I’m using these terms. There seems to be precedent in pedagogy for the proactive/reactive classification, but it might not be exactly the same as the way I’m using it.

  2. This is true for everything, but I’m focusing on programming (in particular programming languages) here.

  3. And when I learned Rust, it only had two pages of docs, aka “The Tutorial”. Good times.

  4. I do eventually get around to doing a full read of the docs or a book but this is after I’m already able to write nontrivial things in the language, and it takes a lot of time to get there.

Justin DolskePhoton Engineering Newsletter #1

Well, hello there. Let’s talk about the state of Photon, the upcoming Firefox UI refresh! You’ve likely seen Ehsan’s weekly Quantum Flow updates. They’re a great summary of the Quantum Flow work, so I’m just going to copy the format for Photon too. In this update I’ll briefly cover some of the notable work that’s happened up through the beginning of May. I hope to do future updates on a weekly basis.

Our story so far

Up until recently, the Photon work hasn’t been very user-visible. It’s been lots of planning, discussion, research, prototypes, and foundational work. But now we’re at the point where we’re getting into full-speed implementation, and you’ll start to see things changing.

Photon is landing incrementally between now and Firefox 57. It’s enabled by default on Nightly, so you won’t need to enable any special settings. (Some pieces may be temporarily disabled-by-default until we get them up to a Nightly level of quality, but we’ll enable them when they’re ready for testing.) This allows us to get as much testing as possible, even in versions that ultimately won’t ship with Photon. But it does mean that Nightly users will only gradually see Photon changes arriving, instead of a big splash with everything arriving at once.

For Photon work that lands on Nightly-55 or Nightly-56, we’ll be disabling almost all Photon-specific changes once those versions are out of Nightly. In other words, Beta-55 and Beta-56 (and of course the final release versions, Firefox-55 and Firefox-56). That’s not where we’re actively developing or fixing bugs – so if you want to try out Photon as it’s being built, you should stick with Nightly. Users on Beta or Release won’t see Photon until 57 starts to ship on those channels later this year.

The Photon work is split into 6 main areas (which is also how the teams implementing it are organized). These are, briefly:

1. Menus and structure – Replaces the existing application menu (“Hamburger button”) with a simplified linear menu, adds a “page action” menu, changes the bookmarks split-button to be a more general-purpose “library menu”, updates sidebars, and more.

2. Animation – Adds animation to toolbar button actions, and improves animations/transitions of other UI elements (like tabs and menus).

3. Preferences – Reorganizes the Firefox preferences UI to improve organization and adds the ability to search.

4. Visual redesign – This is a collection of other visual changes for Photon. Updating icons, changing toolbar buttons, adapting UI size when using touchscreens, and many other general UI refinements.

5. Onboarding – An important part of the Photon experience is helping new users understand what’s great about Firefox, and showing existing users what’s new and different in 57.

6. Performance – Performance is a key piece throughout Photon, but the Performance team is helping us to identify which areas of Firefox have issues. Some of this work overlaps with Quantum Flow; other work targets specific areas of jank in the Firefox UI.

Recent Changes

These updates are going to focus more on the work that’s landing and less on the process that got it there. To start getting caught up, here’s a summary of what’s happened so far in each of the project areas through early May…

Menus/structure: Work is underway to implement the new menus. It’s currently behind a pref until we have enough implemented to turn them on without making Nightly awkward to use. In bug 1355331 we briefly moved the sidebar to the right side of the window instead of the left. But we’ve since decided that we’re only going to provide a preference to allow putting it on the right, and it will remain on the left by default.

Animation: In bug 1352069 we consolidated some existing preferences into a single new toolkit.cosmeticAnimations.enabled preference, to make it easy to disable non-essential animations for performance or accessibility reasons. Bugs 1345315 and 1356655 reduced jank in the tab opening/closing animations. The UX team is finalizing the new animations that will be used in Photon, and the engineering team has built prototypes for how to implement them in a way that performs well.

Preferences: Earlier in the year, we worked with a group of students at Michigan State University to reorganize Firefox’s preferences and add a search function (bug 1324168). We’re now completing some final work, preparing for a second revision, and adding some new UI for performance settings. While this is now considered part of Photon, it was originally scheduled to land in Firefox 55 or 56, and so will ship before the rest of Photon.

Visual redesign:  Bug 1347543 landed a major change to the icons in Firefox’s UI. Previously the icons were simple PNG bitmaps, with different versions for OS variations and display scaling factors. Now they’re a vector format (SVG), allowing a single source file to be rendered within Firefox at different sizes or with different colors. You won’t notice this change, because we’re currently using SVG versions of the current pre-Photon icons. We’ll switch to the final Photon icons later, for Firefox 57. Another big foundational piece of work landed in bug 1352364, which refactored our toolbar button CSS so that we can easily update it for Photon.

Onboarding: The onboarding work got started later than other parts of Photon. So while some prototyping has started, most of the work up to May was spent finalizing the scope and design of the project.

Performance: As noted in Ehsan’s Quantum updates, the Photon performance work has already resulted in a significant improvement to Firefox startup time. Other notable fixes have made closing tabs faster, and work to improve how favicons are stored improved performance on our Tp5 page-load benchmark by 30%! Other fixes have reduced awesomebar jank. While a number of performance bugs have been fixed (of which these are just a subset), most of the focus so far has been on profiling Firefox to identify lots of other things to fix. And it’s also worth noting the great Performance Best Practices guide Mike Conley helped put together, as well as his Oh No! Reflow! add-on, which is a useful tool for finding synchronous reflows in Firefox UI (which cause jank).

That’s it for now! The next couple of these Photon updates will catch up with what’s currently underway.


Robert O'Callahanrr Usenix Paper And Technical Report

Our paper about rr has been accepted to appear at the Usenix Annual Technical Conference. Thanks to Dawn for suggesting that conference, and to the ATC program committee for accepting it :-). I'm scheduled to present it on July 13. The paper is similar to the version we previously uploaded to arXiv.

Some of the reviewers requested more material: additional technical details, explanations of some of our design choices compared to alternatives, and reflection on our "real world" experience with rr. There wasn't space for that within the conference page limits, so our shepherd suggested publishing a tech-report version of the paper with the extra content. I've done that and uploaded "Engineering Record And Replay For Deployability: Extended Technical Report". I hope some people find it interesting and useful.

Adblock PlusThe plan towards offering Adblock Plus for Firefox as a Web Extension

TL;DR: Sometime in autumn this year the current Adblock Plus for Firefox extension is going to be replaced by another, which is more similar to Adblock Plus for Chrome. Brace for impact!

What are Web Extensions?

At some point, Web Extensions are supposed to become a new standard for creating browser extensions. The goal is writing extensions in such a way that they could run on any browser without any or only with minimal modifications. Mozilla and Microsoft are pursuing standardization of Web Extensions based on Google Chrome APIs. And Google? Well, they aren’t interested. Why should they be, if they already established themselves as an extension market leader and made everybody copy their approach.

It isn’t obvious at this point how Web Extensions will develop. The lack of interest from Google isn’t the only issue here; so far the implementation of Web Extensions in Mozilla Firefox and Microsoft Edge shows very significant differences as well. It is worth noting that Web Extensions are necessarily less powerful than the classic Firefox extensions, even though many shortcomings can probably be addressed. Also, my personal view is that the differences between browsers are either going to result in more or less subtle incompatibilities or in an API which is limited by the lowest common denominator of all browsers and not good enough for anybody.

So why offer Adblock Plus as a Web Extension?

Because we have no other choice. Mozilla’s current plan is that Firefox 57 (scheduled for release on November 14, 2017) will no longer load classic extensions, and only Web Extensions are allowed to continue working. So we have to replace the current Adblock Plus by a Web Extension by then or ideally even by the time Firefox 57 is published as a beta version. Otherwise Adblock Plus will simply stop working for the majority of our users.

Mind you, there is no question why Mozilla is striving to stop supporting classic extensions. Due to their deep integration in the browser, classic extensions are more likely to break browser functionality or to cause performance issues. They’ve also been delaying important Firefox improvements due to compatibility concerns. This doesn’t change the fact that this transition is very painful for extension developers, and many existing extensions won’t take this hurdle. Furthermore, it would have been better if the designated successor of the classic extension platform were more mature by the time everybody is forced to rewrite their code.

What’s the plan?

Originally, we hoped to port Adblock Plus for Firefox properly. While using Adblock Plus for Chrome as a starting point would require far less effort, this extension also has much less functionality compared to Adblock Plus for Firefox. Also, when developing for Chrome we had to make many questionable compromises that we hoped to avoid with Firefox.

Unfortunately, this plan didn’t work out. Adblock Plus for Firefox is a large codebase and rewriting it all at once without introducing lots of bugs is unrealistic. The proposed solution for a gradual migration doesn’t work for us, however, due to its asynchronous communication protocols. So we are using this approach to start data migration now, but otherwise we have to cut our losses.

Instead, we are using Adblock Plus for Chrome as a starting point, and improving it to address the functionality gap as much as possible before we release this version for all our Firefox users. For the UI this means:

  • Filter Preferences: We are working on a more usable and powerful settings page than what is currently being offered by Adblock Plus for Chrome. This is going to be our development focus, but it is still unclear whether advanced features such as listing filters of subscriptions or groups for custom filters will be ready by the deadline.
  • Blockable Items: Adblock Plus for Chrome offers comparable functionality, integrated in the browser’s Developer Tools. Firefox currently doesn’t support Developer Tools integration (bug 1211859), but there is still hope for this API to be added by Firefox 57.
  • Issue Reporter: We have plans for reimplementing this important functionality. Given all the other required changes, this one has lower priority, however, and likely won’t happen before the initial release.

If you are really adventurous you can install a current development build here. There is still much work ahead however.

What about applications other than Firefox Desktop?

The deadline only affects Firefox Desktop for now; in other applications classic extensions will still work. However, it currently looks like by Firefox 57 the Web Extensions support in Firefox Mobile will be sufficient to release a Web Extension there at the same time. If not, we still have the option to stick with our classic extension on Android. Update (2017-05-18): Mozilla announced that Firefox Mobile will drop support for classic extensions at the same time as Firefox Desktop. So the option to keep our classic extension there doesn’t exist, we’ll have to make with whatever Web Extensions APIs are available.

As to SeaMonkey and Thunderbird, things aren’t looking well there. It’s doubtful that these will have noteworthy Web Extensions support by November. In fact, it’s not even clear whether they plan to support Web Extensions at all. And unlike with Firefox Mobile, we cannot publish a different build for them (Addons.Mozilla.Org only allows different builds per operating system, not per application). So our users on SeaMonkey and Thunderbird will be stuck with an outdated Adblock Plus version.

What about extensions like Element Hiding Helper, Customizations and similar?

Sadly, we don’t have the resources to rewrite these extensions. We just released Element Hiding Helper 1.4, and it will most likely remain as the last Element Hiding Helper release. There are plans to integrate some comparable functionality into Adblock Plus, but it’s not clear at this point when and how it will happen.

Mozilla Addons BlogCompatibility Update: Add-ons on Firefox for Android

We announced our plans for add-on compatibility and the transition to WebExtensions in the Road to Firefox 57 blog post. However, we weren’t clear on what this meant for Firefox for Android.

We did this intentionally, since at the time the plan wasn’t clear to us either. WebExtensions APIs are landing on Android later than on desktop. Many of them either don’t apply or need additional work to be useful on mobile. It wasn’t clear if moving to WebExtensions-only on mobile would cause significant problems to our users.

The Plan for Android

After looking into the most critical add-ons for mobile and the implementation plan for WebExtensions, we have decided it’s best to have desktop and mobile share the same timeline. This means that mobile will be WebExtensions-only at the same time as desktop Firefox, in version 57. The milestones specified in the Road to Firefox 57 post now apply to all platforms.

The post Compatibility Update: Add-ons on Firefox for Android appeared first on Mozilla Add-ons Blog.

Air MozillaMozilla Roadshow Paris

Mozilla Roadshow Paris The Mozilla Roadshow is making a stop in Paris. Join us for a meetup-style, Mozilla-focused event series for people who build the Web. Hear from...

Chris H-CData Science is Hard: Anomalies Part 3

So what do you do when you have a duplicate data problem and it just keeps getting worse?

You detect and discard.

Specifically, since we already have a few billion copies of pings with identical document ids (which are extremely-unlikely to collide), there is no benefit to continue storing them. So what we do is write a short report about what the incoming duplicate looked like (so that we can continue to analyze trends in duplicate submissions), then toss out the data without even parsing it.
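A toy sketch of that detect-and-discard idea follows; it is not the real ingestion code, and unlike the real pipeline it still peeks into the payload to find the document id:

import json

def handle_ping(raw_ping, seen_ids, duplicate_reports):
    # Look at the envelope just far enough to get the document id
    # (assuming a top-level "id" field).
    doc_id = json.loads(raw_ping)["id"]
    if doc_id in seen_ids:
        # Keep a small report about the duplicate for trend analysis,
        # then drop the payload instead of storing yet another copy.
        duplicate_reports.append({"document_id": doc_id, "bytes": len(raw_ping)})
        return None
    seen_ids.add(doc_id)
    return raw_ping  # a new ping continues through normal processing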

As before, I’ll leave finding out the time the change went live as an exercise for the reader: [plot]

:chutten


The Mozilla BlogOne Step Closer to a Closed Internet

Today, the FCC voted on Chairman Ajit Pai’s proposal to repeal and replace net neutrality protections enacted in 2015. The verdict: to move forward with Pai’s proposal.

 

We’re deeply disheartened. Today’s FCC vote to repeal and replace net neutrality protections brings us one step closer to a closed internet.  Although it is sometimes hard to describe the “real” impacts of these decisions, this one is easy: this decision leads to an internet that benefits Internet Service Providers (ISPs), not users, and erodes free speech, competition, innovation and user choice.

This vote undoes years of progress leading up to 2015’s net neutrality protections. The 2015  rules properly place ISPs under “Title II” of the Communications Act of 1934, and through that well-tested basis of legal authority, prohibit ISPs from engaging in paid prioritization and blocking or throttling of web content, applications and services. These rules ensured a more open, healthy Internet.

Pai’s proposal removes the 2015 protections and re-re-classifies ISPs under “Title I,” which courts already have determined is insufficient for ensuring a truly neutral net. The result: ISPs would be able to once again prioritize, block and throttle with impunity. This means fewer opportunities for startups and entrepreneurs, and a chilling effect on innovation, free expression and choice online.

Net neutrality isn’t an abstract issue — it has significant, real-world effects. For example, in the past, without net neutrality protections, ISPs have imposed limits on who can FaceTime and determined how we stream videos, and also adopted underhanded business practices.

So what’s next and what can we do?

We’re now entering a 90-day public comment period, which ends in mid-August. The FCC may determine a path forward as soon as October of this year.

During the public comment period in 2015, nearly 4 million citizens wrote to the FCC, many of them demanding strong net neutrality protections.  We all need to show the same commitment again.

We’re already well on our way to making noise. In the weeks since Pai first announced his proposal, more than 100,000 citizens (not bots) have signed Mozilla’s net neutrality petition at mzl.la/savetheinternet. And countless callers (again, not bots) have recorded more than 50 hours of voicemail for the FCC’s ears. We need more of this.

We’re also planning strategic, direct engagement with policymakers, including through written comments in the FCC’s open proceeding. Over the next three months, Mozilla will continue to amplify internet users’ voices and fuel the movement for a healthy internet.

The post One Step Closer to a Closed Internet appeared first on The Mozilla Blog.

Air MozillaReps Weekly Meeting May 18, 2017

Reps Weekly Meeting May 18, 2017 This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

Air Mozilla2017 Global Sprint Ask Me Anything

2017 Global Sprint Ask Me Anything Questions and answers about the upcoming 2017 Mozilla Global Sprint, our worldwide collaboration party for the Open Web

Eric ShepherdA genuine medical oddity

My health continues to be an adventure. My neuropathy continues to worsen steadily; I no longer have any significant sensation in many of my toes, and my feet are always in a state of “pins and needles” style numbness. My legs are almost always tingling so hard they burn, or feel like they’re being squeezed in a giant fist, or both. The result is that I have some issues with my feet not always doing exactly what I expect them to be doing, and I don’t usually know exactly where they are.

For example, I have voluntarily stopped driving for the most part, because much of the time, sensation in my feet is so bad that I can’t always tell whether my feet are in the right places. A few times, I’ve found myself pressing the gas and brake pedals together because I didn’t realize my foot was too far to the left.

I also trip on things a lot more than I used to, since my feet wander a bit without my realizing it. On January 2, I tripped over a chair in my office while carrying an old CRT monitor to store it in my supply cabinet. I went down hard on my left knee and landed on the monitor I was carrying, taking it squarely to my chest. My chest was okay, just a little sore, but my knee was badly injured. The swelling was pretty brutal, and it is still trying to finish healing up more than four months later.

Given the increased problems with my leg pain, my neurologist recently had an MRI performed on my lumbar (lower) spine. An instance of severe nerve root compression was found which is possibly contributing to my pain and numbness in my legs. We are working to schedule for them to attempt to inject medication at that location to try to reduce the swelling that’s causing the compression. If successful, that could help temporarily relieve some of my symptoms.

But the neuropathic pain in my neck and shoulders continues as well. There is some discussion of possibly once again looking at using a neurostimulator implant to try to neutralize the pain signals that are being falsely generated. Apparently I’m once again eligible for this after a brief period where my symptoms shifted outside the range of those which are considered appropriate for that type of therapy.

In addition to the neurological issues, I am in the process of scheduling a procedure to repair some vascular leaks in my left leg, which may be responsible for some swelling there that could be in part responsible for some of my leg trouble (although that is increasingly unlikely given other information that’s come to light since we started scheduling that work).

Then you can top all that off with the side effects of all the meds I’m taking. I take at least six medications which have the side effect of “drowsiness” or “fatigue” or “sleepiness.” As a result, I live in a fog most of the time. Mornings and early afternoons are especially difficult. Just keeping awake is a challenge. Being attentive and getting things written is a battle. I make progress, but slowly. Most of my work happens in the afternoons and evenings, squeezed into the time between my meds easing up enough for me to think more clearly and alertly, and time for my family to get together for dinner and other evening activities together.

Balancing work, play, and personal obligations when you have this many medical issues at play is a big job. It’s also exhausting in and of itself. Add the exhaustion and fatigue that come from the pain and the meds, and being me is an adventure indeed.

I appreciate the patience and the help of my coworkers and colleagues more than I can begin to say. Each and every one of you is awesome. I know that my unpredictable work schedule (between having to take breaks because of my pain and the vast number of appointments I have to go to) causes headaches for everyone. But the team has generally adapted to cope with my situation, and that above all else is something I’m incredibly grateful for. It makes my daily agony more bearable. Thank you. Thank you. Thank you.

Thank you.

Doug BelshawFake News and Digital Literacies: some resources

Free Hugs CC BY-NC-ND clement127

In a couple of weeks’ time, on Thursday, 1st June 2017, I’ll be a keynote speaker at an online Library 2.0 event, convened by Steve Hargadon. The title is Digital Literacy and Fake News and you can register for it here. An audience of around 5,000 people from all around the world is expected to hear us discuss the following:

What does “digital literacy” mean in an era shaped by the Internet, social media, and staggering quantities of information? How is it that the fulfillment of human hopes for an open knowledge society seems to have resulted in both increased skepticism of, and casualness with, information? What tools and understanding can library professionals bring to a world that seems to be dominated by fake news?

In preparation for the session, Steve has asked us all to provide a ‘Top 10’ of our own resources on the topic, as well as those from others that we’d recommend. In the spirit of working openly, I’m sharing in advance what I’ve just emailed to him.

I’ll be arguing that ‘Fake News’ is a distraction from more fundamental problems, including algorithmic curation of news feeds, micro-targeting of user groups, and the way advertising fuels the web economy.

 1. My resources

 2. Other resources

I hope you can join us live, or at least watch the recording afterwards! Don’t forget to sign up.


Comments? Questions? I’m off Twitter for May, but you can email me: hello@dynamicskillset.com

Image CC BY-NC-ND clement127

Daniel PocockHacking the food chain in Switzerland

A group has recently been formed on Meetup seeking to build a food computer in Zurich. The initial meeting is planned for 6:30pm on 20 June 2017 at ETH, (Zurich Centre/Zentrum, Rämistrasse 101).

The question of food security underlies many of the world's problems today. In wealthier nations, we are being called upon to trust a highly opaque supply chain and our choices are limited to those things that major supermarket chains are willing to stock. A huge transport and storage apparatus adds to the cost and CO2 emissions and detracts from the nutritional value of the produce that reaches our plates. In recent times, these problems have been highlighted by the horsemeat scandal, the Guacapocalypse and the British Hummus crisis.

One interesting initiative to create transparency and encourage diversity in our diets is the Open Agriculture (OpenAg) Initiative from MIT, summarised in this TED video from Caleb Harper. The food produced is healthier and fresher than anything you might find in a supermarket and has no exposure to pesticides.

An open source approach to food

An interesting aspect of this project is the promise of an open source approach. The project provides hardware plans, a video of the build process, source code and the promise of sharing climate recipes (scripts) to replicate the climates of different regions, helping ensure it is always the season for your favourite fruit or vegetable.

Do we need it?

Some people have commented on the cost of equipment and electricity. Carsten Agger recently blogged about permaculture as a cleaner alternative. While there are many places where people can take that approach, there are also many overpopulated regions and cities where it is not feasible. Some countries, like Japan, have an enormous population and previously productive farmland contaminated by industry, such as the Fukushima region. Growing our own food also has the potential to reduce food waste, as individual families and communities can grow what they need.

Whether it is essential or not, the food computer project also provides a powerful platform to educate people about food and climate issues and an exciting opportunity to take the free and open source philosophy into many more places in our local communities. The Zurich Meetup group has already received expressions of interest from a diverse group including professionals, researchers, students, hackers, sustainability activists and free software developers.

Next steps

People who want to form a group in their own region can look in the forum topic "Where are you building your Food Computer?" to find out if anybody has already expressed interest.

Which patterns from the free software world can help more people build more food computers? I've already suggested using Debian's live-wrapper to distribute a runnable ISO image that can boot from a USB stick, can you suggest other solutions like this?

Can you think of any free software events where you would like to see a talk or exhibit about this project? Please suggest them on the OpenAg forum.

There are many interesting resources about the food crisis, an interesting starting point is watching the documentary Food, Inc.

If you are in Switzerland, please consider attending the meeting at 6:30pm on 20 June 2017 at ETH (Centre/Zentrum), Zurich.

One final thing to contemplate: if you are not hacking your own food supply, who is?

Air MozillaThe Joy of Coding - Episode 100

The Joy of Coding - Episode 100 mconley livehacks on real Firefox bugs while thinking aloud.

Kim MoirThe Manager’s Path in review

I ordered Camille Fournier’s book on engineering management when it was in pre-order.  I was delighted when it arrived in the mail.  If you work in the tech industry, manager or not, this book is a must read.

The book is a little over 200 pages, but it packs in more succinct and useful advice than other, longer books on this topic that I’ve read in the past.  It takes a unique approach in that the first chapter describes what to expect from your manager as a new individual contributor.  Each chapter then moves on: how to help others as a mentor, as a tech lead, managing other people, managing a team, managing multiple teams, and further up the org chart.  At the end of each chapter, there are questions that relate to the chapter so you can assess your own experience with the material.


Some of the really useful advice in the book

  • Choose your manager wisely: consider who you will be reporting to when interviewing.  “Strong managers know how to play the game at their company.  They can get you promoted; they can get you attention and feedback from important people.  Strong managers have strong networks, and can get you jobs even after you stop working for them.”
  • How to be a great tech lead – understand the architecture, be a team player, lead technical discussions, and communicate: “Your productivity is now less important than the productivity of the whole team…. If one universal talent separates successful leaders from the pack, it’s communication skills.  Successful leaders write well, they read carefully, and they can get up in front of a group and speak.”
  • On transitioning to a job managing a team “As much as you may want to believe that management is a natural progression of the skills you develop as a senior engineer, it’s really a whole new set of skills and challenges.”  So many people think that and don’t take the time to learn new skills before taking on the challenge of managing a team.
  • One idea I thought was really fantastic was to create a 30/60/90 day plan for new hires or new team members to establish clear goals to ensure they are meeting expectations on getting up to speed.
  • Camille also discusses the perils of micromanagement and how this can be a natural inclination for people who were deeply technical before becoming managers.  Instead of focusing on the technical details, you need to focus on giving people autonomy over their work, to keep them motivated and engaged.
  • On giving performance reviews – use concrete examples from anonymous peer reviews to avoid bias.  Spend plenty of time preparing: start early and focus on accomplishments and strengths.  When describing areas for improvement, keep it focused.  Avoid surprises during the review process; if a person is under-performing, the review should not be the first time they learn this.
  • From the section on debugging dysfunctional teams, the example was given of a team that wasn’t shipping: the team only released once a week and the release process was very time-consuming and painful.  Once the release process was more automated and ran more regularly, the team became more productive.  In other words, sometimes dysfunctional teams are due to resource constraints or broken processes.
  • Be kind, not nice. It’s kind to tell someone that they aren’t ready for promotion and describe the steps that they need to get to the next level.  It’s not kind to tell someone that they should get promoted, but watch them fail.  It’s kind to tell someone that their behaviour is disruptive, and that they need to change it.
  • Don’t be afraid.  Avoiding conflict because of fear will not resolve the problems on your team.
  • “As you grow more into leadership positions, people will look to you for behavioral guidance. What you want to teach them is how to focus.  To that end, there are two areas I encourage you to practice modeling, right now: figuring out what’s important, and going home.”  💯💯💯
  • Suggestions for roadmapping uncertainty – be realistic about the probability of change, and break down projects into smaller deliverables so that if you don’t implement the entire larger project, you have still implemented some of the project’s intended results.
  • Chapter 9 on bootstrapping culture discusses how the role of a senior engineering leader is not just to set technical direction, but to be clear and thoughtful about setting the culture of the engineering team.
  • I really like this paragraph on hiring for the values of the team


  • The bootstrapping culture chapter finishes with notes about code review, running outage postmortems and architectural reviews which all have very direct and useful advice.

This book describes concrete and concise steps to be an effective leader as you progress through your career.  It also features the voices and perspectives of women in leadership, something some well-known books lack. I’ve read a lot of management books over the years, and while some have jewels of wisdom, I haven’t read one that is this densely packed with useful content.  It really makes you think – what is the most important thing I can be doing now to help others make progress and be happy with their work?

I’ve also found the writing, talks and perspectives of the following people on engineering leadership invaluable


Mozilla Open Policy & Advocacy BlogWorking Together Towards a more Secure Internet through VEP Reform

Today, Mozilla sent a letter to Congress expressing support for an important bill that has just been introduced: the Protecting Our Ability to Counter Hacking Act (PATCH Act). You can read more in this post from Denelle Dixon.

This bill focuses on a relatively unknown, but critical, piece of the U.S. government’s responsibility to secure our internet infrastructure: the Vulnerabilities Equities Process (VEP). The VEP is the government’s process for reviewing and coordinating the disclosure of vulnerabilities to folks who write code – like us – who can fix them in the software and hardware we all use (you can learn more about what we know here). However, the VEP is not codified in law, and lacks transparency and reporting on both the process policymakers follow and the considerations they take into account. The PATCH Act would address these gaps.

The cyberattack over the last week – using the WannaCry exploit from the latest Shadow Brokers release, and exploiting unpatched Windows computers – only emphasizes the need to work together and make sure that we’re all as secure as we can be. As we said earlier this week, these exploits might have been shared with Microsoft by the NSA – and that would be the right way to handle an exploit like this. If the government has exploits that have been compromised, they must disclose them to software companies before they can be used widely, putting users at risk. The lack of transparency around the government’s decision-making processes points to the importance of codifying and improving the Vulnerabilities Equities Process.

We’ve said before – many times – how important it is to work together to protect cybersecurity. Reforming the VEP is one key component of that shared responsibility, ensuring that the U.S. government shares vulnerabilities that put swaths of the internet at risk. The process was conceived in 2010 to improve our collective cybersecurity, and implemented in 2014 after the Heartbleed vulnerability put most of the internet at risk (for more information, take a look at this timeline). It’s time to take the next step and put this process into statute.

Last year, we wrote about five important reforms to the VEP we believe are necessary:

  • All security vulnerabilities should go through the VEP.
  • All relevant federal agencies involved in the VEP should work together using a standard set of criteria to ensure all risks and interests are considered.
  • Independent oversight and transparency into the processes and procedures of the VEP must be created.
  • The VEP should be placed within the Department of Homeland Security (DHS), with their expertise in existing coordinated vulnerability disclosure programs.
  • The VEP should be codified in law to ensure compliance and permanence.

Over the last year, we have seen many instances where hacking tools from the U.S. government have been posted online, and then used – by unknown adversaries – to attack users. Some of these included “zero days”, which left companies scrambling to patch their software and protect their users, without prior notice. It’s important that the government defaults to disclosing vulnerabilities, rather than hoarding them in case they become useful later. We hope they will instead work with technology companies to help protect all of us online.

The PATCH Act – introduced by Sen. Gardner, Sen. Johnson, Sen. Schatz, Rep. Farenthold, and Rep. Lieu – aims to codify and make the existing Vulnerabilities Equities Process more transparent. It’s relatively simple – a good thing, when it comes to legislation: it creates a VEP Board, housed at DHS, which will consider disclosure of vulnerabilities that some part of the government knows about. The VEP Board would make public the process and criteria they use to balance the relevant interests and risks – an important step – and publish reporting around the process. These reports would allow the public to consider whether the process is working well, without sharing classified information (saving that reporting for the relevant oversight entities). This would also make it easier to disclose vulnerabilities through DHS’ existing channels.

Mozilla looks forward to working with members of Congress on this bill, as well as others interested in VEP reform – and all the other government actors, in the U.S. and around the world, who seek to take action that would improve the security of the internet. We stand with you, ready to defend the security of the internet and its users.

The post Working Together Towards a more Secure Internet through VEP Reform appeared first on Open Policy & Advocacy.

The Mozilla BlogImproving Internet Security through Vulnerability Disclosure

Supporting the PATCH Act for VEP Reform

 

Today, Mozilla sent a letter to Congress in support of the Protecting Our Ability to Counter Hacking Act (PATCH Act) that was just introduced by Sen. Cory Gardner, Sen. Ron Johnson, Sen. Brian Schatz, Rep. Blake Farenthold, and Rep. Ted Lieu.

We support the PATCH Act because it aims to codify and make the existing Vulnerabilities Equities Process more transparent. The Vulnerabilities Equities Process (VEP) is the U.S. government’s process for reviewing and coordinating the disclosure of new vulnerabilities it learns about.

The VEP remains shrouded in secrecy, and is in need of process reforms to ensure transparency, accountability, and oversight. Last year, I wrote about five important reforms to the VEP we believe are necessary to make the internet more secure. The PATCH Act includes many of the key reforms, including codification in law to increase transparency and accountability.

For background, a vulnerability is a flaw – in design or implementation – that can be used to exploit or penetrate a product or system. We saw an example this weekend as a ransomware attack took unpatched systems by surprise – and you’d be surprised at how common they are if we don’t all work together to fix them. These vulnerabilities can put users and businesses at significant risk from bad actors. At the same time, exploiting these same vulnerabilities can also be useful for law enforcement and intelligence operations. It’s important to consider those equities when the government decides what to do.

If the government has exploits that have been compromised, they must disclose them to tech companies before those vulnerabilities can be used widely and put users at risk. The lack of transparency around the government’s decision-making processes here means that we should improve and codify the Vulnerabilities Equities Process in law. Read this Mozilla Policy blog post from Heather West for more details.

The internet is a shared resource and securing it is our shared responsibility. This means technology companies, governments, and even users have to work together to protect and improve the security of the internet.

We look forward to working with the U.S. government (and governments around the world) to improve disclosure of security vulnerabilities and better secure the internet to protect us all.

 

 

The post Improving Internet Security through Vulnerability Disclosure appeared first on The Mozilla Blog.

Daniel PocockBuilding an antenna and receiving ham and shortwave stations with SDR

In my previous blog on the topic of software defined radio (SDR), I provided a quickstart guide to using gqrx, GNU Radio and the RTL-SDR dongle to receive FM radio and the amateur 2 meter (VHF) band.

Using the same software configuration and the same RTL-SDR dongle, it is possible to add some extra components and receive ham radio and shortwave transmissions from around the world.

Here is the antenna setup from the successful SDR workshop at OSCAL'17 on 13 May:

After the workshop on Saturday, members of the OSCAL team successfully reconstructed the SDR and antenna at the Debian info booth on Sunday and a wide range of shortwave and ham signals were detected:

Here is a close-up look at the laptop, RTL-SDR dongle (above laptop), Ham-It-Up converter (above water bottle) and MFJ-971 ATU (on right):

Buying the parts

Each component, its approximate price, and its purpose:

  • RTL-SDR dongle (~ € 25): Converts radio signals (RF) into digital signals for reception through the USB port. It is essential to buy a dongle made for SDR with a TCXO; the generic RTL dongles for TV reception are not stable enough for anything other than TV.
  • Enamelled copper wire, 25 meters or more (~ € 10): The loop antenna itself. Thicker wire provides better reception and is more suitable for transmitting (if you have a license), but it is heavier. The antenna I've demonstrated at recent events uses 1mm thick wire.
  • 4 (or more) ceramic egg insulators (~ € 10): Attach the antenna to string or rope. Smaller insulators are better as they are lighter and less expensive.
  • 4:1 balun (from € 20): The actual ratio of the balun depends on the shape of the loop (square, rectangle or triangle) and the point where you attach the balun (middle, corner, etc). You may want to buy more than one balun, for example, a 4:1 balun and also a 1:1 balun to try alternative configurations. Make sure it is waterproof, has hooks for attaching a string or rope and an SO-239 socket.
  • 5 meter RG-58 coaxial cable with male PL-259 plugs on both ends (~ € 10): If using more than 5 meters, or if you want to use higher frequencies above 30MHz, use thicker, heavier and more expensive cables like RG-213. The cable must be 50 ohm.
  • Antenna Tuning Unit (ATU) (~ € 20 receive-only or second hand): I've been using the MFJ-971 for portable use and demos because of the weight. There are even lighter and cheaper alternatives if you only need to receive.
  • PL-259 to SMA male pigtail, up to 50cm, RG58 (~ € 5): Joins the ATU to the upconverter. The cable must be RG58 or another 50 ohm cable.
  • Ham It Up v1.3 up-converter (~ € 40): Mixes the HF signal with a signal from a local oscillator to create a new signal in the spectrum covered by the RTL-SDR dongle.
  • SMA (male) to SMA (male) pigtail (~ € 2): Joins the up-converter to the RTL-SDR dongle.
  • USB charger and USB type B cable (~ € 5): Used for power to the up-converter. A spare USB mobile phone charger plug may be suitable.
  • String or rope (€ 5): For mounting the antenna. A lighter and cheaper string is better for portable use, while a stronger and weather-resistant rope is better for a fixed installation.

Building the antenna

There are numerous online calculators for measuring the amount of enamelled copper wire to cut.

For example, for a centre frequency of 14.2 MHz on the 20 meter amateur band, the antenna length is 21.336 meters.
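
As a sanity check on whichever calculator you use, here is a minimal Python sketch using the common full-wave loop rule of thumb of 1005 feet divided by the frequency in MHz; individual calculators use slightly different constants, which is why this gives a figure close to, but not exactly, the 21.336 meters quoted above.

# Rough full-wave loop length; 1005 ft/MHz is a common rule of thumb and
# online calculators vary the constant slightly.
FEET_PER_METER = 3.28084

def loop_length_meters(freq_mhz):
    return (1005.0 / freq_mhz) / FEET_PER_METER

print(round(loop_length_meters(14.2), 2))  # prints 21.57 for the 20 meter band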

Add an extra 24 cm (extra 12 cm on each end) for folding the wire through the hooks on the balun.

After cutting the wire, feed it through the egg insulators before attaching the wire to the balun.

Measure the extra 12 cm at each end of the wire and wrap some tape around there to make it easy to identify in future. Fold it, insert it into the hook on the balun and twist it around itself. Use between four to six twists.

Strip off approximately 0.5cm of the enamel on each end of the wire with a knife, sandpaper or some other tool.

Insert the exposed ends of the wire into the screw terminals and screw it firmly into place. Avoid turning the screw too tightly or it may break or snap the wire.

Insert string through the egg insulators and/or the middle hook on the balun and use the string to attach it to suitable support structures such as a building, posts or trees. Try to keep it at least two meters from any structure. Maximizing the surface area of the loop improves the performance: a circle is an ideal shape, but a square or 4:3 rectangle will work well too.

For optimal performance, if you imagine the loop is on a two-dimensional plane, the first couple of meters of feedline leaving the antenna should be on the plane too and at a right angle to the edge of the antenna.

Join all the other components together using the coaxial cables.

Configuring gqrx for the up-converter and shortwave signals

Inspect the up-converter carefully. Look for the crystal and find the frequency written on the side of it. The frequency written on the specification sheet or web site may be wrong so looking at the crystal itself is the best way to be certain. On my Ham It Up, I found a crystal with 125.000 written on it, this is 125 MHz.

Launch gqrx, go to the File menu and select I/O devices. Change the LNB LO value to match the crystal frequency on the up-converter, with a minus sign. For my Ham It Up, I use the LNB LO value -125.000000 MHz.
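
To make the sign convention concrete, here is a minimal sketch of the arithmetic, assuming gqrx adds the LNB LO value to the hardware frequency to produce the displayed frequency, which is why the value is negative for an up-converter:

# The Ham It Up mixes a 14.2 MHz HF signal up to 14.2 + 125 = 139.2 MHz,
# which is what the RTL-SDR dongle actually has to tune.
crystal_mhz = 125.0
lnb_lo_mhz = -crystal_mhz        # the value entered in gqrx

displayed_mhz = 14.2             # the frequency you type into gqrx
hardware_mhz = displayed_mhz - lnb_lo_mhz
print(hardware_mhz)              # 139.2, the frequency the dongle tunes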

Click OK to close the I/O devices window.

On the Input Controls tab, make sure Hardware AGC is enabled.

On the Receiver options tab, change the Mode value. Commercial shortwave broadcasts use AM and amateur transmissions use single sideband (SSB): by convention, LSB is used for signals below 10MHz and USB is used for signals above 10MHz. To start exploring the 20 meter amateur band around 14.2 MHz, for example, use USB.

In the top of the window, enter the frequency, for example, 14.200 000 MHz.

Now choose the FFT Settings tab and adjust the Freq zoom slider. Zoom until the width of the display is about 100 kHz, for example, from 14.15 on the left to 14.25 on the right.

Click the Play icon at the top left to start receiving. You may hear white noise. If you hear nothing, check the computer's volume controls, move the Gain slider (bottom right) to the maximum position and then lower the Squelch value on the Receiver options tab until you hear the white noise or a transmission.

Adjust the Antenna Tuner knobs

Now that gqrx is running, it is time to adjust the knobs on the antenna tuner (ATU). Reception improves dramatically when it is tuned correctly. Exact instructions depend on the type of ATU you have purchased, here I present instructions for the MFJ-971 that I have been using.

Turn the TRANSMITTER and ANTENNA knobs to the 12 o'clock position and leave them like that. Turn the INDUCTANCE knob while looking at the signals in the gqrx window. When you find the best position, the signal strength displayed on the screen will appear to increase (the animated white line should appear to move upwards and maybe some peaks will appear in the line).

When you feel you have found the best position for the INDUCTANCE knob, leave it in that position and begin turning the ANTENNA knob clockwise looking for any increase in signal strength on the chart. When you feel that is correct, begin turning the TRANSMITTER knob.

Listening to a transmission

At this point, if you are lucky, some transmissions may be visible on the gqrx screen. They will appear as darker colours in the waterfall chart. Try clicking on one of them, the vertical red line will jump to that position. For a USB transmission, try to place the vertical red line at the left hand side of the signal. Try dragging the vertical red line or changing the frequency value at the top of the screen by 100 Hz at a time until the station is tuned as well as possible.

Try and listen to the transmission and identify the station. Commercial shortwave broadcasts will usually identify themselves from time to time. Amateur transmissions will usually include a callsign spoken in the phonetic alphabet. For example, if you hear "CQ, this is Victor Kilo 3 Tango Quebec Romeo" then the station is VK3TQR. You may want to note down the callsign, time, frequency and mode in your log book. You may also find information about the callsign in a search engine.

The video demonstrates reception of a transmission from another country; can you identify the station's callsign and find the operator's location?

If you have questions about this topic, please come and ask on the Debian Hams mailing list. The gqrx package is also available in Fedora and Ubuntu but it is known to crash on startup in Ubuntu 17.04. Users of other distributions may also want to try the Debian Ham Blend bootable ISO live image as a quick and easy way to get started.

Mozilla Addons BlogAdd-on Compatibility for Firefox 55

Firefox 55 will be released on August 8th. Here’s the list of changes that went into this version that can affect add-on compatibility. There is more information available in Firefox 55 for Developers, so you should also give it a look. Also, if you haven’t yet, please read our roadmap to Firefox 57.

General

Recently, we turned on a restriction on Nightly that only allows multiprocess add-ons to be enabled. You can use a preference to toggle it. Also, Firefox 55 is the first version to move directly from Nightly to Beta after the removal of the Aurora channel.

XPCOM and Modules

Let me know in the comments if there’s anything missing or incorrect on these lists. We’d like to know if your add-on breaks on Firefox 55.

The automatic compatibility validation and upgrade for add-ons on AMO will happen in a few weeks, so keep an eye on your email if you have an add-on listed on our site with its compatibility set to Firefox 54.

The post Add-on Compatibility for Firefox 55 appeared first on Mozilla Add-ons Blog.

QMOFirefox 54 Beta 7 Testday Results

Hello Mozillians!

As you may already know, last Friday – May 12th – we held a Testday event, for Firefox 54 Beta 7.

Thank you all for helping us make Mozilla a better place – Gabi Cheta, Ilse Macías, Juliano Naves, Athira Appu, Avinash Sharma, Iryna Thompson.

From India team: Surentharan R.A, Fahima Zulfath, Baranitharan.M,  Sriram, vignesh kumar.

From Bangladesh team: Nazir Ahmed Sabbir, Rezaul Huque Nayeem, Md.Majedul islam, Sajedul Islam, Saddam Hossain, Maruf Rahman, Md.Tarikul Islam Oashi, Md. Ehsanul Hassan, Meraj Kazi, Sayed Ibn Masud, Tanvir Rahman, Farhadur Raja Fahim, Kazi Nuzhat Tasnem, Md. Rahimul Islam, Md. Almas Hossain, Saheda Reza Antora, Fahmida Noor, Muktasib Un Nur, Mohammad Maruf Islam,  Rezwana Islam Ria, Tazin Ahmed, Towkir Ahmed, Azmina Akter Papeya

Results:

– several test cases executed for the Net Monitor MVP and Firefox Screenshots features;

– 2 new bugs filed: 1364771, 1364773.

– 15 bugs verified: 1167178, 1364090, 1363288, 1341258, 1342002, 1335869, 776254, 1363737, 1363840, 1327691, 1315550, 1358479, 1338036, 1348264 and 1361247.

Again thanks for another successful testday! 🙂

We hope to see you all in our next events, all the details will be posted on QMO!

Hacks.Mozilla.OrgHaving fun with physics and A-Frame

A-Frame is a WebVR framework to build virtual reality experiences. It comes with some bundled components that allow you to easily add behavior to your VR scenes, but you can download more –or even create your own.

In this post I’m going to share how I built a VR scene that integrates a physics engine via a third-party component. While A-Frame allows you to add objects and behaviors to a scene, if you want those objects to interact with each other or be manipulated by the user, you might want to use a physics engine to handle the calculations you’ll need. If you are new to A-Frame, I recommend you check out the Getting started guide and play with it a bit first.

The scene I created is a bowling alley that works with the HTC Vive headset. You have a ball in your right hand which you can throw by holding the right-hand controller trigger button and releasing it as you move your arm. To return the ball back to your hand and try again, press the menu button. You can try the demo here! (Note: You will need Firefox Nightly and an HTC Vive. Follow the setup instructions in WebVR.rocks)

The source code is at your disposal on GitHub to tweak and have fun with.

Adding a physics engine to A-Frame

I’ve opted for aframe-physics-system, which uses Cannon.js under the hood. Cannon is a pure JavaScript physics engine (not a version compiled to asm.js from C/C++), so we can easily interface with it – and peek at its code.

aframe-physics-system is middleware that initialises the physics engine and exposes A-Frame components for us to apply to entities. When we use its static-body or dynamic-body components, aframe-physics-system creates a Cannon.Body instance and “attaches” it to our A-Frame entities, so on every frame it adjusts the entity’s position, rotation, etc. to match the body’s.

If you wish to use a different engine, take a look at aframe-physics-system or aframe-physics-components. These components are not very complex and it should not be complicated to mimic their behavior with another engine.

Static and dynamic bodies

Static bodies are those which are immovable. Think of the ground, or walls that can’t be torn down, etc. In the scene, the immovable entities are the ground and the bumpers on each side of the bowling lane.

Dynamic bodies are those which move, bounce, topple etc. Obviously the ball and the bowling pins are our dynamic bodies. Note that since these bodies move and can fall, or collide and knock down other bodies, the mass property will have a big influence. Here’s an example for a bowling pin:

<a-cylinder dynamic-body="mass: 1" ...>

The avatar and the physics world

To display the “hands” of the user (i.e., to show the tracked VR controllers as hands) I used the vive-controls component, already bundled in A-Frame.

<a-entity vive-controls="hand: right" throwing-hand></a-entity>
<a-entity vive-controls="hand: left"></a-entity>

The challenge here is that the user’s avatar (“head” and “hands”) is not part of the physical world –i.e. it’s out of the physics engine’s scope, since the head and the hands must follow the user’s movement, without being affected by physical rules, such as gravity or friction.

In order for the user to be able to “hold” the ball, we need to fetch the position of the right controller and manually set the ball’s position to match this every frame. We also need to reset other physical properties, such as velocity.

This is done in the custom throwing-hand component (which I added to the entity representing the right hand), in its tick callback:

ball.body.velocity.set(0, 0, 0);
ball.body.angularVelocity.set(0, 0, 0);
ball.body.quaternion.set(0, 0, 0, 1);
ball.body.position.set(position.x, position.y, position.z);

Note: a better option would have been to also match the ball’s rotation with the controller.

Throwing the ball

The throwing mechanism works like this: the user has to press the controller’s trigger and when she releases it, the ball is thrown.

There’s a method in Cannon.Body which applies a force to a dynamic body: applyLocalImpulse. But how much impulse should we apply to the ball and in which direction?

We can get the right direction by calculating the velocity of the throwing hand. However, since the avatar isn’t handled by the physics engine, we need to calculate the velocity manually:

let velocity = currentPosition.vsub(lastPosition).scale(1/delta);

Also, since the mass of the ball is quite high (to give it more “punch” against the pins), I had to add a multiplier to that velocity vector when applying the impulse:

ball.body.applyLocalImpulse(
  velocity.scale(50),
  new CANNON.Vec3(0, 0, 0)
);

Note: If I had allowed the ball to rotate to match the controller’s rotation, I would have needed to apply that rotation to the velocity vector as well, since applyLocalImpulse works with the ball’s local coordinates system.

To detect when the controller’s trigger has been released, the only thing needed is a listener for the triggerup event in the entity representing the right hand. Since I added my custom throwing-hand component there, I set up the listener in its init callback:

this.el.addEventListener('triggerup', function (e) {
  // ... throw the ball
});

A glitch

At the beginning, I was simulating the throw by pressing the space bar key. The code looked like this:

document.addEventListener('keyup', function (e) {
  if (e.keyCode === 32) { // spacebar
    e.preventDefault();
    throwBall();
  }
});

However, this was outside of the A-Frame loop, and the computation of the throwing hand’s lastPosition and currentPosition was out of sync, and thus I was getting odd results when calculating the velocity.

This is why I set a flag instead of calling throwBall directly, and then, inside the throwing-hand’s tick callback, throwBall is called if that flag is set to true.

Another glitch: shaking pins

Using the aframe-physics-system’s default settings I noticed a glitch when I scaled down the bowling pins: They were shaking and eventually falling to the ground!

This can happen when using a physics engine if the computations are not precise enough: a small error carries over frame by frame, it accumulates and… you have things crumbling or tumbling down, especially if those things are small – less error is needed for the changes to become noticeable.

One workaround for this is to increase the accuracy of the physics simulation — at the expense of performance. You can control this with the iterations setting at the aframe-physics-system’s component configuration (by default it is set to 10). I increased it to 20:

<a-scene physics="iterations: 20">

To better see the effects of this change, here is a comparison side by side with iterations set to 5 and 20:


The “sleep” feature of Cannon provides another possible workaround to handle this specific situation without affecting performance. When an object is in sleep mode, physics won’t make it move until it wakes upon collision with another object.

Your turn: play with this!

I have uploaded the project to Glitch as well as to a Github repository in case you want to play with it and make your own modifications. Some things you can try:

  • Allow the player to use both hands (maybe with a button to switch the ball from one hand to the other?)
  • Automatically reset the bowling pins to their original position once they have all fallen. You can check the rotation of their bodies to implement this.
  • Add sound effects! There is a callback for collision events you can use to detect when the ball has collided with another element… You can add sound effects for when the ball crashes against the pins, or when it hits the ground.

If you have questions about A-Frame or want to get more involved in building WebVR with A-Frame, check out our active community on Slack. We’d love to see what you’re working on.

Christian HeilmannYou don’t owe the world perfection! – keynote at Beyond Tellerrand

Yesterday morning I was lucky enough to give the opening keynote at the excellent Beyond Tellerrand conference in Dusseldorf, Germany. I wrote a talk for the occasion that covered a strange disconnect that we’re experiencing at the moment.
Whilst web technology has advanced in leaps and bounds, we still seem to be discontent all the time. I called this the Tetris mind set: all our mistakes are perceived as piling up whilst our accomplishments vanish.

Eva-Lotta Lamm created some excellent sketchnotes on my talk.
Sketchnotes of the talk

The video of the talk is already available on Vimeo:

Breaking out of the Tetris mind set from beyond tellerrand on Vimeo.

You can get the slides on SlideShare:

I will follow this up with a more in-depth article on the subject in due course, but for today I am very happy how well received the keynote was and I want to remind people that it is OK to build things that don’t last and that you don’t owe the world perfection. Creativity is a messy process and we should feel at ease about learning from mistakes.

Nick DesaulniersSubmitting Your First Patch to the Linux kernel and Responding to Feedback

After working on the Linux kernel for Nexus and Pixel phones for nearly a year, and messing around with the excellent Eudyptula challenge, I finally wanted to take a crack at submitting patches upstream to the Linux kernel.

This post is woefully inadequate compared to the existing documentation, which should be preferred.

I figure I’d document my workflow, now that I’ve gotten a few patches accepted (and so I can refer to this post rather than my shell history…). Feedback welcome (open an issue or email me).

Step 1: Setting up an email client

I mostly use git send-email for sending patch files. In my ~/.gitconfig I have added:

[sendemail]
  ; setup for using git send-email; prompts for password
  smtpuser = myemailaddr@gmail.com
  smtpserver = smtp.googlemail.com
  smtpencryption = tls
  smtpserverport = 587

To send patches through my gmail account. I don’t add my password so that I don’t have to worry about it when I publish my dotfiles. I simply get prompted every time I want to send an email.

I use mutt to respond to threads when I don’t have a patch to send.

Step 2: Make fixes

How do you find a bug to fix? My general approach to finding bugs in open source C/C++ code bases has been using static analysis, a different compiler, and/or more compiler warnings turned on. The kernel also has an instance of bugzilla running as an issue tracker. Work out of a new branch, in case you choose to abandon it later. Rebase your branch before submitting (pull early, pull often).

Step 3: Thoughtful commit messages

I always run git log <file I modified> to see some of the previous commit messages on the file I modified.

$ git log arch/x86/Makefile

commit a5859c6d7b6114fc0e52be40f7b0f5451c4aba93
...
    x86/build: convert function graph '-Os' error to warning
commit 3f135e57a4f76d24ae8d8a490314331f0ced40c5
...
    x86/build: Mostly disable '-maccumulate-outgoing-args'

The first words of commit messages in Linux are usually <subsystem>/<sub-subsystem>: <descriptive comment>.

Let’s commit: git commit <files> -s. We use the -s flag to git commit to add our signoff. Signing off on your patches is standard and notes your agreement to the Linux Kernel Certificate of Origin.

Step 4: Generate Patch file

git format-patch HEAD~. You can use git format-patch HEAD~<number of commits to convert to patches> to turn multiple commits into patch files. These patch files will be emailed to the Linux Kernel Mailing List (lkml). They can be applied with git am <patchfile>. I like to back these files up in another directory for future reference, and cause I still make a lot of mistakes with git.

Step 5: checkpatch

You’re going to want to run the kernel’s linter before submitting. It will catch style issues and other potential issues.

$ ./scripts/checkpatch.pl 0001-x86-build-don-t-add-maccumulate-outgoing-args-w-o-co.patch
total: 0 errors, 0 warnings, 9 lines checked

0001-x86-build-don-t-add-maccumulate-outgoing-args-w-o-co.patch has no obvious style problems and is ready for submission.

If you hit issues here, fix up your changes, update your commit with git commit --amend <files updated>, rerun format-patch, then rerun checkpatch until you’re good to go.

Step 6: email the patch to yourself

This is good to do when you’re starting off. While I use mutt for responding to email, I use git send-email for sending patches. Once you’ve gotten a hang of the workflow, this step is optional, more of a sanity check.

$ git send-email \
0001-x86-build-require-only-gcc-use-maccumulate-outgoing-.patch

You don’t need to use command line arguments to cc yourself, assuming you set up git correctly, git send-email should add you to the cc line as the author of the patch. Send the patch just to yourself and make sure everything looks ok.

Step 7: fire off the patch

Linux is huge, and has a trusted set of maintainers for various subsystems. The MAINTAINERS file keeps track of these, but Linux has a tool to help you figure out where to send your patch:

$ ./scripts/get_maintainer.pl 0001-x86-build-don-t-add-maccumulate-outgoing-args-w-o-co.patch
Person A <person@a.com> (maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT))
Person B <person@b.com> (maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT))
Person C <person@c.com> (maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT))
x86@kernel.org (maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT))
linux-kernel@vger.kernel.org (open list:X86 ARCHITECTURE (32-BIT AND 64-BIT))

With some additional flags, we can feed this output directly into git send-email.

$ git send-email \
--cc-cmd='./scripts/get_maintainer.pl --norolestats 0001-my.patch' \
--cc person@a.com \
0001-my.patch

Make sure to cc yourself when prompted. Otherwise if you don’t subscribe to LKML, then it will be difficult to reply to feedback. It’s also a good idea to cc any other author that has touched this functionality recently.

Step 8: monitor feedback

Patchwork for the LKML is a great tool for tracking the progress of patches. You should register an account there. I highly recommend bookmarking your submitter link. In Patchwork, click any submitter, then Filters (hidden in the top left), change submitter to your name, click apply, then bookmark it. Here’s what mine looks like. Not much today, and mostly trivial patches, but hopefully this post won’t age well in that regard.

Feedback may or may not be swift. I think my first patch I had to ping a couple of times, but eventually got a response.

Step 9: responding to feedback

Update your file, run git commit <changed files> --amend to update your latest commit, then git format-patch -v2 HEAD~, and edit the patch file to put your notes on what changed below the dashes that follow the signed-off lines (example). Rerun checkpatch, and rerun get_maintainer if the files you modified changed since V1. Next, you need to find the Message-ID to respond to the thread properly.

In gmail, when viewing the message I want to respond to, you can click “Show Original” from the dropdown near the reply button. From there, copy the MessageID from the top (everything in the angle brackets, but not the brackets themselves). Finally, we send the patch:

$ git send-email \
--cc-cmd='./scripts/get_maintainer.pl --norolestats 0001-my.patch' \
--cc person@a.com \
--in-reply-to 2017datesandletters@somehostname \
0001-my.patch
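
If you save the raw message from “Show Original” to a file, a small Python sketch using only the standard library can pull that Message-ID out for you (the file name here is just an example):

# Extract the Message-ID header from a raw email saved via "Show Original";
# strip the angle brackets so it can be passed to --in-reply-to.
import email

def message_id(path):
    with open(path) as f:
        msg = email.message_from_file(f)
    return msg["Message-ID"].strip().strip("<>")

print(message_id("original_message.eml"))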

We make sure to add anyone who may have commented on the patch from the mailing list to keep them in the loop. Rinse and repeat 2 through 9 as desired until patch is signed off/acked or rejected.

I’ve added this handy shell function to my ~/.zshrc:

function kpatch () {
  patch=$1
  shift
  git send-email \
    --cc-cmd="./scripts/get_maintainer.pl --norolestats $patch" \
    $@ $patch
}

That I can then invoke like:

kpatch 0001-Input-mousedev-fix-implicit-conversion-warning.patch --cc anyone@else.com

where anyone@else.com is anyone I want to add in addition to what get_maintainer suggests.

Finding out when your patch gets merged is a little tricky; each subsystem maintainer seems to do things differently. My first patch, I didn’t know it went in until a bot at Google notified me. The maintainers for the second and third patches had bots notify me that it got merged into their trees, but when they send Linus a PR and when that gets merged isn’t immediately obvious.

It’s not like Github where everyone involved gets an email that a PR got merged and the UI changes. While there are pros and cons to having this fairly decentralized process, and while it kind of is git’s original designed-for use case, I’d be remiss not to mention that I really miss Github. Getting your first patch acknowledged and even merged is intoxicating and makes you want to contribute more; radio silence has the opposite effect.

Happy hacking!

(Thanks to Reddit user /u/EliteTK for pointing out that -v2 was more concise than --subject-prefix="Patch vX").

J.C. JonesAnalyzing Let's Encrypt statistics via Map/Reduce

I've been supplying the statistics for Let's Encrypt since they've launched. In Q4 of 2016 their volume of certificates exceeded the ability of my database server to cope, and I moved it to an Amazon RDS instance.

Ow.

Amazon's RDS service is really excellent, but paying out of pocket hurts.

I've been slowly re-building my existing Golang/MySQL tools into a Golang/Python toolchain over the past few months. This switches from a SQL database with flexible, queryable columns to an EBS volume of folders containing certificates.

The general structure is now:

/ct/state/Y3QuZ29vZ2xlYXBpcy5jb20vaWNhcnVz
/ct/2017-08-13/qEpqYwR93brm0Tm3pkVl7_Oo7KE=.pem
/ct/2017-08-13/oracle.out
/ct/2017-08-13/oracle.out.offsets
/ct/2017-08-14/qEpqYwR93brm0Tm3pkVl7_Oo7KE=.pem

Fetching CT

Underneath /ct/state exists the state of the log-fetching utility, which is now a Golang tool named ct-fetch. It's mostly the same as the ct-sql tool I have been using, but rather than interact with SQL, it simply writes to disk.

This program creates folders for each notAfter date seen in a certificate. It appends each certificate to a file named for its issuer, so the path structure for cert data looks like:

/BASE_PATH/NOT_AFTER_DATE/ISSUER_BASE64.pem

The Map step

A Python 3 script ct-mapreduce-map.py processes each date-named directory. Inside, it reads each .pem and .cer file, decoding all the certificates within and tallying them up. When it's done in the directory, it writes out a file named oracle.out containing the tallies, and also a file named oracle.out.offsets with information so it can pick back up later without starting over.

map script running

The tallies contain, for each issuer:

  1. The number of certificates issued each day
  2. The set of all FQDNs issued
  3. The set of all Registered Domains (eTLD + 1 label) issued

The resulting oracle.out is large, but much smaller than the input data.

The Reduce step

Another Python 3 script ct-mapreduce-reduce.py finds all oracle.out files in directories whose names aren't in the past. It reads each of these in and merges all the tallies together.

The result is the aggregate of all of the per-issuer data from the Map step.
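
As a rough illustration of that merge (the field names below are my own and not necessarily what oracle.out actually stores), the per-issuer daily counts are summed while the FQDN and registered-domain sets are unioned:

# Hypothetical shape of a tally: {issuer: {"counts": {date: n},
#                                          "fqdns": set(), "regdoms": set()}}
from collections import defaultdict

def merge_tallies(tallies):
    merged = defaultdict(lambda: {"counts": defaultdict(int),
                                  "fqdns": set(), "regdoms": set()})
    for tally in tallies:                        # one tally per oracle.out
        for issuer, data in tally.items():
            entry = merged[issuer]
            for day, count in data["counts"].items():
                entry["counts"][day] += count    # certificates issued per day
            entry["fqdns"] |= data["fqdns"]      # all FQDNs seen
            entry["regdoms"] |= data["regdoms"]  # eTLD + 1 registered domains
    return merged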

This will then get converted into the data sets currently available at https://ct.tacticalsecret.com/

State

I'm not yet to the step of synthesizing the data sets; each time I compare this data with my existing data sets there are discrepancies - but I believe I've improved my error reporting enough that after another re-processing, the data will match.

Performance

Right now I'm trying to do all this with a free-tier AWS EC2 instance and 100 GB of EBS storage; it's not currently clear to me whether that will be able to keep up with Let's Encrypt's issuance volume. Nevertheless, even an EBS-optimized instance is much less expensive than an RDS instance, even if the approach is less flexible.

performance monitor

This Week In RustThis Week in Rust 182

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's crate of the week is PX8, a Rust implementation of an Open Source fantasy console. Thanks to hallucino for the suggestion.

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

125 pull requests were merged in the last week.

New Contributors

  • Bastien Orivel
  • Dennis Schridde
  • Eduardo Pinho
  • faso
  • Liran Ringel

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Style RFCs

Style RFCs are part of the process for deciding on style guidelines for the Rust community and defaults for Rustfmt. The process is similar to the RFC process, but we try to reach rough consensus on issues (including a final comment period) before progressing to PRs. Just like the RFC process, all users are welcome to comment and submit RFCs. If you want to help decide what Rust code should look like, come get involved!

We're making good progress and the style is coming together. If you want to see the style in practice, check out our example or use the Integer32 Playground and select 'Proposed RFC' from the 'Format' menu. Be aware that implementation is work in progress.

Issues in final comment period:

Good first issues:

We're happy to mentor these, please reach out to us in #rust-style if you'd like to get involved

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

Spent the last week learning rust. The old martial arts adage applies. Cry in the dojo, laugh in the battlefield.

/u/crusoe on Reddit.

Thanks to Ayose Cazorla for the suggestion.

Submit your quotes for next week!

This Week in Rust is edited by: nasa42, llogiq, and brson.

J.C. JonesOCSP Telemetry in Firefox

Firefox Telemetry is such an amazing tool for looking at the state of the web. Keeler and I have recently been investigating the state of OCSP, which is a method of revoking TLS certificates. Firefox is among the last of the web browsers to use OCSP on each secure website by default.

The telemetry keys that are relevant to our OCSP support are:

  • SSL_TIME_UNTIL_HANDSHAKE_FINISHED
  • CERT_VALIDATION_HTTP_REQUEST_SUCCEEDED_TIME
  • CERT_VALIDATION_HTTP_REQUEST_CANCELED_TIME
  • CERT_VALIDATION_HTTP_REQUEST_FAILED_TIME

The first (SSL_TIME_UNTIL_HANDSHAKE_FINISHED) is technically not an OCSP response time telemetry – it includes the server you're connecting to and may/may not have an OCSP fetch – but I include it as it’s highly correlated with successful responses (r=0.972) [1][2], and clearly related in the code. Before you get too excited though, the correlation likely only indicates that a user's network performance is generally similar between the web servers they're talking to and the OCSP servers being queried. Still, cool.

Otherwise the keys are what they say, and times are in milliseconds.

Speeding Up TLS Handshakes

We've been toying with ideas to speed up SSL_TIME_UNTIL_HANDSHAKE_FINISHED; the first of which was to reduce the timeout for OCSP for Domain-Validated certificates. Looking at the Continuous Distribution Function of CERT_VALIDATION_HTTP_REQUEST_SUCCEEDED_TIME, only 11% of OCSP handshakes complete after 1 second in flight: https://mzl.la/2ogNaSN
OCSP Success time (ms) CDF

Correspondingly, we decreased our timeout on DV OCSP from 2 seconds to 1 second.

This led to a very modest improvement in SSL_TIME_UNTIL_HANDSHAKE_FINISHED (~2%) in Firefox Nightly; this is probably because after one successful OCSP query, the result is cached, so only the first connection is slower, while the telemetry metric includes all connections - whether or not they had an OCSP query.

We've got some other tricks still to try.

Speed Trends

Among CAs in mailing lists there’s been a lot of talk about their efforts at improving OCSP response times. Looking at the aggregate information (as we can't break this down by CA), we’ve not really seen any improvement in response time over the last 18 months [3].

OCSP success time (ms) over time

Next steps

We're going to try some more ideas to speed up the initial TLS connection for both DV and EV certificates.

Footnotes

  1. Correlation between TLS handshake and OCSP Success https://i.have.insufficient.coffee/handshakevsocsp_success.png
  2. Correlation between TLS handshake and OCSP Failure https://i.have.insufficient.coffee/handshakevsocsp_fail.png
  3. OCSP Successes (median, 95th percentile) over time by date. Note: this took a while to load: https://mzl.la/2ogT8TJ

Mozilla Marketing Engineering & Ops BlogMozMEAO SRE Status Report - 5/16/2017

Here’s what happened on the MozMEAO SRE team from May 9th - May 16th.

Current work

Bedrock (mozilla.org)

Work continues on moving Bedrock to our Kubernetes infrastructure.

Postgres/RDS provisioning

A Postgres RDS instance has already been provisioned in us-east-1 for our Virginia cluster, and another was created in ap-northeast-1 to support the Tokyo cluster. Additionally, development, staging, and production databases were created in each region. This process was documented here.

Elastic Load Balancer (ELB) provisioning

We’ve automated the creation of ELB’s for Bedrock in Virginia and Tokyo. There are still a few more wrinkles to sort out, but the infra is mostly in place to begin The Big Move to Kubernetes.

MDN

Work continues to analyze the Apache httpd configuration from the current SCL3 datacenter config.

Downtime incident 2017-05-13

On May 13th, 2017, from 22:49 to 22:55, New Relic reported that MDN was unavailable. The site was slow to respond to page views, and was running long database queries. Log analysis showed a security scan of our database-intensive endpoints.

On May 14th, 2017, there were high I/O alerts on 3 of the 6 production web servers. This was not reflected in high traffic or a decrease in responsiveness.

Basket

The FxA team would like to send events (FXA_IDs) to Basket and Salesforce, and needed SQS queues in order to move forward. We automated the provisioning of dev/stage/prod SQS queues, and passed off credentials to the appropriate engineers.

The FxA team requested cross AWS account access to the new SQS queues. Access has been automated and granted via this PR.

Snippets

Snippets Stats Collection Issues 2017-04-10

A planned configuration change to add a Route 53 Traffic Policy for the snippets stats collection service caused a day’s worth of data to not be collected, due to an SSL certificate error.

Careers

Autoscaling

In order to take advantage of Kubernetes cluster and pod autoscaling (which we’ve documented here), app memory and CPU limits were set for careers.mozilla.org in our Virginia and Tokyo clusters. This allows the careers site to scale up and down based on load.
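
For readers unfamiliar with how these pieces fit together, here's a hedged sketch, using the official Kubernetes Python client, of the two things this paragraph describes: per-container CPU/memory limits (which the autoscaler needs in order to make decisions) and a Horizontal Pod Autoscaler for the deployment. The names, namespaces, and numbers below are placeholders, not our production values.

from kubernetes import client, config

config.load_kube_config()  # assumes a local kubeconfig pointing at the cluster

# 1. Set CPU/memory requests and limits on the app's container.
client.AppsV1Api().patch_namespaced_deployment(
    name="careers",            # placeholder deployment name
    namespace="careers-prod",  # placeholder namespace
    body={"spec": {"template": {"spec": {"containers": [{
        "name": "careers",
        "resources": {
            "requests": {"cpu": "250m", "memory": "256Mi"},
            "limits": {"cpu": "500m", "memory": "512Mi"},
        },
    }]}}}},
)

# 2. Autoscale the deployment between 2 and 6 replicas based on CPU usage.
hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="careers"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="careers"
        ),
        min_replicas=2,
        max_replicas=6,
        target_cpu_utilization_percentage=75,
    ),
)
client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="careers-prod", body=hpa
)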

Acceptance tests

Giorgos Logiotatidis added acceptance tests, consisting of a simple bash script and additional Jenkinsfile stages that check whether careers.mozilla.org pages return valid responses after deployment.
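
The actual check is a bash script driven from Jenkinsfile stages, but the idea is simple enough to sketch in a few lines of Python (the URL list here is illustrative, not the real set of pages):

import sys
import requests

PAGES = [
    "https://careers.mozilla.org/",
    "https://careers.mozilla.org/listings/",  # placeholder path
]

def check_pages(pages):
    failures = []
    for url in pages:
        try:
            response = requests.get(url, timeout=10)
            if response.status_code != 200:
                failures.append((url, str(response.status_code)))
        except requests.RequestException as exc:
            failures.append((url, str(exc)))
    return failures

if __name__ == "__main__":
    failed = check_pages(PAGES)
    for url, reason in failed:
        print("FAIL " + url + ": " + reason)
    sys.exit(1 if failed else 0)  # a non-zero exit fails the deployment stage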

Downtime incident 2017-04-11

A typo was merged and pushed to production, causing a couple of minutes of downtime before we rolled back to the previous version.

Decommission openwebdevice.org status

openwebdevice.org will remain operational in HTTP-only mode until the board approves decommissioning. A timeline is unavailable.

Future work

Nucleus

We’re planning to move nucleus to Kubernetes, and then proceed to decommissioning current nucleus infra.

Basket

We’re planning to move basket to Kubernetes shortly after the nucleus migration, and then proceed to decommissioning existing infra.

Mozilla Addons BlogAdd-ons Update – 2017/05

Here’s the state of the add-ons world this month.

The Road to Firefox 57 explains what developers should look forward to regarding add-on compatibility for the rest of the year, so please give it a read if you haven’t already.

The Review Queues

In the past month, our team reviewed 1,132 listed add-on submissions:

  • 944 in fewer than 5 days (83%).
  • 21 between 5 and 10 days (2%).
  • 167 after more than 10 days (15%).

969 listed add-ons are awaiting review.

For two weeks we’ve been automatically approving add-ons that meet certain criteria. It’s a small initial effort (~60 auto-approvals) which will be expanded in the future. We’re also starting an initiative this week to clear most of the review queues by the end of the quarter. The change should be noticeable in the next couple of weeks.

However, this doesn’t mean we won’t need volunteer reviewers in the future. If you’re an add-on developer and are looking for contribution opportunities, please consider joining us. Visit our wiki page for more information.

Compatibility

We published the compatibility blog post for Firefox 54 and ran the bulk validation script. We’ll publish the add-on compatibility post for Firefox 55 later this week.

Make sure you’ve tested your add-ons and either use WebExtensions or set the multiprocess compatible flag in your manifest to ensure they continue working in Firefox. And as always, we recommend that you test your add-ons on Beta.

You may also want to review the post about upcoming changes to the Developer Edition channel. Firefox 55 is the first version that will move directly from Nightly to Beta.

If you’re an add-ons user, you can install the Add-on Compatibility Reporter to identify and report any add-ons that aren’t working anymore.

Recognition

We would like to thank the following people for their recent contributions to the add-ons world:

  • psionikangel
  • lavish205
  • Tushar Arora
  • raajitr
  • ccarruitero
  • Christophe Villeneuve
  • Aayush Sanghavi
  • Martin Giger
  • Joseph Frazier
  • erosman
  • zombie
  • Markus Stange
  • Raajit Raj
  • Swapnesh Kumar Sahoo

You can read more about their work on our recognition page.

The post Add-ons Update – 2017/05 appeared first on Mozilla Add-ons Blog.

Eric RahmFirefox memory usage with multiple content processes

This is a continuation of my Are They Slim Yet series, for background see my previous installment.

With Firefox’s next release, 54, we plan to enable multiple content processes — internally referred to as the e10s-multi project — by default. That means if you have e10s enabled we’ll use up to four processes to manage web content instead of just one.

My previous measurements found that four content processes are a sweet spot for both memory usage and performance. As a follow-up, we wanted to run the tests again to confirm those conclusions and to make sure we’re testing what we plan to release. Additionally, I was able to work around our issues testing Microsoft Edge, and I have included both 32-bit and 64-bit versions of Firefox on Windows; 32-bit is currently our default, while 64-bit is a few releases out.

The methodology for the test is the same as in previous runs: I used the atsy project to load 30 pages and measure the memory usage of the various processes each browser spawns during that time.
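
atsy drives the browsers and page loads; the measurement side conceptually boils down to summing memory across every process a given browser has spawned. Here's a rough, standalone illustration of that idea with psutil; it is not the atsy harness, and the real comparisons are more careful about which memory metric they report.

import psutil

def total_browser_memory(process_name):
    """Sum unique set size (bytes) across all processes with this name."""
    total = 0
    for proc in psutil.process_iter(["name"]):
        try:
            if proc.info["name"] == process_name:
                # USS avoids double-counting pages shared between the
                # parent and its content processes.
                total += proc.memory_full_info().uss
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            continue
    return total

if __name__ == "__main__":
    # The process name is platform-dependent, e.g. "firefox.exe" on Windows.
    print(total_browser_memory("firefox.exe") / (1024 ** 2), "MiB")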

Without further ado, the results:

Graph of browser memory usage, Chrome uses a lot.

So we continue to see Chrome leading the pack in memory usage across the board: 2.4X the memory of Firefox 32-bit and 1.7X the memory of Firefox 64-bit on Windows. IE 11 does well; in fact, it was the only browser to beat Firefox. Its successor Edge, the default browser on Windows 10, appears to be striving for Chrome-level consumption. On macOS 10.12 we see Safari going the Chrome route as well.

Browsers included are the default versions of IE 11 and Edge 38 on Windows 10, Chrome Beta 59 on all platforms, Firefox Beta 54 on all platforms, and Safari Technology Preview 29 on macOS 10.12.4.

Note: For Safari I had to run the test manually; they seem to have made some changes that cause all the pages from my test to be loaded in the same content process.

The Mozilla BlogWannaCry is a Cry for VEP Reform

This weekend, a vulnerability in some versions of the Windows operating system resulted in the biggest cybersecurity attack in years. The so-called “WannaCry” malware relied on at least one exploit included in the latest Shadow Brokers release. As we have said repeatedly, attacks like this are a clarion call for reform of the government’s Vulnerabilities Equities Process (VEP).

The exploits may have been shared with Microsoft by the NSA. We hope that happened, as it would be the right way to handle a vulnerability like this. Sharing vulnerabilities with tech companies enables us to protect our users, including those within the government. If the government holds exploits that have been compromised, it must disclose them to software companies before they can be used widely, putting users at risk. The lack of transparency around the government’s decision-making process here means that we should improve and codify the Vulnerabilities Equities Process in law.

The WannaCry attack also shows the importance of security updates in protecting users. Microsoft patched the relevant vulnerabilities in a March update, but users who had not updated remain vulnerable. Mozilla has shared some resources to help users update their software, but much more needs to be done in this area.

The internet is a shared resource and securing it is our shared responsibility. This means technology companies, governments, and even users have to work together to protect and improve the security of the internet.

The post WannaCry is a Cry for VEP Reform appeared first on The Mozilla Blog.