The Mozilla Blog: Today's Firefox Aims to Reduce Your Online Annoyances

Almost a hundred years ago, John Maynard Keynes suggested that the industrial revolution would effectively end work for humans within a couple of generations, and that our biggest challenge would be figuring out what to do with all that free time. That definitely hasn't happened; we always seem to have plenty to do, much of it online. When you're on the web, you're trying to get things done, and online annoyances just get in the way. Whether it's autoplaying videos, page jumps or hunting for a topic across all your open tabs, Firefox can help. Today's Firefox release minimizes those online inconveniences and puts you back in control.

Block autoplaying content by default

Ever open a new page and all of a sudden get bombarded with noise? Well, worry no more. Starting next week, we will be rolling out the peace that silence brings with our latest feature, block autoplay. Here’s how to use block autoplay:

  • Scenario #1 – For anyone who wants peace and quiet on the web: Go to a site that plays videos or audio, such as a news site or a site known for hosting movies and television shows, and the Block Autoplay feature will stop the audio and video from playing automatically. If you want to view a video, simply click on the play button to watch it.

Some sites, like social media sites, automatically mute the sound but continue to play the video. In those cases, the new Block Autoplay feature will not stop the video from playing.

  • Scenario #2 – For the binge-watcher: If your weekend plans involve catching up on your favorite TV series, you'll want to make it interruption-free. To play videos continuously, hit play and all subsequent videos will play automatically, just as the site intended. This will apply to all streaming sites, including Netflix, Hulu and YouTube. To have videos autoplay starting from the very first one, add those sites to your permissions list.

To enable autoplay on your favorite websites, add them to your permissions list by visiting the control center — which you can find by clicking the circled lowercase "i" in the address bar. From there, go to Permissions and select "Allow" in the drop-down to automatically play media with sound.

From Permissions, you can choose to allow or block

No more annoying page jumps with smoother scrolling

Do you ever find yourself immersed in an online article when, all of a sudden, an image or ad loads at the top of the page and you lose your place? Images and ads often load more slowly than the written content on a page, and without scroll anchoring in place, you're left bouncing around the page. Today's release features scroll anchoring. Now the page remembers where you are, so that you aren't interrupted by slow-loading images or ads.

Search made easier and faster

Search is one of the most common things people do when they go online, so we are always looking for ways to streamline that experience. Today, we're improving the search experience to make it faster, easier and more convenient by enabling:

  • Searching within Multiple Tabs – Did you know that if you enter a '%' in your Awesome Bar, you can search the tabs on your computer? If you have more than one device on Firefox Sync, you can search the tabs on your other devices as well. Now you can also search from the tab overflow menu, which appears when you have a large number of tabs open in a window. When this happens, you'll see a down arrow to the right of the plus sign (where you typically open a new tab); this is the tab overflow menu. Simply click on it to find the new box for searching your tabs.
  • Searching in Private Browsing – Sometimes you’d prefer your search history to not be saved, like those times when you’re planning a surprise party or gift. Now, when you open a new tab in Private Browsing, you’ll see a search bar with your default search engine – Google, Bing, Amazon.com, DuckDuckGo, eBay, Twitter or Wikipedia. You can set your default search engine when you go to Preferences, Search, then Default Search Engine.

 

Additional features in today’s Firefox release include:

  • Keeping you safe with easy-to-understand security warnings – Whenever you visit a site, it's our job to make sure the site is safe. We review a security certificate, the site's proof of identity, before letting you visit. If something isn't right, you'll get a security warning. We've updated these warnings to be simple and straightforward about why the site might not be safe. To read more about how we created these warnings, visit here.
  • Web Authentication support for Windows Hello –  For the security-minded early adopters, we’re providing biometric support for Web Authentication using Windows Hello on Windows 10. With the upcoming release for Windows 10, users will be able to sign in to compatible websites using fingerprint or facial recognition, a PIN, or a security key. To learn more, visit our Security blog.
  • Improved experience for extension users – Previously, extensions stored their settings in individual files (commonly referred to as JSON files), which added time to loading a page. We made changes so that extensions now store their settings in a Firefox database. This makes it faster to get you to the sites you want to visit.

For the complete list of what’s new or what we’ve changed, you can review today’s full release notes.

Check out and download the latest version of Firefox Quantum, available here.


Mozilla Security Blog: Passwordless Web Authentication Support via Windows Hello

Firefox 66, being released this week, supports using the Windows Hello feature for Web Authentication on Windows 10, enabling a passwordless experience on the web that is hassle-free and more secure. Firefox has supported Web Authentication for all desktop platforms since version 60, but Windows 10 marks our first platform to support the new FIDO2 “passwordless” capabilities for Web Authentication.

A Windows 10 dialog box prompting for a Web Authentication credential

PIN Prompt on the Windows 10 April 2019 release

As of today, Firefox users on the Windows Insider Program’s fast ring can use any authentication mechanism supported by Windows for websites via Firefox. That includes face or fingerprint biometrics, and a wide range of external security keys via the CTAP2 protocol from FIDO2, as well as existing deployed CTAP1 FIDO U2F-style security keys. Try it out and give us feedback on your experience.

For the rest of Firefox users on Windows 10, the upcoming Windows 10 update this spring will enable this automatically.

Akshay Kumar from Microsoft’s Windows Security Team contributed this support to Firefox. We thank him for making this feature happen, and the Windows team for ensuring that all the Web Authentication features of Windows Hello were available to Firefox users.

For Firefox users running older versions of Windows, Web Authentication will continue to use our Rust-implemented CTAP1 protocol support for U2F-style USB security keys. We will continue work toward providing CTAP2/FIDO2 support on all of our other platforms, including older versions of Windows.

For Firefox ESR users, this Windows Hello support is currently planned for ESR 60.0.7, being released mid-May.

If you haven't used Web Authentication yet, adoption by major websites is underway. You can try it out at a variety of demo sites: https://webauthn.org/, https://webauthn.io/, https://webauthn.me/, https://webauthndemo.appspot.com/, or learn more about it on MDN.
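
To give a sense of what this looks like from a website's point of view, here is a rough sketch of a registration call using the Web Authentication API. It is a generic illustration rather than anything specific to Firefox or Windows Hello; the relying party name, user fields and challenge are placeholders that a real site would receive from its server.

// Rough, illustrative sketch of site-side WebAuthn registration.
async function registerCredential() {
  const credential = await navigator.credentials.create({
    publicKey: {
      // A real deployment uses a challenge generated by the server.
      challenge: crypto.getRandomValues(new Uint8Array(32)),
      rp: { name: "Example Site" },
      user: {
        id: new TextEncoder().encode("user-1234"), // opaque handle chosen by the site
        name: "user@example.com",
        displayName: "Example User",
      },
      pubKeyCredParams: [{ type: "public-key", alg: -7 }], // -7 = ES256
      authenticatorSelection: {
        authenticatorAttachment: "platform", // prefer a built-in authenticator such as Windows Hello
        userVerification: "required",        // require a PIN or biometric check
      },
    },
  });
  // The attestation response is then sent back to the server for verification.
  return credential;
}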

If you want to try the Windows Hello support in Firefox 66 on Windows 10 before the April 2019 update is released, you can do so via the Windows Insider program. You’ll need to use the “fast” ring of updates.


Andreas Tolfsen: Lunchtime brown bags

Andreas Tolfsen: Update from WebDriver meeting at TPAC

Andreas Tolfsen: What is libexec?

Andreas Tolfsen: geckodriver 0.11.1 released

Andreas Tolfsen: WebDriver update from TPAC 2015

Andreas Tolfsen: The case against visibility checks in WebDriver

Andreas Tolfsen: Making Mercurial log make sense

Andreas Tolfsen: The sorry state of women in tech

Andreas Tolfsen: WebDriver now a living standard

Andreas Tolfsen: Optimised Rust code in Gecko

Andreas Tolfsen: Hi, Mozilla!

The Mozilla Blog: Welcome Lindsey Shepard, VP Product Marketing

I’m excited to let you know that today, Lindsey Shepard joins us as our VP of Product Marketing.

Lindsey brings a wealth of experience from a variety of sectors ranging from consumer technology to the jewelry industry.

“I’m thrilled to be joining Mozilla, an organization that has always been a champion for user agency and data privacy, during this pivotal time in the tech industry. I’m looking forward to showcasing to people the iconic Firefox brand, along with its quickly-expanding offering of products and services that realistically and respectfully meet the needs and challenges of online life today.”

Most recently, Lindsey headed up corporate-level marketing for Facebook Inc., including leading product marketing for Facebook’s core products: News Feed, News, Stories, Civic Engagement, Privacy and Safety. Before joining Facebook, Lindsey led marketing at GoldieBlox, a Bay Area start-up focused on bridging the gender gap in STEM.

As our new VP of Product Marketing, Lindsey will be a core member of my marketing leadership team, responsible for building strong ties with our product organization. She will be a key driver of Mozilla's future growth, overseeing new product launches, nurturing existing products, ideating on key campaigns and go-to-market strategies, and evangelizing new innovations in internet technologies.

Lindsey will be based in the Bay Area and will share her time between our Mountain View and San Francisco offices. Please join me in welcoming Lindsey to Mozilla.


Hacks.Mozilla.Org: A Homepage for the JavaScript Specification

Screenshot of the TC39 website


Ecma TC39, the JavaScript Standards Committee, is proud to announce that we have shipped a website for following updates to the JavaScript specification. This is the first part of a two-part project aimed at improving our information distribution and documentation. The website provides links to our most significant documents, as well as a list of proposals that are near completion. Our goal is to help people find the information they need in order to understand the specification and our process.

While the website is currently an MVP and very simple, we have plans to expand it. These plans include a set of documentation about how we work. We will experiment with other features as the need arises.

The website comes as part of work that began last year to better understand how the community was accessing information about the work around the JavaScript specification. We did a series of in-person interviews, followed by a widely distributed survey to better understand what people struggled with. One of the biggest requests was that we publish and maintain a website that helps people find the information they are looking for.

Resource needs

The two most requested items with regard to resources were Learning Resources and a Website. These two are linked, but require very different types of work. Since this clearly highlighted the need for a website, we began work on this right away.

 

resource requests for the tc39

Aggregated tags in response to the question “What would you like to see as a resource for the language specification process?”

We identified different types of users: Learners who are discovering the specification for the first time, Observers of the specification who are watching proposal advancement, and Reference Users who need a central location where all of the significant documents can be found. The website was designed around these users. In order to not overwhelm people with information, the MVP is specifically focused on the most pertinent information, namely proposals in Stage 3 of our process. Links are contextualized in order to help people understand what documents they are looking at.

Stage 3 Proposal List


The website is very simple, but gives us a starting point from which to move forward. We are continuing to work on documenting our process. We hope to make more of these documents publicly available soon and to incorporate them into the website over time.

Developer frustrations

 

The survey surfaced a number of issues that have been impacting the community around JavaScript. Three of the top four frustrations were related to things that could be alleviated by building a website. One that was not directly related, but was heavily emphasized, was the unclear advancement of proposals. This was also surfaced in GitHub issues. It is challenging to resolve, but we are currently working through ideas. For the time being, we have added a link to the most recent presentation of each proposal. We also have a checklist in the TC39 Process document that is now being added to some proposals on GitHub.

TC39 developer frustrations

Aggregated tags in response to the question “Is there something we can do better, or that you find particularly frustrating right now?”

As part of the survey, we collected emails in order to get in touch later, as we were unsure how many responses we would get. The goal was to better understand specific concerns. However, we had an overwhelming amount of feedback that pointed us in the direction we needed to go. After reviewing this, we decided against keeping this personal information and to request feedback publicly on a case-by-case basis. Thank you to everyone who participated.

 

We are looking forward to your feedback and comments. This project was community-driven — thank you to everyone who made it possible!

 

codehag, xtuc, rkirsling, zoepage, chicoxyzzy, littledan, jasonwilliams, othree, ljharb, IgnoredAmbience, andreruffert, Regaddi, devsnek

QMO: Firefox 67 Beta 6 Testday, March 29th

Hello Mozillians,

We are happy to let you know that on Friday, March 29th, we are organizing the Firefox 67 Beta 6 Testday. We'll be focusing our testing on: Anti-tracking (Fingerprinting and Cryptominers) and Media playback & support.

Check out the detailed instructions via this etherpad.

No previous testing experience is required, so feel free to join us on the #qa IRC channel, where our moderators will offer you guidance and answer your questions.

Join us and help us make Firefox better!

See you on Friday!

Wladimir Palant: Should you be concerned about LastPass uploading your passwords to its server?

TL;DR: Yes, very much.

The issue

I've written a number of blog posts on LastPass security issues already. The latest one so far looked into the way the LastPass data is encrypted before it is transmitted to the server. The thing is: when your password manager uploads all data to its server backend, you normally want to be very certain that the data visible to the server is useless both to attackers who manage to compromise the server and to the company employees running that server. Early last year I reported a number of issues that allowed subverting LastPass encryption with comparatively little effort. The most severe issues have been addressed, so all should be good now?

Sadly, no. It is absolutely possible for a password manager to use a server for some functionality while not trusting it. However, LastPass has been designed in a way that makes taking this route very difficult. In particular, the decision to fall back to server-provided pages for parts of the LastPass browser extension functionality is highly problematic. For example, whenever you access Account Settings you leave the trusted browser extension and access a web interface presented to you by the LastPass server, something that the extension tries to hide from you. Some other extension functionality is implemented similarly.

The glaring hole

So back in November I discovered an API meant to accommodate this context switch from the extension to a web application and make it transparent to the user. Not sure how I managed to overlook it on my previous strolls through the LastPass codebase but the getdata and keyplug2web API calls are quite something. The response to these calls contains your local encryption key, the one which could be used to decrypt all your server-side passwords.

There have been a number of reports in the past about that API being accessible by random websites. I particularly liked this security issue uncovered by Tavis Ormandy which exploited an undeclared variable to trick LastPass into loosening up its API restrictions. Luckily, all of these issues have been addressed and by now it seems that only the lastpass.com and lastpass.eu domains can trigger these calls.

Oh, but the chances of some page within the lastpass.com or lastpass.eu domains being vulnerable aren't exactly low! Somebody thought of that, so there is an additional security measure. The extension will normally ignore any getdata or keyplug2web calls, only producing a single response after this feature has been unlocked. And it is unlocked by explicit user actions such as opening Account Preferences. This limits the danger considerably.

Except that the action isn't always triggered by the user. There is a "breach notification" feature where the LastPass server will send notifications with arbitrary text and a link to the user. If the user clicks the link here, the keyplug2web API will be unlocked and the page will get access to all of the user's passwords.

The attack

LastPass is run by LogMeIn, Inc., which is based in the United States. So let's say the NSA knocks on their door: "Hey, we need your data on XYZ so we can check their terrorism connections!" As we know by now, the NSA does these things, and it happens to random people as well, despite them not having any ties to terrorism. LastPass data on the server is worthless on its own, but the NSA might be able to pressure the company into sending a breach notification to this user. It's not hard to choose a message in such a way that the user will be compelled to click the link, e.g. "IMPORTANT: Your Google account might be compromised. Click to learn more." Once they click, it's all over; my proof-of-concept successfully downloaded all the data and decrypted it with the key provided. The page can present the user with an "All good, we checked it and your account isn't affected" message while the NSA walks away with the data.

The other scenario is of course a rogue company employee doing the same on their own. Here LastPass claims that there are internal processes to prevent employees from abusing their power in such a way. It’s striking however how their response mentions “a single person within development” — does it include server administrators or do we have to trust those? And what about two rogue employees? In the end, we have to take their word on their ability to prevent an inside job.

The fix

I reported this issue via Bugcrowd on November 22, 2018. As of LastPass 4.25.0.4 (released on February 28, 2019) this issue is considered resolved. The way I read the change, the LastPass server is still able to send users breach notifications with text and image that it can choose freely. Clicking the button (button text determined by the server) will still give the server access to all your data. Now there is additional text however saying: “LastPass has detected that you have used the password for this login on other sites, too. We recommend going to your account settings for this site, and creating a new password. Use LastPass to generate a unique, strong password for this account. You can then save the changes on the site, and to LastPass.” Ok, I guess this limits the options for social engineering slightly…

No changes to any of the other actions which will provide the server with the key to decrypt your data:

  • Opening Account Settings, Security Challenge, History, Bookmarklets, Credit Monitoring
  • Linking to a personal account
  • Adding an identity
  • Importing data if the binary component isn’t installed
  • Printing all sites

Some of these actions will prompt you to re-enter your master password. That's merely security theater, however: you can check that the g_local_key global variable is already set, which is all they need to decrypt your data.

One more comment on the import functionality: supposedly, a binary component is required to read a file. If the binary component isn't installed, LastPass will fall back to uploading your file to the server. The developers apparently missed that the API to make this work locally has been part of every browser released since 2012 (yes, that's seven years ago).
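
For illustration, and assuming the API in question is the File API's FileReader (broadly supported in browsers for years), here is a minimal sketch of reading a user-selected file entirely locally, with nothing uploaded anywhere:

// Minimal sketch: read a user-selected file locally with FileReader.
const input = document.querySelector('input[type="file"]');
input.addEventListener("change", () => {
  const file = input.files[0];
  if (!file) {
    return; // nothing selected
  }
  const reader = new FileReader();
  reader.onload = () => {
    // reader.result holds the file contents as text; nothing left the machine.
    console.log("Read", file.name, "-", reader.result.length, "characters");
  };
  reader.readAsText(file);
});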

Conclusion

I wrote the original version of this Stack Exchange answer in September 2016. Back then it already pointed out that mixing trusted extension user interface with web applications is a dangerous design choice. It makes it hard to secure the communication channels, something that LastPass has been struggling with a lot. But beyond that, there is also lots of implicit trust in the server’s integrity here. While LastPass developers might be inclined to trust their servers, users have no reason for that. The keys to all their online identities are data that’s too sensitive to entrust any company with it.

LastPass has always been stressing that they cannot access your passwords, so keeping them on their servers is safe. This statement has been proven wrong several times already, and the improvements so far aren’t substantial enough to make it right. LastPass design offers too many loopholes which could be exploited by a malicious server. So far they didn’t make a serious effort to make the extension’s user interface self-contained, meaning that they keep asking you to trust their web server whenever you use LastPass.

Ian Bicking: Open Source Doesn't Make Money Because It Isn't Designed To Make Money

Or: The Best Way To Do Something Is To At Least Try

We all know the story: you can’t make money on open source. Is it really true?

I’m thinking about this now because Mozilla would like to diversify its revenue in the next few years, and one constraint we have is that everything we do is open source.

There are dozens (hundreds?) of successful open source projects that have tried to become even just modest commercial enterprises, some very seriously. Results aren’t great.

I myself am trying to pitch a commercial endeavor in Mozilla right now (if writing up plans and sending them into the ether can qualify as “pitching”), and this question often comes up in feedback: can we sell something that is open source?

I have no evidence that we can (or can’t), but I will make this assertion: it’s hard to sell something that wasn’t designed to be sold.

We treat open source like it’s a poison pill for a commercial product. And yes, with an open source license it’s harder to force someone to pay for a product, though many successful businesses exist without forcing anyone.

I see an implicit assumption that makes it harder to think about this: the idea that if something is useful, it should be profitable. It's an unspoken and morally-infused expectation, a kind of Just World hypothesis: if something has utility, if it helps people, if it's something the world needs, if it empowers other people, then there should be a revenue opportunity. It should be possible for the thing to be your day job, to make money, to see some remuneration for your successful effort in creating or doing this thing.

That's what we think the world should be like, but we all know it isn't. You can't make a living making music. Or art. You can't even make a living taking care of children. I think this underlies many of this moment's critiques of capitalism: there are too many things that are important, even needed, or that fulfill us more than any profitable item, and yet are economically unsustainable.

I won’t try to fix that in this blog post, only note: not all good things make money.

But we know there is money in software. Lots of money! Is the money in secrets? If OpenSSL was secret, could it make money? If it had a licensing paywall, could it make money? Seems unlikely. The license isn’t holding it back. It’s just not shaped like something that makes money. Solving important problems isn’t enough.

So what can you get paid to do?

  1. People will pay a little for apps; not a lot, but a bit. Scaling up requires marketing and capital, which open source projects almost never have (and I doubt many open source projects would know what to do with capital if they had it).
  2. There’s always money in ads. Sadly. This could potentially offend someone enough to actually repackage your open source software with ads removed. As a form of price discrimination (e.g., paid ad removal) I think you could avoid defection.
  3. Fully-hosted services: Automattic’s wordpress.com is a good example here. Is Ghost doing OK? These are complete solutions: you don’t just get software, you get a website.
  4. People will pay if you ensure they get a personalized solution. I.e., consulting. Applied to software you get consultingware. While often maligned, many real businesses are built on this. I think Drupal is in this category.
  5. People will pay you for your dedicated and ongoing attention. In other words: a day job as an employee. It feels unfair to put this option on the list, but it’s such a natural progression from consultingware, and such a dominant pattern in open source that I think it deserves acknowledgement.
  6. Anything paired with a physical device. People will judge the value based on the hardware and software experience together.
  7. I’m not sure if Firefox makes money (indirectly) from ads, or as compensation for maintaining monopoly positions.

I’m sure I’m missing some interesting ideas from that list.

But if you have a business concept, and you think it might work, what does open source even have to do with it? Don’t we learn: focus on your business! On your customer! Software licensing seems like a distraction, even software is a questionable thing to focus on, separate from the business. Maybe this is why you can’t make money with open source: it’s a distraction. The question isn’t open-source-vs-proprietary, but open-source-vs-business-focused.

Another lens might be: who are you selling to? Classical scratch-your-own-itch open source software is built by programmers for programmers. And it is wildly successful, but it’s selling to people who aren’t willing to pay. They want to take the software and turn it around into greater personal productivity (which turns out to be a smart move, given the rise in programmer wages). Can we sell open source to other people? Can anyone else do anything with source code?

And so I remain pessimistic that open source can find commercial success. But also frustrated: so much software is open source except any commercial product. This is where the Free Software mission has faltered despite so many successes: software that people actually touch isn’t free or open. That’s a shame.

The Servo Blog: This Week In Servo 127

In the past week, we merged 50 PRs in the Servo organization’s repositories.

Planning and Status

Our roadmap is available online. Plans for 2019 will be published soon.

This week’s status updates are here.

Screenshots

A standalone demo of Pathfinder running on a Magic Leap device.

Exciting works in progress

Notable Additions

  • waywardmonkeys updated harfbuzz to version 2.3.1.
  • gterzian fixed an underflow error in the HTTP cache.
  • waywardmonkeys improved the safety of the harfbuzz bindings.
  • Manishearth removed a bunch of unnecessary duplication that occurred during XMLHttpRequest.
  • georgeroman implemented a missing WebDriver API.
  • jdm made ANGLE build a DLL on Windows.
  • gterzian prevented tasks from running in non-active documents.

New Contributors

Interested in helping build a web browser? Take a look at our curated list of issues that are good for new contributors!

Cameron Kaiser: TenFourFox FPR13 available

TenFourFox Feature Parity Release 13 final is now available for testing (downloads, hashes, release notes). I added Olga's minimp3 patch for correctness; otherwise, there are no additional changes except for several security updates and to refresh the certificate and TLD stores. As usual it will go live Monday evening Pacific time assuming no difficulties.

I have three main updates in mind for TenFourFox FPR14: expanding FPR13's new AppleScript support to allow injecting JavaScript into pages (so that you can drive a web page by manipulating the DOM elements within it instead of having to rely on screen coordinates and sending UI events), adding Olga's ffmpeg framework to enable H.264 video support with a sidecar library (see the previous post for details on the scheme), and a possible solution to allow JavaScript async functions which actually might fix quite a number of presently non-working sites. I'm hopeful that combined with another parser hack this will be enough to restore Github functionality on TenFourFox, but no promises. Unfortunately, it doesn't address the infamous this is undefined problem that continues to plague a number of sites and I still have no good solution for that. These projects are decent-sized undertakings, so it's possible one or two might get pushed to FPR15. FPR14 is scheduled for May 14 with Firefox 67.

Meanwhile, I took a close look at the upcoming Raptor Blackbird at the So Cal Linux Expo 17. If the full big Talos II I'm typing this on is still more green than you can dream, the smaller Blackbird may be just your size to get a good-performing 64-bit Power system free of the lurking horrors in modern PCs at a better price. Check out some detailed board pics of the prototype and other shots of the expo on Talospace. If you're still not ready to jump, I'll be reviewing mine when it arrives hopefully later this spring.

Mozilla Open Policy & Advocacy Blog: Mozilla statement on the Christchurch terror attack

Like millions of people around the world, the Mozilla team has been deeply saddened by the news of the terrorist attack against the Muslim community in Christchurch, New Zealand.

The news of dozens of people killed and injured while praying in their place of worship is truly upsetting and absolutely abhorrent.

This is a call to all of us to look carefully at how hate spreads and is propagated and stand against it.


Firefox Nightly: These Weeks in Firefox: Issue 55

Highlights

  • We published a blog post about student contributions to Firefox!
  • The new Firefox QuantumBar can now be instantly toggled and tested by setting browser.urlbar.quantumbar to true in about:config – please give it a shot, and please file bugs if you see anything unusual.
  • Meridel from the Firefox UX team  blogged about their work on the new certificate error pages!
  • The DevTools team has been adding some amazing new goodies in the last few weeks:
    • Worker debugging and column breakpoints are slated to ship in Firefox 67.
    • The DevTools Network panel now has resizeable columns, currently hidden behind a pref, but we are looking for feedback (bug) – set devtools.netmonitor.features.resizeColumns to true in about:config to test this out!
    • When copying inner or outer HTML from the Inspector (right-click a node and use the copy sub-menu), it is now possible to auto-prettify the HTML. For now this works by setting the devtools.markup.beautifyOnCopy preference to true in about:config (bug).
    • The all new and improved about:debugging is getting close to shipping (try it by enabling devtools.aboutdebugging.new-enabled or going to about:debugging-new). This new version allows you to debug Gecko running on devices over USB without launching WebIDE, amongst many other improvements.
        • If you test it and find bugs, please file them here and we’ll take care of them.
      The new about:debugging page showing which devices can be connected to, and which tabs and extensions are loaded on that device.

      The new about:debugging page is coming along nicely!

    • Lots of speed improvements thanks to not loading DevTools modules in fresh compartments anymore (bug):
      • Inspector opening is up to 40% faster against complex documents!
      • Inspector actions like expanding a DOM Element children or updating the inspector after a page reload are more than 40% faster!
      • Debugger is also faster to step in/out (20 to 30% faster)!
      • The unittest asserting base RDP protocol performance is 20% faster!
      • The console is up to 15% faster to show object attributes when expanding an object!
      • Otherwise, almost all other tests report between 2 to 10% improvement on all panels!
      • 🔥🔥🔥🔥

Friends of the Firefox team

Resolved bugs (excluding employees)

Fixed more than one bug
  • Aaditya Arora
  • akshitha shetty
  • Bisola Omisore (Sola)
  • Erik Carrillo [:E_Carr]
  • Helena Moreno (aka helenatxu)
  • Heng Yeow (:tanhengyeow)
  • Ian Moody [:Kwan] (UTC+0)
  • Jawad Ahmed [:jawad]
  • Laphets [:Laphets]
  • lloan:[lloan]
  • Manish [:manishkk]
  • Masatoshi Kimura [:emk]
  • Mellina Y.
  • Monika Maheshwari [:MonikaMaheshwari]
  • Oriol Brufau [:Oriol]
  • PhoenixAbhishek
  • Shivam Singhal [ :championshuttler ]
New contributors (🌟 = first patch)

Project Updates

Activity Stream

  • Landed CFR Pin Tab (first non-addon recommendation) triggered by visiting select sites frequently
    • A panel is showing a user instructions on how to pin their current tab.

      Because knowledge is power.

  • Also added a notification when the tab is pinned and an option in the page action menu to pin/unpin the tab.
    • A panel tells the user that they've just pinned a tab, and tells them how to un-pin it.

      The tab has been pinned!

  • Working with the Performance team to ensure the quality of the new Pocket experience as we prepare to turn it on for more users outside of Nightly, in select regions, in Firefox 68.
  • Dark theme added for new Pocket experience, so users who were excluded from the Nightly experiment because of having the dark theme enabled will be enrolled soon (but can still opt-out in about:preferences while the team works to bring the new page to feature parity).

Add-ons / Web Extensions

  • addons.mozilla.org:
    • Now 100% supporting COSE signatures on production!
    • Removal of support for lightweight themes (LWTs, aka “personas”) continues.
  • Firefox:
    • 68
      • Migration of search engines to WebExtensions is targeting 68. (Thanks to Dale Harvey and Shane Caraveo for slogging through this, and to Andrew Swan for his help.)
        • [Note: open question about testing cold start times reliably/for regressions, see bug 1529321.]
      • Bugs landing for the rewrite of about:addons in HTML, targeting 68.
      • User Scripts API will be pref’d on in 68 (but you can flip the pref and check it out in 67 and 66 too: extensions.webextensions.userScripts.enabled).
    • 67
    • 66
      • Return To AMO, via installer attribution and Activity Stream, has been approved for release in 66!
      • (As a result of enabling IndexedDB for the backend of storage.local in 66 (bug 1488825), this also closes a perf issue (bug 1371255).)
    • Kris Maglione fixed a small 67 blocker and a not-small 66 blocker (race between new tab page and extension controlling new tab page).
    • Contributions from Oriol, championshuttler, violet.bugreport, jawad, zombie! Thanks!

Applications

Lockbox

Developer Tools

Debugger
  • Log points have the correct source location now
  • Event breakpoints are in-progress and coming along nicely
  • Try now runs Jest and Flow tests for the debugger.
Network
Lots of returning contributors
  • [sachdev.hemaksshi] Bug 1474207 – Network Monitor response payload testing method variances
  • [sachdev.hemakshi] Bug 1514750 – Network monitor params plain text
  • [pong7219] Bug 1508241 – Improve zebra table colors (Network)
  • [amy_yyc] Bug 1498565 – Showing XML response payload freezes Firefox
  • [tanhengyeow] Bug 1530140 – Change Netmonitor’s localization access keys to lower case
  • [tanhengyeow] Bug 1485416 – Highlight tracker in the Headers side panel
Console
Lots of returning contributors
  • [Helena Moreno] Bug 1532939 – Support Ctrl/Cmd + K to clear the console
  • [Helena Moreno] Bug 1466040 – Ctrl/Cmd + click on a network log in console output should open the link in a new tab
  • [Neha] Bug 1523290 – Test for JSTerm menu in Browser Console
  • [Yzen] Made ObjectInspector focusable in the console, which means you can navigate to and through them using keyboard Bug 1424159
  • [Kelly] is working on adapting the console toolbar layout depending on its width Bug 1523864
Layout Tools
  • Track Changes now has "Copy rule" (bug) and "Copy all changes" (bug) buttons
Technical debt
  • Deprecation notice for Canvas Inspector, Shader Editor and Web Audio Editor landed – a link providing more information is displayed on the settings for each tool, and also over the panel itself (bug).
  • The panels will be removed after the code freeze (so they ship in the next version). We’re also removing old shared components that aren’t used anymore. As Yulia said: “Deleted code has no bugs”
Remote Debugging
  • You can now debug service workers in e10s multi-process if you are also running the new ServiceWorkers implementation (dom.serviceWorkers.parent_intercept) (bug)
  • We are now using the regular toolbox (and not the browser toolbox) to debug local addons (bug)
  • Fixed inconsistent runtime name for Reference Browser (it appeared as nightly and fennec) (bug)

Fission

Lint

Password Manager

Performance

  • dthayer
  • Felipe
  •  Florian
    • Adding a lot more markers to the profile (see those in the Marker Chart and Marker Table)
      • A profile is shown with a grid of markers indicating important events occurring during the profile.

        These profiler markers make it much easier to understand what’s happening inside of a profile, which is good for diagnosing performance problems.

      • New things include loading of subscript JS, Cu.import, notifyObservers, etc
      • Main-thread IO (stat, open/close, read)
      • This has already been helping us file and fix a ton of bugs!
    • Writing a test to capture and whitelist all the main-thread IO that happens during startup so that we don’t add more.
  •  Gijs
    • Kicking off preloading of about:newtab more intelligently
      • Limits the number of preloaded tabs across all browser windows
      • Initiates it from an idle task
    • Browser Adjustment study is wrapping up:
      • Contrary to what we initially thought, it wasn't having the impact on page load time that we expected
      • Trying to see what can be reused from that for potential power savings improvements
  •  mconley
    • Added a new talos test (called startup_about_home_paint) to measure time for about:home to render top sites
    • Made the PageStyleChild populate the menu off an idle callback, and not do it for about:* pages, which was showing up in about:home profiles
    • Plans to run a pref-flip study on Beta to ensure that the Process Priority Manager doesn’t have any ill effects on page load time or retention

Performance tools

  • FileIO markers have file names on all platforms more consistently now.
  • Memory-related markers are now separated out in the timeline and integrated in the memory track.
    • A profile shows a separate track for garbage collection activity. The mouse cursor hovers one marked as "minor".

      A “minor garbage collection” is occurring in this profile.

  • Symbolicate unsymbolicated profiles at load time.
  • Have a timeline toolbar and hidden tracks indicator now.
    • An indicator in the profiler UI shows how many tracks are being hidden by default, and offers to show them.

      This should make it easier to realize that there are hidden tracks that might have interesting information in them.

  • Future Google Summer of Code applicants are tackling some "polish" bugs. Expect some small but useful changes!

Policy Engine

Privacy/Security

Search and Navigation

Search
Quantum Bar
  • Quality Engineering completed first pass on Nightly 67, positive results (93% pass)
  • Test coverage largely improved. Also layout reflow tests and Talos verified.
  • Preparing to run a Nightly partial study to check impact.
  • Added accessibility events when arrowing among search results. Still working on a11y.
  • Many bugs fixed, not listing all of them (See the tracking bug).
  • Initial design of first future experiments.

Mark Surman: VP search update — and Europe

A year ago, Mozilla Foundation started a search for a VP, Leadership Programs. The upshot of the job: work with people from around the world to build a movement to ensure our digital world stays open, healthy and humane. Over a year later, we’re in the second round of this search — finding the person to drive this work isn’t easy. However, we’re getting closer, so it’s time for an update.

At a nuts and bolts level, the person in this role will support teams at Mozilla that drive our thought leadership, fellowships and events programs. This is a great deal of work, but fairly straightforward. The tricky part is helping all the people we touch through these programs connect up with each other and work like a movement — driving to real outcomes that make digital life better.

While the position is global in scope, it will be based in Europe. This is in part because we want to work more globally, which means shifting our attention out of North America and towards African, European, Middle Eastern and South Asian time zones. Increasingly, it is also because we want to put a significant focus on Europe itself.

Europe is one of the places where a vision of an internet that balances public and private interests, and that respects people’s rights, has real traction. This vision spans everything from protecting our data to keeping digital markets open to competition to building a future where we use AI responsibly and ethically. If we want the internet to get better globally then learning from, and being more engaged with, Europe and Europeans has to be a central part of the plan.

The profile for this position is quite unique. We're looking for someone who can think strategically and represent Mozilla publicly, while also leading a distributed team within the organization; has a deep feel for both the political and technical aspects of digital life; and shares the values outlined in the Mozilla Manifesto. We're also looking for someone who will add diversity to our senior leadership team.

In terms of an update: we retained the recruiting firm Perrett Laver in January to lead the current round of the search. We recently met with the recruiters to talk over 50 prospective candidates. There are some great people in there — people coming from the worlds of internet governance, open content, tech policy and the digital side of international development. We’re starting interviews with a handful of these people over the coming weeks — and still keeping our ear to the ground for a few more exceptional candidates as we do.

Getting this position filled soon is critical. We’re at a moment in history where the world really needs more people rolling up their sleeves to create a better digital world — this position is about gathering and supporting these people. The good news: I’m starting to feel optimistic that we can get this position filled by the middle of 2019.

PS. If you want to learn more about this role, here is the full recruiting package.


Dave Townsend: Bridging an internal LAN to a server's Docker containers over a VPN

I recently decided that the basic web hosting I was using wasn't quite as configurable or powerful as I would like, so I have started paying for a VPS and am slowly moving all my sites over to it. One of the things I decided was that I wanted the majority of services it ran to be running under Docker. Docker has its pros and cons, but the thing I like about it is that I can define what services run, how they run and where they store all their data in a single place, separate from the rest of the server. So now I have a /srv/docker directory which contains everything I need to back up to ensure I can reinstall all the services easily, mostly regardless of the rest of the server.

As I was adding services I quickly realised I had a problem to solve. Some of the services were obviously external facing, nginx for example. But a lot should not be exposed to the public internet but needed to still be accessible, web management interfaces etc. So I wanted to figure out how to easily access them remotely.

I considered just setting up port forwarding or a SOCKS proxy over ssh. But this would mean having to connect to ssh whenever needed and either defining all the ports and docker IPs (which I would then have to make static) in the ssh config, or switching proxies in my browser whenever I needed to access a service; it would also only really support web protocols.

Exposing them publicly anyway but requiring passwords was another option, but I wasn't a big fan of that either. It would require configuring an nginx reverse proxy or something every time I added a new service, and I thought I could come up with something better.

At first I figured a VPN was going to be overkill, but eventually I decided that once set up it would give me the best experience. I also realised I could then set up a persistent VPN from my home network to the VPS so when at home, or when connected to my home network over VPN (already set up) I would have access to the containers without needing to do anything else.

Alright, so I have a home router that handles two networks, the LAN and its own VPN clients. Then I have a VPS with a few docker networks running on it. I want them all to be able to access each other, and as a bonus I want to be able to just use names to connect to the docker containers; I don't want to have to remember static IP addresses. This is essentially just using a VPN to bridge the networks, which is covered in many other places, except I had to visit so many places to put all the pieces together that I thought I'd explain it in my own words, if only so I have a single place to read when I need to do this again.

In my case the networks behind my router are 10.10.* for the local LAN and 10.11.* for its VPN clients. On the VPS I configured my docker networks to be under 10.12.*.

0. Configure IP forwarding.

The zeroth step is to make sure that IP forwarding is enabled and not firewalled any more than it needs to be on both router and VPS. How you do that will vary and it’s likely that the router will already have it enabled. At the least you need to use sysctl to set net.ipv4.ip_forward=1 and probably tinker with your firewall rules.

1. Set up a basic VPN connection.

First you need to set up a simple VPN connection between the router and the VPS. I ended up making the VPS the server since I can then connect directly to it from another machine, either for testing or if my home network is down. I don't think it really matters which is the "server" side of the VPN; either should work, you'll just have to invert some of the description here if you choose the opposite.

There are many, many tutorials on doing this so I'm not going to talk about it much. Just one thing to say: you must be using certificate authentication (most tutorials cover setting this up), so the VPS can identify the router by its common name. Don't add any "route" configuration yet. You could use redirect-gateway in the router config to make some of this easier, but that would then mean that all your internet traffic (from everything on the home LAN) goes through the VPN, which I didn't want. I set the VPN addresses to be in 10.12.10.* (this subnet is not used by any of the docker networks).

Once you’re done here the router and the VPS should be able to ping their IP addresses on the VPN tunnel. The VPS IP is 10.12.10.1, the router’s gets assigned on connection. They won’t be able to reach beyond that yet though.

2. Make the docker containers visible to the router.

Right now the router isn’t able to send packets to the docker containers because it doesn’t know how to get them there. It knows that anything for 10.12.10.* goes through the tunnel, but has no idea that other subnets are beyond that. This is pretty trivial to fix. Add this to the VPS’s VPN configuration:

push "route 10.12.0.0 255.255.0.0"

When the router connects to the VPS the VPN server will tell it that this route can be accessed through this connection. You should now be able to ping anything in that network range from the router. But neither the VPS nor the docker containers will be able to reach the internal LANs. In fact if you try to ping a docker container’s IP from the local LAN the ping packet should reach it, but the container won’t know how to return it!

3. Make the local LAN visible to the VPS.

It took me a while to figure this out. I'm not quite sure why, but you can't just add something similar to the VPN's client configuration. Instead the server side has to know in advance what networks a client is going to give access to. So again you're going to be modifying the VPS's VPN configuration. First the simple part. Add this to the configuration file:

route 10.10.0.0 255.255.0.0
route 10.11.0.0 255.255.0.0

This makes OpenVPN modify the VPS's routing table, telling it that it can direct all traffic for those networks to the VPN interface. This isn't enough though. The VPN service will receive that traffic but not know where to send it on. There could be many clients connected; which one has those networks? You have to add some client-specific configuration. Create a directory somewhere and add this to the configuration file:

client-config-dir /absolute/path/to/directory

Do NOT be tempted to use a relative path here. It took me more time than I'd like to admit to figure out that, when running as a daemon, the OpenVPN service won't be able to find it if it is a relative path. Now, create a file in the directory; the filename must be exactly the common name of the router's VPN certificate. Inside it put this:

iroute 10.10.0.0 255.255.0.0
iroute 10.11.0.0 255.255.0.0

This tells the VPN server that this is the client that can handle traffic to those networks. So now everything should be able to ping everything else by IP address. That would be enough if I didn’t also want to be able to use hostnames instead of IP addresses.

4. Setting up DNS lookups.

Getting this bit to work depends on what DNS server the router is running. In my case (and many cases) this was dnsmasq, which makes this fairly straightforward. The first step is setting up a DNS server that will return results for queries for the running docker containers. I found the useful dns-proxy-server. It runs as the default DNS server on the VPS; for lookups it looks for a docker container with a matching hostname and, if there isn't one, forwards the request on to an upstream DNS server. The VPS can now find a docker container's IP address by name.

For the router (and so anything on the local LAN) to be able to look them up it needs to be able to query the DNS server on the VPS. This meant giving the DNS container a static IP address (the only one this entire setup needs!) and making all the docker hostnames share a domain suffix. Then add this line to the router’s dnsmasq.conf:

server=/<domain>/<dns ip>

This tells dnsmasq that any time it receives a query for *.<domain> it passes the request on to the VPS's DNS container.

5. Done!

Everything should be set up now. Enjoy your direct access to your docker containers. Sorry this got long but hopefully it will be useful to others in the future.

The Mozilla Blog: Thank you, Denelle Dixon

I want to take this opportunity to thank Denelle Dixon for her partnership, leadership and significant contributions to Mozilla over the last six years.

Denelle joined Mozilla Corporation in September 2012 as an Associate General Counsel and rose through the ranks to lead our global business and operations as our Chief Operating Officer. Next month, after an incredible tour of duty at Mozilla, she will step down as a full-time Mozillian to join the Stellar Development Foundation as their Executive Director and CEO.

As a key part of our senior leadership team, Denelle helped to build a stronger, more resilient Mozilla, including leading the acquisition of Pocket, orchestrating our major partnerships, and helping refocus us to unlock the growth opportunities ahead. Denelle has had a huge impact here — on our strategy, execution, technology, partners, brand, culture, people; the list goes on. Although I will miss her partnership deeply, I will be cheering her on in her new role as she embarks on the next chapter of her career.

As we conduct a search for our next COO, I will be working more closely with our business and operations leaders and teams as we execute on our strategy that will give people more control over their connected lives and help build an Internet that’s healthier for everyone.

Thank you, Denelle for everything, and all the best on your next adventure!


Hacks.Mozilla.Org: Fast, Bump-Allocated Virtual DOMs with Rust and Wasm

Dodrio is a virtual DOM library written in Rust and WebAssembly. It takes advantage of both Wasm’s linear memory and Rust’s low-level control by designing virtual DOM rendering around bump allocation. Preliminary benchmark results suggest it has best-in-class performance.

Background

Virtual DOM Libraries

Virtual DOM libraries provide a declarative interface to the Web’s imperative DOM. Users describe the desired DOM state by generating a virtual DOM tree structure, and the library is responsible for making the Web page’s physical DOM reflect the user-generated virtual DOM tree. Libraries employ some diffing algorithm to decrease the number of expensive DOM mutation methods they invoke. Additionally, they tend to have facilities for caching to further avoid unnecessarily re-rendering components which have not changed and re-diffing identical subtrees.
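
As a rough illustration of the idea (a deliberately generic sketch, not Dodrio's API or any particular library's), a virtual DOM node can be a plain object describing the desired element, which the library then turns into imperative DOM calls:

// Tiny, generic virtual DOM sketch; real libraries add diffing, keyed updates,
// components and event handling on top of this idea.
function h(tagName, attributes = {}, children = []) {
  return { tagName, attributes, children }; // plain data describing the desired DOM
}

function create(vnode) {
  if (typeof vnode === "string") {
    return document.createTextNode(vnode);
  }
  const el = document.createElement(vnode.tagName);
  for (const [name, value] of Object.entries(vnode.attributes)) {
    el.setAttribute(name, value);
  }
  for (const child of vnode.children) {
    el.appendChild(create(child));
  }
  return el;
}

// The user declares the tree...
const tree = h("p", { class: "greeting" }, ["Hello, ", h("strong", {}, ["World"])]);
// ...and the library performs the actual DOM mutations.
document.body.appendChild(create(tree));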

Bump Allocation

Bump allocation is a fast, but limited approach to memory allocation. The allocator maintains a chunk of memory, and a pointer pointing within that chunk. To allocate an object, the allocator rounds the pointer up to the object’s alignment, adds the object’s size, and does a quick test that the pointer didn’t overflow and still points within the memory chunk. Allocation is only a small handful of instructions. Likewise, deallocating every object at once is fast: reset the pointer back to the start of the chunk.

The disadvantage of bump allocation is that there is no general way to deallocate individual objects and reclaim their memory regions while other objects are still in use.

These trade offs make bump allocation well-suited for phase-oriented allocations. That is, a group of objects that will all be allocated during the same program phase, used together, and finally deallocated together.

Pseudo-code for bump allocation:
bump_allocate(size, align):
    aligned_pointer = round_up_to(self.pointer, align)
    new_pointer = aligned_pointer + size
    if no overflow and new_pointer < self.end_of_chunk:
        self.pointer = new_pointer
        return aligned_pointer
    else:
        handle_allocation_failure()

Dodrio from a User’s Perspective

First off, we should be clear about what Dodrio is and is not. Dodrio is only a virtual DOM library. It is not a full framework. It does not provide state management, such as Redux stores and actions or two-way binding. It is not a complete solution for everything you encounter when building Web applications.

Using Dodrio should feel fairly familiar to anyone who has used Rust or virtual DOM libraries before. To define how a struct is rendered as HTML, users implement the dodrio::Render trait, which takes an immutable reference to self and returns a virtual DOM tree.

Dodrio uses the builder pattern to create virtual DOM nodes. We intend to support optional JSX-style, inline HTML templating syntax with compile-time procedural macros, but we’ve left it as future work.

The 'a and 'bump lifetimes in the dodrio::Render trait’s interface and the where 'a: 'bump clause enforce that the self reference outlives the bump allocation arena and the returned virtual DOM tree. This means that if self contains a string, for example, the returned virtual DOM can safely use that string by reference rather than copying it into the bump allocation arena. Rust’s lifetimes and borrowing enable us to be aggressive with cost-saving optimizations while simultaneously statically guaranteeing their safety.

<figcaption>“Hello, World!” example with Dodrio</figcaption>
struct Hello {
    who: String,
}

impl Render for Hello {
    fn render<'a, 'bump>(&'a self, bump: &'bump Bump) -> Node<'bump>
    where
        'a: 'bump,
    {
        span(bump)
            .children([text("Hello, "), text(&self.who), text("!")])
            .finish()
    }
}

Event handlers are given references to the root dodrio::Render component, a handle to the virtual DOM instance that can be used to schedule re-renders, and the DOM event itself.

<figcaption>Incrementing counter example with Dodrio</figcaption>
struct Counter {
    count: u32,
}

impl Render for Counter {
    fn render<'a, 'bump>(&'a self, bump: &'bump Bump) -> Node<'bump>
    where
        'a: 'bump,
    {
        let count = bumpalo::format!(in bump, "{}", self.count);
        div(bump)
            .children([
                text(count.into_bump_str()),
                button(bump)
                    .on("click", |root, vdom, _event| {
                        let counter = root.unwrap_mut::<Counter>();
                        counter.count += 1;
                        vdom.schedule_render();
                    })
                    .children([text("+")])
                    .finish(),
            ])
            .finish()
    }
}

Additionally, Dodrio has a proof-of-concept API for defining rendering components in JavaScript. This reflects the Rust and Wasm ecosystem's strong integration story for JavaScript, which enables both incremental porting to Rust and heterogeneous, polyglot applications where just the most performance-sensitive code paths are written in Rust.

<figcaption>A Dodrio rendering component defined in JavaScript</figcaption>
class Greeting {
  constructor(who) {
    this.who = who;
  }

  render() {
    return {
      tagName: "p",
      attributes: [{ name: "class", value: "greeting" }],
      listeners: [{ on: "click", callback: this.onClick.bind(this) }],
      children: [
        "Hello, ",
        {
          tagName: "strong",
          children: [this.who],
        },
      ],
    };
  }

  async onClick(vdom, event) {
    // Be more excited!
    this.who += "!";

    // Schedule a re-render.
    await vdom.render();

    console.log("re-rendering finished!");
  }
}
<figcaption>Using a rendering component defined in JavaScript</figcaption>
#[wasm_bindgen]
extern "C" {
    // Import the JS `Greeting` class.
    #[wasm_bindgen(extends = Object)]
    type Greeting;

    // And the `Greeting` class's constructor.
    #[wasm_bindgen(constructor)]
    fn new(who: &str) -> Greeting;
}

// Construct a JS rendering component from a `Greeting` instance.
let js = JsRender::new(Greeting::new("World"));

Finally, Dodrio exposes a safe public interface, and we have never felt the need to reach for unsafe when authoring Dodrio rendering components.

Internal Design

Both virtual DOM tree rendering and diffing in Dodrio leverage bump allocation. Rendering constructs bump-allocated virtual DOM trees from component state. Diffing batches DOM mutations into a bump-allocated “change list” which is applied to the physical DOM all at once after diffing completes. This design aims to maximize allocation throughput, which is often a performance bottleneck for virtual DOM libraries, and minimize bouncing back and forth between Wasm, JavaScript, and native DOM functions, which should improve temporal cache locality and avoid out-of-line calls.

Rendering Into Double-Buffered Bump Allocation Arenas

Virtual DOM rendering exhibits phases that we can exploit with bump allocation:

  1. A virtual DOM tree is constructed by a Render implementation,
  2. it is diffed against the old virtual DOM tree,
  3. saved until the next time we render a new virtual DOM tree,
  4. when it is diffed against that new virtual DOM tree,
  5. and then finally it and all of its nodes are destroyed.

This process repeats ad infinitum.

<figcaption>Virtual DOM tree lifetimes and operations over time</figcaption>
        ------------------- Time ------------------->
Tree 0: [ render | ------ | diff ]
Tree 1:          [ render | diff | ------ | diff ]
Tree 2:                          [ render | diff | ------ | diff ]
Tree 3:                                          [ render | diff | ------ | diff ]
...

At any given moment in time, only two virtual DOM trees are alive. Therefore, we can double buffer two bump allocation arenas that switch back and forth between the roles of containing the new or the old virtual DOM tree:

  1. A virtual DOM tree is rendered into bump arena A,
  2. the new virtual DOM tree in bump arena A is diffed with the old virtual DOM tree in bump arena B,
  3. bump arena B has its bump pointer reset,
  4. bump arenas A and B are swapped.
<figcaption>Double buffering bump allocation arenas for virtual DOM tree rendering</figcaption>
        ------------------- Time ------------------->
Arena A: [ render | ------ | diff | reset | render | diff | -------------- | diff | reset | render | diff ...
Arena B:          [ render | diff | -------------- | diff | reset | render | diff | -------------- | diff ...
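
This double-buffering choreography is straightforward to express with two bumpalo arenas. The sketch below is not Dodrio's actual code: Tree, render_into, and diff are invented placeholders, and in the real library the tree borrows from its arena (as the Node<'bump> return type of dodrio::Render reflects), a detail the owned placeholder type glosses over. Only the diff, reset, and swap sequence mirrors the steps listed above.

<figcaption>Sketch of double-buffered bump arenas (illustrative only)</figcaption>
use bumpalo::Bump;

// Placeholders standing in for Dodrio's real virtual DOM tree, renderer,
// and diffing machinery; only the arena choreography matters here.
struct Tree;
fn render_into(_arena: &Bump) -> Tree { Tree }
fn diff(_old: &Tree, _new: &Tree) { /* emit a change list */ }

struct DoubleBuffer {
    arena_a: Bump,
    arena_b: Bump,
    old_tree: Option<Tree>, // lives in arena_b's memory in the real design
}

impl DoubleBuffer {
    fn new() -> Self {
        DoubleBuffer { arena_a: Bump::new(), arena_b: Bump::new(), old_tree: None }
    }

    fn render_and_diff(&mut self) {
        // 1. Render the new virtual DOM tree into bump arena A.
        let new_tree = render_into(&self.arena_a);

        // 2. Diff it against the old tree whose nodes live in bump arena B.
        if let Some(old_tree) = &self.old_tree {
            diff(old_tree, &new_tree);
        }

        // 3. The old tree is now dead: drop it and reset arena B's pointer.
        self.old_tree = None;
        self.arena_b.reset();

        // 4. Swap the arenas' roles so that the new tree becomes the old
        //    one for the next render.
        std::mem::swap(&mut self.arena_a, &mut self.arena_b);
        self.old_tree = Some(new_tree);
    }
}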

Diffing and Change Lists

Dodrio uses a naïve, single-pass algorithm to diff virtual DOM trees. It walks both the old and new trees in unison and builds up a change list of DOM mutation operations whenever an attribute, listener, or child differs between the old and the new tree. It does not currently use any sophisticated algorithms to minimize the number of operations in the change list, such as longest common subsequence or patience diffing.
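
To illustrate the shape of such a naïve single-pass diff, here is a heavily simplified Rust sketch over a toy node type. It is not Dodrio's implementation: real nodes also carry attributes and event listeners, live in the bump arenas described above, and emit their operations into the change list described below rather than into a Vec.

<figcaption>A simplified single-pass diff sketch (illustrative only)</figcaption>
// A toy virtual DOM node, standing in for Dodrio's real bump-allocated nodes.
enum Node {
    Text(String),
    Element { tag: String, children: Vec<Node> },
}

// Simplified change operations; the real change list encodes operations like
// these as stack-machine instructions (see below).
enum Change {
    SetText(String),
    ReplaceNode,
    AppendChild,
    RemoveChild,
}

fn diff(old: &Node, new: &Node, changes: &mut Vec<Change>) {
    match (old, new) {
        // Same kind of node: patch in place where possible.
        (Node::Text(a), Node::Text(b)) => {
            if a != b {
                changes.push(Change::SetText(b.clone()));
            }
        }
        (
            Node::Element { tag: t1, children: c1 },
            Node::Element { tag: t2, children: c2 },
        ) if t1 == t2 => {
            // Walk both child lists in unison.
            for (old_child, new_child) in c1.iter().zip(c2.iter()) {
                diff(old_child, new_child, changes);
            }
            // Append extra new children and remove extra old ones.
            for _extra in c2.iter().skip(c1.len()) {
                changes.push(Change::AppendChild);
            }
            for _extra in c1.iter().skip(c2.len()) {
                changes.push(Change::RemoveChild);
            }
        }
        // Different kinds of node (or different tags): replace wholesale.
        _ => changes.push(Change::ReplaceNode),
    }
}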

The change lists are constructed during diffing, applied to the physical DOM, and then destroyed. The next time we render a new virtual DOM tree, the process is repeated. Since at most one change list is alive at any moment, we use a single bump allocation arena for all change lists.

A change list’s DOM mutation operations are encoded as instructions for a custom stack machine. While an instruction’s discriminant is always a 32-bit integer, instructions are variably sized as some have immediates while others don’t. The machine’s stack contains physical DOM nodes (both text nodes and elements), and immediates encode pointers and lengths of UTF-8 strings.

The instructions are emitted on the Rust and Wasm side, and then batch interpreted and applied to the physical DOM in JavaScript. Each JavaScript function that interprets a particular instruction takes four arguments:

  1. A reference to the JavaScript ChangeList class that represents the stack machine,
  2. a Uint8Array view of Wasm memory to decode strings from,
  3. a Uint32Array view of Wasm memory to decode immediates from,
  4. and an offset i where the instruction’s immediates (if any) are located.

It returns the new offset in the 32-bit view of Wasm memory where the next instruction is encoded.

There are instructions for:

  • Creating, removing, and replacing elements and text nodes,
  • adding, removing, and updating attributes and event listeners,
  • and traversing the DOM.

For example, the AppendChild instruction has no immediates, but expects two nodes to be on the top of the stack. It pops the first node from the stack, and then calls Node.prototype.appendChild with the popped node as the child and the node that is now at the top of the stack as the parent.

<figcaption>Emitting the AppendChild instruction</figcaption>
// Allocate an instruction with zero immediates.
fn op0(&self, discriminant: ChangeDiscriminant) {
    self.bump.alloc(discriminant as u32);
}

/// Immediates: `()`
///
/// Stack: `[... Node Node] -> [... Node]`
pub fn emit_append_child(&self) {
    self.op0(ChangeDiscriminant::AppendChild);
}
<figcaption>Interpreting the AppendChild instruction</figcaption>
function appendChild(changeList, mem8, mem32, i) {
    const child = changeList.stack.pop();
    top(changeList.stack).appendChild(child);
    return i;
}

On the other hand, the SetText instruction expects a text node on top of the stack, and does not modify the stack. It has a string encoded as pointer and length immediates. It decodes the string, and calls the Node.prototype.textContent setter function to update the text node’s text content with the decoded string.

<figcaption>Emitting the SetText instruction</figcaption>
// Allocate an instruction with two immediates.
fn op2(&self, discriminant: ChangeDiscriminant, a: u32, b: u32) {
    self.bump.alloc([discriminant as u32, a, b]);
}

/// Immediates: `(pointer, length)`
///
/// Stack: `[... TextNode] -> [... TextNode]`
pub fn emit_set_text(&self, text: &str) {
    self.op2(
        ChangeDiscriminant::SetText,
        text.as_ptr() as u32,
        text.len() as u32,
    );
}
<figcaption>Interpreting the SetText instruction</figcaption>
function setText(changeList, mem8, mem32, i) {
    const pointer = mem32[i++];
    const length = mem32[i++];
    const str = string(mem8, pointer, length);
    top(changeList.stack).textContent = str;
    return i;
}

Preliminary Benchmarks

To get a sense of Dodrio's speed relative to other libraries, we added it to Elm's Blazing Fast HTML benchmark, which compares rendering speeds of TodoMVC implementations with different libraries. They claim that the methodology is fair and that the benchmark results should generalize. They also subjectively measure how easy it is to optimize the implementations to improve performance (for example, by adding well-placed shouldComponentUpdate hints in React and lazy wrappers in Elm). We followed the same methodology and disabled Dodrio's on-by-default, once-per-animation-frame render debouncing, giving it the same handicap that the Elm implementation has.

That said, there are some caveats to these benchmark results. The React implementation had bugs that prevented it from completing the benchmark, so we don’t include its measurements below. If you are curious, you can look at the original Elm benchmark results to see how it generally fared relative to some of the other libraries measured here. Second, we made an initial attempt to update the benchmark to the latest version of each library, but quickly got in over our heads, and therefore this benchmark is not using the latest release of each library.

With that out of the way, let's look at the benchmark results. We ran the benchmarks in Firefox 67 on Linux. Lower is better: it means faster rendering times.

<figcaption>Benchmark results</figcaption>

Library                Optimized?  Milliseconds
Ember 2.6.3            No          3542
Angular 1.5.8          No          2856
Angular 2              No          2743
Elm 0.16               No          4295
Elm 0.17               No          3170
Dodrio 0.1-prerelease  No          2181
Angular 1.5.8          Yes         3175
Angular 2              Yes         2371
Elm 0.16               Yes         4229
Elm 0.17               Yes         2696

Dodrio is the fastest library measured in the benchmark. This is not to say that Dodrio will always be the fastest in every scenario — that is undoubtedly false. But these results validate Dodrio’s design and show that it already has best-in-class performance. Furthermore, there is room to make it even faster:

  • Dodrio is brand new, and has not yet had the years of work poured into it that other libraries measured have. We have not done any serious profiling or optimization work on Dodrio yet!

  • The Dodrio TodoMVC implementation used in the benchmark does not use shouldComponentUpdate-style optimizations, like other implementations do. These techniques are still available to Dodrio users, but you should need to reach for them much less frequently because idiomatic implementations are already fast.

Future Work

So far, we haven’t invested in polishing Dodrio’s ergonomics. We would like to explore adding type-safe HTML templates that boil down to Dodrio virtual DOM tree builder invocations.

Additionally, there are a few more ways we can potentially improve Dodrio's performance.

For both ergonomics and further performance improvements, we would like to start gathering feedback informed by real-world usage before investing too much more effort.

Evan Czaplicki pointed us to a second benchmark — krausest/js-framework-benchmark — that we can use to further evaluate Dodrio’s performance. We look forward to implementing this benchmark for Dodrio and gathering more test cases and insights into performance.

Further in the future, the WebAssembly host bindings proposal will enable us to interpret the change list’s operations in Rust and Wasm without trampolining through JavaScript to invoke DOM methods.

Conclusion

Dodrio is a new virtual DOM library that is designed to leverage the strengths of both Wasm’s linear memory and Rust’s low-level control by making extensive use of fast bump allocation. If you would like to learn more about Dodrio, we encourage you to check out its repository and examples!

Thanks to Luke Wagner and Alex Crichton for their contributions to Dodrio’s design, and participation in brainstorming and rubber ducking sessions. We also discussed many of these ideas with core developers on the React, Elm, and Ember teams, and we thank them for the context and understanding these discussions ultimately brought to Dodrio’s design. A final round of thanks to Jason Orendorff, Lin Clark, Till Schneidereit, Alex Crichton, Luke Wagner, Evan Czaplicki, and Robin Heggelund Hansen for providing valuable feedback on early drafts of this document.

Mozilla Reps CommunityRep of the Month – February 2019

Please join us in congratulating Edoardo Viola, our Rep of the Month for February 2019!

Edoardo is a long-time Mozillian from Italy and has been a Rep for almost two years. He's a Resource Rep and was on the Reps Council until January. When he's not busy with Reps work, Edoardo is a Mentor in the Open Leadership Training Program. In the past he has contributed to Campus Clubs as well as MozFest, where he was a Space Wrangler for the Web Literacy Track.

Recently Edoardo was one of the Mozilla volunteers organizing our presence at FOSDEM in Brussels, where he staffed the booth and helped moderate the Mozilla Dev Room. He also contributes to the Internet Health Report as part of the volunteer team giving input for the next edition of the report.

To congratulate him, please head over to Discourse!

Eric RahmDoubling the Number of Content Processes in Firefox

Over the past year, the Fission MemShrink project has been working tirelessly to reduce the memory overhead of Firefox. The goal is to allow us to start spinning up more processes while still maintaining a reasonable memory footprint. I’m happy to announce that we’ve seen the fruits of this labor: as of version 66 we’re doubling the default number of content processes from 4 to 8.

Doubling the number of content processes is the logical extension of the e10s-multi project. Back when that project wrapped up we chose to limit the default number of processes to 4 in order to balance the benefits of multiple content processes — fewer crashes, better site isolation, improved performance when loading multiple pages — with the impact on memory usage for our users.

Our telemetry has looked really good: if we compare beta 59 (roughly when this project started) with beta 66, when we decided to let the increase ship to our regular users, we see virtually unchanged total memory usage at the 25th, median, and 75th percentiles, and a modest 9% increase at the 95th percentile on Windows 64-bit.

Doubling the number of content processes and not seeing a huge jump is quite impressive. Even on our worst-case-scenario stress test — AWSY, which loads 100 pages in 30 tabs, repeated 3 times — we only saw a 6% increase in memory usage with 8 content processes enabled compared to when we started the project.

This is a huge accomplishment and I’m very proud of the loose-knit team of contributors who have done some phenomenal feats to get us to this point. There have been some big wins, but really it’s the myriad of minor improvements that compounded into a large impact. This has ranged from delay-loading browser JavaScript code until it’s needed (or not at all), to low-level changes to packing C++ data structures more efficiently, to large system-wide changes to how we generate bindings that glue together our JavaScript and C++ code. You can read more about the background of this project and many of the changes in our initial newsletter and the follow-up.

While I'm pleased with where we are now, we still have a way to go to get our overhead down even further. Fear not, for we have quite a few changes in the pipeline, including a fork server to help further reduce memory usage on Linux and macOS, work to share font data between processes, and work to share more CSS data between processes. In addition to reducing overhead, we now have a tab unloading feature in Nightly 67 that will proactively unload tabs when it looks like you're about to run out of memory. So far the results in reducing the number of out-of-memory crashes are looking really good and we're hoping to get that released to a wider audience in the near future.

The Firefox FrontierGet better password management with Firefox Lockbox on iPad

We access the web on all sorts of devices from our laptop to our phone to our tablets. And we need our passwords everywhere to log into an account. This … Read more


Daniel StenbergLooking for the Refresh header

The other day someone filed a bug on curl that we don’t support redirects with the Refresh header. This took me down a rabbit hole of Refresh header research and I’ve returned to share with you what I learned down there.

tl;dr Refresh is not a standard HTTP header.

As you know, an HTTP redirect is specified to use a 3xx response code and a Location: header to point out the new URL (I use the term URL here but you know what I mean). This has been the case since RFC 1945 (HTTP/1.0). According to an old mail from Roy T Fielding (dated June 1996), Refresh “didn’t make it” into that spec. That was the first “real” HTTP specification. (And the HTTP we used before 1.0 didn’t even have headers!)

The little detail that it never made it into the 1.0 spec or any later one doesn't seem to have affected the browsers. Still today, browsers keep supporting the Refresh header as a sort of Location: replacement even though it seems to never have been present in an HTTP spec.
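
For reference, the value browsers accept typically looks like "Refresh: 5; url=https://example.com/": a delay in seconds, optionally followed by a target URL. Since there is no spec, the exact grammar varies between implementations. As a rough illustration only (this is not how curl or any browser parses it), a lenient client-side split of the value could look like this in Rust:

/// Lenient split of a Refresh value such as "5; url=https://example.com/".
/// Illustration only: real-world values also show up with "URL=", quotes,
/// odd whitespace, and other variations this sketch ignores.
fn parse_refresh(value: &str) -> Option<(u64, Option<&str>)> {
    let mut parts = value.splitn(2, ';');
    // The part before the first ';' is the delay in seconds.
    let seconds: u64 = parts.next()?.trim().parse().ok()?;
    // The optional remainder is usually "url=<target>".
    let url = parts.next().and_then(|rest| {
        let rest = rest.trim();
        rest.strip_prefix("url=").or(Some(rest)).filter(|s| !s.is_empty())
    });
    Some((seconds, url))
}

fn main() {
    assert_eq!(
        parse_refresh("5; url=https://example.com/"),
        Some((5, Some("https://example.com/")))
    );
    assert_eq!(parse_refresh("30"), Some((30, None)));
}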

In good company

curl is not the only HTTP library that doesn't support this non-standard header. The popular Python library requests apparently doesn't, according to this bug from 2017, and another bug was filed about it back in 2011 but was just closed as "old" in 2014.

I’ve found no support in wget or wget2 either for this header.

I didn’t do any further extensive search for other toolkits’ support, but it seems that the browsers are fairly alone in supporting this header.

How common is the Refresh header?

I decided to make an attempt to figure out, and for this venture I used the Rapid7 data trove. The method that data is collected with may not be the best – it scans the IPv4 address range and sends an HTTP request to TCP port 80 on each address, setting the IP address in the Host: header. The result of that scan is 52+ million HTTP responses from different and current HTTP origins. (Exactly 52254873 responses in my 59GB data dump, dated end of February 2019).

Results from my scans
  • Location is used in 18.49% of the responses
  • Refresh is used in 0.01738% of the responses (exactly 9080 responses featured them)
  • Location is thus used 1064 times more often than Refresh
  • In 35% of the cases when Refresh is used, Location is also used
  • curl thus handles 99.9939% of the redirects in this test
Additional notes
  • When Refresh is the only redirect header, the response code is usually 200 (with 404 being the second most)
  • When both headers are used, the response code is almost always 30x
  • When both are used, it is common to redirect to the same target and it is also common for the Refresh header value to only contain a number (for the number of seconds until “refresh”).
Refresh from HTML content

Redirects can also be done by meta tags in HTML, sending the refresh that way, but I have not investigated how common that is; it isn't strictly speaking HTTP, so it is outside of my research (and interest) here.

In use, not documented, not in the spec

Just another undocumented corner of the web.

When I posted about these findings on the HTTPbis mailing list, it was pointed out that WHATWG mentions this header on its IANA page. I say mention because calling that documenting would be a stretch…

It is not at all clear exactly what the header is supposed to do and it is not documented anywhere. It’s not exactly a redirect, but almost?

Will/should curl support it?

A decision hasn't been made about it yet. With such a low use frequency, and since we've managed fine without support for it for so long, maybe we can just maintain the situation and instead argue that we should completely deprecate this header use from the web?

Updates

After this post first went live, I got some further feedback and data that are relevant and interesting.

  • Yoav Weiss created a patch for Chrome to count how often they see this header used in real life.
  • Eric Lawrence pointed out that IE had several incompatibilities in its Refresh parser back in the day.
  • Boris pointed out (in the comments below) the WHATWG documented steps for handling the header.
  • The use of <meta> tag refresh in page content is fairly high. The Chrome counter says almost 4% of page loads!

Andrew HalberstadtTask Configuration at Scale

A talk I did for the Automationeer’s Assemble series on how Mozilla handles complexity in their CI configuration.


Hacks.Mozilla.OrgIodide: an experimental tool for scientific communication and exploration on the web

In the last 10 years, there has been an explosion of interest in “scientific computing” and “data science”: that is, the application of computation to answer questions and analyze data in the natural and social sciences. To address these needs, we’ve seen a renaissance in programming languages, tools, and techniques that help scientists and researchers explore and understand data and scientific concepts, and to communicate their findings. But to date, very few tools have focused on helping scientists gain unfiltered access to the full communication potential of modern web browsers. So today we’re excited to introduce Iodide, an experimental tool meant to help scientists write beautiful interactive documents using web technologies, all within an iterative workflow that will be familiar to many scientists.

Iodide in action: exploring the Lorenz attractor, then examining the code.

Beyond being just a programming environment for creating living documents in the browser, Iodide attempts to remove friction from communicative workflows by always bundling the editing tool with the clean readable document. This diverges from IDE-style environments that output presentational documents like .pdf files (which are then divorced from the original code) and cell-based notebooks which mix code and presentation elements. In Iodide, you can get both a document that looks however you want it to look, and easy access to the underlying code and editing environment.

Iodide is still very much in an alpha state, but following the internet aphorism “If you’re not embarrassed by the first version of your product, you’ve launched too late”, we’ve decided to do a very early soft launch in the hopes of getting feedback from a larger community. We have a demo that you can try out right now, but expect a lot of rough edges (and please don’t use this alpha release for critical work!). We’re hoping that, despite the rough edges, if you squint at this you’ll be able to see the value of the concept, and that the feedback you give us will help us figure out where to go next.

How we got to Iodide

Data science at Mozilla

At Mozilla, the vast majority of our data science work is focused on communication. Though we sometimes deploy models intended to directly improve a user’s experience, such as the recommendation engine that helps users discover browser extensions, most of the time our data scientists analyze our data in order to find and share insights that will inform the decisions of product managers, engineers and executives.

Data science work involves writing a lot of code, but unlike traditional software development, our objective is to answer questions, not to produce software. This typically results in some kind of report — a document, some plots, or perhaps an interactive data visualization. Like many data science organizations, at Mozilla we explore our data using fantastic tools like Jupyter and R-Studio. However, when it’s time to share our results, we cannot usually hand off a Jupyter notebook or an R script to a decision-maker, so we often end up doing things like copying key figures and summary statistics to a Google Doc.

We’ve found that making the round trip from exploring data in code to creating a digestible explanation and back again is not always easy. Research shows that many people share this experience. When one data scientist is reading through another’s final report and wants to look at the code behind it, there can be a lot of friction; sometimes tracking down the code is easy, sometimes not. If they want to attempt to experiment with and extend the code, things obviously get more difficult still. Another data scientist may have your code, but may not have an identical configuration on their machine, and setting that up takes time.

The virtuous circle of data science work.

Why is there so little web in science?

Against the background of these data science workflows at Mozilla, in late 2017 I undertook a project that called for interactive data visualization. Today you can create interactive visualizations using great libraries for Python, R, and Julia, but for what I wanted to accomplish, I needed to drop down to Javascript. This meant stepping away from my favorite data science environments. Modern web development tools are incredibly powerful, but extremely complicated. I really didn’t want to figure out how to get a fully-fledged Javascript build toolchain with hot module reloading up and running, but short of that I couldn’t find much aimed at creating clean, readable web documents within the live, iterative workflow familiar to me.

I started wondering why this tool didn’t exist — why there’s no Jupyter for building interactive web documents — and soon zoomed out to thinking about why almost no one uses Javascript for scientific computing. Three big reasons jumped out:

  1. Javascript itself has a mixed reputation among scientists for being slow and awkward;
  2. there aren’t many scientific computing libraries that run in the browser or that work with Javascript; and,
  3. as I’d discovered, there are very few scientific coding tools that enable a fast iteration loop and also grant unfiltered access to the presentational capabilities in the browser.

These are very big challenges. But as I thought about it more, I began to think that working in a browser might have some real advantages for the kind of communicative data science that we do at Mozilla. The biggest advantage, of course, is that the browser has arguably the most advanced and well-supported set of presentation technologies on the planet, from the DOM to WebGL to Canvas to WebVR.

Thinking on the workflow friction mentioned above, another potential advantage occurred to me: in the browser, the final document need not be separate from the tool that created it. I wanted a tool designed to help scientists iterate on web documents (basically single-purpose web apps for explaining an idea)… and many tools we were using were themselves basically web apps. For the use case of writing these little web-app-documents, why not bundle the document with the tool used to write it?

By doing this, non-technical readers could see my nice looking document, but other data scientists could instantly get back to the original code. Moreover, since the compute kernel would be the browser’s JS engine, they’d be able to start extending and experimenting with the analysis code immediately. And they’d be able to do all this without connecting to remote computing resources or installing any software.

Towards Iodide

I started discussing the potential pros and cons of scientific computing in the browser with my colleagues, and in the course of our conversations, we noticed some other interesting trends.

Inside Mozilla we were seeing a lot of interesting demos showing off WebAssembly, a new way for browsers to run code written in languages other than Javascript. WebAssembly allows programs to be run at incredible speed, in some cases close to native binaries. We were seeing examples of computationally-expensive processes like entire 3D game engines running within the browser without difficulty. Going forward, it would be possible to compile best-in-class C and C++ numerical computing libraries to WebAssembly and wrap them in ergonomic JS APIs, just as the SciPy project does for Python. Indeed, projects had started to do this already.

WebAssembly makes it possible to run code at near-native speed in the browser.

We also noticed the Javascript community’s willingness to introduce new syntax when doing so helps people to solve their problem more effectively. Perhaps it would be possible to emulate some of the key syntactic elements that make numerical programming more comprehensible and fluid in MATLAB, Julia, and Python — matrix multiplication, multidimensional slicing, broadcast array operations, and so on. Again, we found other people thinking along similar lines.

With these threads converging, we began to wonder if the web platform might be on the cusp of becoming a productive home for scientific computing. At the very least, it looked like it might evolve to serve some of the communicative workflows that we encounter at Mozilla (and that so many others encounter in industry and academia). With the core of Javascript improving all the time and the possibility of adding syntax extensions for numerical programming, perhaps JS itself could be made more appealing to scientists. WebAssembly seemed to offer a path to great science libraries. The third leg of the stool would be an environment for creating data science documents for the web. This last element is where we decided to focus our initial experimentation, which brought us to Iodide.

The anatomy of Iodide

Iodide is a tool designed to give scientists a familiar workflow for creating great-looking interactive documents using the full power of the web platform. To accomplish that, we give you a “report” — basically a web page that you can fill in with your content — and some tools for iteratively exploring data and modifying your report to create something you’re ready to share. Once you’re ready, you can send a link directly to your finalized report. If your colleagues and collaborators want to review your code and learn from it, they can drop back to an exploration mode in one click. If they want to experiment with the code and use it as the basis of their own work, with one more click they can fork it and start working on their own version.

Read on to learn a bit more about some of the ideas we’re experimenting with in an attempt to make this workflow feel fluid.

The Explore and Report Views

Iodide aims to tighten the loop between exploration, explanation, and collaboration. Central to that is the ability to move back and forth between a nice looking write-up and a useful environment for iterative computational exploration.

When you first create a new Iodide notebook, you start off in the “explore view.” This provides a set of panes including an editor for writing code, a console for viewing the output from code you evaluate, a workspace viewer for examining the variables you’ve created during your session, and a “report preview” pane in which you can see a preview of your report.

Editing a Markdown code chunk in Iodide’s explore view.

By clicking the “REPORT” button in the top right corner, the contents of your report preview will expand to fill the entire window, allowing you to put the story you want to tell front and center. Readers who don’t know how to code or who aren’t interested in the technical details are able to focus on what you’re trying to convey without having to wade through the code. When a reader visits the link to the report view, your code will run automatically. If they want to review your code, simply clicking the “EXPLORE” button in the top right will bring them back into the explore view. From there, they can make a copy of the notebook for their own explorations.

Moving from explore to report view.

Whenever you share a link to an Iodide notebook, your collaborator can always access both of these views. The clean, readable document is never separated from the underlying runnable code and the live editing environment.

Live, interactive documents with the power of the Web Platform

Iodide documents live in the browser, which means the computation engine is always available. Whenever you share your work, you share a live interactive report with running code. Moreover, since the computation happens in the browser alongside the presentation, there is no need to call a language backend in another process. This means that interactive documents update in real-time, opening up the possibility of seamless 3D visualizations, even with the low latency and high frame rate required for VR.

Contributor Devin Bayly explores MRI data of his brain.

Sharing, collaboration, and reproducibility

Building Iodide in the web simplifies a number of the elements of workflow friction that we’ve encountered in other tools. Sharing is simplified because the write-up and the code are available at the same URL rather than, say, pasting a link to a script in the footnotes of a Google Doc. Collaboration is simplified because the compute kernel is the browser and libraries can be loaded via an HTTP request, just as any webpage loads scripts — no additional languages, libraries, or tools need to be installed. And because browsers provide a compatibility layer, you don’t have to worry about notebook behavior being reproducible across computers and OSes.

To support collaborative workflows, we’ve built a fairly simple server for saving and sharing notebooks. There is a public instance at iodide.io where you can experiment with Iodide and share your work publicly. It’s also possible to set up your own instance behind a firewall (and indeed this is what we’re already doing at Mozilla for some internal work). But importantly, the notebooks themselves are not deeply tied to a single instance of the Iodide server. Should the need arise, it should be easy to migrate your work to another server or export your notebook as a bundle for sharing on other services like Netlify or Github Pages (more on exporting bundles below under “What’s next?”). Keeping the computation in the client allows us to focus on building a really great environment for sharing and collaboration, without needing to build out computational resources in the cloud.

Pyodide: The Python science stack in the browser

When we started thinking about making the web better for scientists, we focused on ways that we could make working with Javascript better, like compiling existing scientific libraries to WebAssembly and wrapping them in easy to use JS APIs. When we proposed this to Mozilla’s WebAssembly wizards, they offered a more ambitious idea: if many scientists prefer Python, meet them where they are by compiling the Python science stack to run in WebAssembly.

We thought this sounded daunting — that it would be an enormous project and that it would never deliver satisfactory performance… but two weeks later Mike Droettboom had a working implementation of Python running inside an Iodide notebook. Over the next couple of months, we added Numpy, Pandas, and Matplotlib, which are by far the most used modules in the Python science ecosystem. With help from contributors Kirill Smelkov and Roman Yurchak at Nexedi, we landed support for Scipy and scikit-learn. Since then, we’ve continued adding other libraries bit by bit.

Running the Python interpreter inside a Javascript virtual machine adds a performance penalty, but that penalty turns out to be surprisingly small — in our benchmarks, around 1x-12x slower than native on Firefox and 1x-16x slower on Chrome. Experience shows that this is very usable for interactive exploration.

Running Matplotlib in the browser enables its interactive features, which are unavailable in static environments.

Bringing Python into the browser creates some magical workflows. For example, you can import and clean your data in Python, and then access the resulting Python objects from Javascript (in most cases, the conversion happens automatically) so that you can display them using JS libraries like d3. Even more magically, you can access browser APIs from Python code, allowing you to do things like manipulate the DOM without touching Javascript.

Of course, there’s a lot more to say about Pyodide, and it deserves an article of its own — we’ll go into more detail in a follow up post next month.

JSMD (JavaScript MarkDown)

Just as in Jupyter and R’s R-Markdown mode, in Iodide you can interleave code and write-up as you wish, breaking your code into “code chunks” that you can modify and run as separate units. Our implementation of this idea parallels R Markdown and MATLAB’s “cell mode”: rather than using an explicitly cell-based interface, the content of an Iodide notebook is just a text document that uses a special syntax to delimit specific types of cells. We call this text format “JSMD”.

Following MATLAB, code chunks are defined by lines starting with %% followed by a string indicating the language of the chunk below. We currently support chunks containing Javascript, CSS, Markdown (and HTML), Python, a special “fetch” chunk that simplifies loading resources, and a plugin chunk that allows you to extend Iodide’s functionality by adding new cell types.

We’ve found this format to be quite convenient. It makes it easy to use text-oriented tools like diff viewers and your own favorite text editor, and you can perform standard text operations like cut/copy/paste without having to learn shortcuts for cell management. For more details you can read about JSMD in our docs.

What’s next?

It’s worth repeating that we’re still in alpha, so we’ll be continuing to improve overall polish and squash bugs. But in addition to that, we have a number of features in mind for our next round of experimentation. If any of these ideas jump out as particularly useful, let us know! Even better, let us know if you’d like to help us build them!

Enhanced collaborative features

As mentioned above, so far we’ve built a very simple backend that allows you to save your work online, look at work done by other people, and quickly fork and extend existing notebooks made by other users, but these are just the initial steps in a useful collaborative workflow.

The next three big collaboration features we’re looking at adding are:

  1. Google Docs-style comment threads
  2. The ability to suggest changes to another user’s notebook via a fork/merge workflow similar to Github pull requests
  3. Simultaneous notebook editing like Google Docs.

At this point, we’re prioritizing them in roughly that order, but if you would tackle them in a different order or if you have other suggestions, let us know!

More languages!

We’ve spoken to folks from the R and Julia communities about compiling those languages to WebAssembly, which would allow their use in Iodide and other browser-based projects. Our initial investigation indicates that this should be doable, but that implementing these languages might be a bit more challenging than Python. As with Python, some cool workflows open up if you can, for example, fit statistical models in R or solve differential equations in Julia, and then display your results using browser APIs. If bringing these languages to the web interests you, please reach out — in particular, we’d love help from FORTRAN and LLVM experts.

Export notebook archive

Early versions of Iodide were self-contained runnable HTML files, which included both the JSMD code used in the analysis and the JS code used to run Iodide itself, but we’ve moved away from this architecture. Later experiments have convinced us that the collaboration benefits of having an Iodide server outweigh the advantages of managing files on your local system. Nonetheless, these experiments showed us that it’s possible to take a runnable snapshot of an Iodide notebook by inlining the Iodide code, along with any data and libraries used by the notebook, into one big HTML file. This might end up being a bigger file than you’d want to serve to regular users, but it could prove useful as a perfectly reproducible and archivable snapshot of an analysis.

Iodide to text editor browser extension

While many scientists are quite used to working in browser-based programming environments, we know that some people will never edit code in anything other than their favorite text editor. We really want Iodide to meet people where they are already, including those who prefer to type their code in another editor but want access to the interactive and iterative features that Iodide provides. To serve that need, we’ve started thinking about creating a lightweight browser extension and some simple APIs to let Iodide talk to client-side editors.

Feedback and collaboration welcome!

We’re not trying to solve all the problems of data science and scientific computing, and we know that Iodide will not be everyone’s cup of tea. If you need to process terabytes of data on GPU clusters, Iodide probably doesn’t have much to offer you. If you are publishing journal articles and you just need to write up a LaTeX doc, then there are better tools for your needs. If the whole trend of bringing things into the browser makes you cringe a little, no problem — there are a host of really amazing tools that you can use to do science, and we’re thankful for that! We don’t want to change the way anyone works, and for many scientists web-focused communication is beside the point. Rad! Live your best life!

But for those scientists who do produce content for the web, and for those who might like to do so if they had tools designed to support the way they work: we’d really love to hear from you!

Please visit iodide.io, try it out, and give us feedback (but again: keep in mind that this project is in alpha phase — please don’t use it for any critical work, and please be aware that while we’re in alpha everything is subject to change). You can take our quick survey, and Github issues and bug reports are very welcome. Feature requests and thoughts on the overall direction can be shared via our Google group or Gitter.

If you’d like to get involved in helping us build Iodide, we’re open source on Github. Iodide touches a wide variety of software disciplines, from modern frontend development to scientific computing to compilation and transpilation, so there are a lot of interesting things to do! Please reach out if any of this interests you!


Huge thanks to Hamilton Ulmer, William Lachance, and Mike Droettboom for their great work on Iodide and for reviewing this article.

The Firefox FrontierUse Firefox Send to safely share files for free

Moving files around the web can be complicated and expensive, but with Firefox Send it doesn’t have to be. There are plenty of services that let you send files for … Read more


The Mozilla BlogIntroducing Firefox Send, Providing Free File Transfers while Keeping your Personal Information Private

At Mozilla, we are always committed to people’s security and privacy. It’s part of our long-standing Mozilla Manifesto. We are continually looking for new ways to fulfill that promise, whether it’s through the browser, apps or services. So, it felt natural to graduate one of our popular Test Pilot experiments, Firefox Send, send.firefox.com. Send is a free encrypted file transfer service that allows users to safely and simply share files from any browser. Additionally, Send will also be available as an Android app in beta later this week. Now that it’s a keeper, we’ve made it even better, offering higher upload limits and greater control over the files you share.

Here’s how Firefox Send works:



Encryption & Controls at your fingertips

Imagine the last time you moved into a new apartment or purchased a home and had to share financial information like your credit report over the web. In situations like this, you may want to offer the recipient one-time or limited access to those files. With Send, you can feel safe that your personal information does not live somewhere in the cloud indefinitely.

Send uses end-to-end encryption to keep your data secure from the moment you share to the moment your file is opened. It also offers security controls that you can set. You can choose when your file link expires, the number of downloads, and whether to add an optional password for an extra layer of security.

Choose when your file link expires, the number of downloads and add an optional password

Share large files & navigate with ease

Send also makes it simple to share large files – perfect for sharing professional design files or collaborating on a presentation with co-workers. With Send you can quickly share files up to 1GB. To send files up to 2.5GB, sign up for a free Firefox account.

Send makes it easy for your recipient, too. No hoops to jump through. They simply receive a link to click and download the file. They don’t need to have a Firefox account to access your file. Overall, this makes the sharing experience seamless for both parties, and as quick as sending an email.

Sharing large file sizes is simple and quick

We know there are several cloud sharing solutions out there, but as a continuation of our mission to bring you more private and safer choices, you can trust that your information is safe with Send. As with all Firefox apps and services, Send is Private By Design, meaning all of your files are protected and we stand by our mission to handle your data privately and securely.

Whether you’re sharing important personal information, private documents or confidential work files you can start sending your files for free with Firefox Send.


The Mozilla BlogApply for a Mozilla Fellowship

We’re seeking technologists, activists, policy experts, and scientists devoted to a healthy internet. Apply to be a 2019-2020 Mozilla Fellow

 

Today, we’re opening applications for Mozilla Fellowships. Mozilla is seeking technologists, activists, policy experts, and scientists who are building a more humane digital world:

http://mozilla.fluxx.io/apply/fellowship

Mozilla Fellows work on the front lines of internet health, at a time when the internet is entwined with everything from elections and free expression to justice and personal safety. Fellows ensure the internet remains a force for good — empowerment, equality, access — and also combat online ills, like abuse, exclusion, and closed systems.

Mozilla is particularly interested in individuals whose expertise aligns with our 2019 impact goal: “better machine decision making,” or ensuring the artificial intelligence in our lives is designed with responsibility and ethics top of mind. For example: Fellows might research how disinformation spreads on Facebook. Or, build a tool that identifies the blind spots in algorithms that detect cancer. Or, advocate for a “digital bill of rights” that protects individuals from invasive facial recognition technology.

During a 10-month tenure, Mozilla Fellows may run campaigns, build products, and influence policy. Along the way, Fellows receive competitive funding and benefits; mentorship and trainings; access to the Mozilla network and megaphone; and more. Mozilla Fellows hail from a range of disciplines and geographies: They are scientists in the UK, human rights researchers in Germany, tech policy experts in Nigeria, and open-source advocates in New Zealand. The Mozilla Fellowship runs from October 2019 through July 2020.

Specifically, we’re seeking Fellows who identify with one of three profiles:

  • Open web activists: Individuals addressing issues like privacy, security, and inclusion online. These Fellows will embed at leading human rights and civil society organizations from around the world, working alongside the organizations and also exchanging advocacy and technical insights among each other.

  • Tech policy professionals: Individuals who examine the interplay of technology and public policy — and craft legal, academic, and governmental solutions.

  • Scientists and researchers: Individuals who infuse open-source practices and principles into scientific research. These Fellows are based at the research institution with which they are currently affiliated.

Learn more about Mozilla Fellowships, and then apply. Part 1 of the applications closes on Monday April 8, 2019 at 5:00pm ET. Below, meet a handful of current Mozilla Fellows:

 

Valentina Pavel

Valentina is a digital rights advocate working on privacy, freedom of speech, and open culture. Valentina is currently investigating the implications of digital feudalism, and exploring different visions for shared data ownership. Read her latest writing.

Selina Musuta

Selina is a web developer and infosec expert. Selina is currently embedded at Consumer Reports, and supporting the organization’s privacy and security work.

Julia Lowndes | @juliesquid

Julia is an environmental scientist and open-source advocate. Julia is currently evangelizing openness in the scientific community, and training fellow researchers how to leverage open data and processes. Learn about her latest project.

Richard Whitt | @richardswhitt

Richard is a tech policy expert and industry veteran. Richard is currently exploring how to re-balance the user-platform dynamic online, by putting powerful AI and other emerging tech in the hands of users. Read his recent essay in Fast Company.


Mozilla Open Policy & Advocacy BlogEU takes major step forward on government vulnerability disclosure review processes

We’ve argued for many years that governments should implement transparent processes to review and disclose the software vulnerabilities that they learn about. Such processes are essential for the cybersecurity of citizens, businesses, and governments themselves. For that reason, we’re delighted to report that the EU has taken a crucial step forward in that endeavour, by giving its cybersecurity agency an explicit new mandate to help European governments establish and implement these processes where requested.

The just-adopted EU Cybersecurity Act is designed to increase the overall level of cybersecurity across the EU, and a key element of the approach focuses on empowering the EU’s cybersecurity agency (‘ENISA’) to play a more proactive role in supporting the Union’s Member States in cybersecurity policy and practices. Since the legislative proposal was launched in 2017, we’ve argued that ENISA should be given the power to support EU Member States in the area of government vulnerability disclosure (GVD) review processes.

Malicious actors can exploit vulnerabilities to cause significant harm to individuals and businesses, and can cripple critical infrastructure. At the same time, governments often learn about software vulnerabilities and face competing incentives as to whether to disclose the existence of the vulnerability to the affected company immediately, or delay disclosure so they can use the vulnerability as an offensive/intelligence-gathering tool. For those reasons, it’s essential that governments have processes in place for reviewing and coordinating the disclosure of the software vulnerabilities that they learn about, as a key pillar in their strategy to defend against the nearly daily barrage of cybersecurity attacks, hacks, and breaches.

For several years, we’ve been at the forefront of calls for governments to put in place these processes. In the United States, we spoke out strongly in favor of the Protecting Our Ability to Counter Hacking Act (PATCH Act) and participated in the Centre for European Policy Studies’ Task Force on Software Vulnerability Disclosure, a broad stakeholder taskforce that in 2018 recommended EU and national-level policymakers to implement GVD review processes. In that context, our work on the EU Cybersecurity Act is a necessary and important continuation of this commitment.

We’re excited to see continued progress on this important issue of cybersecurity policy. The adoption of the Cybersecurity Act by the European Parliament today ensures that, for the first time, the EU has given legal recognition to the importance of EU Member States putting in place processes to review and manage the disclosure of vulnerabilities that they learn about. In addition, by giving the EU Cybersecurity Agency the power to support Member States in developing and implementing these processes upon request, the EU will help ensure that Member States with weaker cybersecurity resilience are supported in implementing this ‘next generation’ of cybersecurity policy.

We applaud EU lawmakers for this forward-looking approach to cybersecurity, and are excited to continue working with policymakers within the 28 EU Member States to see this vision for government vulnerability disclosure review processes realised at national and EU level. This will help Europe and all Europeans to be more secure.

Further reading:


This Week In RustThis Week in Rust 277

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's crate is validator, a crate offering simple validation for Rust structs. Thanks to Willi Kappler for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

173 pull requests were merged in the last week

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs
Tracking Issues & PRs

New RFCs

Upcoming Events

Online
Asia Pacific
Europe
North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

Ownership is hard. It indeed is. And you managed to do that exact hard thing by hand, without any mechanical checks. (Or so you think.)

– @Cryolite on twitter (translated from Japanese)

Thanks to Xidorn Quan and GolDDranks for the suggestion!

Please submit your quotes for next week!

This Week in Rust is edited by: nasa42, llogiq, and Flavsditz.

Discuss on r/rust.

Mozilla Open Policy & Advocacy Blog: Meet the newest walled garden

Recently, Mark Zuckerberg posted a lengthy note outlining Facebook’s vision to integrate its three messaging services – WhatsApp, Messenger, and Instagram (through its Direct messaging functionality) – into one privacy and security oriented platform. The post positioned Facebook’s future around individual and small group conversations, rather than the “public forum” style communications through Facebook’s newsfeed platform. Initial coverage of the move, largely critical, has focused on the privacy and security aspects of this integrated platform, the history of broken promises on privacy and the changes that would be needed for Facebook’s business model to realize the goal. However, there’s a yet darker side to the proposal, one mostly lost in the post and coverage so far: Facebook is taking one step further to make its family of services into the newest walled garden, at the expense of openness and the broader digital economy.

Here’s the part of the post that highlights the evolution in progress:


Sounds good on its face, right? Except, what Facebook is proposing isn’t interoperability as most use that term. It’s more like intraoperability – making sure the various messaging services in Facebook’s own walled garden all can communicate with each other, not with other businesses and services. In the context of this post, it seems clear that Facebook will intentionally box out other companies, apps, and networks in the course of this consolidation. Rather than creating the next digital platform to take the entire internet economy forward, encouraging downstream innovation, investment, and growth, Facebook is closing out its competitors and citing privacy and security considerations as its rationale.

This is not an isolated incident – it’s a trend. For example, on August 1, 2018, Facebook shut off the “publish_actions” feature in one of its core APIs. This change gutted the practical ability of independent companies and developers to interoperate with Facebook’s services. Some services were forced to disconnect Facebook profiles or stop interconnecting with Facebook entirely. Facebook subsequently changed a long-standing platform policy that had prohibited the use of their APIs to compete, but the damage was done, and the company’s restrictive practices continue.

We can see further evidence of the intent to create a silo under the guise of security in Zuckerberg’s note where he says: “Finally, it would create safety and spam vulnerabilities in an encrypted system to let people send messages from unknown apps where our safety and security systems couldn’t see the patterns of activity.” Security and spam are real problems, but interoperability doesn’t need to mean opening a system up to every incoming message from any service. APIs can be secured by tokens and protected by policies and legal systems.

Without doubt, Facebook needs to prioritize the privacy of its users. Shutting down overly permissive APIs, even at the cost of some amount of competition and interoperability, can be necessary for that purpose – as with the Cambridge Analytica incident. But there’s a difference between protecting users and building silos. And designing APIs that offer effective interoperability with strong privacy and security guarantees is a solvable problem.

The long-term challenge we need to be focused on with Facebook isn’t just whether we can trust the company with our privacy and security – it’s also whether they’re using privacy and security simply as a cover to get away with anticompetitive behavior.

How does this square with the very active conversations around competition and centralization in tech we're witnessing around the world today? The German competition authority just issued a decision forcing Facebook to stop sharing data amongst its services. This feels like quite a discordant note for Facebook to be striking, even as the company is (presumably) thinking about how to comply with the German decision. Meanwhile, the Federal Trade Commission is actively pursuing an investigation into Facebook's data practices. And regulators in the U.S., the European Union, India, Israel, and Australia are actively reviewing their antitrust and competition laws to ensure they can respond to the challenges posed by technology and data.

It’s hard to say whether integrating its messaging services will further entrench Facebook’s position, or make it harder to pursue the kinds of remedies directed by the Bundeskartellamt and being considered by politicians around the world. But it seems like Facebook is on a collision course towards finding out.

If Facebook believes that messaging as a platform offers incredible future innovation, the company has a choice. It could either seek to develop that potential within a silo, the way AT&T fostered innovation in telephones in the 1950s – or it could try the way the internet was built to work: offering real interoperability on reasonable terms so that others can innovate downstream.

The post Meet the newest walled garden appeared first on Open Policy & Advocacy.

Chris AtLee: Smaller Firefox Updates

Back in 2014 I blogged about several ideas about how to make Firefox updates smaller.

Since then, we have been able to implement some of these ideas, and we also landed a few unexpected changes!

tl;dr

It's hard to measure exactly what the impact of all these changes is over time. As Firefox continues to evolve, new code and dependencies are added and old code is removed, while at the same time the build system and installer/updater continue to see improvements. Nevertheless, I was interested in comparing what the combined impact of all these changes would be.

To attempt a comparison, I've taken the latest release of Firefox as of March 6, 2019, which is Firefox 65.0.2. Since most of our users are on Windows, I've downloaded the win64 installer.

Next, I tried to reverse some of the changes described below. I re-compressed omni.ja, used bz2 compression for the MAR files, re-added the deleted images and startup cache, and used the old version of mbsdiff to generate the partial updates.

Format                          Current Size    "Old" Size      Improvement (%)
Installer                       45,693,888      56,725,712      19%
Complete Update                 49,410,488      70,366,869      30%
Partial Update (from 64.0.2)    14,935,692      28,080,719      47%
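For reference, the improvement column is simply (old - new) / old. A quick sketch that reproduces the percentages from the table above:

# Quick check of the improvement percentages in the table above.
sizes = {
    "Installer":                    (45_693_888, 56_725_712),
    "Complete Update":              (49_410_488, 70_366_869),
    "Partial Update (from 64.0.2)": (14_935_692, 28_080_719),
}

for name, (current, old) in sizes.items():
    improvement = (old - current) / old * 100
    print(f"{name}: {improvement:.0f}% smaller")
# Installer: 19% smaller
# Complete Update: 30% smaller
# Partial Update (from 64.0.2): 47% smaller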

Small updates FTW!

Ideally most of our users are getting partial updates from version to version, and a nearly 50% reduction in partial update size is quite significant! Smaller updates mean users can update more quickly and reliably!

One of the largest contributors to our partial update sizes right now is the binary diff size for compiled code. For example, the patch for xul.dll alone is 13.8MB of the 14.9MB partial update right now. Diffing algorithms like courgette could help here, as could investigations into making our PGO process more deterministic.

Here are some of the things we've done to reduce update sizes in Firefox.

Shipping uncompressed omni.ja files

This one is a bit counter-intuitive. omni.ja files are basically just zip files, and originally were shipped as regular compressed zips. The zip format compresses each file in the archive independently, in contrast to something like .tar.bz2 where the entire archive is compressed at once. Having the individual files in the archive compressed makes both types of updates inefficient: complete updates are larger because compressing (in the MAR file) already compressed data (in the ZIP file) doesn't yield good results, and partial updates are larger because calculating a binary diff between two compressed blobs also doesn't yield good results. Also, our Windows installers have been using LZMA compression for a long time, and after switching to LZMA for update compression, we can achieve much greater compression ratios with LZMA of the raw data versus LZMA of zip (deflate) compressed data.
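To illustrate the effect (this is just a sketch, not the actual build tooling, and the directory name is a placeholder), you can pack the same files into a stored zip and a deflated zip, then LZMA-compress both archives; the stored archive should compress noticeably better as a whole:

import lzma
import os
import zipfile

def pack(src_dir, out_path, method):
    """Pack every file under src_dir into a zip using the given compression method."""
    with zipfile.ZipFile(out_path, "w", compression=method) as zf:
        for root, _dirs, files in os.walk(src_dir):
            for name in files:
                path = os.path.join(root, name)
                zf.write(path, os.path.relpath(path, src_dir))

SRC = "payload/"  # placeholder: any directory of files to experiment with

pack(SRC, "stored.zip", zipfile.ZIP_STORED)      # like today's uncompressed omni.ja
pack(SRC, "deflated.zip", zipfile.ZIP_DEFLATED)  # like the old compressed omni.ja

for archive in ("stored.zip", "deflated.zip"):
    raw = open(archive, "rb").read()
    print(archive, len(raw), "->", len(lzma.compress(raw)), "bytes after LZMA")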

The expected impact of this change was ~10% smaller complete updates, ~40% smaller partial updates, and ~15% smaller installers for Windows 64 en-US builds.

Using LZMA compression for updates

Pretty straightforward idea: LZMA does a better job of compression than bz2. We also looked at brotli and zstd for compression, but LZMA performs the best so far for updates, and we're willing to spend quite a bit of CPU time to compress updates for the benefit of faster downloads.
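As a rough illustration of the bz2 vs. LZMA gap (not the actual MAR compression settings), here's a small comparison using Python's standard-library codecs on an arbitrary input file:

import bz2
import lzma
import sys

# Compare bz2 and LZMA (xz) on the same input file.
data = open(sys.argv[1], "rb").read()

bz2_size = len(bz2.compress(data, compresslevel=9))
xz_size = len(lzma.compress(data, preset=9))

print(f"original: {len(data):>12,} bytes")
print(f"bz2 -9:   {bz2_size:>12,} bytes")
print(f"xz -9:    {xz_size:>12,} bytes ({(bz2_size - xz_size) / bz2_size:.1%} smaller than bz2)")

For typical binary data like compiled code, the xz output usually comes in well under the bz2 output, which is the gap the update pipeline is exploiting.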

LZMA compressed updates were first shipped for Firefox 56.

The expected impact of this change was 20% reduction for Windows 64 en-US updates.

Disable startup cache generation

This came out of some investigation about why partial updates were so large. I remember digging into this in the Toronto office with Jeff Muizelaar, and we noticed that one of the largest contributors to partial update sizes was the startup cache files. The buildid was encoded into the header of the startup cache files, which effectively changed the entire compressed file with every build. It was unclear whether shipping these provided any benefit, and so we experimented with turning them off. Telemetry didn't show any impact to startup times, and so we stopped shipping the startup cache as of Firefox 55.

The expected impact of this change was about 25% for a Windows 64 en-US partial update.

Optimized bsdiff

Adam Gashlin was working on a new binary diffing tool called bsopt, meant to generate patch files compatible with bspatch. As part of this work, he discovered that a few changes to the current mbsdiff implementation could substantially reduce partial update sizes. This first landed in Firefox 61.

The expected impact of this change was around 4.5% for partial updates for Windows 64 builds.

Removed unused theme images

We removed nearly 1MB of unused images from Firefox 55. This shrinks all complete updates and full installers by about 1MB.

Optimize png images

By using a tool called zopflipng, we were able to losslessly recompress PNG files in-tree, and reduce the total size of these files by 2.4MB, or about 25%.
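The recompression itself can be approximated with a small script that walks the tree and runs zopflipng on each PNG, keeping the result only when it's actually smaller. This is just a sketch, assuming zopflipng is on your PATH and using only its basic 'zopflipng input output' invocation; it's not the script we actually used:

import os
import subprocess
import sys

# Losslessly recompress every .png under the given directory with zopflipng,
# replacing the original only when the recompressed file is smaller.
root = sys.argv[1]
for dirpath, _dirs, files in os.walk(root):
    for name in files:
        if not name.endswith(".png"):
            continue
        src = os.path.join(dirpath, name)
        tmp = src + ".zopfli"
        subprocess.run(["zopflipng", src, tmp], check=True)
        if os.path.exists(tmp) and os.path.getsize(tmp) < os.path.getsize(src):
            os.replace(tmp, src)
        elif os.path.exists(tmp):
            os.remove(tmp)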

Reduce duplicate files we ship

We removed a few hundred kilobytes of duplicate files from Firefox 52, and put in place a check to prevent further duplicates from being shipped. It's hard to measure the long term impact of this, but I'd like to think that we've kept bloat to a minimum!
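Conceptually, the duplicate check just hashes every file in the package and flags any content hash that appears more than once. A minimal sketch (not the actual in-tree check, which lives in the Firefox build system) might look like this:

import hashlib
import os
import sys
from collections import defaultdict

# Group files in a package directory by content hash and report duplicates.
by_hash = defaultdict(list)
root = sys.argv[1]
for dirpath, _dirs, files in os.walk(root):
    for name in files:
        path = os.path.join(dirpath, name)
        with open(path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        by_hash[digest].append(os.path.relpath(path, root))

for digest, paths in by_hash.items():
    if len(paths) > 1:
        print("duplicate content:", ", ".join(sorted(paths)))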

Firefox Nightly: Firefox Student Projects in 2018: A Recap

Firefox is an open-source project, created by a vibrant community of paid and volunteer contributors from all over the world. Did you know that some of those contributors are students, who are sponsored or given course credit to make improvements to Firefox?

In this blog post, we want to talk about some student projects that have wrapped up recently, and also offer the students themselves an opportunity to reflect on their experience working on them.

If you or someone you know might be interested in developing Firefox as a student, there are some handy links at the bottom of this article to help get you started with some student programs. Not a student? No problem – come hack with us anyways!

Now let’s take a look at some interesting things that have happened in Firefox recently, thanks to some hard-working students.


Multi-select Tabs by Abdoulaye O. Ly

In the summer of 2018, Abdoulaye O. Ly worked on Firefox Desktop for Google Summer of Code. His project was to work on adding multi-select functionality to the tab bar in Firefox Desktop, and then adding context menu items to work on sets of tabs rather than individual ones. This meant Abdoulaye would be making changes in the heart of one of the most complicated and most important parts of Firefox’s user interface.

Abdoulaye’s project was a smash success! After a few months of baking and polish in Nightly, multi-select tabs shipped enabled by default to the general Firefox audience in Firefox 64. It was one of the top-line features for that release!

You can try it right now by holding down Ctrl/Cmd and clicking on individual tabs in the tab bar. You can also hold down Shift and select a range of tabs. Then, try right-clicking on the range to perform some operations. This way, you can bookmark a whole set of tabs, send them to another device, or close them all at once!

Here’s what Abdoulaye had to say about working on the project:

Being part of the multi-select project was one of my best experiences so far. Indeed, I had the opportunity to implement features that are being used by millions of Firefox users. In addition, it gave me the privilege of receiving a bunch of constructive reviews from my mentor and other Mozilla engineers, which has greatly boosted my software development skills. Now, I feel less intimidated when skimming through large code bases. Another aspect on which I have also made significant progress is on team collaboration, which was the most challenging part of my GSoC internship.

We want to thank Abdoulaye for collaborating with us on this long sought-after feature! He will continue his involvement at Mozilla with a summer internship in our Toronto office. Congratulations on shipping, and great work!


Better Certificate Error Pages by Trisha Gupta

University student Trisha Gupta contributed to Firefox as part of an Outreachy open source internship.

Her project was to make improvements to the certificate error pages that Firefox users see when a website presents a (seemingly) invalid security certificate. These sorts of errors can show up for a variety of reasons, only some of which are the fault of the websites themselves.

The Firefox user experience and security engineering teams collaborated on finding ways to convey these types of errors to users in a better, more understandable way. They produced a set of designs, fine-tuned them in user testing, and handed them off to Trisha so she could start implementing them in Firefox.

In some cases, this meant adding entirely new pages, such as the “clock skew” error page. That page tells users that the certificate error is caused by their system clocks being off by a few years, which happens a lot more often than one might think.

The new certificate error page displayed when the computer clock is set incorrectly.

What year is it?

We are really grateful for Trisha’s fantastic work on this important project, and for the Outreachy program that enabled her to find this opportunity. Here is what Trisha says about her internship:

The whole experience working with the Mozillians was beyond fruitful. Everyone on my team, especially my mentor were very helpful and welcoming. It was my first time working with such a huge company and such an important team. Hence, the biggest challenge for me was to practically not let anybody down. I was very pleasantly surprised when at All Hands, all my team members were coding in homerooms together, and each one of us had something to learn from the other, regardless of the hierarchy or position or experience! It was overwhelming yet motivating to see the quality of code, the cohesion in teamwork and the common goal to make the web safer.

Right from filing bugs, to writing patches, to seeing them pass the tests and not breaking the tree, the whole process of code correction, review, testing and submission was a great learning experience. Of course, seeing the error pages live in Nightly was the most satisfying feeling ever, and I am super grateful to Mozilla for the best summer of my life!”

Trisha’s project is currently enabled in Firefox Nightly and Beta and scheduled to be released to the general user population in Firefox 66.


Dark Theme Darkening by Dylan Stokes, Vivek Dhingra, Lian Zhengyi, Connor Masini, and Bogdan Pozderca

Michigan State University students Dylan Stokes, Vivek Dhingra, Lian Zhengyi, Connor Masini, and Bogdan Pozderca extended Firefox’s Theming API to allow for theme authors to style more parts of the browser UI as well as increase cross-browser compatibility with Google Chrome themes.

The students worked as part of their CSE498: Collaborative Design course, often called their “Capstone.” Students enroll in this course during their last year of an undergraduate degree in computer science and are assigned a company in the industry to work with for a semester project.

With the work contributed by this team, themes can now customize the Firefox menu, findbar, location bar and search box dropdown, icon colors and more. These new features are also used by the “Dark” theme that ships by default with Firefox. Dylan Stokes published an end-of-semester blog post on the Mozilla Hacks blog that goes into further details of the technical work that the team did. The following video was created by the team to recap their work: Customizable theme development in Firefox 61

Here’s what Vivek had to say about the project:

Working on ‘Dark Theme Darkening’ was one of the most rewarding experiences of my student life. It was extremely motivating to code something and see that deployed on different Firefox branches in weeks. Emphasis on thorough testing, rigorous code reviews, efficient communication with Mozilla contributors across the globe and working in a fast pace team environment are just some of the many useful experiences I had as part of this project.

Mike [Conley] and Jared [Wein] were also kind enough to spend a weekend with us in East Lansing, MI. None of us expected that as we got started on the project so we were very thankful for their time and effort for having a coding marathon with us, and answering all our questions.

Mike and Jared also had a very thorough project plan for us which made it easier to work in an organized manner. We also had the opportunity to interact with Mozilla contributors and employees across the world as we worked on different tasks. I love working in diverse teams and as part of this project, I had the opportunity to do that a lot.

Dylan also had some comments about the project:

Working on the Dark Theme Darkening project was an amazing opportunity. It was the first time I got a sense of developing for a large application. It was a bit overwhelming at first, but our mentors: Jared, Mike, and Tim [Nguyen] were extremely helpful in guiding us to get started crushing our first bugs.

The code review process really helped me grow as a developer. Not only was I writing working code; I was writing efficient, production level code. It still amazes me that code that I was able to write is currently being used by millions of users.

One thing that surprised me the most was the community. Anyone I interacted with wanted to help in anyway they could. We were all part of a team and everyone’s end goal was to create a web browser that is fast for good.

The students’ work for the Dark Theme Darkening project shipped in Firefox 61. See their work by going to the Firefox menu, choosing Customize, then enabling the Dark theme. You can also create your own theme with the colors of your choice.


Other projects

We’ve just shown you 3 projects, but a whole bunch of other amazing students have been working on improving Firefox in 2018. A few more examples:

And that’s just the projects that focused on Firefox development itself! The Mozilla community mentored so many projects that it would be too much for us to list them all here, so please check them out on the project pages for GSoC and Outreachy.


Get Involved

The projects described above are just a small sampling of the type of work that students have contributed to the Mozilla project. If you’re interested in getting involved, consider signing up to Mozilla’s Open Source Student Network and keep checking for opportunities at Google Summer of Code, Outreachy, codetribute. Finally, you should consider applying for an internship with Mozilla. Applications for the next recruiting cycle (Summer 2020) open in September/October 2019.

Feel free to connect with any of us through Mozilla’s IRC servers if you have any questions related to becoming a new contributor to Firefox. Thanks, and happy hacking!

Thanks

We are really grateful for all Mozilla contributors who invested countless hours mentoring a student in the last year and for all the great people driving our student programs behind the scenes.

A special thank you to Mike Conley and Jared Wein for authoring this blog post with me.

The Firefox Frontier: Spring Cleaning with Browser Extensions

Flowers in bloom, birds singing, cluttered debris everywhere. It’s Spring cleaning season. We may not be able to help with that mystery odor in the garage, but here are some … Read more

The post Spring Cleaning with Browser Extensions appeared first on The Firefox Frontier.

QMO: DevEdition 66 Beta 14 Testday Results

Hello Mozillians!

As you may already know, last Friday – March 8th – we held a new Testday event, for DevEdition 66 Beta 14.

Thank you all for helping us make Mozilla a better place: Iryna Thompson, Rok Žerdin (zerdo), gaby2300, noelonassis.

From Mozilla India Community: Aishwarya Narasimhan.

From Mozilla Bangladesh Community: Sayed Ibn Masud, Maruf Rahman, Saheda Reza Antora, Sajedul Islam, Hasibul Hasan Shanto, Kazi Ashraf Hossain and Mim Ahmed Joy.

Results:

– 3 new issues logged: 1533743, 1534076 and 1533665

– several test cases executed for Firefox Screenshots, Search and Build Installation & Uninstallation.

– 3 bugs verified: 1506073, 1512187 and 1519422

Thanks for yet another awesome testday, we appreciate your contribution 🙂

We hope to see you all in our next events; keep an eye on QMO, and we will make announcements as soon as something shows up!

About:Community: Firefox 66 new contributors

With the release of Firefox 66, we are pleased to welcome the 39 developers who contributed their first code change to Firefox in this release, 35 of whom were brand new volunteers! Please join us in thanking each of these diligent and enthusiastic individuals, and take a look at their contributions:

Will Kahn-Greene: Socorro: February 2019 happenings

Summary

Socorro is the crash ingestion pipeline for Mozilla's products like Firefox. When Firefox crashes, the crash reporter collects data about the crash, generates a crash report, and submits that report to Socorro. Socorro saves the crash report, processes it, and provides an interface for aggregating, searching, and looking at crash reports.

This blog post summarizes Socorro activities in February.

Read more… (6 mins to read)

The Servo Blog: This Month In Servo 126

In the past month, we merged 176 PRs in the Servo organization’s repositories.

Planning and Status

Our roadmap is available online. Plans for 2019 will be published soon.

This week’s status updates are here.

Exciting works in progress

Notable Additions

  • jdm improved the rendering of 2d canvas paths with transforms applied.
  • sreeise implemented the DOM interfaces for audio, text, and video tracks.
  • ceyusa added support for hardware accelerated rendering in the media backend.
  • jdm prevented a panic when going back in history from a page using WebGL.
  • paulrouget enabled support for sharing Gecko’s VR process on Oculus devices.
  • asajeffrey made fullscreen content draw over top of any other page content.
  • jdm fixed a regression in hit-testing certain kinds of content.
  • paulrouget added automatic header file generation for the C embedding API.
  • jdm converted the Magic Leap port to use the official embedding API.
  • Manishearth added support for media track constraints to getUserMedia.
  • asajeffrey made the VR embedding API more flexible.
  • Manishearth implemented support for sending and receiving video streams over WebRTC.
  • jdm redesigned the media dependency graph to reduce time spent compiling Servo when making changes.
  • Manishearth added support for extended attributes on types in the WebIDL parser.
  • asajeffrey avoided a deadlock in the VR thread.
  • jdm fixed a severe performance problem when loading sites that use a lot of innerHTML modification.
  • asajeffrey implemented a test VR display that works on desktop.
  • Manishearth implemented several missing WebRTC callbacks.
  • jdm corrected the behaviour of the contentWindow API when navigating an iframe backwards in history.

New Contributors

Interested in helping build a web browser? Take a look at our curated list of issues that are good for new contributors!

Mike Conley: Firefox Front-End Performance Update #14

We’re only a few weeks away from Firefox 67 merging from the Nightly channel to Beta, and since my last update, a number of things have landed.

It’s the end of a long week for me, so I apologize for the brevity here. Let’s check it out!

Document Splitting Foundations for WebRender (In-Progress by Doug Thayer)

dthayer is still trucking along here – he’s ironed out a number of glitches, and kats is giving feedback on some APZ-related changes. dthayer is also working on a WebRender API endpoint for generating frames for multiple documents in a single transaction, which should help reduce the window of opportunity for nasty synchronization bugs.

Warm-up Service (In-Progress by Doug Thayer)

dthayer is pressing ahead with this experiment to warm up a number of critical files for Firefox shortly after the OS boots. He is working on a prototype that can be controlled via a pref that we’ll be able to test on users in a lab-setting (and perhaps in the wild as a SHIELD experiment).

Startup Cache Telemetry (In-Progress by Doug Thayer)

dthayer landed this Telemetry early in the week, and data has started to trickle in. After a few more days, it should be easier for us to make inferences on how the startup caches are operating out in the wild for our Nightly users.

Smoother Tab Animations (In-Progress by Felipe Gomes)

UX, Product and Engineering are currently hashing out the remainder of the work here. Felipe is also aiming to have the non-responsive tab strip bug fixed soon.

Lazier Hidden Window (Completed by Felipe Gomes)

After a few rounds of landings and backouts, this appears to have stuck! The hidden window is now created after the main window has finished painting, and this has resulted in a nice ts_paint (startup paint) win on our Talos benchmark!

This is a graph of the ts_paint startup paint Talos benchmark. The highlighted node is the first mozilla-central build with the hidden window work. Lower is better, so this looks like a nice win!

There’s still potential for more improvements on the hidden window, but that’s been split out to a separate project / bug.

Browser Adjustment Project (In-Progress by Gijs Kruitbosch)

This project appears to be reaching its conclusion, but with rather unsatisfying results. Denis Palmeiro from Vicky Chin's team has done a bunch of testing of both the original set of patches that Gijs landed to lower the global frame rate (painting and compositing) from 60fps to 30fps for low-end machines, as well as the new patches that decrease the frequency of main-thread painting (but not compositing) to 30fps. Unfortunately, this has not yielded the page load wins that we wanted1. We're still waiting to see if there's at least a power-usage win here worth pursuing, but we're almost ready to pull the plug on this one.

Better about:newtab Preloading (In-Progress by Gijs Kruitbosch)

Gijs has a set of patches that should make this possible, which will mean (in theory) that we’ll present a ready-to-roll about:newtab when users request one more often than not.

Unfortunately, there’s a small snag with a test failure in automation, but Gijs is on the case.

Experiments with the Process Priority Manager (In-Progress by Mike Conley)

The Process Priority Manager has been enabled in Nightly for a number of weeks now, and no new bugs have been filed against it. I filed a bug earlier this week to run a pref-flip experiment on Beta after the Process Priority Manager patches are uplifted later this month. Our hope is that this has a neutral or positive impact on both page load time and user retention!

Make the PageStyleChild load lazily (Completed by Mike Conley)

There’s an infrequently used feature in Firefox that allows users to switch between different CSS stylesheets that a page might offer. I’ve made the component that scans the document for alternative stylesheets much lazier, and also made it skip non web-pages, which means (at the very least) less code running when loading about:home and about:newtab



  1. This was unexpected – we ran an experiment late in 2018 where we noticed that lowering the frame rate manually via the layout.frame_rate pref had a positive impact on page load time… unfortunately, this effect is no longer being observed. This might be due to other refresh driver work that has occurred in the meantime. 

Chris H-C: Blast from the Past: I filed a bug against Firefox 3.6.6

A screenshot of the old bugzilla duplicate finder UI with the text inside table cells not rendering at all

On June 30, 2010 I was:

  • Sleepy. My daughter had just been born a few months prior and was spending her time pooping, crying, and not sleeping (as babies do).
  • Busy. I was working at Research in Motion (it would be three years before it would be renamed BlackBerry) on the BlackBerry Browser for BlackBerry 6.0. It was a big shift for us since that release was the first one using WebKit instead of the in-house “mango” rendering engine written in BlackBerry-mobile-dialect Java.
  • Keen. Apparently I was filing a bug against Firefox 3.6.6?!

Yeah. I had completely forgotten about this. Apparently while reading my RSS feeds in Google Reader (that doesn’t make me old, does it?) taking in news from Dragonmount about the Wheel of Time (so I guess I’ve always been a nerd, then) the text would sometimes just fail to render. I even caught it happening on the old Bugzilla “possible duplicate finder” UI (see above).

The only reason I was reminded this exists was because I received bugmail on my personal email address when someone accidentally added and removed themselves from the Cc list.

Pretty sure this bug, being no longer reproducible, still in UNCONFIRMED state, and filed against a pre-rapid-release version of Firefox, is something I should close. Yeah, I'll just go and do that.

:chutten


Mike Taylor: A historical look at lowercase defaultstatus

The other day I was doing some research on DOM methods and properties that Chrome implements and has use counters for, but that don't exist in Firefox.

defaultstatus caught my eye, because like, there's also a use counter for defaultStatus.

(The discerning reader will notice there's a lowercase and a lowerCamelCase version. The less-discerning reader should maybe slow down and start reading from the beginning.)

As far as I know, there's no real spec for these old BOM (Baroque Object Model) properties. It's supposed to allow you to set the default value for window.status, but it probably hasn't done anything in your browser for years.

image of some baroque art shit

Chrome inherited lowercase defaultstatus from Safari, but I would love to know why Safari (or KHTML pre-fork?) added it, and why Opera, Firefox or IE never bothered. Did a site break? Did someone complain about a missing status on a page load? Did this all stem from a typo?

DOMWindow.idl has the following similar-ish comments over the years and probably more, but nothing that points to a bug:

This attribute is an alias of defaultStatus and is necessary for legacy uses. For compatibility with legacy content.

It's hard to pin down exactly when it was added. It's in Safari 0.82's kjs_window.cpp. And in this "old" kde source tree as well. It is in current KHTML sources, so that suggests it was inherited by Safari after all.

Curious to see some code in the wild, I did some bigquerying with BigQuery on the HTTPArchive dataset and got a list of ~3000 sites that have a lowercase defaultstatus. Very exciting stuff.

There's at least 4 kinds of results:

1) False-positive results like var foo_defaultstatus. I could re-run the query, but global warming is real and making Google cloud servers compute more things will only hasten our own destruction.

2) User Agent sniffing, but without looking at navigator.userAgent. I guess you could call it User Agent inference, if you really cared to make a distinction.

Here's an example from some webmail script:

O.L3 = function(n) {
    switch (n) {
        case 'ie':
            p = 'execScript';
            break;
        case 'ff':
            p = 'Components';
            break;
        case 'op':
            p = 'opera';
            break;
        case 'sf':
        case 'gc':
        case 'wk':
            p = 'defaultstatus';
            break;
    }
    return p && window[p] !== undefined;
}

And another from some kind of design firm's site:

browser = (function() {
    return {
        [snip]
        'firefox': window.sidebar,
        'opera': window.opera,
        'webkit': undefined !== window.defaultstatus,
        'safari': undefined !== window.defaultstatus && typeof CharacterData != 'function',
        'chrome': typeof window.chrome === 'object',
        [snip]
    }
})();

3a) Enumerating over global built-ins. I don't know why people do this. I see some references to Babel, Ember, and JSHint. Are we making sure the scripts aren't leaking globals? Or trying to overwrite built-ins? Who knows.

3b) Actual usage, on old sites. Here's a few examples:

<body background="images/bvs_green_bkg.gif" bgcolor="#598580" text="#A2FF00" onload="window.defaultstatus=document.title;return true;">
<body onload="window.defaultstatus='Индийский гороскоп - ведическая астрология, джйотиш онлайн.'">

This one is my favorite, and not just because the site never calls it:

function rem() {
  window.defaultstatus="ok"
}

OK, so what have we learned? I'm not sure we've learned much of anything, to be honest.

If Chrome were to remove defaultstatus, the code using it as intended wouldn't break; a new global would be set, but that's not a huge deal. I guess the big risk is breaking UA sniffing and ending up in an unanticipated code-path, or worse, opting users into some kind of "your undetected browser isn't supported, download Netscape 2" scenario.

Anyways, window.defaultstatus, or window.defaultStatus for that matter, isn't as cool or interesting as Caravaggio would have you believe. Thanks for reading.

Mozilla Thunderbird: FOSDEM 2019 and DeltaChat

During the last month we attended two events: FOSDEM, Europe's premier free software event, and a meetup with the folks behind DeltaChat. At both events we met great people, had interesting conversations, and talked through potential future collaboration with Thunderbird. This post details some of our conversations and insights gathered from those events.

FOSDEM 2019

Magnus (Thunderbird Technical Manager), Kai (Thunderbird Security Engineer), and I (Ryan, Community Manager) arrived in Brussels for Europe’s premier free software event (free as in freedom, not beer): FOSDEM. I was excited to meet many of our contributors in-person who I’d only met online. It’s exhilarating to be looking someone in the eye and having a truly human interaction around something that you’re passionate about – this is what makes FOSDEM a blast.

There are too many conversations that we had to detail in their entirety in this blog post, but below are some highlights.

Chat over IMAP/Email

One thing we discussed at FOSDEM was Chat over IMAP with the people from Open-Xchange. Robert even gave a talk called "Break the Messaging Silos with COI". They made a compelling case as to why email is a great medium for chat, and the idea of using a chat that lets you select the provider that stores your data – genius! We followed up FOSDEM with a meetup with the DeltaChat folks in Freiburg, Germany, where we discussed encryption and Chat over Email.

Encryption, Encryption, Encryption

We discussed encryption a lot, primarily because we have been thinking about it a lot as a project. With the rising awareness of users about privacy concerns in tech, services like Protonmail getting a lot of attention, and in acknowledgement that many Thunderbird users rely on encrypted Email for their security – it was important that we use this opportunity to talk with our sister projects, contributors, and users about how we can do better.

Sequoia-PGP

We were very grateful that the Sequoia-PGP team took the time to sit down with us and listen to our ideas and concerns surrounding improving encrypted Email support in Thunderbird. Sequoia-PGP is an OpenPGP library written in Rust that appears to be pretty solid. There is a potential barrier to incorporating their work into Thunderbird: license compatibility (we use MPL and they use GPL). But we discussed a wide range of topics and have continued talking through what is possible following the event, and it is my hope that we will find some way to collaborate going forward.

One thing that stood out to me about the Sequoia team was their true interest in seeing Thunderbird be the best that it can be, and they seemed to genuinely want to help us. I’m grateful to them for the time that they spent and look forward to getting another opportunity to sit with them and chat.

pEp

Following our discussion with the Sequoia team, we spoke to Volker of the pEp Foundation. Over dinner we discussed Volker’s vision of privacy by default and lowering the barrier of using encryption for all communication. We had spoken to Volker in the past, but it was great to sit around a table, enjoy a meal, and talk about the ways in which we could collaborate. pEp’s approach centers around key management and improved user experience to make encryption more understandable and easier to manage for all users (this is a simplified explanation, see pEp’s website for more information). I very much appreciated Volker taking the time to walk us through their approach, and sharing ideas as to how Thunderbird might move forward. Volker’s passion is infectious and I was happy to get to spend time with him discussing the pEp project.

EteSync

People close to me know that I have a strong desire to see encrypted calendar and contact sync become a standard (I’ve even grabbed the domains cryptdav.com and cryptdav.org). So when I heard that Tom of EteSync was at FOSDEM, I emailed him to set up a time to talk. EteSync is secure, end-to-end encrypted and privacy respecting sync for your contacts, calendars and tasks. That hit the mark!

In our conversation we discussed potential ways to work together, and I encouraged him to try and make this into a standard. He was quite interested and we talked through who we should pull into the conversation to move this forward. I’m happy to say that we’ve managed to get Thunderbird Council Chairman and Lightning Calendar author Philipp Kewisch in on the conversation – so I hope to see us move this along. I’m so glad that Tom created an implementation that will help people maintain their privacy online. We so often focus on securing our communication, but what about the data that is produced from those conversations? He’s doing important work and I’m glad that I was able to find ways to support his vision. Tom also gave a talk at FOSDEM this year, called “Challenges With Building End-to-End Encrypted Applications – Learnings From Etesync”.

Autocrypt on the Train

During FOSDEM we attended a talk about Autocrypt by Vincent Breitmoser. As we headed to the city of Freiburg for our meetup with the people behind DeltaChat, we realized Vincent was on our train and managed to sit with him on the ride over. Vincent was going to the same meetup that we were, so it shouldn't have been surprising, but it was great to get an opportunity to sit down with him and discuss how the Autocrypt project was doing and the state of email encryption in general.

Vincent reiterated Autocrypt’s focus on raising the floor on encryption, getting as many people using encryption keys as possible and handling some of the complexity around the exchange of keys. We had concerns around the potential for man-in-the-middle attacks when using Autocrypt and Vincent was upfront about that and we had a useful discussion about balancing the risks and ease of use of email security. Vincent’s sincerity and humble nature made the conversation an enjoyable one, and I came away having made a new friend. Vincent is a good guy, and following our meetup in Freiburg we have discussed other ways in which we could collaborate.

Other FOSDEM Conversations

Of course, I will inevitably leave out someone in recounting who we talked to at FOSDEM. I had many conversations with old friends, met new people, and shared ideas. I got to meet Elio Qoshi of Ura Design face-to-face for the first time, which was really awesome (they did a style guide and usability study for Thunderbird, and have contributed in a number of other ways). I spoke to the creators of Mailfence, a privacy-focused email provider.

I attended a lot of talks and had my head filled with new perspectives, had preconceived notions challenged, and learned a lot. I hope that we’ll get to return next year and share some of the work that we’re doing now!

DeltaChat in Freiburg

A while before finishing our FOSDEM planning, we were invited by Holger Krekel to come to Freiburg, Germany following FOSDEM and learn more about Chat over Email (as their group calls it), and their implementation – DeltaChat. They use Autocrypt in DeltaChat, so there were conversations about that as well. Patrick Brunschwig, the author of the  Enigmail add-on was also present, and had interesting insights to add to the encryption conversation.

Hanging at a flat in Freiburg we spent two days talking through Chat over Email support in Thunderbird, how we might improve encryption in Thunderbird core, and thought through how Thunderbird can enhance its user experience around chat and encryption. Friedel, the author of rpgp, a rust implementation of OpenPGP, showed up at the event and shared his insights – which we appreciated.

I also got an opportunity to talk with the core maintainer of DeltaChat, Björn Petersen, about the state of chat generally. He started DeltaChat in order to offer an alternative to these chat silos, with a focus on an experience that would be on par with the likes of Telegram, Signal, and WhatsApp.

Following more general conversations, I spoke with Björn, Janka, and Xenia about the chat experience in DeltaChat. We discussed what a Chat over Email implementation in Thunderbird might look like, and more broadly talked through other potential UX improvements in the app. Xenia described the process their team went through when polling DeltaChat users about potential improvements and what insights they gained in doing that. We chatted about how what they have learned might apply to Thunderbird and it was very enlightening.

At one point Holger took us to Freiburg’s Chaos Computer Club, and there we got to hang out and talk about a wide range of topics – mostly centered around open source software and privacy. I thought it was fascinating and I got to learn about new projects that are up and coming. I hope to be able to collaborate with some of them to improve Thunderbird. In the end I was grateful that Holger and the rest of the DeltaChat contributors encouraged us to join them for their meetup, and opened up their space for us so that we could spend time with them and learn from them.

Thanks for reading this post! I know it was long, but I hope you found it interesting and learned something from it.

Mozilla Open Policy & Advocacy Blog: One hour takedown deadlines: The wrong answer to Europe's content regulation question

We’ve written a lot recently about the dangers that the EU Terrorist Content regulation poses to internet health and user rights, and efforts to combat violent extremism. One aspect that’s particularly concerning is the rule that all online hosts must remove ‘terrorist content’ within 60 minutes of notification. Here we unpack why that obligation is so problematic, and put forward a more nuanced approach to content takedowns for EU lawmakers.

Since the early days of the web, ‘notice & action’ has been the cornerstone of online content moderation. As there is so much user-generated content online, and because it is incredibly challenging for an internet intermediary to have oversight of each and every user activity, the best way to tackle illegal or harmful content is for online intermediaries to take ‘action’ (e.g. remove it) once they have been ‘notified’ of its existence by a user or another third party. Despite the fast-changing nature of internet technology and policy, this principle has shown remarkable resilience. While it often works imperfectly and there is much that could be done to make the process more effective, it remains a key tool for online content control.

Unfortunately, the EU’s Terrorist Content regulation stretches this tool beyond its limit. Under the proposed rules, all hosting services, regardless of their size, nature, or exposure to ‘terrorist content’ would be obliged to put in place technical and operational infrastructure to remove content within 60 minutes of notification. There’s three key reasons why this is a major policy error:

  • Regressive burden: Not all internet companies are the same, and it is reasonable to suggest that in terms of online content control, those who have more should do more. More concretely, it is intuitive that a social media service with billions in revenue and users should be able to remove notified content more quickly than a small family-run online service with a far narrower reach. Unfortunately however, this proposal forces all online services – regardless of their means – to implement the same ambitious 60-minute takedown timeframe. This places a disproportionate burden on those least able to comply, giving an additional competitive advantage to the handful of already dominant online platforms.
  • Incentivises over-removal: A crucial aspect of the notice & action regime is the post-notification review and assessment. Regardless of whether a notification of suspected illegal content comes from a user, a law enforcement authority, or a government agency, it is essential that online services review the notification to assess its validity and conformity with basic evidentiary standards. This ‘quality assurance’ aspect is essential given how often notifications are either inaccurate, incomplete, or in some instances, bogus. However, a hard deadline of 60 minutes to remove notified content makes it almost impossible for most online services to do the kind of content moderation due diligence that would minimise this risk. What’s likely to result is the over-removal of lawful content. Worryingly, the risk is especially high for ‘terrorist content’ given its context-dependent nature and the thin line between intentionally terroristic and good-faith public interest reporting.
  • Little proof that it actually works: Most troubling about the European Commission's 60-minute takedown proposal is that there doesn't seem to be any compelling reason why 60 minutes is an appropriate or necessary timeframe. To this date, the Commission has produced no research or evidence to justify this approach; a surprising state of affairs given how radically this obligation departs from existing policy norms. At the same time, a 'hard' 60 minute deadline strips the content moderation process of strategy and nuance, allowing for no distinction between the type of terrorist content, its likely reach, or the likelihood that it will incite terrorist offences. With no distinction there can be no prioritisation.

For context, the decision by the German government to mandate a takedown deadline of 24 hours for ‘obviously illegal’ hate speech in its 2017 ‘NetzDG’ law sparked considerable controversy on the basis of the risks outlined above. The Commission’s proposal brings a whole new level of risk. Ultimately, the 60-minute takedown deadline in the Terrorist Content regulation is likely to undermine the ability for new and smaller internet services to compete in the marketplace, and creates the enabling environment for interference with user rights. Worse, there is nothing to suggest that it will help reduce the terrorist threat or the problem of radicalisation in Europe.

From our perspective, the deadline should be replaced by a principle-based approach, which ensures the notice & action process is scaled according to different companies’ exposure to terrorist content and their resources. For that reason, we welcome amendments that have been suggested in some European Parliament committees that call for terrorist content to be removed ‘expeditiously’ or ‘without undue delay’ upon notification. This approach would ensure that online intermediaries make the removal of terrorist content from their services a key operational objective, but in a way which is reflective of their exposure, the technical architecture, their resources, and the risk such content is likely to pose.

As we’ve argued consistently, one of the EU Terrorist Content regulation’s biggest flaws is its lack of any proportionality criterion. Replacing the hard 60-minute takedown deadline with a principle-based approach would go a long way towards addressing that. While this won’t fix everything – there are still major concerns with regard to upload filtering, the unconstrained role of government agencies, and the definition of terrorist content – it would be an important step in the right direction.

The post One hour takedown deadlines: The wrong answer to Europe’s content regulation question appeared first on Open Policy & Advocacy.

Firefox UX: How to validate an idea when you're not working in a startup.

I had a brilliant idea! How do I get stakeholders to understand whether the market sees it in the same way?

People in startups try hard to avoid spending time and money building a product that doesn't achieve product/market fit, and so do tech companies. Resources are always limited. Making the right decision about where to put those resources is a serious matter in any organization, and sometimes it's even harder to make one than in a startup.

ChecknShare, an experimental product idea from Mozilla Taipei for improving Taiwanese seniors' online sharing experience, taught us a lot over several rounds of validation. In our retrospective meeting, we found the process could be polished to be more efficient when we validate our ideas and communicate with our stakeholders at the same time.

Here are 3 steps that I suggest for validating your idea:

Step 1: Define hypotheses with stakeholders

Having hypotheses in the planning stage is essential, but never forget to include stakeholders when making your beautiful list of hypotheses. Share your product ideas with stakeholders, and ask them if they have any questions. Take their questions into consideration to plan for a method which can cover them all.

Your stakeholders might be too busy to participate in the process of defining the hypotheses. That's understandable; you just need to be sure they all agree on the hypotheses before you start validating.

Step 2: Identify the purpose of validating your idea

Are you just trying to get some feedback for further iteration? Or do you need to show some results to your stakeholders in order to get some engagement/ resources from them? The purpose might influence how you select the validation methods.

There are two types of validation methods, qualitative and quantitative. Quantitative methods focus on finding “what the results look like”, while qualitative methods focus on “why/ how these results came about”. If you’re trying to get some insights for design iteration, knowing “why users have trouble falling in love with your idea” could be your first priority in the validation stage. Nevertheless, things might be different when you’re trying to get your stakeholders to agree.

From the path that ChecknShare has gone through, quantitative results were much more effective at influencing stakeholders, as concrete numbers were interpreted as a representation of the real-world situation. I'm not saying quantitative methods are "must-dos" during the validation stage, but be sure to select a method that speaks your stakeholders' language.

Step 3: Select validation methods that validate the hypotheses precisely

With the hypotheses that were acknowledged by your stakeholders and the purpose behind the validation, you can select methods wisely without wasting time on inconsequential work.

In the following, I’m going to introduce the 5 validation methods that we conducted for ChecknShare and the lessons we’ve learned from each of them. I hope these shared lessons can help you find your perfect one. Starting with the qualitative methods:

Qualitative Validation Methods

1. Participatory Workshop

The participatory workshop was an approach for us to validate the initial ideas generated from the design sprint. During the co-design process, we had 6 participants who matched with our target user criteria. We prioritized the scenario, got first-hand feedback for the ideas, and did quick iterations with our participants. (For more details on how we hosted the workshop, please look at the blog I wrote previously.)

Although hosting a workshop externally can be challenging due to logistics like recruiting relevant participants and finding a large enough space to accommodate people, we see the participatory workshop as a fast and effective approach for having early interactions with our target users.

2. Physical pitching survey

The pitching session in a local learning center

In order to see how our target market reacts to the idea in the early stage, we hosted a pitching session in a local learning center that offered free courses for seniors on how to use smartphones. During the pitching session, we handed out paper questionnaires to investigate their smartphone behaviors, their interest in the idea, and their willingness to participate in our future user testing.

It was our first time experimenting with a physical survey instead of sitting in the office and deploying surveys through virtual platforms. A physical survey isn't the best approach for getting a massive number of responses in a short time. However, we got a chance to talk to real people, see their reactions as we pitched the idea, recruit user testing participants, and pilot test a potential channel for our future go-to-market strategy.

Moreover, we invited our stakeholders to attend the pitching session. It provided a chance for them to be immersed in the environment and feel more empathy for our target users. That priceless experience made our later conversations with stakeholders more realistic when we were evaluating the risk and potential of a target audience the team wasn't very familiar with.

Our stakeholders chatting with seniors during the pitching session

3. User Testing

During user testing, we were focusing on the satisfaction level of the product features and the usability of the UI flow. For the usability testing, we provided several pairs of paper prototypes for A/B testing participants’ understanding of the copy and UI design, and an interactive prototype to see if they could accomplish the tasks we assigned. The feedback indicated the areas that needed to be tweaked in the following iteration.

A/B testing the product feature using paper prototypes

User testing can yield various results depending on how you design it. From our experience conducting a user test that combined concept testing and usability testing, we learned that the usability testing could be postponed to the production stage, since polishing the detailed design was premature before the production stage was officially kicked off by stakeholders.

Quantitative Validation Methods

When we realized that qualitative results didn't speak our stakeholders' language, we went back over our stakeholders' questions holistically and applied quantitative methods to answer them. Here are the 2 methods we applied:

4. Online Survey

To understand the potential market size and the product value proposition, which our stakeholders considered of great importance, we designed an online survey that investigated current sharing behavior and feature preferences across different age groups. It helped us see whether there were other user segments similar to seniors, and how the features should be prioritized.

<figcaption>The pie chart and bar chart reveal the portion of our target users.</figcaption>
<figcaption>The EDM we sent out for spreading the online survey</figcaption>

The challenge of conducting an online survey is finding an efficient deployment channel with little bias. Since the age range of our target respondents was quite wide (ages 21 to 65, across 9 segments), the online survey became time-consuming and took longer than we expected. To get at least 50 responses from each age bracket, we delivered survey invitations through Mozilla Taiwan’s social account, sent out an EDM in collaboration with our media partner, and also bought responses from SurveyMonkey.

When we reviewed the full survey results with our stakeholders, we had a constructive discussion and made progress on defining our target audience and the value proposition based on solid numbers. An online survey can be an easier approach if its scope covers a narrower age range. To make these constructive discussions happen earlier, we’d suggest running a quick survey as soon as the product concept is settled.

5. Landing Page Test

We couldn’t use a survey alone to investigate participants’ willingness to download the app, since it’s very hard to avoid leading questions. The team therefore decided to run a landing page test to see how the real market reacted to the product concept. We designed a landing page that contained a key message, an introduction to the top 3 features, several CTA buttons for email signup, and a hidden email-collection section that only appeared when a participant clicked a CTA button. We intentionally kept the page structure similar to a common landing page. (Not sure what a landing page test is? Scott McLeod published a thorough landing page test guide that might be very helpful.) Along with the landing page, we ran an ad banner consistent with the landing page design.
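As a minimal sketch of that reveal behavior (the element IDs and class names here are made up purely for illustration, not the markup we actually shipped):

// Minimal sketch: keep the email-collection section hidden until a visitor
// clicks one of the CTA buttons. IDs and class names are illustrative only.
document.querySelectorAll('.cta-button').forEach(button => {
  button.addEventListener('click', () => {
    const signup = document.getElementById('email-signup');
    signup.hidden = false;
    signup.scrollIntoView({ behavior: 'smooth' });
  });
});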

We ran our ad on the Google Display Network for 5 days and got 10x more visitors than the previous online survey had responses, the largest number of participants of all the validations we conducted. The CTR and conversion rate were quite persuasive, so ChecknShare finally got support from our stakeholders and the team was able to start thinking about the details of design implementation.

Landing page tests are uncommon in Taiwan’s software industry, let alone for testing product concepts aimed at seniors. We weren’t confident we would get reliable results at the beginning, but the test ended up reaching more seniors than anything else in our long validation journey. Here are some suggestions for running a landing page test:

  • Set success criteria with stakeholders before running the test.
    Finding a reasonable benchmark target is essential. There’s no such thing as an absolute number for setting a KPI, because it varies by region, acquisition channel, and product category.
  • Make sure your copy delivers the key product values in a 5–10 second read.
    The copy on both the ad and the landing page should be simple, clear, and emotionally compelling. Simply pilot testing the copy with fresh eyes can be very insightful for copy iterations.
  • Reduce any factors that might influence the reading experience.
    Don’t let the website design ruin your test results. Remember to check the accessibility of your site (especially text size and contrast ratio). Pairing comprehensible illustrations, UI screens, or even a short animation of the UI flow with your copy can make it much easier to understand.

The endless quantitative-qualitative dilemma

“What if I don’t have sufficient time to do both qualitative and quantitative testing?” you might ask.

We believe that having both qualitative and quantitative results is important; each supports the other. If you don’t have time to do both, take a step back, talk with your stakeholders, and think about the most important criteria that have to be true for the product to succeed.

There’s no perfect method to validate all types of hypotheses precisely. Keep asking yourself why you need to do this validation, and be creative.

References:
1. 8 tips for hosting your first participatory workshop — Tina Hsieh
2. How to setup a landing page for testing a business or product idea. — Scott McLeod
3. How to Test and Validate Startup Ideas — Mitch Robinson


How to validate an idea when you’re not working in a startup. was originally published in Firefox User Experience on Medium, where people are continuing the conversation by highlighting and responding to this story.

Hacks.Mozilla.OrgReal virtuality: connecting real things to virtual reality using web technologies

This is the story of our lucky encounter at FOSDEM, the largest free and open source software event in Europe. We are two developers, focused on different domains, who saw an opportunity to continue our technical conversation by building a proof of concept. Fabien Benetou is a developer focused on virtual reality and augmented reality. Philippe Coval works on the Internet of Things. Creating a prototype gave the two of us a way to explore some ideas we’d shared at the conference.

WebXR meets the Web of Things

Today we’ll report on the proof-of-concept we built in half a day, after our lucky meeting-of-minds at FOSDEM. Our prototype applies 3D visualisation to power an IoT interface. It demonstrates how open, accessible web technologies make it possible to combine software from different domains to create engaging new interactive experiences.

Our proof of concept, illustrated in the video below, shows how a sensor connected to the Internet brings data from the real world to the virtual world. The light sensor reads colors from cardboard cards and changes the color of entities in virtual reality.

The second demo shows how actions in the virtual world can affect the real world. In this next video, we turn on LEDs with colors that match their virtual reality counterparts.

We’ll show you how to do a similar experiment yourself:

  • Build a demo that goes from IoT to WoT, showing the value of connecting things to the web.
  • Connect your first thing and bring it online.
  • Make a connection between the Web of Things and WebXR. Once your thing is connected, you’ll be able to display it, and interact with it in VR and AR.

Here’s a bit more context: Fabien Benetou organized the JavaScript devroom track at FOSDEM and presented High end augmented reality using JavaScript. Philippe Coval from the Samsung OpenSource group joined his colleague Ziran Sun to present Bring JavaScript to the Internet of Things on the same track.

Philippe demonstrated a remote “SmartHome in a Box”, using a live webcam stream. It was a demo he’d shared the day before in the Mozilla devroom, in a joint presentation with Mozilla Tech Speaker Dipesh Monga.

The demo showed interactions of different kinds of sensors, including a remote sensor from the OpenSenseMap project, a community website that lets contributors upload real-time sensor data.

The Followup to FOSDEM

In Rennes, a city in Brittany, in the northwest of France, the Ambassad’Air project is doing community air-quality tracking using luftdaten software on super cheap microcontrollers. Fabien had already made plans to visit Rennes the following week (to breathe fresh air and enjoy local baked delicacies like the delightful kouign amann).

So we decided to meet again in Rennes, and involve the local community. We proposed a public workshop bridging “Web of Things” and “XR” using FLOSS. Big thanks to Gulliver, the local GNU/Linux Group, who offered to host our last minute hacking session. Thanks also to the participants in Rennes for their curiosity and their valuable input.

In the sections ahead we offer an overview of the different concepts that came together in our project.

From IoT to the Web of Things

The idea of the Internet of Things existed before it got its name. Some fundamental IoT concepts have a lot in common with the way the web works today. As the name suggests, the web of things offers an efficient way to connect any physical object to the world wide web.

Let’s start with a light bulb 💡. Usually, we use a physical switch to turn the bulb on or off. Now imagine if your light bulb 💡 could have its own web page.

If your light bulb or any smart device is web friendly, it would be reachable by a URL like https://mylamp.example.local. The light bulb vendor could implement a web server in the device, and a welcome page for the user. The manufacturer could provide another endpoint for a machine-readable status that would indicate “ON” or “OFF”. Even better, that endpoint could be read using an HTTP GET query or set using an HTTP POST operation with ON or OFF.

All this is simply an API to manage a boolean, making it possible to use the mobile browser as a remote control for the light bulb.
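As a rough sketch of that boolean API (the endpoint path and JSON shape are assumptions for illustration, reusing the example hostname above, not any particular vendor’s interface):

// Hypothetical on/off endpoint for a web-friendly light bulb.
const bulb = 'https://mylamp.example.local/properties/on';

// Read the current state with an HTTP GET.
fetch(bulb, { headers: { Accept: 'application/json' } })
  .then(res => res.json())
  .then(state => console.log('Bulb is', state.on ? 'ON' : 'OFF'));

// Turn the bulb on with an HTTP POST carrying the new value.
fetch(bulb, {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ on: true })
}).then(res => console.log('Switched:', res.ok));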

Although this model works, it’s not the best way to go. A standardized API should respect REST principles and use common semantics to describe Things (the W3C Thing Description, or TD). The W3C is pushing for standardization — a smooth, interoperable web language that can be implemented by any project, such as Mozilla’s Project Things.

Newcomers can start with a virtual adapter and play with simulated things. These things appear on the dashboard but do not exist in reality. Actuators or sensors can be implemented using web thing libraries for any language. Useful hint: it’s much simpler to practice on a simulator before working with real hardware and digging into hardware datasheets.

For curious readers, check out the IoT.js code in Philippe’s webthings-iotjs guide on GitHub, and explore color sensor code that’s been published to NPM as color-sensor-js.

Connect your first thing

How do you make a web-friendly smart home? You can start by setting up a basic local IoT network. Here’s how:

  1. You’ll need a computer with a network interface to use as a gateway.
  2. Add devices to your network and define a protocol to connect them to the central gateway.
  3. Build a user interface to let the user control all connected devices from the gateway.
  4. Later you can develop custom web apps that can also connect to the gateway.

To avoid reinventing the wheel, look at existing free software. That’s where Mozilla’s Things Gateway comes in. You won’t need network engineering or electronics expertise to get started.

You can rely on a low-cost and low-power consumption single board computer, for instance the Raspberry Pi, to install the operating system image provided by Mozilla. Then you can create virtual things like light bulbs, or connect real hardware like sensors onto the gateway itself. You’ll be able to control your device(s) from the web through the tunneling service provided by the “things cloud”. Your data is reachable at a custom domain, stays on your local network, and is never sent to a 3rd party in the cloud.

In order to make the process efficient and also safe, the gateway takes care of authentication by generating a token. The gateway can also generate direct code snippets in several languages (including JavaScript) that can be used for other applications:

You can build on top of existing code that should just work when you copy/paste it into your application. Developers can focus on exploring novel applications and use cases for the technology.

For your next step, we recommend testing the simplest example: list all the things connected to your gateway. In our example, we use a light bulb 💡, a thing composed of several properties. Make sure that the thing displayed on the gateway web interface matches the real world thing. Use the browser’s console with the provided code snippets to check that the behavior matches the device.
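For instance, a console-sized sketch of that first check might look like this (the gateway URL and token are the same placeholders used in the curl example below):

// List every thing registered on the gateway, with its property names.
const gateway = 'https://sosg.mozilla-iot.org';
const token = 'B4DC0DE...'; // issued by the gateway for your application

fetch(gateway + '/things', {
  headers: {
    Accept: 'application/json',
    Authorization: 'Bearer ' + token
  }
})
  .then(res => res.json())
  .then(things => {
    for (const thing of things) {
      console.log(thing.name, Object.keys(thing.properties));
    }
  });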

Get to know your Things Gateway

Once this is running, the fun begins. Since you can access the gateway with code, you can:

  • List all things, including the schema, to understand their capabilities (properties, values, available actions).
  • Read a property value (e.g. the current temperature of a sensor).
  • Change a property (e.g. control the actuator or set the light bulb color).
  • Get the coordinates of a thing on a 2D floor plan.
  • And much more!

Using a curl command, you can query the whole tree to identify all things registered by the gateway:


gateway="https://sosg.mozilla-iot.org"
token="B4DC0DE..."

curl \
  -H "Authorization: Bearer $token" \
  -H 'Accept: application/json' \
  "$gateway/things" \
  | jq -M .

The result is a JSON structure of all the things. Each thing has a different endpoint like:

{
  "name": "ColorSensor",
  ...
  "properties": {
    "color": {
      "type": "string",
      "@type": "ColorProperty",
      "readOnly": true,
      "links": [ {
        "href": "/things/http---localhost-58888-/properties/color"
  ...

User devices are private and not exposed to the world wide web, so no one else can access or control your light bulb. Here's a quick look at the REST architecture that makes this possible:

Static slide with code describing the RESTful architecture of the gateway

From WoT to WebXR

Introducing A-Frame for WebVR

Once we were able to programmatically get property values using a single HTTP GET request, we could use those values to update the visual scene, e.g. changing the geometry or color of a cube. This is made easier with a framework like A-Frame, which lets you describe simple 3D scenes using HTML.

For example, to define that cube in A-Frame, we use the <a-box></a-box> tag. Then we change its color by adding the color attribute.

<a-box color="#00ff00"></a-box>

a screenshot from A-Frame showing pink parallelogram, green cube, and yellow cylinder

The beauty behind the declarative code is that these 3D objects, or entities, are described clearly, yet their shape and behavior can be extended easily with components. A-Frame has an active community of contributors. The libraries are open source, and built on top of three.js, one of the most popular 3D frameworks on the web. Consequently, scenes that begin with simple shapes can develop into beautiful, complex scenes.

This flexibility allows developers to work at the level of the stack where they feel comfortable, from HTML to writing components in JavaScript, to writing complex 3D shaders. By staying within the boundaries of the core of A-Frame you might never even have to write JavaScript. If you want to write JavaScript, documentation is available to do things like manipulating the underlying three.js object.

A-Frame itself is framework agnostic. If you are a React developer, you can rely on React. Prefer Vue.js? Not a problem. Vanilla HTML & JS is your thing? These all work. Want to use VR in data visualisation? You can let D3 handle the data bindings.

Using a framework like A-Frame which targets WebXR means that your <a-box> will work on all VR and AR devices which have access to a browser that supports WebXR, from the smartphone in your pocket to high-end VR and professional AR headsets.

Connecting the Web of Things to Virtual Reality

In our next step we change the color value on the 3D object to the thing’s actual value, derived from its physical color sensor. Voila! This connects the real world to the virtual. Here’s the A-Frame component we wrote that can be applied to any A-Frame entity.

var token = 'Bearer SOME_CODE_FOR_AUTH'
// The token is used to manage access, granted only to selected users

var baseURL = 'https://sosg.mozilla-iot.org/'
var debug = false // used to display content in the console

// Registering an A-Frame component later used in VR/AR entities
AFRAME.registerComponent('iot-periodic-read-values', {
  init: function () {
    // check for new value every 500ms
    this.tick = AFRAME.utils.throttleTick(this.tick, 500, this);
  },
  tick: function (t, dt) {
    fetch(baseURL + 'things/http---localhost-58888-/properties/color', {
      headers: {
        Accept: 'application/json',
        Authorization: token
      }
    }).then(res => {
      return res.json();
    }).then(property => {
      // the request went through:
      // update the color of the VR/AR entity
      this.el.setAttribute("color", property.color);
    });
  }
})

The short video above shows real world color cards causing colors to change in the virtual display. Here’s a brief description of what we’re doing in the code.

  1. We generate a security token (JWT) to gain access to our Things Gateway.
  2. Next we register a component that can be used in A-Frame in VR or AR to change the display of a 3D entity.
  3. Then we fetch the property value of a Thing and display it on the current entity.

In the same way we can get information with an HTTP GET request, we can send a command with an HTTP PUT request. We use A-Frame’s <a-cursor> to allow for interaction in VR. Once we look at an entity, such as another cube, the cursor can then send an event. When that event is captured, a command is issued to the Things Gateway. In our example, when we aim at a green sphere (or “look” with our eyes through the VR headset), we toggle the green LED, red sphere (red LED) and blue sphere (blue LED).
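Here is a sketch of that write path, reusing the baseURL and token variables from the component above. The LED thing URL and property name are assumptions modeled on the color-sensor endpoint, so adjust them to whatever your gateway reports:

// Sketch: when the A-Frame cursor "clicks" the entity this component is
// attached to, toggle an LED property on the gateway with an HTTP PUT.
AFRAME.registerComponent('iot-toggle-led', {
  schema: { property: { default: 'green' } },
  init: function () {
    let on = false;
    this.el.addEventListener('click', () => {
      on = !on;
      const body = {};
      body[this.data.property] = on;
      fetch(baseURL + 'things/http---localhost-58888-/properties/' + this.data.property, {
        method: 'PUT',
        headers: {
          Accept: 'application/json',
          'Content-Type': 'application/json',
          Authorization: token
        },
        body: JSON.stringify(body)
      });
    });
  }
});

Attached as <a-sphere color="green" iot-toggle-led="property: green"></a-sphere>, gazing at the sphere and clicking would flip the corresponding LED.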

Going from Virtual Reality to Augmented Reality

The objective of our demo was two-fold: to bring real world data into a virtual world, and to act on the real world from the virtual world. We were able to display live sensor data such as temperature and light intensity in VR. In addition, we were able to turn LEDs on and off from the VR environment. This validated our proof of concept.

Sadly, the day came to an end, and we ran out of time to try our proof of concept in augmented reality (AR) with a Magic Leap device. Fortunately, the end of the day didn’t end our project. Fabien was able to tunnel to Philippe’s demo gateway, registered under the mozilla-iot.org subdomain and access it as if it were on a local network, using Mozilla’s remote access feature.

The project was a success! We connected the real world to AR as well as to VR.

The augmented reality implementation proved easy. Aside from removing <a-sky> so it wouldn’t cover our field of view, we didn’t have to change our code. We opened our existing web page on the Magic Leap ML1 thanks to exokit, a new open-source browser specifically targeting spatial devices (as presented during Fabien’s FOSDEM talk). It just worked!

As you can see in the video, we briefly reproduced the gateway’s web interface. We have a few ideas for next steps.  By making those spheres interactive we could activate each thing or get more information about them. Imagine using the gateway floorplan to match the spatial information of a thing to the physical layout of a flat. There are A-Frame components that make it straightforward to generate simplified building parts like walls and doors.

You don’t need a Magic Leap device to explore AR with the Web of Things. A smartphone will do: Mozilla’s XR Viewer on an iPhone, or an experimental build of Chromium on Android, works with a traditional RGB camera.

From the Virtual to the Immersive Web

The transition from VR/AR to XR takes two steps. The first step is the technical aspect, which is where relying on A-Frame comes in. Although the specifications for VR and AR on the web are still works in progress by the W3C’s "Immersive Web" standardization process, we can target XR devices today.

By using a high-level framework, we can begin development even though the spec is still in progress, because the spec includes a polyfill maintained by browser vendors and the community at large. The promise of having one code base for all VR and AR headsets is one of the most exciting aspects of WebXR. Using A-Frame, we are able to start today and be ready for tomorrow.

The second step involves you, as reader and user. What would you like to see? Do you have ideas of use cases that create interactive spatial content for VR and AR?

Conclusion

The hack session in Rennes was fascinating. We were able to get live data from the real world and interact with it easily in the virtual world. This opens the door to many possibilities: from our simplistic prototype to artistic projects that challenge our perception of reality. We also foresee pragmatic use cases, for instance in hospitals and laboratories filled with sensors and modern instrumentation (IIoT or Industrial IoT).

This workshop and the resulting videos and code are simple starting points. If you start work on a similar project, please do get in touch (@utopiah and @rzr@social.samsunginter.net/@RzrFreeFr). We'll help however we can!

There's also work in progress on a webapp to A-Frame. Want to get involved in testing or reviewing code? You're invited to help with the design or suggest some ideas of your own.

What Things will YOU bring to the virtual world? We can't wait to hear from you.

Resources

Pete MooreWeekly review 2014-07-02

Mark SurmanMozilla, AI and internet health:an update

Last year the Mozilla team asked itself: what concrete improvements to the health of the internet do we want to tackle over the next 3–5 years?

We looked at a number of different areas we could focus on. Making the ad economy more ethical. Combating online harassment. Countering the rush to biometric everything. All worthy topics.

As my colleague Ashley noted in her November blog post, we settled in the end on the topic of ‘better machine decision making’. This means we will focus a big part of our internet health movement building work on pushing the world of AI to be more human — and more humane.

Earlier this year, we looked in earnest at how to get started. We have now mapped out a list of first steps we will take across our main program areas — and we’re digging in. Here are some of the highlights of the tasks we’ve set for ourselves this year:

Shape the agenda

  • Bring the ‘better machine decision making’ concept to life by leaning into a focus on AI in the Internet Health Report, MozFest and press pitches about our fellows.
  • Shake up the public narrative about AI by promoting — and funding — artists working on topics like automated censorship, behavioural manipulation and discriminatory hiring.
  • Define a specific (policy) agenda by bringing in senior fellows to ask questions like: ‘how do we use GDPR to push on AI issues?’; or ‘could we turn platforms into info fiduciaries?’

Connect Leaders

  • Highlight the role of AI in areas like privacy and discrimination by widely promoting the work of fellowship, host orgs and MozFest alumni working on these issues.
  • Promote ethics in computer science education through a $3.5M award fund for professors, knowing we need to get engineers thinking about ethics issues to create better AI.
  • Find allies working on AI + consumer tech issues by heavily focusing our ‘hosted fellowships’ in this area — and then building a loose coalition amongst host orgs.

Rally citizens

  • Show consumers how pervasive machine decision making is by growing the number of products that include AI covered in the Privacy Not Included buyers guide.
  • Shine a light on AI, misinformation and tech platforms through a high profile EU election campaign, starting with a public letter to Facebook and political ad transparency.
  • Lend a hand to developers who care about ethics and AI by exploring ideas like the Union of Concerned Technologists and an ‘ethics Q+A’ campaign at campus recruiting fairs.

We’re also actively refining our definition of ‘better machine decision making’ — and developing a more detailed theory of how we make it happen. A first step in this process was to update the better machine decision making issue brief that we first developed back in November. This process has proven helpful and gives us something crisper to work from. However, we still have a ways to go in setting out a clear impact goal for this work.

As a next step, I’m going to post a series of reflections that came to me in writing this document. I’m going to invite other people to do the same. I’m also going to work with my colleague Sam to look closely at Mozilla’s internet health theory of change through an AI lens — poking at the question of how we might change industry norms, government policy and consumer demand to drive better machine decision making.

The approach we are taking is: 1. dive in and take action; 2. reflect and refine our thinking as we go; and 3. engage our community and allies as we do these things; 4. rinse and repeat. Figuring out where we go — and where we can make concrete change on how AI gets made and used — has to be an iterative process. That’s why we’ll keep cycling through these steps as we go.

With that in mind, myself and others from the Mozilla team will start providing updates and reflections on our blogs. We’ll also be posting invitations to get involved as we go. And, we will track it all on the nascent Mozilla AI wiki, which you can use to follow along — and get involved.

The post Mozilla, AI and internet health: an update appeared first on Mark Surman.

Mozilla Reps CommunityRep of the Month – November 2018

Please join us in congratulating Viswaprasath KS, our Rep of the Month for November 2018!

Viswaprasath KS, also known as iamvp7, is a long-time Mozillian from India who joined the Mozilla Reps program in June 2013. By profession he works as a software developer. He initially started contributing with designs and SUMO (Army of Awesome). He was also part of the Firefox Student Ambassador E-Board and helped students build exciting Firefox OS apps. In May 2014 he became one of the Firefox OS app reviewers.

Currently he is an active Mozilla TechSpeaker and loves to evangelise about WebExtensions and Progressive Web Apps. He has been an inspiration to many and loves to keep working towards a better web. He has worked extensively on Rust and WebExtensions, conducting many informative sessions on these topics recently. Together with other Mozillians he also wrote “Building Browser Extension”.

Thanks Viswaprasath, keep rocking the Open Web! :tada: :tada:

To congratulate him, please head over to Discourse!

Firefox NightlyThese Weeks in Firefox: Issue 54

Highlights

  • Firefox Account is experimenting with putting an avatar next to the hamburger menu. It will give users visibility into their account and sync status, as well as links to manage the account. Targeting landing soon!
    • A default toolbar button in the Firefox UI displays a panel to turn on syncing.

      Take Firefox with you!

    • A default toolbar button in the Firefox UI offers some Firefox Account options now that the user is logged in.
  • We have added support for blocking known fingerprinters and cryptominers with content blocking!
    • This is currently enabled in Nightly, and is still experimental. It might break some sites.

  • Lots of DevTools goodies this week!
    • In the DevTools Debugger, the XHR breakpoint type (ANY, GET, PUT, POST, etc.) can now be specified through new UI. This was done by a volunteer contributor, Jaril!
      • A dropdown in the DevTools debugger allows developers to break on different types of XHR requests, like GET, POST, PUT, HEAD, and DELETE.
  • Log points UX has been improved (including syntax highlighting, context menu and markers), thanks to contributors Bomsy and Florens
    • Log points are different from breakpoints – they don’t break JS execution, they just create a log when hit.
    • The DevTools debugger now allows developers to set "log points" that log some value when reaching a section of code.
  • It is now possible to copy all collected CSS changes done through DevTools UI. Thanks to Razvan Caliman!
    • A "Copy all changes" button has been added to the "Changes" panel in the DevTools Inspector.
  • Auto discovery of layout CSS properties (done by Micah Tigley). Hold shift and mouse over any defined property in the box-model widget (in the Layout sidebar). This will highlight the corresponding CSS property in the rule-view.
    • The DevTools Inspector now sends you to the rules that defined the various dimensions of the box when clicking on those dimensions in the Layout panel.
  • The Password Manager team has added a “View Saved Logins” footer to the password manager autocomplete popup  (disabled until the follow-up is resolved)
    • A "View Saved Logins" option now appears at the bottom of the login autocomplete list.
  • Tim Huang and Tom Ritter added letterboxing (an anti-fingerprinting technique) to Firefox
    • An anti-fingerprinting technique now adds additional padding around the content area.

      Note the gray margin in the content area.

Friends of the Firefox team

Resolved bugs (excluding employees)

Fixed more than one bug

  • Hemakshi Sachdev [:hemakshis]
  • Manish [:manishkk]
  • Oriol Brufau [:Oriol]
  • Rainier Go [:rfgo]
  • Tim Nguyen :ntim
  • Trishul

New contributors (🌟 = first patch)

Project Updates

Activity Stream

  • Landed and uplifted MVP for experiments, Beta smoke test started Monday.
  • Preparing to run 16 layout experiments in Release 66 cycle for better engagement, e.g., large Hero articles vs List of articles
  • The team is helping Pocket engineers transition to increase ownership of new tab
  • CFR for Pinned Tabs will be our next recommendation experiment!
    • First experiment recommends add-ons, e.g., Facebook Container, Google Translate
    • Current experiment will suggest pinning tabs, e.g., Gmail, productivity / messaging sites

Add-ons / Web Extensions

Applications

Screenshots
  • Latest server release is on stage environment for testing prior to release (Changelog)
    • We have now exposed server shutdown strings to web localizers. In case anyone asks, Screenshots is not being removed from Firefox, just the ability to upload shots.
    • This upcoming server release will include tools to help users download their saved shots
Lockbox
  • This past sprint continued the focus on foundational work:
    • [lorchard] “Reveal Password” functionality (#84)
    • [loines] Define & document telemetry metrics (#82)
    • [lorchard] Expect a complete Login when updating in addon Logins API (#80)
    • [6a68] Re-style the list view on the management page (#76)
  • Our work is tracked as the ‘desktop’ repository within the Lockbox waffle board
  • We don’t yet have any good-first-bugs filed, but swing by #lockbox if you want to contribute ^_^
Services (Firefox Accounts / Sync / Push)
  • New FxA device pairing flow landed in Nightly, but pref’d off for now. You’ll soon be able to sign in to FxA on Android and iOS by scanning a QR code, instead of typing your password!
    • Check out this Lucidchart sketch to see the flow if you’re curious to learn more!

Developer Tools

Network
  • Resizeable Columns – Our Outreachy #17 intern Lenka Pelechova is finishing support for resizeable Columns in the Network panel. Currently focusing on Performance (bug)
Layout Tools
  • Our UX Designer Victoria Wang published survey for CSS Layout Debugging. You can help us build better CSS debugging tools (quick single-page survey)
Technical debt
  • Firefox 67 will soon display a removal notice (in the Options panel) about the Shader Editor, Canvas and Web Audio panels, which are going to be removed in 68. Work done by Yulia Startsev. Until the MDN page is up, you can look at the intent to unship post in the mailing list.
    • Warnings being displayed next to various panel toggles in the DevTools Settings panel, letting users know that these panels are deprecated.
Remote Debugging
  • Showing backward compatibility warnings in about:debugging (bug)
    • A warning is now displayed when DevTools is remotely connected to a Firefox that has a different version number.
  • Added a checkbox to enable local addon debugging (bug)
    • A checkbox has been added to about:debugging to enable extension debugging.
  • Open the Profiler for remote runtimes in about:debugging (bug)
    • A dialog from about:debugging now allows you to gather a performance profile from the connected instance of Firefox.

Fission

Performance

Performance tools

  • Perf-html.io moved to profiler.firefox.com and perf.html is now called “Firefox Profiler”.
  • I/O markers are now visible in the timeline. I/O marker stacks are visible when hovering them, and in lots of cases the path of the file that was touched is shown.
    • When capturing a profile, to have I/O markers, you need to check the “Main Thread IO” checkbox in the Gecko profiler add-on, or enable the “mainthreadio” feature using the MOZ_PROFILER_STARTUP_FEATURES environment variable when profiling startup.
    • We are investigating optionally collecting markers for off-main thread I/O, and enabling main thread I/O markers by default.
      • A FileIO marker with operation, source, filename and stack information

  • We improved shutdown profiling: it’s now compatible with mainthreadio markers, and shows content process shutdowns. Here’s a profile with startup + shutdown, on a fresh profile, with I/O markers.
  • We have markers for <script>s, privileged js files, and nsObserverService::NotifyObservers now in the Firefox Profiler.
      • A Script marker with name information
      • A script marker and other SubScript markers that it triggers
      • A “NotifyObservers” marker with “profile-before-change” name

Policy Engine

Privacy/Security

Search and Navigation

Search
Quantum Bar

Mozilla Open Policy & Advocacy BlogIndian government allows expanded private sector use of Aadhaar through ordinance (but still no movement on data protection law)

On Thursday, the Indian government approved an ordinance — an extraordinary procedure allowing the government to enact legislation without Parliamentary approval — that threatens to dilute the impact of the Supreme Court’s decision last September.

The Court had placed fundamental limits to the otherwise ubiquitous use of Aadhaar, India’s biometric ID system, including the requirement of an authorizing law for any private sector use. While the ordinance purports to provide this legal backing, its broad scope could dilute both the letter and intent of the judgment. As per the ordinance, companies will now be able to authenticate using Aadhaar as long as the Unique Identification Authority of India (UIDAI) is satisfied that “certain standards of privacy and security” are met. These standards remain undefined, and especially in the absence of a data protection law, this raises serious concerns.

The swift movement to foster expanded use of Aadhaar is in stark contrast to the lack of progress on advancing a data protection bill that would safeguard the rights of Indians whose data is implicated in this system. Aadhaar continues to be effectively mandatory for a vast majority of Indian residents, given its requirement for the payment of income tax and various government welfare schemes. Mozilla has repeatedly warned of the dangers of a centralized database of biometric information and authentication logs.

The implementation of these changes with no public consultation only exacerbates the lack of public accountability that has plagued the project. We urge the Indian government to consider the serious privacy and security risks from expanded private sector use of Aadhaar. The ordinance will need to gain Parliamentary approval in the upcoming session (and within six months) or else it will lapse. We urge the Parliament not to push through this law which clearly dilutes the Supreme Court’s diktat, and any subsequent proposals must be preceded by wide public consultation and debate.

 

The post Indian government allows expanded private sector use of Aadhaar through ordinance (but still no movement on data protection law) appeared first on Open Policy & Advocacy.

QMODevEdition 66 Beta 14 Friday, March 8th

Hello Mozillians,

We are happy to let you know that Friday, March 8th, we are organizing DevEdition 66 Beta 14 Testday. We’ll be focusing our testing on: Firefox Screenshots, Search, Build installation & uninstallation.

Check out the detailed instructions via this etherpad.

No previous testing experience is required, so feel free to join us on #qa IRC channel where our moderators will offer you guidance and answer your questions.

Join us and help us make Firefox better!

See you on Friday!

Daniel StenbergJulia’s cheat sheet for curl

Julia Evans makes these lovely comic style cheat sheets for various linux/unix networking tools and a while ago she made one for curl. I figured I’d show it here if you happened to miss her awesome work.

And yes, people have already pointed out to her that

This Week In RustThis Week in Rust 276

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's crate is multi_try, a crate to simplify working with multiple results. Thanks to Azriel Hoh for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

195 pull requests were merged in the last week

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs

No RFCs are currently in final comment period.

Tracking Issues & PRs

New RFCs

Upcoming Events

Online
Asia Pacific
Europe
North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

And again, we have two quotes for the week:

Can you eli5 why TryFrom and TryInto matters, and why it’s been stuck for so long ? (the RFC seems to be 3 years old)

If you stabilise Try{From,Into}, you also want implementations of the types in std. So you want things like impl TryFrom for u16. But that requires an error type, and that was (I believe) the problem.

u8 to u16 cannot fail, so you want the error type to be !. Except using ! as a type isn’t stable yet. So use a placeholder enum! But that means that once ! is stabilised, we’ve got this Infallible type kicking around that is redundant. So change it? But that would be breaking. So make the two isomorphic? Woah, woah, hold on there, this is starting to get crazy…

new person bursts into the room “Hey, should ! automatically implement all traits, or not?”

“Yes!” “No!” “Yes, and so should all variant-less enums!”

Everyone in the room is shouting, and the curtains spontaneously catching fire. In the corner, the person who proposed Try{From,Into} sits, sobbing. It was supposed to all be so simple… but this damn ! thing is just ruining everything.

… That’s not what happened, but it’s more entertaining than just saying “many people were unsure exactly what to do about the ! situation, which turned out to be more complicated than expected”.

– /u/Quxxy on reddit

What is the ! type?

The never type for computations that don’t resolve to a value. It’s named after its stabilization date.

– /u/LousyBeggar on reddit

Thanks to runiq and StyMaar for the suggestions!

Please submit your quotes for next week!

This Week in Rust is edited by: nasa42, llogiq, and Flavsditz.

Discuss on r/rust.

Ian BickingThe Firefox Experiments I Would Have Liked To Try

I have been part of the Firefox Test Pilot team for several years. I had a long list of things I wanted to build. Some I didn’t personally want to build, but I thought they were interesting ideas. I didn’t get very far through this list at all, and now that Test Pilot is being retired I am unlikely to get to them in the future.

Given this I feel I have to move this work out of my head, and publishing a list of ideas seems like an okay way to do that. Many of these ideas were inspired by something I saw in the wild, sometimes a complete product (envy on my part!), or the seed of an idea embedded in some other product.

The experiments are a spread: some are little features that seem potentially useful. Others are features seen elsewhere that show promise from user research, but we could only ship them with confidence if we did our own analysis. Some of these are just ideas for how to explore an area more deeply, without a clear product in mind.

Test Pilot’s purpose was to find things worth shipping in the browser, which means some of these experiments aren’t novel, but there is an underlying question: would people actually use it? We can look at competitors to get ideas, but we have to ship something ourselves if we want to analyze the benefit.

Table of contents:


Sticky Reader Mode

mockup of Sticky Reader Mode

Give Reader Mode in Firefox a preference to make it per-domain sticky. E.g. if I use Reader Mode on nytimes.com and then if I visit an article on nytimes.com in the future it’ll automatically convert to reader mode. (The nytimes.com homepage would not be a candidate for that mode.)

I made an experiment in sticky-reader-mode, and I think it works really nicely. It changes the browsing experience significantly, and most importantly it doesn’t require frequent proactive engagement to change behavior. Lots of these proposed ideas are tools that require high engagement by the user, and if you don’t invoke the tool then they do nothing. In practice no one (myself included) remembers to invoke these tools. Once you click the preference on a site Sticky Reader Mode then you are opted in to this new experience with no further action required.

There are a bunch of similar add-ons. Sticky Reader Mode works a bit better than most because of its interface, and it will push you directly into Reader Mode without rendering the normal page. But it does this by using APIs that are not public to normal WebExtensions. As a result it can’t be shipped outside Test Pilot, and can’t go in addons.mozilla.org. So… just trust me, it’s great.
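For readers who want the flavor of it anyway, here is a rough approximation that sticks to public WebExtension APIs (so unlike the prototype it toggles Reader Mode only after the normal page has started loading; the seed domain list is just an example):

// Rough sketch with public APIs only; requires the "tabs" permission.
// Remember opted-in domains and flip matching article pages into Reader Mode.
const stickyDomains = new Set(['www.nytimes.com']); // would live in storage

browser.tabs.onUpdated.addListener((tabId, changeInfo, tab) => {
  if (changeInfo.status !== 'complete' || !tab.url || !tab.isArticle) {
    return;
  }
  const host = new URL(tab.url).hostname;
  if (stickyDomains.has(host) && !tab.isInReaderMode) {
    browser.tabs.toggleReaderMode(tabId);
  }
});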

Recently I’ve come upon Brave Speed Reader which is similar, but without per-site opt-in, and using machine learning to identify articles.

Cloud Browser

mockup of a Cloud Browser

Run a browser/user-agent in the cloud and use a mobile view as a kind of semantic or parsed view on that user agent (the phone would just control the browser that is hosted on the cloud). At its simplest we just take the page, simplify it in a few ways, and send it on - similar to what Opera Mini does. The approach lends itself to a variety of task-oriented representations of remote content.

When I first wrote this down I had just stared at my phone while it took 30 seconds to show me a 404 page. The browser probably knew after a couple seconds that it was a 404 but it acted as a rendering engine and not a user agent, so the browser insisted on faithfully rendering the useless not found page.

Obviously running a full browser instance in the cloud is resource hungry and finicky but I think we could ignore those issues while testing. Those are hard but solved operational issues.

Prior art: Opera Mini does some of this. Puffin is specifically cloud rendering for mobile. Light Point does the same for security reasons.

I later encountered brow.sh which is another interesting take on this (specifically with html.brow.sh).

This is a very big task, but I still believe there’s tremendous potential in it. Most of my concepts are not mobile-based, in part because I don’t like mobile, I don’t like myself when using a mobile device, and it’s not something I want to put my energy into. But I still like this idea.

Modal Page Actions

mockup of Modal Page Actions

This was tangentially inspired by Vivaldi’s Image Properties, not because of the interface, but thinking about how to fit high-information inspection tools into the browser.

The idea: instead of context menus, page actions, or other interaction points that are part of the “chrome”, implement one overlay interface: the do-something-with-this-page interface. Might also be do-something-with-this-element (e.g. replacing the 7 image-related context menu items: View Image, Copy Image, Copy Image Location, Save Image As, Email Image, Set As Desktop Background, and View Image Info).

The interface would be an overlay onto the page, similar to what happens when you start Screenshots:

Screenshots interface

Everything that is now in the Page Action menu (the ... in the URL bar), or in the context menu, would be available here. Some items might have a richer interface, e.g., Send Tab To Device would show the devices directly instead of using a submenu. Bookmarking would include some inline UI for managing the resulting bookmark, and so on.

There was some pushback because of the line of death – that is, the idea that all trusted content must clearly originate from the browser chrome, and not the content area. I do not believe in the Line of Death: it’s something users could use to form trust, but I don’t believe they do use it (further user research required).

The general pattern is inspired by mobile interfaces which are typically much more modal than desktop interfaces. Modal interfaces have gotten a bad rap, I think somewhat undeserved: modal interfaces are also interfaces that guide you through processes, or ask you to explicitly dismiss the interface. It’s not unreasonable to expect someone to finish what they start.

Find+1

mockup of Find + 1

We have find-in-page but what about find-in-anything-linked-from-this-page?

Hit Cmd-Shift-F and you get an interface to do that. All the linked pages will be loaded in the background and as you search we show snippets of matching pages. Clicking on a snippet opens or focuses the tab and goes to where the search term was found.

I started experimenting in find-plus-one and encountered some challenges: hidden tabs aren’t good workers, loading pages in the background takes a lot of grinding in Firefox, and most links on pages are stupid (e.g., I don’t want to search your Careers page). An important building block would be a way to identify the important (non-navigational) parts of a page. Maybe lighter-weight ways to load pages (in other projects I’ve used CSP injection). The Copy Keeper concept did come about while I experimented with this.

A simpler implementation of this might simply do a text search of all your open tabs, which would be technically simpler and mostly an exercise in making a good representation of the results.
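A naive sketch of that simpler version, using only standard WebExtension APIs (permissions and result presentation elided):

// Search the visible text of every open tab for a query string.
// Needs "tabs" plus host permissions; privileged pages will refuse injection.
async function findInAllTabs(query) {
  const tabs = await browser.tabs.query({});
  const hits = [];
  for (const tab of tabs) {
    try {
      const [text] = await browser.tabs.executeScript(tab.id, {
        code: 'document.body ? document.body.innerText : ""'
      });
      if (text && text.toLowerCase().includes(query.toLowerCase())) {
        hits.push({ tabId: tab.id, title: tab.title });
      }
    } catch (e) {
      // about: pages and some stores can't be scripted; skip them
    }
  }
  return hits;
}

Clicking a result would then just call browser.tabs.update(tabId, { active: true }).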

Your Front Page

mockup of Your Front Page

Create a front page of news from the sites you already visit. Like an RSS reader, but prepopulated with your history. This creates an immediate well-populated experience.

My initial thought was to use ad hoc parsers for popular news sites, and run an experiment with just a long whitelist of news providers.

I got the feedback: why not just use RSS? Good question: I thought RSS was kind of passé, but I hadn’t looked for myself. I went on to do some analysis of RSS, and found it available for almost all news sites. The autodetection (<link rel=alternate>) is not as widely available, and it requires manual searching to find many feeds. Still RSS is a good way to get an up-to-date list of articles and their titles. Article content isn’t well represented and other article metadata is inaccurate or malformed (e.g., there are no useful tags). So using RSS would be very reasonable discovery mechanism, but an “RSS reader” doesn’t seem like a good direction on the current web.
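For reference, that autodetection boils down to looking for the advertised <link> tags on each page; a console-sized sketch:

// Detect feeds advertised by the current page via <link rel=alternate>.
const feeds = Array.from(document.querySelectorAll('link[rel="alternate"]'))
  .filter(link => /rss|atom/i.test(link.type || ''))
  .map(link => ({ title: link.title, url: link.href }));

console.log(feeds); // [] on the many sites that don't advertise their feeds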

Page Archive

This is bringing back old functionality from Page Shot, a project of mine which morphed into Firefox Screenshots: save full DOM copies of pages. What used to be fairly novel is now well represented by several projects (e.g., WebMemex or World Brain Memex).

Unfortunately I have never been able to really make this kind of tool part of my own day-to-day behavior, and I’ve become skeptical it can work for a general populace. But maybe there’s a way to package up this functionality that is more accessible, or happens more implicitly. I forked a version of Page Shot as pagearchive a while ago, with this in mind, but I haven’t (and likely won’t) come back to it.

Personal Historical Archive

This isn’t really a product idea, but instead an approach to developing products.

One can imagine many tools that directly interact or learn from the content of your browsing. There is both a privacy issue here and a privacy opportunity: looking at this data is creepy, but if the tools live in your user agent (that belongs to you and hosts your information locally) then it’s not so creepy.

But it’s really hard to make experiments on this because you need a bunch of data. If you build a tool that starts watching your browsing then it will only slowly build up interesting information. The actual information that is already saved in browser history is interesting, but in my experience it is too limited and of poor quality. For instance, it is quite hard to build up a navigational path from the history when you use multiple tabs.

A better iterative development approach would be one where you have a static set of all the information you might want, and you can apply tools to that information. If you find something good then later you can add new data collection to the browser, secure in the knowledge that it’s going to find interesting things.

I spent quite a bit of effort on this, and produced personal-history-archive. It’s something I still want to come back to. It’s a bit of a mess, because at various times it was retrofitted to collect historical information, or collect it on an ongoing basis, or collect it when driven by a script. I also tried to build tools in parallel for doing analysis on the resulting database.

This is also a byproduct of experimentation with machine learning. I wanted to apply things I was learning to browser data, but the data I wanted wasn’t there. I spent all my time collecting and cleaning data, and ended up spending only a small amount of time analyzing the data. I suspect I’m not the only one who has done this.

Navigational Breadcrumbs

mockup of Navigational Breadcrumbs

When I click on a link I lose the reminder of why I clicked on it. What on the previous page led me to click on this? Was I promised something? Are there sibling links that I might want to continue to directly instead of going back and selecting another link?

This tool would give you additional information about the page you are on, how you got there, and given where you came from, where you might go next. Would this be a sidebar? Overlay content? In a popup? I’m not sure.

Example: using this, if I click on a link from Reddit I will be able to see the title of the Reddit post (which usually doesn’t match the document title), and a link to comments on the page. If I follow a link from Twitter, I’ll be able to see the Tweet I came from.

This could be interesting paired with link preview (like a tentative forward). Maybe the mobile browser Linkbubbles (now integrated into Brave) has some ideas to offer.

Technically this will use some of the techniques from Personal History Archive, which tracks link sources.

This is based on the train of thought I wrote down in an HN comment – itself a response to Freeing the Web from the Browser.

I want to try this still, and have started a repo crossnav but haven’t put anything there yet. I think even some naive approaches could work, just trying to detect the category of link and the related links (e.g., on Reddit the category is other submissions, and the related links are things like comments).

Copy Keeper

mockup of Copy Keeper

A notebook/logbook that is filled in every time you copy from a web page. When you copy it records (locally):

  • Text of selection
  • HTML of selection
  • Screenshot of the block element around the selection
  • Text around selection
  • Page URL and nearest anchor/id
  • Page title
  • Datetime

This overloads “copy” to mean “remember”.

Clips would be searchable, and could be moved back to the clipboard in different forms (text, HTML, image, bibliographical reference, source URL). Maybe clips would be browsable in a sidebar (maybe the sidebar has to be open for copies to be collected), or clips could be browsed in a normal tab (Library-style).
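A bare-bones sketch of the capture side, written as a WebExtension content script (the message shape is made up, and the screenshot and surrounding-text fields are left out):

// On every copy, record the selection plus some context about where it
// happened, and hand it to the background script for storage.
document.addEventListener('copy', () => {
  const selection = window.getSelection();
  if (!selection || selection.isCollapsed) {
    return;
  }
  const container = document.createElement('div');
  container.appendChild(selection.getRangeAt(0).cloneContents());

  browser.runtime.sendMessage({
    type: 'copy-keeper-clip',
    text: selection.toString(),
    html: container.innerHTML,
    pageUrl: location.href,
    pageTitle: document.title,
    datetime: new Date().toISOString()
  });
});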

I created a prototype in copy-keeper. I thought it was interesting and usable, though whether it would actually get any use in practice I don’t know. It’s one of those tools that seems handy but requires effort, and as a result doesn’t get used.

Change Scout

mockup of Change Scout

(Wherein I both steal a name from another team, and turn it into a category…)

Change Scout will monitor a page for you, and notify you when it changes. Did someone edit the document? Was there activity on an issue? Did an article get updated? Put Change Scout to track it and it will tell you what changes and when.

It would monitor the page inside the browser, so it would have access to personalized and authenticated content. A key task would be finding ways to present changes in an interesting and compact way. In another experiment I tried some very simple change detection tools, and mostly ended up frustrated (small changes look very large to naive algorithms).
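This is roughly the naive kind of detector I mean, as a content script (the interval and the comparison are deliberately simplistic):

// Remember the page's visible text and log when it differs on a later check.
let lastText = document.body.innerText;

setInterval(() => {
  const currentText = document.body.innerText;
  if (currentText !== lastText) {
    console.log('Change Scout: page changed at', new Date().toISOString());
    // A real version would diff the two snapshots and summarize the change.
    lastText = currentText;
  }
}, 60 * 1000); // check once a minute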

Popup Tab Switcher

Tab Switcher mockup

We take the exact UI of the Side View popup, but make it a tab switcher. “Recent Tabs” are the most recently focused tabs (weighted somewhat by how long you were on the tab), and then there’s the complete scrollable list. Clicking on an item simply focuses that tab. You can close tabs without focusing them.
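The data side of such a popup is small; a sketch (the time-on-tab weighting is left out, this just uses Firefox’s lastAccessed timestamp):

// List tabs by most recent focus, switch on click, close without switching.
async function recentTabs() {
  const tabs = await browser.tabs.query({ currentWindow: true });
  return tabs.sort((a, b) => b.lastAccessed - a.lastAccessed);
}

function focusTab(tab) {
  return browser.tabs.update(tab.id, { active: true });
}

function closeTab(tab) {
  return browser.tabs.remove(tab.id);
}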

I made a prototype in tab-switchr. In it I also added some controls to close tabs, which was very useful for my periodic tab cleanups. Given that it was a proactive tool, I surprised myself by using it frequently. There’s work in Firefox to improve this, unrelated to anything I’ve done. It reminds me a bit of various Tree-Style Tabs, which I both like because they make it easier to see my tabs, and dislike because I ultimately am settled on normal top-tabs. The popup interface is less radical but still provides many of the benefits.

I should probably clean this up a little and publish it.

Personal Podcast

Create your own RSS feed.

  • When you are on a page with some audio source, you can add the audio to your personal feed
  • When on an article, you can generate an audio version that will be added to the feed
  • You get an RSS feed with a random token to make it private (I don’t think podcast apps handle authentication well, but this requires research)
  • Maybe you can just send/text the link to add it to your preferred podcast app
  • If apps don’t accept RSS links very well, maybe something more complicated would be required. An app that just installs an RSS feed? We want to avoid the feed accidentally ending up in podcast directories.

Bookmark Manager

There are a lot of low-rated bookmark managers in addons.mozilla.org and the Chrome Extension store. Let’s make our own low-rated bookmark manager!

But seriously, this would anticipate updates to the Library and built-in bookmark manager, which are deficient.

Some resources/ideas:

  • Comment with a few gripes
  • Google’s bookmark manager
  • Bookmark section on addons.mozilla.org
  • Bookmark organizers on addons.mozilla.org
  • Relevant WebExtension APIs

Extended Library

mockup of the Extended Library

The “Library” in Firefox is the combination history and bookmark browser you get if you use “Show all bookmarks” or “Show all history”.

In this idea we present the user with a record of their assets, wherever they are.

This is like a history view (and would be built from history), but would use heuristics to pick out certain kinds of things: docs you’ve edited, screenshots you’ve taken, tickets you’ve opened, etc. We’d be trying hard to find long-lived documents in your history, instead of transitional navigation, articles, things you’ve gotten to from public indexes, etc.

Automatically determining what should be tagged as a “library item” would be the hard part. But I think having an organic view of these items, regardless of underlying service, would be quite valuable. The browser has access to all your services, and it’s easy to forget what service hosts the thing you are thinking about.

Text Mobile Screenshot

mockup of Text Mobile Screenshot

This tool will render the tab in a mobile factor (using the devtools responsive design mode), take a full-page screenshot, and text the image and URL to a given number. Probably it would only support texting to yourself.

I’ve looked into this some, and getting the mobile view of a page is not entirely obvious and requires digging around deep in the browser. Devtools does some complicated stuff to display the mobile view. The rest is basic UI flows and operational support.

Email Readable

Emails the Reader Mode version of a site to yourself. In our research, people love to store things in Email, so why not?

Though it lacks the simplicity of this concept, Email Tabs contains this basic functionality. Email This does almost exactly this.

Your History Everywhere

An extension that finds and syncs your history between browsers (particularly between Chrome and Firefox).

This would use the history WebExtension APIs. Maybe we could create a Firefox Sync client in Chrome. Maybe it could be a general way to move things between browsers. Actual synchronization is hard, but creating read-only views into the data in another browser profile is much easier.
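
The read-only half is the easy part: the WebExtensions history API can already enumerate recent history for export. A minimal sketch, assuming the "history" permission and the webextension-polyfill typings (the function name and record shape are illustrative):

    import browser from "webextension-polyfill";

    // Collect the last `days` of history as plain records that another
    // browser (or a sync service) could display read-only.
    async function exportRecentHistory(days: number) {
      const items = await browser.history.search({
        text: "",                                      // empty text matches everything
        startTime: Date.now() - days * 24 * 60 * 60 * 1000,
        maxResults: 10000,
      });
      return items.map((item) => ({
        url: item.url,
        title: item.title,
        lastVisitTime: item.lastVisitTime,
      }));
    }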

Obviously there’s already lots of work going into synchronizing this data between Firefox properties, and knowing that work, this isn’t easy and often involves close cooperation with the underlying platform. Without full access to the platform (as on Chrome), we’d have to find ways to simplify the problem in order to make it feasible.

My Homepage

Everyone (with an FxA account) gets their own homepage on the web. It’s like Geocities! Or maybe closer to github.io.

But more seriously, it would be programmatically accessible simple static hosting. Not just for you to write your own homepage, but an open way for applications to publish user content, without those applications themselves turning into hosting platforms. We’d absorb all the annoyances of hosting content (abuse, copyright, quotas, ops, financing) and let open source developers focus on enabling interesting content generation experiences for users on the open web.

Here’s a general argument why I think this would be a useful thing for us to do. And another from Les Orchard.

Studying what Electron does for people

This is a proposal for user research:

Electron apps are being shipped for many services, including services that don’t require any special system integration. E.g., Slack doesn’t require anything that a web browser can’t do. Spotify maybe catches some play/pause keys, but is very close to being a web site. Yet there is perceived value in having an app.

The user research would focus on cases where the Electron app doesn’t have any/many special permissions. What gives the app value over the web page?

The goal would be to understand the motivations and constraints of users, so we could consider ways to make the in-browser experience equally pleasant to the Electron app.

App quick switcher

Per my previous item: why do I have an IRCCloud app? Why do people use Slack apps? Maybe it’s just because they want to be able to switch into and out of those apps quickly.

A proposed product solution: add a shortcut to any specific (pinned?) tab. Might be autocreated. Using the shortcut when the app is already selected will switch you back to your previously selected tab. Switching to the tab without the shortcut will display a gentle reminder that the shortcut exists (so you can train yourself to start using it).

To make it a little more fancy, I thought we might also be able to do a second related “preview” shortcut. This would let you peek into the window. I’m not sure what “peeking” means. Maybe we just show a popup with a screenshot of that other window.

Maybe this should all just overload ⌘1/2/3 (maybe shift-⌘1/etc for peeking). Note these shortcuts do not currently have memory – you can switch to the first tab with ⌘1, but you can’t switch back.
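
A minimal sketch of the shortcut-with-memory idea, using the WebExtensions commands API and the webextension-polyfill typings. The command name, the way the “app” tab is chosen, and the helper names are all assumptions:

    import browser from "webextension-polyfill";

    let appTabId: number | undefined;      // the pinned tab acting as the "app"
    let previousTabId: number | undefined; // where the shortcut should jump back to

    // "toggle-app-tab" would be declared in manifest.json under "commands".
    browser.commands.onCommand.addListener(async (command) => {
      if (command !== "toggle-app-tab" || appTabId === undefined) return;

      const [current] = await browser.tabs.query({ active: true, currentWindow: true });
      if (current?.id === appTabId && previousTabId !== undefined) {
        // Already on the app: switch back to the previously selected tab.
        await browser.tabs.update(previousTabId, { active: true });
      } else if (current?.id !== undefined) {
        previousTabId = current.id;
        await browser.tabs.update(appTabId, { active: true });
      }
    });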

This is one suggested solution to Whatever Electron does for people.

I started some work in quick-switch-extension, but keyboard shortcuts were a bit wonky, and I couldn’t figure out useful additional functionality that would make it fun. Firefox (Nightly?) now has Ctrl-Tab functionality that takes you to recent tabs, mitigating this problem (though it is not nearly as predictable as what I propose here).

Just Save

Just Save saves a page. It’s like a bookmark. Or a remembering. Or an archive. Or all of those all at once.

Just Save is a one-click operation, though a popup does show up (similar in function to Pocket) that would allow some additional annotation of your saved page.

We save:

  1. Link
  2. Title
  3. Standard metadata
  4. Screenshot
  5. Frozen version of page
  6. Scroll position
  7. The tab history
  8. The other open tabs, so if some of them are saved we can offer relations between them later
  9. Time the page was saved
  10. Query terms that led to the page
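
A background-script sketch that captures a few of the fields above in one click, assuming the "tabs", "activeTab", and "storage" permissions and the webextension-polyfill typings. The record shape is illustrative; frozen pages, tab history, and query terms would need more machinery.

    import browser from "webextension-polyfill";

    async function justSave(): Promise<void> {
      const [tab] = await browser.tabs.query({ active: true, currentWindow: true });
      if (!tab?.url) return;

      const record = {
        url: tab.url,                                        // 1. link
        title: tab.title,                                    // 2. title
        screenshot: await browser.tabs.captureVisibleTab(),  // 4. screenshot (data: URL)
        savedAt: Date.now(),                                 // 9. time the page was saved
      };
      await browser.storage.local.set({ [`saved:${tab.url}`]: record });
    }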

It’s like bookmarks, but purely focused on saving, while bookmarks do double-duty as a navigational tool. The tool encourages after-the-fact discovery and organization, not at-the-time-of-save choices.

And of course there’s a way to find and manage your saved pages. This idea needs more exploration of why you would return to a page or piece of information, and thus what we’d want to expose and surface from your history. We’ve done research, but it’s really just a start.

Open Search Combined Search

We have several open search providers. How many exist out there? How many could we find in history?

In theory Open Search is an API where a user could do personalized search across many properties, though I’m not sure a sufficient number of sites have enabled it.

Notes Commander

It’s Notes, but with slash commands.

In other words it’s a document, but if you complete a line that begins with a / then it will try to execute that command, appending or overwriting from that point.

So for instance /timestamp just replaces itself with a timestamp.

Maybe /page inserts the currently active tab. /search foo puts search results into the document, but as editable (and followable) links. /page save freezes the page as one big data link, and inserts that link into the note.
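
A toy dispatcher makes the model concrete: given a just-completed line, return the text that should replace it. The command set and the markdown-style link output below are illustrative assumptions; /search and /page save would need considerably more plumbing.

    import browser from "webextension-polyfill";

    async function runSlashCommand(line: string): Promise<string | null> {
      if (!line.startsWith("/")) return null; // not a command: leave the line alone
      const [command] = line.slice(1).split(/\s+/);

      switch (command) {
        case "timestamp":
          // Replaces itself with a timestamp.
          return new Date().toISOString();
        case "page": {
          // Inserts the currently active tab as a link.
          const [tab] = await browser.tabs.query({ active: true, currentWindow: true });
          return tab?.url ? `[${tab.title ?? tab.url}](${tab.url})` : null;
        }
        default:
          return `Unknown command: /${command}`;
      }
    }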

It’s a little like Slack, but in document form, and with the browser as the context instead of a messaging platform. It’s a little like a notebook programming interface, but less structured and more document-like.

The ability to edit the output of commands is particularly interesting to me, and represents the kind of ad hoc information organizing that we all do regularly.

I experimented some with this in Notes, and got it working a little bit, but working with CKEditor (that Notes is built on) was just awful and I couldn’t get anything to work well. Notes also has a very limited set of supported content (no images or links), which was problematic. Maybe it’s worth doing it from scratch (with ProseMirror or Slate?)

After I tried to mock this up, I realized that the underlying model is much too unclear in my mind. What’s this for? When is it for? What would a list of commands look like?

Another thing I realized while attempting a mockup is that there should be a rich but normalized way to represent pages and URLs and so forth. Often you’ll be referring to URLs of pages that are already open. You may want to open sets of pages, or see immediately which URLs are already open in a tab. A frozen version of a page should be clearly linked to the source of that page, which of course could be an open tab. There’s a lot of pieces to fit together here, both common nouns and verbs, all of which interact with the browser session itself.

Automater

Automation and scripting for your browser: demonstrate a task in your browser, give it a name, and you have a repeatable script.

The scripts will happen in the browser itself, not via any backend or scraping tool. In case of failed expectations or changed sites, the script will halt and tell the user.

Scripts could be as simple as “open a new tab pointing to this page every weekday at 9am”, or could involve clipping information, or just doing a navigational pattern before presenting the page to a user.

There’s a huge amount of previous work in this area. I think the challenge here is to create something that doesn’t look like a programming language displayed in a table.

Sidekick

Sidekick is a sidebar interface to anything, or everything, contextually. Some things it might display:

  • Show you the state of your clipboard
  • Show you how you got to the current tab (similar to Navigational Breadcrumbs)
  • Show you other items from the search query that kicked off the current tab
  • Give quick navigation to nearby pages, given the referring page (e.g., the next link, or next set of links)
  • Show you buttons to activate other tabs you are likely to switch to from the current tab
  • Show shopping recommendations or other content-aware widgets
  • Let you save little tidbits (text, links, etc), like an extended clipboard or notepad
  • Show notifications you’ve recently received
  • Peek into other tabs, or load them inline somewhat like Side View
  • Checklists and todos
  • Copy a bunch of links into the sidebar, then treat them like a todo/queue

Possibly it could be treated like an extensible widget holder.

From another perspective: this is like a continuous contextual feature recommender. I.e., it would try to answer the question: what’s the feature you could use right now?

Timed Repetition

Generally, in order to commit something to long-term memory you must revisit the information later, ideally after long enough that recalling it is a struggle.

Is anything we see in a browser worth committing to long-term memory? Sometimes it feels like nothing is worth remembering, but that’s a kind of nihilism based on the shitty aspects of typical web browsing behavior.

The interface would require some positive assertion: I want to know this. Probably you’d want to highlight the thing you’d “know”. Then, later, we’d want to come up with some challenge. We don’t need a “real” test that is verified by the browser; instead we simply need to ask some related question, and then the user can say whether they got it right or not (or remembered it or not).
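
A minimal expanding-interval scheduler shows how little state this needs; the doubling schedule below is an illustrative assumption, not a claim about the right spacing algorithm.

    interface RepetitionItem {
      url: string;
      highlight: string;    // the text the user asserted they want to know
      intervalDays: number; // current gap between challenges
      nextReview: number;   // timestamp (ms) of the next challenge
    }

    // After each challenge, the user reports whether they remembered it.
    function markReviewed(item: RepetitionItem, remembered: boolean): RepetitionItem {
      // Remembered: wait roughly twice as long next time. Forgot: start over.
      const intervalDays = remembered ? item.intervalDays * 2 : 1;
      return {
        ...item,
        intervalDays,
        nextReview: Date.now() + intervalDays * 24 * 60 * 60 * 1000,
      };
    }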

Reader Mode improvements

Reader mode is a bit spartan. Maybe it could be a bit nicer:

  • Pick up some styles or backgrounds from the hosting site
  • Display images or other media differently or more prominently
  • Add back some markup or layout that Readability erases
  • Apply to some other kinds of sites that aren’t articles (e.g., a video site)
  • A multicolumn version like McReadability

Digest Mode

Inspired by Full Hacker News (comments): take a bunch of links (typically articles) and concatenate their content into one page.

Implicitly this requires Reader Mode parsing of the pages, though that is relatively cheap for “normal” articles. Acquiring the list of pages is somewhat less clear: it’s partly a news/RSS question. Taking a page like Hacker News and figuring out what the “real” links are is another approach that may be interesting. Lists of related links are everywhere, yet hard to formally define.
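
A sketch of the concatenation step, using Mozilla’s Readability library (the parser behind Reader Mode). It assumes the pages can be fetched from the current context, e.g. an extension page with host permissions, and it skips error handling:

    import { Readability } from "@mozilla/readability";

    // Fetch each link, extract its readable article, and join them into one page.
    async function buildDigest(urls: string[]): Promise<string> {
      const sections: string[] = [];
      for (const url of urls) {
        const html = await (await fetch(url)).text();
        const doc = new DOMParser().parseFromString(html, "text/html");
        const article = new Readability(doc).parse();
        if (article) {
          sections.push(`<h1>${article.title}</h1>${article.content}`);
        }
      }
      return sections.join("<hr>");
    }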

This would work very nicely with complementary text summarization.

Open question: is this actually an interesting or useful way to consume information?

Firefox for X

There’s an underlying concept here worth explaining:

Feature development receives a lot of skepticism. And it’s reasonable: there’s a lot of conceit in a feature, especially embedded in a large product. Are people going to use a product or not because of some little feature? Or maybe the larger challenge: can some feature actually change behavior? Every person has their own thing going on, people aren’t interested in our theories, and really not that many people are interested in browsers. Familiar functionality – the back button, bookmarks, the URL bar, etc. – is what they expect, what they came for, and what they will gravitate to. Everything I’ve written so far in this list is something people won’t actually use.

A browser is particularly problematic because it’s so universal. It’s for sites and apps and articles. It’s for the young and the elderly, the experienced and not. It’s used for serious things, it’s used for concentration, and it’s used for dumb things and to avoid concentrating. How can you build a feature for everyone, targeting anything they might do? And if you build something, how can a person trust a new feature is really for them, not some other person? People are right to be skeptical of the new!

But we also know that most people regularly use more than one browser. Some people use Chrome for personal stuff, and Firefox for work. Some people do the exact opposite. Some people do their banking and finance in a specific browser. Some use a specific browser just for watching videos.

Which browser a person uses for which task is seemingly random. Maybe they were told to use a specific browser for one task, and then the other browser became the fallback. Maybe they heard somewhere once that one browser was more secure. Maybe Flash seemed broken on one browser when they were watching a video, and now a pattern has been set.

This has long seemed like an opportunity to me. Market a browser that actually claims to be the right browser for some of these purposes! Firefox has Developer Edition and it’s been reasonably successful.

This offers an opportunity for both Mozilla and Firefox users to agree on purpose. What is Firefox for? Everything! Is this feature meant for you? Unlikely! In a purpose-built browser both sides can agree what it’s trying to accomplish.

This idea often gets pooh-poohed for how much work it is, but I think it’s simpler than it seems. Here’s what a “new browser” means:

  • Something you can find and download from its own page or site
  • It’s Firefox, but uses its own profile, keeping history/etc separate from other browser instances (including Firefox)
  • It has its own name and icon, and probably a theme to make it obvious what browser you are in
  • It comes with some browser extensions and prefs changed, making it more appropriate for the proposed use case

The approach is heavy on marketing and build tools, and light on actual browser engineering.

I also have gotten frequent feedback that Multi-Account Containers should solve all these use cases, but that gets everything backwards. People already understand multiple browsers, and having completely new entry points to bring people to Firefox is a feature, not a bug.

Sadly I think the time for this has passed, maybe in the market generally or maybe just for Mozilla. It would have been a very different approach to the browser.

Some of us in the Test Pilot team had some good brainstorming around actual concepts too, which is where I actually get excited about the ideas:

Firefox Study

For students, studying.

  • Integrate note-taking tools
  • Create project and class-based organizational tools, helping to organize tabs, bookmarks, and notes
  • Tools to document and organize deadlines
  • Citation generators

I don’t know what to do with online lectures and video, but it feels like there are some meaningful improvements to be made in that space. Video-position-aware notetaking tools?

I think the intentionality of opening a browser to study is a good thing. iPads are somewhat popular in education, and I suspect part of that is having a device that isn’t built around multitasking, and using an iPad means stepping away from regular computing.

Firefox Media

To watch videos. This requires very few features, but benefits from just being a separate profile, history, and icon.

There’s a small number of features that might be useful:

  • Cross-service search (like Can I Stream.it or JustWatch)
  • Search defaults to video search
  • Cross-service queue
  • Quick service-based navigation

I realize it’s a lot like Roku in an app.

Firefox for Finance

This is really just about security.

Funny story: people say they value security very highly. But if Mozilla wants to make changes in Firefox that increase security but break some sites – particularly insecure sites – people will then stop using Firefox. They value security highly, but still just below anything at all breaking. This is very frustrating for us.

At the same time, I kind of get it. I’m dorking around on the web and I click through to some dumb site, and I get a big ol’ warning or a blank page or some other weirdness. I didn’t even care about the page or its security, and here my browser is trying to make me care.

That’s true some of the time, but not others. If you are using Firefox for Finance, or Firefox Super Secure, or whatever we might call it, then you really do care.

There’s a second kind of security implied here as well: security from snooping eyes and on shared computers. Firefox Master Password is a useful feature here. Generally there’s an opportunity for secure data at rest.

This is also a vehicle for education in computer security, with an audience that we know is interested.

Firefox Low Bandwidth

Maybe we work with proxy services. Or just do lots of content blocking. In this browser we let content break (and give a control to load the full content), so long as you start out compact.

  • Cache content that isn’t really supposed to be cached
  • Don’t load some kinds of content
  • Block fonts and other seemingly-unimportant content
  • Monitoring tools to see where bandwidth usage is going

Firefox for Kids

Sadly making things for kids is hard, because you are obliged to do all sorts of things if you claim to target children, but you don’t have to do anything if kids just happen to use your tool.

There is an industry of tools in this area that I don’t fully understand, and I’d want to research before thinking about a feature list. But it seems like it comes down to three things:

  • Blocking problematic content
  • Encouraging positive content
  • Monitoring tools for parents

There’s something very uninspiring about that list; it feels like it’s long on negativity and short on positive engagement. Coming up with an answer to that is not a simple task.

Firefox Calm

Inspired by a bunch of things:

What would a calm Firefox experience look like? Or maybe it would be better to think about a calm presentation of the web. At some point I wrote out some short pitches:

  • Read without distraction: Read articles like they are articles, not interactive (and manipulative) experiences.
  • Stay focused on one thing at a time: Instead of a giant list of tabs and alerts telling you what we aren’t doing, automatically focus on the one thing you are doing right now.
  • Control your notifications: Instead of letting any site poke at you for any reason, notifications are kept to a minimum and batched.
  • Focused writing: When you need to focus on what you are saying, not what people are saying to you, enter focused writing mode.
  • Get updates without falling down a news hole: Avoid clickbait, don’t reload pages, just see updates from the sites you trust (relates to Your Front Page)
  • Pomodoro: let yourself get distracted… but only a little bit. The Pomodoro technique helps you switch between periods of focused work and letting yourself relax
  • Don’t even ask: Do you want notifications from the news site you visited once? Do you want videos to autoplay? Of course not, and we’ll stop even asking.
  • Suggestion-free browsing: Every page you look at isn’t an invitation to tell you what you should look at next. Remove suggested content, and do what YOU want to do next. (YouTube example)

Concluding thoughts

Not just the conclusion of this list, the conclusion of my work in this area…

Some challenges in the design process:

  1. Asking someone to do something new is hard, and unlikely to happen. My previous post (The Over-engaged Knowledge Worker) relates to this tension.
  2. … and yet a “problem” isn’t enough to get someone to do something either.
  3. If someone is consciously and specifically doing some task, then there’s an opportunity.
  4. Creating holistic solutions is unwelcome; counterintuitively, each thing that adds to the size of a solution diminishes the breadth of problems the solution can solve.
  5. … and yet, abstract solutions without any clear suggestion of what they solve aren’t great either!
  6. Figuring out how to package functionality is a big deal.
  7. Approaches that increase the density of information or choices are themselves somewhat burdensome.
  8. … and yet context-sensitive approaches are unpredictable and distracting compared to consistent (if dense) functionality.
  9. I still believe there’s a wealth of material in the content of the pages people encounter. But it’s irregular and hard to understand; it takes concerted, long-term effort to do something here.
  10. Lots of the easy stuff, the roads well traveled, are still hard for a lot of people. Maybe this can be fixed by optimizing current UI… but I think there’s still room for novel improvements to old ideas.
  11. User research is a really great place to start, but it’s not very prescriptive. It’s mostly problem-finding, not solution-finding.
  12. There are some kinds of user research I wish I had access to, specifically really low-level analysis of behavior. What’s in someone’s mind when they open a new tab, or reuse one? In what order do they scan the UI? What are the mental models of a URL, of pages and how they change? In what order do people compose (mentally and physically) things they want to share? It feels like it could go on forever, and there would be a ton of detail in the results, but given all the other constraints these insights feel important.
  13. There are so many variables in an experiment that it’s hard to know what a failure really means. Every experiment that offers a novel experience involves several choices, and any one choice can cause the experiment to fail.

As Test Pilot comes to an end, I do find myself asking: is there room for qualitative improvements in desktop browser UI? Desktop computing is waning. User expectations of a browser are calcified. The only time people make a choice is when something breaks, and the only way to win is to not break anything and hope your competitor does break things.

So, is there room for improvement? Of course there is! The millions of hours spent every day in Firefox alone… this is actually important. Yes, a lot of things are at a local maximum, and we can A/B test little tweaks to get some suboptimal parts to their local maximum. But I do not believe in any way that the browsers we know are the optimal container. The web is bigger than browsers, bigger than desktop or mobile or VR, and a user agent can do unique things beyond any site or app.

And yet…

Daniel Stenberg: alt-svc in curl

RFC 7838 was published back in April 2016. It describes the new HTTP header Alt-Svc, or, as the title of the document says, HTTP Alternative Services.

HTTP Alternative Services

An alternative service in HTTP lingo is quite simply another server instance that can provide the same service and act as the same origin as the original one. The alternative service can run on another port, on another host name, on another IP address, or over another HTTP version.

An HTTP server can inform a client about the existence of such alternatives by returning this Alt-Svc header. The header, which has an expiry time, tells the client that there’s an optional alternative to this service hosted on that host name, on that port number, using that protocol. If the client is a browser, it can connect to the alternative in the background and, if that works out fine, continue to use that host for the rest of the time the alternative is said to work.

In reality, this header becomes a little similar to the DNS records SRV or URI: it points out a different route to the server than what the A/AAAA records for it say.

The Alt-Svc header came into life as an attempt to help out with HTTP/2 load balancing, since with the introduction of HTTP/2 clients would suddenly use much more persistent and long-lived connections instead of the very short ones used for traditional HTTP/1 web browsing, which changed the nature of how connections are made. This way, a system that is about to go down can hint to clients how to continue using the service elsewhere.

Alt-Svc: h2="backup.example.com:443"; ma=2592000;
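
To make the pieces concrete, here is a simplified parser for a single alternative from such a header. It is illustrative only (it is not how curl parses the header) and does not cover the full RFC 7838 grammar: multiple comma-separated alternatives, the "clear" value, persist, or percent-encoded protocol ids.

    interface AltService {
      protocol: string;      // e.g. "h2" or "h3"
      host: string;          // empty string means "same host, different port"
      port: number;
      maxAgeSeconds: number; // how long the alternative may be used (RFC default: 24h)
    }

    function parseAltSvc(value: string): AltService | null {
      const alt = value.match(/^\s*([!#$%&'*+.^_`|~0-9a-zA-Z-]+)="([^":]*):(\d+)"/);
      if (!alt) return null;
      const ma = value.match(/;\s*ma=(\d+)/);
      return {
        protocol: alt[1],
        host: alt[2],
        port: Number(alt[3]),
        maxAgeSeconds: ma ? Number(ma[1]) : 86400,
      };
    }

    // parseAltSvc('h2="backup.example.com:443"; ma=2592000')
    //   -> { protocol: "h2", host: "backup.example.com", port: 443, maxAgeSeconds: 2592000 }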

HTTP upgrades

Once that header was published, the by then already existing and deployed Google QUIC protocol switched to using the Alt-Svc header to hint clients (read “Chrome users”) that “hey, this service is also available over gQUIC“. (Prior to that, they used their own custom alternative header that basically had the same meaning.)

This is important because QUIC is not TCP. Resources on the web that are pointed out using traditional HTTPS:// URLs still imply that you connect to them using TCP on port 443 and negotiate TLS over that connection. Upgrading from HTTP/1 to HTTP/2 on the same connection was “easy” since they were both still TCP and TLS. All we needed then was to use the ALPN extension and voila: a nice and clean version negotiation.

To upgrade a client and server communication into a post-TCP protocol, the only official way to do it is to first connect using the lowest common denominator that the HTTPS URL implies: TLS over TCP. Only once the server tells the client what more there is to try can the client go on and try out the new toys.

For HTTP/3, this is the official way for HTTP servers to tell users about the availability of an HTTP/3 upgrade option.

curl

I want curl to support HTTP/3 as soon as possible and, as I’ve mentioned above, understanding Alt-Svc is a key prerequisite for a working “bootstrap”. curl needs to support Alt-Svc. While we’re implementing support for it, we can just as well support the whole concept and other protocol versions, not just limit it to HTTP/3 purposes.

curl will only consider received Alt-Svc headers when talking HTTPS, since only then can it know that it actually speaks with the right host, one with enough authority to point to other places.

Experimental

This is the first feature and code that we merge into curl under a new concept we use for “experimental” code. It is a way for us to mark this code as: we’re not quite sure exactly how everything should work, so we let users in to test it and help us smooth out the quirks, but as a consequence we might actually change how it works, both behavior- and API-wise, before we make the support official.

We strongly discourage anyone from shipping code marked experimental in production. You need to explicitly enable this in the build to get the feature. (./configure --enable-alt-svc)

But at the same time we urge and encourage interested users to test it out, try how it works and bring back your feedback, criticism, praise, bug reports and help us make it work the way we’d like it to work so that we can make it land as a “normal” feature as soon as possible.

Ship

The experimental alt-svc code has been merged into curl as of commit 98441f3586 (merged March 3rd 2019) and will be present in the curl code starting in the public release 7.64.1 that is planned to ship on March 27, 2019. I don’t have any time schedule for when to remove the experimental tag but ideally it should happen within just a few release cycles.

alt-svc cache

The curl implementation of alt-svc has an in-memory cache of known alternatives. It can also both save that cache to a text file and load that file back into memory. Saving the alt-svc cache to disk allows it to survive curl invocations and to truly work the way it was intended. The cache file stores the expiry timestamp per entry, so it doesn’t matter if you try to use a stale file.

curl --alt-svc

Caveat: I’m now talking about how a feature works that I’ve just said might change before it ships. With the curl tool you ask for alt-svc support by pointing out the alt-svc cache file to use, or pass a “” (empty name) to make it not load or save any file. It makes curl load an existing cache from that file and, at the end, also save the cache back to that file.

curl has also long featured fancy connection options such as --resolve and --connect-to, which both let a user control where curl connects to and which in many cases work a little like a static poor man’s alt-svc. Learn more about those in my curl another host post.

libcurl options for alt-svc

We start out the alt-svc support for libcurl with two separate options. One sets the file name of the alt-svc cache on disk (CURLOPT_ALTSVC), and the other controls various aspects of how libcurl should behave in regards to alt-svc specifics (CURLOPT_ALTSVC_CTRL).

I’m quite sure that we will have reason to slightly adjust these when the HTTP/3 support comes closer to actually merging.

Cameron Kaiser: Another choice for Intel TenFourFox users

Waaaaaaay back when, I parenthetically mentioned in passing an anonymous someone(tm) trying to resurrect the then-stalled Intel port. Since then we now have a periodically updated unofficial and totally unsupported mainline Intel version, but it wasn't actually that someone who was working on it. That someone now has a release, too.

@OlgaTPark’s Intel TenFourFox fork is a bit unusual in that it is based on 45.9 (yes, back before the FPR releases began), so it is missing later updates in the FPR series. On the other hand, it does support Tiger (mainline Intel TenFourFox requires at least 10.5); it additionally supports several features not in TenFourFox, by enabling Mozilla features in some of its operating system-specific flavours that are disabled in TenFourFox for reasons of Tiger compatibility; and it also includes support for H.264 video with ffmpeg.

H.264 video has been a perennial request which I've repeatedly nixed for reasons of the MPEG LA threatening to remove and purée the genitals of those who would use its patents without a license, and more to the point using ffmpeg in Firefox and TenFourFox probably would have violated the spirit, if not the letter, of the Mozilla Public License. Currently, mainline Firefox implements H.264 using operating system support and the Cisco decoder as an external plugin component. Olga's scheme does much the same thing using a separate component called the FFmpeg Enabler, so it should be possible to implement the glue code in mainline TenFourFox, "allowing" the standalone, separately-distributed enabler to patch in the library and thus sidestepping at least the Mozilla licensing issue. The provided library is a fat dylib with PowerPC and Intel support and the support glue is straightforward enough that I may put experimental support for this mechanism in FPR14.

(Long-time readers will wonder why there is MP3 decoding built into TenFourFox, using minimp3 which itself borrows code from ffmpeg, if I have these objections. There are three simple reasons: MP3 patents have expired, it was easy to do, and I'm a big throbbing hypocrite. One other piece of "OlgaFox" that I'll backport either for FPR13 final or FPR14 is a correctness fix for our MP3 decoder which apparently doesn't trip up PowerPC, but would be good for Intel users.)

Ordinarily I don't like forks using the same name, even if I'm no longer maintaining the code, so that I can avoid receiving spurious support requests or bug reports on code I didn't write. For example, I asked the Oysttyer project to change names from TTYtter after I had ceased maintaining it so that it was clearly recognized they were out on their own, and they graciously did. In this case, though it might be slightly confusing, I haven't requested my usual policy because it is clearly and (better be) widely known that no Intel version of TenFourFox, no matter what version or what features, is supported by me.

On the other hand, if someone used Olga's code as a basis for, say, a 10.5-specific PowerPC fork of TenFourFox enabling features supported in that OS (a la the dearly departed AuroraFox), I would have to insist that the name be changed so we don't get people on Tenderapp with problem reports about it. Fortunately, Olga's release uses the names TenFiveFox and TenSixFox for those operating system-specific versions, and I strongly encourage anyone who wants to do such a Leopard-specific port to follow suit.

Releases can be downloaded from Github, and as always, there is no support and no promises of updates. Do not send support questions about this or any Intel build of TenFourFox to Tenderapp.

Mozilla Addons Blog: March’s featured extensions

Pick of the Month: Bitwarden – Free Password Manager

by 8bit Solutions LLC
Store your passwords securely (via encrypted vaults) and sync across devices.

“Works great, looks great, and it works better than it looks.”

Featured: Save Page WE

by DW-dev
Save complete pages or just portions as a single HTML file.

“Good for archiving the web!”

Featured: Terms of Service; Didn’t Read

by Abdullah Diaa, Hugo, Michiel de Jong
A clever tool for cutting through the gibberish of common ToS contracts you encounter around the web.

“Excellent time and privacy saver! Let’s face it, no one reads all the legalese in the ToS of each site used.”

Featured: Feedbro

by Nodetics
An advanced reader for aggregating all of your RSS/Atom/RDF sources.

“The best of its kind. Thank you.”

Featured: Don’t Touch My Tabs!

by Jeroen Swen
Don’t let clicked links take control of your current tab and load content you didn’t ask for.

“Hijacking ads! Deal with it now!”

Featured: DuckDuckGo Privacy Essentials

by DuckDuckGo
Search with enhanced security—tracker blocking, smarter encryption, private search, and other privacy perks.

“Perfect extension for blocking trackers while not breaking webpages.”

If you’d like to nominate an extension for featuring, please send it to amo-featured [at] mozilla [dot] org for the board’s consideration. We welcome you to submit your own add-on!

The post March’s featured extensions appeared first on Mozilla Add-ons Blog.

Will Kahn-Greene: Bleach: stepping down as maintainer

What is it?

Bleach is a Python library for sanitizing and linkifying text from untrusted sources for safe usage in HTML.

I'm stepping down

In October 2015, I had a conversation with James Socol that resulted in me picking up Bleach maintenance from him. That was a little over 3 years ago. In that time, I:

  • did 12 releases
  • improved the tests; switched from nose to pytest, added test coverage for all supported versions of Python and html5lib, added regression tests for xss strings in OWASP Testing Guide 4.0 appendix
  • worked with Greg to add browser testing for cleaned strings
  • improved documentation; added docstrings, added lots of examples, added automated testing of examples, improved copy
  • worked with Jannis to implement a security bug disclosure policy
  • improved performance (Bleach v2.0 released!)
  • switched to semver so the version number was more meaningful
  • did a rewrite to work with the extensive html5lib API changes
  • spent a couple of years dealing with the regressions from the rewrite
  • stepped up as maintainer for html5lib and did a 1.0 release
  • added support for Python 3.6 and 3.7

I accomplished a lot.

A retrospective on OSS project maintenance

I'm really proud of the work I did on Bleach. I took a great project and moved it forward in important and meaningful ways. Bleach is used by a ton of projects in the Python ecosystem. You have likely benefitted from my toil.

While I used Bleach on projects like SUMO and Input years ago, I wasn't really using Bleach on anything while I was a maintainer. I picked up maintenance of the project because I was familiar with it, James really wanted to step down, and Mozilla was using it on a bunch of sites--I picked it up because I felt an obligation to make sure it didn't drop on the floor and I knew I could do it.

I never really liked working on Bleach. The problem domain is a total fucking pain-in-the-ass. Parsing HTML like a browser--oh, but not exactly like a browser because we want the output of parsing to be as much like the input as possible, but as safe. Plus, have you seen XSS attack strings? Holy moly! Ugh!

Anyhow, so I did a bunch of work on a project I don't really use, but felt obligated to make sure it didn't fall on the floor, that has a pain-in-the-ass problem domain. I did that for 3+ years.

Recently, I had a conversation with Osmose that made me rethink that. Why am I spending my time and energy on this?

Does it further my career? I don't think so. Time will tell, I suppose.

Does it get me fame and glory? No.

Am I learning while working on this? I learned a lot about HTML parsing. I have scars. It's so crazy what browsers are doing.

Is it a community through which I'm meeting other people and creating friendships? Sort of. I like working with James, Jannis, and Greg. But I interact and work with them on non-Bleach things, too, so Bleach doesn't help here.

Am I getting paid to work on it? Not really. I did some of the work on work-time, but I should have been using that time to improve my skills and my career. So, yes, I spent some work-time on it, but it's not a project I've been tasked with to work on. For the record, I work on Socorro which is the Mozilla crash-ingestion pipeline. I don't use Bleach on that.

Do I like working on it? No.

Seems like I shouldn't be working on it anymore.

I moved Bleach forward significantly. I did a great job. I don't have any half-finished things to do. It's at a good stopping point. It's a good time to thank everyone and get off the stage.

What happens to Bleach?

I'm stepping down without working on what comes next. I think Greg is going to figure that out.

Thank you!

Jannis was a co-maintainer at the beginning because I didn't want to maintain it alone. Jannis stepped down and Greg joined. Both Jannis and Greg were a tremendous help and fantastic people to work with. Thank you!

Sam Snedders helped me figure out a ton of stuff with how Bleach interacts with html5lib. Sam was kind enough to deputize me as a temporary html5lib maintainer to get 1.0 out the door. I really appreciated Sam putting faith in me. Conversations about the particulars of HTML parsing--I'll miss those. Thank you!

While James wasn't maintaining Bleach anymore, he always took the time to answer questions I had. His historical knowledge, guidance, and thoughtfulness were crucial. James was my manager for a while. I miss him. Thank you!

There were a handful of people who contributed patches, too. Thank you!

Thank your maintainers!

My experience from 20 years of OSS projects is that many people are in similar situations: continuing to maintain something out of internal obligation long after they’ve stopped getting any value from the project.

Take care of the maintainers of the projects you use! You can't thank them enough for their time, their energy, their diligence, their help! Not just the big successful projects, but also the one-person projects, too.

Shout-out for PyCon 2019 maintainers summit

Sumana mentioned that PyCon 2019 has a maintainers summit. That looks fantastic! If you're in the doldrums of maintaining an OSS project, definitely go if you can.

Changes to this blog post

Update March 2, 2019: I completely forgot to thank Sam Snedders which is a really horrible omission. Sam's the best!

Niko Matsakis: Async-await status report

I wanted to post a quick update on the status of the async-await effort. The short version is that we’re in the home stretch for some kind of stabilization, but there remain some significant questions to overcome.

Announcing the implementation working group

As part of this push, I’m happy to announce we’ve formed an async-await implementation working group. This working group is part of the whole async-await effort, but focused on the implementation, and is part of the compiler team. If you’d like to help get async-await over the finish line, we’ve got a list of issues where we’d definitely like help (read on).

If you are interested in taking part, we have an “office hours” scheduled for Tuesday (see the compiler team calendar) – if you can show up then on Zulip, it’d be ideal! (But if not, just pop in any time.)

Who are we stabilizing for?

I mentioned that there remain significant questions to overcome before stabilization. I think the most fundamental question of all is this one: Who is the audience for this stabilization?

The reason that question is so important is because it determines how to weigh some of the issues that currently exist. If the point of the stabilization is to start promoting async-await as something for widespread use, then there are issues that we probably ought to resolve first – most notably, the await syntax, but also other things.

If, however, the point of stabilization is to let ‘early adopters’ start playing with it more, then we might be more tolerant of problems, so long as there are no backwards compatibility concerns.

My take is that either of these is a perfectly fine answer. But if the answer is that we are trying to unblock early adopters, then we want to be clear in our messaging, so that people don’t get turned off when they encounter some of the bugs below.

OK, with that in place, let’s look in a bit more detail.

Implementation issues

One of the first things that we did in setting up the implementation working group was to do a complete triage of all existing async-await issues. From this, we found that there was one very firm blocker, #54716. This issue has to do with the timing of drops in an async fn, specifically the drop order for parameters that are not used in the fn body. We want to be sure this behaves analogously with regular functions. This is a blocker to stabilization because it would change the semantics of stable code for us to fix it later.

We also uncovered a number of major ergonomic problems. In a follow-up meeting (available on YouTube), cramertj and I also drew up plans for fixing these bugs, though these plans have not yet been written up as mentoring instructions. These issues all focus around async fns that take borrowed references as arguments – for example, the async fn syntax today doesn’t support more than one lifetime in the arguments, so something like async fn foo(x: &u32, y: &u32) doesn’t work.

Whether these ergonomic problems are blockers, however, depends a bit on your perspective: as @cramertj says, a number of folks at Google are using async-await today productively despite these limitations, but you must know the appropriate workarounds and so forth. This is where the question of our audience comes into play. My take is that these issues are blockers for “async fn” being ready for “general use”, but probably not for “early adopters”.

Another big concern for me personally is the maintenance story. Thanks to the hard work of Zoxc and cramertj, we’ve been able to stand up a functional async-await implementation very fast, which is awesome. But we don’t really have a large pool of active contributors working on the async-await implementation who can help to fix issues as we find them, and this seems bad.

The syntax question

Finally, we come to the question of the await syntax. At the All Hands, we had a number of conversations on this topic, and it became clear that we do not presently have consensus for any one syntax. We did a lot of exploration here, however, and enumerated a number of subtle arguments in favor of each option. At this moment, @withoutboats is busily trying to write up that exploration into a document.

Before saying anything else, it’s worth pointing out that we don’t actually have to resolve the await syntax in order to stabilize async-await. We could stabilize the await!(...) macro syntax for the time being, and return to the issue later. This would unblock “early adopters”, but doesn’t seem like a satisfying answer if our target is the “general public”. If we were to do this, we’d be drawing on the precedent of try!, where we first adopted a macro and later moved that support to native syntax.

That said, we do eventually want to pick another syntax, so it’s worth thinking about how we are going to do that. As I wrote, the first step is to complete an overall summary that tries to describe the options on the table and some of the criteria that we can use to choose between them. Once that is available, we will need to settle on next steps.

Resolving hard questions

I am looking at the syntax question as a kind of opportunity – one of the things that we as a community frequently have to do is to find a way to resolve really hard questions without a clear answer. The tools that we have for doing this at the moment are really fairly crude: we use discussion threads and manual summary comments. Sometimes, this works well. Sometimes, amazingly well. But other times, it can be a real drain.

I would like to see us trying to resolve this sort of issue in other ways. I’ll be honest and say that I don’t entirely know what those are, but I know they are not open discussion threads. For example, I’ve found that the #rust2019 blog posts have been an incredibly effective way to have an open conversation about priorities without the usual rancor and back-and-forth. I’ve been very inspired by systems like vTaiwan, which enable a lot of public input, but in a structured and collaborative form, rather than an “antagonistic” one. Similarly, I would like to see us perhaps consider running more experiments to test hypotheses about learnability or other factors (but this is something I would approach with great caution, as I think designing good experiments is very hard).

Anyway, this is really a topic for a post of its own. In this particular case, I hope that we find that enumerating in detail the arguments for each side leads us to a clear conclusion, perhaps some kind of “third way” that we haven’t seen yet. But, thinking ahead, it’d be nice to find ways to have these conversations that take us to that “third way” faster.

Closing notes

As someone who has not been closely following async-await thus far, I’m super excited by all I see. The feature has come a ridiculously long way, and the remaining blockers all seem like things we can overcome. async await is coming: I can’t wait to see what people build with it.

Cross-posted to internals here.

Mozilla Open Innovation Team: Sharing our Common Voices

Mozilla releases the largest to-date public domain transcribed dataset of human voices available for use, including 18 different languages, adding up to almost 1,400 hours of recorded voice data from more than 42,000 contributors.

From the onset, our vision for Common Voice has been to build the world’s most diverse voice dataset, optimized for building voice technologies. We also made a promise of openness: we would make the high quality, transcribed voice data that was collected publicly available to startups, researchers, and anyone interested in voice-enabled technologies.

Today, we’re excited to share our first multi-language dataset with 18 languages represented, including English, French, German and Mandarin Chinese (Traditional), but also for example Welsh and Kabyle. Altogether, the new dataset includes approximately 1,400 hours of voice clips from more than 42,000 people.

With this release, the continuously growing Common Voice dataset is now the largest ever of its kind, with tens of thousands of people contributing their voices and original written sentences to the public domain (CC0). Moving forward, the full dataset will be available for download on the Common Voice site.

Data Qualities

The Common Voice dataset is unique not only in its size and licence model but also in its diversity, representing a global community of voice contributors. Contributors can opt-in to provide metadata like their age, sex, and accent so that their voice clips are tagged with information useful in training speech engines.

This is a different approach than for other publicly available datasets, which are either hand-crafted to be diverse (i.e. equal number of men and women) or the corpus is as diverse as the “found” data (e.g. the TEDLIUM corpus from TED talks is ~3x men to women).

More Common Voices: from 3 to 22 languages in 8 months

Since we enabled multi-language support in June 2018, Common Voice has grown to be more global and more inclusive. This has surpassed our expectations: Over the last eight months, communities have enthusiastically rallied around the project, launching data collection efforts in 22 languages with an incredible 70 more in progress on the Common Voice site.

As a community-driven project, people around the world who care about having a voice dataset in their language have been responsible for each new launch — some are passionate volunteers, some are doing this as part of their day jobs as linguists or technologists. Each of these efforts requires translating the website to allow contributions and adding sentences to be read.

Our latest additions include Dutch, Hakha-Chin, Esperanto, Farsi, Basque, and Spanish. In some cases, a new language launch on Common Voice is the beginning of that language’s internet presence. These community efforts are proof that all languages — not just ones that can generate high revenue for technology companies — are worthy of representation.

We’ll continue working with these communities to ensure their voices are represented and even help make voice technology for themselves. In this spirit, we recently joined forces with the Deutsche Gesellschaft für Internationale Zusammenarbeit (GIZ) and co-hosted an ideation hackathon in Kigali to create a speech corpus for Kinyarwanda, laying the foundation for local technologists in Rwanda to develop open source voice technologies in their own language.

Improvements in the contribution experience, including optional profiles

The Common Voice Website is one of our main vehicles for building voice data sets that are useful for voice-interaction technology. The way it looks today is the result of an ongoing process of iteration. We listened to community feedback about the pain points of contributing while also conducting usability research to make contribution easier, more engaging, and fun.

People who contribute not only see progress per language in recording and validation, but also have improved prompts that vary from clip to clip; new functionality to review, re-record, and skip clips as an integrated part of the experience; the ability to move quickly between speak and listen; as well as a function to opt-out of speaking for a session.

We also added the option to create a saved profile, which allows contributors to keep track of their progress and metrics across multiple languages. Providing some optional demographic profile information also improves the audio data used in training speech recognition accuracy.

Common Voice started as a proof of concept prototype and has been collaboratively iterated over the past year

Empower decentralized product innovation: a marathon rather than a sprint

Mozilla aims to contribute to a more diverse and innovative voice technology ecosystem. Our goal is to both release voice-enabled products ourselves, while also supporting researchers and smaller players. Providing data through Common Voice is one part of this, as are the open source Speech-to-Text and Text-to-Speech engines and trained models through project DeepSpeech, driven by our Machine Learning Group.

We know this will take time, and we believe releasing early and working in the open can attract the involvement and feedback of technologists, organisations, and companies that will make these projects more relevant and robust. The current reality for both projects is that they are still in their research phase, with DeepSpeech making strong progress toward productization.

To date, with data from Common Voice and other sources, DeepSpeech is technically capable of converting speech to text with human accuracy and “live”, i.e. in realtime as the audio is being streamed. This allows transcription of lectures, phone conversations, television programs, radio shows, and other live streams, all as they are happening.

The DeepSpeech engine is already being used by a variety of non-Mozilla projects: for example in Mycroft, an open source voice-based assistant; in Leon, an open-source personal assistant; and in FusionPBX, a telephone switching system installed at and serving a private organization, to transcribe phone messages. In the future DeepSpeech will target smaller platform devices, such as smartphones and in-car systems, unlocking product innovation in and outside of Mozilla.

For Common Voice, our focus in 2018 was to build out the concept, make it a tool for any language community to use, optimise the website, and build a robust backend (for example, the accounts system). Over the coming months we will focus efforts on experimenting with different approaches to increase the quantity and quality of data we are able to collect, both through community efforts as well as new partnerships.

Our overall aim remains: Providing more and better data to everyone in the world who seeks to build and use voice technology. Because competition and openness are healthy for innovation. Because smaller languages are an issue of access and equity. Because privacy and control matter, especially over your voice.


Sharing our Common Voices was originally published in Mozilla Open Innovation on Medium.

Hacks.Mozilla.Org: Implications of Rewriting a Browser Component in Rust

The previous posts in this Fearless Security series examine memory safety and thread safety in Rust. This closing post uses the Quantum CSS project as a case study to explore the real world impact of rewriting code in Rust.

The style component is the part of a browser that applies CSS rules to a page. This is a top-down process on the DOM tree: given the parent style, the styles of children can be calculated independently—a perfect use-case for parallel computation. By 2017, Mozilla had made two previous attempts to parallelize the style system using C++. Both had failed.

Quantum CSS resulted from a need to improve page performance. Improving security is a happy byproduct.

Rewrites code to make it faster; also makes it more secure

There’s a large overlap between memory safety violations and security-related bugs, so we expected this rewrite to reduce the attack surface in Firefox. In this post, I will summarize the potential security vulnerabilities that have appeared in the styling code since Firefox’s initial release in 2002. Then I’ll look at what could and could not have been prevented by using Rust.

Over the course of its lifetime, there have been 69 security bugs in Firefox’s style component. If we’d had a time machine and could have written this component in Rust from the start, 51 (73.9%) of these bugs would not have been possible. While Rust makes it easier to write better code, it’s not foolproof.

Rust

Rust is a modern systems programming language that is type- and memory-safe. As a side effect of these safety guarantees, Rust programs are also known to be thread-safe at compile time. Thus, Rust can be a particularly good choice when:

✅ processing untrusted input safely.
✅ introducing parallelism to improve performance.
✅ integrating isolated components into an existing codebase.

However, there are classes of bugs that Rust explicitly does not address—particularly correctness bugs. In fact, during the Quantum CSS rewrite, engineers accidentally reintroduced a critical security bug that had previously been patched in the C++ code, regressing the fix for bug 641731. This allowed global history leakage via SVG image documents, resulting in bug 1420001. As a trivial history-stealing bug, this is rated security-high. The original fix was an additional check to see if the SVG document was being used as an image. Unfortunately, this check was overlooked during the rewrite.

While there were automated tests intended to catch :visited rule violations like this, in practice, they didn’t detect this bug. To speed up our automated tests, we temporarily turned off the mechanism that tested this feature—tests aren’t particularly useful if they aren’t run. The risk of re-implementing logic errors can be mitigated by good test coverage (and actually running the tests). There’s still a danger of introducing new logic errors.

As developer familiarity with the Rust language increases, best practices will improve. Code written in Rust will become even more secure. While it may not prevent all possible vulnerabilities, Rust eliminates an entire class of the most severe bugs.

Quantum CSS Security Bugs

Overall, bugs related to memory, bounds, null/uninitialized variables, or integer overflow would be prevented by default in Rust. The one miscellaneous bug would not have been prevented: it was a crash due to a failed allocation.
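
As a hedged illustration of the integer-overflow case (a generic example, not code from the style system): overflow in a Rust debug build aborts with a panic rather than silently wrapping, release builds wrap unless overflow checks are enabled, and the checked_* arithmetic methods make the failure explicit in every build:

    fn main() {
        let a: u8 = 250;

        // Debug builds panic on `a + 10`; release builds wrap unless overflow
        // checks are enabled. Either way the behavior is defined, not undefined.
        // let wrapped = a + 10;

        // checked_add returns None on overflow, forcing the caller to decide.
        match a.checked_add(10) {
            Some(sum) => println!("sum = {}", sum),
            None => println!("overflow detected"),
        }
    }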

Security bugs by category

All of the bugs in this analysis are related to security, but only 43 received official security classifications. (These are assigned by Mozilla’s security engineers based on educated “exploitability” guesses.) Normal bugs might indicate missing features or problems like crashes. While undesirable, crashes don’t result in data leakage or behavior modification. Official security bugs can range from low severity (highly limited in scope) to critical vulnerability (might allow an attacker to run arbitrary code on the user’s platform).

There’s a significant overlap between memory vulnerabilities and severe security problems. Of the 34 critical/high bugs, 32 were memory-related.

Security rated bug breakdown

Comparing Rust and C++ code

Bug 955914 is a heap buffer overflow in the GetCustomPropertyNameAt function. The code used the wrong variable for indexing, which resulted in interpreting memory past the end of the array. This could either crash while accessing a bad pointer or copy memory to a string that is passed to another component.

The ordering of all CSS properties (both longhand and custom) is stored in an array, mOrder. Each element is either represented by its CSS property value or, in the case of custom properties, by a value that starts at eCSSProperty_COUNT (the total number of non-custom CSS properties). To retrieve the name of a custom property, first, you have to retrieve the custom property value from mOrder, then access the name at the corresponding index of the mVariableOrder array, which stores the custom property names in order.

Vulnerable C++ code:

    void GetCustomPropertyNameAt(uint32_t aIndex, nsAString& aResult) const {
      MOZ_ASSERT(mOrder[aIndex] >= eCSSProperty_COUNT);

      aResult.Truncate();
      aResult.AppendLiteral("var-");
      aResult.Append(mVariableOrder[aIndex]);
    }

The problem occurs at line 6 when using aIndex to access an element of the mVariableOrder array. aIndex is intended for use with the mOrder array, not the mVariableOrder array. The corresponding element for the custom property represented by aIndex in mOrder is actually mOrder[aIndex] - eCSSProperty_COUNT.

Fixed C++ code:

    void GetCustomPropertyNameAt(uint32_t aIndex, nsAString& aResult) const {
      MOZ_ASSERT(mOrder[aIndex] >= eCSSProperty_COUNT);

      uint32_t variableIndex = mOrder[aIndex] - eCSSProperty_COUNT;
      aResult.Truncate();
      aResult.AppendLiteral("var-");
      aResult.Append(mVariableOrder[variableIndex]);
    }

Equivalent Rust code

While Rust is similar to C++ in some ways, idiomatic Rust uses different abstractions and data structures. Rust code will look very different from C++ (see below for details). First, let’s consider what would happen if we translated the vulnerable code as literally as possible:

    fn GetCustomPropertyNameAt(&self, aIndex: usize) -> String {
        assert!(self.mOrder[aIndex] >= self.eCSSProperty_COUNT);

        let mut result = "var-".to_string();
        result += &self.mVariableOrder[aIndex];
        result
    }

The Rust compiler would accept the code, since there is no way to determine the length of vectors before runtime. Unlike arrays, whose length must be known, the Vec type in Rust is dynamically sized. However, the standard library vector implementation has built-in bounds checking. When an invalid index is used, the program immediately terminates in a controlled fashion, preventing any illegal access.
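
As a minimal, generic illustration (not taken from the Quantum CSS sources): indexing a Vec with a bad index panics instead of reading past the end, and the get method turns the same failure into an explicit Option:

    fn main() {
        let names = vec!["--main-color".to_string(), "--accent".to_string()];

        // Checked access: returns None instead of touching out-of-bounds memory.
        match names.get(5) {
            Some(name) => println!("found {}", name),
            None => println!("index 5 is out of bounds"),
        }

        // Direct indexing with the same bad index would panic in a controlled
        // way, terminating the program rather than exposing adjacent memory:
        // let oops = &names[5];
    }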

The actual code in Quantum CSS uses very different data structures, so there’s no exact equivalent. For example, we use Rust’s powerful built-in data structures to unify the ordering and property name data. This allows us to avoid having to maintain two independent arrays. Rust data structures also improve data encapsulation and reduce the likelihood of these kinds of logic errors. Because the code needs to interact with C++ code in other parts of the browser engine, the new GetCustomPropertyNameAt function doesn’t look like idiomatic Rust code. It still offers all of the safety guarantees while providing a more understandable abstraction of the underlying data.
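
For illustration only (the real Servo data structures are more involved, and the names below are hypothetical), a single vector of an enum can keep each property's identity and, for custom properties, its name together, removing the parallel-array index arithmetic entirely:

    // Hypothetical sketch; these types do not match the actual Quantum CSS code.
    enum DeclaredProperty {
        Longhand(u32),    // a built-in CSS property id
        Custom(String),   // a custom property, carrying its own name
    }

    fn custom_property_name_at(order: &[DeclaredProperty], index: usize) -> Option<String> {
        match order.get(index) {
            Some(DeclaredProperty::Custom(name)) => Some(format!("var-{}", name)),
            _ => None,
        }
    }

    fn main() {
        let order = vec![
            DeclaredProperty::Longhand(0),
            DeclaredProperty::Custom("main-color".to_string()),
        ];
        assert_eq!(
            custom_property_name_at(&order, 1),
            Some("var-main-color".to_string())
        );
    }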

tl;dr

Due to the overlap between memory safety violations and security-related bugs, we can say that Rust code should result in fewer critical CVEs (Common Vulnerabilities and Exposures). However, even Rust is not foolproof. Developers still need to be aware of correctness bugs and data leakage attacks. Code review, testing, and fuzzing still remain essential for maintaining secure libraries.

Compilers can’t catch every mistake that programmers can make. However, Rust has been designed to remove the burden of memory safety from our shoulders, allowing us to focus on logical correctness and soundness instead.

The Mozilla Blog: Sharing our Common Voices – Mozilla releases the largest to-date public domain transcribed voice dataset

Mozilla crowdsources the largest dataset of human voices available for use, including 18 different languages, adding up to almost 1,400 hours of recorded voice data from more than 42,000 contributors.

From the outset, our vision for Common Voice has been to build the world’s most diverse voice dataset, optimized for building voice technologies. We also made a promise of openness: we would make the high-quality transcribed voice data we collect publicly available to startups, researchers, and anyone interested in voice-enabled technologies.

Today, we’re excited to share our first multi-language dataset with 18 languages represented, including English, French, German and Mandarin Chinese (Traditional), as well as, for example, Welsh and Kabyle. Altogether, the new dataset includes approximately 1,400 hours of voice clips from more than 42,000 people.

With this release, the continuously growing Common Voice dataset is now the largest ever of its kind, with tens of thousands of people contributing their voices and original written sentences to the public domain (CC0). Moving forward, the full dataset will be available for download on the Common Voice site.

 

Data Qualities

The Common Voice dataset is unique not only in its size and licence model but also in its diversity, representing a global community of voice contributors. Contributors can opt in to providing metadata like their age, sex, and accent so that their voice clips are tagged with information useful in training speech engines.

This approach differs from other publicly available datasets, which are either hand-crafted to be diverse (e.g. an equal number of men and women) or only as diverse as the “found” data they are built from (e.g. the TEDLIUM corpus from TED talks is roughly 3x men to women).

More Common Voices: from 3 to 22 languages in 8 months

Since we enabled multi-language support in June 2018, Common Voice has grown to be more global and more inclusive. This has surpassed our expectations: Over the last eight months, communities have enthusiastically rallied around the project, launching data collection efforts in 22 languages with an incredible 70 more in progress on the Common Voice site.

Common Voice is a community-driven project: people around the world who care about having a voice dataset in their language have been responsible for each new launch — some are passionate volunteers, some do this as part of their day jobs as linguists or technologists. Each new launch requires translating the website to allow contributions and adding sentences to be read.

Our latest additions include Dutch, Hakha-Chin, Esperanto, Farsi, Basque, and Spanish. In some cases, a new language launch on Common Voice is the beginning of that language’s internet presence. These community efforts are proof that all languages—not just ones that can generate high revenue for technology companies—are worthy of representation.

We’ll continue working with these communities to ensure their voices are represented and even to help them build voice technology for themselves. In this spirit, we recently joined forces with the Deutsche Gesellschaft für Internationale Zusammenarbeit (GIZ) and co-hosted an ideation hackathon in Kigali to create a speech corpus for Kinyarwanda, laying the foundation for local technologists in Rwanda to develop open source voice technologies in their own language.

Improvements in the contribution experience, including optional profiles

The Common Voice Website is one of our main vehicles for building voice data sets that are useful for voice-interaction technology. The way it looks today is the result of an ongoing process of iteration. We listened to community feedback about the pain points of contributing while also conducting usability research to make contribution easier, more engaging, and fun.

People who contribute not only see progress per language in recording and validation, but also get improved prompts that vary from clip to clip; new functionality to review, re-record, and skip clips as an integrated part of the experience; the ability to move quickly between speaking and listening; and the option to opt out of speaking for a session.

We also added the option to create a saved profile, which allows contributors to keep track of their progress and metrics across multiple languages. Providing some optional demographic profile information also makes the audio data more useful for training accurate speech recognition engines.

 

Common Voice started as a proof of concept prototype and has been collaboratively iterated over the past year

Empower decentralized product innovation: a marathon rather than a sprint

Mozilla aims to contribute to a more diverse and innovative voice technology ecosystem. Our goal is both to release voice-enabled products ourselves and to support researchers and smaller players. Providing data through Common Voice is one part of this, as are the open source Speech-to-Text and Text-to-Speech engines and trained models we release through project DeepSpeech, driven by our Machine Learning Group.

We know this will take time, and we believe releasing early and working in the open can attract the involvement and feedback of technologists, organisations, and companies that will make these projects more relevant and robust. The current reality for both projects is that they are still in their research phase, with DeepSpeech making strong progress toward productization.

To date, with data from Common Voice and other sources, DeepSpeech is technically capable of converting speech to text with human accuracy and “live”, i.e. in real time as the audio is being streamed. This allows transcription of lectures, phone conversations, television programs, radio shows, and other live streams as they are happening.

The DeepSpeech engine is already being used by a variety of non-Mozilla projects: for example in Mycroft, an open source voice-based assistant; in Leon, an open-source personal assistant; and in FusionPBX, a telephone switching system installed at a private organization, where it is used to transcribe phone messages. In the future, DeepSpeech will target smaller platform devices, such as smartphones and in-car systems, unlocking product innovation in and outside of Mozilla.

For Common Voice, our focus in 2018 was to build out the concept, make it a tool for any language community to use, optimise the website, and build a robust backend (for example, the accounts system). Over the coming months we will focus efforts on experimenting with different approaches to increase the quantity and quality of data we are able to collect, both through community efforts as well as new partnerships.

Our overall aim remains the same: to provide more and better data to everyone in the world who seeks to build and use voice technology. Because competition and openness are healthy for innovation. Because smaller languages are an issue of access and equity. Because privacy and control matter, especially over your voice.

The post Sharing our Common Voices – Mozilla releases the largest to-date public domain transcribed voice dataset appeared first on The Mozilla Blog.

Mozilla GFX: WebRender newsletter #41

Welcome to episode 41 of WebRender’s newsletter.

WebRender is a GPU-based 2D rendering engine for the web, written in Rust. It currently powers Mozilla’s research web browser Servo and is on its way to becoming Firefox’s rendering engine.

Today’s highlights are two big performance improvements by Kvark and Sotaro. I’ll let you follow the links below if you are interested in the technical details.
I think Sotaro’s fix nicely illustrates the importance of rolling out this type of project progressively, one hardware/OS configuration at a time, giving us the time and opportunity to observe and address each configuration’s strengths and quirks.

Notable WebRender and Gecko changes

  • Kvark rewrote the mixed blend mode rendering code, yielding great performance improvements on some sites.
  • Kats fixed another clipping problem affecting blurs.
  • Kats fixed scaling of blurs.
  • Glenn fixed a clip mask regression.
  • Glenn added some picture cache testing infrastructure.
  • Nical landed a series of small CPU optimizations.
  • Nical reduced the cost of hashing and copying font instances.
  • Nical changed how the tiling origin of blob images is computed.
  • Sotaro greatly improved the performance of picture caching on Windows with Intel GPUs.
  • Sotaro improved the performance of canvas rendering.
  • Sotaro fixed empty windows with GDK_BACKEND=wayland.
  • Sotaro fixed empty popups with GDK_BACKEND=wayland.
  • Jamie improved the performance of texture uploads on Adreno GPUs.

Enabling WebRender in Firefox Nightly

In about:config, enable the pref gfx.webrender.all and restart the browser.

Reporting bugs

The best place to report bugs related to WebRender in Firefox is the Graphics :: WebRender component in bugzilla.

Note that it is possible to log in with a github account.

Using WebRender in a Rust project

WebRender is available as a standalone crate on crates.io (documentation).

Emily Dunham: When searching an error fails

This blog has seen a dearth of posts lately, in part because my standard post formula is “a public thing had a poorly documented problem whose solution seems worth exposing to search engines”. In my present role, the tools I troubleshoot are more often private or so local that the best place to put such docs has been an internal wiki or their own READMEs.

This change of ecosystem has caused me to spend more time addressing a different kind of error: Those which one really can’t just Google.

Sometimes, especially if it’s something that worked fine on another system and is mysteriously not working any more, the problem can be forehead-slappingly obvious in retrospect. Here are some general steps to catch an “oops, that was obvious” fix as soon as possible.

Find the command that yielded the error

First, I identify what tool I’m trying to use. Ops tools are often an amalgam of several disparate tools glued together by a script or automation. This alias invokes that call to SSH, this internal tool wraps that API with an appropriate set of arguments by ascertaining them from its context. If I think that SSH, or the API, is having a problem, the first troubleshooting step is to figure out exactly what my toolchain fed into it. Then I can run that from my own terminal, and either observe a more actionable error or have something that can be compared against some reliable documentation.

Wrappers often elide some or all of the actual error messages that they receive. I ran into this quite recently when a multi-part shell command run by a script was silently failing, but running the ssh portion of that command in isolation yielded a helpful and familiar error that prompted me to add the appropriate key to my ssh-agent, which in turn allowed the entire script to run properly.

Make sure the version “should” work

Identifying the tool also lets me figure out where that tool’s source lives. Finding the source is essential for the next troubleshooting steps that I take:

$ which toolname
$ toolname -version #

I look for hints about whether the version of the tool that I’m using is supposed to be able to do the thing I’m asking it to do. Sometimes my version of the tool might be too new. This can be the case when the dates on all the docs that suggest it’s supposed to work the way it’s failing are more than a year or so old. If I suspect I might be on too new a version, I can find a list of releases near the tool’s source and try one from around the date of the docs.

More often, my version of a custom tool has fallen behind. If the date of the docs claiming the tool should work is recent, and the date of my local version is old, updating is an obvious next step.

If the tool was installed in a way other than my system package manager, I also check its README for hints about the versions of any dependencies it might expect, and make sure that it has those available on the system I’m running it from.

Look for interference from settings

Once I have something that seems like the right version of the tool, I check the way its README or other docs looked as of the installed version, and note any config files that might be informing its behavior. Some tooling cares about settings in an individual file; some cares about certain environment variables; some cares about a dotfile nearby on the file system; some cares about configs stored somewhere in the homedir of the user invoking it. Many heed several of the above, usually prioritizing the nearest (env vars and local settings) over the more distant (system-wide settings).

Check permissions

Issues where the user running a script has inappropriate permissions are usually obvious on the local filesystem, but verifying that you’re trying to do a task as a user allowed to do it is more complicated in the cloud. Especially when trying to do something that’s never worked before, it can be helpful to attempt to do the same task as your script manually through the cloud service’s web interface. If it lets you, you narrow down the possible sources of the problem; if it fails, it often does so with a far more human-friendly message than when you get the same failure through an API.

Trace the error through the source

I know where the error came from, I have the right versions of the tool and its dependencies, no settings are interfering with the tool’s operation, and permissions are set such that the tool should be able to succeed. When all this normal, generic troubleshooting has failed, it’s time to trace the error through the tool’s source.

This is straightforward when I’m fortunate enough to have a copy of that source: I pick some string from the error message that looks like it’ll always be the same for that particular error, and search it in the source. If there are dozens of hits, either the tool is aflame with technical debt or I picked a bad search string.

Locating what ran right before things broke leads to the part of the source that encodes the particular assumptions that the program makes about its environment, which can sometimes point out that I failed to meet one. Sometimes, I find that the error looked unfamiliar because it was actually escalated from some other program wrapped by the tool that showed it to me, in which case I restart this troubleshooting process from the beginning on that tool.

Sometimes, when none of the aforementioned problems is to blame, I discover that the problem arose from a mismatch between documentation and the program’s functionality. In these cases, it’s often the docs that were “right”, and the proper solution is to point out the issue to the tool’s developers and possibly offer a patch. When the code’s behavior differs from the docs’ claims, a patch to one or the other is always necessary.

The Rust Programming Language Blog: Announcing Rust 1.33.0

The Rust team is happy to announce a new version of Rust, 1.33.0. Rust is a programming language that is empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, getting Rust 1.33.0 is as easy as:

$ rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.33.0 on GitHub.

What's in 1.33.0 stable

The two largest features in this release are significant improvements to const fns, and the stabilization of a new concept: "pinning."

const fn improvements

With const fn, you can now do way more things! Specifically:

  • irrefutable destructuring patterns (e.g. const fn foo((x, y): (u8, u8)) { ... })
  • let bindings (e.g. let x = 1;)
  • mutable let bindings (e.g. let mut x = 1;)
  • assignment (e.g. x = y) and assignment operator (e.g. x += y) expressions, even where the assignment target is a projection (e.g. a struct field or index operation like x[3] = 42)
  • expression statements (e.g. 3;)
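
Putting a few of these together, here is a minimal sketch (not from the release notes) of a const fn that relies on the newly allowed argument destructuring, let bindings, mutation, and assignment operators:

const fn dot((ax, ay): (i32, i32), (bx, by): (i32, i32)) -> i32 {
    // Destructured arguments, a mutable let binding, and `+=` are all
    // accepted inside const fn as of 1.33.
    let mut sum = ax * bx;
    sum += ay * by;
    sum
}

// Evaluated at compile time.
const D: i32 = dot((1, 2), (3, 4));

fn main() {
    assert_eq!(D, 11);
}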

You're also able to call const unsafe fns inside a const fn, like this:

const unsafe fn foo() -> i32 { 5 }
const fn bar() -> i32 {
    unsafe { foo() }
}

With these additions, many more functions in the standard library are able to be marked as const. We'll enumerate those in the library section below.

Pinning

This release introduces a new concept for Rust programs, implemented as two types: the std::pin::Pin<P> type, and the Unpin marker trait. The core idea is elaborated on in the docs for std::pin:

It is sometimes useful to have objects that are guaranteed to not move, in the sense that their placement in memory does not change, and can thus be relied upon. A prime example of such a scenario would be building self-referential structs, since moving an object with pointers to itself will invalidate them, which could cause undefined behavior.

A Pin<P> ensures that the pointee of any pointer type P has a stable location in memory, meaning it cannot be moved elsewhere and its memory cannot be deallocated until it gets dropped. We say that the pointee is "pinned".

This feature will largely be used by library authors, and so we won't talk a lot more about the details here. Consult the docs if you're interested in digging into the details. However, the stabilization of this API is important to Rust users generally because it is a significant step forward towards a highly anticipated Rust feature: async/await. We're not quite there yet, but this stabilization brings us one step closer. You can track all of the necessary features at areweasyncyet.rs.
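
As a small illustration (assuming only the std::pin API described above, not code from the release post), Box::pin produces a Pin<Box<T>>, and for Unpin types like i32 the pin imposes no extra restrictions:

use std::pin::Pin;

fn main() {
    // Box::pin allocates the value on the heap and returns a pinned pointer.
    let mut value: Pin<Box<i32>> = Box::pin(5);

    // i32 implements Unpin, so a mutable reference is still available through the pin.
    *value.as_mut().get_mut() += 1;

    // The pinned pointer dereferences like a normal Box.
    assert_eq!(*value, 6);
}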

Import as _

You can now import an item as _. This allows you to import a trait's impls, and not have the name in the namespace. e.g.

use std::io::Read as _;

// Allowed as there is only one `Read` in the module.
pub trait Read {}

See the detailed release notes for more details.

Library stabilizations

A number of existing standard library functions have been made const in this release, and several additional APIs have been stabilized. See the detailed release notes for the full lists.

Cargo features

Cargo should now rebuild a crate if a file was modified during the initial build.

See the detailed release notes for more.

Crates.io

As previously announced, coinciding with this release, crates.io will require that you have a verified email address to publish. Starting at 2019-03-01 00:00 UTC, if you don't have a verified email address and run cargo publish, you'll get an error.

This ensures we can comply with DMCA procedures. If you haven't heeded the warnings cargo printed during the last release cycle, head on over to crates.io/me to set and verify your email address. This email address will never be displayed publicly and will only be used for crates.io operations.

Contributors to 1.33.0

Many people came together to create Rust 1.33.0. We couldn't have done it without all of you. Thanks!

The Firefox Frontier: When an internet emergency strikes

Research shows that we spend more time on phones and computers than with friends. This means we’re putting out more and more information for hackers to grab. It’s better to … Read more

The post When an internet emergency strikes appeared first on The Firefox Frontier.

Mozilla Addons Blog: Design and create themes for Firefox

Last September, we announced the next major evolution in themes for Firefox. With the adoption of static themes, you can now go beyond customizing the header of the browser and easily modify the appearance of the browser’s tabs and toolbar, and choose to distribute your theme publicly or keep it private for your own personal use. If you would like to learn about how to take advantage of these new features or are looking for an updated tutorial on how to create themes, you have come to the right place!

Designing themes doesn’t have to be complicated. The theme generator on AMO allows users to create a theme within minutes. You may enter hex, rgb, or rgba values or use the color selector to pick your preferred colors for the header, toolbar, and text. You will also need to provide an image which will be aligned to the top-right. It may appear to be simple, and that’s because it is!

If you want to test what your theme will look like before you submit it to AMO, the extension Firefox Color will enable you to preview changes in real-time, add multiple images, make finer adjustments, and more. You will also be able to export the theme you create on Firefox Color.

If you want to create a more detailed theme, you can use the static theme approach to create a theme XPI and make further modifications to the new tab background, sidebar, icons, and more. Visit the theme syntax and properties page for further details.

When your theme is generated, visit the Developer Hub to upload it for signing. The process of uploading a theme is similar to submitting an extension. If you are using the theme generator, you will not be required to upload a packaged file. In any case, you will need to decide whether you would like to share your design with the world on addons.mozilla.org, self-distribute it, or keep it for yourself. To keep a theme for yourself or to self-distribute, be sure to select “On your own” when uploading your theme.

Whether you are creating and distributing themes for the public or simply creating themes for private enjoyment, we all benefit by having an enhanced browsing experience. With the theme generator on AMO and Firefox Color, you can easily create multiple themes and switch between them.

The post Design and create themes for Firefox appeared first on Mozilla Add-ons Blog.

Frédéric Wang: Review of Igalia’s Web Platform activities (H2 2018)

This blog post reviews Igalia’s activity around the Web Platform, focusing on the second semester of 2018.

Projects

MathML

During 2018 we have continued discussions to implement MathML in Chromium with Google and people interested in math layout. The project was finally launched early this year and we have encouraging progress. Stay tuned for more details!

Javascript

As mentioned in the previous report, Igalia has proposed and developed the specification for BigInt, enabling math on arbitrary-sized integers in JavaScript. We’ve continued to land patches for BigInt support in SpiderMonkey and JSC. For the latter, you can watch this video demonstrating the current support. Currently, support in both engines is behind a preference flag, but we hope to enable it by default once we are done polishing the implementations. We also added support for BigInt to several Node.js APIs (e.g. fs.Stat or process.hrtime.bigint).

Regarding “object-oriented” features, we submitted patches for private and public instance field support to JSC, and they are pending review. At the same time, we are working on private methods for V8.

We contributed other nice features to V8, such as a spec change for template strings and the iterator protocol, support for Object.fromEntries and Symbol.prototype.description, and miscellaneous optimizations.

At TC39, we maintained or developed many proposals (BigInt, class fields, private methods, decorators, …) and led the ECMAScript Internationalization effort. Additionally, at the WebAssembly Working Group we edited the WebAssembly JS and Web API specification and an early version of the WebAssembly/ES Module integration specification.

Last but not least, we contributed various conformance tests to test262 and Web Platform Tests to ensure interoperability between the various features mentioned above (BigInt, Class fields, Private methods…). In Node.js, we worked on the new Web Platform Tests driver with update automation and continued porting and fixing more Web Platform Tests in Node.js core.

Outside of Node.js core, we implemented the initial JavaScript API for llnode, a Node.js/V8 plugin for the LLDB debugger.

Accessibility

Igalia has continued its involvement at the W3C, where we achieved several accessibility-related milestones.

We are also collaborating with Google to implement ATK support in Chromium. This work will make it possible for users of the Orca screen reader to use Chrome/Chromium as their browser. During H2 we began implementing the foundational accessibility support. During H1 2019 we will continue this work. It is our hope that sufficient progress will be made during H2 2019 for users to begin using Chrome with Orca.

Web Platform Predictability

On Web Platform Predictability, we’ve continued our collaboration with AMP to do bug fixes and implement new features in WebKit. You can read a review of the work done in 2018 on the AMP blog post.

We have worked on a lot of interoperability issues related to editing and selection thanks to financial support from Bloomberg. For example when deleting the last cell of a table some browsers keep an empty table while others delete the whole table. The latter can be problematic, for example if users press backspace continuously to delete a long line, they can accidentally end up deleting the whole table. This was fixed in Chromium and WebKit.

Another issue is that style is lost when transforming some text into list items. When running execCommand() with insertOrderedList/insertUnorderedList on some styled paragraph, the new list item loses the original text’s style. This behavior is not interoperable and we have proposed a fix so that Firefox, Edge, Safari and Chrome behave the same for this operation. We landed a patch for Chromium. After discussion with Apple, it was decided not to implement this change in Safari as it would break some iOS rich text editor apps, mismatching the required platform behavior.

We have also been working on CSS Grid interoperability. We imported Web Platform Tests into WebKit (cf. bugs 191515 and 191369), while at the same time completing missing features and fixing bugs so that browsers using WebKit are interoperable, passing 100% of the Grid test suite. For details, see 191358, 189582, 189698, 191881, 191938, 170175, 191473 and 191963. Last but not least, we are exporting more than 100 internal browser tests to the Web Platform test suite.

CSS

Bloomberg is supporting our work to develop new CSS features. One of the new exciting features we’ve been working on is CSS Containment. The goal is to improve the rendering performance of web pages by isolating a subtree from the rest of the document. You can read details on Manuel Rego’s blog post.

Regarding CSS Grid Layout, we’ve continued our maintenance duties, triaging the Chromium and WebKit bug trackers and fixing the most severe bugs. One change with impact on end users was related to how percentage row tracks and gaps work in grid containers with indefinite size; the latest spec resolution was implemented in both Chromium and WebKit. We are finishing level 1 of the specification, which still has some missing or incomplete features. First, we’ve been working on the new Baseline Alignment algorithm (cf. CSS WG issues 1039, 1365 and 1409). We fixed related issues in Chromium and WebKit. Similarly, we’ve worked on the Content Alignment logic (see CSS WG issue 2557) and resolved a bug in Chromium. The new algorithm for baseline alignment caused an important performance regression for certain resizing use cases, so we fixed it with some performance optimizations that landed in Chromium.

We have also worked on various topics related to CSS Text 3. We’ve fixed several bugs to increase the pass rate for the Web Platform test suite in Chromium, such as bugs 854624, 900727 and 768363. We are also working on a new CSS value ‘break-spaces’ for the ‘white-space’ property. For details, see the CSS WG discussions: issue 2465 and pull request. We implemented this new value in Chromium under a CSSText3BreakSpaces flag. Additionally, we are currently porting this implementation to Chromium’s new layout engine, ‘LayoutNG’. We have plans to implement this feature in WebKit during the second semester.

Multimedia

  • WebRTC: The libwebrtc branch is now upstreamed in WebKit and has been tested with popular servers.
  • Media Source Extensions: WebM MSE support is upstreamed in WebKit.
  • We implemented basic support for <video> and <audio> elements in Servo.

Other activities

Web Engines Hackfest 2018

Last October, we organized the Web Engines Hackfest at our A Coruña office. It was a great event with about 70 attendees from all the web engines; thank you to all the participants! As usual, you can find more information on the event wiki, including links to slides and videos of the talks.

TPAC 2018

Again in October, but this time in Lyon (France), 12 people from Igalia attended TPAC and participated in several discussions across the different meetings. Igalia had a booth there showcasing several demos of our latest developments running on top of WPE (a WebKit port for embedded devices). Finally, Manuel Rego gave a talk at the W3C Developers Meetup about how to contribute to CSS.

This.Javascript: State of Browsers

In December, we also joined other browser developers in the online This.Javascript: State of Browsers event organized by ThisDot. We talked more specifically about the current work in WebKit.

New Igalians

We are excited to announce that new Igalians are joining us to continue our Web platform effort:

  • Cathie Chen, a Chinese engineer with about 10 years of experience working on browsers. Among other contributions to Chromium, she worked on the new LayoutNG code and added support for list markers.

  • Caio Lima, a Brazilian developer who recently graduated from the Federal University of Bahia. He participated in our coding experience program and notably worked on BigInt support in JSC.

  • Oriol Brufau, a recent graduate in math from Barcelona who is also involved in the CSSWG and the development of various browser engines. He participated in our coding experience program and implemented CSS Logical Properties and Values in WebKit and Chromium.

Coding Experience Programs

Last fall, Sven Sauleau joined our coding experience program and started to work on various BigInt/WebAssembly improvements in V8.

Conclusion

We are thrilled with the web platform achievements we made last semester and we look forward to more work on the web platform in 2019!