Marco Castelluccio: Searchfox in Phabricator extension

Being able to search code while reviewing can be really useful, but unfortunately it’s not so straightforward. Many people resort to loading the patch under review in an IDE in order to be able to search code.

Being able to do it directly in the browser can make the workflow much smoother.

To support this use case, I’ve built an extension for Phabricator that integrates Searchfox code search functionality directly in Phabricator differentials. This way reviewers can benefit from hovers, go-to-definition and find-references without having to resort to an IDE or manually navigate to the code on searchfox.org or dxr.mozilla.org. Moreover, compared to searchfox.org or dxr.mozilla.org, the extension highlights both the pre-patch view and the post-patch view, so reviewers can see how pre-existing variables/functions are used after the patch.

To summarize, the features of the extension currently are:

  1. Highlighting keywords when you hover them, both in the pre-patch and in the post-patch view;
  2. When you click a keyword, it offers options to search for the definition, callers, and so on (the results are opened on Searchfox in a new tab).

Here’s a screenshot from the extension in action:

Figure 1: Screenshot of Searchfox in Phabricator.

I’m planning to add support for sticky highlighting and blame information (when hovering on the line number on the left side). Indeed, being able to look at the past history of a line is another feature often sought after by reviewers.

You can find the extension on AMO, at https://addons.mozilla.org/addon/searchfox-phabricator/.

The source code, admittedly not great as it was written as an experiment, lives at https://github.com/marco-c/mozsearch-phabricator-addon.

Should you find any issues, please file them on https://github.com/marco-c/mozsearch-phabricator-addon/issues.

Mozilla Future Releases Blog: Searching Made Faster, the Latest Firefox Exploration

Search is one of the most common activities that people do whenever they go online. At Mozilla, we are always looking for ways to streamline that experience to make it fast, easy and convenient for our users.

Our Firefox browser provides a variety of options for people to search for the things and information they seek on the web, so we want to make search even easier. For instance, there are two search boxes on every home or new tab page – one is what we call the “awesome bar,” also known as the URL bar, and the other is the search box on the home and new tab pages themselves.

In the awesome bar, users can shortcut their queries by simply entering a predefined keyword (like @google) followed by the actual search term they are seeking, whether it’s the nearest movie theater location and times for the latest blockbuster movie or a sushi restaurant close to their current location. These Search Keywords have been part of the browser experience for years, yet they’re not commonly known. Here’s how to enable them: go to “Preferences,” then “Search,” and check “One-Click Search Engines.”

This brings us back to why we started our latest refinement: Search shortcuts, which are starting to roll out to US users today.

How does it work?

We are getting one step closer to making the search experience even faster and more straightforward. Users in the US will start to see Google and Amazon as pinned top sites, called “Search shortcuts”. Tapping on these top sites redirects the user to the awesome bar and automatically fills in the corresponding keyword for the search engine. Typing any search term or phrase after the keyword “@google” or “@amazon” and hitting enter will result in searching for the term in Google or Amazon accordingly, without having to wait for a page to load.

These shortcuts are easy to manage right from the new tab page, so you can add or remove them as you please.  To remove the default search shortcuts, simply click on the dots icon and select “unpin.” If you have a search engine you’d rather have listed, click on the three dots on the right side of your Top Sites section and select “Add search engine.”

What to expect next

We are currently exploring how to expand this utility outside of the US. We expect to learn a great deal in the coming weeks by analyzing the user sentiment and usage of the new feature. User feedback and comments will help us determine next steps and future improvements.

In the spirit of full transparency that Mozilla has always stood for, we anticipate that some of these search queries may fall under the agreements with Google and Amazon, and bring business value to the company. Not only are users benefiting from a new utility, they are also helping Mozilla’s financial sustainability.

In the meantime, check out and download the latest version of Firefox Quantum for the desktop in order to use the Search Shortcuts feature when it becomes available.

Download Firefox for Windows, Mac, Linux

The post Searching Made Faster, the Latest Firefox Exploration appeared first on Future Releases.

Hacks.Mozilla.Org: Dweb: Decentralised, Real-Time, Interoperable Communication with Matrix

In the Dweb series, we are covering projects that explore what is possible when the web becomes decentralized or distributed. These projects aren’t affiliated with Mozilla, and some of them rewrite the rules of how we think about a web browser. What they have in common: These projects are open source and open for participation, and they share Mozilla’s mission to keep the web open and accessible for all.

While Scuttlebutt is person-centric and IPFS is document-centric, today you’ll learn about Matrix, which is all about messages. Instead of inventing a whole new stack, they’ve leaned on some familiar parts of the web today – HTTP as a transport, and JSON for the message format. How those messages get around is what distinguishes it – a system of decentralized servers, designed with interoperability in mind from the beginning, and an extensibility model for adapting to different use-cases. Please enjoy this introduction from Ben Parsons, developer advocate for Matrix.org (https://matrix.org).

– Dietrich Ayala

What is Matrix?

Matrix is an open standard for interoperable, decentralised, real-time communication over the Internet. It provides a standard HTTP API for publishing and subscribing to real-time data in specified channels, which means it can be used to power Instant Messaging, VoIP/WebRTC signalling, Internet of Things communication, and anything else that can be expressed as JSON and needs to be transmitted in real-time over HTTP. The most common use of Matrix today is as an Instant Messaging platform.

  • Matrix is interoperable in that it follows an open standard and can freely communicate with other platforms. Matrix messages are JSON, and easy to parse. Bridges are provided to enable communication with other platforms.
  • Matrix is decentralised – there is no central server. To communicate on Matrix, you connect your client to a single “homeserver” – this server then communicates with other homeservers. For every room you are in, your homeserver will maintain a copy of the history of that room. This means that no one homeserver is the host or owner of a room if there is more than one homeserver connected to it. Anyone is free to host their own homeserver, just as they would host their own website or email server.

Why create another messaging platform?

The initial goal is to fix the problem of fragmented IP communications: letting users message and call each other without having to care what app the other user is on – making it as easy as sending an email.

In future, we want to see Matrix used as a generic HTTP messaging and data synchronization system for the whole web, enabling IoT and other applications through a single unified, understandable interface.

What does Matrix provide?

Matrix is an Open Standard, with a specification that describes the interaction of homeservers, clients and Application Services that can extend Matrix.

There are reference implementations of clients, servers and SDKs for various programming languages.

Architecture

You connect to Matrix via a client. Your client connects to a single server – this is your homeserver. Your homeserver stores and provides history and account information for the connected user, and room history for rooms that user is a member of. To sign up, you can find a list of public homeservers at hello-matrix.net, or if using Riot as your client, the client will suggest a default location.

Homeservers synchronize message history with other homeservers. In this way, your homeserver is responsible for storing the state of rooms and providing message history.

Let’s take a look at an example of how this works. Homeservers and clients are connected as in the diagram in figure 1.

Figure 1. Homeservers with clients

Figure 2. Private vs shared homeservers

If we join a homeserver (Figure 3), that means we are connecting our client to an account on that homeserver.

Figure 3. Joining a homeserver

Now we send a message. This message is sent into a room specified by our client, and given an event id by the homeserver.

Figure 4. Sending a message
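The event itself is plain JSON. As a rough illustration (the identifiers below are invented for this example, not taken from the post), a text message event looks something like this:

{
  "type": "m.room.message",
  "sender": "@alice:matrix.org",
  "room_id": "!abcdefg:matrix.org",
  "event_id": "$1538557296abcdef:matrix.org",
  "origin_server_ts": 1538557296000,
  "content": {
    "msgtype": "m.text",
    "body": "Hello from Matrix!"
  }
}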

Our homeserver sends the message event to every homeserver that has one of its user accounts in the room. It also sends the event to every local client in the room. (Figure 5.)

Figure 5. Homeserver message propagation

Finally, the remote homeservers send the message event to their clients which are in the appropriate room.

Figure 6. Message delivery

Usage Example – simple chatbot

Let’s use the matrix-js-sdk to create a small chatbot, which listens in a room and responds back with an echo.

Make a new directory, install matrix-js-sdk and let’s get started:

mkdir my-bot
cd my-bot
npm install matrix-js-sdk
touch index.js

Now open index.js in your editor. We first create a client instance; this connects our client to our homeserver:

var sdk = require('matrix-js-sdk');

const client = sdk.createClient({
  baseUrl: "https://matrix.org",
  accessToken: "....MDAxM2lkZW50aWZpZXIga2V5CjAwMTBjaWQgZ2Vu....",
  userId: "@USERID:matrix.org"
});

The baseUrl parameter should match the homeserver of the user attempting to connect.

Access tokens are associated with an account, and provide full read/write access to all rooms available to that user. You can obtain an access token using Riot, by going to the settings page.

It’s also possible to get a token programmatically if the server supports it. To do this, create a new client with no authentication parameters, then call client.login() with "m.login.password":

const passwordClient = sdk.createClient("https://matrix.org");
passwordClient.login("m.login.password", {"user": "@USERID:matrix.org", "password": "hunter2"}).then((response) => {
  console.log(response.access_token);
});

With this access_token, you can now create a new client as in the previous code snippet. It’s recommended that you save the access_token for re-use.
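For instance, here’s a minimal Node sketch of persisting it (the token.json file name and shape are my own choice, not part of the SDK):

var fs = require('fs');

passwordClient.login("m.login.password", {"user": "@USERID:matrix.org", "password": "hunter2"}).then((response) => {
  // persist the token so later runs can skip the password login
  fs.writeFileSync("token.json", JSON.stringify({ accessToken: response.access_token }));
});

On the next run, read token.json back and pass its accessToken to sdk.createClient() as in the first snippet.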

Next we start the client, and perform a first sync, to get the latest state from the homeserver:

client.startClient({});
client.once('sync', function(state, prevState, res) {
  console.log(state); // state will be 'PREPARED' when the client is ready to use
});

We listen to events from the rooms we are subscribed to:

client.on("Room.timeline", function(event, room, toStartOfTimeline) {
  handleEvent(event);
});

Finally, we respond to the events by echoing back messages starting with “!”:

function handleEvent(event) {
  // we know we only want to respond to messages
  if (event.getType() !== "m.room.message") {
    return;
  }

  // we are only interested in messages which start with "!"
  if (event.getContent().body[0] === '!') {
    // create an object with everything after the "!"
    var content = {
      "body": event.getContent().body.substring(1),
      "msgtype": "m.notice"
    };
    // send the message back to the room it came from
    client.sendEvent(event.getRoomId(), "m.room.message", content, "", (err, res) => {
      console.log(err);
    });
  }
}

Learn More

The best place to come and find out more about Matrix is on Matrix itself! The absolute quickest way to participate in Matrix is to use Riot, a popular web-based client. Head to <https://riot.im/app>, sign up for an account and join the #matrix:matrix.org room to introduce yourself.

matrix.org has many resources, including the FAQ and Guides sections.

Finally, to get stuck straight into the code, take a look at the Matrix Spec, or get involved with the many Open-Source projects.

The post Dweb: Decentralised, Real-Time, Interoperable Communication with Matrix appeared first on Mozilla Hacks - the Web developer blog.

Chris H-C: Going from New Laptop to Productive Mozillian


My old laptop had so many great stickers on it I didn’t want to say goodbye. So I put off my hardware refresh cycle from the recommended 2 years to almost 3.

To tell the truth, it wasn’t only the stickers that made me wary of switching. I had a workflow that worked. The system wasn’t slow. It was only three years old.

But then Windows started crashing on me during video calls. And my Firefox build times became long enough that I ported changes to my Linux desktop before building them. It was time to move on.

Of course this opened up a can of worms. Questions, in order that they presented themselves, included:

Should I move to Mac, or stick with Windows? My lingering dislike for Apple products and complete unfamiliarity with OSX made that choice easy.

Of the Windows laptops, which should I go for? Microsoft’s Surface lineup keeps improving. I had no complaints from my previous Lenovo X1 Carbon. And the Dell XPS 15 and 13 were enjoyed by several of my coworkers.

The Dells I nixed because I didn’t want anything bigger than the X1 I was retiring, and because the webcam is positioned at knuckle-height. I was wary of the Surface Books because of the number that mhoye had put in the ground due to manufacturing defects. Yes, I know he has an outsized effect on hardware and software. It really only served to highlight how much importance I put on familiarity and habit.

X1 Carbon 6th Generation it is, then.

So I initiated the purchase order. It would be sent to Mozilla Toronto, the location charged with providing my IT support, where it would be configured and given an asset number. Then it would be sent to me. And only then would the work begin in setting it up so that I could actually get work done on it.

First, not being a fan of sending keypresses over the network, I disabled Bing search from the Start Menu by setting the following registry keys:

[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Search]
"BingSearchEnabled"=dword:00000000
"AllowSearchToUseLocation"=dword:00000000
"CortanaConsent"=dword:00000000

Then I fixed some odd defaults in Lenovo’s hardware. Middle-click should middle-click, not enter into a scroll. Fn should need to be pressed to perform special functions on the F keys (it’s like FnLock was default-enabled).

I installed all editions of Firefox. Firefox Beta installed over the release-channel build that came pre-installed. Firefox Developer Edition and Nightly came next and added their own icons. I had to edit the shortcuts for each of these individually on the Desktop and in the Quick Launch bar to add -P --no-remote arguments so I wouldn’t accidentally start the wrong edition with the wrong profile and lose all of my data. (This should soon be addressed)

In Firefox Beta I logged in to sync to my work Firefox Account. This brought me 60% of the way to being useful right there. So much of my work is done in the browser, and so much of my browsing experience can be brought to life by logging in to Firefox Sync.

The other 40% took the most effort and the most time. This is because I want to be able to compile Firefox on Windows, for my sins, and this isn’t the most pleasant of experiences. Luckily we have “Building Firefox for Windows” instructions on MDN. Unluckily, I want to use git instead of mercurial for version control.

  1. Install mozilla-build
  2. Install Microsoft Visual Studio Community Edition (needed for Win10 SDKs)
  3. Copy over my .vimrc, .bashrc, .gitconfig, and my ssh keys into the mozilla-build shell environment
  4. Add exclusions to Windows Defender for my entire development directory in an effort to speed up Windows’ notoriously-slow filesystem speeds
  5. Install Git for Windows
  6. Clone and configure git-cinnabar for working with Mozilla’s mercurial repositories
  7. Clone mozilla-unified
    • This takes hours to complete. The download is pretty quick, but turning all of the mercurial changesets into git commits requires a lot of filesystem operations.
  8. Download git-prompt.sh so I can see the current branch in my mozilla-build prompt
  9.  ./mach bootstrap
    • This takes dozens of minutes and can’t be left alone as it has questions that need answers at various points in the process.
  10. ./mach build
    • This originally failed because when I checked out mozilla-unified in Step 7 my git used the wrong line-endings. (core.eol should be set to lf and core.autocrlf to false)
    • Then it failed because ./mach bootstrap downloaded the wrong rust std library. I managed to find rustup in ~/.cargo/bin which allowed me to follow the build system’s error message and fix things
  11. Just under 50min later I have a Firefox build

And that’s not all. I haven’t installed the necessary tools for uploading patches to Mozilla’s Phabricator instance so they can undergo code review. I haven’t installed Chrome so I can check if things are broken for everyone or just for Firefox. I haven’t cloned and configured the frankly-daunting number of github repositories in use by my team and the wider org.

Only with all this done can I be a productive mozillian. It takes hours, and knowledge gained over my nearly-3 years of employment here.

Could it be automated? Technologically, almost certainly yes. The latest mozilla-build can be fetched from a central location. mozilla-unified can be cloned using the version control setup of choice. The correct version of Visual Studio Community can be installed (but maybe not usably given its reliance on Microsoft Accounts). We might be able to get all the way to a working Firefox build from a recent checkout of the source tree before the laptop leaves IT’s hands.

It might not be worth it. How many mozillians even need a working Firefox build, anyway? And how often are they requesting new hardware?

Ignoring the requirement to build Firefox, then, why was the laptop furnished with a release-channel version of Firefox? Shouldn’t it at least have been Beta?

And could this process of setup be better documented? The parts common to multiple teams appear well documented to begin with. The “Building Firefox on Windows” documentation on MDN is exceedingly clear to work with despite the frightening complexity of its underpinnings. And my team has onboarding docs focused on getting new employees connected and confident.

Ultimately I believe this is probably as simple and as efficient as this process will get. Maybe it’s a good thing that I only undertook this after three years. That seems like a nice length of time to amortize the hours of cost it took to get back to productive.

Oh, and as for the stickers… well, Mozilla has a program for buying your own old laptop. I splurged and am using it to replace my 2009 Aspire Revo to connect to my TV and provide living room computing. It is working out just swell.

:chutten

Hacks.Mozilla.Org: Show your support for Firefox with new badges

Firefox is only as strong as its passionate users. Because we’re independent, people need to make a conscious choice to use a non-default browser on their system. We’re most successful when happy users tell others about an alternative worth trying.

A laptop showing a website with a Firefox badge

If you’re a Firefox user and want to show your support, we’ve made a collection of badges you can add to your website to tell users, “I use Firefox, and you should too!”

You can browse the badges and grab the code to display them on a dedicated microsite we’ve built, so there’s no need to download them (though you’re welcome to if you want). Images are hosted on a Mozilla CDN for convenience and performance only. We do no tracking of traffic to the CDN. We’ll be adding more badges as time goes on as well.

So whether you’re excited to use a browser from a non-profit with a mission to build a better Internet, or just think Firefox is a kick-ass product, we’d love for you to spread the word.

Thank you for your support!

The post Show your support for Firefox with new badges appeared first on Mozilla Hacks - the Web developer blog.

The Mozilla Blog: At MozFest, Spend 7 Days Exploring Internet Health

Mozilla’s ninth-annual festival — slated for October 22-28 in London — examines how the internet and human life intersect

 

Workshops that teach you how to detect misinformation and mobile trackers. A series of art installations that turn online data into artwork. A panel about the unintended consequences of AI, featuring a former YouTube engineer and a former FBI agent. And a conversation with the inventor of the web.

These are just a handful of the experiences at this year’s MozFest, Mozilla’s annual festival for, by, and about people who love the internet. From October 22-28 at the Royal Society of Arts (RSA) and Ravensbourne University in central London, more than 2,500 developers, designers, activists, and artists from dozens of countries will gather to explore privacy, security, openness, and inclusion online.

Tickets are just £45, and provide access to hundreds of sessions, talks, art, swag, meals, and more.

Says Mark Surman, Mozilla’s Executive Director: “At MozFest, people from across the globe — technologists from Nairobi, educators from Berlin — come together to build a healthier internet. We examine the most pressing issues online, like misinformation and the erosion of privacy. Then we roll up our sleeves to find solutions. In a way, MozFest is just the start: The ideas we bat around and the code we write always evolves into new campaigns and new open-source products.”

You can learn more and purchase tickets at mozillafestival.org. In the meantime, here’s a closer look at what you can expect:

Hundreds of hands-on workshops

MozFest is built around hands-on participation — many of your fellow attendees are leading sessions themselves. These sessions are divided among six spaces: Decentralisation; Digital Inclusion; Openness; Privacy and Security; Web Literacy; and the Youth Zone.

Sessions range from roundtable discussions to hackathons. Among them:

A scene from MozFest 2017

  • “Get the Upper Hand on Misinformation,” a session exploring concepts like confirmation bias, disinformation, and fake news. Participants will also suggest their own tools to combat these issues
  • “Tracking Mobile Trackers,” a session that teaches you how to detect — and jam — the mobile trackers that prey on your personal data
  • “Message Delayed: Designing Interplanetary Communication Tools,” a session exploring what interplanetary messaging might look like. It’s led by a researcher from MIT’s Media Lab
  • “Combating Online Distraction and Addiction,” a session sharing techniques and tools that help us have a more focused and deliberate online experience
  • “Build Your own Air Quality Sensor,” a session that teaches participants how to build an open-source device for monitoring pollution in their neighborhood

See all sessions»

 

Talks

The MozFest Dialogues & Debates stage features leading thinkers from across the internet health movement. This year, 18 luminaries from France, India, Afghanistan, and beyond will participate in solo talks and spirited panels. Among them:

A scene from MozFest 2017

  • “AI’s Collateral Damage,” a panel exploring artificial intelligence’s unintended impact on human rights. Featuring former YouTube engineer Guillaume Chaslot; Social Science Research Council president Alondra Nelson; author and former FBI special agent Clinton Watts; and Mozilla Fellow Camille Francois
  • “Data in Oppressive Regimes,” a panel exploring how citizens operate online when surveillance is routine and dissent is dangerous. Featuring Bahraini human rights activist Esra’a Al-Shafei and ARTICLE19 Iran programme officer Mahsa Alimardani
  • “Flaws in the Data-Driven Digital Economy,” a talk by Renée DiResta. Renée investigates the spread of disinformation and manipulated narratives across social networks. She is a Mozilla Fellow; the Director of Research at New Knowledge; and Head of Policy at nonprofit Data for Democracy

See all talks and panels»

Can’t make it to London? Don’t fret: You can also watch these talks online at mozillafestival.org

New Experiences

MozFest is always evolving — over nine years, it’s grown from a small gathering in a Barcelona museum to a global convening in the heart of London. This year, we’re excited to introduce:

A scene from MozFest 2017

  • Queering MozFest, a pan-festival experience that explores how internet issues intersect with gender and sexuality. Programming will reflect on the relationships between technology, normalisation, and marginalisation
  • Tracked, a game spanning the entire festival. The experience will engage players in various activities throughout the venue, demonstrating the trade-offs we each make when it comes to our personal data
  • Art + Data, a gallery of 36 interactive art installations that merge data and art — from ASCII scarves you can actually wear, to startling visualizations of the amount of personal data that’s public online
  • Mozilla’s second-ever *Privacy Not Included, a guide to help you shop for private and secure connected gifts this holiday season, will debut at MozFest. Some 70 products will be reviewed to reveal what exactly they do with your personal data

MozFest House

The Festival weekend — Saturday, October 27 and Sunday, October 28 — is where many sessions, talks, and experiences take place. But there’s an entire pre-week of programming, too. MozFest House runs from October 22 to October 26 at the Royal Society of the Arts (RSA) and extends the festival into a week-long affair. MozFest House programming includes:

  • A screening of “The Cleaners,” a documentary about the dark, day-to-day activities of online content moderators
  • “MisinfoCon,” a one-day conference exploring the spread of misinformation online — and how to fix it
  • “Viewsource,” a one-day conference where front-end developers and designers talk about CSS, JavaScript, HTML, Web Apps, and more

See all MozHouse programming»

~

MozFest couldn’t happen without the time and talent of our extraordinary volunteer wranglers. And it is made possible by our presenting sponsor Private Internet Access, a leading personal virtual private network (VPN) service. The event is also supported by Internet Society, the nonprofit working for an open, globally-connected, trustworthy, and secure Internet for everyone.

We hope you’ll join us in London — or tune in remotely — and help us build a better internet. mozillafestival.org

For press passes, please email Corey Nord at corey@pkpr.com.

The post At MozFest, Spend 7 Days Exploring Internet Health appeared first on The Mozilla Blog.

Rabimba: FirefoxOS, A keyboard and prediction: Story of my first contribution





I returned to my cubicle holding a hot cup of coffee, my head loaded with frustration and panic over a system codebase that I had managed to break, with insufficient time to fix it before the next morning.

This was at IBM, New York, where I was interning and working on the TJ Watson project. I returned to my desk, turned on my dual monitors, and started reading some blogs and engaging on Mozilla IRC (a new-found and pretty short-lived hobby). Just a few days before that, FirefoxOS had launched in India in the form of an Intex phone with a $35 price tag. It was making waves all around because of its remarkably low price and poor performance. The OS’s struggle was showing on the super-low-cost hardware. I was personally furious about some of the shortcomings, primarily the keyboard, which at that time didn’t support prediction in any language other than English and also did not learn new words. Coincidentally, I came upon Dietrich Ayala in the FirefoxOS IRC channel, who at that time was a Platform Engineer at Mozilla. To my surprise he agreed with many of my complaints and asked me if I wanted to contribute my ideas. I very much wanted to, but then again, I had no idea how. The idea of contributing to the codebase of something like FirefoxOS terrified me. He suggested I first send a proposal and proceed from there. With my busy work schedule at IBM, this discussion slipped my mind and did not fully take shape in my head until I returned home from my internship.

That proposal now lives here, and was being tracked here as well, as part of Mozilla Developer Network Hacks (under “Word Prediction for Bengali”).

Fast forward a couple of years, and now we don’t have FirefoxOS anymore. So I decided it was about time I wrote about what went into the implementation and got into the nitty-gritty of the code.


A little summary of what was done

But first, related work. By that I mean the existing program for predictive text input. When using the on-screen keyboard in Firefox OS, the program shows you the three most likely words starting with the letters you just typed. This works even if you made a typo (see the right screenshot).



Firefox OS screenshot

Each word in the dictionary has a frequency associated with it; for example, the top words in the English dictionary are:


Word      Frequency
the       222
of        214
and       212
in        210
a         208
to        208


For example, if you type “TH”, the program will suggest “the” and “this” (with frequencies of 222 and 200), but not something rare like “thundershower” (which has a frequency of 10).

Mozilla developers previously used a Bloom filter for this task, but then switched to a DAWG (directed acyclic word graph).

Using a DAWG, you can quickly find mistyped words; however, you cannot easily find the most frequent words for a given prefix. The existing implementation of the DAWG in the Firefox OS code (made by Christoph Kerschbaumer) worked like this: each word had its frequency stored in its last node; the program just traversed the whole subtree and looked for the three most frequent words. To make the tree traversal feasible, it was limited to two characters, so the program was able to suggest only 1-2 characters to finish the word typed by the user.
Dictionary size was another problem. Each DAWG node consisted of the letter, the frequency, and the left, right, and middle pointers (four bytes each; 20 bytes in total). The whole English dictionary was around 200'000 nodes, or 3.9 MB.
Whole tree traversal

In this example, the program looks up the prefix TH in the tree, enumerates all words in the dictionary starting with TH (by traversing the red subtree), and finds the three words with maximum frequencies. For the full English dictionary, the size of the subtree can be very large.

Sorting nodes by maximum frequency of the prefix

Here is what I proposed and implemented for Firefox OS.
If you can find a way to store the nodes sorted by maximum frequency of the words, then you can visit just the nodes with the most frequent words instead of traversing the whole subtree.

Note that a TST (ternary search tree) can be viewed as consisting of several binary trees, each of which is used to find a letter at a specific position. There are many ways to store the same letters in a binary tree (see the green nodes in the drawing), so several TSTs are possible for the same words:


Words: tap ten the to

Three equivalent TSTs


Usually, you want to balance the binary tree to speed up the search. But for this task, I thought to put the letter that continues the most frequent word at the root of the binary tree instead of perfectly balancing it. In this way, you can quickly locate the most frequent word. The left and the right subtrees of the root will be balanced as usual.

However, the task is to find the three most frequent words, not just one word. So you can create a linked list of letters sorted by the maximum frequency of the words containing these letters. An additional pointer will be added to each node to maintain the linked list. The nodes will be sorted by two criteria: alphabetically in the binary tree (so that you can find the prefix) and by frequency in the linked list (so that you can find the most frequent words for this prefix).
For example, there are the following words and frequencies:
the 222
thou 100
ten 145
to 208
tens 110
voices 118
voice 139

For the first letter T, you have the following maximum frequencies (of the words starting with this prefix):
  • TH — 222 (the full word is “the”; “thou” has a lower frequency);
  • TO — 208 (the full word is “to”);
  • TE — 145 (the full word is “ten”; “tens” has a lower frequency).
The node with the highest maximum frequency (H in “th”) will be the root of the binary tree and the head of the linked list; O will be the next item, and E will be the last item in the linked list.
(Figure: ternary search tree)
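As a sketch (illustrative JavaScript, not the actual Gaia code), each node in this structure might look like:

// Each node lives in a binary tree (left/right, ordered alphabetically)
// and in a linked list (next, ordered by maximum word frequency).
var node = {
  letter: 'h',
  maxFrequency: 222, // highest frequency among words with this prefix ("th")
  left: null,        // binary tree: alphabetically smaller letters
  right: null,       // binary tree: alphabetically larger letters
  middle: null,      // root of the binary tree for the next letter position
  next: null         // linked list: next letter by descending maxFrequency
};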

So you have built the data structure described above. To find N most frequent words starting with a prefix, you first find this prefix in the DAWG (as usual, using the left and right pointers in the binary tree). Then, you follow the middle pointers to locate the most frequent word (remember, it's always at the root of the binary tree). When following this path, you save the second most likely nodes, so that after finding the first word, you already know where to start looking for the second one.

Please take a look at the drawing above. For example, the user types the letter T. You go down to the yellow binary tree. In its root, you find the prefix of the most frequent word (H); you also remember the second most likely frequency (208 for the letter O) by looking in the linked list.

You follow on to the green binary tree. E is the prefix of the most frequent word here; you also remember the second frequency (100 for the letter O). So, you have found the first word (THE). Where to look for the second most frequent word? You compare the saved frequencies:
tho 100

to 208

and find that TO, not THO, is the prefix of the second most frequent word.
So you continue the search from the TO node and find the second word, “to”. TE is the next node sorted by frequency in the linked list, so you save it instead of TO:
tho 100

te 145
Now, TE has greater frequency, so you choose this path and find the third word, “ten”.
You can store the candidate prefixes in a priority queue (sorted by frequency) for faster retrieval of the next best candidate, but I chose a sorted array for this task, because there are just three words to find. If you already have the required number of candidates (three) and you find a candidate that is worse than the already-found candidates, you can skip it: don't insert it at the end of the priority queue (or the sorted array), because it will not be used anyway. But if you find a candidate that is better than the already-found ones, you should store it (replacing the worst candidate in this case). So you can store just three candidates at any time.
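A rough sketch of that bookkeeping (illustrative JavaScript; representing candidates as objects with a frequency field is my own choice):

// `candidates` is kept sorted by descending frequency, at most `max` long.
function addCandidate(candidates, candidate, max) {
  max = max || 3;
  // find the first stored candidate that is worse than the new one
  var i = candidates.findIndex(function (c) { return c.frequency < candidate.frequency; });
  if (i === -1) {
    // worse than everything stored: keep it only if there is still room
    if (candidates.length < max) candidates.push(candidate);
    return;
  }
  candidates.splice(i, 0, candidate);            // insert in sorted position
  if (candidates.length > max) candidates.pop(); // drop the worst candidate
}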

The advantage of this method is that you can find the most frequent words without traversing the whole subtree (not thousands of nodes, but typically less than 100 nodes). You also can use fuzzy search to find a prefix with possible typos.

Averaging the frequency

Another problem is the file size and the suffixes (such as “-ing”, “-ed”, and “-s” in the English language). When converting from the TST to the DAWG, you can join the suffixes together only if their frequencies are equal (the previous implementation by Christoph Kerschbaumer used this strategy), or you can join them if their order in the linked list is the same and store the average frequency in the compressed node.

In the latter case, you can reduce the number of nodes (in the English dictionary, the number of nodes went from 200'000 to 130'000). The frequencies are averaged only if doing so does not change the order of words (a less frequent suffix is never joined with a more frequent suffix).

For example, consider the same words:


the 222
thou 100
ten 145
to 208
tens 110
voices 118
voice 139
The prefix “-s” has an average frequency of (110+118)/2=114 and the null ending (the end of the word) has an average frequency of (222+100+208+145+110+139+118)/7=149, so the latter will be suggested more often.
(Figures: ternary search tree and the corresponding DAWG)
Joining partially equal linked lists

The nodes are joined together only if their subtrees are equal and the linked lists are equal. For example, if “ends” were more likely than “end”, it would not be joined with “tens”, which is less likely than “ten”. The averaging changes frequencies, but preserves the relative order of words with the same prefix.

The program is careful enough to preserve the linked lists when joining the nodes. But if the linked lists are partially equal, it can join them. Consider the following example:
ended 144
ending 135
ends 130
standards 136
standing 134
stands 133
The “-s” node has the averaged frequency of round((133+130)/2)=132, and “-ing” nodes have round((134+135)/2)=134. The linked lists and the nodes are partially joined: the common part of “standARDS — standING — standS” and “endED — endING — endS” is “-ing”, “-s”, and this part is joined. Again, if “standing” were more likely than “standards” or less likely than “stands”, it would be impossible to join the nodes, because their order in the linked list would be different.

Besides that, I allocated 20 bits (two bytes plus an additional four bits) for each pointer instead of 32 bits. Sixteen bits (65'536 nodes) were not enough for Mozilla dictionaries, but twenty bits (1 million nodes) are enough and leave room for further expansion. The nodes are stored as a Uint16Array, including the letter and the frequency (16 bits each), the left, the right, the middle pointer, and the pointer for the linked list (the lower 16 bits of each pointer). An additional Uint16 stores the higher four bits of each pointer (4 bits × 4 pointers). After these changes, the dictionary size went down from the initial 3.9 MB to 1.8 MB.
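As a sketch of that packing (illustrative JavaScript; the exact field layout is my reading of the description above, not the shipped code), reading one 20-bit pointer back out of the array could look like:

var FIELDS = 7; // letter, frequency, left, right, middle, next, high bits

// `which` selects the pointer: 0 = left, 1 = right, 2 = middle, 3 = next
function getPointer(nodes, nodeIndex, which) {
  var base = nodeIndex * FIELDS;
  var low = nodes[base + 2 + which];                 // lower 16 bits
  var high = (nodes[base + 6] >> (which * 4)) & 0xF; // upper 4 bits
  return (high << 16) | low;                         // 20-bit node index
}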

Conclusion

The optimized program is not only faster but also uses a smaller dictionary.

This finishes up how we can optimize it. Then comes the learning part. You can have a look at my talk, which includes a working demo of how it works, especially with Bengali.





Acknowledgements:
The following people have guided me on this course and I am forever grateful to them for whatever I could do in the project
  • Dietrich Ayala - For actually helping me start the project and getting me connected to everyone who helped me
  • Indranil Das Gupta - For valuable suggestions and also helping me get FIRE corpus
  • Sankarshan Mukhopadhyay - Valuable suggestions on my method and pointing out related work in Fedora
  • Prasenjit Majumder, Ayan Bandyopadhyay - For getting me access to the FIRE corpus
  • Tim Chien and Jan Jongboom - For their previous work, which I learned a lot from, and for handling all my queries
  • Mahay Alam Khan - For getting me in touch with Anirudhha
  • Countless people in #gaia on Mozilla IRC who patiently listened to all my problems while I tried to build Gaia on a Windows machine (*sigh*) and later to all my installation problems
Related Talks:


I gave two talks on related topics.

The OSB talk was my first time speaking at a conference, so you can visibly see how sloppy I am. By JSFoo, I was a little more experienced (one talk of experience), so I became a little less sloppy (still plenty).
I presented another derivation at Open Source Hong Kong, but I don't believe that one is recorded anywhere.

JSFoo 2015




OpenSource Bridge 2015




This Week In Rust: This Week in Rust 256

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's crate is Noria, a new streaming data-flow system designed to act as a fast storage backend for read-heavy web applications. Thanks to Stevensonmt for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available; visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

124 pull requests were merged in the last week

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs
Tracking Issues & PRs

New RFCs

Upcoming Events

Online
Africa
Europe
North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

There actually are NOT very many places where the C code’s behavior conflicts with Rust’s borrowing rules. This is both somewhat surprising, because there’s no way this code was written with Rust’s borrowing semantics in mind, and also entirely sensible, since Rust’s borrowing semantics are often quite close to how you actually want your code to behave anyway.

– SimonHeath porting C to Rust

Thanks to Pascal Hertleif for the suggestion!

Please submit your quotes for next week!

This Week in Rust is edited by: nasa42, llogiq, and Flavsditz.

Discuss on r/rust.

Rabimba: Voting impartially for fun and profit a.k.a. Mozilla Reps Council Voting

I am part of a program called Mozilla Reps. Though I have been involved with Mozilla as a volunteer contributor for quite some time now, I am relatively new to the Mozilla Reps program and hardly know anything about it apart from my own scope of work.
Apparently, this is election time, when we vote for the nominated candidates for the Council, who will spearhead the program for the next session. Since I am new to the program, reading about everyone's election campaign and hearing about what they would do for the program was not giving me any clear motivation to vote for anyone specific. Though this wasn't anything super important, I still thought: since I have a bit of time on my hands, why not do something interesting with it?

This is my own impartial way of voting for those candidates.

How the voting works:

Each of the candidates was asked a specific number of questions, and they all answered in a public forum for all of us to read and evaluate. We are supposed to vote for the person whose answers resonate with what we want from the program. You can read more about the process here.
The Q&A was put on public Discourse for us to evaluate.

What I did:

Since I realized quite a few people I know are also running for candidacy and have answered questions there, I first collected all their Q&A text and then anonymized it (keeping a map for myself so that I would know who to vote for at the end).
Then I analyzed all their answers to look for personality traits. The need for anonymization was so that my personal knowledge of the candidates could not bias me towards the output I was getting at this stage.

The "voodoo" behind it

What I tried to achieve was a Frankenstein effect between what psychologists say and what modern NLP can do. An accepted theory of psychology is that language reflects personality, thinking style and emotional state. Usage of certain words can provide clues to these (Fast & Funder; Yarkoni).
Also, if we are able to find these markers in the text, they lead us to observations such as: people who score high on excitement-seeking are more likely to respond to queries than those who score as more cautious (Mahmud et al.). For me, this essentially translates to: if someone has a high excitement marker, they are more likely to respond to various queries from others.

Armed with this I started scoring our Reps.
For this blogpost I will anonymize the names (so that nobody else gets influenced by this).

Rep 1: mal

Rep 2: vja
Rep 3: mli
Rep 4: mle
Rep 5: yan
Rep 6: pan

Casting my Vote:

Once I had a base matrix ready for everyone, I started looking into it to figure out what I wanted to see in my ideal candidate.
The interesting part is that, looking at the personality traits derived from the given text, there was a lot of similarity between the candidates. So that is probably a good thing.
The first thing I started to look for was co-operation, since the council will work a lot with Reps, and almost everyone scores high on it. So I decided to take everyone who is more than 80% on it. I also wanted someone who would take risks and be adventurous, along with challenging authority; both of these traits are prevalent among everyone. Since this is essentially a campaign pitch, and given how the system works, I knew cautiousness would be high for everybody, so I did not look for it. However, I did look for sympathy, trust and openness to change.
And after considering everyone (and my own subconscious mind nitpicking on other traits), I decided on Rep 1: mal

Rep 1 has 97% sympathy and trust, 82% co-operation, 98% adventurousness, 100% authority-challenging (oops) and 72% openness to change. The other traits are also comparable. -> 6 in voting
Next is Rep 5: yan, again with a high degree of co-operation, sympathy and trust, followed by Rep 4: mle, Rep 3: mli, Rep 2: vja and Rep 6: pan.

Once that is out of the way now we can delve into the more technical parts.

Shut up and just show me the code!

Before I point you to GitHub, a quick outline.
What we wanted to do here is very similar to what these guys did on Twitter. Essentially, we first tokenize the input text to develop a representation in an n-dimensional space. Then we use GloVe to get the vector representation for the words in the input text. It then uses Big Five and Needs models to determine value characteristics.

I am still cleaning up the code, but since it's almost 5:05 am here, I'll probably clean it up a little later, remove all hard references, and then post the code and maybe tweet it out.

Disclaimer: This is probably in no way a good or even sane way to vote; just something I came up with since I was clueless. It's also probably horribly wrong to profile somebody (or even try to) from their writing. So please do not take this as an ideal or even an inspiring approach.

With that out of the way. Do let me know what you think :)


Update: Almost 16 hours after this was first published, we now have our council members/winners. And I am super freaked out to say that somehow Rep 1 won the election and Rep 5 got the second-highest vote (exactly how I cast my votes). Even though it's pure coincidence, it's curious to see that other voters went with a similar selection. I was careful not to divulge the names in this blogpost. Now I can, though: the names were constructed by concatenating the first letter of the first name with the last two letters of the last name.

And you can see the winners here (if you have access to reps portal)


Mike Hoye: Quality Speakings

Unfortunately my suite of annoying verbal tics – um right um right um, which I continue to treat like Victor Borge’s phonetic punctuation – is on full display here, but I guess we’ll have to live with that. Here’s a talk I gave at the GTA Linux User Group on “The State Of Mozilla”, split into the main talk and the Q&A sections. I could probably have cut a quarter of that talk by just managing those twitches better, but I guess that’s a project for 2019. In the meantime:


The talk:


The Q&A afterwards:

The preview on that second one is certainly unflattering. It ends on a note I’m pretty proud of, though, around the 35 minute mark.

I should go back make a note of all the “ums” and “rights” in this video and graph them out. I bet it’s some sort of morse-coded left-brain cry for help.

Marco Castelluccio: Using requestIdleCallback for long-running computations

One of the ways developers have typically tried to keep a web application smooth, without interfering with the browser’s animation and response to input, is to use a Web Worker for long-running computations. For example, in the Prism.js (a library for syntax highlighting) API there’s an async parameter to choose “Whether to use Web Workers to improve performance and avoid blocking the UI when highlighting very large chunks of code”.

This is perfectly fine, but web workers are not so easy to use or debug. To take Prism.js again as an example, the option I mentioned earlier is false by default. Why?

“In most cases, you will want to highlight reasonably sized chunks of code, and this will not be needed. Furthermore, using Web Workers is actually slower than synchronously highlighting, due to the overhead of creating and terminating the Worker. It just appears faster in these cases because it doesn’t block the main thread. In addition, since Web Workers operate on files instead of objects, plugins that hook on core parts of Prism (e.g. modify language definitions) will not work unless included in the same file (using the builder in the Download page will protect you from this pitfall). Lastly, Web Workers cannot interact with the DOM and most other APIs (e.g. the console), so they are notoriously hard to debug.”

Another alternative to achieve the same result, without using Web Workers and making things more difficult, is to use requestIdleCallback. This function allows a callback to be scheduled when the browser is idle, enabling us to perform background / low-priority work on the main thread without impacting animations or input response. N.B.: this will still be slower than doing the work synchronously, but might be cheaper than a Web Worker since you don’t have to pay the price of the Worker initialization.

Here’s an example; using promises and asynchronous functions, we can also avoid callback hell and keep using normal loops.

function idle() {
  return new Promise(resolve => requestIdleCallback(resolve));
}

async function work() {
  let deadline = await idle();

  for (let job of jobs) {
    if (deadline.timeRemaining() <= 1) {
      deadline = await idle();
    }

    // Do something with `job`...
  }
}
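One caveat worth adding (this is part of the requestIdleCallback API, not something from the snippet above): if the browser never becomes idle, the callback can be deferred indefinitely. The API accepts an optional timeout for that case, and deadline.didTimeout tells you when the callback fired because of it:

function idleWithTimeout(timeout) {
  // force the callback to run after `timeout` ms even if the browser
  // stays busy; deadline.didTimeout will be true in that case
  return new Promise(resolve => requestIdleCallback(resolve, { timeout }));
}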

I’m doing something similar in my Searchfox in Phabricator extension, to operate on one source line at a time and avoid slowing down the normal Phabricator operation. Here’s where I’m doing it.

Mozilla Addons Blog: Apply to Join the Featured Extensions Advisory Board

Do you love extensions? Do you have a keen sense of what makes a great extension? Want to help users discover extensions that will improve how they experience the web? If so, please consider applying to join our Featured Extensions Community Board!

Board members nominate and select new featured extensions each month to help millions of users find top-quality extensions to customize their Firefox browsers. Click here to learn more about the duties of the Featured Extensions Advisory Board. The current board is wrapping up its six-month tour of duty, and we are now assembling a new board of talented contributors for the months of January – June 2019.

Extension developers, designers, advocates, and fans are all invited to apply to join the board. Priority will be given to applicants who have not served on the board before, followed by those from previous boards, and finally those from the outgoing board.

To apply, please send us an email at amo-featured [at] mozilla [dot] org with your name and a few sentences about how you’re involved with AMO and why you are interested in joining the board. The deadline is Monday, October 22, 2018 at 11:59pm PDT. The new board will be announced shortly thereafter.

We look forward to hearing from you!

The post Apply to Join the Featured Extensions Advisory Board appeared first on Mozilla Add-ons Blog.

Mozilla VR Blog: How XR Environments Shape User Behavior


In previous research, The Extended Mind has documented how a 3D space automatically signals the rules of behavior to people. One of the key findings of that research is that synchrony in the design of a space helps communicate behavioral norms to visitors. That means that when there is complementarity among content, affordances, and avatars, it helps people learn how to act. One example would be creating a gym environment (content) with weights (affordances), but only letting avatars dress in tuxedos and evening gowns. The contradiction in people’s appearances could demotivate weight-lifting (the desired behavior).

This article shares learnings from the Hubs by Mozilla user research on how the different locations participants visited impacted their behavior. Briefly, the researchers observed five pairs of participants in multiple 3D environments and watched as they navigated new ways of interacting with one another. In this particular study, participants visited a medieval fantasy world, a meeting room, an atrium, and a rooftop bunker.

To read more about the details and set up of the user study, read the intro blog post here.

The key environmental design insights are:

  • Users want to explore
  • The size of the space influences the type of conversation that users have
  • Objects in the environment shaped people’s expectations of what the space was for

The rest of the article will provide additional information on each of the insights.

Anticipate that people will want to explore upon arrival

Users immediately began exploring the space and quickly taught themselves to move. This might have been because people were new to Hubs by Mozilla and Social VR more generally. The general takeaway is that XR creators should give people something to discover once they arrive. Finding something will be satisfying to the user. Platforms could also embrace novelty and give people something new to discover every time they visit. E.g., in Hubs, there is a rubber duck. Perhaps the placement of the duck could be randomly generated so people would have to look for it every time they arrive.

One thing to consider from a technical perspective was that the participants in this study didn’t grasp that by moving away from their companion it would be harder to hear that person. They made comments to the researchers and to each other about the spatialized audio feature:

“You have to be close to me for me to hear you”

While spatialized audio has multiple benefits and adds a dimension of presence to immersive worlds, in this case, people’s lack of understanding meant that they sometimes had sound issues. When this was combined with people immediately exploring the space when they arrived earlier than their companion, it was sometimes challenging for people to connect with one another. This leads to the second insight about size of the space.

Smaller spaces were easier for close conversations

When people arrived in the smaller spaces, it was easier for them to find their companion and they were less likely to get lost. There’s one particular world that was tested, called Medieval Fantasy Book; it was inviting with warm colors, but it was large and people wandered off. That type of exploration sometimes got in the way of people enjoying conversations:

“I want to look at her robot face, but it’s hard because she keeps moving.”

This is another opportunity to consider use cases for any Social VR environment. If the use case is conversation, smaller rooms lead to more intimate talks. Participants who were new to VR were able to access this insight when describing their experience.

"The size of the space alludes to…[the] type of conversation. Being out in this bigger space feels more public, but when we were in the office, it feels more intimate."

This quote illustrates how size signaled privacy to users. It is also coherent with past research from The Extended Mind on how to configure a space to match users’ expectations.

…when you go to a large city, the avenues are really wide which means a lot of traffic and people. vs. small streets means more residential, less traffic, more privacy. All of those rules still apply [to XR].

The lesson for all creators is that the clearer they are on the use case of a space, the easier it should be to build it. In fact, participants were excited about the prospect of identifying or customizing their own spaces for a diverse set of activities or for meeting certain people:

“Find the best environment that suits what you want to do...

There is a final insight on how the environment shapes user behavior, and it is about how objects change people’s perceptions, including around big concepts like privacy.

Objects shaped people’s expectations of what the space was for

There were two particular Hubs objects that users responded to in interesting ways. The first is the rubber duck and the second is a door. What’s interesting to note is that in both cases, participants interpreted these objects on their own; no one guided them.

How XR Environments Shape User Behavior

The rubber duck is unique to Hubs and was something that users quickly became attached to. When a participant clicked on the duck, it quacked and replicated itself, which motivated the users to click over and over again. It was a playful fidget-y type object, which helped users understand that it was fine to just sit and laugh with their companion and that they didn’t have to “do something” while they visited Hubs.

However, there were other objects that led users to make incorrect assumptions about the privacy of Hubs. The presence of a door led a user to say:

“I thought opening one of those doors would lead me to a more public area.”

In reality, the door was not functional. Hubs’ locations are entirely private places accessible only via a unique URL.

What’s relevant to all creators is that their environmental design is open to interpretation by visitors. And even if creators scrub out objects and keep environments sparse, that will just lead users to make different assumptions about what the space is for. One pair of participants decided that one of the more basic Hubs spaces reminded them of an interrogation room and constructed an elaborate story for themselves that revolved around it.

Summary

Environmental cues can shape user expectations and behaviors when they enter an immersive space. In this test with Hubs by Mozilla, large locations led people to roam, while small places focused people’s attention on each other. The contents of a room also influenced the topics of conversation and how private people believed their discussions might be.

All of this indicates that XR creators should consider the subtle messages that their environments are sending to users. There’s value in user testing with multiple participants who come from different backgrounds to understand how their interpretations vary (or don’t) from the intentions of the creator. Testing doesn’t have to be a huge undertaking requiring massive development hours in response. It may uncover small things that can be revised rapidly; for example, small tweaks to lighting and sound can change how people experience a space. Most people don’t find dim lighting inviting, and a test could uncover that early enough that developers could amp up the brightness before a product with an immersive environment actually launches.

The final article in this blog series is going to focus on giving people the details of how this Hubs by Mozilla research study was executed and make recommendations for best practices in conducting usability research on cross platform (2D and VR) devices.

This article is part three of the series that reviews the user testing conducted on Mozilla’s social XR platform, Hubs. Mozilla partnered with Jessica Outlaw and Tyesha Snow of The Extended Mind to validate that Hubs was accessible, safe, and scalable. The goal of the research was to generate insights about the user experience and deliver recommendations of how to improve the Hubs product.

To read part one on accessibility, click here.
To read part two on the personal connections and playfulness of Hubs, click here.

Mozilla Security BlogRemoving Old Versions of TLS

In March of 2020, Firefox will disable support for TLS 1.0 and TLS 1.1.

On the Internet, 20 years is an eternity.  TLS 1.0 will be 20 years old in January 2019.  In that time, TLS has protected billions – and probably trillions – of connections from eavesdropping and attack.

In that time, we have collectively learned a lot about what it takes to design and build a security protocol.

Though we are not aware of specific problems with TLS 1.0 that require immediate action, several aspects of the design are neither as strong nor as robust as we would like given the nature of the Internet today.  Most importantly, TLS 1.0 does not support modern cryptographic algorithms.

The Internet Engineering Task Force (IETF) no longer recommends the use of older TLS versions.  A draft document describes the technical reasons in more detail.

We will disable TLS 1.1 at the same time.  TLS 1.1 only addresses a limitation of TLS 1.0 that can be addressed in other ways. Our telemetry shows that only 0.1% of connections use TLS 1.1.

Graph showing the versions that we intend to remove (TLS 1.0 and 1.1) have low usage

TLS versions for all connections established by Firefox Beta 62, August-September 2018

Our telemetry shows that many sites already use TLS 1.2 or higher (Qualys says 94%).  TLS 1.2 is a prerequisite for HTTP/2, which can improve site performance.  We recommend that sites use a modern profile of TLS 1.2 unless they have specialized needs.
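For site operators who want to check the effect of this change locally, here is a minimal sketch, in Python 3.7+ with the standard ssl module, of a server-side TLS context that refuses TLS 1.0 and 1.1; the certificate paths are placeholders for illustration:

import ssl

# A server-side context; PROTOCOL_TLS_SERVER negotiates the highest
# version supported by both sides.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)

# Refuse anything older than TLS 1.2 (requires Python 3.7+ and a modern OpenSSL).
context.minimum_version = ssl.TLSVersion.TLSv1_2

# Placeholder certificate and key paths.
context.load_cert_chain(certfile="server.pem", keyfile="server.key")

A server configured this way (or with the equivalent settings in any web server) will simply fail the handshake with a peer that only speaks TLS 1.0 or 1.1, which is the behavior Firefox clients will see after the change.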

For sites that need to upgrade, the recently released TLS 1.3 includes an improved core design that has been rigorously analyzed by cryptographers.  TLS 1.3 can also make connections faster than TLS 1.2. Firefox already makes far more connections with TLS 1.3 than with TLS 1.0 and 1.1 combined.

Be aware that these changes will appear in pre-release versions of Firefox (Beta, Developer Edition, and Nightly) earlier than March 2020.  We will announce specific dates when we have more detailed plans.

We understand that upgrading something as fundamental as TLS can take some time.  This change affects a large number of sites.  That is why we are making this announcement so far in advance of the March 2020 removal date of TLS 1.0 and TLS 1.1.

Other browsers have made similar announcements. Chrome, Edge, and Safari all plan to make the same change.

The post Removing Old Versions of TLS appeared first on Mozilla Security Blog.

Wladimir PalantSo Google is now claiming: "no one (including Google) can access your data"

A few days ago Google announced ensuring privacy for your Android data backups. The essence is that your lockscreen PIN/pattern/passcode is used to encrypt your data and nobody should be able to decrypt it without knowing that passcode. Hey, that’s including Google themselves! Sounds good? Past experience indicates that such claims should not always be taken at face value. And in fact, this story raises some red flags for me.

The trouble is, whatever you use on your phone’s lockscreen is likely not very secure. It doesn’t have to be, because the phone will lock up after a bunch of failed attempts. So everybody goes with a passcode that is easy to type but probably not too hard to guess. Can you derive an encryption key from that passcode? Sure! Will this encryption be unbreakable? Most definitely not. With passwords being that simple, anybody getting their hands on encrypted data will be able to guess the password and decrypt the data within a very short time. That will even be the case for a well-chosen key derivation algorithm (and we don’t know yet which algorithm Google chose to use here).

Google is aware of that of course. So they don’t use the derived encryption key directly. Instead, the derived encryption key is used to encrypt a proper (randomly generated) encryption key, only the latter being used to encrypt the data. And then they find themselves in trouble: how could one possibly store the encryption key securely? On the one hand, they cannot keep it on user’s device because data might be shared between multiple devices. On the other hand, they don’t want to upload the key to their servers either, because of how unreliable the encryption layer on top of it is — running a bruteforce attack to extract the actual encryption key would be trivial even without having Google’s resources.
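To make that two-layer scheme concrete, here is a minimal sketch in Python, assuming the third-party cryptography package; the KDF choice, iteration count, and sizes are illustrative assumptions, not what Google actually uses:

import os
import hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def wrap_data_key(passcode, salt):
    # Derive a key-encryption key (KEK) from the weak lockscreen passcode.
    kek = hashlib.pbkdf2_hmac("sha256", passcode.encode(), salt, 100_000)
    # The backup itself is encrypted with a strong random data key (DEK)...
    dek = AESGCM.generate_key(bit_length=256)
    # ...and only the DEK is wrapped with the passcode-derived KEK.
    nonce = os.urandom(12)
    wrapped_dek = AESGCM(kek).encrypt(nonce, dek, None)
    return dek, wrapped_dek, nonce

The weakness is visible right in the sketch: anyone holding wrapped_dek can rerun the derivation step against candidate passcodes at full speed, which is exactly the offline bruteforce that the attempt-limiting described below is supposed to prevent.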

So they used a trick. The encryption key isn’t uploaded to a Google server, it is uploaded to a Titan security chip on a Google server. Presumably, your Android device will establish an encrypted connection directly to that Titan chip, upload your private key and the Titan chip will prevent bruteforce attacks by locking up after a few attempts at guessing your passcode. Problem solved?

Not quite. First of all, how do you know that whatever your Android device is uploading the private key to is really a Titan chip and not a software emulation of it? Even if it is, how do you know that it is running unmodified firmware as opposed to one that allows extracting data? And how do you know that Google really has no means of resetting these chips without all data being cleared? It all boils down to: you have to trust Google. In other words: it’s not that Google cannot access your data, they don’t want to. And you have to take their word on it. You also have to trust them when they claim that the NSA didn’t force them into adding a backdoor to those Titan chips.

Don’t get me wrong, they probably produced the best solution given what they have to work with. And for most Android users, their solution should still be a win, despite the shortcomings. But claiming that Google can no longer access users’ backup data is misleading.

The Servo BlogThese Weeks In Servo 115

In the past three weeks, we merged 181 PRs in the Servo organization’s repositories.

Our Windows nightlies have been broken for several months for a number of reasons, and we have now fixed all of the known breakage. If you’re a Windows user, give our latest builds a try! You can visit arbitrary URLs by pressing Ctrl+L.

The Android Components project added a component to use Servo in any Android app.

We have a branch that allows Servo to build and run on Magic Leap devices.

Planning and Status

Our roadmap is available online, including the overall plans for 2018.

This week’s status updates are here.

Exciting Work in Progress

Notable Additions

  • ceyusa implemented support for <video> and <audio> element playback.
  • pyfisch extended our border image implementation to support thickness and gradients.
  • ferjm made some WebAudio nodes match the specification a bit better.
  • pyfisch and jdm updated the version of WebRender in use.
  • ferjm enabled the Android backend for media playback.
  • SimonSapin redesigned the Taskcluster CI setup.
  • jdm corrected the flickering of WebGL content on Android builds.
  • codehag updated several parts of the devtools implementation to work with modern versions of Firefox.
  • jdm made stdout redirect to Android’s logcat by default.
  • ferjm hardened the media backend against errors.
  • jdm made it easier to debug JS exceptions and WebGL errors.
  • nox reduced the unnecessary duplication of work performed by the putImageData API.
  • paulrouget hardened the JNI integration layer.
  • nox consolidated the various byte-swapping and premultiplication operations.
  • ferjm made it possible to reuse AudioBuffer objects.
  • jdm fixed some graphical glitches on Oculus Go devices that affected images without alpha channels.
  • emilio improved the CSS animation and transitions implementation.
  • jdm prevented reloading a page from hiding all previously loaded images.

New Contributors

Interested in helping build a web browser? Take a look at our curated list of issues that are good for new contributors!

Cameron KaiserIt's baaaaa-aaack: TenFourFox Intel

A polite reminder: if you're going to link to this build, link to this post please so that people can understand this build doesn't have, nor will it ever have, official support.

It's back! It's undead! It's ugly! It's possibly functional! It's totally unsupported! It's ... TenFourFox for Intel Macs!

Years ago as readers of this blog will recall, Claudio Leite built TenFourFox 17.0.2 for Intel, which the update check-in server shows some determined users are still running to this day on 10.5 and even 10.4 despite various problems such as issue 209. However, he didn't have time to maintain it, and a newer version was never built, though a few people since then have made various attempts and submitted some patches.

One of these attempts is now far enough along that I'm permitted to announce its existence. Riccardo Mottola has done substantial work on getting TenFourFox to build and run again on old Intel Macs with a focus on 32-bit compatibility, and his patches have been silently lurking in the source code repository for some time. Along with additional work from Ken Cunningham, who now also has a MacPorts portfile so you can build it yourself (PowerPC support in the portfile is coming, though you can still use the official instructions, of course), enough of the new Intel build now functions that it can be used for basic tasks.

There are still known glitches in the build, including ones which may be severe, and currently Ken's portfile disables the JavaScript JIT due to crash bugs which have not yet been smoked out. (That said, even running in strict interpreter mode, the browser is still much faster than TenFourFox under Rosetta which has no JIT and must run emulated.) If you find one of these glitches, you get to deal with it all by yourself because the support level (i.e., none) hasn't changed. To wit:

  • The Power Mac is still the focus of development for TenFourFox, and anything else is best effort. Don't expect any Intel-specific bugs to ever be fixed. If anything does actually get fixed on Intel, be grateful.
  • The Intel version will never supersede the PowerPC version. Although I'll try not to intentionally break the Intel build, I may unintentionally do so, and if a bug crops up which requires breaking the Intel build to fix an issue with the PowerPC build, the Intel build will be broken until someone figures out what to do.
  • Intel builds remain unsupported and will probably never be supported. Do not post problems with the build to Tenderapp. Don't complain to Riccardo or Ken. Definitely don't complain to me. In fact, unless you're willing to figure out how to solve a problem you're encountering, don't expect anybody to care about any problem you have running the Intel build.
  • There may never be any Intel builds issued by anyone ever again except for whatever build you make for your own use. Don't complain about this on Tenderapp. Don't beg (bug) Riccardo or Ken for updates. Definitely don't beg (bug) me.

If you are allergic to actually doing work and want to mooch off someone else's (ahem), then Ken has provided a 10.5 Leopard build of FPR9 for 32-bit Intel. This version should work on 10.6-10.8 as well, but obviously not on 10.4; although the browser should still be able to be built on Tiger Intel, right now you'll have to do that yourself with the portfile or the official build instructions. You can get Ken's contributed build from SourceForge. As I said, you should not expect it to ever be updated, but if there is another future release, you can get it from the same directory whenever I get around to uploading it (which you shouldn't expect either).

As before, good news if it works for you, too bad if it doesn't, and please don't make Riccardo, Ken or me regret ever bringing the Intel build back. Again, do not report bugs in the Intel version to Tenderapp, and do not open Github issues unless you have code to contribute.

K Lars LohnThe Things Gateway - It's All About The Timing


In my last posting, I talked about creating an External Rule System for the Things Gateway from Mozilla.  This is a key component of the Automation part of a Smart Home system.   Of course, the Things Gateway already has a rule system of its own.  However, because it is GUI based, it has a complexity ceiling that is rather low by the standards of programmers.

My External Rule System provides an alternative for more sophisticated rules that leverage the full power and readability of the Python programming language. However, I must ensure the capabilities are a proper superset of the built-in Things Gateway capabilities. The built-in GUI Rule System has a special object called the "Clock" that can trigger a rule every day at a specific time. This is for the classic "turn the porch light on in the evening" home automation idea. My External Rule System needs the same capabilities, but as you'll see, it is easy to extend beyond the basic time-of-day idea.

We'll start with the simplest example.
class MorningWakeRule(Rule):

    def register_triggers(self):
        morning_wake_trigger = AbsoluteTimeTrigger("morning_wake_trigger", "06:30:00")
        return (morning_wake_trigger,)

    def action(self, *args):
        self.Bedside_Ikea_Light.on = True
(see this code in situ in the morning_wake_rule.py file in the pywot rule system demo directory)

Having only two parts, a trigger and an action, this rule is about as terse as a rule can be. In the register_triggers method, I defined an AbsoluteTimeTrigger that will fire every day at 6:30am.  That means that every day at my wake-up alarm time, the action method will run.  The body of that method sets the "on" property of my bedside Ikea light to True.  That turns it on.

There are a number of triggers in the pywot.rule_triggers module.  It is useful to understand how they work.  The code that runs the AbsoluteTimeTrigger consists of two parts: the constructor and the trigger_detection_loop.  The constructor takes the time for the alarm in the form of a string.  The trigger_detection_loop method is run when the enclosing RuleSystem is started.
class AbsoluteTimeTrigger(TimeBasedTrigger):
    def __init__(
        self,
        name,
        # time_of_day_str should be in the 24Hr form "HH:MM:SS"
        time_of_day_str,
    ):
        super(AbsoluteTimeTrigger, self).__init__(name)
        self.trigger_time = datetime.strptime(time_of_day_str, '%H:%M:%S').time()

    async def trigger_detection_loop(self):
        logging.debug('Starting timer %s', self.trigger_time)
        while True:
            time_until_trigger_in_seconds = self.time_difference_in_seconds(
                self.trigger_time,
                datetime.now().time()
            )
            logging.debug('timer triggers in %sS', time_until_trigger_in_seconds)
            await asyncio.sleep(time_until_trigger_in_seconds)
            self._apply_rules('activated', True)
            await asyncio.sleep(1)
(see this code in situ in the rule_triggers.py file in the pywot directory)

The trigger_detection_loop is an infinite loop that can only be stopped by killing the program.  Within the loop, it calculates the number of seconds until the alarm is to go off.  It then sleeps the requisite number of seconds. A trigger object like this can participate in more than one rule, so it keeps an internal list of all the rules that included it via the Rule.register_triggers method. When the alarm fires, the call to _apply_rules will iterate over all the participating Rules and call their action methods.  In the case of MorningWakeRule above, that will turn on the light.
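The time_difference_in_seconds helper comes from the TimeBasedTrigger base class and is not shown above. Here is a minimal sketch of what such a helper has to do, based only on the behavior described in this post (my reconstruction for illustration, not pywot's actual implementation): compute the seconds from now until the next occurrence of the target time, rolling over to tomorrow when the target has already passed today.

from datetime import datetime

SECONDS_PER_DAY = 24 * 60 * 60

def time_difference_in_seconds(target_time, now_time):
    # Seconds from now_time until the next occurrence of target_time.
    today = datetime.today()
    delta = (
        datetime.combine(today, target_time) - datetime.combine(today, now_time)
    ).total_seconds()
    if delta < 0:
        # target_time already passed today, so wait until tomorrow.
        delta += SECONDS_PER_DAY
    return delta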

With the AbsoluteTimeTrigger, I've duplicated the capabilities of the GUI Rule System in regards to time.  Let's add more features.

Even though my sleep doctor says a consistent wake time throughout the week is best, I let myself sleep in on weekends.  I don't want the light to come on at 6:30am on Saturday and Sunday.  Let's modify the rule to take the day of the week into account.
class MorningWakeRule(Rule):

    @property
    def today_is_a_weekday(self):
        weekday = datetime.now().date().weekday()  # M0 T1 W2 T3 F4 S5 S6
        return weekday in range(5)

    @property
    def today_is_a_weekend_day(self):
        return not self.today_is_a_weekday

    def register_triggers(self):
        self.weekday_morning_wake_trigger = AbsoluteTimeTrigger(
            "morning_wake_trigger", "06:30:00"
        )
        self.weekend_morning_wake_trigger = AbsoluteTimeTrigger(
            "morning_wake_trigger", "07:30:00"
        )
        return (self.weekday_morning_wake_trigger, self.weekend_morning_wake_trigger)

    def action(self, the_changed_thing, *args):
        if the_changed_thing is self.weekday_morning_wake_trigger:
            if self.today_is_a_weekday:
                self.Bedside_Ikea_Light.on = True
        elif the_changed_thing is self.weekend_morning_wake_trigger:
            if self.today_is_a_weekend_day:
                self.Bedside_Ikea_Light.on = True
(see this code in situ in the morning_wake_rule_02.py file in the pywot rule system demo directory)

In this code, I've added a couple of properties to detect the day of the week and judge whether it is a weekday or a weekend day.  The register_triggers method has changed to include two instances of AbsoluteTimeTrigger. The first has my weekday wake time and the second has the weekend wake time.  Both triggers will call the action method every day, but that method will ignore the one that is triggering on an inappropriate day.

Have you ever used a bedside table light as a morning alarm?  Having the light suddenly come on at full brightness when it is still dark in the bedroom is a rather rude way to wake up.  How about changing it so the light slowly increases from off to full brightness over the twenty minutes before the alarm time?

class MorningWakeRule(Rule):

    @property
    def today_is_a_weekday(self):
        weekday = datetime.now().date().weekday()  # M0 T1 W2 T3 F4 S5 S6
        return weekday in range(5)

    @property
    def today_is_a_weekend_day(self):
        return not self.today_is_a_weekday

    def register_triggers(self):
        self.weekday_morning_wake_trigger = AbsoluteTimeTrigger(
            "weekday_morning_wake_trigger", "06:10:00"
        )
        self.weekend_morning_wake_trigger = AbsoluteTimeTrigger(
            "weekend_morning_wake_trigger", "07:10:00"
        )
        return (self.weekday_morning_wake_trigger, self.weekend_morning_wake_trigger)

    def action(self, the_changed_thing, *args):
        if the_changed_thing is self.weekday_morning_wake_trigger:
            if self.today_is_a_weekday:
                asyncio.ensure_future(self._off_to_full())
        elif the_changed_thing is self.weekend_morning_wake_trigger:
            if self.today_is_a_weekend_day:
                asyncio.ensure_future(self._off_to_full())

    async def _off_to_full(self):
        for i in range(20):
            new_level = (i + 1) * 5
            self.Bedside_Ikea_Light.on = True
            self.Bedside_Ikea_Light.level = new_level
            await asyncio.sleep(60)
(see this code in situ in the morning_wake_rule_03.py file in the pywot rule system demo directory)

This example is a little more complicated because it involves a bit of asynchronous programming.  I wrote the asynchronous method, _off_to_full, to slowly increase the brightness of the light.  At the designated time, instead of turning the light on, the action method fires off the _off_to_full method asynchronously.  The action method ends, but _off_to_full runs on for the next twenty minutes, raising the brightness of the bulb one level each minute.  When the bulb is at full brightness, the _off_to_full method falls off the end of its loop and silently quits.

Controlling lights based on time criteria is a basic feature of any Home Automation System.  Absolute time rules are the starting point.  Next time, I hope to show how to use the Python package Astral to enable controlling lights with concepts like Dusk, Sunset, Dawn, Sunrise, the Golden Hour, the Blue Hour or phases of the moon.  We could even make a Philips HUE bulb show a warning during the inauspicious Rahukaal part of the day.

In a future posting, I'll introduce the concept of RuleThings.  These are versions of my rule system that are also Things to add to the Things Gateway.  This will enable three great features:
  1. the ability to enable or disable external rules from within the Things Gateway GUI
  2. the ability to set and adjust an external alarm time from within the GUI 
  3. the ability for my rules system to interact with the GUI Rule System

Stay tuned, I'm just getting started...


Hacks.Mozilla.OrgPayments, accessibility, and dead macros: MDN Changelog for September 2018

Done in September

Here’s what happened in September to the code, data, and tools that support MDN Web Docs:

Here’s the plan for October:

Launched MDN payments

We’ve been thinking about the direction and growth of MDN. We’d like a more direct connection with developers, and to provide them with valuable features and benefits they need to be successful in their web projects. We’ve researched several promising ideas, and decided that direct payments would be the first experiment. Logged-in users and 1% of anonymous visitors see the banner that asks them to directly support MDN. See Ali Spivak’s and Kadir Topal’s post, A New Way to Support MDN, for more information.

Payment page on MDN

Payment page on MDN

The implementation phase started in August, when Potato London was hired to design and implement payments. Potato did an amazing job executing on a 5-week schedule, including several design meetings, daily standups, and a trip from Bristol to London to meet face-to-face during the MDN work week. Thanks to the hard work from the Potato team, including Charlie Harding, Josh Jarvis, Matt Hall, Michał Macioszczyk, Philip Lackmaker, and Rachel Lee.

A full-room art exhibit of piles of sackcloth "pillows" that resemble potatoes, ranging from potato-sized to couch-sized.

In honour of Potato, Tate Modern is exhibiting Magdalena Abakanowicz’s “Embryology”

Mozilla staff across the organization helped keep this project on schedule, from writing copy to security reviews to pull request reviews and fixes, including me, Ali Spivak, Caglar Ulucenk, Diane Tate, Havi Hoffman, Kadir Topal, Kevin Fann, Ryan Johnson, and Schalk Neethling.

Improved MDN’s accessibility resources

After the work week, we met with accessibility experts for the Hack on MDN event. Volunteers and staff improved MDN’s coverage of accessibility. This included discussions of accessibility topics, improving and expanding MDN’s documentation, and writing related blog posts. It also included code changes, improving MDN’s color contrast and adding markup for screen readers. See Janet Swisher’s Hack on MDN: Better accessibility for MDN Web Docs for the details.

Seren Davies (@ninjanails) was there, and many nails were painted.

A circle of people showing their painted nails and looking at the camera

Clockwise from the top: Chris Mills (headless mode), Glenda Sims, Bruce Lawson (with camera), Irene Smith, Estelle Weyl, Michiel Bijl, and Seren Davies

Removed 15% of KumaScript macros

The MDN team got together for a week at the London office to reflect on the quarter and plan the coming year.

We discussed KumaScript, our macro language and rendering service that implements standardized sidebars, banners, and internal links. It’s been easier to analyze macros since we moved them to GitHub in November 2016. We’re happy with the performance gains, but code reviews take forever, translations are hard, and we’re slow to write tests. These issues contributed to an incident in August where a sidebar macro was broken, and all the API reference pages showed an error for a day (bug 1487640).

Staff is getting impatient with KumaScript, and wants to replace it with something better. Florian wrote up the notes from the meeting on Discourse as Next steps for KumaScript.

Florian, Will Bamberg, and Ryan Johnson started on the first step, identifying and removing unused or seldom-used macros, such as hello.ejs (PR 849).

Lionel Richie answering an 80's telephone from the video for "Hello"

Lionel Richie gets the news his favorite macro is gone.

The team removed 72 macros in about 2 weeks, and will continue removing them for the rest of the year. This will leave a smaller number of important macros, and we can analyze them for the next steps in the project.

Shipped tweaks and fixes

There were 379 PRs merged in September:

This includes some important changes and fixes:

66 pull requests were from first-time contributors:

Planned for October

October is the start of the fourth quarter. We have a few yearly goals to complete, including the Python 3 transition, the next round of the payments experiment, and performance experiments. This quarter also contains major holidays and the Mozilla All Hands, which mean it has about half the working days of other quarters. Time to get to work!

Move to Mozilla IT infrastructure

In October, Ryan Johnson, Ed Lim, Dave Parfitt, and Josh Mize will complete the setup of MDN services in the Mozilla IT infrastructure, and switch production traffic to the new systems. This will complete the migration of MDN from Mozilla Marketing to Emerging Technologies, started in February 2018. The team is organizing the switch-over checklist, and experimenting with the parallel staging environments.

The production switch is planned for October 29th, and will include a few hours when the site is in read-only mode.

The post Payments, accessibility, and dead macros: MDN Changelog for September 2018 appeared first on Mozilla Hacks - the Web developer blog.

The Rust Programming Language BlogAnnouncing Rust 1.29.2

The Rust team is happy to announce a new version of Rust, 1.29.2. Rust is a systems programming language focused on safety, speed, and concurrency.

If you have a previous version of Rust installed via rustup, getting Rust 1.29.2 is as easy as:

$ rustup update stable

If you don’t have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.29.2 on GitHub.

What’s in 1.29.2 stable

This patch release introduces a workaround for a miscompilation bug introduced in Rust 1.29.0. We haven’t found the root cause of the bug yet, but it showed up after an LLVM version upgrade, and it’s caused by an optimization. We disabled that optimization until the root cause is fixed.

This release also includes the rls-preview rustup component for Windows GNU users, which wasn’t included in the 1.29.0 release due to a build failure. We also added safeguards in the release infrastructure to prevent stable and beta releases with missing components for Tier 1 platforms in the future.

Nicholas NethercoteSlimmer and simpler static atoms

String interning is:

a method of storing only one copy of each distinct string value, which must be immutable. Interning strings makes some string processing tasks more time- or space-efficient at the cost of requiring more time when the string is created or interned. The distinct values are stored in a string intern pool. The single copy of each string is called its intern.

In Firefox’s code we use the term atom rather than intern, and atom table rather than string intern pool. I don’t know why; those names have been used for a long time.

Furthermore, Firefox distinguishes between static atoms, which are those that are chosen at compile time and can be directly referred to via an identifier, and dynamic atoms, which are added on-demand at runtime. This post is about the former.
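As a toy illustration of the general idea (in Python, deliberately unrelated to Gecko’s actual C++ data structures), an atom table is a map from string value to one canonical object, with static atoms registered up front and dynamic atoms added on demand:

class AtomTable:
    def __init__(self, static_strings):
        # "Static" atoms are registered once, up front.
        self._atoms = {s: s for s in static_strings}

    def atomize(self, s):
        # "Dynamic" atoms are added on demand; equal strings always
        # come back as the same canonical object.
        return self._atoms.setdefault(s, s)

atoms = AtomTable(["foobar", "div", "span"])
a = atoms.atomize("".join(["foo", "bar"]))
assert a is atoms.atomize("foobar")  # same object: comparison becomes identity

The payoff is the last line: once two strings are atomized, equality checks reduce to cheap identity checks.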

In 2016, Firefox’s implementation of static atoms was complex and inefficient. I filed a bug about this that included the following ASCII diagram showing all the data structures involved for a single atom for the string “foobar”.

static nsFakeStringBuffer<N=7> foobar_buffer (.data, 8+2N bytes)
/-----------------------------------------\ <------+
| int32_t mRefCnt = 1 // never reaches 0  |        | 
| uint32_t mSize = 14 // 7 x 16-bit chars |        | 
| u"foobar"           // the actual chars | <----+ | 
\-----------------------------------------/      | | 
                                                 | | 
PermanentAtomImpl (heap, 32 bytes)               | | 
/----------------------------------------------\ | | <-+
| void* vtablePtr    // implicit               | | |   | 
| uint32_t mLength = 6                         | | |   | 
| uint32_t mHash = ...                         | | |   | 
| char16_t* mString = @------------------------|-+ |   | 
| uintptr_t mRefCnt  // from NS_DECL_ISUPPORTS |   |   | 
\----------------------------------------------/   |   | 
                                                   |   | 
static nsIAtom* foobar (.bss, 8 bytes)             |   | 
/---\ <-----------------------------------+        |   | 
| @-|-------------------------------------|------------+
\---/                                     |        |   | 
                                          |        |   | 
static nsStaticAtom (.d.r.ro.l, 16 bytes) |        |   | 
(this element is part of a larger array)  |        |   | 
/------------------------------------\    |        |   | 
| nsStringBuffer* mStringBuffer = O--|----|--------+   | 
| nsIAtom** mAtom = @----------------|----+            | 
\------------------------------------/                 | 
                                                       | 
AtomTableEntry (heap, ~2 x 16 bytes[*])                | 
(this entry is part of gAtomTable)                     | 
/-------------------------\                            | 
| uint32_t mKeyHash = ... |                            | 
| AtomImpl* mAtom = @-----|----------------------------+
\-------------------------/                            | 
                                                       | 
StaticAtomEntry (heap, ~2 x 16 bytes[*])               | 
(this entry is part of gStaticAtomTable)               | 
/-------------------------\                            | 
| uint32_t mKeyHash = ... |                            | 
| nsIAtom* mAtom = @------|----------------------------+
\-------------------------/

[*] Each hash table is half full on average, so each entry takes up
approximately twice its actual size.

There is a lot going on in that diagram, but putting that all together gave the following overhead per atom.

  • Static shared: 0 bytes
  • Static unshared: 8 + 2(length+1) + 8 + 16 bytes
  • Dynamic: 32 + ~32 + ~32 bytes
  • Total bytes: (2(length+1) + 64 + ~64) * num_processes

(Although these atoms are “static” in the sense of being known at compile-time, a lot of the associated data was allocated dynamically.)

At the time there were about 2,700 static atoms, and avg_length was about 11, so the overhead was roughly:

  • 0 bytes fixed, and
  • 410,400 bytes per process. (Or more, depending on how the relocations required for the static pointers were represented, which depended on the platform.)

Today, things have improved greatly and now look like the following.

const char16_t[7] (.rodata, 2(N+1) bytes)
(this is detail::gGkAtoms.foobar_string)
/-----------------------------------------\ <--+
| u"foobar"           // the actual chars |    | 
\-----------------------------------------/    | 
                                               | 
const nsStaticAtom (.rodata, 12 bytes)         | 
(this is within detail::gGkAtoms.mAtoms[])     | 
/-------------------------------------\ <---+  | 
| uint32_t mLength:30 = 6             |     |  | 
| uint32_t mKind:2 = AtomKind::Static |     |  | 
| uint32_t mHash = ...                |     |  | 
| uint32_t mStringOffset = @----------|-----|--+
\-------------------------------------/     | 
                                            | 
constexpr nsStaticAtom* (0 bytes) @---------+
(this is nsGkAtoms::foobar)                 | 
                                            | 
AtomTableEntry (heap, ~2 x 16 bytes[*])     | 
(this entry is part of gAtomTable)          | 
/-------------------------\                 | 
| uint32_t mKeyHash = ... |                 | 
| nsAtom* mAtom = @-------|-----------------+
\-------------------------/

[*] Each hash table is half full on average, so each entry takes up
approximately twice its actual size.

That gives the following overhead per atom.

  • Static shared: 12 + 2(length+1) bytes
  • Static unshared: 0 bytes
  • Dynamic: ~32 bytes
  • Total: 12 + 2(length+1) bytes, plus ~32 bytes * num_processes

We now have about 2,300 static atoms and avg_length is still around 11, so the overhead is roughly:

  • 82,800 bytes fixed, and
  • 73,600 bytes per process.
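As a quick sanity check, here is the arithmetic behind both sets of figures as a small Python snippet; the per-atom costs come straight from the overhead lists above:

num_atoms_2016, num_atoms_now, avg_length = 2700, 2300, 11

# 2016 layout: everything was per-process
# (string buffer + atom object + two hash table entries).
per_process_2016 = num_atoms_2016 * (2 * (avg_length + 1) + 64 + 64)

# Current layout: the string and nsStaticAtom live in shared read-only
# data; only the main atom table entry is per-process.
fixed_now = num_atoms_now * (12 + 2 * (avg_length + 1))
per_process_now = num_atoms_now * 32

print(per_process_2016, fixed_now, per_process_now)  # 410400 82800 73600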

I won’t explain all the parts of the two diagrams, but it can be seen that we’ve gone from six pieces per static atom to four; the size and complexity of the remaining pieces are greatly reduced; there are no static pointers (only constexpr pointers and integral offsets) and thus no relocations; and there is a lot more interprocess sharing thanks to more use of const. Also, there is no need for a separate static atom table any more, because the main atom table is thread-safe and the HTML5 parser (the primary user of the separate static atom table) now has a small but highly effective static atoms cache.

Things that aren’t visible from the diagrams: atoms are no longer exposed to JavaScript code via XPIDL, there are no longer any virtual methods involved, and all atoms are defined in a single place (with no duplicates) instead of 7 or 8 different places. Notably, the last few steps were blocked for some time by a bug in MSVC involving the handling of constexpr.

The bug dependency tree gives a good indication of how many separate steps were involved in this work. If there is any lesson to be had here, it’s that small improvements add up over time.

Gijs KruitboschFirefox removes core product support for RSS/Atom feeds

TL;DR: from Firefox 64 onwards, RSS/Atom feed support will be handled via add-ons, rather than in-product.

What is happening?

After considering the maintenance, performance and security costs of the feed preview and subscription features in Firefox, we’ve concluded that it is no longer sustainable to keep feed support in the core of the product. While we still believe in RSS and support the goals of open, interoperable formats on the Web, we strongly believe that the best way to meet the needs of RSS and its users is via WebExtensions.

With that in mind, we have decided to remove the built-in feed preview feature, subscription UI, and the “live bookmarks” support from the core of Firefox, now that improved replacements for those features are available via add-ons.

Why are you doing this?

By virtue of being baked into the core of Firefox, these features have long had outsized maintenance and security costs relative to their usage. Making sure these features are as well-tested, modern and secure as the rest of Firefox would take a surprising amount of engineering work, and unfortunately the usage of these features does not justify such an investment: feed previews and live bookmarks are both used in around 0.01% of sessions.

As one example of those costs, “live bookmarks” use a very old, very slow way to access the bookmarks database, and it would take a lot of time and effort to bring it up to the performance standards we expect from Quantum. Likewise, the feed viewer has its own “special” XML parser, distinct from the main Firefox one, and has not had a significant update in styling or functionality in the last seven years. The engineering work we’d need to bring these features, in their current states, up to modern standards is complicated by how few automated tests there are for anything in this corner of the codebase.

These parts of Firefox are also missing features RSS users typically want. Live bookmarks don’t work correctly with podcasts, don’t work well with sync, and don’t work at all on any of Mozilla’s mobile browsers. They don’t even understand if an article has been read or not, arguably the most basic feature a feed reader should have. In short, the in-core RSS features would need both a major technical overhaul and significant design and maintenance investments to make them useful to a meaningful portion of users.

Looking forward, Firefox offers other features to help users discover and read content, and the move to WebExtensions will make it much easier for the Mozilla community to bring their own ideas for new features to life as well.

What will happen to my existing live bookmarks?

When we remove live bookmarks, we will:

  1. Export the details of your existing live bookmarks to an OPML file on your desktop, which other feed readers (including ones that are webextensions) support importing from (a minimal sketch of the OPML format follows this list).
  2. Replace the live bookmarks with “normal” bookmarks pointing to the URL associated with the live bookmark.
  3. Open a page on support.mozilla.org that explains what has happened and offers you options for how you could continue consuming those feeds.
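For anyone unfamiliar with OPML, here is a minimal sketch in Python of the kind of file step 1 describes; the feed names and URLs are made up for illustration, and the exact structure Firefox exports may differ:

import xml.etree.ElementTree as ET

# An OPML file is a <body> of <outline> elements, one per feed.
opml = ET.Element("opml", version="1.0")
head = ET.SubElement(opml, "head")
ET.SubElement(head, "title").text = "Live Bookmarks"
body = ET.SubElement(opml, "body")
ET.SubElement(body, "outline", {
    "type": "rss",
    "text": "Example Feed",                    # display name
    "xmlUrl": "https://example.com/feed.xml",  # the feed itself
    "htmlUrl": "https://example.com/",         # the associated site
})
ET.ElementTree(opml).write("live-bookmarks.opml",
                           encoding="utf-8", xml_declaration=True)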

This will happen as part of Firefox 64, scheduled for release in December 2018. We will not change anything on Firefox 60 ESR, but the next major ESR branch (currently expected to be Firefox 68 ESR) will include the same changes.

Hacks.Mozilla.OrgHome Monitoring with Things Gateway 0.6

When it comes to smart home devices, protecting the safety and security of your home when you aren’t there is a popular area of adoption. Traditional home security systems are either completely offline (an alarm sounds in the house, but nobody is notified) or professionally monitored (with costly subscription services). Self-monitoring of your connected home therefore makes sense, but many current smart home solutions still require ongoing service fees and send your private data to a centralised cloud service.

A floor plan style diagram describes uses of autonomous home monitoring with Project Things

The latest version of the Things Gateway rolls out today with new home monitoring features that let you directly monitor your home over the web, without a middleman. That means no monthly fees, your private data stays in your home by default, and you can choose from a variety of sensors from different brands.

Version 0.6 adds support for door sensors, motion sensors and customisable push notifications. Other enhancements include support for push buttons and a wider range of Apple HomeKit devices, as well as general robustness improvements and better error reporting.

Sensors

The latest update comes with support for door/window sensors and motion sensors, including the SmartThings Motion Sensor and SmartThings Multipurpose Sensor.

An illustration with icons of various sensors used in home monitoring

These sensors make great triggers for a home monitoring system and also report temperature, battery level and tamper detection.

Push Notifications

You can now create rules which trigger a push notification to your desktop, laptop, tablet or smartphone. An example use case for this is to notify you when a door has been opened or motion is detected in your home, but you can use notifications for whatever you like!

To create a rule which triggers a push notification, simply drag and drop the notification output and customize it with your own message.
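The rules engine is entirely GUI-driven, but for a sense of the underlying trigger/action model, here is a hedged sketch in the idiom of the pywot external rule system from K Lars Lohn's post above; the MotionSensor thing, the property names, and the notify helper are all hypothetical stand-ins, not the gateway's actual API:

class IntruderAlertRule(Rule):

    def register_triggers(self):
        # Hypothetical: treat the motion sensor thing as a trigger source.
        return (self.MotionSensor,)

    def action(self, the_changed_thing, the_changed_property, new_value):
        # notify() is a hypothetical helper standing in for the
        # gateway's push notification output.
        if the_changed_property == "motion" and new_value:
            notify("Motion detected in the living room!")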

A diagram showing how the Intruder Alarm is triggered by the interaction of the sensors.

Thanks to the power of Progressive Web Apps, if you’ve installed the gateway’s web app on your smartphone or tablet you’ll receive notifications even if the web app is closed.

Push Buttons

We’ve also added support for push buttons, like the SmartThings Button, which you can program to trigger any action you like using the rules engine. Use a button to simply turn a light on, or set a whole scene with multiple outputs.

Diagram of writing rules that trigger actions by the Push Button

Error Reporting

0.6 also comes with a range of robustness improvements including connection detection and error reporting. That means it will be easier to tell whether you have lost connectivity to the gateway, or one of your devices has dropped offline, and if something goes wrong with an add-on, you’ll be informed about it inside the gateway UI.

If a device has dropped offline, its icon is displayed as translucent until it comes back online. If your web app loses connectivity with the gateway, you’ll see a message appear at the bottom of the screen.

A diagram of all the sensors showing their status.

HomeKit

The HomeKit adapter add-on now supports a wider range of Apple HomeKit compatible devices including:

  • Smart plugs
  • Bridges
  • Light bulbs
  • Sensors

These devices use the built-in Bluetooth or WiFi support of your Raspberry Pi-based gateway, so you don’t even need a USB dongle.

Download

You can download version 0.6 today from the website. If you’ve already built your own Things Gateway with a Raspberry Pi and have it connected to the Internet, it should automatically update itself soon.

We can’t wait to see what creative things you do with all these new features. Be sure to let us know on Discourse and Twitter!

The post Home Monitoring with Things Gateway 0.6 appeared first on Mozilla Hacks - the Web developer blog.

The Mozilla BlogPocket Offers New Features to Help People Read, Watch and Listen across iOS, Android and Web

We know that when you save something to Pocket, there is a reason why. You are saving something you want to learn about, something that fascinates you, something that will help shape and change you. That’s why we’ve worked hard to make Pocket a dedicated, quiet place to focus so that you can come back and absorb what you save when you are ready.

The trick is, in the reality of our lives, it’s not always that simple. We don’t always have a quiet moment with a coffee cup in one hand and Pocket in the other. We have work to do, kids to take care of, school to attend. But we’ve always worked hard to ensure that Pocket gives you tools to fit content around your life, freeing you from the moment of distraction and putting you in control.

Today, we’re excited to share a new Pocket that makes it easier than ever to read, watch, and listen to all that you’ve saved, across all of the ways you use it: iOS, Android and Web.

Listen: A new way to read

You can listen to content you’ve saved from favorite publishers from all across the web—all from Pocket. Your Pocket list just became your own personal podcast, curated by you. Our new listen feature frees the content you’ve saved to fit into your busy life. It enables you to absorb articles whenever and wherever, whether you are driving, walking, working out, cooking, or on the train.

With the latest version of listen on iOS and Android, we’re introducing a more human sounding voice, powered by Amazon Polly, and the ability to play through your list easily and hands-free. To start listening, simply open Pocket and tap the new listen icon in the top left corner.

A new Pocket, just for you

 

We’ve intended Pocket’s app to be a different space from anything else on your device. It’s intentionally an uncluttered and distraction-free environment, built with care so you can really read.

We’ve doubled down on this with a fresh new design, tailored to let you focus, tune out the world and tune into your interests. When you open Pocket, you’ll see a Pocket that’s been redesigned top to bottom. We’ve created a new, clean, clutter-free article view to help you absorb and focus. We’ve introduced new app-wide dark and sepia themes to make reading comfortable, no matter what time of day it is. And we’ve updated fonts and typography to make long reads more comfortable.

“At Mozilla, we love the web. Sometimes we want to surf, and the Firefox team has been working on ways to surf like an absolute champ with features like Firefox Advance,” said Mark Mayo, Chief Product Officer, Firefox. “Sometimes, though, we want to settle down and read or listen to a few great pages. That’s where Pocket shines, and the new Pocket makes it even easier to enjoy the best of the web when you’re on the go in your own focused and uncluttered space. I love it.”

Working hard for you

We’re excited to get Pocket 7.0 into your hands today. You can get the latest Pocket on Google Play, App Store, and by joining our Web Beta.

As always, we want to hear from you – let us know what you think.

— Nate

 

The post Pocket Offers New Features to Help People Read, Watch and Listen across iOS, Android and Web appeared first on The Mozilla Blog.

Mozilla GFXWebRender newsletter #25

As usual, WebRender is making rapid progress. The team is working hard on nailing the remaining few blockers for enabling WebRender in Beta, after which focus will shift to the Release blockers. It’s hard to single out a particular highlight this week as the majority of bugs resolved were very impactful.

Notable WebRender and Gecko changes

  • Kats fixed a parallax scrolling issue.
  • Kats fixed the trello scrollbar jumping bug.
  • Kats fixed a crash.
  • Kats fixed various other things.
  • Matt finished the work on shader compile times. The startup times with and without WebRender are now on par.
  • Lee made WebRender more aggressively clean up evicted font instances in the font backend.
  • Lee fixed a bug with Windows variation fonts.
  • Emilio fixed some pixel snapping issues with fallback content.
  • Emilio fixed filter and fallback scaling issue.
  • Glenn fixed nested scroll frame clipping.
  • Glenn fixed large SVG clipping on google docs.
  • Glenn made various refactorings towards picture caching.
  • Glenn reduced the amount of work we do when building clip chain instances.
  • Nical added support for applying filters in linear space to WebRender.
  • Sotaro avoided scheduling repaints during animation if the animation values haven’t actually changed.
  • Sotaro fixed a frame scheduling bug.
  • Sotaro fixed a crash with cross process texture synchronization on windows.

Ongoing work

  • Bobby is improving memory usage by figuring out what set of OpenGL incantations with which planet alignment don’t cause ANGLE to opt into mipmapping where we don’t need it.
  • Chris and Andrew are looking into why we aren’t getting as much data as we hoped from the latest shield study.
  • Gankro making progress on blob recoordination.
  • Nical is adding support for running a subset of SVG filters on the GPU (instead of falling back to blob images).
  • A confabulate of graphics folks are thinking very hard about frame scheduling strategies to avoid uneven frame rates when certain parts of the pipeline are struggling.

Enabling WebRender in Firefox Nightly

  • In about:config set “gfx.webrender.all” to true,
  • restart Firefox.

QMODevEdition 63 Beta 14 Testday, October 12th

Hello Mozillians,

We are happy to let you know that Friday, October 12th, we are organizing Firefox 63 Beta 14 Testday. We’ll be focusing our testing on: Flash Compatibility and Block Autoplay V2.

Check out the detailed instructions via this etherpad.

No previous testing experience is required, so feel free to join us on #qa IRC channel where our moderators will offer you guidance and answer your questions.

Join us and help us make Firefox better!

See you on Friday!

Mozilla VR BlogFirefox Reality 1.0.1 - with recline mode

Firefox Reality 1.0.1 - with recline mode

Firefox Reality 1.0.1 is now available for download in the Viveport, Oculus, and Daydream app stores. This is a minor point release, focused on fixing several performance issues and adding a crash reporting UI and (by popular request!) a reclined viewing mode.

New Features:

  • Crash reporting
  • Reclined viewing mode
  • MSAA in immersive mode

Bug Fixes:

  • Improved WebVR stability
  • Added some missing keys to keyboard
  • General stability fixes

Full release notes can be found in our GitHub repo here.

We’ve been collecting feedback from users, and are working on a more fully-featured version for November with performance improvements, bookmarks, and an improved movie/theater mode (including 180/360 video support).

Keep the feedback coming, and don't forget to check out new content weekly!

Mozilla B-Teamhappy bmo push day!

happy bmo push day! This is a “just general bugfixes” sort of release.

release tag

the following changes have been pushed to bugzilla.mozilla.org:

  • [1496803] Suggested component links ignore cloned bug data
  • [1497234] Remove Personas Plus GitHub link from Custom Bug Entry Forms index
  • [1497070] In-page links are broken due to <base href> added during Mojo migration
  • [1497437] The crash graph should display Exact Match results by default
  • [623384] Use Module::Runtime…

View On WordPress

Mozilla B-Teamhappy bmo push day (last friday)

happy bmo push day (last friday)

(a few things went out on friday because of some API breakage, and didn’t get posted. woops)

release tag

the following changes have been pushed to bugzilla.mozilla.org:

  • [1495349] Remove Persona extension
  • [1263502] Add duplicates to /rest/bug/id
  • [1495906] After mojo update /latest/configuration API call no longer works and gives page not found
  • [1496233] “Dunno” -> “Don’t know” in approval request…

View On WordPress

Mozilla Security BlogDelaying Further Symantec TLS Certificate Distrust

Due to a long list of documented issues, Mozilla previously announced our intent to distrust TLS certificates issued by the Symantec Certification Authority, which is now a part of DigiCert. On August 13th, the next phase of distrust was enabled in Firefox Nightly. In this phase, all TLS certificates issued by Symantec (including their GeoTrust, RapidSSL, and Thawte brands) are no longer trusted by Firefox (with a few small exceptions).

In my previous update, I pointed out that many popular sites are still using these certificates. They are apparently unaware of the planned distrust despite DigiCert’s outreach, or are waiting until the release date that was communicated in the consensus plan to finally replace their Symantec certificates. While the situation has been improving steadily, our latest data shows well over 1% of the top 1-million websites are still using a Symantec certificate that will be distrusted.

Unfortunately, because so many sites have not yet taken action, moving this change from Firefox 63 Nightly into Beta would impact a significant number of our users. It is unfortunate that so many website operators have waited to update their certificates, especially given that DigiCert is providing replacements for free.

We prioritize the safety of our users and recognize the additional risk caused by a delay in the implementation of the distrust plan. However, given the current situation, we believe that delaying the release of this change until later this year when more sites have replaced their Symantec TLS certificates is in the overall best interest of our users. This change will remain enabled in Nightly, and we plan to enable it in Firefox 64 Beta when it ships in mid-October.

We continue to strongly encourage website operators to replace Symantec TLS certificates immediately. Doing so improves the security of their websites and allows the tens of thousands of Firefox Nightly users to access them.

The post Delaying Further Symantec TLS Certificate Distrust appeared first on Mozilla Security Blog.

Firefox NightlyThese Weeks in Firefox: Issue 47

Highlights

Friends of the Firefox team

Resolved bugs (excluding employees)

Project Updates

Add-ons / Web Extensions

  • Search engines are being converted to WebExtension style packaging (tracker bug)
  • about:addons is getting some visual tweaks.
  • Content scripts can now read from a <canvas> that they have modified.
  • Small fixes/improvements to the identity and menus APIs.

Browser Architecture

Developer Tools

Fission

Lint

Mobile

  • Firefox for iOS 14 is on its way to the App Store. The second release candidate will go out to QA in the week of the 8th for final verification.
  • Android Components 0.25 shipped. Highlights of this release:
    • Improvements
      • We have a new component feature-intent that provides intent processing functionality.
      • Added WorkManager implementation for updating experiment configurations in the background.
    • Fixes
      • Fullscreen mode would only take up part of the screen.
      • A crash that could happen when loading invalid URLs
      • A bug in FlatFileExperimentStorage that caused updated experiment configurations to not be saved to disk.
    • More here at the changelog

Performance

Policy Engine

  • New Policy for Security Devices (PKCS #11) landed
  • New Policy for DNS Over HTTPS being reviewed
  • Up Next:
    • Installing certificates
    • Browser Startup Page
    • Changing Locale
  • Plan is for all policy changes to be in by the end of the week

Search and Navigation

Address Bar & Search

Places

Web Payments

Mozilla Reps Community: Community Coordinator role

The Reps program is evolving in order to align with Mozilla’s changes in how we perceive communities. Part of those changes is the Mission Driven Mozillians project, where the Reps are involved.

We (the Reps Council) believe that the Reps program has a natural place inside this project because of the Reps’ experience, skills, and knowledge in leading, growing, and helping communities in their daily lives.

Based on the work that has been done in the Mission Driven Mozillians project (video: https://discourse.mozilla.org/t/a-quick-video-intro-to-mission-driven-mozillians/25912), two types of volunteer leadership have been identified:

  1. Volunteers who have been identified inside the community as experts based on their knowledge and skills on a specific functional area – functional area experts
  2. Volunteers who have been identified as experts based on their ability to coordinate, support and expand communities – community coordinators.

For the first category, functional areas are able to easily identify the right fit for the position based on different knowledge criteria and years of experience in the project. However, for the second category there is neither a well-defined role nor a set of guidelines for volunteers who are interested in community building. As a result, historically, people from the first category took over the community coordinator roles without necessarily being properly trained for them.

For that reason, the Reps Council has worked on a definition of the community coordinator role, since we believe that the Reps are a natural fit for this role.

Suggested definition of community coordinator role:

Community coordinator volunteers are aligned and committed Mozillians who are interested in:

  1. finding and connecting new talent with Mozilla projects they are contributing to
  2. developing communities on a functional and/or local level
  3. supporting local communities and Mozilla to have an effective and decentralized environment for contribution
  4. creating collaborations with other local communities in an effort to spread Mozilla’s mission and expand Mozilla’s outreach in the open source ecosystem


In order to describe the responsibilities of the role in more detail, we have specified what the role is going to look like, based on the agreements the Mission Driven Mozillians group has crafted.

Where people hold coordinating roles, they should be reviewed regularly

  • This creates opportunities for new, diverse leaders to emerge.
  • Ensures continuous support from the communities they serve.
  • Prevents toxic individuals from maintaining power indefinitely.
  • Allows space for individuals to receive feedback and support to better thrive in their role.

What does that mean for Community Experts?

  • All Community experts and any volunteers who hold roles within this group have fixed terms and are reviewed regularly. Established processes will help contributors give feedback and review these roles.

Responsibilities should be clearly communicated and distributed

  • Creates more opportunities for more people.
  • Avoids gatekeeping and power accumulation.
  • Reduces burnout and over-reliance on an individual by sharing accountability.
  • Creates leadership pathways for new people.
  • Potentially increases diversity.
  • An emphasis on responsibility over title avoids unnecessary “authority labels”.

What does that mean for the Community Experts?

  • All Community experts have public role descriptions that clearly articulate their responsibilities, and a culture of delegating those responsibilities across their co-volunteers when needed.

When people are in a coordinating role, they should abide by standards, and be accountable for fulfilling their responsibilities

  • This builds confidence and support for individuals and these roles from community members and staff.
  • Ensures that everyone has shared clarity on expectations and success.
  • Creates an environment where the CPG is applied consistently.
  • Increases the consistency in roles across the organization.

What does that mean for the Community Experts?

  • All Community experts should keep their activities public and visible to everyone, and they are accountable for fulfilling their responsibilities.

People in coordinating roles should follow and model Mozilla’s diversity & inclusion values


  • Creates a culture of inclusion that invites participation from new voices.
  • Encourages the inclusion of diverse voices and groups.
  • Creates an environment where the CPG is applied consistently.
  • Enables leadership pathways that explicitly consider inclusion dimensions.

What does that mean for the Community Experts?

  • The Community experts group has processes that are optimized to be welcoming to diverse audiences, ensures that all community building roles and activities across the organization are available to everyone, and has communication channels that are properly used and accessible to everyone. All Community experts are trained on the CPG before joining and are confident in flagging any lack of accountability and/or violations of the CPG.

People with coordinating roles should be supported and recognized in a set of standard ways across Mozilla


  • Enables people to have equal access to training and growth opportunities regardless of what part of the organisation they contribute to.
  • Allows people to follow their passions/skills instead of just contributing for rewards.
  • Roles have clear definitions and avoid labels that create a feeling of authority.
  • We get a shared understanding of the kinds of responsibilities that exist.

What does that mean for the Community Experts?

  • This means that we recognize all community building activities across the organization equally, and that we support and provide training for skills improvement in community building and self-development.


These are the agreements that the Reps Council is suggesting and we need your feedback!

All Reps should agree because of their role as community coordinators in their communities. Let us know what you think on Discourse!

This blogpost is part of the work that the Reps Council and Reps Peers have been doing for the last quarter. The blogpost has been authored by Daniele Scasciafratte.

Mozilla Open Innovation Team: Taming triage: Partnering with Topcoder to harness the power of the crowd

New innovation challenge is looking for an algorithm to automate bug triaging in Bugzilla

We are excited to announce the launch of the Bugzilla Automatic Bug Triaging Challenge, a crowdsourcing competition sponsored by Mozilla and hosted by Topcoder, the world’s largest network of software designers, developers, testers, and data scientists. The goal of the competition is to automate triaging (categorization by products and software components) of new bugs submitted to Bugzilla, Mozilla’s web-based bug tracking system. By cooperating with Topcoder, Mozilla is expanding its open innovation capabilities to include specialized crowdsourcing communities and competition mechanisms.

Mozilla’s Open Innovation strategy is guided by the principle of being Open by Design derived from a comprehensive 2017 review of how Mozilla works with open communities. The strategy sets forth a direction of expanding the organisation’s external outreach beyond its traditional base of core contributors: open source software developers, lead users, and Mozilla volunteers. Our cooperation with Topcoder is an example of reaching out to a global community of data scientists.

Why Bugs?

Mozilla is using crowdsourcing to scale the effort we can bring to our product and technology development through collaborative crowds. Such a “capacity crowdsourcing” has already been successfully applied to the Common Voice project, Mozilla’s initiative to crowdsource a large dataset of human voices for use in speech technology.

However, we know that engaging crowds can have positive impact on other areas of Mozilla product and technology development. In particular, we focus our attention on processes that require large amounts of manual engineering work; automating these processes can result in significant lowering of development and operating cost.

Take, for example, bug triaging in Bugzilla, a manual process of categorization (by products and software components) of hundreds of bugs submitted each month to Mozilla’s web-based bug tracking system. Although the accuracy of the manual bug triaging is very high, it consumes valuable time of experienced engineers, which may otherwise be spent on other high-priority projects.

Why Topcoder?

Over the years, the Firefox Test Engineering team responsible for bug triaging has accumulated a lot of data that could potentially be used to automate the process. Working together with the Open Innovation team, the engineers have engaged Topcoder, which many organizations, including NASA, use as a platform to solve complex algorithmic problems.

The result of this collaboration has been the Bugzilla Automatic Bug Triaging Challenge, whose objective is to create an algorithm for automated bug triaging in Bugzilla with accuracy comparable to that of the manual process. To select the winners of the competition, the Topcoder architect team has developed a scoring mechanism; to qualify for a prize (ranging between $1,000 and $8,000), an algorithm must reach a certain minimal score. Refer to the Challenge description for more detail.

The Challenge will be open for submissions until October 26. Although Mozilla employees are not allowed to participate, we do encourage all members of Mozilla’s communities to take part in the competition.



The Mozilla Blog: Announcing a Competition for Ethics in Computer Science, with up to $3.5 Million in Prizes

The Responsible Computer Science Challenge — by Omidyar Network, Mozilla, Schmidt Futures, and Craig Newmark Philanthropies — calls on professors to integrate ethics into undergraduate computer science courses


With great code comes great responsibility.

Today, computer scientists wield tremendous power. The code they write can be used by billions of people, and influence everything from what news stories we read, to what personal data companies collect, to who gets parole, insurance, or housing loans.

Software can empower democracy, heighten opportunity, and connect people continents away. But when it isn’t coupled with responsibility, the results can be drastic. In recent years, we’ve watched biased algorithms and broken recommendation engines radicalize users, promote racism, and spread misinformation.

That’s why Omidyar Network, Mozilla, Schmidt Futures, and Craig Newmark Philanthropies are launching the Responsible Computer Science Challenge: an ambitious initiative to integrate ethics and accountability into undergraduate computer science curricula and pedagogy at U.S. colleges and universities, with up to $3.5 million in prizes.

Says Kathy Pham, computer scientist and Mozilla Fellow co-leading the challenge:

“In a world where software is entwined with much of our lives, it is not enough to simply know what software can do. We must also know what software should and shouldn’t do, and train ourselves to think critically about how our code can be used. Students of computer science go on to be the next leaders and creators in the world, and must understand how code intersects with human behavior, privacy, safety, vulnerability, equality, and many other factors.”

Pham adds: “Just like how algorithms, data structures, and networking are core computer science classes, we are excited to help empower faculty to also teach ethics and responsibility as an integrated core tenet of the curriculum.”

Pham is currently a Senior Fellow and Adjunct Lecturer at Harvard University, and an alum of Google, IBM, and the United States Digital Service at the White House. She will work closely with Responsible Computer Science applicants and winners.

Says Paula Goldman, Global Lead of the Tech and Society Solutions Lab at Omidyar Network: “To ensure technology fulfills its potential as a positive force in the world, we are supporting the growth of a tech movement that is guided by the emerging mantra to move purposefully and fix things. Treating ethical reflection and discernment as an opt-in sends the wrong message to computer science students: that ethical thinking can be an ancillary exploration or an afterthought, that it’s not part and parcel of making code in the first place. Our hope is that this effort helps ensure that the next generation of tech leaders is deeply connected to the societal implications of the products they build.”

Says Craig Newmark, founder of craigslist and Craig Newmark Philanthropies: “As an engineer, when you build something, you can’t predict all of the consequences of what you’ve made; there’s always something. Nowadays, we engineers have to understand the importance and impact of new technologies. We should aspire to create products that are fair to and respectful of people of all backgrounds, products that make life better and do no harm.”

Says Thomas Kalil, Chief Innovation Officer at Schmidt Futures: “Information and communication technologies are transforming our economy, society, politics, and culture. It is critical that we equip the next generation of computer scientists with the tools to advance the responsible development of these powerful technologies – both to maximize the upside and understand and manage the risks.”

Says Mary L. Gray, a Responsible Computer Science Challenge judge: “Computer science and engineering have deep domain expertise in securing and protecting data. But when it comes to drawing on theories and methods that attend to people’s ethical rights and social needs, CS and engineering programs are just getting started. This challenge will help the disciplines of CS and engineering identify the best ways to teach the next generation of technologists what they need to know to build more socially responsible and equitable technologies for the future.”

(Gray is senior researcher at Microsoft Research; fellow at Harvard University’s Berkman Klein Center for Internet & Society; and associate professor in the School of Informatics, Computing, and Engineering with affiliations in Anthropology and Gender Studies at Indiana University.)

The Responsible Computer Science Challenge is launching alongside an open letter signed by 35 industry leaders, calling for more responsibility in computer science curricula.

Responsible Computer Science Challenge details

Through the Responsible Computer Science Challenge, Omidyar Network, Mozilla, Schmidt Futures, and Craig Newmark Philanthropies are supporting the conceptualization, development, and piloting of curricula that integrate ethics with computer science. Our hope is that this coursework will not only be implemented, but also scaled to colleges and universities across the country — and beyond.

Between December 2018 and July 2020, we will award up to $3.5 million in prizes to promising proposals. The challenge is open to both individual professors and collaborative teams consisting of professors, graduate students, and teaching assistants. We’re seeking educators who are passionate about teaching not only computer science, but how it can be deployed in a responsible, positive way.

The challenge consists of two stages:

In Stage 1, we will seek concepts for deeply integrating ethics into existing undergraduate computer science courses, either through syllabi changes (e.g. including a reading or exercise on ethics in each class meeting) or teaching methodology adjustments (e.g. pulling teaching assistants from ethics departments). Stage 1 winners will receive up to $150,000 each to develop and pilot their ideas. Winners will be announced in April 2019.

In Stage 2, we will support the spread and scale of the most promising approaches developed in Stage 1. Stage 2 winners will receive up to $200,000 each and will be announced in summer 2020.

Projects will be judged by an external review committee of academics, tech industry leaders, and others, who will use evaluation criteria developed jointly by Omidyar Network and Mozilla.

Judges include Bobby Schnabel, professor of computer science at the University of Colorado Boulder and former president of ACM; Maria Klawe, president of Harvey Mudd College; Joshua Cohen, Marta Sutton Weeks Professor of Ethics in Society at Stanford University; Brenda Darden Wilkerson, president and CEO of the Anita Borg Institute; and others.

We are accepting Initial Funding Concepts for Stage 1 now through December 13, 2018. Apply.

~

Pham concludes: “In the short term, we can create a new wave of engineers. In the long term, we can create a culture change in Silicon Valley and beyond — and as a result, a healthier internet.”

The Responsible Computer Science Challenge is part of Mozilla’s mission to empower the people and projects on the front lines of internet health work. Other recent awards include our WINS Challenges — which connect unconnected Americans — and the Mozilla Gigabit Community Fund.

Omidyar Network’s Tech and Society Solutions Lab draws on Omidyar Network’s long-standing belief in the promise of technology to create opportunity and social good, as well as the concern about unintended consequences that can result from technological innovation. The team aims to help technologists prevent, mitigate, and correct societal downsides of technology — and maximize positive impact.


ABOUT OMIDYAR NETWORK

Omidyar Network is a philanthropic investment firm dedicated to harnessing the power of markets to create opportunity for people to improve their lives. Established in 2004 by eBay founder Pierre Omidyar and his wife Pam, the organization invests in and helps scale innovative organizations to catalyze economic and social change. Omidyar Network has committed more than $1 billion to for-profit companies and nonprofit organizations that foster economic advancement and encourage individual participation across multiple initiatives, including Digital Identity, Education, Emerging Tech, Financial Inclusion, Governance & Citizen Engagement, and Property Rights. You can learn more here: www.omidyar.com.


ABOUT SCHMIDT FUTURES

Schmidt Futures is a philanthropic initiative, founded by Eric and Wendy Schmidt, that seeks to improve societal outcomes through the thoughtful development of emerging science and technologies that can benefit humanity. As a venture facility for public benefit, they invest risk capital in the most promising ideas and exceptional people across disciplines. Learn more at schmidtfutures.com.


ABOUT CRAIG NEWMARK PHILANTHROPIES

Craig Newmark Philanthropies was created by craigslist founder Craig Newmark to support and connect people and drive broad civic engagement. The organization works to advance people and grassroots organizations that are getting stuff done in areas that include trustworthy journalism, voter protection, gender diversity in technology, and veterans and military families. For more information, please visit: CraigNewmarkPhilanthropies.org


Chris Ilias: How to edit Firefox for iOS bookmarks

Using Bookmarks in Firefox for iOS is relatively simple. When visiting a page, you can add it to your bookmarks list. When you pull up your list, the page title will appear as one of the list items. In some cases, the page title or URL may not be exactly what you want to bookmark. For example, if I go to Dark Sky and bookmark it, the bookmark URL will include my current GPS coordinates, and the bookmark title will include my current address.

But I don’t want the bookmark to be specific to my location! In Firefox for iOS, there doesn’t appear to be a way to edit the bookmark title and URL…or is there?

To edit Firefox for iOS bookmarks, you’ll need to edit them on the Windows/Mac/Linux version (aka Desktop).

  1. If you don’t already have a Firefox account set up, set it up and sync your bookmarks to the desktop version of Firefox.
  2. Open Firefox on your desktop and open the Library window.
    Click the Library button, then go to Bookmarks and click Show All Bookmarks.
  3. In the sidebar, select Mobile Bookmarks. It should be the last item in the list. That folder contains your Firefox for iOS bookmarks.
  4. Edit your mobile bookmarks. You can even add folders!

Your bookmarks in Firefox for iOS should be automatically updated.

Mozilla Security Blog: Trusting the delivery of Firefox Updates

Providing a web browser that you can depend on year after year is one of the core tenets of the Firefox security strategy. We put a lot of time and energy into making sure that the software you run has not been tampered with while being delivered to you.

In an effort to increase trust in Firefox, we regularly partner with external firms to verify the security of our products. Earlier this year, we hired X41 D-Sec GmbH to audit the mechanism by which Firefox ships updates, known internally as AUS for Application Update Service. Today, we are releasing their report.

Four researchers spent a total of 27 days running a technical security review of both the backend service that manages updates (Balrog) and the client code that updates your browser. The scope of the audit included a cryptographic review of the update signing protocol, fuzzing of the client code, pentesting of the backend and manual code review of all components.

Mozilla Security continuously reviews and tests the security of Firefox, but external verification is a critical part of our operations security strategy. We are glad to say that X41 did not find any critical flaw in AUS, but they did find various issues ranging from low to high severity, as well as 21 side findings.

X41 D-Sec GmbH found the security level of AUS to be good. No critical vulnerabilities have been identified in any of the components. The most serious vulnerability discovered is a Cross-Site Request Forgery (CSRF) vulnerability in the administration web application interface that might allow attackers to trigger unintended administrative actions under certain conditions. Other vulnerabilities identified were memory corruption issues, insecure handling of untrusted data, and stability issues (Denial of Service (DoS)). Most of these issues were constrained by the requirement to first bypass cryptographic signatures.

Three vulnerabilities ranked as high, and all of them were located in the administration console of Balrog, the backend service of Firefox AUS, which is protected behind multiple factors of authentication inside our internal network. The extra layers of security effectively lower the risk of the vulnerabilities found by X41, but we fixed the issues they found regardless.

X41 found a handful of bugs in the C code that handles update files. Thankfully, the cryptographic signatures prevent a bad actor from crafting an update file that could impact Firefox. Here again, designing our systems with multiple layers of security has proven useful.

Today, we are making the full report accessible to everyone in an effort to keep Firefox open and transparent. We are also opening up our bug tracker so you can follow our progress in mitigating the issues and side findings identified in the report.

Finally, we’d like to thank X41 for their high-quality work on conducting this security audit. And, as always, we invite you to help us keep Firefox secure by reporting issues through our bug bounty program.


This Week In Rust: This Week in Rust 255

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's crate is pest, a Parsing Expression Grammar-based parser library. Thanks to CAD97 for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

136 pull requests were merged in the last week

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs
Tracking Issues & PRs

New RFCs

No new RFCs were proposed this week.

Upcoming Events

Online
Asia
Europe
North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

Rust is a Fast Programming Language. Rust programs are therefore “fast,” especially so if you write them with the correct observations to the arcane ley lines of birth and death known as “lifetimes,” and also remember to pass cargo the --release flag.

– Adam Perry blogging about lolbench

Thanks to Pascal Hertleif for the suggestion!

Please submit your quotes for next week!

This Week in Rust is edited by: nasa42, llogiq, and Flavsditz.

Discuss on r/rust.

Cameron Kaiser: TenFourFox FPR10b1 available

TenFourFox Feature Parity 10 beta 1 is now available (downloads, hashes, release notes). This version is mostly about expanded functionality, adding several new DOM and JavaScript ES6 features, and security changes to match current versions of Firefox. Not everything I wanted to get done for this release got done, particularly on the JavaScript side (only one of the ES6 well-known symbols updates was finished in time), but with Firefox 63 due on the 22nd we'll need this period for sufficient beta testing, so here it is.

The security changes include giving document-level (i.e., docshell) data: URIs unique origins to reduce cross-site scripting attack surface (for more info, see this Mozilla blog post from Fx57). This middle ground should reduce issues with the older codebase and add-on compatibility problems, but it is possible some historical add-ons may be affected by this and some sites may behave differently. However, many sites now assume this protection, so it is important that we do the same. If you believe a site is behaving differently because of this, toggle the setting security.data_uri.unique_opaque_origin to false and restart the browser. If the behaviour changes, then this was the cause and you should report it in the comments. This covers most of the known exploits of the old Firefox behaviour and I'll be looking at possibly locking this down further in future releases.
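
As a rough illustration of what a unique opaque origin changes (the snippet below is hypothetical, not taken from the TenFourFox code):

```js
// A page opens a document via a data: URI:
const win = window.open("data:text/html,<h1>hi</h1>");

// Previously, the data: document inherited this page's origin, so a script
// injected into it could reach back through window.opener and script this
// page. With unique opaque origins, the data: document is cross-origin to
// every other document, so that access is blocked by the same-origin policy.
```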

The other notable security change is support for noopener, using the soon-to-be-current implementation in Firefox 63. This feature prevents a new window that was (presumably unwittingly) opened onto a malicious page from being used by that page to manipulate the page that opened it, and many sites already support it.
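
For reference, this is roughly what opting in looks like on the web side (example.com is a placeholder); both forms leave window.opener null in the new window, so the opened page has no handle back to its opener:

```js
// Script form: "noopener" in the features string severs the link.
const win = window.open("https://example.com/", "_blank", "noopener");
console.log(win); // null — the opener gets no handle to the new window either

// Markup form: <a href="https://example.com/" target="_blank" rel="noopener">
```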

This release also now prefs MSE (and VP9) to on by default, since YouTube seems to require it. We do have AltiVec acceleration for VP9 (compare with libvpx for Chromium on little-endian PowerPC), but VP9 is a heavier codec than VP8, and G4 and low-end G5 systems will not perform as well. You can still turn it off for sites that seem to do better with it disabled.

There are two major sites known to be broken: Facebook has a display glitch (worse on 10.5 than 10.4, for reasons as yet unexplained), and Citibank does not load account information. Facebook can be worked around by disabling Ion JavaScript acceleration, but I don’t advise this because of the profound performance impact, and I suspect it’s actually just fixing a symptom because backing out multiple changes in JavaScript didn’t seem to make any difference. As usual, if you can stand Facebook Basic, it really works a lot better on low-power systems like ours. Unfortunately, Citibank has no such workaround; changing various settings or even user agents doesn’t make any difference. Citibank does not work in versions prior to Fx51, so the culprit could be any combination of features newly landed in the timeframe up to that point. This is a yuuuge range to review and very slow going. I don’t have a fix yet for either of these problems, nor an ETA, and I’m not likely to until I better understand what’s wrong. Debugging Facebook in particular is typically an exercise in forcible hair removal because of their multiple dependencies and heavy minification, and their developer account has never replied to my queries to get non-minified sources.

So, in the absence of a clear problem to repair, my plan for FPR11 is to try to get additional well-known symbols supported (which should be doable) and further expand our JavaScript ES6/ES7 support in general. Unfortunately for that last goal, I’m hitting the wall on two features which, because of their size, are very intractable and which are starting to become important for continued compatibility. In general my preference is to implement new features in as compartmentalized a fashion as possible, and preferably in a small number of commits that can be backed out without affecting too much else. These features, however, are enormous in scope and in the changes they require, and depend on many other smaller changes we either don’t need, don’t want or don’t implement. They also tend to affect code outside of JavaScript, such as the script loading environment and the runtime, which is problematic because we have very poor test coverage for those areas.

The first is modules (we do support classes, but not modules), introduced in Firefox 60. The metabug for this is incredibly intimidating and even the first "milestone 0" has a massive number of dependencies. The script loader changes could probably be implemented with some thought, but there is no way a single programmer working in his spare time can do the entire amount of work required and deal with all the potential regressions, especially when rebuilding JavaScript takes up to 20 minutes and rebuilding the browser takes several hours or more. The silver lining is that some sites may need refactoring to take advantage of modules, so wide adoption is not likely to occur in the near term until more frontend development tools start utilizing them.
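
For context, this is the module syntax that’s at stake — a minimal two-file sketch with hypothetical file names:

```js
// math.js — an ES6 module declares its exports explicitly:
export function square(x) {
  return x * x;
}

// main.js — loaded with <script type="module" src="main.js">:
import { square } from "./math.js";
console.log(square(4)); // 16
```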

The second, unfortunately, is already being used now: async functions, introduced in Firefox 52, and really co-routines by any other name. The work to support them in the parser is not trivial but I've mostly completed it, and some of that code is (silently) in FPR10. Unfortunately, the await keyword works in terms of ES6 Promises, which we definitely do not support (we only have support for DOM Promises, which are not really interchangeable at the code level), and which extend hooks into the browser event loop to enable them to run asynchronously. You can see the large number of needed changes and dependencies in that Github issue as well as the various changes and regressions that resulted. This problem is higher priority because the feature is tempting to developers and some sites already make use of them (you usually see an odd syntax error and stuff doesn't load in those situations); the code changes needed to convert a function to asynchronous operation are relatively minor while yielding (ahem) a potentially large benefit in terms of perceived speed and responsiveness. However, there is no good way to make this work without ES6 Promise, and the necessary parser changes may cause code to run that can never run correctly even if the browser accepts it.
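
To make the dependency concrete, here is a minimal sketch (fetchData is a hypothetical Promise-returning function) of why await can’t be separated from ES6 Promises — the async form is essentially sugar over a Promise chain:

```js
// An async function always returns a Promise, and `await` suspends the
// function until the awaited Promise settles:
async function load() {
  const data = await fetchData();
  return data.length;
}

// …which desugars to roughly this, so without a real ES6 Promise
// implementation hooked into the event loop, there is nothing to await:
function loadDesugared() {
  return fetchData().then(data => data.length);
}
```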

I don't have good solutions for these looming problems but I'll try to continue making progress on what I know I can fix or implement and we'll examine what this means for feature parity as time progresses. Meanwhile, please try out the beta and post your comments, and expect FPR10 final later this month.

Hacks.Mozilla.Org: Calls between JavaScript and WebAssembly are finally fast 🎉

At Mozilla, we want WebAssembly to be as fast as it can be.

This started with its design, which gives it great throughput. Then we improved load times with a streaming baseline compiler. With this, we compile code faster than it comes over the network.

So what’s next?

One of our big priorities is making it easy to combine JS and WebAssembly. But function calls between the two languages haven’t always been fast. In fact, they’ve had a reputation for being slow, as I talked about in my first series on WebAssembly.

That’s changing, as you can see:

This means that in the latest version of Firefox Beta, calls between JS and WebAssembly are faster than non-inlined JS to JS function calls. Hooray! 🎉

Performance chart showing time for 100 million calls. wasm-to-js before: about 750ms. wasm-to-js after: about 450ms. JS-to-wasm before: about 5500ms. JS-to-wasm after: about 450ms. monomorphic JS-to-wasm before: about 5250ms. monomorphic JS-to-wasm after: about 250ms. wasm-to-builtin before: about 6000ms. wasm-to-builtin after: about 650ms.

So these calls are fast in Firefox now. But, as always, I don’t just want to tell you that these calls are fast. I want to explain how we made them fast. So let’s look at how we improved each of the different kinds of calls in Firefox (and by how much).

But first, let’s look at how engines do these calls in the first place. (And if you already know how the engine handles function calls, you can skip to the optimizations.)

How do function calls work?

Functions are a big part of JavaScript code. A function can do lots of things, such as:

  • assign variables which are scoped to the function (called local variables)
  • use functions that are built-in to the browser, like Math.random
  • call other functions you’ve defined in your code
  • return a value

A function with 4 lines of code: assigning a local variable with let w = 8; calling a built-in function with Math.random(); calling a user-defined function named randGrid(); and returning a value.
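
Reconstructed as code, the function in the figure above looks something like this (the exact body is inferred from the figure’s caption):

```js
function example() {
  let w = 8;             // assign a local variable
  let r = Math.random(); // call a built-in function
  let grid = randGrid(); // call a user-defined function
  return grid;           // return a value
}
```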

But how does this actually work? How does writing this function make the machine do what you actually want? 

As I explained in my first WebAssembly article series, the languages that programmers use — like JavaScript — are very different than the language the computer understands. To run the code, the JavaScript we download in the .js file needs to be translated to the machine language that the machine understands. 

Each browser has a built-in translator. This translator is sometimes called the JavaScript engine or JS runtime. However, these engines now handle WebAssembly too, so that terminology can be confusing. In this article, I’ll just call it the engine.

Each browser has its own engine:

  • Chrome has V8
  • Safari has JavaScriptCore (JSC)
  • Edge has Chakra
  • and in Firefox, we have SpiderMonkey

Even though each engine is different, many of the general ideas apply to all of them. 

When the browser comes across some JavaScript code, it will fire up the engine to run that code. The engine needs to work its way through the code, going to all of the functions that need to be called until it gets to the end.

I think of this like a character going on a quest in a videogame.

Let’s say we want to play Conway’s Game of Life. The engine’s quest is to render the Game of Life board for us. But it turns out that it’s not so simple…

Engine asking Sir Conway function to explain life. Sir Conway sends the engine to the Universum Neu function to get a Universe.

So the engine goes over to the next function. But the next function will send the engine on more quests by calling more functions.

Engine going to Universum Neu to ask for a universe. Universum Neu sends the engine to Randgrid.

The engine keeps having to go on these nested quests until it gets to a function that just gives it a result. 

Randgrid giving the engine a grid.

Then it can come back to each of the functions that it spoke to, in reverse order.

The engine returning through all of the functions.

If the engine is going to do this correctly — if it’s going to give the right parameters to the right function and be able to make its way all the way back to the starting function — it needs to keep track of some information. 

It does this using something called a stack frame (or a call frame). It’s basically like a sheet of paper that has the arguments to go into the function, says where the return value should go, and also keeps track of any of the local variables that the function creates. 

A stack frame, which is basically a form with lines for arguments, locals, a return value, and more.

The way it keeps track of all of these slips of paper is by putting them in a stack. The slip of paper for the function that it is currently working with is on top. When it finishes that quest, it throws out the slip of paper. Because it’s a stack, there’s a slip of paper underneath (which has now been revealed by throwing away the old one). That’s where we need to return to. 

This stack of frames is called the call stack.

a stack of stack frames, which is basically a pile of papers

The engine builds up this call stack as it goes. As functions are called, frames are added to the stack. As functions return, frames are popped off of the stack. This keeps happening until we get all the way back down and have popped everything out of the stack.
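
Here’s a toy model of that bookkeeping — purely illustrative, since a real engine manages frames in registers and raw memory rather than JavaScript objects:

```js
const callStack = [];

function callFunction(fn, args) {
  // Push a new frame: the arguments, space for locals, and (implicitly,
  // via this function's own return) where to resume afterwards.
  callStack.push({ fn: fn.name, args, locals: {} });
  const result = fn(...args);
  callStack.pop(); // the quest is done, so throw the slip of paper away
  return result;
}

callFunction(Math.max, [3, 7]); // 7, with a frame pushed and popped
```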

So that’s the basics of how function calls work. Now, let’s look at what made function calls between JavaScript and WebAssembly slow, and talk about how we’ve made this faster in Firefox.

How we made WebAssembly function calls fast

With recent work in Firefox Nightly, we’ve optimized calls in both directions — both JavaScript to WebAssembly and WebAssembly to JavaScript. We’ve also made calls from WebAssembly to built-ins faster.

All of the optimizations that we’ve done are about making the engine’s work easier. The improvements fall into two groups:

  • Reducing bookkeeping — which means getting rid of unnecessary work to organize stack frames
  • Cutting out intermediaries — which means taking the most direct path between functions

Let’s look at where each of these came into play.

Optimizing WebAssembly » JavaScript calls

When the engine is going through your code, it has to deal with functions that are speaking two different kinds of language—even if your code is all written in JavaScript. 

Some of them—the ones that are running in the interpreter—have been turned into something called byte code. This is closer to machine code than JavaScript source code, but it isn’t quite machine code (and the interpreter does the work). This is pretty fast to run, but not as fast as it can possibly be.

Other functions — those which are being called a lot — are turned into machine code directly by the just-in-time compiler (JIT). When this happens, the code doesn’t run through the interpreter anymore.

So we have functions speaking two languages: byte code and machine code.

I think of these different functions which speak these different languages as being on different continents in our videogame. 

A game map with two continents—One with a country called The Interpreter Kingdom, and the other with a country called JITland

The engine needs to be able to go back and forth between these continents. But when it does this jump between the different continents, it needs to have some information, like the place it left from on the other continent (which it will need to go back to). The engine also wants to separate the frames that it needs. 

To organize its work, the engine gets a folder and puts the information it needs for its trip in one pocket — for example, where it entered the continent from. 

It will use the other pocket to store the stack frames. That pocket will expand as the engine accrues more and more stack frames on this continent.

A folder with a map on the left side, and the stack of frames on the right.

Sidenote: if you’re looking through the code in SpiderMonkey, these “folders” are called activations.

Each time it switches to a different continent, the engine will start a new folder. The only problem is that to start a folder, it has to go through C++. And going through C++ adds significant cost.

This is the trampolining that I talked about in my first series on WebAssembly. 

Every time you have to use one of these trampolines, you lose time. 

In our continent metaphor, it would be like having to do a mandatory layover on Trampoline Point for every single trip between two continents.

Same map as before, with a new Trampoline country on the same continent as The Interpreter Kingdom. An arrow goes from The Interpreter Kingdom, to Trampoline, to JITland.

So how did this make things slower when working with WebAssembly? 

When we first added WebAssembly support, we had a different type of folder for it. So even though JIT-ed JavaScript code and WebAssembly code were both compiled and speaking machine language, we treated them as if they were speaking different languages. We were treating them as if they were on separate continents.

Same map with Wasmania island next to JITland. There is an arrow going from JITland to Trampoline to Wasmania. On Trampoline, the engine asks a shopkeeper for folders.

This was unnecessarily costly in two ways:

  • it creates an unnecessary folder, with the setup and teardown costs that come from that
  • it requires trampolining through C++ (to create the folder and do other setup)

We fixed this by generalizing the code to use the same folder for both JIT-ed JavaScript and WebAssembly. It’s kind of like we pushed the two continents together, making it so you don’t need to leave the continent at all.

SpiderMonkey engineer Benjamin Bouvier pushing Wasmania and JITland together

With this, calls from WebAssembly to JS were almost as fast as JS to JS calls.

Same perf graph as above with wasm-to-JS circled.

We still had a little work to do to speed up calls going the other way, though.

Optimizing JavaScript » WebAssembly calls

Even in the case of JIT-ed JavaScript code, where JavaScript and WebAssembly are speaking the same language, they still use different customs. 

For example, to handle dynamic types, JavaScript uses something called boxing.

Because JavaScript doesn’t have explicit types, types need to be figured out at runtime. The engine keeps track of the types of values by attaching a tag to the value. 

It’s as if the JS engine put a box around this value. The box contains that tag indicating what type this value is. For example, the zero at the end would mean integer.

Two binary numbers with a box around them, with a 0 label on the box.

In order to compute the sum of these two integers, the system needs to remove that box. It removes the box for a and then removes the box for b.

Two lines, the first with boxed numbers from the last image. The second with unboxed numbers.

Then it adds the unboxed values together.

Three lines, with the third line being the two numbers added together

Then it needs to add that box back around the results so that the system knows the result’s type.

Four lines, with the fourth line being the numbers added together with a box around it.

This turns what you expect to be 1 operation into 4 operations… so in cases where you don’t need to box (like statically typed languages) you don’t want to add this overhead.
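
A toy model of that boxing overhead — illustrative only, since real engines tag spare bits of the value itself rather than allocating a wrapper object:

```js
const INT_TAG = 0;

const box = (value) => ({ tag: INT_TAG, value }); // attach the type tag
const unbox = (boxed) => boxed.value;             // strip it off again

function boxedAdd(a, b) {
  // One machine add becomes four steps: unbox a, unbox b, add, re-box.
  return box(unbox(a) + unbox(b));
}

boxedAdd(box(2), box(3)); // { tag: 0, value: 5 }
```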

Sidenote: JavaScript JITs can avoid these extra boxing/unboxing operations in many cases, but in the general case, like function calls, JS needs to fall back to boxing.

This is why WebAssembly expects parameters to be unboxed, and why it doesn’t box its return values. WebAssembly is statically typed, so it doesn’t need to add this overhead. WebAssembly also expects values to be passed in at a certain place — in registers rather than the stack that JavaScript usually uses. 

If the engine takes a parameter that it got from JavaScript, wrapped inside of a box, and gives it to a WebAssembly function, the WebAssembly function wouldn’t know how to use it. 

Engine giving a wasm function boxed values, and the wasm function being confused.

So, before it gives the parameters to the WebAssembly function, the engine needs to unbox the values and put them in registers.

To do this, it would go through C++ again. So even though we didn’t need to trampoline through C++ to set up the activation, we still needed to do it to prepare the values (when going from JS to WebAssembly).

The engine going to Trampoline to get the numbers unboxed before going to Wasmania

Going to this intermediary is a huge cost, especially for something that’s not that complicated. So it would be better if we could cut the middleman out altogether.

That’s what we did. We took the code that C++ was running — the entry stub — and made it directly callable from JIT code. When the engine goes from JavaScript to WebAssembly, the entry stub un-boxes the values and places them in the right place. With this, we got rid of the C++ trampolining.

I think of this as a cheat sheet. The engine uses it so that it doesn’t have to go to the C++. Instead, it can unbox the values when it’s right there, going between the calling JavaScript function and the WebAssembly callee.

The engine looking at a cheat sheet for how to unbox values on its way from JITland to Wasmania.

So that makes calls from JavaScript to WebAssembly fast. 

Perf chart with JS to wasm circled.

But in some cases, we can make it even faster. In fact, we can make these calls even faster than JavaScript » JavaScript calls in many cases.

Even faster JavaScript » WebAssembly: Monomorphic calls

When a JavaScript function calls another function, it doesn’t know what the other function expects. So it defaults to putting things in boxes.

But what about when the JS function knows that it is calling a particular function with the same types of arguments every single time? Then that calling function can know in advance how to package up the arguments in the way that the callee wants them. 

JS function not boxing values

This is an instance of the general JS JIT optimization known as “type specialization”. When a function is specialized, it knows exactly what the function it is calling expects. This means it can prepare the arguments exactly how that other function wants them… which means that the engine doesn’t need that cheat sheet or the extra unboxing work.

This kind of call — where you call the same function every time — is called a monomorphic call. In JavaScript, for a call to be monomorphic, you need to call the function with the exact same types of arguments each time. But because WebAssembly functions have explicit types, calling code doesn’t need to worry about whether the types are exactly the same — they will be coerced on the way in.

If you can write your code so that JavaScript is always passing the same types to the same WebAssembly exported function, then your calls are going to be very fast. In fact, these calls are faster than many JavaScript to JavaScript calls.
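
In practice, that looks something like this — add.wasm and its add export are hypothetical:

```js
async function demo() {
  const { instance } = await WebAssembly.instantiateStreaming(fetch("add.wasm"));
  const add = instance.exports.add;

  let total = 0;
  for (let i = 0; i < 1000000; i++) {
    // Every call passes the same types — (Number, Number) — so the call
    // site stays monomorphic and the JIT can skip the boxing entirely.
    total = add(total, i);
  }
  return total;
}
```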

Perf chart with monomorphic JS to wasm circled

Future work

There’s only one case where an optimized call from JavaScript » WebAssembly is not faster than JavaScript » JavaScript. That is when JavaScript has in-lined a function.

The basic idea behind in-lining is that when you have a function that calls the same function over and over again, you can take an even bigger shortcut. Instead of having the engine go off to talk to that other function, the compiler can just copy that function into the calling function. This means that the engine doesn’t have to go anywhere — it can just stay in place and keep computing. 

I think of this as the callee function teaching its skills to the calling function.

Wasm function teaching the JS function how to do what it does.

This is an optimization that JavaScript engines make when a function is being run a lot — when it’s “hot” — and when the function it’s calling is relatively small. 

We can definitely add support for in-lining WebAssembly into JavaScript at some point in the future, and this is a reason why it’s nice to have both of these languages working in the same engine. This means that they can use the same JIT backend and the same compiler intermediate representation, so it’s possible for them to interoperate in a way that wouldn’t be possible if they were split across different engines. 

Optimizing WebAssembly » Built-in function calls

There was one more kind of call that was slower than it needed to be: when WebAssembly functions were calling built-ins. 

Built-ins are functions that the browser gives you, like Math.random. It’s easy to forget that these are just functions that are called like any other function.

Sometimes the built-ins are implemented in JavaScript itself, in which case they are called self-hosted. This can make them faster because it means that you don’t have to go through C++: everything is just running in JavaScript. But some functions are just faster when they’re implemented in C++.

Different engines have made different decisions about which built-ins should be written in self-hosted JavaScript and which should be written in C++. And engines often use a mix of both for a single built-in.

In the case where a built-in is written in JavaScript, it will benefit from all of the optimizations that we have talked about above. But when that function is written in C++, we are back to having to trampoline.

Engine going from wasmania to trampoline to built-in

These functions are called a lot, so you do want calls to them to be optimized. To make these calls faster, we’ve added a fast path specific to built-ins. When you pass a built-in into WebAssembly, the engine sees that what you’ve passed it is one of the built-ins, at which point it knows how to take the fast path. This means you don’t have to go through that trampoline that you would otherwise.
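
Concretely, that means passing the built-in itself as a WebAssembly import — wasmBytes and the import/export names below are made up:

```js
const imports = {
  env: {
    // Passing Math.cos directly lets the engine recognize the import as a
    // built-in and take the fast path, rather than a generic JS call.
    cos: Math.cos,
  },
};

WebAssembly.instantiate(wasmBytes, imports).then(({ instance }) => {
  instance.exports.computeAngles(); // inside, wasm calls env.cos directly
});
```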

It’s kind of like we built a bridge over to the built-in continent. You can use that bridge if you’re going from WebAssembly to the built-in. (Sidenote: The JIT already did have optimizations for this case, even though it’s not shown in the drawing.)

A bridge added between wasmania and built-in

With this, calls to these built-ins are much faster than they used to be.

Perf chart with wasm to built-in circled.

Future work

Currently, the built-ins that we support this for are mostly limited to the Math built-ins. That’s because WebAssembly only has support for integers and floats as value types.

That works well for the math functions because they work with numbers, but it doesn’t work out so well for other things like the DOM built-ins. So currently when you want to call one of those functions, you have to go through JavaScript. That’s what wasm-bindgen does for you.

Engine going from wasmania to the JS Data Marshall Islands to built-in

But WebAssembly is getting more flexible types very soon. Experimental support for the current proposal has already landed in Firefox Nightly behind the pref javascript.options.wasm_gc. Once these types are in place, you will be able to call these other built-ins directly from WebAssembly without having to go through JS.

The infrastructure we’ve put in place to optimize the Math built-ins can be extended to work for these other built-ins, too. This will ensure many built-ins are as fast as they can be.

But there are still a couple of built-ins where you will need to go through JavaScript. For example, if those built-ins are called as if they were using new or if they’re using a getter or setter. These remaining built-ins will be addressed with the host-bindings proposal.

Conclusion

So that’s how we’ve made calls between JavaScript and WebAssembly fast in Firefox, and you can expect other browsers to do the same soon.

Performance chart showing time for 100 million calls. wasm-to-js before: about 750ms. wasm-to-js after: about 450ms. JS-to-wasm before: about 5500ms. JS-to-wasm after: about 450ms. monomorphic JS-to-wasm before: about 5250ms. monomorphic JS-to-wasm after: about 250ms. wasm-to-builtin before: about 6000ms. wasm-to-builtin after: about 650ms.

Thank you

Thank you to Benjamin Bouvier, Luke Wagner, and Till Schneidereit for their input and feedback.


Cameron Kaiser: TenFourFox and Hack2Win

After a diligent analysis of the test cases and our existing code, TenFourFox is not known to be vulnerable to the exploits repaired in Firefox 62.0.3/60ESR. Even if the flaws in question actually existed as such in our code, they would require a PowerPC-specific exploit due to some architecture-dependent aspects of the attacks.

Mozilla VR Blog: Close Conversation is the Future of Social VR


In many user experience (UX) studies, the researchers give the participants a task and then observe what happens next. Most research participants are earnest and usually attempt to follow instructions. In this study, however, research participants mostly ignored instructions and, once they entered the immersive space, just started goofing off with each other and testing the limits of embodiment.

The goal of this blog post is to share insights from the Hubs by Mozilla usability study that other XR creators could apply to building a multi-user space.

The Extended Mind recruited pairs of people who communicate online with each other every day, which led to testing Hubs with people who have very close connections. There were three romantic partners in the study, one pair of roommates, and one set of high school BFFs. The reason that The Extended Mind recruited relatively intimate pairs of people is that they wanted to understand the potential for Hubs as a communication platform for people who already have good relationships. They also believe that they got more insight into how people would use Hubs in a natural environment than they would have by bringing in one person at a time and asking that person to hang out in VR with a stranger they had just met.

The two key insights that this blog post will cover are the ease of conversation that people had in Hubs and the playfulness that they embodied when using it.

Conversation Felt Natural

When people entered Hubs, the first thing they did was look around to find the other person in the space. Regardless of whether they were on mobile, laptop, tablet, or in a VR headset, their primary goal was to connect. Once they located the other person, they immediately gave their impressions of the other person’s avatar and asked what they looked like to their companion. There was an element of fun in finding the other person and then discussing avatar appearances, including one romantic partner sincerely telling his companion:

“You are adorable,”

…which indicates that his warm feelings for her in the real world easily translated to her avatar.

The researchers created conversational prompts for all of the research participants such as “Plan a potential vacation together,” but participants ignored the instructions and just talked about whatever caught their attention. Mostly people were self-directed in exploring their capabilities in the environment and wanted to communicate with their companion. They relished having visual cues from the other person and experiencing embodiment:

“Having a hand to move around felt more connected. Especially when we both had hands.”

“It felt like we were next to each other.”

The youngest participants in the study were in their early twenties and stated that they avoided making phone calls. They rated Hubs more highly than a phone conversation due to the improved sense of connection it gave them.

[Hubs is] “better than a phone call.”

Some even considered it superior to texting for self-expression:

“Texting doesn’t capture our full [expression]”

The data from this study shows that communication using 2D devices and VR headsets has strong potential for personal conversation among friends and partners. People appeared to feel strong connections with their partners in the space. They wanted to revisit the space in the future with groups of close friends and share it with them as well.

Participants Had Fun

Due to participants feeling comfortable in the space and confident in their ability to express themselves, they relaxed during the testing session and let their sense of humor show through.

The researchers observed a lot of joke-telling and goofiness from people. A consequence of feeling embodied in the VR headset was acting in ways to entertain their companion:

“Physical humor works here.”

Users also discovered that Hubs has a rubber duck mascot that will quack when it is clicked and it will replicate itself. Playing with the duck was very popular.

“The duck makes a delightful sound.”

“Having things to play with is good.”

Here's one image to illustrate the rubber ducks multiplying quickly:

A future research question could be to determine exactly where the balance lies between giving people something like the duck as a fidget activity versus a formal board game or card game. The lack of formality in Hubs appeared to actually bolster the storytelling aspects that users brought to it. Two users established a whole rubber duck Law & Order-type TV show where they gave the ducks roles:

“Good cop duckie, bad cop duckie.”

People either forgot or ignored the researchers’ instructions to plan a vacation or other prompts because they were immersed in the fun and connection together. However, watching the users tell each other stories and experiment in the space was more entertaining and led to more insights.

While it wasn’t actually tested in this study, there are ways to add media and GIFs to Hubs to further enhance communication and comedy.

Summary: A Private Space That Let People Be Themselves

The Extended Mind believes that the privacy of the Hubs space bolstered people’s intimate experiences. Because people must have a unique URL to gain access, the number of people in the room was limited. That gave people a sense of control and likely led to them feeling comfortable experimenting with the layers of embodiment and having fun with each other.

The next blog post will cover additional insights about how the different environments in Hubs impacted their behavior and what other XR creators can apply to their own work.

This article is part two of the series that reviews the user testing conducted on Mozilla’s social XR platform, Hubs. Mozilla partnered with Jessica Outlaw and Tyesha Snow of The Extended Mind to validate that Hubs was accessible, safe, and scalable. The goal of the research was to generate insights about the user experience and deliver recommendations of how to improve the Hubs product.

To read part one of the blog series overview, which focused on accessibility, click here.

Mozilla VR BlogDrawing and Photos, now in Hubs

As we covered in our last update, we recently added the ability for you to bring images, videos, and 3D models into the rooms you create in Hubs. This is a great way to bring content to view together in your virtual space, and it all works right in your browser.

We’re excited to announce two new features today that will further enrich the ways you can connect and collaborate in rooms you create in Hubs: drawing and easy photo uploads.

Hubs now has a pen tool you can use at any time to start drawing in 3D space. This is a great way to express ideas, spark your creativity, or just doodle around. You can draw by holding the pen in your hand if you are in Mixed Reality, or draw using your PC’s mouse or trackpad.

The new pen tool shines when combined with our media support. You can draw on images together or make a 3D sketch on top of a model from Sketchfab. You can also draw all over the walls if you want!

You can easily change the size and color of your pen strokes. You can write out text or even model out a rough 3D sketch.

If you’re using a phone, we’ve also added an easy way to quickly upload photos or take a snapshot with your phone’s camera. Just tap the photos button at the bottom of the screen to jump right into a photo picker.

This is a great way to share photos from your library or take a quick picture of something nearby. Selfies can be fun too, but don’t be surprised if people draw on your photo!

We hope you have fun with these new features. As always, please join us in the #social channel on the WebVR Slack or file a GitHub issue if you have feedback!

Chris H-CCanadian Holiday Inbound! Thanksgiving 2018 (Monday, October 8)

Monday is Thanksgiving in Canada[1], so please excuse your Canadian colleagues for not being in the office.

We’ll likely be spending the day wondering. We’ll be wondering how family could make such a mess, wondering why we ate so much pie, wondering if it’s okay to eat turkey for breakfast, wondering if pie can be a meal and dessert at the same time, wondering how we fit the leftovers in the fridge, wondering why we bothered hosting this year, wondering whose sock that is by the stairs, wondering when the snow will melt[2] or start to fall[3].

We’ll also be wondering who started the family tradition of having cornbread instead of buttered rolls, wondering where the harvest tradition began, wondering about what all goes into harvesting our food, wondering what it means to be thankful, wondering what we are thankful for, wondering why we ate the evening meal at 4pm, wondering whether 4pm is too late to have a nap.

With heads full of wondering and bellies full of food, we wish you a wonderful Thanksgiving. We’ll be back to work, if not our normal shapes, on Tuesday.

:chutten

PS: Canadian Pro-tip: Leftover food often turns into regret – but this regret can turn back into food if you leave it in the fridge for a little while!

[1]: https://mana.mozilla.org/wiki/display/PR/Holidays%3A+Canada
[2]: Calgary had a (record) snowfall of 32.8cm (1’1″) on Oct 2: https://www.cbc.ca/news/canada/calgary/calgary-october-snow-day-two-1.4848394
[3]: Snow’s a-coming, already or eventually: https://weather.gc.ca/canada_e.html

Cameron KaiserFruitfly and the Power Mac

First, some updates on TenFourFox FPR10. There are a couple of security-related changes, some DOM updates and some JavaScript ES6 compatibility updates which should fix a few site glitches. However, I'm also trying to track down a debug-only regression in layout present in at least FPR9 and possibly earlier, and there are at least two major sites broken (not regressions) where I do not have a clear understanding of why. Unfortunately, it is unlikely there will be a solution in time since FPR10 is timed to come out with the next Firefox on October 23ish.

New information came to light recently regarding Fruitfly, also detected by some antivirus systems as Quimitchin, which was discovered quietly infecting machines in January 2017. An unusual Mac-specific APT that later was found to have Windows variants (PDF), Fruitfly was able to capture screenshots, keystrokes, images from the webcam and system information from infected machines. At that time it was believed it was at most a decade old, placing the earliest possible infections in that timeline around 2007 and thus after the Intel transition. The author, 28-year-old Phillip Durachinsky, was eventually charged in January of this year with various crimes linked to its creation and deployment.

Late last month, however, court documents demonstrated that Durachinsky actually created the first versions of Fruitfly when he was 14 years old, i.e., 2003. This indicates there must be a PowerPC-compatible variant which can infect systems going back to at least Panther and probably Jaguar, and squares well with contemporary analyses that found Fruitfly had "ancient" system calls in its code, including, incredibly, GWorld and QuickDraw ones.

The history the FBI relates suggests that early infections were initiated manually by him, largely for the purpose of catching compromising webcam pictures and intercepting screenshots and logins when users entered keystrokes suggesting sexual content. If you have an iSight with the iris closed, though, there was no way he could trigger that because of the hardware cutoff, another benefit of having an actual switch on our computer cameras (except the iMac G5, which was a bag of hurt anyway and one of the few Power Macs I don't care for).

Fruitfly spreads by attacking weak passwords for AFP (Apple Filing Protocol) servers, as well as RDP, VNC, SSH and (on later Macs) Back to My Mac. Fortunately, however, it doesn't seem to get its hooks very deep into the OS. It can be relatively easily found by looking for a suspicious launch agent in ~/Library/LaunchAgents (a Power Mac would undoubtedly be affected by variant A, so check ~/Library/LaunchAgents/com.client.client.plist first), and if this file is present, launchctl unload it, delete it, and delete either ~/.client or ~/fpsaud depending on the variant the system was infected with. After that, change all your passwords and make sure you're not exposing those services where you oughtn't anymore!
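
For readers who would rather script that check, here is a minimal sketch (mine, not an official removal tool) that looks for the variant A markers described above; it only reports what it finds and leaves the unloading and deleting to you:

import os

HOME = os.path.expanduser("~")
SUSPICIOUS_PATHS = [
    # launch agent used by variant A
    os.path.join(HOME, "Library/LaunchAgents/com.client.client.plist"),
    # payloads, depending on the variant
    os.path.join(HOME, ".client"),
    os.path.join(HOME, "fpsaud"),
]

found = [a_path for a_path in SUSPICIOUS_PATHS if os.path.exists(a_path)]
if found:
    print("Possible Fruitfly infection; inspect and remove these files:")
    for a_path in found:
        print("   ", a_path)
else:
    print("No Fruitfly variant A markers found.")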

For the very early pre-Tiger versions, however, assuming they exist, no one currently seems to know how those might have been kicked off because those systems lack launchd. It's possible it could have insinuated itself as a login item or into the system startup scripts, or potentially the Library/StartupItems folder, but it's probable we'll never know for sure because whatever infected systems dated from that time period have either been junked or paved over. Nevertheless, if you find a file named ~/.client on your system, regardless of version, that you didn't put there, assume you are infected and proceed accordingly.

Mozilla Reps CommunityRep of the Month – September 2018

Please join us in congratulating Umesh Agarwal, our Rep of the Month for September 2018!

Umesh is from Kharagpur, India and works as Big Data Splunk Architect & Administrator. He is an Open Source Geek and his other areas of interest are Cyber Security and Big Data Analysis. He is a passionate Mozillian and an amazing contributor for more than 6 years. Umesh served as Reps Council Member in 2016 and currently he is an active Reps Mentor.

He manages the Mozilla India Gear Store based in Pune, India. Umesh is a localization community manager for the Hindi language. He recently hosted the successful ‘Mozilla L10n Hindi, Marathi, Gujrati Community Meetup 2018’, held in Pune on the 1st and 2nd of September 2018. This event was part of the Mozilla L10n community events in 2018. The event was of a very high standard, productive, and ended with great success metrics. Approximately twenty leaders of four regional languages traveled from various parts of the country to participate in the event.

Thanks Umesh, keep rocking the Open Web! :tada: :tada:

To congratulate him, please head over to the Discourse topic!

The Firefox FrontierFirefox: Helping you to tackle the midterms on your terms

For many people, a confusing tangle of cyberjargon and misinformation has combined to make the idea of turning to the web for election information a weird proposition. We can’t let … Read more

The post Firefox: Helping you to tackle the midterms on your terms appeared first on The Firefox Frontier.

Mozilla GFXWebRender newsletter #24

Hi there, this is your twenty-fourth WebRender newsletter. A lot of work is in progress this week, so the change list is pretty short. To compensate, I added a list of noteworthy ongoing work which hasn’t landed yet but will probably land soon, to give a rough idea of what’s keeping us busy.

Without further ado,

Notable WebRender and Gecko changes

  • Bobby improved WebRender’s code documentation.
  • Jeff fixed a crash.
  • Kats fixed a bug that was causing issues with parallax scrolling type of effects.
  • Kats improved the CI infrastructure for WebRender.
  • Gankro cleaned up some of the blob image rendering code.
  • Andrew improved the memory recycling logic for shared images.
  • Glenn fixed a crash.
  • Glenn fixed a bug causing content to leak out of their iframes.
  • Glenn improved the performance of building clip chains.
  • Glenn made progress (1, 2) towards an upcoming picture caching infrastructure.
  • Nical fixed various sources of UI freezes.
  • Nical fixed a crash with large shadow radii.
  • Sotaro fixed an issue related to moving tabs between windows.
  • Sotaro removed a synchronous operation that sometimes blocked the compositor for a long time.

Ongoing work

  • Bobby is working on reducing WebRender’s memory usage.
  • Kvark is working on taming the infamous backface-visibility property.
  • Matt and Dan are almost done with big startup time improvements related to the time we spend compiling shaders.
  • Gankro and Jeff are working on blob image rasterization performance.
  • Andrew has some more shared image recycling changes in the works.
  • Sotaro is reducing the amount of extra work we do for some types of async animations.
  • Chris is fixing some rendering correctness bugs.
  • Glenn is working on the picture caching infrastructure (this one is a bit longer term than the rest of this list but we expect it will bring a lot of improvements on some difficult cases).

 

Daniel PocockRenewables, toilets, wifi and freedom

The first phase of the Dublin project was recently completed: wifi, a toilet and half a kitchen. An overhaul of the heating and hot water infrastructure and energy efficiency improvements are coming next.

The Irish state provides a range of Government grants for energy efficiency and renewable energy sources (not free as in freedom, but free as in cash).

The Potterton boiler is the type of dinosaur this funding is supposed to nudge into extinction. Nonetheless, it has recently passed another safety inspection:

Not so smart

This relic has already had a date with the skip, making way for smart controls:

Renewable energy

Given the size of the property and the funding possibilities, I'm keen to fully explore options for things like heat pumps and domestic thermal stores.

Has anybody selected and managed this type of infrastructure using entirely free solutions, for example, Domoticz or Home Assistant? Please let me know; I'm keen to try these things, contribute improvements and blog about the results.

Next project: intrusion detection

With neighbours like these, who needs cat burglars? Builders assure me he has been visiting on a daily basis and checking their work.

Time for a smart dog to stand up to this trespasser?

Mozilla Cloud Services BlogUpcoming WebPush Shield Study

WebPush does more than let you know you’ve got an upcoming calendar appointment or bug you about subscribing to a site’s newsletter (particularly one you just visited and have zero interest in). Turns out that WebPush is a pretty good way for us to do a number of things as well. Things like let you send tabs from one install of Firefox to another, or push out important certificate updates. We’ll talk about those more when we get ready to roll them out, but for now, we need to know if some of the key bits work.

One of the things we need to test is if our WebPush servers are up to the job of handling traffic, or if there might be any weird issue we might not have thought of. We’ve run tests, we’ve simulated loads, but honestly, nothing compares to real life for this sort of thing.

In the coming weeks, we’re going to be running an experiment. We’ll be using the Shield service to have your browser set up a web push connection. No data will go over that connection aside from the minimal communication that we need. It shouldn’t impact how you use Firefox, or annoy you with pop-ups. Chances are, you won’t even notice we’re doing this.

Why are we telling you if it’s something you wouldn’t notice? We like to be open and clear about things. You might see a reference to “dom.push.alwaysConnect” in about:config and wonder what it might mean. Shield lets us flip that switch and gives us control over how many folks hit our servers at any given time. That’s important when you want to test your server and things don’t go as planned.

In this case “dom.push.alwaysConnect” will ask your browser to open a connection to our servers. This is so we can test if our servers can handle the load. Why do it this way instead of a load test? Turns out that trying to effectively load test this is problematic. It’s hard to duplicate “real world” load and all the issues that come with it. This test will help us make sure that things don’t fall over when we make this a full feature. When that configuration flag is set to “true” your browser will try to connect to our push servers.

You can always opt out of the study, if you want, but we hope that you don’t mind being part of this. The more folks we have, and the more diverse the group, the more certain we can be that our servers are up for the challenge of keeping you safer and more in control.

Hacks.Mozilla.OrgA New Way to Support MDN

Starting this week, some visitors may notice something new on the MDN Web Docs site, the comprehensive resource for information about developing on the open web.

We are launching an experiment on MDN Web Docs, seeking direct support from our users in order to accelerate growth of our content and platform. Not only has our user base grown exponentially in the last few years (with corresponding platform maintenance costs), but we also have a large list of cool new content, features, and programs we’d like to create that our current funding doesn’t fully cover.

In 2015, on our tenth anniversary (read about MDN’s evolution in the 10-year anniversary post), MDN had four million active monthly users. Now, just three years later, we have 12 million. Our last big platform update was in 2013. By asking for, and hopefully receiving, financial assistance from our users – which will be reinvested directly into MDN – we aim to speed up the modernization of MDN’s platform and offer more of what you love: content, features, and integration with the tools you use every day (like VS Code, Dev Tools, and others), plus better support for the 1,000+ volunteers contributing content, edits, tooling, and coding to MDN each month.

Currently, MDN is wholly funded by Mozilla Corporation, and has been since its inception in 2005. The MDN Product Advisory Board, formed in 2017, provides guidance and advice but not funding. The MDN board will never be pay-to-play, and although member companies may choose to sponsor events or other activities, sponsorship will never be a requirement for participation. This payment experiment was discussed at the last MDN board meeting and received approval from members.

Starting this week, approximately 1% of MDN users, chosen at random, will see a promotional box in the footer of MDN asking them to support MDN through a one-time payment.

Image showing banner placement on the footer of MDN

Banner placement on MDN

Clicking on the “Support MDN” button will open the banner and allow you to enter payment information.

Image showing the payment entry form on MDN

Payment page on MDN

If you don’t see the promotional banner on MDN and want to express your support, or read the FAQs, you can go directly to the payment page.

Because we want to keep things fully transparent, we’ll report how we spend the money on a monthly basis on MDN, so you can see what your support is paying for. We hope that, through this program, we will create a tighter, healthier loop between our audience (you), our content (written for and by you), and our supporters (also, you, again).

Throughout the next couple months, and into 2019, we plan to roll out additional ways for you to engage with and support MDN. We will never put the existing MDN Web Docs site behind a paywall. We recognize the importance of this resource for the web and the people who work on it.

The post A New Way to Support MDN appeared first on Mozilla Hacks - the Web developer blog.

Will Kahn-GreeneBleach v3.0.0 released!

What is it?

Bleach is a Python library for sanitizing and linkifying text from untrusted sources for safe usage in HTML.

Bleach v3.0.0 released!

Bleach 3.0.0 focused on easing the problems with the html5lib dependency and fixing regressions created in the Bleach 2.0 rewrite.

For the first, I vendored html5lib 1.0.1 into Bleach and wrote a shim module. Bleach code uses things in the shim module which import things from html5lib. In this way I:

  1. keep the two separated to some extent
  2. the shim is easy to test on its own
  3. it shouldn't be too hard to update html5lib versions
  4. we don't have to test Bleach against multiple versions of html5lib (which took a lot of time)
  5. no one has to deal with Bleach requiring one version of html5lib and other libraries requiring other versions

I think this is a big win for all of us.
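
As a rough sketch of the shim pattern (the module name and imports here are illustrative, not the exact Bleach layout), the idea is that only one module touches the vendored html5lib, and everything else imports from it:

# html5lib_shim.py - an illustrative sketch, not the actual Bleach module.
# Only this module imports from the vendored html5lib copy.
from bleach._vendor.html5lib import HTMLParser, getTreeWalker

# Re-export just what Bleach needs, so a future html5lib upgrade only
# has to keep this one module working.
__all__ = ["HTMLParser", "getTreeWalker"]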

The second was tricky. The Bleach 2.0 rewrite changed clean and linkify from running in the tokenizing step of HTML parsing to running after parsing is done. The parser (un)helpfully would clean up the HTML before passing it to Bleach. Because of that, the cleaned text would end up with all this extra stuff.

For example, with Bleach 2.1.4, you'd have this:

>>> import bleach
>>> bleach.clean('This is terrible.<sarcasm>')
'This is terrible.&lt;sarcasm&gt;&lt;/sarcasm&gt;'

The tokenizer would parse out things that looked like HTML tags; the parser would see an end tag that didn't have a start tag and would add the start tag; then clean would escape the start and end tags because they weren't in the list of allowed tags. Blech.

Bleach 3.0.0 fixes that by tweaking the tokenizer to know about the list of allowed tags. With this knowledge, it can see a start, end, or empty tag and strip or escape it during tokenization. Then the parser doesn't try to fix anything.

With Bleach 3.0.0, we get this:

>>> import bleach
>>> bleach.clean('This is terrible.<sarcasm>')
'This is terrible.&lt;sarcasm&gt;'

What I could use help with

I could use help with improving the documentation. I think it's dense and all over the place focus-wise. I find it difficult to read.

If you're good with documentation, I sure could use your help. See issue 397 for more.

Where to go for more

For more specifics on this release, see here: https://bleach.readthedocs.io/en/latest/changes.html#version-3-0-0-october-3rd-2018

Documentation and quickstart here: https://bleach.readthedocs.io/en/latest/

Source code and issue tracker here: https://github.com/mozilla/bleach

Mark BannerWhat’s next for ESLint on Firefox Source Code?

History

Around 2015 a couple of projects had started using ESLint in mozilla-central. In the last quarter of 2015, there was a big push to enable ESLint for browser/ and toolkit/ – the two main directories containing the javascript source behind Firefox.

Since then, we have come a long way. We have commands and hooks for developers to use, checks during the review phase, and automatic tests that run against our review tools and our continuous integration branches. Not only that, but we’ve also expanded our coverage to more directories, and expanded the number of rules that are enabled.

As we’ve done this work, we’ve caught lots of bugs in the code or in our tests (there’s much more than just those links). Some of those have been small, some have been user-facing issues. There are also the countless potential bugs that we never get to see, because ESLint catches issues for us before they even hit the core source trees. All this helps to save developer time and leaves more for fixing bugs and implementing new features.

Where to next?

There are several things high on the list that we should have as the next, future goals:

  1. Finish enabling ESLint on all our JavaScript code in mozilla-central

We are already covering the vast majority of production code in mozilla-central, however there are still a lot of unit tests that aren’t covered. Increasing linting coverage here will help to ensure these are functioning as we expect.

There are always a few things it won’t make sense for, e.g. third-party imports, or the occasional piece of preprocessed code, but I think we should strive towards 100% coverage where we sensibly can.

  2. Harmonize our rules

Whilst we have a core set of rules, various directories are overriding and extending the rules. We should aim for having the majority of rules being the same everywhere, with additional rules being only occasional (e.g. experimental).

This will make it far easier for developers working across modules to be able to work in a consistent style, and help spread the useful rules across the tree.

Example rules that fall into this category: mozilla/var-only-at-top-level, block-scoped-var, no-shadow, no-throw-literal, no-unused-expressions, yoda

  3. Improve developer tools

For me, I find most use for ESLint when it is integrated into my editor. However, there’s various other occasions where we’re not doing quite enough to make it easy for developers, e.g. automatically installing hooks, or setting up editor support automatically.

These help developers to catch issues earlier whilst the developer is still focussed on the patch, reducing context switching. We should be getting these working as seamlessly as possible.

  4. Improve automatic fixing

Currently the automatic fixing doesn’t work fully if you run it across the whole tree (you can run it in segments); we should fix it to help make applying new rules globally a lot easier.

We need your help!

  • Find a problem with ESLint that is stopping you using it efficiently? Please file a bug. Alternately, come talk to me about issues that are slowing you down or getting in the way.
  • Reasonably regularly I file mentored bugs for enabling new directories or rules, which are great to get you started. However, if you’re interested in working with me on getting larger chunks going, please let me know (in the comments or ping Standard8 on IRC).
  • Anything I’ve missed or discussions points? Please add a comment.

Thank you

I’d like to say a big thank you to all those that have helped bring ESLint to where it is today. Special thanks go to Dave Townsend for his encouragement and many reviews.

There’s many more people that have been involved from the various teams that work on Firefox, as well as first-time contributors – too many to name, so thank you all!

K Lars LohnThe Things Gateway - A Pythonic Rule System

In my last post, I talked about the features and limitations of the Rules System within the Things Gateway by Mozilla graphical user interface.  Today, I'm going to show an alternate rule system that interacts with the Things Gateway entirely externally using the Web Thing API.  The Web Thing API enables anyone armed with a computer language that can use Web Sockets to create entirely novel applications or rules systems that can control the Things Gateway.

In the past few months, I've blogged several times about controlling the Things Gateway with the Web Thing API using Python 3.6. Each one was a stand-alone project, opening and managing Web Sockets in an asynchronous programming environment. By writing these projects, I've explored both functional and object-oriented idioms to see how they compare. Now, with some experience, I feel free to abstract some of the underlying common aspects to create a rule engine of my own.

One of the great features of the GUI Rule System is the translation of the graphical representation of the rule into an English sentence (likely a future target for localization).  Simply reading it aloud easily leads to an unambiguous understanding of the rule's behavior.  I imagine that the JavaScript implementation uses the placement of the visual objects to create a parse tree of the if/then boolean expression.  The parse tree can then be walked and translated into our spoken language.

Implementing a similar system based on parse trees is tempting for its flexibility, but usually results in a new chimera language halfway between the programming language used and the language represented in the parse tree.  See the SQLAlchemy encapsulation of the SQL language in Python as an example.  I'm less fond of this technique than I used to be.  I think I can get away with a simpler implementation just using fairly straightforward Python.

In my last post, I discussed the differences between "While" rules and "If" rules in the GUI Rules System.  Recall that the "While" style of rule takes an action and then undoes the action when the rule condition is no longer True.  However, an "If" style of rule never undoes its action.

Here's an example of the "If" style rule from my last blog post:

Using my rule system, the rule code looks like this:
class ExampleIfRule(Rule):

    def register_triggers(self):
        return (self.Philips_HUE_01,)

    def action(self, *args):
        if self.Philips_HUE_01.on:
            self.Philips_HUE_02.on = True
            self.Philips_HUE_03.on = True
            self.Philips_HUE_04.on = True
(see this code in situ in the example_if_rule.py file in the pywot rule system demo directory)

Creating a rule starts by creating a class derived from the base class Rule.  The programmer is responsible for implementing two methods: register_triggers and action.  Optionally, a third method, initial_state, and a constructor can be included, too. 

The register_triggers method is a callback.  It returns a tuple of objects responsible for triggering the rule's action method; this is generally a set of Things defined by the Things Gateway.  Any time one of the things in the tuple returned by register_triggers changes state, the action method will execute.

In this example, "Philips HUE 01" is specified as the trigger.  Any time any property of "Philips HUE 01" changes, the action method decides what to do about it.  It looks to see if the Philips HUE light is in the "on" state, and if so, turns on the other lights, too. 

When an instance of the rule class is instantiated, all the Things known to the Things Gateway are added as attributes to the rule.  That allows any Thing to be referenced in the code with standard member syntax: "self.Philips_HUE_01".  Each of the properties of the Thing is available using dot notation, too: "self.Philips_HUE_01.on".  Changing the state of a thing's properties is done with assignment statements: "self.Philips_HUE_04.on = True".  The attribute names are sanitized derivations of the name attribute of the Thing.  Spaces and other characters not allowed in Python identifiers are replaced with underscores.  If the first character of the name is not allowed as the first character of an identifier, a leading underscore is added: "01 turns on 02, 03" becomes "_01_turns_on_02__03".  It's not ideal, but reconciling language requirement differences can be complicated.
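
A minimal sketch of that sanitization, assuming only standard library tools (the real code lives in pywot and may differ in details), could look like this:

import re

def sanitize_name(thing_name):
    # Replace any character that isn't legal in a Python identifier.
    candidate = re.sub(r"\W", "_", thing_name)
    # Identifiers can't start with a digit, so prefix an underscore.
    if candidate[0].isdigit():
        candidate = "_" + candidate
    return candidate

assert sanitize_name("Philips HUE 01") == "Philips_HUE_01"
assert sanitize_name("01 turns on 02, 03") == "_01_turns_on_02__03"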

The "While" version of the rule could look like this:

class ExampleWhileRule(Rule):

    def register_triggers(self):
        return (self.Philips_HUE_01,)

    def action(self, the_triggering_thing, the_changed_property_name, the_new_value):
        if the_changed_property_name == 'on':
            self.Philips_HUE_02.on = the_new_value
            self.Philips_HUE_03.on = the_new_value
            self.Philips_HUE_04.on = the_new_value
(see this code in situ in the example_while_rule.py file in the pywot rule system demo directory)

Notice in this code, I've expanded the parameters of the action method.  Each time the action method is called, it receives a reference to the object that changed state, the name of the property that changed and the new value of the property.

To make the other lights follow the boolean value of Philips HUE 01's on state, all we have to do is assign the_new_value to the other lights' on property.

Since we've got the name of the changed property and its new value, we can implement the full functionality of the bonded_things.py example that I gave several weeks ago:

class BondedBulbsRule(Rule):

    def register_triggers(self):
        return (
            self.Philips_HUE_01,
            self.Philips_HUE_02,
            self.Philips_HUE_03,
            self.Philips_HUE_04,
        )

    def action(self, the_triggering_thing, the_changed_property_name, the_new_value):
        for a_thing in self.triggering_things.values():
            setattr(a_thing, the_changed_property_name, the_new_value)
(see this code in situ in the bonded_rule.py file in the pywot rule system demo directory)

In this example, any change to on/off state or color of one bulb will immediately be echoed by all the others.  We start by registering all four bulbs in the list of triggers.  This means that a change in property to any one of them will trigger the action method.  All we have to do in the action is iterate through the list of triggering_things and change the property indicated by the_changed_property_name.  Yes, the bulb that triggered the change doesn't need to have its property changed again, but it doesn't hurt to do so.  The mechanism behind changing values can tell that the new and old values are the same, so it takes no action for that bulb.

Compare this rule-based code with the original one-off version of the bonded things code.  The encapsulations of the Rules System significantly improve the readability of the code.


Up to this point, I've only demonstrated using Things from the Things Gateway as triggers.  However, any object can be written to asynchronously invoke the action method.  Consider this class:

class HeartBeat(TimeBasedTrigger):
    def __init__(
        self,
        name,
        period_str
        # duration should be an integer in string form with an optional
        # H, h, M, m, S, s, D, d as a suffix to indicate units - default S
    ):
        super(HeartBeat, self).__init__(name)
        self.period = self.duration_str_to_seconds(period_str)

    async def trigger_dection_loop(self):
        logging.debug('Starting heartbeat timer %s', self.period)
        while True:
            await asyncio.sleep(self.period)
            logging.info('%s beats', self.name)
            self._apply_rules()
(see this code in situ in the rule_triggers.py file in the pywot directory)
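
The comment in the constructor implies a duration parser along these lines (a sketch of mine, not the pywot implementation):

def duration_str_to_seconds(duration_str):
    # Sketch of a parser for the format described in the comment above:
    # an integer with an optional unit suffix (S/s seconds, M/m minutes,
    # H/h hours, D/d days); seconds are the default.
    multipliers = {'s': 1, 'm': 60, 'h': 3600, 'd': 86400}
    last_character = duration_str[-1].lower()
    if last_character in multipliers:
        return int(duration_str[:-1]) * multipliers[last_character]
    return int(duration_str)

assert duration_str_to_seconds("2s") == 2
assert duration_str_to_seconds("10m") == 600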

A triggering object can participate in more than one rule.  The act of registering a triggering object in a rule means that the rule is added to an internal list of participating_rules within the triggering object.  The method _apply_rules iterates through that collection and calls the action method of each rule.  In the case of this HeartBeat trigger, _apply_rules is called periodically, with the period set by the period_str parameter of the constructor.  This provides a heartbeat that can make a series of actions happen over time.
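
A rough sketch of that mechanism (the real base classes live in pywot's rule_triggers.py) shows how little a trigger needs to do:

class SketchTrigger:
    # An illustrative sketch of the participation mechanism described
    # above; not the actual pywot implementation.
    def __init__(self, name):
        self.name = name
        self.participating_rules = []

    def _apply_rules(self, changed_property_name=None, new_value=None):
        # call the action method of every rule that registered this
        # object in its register_triggers tuple
        for a_rule in self.participating_rules:
            a_rule.action(self, changed_property_name, new_value)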

Using the HeartBeat class beating every two seconds, this rule creates a scrolling rainbow with six Philips HUE lights:

the_rainbow_of_colors = deque([
    '#ff0000',
    '#ffaa00',
    '#aaff00',
    '#00ff00',
    '#0000ff',
    '#aa00ff'
])


class RainbowRule(Rule):

    def initial_state(self):
        self.participating_bulbs = (
            self.Philips_HUE_01,
            self.Philips_HUE_02,
            self.Philips_HUE_03,
            self.Philips_HUE_04,
            self.Philips_HUE_05,
            self.Philips_HUE_06,
        )

        for a_bulb, initial_color in zip(self.participating_bulbs, the_rainbow_of_colors):
            a_bulb.on = True
            a_bulb.color = initial_color

    def register_triggers(self):
        self.heartbeat = HeartBeat('the heart', "2s")
        return (self.heartbeat, )

    def action(self, *args):
        the_rainbow_of_colors.rotate(1)
        for a_bulb, new_color in zip(self.participating_bulbs, the_rainbow_of_colors):
            a_bulb.color = new_color
(see this code in situ in the rainbow_rule.py file in the pywot rule system demo directory)

The initial_state callback function sets up the bulbs by turning them on and setting the initial colors.  This time, in register_triggers, a HeartBeat object is created with a period of two seconds.  The HeartBeat will call the action method every two seconds.  Finally, in the action, we rotate the list of colors by one and then assign new colors to each of the six bulbs.

By implementing the rule system within Python, rules can use the full power of the language.  Rules could be formulated that respond to anything that the language can do.  It wouldn't be difficult to have a Philips HUE bulb show red when your software testing system indicates a build error.  You could even hook up a big red button to physically press when you want to deploy the latest release of your code.  In a more close to home example, how about blinking the porch light green to guide the pizza delivery to the right door?  The possibilities are both silly and endless.
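
For instance, a build-failure rule might look like the sketch below; BuildStatusTrigger is invented here for illustration and would watch your CI system, calling _apply_rules whenever the build state changes:

class BuildFailureRule(Rule):
    # A hypothetical rule, not part of the pywot demos: turn a bulb red
    # when the continuous integration build fails.

    def register_triggers(self):
        # BuildStatusTrigger is an invented trigger for this sketch
        self.build_status = BuildStatusTrigger('the build')
        return (self.build_status,)

    def action(self, the_triggering_thing, the_changed_property_name, the_new_value):
        if the_new_value == 'failed':
            self.Philips_HUE_01.on = True
            self.Philips_HUE_01.color = '#ff0000'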

Chris H-CDistributed Teams: Regional Holidays

Today is German Unity Day, Germany’s National Day. Half of my team live in Berlin, so I vaguely knew they wouldn’t be around… but I’d likely have forgotten if not for a lovely tradition of “Holiday Inbound” emails at Mozilla.

Mozilla is a broadly-distributed organization with employees in dozens of countries worldwide. Each of these countries has multiple days off to rest or celebrate. It’s tough to know, across so many nations and religions and cultures, exactly who will be unable to respond to emails on exactly which days.

So on the cusp of a holiday it is tradition in Mozilla to send a Holiday Inbound email to all Mozilla employees noting that the country you’re trying to reach can’t come to the phone right now, so please leave a message at the tone.

More than just being a bland notification, some Mozillians take the opportunity to explain the history and current significance of the event being celebrated. I’ve taken a crack at explaining the peculiarly-Canadian holiday of Christmas (pronounced [kris-muhs]) in the past.

Sometimes you even get some wonderful piece of alternate history like :mhoye’s delightful, 50% factual exploration of the origins of Canadian Labour Day 2016.

I delight in getting these notifications from our remotees and offices worldwide. It really brings us closer together through understanding, while simultaneously highlighting just how different we all are.

Maybe I should pen a Holiday Inbound email about Holiday Inbound emails. It would detail the long and fraught history of the tradition in a narrative full of villains and heroes and misspellings and misunderstandings…

Or maybe I should just try to get some work done while my German colleagues are out.

:chutten

Mozilla Release Management TeamUplift forms get a refresh

Firefox is shipped using a train model. Without going into too much detail, this means that we maintain several channels in parallel (Nightly, Beta, Release and ESR). Normal changes happen in Nightly. When a change needs to be cherry-picked from Nightly to another branch, the process is called “Uplift”.

Uplifting is a key tool in the Firefox release management world. When developers want to apply a patch from Nightly to another branch, they will use Bugzilla, answering some questions in a textarea. Then, release managers will make a risk assessment to accept or reject the uplift. As an example, release managers will see the following comment:

Uplift form

The release and quality management team is plugging more and more automation (and Machine Learning in the future) into Bugzilla, and the freeform textarea was making that more difficult (also because developers are free to do anything they want with the prefilled text, even deleting fields). For this reason, we are moving to a typical form directly in the Bugzilla interface. The change, developed by Kohei who is volunteering as a Bugzilla UX designer, was deployed yesterday (October 2nd).

A screenshot is a better explanation than words:

The new uplift form

Once submitted, the comment will be displayed just like before!

We are planning to move to a similar system for tracking and release notes requests.

As always, don’t hesitate to send feedback to release-mgmt@mozilla.com.

Mozilla B-Teamhappy bmo push day - mojolicious edition

As previously announced at FOSDEM 2018 and then re-announced at MojoConf, bugzilla.mozilla.org is now running on Mojolicious, “A next generation web framework for the Perl programming language”.

This release incorporates 28 changes. The Mojolicious migration is the least interesting to the end-user, but it is pretty important in terms of being able to deliver rich experiences moving forward.

As…

Mozilla Security BlogSupporting Referrer Policy for CSS in Firefox 64

The HTTP Referrer Value

Navigating from one webpage to another or requesting a sub-resource within a webpage causes a web browser to send the top-level URL in the HTTP referrer field. Inspecting that HTTP header field on the receiving end allows sites to identify where the request originated, which enables sites to log referrer data for operational and statistical purposes. As one can imagine, the top-level URL quite often includes user-sensitive information, which might then leak through the referrer value, impacting an end user’s privacy.

The Referrer Policy

To compensate, the HTTP Referrer Policy allows webpages to gain more control over referrer values on their site. E.g., using a Referrer Policy of “origin” instructs the web browser to strip any path information and fill the HTTP referrer value field with only the origin of the requesting webpage instead of the entire URL. More aggressively, a Referrer Policy of ‘no-referrer’ advises the browser to suppress the referrer value entirely. Ultimately the Referrer Policy empowers the website author to gain more control over the referrer value used, and hence provides a tool for website authors to respect an end user’s privacy.

Expanding the Referrer Policy to CSS

While Firefox has supported Referrer Policy since Firefox 50, we are happy to announce that Firefox will expand policy coverage and will support Referrer Policy within style sheets starting in Firefox 64. With that update in coverage, requests originating from within style sheets will also respect a site’s Referrer Policy and ultimately contribute a cornerstone to a more privacy-respecting internet.

For the Mozilla Security and Privacy Team,
  Christoph Kerschbaumer & Thomas Nguyen

The post Supporting Referrer Policy for CSS in Firefox 64 appeared first on Mozilla Security Blog.

Hacks.Mozilla.OrgHack on MDN: Better accessibility for MDN Web Docs

From Saturday, September 22 to Monday, September 24, more than twenty people met in London to work on improving accessibility on MDN Web Docs — both the content about accessibility and the accessibility of the site itself. While much remains to be done, the result was a considerable refresh in both respects.

Attendees at Hack on MDN listen to a lightning talk by Eva Ferreira. Photo by Adrian Roselli.

Hack on MDN events

Hack on MDN events evolved from the documentation sprints for MDN that were held from 2010 to 2013, which brought together staff members and volunteers to write and localize content on MDN over a weekend. As implied by the name, “Hack on MDN” events expand the range of participants to include those with programming and design skills. In its current incarnation, each Hack on MDN event has a thematic focus. One in March of this year focused on browser compatibility data.

The Hack on MDN format is a combination of hackathon and unconference; participants pitch projects and commit to working on concrete tasks (rather than meetings or long discussions) that can be completed in three days or less. People self-organize to work on projects in which a group can make significant progress over a long weekend. Lightning talks provide an unconference break from projects.

Accessibility on MDN Web Docs

Making websites accessible to a wide range of users, including those with physical or cognitive limitations, is a vital topic for creators on the web. Yet information about accessibility on MDN Web Docs was sparse and often outdated. Similarly, the accessibility of the site had eroded over time. Therefore, accessibility was chosen as the theme for the September 2018 Hack on MDN.

Hack on MDN Accessibility in London

The people who gathered at Campus London (thanks to Google for the space) included writers, developers, and accessibility experts, from within and outside of Mozilla. After a round of introductions, there was a “pitch” session presenting ideas for projects to work on. Participants rearranged themselves into project groups, and the hacking began. Adrian Roselli gave a brief crash course on accessibility for non-experts in the room, which he fortunately had up his sleeve and was able to present while jet-lagged.

At the end of each morning and afternoon, we did a status check-in to see how work was progressing. On Sunday and Monday, there were also lightning talks, where anyone could present anything that they wanted to share. Late Sunday afternoon, some of us took some time out to explore some of the offerings of the Shoreditch Design Triangle, including playing with a “font” comprised of (more or less sit-able) chairs.

Glenda Sims, Estelle Weyl, Janet Swisher and Adrian Roselli pose with metal letter-shaped chairs spelling “HACK” and “MdN”. Photo by Dan Rubin.

Outcomes

One project focused on updating the WAI-ARIA documentation on MDN Web Docs, using a new ARIA reference page template created by Estelle Weyl. Eric Bailey, Eric Eggert, and several others completed documentation on 27 ARIA roles, including recommending appropriate semantic HTML elements to use in preference to an ARIA role. The team even had remote contributors, with Shane Hudson writing about the ARIA alert role.

A number of participants worked on adding sections on “Accessibility concerns” to relevant HTML, CSS, and JavaScript pages, such as the <canvas> element, display property, and the Animation API.

Other efforts included:

Also, a fun time was had and the group enjoyed working together. Check the #HackOnMDN tag on Twitter for photos, “overheard” quotes, nail art by @ninjanails and more. Also see blog posts by Adrian Roselli and Hidde de Vries for their perspectives and more details.

What’s next?

There is plenty of work left to make MDN’s accessibility content up-to-date and useful. The list of ARIA roles, states, and properties is far from complete. More reference pages need “accessibility concerns” information added. The accessibility of the MDN Web Docs site still can be improved. As a result of the enthusiasm from this event, discussions are starting about doing a mini-hack in connection with an upcoming accessibility conference.

If you find issues that need to be addressed, please file a bug against the site or the content. Better yet, get involved in improving MDN Web Docs. If you’re not sure where to begin, visit the MDN community forum to ask any questions you might have about how to make MDN more awesome (and accessible). We’d love to have your help!

The post Hack on MDN: Better accessibility for MDN Web Docs appeared first on Mozilla Hacks - the Web developer blog.

The Mozilla BlogNew Firefox Focus comes with search suggestions, revamped visual design and an under-the-hood surprise for Android users

When we first launched Firefox Focus, we wanted to quickly deliver a streamlined private browsing experience for your mobile device. Since then, we’ve been pleasantly surprised by how many people use Focus for more than just private browsing and we’ve made Focus better with a thoughtful set of features based on what our users are telling us. Custom tabs, tracker counter, full screen mode and so much more have been the result. Today, we’re pleased to announce another big update with another much-requested feature, a design refresh, and an exciting change to the underlying technology behind Focus for Android.

Learn more: search suggestions and home screen tips

Missed one of the feature releases? No problem! We now present the core functionality of Firefox Focus on the start screen to give an overview of the whole range of possibilities your privacy browser has to offer, in a clear and unobtrusive way that doesn’t interrupt usage at all and automatically refreshes after each click on the Erase button.

Just open the browser and you’ll see helpful feature recommendations in your preferred language on the Firefox Focus start screen (Android). For iOS users, the feature is currently available in English.

Search suggestions are a key part of web search that can make searching more convenient.  Easily activate the feature by opening the app settings > “Search” > select the checkbox “Get search suggestions”.

We’re aware that privacy is a top priority for many Firefox Focus users and you might not want to share what you’re typing in the address bar with your search provider. So, the feature is turned “off” by default and we let you choose whether or not you want to turn it on. Why? Because that’s our style!

Find what you’re looking for quickly with search recommendations.

Siri Shortcuts for iOS users

In addition to home screen tips, iOS users will receive another much-requested feature with today’s release: Siri Shortcuts. Siri is one of the more popular features on iOS devices and we’re all about ease for our users. So, in order to further improve the Firefox Focus for iOS user experience, you’ll now be able to set and open a favorite website, erase and open Firefox Focus, as well as erase in the background via shortcuts.

In line with updated designs

Style is key to today’s Firefox Focus release: the browser’s visual design is now completely optimized for the recently released Android Pie. New icons, a customized URL bar and a simplified settings menu make it easier to use and provide for a consistent user experience.

But no need for iOS users to feel left out: the new Firefox Focus has a fresh look for iOS 12.

Presentation matters: the new Firefox Focus comes with an updated design system, optimized for Android Pie (left) and iOS 12 (right).

A new engine for the Firefox Android browsers

While the new app design is obviously an eye-catcher, we’ve also made a groundbreaking change to the underlying Firefox Focus framework that’s not visible at first glance but will make a huge difference in future releases. Focus is now based on GeckoView, Mozilla’s own mobile engine, making it a pioneer among our Android apps.

Switching to GeckoView will give Focus the benefits of Gecko’s Quantum improvements and enables Mozilla to implement unique privacy-enhancing features in the future, such as reducing the potential of third party collection. For now, you won’t notice much, but you’ll be helping us create the next generation of Firefox browsers just by using Focus, and we’ll return the favor by giving our Focus users unique features that other browsers on Android simply won’t be able to offer.

We’ll make sure to keep you updated on the progress as well as all new developments around Firefox Focus on this blog and are looking forward to your feedback! For now, if you’d like to learn more about the future of our privacy browser, please have a look at this post on the Mozilla Hacks Blog.

Get Firefox Focus now

The latest version of Firefox Focus for Android and iOS is now available for download on Google Play, in the App Store and now also in the Samsung store.

The post New Firefox Focus comes with search suggestions, revamped visual design and an under-the-hood surprise for Android users appeared first on The Mozilla Blog.

QMOFirefox 63 Beta 10 Testday Results

Hello Mozillians!

As you may already know, last Friday, September 28th, we held a new Testday event for Firefox 63 Beta 10.


Thank you all for helping us make Mozilla a better place!

From India team: Shweta Bhat, Amirtha .V, Monisha .R,
From Bangladesh team: Maruf Rahman

Results:

– several test cases executed for the Customize and Font UI features;
– bugs verified: 1475025, 1482476, 1473044;

Thanks for another successful testday! 🙂

Firefox NightlyThese Weeks in Firefox: Issue 46

Highlights

    • James Teh fixed a really annoying accessibility bug where search suggestions were interfering with the focused element.
    • We’re officially launching a new privacy tool called Firefox Monitor. You can read the official announcement and a bit of behind the scenes.
    • Focus for iOS 7.0 went live last week, Focus for Android 7.0 with GeckoView is scheduled for release on October 2.
    • Florian has been landing some massive improvements to the new about:performance.
      • To try out the new page, flip the dom.performance.enable_scheduler_timing pref to true, and restart your browser (otherwise it will crash).
      • Please file bugs in Toolkit :: Performance Monitoring.
      • Instead of displaying ‘dispatches’ and ‘duration’, the values are combined into something (labelled “Energy Impact”) that users can better understand, with “High/Medium/Low/None” categories.
      • Sort order is more stable, and subframes/workers have values.
      • It’s possible to select a row. A double click will select the tab.
      • Tarek Ziade is making good progress on counting WebExtension activity in frame scripts, and is experimenting with collecting memory information per tab.
Screenshot of about:performance showing resource usage per tab

The new about:performance

Payment request UI showing 'edit credit card' and shipping address selector views

Friends of the Firefox team

Project Updates

Mobile

Activity Stream

  • The team is running an experiment with the new single overlay onboarding experience in Release.
  • The Contextual Feature Recommender, a doorhanger that recommends add-ons, is now in Nightly, and will run as an experiment in Beta next week.
  • We now show the logo and wordmark when only the search panel is enabled.

New Tab page with Firefox wordmark and search box

Add-ons / Web Extensions

Application Services (Sync / Firefox Accounts / Push)

Browser Architecture

Developer Tools

  • Arai’s work to massively cut down DevTools opening delay on script-heavy pages by fixing a 5-year-old bug to add Debugger.findSources. This removes a 1s hang when opening the Console on Gmail.

Lint

NodeJS

Performance

Policy Engine

  • Mac policy engine try builds available, and Mac admins already love it! Thanks to Stephen Pohl!
  • Working on security devices, certificates, and generic prefs for 64.
  • MSI installer work happening as well.

Fission

Search and Navigation

Address Bar & Search
Places

Test Pilot

  • Conversation getting started around reusable React components for websites that fit the Photon UI specs.
  • Screenshots: bootstrap removal should land this week 🤞, see the metabug if you’re curious!
    • Bootstrap removal introduced Talos regressions that seem to be caused by unexpected storage init at startup.
    • Looks like we’ll finally be able to enable Screenshots for all tests, based on an encouraging Try run from yesterday.
    • Huge thanks to aswan & kmag for jumping on add-ons bugs surfaced by the migration

Web Payments

Below the fold

This Week In RustThis Week in Rust 254

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's crate is Evcxr, a Rust REPL and Rust Jupyter Kernel. Thanks to Willi Kappler for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

114 pull requests were merged in the last week

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs
Tracking Issues & PRs

New RFCs

Upcoming Events

Online
Asia
Europe
North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

No quote was selected for QotW.

Please submit your quotes for next week!

This Week in Rust is edited by: nasa42, llogiq, and Flavsditz.

Discuss on r/rust.

Mozilla Addons BlogOctober’s Featured Extensions

Pick of the Month: Default Bookmark Folder

by Teddy Gustiaux
Do you keep multiple bookmark folders? This extension makes it simple to add new bookmarks to specific folders.

“So useful and powerful. I no longer have to change bookmark locations every time!”

Featured: Search Site WE

by DW-dev
Perform website-specific searches.

“Fast, very cool & useful.”

Featured: Dark Reader

by Alexander Shutov
Turn the entire web dark. This extension inverts bright colors to make all websites easier on the eyes.

“This is hands down the best looking dark theme extension for Firefox that I have tried.”

Featured: Vertical Tabs Reloaded

by Croydon
Arrange your open tabs in an orderly vertical stack.

“This is great. Vertical tabs should be the standard nowadays.”

Featured: Text MultiCopy

by erosman
Save multiple snippets of text to paste and organize later.

“So very useful and it works flawlessly.”

Featured: Cookiebro

by Nodetics
Simple yet powerful cookie management. Automatically deletes unwanted cookies, while sparing those on your whitelist.

“I really like that Cookiebro recognizes session cookies, so deleting unwanted cookies will still keep you logged in on most sites.”

If you’d like to nominate an extension for featuring, please send it to amo-featured [at] mozilla [dot] org for the board’s consideration. We welcome you to submit your own add-on!

The post October’s Featured Extensions appeared first on Mozilla Add-ons Blog.

Mozilla Open Policy & Advocacy BlogIndian Supreme Court rules on Aadhaar: Delayed scrutiny

This article first appeared on October 1st, 2018 in the Times of India print edition.

The Aadhaar judgment holds important lessons (and warnings) for how courts and the polity should respond to the technological vision of the state. The task before the Supreme Court was to evaluate the constitutionality of a specific choice and design of technology made by the government. Note that this choice, of a single biometric identifier for each resident linked to a centralised database, was made almost a decade ago. And decisions about this project have largely evolved within the closed quarters of the executive, including the one to roll it out, and the subsequent call to link Aadhaar to essential services. All this was done without any statutory backing, until its hurried passage as a money bill in 2016.

As one reads through the decision of the three judges that formed the majority opinion, it becomes clear that there are limits to this delayed judicial scrutiny of a technology-driven project that has already reached scale (over 99% of the population is already enrolled). While the judgment does well to impose limits on its scope, it disappoints in its reluctance to engage with its underlying technical and evidentiary claims, and the application of weak legal standards.

Take the issue framed by the majority opinion as “whether the Aadhaar project has the tendency to create a surveillance state.” The judges offer a very cursory analysis of security safeguards already in place, accepting UIDAI’s claim that the database is “sufficiently secure” and “strongly regulated”. The contrary evidence put forth by the petitioners is largely ignored, including the fact that 49,000 Aadhaar enrolment agencies have been blacklisted for fraud by UIDAI itself.

They do not engage with multiple news reports of recent security breaches, reasoning that this evidence came after the hearings concluded, but noting that these too have been “emphatically denied by the UIDAI.” They conclude that the project cannot be shelved “only (based) on apprehensions.” So while concerns of unrealised (or imperceptible) state surveillance are treated as anxieties that must be soothed, UIDAI is rewarded with the benefit of the doubt that it will not overstep the limits it has set for itself.

On the other hand Justice Chandrachud, who dissents from the majority, pulls us out of this false fait accompli and finds the project to be wholly unconstitutional. He reframes the question in terms of power and asks how the technological architecture of Aadhaar, specifically its use of biometric data and the ability to link distinct databases, could alter the power relationship between citizen and the state. Not only does he differ in his factual analysis of the surveillance capability of Aadhaar as it appears today, he also takes seriously possible future risks to individual liberty.

The court separately considers the issue of whether the Aadhaar project violates the fundamental right to privacy. While the judges all accept that Aadhaar enabled some degree of state intrusion into the privacy of individuals, they differ on whether such intrusion could pass the test of proportionality. Asserted most recently in the case of Puttaswamy vs Union of India, this test is by no means straightforward. In an analysis that eventually rests on weak factual and legal foundations, the majority judgment in the Aadhaar case only adds further ambiguity.

On facts, they find that Aadhaar collects “minimal data”, simply because information like names and photos are already routinely collected by a variety of services. This misunderstands a fundamental precept of data privacy. There is risk associated with collecting sensitive information like biometrics in the first place. However, there is potential for even greater harm when an individual’s personal data is linked with other datasets and used across contexts. Given this finding of “minimal data” use, they conclude that individuals have no “reasonable expectation of privacy” in such data, and therefore the intrusion is proportionate. It is worth noting that in the Puttaswamy judgment, Justice Nariman categorically rejected this legal standard, reasoning that our right to privacy cannot be reduced only to the subjective notion of what an individual may “reasonably expect”, and instead, that the law must in fact set out how our privacy should be protected.

The infirmities of the majority’s judgment aside, it seems there is a broader issue with courts being forced to constantly catch up with technological choices, resigned to scrutinise them in hindsight and to lessen the blow with safeguards. With Aadhaar, this need not have been the case. The rollout of this project should have been preceded by rigorous evidence, independent evaluation of technical and security claims, and scrutiny by the public and Parliament. The ability to highlight errors in software design or in implementation depends on the degree to which systems are open to external audit. A project that has affected every resident of this country and has largely grown through coercion (and for the poor, more harshly than others), should have been open and accountable by design.

Aadhaar might be the largest example of a technology-driven decision system in India, and the first to be legally adjudicated, but both globally and in India such systems are increasingly finding their way into various spheres of government, including criminal justice, healthcare and employment. We must be prepared to ask difficult questions of both the technical claims and the political and economic motivations that drive such proposals.

This article first appeared on October 1st, 2018 in the Times of India print edition. Available here.

The post Indian Supreme Court rules on Aadhaar: Delayed scrutiny appeared first on Open Policy & Advocacy.

Will Kahn-GreeneSocorro: 2018q3 review

Summary

Socorro is the crash ingestion pipeline for Mozilla's products like Firefox. When Firefox crashes, the Breakpad crash reporter asks the user if the user would like to send a crash report. If the user answers "yes!", then the Breakpad crash reporter collects data related to the crash, generates a crash report, and submits that crash report as an HTTP POST to Socorro. Socorro saves the crash report, processes it, and provides an interface for aggregating, searching, and looking at crash reports.
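
To make the submission step concrete, here is a minimal sketch of such an HTTP POST, written as browser-style JavaScript purely for illustration. The endpoint and field names are assumptions based on common Breakpad conventions (annotations as form fields, the minidump as a file attachment), not Socorro's documented interface.

```js
// Illustrative sketch only: the endpoint and field names are
// assumptions, not Socorro's documented interface.
async function submitCrashReport(minidumpBlob) {
  const form = new FormData();
  // Crash annotations travel as ordinary form fields...
  form.append("ProductName", "Firefox");
  form.append("Version", "64.0a1");
  // ...and the minidump itself as a file attachment.
  form.append("upload_file_minidump", minidumpBlob, "crash.dmp");

  // A successful POST returns a crash ID that the reporter can
  // show to the user.
  const response = await fetch("https://crash-reports.example.com/submit", {
    method: "POST",
    body: form,
  });
  return response.text();
}
```
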

2018q3 was a busy quarter. This blog post covers what happened.

Read more… (7 mins to read)

Daniel PocockStelvio, Mortirolo, Simplon and hacking Tiramisu

On Friday the adventure continued. A pit stop for fresh tyres and then north to the renowned Stelvio Pass, 2757m a.s.l., 75 challenging hairpin corners. There are plenty of helmet-cam videos of this ride online.

Mortirolo Pass

After Stelvio, I had to head south and the most direct route suggested by OpenStreetMap took me over the Mortirolo pass.

Dinner

At the end of all that, I had to hack my own Tiramisu, but like the mountain passes, it was worth the effort.

Simplon Pass

Returned home using the Simplon Pass. It is a relatively easy road compared to the others, with nice views at the top and along the route.

Nick FitzgeraldSFHTML5 Rust and WebAssembly Talk

I gave a talk about Rust and WebAssembly for SFHTML5’s “All About WebAssembly” meetup. You can find the slide deck here. Use your arrow keys to cycle through the slides. Video recording embedded below.

You can watch the other (great!) talks from the meetup in this playlist.

Mozilla Open Policy & Advocacy BlogContributing to the European Commission’s review of digital competition

Following on the heels of our submission to the U.S. Federal Trade Commission last month, we have submitted a written filing to the European Commission Directorate-General for Competition, as part of a public consultation in advance of the Commission’s forthcoming January 2019 conference on competition challenges in the digital era. In our filing, we focus on two specific, related issues: the difficulty of measuring competitive harm in a data-powered and massively vertically integrated digital ecosystem, and the role played by interoperability (in particular, through technical interfaces known as APIs) in powering the internet as we know it.

Mozilla’s Internet Health Report 2018 explored concentration of power and centralization online through a spotlight article, “Too big tech?” The software and services offered by a few companies are entangled with virtually every part of our lives. These companies reached their market positions in part through massive innovation and investment, and they created extremely popular (and lucrative) user experiences. But we are headed today down a path of excessive centralisation and control, where someday the freedom to code and compete will be realised in full only for those who work for a few large corporations.

Our submission examines modern digital competition through the following key considerations:

  1. Increasing centralisation poses competition concerns;
  2. Traditional metrics and tools are insufficient to promote competition;
  3. Interoperability is a powerful, ready-to-use key to unlock competition in the tech sector; and
  4. Changes to law, policy, and practice regarding internet competition should be grounded in technology and built to benefit all internet users and businesses.

The EU has a well established track record in enforcing competition in digital markets. We encourage the Commission to continue its leadership by embracing interoperability as a core principle in its approach to digital competition. If the future of the internet stays grounded in standards and built out through an ecosystem of transparent third-party accessible APIs, we can preserve the digital platform economy as a springboard for our collective social and economic welfare, rather than watching it evolve into an oligarchy of gatekeepers over our data.

The post Contributing to the European Commission’s review of digital competition appeared first on Open Policy & Advocacy.

Mozilla VR BlogHubs by Mozilla: Immersive Communication on Any Device

Hubs by Mozilla lets people meet in a shared 360-environment using just their browser. Hubs works on any device, from head-mounted displays like the HTC Vive to 2D devices like laptops and mobile phones. Using WebVR, a JavaScript API, Mozilla is making virtual interactions with avatars accessible via Firefox and the other browsers that people use every day.

In the course of building the first online social platform for VR and AR on the web, Mozilla wanted to confirm it was building a platform that would bring people together, and do so in a low-friction, safe, and scalable way. Drawing on years of experience and seminal studies examining the successes and pitfalls of social VR systems across the ecosystem, Jessica Outlaw and Tyesha Snow of The Extended Mind set out to generate insights about the user experience and deliver recommendations on how to improve the Hubs product.

BACKGROUND ON THE RESEARCH STUDY
In July 2018, The Extended Mind recruited five pairs of people (10 total) to come to its office in Portland, OR and demo Hubs on their own laptops, tablets, and mobile phones. We provided them with head-mounted displays (HTC Vive, Oculus Rift & Go) to use as well.

Users were a relatively tech-savvy crowd and represented a range of professions from 3D artist and engineer to realtor and psychologist. Participants in the study were all successful in entering Hubs from every device and had a lot of fun exploring the virtual environment with their companion’s avatar. Some of the participants in their early twenties also made a point to say that Hubs was better than texting or a phone call because:

“This makes it easier to talk because there are visual cues.”

And…

“Texting doesn’t capture our full [expression]”

In this series of blog posts, The Extended Mind researchers will cover some of the research findings about the first-time user experience of trying Hubs. There are some surprising findings to share across the industry about how the environment shaped user behavior, and best practices for usability in virtual reality.

BROWSER BASED VR (NO APP INSTALL REQUIRED)
Today, the focus is on how the accessibility of Hubs via a browser differentiates it from other social VR apps as well as other 2D communication apps like Skype, BlueJeans, and Zoom.

The process for creating a room and inviting a friend begins at hubs.mozilla.com. Once there, participants generated a link to their private room and then copied and pasted that link into their existing communication apps, such as iMessage or e-mail.

Once their companion received the link, they followed the instructions and met the person who invited them in a 360-environment. This process worked for HMDs, computers, and mobile phones. When participants were asked afterward about the ease of use of Hubs, accessibility via link was listed as a top benefit.

“It’s pretty cool that it’s as easy as copy and pasting a link.”

And

“I’m very accustomed to virtual spaces having their own menu and software boot up and whole process to get to, but you open a link. That’s really cool. Simple.”

Some believed that because links are already familiar to most people, they would be able to persuade their less technologically sophisticated friends & family members to meet them in Hubs.

Another benefit of using the browser is that there is already one installed on people’s electronic devices. Obstacles to app installation range from difficulty finding them in the app store, to lack of space on a hard drive. One person noted that IT must approve any app she installs on her work computer. With Hubs, she could use it right away and wouldn’t need to jump that hurdle.

Because Hubs relies on people’s existing mental models of how hyperlinks work, only requires an internet browser (meaning no app installation), and is accessible from an XR or 2D device, it is the most accessible communication platform today. It could well be the first digital experience that gets people familiar with the concepts of 360 virtual spaces and interacting with avatars, subsequently launching them into further exploration of virtual and extended reality.

Now that you've got a sense of the capabilities of Hubs, the next blog posts will cover more specific findings about how people used it for conversation and how the environment shaped interactions.

Support.Mozilla.OrgSupport Localization – Top 50 Sprint and More

Hello, current and future Mozillians!

I hope you can still remember that last month we kicked off a “Top 20 Sprint” for several locales available on the Support site. You can read more about the reasons behind it here, and about how it has been going here.

In September, the goal was extended to include a wider batch of articles that qualify for the “Top 50” – that is, the 50 most popular Knowledge Base articles globally. You can see their list on this dashboard: https://support.mozilla.org/en-US/contributors/kb-overview

I wanted to share with you the progress our community has made over the last weeks and call out those who have contributed towards Mozilla’s broader linguistic coverage of support content, making all the possible versions of Firefox easier to use for millions of international users.

Arabic

After the impressive 1st milestone rush by Ahmad, the torch has been picked up by FFus3r, who has been working for the last few weeks on adding new and updated versions of Knowledge Base articles through the Arabic dashboard. شكرا لكم! (Thank you!)

Bengali

Another case of passing the work on successfully, this time for Bengali localizers. We had Nazir working hard on the Top 20 articles in August, and now we have S M Sarwar Nobin leading the charge in September. I also hear there’s an event for the Bengali community happening soon, so stay tuned for more details from that side of the world :)

Bosnian

Bosnian localizers have been quiet for a while now, so I hope we can hear from kicin again soon, as there is still time to add more content in that locale to our Knowledge Base.

Gujarati

Similarly to Bosnian, there’s not a lot of action taking place in the Gujarati part of the Knowledge Base, but hopefully we can see the localizers rally once more to reach the Top 50 goal soon.

Hindi

Hindi localizers have continued to contribute to the Knowledge Base, but at a slightly slower pace, so there’s still room for more! If you know Hindi and want to join forces with Mahtab and Ritesh Raj, now is the time!

Tamil

It seems that the Tamil side of the Knowledge Base will have to wait for better days and more contributors with energy and time to spare. Here’s to hoping we can see that happen soon!

Telugu

To finish off the sprint part for the “magnificent 7” locales that responded positively to my summer call to action on a high note, I am happy to report that చిలాబు, sandeep, and Dinesh have continued improving the Knowledge Base with their translations and are well on the way to hitting the Top 50 articles if they keep it up. ధన్యవాదాలు! (Thank you!)

More news from the localizers of the above locales soon, as we wrap up September and move into October.

In the meantime, many other contributors have kept their parts of the Knowledge Base busy and updated… I would like to call out a few of them and thank them on behalf of the millions of users benefiting from their shared enthusiasm and knowledge.

The Czech team of soucet and Michal Stanke keep churning out update after update. Same goes for the Danish tag team of Joergen and Kim Ludvigsen. The unstoppable Artist makes most of the German Knowledge Base possible, together with graba.

Greek Firefox users have a lot to thank Jim Spentzos for, while those who prefer to use Spanish while browsing our site can enjoy high quality content coming from Ángela Velo (with us since 2012!). Jarmo is still looking for more people to help out with Finnish, but that does not stop him from contributing additional translations. The French language is proudly (and efficiently) supported by Mozinet, Cécile, YD, J2m06, Goofy, and Olpouin (a recent addition to the mix there – hello!).

Hungarian localizers Meskó Balázs and Kéménczy Kálmán slowly but steadily enable and improve the Knowledge Base for users over the blue Danube, while Underpass and Michele Rodaro do the same for users on both shores of the Tiber (and way beyond).

Over in Japan, dskmori (also active in Korean!), kenyama, hamasaki, and marsf provide great content for users who seem to (on average) spend the most time on each page they visit. Georgianizator is slowly working through the (obviously) Georgian (also known as Kartuli) Knowledge Base. For Korea, Narae Kim and seulgi work together with dskmori on more updates.

Tonnes (another localization MozGiant, active in the Knowledge Base – and not only – since 2010!) makes Dutch happen, while for Polish we have TyDraniu and Teo. MozBrazilians continue supporting their huge userbase through the work of Jhonatas Rodrigues, Marcelo Ghelman, leorockbar, and wikena (another new name, hello!).

Their tireless Portuguese counterparts on the other side of the ocean are Alberto Castro, ManSil and Cláudio Esperança, while over on the other side of Europe, the Russian trio of Valery Ledovskoy, Anticisco Freeman, and Harry is echoing the hard work of other localizers in the Cyrillic script.

kusavica and marcel11 keep clarifying Firefox in their own words for Slovak users, just like Lan and Rok do for Slovenians. The Turkish language is represented and supported by Burhan Keleş, SUNR, OmTi, and Selim Şumlu.

To wrap up the long list of contributors, we have Bor, ChenYJ, wxie, Yang Hanlin and xiaolu contributing for the benefit of all our Chinese users.

Each one of the people listed above helps countless others through their contributions to the open and helpful web that Mozilla is a part of. Adding their energy and skills to the language rainbow of the web, they help keep the web beautiful in its variety of cultures represented through modern and living languages.

Thank you all and may your weekend be unforgettable! Keep rocking the helpful web!

Firefox NightlyThe Developer Toolbar (or GCLI) is no longer in DevTools

The DevTools GCLI has been removed from the Firefox codebase (bug), which roughly translates into 20k fewer lines of code to think about, and the associated tests, which are no longer running, so yay for saving both brain and automation power!

We triaged all the existing bugs, and moved a bunch worth keeping to DevTools → Shared Components, to avoid losing track of them (they’re mostly about taking screenshots). Then the ever helpful Emma resolved the rest as incomplete, and moved the component to the DevTools Graveyard in Bugzilla, to avoid people filing bugs about code that does not exist anymore.

During this removal process we’ve heard from some of you that you miss certain features from GCLI, and we’ve taken note, and will aim to bring them back when time and resourcing allow. In the meantime, thank you for your feedback! It helps us better understand how you use the tools.

We also want to thank Eric Meyer for his everlasting appreciation of the screenshot feature, and his continuous dedication to making sure the world knows about this feature over the years. Thank you!
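
If you mainly used GCLI for screenshots, note that similar functionality can be invoked directly from the Web Console input in current releases. Treat this as a hedged example rather than a reference; the available flags may differ from the old GCLI ones, so consult the DevTools docs for the current set:

```
:screenshot --fullpage
:screenshot --selector ".sidebar"
```
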

PS For background on why we removed it, you can read the initial intent to unship email.

The Mozilla Blog25,000 Americans Urge Venmo to Update Its Privacy Settings

Also: A new Mozilla-Ipsos poll reveals a majority of respondents want privacy, and not publicity, as their default setting online

Earlier this week, Mozilla visited Venmo’s headquarters in New York City and delivered a petition signed by more than 25,000 Americans. The petition urges the payment app to put users’ privacy first and make Venmo transactions private by default.

Also this week: A new poll from Mozilla and Ipsos reveals that 77% of respondents believe payment apps should not make transaction details public by default. (More on our poll results below.)

Millions of Venmo users’ spending habits are available for anyone to see. That’s because Venmo transactions are currently public by default — unless users manually update their settings, anyone, anywhere can see whom they’re sending money to, and why.

Mozilla’s petition urges Venmo to change these settings. By making privacy the default, Venmo can better protect its seven million users — and send a powerful message about the importance of privacy. But so far, Venmo hasn’t formally responded to our petition and to the 25,000 Americans who signed their names.

Earlier this year, Mozilla Fellow Hang Do Thi Duc exposed the serious implications of Venmo’s settings. Her project, Public By Default, revealed how Venmo users’ drug habits, junk food vices, personal finances, and fights with significant others are available for all to see. TV reporters covered Hang’s findings widely.

Mozilla and Ipsos conducted an opinion poll this month, asking 1,009 Americans how they feel about the policy of “public by default.” Americans’ opinions were clear:

77% of respondents believe payment apps should not make transaction details public by default.

92% of respondents do not support Venmo’s justification for making transactions public by default. (In July, Venmo told CNET that transactions should be public because “it’s fun to share [information] with friends in the social world.”)

89% of respondents believe the most responsible default setting for payment apps is for transactions to be visible only to those involved.

Find the full poll results here.

The post 25,000 Americans Urge Venmo to Update Its Privacy Settings appeared first on The Mozilla Blog.

Mozilla GFXWebRender newsletter #23

Bonjour everyone! Here comes the twenty-third installment of WebRender’s very best newsletter. This time I’m trying something a bit different. Instead of going through each pull request and bugzilla entry that landed since the last post, I’m only sourcing information from the team’s weekly meeting. As a result only the most important items make it to the list, and not all items have links to their bug or pull request. Doing this allows me to spend considerably less time preparing the newsletter and will hopefully help with publishing it more often.

Last time I mentioned WebRender being enabled on nightly by default for a small subset of the users, focusing on nVidia desktop GPUs on Windows 10. I’m happy to report that we didn’t set our nightly user population on fire and that WebRender is still enabled in these configurations (as expected, sure, but with a project as large and ambitious as WebRender it isn’t something that could be taken for granted). The choice of this particular configuration of hardware and driver led to a lot of speculation online, so I just want to clarify a few things. We did not strike any deal with nVidia. nVidia didn’t send engineers to help us get WebRender to work on their hardware first. No politics, I promise. We learnt from past mistakes and chose to target a small population of Firefox users at first specifically because it is small. Each combination of OS/Vendor/driver exposes its own set of bugs, and a progressive and targeted rollout means we’ll be better equipped to react in a timely manner to incoming bugs than we have been with past projects.
Worry not, the end game is for WebRender to be Firefox’s rendering engine for everyone. Until then, you are welcome to enable WebRender manually if your OS, hardware or driver isn’t in the initial target.

Notable changes in WebRender and Gecko

  • Bobby improved the memory reporting infrastructure for WebRender.
  • Bobby improved memory usage by better managing the lifetime of the render target pool items.
  • Bobby fixed a crash with clip masks.
  • Jeff improved the performance of blob image rasterization.
  • Chris fixed some pixel snapping issues.
  • Kvark fixed a 3D transform rendering bug and wrote up his investigation in the form of a tutorial. It’s a very entertaining read!
  • Kvark brought back the use of texelFetch in vertex shaders.
  • Matt improved the performance of the scene building phase by pre-allocating memory.
  • Andrew avoided rasterizing vector images many times at similar sizes which caused performance issues on some pages.
  • Andrew improved the memory reporting of shared surfaces.
  • Andrew improved memory usage by unmapping the remaining shared surfaces of a pipeline when the latter is removed.
  • Lee finished implementing font variations for Windows.
  • Glenn improved gradient rendering performance.
  • Glenn introduced an interning data structure which will help with caching more resources across display lists.
  • Glenn improved the performance of clipping when scaling transformations are involved.
  • Glenn fixed some crashes.
  • Nical avoided building the frame twice each time a scene is built.
  • Nical prevented background tabs from blocking the UI in some cases.
  • Nical integrated the tab switching mechanism with WebRender.
  • Nical fixed a race condition between blob image rasterization and texture uploads.
  • Nical fixed some crashes.
  • Sotaro fixed incorrect ordering of transactions for video frames.
  • Markus assisted various people with investigating and fixing WebRender bugs.
  • Jean-Yves added support for 10/12 bit YUV images.
  • Patrick fixed some artifacts caused by the way we down-scale when blurring.
  • Emilio fixed a bug with border caching.
  • Emilio fixed a bug related to snapping and clips.

Enabling WebRender in Firefox Nightly

  • In about:config, set “gfx.webrender.all” to true (or set the pref from a user.js file, as sketched below),
  • restart Firefox.
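
If you prefer managing prefs from a user.js file in your profile directory instead of flipping them in about:config, the equivalent one-liner looks like this (standard user.js syntax; the pref name is the one given above):

```js
// In <profile directory>/user.js; read at every startup.
user_pref("gfx.webrender.all", true);
```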

Reporting bugs

The best place to report bugs related to WebRender in Gecko is the Graphics :: WebRender component in bugzilla.
Note that it is possible to log in with a github account.

Niko MatsakisOctober Office Hour Slots

Just a quick note that the October 2018 office hour slots are now posted. If you’re having a problem with Rust, or have something you’d like to talk out, please sign up!

Mozilla Open Policy & Advocacy BlogA mixed bag: Mozilla reacts to the Indian Supreme Court’s landmark verdict on Aadhaar

By holding Section 57 of the Aadhaar Act to be unconstitutional, the Supreme Court of India has recognized the privacy risks created by the indiscriminate use of Aadhaar for private services. While this is welcome, by allowing the State wide powers to make Aadhaar mandatory for welfare subsidies and PAN, this judgment falls short of guaranteeing Indians meaningful choice on whether and how to use Aadhaar. This is especially worrisome given that India still lacks a data protection law to regulate government or private use of personal data. Now, more than ever, we need legal protections that will hold the government to account.

The post A mixed bag: Mozilla reacts to the Indian Supreme Court’s landmark verdict on Aadhaar appeared first on Open Policy & Advocacy.